\section{Introduction}\label{sec:intr} The optimal dividend problem concerns maximizing the expected discounted cumulative dividends paid up to the time of ruin. The classical optimal dividend problem has been considered by many authors since De Finetti \cite{Finetti}, who first introduced the barrier strategy, in which all surpluses above a given level are transferred to a beneficiary, and raised the question of optimizing its level. Gerber and Shiu \cite{gerber2004optimal}, Asmussen and Taksar \cite{asmussen1997controlled} and Jeanblanc and Shiryaev \cite{jeanblanc1995optimization} considered the optimal dividend problem in the Brownian motion setting. Azcue and Muler \cite{azcue2005optimal} and Schmidli \cite{schmidli2006optimisation} studied the optimal dividend strategy under the Cram\'{e}r-Lundberg model using a Hamilton-Jacobi-Bellman (HJB) system of equations. Further, Avram et al. \cite{avram2007optimal, avram2015gerber}, Kyprianou and Palmowski \cite{kyprianou2007distributional}, Loeffen \cite{loeffen2008optimality, loeffen2009optimal}, Loeffen and Renaud \cite{loeffen2010finetti}, Czarna and Palmowski \cite{czarna2010dividend} and many other authors analyzed the dividend problem for L\'{e}vy risk processes using a probabilistic approach. In this paper, we deal with the so-called dual risk process. In the dual model, in contrast to the classical model, the premium rate is viewed as a cost and is therefore negative, while the claims should be considered as profits or gains and therefore make the surplus increase; see \cite{AvanGerShiu}, \cite{AvanziGerber}, \cite{Yamazaki}, \cite{Ng}. There are various interpretations of the dual model. For instance, it is appropriate for a company that continuously pays expenses related to research and occasionally gains some random income from inventions or discoveries.
As examples, one can consider pharmaceutical companies, real estate agencies or brokerage firms that sell mutual funds or insurance products with a front-end load. For more detailed information, we refer the reader to \cite{AvanGerShiu}. A good deal of work has been done on the dividend barrier in the dual model under the assumption that the cost function is constant and gains are modeled by a compound Poisson process. Avanzi et al. \cite{AvanGerShiu} considered the cases when profits or gains follow an exponential or a mixture of exponential distributions; they derived explicit formulas for the expected discounted dividend values and found the optimal barrier level. The connection between the dual and the classical model was explained and used by Afonso et al. \cite{Afonso}. Avanzi and Gerber \cite{AvanziGerber} studied a dual model perturbed by a diffusion. Using the Laplace transform method, they determined the optimal strategy among the set of all barrier strategies. Bayraktar et al. \cite{Yamazaki} used fluctuation theory to prove the optimality of barrier strategies for all spectrally positive L\'{e}vy processes. They characterized the optimal barrier using so-called scale functions. Moreover, they identified the solution to the dividend problem with capital injections. Finally, Albrecher et al. \cite{Albrecher} used Laplace transforms to examine a dual model in the presence of tax payments. In this paper, we analyze the dividend problem in a dual model with a reserve-dependent risk process. We use the theory of piecewise deterministic Markov processes (PDMP) and martingale properties. Assuming the absence of transaction costs, we derive the corresponding HJB system. In the next step, we find the optimal barrier strategy and give sufficient conditions for the barrier strategy to be optimal. The corresponding classical model has already been analyzed in \cite{MarPal}. In this paper, we also show connections between both models.
As a by-product, we obtain some exit problem formulas, which can be used to solve problems with capital injections (see \cite{Yamazaki}). The paper is organized as follows. In Section \ref{sec:dual}, we introduce the basic notation and describe the dual model we deal with. Section \ref{sec:classical} is dedicated to the corresponding classical model, where we show results on the exit and capital injection problems. In Section \ref{sec:main}, we present the Verification Theorem. We also analyze the barrier strategy and give sufficient conditions for the barrier strategy to be optimal among all admissible strategies. In Section \ref{sec:proofs}, we give all the proofs. Section \ref{sec:examples} is devoted to some examples. \section{The Dual Model}\label{sec:dual} In the dual model the surplus process $R$ (without payment of dividends) with an initial capital $x>0$ solves the following differential equation: \begin{equation*} R_t=x-\int_0^t p(R_s)ds+\sum_{k=1}^{N(t)}C_k, \end{equation*} where $p$ is a given deterministic, absolutely continuous, positive cost function and $\left\{C_k\right\}_{k=1}^\infty$ is a sequence of i.i.d. positive random variables with distribution function $F$, representing the capital injections. Above, $N$ is an independent Poisson process with intensity $\lambda>0$ modeling the times at which the capital injections occur. We assume that $R_t\rightarrow\infty$ a.s. as $t\rightarrow\infty$, that $F$ is absolutely continuous with respect to Lebesgue measure with density $f$ and that $\mathbb{E}\,C_1<\infty$. To approach the dividend problem, we consider the controlled surplus process $X^\pi$ satisfying the following stochastic differential equation: \begin{equation*} X_t^\pi=x-\int_0^t p(X^\pi_s)ds+\sum_{k=1}^{N(t)}C_k-L_t^\pi, \end{equation*} where $\pi$ is a strategy from the class $\Pi$ of all ``admissible'' dividend controls, resulting in the cumulative amount of dividends $L_t^\pi$ paid up to time $t$.
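The controlled dynamics above are straightforward to simulate. The following minimal Monte Carlo sketch estimates the expected discounted dividends of a fixed barrier strategy (everything above a level $\beta$ is paid out, as studied in Section \ref{sec:main}) under purely illustrative assumptions: a constant cost rate $p$ and $\mathrm{Exp}(\mu)$ gain sizes. All function names and parameter values here are ours, not from the paper.

```python
import numpy as np

def mc_value_barrier(x0, beta, p=1.0, lam=0.5, mu=1.0, q=0.1,
                     n_paths=2000, t_max=200.0, seed=0):
    """Monte Carlo estimate of the expected discounted dividends v_pi(x0)
    under a barrier strategy at level beta, for the dual model with a
    constant cost rate p and Exp(mu) gain sizes (illustrative choices)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        x, t, paid = x0, 0.0, 0.0
        if x > beta:                          # pay the initial overshoot at once
            paid += x - beta
            x = beta
        while t < t_max:
            w = rng.exponential(1.0 / lam)    # waiting time until the next gain
            if x - p * w <= 0.0:              # ruin occurs before the next gain
                break
            t += w
            x += rng.exponential(1.0 / mu) - p * w  # gain C_k minus running costs
            if x > beta:                      # everything above beta is paid out
                paid += np.exp(-q * t) * (x - beta)
                x = beta
        total += paid
    return total / n_paths
```

By construction the ruin time from zero initial capital is immediate, so the estimate is exactly $0$ there, and it grows with the initial capital.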
We say that a dividend strategy $\pi$ is admissible if the process $L^\pi$ is c\`{a}dl\`{a}g, nondecreasing, starts from zero ($L_{0-}^\pi=0$) and is adapted to the natural filtration of the risk process $R$, which satisfies the usual conditions. Moreover, at any time $t$ the dividend payment is smaller than the size of the available reserves ($L^\pi_{t}-L^\pi_{t-}\leq X^\pi_{t-}$). The object of interest is the average value of the cumulative discounted dividends paid up to the ruin time: \begin{equation}\label{vpi} v_\pi(x):=\mathbb{E}_x\left[\int_0^{\sigma^\pi}e^{-qt}dL^\pi_t\right], \end{equation} where $\sigma^\pi:=\inf\{t\geq 0: X^\pi_t\leq 0\}$ is the ruin time and $q\geq 0$ is a given discount rate. We adopt the convention that $\mathbb{E}_x$ is the expectation with respect to $\mathbb{P}_x(\cdot):=\mathbb{P}(\cdot|X^\pi_{0-}=x)$. Note that, unless otherwise stated, we write $\sigma$ instead of $\sigma^\pi$ to simplify the notation. The dividend optimization problem consists in finding the so-called value function $v$ given by \begin{equation}\label{dualpr} v(x):=\sup_{\pi\in\Pi}v_\pi(x) \end{equation} and the optimal strategy $\pi^*\in\Pi$ such that $$v(x)=v_{\pi^*}(x)\quad\text{for }x\geq 0.$$ In the next section we present some results for the classical model that can be used to solve the dual problem. \section{The Classical Model}\label{sec:classical} The connection between the classical and the dual model is crucial, since we will use some methods and results already derived for the classical model. In the classical model, we consider the surplus process $\tilde{R}$ (before any regulation) with an initial capital $\tilde{x}>0$, described by the following stochastic differential equation: \begin{equation*} \tilde{R}_t=\tilde{x}+\int_0^t \tilde{p}(\tilde{R}_s)ds-\sum_{k=1}^{N(t)}C_k, \end{equation*} where $\tilde{p}$ is a given deterministic, positive and absolutely continuous premium function.
Here, the generic $C$ denotes the claim size or loss, arriving according to the Poisson process $N$. Let $\tilde{X}_t$ be the reflected process satisfying the equation \begin{equation*} \tilde{X}_t=\tilde{x}+\int_0^t \tilde{p}(\tilde{X}_s)ds-\sum_{k=1}^{N(t)}C_k+\tilde{L}_t^{0}, \end{equation*} where $\Delta \tilde{L}_t^{0}=\left(-\tilde{X}_t\right)\mathbb{I}_{\{\tilde{X}_t<0\}}$. Note that $\tilde{L}^{0}$ is a nondecreasing adapted process with $\tilde{L}_{0-}^0=0$. Above, $\tilde{L}_t^0$ represents the total amount of capital injections up to time $t$. We will first demonstrate how to identify \begin{equation*} \mathbb{E}_{x}\left[\int_0^{\tilde{T}_a^+}e^{-qt}d\tilde{L}^{0}_t\right], \end{equation*} where $\tilde{T}_a^+:=\inf\{t\geq 0:\tilde{X}_t\geq a\}$. This is the average discounted value of the capital injected up to the time of reaching the barrier $a>0$ by the controlled process $\tilde{X}$. Two functions, $\tilde{W}_q$ and $\tilde{G}_{q,w}$, are crucial to solve this problem. They are related to the two-sided and one-sided exit problems of $\tilde{R}$ in the following way: \begin{equation}\label{exit1} \mathbb{E}_x\left[e^{-q\tilde{\tau}_a^+}\mathbb{I}_{\{\tilde{\tau}_a^+<\tilde{\tau}^-_0\}}\right]=\frac{\tilde{W}_q(x)}{\tilde{W}_q(a)},\qquad\text{for }x\in[0,a] \end{equation} \begin{equation}\label{exit2} \tilde{G}_{q,w}(x):=\mathbb{E}_x\left[e^{-q\tilde{\tau}_{0}^-}w(|\tilde{R}_{\tilde{\tau}^-_{0}}|)\mathbb{I}_{\{\tilde{\tau}_{0}^-<\infty\}}\right],\qquad\text{for }x>0, \end{equation} where $a>0$, $\tilde{\tau}_a^+=\inf\{t\geq 0: \tilde{R}_t\geq a\}$ and $\tilde{\tau}_a^-=\inf\{t\geq 0: \tilde{R}_t< a\}$. The function $\tilde{G}_{q,w}$ is defined for a general positive penalty function $w$. For the existence and properties of the functions $\tilde{W}_q$ and $\tilde{G}_{q,w}$ we refer the reader to \cite{Corina} and \cite{MarPal}. Note that $\tilde{R}$ is a piecewise deterministic Markov process.
By $\tilde{\mathcal{A}}$ we denote the full generator of $\tilde{R}$, i.e. we have: $$\tilde{\mathcal{A}} m(x)=\tilde{p}(x)m'(x)+\lambda\int_0^\infty (m(x-z)-m(x))dF(z)$$ acting on absolutely continuous functions $m$ such that \begin{equation*}\label{domain} \mathbb{E}\bigg[\sum_{\sigma_i\leq t}|m(\tilde{R}_{\sigma_i})-m(\tilde{R}_{\sigma_i-})|\bigg]<\infty \qquad\text{for any}\ t\geq 0. \end{equation*} Above, $\{\sigma_i\}_{i\in \mathbb{N}\cup\{0\}}$ denotes the times at which the claims occur (see Davis \cite{Davis} and Rolski et al. \cite{Rolski}). Moreover, $m'$ denotes the density of~$m$. Note that any function which is absolutely continuous and ultimately dominated by an affine function is in the domain of the full generator $\tilde{\mathcal{A}}$ as a consequence of the assumption that $\mathbb{E} C_1<\infty$. Recall that for any function $m$ from the domain of $\tilde{\mathcal{A}}$ the process $$\bigg\{e^{-qt}m(\tilde{R}_t) -\int_0^te^{-qs}\left(\tilde{\mathcal{A}}-q\mathbf{I}\right) m(\tilde{R}_s)\,ds, t\geq 0\bigg\}$$ is a martingale. Marciniak and Palmowski \cite{MarPal} showed that, if the claim size distribution of $C$ is absolutely continuous, then the functions $\tilde{W}_q$ and $\tilde{G}_{q,w}$ are differentiable and satisfy the equations: \begin{equation}\label{eq:h} \tilde{\mathcal{A}} \tilde{W}_q(x)=q\tilde{W}_q (x)\quad \text{for } x\geq 0,\qquad \tilde{W}_q (x)=0\quad \text{for } x<0, \quad \tilde{W}_q(0)=1 \end{equation} and \begin{equation}\label{eq:G} \tilde{\mathcal{A}} \tilde{G}_{q,w}(x)=q\tilde{G}_{q,w} (x) \quad \text{for }x\geq 0,\qquad \tilde{G}_{q,w} (x)=w(x)\quad \text{for } x<0. \end{equation} We now turn our attention to the exit problem for the reflected process $\tilde{X}$. We denote the first passage time over a positive level $a>0$ by $$T_a^+:=\inf\{t\geq0:\tilde{X}_t> a\}.$$ The following result expresses the Laplace transform of the exit time $T^+_a$ in terms of the functions $\tilde{W}_q$ and $\tilde{G}_{q,1}$.
\begin{theorem}\label{e-qT_a} Let $a>0$, $x\in[0,a]$ and $q\geq 0$. If $T^+_a<\infty$ $\mathbb{P}$-a.s. then $$\mathbb{E}_x\left[e^{-qT_a^+}\right]=\tilde{Z}(x)/\tilde{Z}(a),$$ where \begin{equation}\label{Z} \tilde{Z}(x) :=\big(1-\tilde{G}_{q,1}(0)\big)\tilde{W}_q(x)+\tilde{G}_{q,1}(x)\qquad\text{for }x\geq 0 \end{equation} and $\tilde{Z}(x)=1$ for $x<0$. \end{theorem} \begin{proof} By the properties of $ \tilde{G}_{q,1}$ and $\tilde{W}_q$ \begin{equation}\label{eqZ} (\tilde{\mathcal{A}}-q\mathbf{I})\tilde{Z}(x)=0\quad \text{for all } x> 0,\qquad \tilde{Z}(x)=1 \text{ for } x\leq 0. \end{equation} Note that $\tilde{Z}$ is continuous on $\mathbb{R}$ and continuously differentiable on $\mathbb{R}\setminus \{0\}$ with right and left derivatives at $0$. Thus $\tilde{Z}$ is absolutely continuous on $\mathbb{R}$. Moreover, $\tilde{Z}$ is dominated by some affine function on the set $(-\infty,a)$. Therefore, $\tilde{Z}$ is in the domain of the full generator of the piecewise deterministic Markov process $\tilde{X}_{t\wedge T^+_a}$ (see Davis \cite{Davis} and Rolski et al. \cite{Rolski}). This means that the process \begin{eqnarray*} K_{t\wedge T_a^+}:=e^{-q(t\wedge T^+_a)}\tilde{Z}(\tilde{X}_{t\wedge T^+_a})-\int_0^{t\wedge T^+_a} e^{-qs}\Big(\tilde{\mathcal{A}}^X-q\mathbf{I}\Big)\tilde{Z}(\tilde{X}_{s})\,ds \end{eqnarray*} is a martingale for the generator $\tilde{\mathcal{A}}^X$ of $\tilde{X}_{t\wedge T^+_a}$ given by: $$\tilde{\mathcal{A}}^Xm(x)=\tilde{p}(x)m'(x)+\lambda\int_0^x(m(x-z)-m(x))dF(z)+\lambda(m(0)-m(x))\mathbb{P}(C_1>x).$$ Note that $\tilde{Z}(x-z)=\tilde{Z}(0)$ for all $z\geq x$. Thus, by (\ref{eqZ}), we have $K_{t\wedge T_a^+}=e^{-q(t\wedge T^+_a)}\tilde{Z}(\tilde{X}_{t\wedge T^+_a})$, which is bounded by $\sup_{x\in[0,a]}\{\tilde{Z}(x)\}<\infty$.
Hence $K_{t\wedge T_a^+}$ is a uniformly integrable (UI) martingale and $$\tilde{Z}(x)=\mathbb{E}_x[K_{t\wedge T_a^+}]\rightarrow\mathbb{E}_x[K_{T_a^+}]=\tilde{Z}(a)\mathbb{E}_x[e^{-qT_a^+}]\text{ as }t\rightarrow\infty.$$ Above we used the assumption that $T_a^+<\infty$ a.s. This completes the proof. \end{proof} \begin{remark}\rm Note that the function $\mathbb{E}_x[e^{-qT_a^+}]$ is increasing on $(0,a)$ for any $a>0$; thus $\tilde{Z}(x)$ has to be increasing on $(0,\infty)$. Moreover, $\tilde{Z}$ is continuous on $\mathbb{R}$ and continuously differentiable on $\mathbb{R}\setminus\{0\}$. \end{remark} From \eqref{exit2} and Theorem \ref{e-qT_a}, using the strong Markov property of $\tilde{R}$ and $\tilde{X}$, one can obtain the following crucial identities. \begin{theorem}\label{tg} (i) For $x\geq 0$ and $q>0$ it holds that \begin{eqnarray} \tilde{g}(x)&:=&\mathbb{E}_x\left[\int_0^\infty e^{-qs}d\tilde{L}^{0}_s\right]=-\mathbb{E}_x\left[e^{-q\tilde{\tau}_{0}^-}\tilde{R}_{\tilde{\tau}_{0}^-}\right]+\mathbb{E}_x\left[e^{-q\tilde{\tau}_{0}^-}\right]\mathbb{E}_{0}\left[\int_0^\infty e^{-qs}d\tilde{L}^{0}_s\right]\nonumber\\ &&= \tilde{G}_{q,|x|}(x)+\tilde{G}_{q,1}(x)\tilde{g}(0).\label{gmale} \end{eqnarray} (ii) Let $a\in(0,\infty)$. For all $x\in[0,a)$ we have \begin{equation}\label{barrier} \mathbb{E}_x\left[\int_0^{T_a^+}e^{-qs}d\tilde{L}^{0}_s\right]=\tilde{g}(x)-\tilde{Z}(x)\tilde{g}(a)/\tilde{Z}(a).
\end{equation} \end{theorem} \section{Optimal Dividend Strategy}\label{sec:main} \subsection{HJB equations} To prove the optimality of a particular dividend strategy $\pi$ among all admissible strategies $\Pi$ for the dual problem (\ref{dualpr}), we consider the following Hamilton-Jacobi-Bellman (HJB) system: \begin{eqnarray}\label{hjb} \begin{split} \max \left\{(\mathcal{A}-q\mathbf{I})m(x),1-m'(x)\right\}&=&0\quad\text{for }x>0,\\ m(x)&=&0 \quad\text{for } x\leq 0, \end{split} \end{eqnarray} where $\mathcal{A}$ is the full generator of the piecewise deterministic Markov process $R$ given by: \begin{equation}\label{cAdual} \mathcal{A} m(x)=-p(x)m'(x)+\lambda\int_0^\infty(m(x+y)-m(x))\, dF(y), \end{equation} acting on absolutely continuous functions $m$ such that \begin{equation*} \mathbb{E}\bigg[\sum_{\sigma_i\leq t}|m(R_{\sigma_i})-m(R_{\sigma_i-})|\bigg]<\infty \qquad\text{for any}\ t\geq 0, \end{equation*} where $\{\sigma_i\}_{i\in \mathbb{N}\cup\{0\}}$ denotes the times at which the claims occur (see Davis \cite{Davis} and Rolski et al. \cite{Rolski}). In this case $m'$ denotes the density of~$m$. Recall that any function which is absolutely continuous and ultimately dominated by an affine function is in the domain of the full generator $\mathcal{A}$. \begin{theorem}\label{verif} \textit{(Verification Theorem)} Assume that $m:[0,\infty)\rightarrow[0,\infty)$ is continuous and $m(0)=0$. Extend $m$ to the negative half-axis by $m(x)=0$ for $x<0$. Suppose that $m$ is $C^1$ on $(0,\infty)$. If $m$ satisfies (\ref{hjb}), then $m\geq v$ on $(0,\infty)$. \end{theorem} The proof of Theorem \ref{verif} is given in the Appendix. \subsection{Barrier Strategies} In this subsection, we consider the barrier strategy $\pi_\beta$, which pays everything above a fixed level $\beta\geq 0$ as dividends. We find $v_{\pi_\beta}$ defined by (\ref{vpi}). To this end, we will use methods from the classical model.
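Since the analysis below relies on the classical-model exit identity of Theorem \ref{e-qT_a}, it is worth noting a testable consequence: the factorization $\mathbb{E}_x[e^{-qT_a^+}]=\tilde{Z}(x)/\tilde{Z}(a)$ implies, via the strong Markov property, the product identity $\mathbb{E}_x[e^{-qT_a^+}]=\mathbb{E}_x[e^{-qT_b^+}]\,\mathbb{E}_b[e^{-qT_a^+}]$ for $0\leq x<b<a$, because the reflected process has no upward jumps and creeps through every intermediate level. The sketch below checks this by Monte Carlo for a constant premium and $\mathrm{Exp}(\mu)$ claims; all names and parameter values are our own illustrative choices.

```python
import numpy as np

def laplace_exit(x0, a, p=1.5, lam=1.0, mu=1.0, q=0.1, n_paths=4000, seed=7):
    """Monte Carlo estimate of E_x[exp(-q*T_a^+)] for the reflected classical
    process: constant premium rate p, Exp(mu) claims, and capital injections
    that push the surplus back to 0 whenever it would turn negative."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while True:
            w = rng.exponential(1.0 / lam)   # time until the next claim
            hit = (a - x) / p                # time to reach level a by premium income
            if hit <= w:                     # level a is reached before the claim
                acc += np.exp(-q * (t + hit))
                break
            t += w
            x = x + p * w - rng.exponential(1.0 / mu)
            if x < 0.0:                      # injection: reflection at zero
                x = 0.0
    return acc / n_paths
```

With a positive net drift ($p>\lambda/\mu$) the level $a$ is reached almost surely, and the estimated transforms satisfy the product identity up to Monte Carlo error.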
\begin{figure} \begin{center} \includegraphics[scale=0.7]{clasdual.png} \end{center} \caption{Classical vs. dual model. } \label{figure:fig1} \vspace{-10pt} \end{figure} Consider the classical risk process $\tilde{R}^{\beta}$ with \[\tilde{p}(\cdot)=p_{\beta}(\cdot):=p(\beta-\cdot),\qquad \tilde{x}=\beta-x.\] The idea is to transfer the jumps over the barrier $\beta$ in the dual model into the injections in the classical model that occur when the process falls below zero. Figure~\ref{figure:fig1} shows this connection between the classical and the dual model. Using this correspondence, by a direct application of \eqref{barrier}, we obtain the following representation of the value function under the barrier strategy. \begin{theorem}\label{valuebarrier} We have \begin{eqnarray}\label{va} v_{\pi_\beta}(x):=v_\beta(x) &=& \begin{cases} g^{\beta}(\beta-x)-Z^{\beta}(\beta-x)g^{\beta}(\beta)/Z^{\beta}(\beta) & 0\leq x \leq \beta,\\ &\\ x-\beta+v_\beta(\beta), & x > \beta, \end{cases} \end{eqnarray} where $g^{\beta} (\beta-x):=\tilde{g}(\tilde{x})$ is given in \eqref{gmale}. \end{theorem} \begin{remark}\label{newremark}\rm Note that $v_\beta$ is continuous on $[0,\infty)$ with $v_\beta(0) = 0$. Note also that $v_{\beta}$ solves the integro-differential equation: \begin{equation}\label{eqvb} (\mathcal{A}-q\mathbf{I})v_{\beta}(x)=0\qquad\text{for }x\in(0,\beta). \end{equation} Moreover, $v_\beta$ is $C^1$ on $(0,\beta)$, and $v_\beta'$ and $v_\beta$ are left-continuous at $\beta$ and right-continuous at $0$. \end{remark} We start with the case when the optimal barrier is located at $0$; that is, according to this strategy it is optimal to pay out all the initial capital immediately as dividends. \begin{theorem} If $-p(x)+\lambda\mathbb{E} C_1-qx\leq 0$ for all $x\geq 0$, then $v_0(x)=v(x)$ for all $x$ and the barrier strategy $\pi_\beta$ with $\beta=0$ is optimal. \end{theorem} \proof Consider $h(x)=x$.
We have $$(\mathcal{A}-q)h(x)=-p(x)+\lambda\int_0^\infty zdF(z)-qx\leq 0.$$ Thus $h$ satisfies (\ref{hjb}), and by Theorem \ref{verif} we have $h\geq v$ on $(0,\infty)$. However, $h$ is the value function for the barrier strategy $\pi_0$. Thus $h(x)=v(x)$ for all $x\geq 0$ and $\pi_0$ is indeed optimal. \qed \noindent We shall examine the smoothness of $v_\beta$ at $x=\beta$ in order to find a candidate $\beta^*$ for the optimal barrier level. In particular, we will show that the optimal barrier $\beta^*$ is given by: \begin{equation}\label{betastar} \beta^*:=\inf\left\{\beta\geq 0:v_{\beta}'(\beta-)=1\right\}. \end{equation} The next theorem concerns the existence of $\beta^*$. \begin{theorem}\label{bs exist} Assume that $p$ is a $C^1$ function on $[0,\infty)$ such that $\lambda\mathbb{E} C_1>p(0)>0$ and there exists $\hat{x}>0$ such that $p(\hat{x})>\lambda\mathbb{E} C_1-q\hat{x}$. Then $\beta^*$ exists and $\beta^*\in(0,\hat{x})$. \end{theorem} In the next result, we give sufficient conditions for the barrier strategy to be optimal. \begin{theorem} \label{T:optimal} Assume that $\beta^*<\infty$ exists, $p\in C^1$ and $-p'(x)-q<0$ for all $x\leq\beta^*$. Then the barrier strategy $\pi_{\beta^*}$ is optimal and $v(x)=v_{\beta^*}(x)$ for all $x\geq 0$. Moreover, in this case $v(x)$ uniquely solves the equation \eqref{eqvb} with the boundary condition $v^\prime(\beta^*)=1$. \end{theorem} \begin{remark}\rm Note that in the case when $p(x)=p$ is constant all the assumptions are satisfied and the barrier strategy is always optimal. This was already proved in \cite{Yamazaki}. In the case of a general premium function we conjecture that this is not always true (even if $\beta^*$ is well-defined). Unfortunately, due to the complexity of the HJB equation in this case, it is difficult to construct a counterexample. \end{remark} \section{Examples}\label{sec:examples} In this section, we assume that the injection size $C_1$ has an exponential distribution with parameter $\mu>0$.
Then equation (\ref{eqvb}) can be transformed into \begin{equation}\label{eq:vb} -p(x)v_{\beta}''(x)+(\mu p(x)-p'(x)-\lambda-q)v_{\beta}'(x)+\mu q v_{\beta}(x)=0 \end{equation} with the initial conditions $v_{\beta}(0)=0$ and $$p(0)v_{\beta}'(0)=\lambda\mu\int_0^\beta v_{\beta}(z)e^{-\mu z}\,dz+\lambda\mu \int_\beta^\infty(z-\beta+v_{\beta}(\beta))e^{-\mu z}\,dz.$$ \textit{Example 1.} Consider the increasing rational cost function $$p_1(x)=c\big(2-(1+x)^{-1}\big).$$ This cost function tends to the constant $2c$ for large reserves and takes values in the range $[c,2c]$. Solving equation (\ref{eq:vb}) numerically, we can identify $\beta^*$ and make some observations. For the calculations we used the representation \eqref{va} of the barrier value function; we skip the details here. Note also that for $c<\lambda/\mu$ it follows from Theorem \ref{bs exist} that the barrier level $\beta^*$ is well-defined, and by Theorem \ref{T:optimal} the barrier strategy $\pi_{\beta^*}$ is optimal. In Tables 1, 2 and 3 we present the numerical results.
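Since equation (\ref{eq:vb}) is linear, any solution with $v_\beta(0)=0$ is a scalar multiple of the normalized solution $\varphi$ with $\varphi(0)=0$, $\varphi'(0)=1$; the scale factor follows from the boundary condition $p(0)v_\beta'(0)=\lambda\int_0^\infty v_\beta(z)f(z)\,dz$, obtained by evaluating $(\mathcal{A}-q\mathbf{I})v_\beta=0$ at $x=0$ with $v_\beta$ extended linearly above $\beta$. The following is our own numerical sketch of this shooting procedure for $p_1$ with the Table 1 parameters $q=0.1$, $\mu=0.01$, $\lambda=0.1$, $c=2$; it is an illustration under these assumptions, not the computation used to produce the tables, though the resulting root should be comparable with the corresponding Table 1 entry.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters from Table 1 (row c = 2): q = 0.1, mu = 0.01, lambda = 0.1.
q, mu, lam, c = 0.1, 0.01, 0.1, 2.0

def p1(x):
    return c * (2.0 - 1.0 / (1.0 + x))

def dp1(x):
    return c / (1.0 + x) ** 2

def rhs(x, y):
    """phi solves (eq:vb) with phi(0)=0, phi'(0)=1; the third component
    accumulates int_0^x phi(z) exp(-mu z) dz, needed for the scale factor."""
    phi, dphi, _ = y
    ddphi = ((mu * p1(x) - dp1(x) - lam - q) * dphi + mu * q * phi) / p1(x)
    return [dphi, ddphi, phi * np.exp(-mu * x)]

grid = np.linspace(0.0, 80.0, 1601)
sol = solve_ivp(rhs, (0.0, 80.0), [0.0, 1.0, 0.0], t_eval=grid,
                rtol=1e-9, atol=1e-12)
phi, dphi, I = sol.y

# Scale s(beta) from the boundary condition, so v_beta = s * phi; then
# v_beta'(beta) = s(beta) * phi'(beta) and beta* solves s * phi' = 1.
denom = p1(0.0) - lam * mu * I - lam * phi * np.exp(-mu * grid)
gamma = np.where(denom > 0.0,
                 (lam * np.exp(-mu * grid) / mu) / denom * dphi, -np.inf)
idx = np.argmax(gamma < 1.0)          # first grid point where gamma drops below 1
b0, b1 = grid[idx - 1], grid[idx]
g0, g1 = gamma[idx - 1], gamma[idx]
beta_star = b0 + (g0 - 1.0) * (b1 - b0) / (g0 - g1)   # linear interpolation
print(beta_star)
```

As a consistency check, at $\beta=0$ the quantity $s(\beta)\varphi'(\beta)$ reduces to $\lambda\mathbb{E} C_1/p(0)$, which here equals $5$; this matches $\gamma(0)$ in the proof of Theorem \ref{bs exist}.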
\begin{table}[h] \caption{ \small Dependence of $\beta^*$ on $c$.} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \multicolumn{9}{c}{$q=0.1$, $\mu=0.01$, $\lambda=0.1$}\\ \hline $c$ & 1 & 1.4& 1.6& 2& 2.6& 3 & 4 &4.5 \\ \hline $\beta^*$ & 26.5 &32.2& 34.25& 37.1 &38.3& 37.1&26.6& 17.5 \\ \hline \end{tabular} \end{center} \bigskip \caption{ \small Dependence of $\beta^*$ on $q$.} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \multicolumn{8}{c}{$\mu=0.01$, $\lambda=0.1$, $c=2$}\\ \hline $q$ & 0.08& 0.09&0.1 & 0.12 &0.14&0.15&0.17\\ \hline $\beta^*$ &49.4 & 42.3 &37.1 & 29.8 & 25.5&23.2&20.25\\ \hline \end{tabular} \end{center} \bigskip \caption{ \small Dependence of $\beta^*$ on $\mu$.} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \multicolumn{9}{c}{ $q=0.1$, $\lambda=0.1$, $c=2$}\\ \hline $\mu$ &0.005 & 0.007 & 0.01 & 0.015 & 0.017&0.02& 0.025&0.027\\ \hline $\beta^*$ &54.7& 46.6 & 37.1 &24.2 &19.7& 13.1 &4.65&2.93 \\ \hline \end{tabular} \medskip \end{center} \end{table} \begin{figure} \begin{center} \includegraphics[scale=0.8]{rys2.png} \end{center} \vspace{-20pt} \caption{ The optimal barrier as a function of $q$, $\mu$ and $c$, respectively, with $\beta^*_1$ corresponding to the model with the cost function $p_1$ and $\beta^*_2$ to the model with $p_2$. } \label{figure:fig2} \vspace{20pt} \end{figure} \textit{Example 2.} In this example we investigate the exponential-type cost function $$p_2(x)=2c(1+e^{-x})^{-1},$$ which converges to the constant faster than in the previous example. Still, we expect similar properties, as both functions are increasing with the same range of values between $c$ and $2c$. Again, solving equation (\ref{eq:vb}) numerically yields the barrier level $\beta^*$, which by Theorems \ref{bs exist} and \ref{T:optimal} is well-defined and optimal among all admissible strategies. Figure~\ref{figure:fig2} compares $\beta^*$ in both examples; as predicted, the respective values are close.
The above numerical analysis shows that the parameters $q$, $\mu$ and $c$ have a substantial impact on $\beta^*$. Moreover, the optimal barrier level $\beta^*$ decreases rapidly as $\mu$ increases, and likewise as $q$ increases, in both cases. We can also observe concavity of the optimal barrier level with respect to $c$. \textit{Example 3.} Let us consider the decreasing cost function given by $$p_3(x)=c+\frac{0.1}{1+x}.$$ Theorem \ref{T:optimal} implies that for $q>0.1$ the barrier strategy $\pi_{\beta^*}$ is optimal among all admissible strategies. The properties of $\beta^*$ appear to remain the same as in the previous examples (see Figure~\ref{figure:fig3}). \begin{figure} \begin{center} \bigskip \includegraphics[scale=0.8]{rys3.png} \end{center} \caption{ The optimal barrier as a function of $q$ and $c$, respectively, with $\beta^*_3$ corresponding to the cost function $p_3$.} \label{figure:fig3} \vspace{-10pt} \end{figure} \newpage \section{Appendix}\label{sec:proofs} \subsection{Proof of the Verification Theorem \ref{verif}} Fix an arbitrary $x\in(0,\infty)$ and $\pi\in\Pi$. Let $\{t_n\}_{n=1}^{\infty}$ denote the jump times of $L^{\pi}$. Since $m$ is $C^1$ on $(0,\infty)$ and $X^{\pi}_{t\wedge \sigma^\pi}\in[0,\infty)$, we may apply It\^{o}'s formula to the process $Y_t:=e^{-q(t\wedge \sigma^\pi)}m(X^\pi_{t\wedge\sigma^\pi})$, which gives: \begin{eqnarray*} Y_t-Y_0&=&\int_0^{t\wedge\sigma^\pi}e^{-qs}(\mathcal{A} -q\mathbf{I})m(X^\pi_{s-})ds+M_t-\int_0^{t\wedge \sigma^\pi}e^{-qs}dL^{\pi,c}_s\\ &&+\sum_{0\leq t_n\leq t\wedge \sigma^\pi}e^{-qt_n}[m(X^\pi_{t_n-}+C_{N_{t_n}}\Delta N_{t_n}-\Delta L^\pi_{t_n})-m(X^\pi_{t_n-}+C_{N_{t_n}}\Delta N_{t_n})], \end{eqnarray*} where $L^{\pi,c}$ denotes the continuous part of $L^{\pi}$ and $M_t$ is a martingale with $M_0=0$.
Since $m$ satisfies (\ref{hjb}) we have \begin{eqnarray*} Y_t-Y_0&\leq& \int_0^{t\wedge\sigma^\pi}e^{-qs}(\mathcal{A} -q\mathbf{I})m(X^\pi_{s-})ds+M_t-\int_0^{t\wedge \sigma^\pi}e^{-qs}dL^{\pi,c}_s-\sum_{0\leq t_n\leq t\wedge \sigma^\pi}e^{-qt_n}\Delta L^\pi_{t_n}\\ &\leq & -\int_0^{t\wedge \sigma^\pi}e^{-qs}dL^{\pi}_s+M_t. \end{eqnarray*} Taking expectations and using the fact that $m\geq 0$, we obtain \begin{eqnarray*} m(x)&\geq & \mathbb{E}_x \big[e^{-q(t\wedge \sigma^\pi)}m(X^\pi_{t\wedge\sigma^\pi})\big]+\mathbb{E}_x\int_0^{t\wedge \sigma^\pi}e^{-qs}dL^\pi_{s}\\ &\geq &\mathbb{E}_x\int_0^{t\wedge \sigma^\pi}e^{-qs}dL^\pi_{s}. \end{eqnarray*} Letting $t\rightarrow\infty$ and applying the Monotone Convergence Theorem gives: \begin{eqnarray*} m(x)&\geq &v_\pi(x). \end{eqnarray*} Therefore, since $\pi\in\Pi$ and $x$ were arbitrary, we have proved the desired inequality \linebreak $m(x)\geq v(x)$ for all $x\in[0,\infty)$. \qed \noindent \subsection{Additional facts} Before we prove the main results of this paper, we establish a few auxiliary facts used in the proofs. \begin{lemma}\label{hbeta} Let $\beta\geq 0$. Assume that we have a function $h_\beta:\mathbb{R}\rightarrow [0,\infty)$ such that $h_\beta(x)=x-\beta+h_\beta(\beta)$ for all $x>\beta$ and $h_\beta(x)=0$ for all $x\leq 0$. If the function $h_\beta$ solves the equation $$(\mathcal{A}-q\mathbf{I})h_\beta(x)=0\qquad\text{ for } 0<x<\beta$$ with the boundary condition $h_\beta'(\beta)=1$, then $$h_\beta(x)=v_\beta(x)\qquad\text{ for all }x\geq 0.$$ \end{lemma} \textit{Proof of Lemma \ref{hbeta}.} Denote $X^\beta:=X^{\pi_\beta}$. Take an arbitrary $x\in[0,\beta]$.
Applying It\^{o}'s formula to $e^{-qt}h_\beta(X^\beta_t)$, we obtain \begin{eqnarray*} \mathbb{E}_x\left[e^{-q(t\wedge\sigma^\beta)}h_\beta(X^\beta_{t\wedge\sigma^\beta})\right]&=&h_\beta(x)+\mathbb{E}_x\left[\int_0^{t\wedge\sigma^\beta} e^{-qs}(\mathcal{A}-q\mathbf{I})h_\beta(X_s^\beta)ds\right]-\mathbb{E}_x\left[\int_0^{t\wedge\sigma^\beta}e^{-qs}h_\beta'(X^\beta_s)dL^{\beta,c}_s\right]\notag\\ &&+\mathbb{E}_x\left[\sum_{0\leq s\leq {t\wedge\sigma^\beta}}e^{-qs}\left(h_\beta(X_{s-}^\beta-\Delta L_s^\beta)-h_\beta(X_{s-}^\beta)\right)\mathbb{I}_{\Delta L_s^\beta>0}\right]\\ &=&h_\beta(x)+\mathbb{E}_x\left[\int_0^{t\wedge\sigma^\beta} e^{-qs}(\mathcal{A}-q\mathbf{I})h_\beta(X_s^\beta)ds\right]-\mathbb{E}_x\left[\sum_{0\leq s\leq {t\wedge\sigma^\beta}}e^{-qs}\Delta L_s^\beta\mathbb{I}_{\Delta L_s^\beta>0}\right]. \end{eqnarray*} Note that between the positive jumps the process $X^\beta$ is decreasing and hence $L^{\beta,c}\equiv 0$. Moreover, $X_{s-}^\beta>\beta$ on $\{\Delta L^\beta_s>0\}$ and $h_\beta(X_{s-}^\beta-\Delta L_s^\beta)=h_\beta(\beta)$. Hence, after rearranging and letting $t\rightarrow\infty$, we get $$h_\beta(x)=\mathbb{E}_x\left[\int_0^{\sigma^\beta}e^{-qs}dL^\beta_s\right]=v_\beta(x).$$ This completes the proof. \qed \noindent \begin{lemma} \label{concave} Assume that $p$ is $C^1$ on $(0,\infty)$ and that $-p'(x) -q<0$ for all $x\in(0,\beta]$. If $v_\beta'(\beta-)\geq 1$ then $v_{\beta}$ is $C^2$ on $(0,\beta)$ and it is increasing and concave on $(0,\infty)$. \end{lemma} \proof We begin by proving that $v_{\beta}$ is increasing on $(0,\beta)$. Let $\tau^+_a:=\inf\{t\geq 0:R_t\geq a\}$ and $\tau^-_0:=\inf\{t\geq 0:R_t<0\}$. By the strong Markov property of the PDMP $R_t$, for all $0<y<x<\beta$ $$ v_{\beta}(y)=v_{\beta}(x)\mathbb{E}_y\left[e^{-q\tau_x^+};\tau_x^+<\tau_0^-\right]<v_{\beta}(x),$$ which completes the proof of this statement.
Furthermore, since $p$ is $C^1$, by (\ref{eqvb}) we have that $v_{\beta}$ is $C^2$ on $(0,\beta)$ and $v_{\beta}$ is left-continuous at $\beta$. Moreover, if we differentiate (\ref{eqvb}) with respect to $x$ we get: \begin{equation} p(x)v_{\beta}''(x)=(-p'(x)-\lambda-q)v_{\beta}'(x)+\lambda\int_0^{\beta-x}v_{\beta}'(x+z)f(z)\, dz+\lambda\int_{\beta-x}^\infty f(z)\, dz. \end{equation} Consequently, since $v_{\beta}'(\beta-)\geq 1$, we obtain $$p(\beta)v_{\beta}''(\beta-)\leq -p'(\beta)-\lambda-q+\lambda\int_{0}^\infty f(z)\, dz=-p'(\beta)-q<0.$$ Assume now a contrario that $v_{\beta}$ is not concave. Then by the continuity of $v_{\beta}''$, there exists $\hat{x}\in (0,\beta)$ such that $v_{\beta}''(\hat{x})=0$ and $v_{\beta}''(x)<0$ for all $x\in (\hat{x},\beta)$. Hence, from the assumption that $v_\beta'(\beta-)\geq 1$, \begin{eqnarray*} 0=p(\hat{x})v_{\beta}''(\hat{x})&=&(-p'(\hat{x})-\lambda-q)v_{\beta}'(\hat{x})+\lambda\int_0^{\beta-\hat{x}}v_{\beta}'(\hat{x}+z)f(z)\, dz+\lambda\int_{\beta-\hat{x}}^\infty f(z)\, dz\\ &< &(-p'(\hat{x})-\lambda-q)v_{\beta}'(\hat{x})+\lambda v_{\beta}'(\hat{x})\int_0^{\beta-\hat{x}}f(z)\, dz+\lambda v_{\beta}'(\hat{x})\int_{\beta-\hat{x}}^\infty f(z)\, dz\\ &= &(-p'(\hat{x})-q)v_{\beta}'(\hat{x})<0. \end{eqnarray*} This yields a contradiction. Therefore, $v_{\beta}''(x)<0$ for all $x\in(0,\beta)$. \qed We are ready to prove the main results. \subsection{Proof of Theorem \ref{bs exist}} From \eqref{eqvb} and \eqref{cAdual} we can conclude that $v_\beta$ on $(0,\beta)$ satisfies the equation: \begin{equation}\label{pretrans} -p(x)v_\beta'(x)-(\lambda+q)v_\beta(x)+\lambda\int_0^{\beta-x}v_\beta(x+z)f(z)\,dz+\lambda\int_{\beta-x}^\infty \left(x+z-\beta+v_\beta(\beta)\right)f(z)\,dz=0 \end{equation} with the initial condition $v_\beta(0)=0$. We recall that we are looking for $\beta^*$ satisfying $v_{\beta^*}'(\beta^*)=1$.
Let \[u_\beta(x):=v_{\beta}'(x).\] Transforming equation \eqref{pretrans}, we obtain the following Fredholm equation for $u_\beta$: \begin{equation}\label{ubeta} u_\beta(x)=G_\beta(x)+\int_0^\beta K(x,y)u_\beta(y)\,dy,\end{equation} where $$G_\beta(x):=\frac{\lambda}{p(x)}\left(\int_{\beta-x}^\infty \left(x+z-\beta\right)f(z)dz\right),$$ \begin{eqnarray*} K(x,y):=\left\{ \begin{array}{lll} \frac{-q}{p(x)}&\text{ for }& 0\leq y\leq x\\ \frac{\lambda}{p(x)}\int_{y-x}^\infty f(z)\,dz&\text{ for }& y>x\geq 0. \end{array} \right. \end{eqnarray*} Taking $x=\beta$ in \eqref{ubeta} leads to the equation: \begin{equation}\label{gamma} u_\beta(\beta)=\frac{\lambda}{p(\beta)}\mathbb{E}(C_1)-\frac{q}{p(\beta)}\int_0^\beta u_\beta(y)\,dy.\end{equation} We denote: \[\gamma(\beta):=u_\beta(\beta).\] We want to prove the existence of $\beta^*$ such that \begin{equation}\label{stat} \gamma(\beta^*)=1.\end{equation} By our assumption: \begin{equation}\label{wiekjeden0} \gamma(0)=\frac{\lambda}{p(0)}\mathbb{E}(C_1)>1.\end{equation} We will show that \begin{equation}\label{wiekjeden}\gamma(\hat{x})\leq 1.\end{equation} Indeed, assume a contrario that $\gamma(\hat{x})>1$. Then by Lemma \ref{concave}, $v_{\hat{x}}$ is concave, so $u_{\hat{x}}$ is decreasing and consequently $u_{\hat{x}}(y)>1$ for all $y\in(0,\hat{x})$. Thus \begin{eqnarray*} \gamma(\hat{x})&=&\frac{\lambda}{p(\hat{x})}\mathbb{E}(C_1)-\frac{q}{p(\hat{x})}\int_0^{\hat{x}} u_{\hat{x}}(y)\,dy\\ &\leq& \frac{\lambda}{p(\hat{x})}\mathbb{E}(C_1)-\frac{q}{p(\hat{x})}\hat{x}<1. \end{eqnarray*} In this way we have derived a contradiction. To obtain \eqref{stat} from inequalities \eqref{wiekjeden0} and \eqref{wiekjeden}, it suffices to prove that $\gamma$ is a continuous function. The latter follows from the following observation. We denote $\Delta_\beta b_\beta(x)=b_{\beta}(x)-b_{\beta-}(x)$ for a general function $b_\beta$.
From \eqref{ubeta} it follows that \[\Delta_\beta u_\beta(\beta)=\int_0^\beta K(\beta,y)\Delta_\beta u_\beta(y)\,dy\] and hence the function $\beta\mapsto \Delta_\beta u_\beta(\beta)$ is continuous. From the last conclusion it follows that $\Delta_\beta\Delta_\beta u_\beta(\beta)=\Delta_\beta u_\beta(\beta)=0$, and therefore the function $\beta\mapsto u_\beta(\beta)=\gamma(\beta)$ is also continuous. This completes the proof. \qed

\subsection{Proof of Theorem \ref{T:optimal}}

By Lemma \ref{concave}, we have $v_{\beta^*}'(x)\geq v_{\beta^*}'(\beta^*)=1$ for all $x\in(0,\beta^*)$. Moreover, $(\mathcal{A}-q\mathbf{I})v_{\beta^*}(x)=0$ for $x\in(0,\beta^*)$. It remains to prove that $(\mathcal{A}-q\mathbf{I})v_{\beta^*}(x)\leq 0$ for $x>\beta^*$. Since $v_{\beta^*}(x)=x-\beta^*+v_{\beta^*}(\beta^*)$ for $x>\beta^*$, we have: \begin{equation}\label{eq1} (\mathcal{A}-q\mathbf{I})v_{\beta^*}(x)=-p(x)+\lambda\int_0^\infty z\,dF(z)-q(x-\beta^*+v_{\beta^*}(\beta^*)). \end{equation} Note that $v_{\beta^*}$ is $C^1$ and therefore, by the assumption that $p\in C^1$, the function $(\mathcal{A}-q\mathbf{I})v_{\beta^*}(x)$ is continuous at $x=\beta^*$. Thus we have $(\mathcal{A}-q\mathbf{I})v_{\beta^*}(\beta^*)=0$. The assumption $-p'(x)-q\leq 0$, together with (\ref{eq1}), gives the required inequality. Hence, by Theorem \ref{verif}, $v_{\beta^*}(x)\geq v(x)$ for all $x\geq0$. At the same time, from the definition of the value function we have $v_{\beta^*}(x)\leq v(x)$ for all $x\geq0$. Consequently $v_{\beta^*}(x)=v(x)$ and, by Lemma \ref{hbeta}, $v$ must uniquely solve equation \eqref{eqvb} with the boundary condition $v^\prime(\beta^*)=1$. This completes the proof. \qed
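The proof of Theorem \ref{bs exist} suggests a direct numerical scheme for locating the optimal barrier: discretize the Fredholm equation \eqref{ubeta} on a grid, solve the resulting linear system for $u_\beta$, and bisect on $\beta$ until $\gamma(\beta)=u_\beta(\beta)=1$. A minimal sketch, assuming (hypothetically, so that $G_\beta$ and $K$ have closed forms and the conditions $\gamma(0)>1$ and $-p'(x)-q\le 0$ hold) exponential gains $f(z)=\mu e^{-\mu z}$ and a constant cost rate $p(x)\equiv p_0$:

```python
import numpy as np

# hypothetical model ingredients (not from the paper's examples):
lam, mu, q, p0 = 1.0, 1.0, 0.1, 0.5   # gain rate, exp(mu) gains, discount, cost rate
# gamma(0) = lam*E[C1]/p(0) = lam/(mu*p0) = 2 > 1, and -p'(x)-q = -q < 0

def gamma(beta, n=400):
    """Approximate gamma(beta) = u_beta(beta) by discretizing the Fredholm
    equation u = G_beta + int_0^beta K(., y) u(y) dy on an n-point grid."""
    x = np.linspace(0.0, beta, n)
    h = x[1] - x[0]
    # closed forms for exponential gains: G_beta(x) = (lam/p0) e^{-mu(beta-x)}/mu
    G = (lam / p0) * np.exp(-mu * (beta - x)) / mu
    X, Y = np.meshgrid(x, x, indexing="ij")
    K = np.where(Y <= X, -q / p0, (lam / p0) * np.exp(-mu * (Y - X)))
    w = np.full(n, h); w[0] = w[-1] = h / 2          # trapezoid weights
    u = np.linalg.solve(np.eye(n) - K * w, G)        # (I - K W) u = G
    return u[-1]

# bisection for the optimal barrier: gamma > 1 near 0+ and <= 1 far out
lo, hi = 1e-3, 10.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gamma(mid) > 1.0 else (lo, mid)
beta_star = 0.5 * (lo + hi)
print(round(beta_star, 3))
```

The bisection is justified by the two inequalities \eqref{wiekjeden0} and \eqref{wiekjeden} together with the continuity of $\gamma$ established in the proof.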
\section{Introduction} A very natural and fundamental problem in mathematical logic is the determination of the precise set of axioms that are enough to guarantee the completeness property of a certain logic. That is, to axiomatize the logic in question so that semantic validity and syntactic provability correspond. For classical and intuitionistic first-order and propositional logic the answers are known. As soon as the expressive power of the logic increases, proving a completeness theorem usually requires a stronger metatheory, often relying on set-theoretical assumptions. For example, completeness theorems for infinitary classical logic are known to depend on the addition of large cardinal axioms in the metatheory. In this work, we plan to consider infinitary intuitionistic logics in full generality, and to provide axiomatizations and completeness theorems. Unlike the common techniques used to establish completeness, the main insight we will follow is to establish a general framework within categorical logic that is adequate for the development of infinitary languages. Building on previous work by Karp \cite{karp} and Makkai \cite{makkai}, we will see that this setting is precisely that of infinitary first-order categorical logic. The unifying power of category-theoretic language provides the right means to subsume and amalgamate the existing completeness theorems in a unique framework, from which new completeness results branch out, combining aspects of categorical logic, sheaf theory, model theory and set theory. Completeness theorems for classical propositional logic were developed by Hilbert and Ackermann, and extended to the classical first-order case by G\"odel in his doctoral dissertation \cite{godel}. For intuitionistic logic, completeness with respect to the semantics of possible worlds was introduced by Kripke in \cite{kripke}.
Meanwhile, completeness of infinitary logics was obtained by Karp in the monograph \cite{karp}, where systems for classical infinitary propositional and first-order logic are described and studied extensively with Hilbert-type systems (see also \cite{mt} for a related development with Gentzen's sequents). Nadel developed infinitary intuitionistic propositional logic with countably many conjunctions/disjunctions in \cite{nadel}, proving completeness with respect to the infinitary version of Kripke semantics. On the other hand, Makkai considered infinitary regular theories in \cite{makkai} together with a corresponding completeness theorem. Makkai's work makes extensive use of categorical logic, a subject prominent since the seventies, when the school of Montr\'eal introduced the discipline, exploiting the techniques of category theory to study logical theories. The philosophy of theories as categories reached its climax in the work of Makkai and Reyes \cite{mr} (1977), where the authors give an extensive treatment of categorical logic for first-order theories, including some infinitary versions. We make the jump from Makkai's regular case in \cite{makkai} to the first-order case by means of a completeness theorem for infinitary coherent logic, which, together with a generalization of an unpublished theorem of A. Joyal, will allow us to derive completeness for infinitary intuitionistic propositional and first-order logics, in terms of infinitary Kripke semantics, as well as sheaf and categorical models. Unlike classical infinitary logics, whose related completeness results have been known for decades, the main difficulty in studying infinitary intuitionistic logics is the huge variety of non-equivalent formulas that one can obtain. This, as we will show, suggests the introduction of large cardinal axioms appropriate to handle the unexpected richness that one gets by dropping the excluded middle.
Instead of relying on Henkin's method of adding constants, which encounters some problems in the infinitary case, the way that we will prove completeness relies on manipulating sheaf models for infinitary logics (these are explained for example in \cite{bj}). The expert will however find that the methods here developed are a refinement of those of Henkin, in the sense that instead of introducing witnessing constants for existential sentences, such witnesses appear naturally as a result of forcing statements on our way to building models for the theory. The key contribution is the identification of the transfinite transitivity property as the correct categorical counterpart of the axiomatic treatment proposed. This property is a generalization of the transitivity property for Grothendieck topologies, and plays an essential r\^ole in the completeness proof. The structure of this work is as follows. In the first part we introduce a system for infinitary intuitionistic logic and we prove that the addition of the axiom of excluded middle results in a system equivalent to Karp's classical system. The second part deals with the categorical counterpart of this type of logic, and introduces many of the developments of categorical logic generalized to the infinitary case; it also contains completeness proofs for infinitary intuitionistic logic in terms of sheaf and more general categorical models. The third part contains the proof of completeness of infinitary intuitionistic logic with respect to Kripke semantics (a generalization of Joyal's theorem) relying on a large cardinal hypothesis and on a completeness theorem for infinitary coherent logic. Along the way we prove an infinitary version of completeness for Beth semantics. We also derive Karp's completeness theorem from ours, as well as Makkai's and Fourman/Grayson's.
When restricting to the finitary case, these theorems reduce to the completeness theorems of G\"odel, Kripke, Beth, and the completeness of the regular and coherent fragments of first-order logic, thus providing new proofs for all of them. \subsection{Karp's system} Infinitary languages $\mathcal{L}_{\kappa, \lambda}$ are defined according to the length of the infinitary conjunctions/disjunctions, as well as of the quantifier blocks, that they allow. In that way, assuming a supply of $\kappa$ variables to be interpreted as ranging over a nonempty domain, one includes in the inductive definition of formulas an infinitary clause for conjunctions and disjunctions, namely, whenever the ordinal-indexed sequence $A_0, ..., A_{\delta}, ...$ of formulas has length less than $\kappa$, one can form the infinitary conjunction/disjunction of them to produce a formula. Analogously, whenever an ordinal-indexed sequence of variables has length less than $\lambda$, one can introduce one of the quantifiers $\forall$ or $\exists$ together with the sequence of variables in front of a formula to produce a new formula. One also stipulates that $\kappa$ be a regular cardinal, so that the length of any well-formed formula is less than $\kappa$ itself. It is then a natural question to ask for which of these infinitary languages one can provide a notion of provability for which a form of completeness theorem can be proven, in terms, for example, of the obvious Tarskian semantics associated to them. In \cite{karp}, Karp proves completeness theorems for the classical logic $\mathcal{L}_{\kappa, \kappa}$, for inaccessible $\kappa$, within a Hilbert-style system including the distributivity and the dependent choice axioms.
These axioms consist of the following schemata: \begin{enumerate} \item $A \to [B \to A]$ \item $[A \to [B \to C]] \to [[A \to B] \to [A \to C]]$ \item $[\neg B \to \neg A] \to [A \to B]$ \item $[\bigwedge_{i<\alpha} [A \to A_i]] \to [A \to \bigwedge_{i<\alpha} A_i]$ \item $[\bigwedge_{i<\alpha} A_i] \to A_j$ \item $[\forall \mathbf{x} [A \to B] \to [A \to \forall \mathbf{x} B]]$ provided no variable in $\mathbf{x}$ occurs free in $A$; \item $\forall \mathbf{x} A \to S_f(A)$ where $S_f(A)$ is a substitution based on a function $f$ from $\mathbf{x}$ to the terms of the language; \item Equality axioms: \begin{enumerate} \item $t=t$ \item $[\bigwedge_{i<\alpha}t_i=t'_i] \to [\phi(t_0, ..., t_\xi, ...)=\phi(t'_0, ..., t'_\xi, ...)]$ \item $[\bigwedge_{i<\alpha}t_i=t'_i] \to [P(t_0, ..., t_\xi, ...) \to P(t'_0, ..., t'_\xi, ...)]$ for each $\alpha<\kappa$, where $t, t_i$ are terms and $\phi$ is a function symbol of arity $\alpha$ and $P$ a relation symbol of arity $\alpha$; \end{enumerate} \item Classical distributivity axiom\footnote{Throughout this work the notation $\alpha^{\beta}$ for ordinals $\alpha, \beta$ will always denote the set of functions $f: \beta \to \alpha$, and should not be confused with ordinal exponentiation.}: $$\bigwedge_{i<\gamma}\bigvee_{j<\gamma} \psi_{ij} \to \bigvee_{f \in \gamma^{\gamma}}\bigwedge_{i<\gamma} \psi_{if(i)}$$ \item Classical dependent choice axiom: $$\bigwedge_{\alpha < \gamma} \forall_{\beta < \alpha}\mathbf{x}_{\beta} \exists \mathbf{x}_{\alpha} \psi_{\alpha} \to \exists_{\alpha < \gamma} \mathbf{x}_{\alpha} \bigwedge_{\alpha < \gamma} \psi_{\alpha}$$ provided the sets $\mathbf{x}_{\alpha}$ are pairwise disjoint and no variable in $\mathbf{x}_{\alpha}$ is free in $\psi_{\beta}$ for $\beta<\alpha$. \end{enumerate} The inference rules are modus ponens, conjunction introduction and generalization.
In the same way that for finitary languages proofs are finitary objects, the right metatheory to study formal proofs of infinitary languages is that of sets hereditarily of cardinal less than $\kappa$. Similarly, G\"odel numberings of finitary formulas can be generalized to the infinitary case if one uses (not necessarily finite) ordinal numbers (see \cite{karp}), by considering one-to-one functions from the symbols of the language into $\kappa$. It is then possible to consider G\"odel numbers of formulas and prove that they correspond to those sets hereditarily of cardinal less than $\kappa$ that satisfy a precise ordinary predicate in a certain metalanguage. Moreover, G\"odel numbers of provable formulas must satisfy a precise provability predicate in such a metalanguage. The development of \cite{karp} is classical, that is, the infinitary systems considered formalize infinitary classical logic. Intuitionistic systems of infinitary logic using countably many conjunctions and disjunctions were studied in \cite{nadel}. Our purpose here is to study systems for the intuitionistic general case, together with corresponding completeness theorems. \subsection{Infinitary first-order systems} Let $\kappa$ be an inaccessible cardinal (we consider $\omega$ to be inaccessible as well, so that our account embodies in particular the finitary case). The syntax of intuitionistic $\kappa$-first-order logics $\mathcal{L}_{\kappa, \kappa}$ consists of a (well-ordered) set of sorts and a set of function and relation symbols, the latter together with their corresponding types, each of which is a subset with less than $\kappa$ many sorts. Therefore, we assume that our signature may contain relation and function symbols in $\gamma<\kappa$ many variables, and we suppose there is a supply of $\kappa$ many fresh variables of each sort.
Terms and atomic formulas are defined as usual, and general formulas are defined inductively according to the following: \begin{defs} If $\phi, \psi, \{\phi_{\alpha}: \alpha<\gamma\}$ (for each $\gamma<\kappa$) are formulas of $\mathcal{L}_{\kappa, \kappa}$, the following are also formulas: $\bigwedge_{\alpha<\gamma}\phi_{\alpha}$, $\bigvee_{\alpha<\gamma}\phi_{\alpha}$, $\phi \to \psi$, $\forall_{\alpha<\gamma} x_{\alpha} \phi$ (also written $\forall \mathbf{x}_{\gamma} \phi$ if $\mathbf{x}_{\gamma}=\{x_{\alpha}: \alpha<\gamma\}$), $\exists_{\alpha<\gamma} x_{\alpha} \phi$ (also written $\exists \mathbf{x}_{\gamma} \phi$ if $\mathbf{x}_{\gamma}=\{x_{\alpha}: \alpha<\gamma\}$). \end{defs} The inductive definition of formulas allows us to place them in hierarchies or levels up to $\kappa$. Formulas in a successor level are built using the clauses of the definition from formulas in the previous level, while at limit levels one takes the union of all formulas in all levels so far defined. Proofs by induction on the complexity of the formula are proofs by transfinite induction on the least level of the formulas. The infinitary systems that we will use in categorical logic for our purposes have all the rules of finitary first-order logic, except that in the case of $\mathcal{L}_{\kappa, \kappa}$ we allow infinite sets of variables as contexts of the sequents. Since the variables of each sort are assumed to be in correspondence with (the elements of) $\kappa$, each subset of variables comes with an inherited well-order, which we will assume as given when we quantify over sets of variables. There are two special types of formulas that one can consider. One is the class of $\kappa$-\emph{regular} formulas (see \cite{makkai}), which are those built from atomic formulas using $\kappa$-conjunctions and $\kappa$-existential quantification. Adding $\kappa$-disjunction results in the class of $\kappa$-\emph{coherent} formulas. We shall introduce both of these in more detail later.
We use a sequent-style calculus to formulate the axioms of first-order logic, as can be found, e.g., in \cite{johnstone}, D1.3. The system for $\kappa$-first-order logic is described below. Its key feature, and its essential difference from Karp's system (besides being intuitionistic), is the introduction of the transfinite transitivity rule, which, as we shall see, is an intuitionistic way of merging the classical distributivity and dependent choice axioms. The intuitive meaning of this rule will be further explained after the following: \begin{defs}\label{sfol} The system of axioms and rules for $\kappa$-first-order logic consists of \begin{enumerate} \item Structural rules: \begin{enumerate} \item Identity axiom: \begin{mathpar} \phi \vdash_{\mathbf{x}} \phi \end{mathpar} \item Substitution rule: \begin{mathpar} \inferrule{\phi \vdash_{\mathbf{x}} \psi}{\phi[\mathbf{s}/\mathbf{x}] \vdash_{\mathbf{y}} \psi[\mathbf{s}/\mathbf{x}]} \end{mathpar} where $\mathbf{y}$ is a string of variables including all variables occurring in the string of terms $\mathbf{s}$. \item Cut rule: \begin{mathpar} \inferrule{\phi \vdash_{\mathbf{x}} \psi \\ \psi \vdash_{\mathbf{x}} \theta}{\phi \vdash_{\mathbf{x}} \theta} \end{mathpar} \end{enumerate} \item Equality axioms: \begin{enumerate} \item \begin{mathpar} \top \vdash_{x} x=x \end{mathpar} \item \begin{mathpar} (\mathbf{x}=\mathbf{y}) \wedge \phi \vdash_{\mathbf{z}} \phi[\mathbf{y}/\mathbf{x}] \end{mathpar} where $\mathbf{x}$, $\mathbf{y}$ are contexts of the same length and type and $\mathbf{z}$ is any context containing $\mathbf{x}$, $\mathbf{y}$ and the free variables of $\phi$. \end{enumerate} \item Conjunction axioms and rules: $$\bigwedge_{i<\gamma} \phi_i \vdash_{\mathbf{x}} \phi_j$$ \begin{mathpar} \inferrule{\{\phi \vdash_{\mathbf{x}} \psi_i\}_{i<\gamma}}{\phi \vdash_{\mathbf{x}} \bigwedge_{i<\gamma} \psi_i} \end{mathpar} for each cardinal $\gamma<\kappa$.
\item Disjunction axioms and rules: $$\phi_j \vdash_{\mathbf{x}} \bigvee_{i<\gamma} \phi_i$$ \begin{mathpar} \inferrule{\{\phi_i \vdash_{\mathbf{x}} \theta\}_{i<\gamma}}{\bigvee_{i<\gamma} \phi_i \vdash_{\mathbf{x}} \theta} \end{mathpar} for each cardinal $\gamma<\kappa$. \item Implication rule: \begin{mathpar} \mprset{fraction={===}} \inferrule{\phi \wedge \psi \vdash_{\mathbf{x}} \theta}{\phi \vdash_{\mathbf{x}} \psi \to \theta} \end{mathpar} \item Existential rule: \begin{mathpar} \mprset{fraction={===}} \inferrule{\phi \vdash_{\mathbf{x} \mathbf{y}} \psi}{\exists \mathbf{y}\phi \vdash_{\mathbf{x}} \psi} \end{mathpar} where no variable in $\mathbf{y}$ is free in $\psi$. \item Universal rule: \begin{mathpar} \mprset{fraction={===}} \inferrule{\phi \vdash_{\mathbf{x} \mathbf{y}} \psi}{\phi \vdash_{\mathbf{x}} \forall \mathbf{y}\psi} \end{mathpar} where no variable in $\mathbf{y}$ is free in $\phi$. \item Transfinite transitivity rule: \begin{mathpar} \inferrule{\phi_{f} \vdash_{\mathbf{y}_{f}} \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \exists \mathbf{x}_{g} \phi_{g} \\ \beta<\gamma, f \in \gamma^{\beta} \\\\ \phi_{f} \dashv \vdash_{\mathbf{y}_{f}} \bigwedge_{\alpha<\beta}\phi_{f|_{\alpha}} \\ \beta < \gamma, \text{ limit }\beta, f \in \gamma^{\beta}}{\phi_{\emptyset} \vdash_{\mathbf{y}_{\emptyset}} \bigvee_{f \in \gamma^{\gamma}} \exists_{\beta<\gamma}\mathbf{x}_{f|_{\beta +1}} \bigwedge_{\beta<\gamma}\phi_{f|_\beta}} \end{mathpar} \\ for each cardinal $\gamma < \kappa$, where $\mathbf{y}_{f}$ is the canonical context of $\phi_{f}$, provided that, for every $f \in \gamma^{\beta+1}$, $FV(\phi_{f}) = FV(\phi_{f|_{\beta}}) \cup \mathbf{x}_{f}$ and $\mathbf{x}_{f|_{\beta +1}} \cap FV(\phi_{f|_{\beta}})= \emptyset$ for any $\beta<\gamma$, as well as $FV(\phi_{f}) = \bigcup_{\alpha<\beta} FV(\phi_{f|_{\alpha}})$ for limit $\beta$. Note that we assume that there is a fixed well-ordering of $\gamma^{\gamma}$ for each $\gamma<\kappa$. 
\end{enumerate} \end{defs} In this formulation the double line indicates a bidirectional rule. Note that the axiom (schema) of excluded middle, which is not assumed here, is $\top \vdash_{\mathbf{x}} \phi \vee \neg \phi$. Note also that in full infinitary first-order logic we can dispense with the use of sequents and treat $\phi \vdash_{\mathbf{x}} \psi$ as simply $\forall \mathbf{x}(\phi \to \psi)$. Conversely, any formula $\phi(\mathbf{x})$ can be interpreted as the sequent $\top \vdash_{\mathbf{x}} \phi$, thereby obtaining a translation into Hilbert-style systems. In $\kappa$-first-order logic the following two axioms are provable: \begin{enumerate} \item Small distributivity axiom: $$\phi \wedge \bigvee_{i<\gamma} \psi_{i} \vdash_{\mathbf{x}} \bigvee_{i<\gamma}\phi \wedge \psi_{i}$$ for each cardinal $\gamma<\kappa$. \item Frobenius axiom: $$\phi \wedge \exists \mathbf{y} \psi \vdash_{\mathbf{x}} \exists \mathbf{y} (\phi \wedge \psi)$$ \noindent where no variable in $\mathbf{y}$ is in the context $\mathbf{x}$. \end{enumerate} However, when working in smaller fragments without implication (like the $\kappa$-coherent fragment to be introduced later) they need to be included, as they are not derivable (for the small distributivity axiom this can be seen, for example, in the propositional case by considering non-distributive lattices satisfying all other axioms, and a similar idea works for the Frobenius axiom, using categorical semantics). The transfinite transitivity rule can be understood as follows. Consider $\gamma^{\leq \gamma}$, the $\gamma$-branching tree of height $\gamma$, i.e., the poset of functions $f: \beta \to \gamma$ for $\beta \leq \gamma$ with the order given by inclusion. Suppose there is an assignment of formulas $\phi_f$ to each node $f$ of $\gamma^{\leq \gamma}$.
Then the rule expresses that if the assignment is done in a way that the formula assigned to each node entails the join of the formulas assigned to its immediate successors, and if the formula assigned to a node in a limit level is equivalent to the meet of the formulas assigned to its predecessors, then the formula assigned to the root entails the join of the formulas assigned to the nodes in level $\gamma$. There is also a version of the deduction theorem that holds here: \begin{lemma}\label{dt} Let $\Sigma$ be a set of sequents and let $\sigma$ be a sentence. If the theory $\Sigma \cup \{\top \vdash \sigma\}$ derives the sequent $\phi \vdash_{\mathbf{x}} \psi$, then the theory $\Sigma$ derives the sequent $\phi \wedge \sigma \vdash_{\mathbf{x}} \psi$. \end{lemma} \begin{proof} Straightforward induction on the length of the derivation. \end{proof} In full first-order logic the transfinite transitivity rule can be replaced by the axiom schema, for each $\gamma < \kappa$: $$\bigwedge_{f \in \gamma^{\beta}, \beta<\gamma} \forall (\mathbf{y}_{f} \setminus \mathbf{y}_{\emptyset}) \left(\phi_{f} \to \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \exists \mathbf{x}_{g} \phi_{g}\right)$$ $$\wedge \bigwedge_{\beta < \gamma, \text{ limit }\beta, f \in \gamma^{\beta}} \forall (\mathbf{y}_{f} \setminus \mathbf{y}_{\emptyset}) \left(\phi_{f} \leftrightarrow \bigwedge_{\alpha<\beta}\phi_{f|_{\alpha}}\right)$$ $$\vdash_{\mathbf{y}_{\emptyset}} \phi_{\emptyset} \to \bigvee_{f \in \gamma^{\gamma}} \exists_{\beta<\gamma}\mathbf{x}_{f|_{\beta +1}} \bigwedge_{\beta<\gamma}\phi_{f|_{\beta}}.$$ There are two particular cases of the transfinite transitivity rule which are of interest: \begin{enumerate} \item Distributivity rule: \begin{mathpar} \inferrule{\phi_{f} \vdash_{\mathbf{x}} \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \phi_{g} \\ \beta<\gamma, f \in \gamma^{\beta} \\\\ \phi_{f} \dashv \vdash_{\mathbf{x}} \bigwedge_{\alpha<\beta}\phi_{f|_{\alpha}} \\ \beta < \gamma, \text{ 
limit }\beta, f \in \gamma^{\beta}}{\phi_{\emptyset} \vdash_{\mathbf{x}} \bigvee_{f \in \gamma^{\gamma}}\bigwedge_{\beta<\gamma}\phi_{f|_{\beta}}} \end{mathpar} for each $\gamma<\kappa$ (we assume that there is a fixed well-ordering of $\gamma^{\gamma}$ for each $\gamma<\kappa$). \item Dependent choice rule: \begin{mathpar} \inferrule{\phi_{\beta} \vdash_{\mathbf{y}_{\beta}} \exists \mathbf{x}_{\beta+1} \phi_{\beta +1} \\ \beta<\gamma \\\\ \phi_{\beta} \dashv \vdash_{\mathbf{y}_{\beta}} \bigwedge_{\alpha<\beta}\phi_{\alpha} \\ \beta \leq \gamma, \text{ limit }\beta}{\phi_{\emptyset} \vdash_{\mathbf{y}_{\emptyset}} \exists_{\beta<\gamma}\mathbf{x}_{\beta +1}\phi_{\gamma}} \end{mathpar} for each $\gamma < \kappa$, where $\mathbf{y}_{\beta}$ is the canonical context of $\phi_{\beta}$, provided that, for every $f \in \gamma^{\beta+1}$, $FV(\phi_{f}) = FV(\phi_{f|_{\beta}}) \cup \mathbf{x}_{f}$ and $\mathbf{x}_{f|_{\beta +1}} \cap FV(\phi_{f|_{\beta}})= \emptyset$ for any $\beta<\gamma$, as well as $FV(\phi_{f}) = \bigcup_{\alpha<\beta} FV(\phi_{f|_{\alpha}})$ for limit $\beta$. 
\end{enumerate} Again, if implication and universal quantification are available in the fragment we are considering, we can instead replace the distributivity and dependent choice rules by axiom schemata expressible with single sequents, for each $\gamma < \kappa$: $$\bigwedge_{f \in \gamma^{\beta}, \beta<\gamma} \left( \phi_{f} \to \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \phi_{g}\right)$$ $$\wedge \bigwedge_{\beta < \gamma, \text{ limit }\beta, f \in \gamma^{\beta}} \left(\phi_{f} \leftrightarrow \bigwedge_{\alpha<\beta}\phi_{f|_{\alpha}}\right) \vdash_{\mathbf{x}} \phi_{\emptyset} \to \bigvee_{f \in \gamma^{\gamma}} \bigwedge_{\beta<\gamma}\phi_{f|_{\beta}}$$ \\ and $$\bigwedge_{\beta < \gamma} \forall (\mathbf{y}_{\beta} \setminus \mathbf{y}_{\emptyset}) \left(\phi_{\beta} \to \exists \mathbf{x}_{\beta +1} \phi_{\beta +1} \right)$$ $$\wedge \bigwedge_{\beta \leq \gamma, \text{ limit }\beta} \forall (\mathbf{y}_{\beta} \setminus \mathbf{y}_{\emptyset}) \left(\phi_{\beta} \leftrightarrow \bigwedge_{\alpha<\beta}\phi_{\alpha} \right) \vdash_{\mathbf{y}_{\emptyset}} \phi_{\emptyset} \to \exists_{\alpha < \gamma} \mathbf{x}_{\alpha} \phi_{\gamma}.$$ In turn, the rule of dependent choice has as a particular case the rule of choice: \begin{mathpar} \inferrule{\phi \vdash_{\mathbf{x}} \bigwedge_{\beta<\gamma} \exists \mathbf{x}_{\beta} \phi_{\beta}}{\phi \vdash_{\mathbf{x}} \exists_{\beta<\gamma}\mathbf{x}_{\beta}\bigwedge_{\beta<\gamma}\phi_{\beta}} \end{mathpar} \noindent where the $\mathbf{x}_{\beta}$ are disjoint canonical contexts of the $\phi_{\beta}$. This can be seen by applying dependent choice to the formulas $\psi_{\beta}=\phi \wedge \bigwedge_{\alpha<\beta}\phi_{\alpha+1}$. 
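Read semantically, the dependent choice rule simply iterates witness extraction: whenever every stage-$\beta$ sequence of witnesses realizing $\phi_\beta$ can be extended by some $\mathbf{x}_{\beta+1}$ realizing $\phi_{\beta+1}$, a single sequence realizing all stages at once exists. A finite-stage analogue of this iteration, with a hypothetical witness-producing step standing in for the existential premise:

```python
# finite-stage analogue of the dependent choice rule: iterate witness extraction
gamma_len = 6

def next_witness(prefix):
    # hypothetical step realizing "phi_beta -> exists x_{beta+1} phi_{beta+1}":
    # here each new witness is the sum of the previous two (seeded with 1s)
    return 1 if len(prefix) < 2 else prefix[-1] + prefix[-2]

seq = []
for _ in range(gamma_len):
    seq.append(next_witness(seq))
print(seq)  # one sequence realizing every stage simultaneously
```

In the transfinite setting the same iteration must additionally pass through limit stages, which is exactly what the equivalences $\phi_{\beta} \dashv\vdash \bigwedge_{\alpha<\beta}\phi_{\alpha}$ for limit $\beta$ control.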
\begin{lemma}\label{equivl} All instances of the classical distributivity axiom: $$\bigwedge_{i<\gamma}\bigvee_{j<\gamma} \psi_{ij} \vdash_{\mathbf{x}} \bigvee_{f \in \gamma^{\gamma}}\bigwedge_{i<\gamma} \psi_{if(i)}$$ \\ are derivable from those of the axiom: $$\bigwedge_{f \in \gamma^{\beta}, \beta<\gamma}\left(\phi_{f} \to \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \phi_{g}\right)$$ $$\wedge \bigwedge_{\beta < \gamma, \text{ limit }\beta, f \in \gamma^{\beta}} \left(\phi_{f} \leftrightarrow \bigwedge_{\alpha<\beta}\phi_{f|_{\alpha}}\right) \vdash_{\mathbf{x}} \phi_{\emptyset} \to \bigvee_{f \in \gamma^{\gamma}} \bigwedge_{\beta<\gamma}\phi_{f|_{\beta}}.$$ \\ Moreover, if excluded middle is available and $\kappa$ is inaccessible, the converse holds. \end{lemma} \begin{proof} Assign to the nodes of the tree $\gamma^{< \gamma}$ the following formulas: to the immediate successors of a node $\phi_{f}$, for $f \in \gamma^{\beta}$, assign the formulas $\psi_{\beta j}$, then set $\phi_{\emptyset}= \top$, and $\phi_{f}=\bigwedge_{\alpha<\beta}\phi_{f|_{\alpha}}$ for $f \in \gamma^{\beta}$ and limit $\beta$. Then we can derive the sequent $\bigwedge_{i<\gamma}\bigvee_{j<\gamma} \psi_{ij} \vdash_{\mathbf{x}} \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \phi_{g}$ for each $f$, from which we can further derive: $$\bigwedge_{i<\gamma}\bigvee_{j<\gamma} \psi_{ij} \vdash_{\mathbf{x}} \phi_{f} \to \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \phi_{g}$$ \\ for each $f$. Thus, applying the distributivity axiom and the cut rule, we get: $$\bigwedge_{i<\gamma}\bigvee_{j<\gamma} \psi_{ij} \vdash_{\mathbf{x}} \bigvee_{f \in \gamma^{\gamma}}\bigwedge_{i<\gamma} \psi_{if(i)}$$ \\ as we wanted. If excluded middle is available, we have $\top \vdash_{\mathbf{x}} \bigwedge_{f \in \gamma^{\beta}, \beta<\gamma} (\phi_f \vee \neg \phi_f)$.
If $I=\{f \in \gamma^{\beta}, \beta<\gamma\}$, the distributivity axiom implies that then $\top \vdash_{\mathbf{x}} \bigvee_{g \in 2^I} \bigwedge_{f \in I} a_g(\phi_f)$, where $a_g(\phi_f)=\phi_f$ if $g(f)=0$ and $a_g(\phi_f)=\neg \phi_f$ if $g(f)=1$. We have therefore: $$\phi_{\emptyset} \equiv \bigvee_{g \in 2^I} \left(\phi_{\emptyset} \wedge \bigwedge_{f \in I} a_g(\phi_f)\right).$$ \\ If $\phi_{\emptyset} \wedge \bigwedge_{f \in I} a_g(\phi_f)$ is not equivalent to $\bot$, $a_g(\phi_{\emptyset})=\phi_{\emptyset}$, and hence $\phi_{\emptyset}$ is the join: $$\bigvee_{\phi_{\emptyset} \wedge \bigwedge_{f \in I} a_g(\phi_f) \not\equiv \bot} \bigwedge_{f \in I} a_g(\phi_f).$$ We will prove that each disjunct in this join entails $\bigvee_{f \in I} \bigwedge_{\beta<\gamma}\phi_{f|_{\beta}}$, which is clearly enough. Take one such disjunct, $b=\bigwedge_{f \in I} a_g(\phi_f) \not\equiv \bot$ for a certain $g$. First, notice that since $\top \vdash_{\mathbf{x}} \phi_{f} \to \bigvee_{h \in \gamma^{\beta+1}, h|_{\beta}=f} \phi_{h}$, if $b \vdash_{\mathbf{x}} \phi_{f}$ then $b \wedge \phi_h \not\equiv \bot$ for at least one $h$, in which case it follows that $a_g(\phi_h)=\phi_h$ and hence $b \vdash_{\mathbf{x}} \phi_h$. With this observation, we can inductively define a path $f \in \gamma^{\gamma}$ such that $b \vdash_{\mathbf{x}} \phi_{f|_{\alpha}}$ for each $\alpha<\gamma$. Thus, $b \vdash_{\mathbf{x}} \bigvee_{f \in \gamma^{\gamma}} \bigwedge_{\beta<\gamma}\phi_{f|_{\beta}}$, as we wanted. This finishes the proof. \end{proof} \begin{rmk} The intuitionistic form of the distributivity law is strictly stronger than the classical version. For example, the interval $[0, 1]$ with the supremum and infimum as join and meet, respectively, satisfies the classical distributivity law for every $\gamma<\kappa$, but the intuitionistic distributivity law fails for $\gamma=\omega_1$. 
\end{rmk} Note that the distributivity rule implies in particular, due to Lemma \ref{equivl}, the small distributivity axiom, which is a particular case of classical distributivity. \begin{lemma}\label{equivl2} All instances of the classical dependent choice axiom: $$\bigwedge_{\alpha < \gamma} \forall_{\beta < \alpha}\mathbf{x}_{\beta} \exists \mathbf{x}_{\alpha} \psi_{\alpha} \vdash_{\mathbf{x}} \exists_{\alpha < \gamma} \mathbf{x}_{\alpha} \bigwedge_{\alpha < \gamma} \psi_{\alpha}$$ \\ are derivable from those of the axiom: $$\bigwedge_{\beta < \gamma} \forall (\mathbf{y}_{\beta} \setminus \mathbf{y}_{\emptyset}) \left(\phi_{\beta} \to \exists \mathbf{x}_{\beta +1} \phi_{\beta +1} \right)$$ $$\wedge \bigwedge_{\beta \leq \gamma, \text{ limit }\beta} \forall (\mathbf{y}_{\beta} \setminus \mathbf{y}_{\emptyset}) \left(\phi_{\beta} \leftrightarrow \bigwedge_{\alpha<\beta}\phi_{\alpha} \right) \vdash_{\mathbf{y}_{\emptyset}} \phi_{\emptyset} \to \exists_{\alpha < \gamma} \mathbf{x}_{\alpha} \phi_{\gamma}.$$ \\ Moreover, if excluded middle is available, the converse holds. \end{lemma} \begin{proof} Define the following formulas: set $\phi_{\emptyset}= (\mathbf{x}=\mathbf{x})$, $\phi_{\beta +1}=\psi_{\beta}$ and $\phi_{\alpha}=\bigwedge_{\beta < \alpha} \phi_{\beta}$ if $\alpha$ is a limit ordinal.
Then we have: $$\bigwedge_{\alpha < \gamma} \forall_{\beta < \alpha}\mathbf{x}_{\beta} \exists \mathbf{x}_{\alpha} \psi_{\alpha} \vdash_{\mathbf{y_{\beta}}} \exists \mathbf{x}_{\beta} \phi_{\beta +1}$$ \\ and in particular, $$\bigwedge_{\alpha < \gamma} \forall_{\beta < \alpha}\mathbf{x}_{\beta} \exists \mathbf{x}_{\alpha} \psi_{\alpha} \vdash_{\mathbf{y_{\beta}}} \phi_{\beta} \to \exists \mathbf{x}_{\beta} \phi_{\beta +1}.$$ \\ Taking universal quantification and conjunctions on the right side, together with the sentence $\bigwedge_{\beta \leq \gamma, \text{ limit }\beta} \forall (\mathbf{y}_{\beta} \setminus \mathbf{y}_{\emptyset}) \left(\phi_{\beta} \leftrightarrow \bigwedge_{\alpha<\beta}\phi_{\alpha} \right)$, we can then use dependent choice and the cut rule to get: $$\bigwedge_{\alpha < \gamma} \forall_{\beta < \alpha}\mathbf{x}_{\beta} \exists \mathbf{x}_{\alpha} \psi_{\alpha} \vdash_{\mathbf{x}} (\mathbf{x}=\mathbf{x}) \to \exists_{\alpha < \gamma} \mathbf{x}_{\alpha} \bigwedge_{\alpha < \gamma} \psi_{\alpha},$$ \\ as desired. If excluded middle is available, notice that we have: $$\forall (\mathbf{y}_{\beta} \setminus \mathbf{y}_{\emptyset}) \left(\phi_{\beta} \to \exists \mathbf{x}_{\beta +1} \phi_{\beta +1} \right) \vdash_{\mathbf{y}_{\emptyset}} \forall (\mathbf{y}_{\beta} \setminus \mathbf{y}_{\emptyset}) \exists \mathbf{x}_{\beta +1} \left(\phi_{\beta} \to \phi_{\beta +1} \right)$$ \\ for $\beta < \gamma$ by the independence of premise principle, classically provable. 
Taking conjunctions and applying classical dependent choice and the cut rule, we get: $$\bigwedge_{\beta < \gamma} \forall (\mathbf{y}_{\beta} \setminus \mathbf{y}_{\emptyset}) \left(\phi_{\beta} \to \exists \mathbf{x}_{\beta +1} \phi_{\beta +1} \right) \vdash_{\mathbf{y}_{\emptyset}} \exists_{\alpha < \gamma} \mathbf{x}_{\alpha} \bigwedge_{\beta<\gamma} \left(\phi_{\beta} \to \phi_{\beta +1} \right)$$ \\ Since we also have $$\bigwedge_{\beta \leq \gamma, \text{ limit }\beta} \forall (\mathbf{y}_{\beta} \setminus \mathbf{y}_{\emptyset}) \left(\phi_{\beta} \leftrightarrow \bigwedge_{\alpha<\beta}\phi_{\alpha} \right) \wedge \exists_{\alpha < \gamma} \mathbf{x}_{\alpha} \bigwedge_{\beta<\gamma} \left(\phi_{\beta} \to \phi_{\beta +1} \right) \vdash_{\mathbf{y}_{\emptyset}} \phi_{\emptyset} \to \exists_{\alpha < \gamma} \mathbf{x}_{\alpha} \bigwedge_{\beta<\gamma} \phi_{\beta}$$ \\ the result follows. \end{proof} We have seen that the transfinite transitivity rule implies both classical distributivity and classical dependent choice. 
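The classical distributivity axiom, unlike its transfinite intuitionistic strengthening, can be verified directly in any completely distributive lattice. A small mechanical sanity check in the powerset lattice of a finite set, for a finite $\gamma$ and hypothetical, randomly generated $\psi_{ij}$:

```python
from itertools import product
import random

# check  /\_{i<g} \/_{j<g} psi_ij  ==  \/_{f in g^g} /\_{i<g} psi_{i f(i)}
# in the powerset lattice of a finite universe (completely distributive)
g = 3
universe = frozenset(range(5))
random.seed(1)
psi = [[frozenset(random.sample(sorted(universe), random.randint(0, 5)))
        for _ in range(g)] for _ in range(g)]

def meet(sets):
    out = universe
    for s in sets:
        out = out & s
    return out

def join(sets):
    out = frozenset()
    for s in sets:
        out = out | s
    return out

lhs = meet(join(psi[i]) for i in range(g))
rhs = join(meet(psi[i][f[i]] for i in range(g))
           for f in product(range(g), repeat=g))
print(lhs == rhs)  # True: an element lies in the lhs iff some choice f puts it in the rhs
```

The identity holds pointwise: $x$ belongs to the left side iff for every $i$ there is some $j$ with $x \in \psi_{ij}$, which is exactly membership in the right side for the function $f$ picking such a $j$ for each $i$.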
We will see now that if excluded middle is available, the converse holds: \begin{thm}\label{equiv} Assuming excluded middle, if $\kappa$ is inaccessible, all the instances of the transfinite transitivity rule: \begin{mathpar} \inferrule{\phi_{f} \vdash_{\mathbf{y}_{f}} \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \exists \mathbf{x}_{g} \phi_{g} \\ \beta<\gamma, f \in \gamma^{\beta} \\\\ \phi_{f} \dashv \vdash_{\mathbf{y}_{f}} \bigwedge_{\alpha<\beta}\phi_{f|_{\alpha}} \\ \beta < \gamma, \text{ limit }\beta, f \in \gamma^{\beta}}{\phi_{\emptyset} \vdash_{\mathbf{y}_{\emptyset}} \bigvee_{f \in \gamma^{\gamma}} \exists_{\beta<\gamma}\mathbf{x}_{f|_{\beta +1}} \bigwedge_{\beta<\gamma}\phi_{f|_\beta}} \end{mathpar} \\ are derivable from instances of the classical distributivity axiom: $$\bigwedge_{i<\gamma}\bigvee_{j<\gamma} \psi_{ij} \vdash_{\mathbf{x}} \bigvee_{f \in \gamma^{\gamma}}\bigwedge_{i<\gamma} \psi_{if(i)}$$ \\and the classical dependent choice axiom: $$\bigwedge_{\alpha < \gamma} \forall_{\beta < \alpha}\mathbf{x}_{\beta} \exists \mathbf{x}_{\alpha} \psi_{\alpha} \vdash_{\mathbf{x}} \exists_{\alpha < \gamma} \mathbf{x}_{\alpha} \bigwedge_{\alpha < \gamma} \psi_{\alpha}.$$ \\ In particular, the addition of the axiom of excluded middle to the system of $\kappa$-first-order logic results in a system equivalent to Karp's.
\end{thm} \begin{proof} If $\phi_{f} \vdash_{\mathbf{y}_{f}} \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \exists \mathbf{x}_{g} \phi_{g}$ for each $\beta<\gamma$, $f \in \gamma^{\beta}$, then we can derive, using the independence of premise principle, the sequent: $$\top \vdash_{\mathbf{y}_{\emptyset}} \forall_{\alpha < \beta}(\cup_{f \in \gamma^{\beta}}\mathbf{x}_{f|_{\alpha+1}}) \bigwedge_{f \in \gamma^{\beta}} \exists_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \mathbf{x}_{g} \left(\phi_f \to \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \phi_{g}\right).$$ \\ Using the rule of choice, derivable from that of dependent choice (which is in turn classically derivable from classical dependent choice by Lemma \ref{equivl2}), we get: $$\top \vdash_{\mathbf{y}_{\emptyset}} \forall_{\alpha < \beta}(\cup_{f \in \gamma^{\beta}}\mathbf{x}_{f|_{\alpha+1}}) \exists (\cup_{g \in \gamma^{\beta+1}}\mathbf{x}_{g}) \bigwedge_{f \in \gamma^{\beta}} \left(\phi_f \to \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \phi_{g}\right).$$ \\ Since we have one such sequent for each $\beta<\gamma$, by classical dependent choice we can thus infer: $$\top \vdash_{\mathbf{y}_{\emptyset}} \exists_{\alpha < \gamma} (\cup_{f \in \gamma^{\gamma}}\mathbf{x}_{f|_{\alpha+1}}) \bigwedge_{\beta<\gamma, f \in \gamma^{\beta}} \left(\phi_f \to \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \phi_{g}\right). 
\qquad (1)$$ \\ On the other hand, by the distributivity property (provable by Lemma \ref{equivl} from the classical distributivity property, since excluded middle is available), we have: $$\bigwedge_{\beta<\gamma, f \in \gamma^{\beta}} \left(\phi_f \to \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \phi_{g}\right) \wedge \bigwedge_{\beta < \gamma, \text{ limit }\beta, f \in \gamma^{\beta}} \left(\phi_{f} \leftrightarrow \bigwedge_{\alpha<\beta}\phi_{f|_{\alpha}}\right)$$ $$\vdash_{\mathbf{y}_{\emptyset} \cup \bigcup_{\alpha < \gamma, f \in \gamma^{\gamma}} \mathbf{x}_{f|_{\alpha +1}}} \phi_{\emptyset} \to \bigvee_{f \in \gamma^{\gamma}} \bigwedge_{\beta<\gamma}\phi_{f|_\beta}.$$ \\ Therefore, the sequent $(1)$ together with the premises $\top \vdash_{\mathbf{y}_{\emptyset} \cup \bigcup_{\alpha < \gamma, f \in \gamma^{\gamma}} \mathbf{x}_{f|_{\alpha +1}}} \phi_{f} \leftrightarrow \bigwedge_{\alpha<\beta}\phi_{f|_{\alpha}}$ for $\beta < \gamma$, $\text{ limit }\beta$, $f \in \gamma^{\beta}$, entail: $$\top \vdash_{\mathbf{y}_{\emptyset}} \phi_{\emptyset} \to \exists_{\alpha < \gamma} (\cup_{f \in \gamma^{\gamma}}\mathbf{x}_{f|_{\alpha+1}}) \bigvee_{f \in \gamma^{\gamma}} \bigwedge_{\beta<\gamma}\phi_{f|_\beta}$$ \\ which in turn entails: $$\top \vdash_{\mathbf{y}_{\emptyset}} \phi_{\emptyset} \to \bigvee_{f \in \gamma^{\gamma}} \exists_{\beta<\gamma}\mathbf{x}_{f|_{\beta +1}} \bigwedge_{\beta<\gamma}\phi_{f|_\beta}.$$ \\ This concludes the proof. \end{proof} \section{Infinitary first-order categorical logic} \subsection{$\kappa$-coherent categories} We begin now to study the categorical counterpart of the infinitary systems. 
We start with the following: \begin{defs} The $\kappa$-coherent fragment of $\kappa$-first-order logic is the fragment of those sequents where formulas are $\kappa$-coherent, i.e., only use $\bigwedge$, $\bigvee$, $\exists$, where the axioms and rules of $\kappa$-first-order logic are restricted to instantiations on $\kappa$-coherent formulas only, and where we add the corresponding instances of the small distributivity and the Frobenius axioms. \end{defs} The $\kappa$-coherent fragment of first-order logic, which is an extension of the usual finitary coherent fragment, has a corresponding category which we are now going to define. Following \cite{makkai}, consider a $\kappa$-chain in a category $\mathcal{C}$ with $\kappa$-limits, i.e., a diagram $\Gamma: \gamma^{op} \to \mathcal{C}$ specified by morphisms $(h_{\beta, \alpha}: C_{\beta} \to C_{\alpha})_{\alpha \leq \beta<\gamma}$ such that the restriction $\Gamma|_{\beta}$ is a limit diagram for every limit ordinal $\beta$. We say that the morphisms $h_{\beta, \alpha}$ compose transfinitely, and take the limit projection $h_{\beta, 0}$ to be the transfinite composite of the $h_{\alpha+1, \alpha}$ for $\alpha<\beta$. Given a cardinal $\gamma<\kappa$, consider the tree $T=\gamma^{<\gamma}$. We will consider diagrams $F: T^{op} \to \mathcal{C}$, which determine, for each node $f \in \gamma^{\beta}$, a family of arrows in $\mathcal{C}$, $\{h_{g, f}: C_{g} \to C_{f} | g \in \gamma^{\beta+1}, g|_{\beta}=f\}$. A $\kappa$-family of morphisms with the same codomain is said to be \emph{jointly covering} if the union of the images of the morphisms is the whole codomain. We say that a diagram $F: T^{op} \to \mathcal{C}$ is \emph{proper} if each of the families $\{h_{g, f} | g|_{\beta}=f\}$ is jointly covering and, for limit $\beta$ and $f \in \gamma^{\beta}$, $h_{f, \emptyset}$ is the transfinite composite of the $h_{f|_{\alpha+1}, f|_{\alpha}}$ for $\alpha+1<\beta$.
Given a proper diagram, we say that the families $\{h_{g, f} | g|_{\beta}=f\}$, $f \in T$, compose transfinitely, and refer to the projections $\{h_{g, \emptyset} | g \in \gamma^{\gamma}\}$ as the transfinite composites of these families. If in a proper diagram the transfinite composites of the $\kappa$-families of morphisms themselves form a jointly covering family, we say that the diagram is \emph{completely proper}. \begin{defs} A $\kappa$-coherent category is a $\kappa$-complete coherent category with $\kappa$-complete subobject lattices in which unions of cardinality less than $\kappa$ are stable under pullback, and in which every proper diagram is completely proper, i.e., the transfinite composites of jointly covering $\kappa$-families of morphisms form a jointly covering family. \end{defs} The latter property, which is the categorical analogue of the transfinite transitivity rule, can be considered as an exactness property of $\ensuremath{\mathbf{Set}}$, generalizing the property in \cite{makkai}, where the families consisted of single morphisms. The transfinite transitivity property expresses that transfinite compositions of covering families (in the Grothendieck topology given by the jointly covering families of fewer than $\kappa$ morphisms) are again covering families; whence its name. It is easy to see that the property holds in \ensuremath{\mathbf{Set}}, and in fact in every presheaf category.
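To see the exactness property at work, here is a sketch (our own, under the definitions above, and not part of the cited development) of why it holds in $\ensuremath{\mathbf{Set}}$:

```latex
% Sketch: transfinite transitivity in Set (illustration only).
% In Set, a family is jointly covering iff it is jointly surjective.
Fix $x \in C_{\emptyset}$. By transfinite recursion, build a branch
$f \in \gamma^{\gamma}$ together with elements $x_{\beta} \in C_{f|_{\beta}}$
such that $h_{f|_{\beta}, \emptyset}(x_{\beta}) = x$:
\begin{itemize}
  \item at a successor stage, since $\{h_{g, f|_{\beta}} \mid g|_{\beta} = f|_{\beta}\}$
        is jointly surjective, choose $g$ and $x_{\beta+1} \in C_{g}$ with
        $h_{g, f|_{\beta}}(x_{\beta+1}) = x_{\beta}$, and extend the branch by
        $f|_{\beta+1} = g$;
  \item at a limit stage $\beta$, the compatible family $(x_{\alpha})_{\alpha < \beta}$
        determines a unique $x_{\beta} \in C_{f|_{\beta}} = \lim_{\alpha<\beta} C_{f|_{\alpha}}$.
\end{itemize}
Thus $x$ lies in the image of the transfinite composite $h_{f, \emptyset}$,
so these composites again form a jointly surjective family.
```

The same argument runs pointwise in any presheaf category, which is why the property holds there as well.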
$\kappa$-coherent categories have an internal logic, in a signature containing one sort for each object, no relation symbols and one unary function symbol for each arrow, and axiomatized by the following sequents: $$\top \vdash_x \it Id_X(x)=x$$ \\ for all objects $X$ (here $x$ is a variable of sort $X$); $$\top \vdash_x f(x)=h(g(x))$$ \\ for all triples of arrows such that $f=h \circ g$ (here $x$ is a variable whose sort is the domain of $f$); $$\top \vdash_y \exists x f(x)=y$$ \\ for all covers $f$ (here $x$ is a variable whose sort is the domain of $f$); $$\top \vdash_x \bigvee_{i<\gamma}\exists y_i m_i(y_i)=x$$ \\ whenever the sort $A$ of $x$ is the union of $\gamma$ subobjects $m_i: A_i \rightarrowtail A$ (here $y_i$ is a variable of sort $A_i$); $$\bigwedge_{i:I \to J}\overline{i}(x_I)=x_J \vdash_{\{x_I: I \in \mathbf{I}\}} \exists x \bigwedge_{I \in \mathbf{I}} \pi_I(x)=x_I$$ $$\bigwedge_{I \in \mathbf{I}}\pi_I(x)=\pi_I(y) \vdash_{x, y} x=y$$ \\ whenever there is a $\kappa$-small diagram $\Phi: \mathbf{I} \to \mathcal{C}$, $(\{C_I\}_{I \in \mathbf{I}}, \{\overline{i}: C_I \to C_J\}_{i: I \to J})$ and a limit cone $\pi: \Delta_C \Rightarrow \Phi$, $(\pi_I: C \to C_I)_{I \in \mathbf{I}}$. Here $x_I$ is a variable of type $C_I$, and $x, y$ are variables of type $C$. Functors preserving this logic, i.e., $\kappa$-coherent functors, are just coherent functors which preserve $\kappa$-limits and $\kappa$-unions of subobjects, and they can be easily seen to correspond to structures of the internal theory in a given $\kappa$-coherent category, where we use a straightforward generalization of categorical semantics, to be explained in the next section. \subsection{Categorical semantics and Kripke-Joyal forcing} Categorical model theory techniques concern the study of models in arbitrary categories besides the usual category of sets.
Unlike classical model theory, the logics one uses for this purpose formulate theories in terms of sequents; the type of theory studied depends on the type of formula one encounters in these sequents. The theories of the fragments mentioned so far all correspond to specific types of categories. We have the $\kappa$-regular categories, which are categories with $\kappa$-limits, regular epimorphism-monomorphism factorizations stable under pullback and where the transfinite composition of epimorphisms is an epimorphism. The $\kappa$-coherent categories have, in addition to this, stable $\kappa$-unions of subobjects and satisfy the property that the transfinite composition of jointly covering families is jointly covering. Finally, the $\kappa$-Heyting categories have, in addition, right adjoints to pullback functors between subobject lattices, which make it possible to interpret universal quantification. There is a categorical semantics that one can associate with each type of category and theory, which is usually defined according to some inductive clauses. Following \cite{johnstone}, D1.2, given a category $\mathcal{C}$, for each signature $\Sigma$ of a first-order language we can define the so-called $\Sigma$-structures within $\mathcal{C}$, in a way that generalizes the $\ensuremath{\mathbf{Set}}$-valued interpretations to all $\kappa$-Heyting categories: \begin{defs} A $\Sigma$-structure in $\mathcal{C}$ consists of the following data: \begin{enumerate} \item for each sort $A$ of variables in $\Sigma$ there is a corresponding object $M(A)$; \item for each $\gamma$-ary function symbol $f$ there is a morphism $M(f): M(A_1, ..., A_{\alpha}, ...)=\Pi_{i<\gamma} M(A_i) \rightarrow M(B)$; \item for each $\gamma$-ary relation symbol $R$ there is a subobject $M(R) \rightarrowtail M(A_1, ..., A_{\alpha}, ...)$, where $A_i$ are the sorts corresponding to the individual variables corresponding to $R$ (which will specify, by definition, the type of $R$).
\end{enumerate} \end{defs} The $\Sigma$-structure will serve as a setup for interpreting all formulas of the language considered. Due to the need of distinguishing the context in which the free variables of the formula occur, for the purpose of a correct interpretation, we shall adopt the notation $(\mathbf{x}, \phi)$ to represent a term/formula $\phi$ whose free variables occur within $\mathbf{x}=x_1, ...,x_{\alpha}, ...$. We now define the interpretation of such formulas by induction on their complexity: \begin{defs} Given a term in context $(\mathbf{x}, s)$ of a $\kappa$-first order theory, its interpretation $[\![\mathbf{x}, s]\!]$ within the $\kappa$-Heyting category $\mathcal{C}$ is a morphism of $\mathcal{C}$ defined in the following way: \begin{enumerate} \item If $s$ is a variable, it is necessarily some $x_i$, and then the corresponding morphism is $[\![\mathbf{x}, x_i]\!]=\pi_i: M(A_0, ..., A_{\alpha}, ...) \rightarrow M(A_i)$, the $i$-th product projection. \item If $s$ is a term $f(t_0, ..., t_{\alpha}, ...)$, where each term $t_{\alpha}$ is of type $C_{\alpha}$, its interpretation is the composite: \begin{displaymath} \xymatrix{ M(A_0, ..., A_{\alpha}, ...) \ar@{->}[rrr]^{([\![\mathbf{x}, t_0]\!], ..., [\![\mathbf{x}, t_{\alpha}]\!], ...)} & & & M(C_0, ..., C_{\alpha}, ...) \ar@{->}[rr]^{M(f)} & & M(B)\\ } \end{displaymath} \end{enumerate} The interpretation in $\mathcal{C}$ of the formula in context $(\mathbf{x}, \phi)$, where $\mathbf{x}=x_0...x_{\alpha} ...$ and $x_i$ is a variable of sort $A_i$, is defined as a subobject $[\![\mathbf{x}, \phi]\!] \rightarrowtail M(A_0, ..., A_{\alpha}, ...)$ in the following way: \begin{enumerate} \item If $\phi$ is the formula $R(t_0, ..., t_{\alpha}, ...)$, where $R$ is a $\gamma$-ary relation symbol of type $B_0, ..., B_{\alpha}, ...$, then $[\![\mathbf{x}, \phi]\!]$ is given by the pullback: \begin{displaymath} \xymatrix{ [\![\mathbf{x}, \phi]\!] 
\ar@{->}[rrr] \ar@{ >->}[dd] & & & M(R) \ar@{ >->}[dd]\\ & & & \\ M(A_0, ..., A_{\alpha}, ...) \ar@{->}[rrr]^{([\![\mathbf{x}, t_0]\!], ..., [\![\mathbf{x}, t_{\alpha}]\!], ...)} & & & M(B_0, ..., B_{\alpha}, ...)\\ } \end{displaymath} \item If $\phi$ is the formula $s=t$ where $s, t$ are terms of sort $B$, then $[\![\mathbf{x}, \phi]\!]$ is the equalizer of the arrows: \begin{displaymath} \xymatrix{ M(A_0, ..., A_{\alpha}, ...) \ar@/^{1pc}/[rr]^{[\![\mathbf{x}, s]\!]} \ar@/_{1pc}/[rr]_{[\![\mathbf{x}, t]\!]} & & M(B)\\ } \end{displaymath} \\ Equivalently, $[\![\mathbf{x}, \phi]\!]$ is the pullback of the diagonal $M(B) \rightarrowtail M(B) \times M(B)$ along the morphism $([\![\mathbf{x}, s]\!], [\![\mathbf{x}, t]\!])$. \item If $\phi$ is the formula $\bigvee_{i<\gamma}\psi_i$, then $[\![\mathbf{x}, \phi]\!]$ is the union $\bigvee_{i<\gamma}[\![\mathbf{x}, \psi_i]\!]$ in\\ $\mathcal{S}ub(M(A_0, ..., A_{\alpha}, ...))$. If $\phi$ is the formula $\bigwedge_{i<\gamma}\psi_i$, then $[\![\mathbf{x}, \phi]\!]$ is the intersection $\bigwedge_{i<\gamma}[\![\mathbf{x}, \psi_i]\!]$ in $\mathcal{S}ub(M(A_0, ..., A_{\alpha}, ...))$. Similarly, if $\phi$ is the formula $\neg \psi$, the corresponding subobject is $\neg[\![\mathbf{x}, \psi]\!]$. \item If $\phi$ is the formula $(\exists y)\psi$, then $[\![\mathbf{x}, \phi]\!]$ is the image of the composite: \begin{displaymath} \xymatrix{ [\![\mathbf{x}y, \psi]\!] \ar@{ >->}[r] & M(A_0, ..., A_{\alpha}, ..., B) \ar@{->}[r]^{\pi} & M(A_0, ..., A_{\alpha}, ...)\\ } \end{displaymath} \\ where $\pi$ is the projection to the first $\gamma$ coordinates. Equivalently, this amounts to applying the left adjoint $\exists_{\pi}$ to the pullback functor\\ $\pi^{-1}: \mathcal{S}ub(M(A_0, ..., A_{\alpha}, ...)) \to \mathcal{S}ub(M(A_0, ..., A_{\alpha}, ..., B))$.
\item If $\phi$ is the formula $(\forall y)\psi$, then $[\![\mathbf{x}, \phi]\!]$ can be obtained by applying to $[\![\mathbf{x}y, \psi]\!]$ the right adjoint $\forall_{\pi}$ to the pullback functor\\ $\pi^{-1}: \mathcal{S}ub(M(A_0, ..., A_{\alpha}, ...)) \to \mathcal{S}ub(M(A_0, ..., A_{\alpha}, ..., B))$, where $\pi$ is the projection to the first $\gamma$ coordinates. Implication can be seen as a particular case of this right adjoint, by considering in $\mathcal{S}ub(M(A_0, ..., A_{\alpha}, ...))$ the pullback functor $\phi \wedge -: \mathcal{S}ub(M(A_0, ..., A_{\alpha}, ...)) \to \mathcal{S}ub(M(A_0, ..., A_{\alpha}, ...))$. \end{enumerate} \end{defs} Functors between the appropriate categories preserving the corresponding structure correspond to models in the codomain category of the internal theory of the domain category. Such functors are called conservative if they reflect isomorphisms, and hence they also reflect the validity of formulas in the corresponding models. One then has: \begin{lemma}\label{soundness} $\kappa$-coherent logic is sound with respect to models in $\kappa$-coherent categories. \end{lemma} \begin{proof} This is straightforward for all axioms and rules, except for the rule of transfinite transitivity. But here the proof is the natural generalization of that of the soundness of dependent choice, presented in \cite{makkai} for $\kappa$-regular logic. Let $S_{\mathbf{y}_{f}}$ be the product of the sorts assigned to the variables in $\mathbf{y}_{f}$ in the structure within a $\kappa$-coherent category, and assume that the premises of the transfinite transitivity rule hold there. We must show that the conclusion holds.
We can also assume, without loss of generality, that: $$\phi_{g} \vdash_{\mathbf{y}_{g}} \phi_{f}$$ \\ for each $g \in \gamma^{\beta+1}, g|_{\beta}=f$; otherwise we can take, for each $f \in \gamma^{\beta}$: $$\psi_{f}=\bigwedge_{\alpha \leq \beta} \phi_{f|_{\alpha}}$$ \\ which, using the small distributivity law as well as the Frobenius axiom, can be seen to satisfy the premises of the rule as well; both this form of distributivity and the Frobenius axiom hold in any $\kappa$-coherent category because $\kappa$-unions and covers are stable under pullback. Let $m_{f}: C_{f} \to S_{\mathbf{y}_{f}}$ be the monomorphism representing the subobject $[\mathbf{y}_{f}, \phi_{f}]$. The assumption we have provides arrows: $$h_{g, f}: C_{g} \to C_{f}$$ \\ for $g \in \gamma^{\beta+1}$, $g|_{\beta}=f$, and by interpreting the premises of the rule it follows that the arrows: $$\{h_{g, f} | g \in \gamma^{\beta+1}, g|_{\beta}=f\}$$ \\ form a jointly covering family. For a fixed $f \in \gamma^{\gamma}$ and limit $\beta$, the limit of the diagram formed by the $C_{f|_{\alpha}}$ for $\alpha<\beta$ is given by the intersection in the subobject lattice of $S_{\mathbf{y}_{f|_{\beta}}}$ of the pullbacks of each $m_{f|_{\alpha}}$ along the projections $\pi_{f|_{\beta}, f|_{\alpha}}: S_{\mathbf{y}_{f|_{\beta}}} \to S_{\mathbf{y}_{f|_{\alpha}}}$. This intersection is in turn given by the subobject: $$C_{f|_{\beta}}=[\mathbf{y}_{f|_{\beta}}, \bigwedge_{\alpha<\beta}\phi_{f|_{\alpha}}] \rightarrowtail S_{\mathbf{y}_{f|_{\beta}}}$$ \\ By the transfinite transitivity property of the $\kappa$-coherent category, the arrows $C_{f|_{\beta}} \to C_{\emptyset}$ for $f \in \gamma^{\gamma}$ form a jointly covering family whenever $\beta$ is a limit ordinal, and the interpretation of the conclusion of the rule is precisely this statement for the case $\beta=\gamma$. This proves the soundness of the rule. \end{proof} One particular case of categorical semantics is given by the so-called Kripke-Joyal semantics in a topos.
A sheaf topos is in particular a Heyting category, and so it will be $\kappa$-Heyting precisely when it satisfies the transfinite transitivity property. The verification of the Heyting properties of a sheaf topos presents no major difficulties, but it is instructive to point out how the connectives and quantifiers are interpreted. Following \cite{maclane-moerdijk}, section III.8, given a sheaf topos $\mathcal{S}h(\mathcal{C}, \tau)$ and some subsheaves $\{A, B, A_i: i \in I\}$ in the lattice of subobjects of a sheaf $E$, we have: \begin{enumerate} \item The subsheaf $0$, initial in $\mathcal{S}ub(E)$, is determined by the property $x \in 0(C) \Leftrightarrow \emptyset$ is a cover of $C$ and $x \in E(C)$. \item The subsheaf $\bigwedge_{i \in I} A_i$ is determined by the property $\bigwedge_{i \in I} A_i(C)= \bigcap_{i \in I} A_i(C)$. \item The subsheaf $\bigvee_{i \in I} A_i$ is determined by the property\footnote{For $e \in E(C)$ and $f: D \to C$, we denote by $e.f \in E(D)$ the element $E(f)(e)$.} $e \in \bigvee_{i \in I} A_i(C) \Leftrightarrow \{f: D \to C/e.f \in \bigcup_{i \in I}A_i(D)\}$ is a cover of $C$. \item The subsheaf $A \rightarrow B$ is determined by the property $e \in (A \rightarrow B)(C) \Leftrightarrow$ for all $f: D \to C$, $e.f \in A(D)$ implies $e.f \in B(D)$. \item Given a morphism $\phi: E \to F$ inducing the pullback functor between subsheaves $\phi^{-1}: \mathcal{S}ub(F) \to \mathcal{S}ub(E)$, the action of its left adjoint $\exists_{\phi}$ is determined by the property $y \in \exists_{\phi}(A)(C) \Leftrightarrow \{f: D \to C/\exists a \in A(D): \phi_D(a)=y.f\}$ is a cover of $C$. \item Given a morphism $\phi: E \to F$ inducing the pullback functor between subsheaves $\phi^{-1}: \mathcal{S}ub(F) \to \mathcal{S}ub(E)$, the action of its right adjoint $\forall_{\phi}$ is determined by the property $y \in \forall_{\phi}(A)(C) \Leftrightarrow$ for all $f: D \to C$, $\phi_D^{-1}(y.f) \subseteq A(D)$.
\end{enumerate} If we have a model of a theory in a sheaf topos and $C$ is now any object from $\mathcal{C}$, the forcing relation $C \Vdash \phi(\boldsymbol\alpha)$ for $\boldsymbol\alpha: [-, C] \to [\![\mathbf{x}, \top]\!]$ holds by definition if $\boldsymbol\alpha$ factors through the subobject $[\![\mathbf{x}, \phi(\mathbf{x})]\!] \rightarrowtail [\![\mathbf{x}, \top]\!]$. From the previous clauses, one can see that the following relations hold: \begin{enumerate} \item $C \Vdash \bot \Leftrightarrow$ $\emptyset$ is a cover of $C$. \item $C \Vdash \bigwedge_{i \in I} \phi_i(\boldsymbol\alpha) \Leftrightarrow C \Vdash \phi_i(\boldsymbol\alpha)$ for every $i \in I$. \item $C \Vdash \bigvee_{i \in I} \phi_i(\boldsymbol\alpha) \Leftrightarrow $ there is a cover $\{f_j: C_j \to C\}_{j \in J}$ such that for every $j \in J$, $C_j \Vdash \phi_{i_j}(\boldsymbol\alpha f_j)$ for some $i_j \in I$ (we say that the covering family witnesses the forcing clause). \item $C \Vdash \phi(\boldsymbol\alpha) \to \psi(\boldsymbol\alpha) \Leftrightarrow $ for all $f: D \to C$, $D \Vdash \phi(\boldsymbol\alpha f)$ implies $D \Vdash \psi(\boldsymbol\alpha f)$. \item $C \Vdash \exists \mathbf{x}\phi(\mathbf{x}, \boldsymbol\alpha) \Leftrightarrow $ there is a cover $\{f_j: C_j \to C\}_{j \in J}$ such that for every $j \in J$, $C_j \Vdash \phi(\mathbf{c_j}, \boldsymbol\alpha f_j)$ for some $\mathbf{c_j}: C_{j} \to [\![\mathbf{x}, \top]\!]$ (we say that the covering family witnesses the forcing clause). \item $C \Vdash \forall \mathbf{x}\phi(\mathbf{x}, \boldsymbol\alpha) \Leftrightarrow $ for all $f: D \to C$ and $\mathbf{c}: D \to [\![\mathbf{x}, \top]\!]$ we have $D \Vdash \phi(\mathbf{c}, \boldsymbol\alpha f)$. \end{enumerate} (These last two clauses are obtained from the case in which the morphism $\phi$ is the projection between the products of corresponding sorts). There are two important properties of this forcing notion. 
First, it is clear that if $C \Vdash \phi(\boldsymbol\alpha)$ and $f: D \to C$, then $D \Vdash \phi(\boldsymbol\alpha f)$. The second property states that if $\{f_i: C_i \to C\}_{i \in I}$ is a cover of $C$ and $C_i \Vdash \phi(\boldsymbol\alpha f_i)$ for all $i \in I$, then also $C \Vdash \phi(\boldsymbol\alpha)$. This is simply a reformulation of the glueing condition for the subsheaf $[\![\mathbf{x}, \phi(\mathbf{x})]\!] \rightarrowtail [\![\mathbf{x}, \top]\!]$. The Kripke-Joyal forcing and its properties will become useful in our proof of completeness of infinitary logics. \subsection{Syntactic categories} The development of syntactic categories for infinite quantifier logics follows precisely the same pattern as the finitary case, except that instead of finite contexts for the objects of the syntactic category of a theory over such logic, we allow arbitrary sets of variables of cardinality less than $\kappa$, following, e.g., \cite{makkai}. Given a $\kappa$-coherent theory $\ensuremath{\mathbb{T}}$, we explain how to define, following \cite{johnstone}, D1.4, its syntactic category $\mathcal{C}_{\ensuremath{\mathbb{T}}}$ and a categorical model $M_{\ensuremath{\mathbb{T}}}$ inside it, in such a way that a formula in $\ensuremath{\mathbb{T}}$ will be provable if and only if its interpretation in $\mathcal{C}_{\ensuremath{\mathbb{T}}}$ is satisfied by the model $M_{\ensuremath{\mathbb{T}}}$. Formulas shall be considered in suitable contexts, which are (possibly empty) subsets of variables of cardinality less than $\kappa$ containing the free variables of the formula. We will say that two formulas in context $(\mathbf{x}, \phi), (\mathbf{y}, \psi)$ are $\alpha$-equivalent if the second has been obtained from the first after renaming the bound variables of $\phi$ and the variables in the context (some of them appearing as free variables in $\phi$).
We take the objects of $\mathcal{C}_{\ensuremath{\mathbb{T}}}$ to be the $\alpha$-equivalence classes of formulas $(\mathbf{x}, \phi)$. To describe the morphisms, consider two objects $[\mathbf{x}, \phi], [\mathbf{y}, \psi]$, and assume, without loss of generality, that their sets of variables $\mathbf{x}, \mathbf{y}$ are disjoint. Consider now a formula $\theta$ that satisfies the following conditions:\\ \\ a) Its free variables are amongst $\mathbf{x}\mathbf{y}$.\\ b) The following sequents are provable in $\ensuremath{\mathbb{T}}$:\\ $$ \theta(\mathbf{x}, \mathbf{y}) \vdash_{\mathbf{x}\mathbf{y}} \phi(\mathbf{x}) \wedge \psi(\mathbf{y})$$ $$ \phi(\mathbf{x}) \vdash_{\mathbf{x}} \exists \mathbf{y} (\theta(\mathbf{x},\mathbf{y}))$$ $$ \theta(\mathbf{x}, \mathbf{y}) \wedge \theta(\mathbf{x}, \mathbf{z}/\mathbf{y}) \vdash_{\mathbf{x}\mathbf{y}\mathbf{z}} (\mathbf{y}=\mathbf{z})$$ \\ Define now the morphisms from $[\mathbf{x}, \phi]$ to $[\mathbf{y}, \psi]$ to be the provable-equivalence classes of those formulas $\theta$ of $\ensuremath{\mathbb{T}}$ that satisfy conditions a) and b) above. The idea behind this definition is to allow only those morphisms that are exactly needed for our purposes. More precisely, the first sequent in condition b) restricts the interpretation $[\![\theta(\mathbf{x}, \mathbf{y})]\!]$ in any model to be a subobject of $[\![\phi(\mathbf{x}) \wedge \psi(\mathbf{y})]\!]$, while the last two sequents imply, if the category has finite limits, that it will be the graph of a morphism from $[\![\phi(\mathbf{x})]\!]$ to $[\![\psi(\mathbf{y})]\!]$. Because of the particular construction of the category $\mathcal{C}_{\ensuremath{\mathbb{T}}}$, this says exactly that the class $[\mathbf{x} \mathbf{y}, \theta(\mathbf{x}, \mathbf{y})]$ is a morphism from $[\mathbf{x}, \phi(\mathbf{x})]$ to $[\mathbf{y}, \psi(\mathbf{y})]$.
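A simple supply of such morphisms (our illustration, with $t$ a hypothetical term of the sort of $\mathbf{y}$): whenever $\ensuremath{\mathbb{T}}$ proves $\phi(\mathbf{x}) \vdash_{\mathbf{x}} \psi(t(\mathbf{x}))$, the graph of $t$ defines a morphism.

```latex
% Graph of a term t (hypothetical instance, for illustration):
\[
  \theta(\mathbf{x}, \mathbf{y}) \;:\equiv\; \phi(\mathbf{x}) \wedge (\mathbf{y} = t(\mathbf{x})).
\]
% Conditions b) hold: the first sequent follows from the assumed
% provability of  phi(x) |- psi(t(x));  the second is witnessed by
% y := t(x);  and the third holds because  y = t(x)  and  z = t(x)
% together entail  y = z.
```

In any model, the interpretation of this $\theta$ is literally the graph of the interpreted term, in line with the remark above.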
The composite of two morphisms: \begin{displaymath} \xymatrix{ [\mathbf{x}, \phi] \ar@{->}[r]^{[\mathbf{x}\mathbf{y},\theta]} & [\mathbf{y}, \psi] \ar@{->}[r]^{[\mathbf{y}\mathbf{z},\delta]} & [\mathbf{z}, \eta]\\ } \end{displaymath} \\ is defined to be the class $[\mathbf{x}\mathbf{z}, \exists \mathbf{y} (\theta \wedge \delta)]$. It can be verified that this definition does not depend on the choice of representatives $\theta, \delta$ and that the morphism so defined satisfies conditions a) and b) above. It can also be verified that composition of morphisms is associative. Finally, the identity morphism on an object $[\mathbf{x}, \phi]$ can be defined to be the arrow: \begin{displaymath} \xymatrix{ [\mathbf{x}, \phi] \ar@{->}[rrr]^{[\mathbf{x} \mathbf{y}, \phi(\mathbf{x}) \wedge (\mathbf{x}=\mathbf{y})]} & & & [\mathbf{y}, \phi(\mathbf{y}/\mathbf{x})]\\ } \end{displaymath} \\ Again, it is easily checked that this morphism satisfies conditions a) and b) and that it is the identity for composition. Also, note that these definitions do not depend on the choices of representatives in each class. This makes $\mathcal{C}_{\ensuremath{\mathbb{T}}}$ a small category. Our goal is to relate syntactical provability in $\ensuremath{\mathbb{T}}$ with semantic validity in the categorical model $M_{\ensuremath{\mathbb{T}}}$ to be defined.
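As a quick sanity check on the composition formula (our illustration, with $t$ and $s$ hypothetical terms of the appropriate sorts): if $\theta$ is $\phi(\mathbf{x}) \wedge (\mathbf{y} = t(\mathbf{x}))$ and $\delta$ is $\psi(\mathbf{y}) \wedge (\mathbf{z} = s(\mathbf{y}))$, i.e., graphs of terms, the composite class reduces to a substitution:

```latex
% Composing graphs of terms amounts to substitution (illustration):
\[
  \exists \mathbf{y}\, (\theta \wedge \delta)
  \;\dashv\vdash_{\mathbf{x}\mathbf{z}}\;
  \phi(\mathbf{x}) \wedge (\mathbf{z} = s(t(\mathbf{x}))),
\]
% since the existential quantifier is eliminated by the unique
% witness  y = t(x).
```

So composition in the syntactic category behaves as expected on term-induced morphisms.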
One aspect of this relation is given by the following lemma, which highlights the syntactical properties of $\mathcal{C}_{\ensuremath{\mathbb{T}}}$: \begin{lemma}\label{lemmap1} 1) A morphism $[\mathbf{x}\mathbf{y}, \theta]: [\mathbf{x}, \phi] \rightarrow [\mathbf{y}, \psi]$ is an isomorphism if and only if $[\mathbf{y}\mathbf{x}, \theta]: [\mathbf{y}, \psi] \rightarrow [\mathbf{x}, \phi]$ is a valid morphism in $\mathcal{C}_{\ensuremath{\mathbb{T}}}$ (i.e., it satisfies conditions a) and b) of the definition of morphism).\\ 2) A morphism $[\mathbf{x}\mathbf{y}, \theta]: [\mathbf{x}, \phi] \rightarrow [\mathbf{y}, \psi]$ is a monomorphism if and only if the sequent $ \theta(\mathbf{x}, \mathbf{y}) \wedge \theta(\mathbf{z}, \mathbf{y}) \vdash_{\mathbf{x}\mathbf{y}\mathbf{z}} \mathbf{x}=\mathbf{z}$ is provable in $\ensuremath{\mathbb{T}}$.\\ 3) Every subobject of $[\mathbf{y}, \phi]$ is isomorphic to one of the form: \begin{displaymath} \xymatrix{ [\mathbf{x}, \psi] \ar@{ >->}[rr]^{[\psi \wedge (\mathbf{x}=\mathbf{y})]} & & [\mathbf{y}, \phi]\\ } \end{displaymath} \\ where $\psi$ is such that the sequent $\psi(\mathbf{y}) \vdash_{\mathbf{y}} \phi(\mathbf{y})$ is provable in $\ensuremath{\mathbb{T}}$. Moreover, any two subobjects $[\mathbf{y}, \psi], [\mathbf{y}, \eta]$ in $\mathcal{S}ub ([\mathbf{y}, \phi])$ satisfy $[\mathbf{y}, \psi] \leq [\mathbf{y}, \eta]$ if and only if the sequent $\psi(\mathbf{y}) \vdash_{\mathbf{y}} \eta(\mathbf{y})$ is provable in $\ensuremath{\mathbb{T}}$.\end{lemma} \begin{proof} To prove 1), suppose $[\mathbf{y}\mathbf{x}, \theta]$ is a valid morphism from $[\mathbf{y}, \psi]$ to $[\mathbf{x}, \phi]$. Then it can be easily checked that $[\mathbf{y}\mathbf{x}, \theta]$ itself is an inverse for $[\mathbf{x}\mathbf{y}, \theta]$.
Conversely, if $[\mathbf{x}\mathbf{y}, \theta]: [\mathbf{x}, \phi] \rightarrow [\mathbf{y}, \psi]$ has an inverse $[\mathbf{y}\mathbf{x}, \delta]$ (which is a valid morphism), then it can be verified that $\theta$ and $\delta$ are necessarily provably equivalent in $\ensuremath{\mathbb{T}}$, from which the result follows.\\ To prove 2), construct the kernel pair of $[\mathbf{x}\mathbf{y}, \theta]: [\mathbf{x}, \phi] \rightarrow [\mathbf{y}, \psi]$, which, using the construction of products and equalizers, can be verified to be the class $[\mathbf{x}\mathbf{z}, \exists \mathbf{y} (\theta(\mathbf{x}, \mathbf{y}) \wedge \theta(\mathbf{z}, \mathbf{y}))]$. Then, as can be easily checked, the provability of the stated sequent is equivalent, by 1), to the fact that the diagonal morphism from $[\mathbf{x}, \phi]$ to this kernel pair is an isomorphism, which is in turn equivalent to the fact that $[\mathbf{x}\mathbf{y}, \theta]$ is a monomorphism.\\ Finally, suppose we have a monomorphism $[\mathbf{x}\mathbf{y}, \theta]: [\mathbf{x}, \psi] \rightarrowtail [\mathbf{y}, \phi]$. By 1), the morphism $[\mathbf{x}\mathbf{y}, \theta]: [\mathbf{x}, \psi] \rightarrow [\mathbf{y}, \exists \mathbf{x} \theta(\mathbf{x}, \mathbf{y})]$ is an isomorphism. Then, composing its inverse with the original monomorphism we have a subobject of the stated form, where $\psi(\mathbf{y})$ is the formula $\exists \mathbf{x} \theta(\mathbf{x}, \mathbf{y})$. Now, two subobjects $[\mathbf{y}, \psi], [\mathbf{y}, \eta]$ of $[\mathbf{y}, \phi]$ satisfy $[\mathbf{y}, \psi] \leq [\mathbf{y}, \eta]$ if and only if there exists a monomorphism $[\mathbf{y}, \psi] \rightarrowtail [\mathbf{y}, \eta]$, which by the previous argument must have the form $[\psi' \wedge (\mathbf{x}=\mathbf{y})]: [\mathbf{x}, \psi'] \rightarrowtail [\mathbf{y}, \eta]$ for some $\psi'$.
But then, since $\psi$ and $\psi'$ must be provably equivalent, this is a valid morphism if and only if the sequent $\psi(\mathbf{y}) \vdash_{\mathbf{y}} \eta(\mathbf{y})$ is provable in $\ensuremath{\mathbb{T}}$. This completes the proof of 3).\end{proof} To construct the desired model $M_{\ensuremath{\mathbb{T}}}$ in the syntactic category of $\ensuremath{\mathbb{T}}$, note that there is a natural $\Sigma$-structure assigning to the sort $A$ the object $[x, \top]$, where $x$ is a variable of sort $A$, and to the relation symbols $R$ over variables $\mathbf{x}=x_1, ..., x_{\alpha}, ...$ of sorts $A_1, ..., A_{\alpha}, ...$ respectively, the subobject $[\mathbf{x}, R(x_1, ..., x_{\alpha}, ...)] \rightarrowtail [\mathbf{x}, \top]$. We have now arrived at the important relationship between syntactic provability and semantic validity in $M_{\ensuremath{\mathbb{T}}}$: \begin{proposition}\label{thmp2} The sequent $\phi(\mathbf{x}) \vdash_{\mathbf{x}} \psi(\mathbf{x})$ is satisfied by the $\Sigma$-structure $M_{\ensuremath{\mathbb{T}}}$ if and only if it is provable in $\ensuremath{\mathbb{T}}$. Consequently, a formula $\eta(\mathbf{x})$ has full extension in $M_{\ensuremath{\mathbb{T}}}$ if and only if it is provable in $\ensuremath{\mathbb{T}}$. \end{proposition} \begin{proof} By definition, the stated sequent is satisfied by $M_{\ensuremath{\mathbb{T}}}$ if and only if the corresponding subobjects in the interpretation satisfy $[\![\mathbf{x}, \phi]\!] \leq [\![\mathbf{x}, \psi]\!]$. By the construction of $M_{\ensuremath{\mathbb{T}}}$, a straightforward induction on the complexity of $\phi$ proves that the interpretation $[\![\mathbf{x}, \phi]\!]$ is the subobject $[\mathbf{x}, \phi] \rightarrowtail [\mathbf{x}, \top]$. Therefore, the assertion $[\![\mathbf{x}, \phi]\!]
\leq [\![\mathbf{x}, \psi]\!]$ is equivalent to the fact that the two subobjects $[\mathbf{x}, \phi], [\mathbf{x}, \psi]$ of $[\mathbf{x}, \top]$ satisfy $[\mathbf{x}, \phi] \leq [\mathbf{x}, \psi]$, which, by Lemma \ref{lemmap1} 3), is in turn equivalent to the fact that $\phi(\mathbf{x}) \vdash_{\mathbf{x}} \psi(\mathbf{x})$ is provable in $\ensuremath{\mathbb{T}}$.\end{proof} Proposition \ref{thmp2} expresses, in a sense, that the model $M_{\ensuremath{\mathbb{T}}}$ reflects all syntactical relations in the theory $\ensuremath{\mathbb{T}}$; therefore, the analysis of categorical properties of $M_{\ensuremath{\mathbb{T}}}$ will reveal facts about provability in $\ensuremath{\mathbb{T}}$. We now have: \begin{proposition}\label{catcomp} If $\ensuremath{\mathbb{T}}$ is a $\kappa$-coherent (resp. $\kappa$-Heyting) theory, then $\mathcal{C}_{\ensuremath{\mathbb{T}}}$ is a $\kappa$-coherent (resp. $\kappa$-Heyting) category. \end{proposition} \begin{proof} To prove that $\mathcal{C}_{\ensuremath{\mathbb{T}}}$ has $\kappa$-limits it suffices to prove it has $\kappa$-products and equalizers. 
As the product of $\gamma$-many objects $[\mathbf{x_i}, \phi_i]_{i<\gamma}$ (where the $\mathbf{x_i}$ are assumed to be disjoint) we can take the class $[\bigcup_{i<\gamma}\mathbf{x_i}, \bigwedge_{i<\gamma} \phi_i]$ together with the projections indicated below: \begin{displaymath} \xymatrix{ [\mathbf{z}, \chi] \ar@{-->}[dd]_{[\mathbf{z} \cup (\bigcup_{i<\gamma}\mathbf{x_i}), \bigwedge_{j<\gamma} \theta_j]} \ar@{->}[ddrrrrrr]^{[\mathbf{z}\mathbf{x_j'}, \theta_j]} & & & & & &\\ & & & & & & \\ [\bigcup_{i<\gamma}\mathbf{x_i}, \bigwedge_{i<\gamma} \phi_i] \ar@{->}[rrrrrr]_{[\bigcup_{i<\gamma}\mathbf{x_i}\mathbf{x_j'}, \bigwedge_{i<\gamma} \phi_i \wedge (\mathbf{x_j'}=\mathbf{x_j})]} & & & & & & [\mathbf{x_j'}, \phi_j]\\ } \end{displaymath} \\ Given morphisms $[\mathbf{z}\mathbf{x_j'}, \theta_j]$, the induced morphism into the product is given by the class $[\mathbf{z}\cup (\bigcup_{i<\gamma}\mathbf{x_i}), \bigwedge_{j<\gamma} \theta_j]$, since it can be easily verified that this is the only morphism that makes the diagram commute. For the equalizer of a parallel pair of morphisms $[\mathbf{x}\mathbf{y}, \theta], [\mathbf{x}\mathbf{y}, \delta]$, we take: \begin{displaymath} \xymatrix{ [\mathbf{x'}, \exists \mathbf{y} (\theta(\mathbf{x'}, \mathbf{y}) \wedge \delta(\mathbf{x'}, \mathbf{y}))] \ar@{->}[rrrrr]^{[\mathbf{x'}\mathbf{x}, \exists \mathbf{y} (\theta \wedge \delta \wedge (\mathbf{x'}=\mathbf{x}))]} & & & & & [\mathbf{x}, \phi] \ar@/^{1pc}/[rr]^{[\mathbf{x}\mathbf{y}, \theta]} \ar@/_{1pc}/[rr]_{[\mathbf{x}\mathbf{y}, \delta]} & & [\mathbf{y}, \psi]\\ & & & & & & & & \\ & & & & & & & & \\ [\mathbf{z}, \chi] \ar@{->}[uuurrrrr]_{[\mathbf{z}\mathbf{x}, \eta]} \ar@{-->}[uuu]^{[\mathbf{z}\mathbf{x'}, \eta]} & & & & & & & & \\ } \end{displaymath} \\ and the universal property is satisfied with the indicated induced morphism. This proves that $\mathcal{C}_T$ has $\kappa$-limits. 
Note as well that there is an initial object given by $[\{\}, \bot]$, and a terminal object given by $[\{\}, \top]$. To prove that the category has image factorizations, given a morphism $[\mathbf{x}\mathbf{y}, \theta]: [\mathbf{x}, \phi] \rightarrow [\mathbf{y}, \psi]$ we take its image as the subobject $[\mathbf{y}, \exists \mathbf{x} (\theta)] \rightarrowtail [\mathbf{y}, \psi]$. In particular, $[\mathbf{x}\mathbf{y}, \theta]$ is a cover if and only if the sequent $\psi(\mathbf{y}) \vdash_{\mathbf{y}} \exists \mathbf{x} \theta(\mathbf{x}, \mathbf{y})$ is provable in $\ensuremath{\mathbb{T}}$. Then, from the construction of limits above, it can be verified straightforwardly using the Frobenius axiom that covers are stable under pullbacks. To prove that the category has unions, take subobjects $[\mathbf{x}, \phi_i]_{i<\gamma}$ of $[\mathbf{x}, \phi]$ and define their union to be $[\mathbf{x}, \bigvee_{i<\gamma} \phi_i]$; the small distributivity law then ensures that unions are stable under pullback. The validity of the transfinite transitivity property is proven using the transfinite transitivity rule, the construction of limits and the fact that $([\mathbf{x}_i\mathbf{y}, \theta_i]: [\mathbf{x}_i, \phi_i] \rightarrow [\mathbf{y}, \psi])_{i<\gamma}$ are jointly covering if and only if the sequent $\psi(\mathbf{y}) \vdash_{\mathbf{y}} \bigvee_{i<\gamma} \exists \mathbf{x}_i \theta_i(\mathbf{x}_i, \mathbf{y})$ is provable in $\ensuremath{\mathbb{T}}$. For this, it is necessary to compute the limit of a $\kappa$-chain $([\mathbf{x_{\alpha+1}}\mathbf{x_{\alpha}}, \theta_{\alpha}]: [\mathbf{x_{\alpha+1}}, \phi_{\alpha+1}] \to [\mathbf{x_{\alpha}}, \phi_{\alpha}])_{\alpha<\gamma}$. 
This can be computed using the construction of limits with products and equalizers; in this case, one can verify that the limit of such a chain reduces to computing the equalizer of the following diagram: \begin{displaymath} \xymatrix{ [\mathbf{x}, \bigwedge_{\alpha<\gamma} \phi_{\alpha}] \ar@/^{1pc}/[rrrrrrrr]^{[\mathbf{x}\mathbf{x}', \bigwedge_{\alpha<\gamma} \phi_{\alpha} \wedge \bigwedge_{\alpha<\gamma} x_{\alpha}=x_{\alpha}']} \ar@/_{1pc}/[rrrrrrrr]_{[\mathbf{x}\mathbf{x}', \bigwedge_{\alpha<\gamma} \phi_{\alpha} \wedge \bigwedge_{\alpha<\gamma} \exists y_{\alpha+1} (\theta_{\alpha}(y_{\alpha+1}, x_{\alpha}') \wedge y_{\alpha+1}=x_{\alpha+1})]} & & & & & & & & [\mathbf{x'}, \bigwedge_{\alpha<\gamma} \phi_{\alpha}]\\ } \end{displaymath} \noindent where $\mathbf{x}=x_0, ..., x_{\alpha}, ...$. From this construction and the construction of equalizers in the syntactic category we can derive the sequent expressing that the transfinite composition of jointly covering families is jointly covering, and verify that the sequent is provable within the theory making use of the transfinite transitivity rule. Finally, if the theory is $\kappa$-Heyting, to construct universal quantification along a morphism $[\mathbf{x}\mathbf{y}, \theta]: [\mathbf{x}, \phi] \rightarrow [\mathbf{y}, \psi]$, take a subobject $[\mathbf{x}, \eta]$ of its domain, in the canonical form given in Lemma \ref{lemmap1} 3). Then define $\forall_{[\mathbf{x}\mathbf{y}, \theta]}([\mathbf{x}, \eta])$ to be the subobject $[\mathbf{y}, \psi \wedge \forall \mathbf{x} (\theta \to \eta)]$ of $[\mathbf{y}, \psi]$. It follows from Lemma \ref{lemmap1} 3) that this works. 
This concludes the proof.\end{proof} Due to the need to distinguish several strengths of completeness, we introduce now the following terminology: \begin{defs} A theory in a fragment of $\kappa$-first-order logic is said to be complete with respect to models of a certain type if whenever a sequent is valid in those models, it is provable from the axioms of the theory. \end{defs} With this definition, we now get: \begin{thm}\label{cc} If $\kappa$ is any inaccessible cardinal, $\kappa$-coherent (resp. $\kappa$-Heyting) theories are semantically complete with respect to models in $\kappa$-coherent (resp. $\kappa$-Heyting) categories. \end{thm} \subsection{Joyal's theorem and Morleyization} The construction of the syntactic category is an aspect of the philosophy of theories as categories, which is supplemented by the concept of internal theory of a given category and the functorial semantics associated with it. For, say, a coherent category \synt{C}\ there is a canonical signature and coherent axioms associated to the category in such a way that coherent models of this theory correspond to coherent functors having the category as a domain. That is, functors which preserve the categorical structure are seen as models, in the codomain category, of the internal theory of the domain category. Moreover, model homomorphisms correspond in this view to natural transformations of functors. This allows us to think, for example, of the category $\cat{M}$ of set-valued coherent models of a theory as a category of functors from the syntactic category of the theory to the category \ensuremath{\mathbf{Set}}\ of sets. Consider now the further functor category $\ensuremath{\mathbf{Set}}^{\cat{M}}$. To each coherent formula in context we can assign its extension in each of the models of $\cat{M}$, or equivalently, evaluate the models, seen as functors, on the corresponding object represented by the formula. 
This assignment is in fact functorial, and thus each coherent formula in context gives rise to a functor in $\ensuremath{\mathbf{Set}}^{\cat{M}}$, which we call the evaluation functor at the corresponding formula. If we do this for every coherent formula in context, the assignment of evaluation functors at formulas is itself functorial, and gives rise to a functor $ev :\ \synt{C}{\ensuremath{\mathbb{T}}}\to \ensuremath{\mathbf{Set}}^{\cat{M}}$. In its original version, Joyal's theorem is a statement over ZFC which could be described as follows: \begin{thm}\label{joyal}(Joyal) Let \ensuremath{\mathbb{T}}\ be a coherent theory and let $\cat{M}$ be the category of coherent models of $\ensuremath{\mathbb{T}}$. Then the functor \[ev :\ \synt{C}{\ensuremath{\mathbb{T}}}\to \ensuremath{\mathbf{Set}}^{\cat{M}}\] is conservative and preserves any right adjoint to pullback functors that might exist in $\synt{C}{\ensuremath{\mathbb{T}}}$. \end{thm} For the proof of Joyal's theorem we refer to \cite{mr}, Ch. 6, p. 189, since we will later study and prove an infinitary version. The significance of the theorem lies in the fact that it encapsulates three different completeness theorems. The conservativity of $ev$ is a categorical way of saying that models in $\cat{M}$ are semantically complete for coherent logic. In the particular case in which the logic is classical, this is precisely G\"odel's completeness theorem for first-order logic. But even when we consider intuitionistic logic, the preservation of the right adjoint entails that $ev$ preserves the first-order structure of $\synt{C}{\ensuremath{\mathbb{T}}}$, and through categorical semantics in the presheaf category $\ensuremath{\mathbf{Set}}^{\cat{M}}$ we can see that the conservative embedding provides a universal Kripke model of the theory, thus yielding the Kripke completeness theorem for first-order intuitionistic logic. 
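Spelled out, the evaluation functor just described acts as follows (a summary sketch using only the notions already introduced; the restriction notation for $h$ is ours):
\begin{displaymath}
% On an object [x, phi] of the syntactic category, ev gives the functor
% M |--> [[x, phi]]_M; a model homomorphism h : M -> N is sent to the
% restriction of h, which is well defined since homomorphisms of models
% preserve the extensions of coherent formulas:
ev([\mathbf{x}, \phi])(M) = [\![\mathbf{x}, \phi]\!]_M, \qquad
ev([\mathbf{x}, \phi])(h \colon M \to N) =
h\big|_{[\![\mathbf{x}, \phi]\!]_M} \colon
[\![\mathbf{x}, \phi]\!]_M \to [\![\mathbf{x}, \phi]\!]_N.
\end{displaymath}
Functoriality in the formula sends a morphism $[\mathbf{x}\mathbf{y}, \theta]$ to the natural transformation whose component at $M$ is the function defined by the interpretation of $\theta$ in $M$.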
We shall go some steps further and consider variations that provide new completeness theorems in the infinitary case. More specifically, the work in Proposition \ref{scohcomp} and Theorem \ref{bt} will allow us to prove the following infinitary version of Joyal's theorem: \begin{thm} Let $\kappa$ be a weakly (resp. strongly) compact cardinal; let \ensuremath{\mathbb{T}}\ be a $\kappa$-coherent theory of cardinality at most $\kappa$ (resp. of arbitrary cardinality) and let $\cat{M}$ be the category of $\kappa$-coherent models of $\ensuremath{\mathbb{T}}$ of cardinality at most $\kappa$ (resp. of arbitrary cardinality). Then the functor \[ev :\ \synt{C}{\ensuremath{\mathbb{T}}}\to \ensuremath{\mathbf{Set}}^{\cat{M}}\] is conservative and preserves any right adjoint to pullback functors that might exist in $\synt{C}{\ensuremath{\mathbb{T}}}$. \end{thm} We will see that the infinitary version of Joyal's theorem subsumes the completeness of $\kappa$-coherent logic with respect to \ensuremath{\mathbf{Set}}-valued models (the conservativity of $ev$), that of $\kappa$-first-order classical logic (the particular case when $\synt{C}{\ensuremath{\mathbb{T}}}$ is Boolean), and the completeness of $\kappa$-first-order intuitionistic logic with respect to infinitary Kripke semantics (the universal Kripke model given by the embedding into the presheaf category). We are also going to need one more fragment to work with: \begin{defs} The $\kappa$-regular fragment is the fragment of $\kappa$-coherent logic that drops the disjunction $\vee$ from the language, and hence drops the rules involving it, but keeps the rule of dependent choice. \end{defs} The internal $\kappa$-coherent theory of a, say, $\kappa$-Heyting category can alternatively be described by a different axiomatization, which will be simpler for our purposes. 
Following \cite{johnstone}, where the process of rewriting a classical first-order theory as an equivalent coherent theory is referred to as ``Morleyization'', we will also call ``Morleyizing'' a theory, in general, the process of rewriting it into a theory in a less expressive fragment. From a categorical viewpoint (as opposed to the standard syntactic point of view), the syntactic category \synt{C}{\ensuremath{\mathbb{T}}} of, for example, an intuitionistic $\kappa$-first-order theory \ensuremath{\mathbb{T}}, which is a $\kappa$-Heyting category, is also a $\kappa$-coherent (resp. $\kappa$-regular) category, and thus \synt{C}{\ensuremath{\mathbb{T}}} has an internal $\kappa$-coherent theory (resp. internal $\kappa$-regular theory), which we refer to as ``the theory of $\kappa$-coherent (resp. $\kappa$-regular) models of \ensuremath{\mathbb{T}}'' (its ``Morleyization'' $\ensuremath{\mathbb{T}}^m$). It is not difficult to see (as it will become evident from the definition) that the theory and its Morleyization have equivalent syntactic categories: \[\synt{C}{\ensuremath{\mathbb{T}}}\simeq \synt{C}{\ensuremath{\mathbb{T}}^m}\] Although for classical $\kappa$-first-order theories, the $\kappa$-coherent Morleyization will have the same models in all Boolean $\kappa$-coherent categories, in general when Morleyizing a $\kappa$-first-order theory to a $\kappa$-coherent one (or a $\kappa$-coherent theory to a $\kappa$-regular one), this is not the case, but there is still some gain in considering the category of models of the Morleyized theory, as our adaptation of Joyal's theorem will show. \begin{defs} The theory of $\kappa$-regular ((i)--(iv) below) and the theory of $\kappa$-coherent ((i)--(v)) models $\ensuremath{\mathbb{T}}^m$ of a $\kappa$-first-order (resp. 
$\kappa$-coherent) formula $\phi$ over $\Sigma$ with free variables \alg{x} the relation symbol $P_{\phi}(\alg{x})$; then $\ensuremath{\mathbb{T}}^m$ is the theory axiomatized by the following axioms: \begin{enumerate}[(i)] \item $P_{\phi}\dashv\vdash_{\alg{x}}\phi$ for every atomic formula $\phi$ \item $P_{\phi}\vdash_{\alg{x}}P_{\psi}$ for every sequent $\phi\vdash_{\alg{x}}{\psi}$ provable in \ensuremath{\mathbb{T}}; \item $P_{\bigwedge_{i<\gamma}\phi_i}\dashv\vdash_{\alg{x}}\bigwedge_{i<\gamma}P_{\phi_i}$; \item $P_{\exists{\mathbf{y}}\phi}\dashv\vdash_{\alg{x}}\exists{\mathbf{y}}P_{\phi}$; \item $P_{\bigvee_{i<\gamma}\phi_i}\dashv\vdash_{\alg{x}}\bigvee_{i<\gamma}P_{\phi_i}$. \end{enumerate} \end{defs} The theory of $\kappa$-coherent models of a positive $\kappa$-coherent\footnote{The fragment which results after discarding $\bot$.} theory is defined similarly; alternatively (since we are only discarding $\bot$) we could also treat $\bot$ as a propositional variable and add the axioms \[\bot\vdash_{\alg{x}}\phi\] for all formulas $[\mathbf{x}, \phi]$ in context. \begin{defs} We will say that a positive $\kappa$-coherent model of a $\kappa$-coherent theory is \emph{possibly exploding}, and make the convention that such a model is \emph{exploding} if it assigns $\bot$ the value true. \end{defs} Note that since $P_{\phi}\vdash_{\alg{x}}P_{\psi}$ in $\ensuremath{\mathbb{T}}^m$ if and only if $\phi\vdash_{\alg{x}}{\psi}$ in \ensuremath{\mathbb{T}}, if positive $\kappa$-coherent theories are complete for $\ensuremath{\mathbf{Set}}$-valued models, then $\kappa$-coherent theories will be complete for modified (i.e., possibly exploding) $\ensuremath{\mathbf{Set}}$-valued models. Incidentally, any model of $\ensuremath{\mathbb{T}}^m$ that assigns $\bot$ the value true must be inhabited, since $\bot\vdash \fins{x} (x=x)\in \ensuremath{\mathbb{T}}^m$. 
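To illustrate how axioms (i)--(v) transfer provability in \ensuremath{\mathbb{T}}\ down to the coherent theory $\ensuremath{\mathbb{T}}^m$, here is a small worked derivation (an illustration of the definition, not an additional axiom): modus ponens for an implication subformula becomes a coherent sequent of $\ensuremath{\mathbb{T}}^m$.
\begin{displaymath}
% The sequent  psi /\ (psi -> chi) |-_x chi  is provable in T, so (ii) gives
P_{\psi \wedge (\psi \to \chi)} \vdash_{\mathbf{x}} P_{\chi};
\qquad
% (iii), instantiated at the binary conjunction, gives
P_{\psi \wedge (\psi \to \chi)} \dashv\vdash_{\mathbf{x}} P_{\psi} \wedge P_{\psi \to \chi};
\end{displaymath}
\begin{displaymath}
% and combining the two yields the coherent sequent internalizing modus ponens:
P_{\psi} \wedge P_{\psi \to \chi} \vdash_{\mathbf{x}} P_{\chi}.
\end{displaymath}
Note that the implication itself survives only as the unanalyzed relation symbol $P_{\psi \to \chi}$; all its deductive behavior is recovered through instances of (ii) in this way.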
\subsection{Beth and Kripke models} The following is a direct generalization to the infinitary case of the Beth models (see \cite{beth}) used in intuitionistic logic, except that we allow the nodes of the underlying tree to be possibly exploding (i.e., to force $\bot$): \begin{defs}\label{bethmodelt} A Beth model for pure $\kappa$-first-order logic over $\Sigma$ is a quadruple $\mathcal{B}=(K, \leq, D, \Vdash)$, where $(K, \leq)$ is a tree of height $\kappa$ and levels of size less than $\kappa$, and with a set $B$ of branches (i.e., maximal chains in the partial order) each of size $\kappa$; $D$ is a set-valued functor on $K$, and the forcing relation $\Vdash$ is a binary relation between elements of $K$ and sentences of the language with constants from $\bigcup_{k \in K}D(k)$, defined recursively for formulas $\phi$ as follows. There is an interpretation of function and relation symbols in each $D(k)$; if $R_k \subseteq D(k)^{\lambda}$ is the interpretation in $D(k)$ of the $\lambda$-ary relation symbol $R$ in the language, we require that $k \leq l$ and $R_k(\mathbf{c})$ imply $R_l(D_{kl}(\mathbf{c}))$ for $\mathbf{c} \subseteq D_k$ (atomic facts persist along the tree), and: \begin{enumerate} \item $k \Vdash R(\mathbf{s}(\mathbf{d})) \iff \exists \alpha<\kappa \forall b \in B_k \exists l \in b, level(l)=level(k)+\alpha$ $(R_l(\mathbf{s}(D_{kl}(\mathbf{d}))))$ \item $k \Vdash \bigwedge_{i<\gamma}\phi_i(\mathbf{d}) \iff k \Vdash \phi_i(\mathbf{d}) \text{ for every } i<\gamma$ \item $k \Vdash \bigvee_{i<\gamma}\phi_i(\mathbf{d}) \iff \exists \alpha<\kappa \forall b \in B_k \exists l \in b, level(l)=level(k)+\alpha \qquad (l \Vdash \phi_i(D_{kl}(\mathbf{d})) \text{ for some } i<\gamma)$ \item $k \Vdash \phi(\mathbf{d}) \to \psi(\mathbf{d'}) \iff \forall k' \geq k (k' \Vdash \phi(D_{kk'}(\mathbf{d})) \implies k' \Vdash \psi(D_{kk'}(\mathbf{d'})))$ \item $k \Vdash \exists \mathbf{x} \phi(\mathbf{x}, \mathbf{d}) \iff \exists \alpha<\kappa \forall b \in B_k \exists l \in b, level(l)=level(k)+\alpha \qquad 
\exists \mathbf{e} \subseteq D(l) (l \Vdash \phi(\mathbf{e}, D_{kl}(\mathbf{d})))$ \item $k \Vdash \forall \mathbf{x} \phi(\mathbf{x}, \mathbf{d}) \iff \forall k' \geq k \forall \mathbf{e} \subseteq D_{k'} (k' \Vdash \phi(\mathbf{e}, D_{kk'}(\mathbf{d})))$ \end{enumerate} A Beth model for a theory \ensuremath{\mathbb{T}}\ is a Beth model for $\kappa$-first-order logic forcing all the axioms of the theory and not forcing $\bot$. \end{defs} We have now: \begin{proposition} $\kappa$-first-order logic is sound for Beth models. \end{proposition} \begin{proof} The key part of the proof is to note that the following property holds: for any $\kappa$-first-order formula $\phi(\mathbf{x})$ and any node $k$ in the Beth model, we have $k \Vdash \phi(\mathbf{c}) \iff \exists \alpha<\kappa \forall b \in B_k \exists l \in b, level(l)=level(k)+\alpha \qquad (l \Vdash \phi(D_{kl}(\mathbf{c})))$. This in turn can be easily proved by induction on the complexity of $\phi$ making use of the regularity of $\kappa$. Using now this property, it is easy to check the validity of all axioms and rules of $\kappa$-first-order logic; the regularity of $\kappa$ is used in the soundness of the transfinite transitivity rule. \end{proof} The restriction on the height and the size of the levels of the Beth model was motivated by the aim of proving soundness for full $\kappa$-first-order logic. This restriction can be relaxed if we do not intend to do so. We thus make the following definition: \begin{defs} Given a regular cardinal $\delta<\kappa$ and a set \ensuremath{\mathbb{T}}\ consisting of logical and non-logical axioms in a $\kappa$-first-order language, a partial Beth model of height $\delta$ for \ensuremath{\mathbb{T}}\ is defined like a Beth model for $\ensuremath{\mathbb{T}}$, except that the branches have size $\delta$, the ordinal $\alpha$ in the clauses for atomic formulas, disjunction and existential quantification satisfies $\alpha<\delta$, and the model only forces axioms in $\ensuremath{\mathbb{T}}$. 
\end{defs} A Kripke model is a special kind of Beth model none of whose nodes forces $\bot$ and where the forcing relation for atomic formulas, disjunction and existential quantification satisfies the stronger condition $level(l)=level(k)$: \begin{defs} A Kripke model for pure $\kappa$-first-order logic over $\Sigma$ is a quadruple $\mathcal{K}=(K, \leq, D, \Vdash)$, where $(K, \leq)$ is a tree, $D$ is a set-valued functor on $K$ and the forcing relation $\Vdash$ is a binary relation between elements of $K$ and sentences of the language with constants from $\bigcup_{k \in K}D(k)$, satisfying $k \nVdash \bot$ and defined recursively for formulas $\phi$ as follows. There is an interpretation of function and relation symbols in each $D(k)$; if $R_k \subseteq D(k)^{\lambda}$ is the interpretation in $D(k)$ of the $\lambda$-ary relation symbol $R$ in the language, we require that $k \leq l$ and $R_k(\mathbf{c})$ imply $R_l(D_{kl}(\mathbf{c}))$ for $\mathbf{c} \subseteq D_k$, and: \begin{enumerate} \item $k \Vdash R(\mathbf{s}(\mathbf{d})) \iff R_k(\mathbf{s}(\mathbf{d}))$ \item $k \Vdash \bigwedge_{i<\gamma}\phi_i(\mathbf{d}) \iff k \Vdash \phi_i(\mathbf{d}) \text{ for every } i<\gamma$ \item $k \Vdash \bigvee_{i<\gamma}\phi_i(\mathbf{d}) \iff k \Vdash \phi_i(\mathbf{d}) \text{ for some } i<\gamma$ \item $k \Vdash \phi(\mathbf{d}) \to \psi(\mathbf{d'}) \iff \forall k' \geq k (k' \Vdash \phi(D_{kk'}(\mathbf{d})) \implies k' \Vdash \psi(D_{kk'}(\mathbf{d'})))$ \item $k \Vdash \exists \mathbf{x} \phi(\mathbf{x}, \mathbf{d}) \iff \exists \mathbf{e} \subseteq D(k) (k \Vdash \phi(\mathbf{e}, \mathbf{d}))$ \item $k \Vdash \forall \mathbf{x} \phi(\mathbf{x}, \mathbf{d}) \iff \forall k' \geq k \forall \mathbf{e} \subseteq D_{k'} (k' \Vdash \phi(\mathbf{e}, D_{kk'}(\mathbf{d})))$ \end{enumerate} A Kripke model for a theory \ensuremath{\mathbb{T}}\ is a Kripke model forcing all the axioms of the theory. 
\end{defs} A Kripke model can also be seen categorically as a model on a presheaf category. That is, if \synt{C}{\ensuremath{\mathbb{T}}}\ is the syntactic category of the theory, a Kripke model on $(K, \leq)$ is nothing but a $\kappa$-Heyting functor $F: \synt{C}{\ensuremath{\mathbb{T}}}\to \ensuremath{\mathbf{Set}}^{K}$, since such a functor determines the set-valued functor $D=F([x, \top]): K \to \ensuremath{\mathbf{Set}}$ which specifies the underlying domains of the nodes. In this case the forcing relation is given by $k \Vdash \phi(\mathbf{d})$ for $\mathbf{d} \in D(k)^n$ if and only if $\mathbf{d}: [-, k] \to D^n=F([\mathbf{x}, \top])$ factors through the subobject $F([\mathbf{x}, \phi]) \rightarrowtail F([\mathbf{x}, \top])$, where we use the Yoneda lemma to identify elements of $D(k)$ with natural transformations $[-, k] \to D$. This definition is precisely the forcing relation for the Kripke-Joyal semantics in the topos $\ensuremath{\mathbf{Set}}^{K}$, whence the name Kripke associated to it. We therefore have: \begin{proposition} $\kappa$-first-order logic is sound for Kripke models. \end{proposition} \begin{proof} It is enough to note that presheaf categories are $\kappa$-Heyting and apply Theorem \ref{cc}. \end{proof} More generally, one can consider Kripke models on arbitrary categories $\mathcal{M}$ instead of the tree $K$, and it turns out that the semantics of the Kripke model over $\mathcal{M}$ can be recovered in terms of Kripke semantics over a certain collection of trees. To do that, consider first the poset $P$ which consists of finite composable sequences of morphisms of $\mathcal{M}$, i.e., chains $A_0 \to ... \to A_n$ in $\mathcal{M}$. One such sequence is below another in $P$ if the former is an initial segment of the latter. There is a functor $E:P \to \mathcal{M}$ sending each chain to the last object in it and sending any morphism $f$ of $P$ to the composite of those morphisms of $\mathcal{M}$ that appear in the codomain chain but not in the domain chain of $f$. 
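To make the construction of $P$ and $E$ concrete, here is a minimal computational sketch on a toy two-object category (the category, the bound on chain length, and all names are illustrative assumptions, not part of the text):

```python
# A toy category M: objects A, B; one non-identity morphism f: A -> B.
objects = ["A", "B"]
morphisms = {"idA": ("A", "A"), "idB": ("B", "B"), "f": ("A", "B")}  # name -> (dom, cod)

def last_object(start, seq):
    """The last object of the composable chain that starts at `start`."""
    return morphisms[seq[-1]][1] if seq else start

def chains(max_len):
    """Objects of (a finite part of) the poset P: composable sequences of
    morphisms of M, represented as (starting object, tuple of morphism names).
    One-element chains (empty sequences) are the roots of P."""
    result = [(obj, ()) for obj in objects]
    frontier = list(result)
    for _ in range(max_len):
        new = [(start, seq + (m,))
               for start, seq in frontier
               for m, (dom, _) in morphisms.items()
               if dom == last_object(start, seq)]
        result += new
        frontier = new
    return result

def E(chain):
    """The functor E on objects: send a chain to its last object."""
    start, seq = chain
    return last_object(start, seq)

P = chains(2)
# E is surjective on objects: every object of M is the last object of some chain.
assert {E(c) for c in P} == set(objects)
# On the extension ("A", ()) <= ("A", ("f",)) the functor E yields f itself,
# so E is surjective on arrows as well.
assert E(("A", ("f",))) == "B"
```

Ordering a chain below its extensions makes $P$ a disjoint union of trees once one cuts at the one-element chains, which is exactly how the Kripke model on $P$ decomposes into tree models below.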
Now, given a Kripke model $F: \synt{C}{\ensuremath{\mathbb{T}}}\to \ensuremath{\mathbf{Set}}^{\mathcal{M}}$, we can compose $F$ with the transpose $E^*: \ensuremath{\mathbf{Set}}^{\mathcal{M}} \to \ensuremath{\mathbf{Set}}^{P}$, and if the latter is a conservative $\kappa$-Heyting functor, this will provide a Kripke model on $P$ forcing precisely the same formulas as the original model. Finally, the Kripke model on $P$ can be regarded as a collection of Kripke models on trees, where the roots of the trees are given by one-element chains. This construction amounts to building the Diaconescu cover of the topos $\ensuremath{\mathbf{Set}}^{\mathcal{M}}$ (see e.g. \cite{maclane-moerdijk}). In our case the discussion above shows that for our purposes it is enough to prove the following, which is the infinitary counterpart of section 1.744 of \cite{fs}: \begin{lemma} The functor $E^*: \ensuremath{\mathbf{Set}}^{\mathcal{M}} \to \ensuremath{\mathbf{Set}}^{P}$ is conservative and $\kappa$-Heyting. \end{lemma} \begin{proof} The conservativity of $E^*$ follows from the fact that $E$ is surjective on objects and arrows. To prove that it is $\kappa$-Heyting, the non-trivial part is proving that it preserves $\forall$. For a natural transformation $f: F \to G$ in $\ensuremath{\mathbf{Set}}^{\mathcal{M}}$ and a subfunctor $A$ of $F$, we need to show that $E^*(\forall_fA)$ is the same subfunctor of $E^*(G)$ as $\forall_{E^*(f)}E^*(A)$. 
By definition, for any object $p$ in $P$ and $y \in E^*(G)(p)=G(E(p))$, we have $y \in \forall_{E^*(f)}E^*(A)(p)$ if and only if for all arrows $l: p \to q$ in $P$ one has: $$E^*(f)_q^{-1}(G(E(l))(y)) \subseteq E^*(A)(q)$$ $$\iff \forall x (E^*(f)_q(x)=G(E(l))(y) \implies x \in E^*(A)(q))$$ $$\iff \forall x (f_{E(q)}(x)=G(E(l))(y) \implies x \in A(E(q))) \qquad (1)$$ On the other hand, also by definition, for $y \in G(E(p))$ one has $y \in \forall_fA(E(p))$ if and only if for all arrows $t: E(p) \to r$ in $\mathcal{M}$ one has: $$f_r^{-1}(G(t)(y)) \subseteq A(r)$$ $$\iff \forall x (f_r(x)=G(t)(y) \implies x \in A(r)) \qquad (2)$$ But because the functor $E$ is surjective on objects and arrows, we can find an object $q$ and an arrow $l: p \to q$ in $P$ such that $r=E(q)$ and $t=E(l)$ (take for $q$ the chain extending $p$ by $t$), from which we deduce that $(1)$ and $(2)$ above are equivalent. Hence, $E^* \forall=\forall E^*$, as we wanted. \end{proof} \subsection{Syntactic sites} The syntactic categories for fragments of $\kappa$-first-order logic can be equipped with appropriate Grothendieck topologies in such a way that the corresponding sheaf toposes are conservative models of the corresponding theories. Given a $\kappa$-regular category, we can define the $\kappa$-regular coverage, where the covering families are all singletons $\{f\}$ where $f$ is a cover. Similarly, for a $\kappa$-coherent category we can define the $\kappa$-coherent coverage, where the covering families are given by families of arrows $f_i: A_i \to A$ of cardinality less than $\kappa$ such that the union of their images is the whole of $A$ (in particular, the initial object $0$ is covered by the empty family). We can also find (see \cite{bj}) a conservative sheaf model given by the Yoneda embedding into the sheaf topos obtained with the $\kappa$-coherent coverage. As proven in \cite{bj}, the embedding preserves $\kappa$-unions and $\kappa$-intersections, as well as any Heyting structure that might exist in $\mathcal{C}$. 
To highlight the fact that the stability of images and unions under pullback is crucial, we prove the following lemma, which can be regarded as a generalization of the result corresponding to the finitary case: \begin{lemma}\label{shemb} Given a $\kappa$-coherent (resp. $\kappa$-Heyting) category $\mathcal{C}$ with the $\kappa$-coherent coverage $\tau$, the Yoneda embedding $y: \mathcal{C} \to \mathcal{S}h(\mathcal{C}, \tau)$ is a conservative $\kappa$-coherent (resp. $\kappa$-Heyting) functor and $\mathcal{S}h(\mathcal{C}, \tau)$ is a $\kappa$-Heyting category. \end{lemma} \begin{proof} By \cite{mr}, Proposition 3.3.3, we know that all representable functors are sheaves for the $\kappa$-coherent coverage, since the fact that the union of the images of the arrows in a covering family over $A$ is the whole of $A$ is equivalent to the fact that the family is effective epimorphic, and this is precisely the sheaf condition on representable functors. In case $\mathcal{C}$ is $\kappa$-Heyting, the embedding preserves universal quantification as shown in \cite{bj}, Lemma 3.1. For the sake of completeness we reproduce here the proof. Let $f: A \to B$ be a morphism of $\mathcal{C}$, $A' \rightarrowtail A$ a subobject of $A$, and write $B' \rightarrowtail B$ for $\forall_f(A' \rightarrowtail A)$. Let $R \rightarrowtail y(B)$ be any subobject of $y(B)$ in $\mathcal{S}h(\mathcal{C}, \tau)$ such that $(y(f))^*(R) \leq y(A')$ in $\mathcal{S}ub(y(A))$; we must show that $R \leq y(B')$, i.e. that every morphism in $R$ factors through $B' \rightarrowtail B$. Let $g: C \to B$ be such a morphism, and let $h: D \to A$ be its pullback along $f$; then $h \in (y(f))^*(R)$, and so $h$ factors through $A' \rightarrowtail A$, that is, the image $A'' \rightarrowtail A$ of $h$ satisfies $A'' \leq A'$. Since image factorizations in $\mathcal{C}$ are stable under pullback, it follows that the image $B'' \rightarrowtail B$ of $g$ satisfies $B'' \leq B'$; so we have our required factorization. 
The Yoneda embedding preserves limits, and limits of sheaves are computed as in presheaves, so it remains to prove that it preserves images and $\kappa$-unions. Given a cover $f: A \twoheadrightarrow B$, we need to prove that $[-, A] \to [-, B]$ is a sheaf epimorphism, i.e., that it is locally surjective. For this it is enough to find a covering family over each object $C$ that witnesses the local surjectivity. Given an element $g$ in $[C, B]$, we can simply form the pullback of $f$ along $g$, obtaining thus a covering family over $C$ consisting of the single arrow $g^*(f)$, which will clearly witness the local surjectivity. The argument for the preservation of unions is similar: given the union $\bigvee_{i<\gamma}A_i$ of subobjects $f_i: A_i \to B$ we need to show that $[-, \bigvee_{i<\gamma}A_i]$ is the union of the sheaves $[-, A_i]$. Given an object $C$ and an element $g$ in $[C, \bigvee_{i<\gamma}A_i]$, the pullbacks along $g$ of $f'_i: A_i \to \bigvee_{i<\gamma}A_i$ give a covering family $\{g^*(f'_i): P_i \to C\}_{i<\gamma}$ with the property that the composite $g \cdot g^*(f'_i) \in [P_i, \bigvee_{i<\gamma}A_i]$ belongs to $[P_i, A_i]$, which is enough to guarantee that $[-, \bigvee_{i<\gamma}A_i]$ is indeed the union of the $[-, A_i]$. Finally, we show that the sheaf topos is a $\kappa$-coherent category by proving that the transfinite transitivity property holds in $\mathcal{S}h(\mathcal{C}, \tau)$. To this end, suppose we have a family of sheaves $\{S_{f}: \beta<\gamma, f \in \gamma^{\beta}\}$ satisfying the premises of the transfinite transitivity property, that is, that $\{S_{g} \to S_f: g \in \gamma^{\beta+1}, g|_{\beta}=f\}$ form a jointly covering family and that $S_{f|_{\beta}}=\lim_{\alpha<\beta}S_{f|_{\alpha}}$ for limit $\beta$. 
Then given $c \in S_{\emptyset}(C)$ we define by transfinite recursion a covering family $\{l_f: C_{f} \to C: \beta<\gamma, f \in \gamma^{\beta}\}$ such that, given $f \in \gamma^{\gamma}$, $c.l_f \in \bigwedge_{\alpha<\gamma}S_{f_i|_{\alpha}}(C)$ for some $f_i \in \gamma^{\gamma}$, witnessing that $\{\bigwedge_{\alpha<\gamma}S_{f|_{\alpha}} \to S_{\emptyset}: f \in \gamma^{\gamma}\}$ is a jointly covering family. In fact, the covering family over $C$ will be such that for any fixed $\beta<\gamma$ we will have that $\{l_f: C_{f} \to C: f \in \gamma^{\beta}\}$ is a witness of the joint covering of the sheaves $\{S_{f}: f \in \gamma^{\beta}\}$, that is, given $f \in \gamma^{\beta}$ we will have $c.l_f \in S_{f_i}(C)$ for some $f_i \in \gamma^{\beta}$. Supposing that $\{l_f: C_{f} \to C: \beta<\mu, f \in \gamma^{\beta}\}$ has been defined, we show how to define the family at level $\mu$. If $\mu$ is a successor ordinal $\mu=\alpha+1$, we have by inductive hypothesis a covering $\{l_f: C_{f} \to C: f \in \gamma^{\alpha}\}$ such that, given $f \in \gamma^{\alpha}$, $c.l_f \in S_{f_i}(C)$ for some $f_i \in \gamma^{\alpha}$. Then, because $\{S_{g} \to S_{f_i}: g \in \gamma^{\mu}, g|_{\alpha}=f_i\}$ is jointly covering, we can find a covering $\{h_{gf_i}: C_g \to C_{f_i} : g \in \gamma^{\mu}, g|_{\alpha}=f_i\}$ such that, given $g \in \gamma^{\mu}, g|_{\alpha}=f_i$, $c.l_g=(c.l_{f_i}).h_{gf_i} \in S_{g_j}(C)$ for some $g_j \in \gamma^{\mu}$. This extends, by transitivity, the definition of the covering family to level $\mu$. If $\mu$ is a limit ordinal and $f \in \gamma^\mu$, we simply take $C_f$ to be the limit of the diagram formed by $C_{f|_{\alpha}}: \alpha<\mu$. Then clearly, given $f \in \gamma^\mu$, $c.l_f \in \bigwedge_{\alpha<\mu}S_{f_k|_{\alpha}}$ for some $f_k \in \gamma^\mu$. This finishes the recursive construction of the family over $C$ and proves the transfinite transitivity property for the sheaves. 
\end{proof} We get immediately: \begin{thm} If $\kappa$ is any inaccessible cardinal, $\kappa$-first-order theories are complete with respect to models in $\kappa$-Heyting Grothendieck toposes. \end{thm} \section{Completeness} \subsection{Completeness of infinitary coherent logic} We will need the following technical lemma, which corresponds to the canonical well-ordering of $\kappa \times \kappa$ from \cite{jechst}: \begin{lemma}\label{dwo} For every cardinal $\kappa$ there is a well-ordering $f: \kappa \times \kappa \to \kappa$ with the property that $f(\beta, \gamma) \geq \gamma$. \end{lemma} \begin{proof} We define $f$ by induction on $\max (\beta, \gamma)$ as follows: $$f(\beta, \gamma)=\begin{cases} \sup\{f(\beta', \gamma')+1: \beta', \gamma'<\gamma\}+\beta & \mbox{if } \beta<\gamma \\ \sup\{f(\beta', \gamma')+1: \beta', \gamma'<\beta\}+\beta+\gamma & \mbox{if } \gamma \leq \beta \end{cases}$$ \noindent which satisfies the required property (see \cite{jechst}, Theorem 3.5). \end{proof} We have now: \begin{thm}\label{shcomp} Let $\kappa$ be an inaccessible cardinal. Then any $\kappa$-coherent theory of cardinality less than $\kappa$ has a partial Beth model of height less than $\kappa$. \end{thm} \begin{proof} Consider the syntactic category $\mathcal{C}_{\ensuremath{\mathbb{T}}}$ of the theory and its conservative embedding in the topos of sheaves with the $\kappa$-coherent coverage, $\mathcal{C}_{\ensuremath{\mathbb{T}}} \to \mathcal{S}h(\mathcal{C}_{\ensuremath{\mathbb{T}}}, \tau)$. By assumption, the cardinality of the set $S$ of all subformulas of axioms of the theory is $\delta<\kappa$. Consider for any object $A$ the set of basic covering families over $A$ (which are given by jointly covering sets of arrows of cardinality less than $\kappa$) that witness that some subformula in $S$ is forced by $A$. That is, if $S$ contains a subformula $\phi$ which is a (nonempty) disjunction $\bigvee_{i<\gamma}\phi_i(\boldsymbol{\beta})$ (resp.
an existential formula $\exists_{\alpha<\gamma}\mathbf{x}_{\alpha}\psi(\mathbf{x}_0, ...,\mathbf{x}_{\alpha}, ..., \boldsymbol{\beta})$), and $A \Vdash \phi(\boldsymbol{\beta})$, we include in the set of coverings one family of the form $\{l_j: C_j \to A\}_j$, where for each $j$ we have $C_j \Vdash \phi_{i_j}(\boldsymbol{\beta} l_j)$ for some $i_j<\gamma$ (resp. $C_j \Vdash \psi(\boldsymbol{\beta_0^j}, ..., \boldsymbol{\beta_{\alpha}^j}, ... \boldsymbol{\beta} l_j)$ for some $\boldsymbol{\beta_0^j}, ..., \boldsymbol{\beta_{\alpha}^j}, ...$). In case $\phi$ is $\bot$, or $\phi$ is a conjunctive subformula, or $A \nVdash \phi(\boldsymbol{\beta})$, we just consider the identity arrow as a cover. The set of covering families over a given object $A$ just specified has thus cardinality at most $\delta$. By adding identity covers to each set we can assume without loss of generality that $\delta$ is regular and bigger than the maximum arity of function and relation symbols in $S$. Construct a functor from a tree of height $\delta$ to the syntactic category, defined recursively on the levels of the tree. Start with a well-ordering $f: \delta \times \delta \to \delta$ as in Lemma \ref{dwo}, i.e., with the property that $f(\beta, \gamma) \geq \gamma$. We describe by an inductive definition how the tree obtained as the image of the functor is constructed. The root of that tree is the terminal object. Suppose now that the tree is defined for all levels $\lambda<\mu$; we show how to define the nodes of level $\mu$. Suppose first that $\mu$ is a successor ordinal $\mu=\alpha+1$, and let $\alpha=f(\beta, \gamma)$. Since by hypothesis $f(\beta, \gamma) \geq \gamma$, the nodes $\{p_i\}_{i<m_{\gamma}}$ at level $\gamma$ are defined. Consider the morphisms $g_{ij}^{\alpha}$ over $p_i$ assigned to the paths from each of the nodes $p_i$ to the nodes of level $\alpha$.
To define the nodes at level $\alpha+1$, then take the $\beta$-th covering family over each $p_i$ and pull it back along the morphisms $g_{ij}^{\alpha}$. This produces covering families over each node at level $\alpha$, whose domains are then the nodes of level $\alpha+1$. Suppose now that $\mu$ is a limit ordinal. Then each branch of the tree of height $\mu$ already defined determines a diagram, whose limit is defined to be the node at level $\mu$ corresponding to that branch. The tree has height $\delta$, and clearly, the morphisms assigned to the paths from any node $p$ to the nodes of level $\alpha$ in the subtree over $p$ form a basic covering family of $p$ because of the transfinite transitivity property. Define now a partial Beth model $B$ over this tree by defining as the underlying set of a node $q$ the set of arrows from $q$ to the object $[x, \top]$ in the syntactic category, and where the function between the underlying set of a node and its successor is given by composition with the corresponding arrow. There is an interpretation of the function symbols in the set underlying each node, which corresponds to composition with the interpretation in the category of the corresponding function symbol. For relations $R$ (including equality), we set by definition $R_q(\mathbf{s}(\boldsymbol{\alpha}))$ if and only if $q$ forces $R(\mathbf{s}(\boldsymbol{\alpha}))$ in the sheaf semantics of the topos, that is, if $q \Vdash R(\mathbf{s}(\boldsymbol{\alpha}))$ (we identify the category with its image through the Yoneda embedding). We shall now prove the following:\\ \emph{Claim}: For every node $p$, every tuple $\boldsymbol{\alpha}$ and every formula $\phi \in S$, $p \Vdash \phi(\boldsymbol{\alpha})$ if and only if $p \Vdash_B \phi(\boldsymbol{\alpha})$.\\ The proof goes by induction on $\phi$. \begin{enumerate} \item If $\phi$ is atomic, the result is immediate by definition of the underlying structures on each node.
\item If $\phi=\bigwedge_{i<\gamma}\psi_i$, the result follows easily from the inductive hypothesis, since we have $p \Vdash \bigwedge_{i<\gamma}\psi_i(\boldsymbol{\alpha})$ if and only if $p \Vdash \psi_i(\boldsymbol{\alpha})$ for each $i<\gamma$, if and only if $p \Vdash_B \psi_i(\boldsymbol{\alpha})$ for each $i<\gamma$, if and only if $p \Vdash_B \bigwedge_{i<\gamma}\psi_i(\boldsymbol{\alpha})$. \item Suppose $\phi= \bigvee_{i<\gamma} \psi_i$. If $p \Vdash \bigvee_{i<\gamma} \psi_i$, then there is a basic covering family $\{f_i: A_i \to p\}_{i<\lambda}$ that appears at some point in the well-ordering, such that for each $i<\lambda$, $A_i \Vdash \psi_{k_i}(\boldsymbol{\alpha} f_i)$ for some $k_i<\gamma$. Now this covering family is pulled back along all paths $g_j$ of a subtree to create the nodes of a certain level of the subtree over $p$. Hence, every node $m_j$ in such a level satisfies $m_j \Vdash \psi_{k_j}(\boldsymbol{\alpha} f_ig'_j)$ for some $k_j<\gamma$. By inductive hypothesis, $m_j \Vdash_B \psi_{k_j}(\boldsymbol{\alpha} f_ig'_j)$, and hence we have $p \Vdash_B \bigvee_{i<\gamma} \psi_i$. Conversely, if $p \Vdash_B \bigvee_{i<\gamma} \psi_i$, there is a level in the subtree over $p$ such that for every node $m_j$ there, one has $m_j \Vdash_B \psi_{k_j}(\boldsymbol{\alpha} f_j)$ for some $k_j<\gamma$, so by inductive hypothesis $m_j \Vdash \psi_{k_j}(\boldsymbol{\alpha} f_j)$. Since $\{f_k: m_k \to p\}$ is, by construction, a basic covering family, we must have $p \Vdash \bigvee_{i<\gamma} \psi_i$. \item Suppose $\phi=\exists \mathbf{x} \psi(\mathbf{x}, \boldsymbol{\alpha})$. If $p \Vdash \exists \mathbf{x} \psi(\mathbf{x}, \boldsymbol{\alpha})$, then there is a basic covering family $\{f_i: A_i \to p\}_{i<\lambda}$ that appears at some point in the well-ordering, such that for each $i$ one has $A_i \Vdash \psi(\boldsymbol{\beta_i}, \boldsymbol{\alpha} f_i)$ for some $\boldsymbol{\beta_i}: A_i \to [\mathbf{x}, \top]$.
This basic cover is hence pulled back along all paths $g_j$ of a subtree to create the nodes of a certain level of the subtree over $p$. The nodes $m_{ij}$ in this level will have the property that $m_{ij} \Vdash \psi(\boldsymbol{\beta_i}g'_j, \boldsymbol{\alpha} f_ig'_j)$, and hence, by inductive hypothesis, that $m_{ij} \Vdash_B \psi(\boldsymbol{\beta_i}g'_j, \boldsymbol{\alpha} f_ig'_j)$. By definition, we thus get $p \Vdash_B \exists \mathbf{x} \psi(\mathbf{x}, \boldsymbol{\alpha})$. Conversely, suppose that $p \Vdash_B \exists \mathbf{x} \psi(\mathbf{x}, \boldsymbol{\alpha})$. Then there is a level in the subtree over $p$ such that for every node $m_k$ there, one has $m_k \Vdash_B \psi(\boldsymbol{\beta_k}, \boldsymbol{\alpha} f_k)$ for some $\boldsymbol{\beta_k}: m_k \to [\mathbf{x}, \top]$, and hence, by inductive hypothesis, such that $m_k \Vdash \psi(\boldsymbol{\beta_k}, \boldsymbol{\alpha} f_k)$. Since the arrows $f_k: m_k \to p$ form a basic cover of $p$, we must have $p \Vdash \exists \mathbf{x} \psi(\mathbf{x}, \boldsymbol{\alpha})$. \end{enumerate} \end{proof} \begin{proposition}\label{cohcomp} If $\kappa$ is an inaccessible cardinal, $\kappa$-coherent theories of cardinality less than $\kappa$ are complete with respect to \ensuremath{\mathbf{Set}}-valued models. \end{proposition} \begin{proof} It is enough to prove that every object in the sheaf model forcing the antecedent $\phi(\boldsymbol{\alpha})$ of a valid sequent $\phi \vdash_{\mathbf{x}} \psi$ also forces the consequent $\psi(\boldsymbol{\alpha})$ for every tuple $\boldsymbol{\alpha}$ in the domain. Construct a Beth model over a tree as above, but taking as the root of the tree a given object forcing $\phi(\boldsymbol{\alpha})$ and including in the set of formulas $S$ also the subformulas of $\phi$ and $\psi$.
For each branch $\mathbf{b}$ of the tree, consider the directed colimit $\mathbf{D_b}$ of all the underlying structures in the nodes of the branch, with the corresponding functions between them. Such a directed colimit is a structure under the definitions: \begin{enumerate} \item for each function symbol $f$, we define $f(\overline{x_0}, ..., \overline{x_{\lambda}}, ...)=\overline{f(x_0, ..., x_{\lambda}, ...)}$ for some representatives $x_i$ of $\overline{x_i}$; in particular, constants $\mathbf{c}$ are interpreted as $\overline{\mathbf{c}}=\overline{c_0}, ..., \overline{c_{\lambda}}, ...$; \item for each relation symbol $R$ we define $R(\overline{x_0}, ..., \overline{x_{\lambda}}, ...) \iff R(x_0, ..., x_{\lambda}, ...)$ for some representatives $x_i$ of $\overline{x_i}$. \end{enumerate} It is easy to check, using the regularity of $\delta$, that the structure is well-defined and that the choice of representatives is irrelevant. We will show that such a structure is a (possibly exploding) positive $\kappa$-coherent model of the theory satisfying $\phi(\overline{\boldsymbol{\alpha}})$. Indeed, we have the following: \emph{Claim}: Given any $\kappa$-coherent formula $\phi(x_0, ..., x_{\lambda}, ...) \in S$, we have $\mathbf{D_b} \vDash \phi(\overline{\alpha_0}, ..., \overline{\alpha_{\lambda}}, ...)$ if and only if for some node $n$ in the path $\mathbf{b}$, the underlying structure $C_n$ satisfies $C_n \Vdash \phi(\alpha_0, ..., \alpha_{\lambda}, ...)$ for some representatives $\alpha_i$ of $\overline{\alpha_i}$. The proof of the claim is by induction on the complexity of $\phi$. \begin{enumerate} \item If $\phi$ is $R(t_0, ..., t_{\lambda}, ...)$ or $s=t$ for given terms $t_i, s, t$, the result follows by definition of the structure.
\item If $\phi$ is of the form $\bigwedge_{i<\gamma} \theta_i$, the result follows from the inductive hypothesis: $\theta_i$ is forced at some node $n_i$ in the path $\mathbf{b}$, and therefore $\bigwedge_{i<\gamma} \theta_i$ will be forced in any upper bound of $\{n_i: i<\gamma\}$ (here we use the regularity of $\delta$). \item If $\phi$ is of the form $\bigvee_{i<\gamma} \theta_i$ and $\mathbf{D_b} \vDash \phi(\overline{\alpha_0}, ..., \overline{\alpha_s}, ...)$, then we can assume that $\mathbf{D_b} \vDash \theta_i(\overline{\alpha_0}, ..., \overline{\alpha_s}, ...)$ for some $i<\gamma$, so that by inductive hypothesis we get $C_n \Vdash \phi(\alpha_0, ..., \alpha_s, ...)$ for some node $n$ in $\mathbf{b}$. Conversely, if $C_n \Vdash \phi(\alpha_0, ..., \alpha_s, ...)$ for some node $n$ in $\mathbf{b}$, by definition of the forcing there is a node $m$ above $n$ in $\mathbf{b}$ and a function $f_{nm}: D_n \to D_m$ for which $C_m \Vdash \theta_i(f_{nm}(\alpha_0), ..., f_{nm}(\alpha_s), ...)$ for some $i<\gamma$, so that by inductive hypothesis we get $\mathbf{D_b} \vDash \phi(\overline{\alpha_0}, ..., \overline{\alpha_s}, ...)$. \item Finally, if $\phi$ is of the form $\exists \mathbf{x} \psi(\mathbf{x}, x_0, ..., x_s, ...)$ and $\mathbf{D_b} \vDash \phi(\overline{\alpha_0}, ..., \overline{\alpha_s}, ...)$, then $\mathbf{D_b} \vDash \psi(\boldsymbol{\overline{\alpha}}, \overline{\alpha_0}, ..., \overline{\alpha_s}, ...)$ for some $\boldsymbol{\overline{\alpha}}$, and then $C_n \Vdash \psi(\boldsymbol{\alpha}, \alpha_0, ..., \alpha_s, ...)$ for some node $n$ by inductive hypothesis.
Conversely, if $C_n \Vdash \phi(\alpha_0, ..., \alpha_s, ...)$ for some node $n$ in $\mathbf{b}$, then by definition of the forcing there is a node $m$ above $n$ in $\mathbf{b}$ and a function $f_{nm}: D_n \to D_m$ for which $C_m \Vdash \psi(f_{nm}(\boldsymbol{\alpha}), f_{nm}(\alpha_0), ..., f_{nm}(\alpha_s), ...)$, which implies that $\mathbf{D_b} \vDash \psi(\boldsymbol{\overline{\alpha}}, \overline{\alpha_0}, ..., \overline{\alpha_s}, ...)$ and hence $\mathbf{D_b} \vDash \phi(\overline{\alpha_0}, ..., \overline{\alpha_s}, ...)$. \end{enumerate} Since $\psi(\overline{\boldsymbol{\alpha}})$ is satisfied in all $\kappa$-coherent models of the theory satisfying $\phi(\overline{\boldsymbol{\alpha}})$, it is satisfied in all models of the form $\mathbf{D_b}$ (even if the structure $\mathbf{D_b}$ is exploding). Hence, $\psi(\boldsymbol{\alpha})$ is forced at a certain node of every branch of the tree. By taking limits over the diagram formed by each branch $\mathbf{b}$ we get nodes at level $\delta$ which also have to force $\psi(\boldsymbol{\alpha})$. Because these nodes form a basic covering family, $\psi(\boldsymbol{\alpha})$ is therefore forced at the root, as we wanted to prove. \end{proof} One can remove the restriction on the cardinality of the theory if one assumes instead that $\kappa$ is a weakly (resp. strongly) compact cardinal: \begin{proposition}\label{scohcomp} If $\kappa$ is a weakly (resp. strongly) compact cardinal, $\kappa$-coherent theories of cardinality at most $\kappa$ (resp. of arbitrary cardinality) are complete with respect to \ensuremath{\mathbf{Set}}-valued models. \end{proposition} \begin{proof} Suppose that the sequent $\phi \vdash_{\mathbf{x}} \psi$ is valid in every model of a certain theory but not provable. Then it is not provable in any subtheory of cardinality less than $\kappa$. 
Therefore, if we add to the language new constants $\mathbf{c}$ and axioms $\top \vdash \phi(\mathbf{c})$ and $\psi(\mathbf{c}) \vdash \bot$, any subtheory of cardinality less than $\kappa$ together with these two new axioms has, by Proposition \ref{cohcomp}, a model. Since $\kappa$ is weakly (resp. strongly) compact, the whole theory has a model, which provides a model for the original theory where $\phi \vdash_{\mathbf{x}} \psi$ is not valid. \end{proof} \begin{rmk} In the particular case when $\kappa=\omega$, Proposition \ref{scohcomp} reduces to the completeness theorem for coherent logic. The proof given is thus an alternative path to the usual categorical treatment of Henkinization that we see in \cite{johnstone}, D 1.5, and is closer to the methods in \cite{clr}. \end{rmk} Incidentally, a slight modification of the definition of witnessing covers that build the underlying tree of the partial Beth model in Theorem \ref{shcomp} yields a full model for $\kappa$-first-order theories: \begin{thm} Let $\kappa$ be an inaccessible cardinal. Then any $\kappa$-first-order theory of cardinality at most $\kappa$ has a universal\footnote{By universal we mean a conservative model, that is, one in which true sequents are provable.} Beth model. \end{thm} \begin{proof} We consider, for any object, a well-ordering of the set of basic covering families over the object, which are given by jointly covering sets of arrows (of cardinality less than $\kappa$); that is, we now include all possible covering families, and not just the ones that witness some disjunction or existential formula. Since $\kappa$ is inaccessible, it is easy to see that this set has cardinality $\kappa$. We define a tree of height $\kappa$ as before, using a well-ordering $f: \kappa \times \kappa \to \kappa$ as in Lemma \ref{dwo}. There is an obvious interpretation of the function symbols in the set underlying each node, as before.
If for relations $R$ (including equality), we set by definition $R_q(\mathbf{s}(\boldsymbol{\alpha}))$ if and only if $q$ forces $R(\mathbf{s}(\boldsymbol{\alpha}))$ in the sheaf semantics of the topos (we identify the category with its image through the Yoneda embedding), we will have, as before, the following:\\ \emph{Claim}: For every node $p$, every tuple $\boldsymbol{\alpha}$ and every formula $\phi \in S$, $p \Vdash \phi(\boldsymbol{\alpha})$ if and only if $p \Vdash_B \phi(\boldsymbol{\alpha})$.\\ The proof is again by induction on $\phi$, and the steps for the atomic formulas, conjunctions, disjunctions and existential quantification are the same as before. We need only to consider implication and universal quantification. \begin{enumerate} \item Suppose $\phi=\psi \to \theta$. If $p \Vdash \psi(\boldsymbol{\alpha}) \to \theta(\boldsymbol{\alpha})$, for every $f: c \to p$ in the category one has $c \Vdash \psi(\boldsymbol{\alpha} f) \implies c \Vdash \theta(\boldsymbol{\alpha} f)$. In particular, this holds when $c$ is any node $q$ in the tree above $p$, and by inductive hypothesis one has $q \Vdash_B \psi(\boldsymbol{\alpha} f) \implies q \Vdash_B \theta(\boldsymbol{\alpha} f)$ for all such nodes. Therefore, $p \Vdash_B \psi(\boldsymbol{\alpha}) \to \theta(\boldsymbol{\alpha})$. Conversely, suppose that $p \Vdash_B \psi(\boldsymbol{\alpha}) \to \theta(\boldsymbol{\alpha})$ and consider an arrow $f: c \to p$. Together with the identity, this arrow forms a covering family which appears at some point in the well-ordering and is hence pulled back along paths $g_j$ of a subtree to build the next level of the subtree over $p$. Suppose that $c \Vdash \psi(\boldsymbol{\alpha})$; then $g_j^*(c) \Vdash \psi(\boldsymbol{\alpha} g'_j)$, so by inductive hypothesis one has $g_j^*(c) \Vdash_B \psi(\boldsymbol{\alpha} g'_j)$. Therefore, we get $g_j^*(c) \Vdash_B \theta(\boldsymbol{\alpha} g'_j)$, and using once more the inductive hypothesis, $g_j^*(c) \Vdash \theta(\boldsymbol{\alpha} g'_j)$.
But $g'_j=f^*(g_j): g_j^*(c) \to c$ is a basic cover of $c$ (since the $g_j$ form a basic cover of $p$), and hence we will have $c \Vdash \theta(\boldsymbol{\alpha})$. We have, thus, proved that $p \Vdash \psi(\boldsymbol{\alpha}) \to \theta(\boldsymbol{\alpha})$. \item Suppose $\phi=\forall \mathbf{x} \psi(\mathbf{x}, \boldsymbol{\alpha})$. If $p \Vdash \forall \mathbf{x} \psi(\mathbf{x}, \boldsymbol{\alpha})$, for every $f: c \to p$ in the category and every $\boldsymbol{\beta}: c \to [\mathbf{x}, \top]$ one has $c \Vdash \psi(\boldsymbol{\beta}, \boldsymbol{\alpha} f)$. In particular, this holds when $c$ is any node $q$ in the tree above $p$, and by inductive hypothesis one has $q \Vdash_B \psi(\boldsymbol{\beta}, \boldsymbol{\alpha} f)$ for all such nodes. Therefore, $p \Vdash_B \forall \mathbf{x} \psi(\mathbf{x}, \boldsymbol{\alpha})$. Conversely, suppose that $p \Vdash_B \forall \mathbf{x} \psi(\mathbf{x}, \boldsymbol{\alpha})$ and consider an arrow $f: c \to p$. Together with the identity, this arrow forms a covering family which appears at some point in the well-ordering and is hence pulled back along the paths $g_j$ of a subtree to build the next level of the subtree over $p$. Suppose we have some $\boldsymbol{\beta}: c \to [\mathbf{x}, \top]$; then we have arrows $\boldsymbol{\beta} f^*(g_j): g_j^*(c) \to [\mathbf{x}, \top]$, and by definition we must have $g_j^*(c) \Vdash_B \psi(\boldsymbol{\beta} f^*(g_j), \boldsymbol{\alpha} fg'_j)$, so by inductive hypothesis one has $g_j^*(c) \Vdash \psi(\boldsymbol{\beta} f^*(g_j), \boldsymbol{\alpha} fg'_j)$. But $f^*(g_j): g_j^*(c) \to c$ is a basic cover of $c$ (since the $g_j$ form a basic cover of $p$), and hence we will have $c \Vdash \psi(\boldsymbol{\beta}, \boldsymbol{\alpha})$. We have thus proved that $p \Vdash \forall \mathbf{x} \psi(\mathbf{x}, \boldsymbol{\alpha})$. \end{enumerate} This finishes the proof. \end{proof} We conclude with an equivalent characterization of weak (resp. strong) compactness.
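Before turning to the lattice-theoretic notions, it may help to see the combinatorial Lemma \ref{dwo} in action: for finite arguments the suprema in the recursion are maxima and $f$ can be evaluated directly. The following table of initial values (computed by us from the displayed recursion, purely for illustration) shows the resulting enumeration:

```latex
\[
\begin{array}{c|ccc}
f(\beta, \gamma) & \gamma=0 & \gamma=1 & \gamma=2 \\ \hline
\beta=0 & 0 & 1 & 4 \\
\beta=1 & 2 & 3 & 5 \\
\beta=2 & 6 & 7 & 8
\end{array}
\]
```

The pairs are listed shell by shell, by increasing $\max(\beta, \gamma)$, so each square $n \times n$ is enumerated onto $n^2$; and every entry in the column of a given $\gamma$ is at least $\gamma$, which is exactly the property $f(\beta, \gamma) \geq \gamma$ used in the tree constructions above to ensure that level $\gamma$ is already defined when its $\beta$-th covering family is consumed.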
\begin{defs} A $\kappa$-complete lattice will be called $\kappa$-distributive if it satisfies the intuitionistic distributivity law, i.e., if $a \wedge \bigvee_{i<\gamma} b_{i} \leq \bigvee_{i<\gamma}(a \wedge b_{i})$ for every $\gamma<\kappa$, and if the following propositional version of the transfinite transitivity property holds: for every $\gamma<\kappa$ and all elements $\{a_f: f \in \gamma^{\beta}, \beta<\gamma\}$ such that $$a_{f} \leq \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} a_{g}$$ \noindent for all $f \in \gamma^{\beta}, \beta<\gamma$, and $$a_{f} = \bigwedge_{\alpha<\beta}a_{f|_{\alpha}}$$ \noindent for all limit $\beta$, $f \in \gamma^{\beta}, \beta<\gamma$, we have that $$a_{\emptyset} \leq \bigvee_{f \in \gamma^{\gamma}} \bigwedge_{\beta<\gamma}a_{f|_{\beta}}.$$ A $\kappa$-complete filter in the lattice is a filter $\mathcal{F}$ such that whenever $a_i \in \mathcal{F}$ for every $i \in I$, $|I|<\kappa$, then $\bigwedge_{i \in I}a_i \in \mathcal{F}$. Dually, a $\kappa$-complete ideal in the lattice is an ideal $\mathcal{I}$ such that whenever $a_i \in \mathcal{I}$ for every $i \in I$, $|I|<\kappa$, then $\bigvee_{i \in I}a_i \in \mathcal{I}$. A $\kappa$-prime filter in the lattice is a filter $\mathcal{F}$ such that whenever $\bigvee_{i \in I}a_i$ is in $\mathcal{F}$ for $|I|<\kappa$ then $a_i \in \mathcal{F}$ for some $i \in I$. \end{defs} \begin{lemma}\label{quotient} Given any proper $\kappa$-complete filter $\mathcal{F}$ in a $\kappa$-complete, $\kappa$-distributive lattice $\mathcal{L}$, there exists a surjective morphism $\theta: \mathcal{L} \to \mathcal{K}$ into a $\kappa$-complete, $\kappa$-distributive lattice $\mathcal{K}$ such that $\theta^{-1}(1)=\mathcal{F}$. \end{lemma} \begin{proof} The proof is an adaptation of the one given, e.g., in \cite{maclane-moerdijk}, V.9, for Heyting algebras. Consider first the case in which the filter $\mathcal{F}$ is principal, given by some element $u$ as the set $\{u': u' \geq u\}$.
Define now an equivalence relation on $\mathcal{L}$ by setting $a \cong b \iff a \wedge u=b \wedge u$. Denoting by $a_u$ the equivalence class of $a$, we can give the quotient $\mathcal{L}/u$ a structure of a $\kappa$-complete lattice with the partial order given by $a_u \leq b_u \iff a \wedge u \leq b \wedge u$. On the resulting poset of equivalence classes, the meet and the join are given as follows: $$\bigwedge_{i \in \gamma}a^i_u =\left(\bigwedge_{i \in \gamma}a^i\right)_u$$ $$\bigvee_{i \in \gamma}a^i_u =\left(\bigvee_{i \in \gamma}a^i\right)_u$$ \\ That these operations are well defined follows from the fact that $\mathcal{L}$ is $\kappa$-distributive. With these operations, it also follows that $\mathcal{L}/u$ inherits the $\kappa$-distributivity. Now, if we had $u \leq v \leq w$ in $\mathcal{L}$, there are evident morphisms $\mathcal{L}/w \to \mathcal{L}/v \to \mathcal{L}/u$ with composite $\mathcal{L}/w \to \mathcal{L}/u$. Then for a general filter $\mathcal{F}$, we just define $\mathcal{K}$ as the colimit: $$\varinjlim_{u \in \mathcal{F}}\mathcal{L}/u.$$ \\ Since $\mathcal{F}$ is $\kappa$-complete, it follows that the colimit is $\kappa$-filtered, and hence it inherits all the structure of $\kappa$-complete, $\kappa$-distributive lattice of each $\mathcal{L}/u$. This finishes the proof. \end{proof} We can state now: \begin{proposition}\label{filter} A cardinal $\kappa$ is weakly (resp. strongly) compact if and only if, given any proper $\kappa$-complete ideal $\mathcal{I}$ in a $\kappa$-complete, $\kappa$-distributive lattice $\mathcal{L}$ of cardinality at most $\kappa$ (resp. of arbitrary cardinality), and given a $\kappa$-complete filter $\mathcal{F}$ disjoint from $\mathcal{I}$, there exists a $\kappa$-complete, $\kappa$-prime filter containing $\mathcal{F}$ and disjoint from $\mathcal{I}$. \end{proposition} \begin{proof} The ``if'' part follows by restricting ourselves to the case when the lattice is a Boolean algebra; we prove here the ``only if'' part.
Consider the quotient $\mathcal{L}'$ of the lattice $\mathcal{L}$ by $\mathcal{F}$ given by Lemma \ref{quotient}; this is a $\kappa$-complete and $\kappa$-distributive lattice, and the image of $\mathcal{I}$ is a proper $\kappa$-complete ideal $\mathcal{I}'$. Define a new propositional variable $P_a$ for each element $a$ of $\mathcal{L}'$ and consider the theory axiomatized by the following axioms: \begin{enumerate} \item $P_a \vdash \bot$ for all $a \in \mathcal{I}'$ \item $\bigwedge_{i \in I}P_{a_i} \vdash P_{\left(\bigwedge_{i \in I}a_i\right)}$ for all $a_i$ and $|I|<\kappa$ \item $P_{\left(\bigvee_{i \in I}a_i\right)} \vdash \bigvee_{i \in I}P_{a_i}$ for all $a_i$ and $|I|<\kappa$ \end{enumerate} This is a theory over the $\kappa$-coherent fragment which has cardinality at most $\kappa$. Each subtheory of cardinality less than $\kappa$ involves $\gamma<\kappa$ propositional variables, whose corresponding elements generate a (non-trivial) $\kappa$-complete sublattice of $\mathcal{L}'$ of cardinality at most $2^{\gamma}<\kappa$ (as every element there is equivalent to one of the form $\bigvee_{i<\gamma} \bigwedge_{j<\gamma} a_{ij}$), and thus containing fewer than $\kappa$ many $\kappa$-prime filters. By Proposition \ref{cohcomp}, the intersection of all $\kappa$-prime filters of any such sublattice is $\{1\}$, and hence any such sublattice contains at least one $\kappa$-prime filter disjoint from $\mathcal{I}'$. This shows that each subtheory of cardinality less than $\kappa$ has a model. Since $\kappa$ is weakly (resp. strongly) compact, the whole theory has a model, which corresponds to a $\kappa$-complete, $\kappa$-prime filter of $\mathcal{L}'$ whose preimage along the quotient map provides a $\kappa$-complete, $\kappa$-prime filter in $\mathcal{L}$ containing $\mathcal{F}$ and disjoint from $\mathcal{I}$.
\end{proof} \begin{rmk} Proposition \ref{filter} can be used to provide a representation theorem for $\kappa$-complete, $\kappa$-distributive lattices of cardinality at most $\kappa$ (resp. of arbitrary cardinality) in terms of lattices of subsets. More specifically, the $\kappa$-complete lattice morphism $f$ that maps an element $e \in \mathcal{L}$ to the set of $\kappa$-complete, $\kappa$-prime filters containing $e$ provides an isomorphism of lattices $\mathcal{L} \to f(\mathcal{L})$ precisely when $\kappa$ is weakly (resp. strongly) compact. \end{rmk} \subsection{Completeness of infinitary intuitionistic first-order logic} Having now at hand a completeness theorem for $\kappa$-coherent theories, we can adapt the proof of Joyal's theorem by replacing the category of coherent models with that of $\kappa$-coherent models. As a result, we get: \begin{thm}\label{bt} If $\kappa$ is weakly (resp. strongly) compact, $\kappa$-first-order theories of cardinality at most $\kappa$ (resp. of arbitrary cardinality) are complete with respect to Kripke models. \end{thm} \begin{proof} Consider the syntactic category $\mathcal{C}$ of the $\kappa$-coherent Morleyization $\ensuremath{\mathbb{T}}^m$ of the theory $\ensuremath{\mathbb{T}}$. Let $\mathcal{C}oh(\mathcal{C})$ be the category of $\kappa$-coherent models of $\ensuremath{\mathbb{T}}^m$ of size at most $\kappa$ (resp. of arbitrary size), whose arrows are model homomorphisms. We have a functor $ev: \mathcal{C} \to \ensuremath{\mathbf{Set}}^{\mathcal{C}oh(\mathcal{C})}$ sending an object $A$ to the evaluation functor $ev(A)$. It is clear that this functor is $\kappa$-coherent, and by Proposition \ref{scohcomp}, it is also conservative (this is because any model contains an elementary submodel of cardinality at most $\kappa$, by the usual L\"owenheim-Skolem construction). We must prove that $ev$ also preserves $\forall$.
Given an arrow $f: A \to B$, a subobject $C \rightarrowtail A$ and the subobject $Y=\forall_f(C) \rightarrowtail B$, we need to show that $ev(Y)=\forall_{ev(f)} (ev(C))$ as a subobject of $ev(B)$. By the definition of $\forall$ in the Heyting category $\ensuremath{\mathbf{Set}}^{\mathcal{C}oh(\mathcal{C})}$, this reduces to proving the following equivalence, for every $\mathbf{y} \in ev(B)(M)=M(B)$: $$\mathbf{y} \in ev(Y)(M) \iff \text{for every model } N \text{ and every model homomorphism } \phi: M \to N,$$ $$(ev(f)_N)^{-1}(\phi_B(\mathbf{y})) \subseteq ev(C)(N)$$ that is: $$\mathbf{y} \in M(Y) \iff \text{for every model } N \text{ and every model homomorphism } \phi: M \to N,$$ $$(N(f))^{-1}(\phi_B(\mathbf{y})) \subseteq N(C)$$ The implication $\implies$ can be proven as follows: if $\mathbf{y} \in M(Y)$, then $\phi_B(\mathbf{y}) \in N(Y)$, and so, since $N$ is $\kappa$-coherent, $\phi_B(\mathbf{y}) = N(f)(\mathbf{x})$ gives $\mathbf{x} \in N(f)^{-1} (N(\forall_f(C)))=N(f^{-1} \forall_f(C)) \subseteq N(C)$. Let us focus on the other implication. Consider the following diagram in $\mathcal{C}$: \begin{displaymath} \xymatrix{ C=[\mathbf{x}, \theta] \ar@{ >->}[dd] & & \forall_f(C)=[\mathbf{y}, \gamma] \ar@{ >->}[dd]\\ & & \\ A=[\mathbf{x}, \phi] \ar@{->}[rr]_{f=[\mathbf{x}\mathbf{y}, \lambda]} & & B=[\mathbf{y}, \psi] \\ } \end{displaymath} Applying the functor $ev$ and evaluating at a model $M$ gives the diagram: \begin{displaymath} \xymatrix{ \{\mathbf{d} | M \vDash \theta(\mathbf{d})\} \ar@{ >->}[dd] & & \{\mathbf{c} | M \vDash \gamma(\mathbf{c})\} \ar@{ >->}[dd]\\ & & \\ \{\mathbf{d} | M \vDash \phi(\mathbf{d})\} \ar@{->}[rr]_{\{\mathbf{d}, \mathbf{c} | M \vDash \lambda(\mathbf{d}, \mathbf{c})\}} & & \{\mathbf{c} | M \vDash \psi(\mathbf{c})\} \\ } \end{displaymath} Given $\mathbf{c} \in \forall_{ev(f)} (ev(C))$, we need to prove that $M \vDash \gamma(\mathbf{c})$.
Consider the positive diagram of $M$, $Diag_+(M)$, which, in a language extended with constants $c$ for every element $c$ of the underlying set of $M$, consists of all sequents of the form $\top \vdash \psi(c_0, ..., c_{\alpha}, ...)$ for every positive atomic $\psi$ such that $M \vDash \psi(c_0, ..., c_{\alpha}, ...)$ (we identify the constant symbols with the elements of $M$ to simplify the exposition, and write $Th(M)$ for the theory $\ensuremath{\mathbb{T}}^m$ together with $Diag_+(M)$). If $N'$ is a model of $Th(M)$ of size at most $\kappa$ (resp. of arbitrary size), then, defining $N$ as the reduct of $N'$ with respect to the elements $\{c^{N'}: c \in M\}$ we can define $\phi: M \to N$ by $\phi(c)=c^{N'}$, which is a well-defined model homomorphism. But we know that for all $\phi: M \to N$ one has $N(f)^{-1}(\phi_B(\mathbf{c})) \subseteq N(C)$. This implies that for all models $N'$ of $Th(M)$ of size at most $\kappa$ (resp. of arbitrary size), the sequent $\lambda(\mathbf{x}, \mathbf{c}/\mathbf{y}) \vdash_{\mathbf{x}} \theta(\mathbf{x})$ holds in $N'$, and therefore, the sequent $\psi(\mathbf{c}) \wedge \lambda(\mathbf{x}, \mathbf{c}/\mathbf{y}) \vdash_{\mathbf{x}} \theta(\mathbf{x})$ also holds. By Proposition \ref{scohcomp}, this means that such a sequent is provable in $Th(M)$. Besides sequents in $\ensuremath{\mathbb{T}}^m$, this proof uses fewer than $\kappa$ sequents of the general form $\top \vdash \phi_i(\mathbf{c}, \mathbf{c_0}, ..., \mathbf{c_{\alpha}}, ...)$, where the $\phi_i$ are positive atomic sentences corresponding to the diagram of $M$ and the $\mathbf{c_i}$ are elements of $M$.
Considering the conjunction $\xi$ of the $\phi_i$, we see that there is a proof in $\ensuremath{\mathbb{T}}^m$ from: $$\top \vdash \xi(\mathbf{c}, \mathbf{c_0}, ..., \mathbf{c_{\alpha}}, ...)$$ \\ to $$\psi(\mathbf{c}) \wedge \lambda(\mathbf{x}, \mathbf{c}/\mathbf{y}) \vdash_{\mathbf{x}} \theta(\mathbf{x})$$ \\ By the deduction theorem (Lemma \ref{dt}), since $\xi(\mathbf{c}, \mathbf{c_0}, ..., \mathbf{c_{\alpha}}, ...)$ is a sentence, we obtain in $\ensuremath{\mathbb{T}}^m$ a derivation of: $$\xi(\mathbf{c}, \mathbf{c_0}, ..., \mathbf{c_{\alpha}}, ...) \wedge \psi(\mathbf{c}) \wedge \lambda(\mathbf{x}, \mathbf{c}/\mathbf{y}) \vdash_{\mathbf{x}} \theta(\mathbf{x})$$ \\ But it is always possible to replace the constants by variables as long as they are added to the contexts of the sequents, so using the existential rule, we have also a derivation of: $$\exists \mathbf{x_0} ... \mathbf{x_{\alpha}} ... \xi(\mathbf{y}, \mathbf{x_0}, ..., \mathbf{x_{\alpha}}, ...) \wedge \psi(\mathbf{y}) \wedge \lambda(\mathbf{x}, \mathbf{y}) \vdash_{\mathbf{x} \mathbf{y}} \theta(\mathbf{x})$$ \\ Calling $Y'=[\mathbf{y}, \Phi(\mathbf{y})]$ the subobject of $B$ given by the interpretation in $\mathcal{C}$ of the formula: $$\exists \mathbf{x_0} ... \mathbf{x_{\alpha}} ... \xi(\mathbf{y}, \mathbf{x_0}, ..., \mathbf{x_{\alpha}}, ...) \wedge \psi(\mathbf{y})$$ \\ we have a proof of the sequent: $$\Phi(\mathbf{y}) \wedge \lambda(\mathbf{x}, \mathbf{y}) \vdash_{\mathbf{x} \mathbf{y}} \theta(\mathbf{x})$$ \\ and hence also of the sequent: $$\exists \mathbf{y} (\Phi(\mathbf{y}) \wedge \lambda(\mathbf{x}, \mathbf{y})) \vdash_{\mathbf{x}} \theta(\mathbf{x})$$ \\ Now the antecedent is precisely the pullback of the subobject $\Phi(\mathbf{y})$ of $B$ along $f$, so by adjunction we have $Y' \leq \forall_f(C)=[\mathbf{y}, \gamma]$, i.e., the sequent $\Phi(\mathbf{y}) \vdash_{\mathbf{y}} \gamma(\mathbf{y})$ is provable. 
Therefore, since $M \vDash \Phi(\mathbf{c})$, it follows that $M \vDash \gamma(\mathbf{c})$, as we wanted to prove. \end{proof} There is also the following converse of Theorem \ref{bt}: \begin{proposition}\label{converse} The completeness theorem of $\kappa$-first order/$\kappa$-coherent theories of cardinality $\kappa$ (resp. of arbitrary cardinality) with respect to Kripke models/Tarski models implies that $\kappa$ is weakly (resp. strongly) compact. \end{proposition} \begin{proof} We prove first that completeness of $\kappa$-coherent theories of cardinality $\kappa$ is entailed by Kripke completeness of $\kappa$-first-order theories of cardinality $\kappa$. Given a $\kappa$-coherent theory, suppose a coherent sequent is valid in all $\kappa$-coherent models in \ensuremath{\mathbf{Set}}. Then it is necessarily forced at every node of every Kripke model, and therefore provable from the axioms of the theory in $\kappa$-first-order logic. Because $\kappa$-first-order logic is conservative over $\kappa$-coherent logic (as there is, by Lemma \ref{shemb}, a conservative embedding of the syntactic category of the latter into a sheaf topos verifying the former), it has to be provable already in $\kappa$-coherent logic. To prove that the completeness of $\kappa$-coherent theories of cardinality $\kappa$ implies weak compactness, given a tree of height $\kappa$ and levels of size less than $\kappa$, consider the theory of a cofinal branch, over a language containing a unary relation symbol $P$ and one constant $a$ for every node in the tree and axiomatized as follows: $$\top \vdash \bigvee_{a \in L_{\alpha}}P(a)$$ \\ for each $\alpha<\kappa$, where $L_{\alpha}$ is the level of height $\alpha$; $$P(a) \wedge P(b) \vdash \bot$$ \\ for each pair $a \neq b \in L_{\alpha}$ and each $\alpha<\kappa$; $$P(a) \vdash P(b)$$ \\ for each pair $a, b$ such that $a$ is a successor of $b$. 
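To spell out the correspondence between models of this theory and branches (a routine verification): if $M$ is any Tarski model of the theory, then the set $$B = \{\, a : M \vDash P(a) \,\}$$ \\ meets each level $L_{\alpha}$ in exactly one node by the first two axiom schemes, and is closed under taking predecessors by the third; hence $B$ is a cofinal branch of the tree. 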
Then the theory is certainly consistent within $\mathcal{L}_{\kappa, \kappa}$, as every subtheory of cardinality less than $\kappa$ has a Tarski model, so by completeness it follows that the whole theory has a Tarski model, corresponding to a cofinal branch. Finally, in case we have Kripke completeness of $\kappa$-first-order theories of arbitrary cardinality, we can deduce, as before, the completeness of $\kappa$-coherent theories of arbitrary cardinality. To show that the latter implies strong compactness, consider a $\kappa$-complete filter $\mathcal{F}$ in the lattice $\mathcal{L}$ of subsets of a set. In a language containing a constant $a$ for every $a \in \mathcal{L}/\mathcal{F}$ and a unary relation symbol $P$, consider the following theory of a $\kappa$-complete ultrafilter: \begin{enumerate} \item $P(a) \vdash P(b)$ for every pair $a \leq b$ in $\mathcal{L}$ \item $\bigwedge_{i<\gamma}P(a_i) \vdash P(\bigwedge_{i<\gamma}a_i)$ for all families $\{a_i\}_{i<\gamma}$ such that $\gamma<\kappa$ \item $\top \vdash P(a) \vee P(\neg a)$ for every $a \in \mathcal{L}$ \end{enumerate} Since the theory is consistent (as it has a model in $\mathcal{L}/\mathcal{F}$ itself), by $\kappa$-coherent completeness it has a Tarski model, which provides a $\kappa$-complete ultrafilter in $\mathcal{L}/\mathcal{F}$ whose preimage along the quotient map yields a $\kappa$-complete ultrafilter in $\mathcal{L}$ extending $\mathcal{F}$. Therefore, $\kappa$ is strongly compact. \end{proof} \subsection{Heyting cardinals} It is known, as we will prove in the next section, that for inaccessible $\kappa$, $\kappa$-first-order classical logic is complete for theories of cardinality less than $\kappa$. This was first observed by Karp in \cite{karp}, and one naturally wonders if the analogous situation holds in the intuitionistic case. 
This motivates the following: \begin{defs} We say that $\kappa$ is a \emph{Heyting} cardinal if $\kappa$ is inaccessible and $\kappa$-first-order logic is complete for theories of cardinality less than $\kappa$. \end{defs} By Theorem \ref{bt} we know that, in terms of strength, a Heyting cardinal lies between inaccessible and weakly compact cardinals. Its exact large cardinal strength and relationship with other classical large cardinal properties are, however, currently unknown, although we will outline some evidence that relates it to an instance of weak compactness. As applications of completeness, we obtain the disjunction and existence properties: \begin{proposition}\label{dp} If $\kappa$ is a Heyting cardinal, $\kappa$-first-order intuitionistic logic $\mathcal{L}_{\kappa, \kappa}$ has the infinitary disjunction property. That is, if $\top \vdash \bigvee_{i \in I}\phi_i$ is provable in the empty theory, then, for some $i \in I$, $\top \vdash \phi_i$ is already provable. \end{proposition} \begin{proof} This is a straightforward generalization of the usual semantic proof in the finitary case, based on the completeness with respect to Kripke models over trees. If no sequent $\top \vdash \phi_i$ were provable, there would be a countermodel for each. Then we can build a new Kripke tree appending to these countermodels a bottom node whose underlying set consists just of the constants of the language, with the obvious injections into the roots of the countermodels (and forcing no atoms). Such a Kripke tree would then be a countermodel for $\top \vdash \bigvee_{i \in I}\phi_i$. \end{proof} \begin{proposition} If $\kappa$ is a Heyting cardinal, $\kappa$-first-order intuitionistic logic $\mathcal{L}_{\kappa, \kappa}$ over a language without function symbols and with at least one constant symbol has the infinitary existence property. 
That is, if $\top \vdash \exists \mathbf{x} \phi(\mathbf{x})$ is provable in the empty theory, then, for some constants $\mathbf{c}$, $\top \vdash \phi(\mathbf{c})$ is already provable. \end{proposition} \begin{proof} The proof of this is similar to the one given for the disjunction property. If no sequent $\top \vdash \phi(\mathbf{c})$ were provable, there would be a countermodel for each choice of $\mathbf{c}$. Then we can build a new Kripke tree appending to these countermodels a bottom node forcing no atoms, whose underlying domain contains just the constants of the language, again with the obvious injections into the roots of the countermodels. Such a Kripke tree would then be a countermodel for $\top \vdash \exists \mathbf{x} \phi(\mathbf{x})$. \end{proof} It is possible that the disjunction and existence properties themselves represent a large cardinal notion different from mere inaccessibility. To see this, note first that the methods of \cite{jongh} allow one to prove that $\kappa$-propositional logic (including the distributivity rule) with at least two atoms has $\kappa$ many mutually non-equivalent formulas\footnote{This is a remarkable property of infinitary intuitionistic logic. In classical infinitary propositional logic $\mathcal{L}_{\kappa}$, for example, the distributivity property implies, by a theorem of Tarski (see \cite{tarski}), that when there are less than $\kappa$-many atoms, the logic has fewer than $\kappa$ many mutually non-equivalent formulas.}, and so the same holds in general for $\kappa$-first-order logic. This implies that the $\kappa$-coherent Morleyization of $\kappa$-first-order logic contains $\kappa$ many relation symbols which are mutually non-equivalent, and so it has a signature of cardinality $\kappa$ (as well as $\kappa$-many axioms). By Proposition \ref{cohcomp}, this theory is $\kappa$-satisfiable, i.e., any subset of fewer than $\kappa$ axioms has a model. 
But the following proposition will show that the whole theory has a model, without using an instance of weak compactness: \begin{proposition} If $\kappa$-first-order logic has the disjunction and the existence properties, its $\kappa$-coherent Morleyization has a \ensuremath{\mathbf{Set}}-valued model. \end{proposition} \begin{proof} Consider the syntactic category $\mathcal{C}$ of the theory $\ensuremath{\mathbb{T}}$. We will show that the representable functor $y(1)=[1, -]: \mathcal{C} \to \ensuremath{\mathbf{Set}}$ is a $\kappa$-coherent functor, thus providing the model of the Morleyized theory $\ensuremath{\mathbb{T}}^m$. Since $y(1)$ preserves all limits, it will preserve in particular $\kappa$-limits, so it is enough to show that it preserves $\kappa$-unions and covers. That $\bigvee_{i<\gamma} y(1)(A_i) \leq y(1)(\bigvee_{i<\gamma}A_i)$ follows since $y(1)$ preserves limits and hence monomorphisms. To see the other inequality, let $f: 1 \to \bigvee_{i<\gamma}A_i$. Then we have $1=\bigvee_{i<\gamma}f^{-1}(A_i)$. By the disjunction property, $1=f^{-1}(A_j)$ for some $j<\gamma$, and hence $f$ factors through $A_j \rightarrowtail \bigvee_{i<\gamma}A_i$. This shows that $y(1)(\bigvee_{i<\gamma}A_i) \leq \bigvee_{i<\gamma} y(1)(A_i)$, as we wanted. Finally, suppose we have a cover $g: A \twoheadrightarrow B$; we must show that any $h: 1 \to B$ factors through $A$. For this, in turn, it is enough to take the pullback $h^{-1}(g): h^{-1}(A) \twoheadrightarrow 1$ and find a section $s$ of $h^{-1}(g)$; then we can take the required factorization to be the composite $g^{-1}(h) \circ s: 1 \to A$. Let $h^{-1}(A)=[\mathbf{x}, \phi(\mathbf{x})]$. Since $h^{-1}(g)$ is a cover, we have $\top \vdash \exists \mathbf{x} \phi(\mathbf{x})$. By the existence property, there are some $\mathbf{c}$ with $[\![\mathbf{c}]\!]: 1 \to [\mathbf{x}, \top]$ such that $\top \vdash \phi(\mathbf{c})$. 
But $\phi(\mathbf{c})$ is the pullback of $[\mathbf{x}, \phi(\mathbf{x})] \rightarrowtail [\mathbf{x}, \top]$ along $[\![\mathbf{c}]\!]$, which provides the required section since $1=\phi(\mathbf{c})$. This finishes the proof. \end{proof} To summarize, if the disjunction and existence properties hold for $\kappa$ (for example if $\kappa$ is a Heyting cardinal), there is a $\kappa$-satisfiable $\mathcal{L}_{\kappa, \kappa}$-theory over a signature of cardinality $\kappa$, and with $\kappa$-many axioms, which has a model. Since there are $\kappa$ many mutually non-equivalent relation symbols, it follows that models for subtheories of less than $\kappa$ many axioms do not trivially extend to models of the whole theory. The existence of a model for the whole theory could thus require an instance of weak compactness in an essential way. \subsection{Karp's theorem and completeness of classical infinitary systems} In the classical case, when the syntactic category is Boolean, we can prove that Proposition \ref{cohcomp} reduces to the completeness theorem of Karp in \cite{karp}, since we know that the transfinite transitivity property can be rewritten into the classical distributivity and dependent choice axiom schemata. This reduction will be possible due to classical Morleyization, which is the infinitary version of the process explained in \cite{johnstone}, D 1.5: \begin{thm}\label{karp} (Karp) If $\kappa$ is inaccessible, theories of cardinality less than $\kappa$ within the classical first-order system of Karp are complete with respect to \ensuremath{\mathbf{Set}}-valued models. \end{thm} \begin{proof} Suppose the sequent $\phi \vdash_{\alg{x}} \psi$ is valid in all models in \ensuremath{\mathbf{Set}}. Let $S$ be the set of subformulas of some axiom of $\ensuremath{\mathbb{T}}$ or of $\phi$ or $\psi$. 
We consider now the classical Morleyized theory $\ensuremath{\mathbb{T}}^m$, over a signature $\Sigma^m$ that extends the original signature $\Sigma$ by adding for each $\kappa$-first-order formula $\phi \in S$ over $\Sigma$ with free variables \alg{x} two new relation symbols $C_{\phi}(\alg{x})$ and $D_{\phi}(\alg{x})$, and whose axioms are: \begin{enumerate}[(i)] \item $C_{\phi}(\alg{x}) \wedge D_{\phi}(\alg{x}) \vdash_{\alg{x}} \bot$; \item $\top \vdash_{\alg{x}} C_{\phi}(\alg{x}) \vee D_{\phi}(\alg{x})$; \item $C_{\phi}\dashv\vdash_{\alg{x}}\phi$ for every atomic formula $\phi$; \item $C_{\phi}\vdash_{\alg{x}}C_{\psi}$ for every axiom $\phi\vdash_{\alg{x}}{\psi}$ of \ensuremath{\mathbb{T}}; \item $C_{\bigwedge_{i<\gamma}\phi_i}\dashv\vdash_{\alg{x}}\bigwedge_{i<\gamma}C_{\phi_i}$; \item $C_{\bigvee_{i<\gamma}\phi_i}\dashv\vdash_{\alg{x}}\bigvee_{i<\gamma}C_{\phi_i}$; \item $C_{\phi \rightarrow \psi}\dashv\vdash_{\alg{x}}D_{\phi} \vee C_{\psi}$ \item $C_{\exists{\mathbf{y}}\phi}\dashv\vdash_{\alg{x}}\exists{\mathbf{y}}C_{\phi}$; \item $D_{\forall{\mathbf{y}}\phi}\dashv\vdash_{\alg{x}}\exists{\mathbf{y}}D_{\phi}$. \end{enumerate} These axioms are all $\kappa$-coherent and they ensure that the interpretations of $[\alg{x}, C_{\phi}(\alg{x})]$ and $[\alg{x}, D_{\phi}(\alg{x})]$ in any Boolean category (including \ensuremath{\mathbf{Set}}) will coincide with those of $[\alg{x}, \phi(\alg{x})]$ and $[\alg{x}, \neg \phi(\alg{x})]$, respectively, and that, moreover, $\ensuremath{\mathbb{T}}^m$-models coincide with $\ensuremath{\mathbb{T}}$-models in such categories. Also, since $S$ has cardinality less than $\kappa$, so does $\ensuremath{\mathbb{T}}^m$. 
Now, if a sequent $\phi \vdash_{\alg{x}} \psi$ is valid in all $\ensuremath{\mathbb{T}}$-models in \ensuremath{\mathbf{Set}}, then $C_{\phi}\vdash_{\alg{x}}C_{\psi}$ will be valid in every $\ensuremath{\mathbb{T}}^m$-model in \ensuremath{\mathbf{Set}}, and therefore will be provable in $\ensuremath{\mathbb{T}}^m$ by Proposition \ref{cohcomp}. Replace in this proof every subformula of the form $C_{\phi}(t_1, ..., t_{\alpha}, ...)$ by the corresponding substitution instance of $\phi(\alg{t}/\alg{x})$, and every subformula of the form $D_{\phi}(t_1, ..., t_{\alpha}, ...)$ by the corresponding substitution instance of $\neg \phi(\alg{t}/\alg{x})$. We claim that this way we will get a proof in $\ensuremath{\mathbb{T}}$ of the sequent $\phi \vdash_{\alg{x}} \psi$ using the rules of $\kappa$-first-order systems. Indeed, the effect of the transformation just described on the axioms of $\ensuremath{\mathbb{T}}^m$ produces either axioms of $\ensuremath{\mathbb{T}}$ or sequents classically provable from $\ensuremath{\mathbb{T}}$; finally, any instance of the transfinite transitivity rule used is, by Theorem \ref{equiv}, classically provable from the instances of classical distributivity and classical dependent choice. \end{proof} If one sticks to the transfinite transitivity rule, it is possible to sharpen these methods to prove completeness of a variety of classical systems. We will consider a variant of the instances of transfinite transitivity rule in which the conclusion is slightly modified. 
We call $TT_\gamma^B$ the rule: \begin{mathpar} \inferrule{\phi_{f} \vdash_{\mathbf{y}_{f}} \bigvee_{g \in \gamma^{\beta+1}, g|_{\beta}=f} \exists \mathbf{x}_{g} \phi_{g} \\ \beta<\gamma, f \in \gamma^{\beta} \\\\ \phi_{f} \dashv \vdash_{\mathbf{y}_{f}} \bigwedge_{\alpha<\beta}\phi_{f|_{\alpha}} \\ \beta < \gamma, \text{ limit }\beta, f \in \gamma^{\beta}}{\phi_{\emptyset} \vdash_{\mathbf{y}_{\emptyset}} \bigvee_{f \in B} \exists_{\beta<\delta_f}\mathbf{x}_{f|_{\beta +1}} \bigwedge_{\beta<\delta_f}\phi_{f|_\beta}} \end{mathpar} \\ subject to the same provisos as the original rule, but where now $B \subseteq \gamma^{\gamma}$ consists of the minimal elements of a given bar over the tree $\gamma^{\gamma}$, and the $\delta_f$ are the levels of the corresponding $f \in B$. The proof of soundness of this rule in $\ensuremath{\mathbf{Set}}$ is an easy modification of that of the original rule. If $TT_{\gamma}=\cup_{B}TT_{\gamma}^B$ and we denote by $\mathcal{L}_{\gamma^+, \gamma}(TT_{\gamma})$ Karp's classical system in which distributivity and dependent choice are replaced by the rules $TT_{\gamma}$, we have now: \begin{thm}\label{karp2} If $\gamma$ is regular and $\gamma^\alpha=\gamma$ for every $\alpha<\gamma$, theories of cardinality at most $\gamma$ in the classical system $\mathcal{L}_{\gamma^+, \gamma}(TT_{\gamma})$ are complete for \ensuremath{\mathbf{Set}}-valued models. \end{thm} \begin{proof} The proof follows the same lines as the proof for \ref{karp}, except that instead of relying on Proposition \ref{cohcomp}, we prove an analogous completeness theorem for theories with at most $\gamma$ axioms in the $(\gamma^+, \gamma, \gamma)$-coherent fragment of $\mathcal{L}_{\gamma^+, \gamma, \gamma}$. 
The system for this fragment is like that of the infinitary coherent fragment, with two differences: first, only disjunctions can be indexed by sets of cardinality $\gamma$, while the indexing sets of conjunctions and existential quantifications are always of cardinality less than $\gamma$ (this means that in the inductive definition of formulas of the type $\bigwedge_{i<\alpha} \phi_i$ and $\bigvee_{i<\alpha} \phi_i$ we need to make sure that $|\cup_{i<\alpha}FV(\phi_i)|<\gamma$, and in particular, the contexts of the formulas in the syntactic category always have length less than $\gamma$); second, the transfinite transitivity rule is replaced by the rule $TT_{\gamma}$, which is expressible in the fragment due to the hypothesis on $\gamma$. There is an obvious embedding from the syntactic category of this fragment into a sheaf topos, and one can show much as in the proof of Lemma \ref{shemb} that the embedding preserves the relevant structure. We prove this completeness theorem much as in the proof of Proposition \ref{cohcomp}, but now the building covering families over each object $A$, used to construct the tree, witness whether $A$ forces each of the $\gamma$-many subformulas of the axioms or of the valid sequent $\phi \vdash_x \psi$ (we call this set of subformulas $S$); that is, if the subformula $\eta$ is a (nonempty) disjunction $\bigvee_{i<\gamma}\phi_i(\boldsymbol{\beta})$ (resp. an existential formula $\exists_{\alpha<\gamma}\mathbf{x}_{\alpha}\psi(\mathbf{x}_0, ...,\mathbf{x}_{\alpha}, ..., \boldsymbol{\beta})$), and $A \Vdash \eta(\boldsymbol{\beta})$, we include in the set of coverings one of the form $l_j: C_j \to A$, where for each $j$ we have $C_j \Vdash \phi_{i_j}(\boldsymbol{\beta} l_j)$ for some $i_j<\gamma$ (resp. $C_j \Vdash \psi(\boldsymbol{\beta_0^j}, ..., \boldsymbol{\beta_{\alpha}^j}, ..., \boldsymbol{\beta} l_j)$ for some $\boldsymbol{\beta_0^j}, ..., \boldsymbol{\beta_{\alpha}^j}, ...$). 
In case $\eta$ is $\bot$, or $\eta$ is a conjunctive subformula, or $A \nVdash \eta(\boldsymbol{\beta})$, we just consider the identity arrow as a cover. Thus although the tree has branching type $\gamma^+$, its height is $\gamma$. It is enough to prove that every object in the sheaf model forcing the antecedent $\phi(\boldsymbol{\alpha})$ of the valid sequent $\phi \vdash_x \psi$ also forces the consequent $\psi(\boldsymbol{\alpha})$ for every tuple $\boldsymbol{\alpha}$ in the domain. We can thus consider a partial Beth model over the tree of height $\gamma$ so defined, taking as the root of the tree an object forcing $\phi(\boldsymbol{\alpha})$, and the directed colimit $\mathbf{D_b}$ of all the underlying structures in the nodes of a cofinal branch $b$ of the tree. We then make it into a structure with the expected definition and prove that it is a model of the theory. For this, we prove that given any $\kappa$-coherent formula $\phi(x_0, ..., x_{\lambda}, ...) \in S$, we have $\mathbf{D_b} \vDash \phi(\overline{\alpha_0}, ..., \overline{\alpha_{\lambda}}, ...)$ if and only if for some node $n$ in $b$, the underlying structure $C_n$ satisfies $C_n \Vdash \phi(\alpha_0, ..., \alpha_{\lambda}, ...)$ for some representatives $\alpha_i$ of $\overline{\alpha_i}$ (the regularity of $\gamma$ is used in the definition of the structure in $\mathbf{D_b}$ and in the inductive step for conjunctions). Now, since any $(\gamma^+, \gamma, \gamma)$-coherent formula satisfied in the models given by the directed colimits of the underlying structures of the nodes along all cofinal branches is forced at some node of every branch, an application of the categorical property corresponding to the rule $TT_{\gamma}$ proves that $\psi(\boldsymbol{\alpha})$ is forced at the roots, and the completeness of the $(\gamma^+, \gamma, \gamma)$-coherent fragment follows. 
Finally, for the classical Morleyization, we add to the axioms of $\ensuremath{\mathbb{T}}^m$ also the axioms: $$D_{\bigwedge_{i<{\gamma^+}}\phi_i}\dashv\vdash_{\alg{x}}\bigvee_{i<{\gamma^+}}D_{\phi_i},$$ \\ which, taking advantage of the classical relation between conjunctions and disjunctions, are able to code conjunctions with indexing set of size $\gamma$ into the $(\gamma^+, \gamma, \gamma)$-coherent fragment considered. Then we proceed as in Theorem \ref{karp} to finish the proof. \end{proof} \begin{rmk} Theorem \ref{karp2} applies, for example, when $\gamma$ is inaccessible, or for any regular $\gamma$ as long as we assume the Generalized Continuum Hypothesis. It is also best possible in terms of the cardinality of theories for which one is able to derive completeness, since it is known from Jensen (see \cite{jensen}) that $V=L$ implies the existence of $\gamma^+$-Aronszajn trees, for which the theory of a cofinal branch (\emph{cf.} proof of Proposition \ref{converse}) is consistent but has no model. \end{rmk} \begin{cor}\label{karp3} (Karp) Countable theories in the classical system $\mathcal{L}_{\omega_1, \omega}$ are complete with respect to \ensuremath{\mathbf{Set}}-valued models. \end{cor} \begin{proof} It is enough to apply Theorem \ref{karp2} and notice that $TT_{\omega}$ is actually provable in $\mathcal{L}_{\omega_1, \omega}$. To see this, we construct the proof tree as follows. Let $$C=\bigvee_{f \in B} \exists_{\beta<\delta_f}\mathbf{x}_{f|_{\beta +1}} \bigwedge_{\beta<\delta_f}\phi_{f|_\beta}.$$ \\ To the premise: $$\phi_{\emptyset} \vdash_{\mathbf{y}_{\emptyset}} \bigvee_{g \in \gamma} \exists \mathbf{x}_{g} \phi_{g}$$ \\ we append the sequent: $$\bigvee_{g \in \gamma} \exists \mathbf{x}_{g} \phi_{g} \vdash_{\mathbf{y}_{\emptyset}} C$$ \\ to prepare an application of the cut rule and derive the conclusion we want. 
This latter sequent is in turn the conclusion of an application of the disjunction elimination rule from the set of sequents $\exists \mathbf{x}_{g} \phi_{g} \vdash_{\mathbf{y}_{\emptyset}} C$ for each $g \in \gamma$. To prove each of these, we use the following instance of the cut rule: \begin{mathpar} \inferrule{\exists \mathbf{x}_{g} \phi_{g} \vdash_{\mathbf{y}_{\emptyset}} \bigvee_{h \in \gamma^{2}, h|_{1}=g} \exists \mathbf{x}_{g} \mathbf{x}_{h} (\phi_{g} \wedge \phi_{h}) \\ \bigvee_{h \in \gamma^{2}, h|_{1}=g} \exists \mathbf{x}_{g} \mathbf{x}_{h} (\phi_{g} \wedge \phi_{h}) \vdash_{\mathbf{y}_{\emptyset}} C}{\exists \mathbf{x}_{g} \phi_{g} \vdash_{\mathbf{y}_{\emptyset}} C} \end{mathpar} \\ and note that the first sequent on the left is derivable from a premise, and involves nodes of level $2$. This procedure is then repeated for the second sequent on the right, with further applications of the disjunction elimination rule and the cut rule, and we continue the tree proceeding upwards on every sequent containing $C$, until we reach all nodes in $B$. Since each node has finite height, the tree is well founded (i.e., it has no infinite ascending chains), and since each leaf is either a sequent provable from a premise or an instance of the disjunction introduction axiom, the tree provides the proof we wanted. \end{proof} \subsection{Makkai's theorem} Consider the $\kappa$-regular fragment; the syntactic category of a theory over this fragment is a $\kappa$-regular category. If we consider the topos of sheaves over this category with the regular coverage given by single epimorphisms, the coverage is subcanonical and the topos is a conservative sheaf model for the theory, as can be proved analogously to Lemma \ref{shemb}. We have now: \begin{proposition} Let $\kappa$ be a regular cardinal. Then any $\kappa$-regular theory of cardinality at most $\kappa$ has a linear partial Beth model of height $\kappa$. 
\end{proposition} \begin{proof} The proof follows the same pattern as the proof of Theorem \ref{shcomp}, but is simplified in this case because the building covering families over each object $A$, used to construct the tree, consist of one element, thereby guaranteeing that the tree constructed will be linear. More specifically, the cover over an object $A$ witnesses whether $A$ forces each of the less than $\kappa$ many subformulas of the axioms; that is, if the subformula $\eta$ is an existential formula $\exists_{\alpha<\gamma}\mathbf{x}_{\alpha}\psi(\mathbf{x}_0, ...,\mathbf{x}_{\alpha}, ..., \boldsymbol{\beta})$, and $A \Vdash \eta(\boldsymbol{\beta})$, we include in the set of coverings one of the form $l: C \to A$, where we have $C \Vdash \psi(\boldsymbol{\beta_0}, ..., \boldsymbol{\beta_{\alpha}}, ..., \boldsymbol{\beta} l)$ for some $\boldsymbol{\beta_0}, ..., \boldsymbol{\beta_{\alpha}}, ...$. In case $\eta$ is a conjunctive formula or $A \nVdash \eta(\boldsymbol{\beta})$, we just consider the identity arrow as a cover. To guarantee that the tree has height $\kappa$, we add identity arrows to the set of building covering families over each object $A$ until it has cardinality $\kappa$. \end{proof} As a consequence, we immediately get: \begin{proposition}\label{regbcomp} If $\kappa$ is a regular cardinal, $\kappa$-regular theories of cardinality less than $\kappa$ are complete with respect to \ensuremath{\mathbf{Set}}-valued models. \end{proposition} \begin{proof} Again, the proof is similar to that of Proposition \ref{cohcomp}. It is enough to prove that every object in the sheaf model forcing the antecedent $\phi(\boldsymbol{\alpha})$ of a sequent $\phi \vdash_x \psi$ also forces the consequent $\psi(\boldsymbol{\alpha})$ for every tuple $\boldsymbol{\alpha}$ in the domain. 
We can thus consider a partial Beth model over a linear tree as above but taking instead as the root of the tree an object forcing $\phi(\boldsymbol{\alpha})$, and including in the set of subformulas of the axioms also subformulas in the valid sequent $\phi \vdash_x \psi$ (we call this set of subformulas $S$). Consider the directed colimit $\mathbf{D}$ of all the underlying structures in the nodes of the tree; we then make it into a structure with the expected definition and prove that it is a model of the theory. For this, we prove that given any $\kappa$-regular formula $\phi(x_0, ..., x_{\lambda}, ...) \in S$, we have $\mathbf{D} \vDash \phi(\overline{\alpha_0}, ..., \overline{\alpha_{\lambda}}, ...)$ if and only if for some node $n$ in the linear tree, the underlying structure $C_n$ satisfies $C_n \Vdash \phi(\alpha_0, ..., \alpha_{\lambda}, ...)$ for some representatives $\alpha_i$ of $\overline{\alpha_i}$ (the regularity of $\kappa$ is used in the definition of the structure in $\mathbf{D}$ and in the inductive step for conjunctions). Finally, since any $\kappa$-regular formula satisfied in the models given by the directed colimits of the underlying structures of the nodes in the linear trees is forced at their roots (as can be seen by an application of the dependent choice property), the result follows. \end{proof} Proposition \ref{regbcomp} can be improved by removing the restriction on the cardinality of the theories in question, which leads us to a result of \cite{makkai}: \begin{thm}\label{sregbcomp} (Makkai) If $\kappa$ is a regular cardinal, $\kappa$-regular theories are complete with respect to \ensuremath{\mathbf{Set}}-valued models. \end{thm} \begin{proof} Suppose that the sequent $\phi \vdash_{\mathbf{x}} \psi$ is valid in every model of a certain theory but not provable. Then it is not provable in any subtheory of cardinality less than $\kappa$. 
Therefore, if we add to the language new constants $\mathbf{c}$ and axioms $\top \vdash \phi(\mathbf{c})$ and $\psi(\mathbf{c}) \vdash \bot$, any subtheory of cardinality less than $\kappa$ together with these two new axioms has, by Proposition \ref{regbcomp}, a model. Now we can build a model for the whole theory by the usual reduced product construction using each of the models so far available, but where we only use a $\kappa$-complete filter instead of an ultrafilter (which will be enough for our purposes since we do not have disjunction among the connectives). To do so, suppose the cardinality of the theory is $\lambda$, and define $P_{\kappa}(\lambda)=\{x \subseteq \lambda: |x|<\kappa\}$. For each $x \in P_{\kappa}(\lambda)$ let $\mathcal{M}_x$ be a model for the axioms in $x$. Now notice that the set: $$\{\{x \in P_{\kappa}(\lambda): y \subseteq x\}: y \in P_{\kappa}(\lambda)\}$$ \\ generates a $\kappa$-complete filter $\mathcal{F}$ on $P_{\kappa}(\lambda)$ by the regularity of $\kappa$. Then we can define a model for the whole theory as the reduced product $\Pi_{x \in P_{\kappa}(\lambda)}\mathcal{M}_x / \mathcal{F}$. The usual proof of \L{}o\'s theorem can be adapted for the $\kappa$-regular case with the filter $\mathcal{F}$ (the use of an ultrafilter occurs only in the inductive step showing that disjunctive subformulas satisfy the theorem, and so it is not needed here). This provides a model for the original theory where $\phi \vdash_{\mathbf{x}} \psi$ is not valid. \end{proof} \begin{rmk} In the particular case when $\kappa=\omega$, Theorem \ref{sregbcomp} reduces to the completeness theorem for regular logic. As it happens with coherent logic, the proof given is thus an alternative path to the usual categorical treatment of Henkinization from \cite{johnstone}. \end{rmk} \subsection{Fourman-Grayson theorem} In \cite{fourman}, the completeness of countably axiomatized geometric propositional theories was proven. 
Here we check that the general case, which also includes existential quantification, can be deduced with the same techniques employed so far in the completeness theorems, so at this point the proof only needs to be outlined. We have: \begin{thm}\label{fg} Countable geometric theories are complete with respect to \ensuremath{\mathbf{Set}}-valued models. \end{thm} \begin{proof} The proof follows the same lines as the proof of Proposition \ref{cohcomp}, except that building covering families over each object $A$, used to construct the tree, witness whether $A$ forces each of the (countably many) antecedents and consequents of the axioms or of the valid sequent $\phi \vdash_{\mathbf{x}} \psi$; that is, if an antecedent or consequent $\eta$ is a (nonempty) disjunction of the form $\bigvee_{i<\gamma}\exists \mathbf{x}_{0}...\mathbf{x}_{n_i} \psi_i(\mathbf{x}_0, ...,\mathbf{x}_{n_i}, \boldsymbol{\beta})$, and $A \Vdash \eta(\boldsymbol{\beta})$, we include in the set of coverings one of the form $l_j: C_j \to A$, where for each $j$ we have $C_j \Vdash \psi_{i_j}(\boldsymbol{\beta_0^j}, ..., \boldsymbol{\beta_{n_{i_j}}^j}, \boldsymbol{\beta} l_j)$ for some $i_j$ and some $\boldsymbol{\beta_0^j}, ..., \boldsymbol{\beta_{n_{i_j}}^j}$. In case $\eta$ is $\bot$, or $\eta$ is a conjunctive subformula, or $A \nVdash \eta(\boldsymbol{\beta})$, we just consider the identity arrow as a cover. This guarantees that the tree that we will use to build the partial Beth model will have countable height (although its branching type could be quite high), and the structures built as the filtered colimits of underlying structures in a cofinal branch will be (possibly exploding) positive geometric models of the theory. 
Finally, since any geometric formula satisfied in the models given by the directed colimits of the underlying structures of the nodes along all cofinal branches is forced at some node of every branch, an application of the categorical property corresponding to the rule $TT_{\omega}$ (itself provable in geometric logic, by the same argument sketched in the proof of Corollary \ref{karp3}) proves that the consequent of the valid sequent is forced at the roots, and the completeness result follows. \end{proof} \begin{rmk} Theorem \ref{fg} is best possible in terms of the cardinality of the theories. Indeed, given an $\omega_1$-Aronszajn tree, the theory of a cofinal branch there (\emph{cf.} proof of Proposition \ref{converse}), is obviously geometric and of cardinality $\omega_1$, but although consistent, it has no model. \end{rmk} \subsection{Future work} Besides finding the exact strength of Heyting cardinals and the disjunction and existence properties within the large cardinal hierarchy, another aspect which was not considered here is the question of whether conceptual completeness results could be established, or to what extent the category of models of a given theory determines the theory, up to $\kappa$-pretopos completion. Conceptual completeness theorems were obtained in \cite{mr}, and Makkai provided an even stronger result in \cite{makkai2}, by proving that the category of models of a theory could be endowed with certain structure whose study would allow one to recover the theory. Awodey and Forssell provided in \cite{af} a different approach by considering a topological structure on the category of models. Neither of these approaches has been generalized to the infinitary first-order case, although Makkai gave in \cite{makkai} an answer for the infinitary regular fragment. 
It is therefore a natural step, first, to try to obtain conceptual completeness theorems for the infinitary first-order case, and second, to identify which type of structure could be given to the category of models so as to be able to recover the theory, possibly using some large cardinal assumptions. \subsection{Acknowledgements} The main ideas of this work were presented in my PhD thesis at Stockholm University, and my period there was funded by the Swedish Research Council. I am indebted to all the people who encouraged and supported me during my studies, in particular to my advisors Erik Palmgren and Henrik Forssell, as well as to Peter Lumsdaine for useful discussions about the subject. \bibliographystyle{amsalpha} \renewcommand{\bibname}{References}
\subsection{Design of ES Attack} \begin{figure*}[!tb] \centering \begin{subfigure}{0.28\textwidth} \centering \includegraphics[width=\linewidth]{figs/steal_0.png} \caption{Initial Steal ($t=0$)} \end{subfigure} \begin{subfigure}{0.28\textwidth} \centering \includegraphics[width=\linewidth]{figs/steal_1.png} \caption{The $1$st Steal ($t=1$)} \end{subfigure} \begin{subfigure}{0.28\textwidth} \centering \includegraphics[width=\linewidth]{figs/steal_2.png} \caption{The $2$nd Steal ($t=2$)} \end{subfigure} \begin{subfigure}{0.28\textwidth} \centering \includegraphics[width=\linewidth]{figs/steal_3.png} \caption{The $3$rd Steal ($t=3$)} \end{subfigure} \begin{subfigure}{0.28\textwidth} \centering \includegraphics[width=\linewidth]{figs/steal_4.png} \caption{The $4$th Steal ($t=4$)} \end{subfigure} \begin{subfigure}{0.28\textwidth} \centering \includegraphics[width=\linewidth]{figs/steal_legend.png} \end{subfigure} \vspace{-0.5em} \caption{\textbf{Progress of Data Synthesis During \textit{ES Attack}\xspace.} We compare the synthetic datasets $\mathcal{D}_{\text{syn}}^{(t)}$ (in red), generated by our proposed attack, with the victim's training dataset $\mathcal{D}_{\text{train}}$ (in blue) and the auxiliary dataset $\mathcal{D}_{\text{aux}}$ (in green). The auxiliary dataset $\mathcal{D}_{\text{aux}}$ may share a similar input space with the victim's training dataset $\mathcal{D}_{\text{train}}$, but a large part of $\mathcal{D}_{\text{train}}$ cannot be covered by the auxiliary dataset. In our attack, we first initialize a randomly generated synthetic dataset $\mathcal{D}_{\text{syn}}^{(0)}$. $\mathcal{D}_{\text{syn}}^{(0)}$ shares some attributes with $\mathcal{D}_{\text{train}}$, though possibly fewer than $\mathcal{D}_{\text{aux}}$ does. During our attacks, the synthetic datasets $\mathcal{D}_{\text{syn}}^{(t)}$ ($t=1, 2, 3$) gain information from our substitute models and explore more of the space of $\mathcal{D}_{\text{train}}$ in each iteration.
The goal of the attacks is to cover the input space of $\mathcal{D}_{\text{train}}$ as much as possible. Note that the adversary trains the substitute model on a synthetic dataset $\mathcal{D}_{\text{syn}}^{(i)}$, ($i=0, 1, \cdots, N$) in each stealing epoch. In sum, the substitute model is trained on all the synthetic datasets. } \label{fig:steal} \vspace{-0.5em} \end{figure*} Model stealing attacks aim to build a model $f_s$ that is functionally equivalent to the victim model $f_v$. Theoretically, if the adversary could train the substitute model on all the samples in the input space of $f_v$, the substitute model would achieve the same performance as the victim model. However, it is infeasible to query all the samples in the input space. Therefore, a critical step for model stealing attacks is to explore the input sub-space that represents the task domain $\mathcal{T}$. Adversaries then mimic the behavior of victim models in this input sub-space. Prior work~\cite{orekondy2019knockoff,correia2018copycat,pal2019framework} leverages public datasets as auxiliary datasets to train the substitute model to approximate the output of the victim model. The auxiliary data share common attributes with $\mathcal{D}_{\text{train}}$, which can be used to train the substitute model. However, these approaches are not practical for several reasons: i) Data with shared attributes is not always available. Confidential data such as medical images and financial records are not publicly available. The scarcity of data is still a critical problem in many domains. ii) The relationship between the auxiliary data available in the public domain and the task domain $\mathcal{T}$ is unclear, which makes selecting a proper auxiliary dataset challenging. The rationale for selecting a specific auxiliary dataset is missing in most of the existing approaches. In the experiments, we show that using a randomly generated dataset, a special case of an auxiliary dataset, fails to steal the victim model.
iii) The quality of the data used for training the substitute model cannot be improved during model stealing. The data samples in the auxiliary dataset are fixed. Therefore, we propose \textit{ES Attack}\xspace to heuristically explore the potential input space related to the task domain $\mathcal{T}$ by learning from the victim model. We outline \textit{ES Attack}\xspace in Algorithm~\ref{alg:attack}. First, \textit{ES Attack}\xspace initializes a random synthetic dataset $\mathcal{D}_{\text{syn}}^{(0)}$, which may share few attributes with $\mathcal{D}_{\text{train}}$, most likely fewer than $\mathcal{D}_{\text{aux}}$. Second, \textit{ES Attack}\xspace trains a substitute model $f_s$ based on the samples from the synthetic dataset and their predictions from the victim model. Then, \textit{ES Attack}\xspace can generate a better synthetic dataset using the improved substitute model; meanwhile, the better synthetic dataset can help improve the substitute model. In this way, \textit{ES Attack}\xspace iteratively synthesizes the datasets and trains the substitute model to improve the quality of the synthetic dataset and the performance of the substitute model. Eventually, the synthetic datasets approximate the private training dataset, and the substitute model approximates, i.e., steals, the victim model. Figure~\ref{fig:steal} illustrates the progress of data synthesis during \textit{ES Attack}\xspace. In Figure~\ref{fig:steal}, we compare the synthetic datasets $\mathcal{D}_{\text{syn}}^{(t)}$ (in red) with the victim's training dataset $\mathcal{D}_{\text{train}}$ (in blue) and the auxiliary dataset $\mathcal{D}_{\text{aux}}$ (in green). $\mathcal{D}_{\text{aux}}$ may share a similar input space with $\mathcal{D}_{\text{train}}$, but in most cases, adversaries may not know the distance between the distribution of $\mathcal{D}_{\text{aux}}$ and the distribution of $\mathcal{D}_{\text{train}}$.
Hence, $\mathcal{D}_{\text{train}}$ may not be fully covered by $\mathcal{D}_{\text{aux}}$. However, after initializing the synthetic dataset $\mathcal{D}_{\text{syn}}^{(0)}$, \textit{ES Attack}\xspace will iteratively improve the synthetic datasets $\mathcal{D}_{\text{syn}}^{(t)} (t=1, 2, 3, 4, \ldots)$, and explore more space in the training dataset $\mathcal{D}_{\text{train}}$. Two key steps in \textit{ES Attack}\xspace, E-Step and S-Step, are described as follows. \par\noindent \textbf{E-Step: } Estimate the parameters of the substitute model on the synthetic dataset using knowledge distillation. The knowledge distillation approach transfers the knowledge from $f_v$ to $f_s$ with minimal performance degradation: \begin{equation} \label{eq:estep} f_s^{(t)} \gets \argmin_{f_s} \mathrm{L}_{\text{KD}}(f_s, f_v;\mathcal{D}_{\text{syn}}^{(t-1)}), \end{equation} where $f_s^{(t)}$ denotes the substitute model at iteration $t$ and $\mathcal{D}_{\text{syn}}^{(t-1)}$ denotes the synthetic dataset at the previous iteration $t-1$. The objective function $\mathrm{L}_{\text{KD}}$ is defined as a knowledge distillation loss that makes $f_s^{(t)}$ approximate the victim model $f_v$: \begin{equation} \mathrm{L}_{\text{KD}}(f_s, f_v; \mathcal{D}_{\text{syn}}) = \frac{1}{|\mathcal{D}_{\text{syn}}|}\sum_{\bm{x} \in \mathcal{D}_{\text{syn}}}\mathrm{L}_{\text{CE}}(f_s(\bm{x}), f_v(\bm{x})), \end{equation} where $\mathrm{L}_{\text{CE}}$ denotes the cross-entropy loss. We train the substitute model by minimizing the objective function (Equation~\ref{eq:estep}) for $M$ epochs using Adam~\cite{kingma2014adam}. \par\noindent \textbf{S-Step: } Synthesize the dataset $\mathcal{D}_{\text{syn}}^{(t)} = \{\bm{x}\}$ consisting of the synthetic input samples.
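As a concrete illustration, the E-Step distillation loss $\mathrm{L}_{\text{KD}}$ can be sketched in plain NumPy; the helper names below are our own, and the paper's actual implementation trains DNNs with Adam rather than evaluating the loss on fixed probability arrays:

```python
import numpy as np

def soft_cross_entropy(p_student, p_teacher, eps=1e-12):
    """Cross-entropy between the victim's soft prediction (teacher)
    and the substitute's prediction (student) for one sample."""
    return -float(np.sum(p_teacher * np.log(p_student + eps)))

def kd_loss(student_preds, victim_preds):
    """L_KD: average cross-entropy over the synthetic dataset.
    Both arguments are (n_samples, K) arrays of probability vectors."""
    losses = [soft_cross_entropy(s, v)
              for s, v in zip(student_preds, victim_preds)]
    return sum(losses) / len(losses)
```

When the substitute's prediction matches the victim's, the loss reduces to the entropy of the victim's output, which is its minimum.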
\begin{algorithm}[!tb] \caption{\textit{ES Attack}\xspace} \label{alg:attack} \textbf{INPUT}: \\ \text{\quad The black-box victim model $f_v$}\\ \text{\quad Number of classes $K$}\\ \text{\quad Number of stealing epochs $N$}\\ \text{\quad Number of training epochs for each stealing epoch $M$}\\ \textbf{OUTPUT}: \\ \text{\quad The substitute model $f_s^{(N)}$} \begin{algorithmic}[1] \State Initialize a synthetic dataset $\mathcal{D}_{\text{syn}}^{(0)}$ by randomly sampling $\bm{x}$ from a Gaussian distribution. \State Construct an initial substitute model $f_s^{(0)}$ by initializing the parameters in the model. \For{$t \gets 1$ to $N$} \State \textbf{E-Step:} \xy{Estimate the parameters in the substitute model $f_s^{(t)}$ using knowledge distillation for $M$ epochs on the synthetic dataset $\mathcal{D}_{\text{syn}}^{(t-1)}$}. \State \textbf{S-Step:} Synthesize a new dataset $\mathcal{D}_{\text{syn}}^{(t)}$ based on the knowledge of the substitute model $f_s^{(t)}$. \EndFor \State \textbf{return} $f_s^{(N)}$. \end{algorithmic} \end{algorithm} \subsection{Two approaches for Data Synthesis (S-Step)} \label{sec:data_syn} \xy{ Data synthesis (S-step in \textit{ES Attack}\xspace) aims to explore the possible data that reveal the data distribution of the victim's training dataset and benefit the substitute model. The most challenging problem in data synthesis is the lack of gradients from the victim model. The adversary can only get the prediction rather than the gradient from the victim model. Accordingly, the data generated by the adversary cannot be tuned by the victim model directly. The existing approaches used in data-free knowledge distillation cannot be applied to model stealing attacks, since they require back-propagating gradients from the victim model. A more detailed discussion of the difference between model stealing and data-free knowledge distillation can be found in Section~\ref{sec:related_knowledge}.
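The overall E/S iteration of Algorithm~\ref{alg:attack} can be summarized in a short skeleton; the callables (`victim`, `train_step`, `synthesize`) are illustrative stand-ins for the black-box API, the E-Step, and the S-Step, not the paper's implementation:

```python
def es_attack(victim, train_step, synthesize, n_stealing_epochs, init_data):
    """Skeleton of the E/S loop: query the black-box victim for labels,
    fit the substitute (E-Step), then synthesize the next dataset from
    the improved substitute (S-Step)."""
    substitute = None
    data = init_data                                  # D_syn^(0): random samples
    for t in range(1, n_stealing_epochs + 1):
        labels = [victim(x) for x in data]            # only queries, no gradients
        substitute = train_step(substitute, data, labels)   # E-Step
        data = synthesize(substitute, len(data))            # S-Step
    return substitute
```

Note that the victim appears only as a label oracle, matching the black-box threat model described above.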
To overcome this challenge, we propose two data synthesis approaches that make use of the gradients of the substitute model as a proxy for updating the synthetic input data. Specifically, we introduce two approaches to generate the synthetic data: namely, \textit{DNN-SYN} and \textit{OPT-SYN}. Both approaches start by generating a set of random labels and initial data samples. The approaches then update the data samples based on the assigned labels and the gradients from the substitute model. Once the substitute model is close to the victim model, the synthetic data becomes close to the distribution of the victim's training dataset. } \subsubsection{DNN-SYN.} \begin{figure}[!t] \centering \includegraphics[width=0.35\linewidth]{figs/acgan.png} \vspace{-0.5em} \caption{\textbf{DNN generator $G$ in \textit{DNN-SYN}.}} \label{fig:acgan} \vspace{-1em} \end{figure} We design a DNN-based generator to synthesize images that can be classified by our substitute model with high confidence. The design of the image generator $G$ follows the major architecture of Auxiliary Classifier GAN (ACGAN)~\cite{odena2017conditional}, a variant of the Generative Adversarial Network (GAN), which can generate images with label conditioning. We refer to the data generation approach using a DNN as \textit{DNN-SYN}. We describe the procedure of \textit{DNN-SYN} as follows: \begin{enumerate}[\textbf{Step} 1:] \item Randomly assign a set of labels $\bm{l}=\{\bm{l_1}, \bm{l_2}, \ldots, \bm{l_n}\}$, where $\bm{l_i}$ denotes a $K$-dimensional one-hot vector. \item Train a DNN generator $G$ with parameters $w_G$ to generate data from a random latent vector $\bm{z_i}$. $G$ is optimized so that the generated data are classified by $f_s$ as the assigned labels $\bm{l}$ with high confidence: \begin{equation} \label{eq:gan} \min_{w_G} \quad \mathrm{L_{\text{img}}}(G, \bm{l}) \buildrel \text{def}\over = \sum_{i}^n \mathrm{L}_{\text{CE}}(f_s(G(\bm{z_i}, \bm{l_i})), \bm{l_i}).
\end{equation} \item Generate a synthetic dataset using the generator trained in Step 2: $\mathcal{D}_{\text{syn}} = \{G(\bm{z_i}, \bm{l_i})\}$. \end{enumerate} In addition, mode collapse is one of the critical problems in training GANs. To avoid mode collapse in \textit{DNN-SYN}, we use a mode seeking approach to increase the diversity of data samples~\cite{mao2019mode}. Mode seeking has been shown to be simple yet effective in mitigating mode collapse. We generate two images $G(\bm{z_i^1})$ and $G(\bm{z_i^2})$ using latent vectors $\bm{z_i^1}$ and $\bm{z_i^2}$ and maximize the ratio of the distance between images to the distance between latent vectors. In other words, we minimize the mode seeking loss $\mathrm{L}_{\text{ms}}$: \xy{ \begin{equation} \mathrm{L}_{\text{ms}}(G, \bm{l}) \buildrel \text{def}\over = \sum_{i}^n \frac{d(\bm{z_i^1}, \bm{z_i^2})}{d(G(\bm{z_i^1}, \bm{l_i}), G(\bm{z_i^2}, \bm{l_i}))}, \end{equation} } where $d(\cdot, \cdot)$ denotes a distance metric. In this paper, we use the $\ell_2$ norm distance. We sum the original objective function $\mathrm{L}_{\text{img}}$ and the regularization term $\mathrm{L}_{\text{ms}}$ and minimize the new objective function: \begin{equation} \mathrm{L_{\text{DNN}}} = \mathrm{L}_{\text{img}} + \lambda \mathrm{L}_{\text{ms}}, \end{equation} where $\lambda$ denotes the hyper-parameter that adjusts the strength of the regularization. In the experiment, we set $\lambda$ to $1$. Figure~\ref{fig:acgan} illustrates the architecture of the DNN-based generator in \textit{DNN-SYN}. We follow the major design of ACGAN~\cite{odena2017conditional}. We feed a random latent vector $\bm{z_i}$ and a generated one-hot label $l_i$ into the generator $G$. We concatenate these two vectors and up-sample them using several transposed convolution layers. Transposed convolution layers are parameterized by learnable weights.
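With $\lambda = 1$ and the $\ell_2$ distance as stated above, the combined objective $\mathrm{L_{DNN}} = \mathrm{L_{img}} + \lambda \mathrm{L_{ms}}$ can be sketched as follows; the callables standing in for $G$ and $f_s$ are illustrative assumptions, not the paper's actual networks:

```python
import numpy as np

def cross_entropy(pred, onehot, eps=1e-12):
    return -float(np.sum(onehot * np.log(pred + eps)))

def l2(a, b):
    return float(np.linalg.norm(a - b))

def dnn_syn_loss(G, f_s, zs1, zs2, labels, lam=1.0, eps=1e-12):
    """L_DNN = L_img + lam * L_ms over a batch of latent-vector pairs.
    G(z, l) -> image array, f_s(img) -> probability vector."""
    # classification loss: generated images should match the assigned labels
    L_img = sum(cross_entropy(f_s(G(z, l)), l) for z, l in zip(zs1, labels))
    # mode seeking loss: distant latents should yield distant images
    L_ms = sum(l2(z1, z2) / (l2(G(z1, l), G(z2, l)) + eps)
               for z1, z2, l in zip(zs1, zs2, labels))
    return L_img + lam * L_ms
```

Minimizing $\mathrm{L_{ms}}$ pushes the generator to spread distinct latent vectors to distinct images, which is how the regularizer counteracts mode collapse.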
Each transposed convolution layer, except the last, is followed by a batch normalization layer and a ReLU activation function. Four transposed convolution layers are used in the model. In the final layer, we use a Tanh function to output a synthetic image with values in $(-1, 1)$. \subsubsection{OPT-SYN.} \begin{algorithm}[!tb] \caption{Data Synthesis of \textit{OPT-SYN}} \label{alg:opt} \textbf{INPUT}:\\ \text{\quad The substitute model $f_s^{(t)}$ at iteration $t$}\\ \text{\quad Number of synthetic data samples $S$}\\ \text{\quad Number of output classes $K$}\\ \text{\quad Number of optimization iterations $m$}\\ \textbf{OUTPUT}: \text{\quad A set of optimized data samples $\mathcal{X}$} \begin{algorithmic}[1] \State $\mathcal{X} \gets \emptyset$ \For{$i \gets 1, S$} \State // Generate a $K$-dimensional random parameter $\bm{\alpha}$ from a Gaussian distribution \State $\bm{\alpha} \sim \mathcal{N}(0, 1)$ \State // Sample a prediction vector $\bm{y}$ from a Dirichlet distribution \State $\bm{y} \sim D(K, \bm{\alpha})$ \State // Initialize a data sample $\bm{x}$ from a Gaussian distribution \State $\bm{x} \sim \mathcal{N}(0, 1)$ \State // Minimize Equation~\ref{eq:opt} for $m$ iterations \State $\bm{x}^* \gets \argmin_{\bm{x}} \quad \mathrm{L}_{\text{CE}}(f_s^{(t)}(\bm{x}), \bm{y})$ \State // Add $\bm{x}^*$ to the set $\mathcal{X}$ \State $\mathcal{X} \gets \mathcal{X} \cup \{\bm{x}^*\}$ \EndFor \State \textbf{return}: $\mathcal{X}$. \end{algorithmic} \end{algorithm} Instead of training a generator to synthesize the dataset, we propose an optimization-based data synthesis approach, \textit{OPT-SYN}, which operates on the input space directly and does not suffer from mode collapse. In addition, \textit{OPT-SYN} explores a more diverse label space than the one-hot labels used in \textit{DNN-SYN}.
\textit{OPT-SYN} first explores the possible prediction vectors $\{\bm{y}\}$ in the task domain $\mathcal{T}$ and then minimizes the cross-entropy loss between $\{\bm{y}\}$ and the substitute model's prediction on the synthetic data: \begin{equation} \label{eq:opt} \min_{\bm{x}} \mathrm{L}_{\text{CE}}(f_s^{(t)}(\bm{x}), \bm{y}), \end{equation} where $f_s^{(t)}$ denotes the substitute model at the $t$th stealing epoch. In the experiments, we find that \textit{OPT-SYN} performs better than \textit{DNN-SYN} in most scenarios. The proposed \textit{OPT-SYN} approach is detailed in Algorithm~\ref{alg:opt}. First, to explore the possible prediction vectors, we sample each random vector $\bm{y} = \{y_1, y_2, \ldots, y_K\}$ from a $K$-dimensional Dirichlet distribution with parameter $\bm{\alpha}$. The Dirichlet distribution is commonly used as the conjugate prior of the categorical distribution. From the Dirichlet distribution, we can sample prior probabilities $\{y_1, y_2, \ldots, y_K\}$, where $y_i \in (0, 1)$ and $\sum_{i=1}^K y_i = 1$. $\bm{\alpha}$ is referred to as the concentration parameter, which controls the distribution. The probability density function of the Dirichlet distribution $\mathrm{Dir}(K, \bm{\alpha})$ is given by: \begin{equation} f(y_1, y_2, \ldots, y_K, \bm{\alpha}) = \frac{1}{B(\bm{\alpha})}\prod_{i=1}^K y_i^{\alpha_i -1}, \end{equation} where $B(\bm{\alpha})$ denotes the multivariate beta function (a normalizing constant expressed in terms of gamma functions) and $\sum_{i=1}^K y_i = 1$. In the experiment, we randomly sample the parameter $\bm{\alpha}$ from a Gaussian distribution, $\bm{\alpha} \sim \mathcal{N}(0, 1)$, to explore the possible Dirichlet distributions. Given the prediction vector $\bm{y}$, we synthesize data $\bm{x}$ by iteratively minimizing Equation~\ref{eq:opt}. The goal is to generate a data sample $\bm{x}^*$ whose prediction $f_s^{(t)}(\bm{x}^*)$ is close to $\bm{y}$.
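A minimal NumPy sketch of this procedure is given below. Two implementation details are our own assumptions: we take the absolute value of the Gaussian draw so the Dirichlet concentrations are positive (the Dirichlet requires $\alpha_i > 0$, which Algorithm~\ref{alg:opt} leaves implicit), and plain gradient descent stands in for the Adam optimizer:

```python
import numpy as np

def sample_target(K, rng):
    """Draw a target prediction vector y ~ Dirichlet(K, alpha).
    abs() keeps the concentrations positive, since the Dirichlet
    distribution requires alpha > 0 (an assumed detail)."""
    alpha = np.abs(rng.normal(0.0, 1.0, size=K)) + 1e-3
    return rng.dirichlet(alpha)

def synthesize(x0, y, grad_fn, m=30, lr=0.01):
    """Push x toward f_s(x) ~= y with m plain gradient steps.
    grad_fn(x, y) returns the gradient of L_CE(f_s(x), y) w.r.t. x."""
    x = x0.copy()
    for _ in range(m):
        x -= lr * grad_fn(x, y)
    return x
```

With a toy softmax substitute model, the gradient of the cross-entropy with respect to the input is simply `softmax(x) - y`, and the iteration drives the substitute's prediction toward the sampled target.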
An adaptive gradient-based optimization algorithm, Adam~\cite{kingma2014adam}, is applied to optimize the objective function iteratively. \subsection{Experiment Setup} \subsubsection{Victim Models and Datasets} In our experiments, we evaluate model stealing attacks on four image classification datasets: MNIST, KMNIST, SVHN, and CIFAR10. The MNIST dataset~\cite{lecun1998gradient} contains 70,000 28-by-28 grayscale images of 10 digits. We use 60,000 images for training and 10,000 images for testing, following the original train/test split of the MNIST dataset. Kuzushiji-MNIST (KMNIST)~\cite{clanuwat2018deep} is a dataset similar to MNIST, containing 70,000 28-by-28 grayscale images of 10 Hiragana characters. We use 60,000 images for training and 10,000 images for testing. The SVHN~\cite{netzer2011reading} dataset consists of 60,000 32-by-32 RGB images of house numbers (10 classes, from 0 to 9) from the Google Street View dataset. The CIFAR10~\cite{krizhevsky2009learning} dataset contains 60,000 32-by-32 RGB images with 10 classes. We train three types of DNN models on the four datasets and use them as the victim models in our experiments. We train LeNet5~\cite{lecun1998gradient} on the MNIST~\cite{lecun1998gradient} and KMNIST~\cite{clanuwat2018deep} datasets. We train ResNet18~\cite{he2016deep} and ResNet34~\cite{he2016deep} on the SVHN~\cite{netzer2011reading} and CIFAR10~\cite{krizhevsky2009learning} datasets. LeNet5 is trained for 30 epochs using an SGD optimizer with a learning rate of 0.1 on the MNIST and KMNIST datasets. We train ResNet18 and ResNet34 for 200 epochs with an initial learning rate of 0.1 on the SVHN and CIFAR10 datasets. We reduce the learning rate by a factor of 10 after 80 and 120 epochs. We select the models with the highest test accuracies as the victim models. \subsubsection{Settings of \textit{ES Attack}\xspace} For \textit{DNN-SYN}, we input a $100$-dimensional random latent vector and a one-hot label vector into the DNN-based generator $G$.
The substitute model $f_s$ and the DNN-based generator $G$ are trained by an Adam optimizer with a learning rate of 0.001. $f_s$ and $G$ are trained alternately for 2,000 epochs each on the MNIST, KMNIST, and SVHN datasets ($N=2000, M=1$), and 15,000 epochs each on the CIFAR10 dataset ($N=15000$, $M=1$). For \textit{OPT-SYN}, we synthesize data for $30$ iterations ($m=30$) in each stealing epoch using an Adam optimizer with a learning rate of 0.01. We train the adversary model for $10$ epochs on the synthetic dataset ($M=10$). We repeat the stealing for $200$ epochs on the MNIST, KMNIST, and SVHN datasets ($N=200$), and 1,500 epochs on the CIFAR10 dataset ($N=1500$). To speed up the stealing process, we augment the synthetic dataset by random horizontal flips, horizontal shifts, and added Gaussian noise. \subsubsection{Baseline Model Stealing Attacks} We compare \textit{ES Attack}\xspace with two baseline attacks: model stealing using randomly generated data and using auxiliary data. First, if the adversary has no knowledge of the victim's training data, randomly generated data could be the only dataset the adversary can leverage. We form a random dataset with random data sampled from a Gaussian distribution $\mathcal{N}(0,1)$ and their predictions from the victim model. We train our substitute model on the random dataset iteratively. Second, we consider public data as an auxiliary dataset. We use data samples from other public datasets and query the victim model with them. We construct an auxiliary dataset and train the substitute model on it. To make a fair comparison, we ensure that all the model stealing attacks, including the two baseline attacks and the two \textit{ES Attacks} (\textit{DNN-SYN} and \textit{OPT-SYN}), train their substitute models for the same number of epochs. \begin{table}[!tb] \renewcommand{\arraystretch}{1.3} \centering \caption{Performance comparison of model stealing attacks.
} \label{tab:function} \vspace{-0.5em} \begin{tabular}{@{}llclc@{}} \toprule Dataset & Model & \begin{tabular}[c]{@{}c@{}} Victim \\accuracy (\%)\end{tabular} & Attacks & \begin{tabular}[c]{@{}c@{}} Substitute \\accuracy (\%) \end{tabular} \\ \hline \multirow{4}{*}{SVHN} & \multirow{4}{*}{ResNet18} & \multirow{4}{*}{95.40} & Random & 50.71 \\ && & Auxiliary & 74.84\\ && & DNN-SYN & 93.95 \\ && & OPT-SYN & \textbf{93.97}\\ \hline \multirow{4}{*}{SVHN} & \multirow{4}{*}{ResNet34} & \multirow{4}{*}{95.94} & Random & 60.95\\ && & Auxiliary & 82.00 \\ && & DNN-SYN & \textbf{93.34} \\ && & OPT-SYN & 93.19\\ \hline \multirow{4}{*}{CIFAR10} & \multirow{4}{*}{ResNet18}& \multirow{4}{*}{91.12} & Random & 11.72\\ && & Auxiliary & 48.73 \\ && & DNN-SYN & 33.44 \\ && & OPT-SYN & \textbf{84.60}\\ \hline \multirow{4}{*}{CIFAR10} & \multirow{4}{*}{ResNet34}& \multirow{4}{*}{91.93} & Random & 14.45 \\ && & Auxiliary & 38.55\\ && & DNN-SYN & 12.69 \\ && & OPT-SYN & \textbf{80.79}\\ \hline \multirow{4}{*}{MNIST} & \multirow{4}{*}{LeNet5}& \multirow{4}{*}{99.10} & Random & 72.18 \\ && & Auxiliary & \textbf{98.96} \\ && & DNN-SYN & 91.02 \\ && & OPT-SYN & 92.03 \\ \hline \multirow{4}{*}{KMNIST} & \multirow{4}{*}{LeNet5}& \multirow{4}{*}{95.67} & Random & 56.39 \\ && & Auxiliary & 59.43 \\ && & DNN-SYN & 90.37 \\ && & OPT-SYN & \textbf{90.37}\\ \bottomrule \end{tabular} \vspace{-0.5em} \end{table} \subsection{Performance Evaluation} \label{eval:func} We evaluate the performance of \textit{ES Attacks} using two data synthesis approaches and compare them with two baseline attacks. We report the accuracy of model stealing attacks in Table~\ref{tab:function}. We compare the results with two baseline attacks that use randomly generated data (Random) and auxiliary data (Auxiliary) to steal the victim model. From the evaluation, we observe that \textit{OPT-SYN} can successfully steal the victim models over all the datasets and model architectures. 
Our proposed attacks achieve better performance than the two baseline attacks. \textbf{On average, \textit{OPT-SYN} improves the best accuracy by 44.57\% compared to the best results of the two baseline attacks. } \textit{OPT-SYN} performs as well as \textit{DNN-SYN} on the SVHN, MNIST, and KMNIST datasets. However, \textit{DNN-SYN} does not achieve good performance on the CIFAR10 dataset, which is more complex and on which the generator $G$ in \textit{DNN-SYN} may still suffer from mode collapse. Both our proposed attacks perform worse than the attack using auxiliary data (KMNIST) on the MNIST dataset, which suggests that auxiliary data can be used for model stealing if it represents the target task well and is available to the adversary. \begin{figure}[!tb] \centering \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=\linewidth]{figs/svhn_resnet18.png} \caption{ResNet18 on SVHN} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=\linewidth]{figs/svhn_resnet34.png} \caption{ResNet34 on SVHN} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=\linewidth]{figs/cifar_resnet18.png} \caption{ResNet18 on CIFAR10} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=\linewidth]{figs/cifar_resnet34.png} \caption{ResNet34 on CIFAR10} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=\linewidth]{figs/mnist.png} \caption{LeNet5 on MNIST} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=\linewidth]{figs/kmnist.png} \caption{LeNet5 on KMNIST} \end{subfigure} \vspace{-0.5em} \caption{\textbf{Substitute model accuracy during attacks.}} \label{fig:acc} \vspace{-0.5em} \end{figure} Note that we assume the adversary has no knowledge of any of the victim's data, which means the adversary cannot evaluate the substitute model on a validation dataset and select the
best substitute model during the attack. If the performance of the stealing attacks fluctuates, the adversary cannot guarantee the best performance of the substitute model. The convergence of the substitute model is therefore essential for stealing attacks without a validation dataset. Our experiments show that the performance of the substitute model converges after a few stealing epochs (Figure~\ref{fig:acc}). If the adversary has knowledge of a validation dataset or the victim's test dataset $\mathcal{D}_{\text{test}}$, the adversary can select the model with the best accuracy. Otherwise, the adversary will use the substitute model from the last stealing epoch ($t=N$). We observe only a subtle difference between the best accuracy and the last accuracy achieved by the substitute model (0.79\% difference on average for \textit{OPT-SYN} and 1.53\% for \textit{DNN-SYN}). The stable convergence suggests that our proposed attacks do not rely on a validation dataset. \xy{We find that model stealing attacks do not require an unrealistically large query budget in real-world settings. Queries to the victim model only occur in the E-Step, where the adversary uses the synthetic data to get the model's predictions. In our experiments, to steal the victim model trained on the MNIST, KMNIST, and SVHN datasets using \textit{OPT-SYN}, the adversary only needs to pay \$30K for all the required queries (around 120M queries) according to the pricing of Amazon AWS~\cite{amazon}. For the CIFAR10 dataset, it costs \$187.5K to steal the victim model (around 750M queries). These expenses are much lower than the cost of hiring ML experts and collecting data from scratch. This indicates that the adversary can replicate the function of existing MLaaS models at a much lower cost than the victim's actual cost of establishing a machine learning service.
} \subsection{Sensitivity Analysis of DNN Architectures} \begin{table}[!tb] \renewcommand{\arraystretch}{1.3} \centering \caption{\textit{ES Attack}\xspace using Different DNN Architectures.} \label{tab:arch} \vspace{-0.5em} \begin{tabular}{@{}lllcc@{}} \toprule Dataset & \begin{tabular}[l]{@{}l@{}} Victim \\ model\end{tabular} & \begin{tabular}[l]{@{}l@{}} Substitute \\ model\end{tabular} & \begin{tabular}[c]{@{}c@{}} Victim \\accuracy (\%) \end{tabular} & \begin{tabular}[c]{@{}c@{}} Substitute \\accuracy (\%) \end{tabular} \\ \midrule \multirow{2}{*}{MNIST} & ResNet18 & LeNet5 & 99.31 & 97.28 \\ & LeNet5 & ResNet18 & 92.03 & 98.13 \\\hline \multirow{2}{*}{SVHN} & ResNet18 & ResNet34 & 95.40 & 94.64 \\ & ResNet34 & ResNet18 & 95.94 & 94.03 \\ \hline \multirow{2}{*}{CIFAR10} & ResNet18 & ResNet34 & 91.12 & 82.54 \\ & ResNet34 & ResNet18 & 91.93 & 62.73 \\ \bottomrule \end{tabular} \vspace{-0.5em} \end{table} \xy{In this section, we consider a more realistic scenario where the adversary has no knowledge of the victim model's architecture. The adversary may choose an architecture for the substitute model different from that of the victim model. \xyy{Thus, we investigate the case where the adversary uses a neural network architecture different from the victim's. For the MNIST dataset, we use ResNet18 for the victim model and LeNet5 for the substitute model, and vice versa. For the SVHN and CIFAR10 datasets, we consider ResNet18 and ResNet34 as the victim and substitute model architectures. Because \textit{OPT-SYN} outperforms \textit{DNN-SYN} in most model stealing attacks, we use \textit{OPT-SYN} to evaluate the sensitivity to DNN architectures. From Table~\ref{tab:arch}, we do not observe a significant performance loss due to different DNN architectures for the MNIST and SVHN datasets, compared with using the same architecture. For the CIFAR10 dataset, we observe a subtle performance loss when using ResNet34 to steal ResNet18 models.
The only significant degradation of performance occurs when the adversary uses a smaller model (ResNet18) to steal a larger model (ResNet34). } We believe the degradation is due to the gap between the sizes of the victim model and the substitute model. } We find similar performance degradation in many other tasks using knowledge distillation: the performance of the student model (the substitute model in our paper) degrades if there is a capacity gap between the student and the teacher (the victim)~\cite{mirzadeh2019improved}. The adversary can easily avoid this performance degradation by selecting a large model. From our experiments, if the adversary chooses a DNN model of the same or a larger size than the victim model, the adversary will be able to steal a substitute model with high accuracy. \subsection{Convergence of \textit{ES Attack}\xspace} Figure~\ref{fig:acc} illustrates the convergence of \textit{ES Attack}\xspace. We observe that the accuracy of the substitute model always converges by the end of the stealing. The difference between the best accuracy and the last accuracy achieved by the substitute model is subtle (0.79\% on average for \textit{OPT-SYN} and 1.53\% for \textit{DNN-SYN}). The stable convergence at the end of the attacks suggests that our proposed attacks do not rely on a test dataset. Hence, the adversary can successfully steal the victim model even without knowing any test data, which underlines the practicality of \textit{ES Attack}\xspace and substantially raises the severity of model stealing attacks. \subsection{Quality Analysis of Synthetic Data} \label{eval:syn_data} In this section, we investigate the quality of the synthetic data. Figure~\ref{fig:syn_data} shows examples of the synthetic data used in model stealing. We compare them with the victim's training data. Humans cannot recognize these images, yet our substitute model can be well trained on these synthetic data.
\input{synthetic_data} Therefore, we further investigate the synthetic data in terms of quality and diversity. Inspired by the measurements for GANs, we use the Inception Score (IS)~\cite{salimans2016improved} and the Fr\'{e}chet Inception Distance (FID)~\cite{heusel2017gans} to evaluate the synthetic data. In the experiments, we observe that the synthetic data achieves better quality and higher diversity than the auxiliary data. \textbf{Inception Score (IS)} was originally proposed to measure the quality of generated images using a pre-trained Inception-V3 network. In our experiments, we replaced the Inception-V3 network with the victim models. To be consistent with the concept, we keep Inception Score as the name of our metric. Given the predictions provided by the victim models, the Inception Score compares the conditional prediction distribution with the marginal prediction distribution: \begin{equation} \text{IS} = \exp\left(\mathbb{E}_{\bm{x}} D_{\text{KL}}(p(\bm{y}|\bm{x}) \,||\, p(\bm{y}))\right), \end{equation} where $D_{\text{KL}}$ denotes the KL divergence. A high Inception Score indicates: 1) generated images are meaningful and predictable to the victim model, so that $p(\bm{y}|\bm{x})$ has low entropy; 2) generated images are diverse, so that $p(\bm{y})$ has high entropy. \textbf{Fr\'{e}chet Inception Distance (FID)} was proposed to improve on IS by comparing the intermediate features of the victim model. FID models the real images and the synthetic data as two multivariate Gaussian distributions and calculates the distance between the two distributions: \begin{equation} \text{FID} = ||\mu_t - \mu_s||_2^2 + Tr(\Sigma_t + \Sigma_s - 2(\Sigma_t\Sigma_s)^{\frac{1}{2}}), \end{equation} where $(\mu_t, \Sigma_t)$ and $(\mu_s, \Sigma_s)$ denote the means and covariance matrices of the intermediate features computed on the training data and the synthetic data, respectively. $Tr(\cdot)$ denotes the trace of a matrix. A low FID indicates better image quality and diversity.
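As a sanity check, the IS computation (with the victim model's prediction vectors substituted for Inception-V3 outputs, as described above) can be sketched directly in NumPy; FID additionally needs a matrix square root and is omitted here for brevity:

```python
import numpy as np

def inception_score(preds, eps=1e-12):
    """IS from an (n, K) array of prediction vectors: the exponential of
    the mean KL divergence between each conditional p(y|x) and the
    marginal p(y) over the dataset."""
    p_y = preds.mean(axis=0)                       # marginal distribution
    kl = np.sum(preds * (np.log(preds + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))
```

Confident, evenly spread one-hot predictions over $K$ classes drive IS to its maximum of $K$, while uninformative uniform predictions collapse it to 1.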
FID is shown to be more consistent with human perception than IS~\cite{heusel2017gans} and more robust to mode collapse~\cite{lucic2018gans}. In our experiments, we used the features from the layer before the last linear layer and compared our synthetic data with the training data using FID. These two metrics are widely used to measure the quality of generated images. \begin{table*}[!tb] \renewcommand{\arraystretch}{1.3} \centering \caption{Analysis of Synthetic Data using IS and FID.} \label{tab:data_eval} \vspace{-0.5em} \begin{tabular}{@{}llr|rr|rr|rr@{}} \toprule \multirow{2}{*}{Dataset} & \multirow{2}{*}{\shortstack{Model}} & \shortstack{Victim $\mathcal{D}_{\text{train}}$} & \multicolumn{2}{c|}{{Auxiliary $\mathcal{D}_{\text{aux}}$}} & \multicolumn{2}{c|}{{Random $\mathcal{D}_{syn}^{(0)}$}} & \multicolumn{2}{c}{{Synthetic $\mathcal{D}_{syn}^{(N)}$}} \\ \cline{3-9} & & IS & IS & FID & IS & FID & IS & FID \\ \hline MNIST & LeNet5 & 9.86 & 4.22 & 274.80 & 1.22 & 500.16 & 4.63 & 257.55 \\ KMNIST & LeNet5 & 9.96 & 4.75 & 1073.92 & 1.78 & 1498.75 & 4.70 & 962.47 \\ CIFAR10 & ResNet18 & 6.89 & 2.58 & 5.34 & 3.45 & 5.82 & 4.32 & 3.31 \\ CIFAR10 & ResNet34 & 7.64 & 4.16 & 18.16 & 6.75 & 14.88 & 6.65 & 18.29 \\ SVHN & ResNet18 & 6.89 & 2.24 & 7.62 & 3.45 & 5.82 & 4.32 & 3.31 \\ SVHN & ResNet34 & 7.64 & 4.16 & 18.16 & 6.75 & 14.88 & 6.65 & 18.29 \\ \bottomrule \end{tabular} \vspace{-0.5em} \end{table*} We compare the four types of data and report the average values of IS and FID in Table~\ref{tab:data_eval}: 1) the victim's training dataset, 2) the auxiliary dataset used in the baseline attack, 3) the random data generated in the first initialization epoch ($t=0$), and 4) the synthetic data generated in the last stealing epoch ($t=N$). The FID values are evaluated by comparing each dataset with the victim's training data. From our analysis, we find that the synthetic data usually achieves better quality and higher diversity than the auxiliary data (higher IS values and lower FID values).
On average over six settings, the synthetic data $\mathcal{D}_{syn}^{(N)}$ achieves 60.58\% higher IS values and 27.64\% lower FID values than the auxiliary data $\mathcal{D}_{\text{aux}}$, which suggests better quality and higher diversity of the synthetic images. The victim's training data always achieves the highest IS value: the training data is the best representation of the input space among the data we investigate. The random data is always the worst, given its low IS values and high FID values. \subsection{Rounding Prediction} MLaaS providers can round the output predictions to a fixed number of decimals and zero out the rest, so as to provide only the necessary information. For example, if we round the prediction to 2 decimals, then $round(0.2474, r=2) = 0.25$. We deploy rounding predictions with 2 decimals as a defensive strategy. Our experiments show that none of the model stealing attacks are affected by rounding to two decimals (Table~\ref{tab:defense}). On average, the best accuracy of the substitute model even increases by 0.55\% and the last accuracy only decreases by 0.30\%. \begin{table*}[!tb] \renewcommand{\arraystretch}{1.3} \centering \caption{Evaluation of defenses against model stealing.} \label{tab:defense} \vspace{-0.5em} \begin{tabular}{llrrr} \toprule Dataset & \begin{tabular}[l]{@{}l@{}} Model\end{tabular} &\begin{tabular}[l]{@{}l@{}} Accuracy without defense (\%)\end{tabular} &\begin{tabular}[l]{@{}l@{}} Accuracy with rounding (\%)\end{tabular} &\begin{tabular}[l]{@{}l@{}} Accuracy with Top-K (\%)\end{tabular}\\\hline SVHN & ResNet34 & 93.19 & 92.58 & 88.76\\ CIFAR10 & ResNet34 & 80.79 & 80.31 & 69.64\\ MNIST & LeNet5 & 92.03 & 96.69 & 95.43\\ KMNIST & LeNet5 & 90.37 & 90.03 & 91.11\\ \bottomrule \end{tabular} \vspace{-0.5em} \end{table*} We further investigate the impact of rounding decimals on the \textit{ES Attack}\xspace. Figures~\ref{fig:round_mnist} and~\ref{fig:round_kmnist} show the results of experiments with class probabilities rounded to 0--5 decimals.
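The rounding defense itself is simple to emulate. Below is a minimal sketch; the helper name `round_prediction` is ours for illustration, not an MLaaS API:

```python
import numpy as np

def round_prediction(probs, decimals=2):
    """Server-side rounding defense: keep `decimals` decimal places of each
    class probability and zero out the rest (hypothetical helper name)."""
    return np.round(np.asarray(probs, dtype=float), decimals)
```

For instance, `round_prediction([0.2474, 0.7526])` returns `[0.25, 0.75]`, matching the example above; with `decimals=0` the provider effectively returns hard 0/1 scores.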
We compare the after-rounding classification accuracy of the substitute model and the victim model. Class probabilities rounded to 2 to 5 decimals have no effect on the adversary's success. When rounding further to 1 decimal, the attack is weakened, but still successful. When we round to 0 decimals - the victim model only outputs $0$ or $1$ - the attack is further weakened, but can still predict correctly in most cases (over 80\% accuracy). Rounding introduces uncertainty into the predictions, and this uncertainty can sometimes slightly improve the training. \begin{figure}[!tb] \centering \includegraphics[width=0.7\linewidth]{figs/round_mnist.png} \caption{\textbf{Evaluation of Rounding Prediction on MNIST.}} \label{fig:round_mnist} \end{figure} \begin{figure}[!tb] \centering \includegraphics[width=0.7\linewidth]{figs/round_kmnist.png} \caption{\textbf{Evaluation of Rounding Prediction on KMNIST.}} \label{fig:round_kmnist} \vspace{-1em} \end{figure} \subsection{Top-K Prediction} Instead of providing the predictions of all the classes, MLaaS providers can provide partial information - the predictions of the K classes with the highest probabilities. Since this defense can be easily detected by the adversary, we assume the adversary is aware of the Top-K defense deployed by the MLaaS provider and will try to circumvent it. The adversary can slightly modify the attack by making up the missing probabilities of the hidden classes: the probabilities of the Top-K classes are retained, and the remaining probability mass is distributed uniformly over the hidden classes. For example, given a prediction output of $[0.5, 0.02, 0.3, 0.02, 0.15, 0.01]$, with a Top-2 defense the MLaaS provider hides the predictions of the four smallest classes and only responds with $[0.5, 0.0, 0.3, 0.0, 0.0, 0.0]$.
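The masking and fill-up steps described above can be sketched as follows (hypothetical helper names; a minimal illustration, not our experimental code):

```python
import numpy as np

def topk_response(probs, k=2):
    """Server-side Top-K defense: keep the k largest class probabilities, zero out the rest."""
    probs = np.asarray(probs, dtype=float)
    out = np.zeros_like(probs)
    idx = np.argsort(probs)[-k:]          # indices of the k largest probabilities
    out[idx] = probs[idx]
    return out

def fill_topk(masked):
    """Adversary's counter: spread the hidden probability mass uniformly over the
    zeroed classes (assumes true probabilities are strictly positive)."""
    masked = np.asarray(masked, dtype=float)
    hidden = masked == 0
    n_hidden = hidden.sum()
    if n_hidden:
        masked = masked.copy()
        masked[hidden] = (1.0 - masked.sum()) / n_hidden
    return masked
```

On the example above, `topk_response` yields $[0.5, 0.0, 0.3, 0.0, 0.0, 0.0]$ and `fill_topk` recovers $[0.5, 0.05, 0.3, 0.05, 0.05, 0.05]$.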
Knowing the Top-K defense, the adversary can then convert the prediction to $[0.5, 0.05, 0.3, 0.05, 0.05, 0.05]$ and resume \textit{ES Attack}\xspace. From the experiments, we observe that Top-1 prediction has little effect on most datasets (Table~\ref{tab:defense}). For the MNIST and KMNIST datasets, we find that the accuracy of the substitute model even improves. Top-1 prediction is only effective in preventing model stealing on the CIFAR10 dataset. Note, however, that Top-1 prediction is a drastic defense that also affects normal users by providing very limited information. On average, the best accuracy of the substitute model with Top-1 prediction decreases by only 2.86\%. In addition, we investigate the impact of providing probabilities for different numbers ($K$) of classes on our model stealing attacks (Figures~\ref{fig:topk_mnist} and~\ref{fig:topk_kmnist}). The performance of model stealing attacks does not decrease when fewer classes come with probabilities (small $K$). We find our attacks are minimally impacted by reducing the informativeness of the black-box predictions in the response. \begin{figure}[!tb] \centering \includegraphics[width=0.7\linewidth]{figs/topk_mnist.png} \caption{\textbf{Evaluation of Top-K Prediction on MNIST.}} \label{fig:topk_mnist} \vspace{-0.2cm} \end{figure} \begin{figure}[!tb] \centering \includegraphics[width=0.7\linewidth]{figs/topk_kmnist.png} \caption{\textbf{Evaluation of Top-K Prediction on KMNIST.}} \label{fig:topk_kmnist} \vspace{-1em} \end{figure} \subsection{Anomaly Detection} Anomaly detection identifies abnormal queries sent by users, i.e., behavior that deviates from that of normal users. For example, PRADA assumes that the distances between normal queries follow a Gaussian distribution~\cite{juuti2018prada}; by evaluating how much the observed distances deviate from a Gaussian, PRADA detects model stealing attacks.
For the details of PRADA, we refer readers to~\cite{juuti2018prada}. We evaluate the effectiveness of anomaly detection using PRADA against \textit{OPT-SYN}. We analyzed 300,000 image samples from the first five stealing epochs on the MNIST dataset. None of the synthetic images can be detected by PRADA. We believe this is because the images are generated starting from a Gaussian distribution, so the distances between queried images are too small to be detected by PRADA. Moreover, we find it is not practical for MLaaS providers to deploy a PRADA detector due to its high response time. In our experiments, it takes about 33 hours to process 300,000 images (2.46 images per second on average). With more images to be detected, the average response time will increase further. Therefore, PRADA is ineffective and infeasible for detecting the proposed \textit{ES Attack}\xspace. \subsection{Model Stealing Attacks} Several studies have been proposed for model stealing attacks. Tramer \textit{et al}.\@\xspace investigated stealing model parameters using equation solving~\cite{tramer2016stealing}. However, this approach is hard to extend to DNNs, which contain a much larger number of parameters than conventional machine learning models do. Papernot \textit{et al}.\@\xspace proposed a similar framework to steal DNNs by training a substitute model~\cite{papernot2017practical}. Their goal is to approximate the victim model's decision boundaries to facilitate adversarial attacks rather than to maximize the substitute model's accuracy. Thus their substitute model achieves a much lower classification accuracy compared to our work. In addition, to generate adversarial examples, their approach requires a small set of inputs that represents the input domain.
\xyy{In our work, we eliminate this requirement: the adversary does not need to have prior knowledge, making the attacks more feasible in the real world.} From the experimental results, the stolen model from \textit{ES Attack}\xspace achieves a higher accuracy compared to that from~\cite{papernot2017practical}. Existing model stealing attacks against DNNs require an auxiliary dataset. Orekondy \textit{et al}.\@\xspace proposed stealing attacks that assume access to a large dataset and use active learning to select the best samples to query~\cite{orekondy2019knockoff}. Correia-Silva \textit{et al}.\@\xspace leveraged public datasets from the same task domain but with different distributions to steal DNNs. \xy{Different from these works, we assume the adversary does not have any auxiliary data related to the task domain. The experiments show that \textit{ES Attack}\xspace can achieve performance comparable to the attacks using auxiliary datasets.} \xy{ Zhou~\textit{et al}.\@\xspace made the same assumption as our work - the victim's training data is unknown - and leveraged a generative model to synthesize the training data~\cite{zhou2020dast}. However, the goal of DaST is to perform a successful adversarial attack, which is different from ours. In this paper, we aim to improve the prediction performance of the substitute model. Accordingly, the substitute model trained by DaST achieves much lower accuracy than \textit{ES Attack}\xspace. } \xy{ MAZE investigated a problem similar to ours, namely data-free model stealing attacks~\cite{kariyappa2021maze}. MAZE tried to solve the same challenge as our work: when generating synthetic data, the gradients for updating the synthetic data cannot be backpropagated through the victim model.
MAZE addressed this issue by approximating the gradients of the victim model using zeroth-order gradient estimation, which is widely used in black-box adversarial attacks, whereas in our work, we generate the gradients by using the substitute model as a proxy for the victim model. Both approaches achieve comparable attack performance. In addition, the two approaches are orthogonal and could be integrated for better performance. We will explore a new approach benefiting from both \textit{ES Attack}\xspace and MAZE in the future. } \vspace{-0.5em} \subsection{Model Stealing Defenses} Several detection approaches have been proposed for model stealing attacks. Juuti \textit{et al}.\@\xspace detected queries whose distribution deviates from normal behavior~\cite{juuti2018prada}. Similarly, Kesarwani \textit{et al}.\@\xspace proposed a detection tool that uses information gain to measure how quickly users learn the model as the number of queries increases~\cite{kesarwani2018model}. The learning rate is measured in terms of the coverage of the input feature space, even in the presence of collusion. We evaluate \cite{juuti2018prada} in our experiments and find that the detection approach is ineffective against our model stealing attacks. \subsection{Knowledge Distillation} \label{sec:related_knowledge} In our model stealing attacks, we use distillation to transfer knowledge from the victim model to the substitute model. Knowledge distillation is widely used in model compression by transferring the knowledge from one model (teacher model) to another (student model)~\cite{hinton2015distilling,bucilua2006model}. Most knowledge distillation approaches require knowledge of the training data. Recently, knowledge distillation without training data has been investigated~\cite{chen2019data,lopes2017data,nayak2019zero,yin2020dreaming,haroush2020knowledge} for cases where the training data is unavailable due to its large size or privacy concerns.
\xy{However, these data-free knowledge distillation approaches cannot be used for model stealing since the adversary cannot acquire the required information. For example, the model gradients are required to update the parameters of the generator in~\cite{chen2019data}. Similarly, model parameters are required to calculate class similarities in~\cite{nayak2019zero}, feature map statistics in~\cite{yin2020dreaming}, and batch normalization statistics in~\cite{haroush2020knowledge}. Therefore, going beyond data-free knowledge distillation, we introduce two data synthesis approaches in \textit{ES Attack}\xspace to construct a novel data-free model stealing attack. } \section*{Synthetic Example Images by \textit{OPT-SYN}} We show the synthetic images generated by \textit{OPT-SYN} with the best substitute model. We compare them with the victim's training data in the SVHN, CIFAR-10, MNIST and KMNIST datasets (Figures~\ref{fig:svhn_img},~\ref{fig:cifar_img},~\ref{fig:mnist_img},~\ref{fig:kmnist_img}). First row: images from the victim's training dataset. Second row: images from the synthetic dataset. Third row: corresponding labels of the images.
\begin{figure*}[!b] \centering \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_train_0.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_train_1.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_train_2.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_train_3.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_train_4.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_train_5.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_train_6.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_train_7.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_train_8.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_train_9.png} \end{subfigure} \par\bigskip \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_syn_0.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_syn_1.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_syn_2.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_syn_3.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_syn_4.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 
\includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_syn_5.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_syn_6.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_syn_7.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_syn_8.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/SVHN_syn_9.png} \end{subfigure} \par\bigskip \begin{subfigure}{0.09\textwidth} \centering 0 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 1 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 2 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 3 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 4 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 5 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 6 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 7 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 8 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 9 \end{subfigure} \caption{\textbf{The SVHN Dataset}} \label{fig:svhn_img} \end{figure*} \begin{figure*}[!tb] \centering \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_train_0.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_train_1.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_train_2.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_train_3.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_train_4.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} 
\centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_train_5.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_train_6.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_train_7.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_train_8.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_train_9.png} \end{subfigure} \par\bigskip \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_syn_0.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_syn_1.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_syn_2.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_syn_3.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_syn_4.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_syn_5.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_syn_6.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_syn_7.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_syn_8.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/CIFAR_syn_9.png} \end{subfigure} \par\bigskip \begin{subfigure}{0.09\textwidth} \centering plane \end{subfigure} \begin{subfigure}{0.09\textwidth} 
\centering car \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering bird \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering cat \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering deer \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering dog \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering frog \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering horse \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering ship \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering truck \end{subfigure} \caption{\textbf{The CIFAR-10 Dataset}} \label{fig:cifar_img} \end{figure*} \begin{figure*}[!tb] \centering \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_train_0.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_train_1.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_train_2.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_train_3.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_train_4.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_train_5.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_train_6.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_train_7.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_train_8.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_train_9.png} \end{subfigure} \par\bigskip \begin{subfigure}{0.09\textwidth} 
\centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_syn_0.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_syn_1.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_syn_2.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_syn_3.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_syn_4.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_syn_5.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_syn_6.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_syn_7.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_syn_8.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/MNIST_syn_9.png} \end{subfigure} \par\bigskip \begin{subfigure}{0.09\textwidth} \centering 0 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 1 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 2 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 3 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 4 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 5 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 6 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 7 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 8 \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 9 \end{subfigure} \caption{\textbf{The MNIST Dataset}} \label{fig:mnist_img} \end{figure*} \begin{figure*}[!tb] \centering 
\begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_train_0.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_train_1.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_train_2.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_train_3.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_train_4.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_train_5.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_train_6.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_train_7.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_train_8.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_train_9.png} \end{subfigure} \par\bigskip \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_syn_0.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_syn_1.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_syn_2.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_syn_3.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_syn_4.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering 
\includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_syn_5.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_syn_6.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_syn_7.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_syn_8.png} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/imgs/KMNIST_syn_9.png} \end{subfigure} \par\bigskip \begin{subfigure}{0.09\textwidth} \centering \begin{CJK}{UTF8}{min} お \end{CJK} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \begin{CJK}{UTF8}{min} き \end{CJK} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \begin{CJK}{UTF8}{min} す \end{CJK} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \begin{CJK}{UTF8}{min} つ \end{CJK} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \begin{CJK}{UTF8}{min} な \end{CJK} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \begin{CJK}{UTF8}{min} は \end{CJK} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \begin{CJK}{UTF8}{min} ま \end{CJK} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \begin{CJK}{UTF8}{min} や \end{CJK} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \begin{CJK}{UTF8}{min} れ \end{CJK} \end{subfigure} \begin{subfigure}{0.09\textwidth} \centering \begin{CJK}{UTF8}{min} を \end{CJK} \end{subfigure} \caption{\textbf{The KMNIST Dataset}} \label{fig:kmnist_img} \end{figure*} \subsection{Evaluation of Black-Box Attacks} We evaluate the black-box adversarial attack against the victim model using the substitute model. We implement a white-box $\ell_{\infty}$-PGD attack~\cite{madry2017towards} and leverage the transferability of adversarial examples to conduct the attack.
PGD is an iterative gradient-based white-box attack that has been proven effective on many machine learning models and tasks. In the experiments, we use the test dataset $\mathcal{D}_{test}$ as our evaluation dataset. We follow the adversarial settings in~\cite{madry2017towards} and consider untargeted adversarial attacks, where adversarial examples can be classified as any class other than the ground-truth class. For the MNIST and KMNIST datasets, we run 40 iterations of the $\ell_{\infty}$-PGD attack with a step size of 0.01 and set the maximal perturbation size to $0.3$. For the SVHN and CIFAR10 datasets, we run 20 iterations with a step size of 2/255 and a maximal perturbation size of $8/255$. We report the success rates of adversarial attacks against our substitute model and the victim model (transfer attack) in Table~\ref{tab:adversarial}. We compare the success rates of three adversarial attacks: 1) white-box attacks against the victim model, 2) white-box attacks against the substitute model, and 3) black-box attacks against the victim model via transferring. For the third attack, we evaluate the adversarial examples generated against the substitute model (white-box attacks) on the victim model. We show the performance of the white-box $\ell_{\infty}$-PGD attack against the victim model as well.
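The $\ell_{\infty}$-PGD update used above can be sketched as follows; `grad_fn` stands in for the gradient of the loss with respect to the input, computed on the substitute model (a simplified NumPy sketch under the assumption of inputs in $[0, 1]$, not the code used in our experiments):

```python
import numpy as np

def pgd_linf(x0, grad_fn, eps=0.3, alpha=0.01, steps=40):
    """Untargeted l_inf-PGD: repeatedly step in the sign of the loss gradient,
    projecting back onto the eps-ball around x0 and the valid pixel range [0, 1]."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))   # gradient-ascent step on the loss
        x = np.clip(x, x0 - eps, x0 + eps)    # project onto the l_inf ball around x0
        x = np.clip(x, 0.0, 1.0)              # keep a valid image
    return x
```

The default `eps`, `alpha`, and `steps` mirror the MNIST/KMNIST settings above; adversarial examples crafted this way on the substitute model are then evaluated on the victim model for the transfer attack.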
\begin{table}[!tb] \renewcommand{\arraystretch}{1.3} \centering \caption{Success rates of adversarial attacks.} \label{tab:adversarial} \vspace{-0.5em} \resizebox{\linewidth}{!}{% \begin{tabular}{@{}llrrr@{}} \toprule \multirow{3}{*}{Dataset} & \multirow{3}{*}{Model}& \multicolumn{3}{c}{Attack success rate (\%)}\\\cline{3-5} & & \begin{tabular}[r]{@{}r@{}} white-box\\victim model\end{tabular} & \begin{tabular}[c]{@{}c@{}} white-box\\substitute model\end{tabular} & \begin{tabular}[c]{@{}c@{}} black-box\\victim model\end{tabular} \\ \midrule SVHN& ResNet18 & 99.95 & 99.94 & 98.71 \\ SVHN& ResNet34 & 99.93 & 99.90 & 98.21 \\ CIFAR10& ResNet18 & 100.00 & 100.00 & 93.60 \\ CIFAR10& ResNet34 & 100.00 & 100.00 & 100.00 \\ MNIST& LeNet5 & 86.07 & 99.57 & 92.14 \\ KMNIST& LeNet5 & 66.44 & 99.72 & 98.99 \\ \bottomrule \end{tabular} } \vspace{-1em} \end{table} From the experimental results, the black-box adversarial attacks using the substitute model achieve nearly the same success rates as the white-box attacks, which suggests that the substitute models transfer adversarial examples to the victim model successfully. Almost all black-box adversarial attacks achieve high success rates (over 90\%). We observe that the success rates of the black-box attacks against the victim models are lower than those of the white-box attacks against the substitute models, but the gap is small. Hence, most adversarial examples can be transferred from the substitute model to the victim model. Surprisingly, the black-box attacks against the victim model perform even better than the white-box attacks against the victim model on the MNIST and KMNIST datasets.
\section{Introduction}} \input{1-intro.tex} \section{Problem Statement} \input{2-background.tex} \section{\textit{ES Attack}\xspace} \input{3-steal.tex} \section{Evaluation of \textit{ES Attack}\xspace} \input{4-eval.tex} \subsection{Further Attacks: A Case Study on Black-Box Adversarial Attacks} \label{sec:adv} \input{adversarial.tex} \section{Countermeasures of Model Stealing} \label{sec:defense} \input{5-defenses.tex} \section{Related Work} \label{sec:related} \input{6-related.tex} \section{Conclusion} \input{8-conclusion.tex} \section*{Acknowledgments} This work was supported in part by the National Science Foundation (CCF-2007210, CCF-2106610, and CCF-2106754). \bibliographystyle{IEEEtran}
\section{Introduction} Entanglement entropy and related information-theoretic quantities, such as mutual information, are by now regarded as valuable tools to study different phenomena in quantum field theories and many-body systems \cite{holapp,holapp2}. These quantities provide a kind of information that cannot be obtained from more standard observables such as two-point correlation functions. Namely, both the entanglement entropy and the mutual information are sensitive probes able to detect non-local signatures of the theory, such as topological order, which cannot be detected by any local observable. Concretely, the mutual information $\mathcal{I}_{AB}$ between two arbitrary regions $A$ and $B$ has certain advantages over the entanglement entropy. First, $\mathcal{I}_{AB}$ can be viewed as an entropic correlator between $A$ and $B$ defined by, \begin{equation} \mathcal{I}_{AB}=S_A + S_B - S_{A \cup B}, \label{mutinf} \end{equation} where $S_{A}$ ($S_{B}$) is the entanglement entropy of the region $A$ ($B$) and $S_{A \cup B}$ is the entanglement entropy of the union of the two regions. By its definition, $\mathcal{I}_{AB}$ is finite and, unlike the entanglement entropy, does not depend on the UV cutoff. In addition, the subadditivity property of the entanglement entropy states that when $A$ and $B$ are disconnected, then, \begin{equation}\label{ssub} S_A + S_B \geq S_{A \cup B}, \end{equation} which immediately implies that $\mathcal{I}_{AB} \geq 0$. Subadditivity is the most important inequality that entanglement entropy satisfies. A standard approach to compute both the entanglement entropy and the mutual information makes use of the replica trick \cite{calcard,calcard2,calcard3}. Unfortunately, these calculations are notoriously difficult to carry out, even in the case of free field theories.
In the context of the AdS/CFT correspondence \cite{adscftbib}-\cite{adscftbib4}, however, Ryu and Takayanagi (RT) have recently proposed a remarkably simple formula \cite{ryutak}-\cite{ryutak4} to obtain the entanglement entropy of an arbitrary region $A$ of a $(d+1)$-dimensional CFT which admits a classical gravity dual given by an asymptotically AdS$_{d+2}$ spacetime. According to the RT formula, the entanglement entropy is obtained in terms of the area of a certain minimal surface $\gamma_A$ in the dual higher dimensional gravitational geometry; as a result, the entanglement entropy $S_A$ in a CFT$_{d+1}$ is given by the celebrated area law relation, \begin{equation}\label{hologEE} S_A = \frac{{\rm Area}(\gamma_A)}{4 G_N^{(d + 2)}}, \end{equation} where $d$ is the number of space dimensions of the boundary CFT and $\gamma_{A}$ is the $d$-dimensional static minimal surface in AdS$_{d+2}$ such that $\partial A = \partial\, \gamma_A$. Here, $G^{(d+2)}_{N}$ is the $(d+2)$-dimensional Newton constant. The RT formula provides a simple tool to prove the subadditivity of entanglement entropy from the properties of minimal surfaces \cite{headrick_sa}; otherwise, subadditivity has to be laboriously derived from the positive definiteness of the Hilbert space. Here we consider the mutual information between two disconnected regions $A$ and $B$ in the ground state of a strongly coupled quantum field theory with a gravity dual given by the AdS/CFT correspondence. Using (\ref{hologEE}) in (\ref{mutinf}), this quantity reads, \begin{equation}\label{holoMI} \mathcal{I}_{AB}=\frac{1}{4 G_N^{(d + 2)}}\left[ {\rm Area}(\gamma_A) + {\rm Area}(\gamma_B)- {\rm Area}(\gamma_{A \cup B})\right], \end{equation} where ${\rm Area}(\gamma_{A \cup B})$ is the area of the minimal surface related to $A \cup B$. Recently, the holographic mutual information (\ref{holoMI}) has been considered in a remarkable variety of settings \cite{headrick}-\cite{holMI7}.
A striking prediction for the holographic mutual information arises when analyzing the behaviour of the minimal surface $\gamma_{A \cup B}$. In \cite{headrick} it is shown how, for certain distances between the two regions, there are minimal surfaces $\gamma_{A \cup B}^{con}$ connecting $A$ and $B$. In those regimes, the holographic mutual information has a nonzero value proportional to the number of degrees of freedom in the gauge theory lying on the boundary of AdS$_{d+2}$. However, when the separation between the two regions is large enough compared to their sizes, a disconnected surface $\gamma_{A \cup B}^{dis}$ with, \begin{equation} {\rm Area}(\gamma_{A \cup B}^{dis}) = {\rm Area}(\gamma_A) + {\rm Area}(\gamma_B), \end{equation} is both topologically allowed and minimal. In this case, (\ref{hologEE}) yields $S_{A \cup B} = S_A + S_B$, and a sharp vanishing of $\mathcal{I}_{AB}$ then occurs. This result is quite surprising from a quantum information point of view since, when the mutual information vanishes, the reduced density matrix $\rho_{A \cup B}$ factorizes into $\rho_{A \cup B} = \rho_A \otimes \rho_B$, implying that the two regions are completely decoupled from each other and thus that all the correlations (both classical and quantum) between $A$ and $B$ should be rigorously zero. Indeed, it seems at least counterintuitive that all the correlations should strictly vanish at a critical distance in a field theory in its large $N$ limit. This behaviour is a general prediction of the RT formula (\ref{hologEE}), which is valid for any two regions of any holographic theory. As a matter of fact, both (\ref{hologEE}) and (\ref{holoMI}) come from classical gravity in the bulk and provide the correct results to leading order in the $G_N$ expansion. When the boundary field theory is a large $N$ gauge theory, these terms are of order $N^2$.
Thus, one might expect some corrections coming from quantum mechanical effects in the bulk theory, with the first correction appearing at order $N^0$ ($G_N^{\, 0}$) \cite{headrick}. These $G_N^{\, 0}$ order corrections are small enough not to modify the shape of the surfaces and, as has been argued in \cite{maldacena}, jointly with the leading classical contributions, they obey the strong subadditivity condition. In this note it is shown that, at least in the case considered here, the mutual information (\ref{holoMI}) between two disjoint regions $A$ and $B$ in the large separation regime, while undergoing a phase transition at a critical distance, does not strictly vanish; instead, it decays with a law whose leading contributions are given by the exchange of pairs of the lightest bulk particles between $A$ and $B$. These bulk particles correspond to operators in the boundary field theory with small scaling dimensions, as stated by the standard AdS/CFT dictionary \cite{adscftbib}-\cite{adscftbib4}. In order to achieve this result, we first propose to interpret the mutual information in terms of correlators of surface operators. These can be realized in terms of a probe D3-brane using the AdS/CFT correspondence \cite{surface}. An operator product expansion (OPE) for the long distance mutual information written in terms of these correlators is then provided. The expansion is in accordance with a recent proposal given in \cite{maldacena}, where the authors provide a long distance OPE for the mutual information $\mathcal{I}_{AB}$ between disjoint regions, inspired by an OPE for the mutual information in CFT previously discussed in \cite{headrick} and \cite{mi_ope3}. There, the expected leading contributions come from the exchange of pairs of operators $\mathcal{O}_A, \mathcal{O}_B$ located in $A$ and $B$, each with a small scaling dimension $\Delta$.
The OPE reads as, \begin{equation} \label{eq0} \mathcal{I}_{AB}\sim \sum C_{\Delta}\left\langle \mathcal{O}^{\Delta}_A\mathcal{O}^{\Delta}_B \right\rangle ^2\sim \sum C_{\Delta} \left( \frac{1}{L}\right) ^{4\Delta}+\cdots, \end{equation} where $L$ is the distance between $A$ and $B$ and $C_{\Delta}$ comes from squares of OPE coefficients. Thus, when considering a CFT with a gravity dual, one must deal with a quantum field theory in a fixed background geometry, and the long distance expansion for the mutual information reduces to an expression similar to (\ref{eq0}), where now one should consider the exchange of the lightest bulk particles. The direct computation of the one-loop bulk corrections to the holographic entanglement entropy and R\'enyi entropies of two widely separated disjoint intervals in a 1+1 CFT has been explicitly addressed in \cite{hartnoll}. Here we ask if a simpler procedure can be used to learn, at least, some basic properties of the long range expansion of $\mathcal{I}_{AB}$ in higher dimensional theories. \section{Mutual Information, Twist Operators and Surface Operators} Our aim is to provide an OPE for the holographic mutual information in AdS$_5$ in terms of correlators of surface operators $\mathcal{W}\left(\Sigma\right) $ of the dual $\mathcal{N}=4$ SYM gauge theory. To this end, in this section we first present some general properties of $\mathcal{I}_{AB}$ for subsystems that are weakly coupled to each other. We show a result that foreshadows the long distance expansion (\ref{eq0}) on very general grounds. Then we review the twist operators and their role in computing the entanglement entropy and the mutual information in quantum field theory through the replica trick method \cite{calcard,calcard2,calcard3}. Based on this, an OPE for the long distance mutual information is given. We also discuss the strong analogies between the twist operators and the surface operators in gauge theories.
\subsection{Mutual Information between weakly coupled subsystems} We assume, following \cite{headrick}, that the nearly factorized density matrix of two subsystems $A$ and $B$ separated by a distance $L$ much bigger than their characteristic sizes is given by, \begin{equation} \label{eqa01} \rho_{A \cup B} = \rho_{0} + \epsilon\, \rho_{1} + \epsilon^2\, \rho_{2}, \end{equation} where $\rho_{0}=\rho_{A}\otimes\rho_{B}$ with ${\rm tr}\, \rho_{0}=1$, ${\rm tr}\, \rho_{1}={\rm tr}\, \rho_{2}=0$ and $\epsilon \ll 1$. As a result, at order $\epsilon^2$, the entanglement entropy $S_{A \cup B}$ may be written as, \begin{eqnarray} \label{eqa01b} S_{A \cup B} &=& - {\rm tr}\, [\rho_{A \cup B}\, \log\, \rho_{A \cup B}] \\ \nonumber &= &- {\rm tr}\, [(\rho_{0} + \epsilon\, \rho_{1}+ \epsilon^2\, \rho_{2})\, \log\, (\rho_{0} + \epsilon\, \rho_{1}+ \epsilon^2\, \rho_{2})]\\ \nonumber &\approx & - {\rm tr}\, (\rho_{0}\log \rho_{0}) - \epsilon\, {\rm tr}\, ((\rho_{1}+ \epsilon\, \rho_{2})\log\, \rho_{0}) - \epsilon^2\, {\rm tr}\, (\rho_{0}^{-1}\, \rho_{1}^{2})\\ \nonumber & = & S_A + S_B - \epsilon\, {\rm tr}\, ((\rho_{1}+ \epsilon\, \rho_{2})\log\, \rho_{0}) - \epsilon^2\, {\rm tr}\, (\rho_{0}^{-1}\, \rho_{1}^{2}), \end{eqnarray} so, the mutual information at this order reads as, \begin{equation} \label{eqa01b2} \mathcal{I}_{AB} \sim \epsilon\, {\rm tr}\, ((\rho_{1}+ \epsilon\, \rho_{2})\log\, \rho_{0}) + \epsilon^2\, {\rm tr}(\rho_{0}^{-1}\, \rho_{1}^2). \end{equation} Thus, it is straightforward to realize that at first order in $\epsilon$, the mutual information must vanish since $\epsilon$ could take either sign while $\mathcal{I}_{AB}$ is always non-negative. Hence, the first non zero contribution to the mutual information is given by, \begin{equation} \label{eqa02} \mathcal{I}_{AB}\sim \epsilon^2\, {\rm tr}(\rho_{0}^{-1}\, \rho_{1}^2), \end{equation} which does not depend on $\rho_{2}$. 
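The scaling argument above can be checked numerically. The following sketch (a minimal illustration, assuming NumPy is available; the Hilbert-space dimensions, random seed and perturbation are arbitrary choices, not taken from the text) builds a product state $\rho_0=\rho_A\otimes\rho_B$, perturbs it by a traceless Hermitian $\rho_1$ whose partial traces vanish so the marginals are unchanged, and verifies that the mutual information vanishes at first order in $\epsilon$, i.e., that it scales as $\epsilon^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
dA = dB = 2

def rand_state(d):
    # random full-rank density matrix, mixed with the identity for a positivity margin
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = X @ X.conj().T
    rho /= np.trace(rho).real
    return 0.5 * rho + 0.5 * np.eye(d) / d

def traceless_herm(d):
    # traceless Hermitian matrix: tr(H) = 0
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (X + X.conj().T) / 2
    return H - (np.trace(H).real / d) * np.eye(d)

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-14]
    return float(-np.sum(w * np.log(w)))

rhoA, rhoB = rand_state(dA), rand_state(dB)
rho0 = np.kron(rhoA, rhoB)                              # rho_A (x) rho_B
rho1 = np.kron(traceless_herm(dA), traceless_herm(dB))  # tr_A rho1 = tr_B rho1 = 0

def mutual_info(eps):
    rho = rho0 + eps * rho1
    r = rho.reshape(dA, dB, dA, dB)
    SA = entropy(np.trace(r, axis1=1, axis2=3))  # reduced state on A
    SB = entropy(np.trace(r, axis1=0, axis2=2))  # reduced state on B
    return SA + SB - entropy(rho)

I1, I2 = mutual_info(1e-4), mutual_info(5e-5)
print(I1 / I2)  # close to 4: the O(eps) term vanishes, so I_AB = O(eps^2)
```

Halving $\epsilon$ reduces $\mathcal{I}_{AB}$ by a factor of approximately four, consistent with Eq.(\ref{eqa02}).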
It can be shown that the $\epsilon^2$ term in Eq.(\ref{eqa02}) does not generically vanish. Furthermore, since the non vanishing connected correlators between operators located in $A$ and $B$ are given by $\langle \mathcal{O}_A(0)\, \mathcal{O}_B(L)\rangle = {\rm tr}(\rho_1\, \mathcal{O}_A\, \mathcal{O}_B)$, one might expect that, \begin{equation} \label{eqa03} \mathcal{I}_{AB}\sim \epsilon^2\, \langle \mathcal{O}_A\, \mathcal{O}_B\rangle^2 \sim C\, \left( \frac{1}{L}\right)^{4\Delta}, \end{equation} provided that $\langle \mathcal{O}_A(0)\, \mathcal{O}_B(L)\rangle \sim (1/L)^{2\Delta}$. This behaviour obeys the general bound given in \cite{wolf}, \begin{equation} \label{eq_wolf} \mathcal{I}_{AB}\geq \frac{\langle \mathcal{O}_A\mathcal{O}_B\rangle^2}{2\Vert\mathcal{O}_A\Vert^2\Vert\mathcal{O}_B\Vert^2}, \end{equation} where $\Vert \mathcal{O} \Vert$ is the absolute value of the maximum eigenvalue. \subsection{Twist operators} We now consider the computation of the entanglement entropy of a region (interval) $A$ in a (1+1)-dimensional CFT, where $S_A$ is computed via the replica trick \cite{calcard,calcard2,calcard3} as, \begin{equation} \label{eq01} S_A=-\partial_n\, {\rm tr}\, \rho_A^n\vert_{n=1} = -\partial_n\, \log\, {\rm tr}\, \rho_A^n\vert_{n=1}, \end{equation} with $\rho_A$ the reduced density matrix of the region $A$ and ${\rm tr}\, \rho_A = 1$. The method relies on the computation of ${\rm tr}\, \rho_A^n$ as a path integral over an $n$-sheeted Riemann surface, each sheet containing a copy of the CFT under consideration. This path integral happens to be equivalent to the path integral of the symmetric product of the $n$ copies of the original CFT (whose central charge is given by $nc$), defined on a single $\mathbb{R}^2$ sheet.
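Eq.(\ref{eq01}) can be verified symbolically on any fixed spectrum of $\rho_A$. The following sketch (assuming SymPy is available; the sample spectrum is an arbitrary choice) checks that both forms of the replica formula reproduce the von Neumann entropy $-\sum_i p_i \log p_i$:

```python
import sympy as sp

n = sp.symbols('n')
p = [sp.Rational(1, 2), sp.Rational(1, 3), sp.Rational(1, 6)]  # sample spectrum, sums to 1
tr_rho_n = sum(q**n for q in p)                                # tr rho_A^n = sum_i p_i^n

S_replica = -sp.diff(tr_rho_n, n).subs(n, 1)              # -d/dn tr rho_A^n |_{n=1}
S_replica_log = -sp.diff(sp.log(tr_rho_n), n).subs(n, 1)  # same via the log, since tr rho_A = 1
S_von_neumann = -sum(q * sp.log(q) for q in p)

assert sp.simplify(S_replica - S_von_neumann) == 0
assert sp.simplify(S_replica_log - S_von_neumann) == 0
```

The two forms agree precisely because ${\rm tr}\,\rho_A^n$ equals one at $n=1$.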
Remarkably, ${\rm tr}\, \rho_A^n$ can be written as the two point function of two vertex-like point operators $\Phi_n^+(u)$ and $\Phi_n^-(v)$ called twist operators, inserted at the two boundary points ${u,v}$ of $A$ in the path integral, i.e., \begin{equation} \label{eq02} {\rm tr} \rho_A^n = \left\langle \Phi_n^+(u)\, \Phi_n^-(v)\right\rangle. \end{equation} The twist operators are actually primary operators with scaling dimensions $\Delta_n = \frac{c}{12}(n-\frac{1}{n})$, determined by the central charge of the CFT and the number of replicas $n$. They account for the conical singularities appearing as one joins the $n$ copies of the CFT in the $n$-sheeted surface formulation of the path integral. In $d+1$ dimensions, one may also compute ${\rm tr}\, \rho_A^n$ as a path integral over an $n$-sheeted Riemann surface. This multi-sheeted surface has a conical singularity along the boundary $\partial A$ of the region $A$ for which one is computing the entropy. It is expected that this path integral can be written as a path integral on a single-sheeted surface with an inserted twist-like operator $\mathcal{T}_n\left[\partial A\right]$ defined along the boundary $\partial A$. Thus, ${\rm tr} \rho_A^n = \left\langle \mathcal{T}_n\left[\partial A\right] \right\rangle $ and, in the absence of further operator insertions, ${\rm tr} \rho_A = 1$. Here, the operator $ \mathcal{T}_n\left[\partial A\right]$ is no longer point-like, becoming instead an extended operator such as a line operator in 2+1 dimensions or a surface operator in 3+1 dimensions. As pointed out in \cite{swingle10}, a key realization about twist fields in a (1+1)-dimensional CFT is their resemblance to operators built as the exponential of a massless field, i.e., a vertex operator in a free boson CFT.
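With the standard Calabrese--Cardy normalization ${\rm tr}\,\rho_A^n \sim (\ell/\epsilon)^{-2\Delta_n}$ for an interval of length $\ell=|u-v|$ and UV cutoff $\epsilon$, the replica formula applied to the two-point function (\ref{eq02}) reproduces the well-known single-interval result $S_A=(c/3)\log(\ell/\epsilon)$. A short symbolic check (assuming SymPy; the normalization of the two-point function is the standard one, not spelled out in the text):

```python
import sympy as sp

n, c, ell, eps = sp.symbols('n c ell epsilon', positive=True)

Delta_n = c / 12 * (n - 1 / n)            # twist-operator dimension quoted in the text
tr_rho_n = (ell / eps) ** (-2 * Delta_n)  # <Phi_n^+(u) Phi_n^-(v)> with |u - v| = ell

# replica trick: S_A = -d/dn log tr rho_A^n at n = 1
S_A = -sp.diff(sp.log(tr_rho_n), n).subs(n, 1)
assert sp.simplify(S_A - (c / 3) * sp.log(ell / eps)) == 0
```

Note that $\Delta_1=0$, so ${\rm tr}\,\rho_A = 1$ is recovered automatically at $n=1$.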
In practice, the construction and properties of these twist fields beyond (1+1) dimensions are poorly understood\footnote{In higher dimensions, the replica trick provides only a formal definition of the twist operators and many of their properties are unknown. See \cite{myers} for very recent advances on these topics.}. Nevertheless, let us briefly discuss how these higher dimensional $\mathcal{T}_n\left[\partial A\right]$ operators exhibit significant analogies with extended operators in gauge theories. Assuming a vertex-like functional structure for a higher dimensional twist field amounts to arguing that it is the exponential of a certain type of massless spatial $(d-1)$-form $F^{(d-1)}$, \begin{equation} \label{eq03} \mathcal{T}_n\left[\partial A\right]=\exp\left(i \alpha_n\, \int_{\partial A} F^{(d-1)}\right), \end{equation} where $\alpha_n$ must be fixed so as to obtain the correct prefactor for the entanglement entropy, which in a strongly coupled field theory is proportional to $N^{\, 2}$. As long as the region $A$ is compact, it is easy to show that $F^{(d-1)}$, and thus $\mathcal{T}_n\left[\partial A\right]$, has a "gauge symmetry" \cite{swingle10}, \begin{equation} \label{eq04} F^{(d-1)}\to F^{(d-1)} + d\Lambda^{(d-2)}, \end{equation} with $\Lambda^{(d-2)}$ an arbitrary spatial $(d-2)$-form. Let us further illustrate the \emph{ansatz} in Eq.(\ref{eq03}) by considering a scalar field theory $\phi$ in 3+1 dimensions and a set of $n$ replica fields $\lbrace \phi_n \rbrace$. These fields furnish a representation of the cyclic permutation subgroup $\mathbb{Z}_n$ generated by the twist operator $\mathcal{T}_n\left[\partial A\right]$, \begin{equation} \mathcal{T}_n\left[\partial A\right]: \phi_n \longrightarrow \phi_{n \pm 1} \quad {\rm mod}\, n. \end{equation} In other words, the twist operator $\mathcal{T}_n\left[\partial A\right]$ is the analog in the original multi-sheeted surface of moving from one sheet to the next (previous) one.
Now, it is useful to introduce the linear combinations of the replica fields, \begin{equation} \widetilde{\phi}_k \equiv \sum_{j=1}^{n}\, e^{2\pi i\frac{k}{n} j} \, \phi_j, \quad k=0,1,\cdots, n-1, \end{equation} which are phase shifted by the factor $\lambda_k= e^{2\pi i k/n}$ as they encircle the codimension-2 spacetime region on which the twist operator is defined, i.e., they diagonalize the twist operator, \begin{equation} \mathcal{T}_n\left[\partial A\right]\, \widetilde{\phi}_k = \lambda_k\, \widetilde{\phi}_k. \end{equation} Namely, the twist operator $\mathcal{T}_n[\partial A]$ can be written as a product of operators $\mathcal{T}_{n,k}[\partial A]$ acting only on $\widetilde{\phi}_k$, \begin{equation} \mathcal{T}_n\left[\partial A\right]= \prod_{k=0}^{n-1}\, \mathcal{T}_{n,k}[\partial A], \end{equation} with $\mathcal{T}_{n,k}[\partial A] \widetilde{\phi}_{k'}=\widetilde{\phi}_{k'} $ if $k \neq k'$ and $\mathcal{T}_{n,k}[\partial A] \widetilde{\phi}_{k}= \lambda_k\, \widetilde{\phi}_k$. The way the field $\widetilde{\phi}_k $ picks up the phase shift $\lambda_k$ resembles the Aharonov-Bohm effect. Namely, since $\vert \lambda_k \vert=1$, one might introduce 2-form gauge fields $F^{(k)}$ to account for these phase shifts. These fields are normal gauge fields with a singular behaviour along the codimension-2 locus where the twist operator is defined. In a (3+1) dimensional theory, this locus amounts to a closed 2-dimensional surface. Therefore, the twist operator $\mathcal{T}_n\left[\partial A\right]$ would be some 2-dimensional \emph{surface} operator introducing a branch cut in the path integral over the $n$-fold replicated theory. In case the entangling surface $\partial A$ is a static $S^2$ sphere, the twist operator residing on it acts by opening a branch cut over the ball in its interior.
Noticing that $\mathbb{Z}_n$ acts on $\lbrace \widetilde{\phi}_k \rbrace$ as a global $U(1)$ charge symmetry, the twist operator $\mathcal{T}_{n,k}[\partial A] $ can be defined by (see Eq.(\ref{eq03})), \begin{equation} \mathcal{T}_{n,k}\left[\partial A\right]\sim \exp\left(i \int_{\partial A}\, F^{(k)} \right), \end{equation} where $F^{(k)}$ encodes the flux which generates the phase shift $\lambda_k$. A similar analysis has been carried out in \cite{casini0} in the two dimensional case, where the twist field is point-like and local. There, the authors first discussed the interpretation of the twist fields as vortex-like operators. Finally, we also note that the mutual information between two regions $A$ and $B$ can be written in terms of the twist operators $\mathcal{T}_n\left[\partial A\right]$ and $\mathcal{T}_n\left[\partial B\right]$ as, \begin{equation} \label{eq06} \mathcal{I}_{AB}=\partial_n\left[\log\, \frac{ \left\langle \mathcal{T}_n\left[\partial A\right]\, \mathcal{T}_n\left[\partial B\right] \right\rangle }{\left\langle \mathcal{T}_n\left[\partial A\right] \right\rangle \left\langle \mathcal{T}_n\left[\partial B\right] \right\rangle} \right]_{n=1}, \end{equation} which amounts to computing the connected correlation function between $\mathcal{T}_n\left[\partial A\right]$ and $\mathcal{T}_n\left[\partial B\right]$.
As an example, in CFT$_2$, if one considers two disconnected intervals $A=[u_1,\, v_1]$, $B=[u_2,\, v_2]$ $(u_1 < v_1< u_2 < v_2)$ such that $\partial A=\lbrace u_1,\, v_1\rbrace$ and $\partial B=\lbrace u_2,\, v_2\rbrace$, then Eq.(\ref{eq06}) may be written as \cite{headrick}, \begin{equation} \label{eq07} \mathcal{I}_{AB}=\partial_n\left[\frac{1}{n-1}\log\, \frac{ \left\langle \Phi^{+}_n(u_1)\, \Phi^{-}_n(v_1)\, \Phi^{+}_n(u_2)\, \Phi^{-}_n(v_2)\,\right\rangle }{\left\langle \Phi^{+}_n(u_1)\, \Phi^{-}_n(v_1) \right\rangle \left\langle \Phi^{+}_n(u_2)\, \Phi^{-}_n(v_2) \right\rangle} \right]_{n=1}, \end{equation} where $ \Phi^{+}(u),\, \Phi^{-}(v)$ are the point-like twist operators mentioned above. \subsection{Long distance expansion for the Mutual Information} \label{LongDistanceMI} It has been argued in \cite{headrick} that the minimal area prescription in Eq.(\ref{hologEE}) and Eq.(\ref{holoMI}), though providing tempting hints about the structure of correlations in holographic theories at order $G_N$, hides an important part of that structure in situations such as the long distance regime of the mutual information. Here we argue that it may prove helpful to rephrase these quantities in terms of correlators of twist operators (Eq.(\ref{eq06})) since, once this approach is taken, it is in principle possible to obtain an OPE of these correlators from which $(G_N)^q$, $q\geq0$ corrections to Eq.(\ref{holoMI}) might be derived. Let us elaborate on this claim. The twist operator $\mathcal{T}_n\left[\partial A\right]$ can be expanded in a series of local operators $\mathcal{O}^{A}_i$ when probed from a distance $L$ much larger than the characteristic size $a$ of the region $A$ as, \begin{equation} \label{eqa04} \mathcal{T}_n\left[\partial A\right] = \langle \mathcal{T}_n\left[\partial A\right] \rangle\, \left( 1 + \sum_i\, \mathcal{C}_i^{A}(a, \Delta_{i},0)\, \mathcal{O}^{A}_i(0)\right), \end{equation} where $\Delta_{i}$ are the conformal dimensions of the operators.
The exact form of the expansion coefficients $\mathcal{C}_i^{A}(a, \Delta_{i},0)$ is unknown, but generally they should depend both on the scale $a$ and on the reference point at which the operator $\mathcal{O}^{A}_i$ is inserted. Here, we have chosen the reference point to be the center of the spherical region enclosed by the twist operator, i.e., the origin. For the sake of subsequent arguments in this paper, the operators $\mathcal{O}^{A}_i$ are conformal primaries inserted in a single copy of the $n$-folded replica trick construction, while in general they consist of products of two or more such operators inserted at the same point but in different copies of the CFT \cite{myers}. A similar expansion also holds for the twist operator $\mathcal{T}_n\left[\partial B\right]$ defined along the boundary of a region $B$ with characteristic size $a$ located at a distance $L$ from the origin, \begin{equation} \label{eqa05} \mathcal{T}_n\left[\partial B\right]= \langle \mathcal{T}_n\left[\partial B\right] \rangle\, \left( 1 + \sum_j\, \mathcal{C}_j^{B}(a,\Delta_j,L)\, \mathcal{O}^{B}_j(L)\right). \end{equation} Thus, the OPEs and their coefficients $ \mathcal{C}_i^{A},\, \mathcal{C}_j^{B}$ appear as one replaces the regions $A$, $B$ by a sum of local CFT operators\footnote{Henceforth, we simplify the notation by omitting the explicit dependence on the scale $a$, the conformal dimensions and insertion points of the expansion coefficients $ \mathcal{C}_i^{A}$, and $\mathcal{C}_j^{B}$.}.
Assuming that the vacuum expectation value of a single operator vanishes, $\langle \mathcal{O} \rangle = 0$, the connected correlator in Eq.(\ref{eq06}) can be written as, \begin{equation} \label{eqa06} \log \frac{\langle \mathcal{T}_n\left[\partial A\right]\, \mathcal{T}_n\left[\partial B\right]\rangle}{\langle \mathcal{T}_n\left[\partial A\right]\rangle \langle \mathcal{T}_n\left[\partial B\right]\rangle} \sim \sum_{i,j}\, \mathcal{C}_i^{A}\, \mathcal{C}_j^{B}\, \langle \mathcal{O}^{A}_i(0)\, \mathcal{O}^{B}_j(L)\rangle . \end{equation} However, recalling Eqs.(\ref{eqa02})-(\ref{eqa03}), one notices that this OPE for the mutual information cannot be valid, as it only involves ${\rm tr}(\rho_1)$ terms $(\sim \langle \mathcal{O}\, \mathcal{O} \rangle)$, in contrast to the expected ${\rm tr} (\rho_1^2)$ ones $(\sim \langle \mathcal{O}\, \mathcal{O} \rangle^2)$. Let us fix this point by focusing on the 1+1 CFT case. If one performs a sort of OPE such as the one given by Eq.(\ref{eqa06}) on the quantity within the brackets in Eq.(\ref{eq07}), then the computation of $\mathcal{I}_{AB}$ singles out the term that is linear in $(n-1)$. It turns out that the $\langle \mathcal{O}\, \mathcal{O} \rangle$ terms in that expansion are proportional to $(n-1)^2$, as shown in \cite{headrick}, and therefore their contribution vanishes after taking the derivative and the $n \to 1$ limit\footnote{I thank Juan M. Maldacena for some clarifications on this subject.}. As a result, one might be compelled to consider an alternative OPE for $\mathcal{I}_{AB}$ which, while using the long distance expansion for correlators of twist fields, takes into account Eqs.(\ref{eqa02})-(\ref{eqa03}).
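The power counting in $(n-1)$ invoked in this argument can be made explicit in a toy symbolic computation (assuming SymPy; here $x$ is a placeholder for a generic $n$-independent correlator contribution, not a quantity from the text):

```python
import sympy as sp

n, x = sp.symbols('n x')

# A term proportional to (n-1)^2, like the <O O> contributions, drops out
# after the derivative at n = 1:
dropped = sp.diff((n - 1)**2 * x, n).subs(n, 1)
# A term linear in (n-1), like the <O O>^2 contributions, survives:
survives = sp.diff((n - 1) * x, n).subs(n, 1)

assert dropped == 0
assert survives == x
```

This is why only the squared correlators, consistent with Eq.(\ref{eqa03}), remain in the $n\to1$ limit.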
We first notice that the long distance expansion for the operator $ \mathcal{T}_n\left[\partial A\right]$ with a chiral primary operator (CPO) $\mathcal{O}^{B}_{k}$ inserted at $\partial B$ is given by, \begin{equation} \label{eqa07} \frac{\langle \mathcal{T}_n\left[\partial A\right]\, \mathcal{O}^{B}_{k}(L)\rangle}{\langle \mathcal{T}_n\left[\partial A\right]\rangle} = \mathcal{C}_k^{A}\, \langle \mathcal{O}^{A}_{k}(0)\, \mathcal{O}^{B}_{k}(L)\rangle \sim \mathcal{C}_k^{A}\, \left( \frac{1}{L} \right)^{2\Delta_k}, \end{equation} where $L$ is the distance between regions $A$ and $B$ and $\Delta_k$ is the scaling dimension of the CPO $\mathcal{O}^{B}_{k}$. Similarly, the long distance expansion for the correlator of $\mathcal{T}_n\left[\partial B\right]$ with a CPO $\mathcal{O}^{A}_{m}$ inserted at $\partial A$ is given by, \begin{equation} \label{eqa08} \frac{\langle \mathcal{O}^{A}_{m}(0)\, \mathcal{T}_n\left[\partial B\right]\rangle}{\langle \mathcal{T}_n\left[\partial B\right]\rangle} = \mathcal{C}_m^{B}\, \langle \mathcal{O}^{A}_{m}(0)\, \mathcal{O}^{B}_{m}(L)\rangle \sim \mathcal{C}_m^{B}\, \left( \frac{1}{L} \right)^{2\Delta_m}, \end{equation} with $\Delta_m$ the scaling dimension of the CPO $\mathcal{O}^{A}_{m}$. As a consequence, it is reasonable to propose a long distance OPE for the mutual information which jointly takes into account the long distance correlators of each one of the twist fields with all the CPOs which one might find inserted in the other region.
This can be written as, \begin{eqnarray} \label{eqa09} \mathcal{I}_{AB} &\sim & \partial_n\, \left[ \sum_{k,\, m}\, \frac{\langle \mathcal{T}_n\left[\partial A\right]\, \mathcal{O}^{B}_{k}(L)\rangle}{\langle \mathcal{T}_n\left[\partial A\right]\rangle}\, \frac{\langle \mathcal{O}^{A}_{m}(0)\, \mathcal{T}_n\left[\partial B\right]\rangle}{\langle \mathcal{T}_n\left[\partial B\right]\rangle}\right]_{n=1} \\ \nonumber &= & \sum_{k}\, C_{k}\, \left( \frac{1}{L^{2}}\right)^{2\Delta_k} + \sum_{k \neq m}\, \partial_n\left[ \mathcal{C}_k^{A}\, \mathcal{C}_m^{B}\right]_{n=1} \, \left( \frac{1}{L^{2}}\right)^{\Delta_k + \Delta_m}, \end{eqnarray} with $C_k = \partial_n\left[ \mathcal{C}_k^{A}\, \mathcal{C}_k^{B}\right]_{n=1} $. This "OPE" accommodates the very general requirements for the behaviour of $\mathcal{I}_{AB}$ between weakly coupled regions shown above, while its coefficients are a byproduct of the OPE between the twist fields and the CPOs of the CFT. At this point it is worth noting that little is known about twist fields in higher dimensional CFTs, let alone about the coefficients $\mathcal{C}_k$ of the OPE. As discussed above, these seem to be line or surface-like operators of a sort, with properties analogous to the better known line and surface operators of gauge theories. Therefore, it is tempting to access the properties of the mutual information in higher dimensional theories through the properties of these higher dimensional gauge operators, especially in situations where the benefits of computing through the AdS/CFT correspondence are manifest. This also relates to the question of to what extent some information-theoretic quantities such as the mutual information might determine the underlying QFT \cite{casini}.
In this sense, one may realize, following \cite{casini}, that as the entropy $S_{A \cup B}$ for very distant regions $A$ and $B$ approaches the sum of entropies $S_A + S_B$, the vacuum expectation value (VEV) of the product of operators $\mathcal{W}_A$ and $\mathcal{W}_B$ defined on $A$ and $B$ factorizes into the product of VEVs, so the exponential ansatz for $\mathcal{I}_{AB}$, \begin{equation} e^{\mu\, \mathcal{I}_{AB}} = \frac{\langle \mathcal{W}_A\, \mathcal{W}_B \rangle}{\langle \mathcal{W}_A\rangle\, \langle \mathcal{W}_B \rangle}, \end{equation} where $\mu$ is a number, is exactly what one might expect in order to account for the clustering properties of correlators and entropies. This ansatz is a mapping that must respect both Poincar\'e symmetry and causality. The causality constraint imposes that $\mathcal{W}_A$, which in principle is a product of operators fully supported on $A$, should be the same for all the spatial surfaces with the same boundary $\partial A$. This implies that $\mathcal{W}_A$ must be localized on $\partial A$, which in more than one spatial dimension once more suggests that it may be some kind of generalized 'Wilson loop' operator of the theory under consideration. Here, it is worth recalling that in Eqs.(\ref{eqa07},\, \ref{eqa08},\, \ref{eqa09}), one must deal with the correlators of the twist operators $\mathcal{T}_n$ with the primary operators $\mathcal{O}$ of the theory inserted in a single copy of the CFT, for instance, the first of the $n$ copies.
At this point, we follow \cite{myers} in order to construct (at least formally) a surface-like effective twist operator $\widetilde{\mathcal{T}}_n$ which only acts within the first copy of the CFT by reproducing any correlator of the form, \begin{equation} \langle \mathcal{T}_n\, \mathcal{O}\rangle = \langle \widetilde{\mathcal{T}}_n\, \mathcal{O}\rangle_{\mathbf{1}}, \label{effectwist} \end{equation} where the subscript on the second correlator means that its computation is carried out on the first single copy of the CFT. As in the two dimensional case, it is reasonable to assume that some roles of these effective twist operators, such as imposing the correct boundary conditions on the fields of the theory through their vortex-like singularities, are shared by codimension-2 surface-like operators of the CFT. Under this assumption, our approach here will consist in modifying Eq.(\ref{eqa09}) by means of the effective twist operator construction in Eq.(\ref{effectwist}) and then replacing $\langle \widetilde{\mathcal{T}}_n\left[\Sigma \right] \, \mathcal{O}\rangle_{\mathbf{1}}$ with the correlation function $\langle \mathcal{W}\left[\Sigma \right] \, \mathcal{O}\rangle$ between a surface operator $\mathcal{W}\left[\Sigma \right]$ of the CFT and a primary operator $\mathcal{O}$, with $\Sigma$ the spatial surface on which the operators are defined.
As a consequence, provided they can be computed, one may probe the long distance behaviour of $\mathcal{I}_{AB}$ by means of the OPE between the surface operators $\mathcal{W}\left[ \partial A,\, 0\right]$, $\mathcal{W}\left[ \partial B,\, L\right]$ and the CPOs of the gauge theory under consideration, \begin{eqnarray} \frac{\langle \mathcal{W}\left[\partial A, 0\right]\, \mathcal{O}^{B}_{k}(L)\rangle}{\langle \mathcal{W}\left[\partial A, 0\right]\rangle} & = & \widetilde{\mathcal{C}}_k^{A}\, \langle \mathcal{O}^{A}_{k}(0)\, \mathcal{O}^{B}_{k}(L)\rangle \sim \widetilde{\mathcal{C}}_k^{A}\, \left( \frac{1}{L} \right)^{2\Delta_k},\\ \nonumber \frac{\langle \mathcal{O}^{A}_{m}(0)\, \mathcal{W}\left[\partial B, L\right]\rangle}{\langle \mathcal{W}\left[\partial B, L\right]\rangle} & = & \widetilde{\mathcal{C}}_m^{B}\, \langle \mathcal{O}^{A}_{m}(0)\, \mathcal{O}^{B}_{m}(L)\rangle \sim \widetilde{\mathcal{C}}_m^{B}\, \left( \frac{1}{L} \right)^{2\Delta_m}, \end{eqnarray} where the coefficients $\widetilde{\mathcal{C}}_i^{A},\, \widetilde{\mathcal{C}}_j^{B}$ depend explicitly on the characteristic size of the spatial regions $A$ and $B$, as well as on the insertion points and the scaling dimensions of the CPOs.
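Products of such correlators decay as powers of $1/L^2$, so in the resulting sum over primaries the operator with the smallest scaling dimension dominates at large separation. A quick numerical sketch (the scaling dimensions and positive coefficients below are hypothetical placeholders, not values derived in the text):

```python
import numpy as np

# hypothetical lightest scaling dimensions and OPE-squared coefficients
deltas = np.array([1.0, 1.5, 2.0])
coeffs = np.array([1.0, 5.0, 10.0])

def I_AB(L):
    # sum of contributions C_k * (1/L^2)^(2*Delta_k)
    return float(np.sum(coeffs * (1.0 / L**2) ** (2 * deltas)))

def lead(L):
    # contribution of the lightest operator alone
    return coeffs[0] * (1.0 / L**2) ** (2 * deltas[0])

ratios = [I_AB(L) / lead(L) for L in (10.0, 100.0, 1000.0)]
print(ratios)  # tends to 1: the lightest operator dominates as L grows
```

Heavier operators are suppressed by extra powers of $1/L^2$, so the ratio of the full sum to the leading term approaches one.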
Finally, the long distance expansion for $\mathcal{I}_{AB}$ written in terms of these correlators reads as, \begin{eqnarray} \label{su_mi:ope} \mathcal{I}_{AB} &\sim & \sum_{k, m}\, \frac{\langle \mathcal{W}\left[\partial A,0\right]\, \mathcal{O}^{B}_{k}(L)\rangle}{\langle \mathcal{W}\left[\partial A, 0\right]\rangle}\, \frac{\langle \mathcal{O}^{A}_{m}(0)\, \mathcal{W}\left[\partial B, L\right]\rangle}{\langle \mathcal{W}\left[\partial B, L\right]\rangle} \\ \nonumber &=& \left[ \sum_{k}\, \widetilde{C}_{k}\, \left( \frac{1}{L^{2}}\right)^{2\Delta_k} + \sum_{k \neq m}\, \widetilde{\mathcal{C}}_k^{A}\, \widetilde{\mathcal{C}}_m^{B}\, \left( \frac{1}{L^{2}}\right)^{\Delta_k + \Delta_m}\right], \end{eqnarray} where $\widetilde{C}_k = \widetilde{\mathcal{C}}_k^{A}\, \widetilde{\mathcal{C}}_k^{B}$. The sums arise by considering all the possible local primary operators of the CFT which one might expect to find inserted at each one of the surfaces $\partial A, \partial B$. This is precisely the scenario that will be considered in the remainder of this paper. As in the two dimensional case \cite{mi_ope2}, the leading contributions to $\mathcal{I}_{AB}$ in Eq.(\ref{su_mi:ope}) are controlled by the conformal primaries of the theory. Nevertheless, while in a (1+1)-dimensional CFT the expansion coefficients only depend on the correlation function of these operators, in the higher dimensional case these coefficients depend non-trivially on the geometry of the regions $A$ and $B$, as has been mentioned above. \section{Mutual Information in $\mathcal{N}=4$ SYM from AdS$_5 \times$ S$^5$} We analyze the mutual information between two static spherical 3-dimensional regions $A$ and $B$ with radius $a$ and separated by a distance $L\gg a$, in the $\mathcal{N} = 4$ SYM theory dual to Type IIB superstring theory on AdS$_5 \times S^5$.
To this aim, we first briefly review the holographic realization of surface operators in the gauge theory and then, using the arguments exposed above, a long distance expansion of the mutual information in terms of the correlators between these operators and the chiral primaries of the theory is provided. \subsection{Surface Operators in $\mathcal{N}=4$ SYM gauge theory.} Operators in a 4-dimensional gauge theory can be classified according to the spacetime locus on which they are supported. Codimension-4 operators are point-like local operators that have been extensively studied in the AdS/CFT correspondence. Codimension-3 operators are one dimensional operators such as the Wilson and 't Hooft loops. Two dimensional surface operators $\mathcal{W}(\Sigma)$ are defined along a codimension-2 surface $\Sigma \subset \mathcal{M}$, where $\mathcal{M}$ is the spacetime manifold on which the theory is defined \footnote{For previous work involving codimension two singularities in a gauge theory, see \cite{preskill}}. The latter were studied by Gukov and Witten in the context of the geometric Langlands program, where they classified them in order to understand the action of S-duality \cite{gukov_witten1, gukov_witten2}. In a theory with a gauge group $G=U(1)$ \footnote{For simplicity we have considered a $U(1)$ gauge field, but indeed, for $U(N)$, there are different types of surface operators labelled by partitions of $N$.}, surface operators are disorder operators which, like 't Hooft operators, can be defined by requiring the gauge field to have a prescribed vortex-like singularity along the surface $\Sigma$: \begin{equation} F = 2\pi \alpha\, \delta_{\Sigma} + {\rm smooth}, \end{equation} where $F$ is the gauge field curvature 2-form and $\delta_{\Sigma}$ is a 2-form delta function that is Poincar\'e dual to $\Sigma$. Then, the new path integral is over fields with this prescribed singularity along $\Sigma$.
This amounts to introducing a phase factor $\eta$ in the path integral by inserting the operator, \begin{equation} \exp\left(i\, \eta\, \int_{\Sigma}\, F \right). \end{equation} Thus, one needs to consider the path integral with a special prescribed singularity along a codimension-2 manifold $\Sigma$. The fields of the theory acquire the phase factors $\eta$ as they encircle the codimension-2 surface $\Sigma$ due to their singular behaviour near it. As puzzling as they may seem, these singularities are rather ubiquitous in theories with vortex-like disorder operators, such as the discontinuities induced on the fields of the theory by twist (or effective twist) operators in a higher dimensional CFT. As in the case of a two dimensional CFT, these discontinuities are consistent as long as the correlation functions of physical operators remain well defined. Some remarkable calculations involving disorder-like surface operators in the context of the AdS/CFT correspondence have been carried out both in a four dimensional gauge theory \cite{surface} and in a three dimensional theory \cite{vortex}. In the large $N$ and large 't Hooft coupling $\lambda$ limit of the four dimensional $\mathcal{N}=4$ SYM theory, the vortex-like surface operators can be holographically described in terms of a D3-brane in AdS$_5 \times S^5$ with a worldvolume $Q\times S^1$, where $S^1 \subset S^5$ and $Q \subset$ AdS$_5$ is a volume minimizing 3-manifold with boundary, \begin{equation} \partial Q =\Sigma \subset \mathcal{M}. \end{equation} Likewise, the holographic M-theory representation of a one dimensional vortex-like operator in the ABJM three dimensional $\mathcal{N}=6$ supersymmetric Chern-Simons theory \cite{abjm} amounts to an M2-brane ending along a one dimensional curve on the boundary of AdS$_4 \times S^7/\mathbb{Z}_k$. Both descriptions are probe brane approximations.
These are valid when the vortex-like operators under consideration have singular values only in the $U(1)$ factor of the unbroken gauge group $U(1) \times SU(N-1)$, which is the case that will be considered in this paper. When the singular behaviour of the gauge fields is not so restricted, the disorder operators correspond to arrays of branes from which a purely geometric description in terms of regular ``bubbling'' geometries can be obtained \cite{gomis2}. \subsection{Long distance expansion for the Holographic Mutual Information} We go back to the arguments given at the end of Section 2 and thus consider the OPE for the mutual information (\ref{su_mi:ope}) written in terms of the correlators of surface operators $\mathcal{W}(\Sigma)$ with the chiral primary operators $\mathcal{O}_{k}$, \begin{equation} \label{eq9} \frac{\left\langle \mathcal{W}(\Sigma,0)\, \mathcal{O}_{k}(L)\right\rangle }{\left\langle \mathcal{W}(\Sigma, 0) \right\rangle }, \end{equation} where $\Sigma = \partial A$ or $\partial B$ are the boundaries of the two static spherical regions of radius $a$, $\Delta_{k}$ is the scaling dimension of the primary operator and $L$ is the distance between them. As stated above, in the supergravity approximation, when $N\gg 1$ and the 't Hooft coupling $\lambda \gg 1$, the surface operator $\mathcal{W}(\Sigma)$ is realized by a D3-brane $\subset$ AdS$_5$ ending on the boundary of the spacetime with a tension given by $T_{D3}=N/2\pi^2$ (in units where the AdS$_5$ radius satisfies $R^4=1$). The correlator (\ref{eq9}) is calculated by treating the brane as an external source for a number of propagating bulk fields in AdS and then computing the brane effective action $S_{D3}$ for the emission of the supergravity state associated to the operator $\mathcal{O}_k$ onto the point on the boundary where it is inserted \cite{surface}\footnote{See also \cite{malda98}.}.
The prescription to compute this correlator is to functionally differentiate $S_{D3}$ with respect to the bulk field $s_k$. This yields a correlator which scales with the distance $L$ as, \begin{equation} \label{eq9bis} \frac{\left\langle \mathcal{W}(\Sigma,0)\, \mathcal{O}_{k}(L)\right\rangle }{\left\langle \mathcal{W}(\Sigma, 0) \right\rangle } = -\left. \frac{\delta S_{D3}}{\delta s_k}\right|_{s_k=0}=\widetilde{\mathcal{C}}_{k}\, \left( \frac{1}{L}\right)^{2\Delta_k}. \end{equation} Thus, in the following, the quantities to compute are the OPE coefficients $\widetilde{\mathcal{C}}_k$. We outline the calculation below; full details can be found in \cite{surface}. As a result, our proposal for the long distance expansion of the mutual information given in Eq.(\ref{su_mi:ope}) may be holographically realized in terms of the mutual exchange of bulk particles between the codimension-2 regions $\partial A$ and $\partial B$ on which the disorder surface operators $\mathcal{W}(\Sigma)$ are defined. Namely, its leading contributions should be given by the exchange of pairs of the lightest supergravity particles (those with the smallest scaling dimensions $\Delta_k$), while its coefficients arise as a byproduct of the OPE coefficients appearing in the correlators of these surface operators with the chiral primary operators of the theory (see Figure 1). This proposal thus resembles the picture provided in \cite{maldacena}. \begin{figure}[t] \centering \includegraphics[width=11.5cm]{newfigure.eps} \caption{Two static spherical 3-dimensional regions $A$ and $B$ (shaded grey) of radius $a$ separated by a long distance $L\gg a$, whose boundaries $\partial A$ and $\partial B$ define the surfaces $\Sigma_A$ and $\Sigma_B$ respectively (the figure is drawn in one lower dimension for convenience). $z$ represents the \emph{radial} coordinate in AdS.
Top Left: The emission of a supergravity particle (dotted line) from the D3-brane realization of the surface operator $\mathcal{W}(\Sigma_A)$ onto a point (X $\in \Sigma_B$) on the boundary of AdS where the CPO $\mathcal{O}_B$ is inserted. Bottom Left: The emission of a particle from the D3-brane realization of the surface operator $\mathcal{W}(\Sigma_B)$ onto a point (X $\in \Sigma_A$) on the boundary of AdS where the CPO $\mathcal{O}_A$ is inserted. Right: A leading contribution to the long distance OPE for $\mathcal{I}_{AB}$ is given by the exchange of a pair of the lightest supergravity particles between the surfaces $\Sigma_A$ and $\Sigma_B$.} \label{ope_ansatz} \end{figure} \subsection{Correlators of surface observables with local operators in the probe approximation} We outline the procedure to compute the correlation function (\ref{eq9}). The coupling of the supergravity mode $s^{\Delta}$ (dual to $\mathcal{O}_{\Delta}$) to the D3 probe brane realizing the operator $\mathcal{W}(\Sigma)$ is given by a vertex operator $V_{\Delta}$. This can be determined by expanding the D3-brane action $S_{D3}=S_{D3}^{DBI} + S_{D3}^{WZ}$ to linear order in the fluctuations \cite{surface}. When the local operator $\mathcal{O}_{\Delta}(\vec{x}\, ')$ contributes to the correlator with the surface operator $\mathcal{W}(\Sigma)$, it emits the supergravity field $s^{\Delta}$ at the boundary point $\vec{x}\, '$; this mode propagates on the background AdS$_5 \times S^5$ and is then absorbed by the vertex operator, which must be integrated over the D3-brane realizing the operator $\mathcal{W}(\Sigma)$ in AdS. The bulk field $s^{\Delta}$ has a simple propagator but a rather complicated set of couplings with the supergravity fields accounting for the brane fluctuations.
In order to proceed, one may first write the scalar $s^{\Delta}$ in terms of a source $s_0^{\Delta}$ located at a point $\vec{x}\, '$ on the boundary, \begin{equation} \label{eap1} s^{\Delta}(\vec{x},z)=\int d^4\vec{x}\, '\, G_{\Delta}(\vec{x}\, ';\vec{x},z)\, s_0^{\Delta}(\vec{x}\, '). \end{equation} Here, $G_{\Delta}(\vec{x}\, ';\vec{x},z)$ is the bulk-to-boundary propagator describing the propagation of the supergravity mode from the insertion point $\vec{x}\, '$ of the CPO to the point $(\vec{x},z)$ on the D3 probe brane, \begin{equation} \label{eap2} G_{\Delta}(\vec{x}\, ';\vec{x},z) = c\, \left(\frac{z}{z^2 + \vert \vec{x}-\vec{x}\, ' \vert^2}\right)^{\Delta}, \end{equation} where the constant $c$ is fixed by the normalization of the two-point correlation function $\langle \mathcal{O}_{\Delta}\, \mathcal{O}_{\Delta}\rangle$. As the surface operator $\mathcal{W}(\Sigma)$ is probed from a distance $L$ much larger than its radius $a$, it is possible to approximate, \begin{equation} \label{eap3} G_{\Delta}(\vec{x}\, ';\vec{x},z) \simeq c\, \frac{z^{\Delta}}{ L^{2\Delta}}. \end{equation} Then, it is necessary to write the fluctuations of $S_{D3}$ in terms of the field $s^{\Delta}$ given by Eq.(\ref{eap1}). This immediately determines $V_{\Delta}$. Furthermore, it also allows one to write the linearized fluctuation contribution of the D3-brane action as, \begin{equation} \label{eap4} S_{D3} = T_{D3}\, \int d\mathcal{A}\, V_{\Delta}\, s^{\Delta}, \end{equation} with $s^{\Delta}$ given in (\ref{eap1}) and $T_{D3}=N/2\pi^2$. In the last expression $d\mathcal{A}$ refers to the volume element of the probe D3-brane.
The correlation function is obtained by functionally differentiating Eq.(\ref{eap4}) with respect to the source $s_0^{\Delta}$, \begin{eqnarray} \label{eap5} \frac{\left\langle \mathcal{W}(\Sigma)\, \mathcal{O}_{\Delta}(\vec{x}_0)\right\rangle }{\left\langle \mathcal{W}(\Sigma) \right\rangle } & =& - \frac{\delta }{\delta s_0^{\Delta}(\vec{x}_0)} T_{D3}\int d\mathcal{A}\, d^4\vec{x}\, '\, V_{\Delta}\, G_{\Delta}(\vec{x}\, ';\vec{x},z)\, s_0^{\Delta}(\vec{x}\, ') \\ \nonumber &=& -T_{D3}\, \int d\mathcal{A}\, V_{\Delta}\, G_{\Delta}(\vec{x}_0;\vec{x},z). \end{eqnarray} If we let $\vec{x}_0$ be parametrized as $(d_1 e^{i\phi_1}, d_2 e^{i\phi_2})$, then, carrying out the integration and using the approximation (\ref{eap3}), one obtains $\widetilde{\mathcal{C}}_{\Delta}$ explicitly as \cite{yamaguchi}, \begin{equation} \widetilde{\mathcal{C}}_{\Delta, p}=\frac{2^{\Delta/2}}{\sqrt{\Delta}} C_{\Delta, p}\, \frac{(2 \pi \beta)^{\Delta}}{\lambda^{\Delta/2}}\, \frac{e^{-ip(\phi_1 + \phi_2)/2}}{(d_1 d_2)^{\Delta/2}}\, (1 + (-1)^{\Delta}), \label{coeff_explicit} \end{equation} where $p=-\Delta, -\Delta+2, \cdots, 0, \cdots, \Delta$ is the momentum of the scalar field in $S^5$, $\beta$ is a parameter of the surface operator related to the geometric embedding of the D3-brane and $C_{\Delta,p}$ is a constant related to the spherical harmonics in $S^5$. \subsection{Contributions from the lightest bulk fields} For 10-dimensional supergravity compactified on AdS$_5 \times S^5$, the ten-dimensional fields may be written as, \begin{equation}\label{SUGRA:fields} \Psi=\sum_{p,I}\, \phi_p\, Y_{(p,I)}, \end{equation} where $\phi_p$ is a five dimensional field and $Y_{(p,I)}$ are the spherical harmonics on $S^5$ with total angular momentum $p$.
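A structural feature of Eq.(\ref{coeff_explicit}) worth flagging is the parity factor $(1+(-1)^{\Delta})$, which makes the OPE coefficient vanish for odd $\Delta$, so only even-dimension primaries couple to the surface operator in this expression. A one-line check:

```python
# Parity factor (1 + (-1)^Delta) from Eq.(coeff_explicit): it equals 2 for
# even Delta and 0 for odd Delta, so odd-dimension primaries drop out of
# the coefficient C_{Delta,p}.
parity = {Delta: 1 + (-1) ** Delta for Delta in range(2, 9)}
print(parity)  # {2: 2, 3: 0, 4: 2, 5: 0, 6: 2, 7: 0, 8: 2}
```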
The full spectrum of 10D-supergravity compactified on $S^5$ was obtained in \cite{sugra} but, in what follows, we will focus only on the lightest scalars $s^{\Delta}$, whose exchange will dominate the long distance behaviour of $\mathcal{I}_{AB}$. These light scalar fluctuations couple to the $\mathcal{N}=4$ SYM operators $\mathcal{O}_{\Delta}$ of the lowest dimensions $\Delta$ which appear in the OPE for the surface operators $\mathcal{W}(\Sigma)$ and $\mathcal{I}_{AB}$. These states solve the Klein-Gordon equation in AdS$_5$, \begin{equation} \label{eq13} \nabla_{\mu}\nabla^{\mu}s^{\Delta}=\Delta(\Delta-4)s^{\Delta}\quad \Delta\geq 2. \end{equation} Note that the field $s^{\Delta}$ has a negative mass squared for $\Delta = 2,3$. However, these modes are not tachyonic, since they propagate on a space of negative curvature. In \cite{surface,yamaguchi} it has been shown in full detail how to obtain the correlator between a surface operator and the lightest of these fields, i.e.\ the scalar with $\Delta=2$. For $p=0$ this yields, \begin{equation} \label{eq14} \frac{\left\langle \mathcal{W}(\Sigma,0)\, \mathcal{O}_{2,0}(L)\right\rangle }{\left\langle \mathcal{W}(\Sigma,0) \right\rangle } = \widetilde{\mathcal{C}}_{2,0}\left( \frac{1}{L}\right)^{4}=\frac{1}{\sqrt{2}}\frac{(4\pi \beta)^2}{d_1 d_2}\frac{C_{2,0}}{\lambda}\left( \frac{1}{L}\right)^{4} \end{equation} which is of order $N^0$. From Eq.(\ref{eq14}) one may determine the contribution of the lightest scalar ($\Delta=2, p=0$) to $\mathcal{I}_{AB}$. This amounts to the leading contribution to the long distance expansion given in Eq.(\ref{su_mi:ope}).
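The stability statement below Eq.(\ref{eq13}) can be made quantitative: $m^2=\Delta(\Delta-4)$ is negative for $\Delta=2,3$ but lies at or above the Breitenlohner-Freedman bound $m^2\geq -d^2/4=-4$ for AdS$_5$, a standard supergravity fact not derived in the text. A minimal check:

```python
# m^2 = Delta*(Delta - 4) from Eq.(eq13); the Breitenlohner-Freedman
# stability bound in AdS_5 (boundary dimension d = 4) is m^2 >= -d^2/4 = -4.
d = 4
bf_bound = -d**2 / 4.0

masses = {Delta: Delta * (Delta - 4) for Delta in range(2, 7)}
assert all(m2 >= bf_bound for m2 in masses.values())
print(masses)  # {2: -4, 3: -3, 4: 0, 5: 5, 6: 12}
# Delta = 2 saturates the bound; Delta = 3 has m^2 = -3: negative mass
# squared yet stable, i.e. not tachyonic in AdS.
```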
Defining $\kappa = \frac{C_{2,0}}{\sqrt{2}}\frac{(4\pi \beta)^2}{d_1 d_2}$, this expansion reads as, \begin{equation} \label{eq16} \mathcal{I}_{AB}\sim \left(\widetilde{\mathcal{C}}_{2}\right)^2\, \left( \frac{1}{L}\right)^{8}\, + \cdots\, = \frac{\kappa^2}{\lambda^2}\, \left( \frac{1}{L}\right)^{8}\, +\, \mathcal{O}(L^{-4\Delta}, \Delta \geq 3), \end{equation} which only depends on $\lambda$. As a result, it has been checked that the leading order of the long distance $\mathcal{I}_{AB}$ provided by the OPE (\ref{su_mi:ope}) is $ (G_N^{\, (5)})^{\, 0}\sim N^{\, 0}$. This $N$ dependence is subleading with respect to the expected $N^{\, 2}$ dependence which holds when a fully connected minimal surface $\gamma_{A \cup B}^{con}$ between the regions $A$ and $B$ is allowed in a holographic computation. Thus, the holographic mutual information $\mathcal{I}_{AB}$ experiences a phase transition marked by a change in the $N$ dependence of its leading contributions but does not vanish sharply due to large $N$ effects. Namely, it smoothly decays following a power law given by Eq.(\ref{eq16}) while parametrically saturating the bound given by Eq.(\ref{eq_wolf}). \section{Conclusions} In this note, we have investigated the structure of the quantum corrections to the holographic mutual information $\mathcal{I}_{AB}$ between two widely separated regions in the $\mathcal{N}=4$ SYM gauge theory dual to AdS$_5 \times S^5$. To this end, we have first recast the correlators of twist field operators related to the computation of the mutual information in terms of correlators between \emph{surface} operators in gauge theories. Namely, it is reasonable to claim that the twist field operators in a $d+1$ theory would be some kind of codimension-2 disorder-like surface operators. As so little is known about the higher dimensional versions of the twist field operators, here we have only relied on the most basic analogies between them and the disorder-like surface operators.
It is worth noting that we have by no means tried to establish an exact identification between them. Further investigations in this direction are surely needed in order to obtain some explicit (holographic or field theoretical) constructions of the twist operators in higher dimensions. In spite of this, we feel that the analogies discussed above are strong enough to obtain valuable information about the $N$-dependence of the first non vanishing quantum corrections to the mutual information. Under this assumption, we have used the AdS/CFT realizations of the surface operators in the probe approximation to provide a long distance expansion for $\mathcal{I}_{AB}$. The coefficients of this expansion arise as a byproduct of the OPE for the correlators of the surface operators with the chiral primary operators of the theory. The results show that in the case under consideration, the mutual information $\mathcal{I}_{AB}$ undergoes a phase transition at a critical distance marked by a change in the $N$ dependence of its leading contributions. Namely, in the large separation regime $\mathcal{I}_{AB}\sim\mathcal{O}(N^0)$, so instead of strictly vanishing, it smoothly decays with a power law shaped by the exchange of pairs of the lightest bulk particles between $A$ and $B$. \section*{Acknowledgments} The author is grateful to G. Sierra, A.V. Ramallo and E. Tonni for giving very valuable insights at different stages of this project. JMV has been supported by Ministerio de Econom\'ia y Competitividad Project No. FIS2012-30625. I thank A.V. Ramallo and J. Mas for their hospitality at Universidad de Santiago de Compostela where this project took its initial steps.
\section{Introduction} Recent observations from type Ia supernovae \cite{SN} associated with Large Scale Structure \cite{LSS} and Cosmic Microwave Background anisotropies \cite{CMB} have provided the main evidence for cosmic acceleration. The combined analysis of cosmological observations suggests that the universe consists of about $70\%$ dark energy, $30\%$ dust matter (cold dark matter plus baryons), and negligible radiation. Although the nature and origin of dark energy are unknown, we can still propose some candidates to describe it: since we know neither where this dark energy comes from nor how to compute it from first principles, we search for phenomenological models. The astronomical observations will then select one of these models. The most obvious theoretical candidate for dark energy is the cosmological constant $\lambda$ (or vacuum energy) \cite{Einstein:1917,cc} which has the equation of state parameter $w=-1$. However, as is well known, two difficulties arise in the cosmological constant scenario, namely the two famous cosmological constant problems --- the ``fine-tuning'' problem and the ``cosmic coincidence'' problem \cite{coincidence}. An alternative proposal for dark energy is the dynamical dark energy scenario. This dynamical proposal is often realized by some scalar field mechanism which suggests that the specific energy form with negative pressure is provided by a scalar field evolving down a proper potential. So far, a plethora of scalar-field dark energy models have been studied, including quintessence \cite{quintessence}, K-essence \cite{kessence}, tachyon \cite{tachyon}, phantom \cite{phantom}, ghost condensate \cite{ghost2} and quintom \cite{quintom}, and so forth. It should be noted that the mainstream viewpoint regards the scalar-field dark energy models as an effective description of an underlying theory of dark energy.
In addition, other proposals on dark energy include interacting dark energy models \cite{intde}, braneworld models \cite{brane}, Chaplygin gas models \cite{cg}, and many others.\\ Currently, an interesting attempt for probing the nature of dark energy within the framework of quantum gravity (and thus to compute it from first principles) is the so-called ``Holographic Dark Energy'' (HDE) proposal \cite{Cohen:1998zx,Horava:2000tb,Hsu:2004ri,Li:2004rb}. It is well known that the holographic principle is an important result of the recent researches for exploring the quantum gravity (or string theory) \cite{holoprin}. The HDE model has been tested and constrained by various astronomical observations \cite{Huang:2004wt,obs3} as well as by the Anthropic Principle \cite{Huang:2004mx}. Furthermore, the HDE model has been extended to include the spatial curvature contribution, i.e. the HDE model in non-flat space \cite{nonflat}. For other extensive studies, see e.g. \cite{holoext}.\\ It is known that the coincidence or ``why now'' problem is easily solved in some models of HDE based on the fundamental assumption that matter and holographic dark energy are not conserved separately \cite{interac,Amendola:2000uh}. In fact a suitable evolution of the Universe is obtained when, in addition to the holographic dark energy, an interaction (decay of dark energy to matter) is assumed.\\ Since we know neither the nature of dark energy nor the nature of dark matter, a microphysical interaction model is not available either. However, pressureless dark matter in interaction with holographic dark energy is more than just another model to describe an accelerated expansion of the universe. It provides a unifying view of different models which are viewed as different realizations of the Interacting HDE Model at the perturbative level \cite{Zimdahl:2007ne}.
Since the discovery of black hole thermodynamics in the 1970s, physicists have speculated on the thermodynamics of cosmological models in an accelerated expanding universe \cite{thermo}. Related to the present work, for time-independent and time-dependent equations of state (EoS), the first and second laws of thermodynamics in a flat universe were investigated in \cite{abdalla}. In particular, for the case of a constant EoS, the first law of thermodynamics is valid for the apparent horizon (Hubble horizon) but it does not hold for the event horizon when viewed as the system's IR cut-off. When the EoS is assumed to be time-dependent, using a holographic model of dark energy in flat space, the same result is obtained: the event horizon, in contrast to the apparent horizon, does not satisfy the first law. Additionally, while the event horizon does not respect the second law of thermodynamics, it holds for the universe enclosed by the apparent horizon. \par\noindent In the present paper we extend the work by Wang, Lin, Pavon, and Abdalla \cite{1} to the interacting HDE model of dark energy in a non-flat universe: we study the thermodynamical interpretation of the interacting holographic dark energy model for a universe enveloped by the event horizon of radius $L$, measured from the sphere of the horizon. The remainder of the paper is as follows. In Section 2 we generalize the thermodynamical picture of the non-interacting HDE model in a non-flat universe. In Section 3, we extend the thermodynamical picture to the case where there is an interaction term between the dark components of the HDE model. An expression for the interaction term in terms of a thermal fluctuation is given. In the limiting case of a flat universe, we obtain the results derived in \cite{1}. Finally, Section 4 is devoted to concluding remarks.
\section{Thermodynamical Picture of the non-Interacting HDE model} \par\noindent In this section we consider the HDE model when there is no interaction between the holographic energy density $\rho_{X}$ and the Cold Dark Matter (CDM) $\rho_{m}$ with $w_{m}=0$. In addition, non-dark components have been considered negligible and thus are not included. The time evolution of the energy densities of the dark components is governed by the continuity equations for the dark energy and CDM \begin{eqnarray} \label{2eq1}&& \dot{\rho}_{X}+3H(1+w_{X}^{0})\rho_{X} =0, \\ \label{2eq2}&& \dot{\rho}_{m}+3H\rho_{m} =0 \end{eqnarray} where the quantity $H=\dot{a}/a$ is the Hubble parameter and the superscript on the equation of state parameter $w_{X}$ denotes that there is no interaction between the dark components. The non-interacting HDE model will be accommodated in the non-flat Friedmann-Robertson-Walker universe which is described by the line element \begin{equation}\label{metr} ds^{2}=-dt^{2}+a^{2}(t)(\frac{dr^2}{1-kr^2}+r^2d\Omega^{2}) \end{equation} where $a=a(t)$ is the scale factor of the non-flat Friedmann-Robertson-Walker universe and $k$ denotes the curvature of space with $k=0,\,1,\,-1$ for flat, closed and open universe, respectively. A closed universe with a small positive curvature ($\Omega_{k}\sim 0.01$) is compatible with observations \cite{wmap,ws}. Thus, in order to connect the curvature of the universe to the energy density, we employ the first Friedmann equation given by \begin{equation} \label{2eq7} H^2+\frac{k}{a^2}=\frac{1}{3M^2_p}\Big[ \rho_{X}+\rho_{m}\Big] \end{equation} where $M_p$ is the reduced Planck mass; the positive constant $c$ of the HDE model enters below through the holographic energy density.
We also define the dimensionless density parameters \begin{equation} \label{2eq9} \Omega_{m}=\frac{\rho_{m}}{\rho_{cr}}=\frac{ \rho_{m}}{3M_p^2H^2} \hspace{1ex},\hspace{1cm} \Omega_{X}=\frac{\rho_{X}}{\rho_{cr}}=\frac{ \rho_{X}}{3M^2_pH^2} \hspace{1ex},\hspace{1cm} \Omega_{k}=\frac{k}{a^2H^2} \hspace{1ex}. \end{equation} \par\noindent Therefore, we can rewrite the first Friedmann equation as \begin{equation} \label{2eq10} \Omega_{m}+\Omega_{X}-\Omega_{k}=1\hspace{1ex}. \end{equation} For completeness, we give the deceleration parameter% \begin{equation} \label{qequ} q=-\frac{\ddot{a}}{H^2a}=-\left(\frac{\dot{H}}{H^2}+1\right) \hspace{1ex} \end{equation} which, combined with the Hubble parameter and the dimensionless density parameters, forms a set of useful parameters for the description of the astrophysical observations. It should be stressed that in the non-flat universe the characteristic length which plays the role of the IR-cutoff is the radius $L$ of the event horizon measured on the sphere of the horizon and not the radius $R_{h}$ measured in the radial direction. Therefore, the holographic dark energy density is given as \begin{equation} \label{density1} \rho_{X}=\frac{3c^{2}M^{2}_{p}}{L^{2}} \hspace{1ex}. \end{equation} The radius $L$ is given by \begin{equation} \label{radius1} L=a r(t) \end{equation} where the function $r(t)$ is defined through the equation \begin{equation} \label{radius2} \int_{0}^{r(t)}\frac{dr}{\sqrt{1-k r^2}}=\frac{R_{h}}{a} \hspace{1ex}. \end{equation} Solving the above equation in the general non-flat case, the function $r(t)$ is given as \begin{equation} \label{radius3} r(t)=\frac{1}{\sqrt{k}}\sin y \end{equation} where \begin{equation} \label{argument} y=\frac{\sqrt{k} R_{h}}{a} \hspace{1ex}.
\end{equation} Substituting equation (\ref{density1}) in the expression for the dimensionless density parameter of the holographic dark energy as given by equation (\ref{2eq9}), one gets \begin{equation} \label{HL1} HL=\frac{c}{\sqrt{\Omega_{X}^{0}}} \end{equation} and thus \begin{equation} \label{HL2} \dot{L}=HL+a\dot{r}(t)=\frac{c}{\sqrt{\Omega_{X}^{0}}}-\cos y \hspace{1ex}. \end{equation} Differentiating the holographic dark energy density as given by equation (\ref{density1}) and using equations (\ref{HL1}) and (\ref{HL2}), one gets \begin{equation} \label{density2} \dot{\rho}_{X}=-2H\left(1-\frac{\sqrt{\Omega_{X}^{0}}}{c}\cos y\right)\rho_{X} \end{equation} and thus the conservation equation for the holographic dark energy (\ref{2eq1}) yields \begin{equation} \label{eosp1} 1+3\omega_{X}^{0}=-2\frac{\sqrt{\Omega_{X}^{0}}}{c}\cos y \hspace{1ex}. \end{equation} Following \cite{1} (see also \cite{Pavon:2007gt}), the non-interacting HDE model in the non-flat universe as described above is interpreted as a state in thermodynamical equilibrium. According to the generalization of black hole thermodynamics to the thermodynamics of cosmological models, we have taken the temperature of the event horizon to be $T_L=1/(2\pi L)$, which is the only temperature at hand in the system. If the fluid temperature of the cosmological model is set equal to the horizon temperature ($T_L$), then the system will be in equilibrium. Another possibility \cite{davies2} is that the fluid temperature is proportional to the horizon temperature, i.e. for the fluid enveloped by the apparent horizon $T=eH/2\pi$ \cite{pavon}. In general, the systems must interact for some length of time before they can attain thermal equilibrium. In the case at hand, the interaction certainly exists as any variation in the energy density and/or pressure of the fluid will automatically induce a modification of the horizon radius via Einstein's equations.
Moreover, if $T \neq T_{L}$, then energy would spontaneously flow between the horizon and the fluid (or vice versa), something at variance with the FRW geometry \cite{pa}. Thus, when we consider the thermal equilibrium state of the universe, the temperature of the universe is associated with the horizon temperature. In this picture the equilibrium entropy of the holographic dark energy is connected with its energy and pressure through the first law of thermodynamics \begin{equation} \label{law1} TdS_{X}=dE_{X}+p_{X}dV \end{equation} where the volume is given as \begin{equation} V=\frac{4\pi}{3}L^{3} \hspace{1ex}, \end{equation} the energy of the holographic dark energy is defined as \begin{equation} \label{energy1} E_{X}=\rho_{X} V=4\pi c^{2} M^{2}_{p}L \end{equation} and the temperature of the event horizon is given as \begin{equation} \label{temp1} T=\frac{1}{2\pi L^{0}} \hspace{1ex}. \end{equation} Substituting the aforesaid expressions for the volume, energy, and temperature in equation (\ref{law1}) for the case of the non-interacting HDE model, one obtains \begin{equation} \label{entropy1} dS_{X}^{(0)}=8\pi^{2}c^{2}M^{2}_{p}\left(1+3\omega^{0}_{X}\right)L^{0}dL^{0} \end{equation} and, implementing equation (\ref{eosp1}), the above equation takes the form \begin{equation} \label{entropy2} dS_{X}^{(0)}=-16\pi^{2}c M^{2}_{p}\sqrt{\Omega^{0}_{X}}\cos y\, L^{0}dL^{0} \end{equation} where the superscript $(0)$ denotes that in this thermodynamical picture our universe is in a thermodynamically stable equilibrium. \par\noindent In the case of a flat universe, i.e.\ $k=0$, we obtain \begin{equation} dS_{X}^{(0)}=-16\pi^{2}c M^{2}_{p}\sqrt{\Omega^{0}_{X}}\,L^{0}dL^{0} \end{equation} which is exactly the result derived in \cite{1} when one replaces $L^{0}$ with the future event horizon $R^{0}_{E}$.
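Closing this section, the algebra above can be cross-checked numerically: the horizon-radius relation (\ref{radius2}) is inverted by (\ref{radius3}), and the entropy differentials (\ref{entropy1}) and (\ref{entropy2}) must agree once the equation of state (\ref{eosp1}) is inserted. A sketch with purely illustrative values of $k$, $R_h/a$, $c$, $\Omega_X^0$ and $M_p$ (the symbols follow the text):

```python
import math

# Numerical cross-check of the Section 2 relations (all values illustrative).
k, Rh_over_a = 1.0, 0.3
y = math.sqrt(k) * Rh_over_a
r = math.sin(y) / math.sqrt(k)          # radius3 with argument y

# (i) midpoint-rule quadrature of the horizon-radius relation (radius2)
n = 100000
dr = r / n
integral = sum(dr / math.sqrt(1.0 - k * ((i + 0.5) * dr) ** 2) for i in range(n))
assert abs(integral - Rh_over_a) < 1e-6

# (ii) eosp1: 1 + 3 w_X^0 = -(2 sqrt(Omega_X^0)/c) cos y
c, Omega_X, Mp = 1.0, 0.7, 1.0
one_plus_3w = -2.0 * math.sqrt(Omega_X) / c * math.cos(y)

# (entropy1) and (entropy2) must coincide term by term
L, dL = 2.0, 1e-3
dS1 = 8.0 * math.pi**2 * c**2 * Mp**2 * one_plus_3w * L * dL
dS2 = -16.0 * math.pi**2 * c * Mp**2 * math.sqrt(Omega_X) * math.cos(y) * L * dL
assert math.isclose(dS1, dS2)
print(dS1, dS2)
```

The entropy decreases ($dS_X^{(0)}<0$ for growing $L^0$) whenever $\cos y>0$, consistent with the sign structure of (\ref{entropy2}).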
\section{Thermodynamical Picture of the Interacting HDE model} \par\noindent In this section we consider the HDE model when there is interaction between the holographic energy density $\rho_{X}$ and the Cold Dark Matter (CDM) $\rho_{m}$. The corresponding continuity equations are now written as \begin{eqnarray} \label{eq3}&& \dot{\rho}_{X}+3H(1+w_{X})\rho_{X} =-Q, \\ \label{eq4}&& \dot{\rho}_{m}+3H\rho_{m} = Q \end{eqnarray} where the quantity $Q$ expresses the interaction between the dark components. The interaction term $Q$ should be positive, i.e. $Q>0$, which means that there is an energy transfer from dark energy to dark matter. The positivity of the interaction term ensures that the second law of thermodynamics is fulfilled \cite{Pavon:2007gt}. At this point, it should be stressed that the continuity equations imply that the interaction term should be a function of a quantity with units of inverse of time (a first and natural choice being the Hubble factor $H$) multiplied by the energy density. Therefore, the interaction term could take any of the following forms: (i) $Q\propto H\rho_{X}$ \cite{Pavon:2005yx,Pavon:2007gt}, (ii) $Q\propto H\rho_{m}$ \cite{Amendola:2006dg}, or (iii) $Q\propto H(\rho_{X}+\rho_{m})$ \cite{Wang:2005ph}. The freedom of choosing the specific form of the interaction term $Q$ stems from our ignorance of the origin and nature of dark energy as well as dark matter. Moreover, a microphysical model describing the interaction between the dark components of the universe is currently not available. The interacting HDE model will again be accommodated in the non-flat Friedmann-Robertson-Walker universe described by the line element (\ref{metr}). Our analysis here gives the same results as those in the non-interacting case concerning the first Friedmann equation, the dimensionless density parameters, and the characteristic length, as well as the equations related to them (see equations (\ref{2eq7})\hspace{1ex}-\hspace{1ex}(\ref{density2})). 
However, due to the existence of interaction between the dark components of the holographic dark energy model, which changes the conservation equations, equation (\ref{eosp1}) derived for the non-interacting HDE model has to be modified accordingly. Thus, by substituting equation (\ref{density2}) in the conservation equation (\ref{eq3}) for the dark energy component one obtains \begin{equation} \label{eosp2} 1+3\omega_{X}=-2\frac{\sqrt{\Omega_{X}}}{c}\cos y - \frac{Q}{3H^{3}M^{2}_{p}\Omega_{X}} \hspace{1ex}. \end{equation} Comparing equation (\ref{eosp2}) with equation (\ref{eosp1}), it is easily seen that the presence of the interaction term $Q$ has induced a change in the equation of state parameter, and consequently in the dimensionless density parameter of the dark energy component; for this reason the aforesaid quantities no longer carry the superscript $(0)$ that denoted the absence of interaction. According to \cite{1}, the interacting HDE model in the non-flat universe as described above is no longer thermodynamically interpreted as a state in thermodynamical equilibrium. In this picture the effect of interaction between the dark components of the HDE model is thermodynamically interpreted as a small fluctuation around the thermal equilibrium. 
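The modified equation of state (\ref{eosp2}) follows from substituting (\ref{density2}) into the interacting conservation law (\ref{eq3}); this too can be checked symbolically. In the sketch below, which is only an illustrative aid, $s$ again stands for $\frac{\sqrt{\Omega_{X}}}{c}\cos y$ and we use $\rho_{X}=3M^{2}_{p}H^{2}\Omega_{X}$:

```python
import sympy as sp

H, Mp, Om, s, Q = sp.symbols('H M_p Omega_X s Q', positive=True)
w = sp.Symbol('omega')

rho = 3*Mp**2*H**2*Om                # rho_X = 3 M_p^2 H^2 Omega_X
rho_dot = -2*H*(1 - s)*rho           # eq. (density2), s = sqrt(Omega_X)/c * cos y

# interacting conservation law, eq. (eq3): rho_dot + 3H(1 + omega) rho = -Q
w_sol = sp.solve(sp.Eq(rho_dot + 3*H*(1 + w)*rho, -Q), w)[0]

# eq. (eosp2): 1 + 3 omega = -2 s - Q / (3 H^3 M_p^2 Omega_X)
assert sp.simplify(1 + 3*w_sol + 2*s + Q/(3*H**3*Mp**2*Om)) == 0
```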
Therefore, the entropy of the interacting holographic dark energy is connected with its energy and pressure through the first thermodynamical law \begin{equation} \label{law2} TdS_{X}=dE_{X}+p_{X}dV \end{equation} where now the entropy has been assigned an extra logarithmic correction \cite{das} \begin{equation} S_{X}=S_{X}^{(0)}+S_{X}^{(1)} \end{equation} where \begin{equation} \label{correction1} S_{X}^{(1)}=-\frac{1}{2}\ln \left(C T^{2}\right) \end{equation} and $C$ is the heat capacity defined by \begin{equation} C=T\frac{\partial S_{X}^{(0)}}{\partial T} \end{equation} and using equations (\ref{entropy1}), (\ref{temp1}), and (\ref{eosp1}) is given as \begin{eqnarray} \label{capacity1} C&\hspace{-1ex}=\hspace{-1ex}&-8\pi^{2}c^{2}M^{2}_{p}(L^{0})^{2}(1+3\omega_{X}^{0})\\ \label{capacity2} &\hspace{-1ex}=\hspace{-1ex}&16\pi^{2}c M^{2}_{p}(L^{0})^{2} \sqrt{\Omega_{X}^{0}} \cos y \hspace{1ex}. \end{eqnarray} Substituting the expressions for the volume, energy, and temperature (it is noteworthy that these quantities depend now on $L$ and not on $L^{0}$ since there is interaction among the dark components) in equation (\ref{law2}) for the case of the interacting HDE model, one obtains \begin{equation} \label{entropy3} dS_{X}=8\pi^{2}c^{2}M^{2}_{p}\left(1+3\omega_{X}\right)L dL \end{equation} and thus one gets \begin{eqnarray} \label{eosp3} 1+3\omega_{X}&\hspace{-1ex}=\hspace{-1ex}&\frac{1}{8\pi^{2}c^{2}M^{2}_{p}L}\frac{dS_{X}}{dL}\\ &\hspace{-1ex}=\hspace{-1ex}&\frac{1}{8\pi^{2}c^{2}M^{2}_{p}L}\left[\frac{dS_{X}^{(0)}}{dL}+\frac{dS_{X}^{(1)}}{dL}\right]\\ \label{eosp4} &\hspace{-1ex}=\hspace{-1ex}&-2\left(\frac{\sqrt{\Omega_{X}^{0}}}{c}\cos y \right)\frac{L^{0}}{L} \frac{dL^{0}}{dL}+\frac{1}{8\pi^{2}c^{2}M^{2}_{p}L}\frac{dS_{X}^{(1)}}{dL} \end{eqnarray} where the last term concerning the logarithmic correction can be computed using expressions (\ref{correction1}) and (\ref{capacity2}) \begin{equation} 
\frac{dS_{X}^{(1)}}{dL}=-\frac{H}{\left(\frac{c}{\sqrt{\Omega_{X}^{0}}}-\cos y\right)} \left[\frac{\left(\Omega_{X}^{0}\right)'}{4\Omega_{X}^{0}}+ y \tan y\right] \end{equation} where the prime $(\hspace{1ex}')$ denotes differentiation with respect to $\ln a$. \par\noindent Therefore, by equating the expressions (\ref{eosp2}) and (\ref{eosp4}) for the equation of state parameter of the holographic dark energy, evaluated on cosmological and thermodynamical grounds respectively, one gets an expression for the interaction term \begin{equation} \label{interaction} \frac{Q}{9H^{3}M^{2}_{p}}= \frac{\Omega_{X}}{3}\left[-\frac{2\sqrt{\Omega_{X}}}{c}\cos y +\left(\frac{2\sqrt{\Omega_{X}}}{c}\cos y\right)\frac{L^{0}}{L}\frac{dL^{0}}{dL}\right]- \frac{1}{8\pi^{2}c^{2}M^{2}_{p}L}\frac{\Omega_{X}}{3}\frac{dS_{X}^{(1)}}{dL} \hspace{1ex}. \end{equation} It is noteworthy that in the limiting case of a flat universe, i.e. $k=0$, we obtain exactly the result derived in \cite{1} when one replaces $L^0$ and $L$ with $R^{0}_E$ and $R_E$, respectively. \section{Conclusions} \par\noindent Understanding dark energy is one of the biggest challenges of particle physics in this century. Studying the interaction between dark energy and ordinary matter will open a possibility of detecting the dark energy. It should be pointed out that evidence was recently provided by the Abell Cluster A586 in support of the interaction between dark energy and dark matter \cite{Bertolami:2007zm}. However, despite the fact that numerous works have been performed till now, there are no strong observational bounds on the strength of this interaction \cite{Feng:2007wn}. This inability to set stringent (observational or theoretical) constraints on the strength of the coupling between dark energy and dark matter stems from our ignorance of the nature and origin of the dark components of the Universe. 
It is therefore more than obvious that further work is needed in this direction.\\ In 1973, Bekenstein \cite{bek} postulated a relation between the area of the event horizon of a black hole and black hole thermodynamics, so that the area of the event horizon of the black hole is a measure of the black hole entropy. Along this line of thought, it was argued in \cite{jac} that the gravitational Einstein equations can be derived through a thermodynamical argument using the relation between area and entropy as input. Following \cite{jac,fro}, Danielsson \cite{dan} was able to obtain the Friedmann equations by applying the relation $\delta Q=T dS$ to a cosmological horizon and calculating the heat flow through the horizon of an expanding universe in an accelerating phase. This idea has been generalized to horizons of cosmological models, so that each horizon corresponds to an entropy. Therefore, the second law of thermodynamics was modified in such a way that in its generalized form the sum of all time derivatives of entropies related to horizons plus the time derivative of the normal entropy must be positive, i.e. the sum of entropies must be an increasing function of time.\\ In the present paper, we have provided a thermodynamical interpretation for the HDE model in a non-flat universe. We utilized the horizon radius $L$, measured from the sphere of the horizon, as the system's IR cut-off. We investigated the thermodynamical picture of the interacting HDE model for a non-flat universe enveloped by this horizon. The non-interacting HDE model in a non-flat universe was thermodynamically interpreted as a thermal equilibrium state. When an interaction between the dark components of the HDE model in the non-flat universe was introduced, the thermodynamical interpretation of the HDE model changed. The thermal equilibrium state was perturbed by a stable thermal fluctuation, which now provided the thermodynamical interpretation of the interaction. 
Finally, we have derived an expression that connects this interaction term of the dark components of the interacting HDE model in a non-flat universe with the aforesaid thermal fluctuation.
\section{Matrix Concentration via Doob Martingale} Our concentration proof proceeds by constructing a Doob martingale and controlling the norm of each increment and the total predictable variation of the martingale process. Let $$Y_k = \E[f(X_1, \ldots, X_n) \mid X_1, \ldots, X_k] - \E[f(X_1, \ldots, X_n) \mid X_1, \ldots, X_{k-1}],$$ where $f(X_1, \ldots, X_n) =\prod\limits_{i=1}^{n}\big(I +\frac{X_i}{n}\big)$. Note that $\E[Y_k \mid X_1, \ldots, X_{k-1}] = 0$, so $(Y_k)$ is a martingale difference sequence. We also observe that, as $X_1, \ldots, X_n$ are independent, \begin{align*}Y_k &= \E\left[f(X_1, \ldots, X_n) \mid X_1, \ldots, X_k\right] - \E\left[f(X_1, \ldots, X_n) \mid X_1, \ldots, X_{k-1}\right] \\ &= \prod_{i=1}^k \big(I+\frac{X_i}{n}\big) \prod\limits_{i=k+1}^{n}\E\bigg[\big(I +\frac{X_i}{n}\big)\bigg] - \prod_{i=1}^{k-1} \big(I + \frac{X_i}{n}\big) \prod\limits_{i=k}^{n}\E\bigg[\big(I +\frac{X_i}{n}\big)\bigg] \\ &= \prod_{i=1}^{k-1} \big(I+\frac{X_i}{n}\big) \frac{X_k - \mu }{n}\prod\limits_{i=k+1}^{n}\big(I +\frac{\mu}{n}\big). \end{align*} We thus use submultiplicativity of the spectral norm to obtain \begin{align*} \|Y_k\|&=\bigg\|\prod_{i=1}^{k-1} \big(I+\frac{X_i}{n}\big) \cdot \frac{X_k - \mu }{n}\cdot\prod\limits_{i=k+1}^{n}\big(I +\frac{\mu}{n}\big)\bigg\|\\ &\leq \bigg(\prod\limits_{i=1}^{k-1}\bigg\|I+\frac{X_i}{n}\bigg\|\bigg)\bigg\|\frac{X_k-\mu}{n}\bigg\|\bigg(\prod\limits_{i=k+1}^{n}\bigg\|I+\frac{\mu}{n}\bigg\|\bigg)\\ &\leq \frac{2L}{n}\big(1+\frac{L}{n}\big)^{n-1}\\ &\leq \frac{2Le^L}{n}, \end{align*} where the second inequality follows from the norms of the $X_i$ (and hence the norm of $\mu$) being bounded by $L$ almost surely, and the last inequality follows as $(1+x/n)^{n-1}\leq (1+x/n)^n\leq e^x$ for non-negative $x$. 
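The closed form for $Y_k$ and the increment bound can be checked numerically: the telescoping identity $\sum_k Y_k = f(X_1,\ldots,X_n) - (I+\mu/n)^n$ holds exactly, sample by sample. The sketch below uses a bounded symmetric ensemble of our own choosing, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, Lcap = 3, 50, 1.0

B = rng.standard_normal((d, d)); B = (B + B.T) / 2
mu = 0.5 * B / np.linalg.norm(B, 2)          # fixed mean with ||mu|| = 1/2

def sample():
    # X = mu + symmetric mean-zero noise with ||noise|| <= 1/2, so ||X|| <= Lcap
    A = rng.standard_normal((d, d)); A = (A + A.T) / 2
    return mu + 0.5 * rng.uniform(0, 1) * A / np.linalg.norm(A, 2)

X = [sample() for _ in range(n)]
I = np.eye(d)

def prod(mats):
    P = I.copy()
    for M in mats:
        P = P @ M
    return P

f  = prod([I + Xi / n for Xi in X])
Ef = np.linalg.matrix_power(I + mu / n, n)

# closed-form increments Y_k; their sum telescopes to f - E[f]
Y = [prod([I + X[i] / n for i in range(k)]) @ ((X[k] - mu) / n)
     @ np.linalg.matrix_power(I + mu / n, n - k - 1) for k in range(n)]
assert np.allclose(sum(Y), f - Ef)

# each increment obeys ||Y_k|| <= (2L/n)(1 + L/n)^{n-1} <= 2 L e^L / n
assert max(np.linalg.norm(Yk, 2) for Yk in Y) <= 2 * Lcap * np.e**Lcap / n
```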
Also note that \begin{align*}\left\|\mathbb{E}\left[Y_k Y_k^*\mid X_1,\ldots,X_{k-1}\right]\right\| &= \left\|\prod_{i=1}^{k-1} \left(I+\frac{X_i}{n}\right) \frac{X_k - \mu }{n}\prod\limits_{i=k+1}^{n}\left(I +\frac{\mu}{n}\right)\prod\limits_{i=n}^{k+1}\left(I +\frac{\mu}{n}\right)\frac{X_k^* - \mu }{n}\prod_{i=k-1}^{1} \left(I+\frac{X_i^*}{n}\right)\right\| \\ &\leq \prod_{i=1}^{k-1} \left\|I+\frac{X_i}{n}\right\| \cdot \left\|\frac{X_k - \mu }{n}\right\| \prod\limits_{i=k+1}^{n}\left\|I +\frac{\mu}{n}\right\| \prod\limits_{i=n}^{k+1}\left\|I +\frac{\mu}{n}\right\| \cdot \left\|\frac{X_k^* - \mu }{n}\right\|\prod_{i=k-1}^{1} \left\|I+\frac{X_i^*}{n} \right\| \\ &\leq \frac{4L^2}{n^2} \left(1+\frac{L}{n}\right)^{2n-2} \\ &\leq \frac{4L^2}{n^2} e^{2L}. \end{align*} Hence, we get that for any $k\leq n$, \begin{align*} \left\|\sum\limits_{i=1}^{k}\mathbb{E}\left[Y_i Y_i^*\mid X_1,\ldots,X_{i-1}\right]\right\| &\leq \sum\limits_{i=1}^{k}\left\|\mathbb{E}\left[Y_i Y_i^*\mid X_1,\ldots,X_{i-1}\right]\right\|\\ &\leq \frac{4L^2e^{2L}k}{n^2}\\ &\leq\frac{4L^2e^{2L}}{n}. \end{align*} To conclude the proof, we use the Matrix Freedman inequality \cite{TroppIntro15} for concentration of matrix-valued martingales, which is stated next. \begin{theorem}\label{thm:freedman} Suppose $Y_k=\sum\limits_{i=1}^k X_i $ is a martingale with $d\times d$ matrix increments $X_i$ satisfying $\|X_i\|\leq R$ almost surely. Let the predictable variations of the process be $W_k^{(1)}=\sum\limits_{i=1}^{k}\E[X_i X_i^*\mid X_1,\ldots,X_{i-1}]$ and $W_k^{(2)}=\sum\limits_{i=1}^{k}\E[X_i^* X_i\mid X_1,\ldots,X_{i-1}]$. Then for all $t\geq 0$, we have \begin{align*} \mathsf{Pr}[\exists k \geq 0 : \|Y_k\|\geq t \ \mathit{and} \ \max\{\|W_k^{(1)}\|,\|W_k^{(2)}\|\}\leq \sigma^2]\leq 2d\exp\left(-\frac{c t^2}{Rt + \sigma^2}\right). 
\end{align*} \end{theorem} \begin{proof}[Proof of Theorem \ref{thm:mainthm}] From the above argument, we get that the increments of our martingale $Y_k$ are bounded by $2Le^L/n$ in spectral norm almost surely and that the norm of the predictable quadratic variation (the analysis of $\mathbb{E}[Y_k^* Y_k\mid X_1,\ldots,X_{k-1}]$ is identical) is bounded by $\frac{4L^2e^{2L}}{n}$ almost surely. Hence we can use Theorem \ref{thm:freedman} to conclude that \begin{align*} \mathsf{Pr}\left[\|Y_n\|\geq t \right]&\leq 2d\exp\left(-\frac{cnt^2}{Le^Lt + L^2e^{2L}}\right)\\ &\leq 2d\exp\left(-\frac{cnt^2}{2L^2e^{2L}}\right), \end{align*} where for the second inequality we have assumed that $t\leq Le^L\sqrt{\frac{\log d}{n}}\leq Le^L.$ \end{proof} \section{Setup} Suppose ${X_1},\ldots,{X_n} \in \mathbb C^{d\times d}$ are random matrices sampled i.i.d. from some distribution with $\E[{X_i}]=\mu$ and $\|{X}_i\|_\mathsf{op}\leq L$ almost surely. A famous result is the matrix Bernstein inequality \cite{TroppIntro15} for sums of random matrices, which in this setting asserts that \begin{align*} \mathsf{Pr}\left[\left\|\sum\limits_{i=1}^{n}\frac{X_i}{n}-\mu\right\|_\mathsf{op}\geq t\right] \leq 2d \cdot \exp(-n t^2/2L^2), \end{align*} whenever $t \leq L\sqrt{\frac{\log d}{n}}$ and $n\geq \log(d)$. For some numerical linear algebra problems, it is of interest to consider, instead of sums, functions of the form $$f({X_1},\ldots, {X_n}) = \prod\limits_{i=1}^{n}\left({I} +\frac{{X}_i}{n}\right).$$ We will refer to such functions as matrix product functions. One can easily prove the following lemma. \begin{lemma}$\E_{{X_1},\ldots,{X_n}}[f(X_1,\ldots,X_n)] \preceq e^\mu$ with equality in the limit as $n\rightarrow \infty$. 
\end{lemma} \begin{proof} \begin{align*} \E_{{X_1},\ldots,{X_n}}[f({X_1},\ldots,{X_n})] &= \E_{{X_1},\ldots,{X_n}}\left[\prod\limits_{i=1}^{n}\left({I} +\frac{{X}_i}{n}\right)\right]\\ &=\prod\limits_{i=1}^{n}\E_{{X_i}}\left[{I}+\frac{{X_i}}{n}\right]\\ &=\prod\limits_{i=1}^{n}\left[{I}+\frac{\mu}{n}\right]\\ &=\left({I}+\frac{\mu}{n}\right)^n \preceq e^\mu, \end{align*} with equality in the limit. The second equality holds because of the independence of the ${X_i}.$ \end{proof} Recently a central limit theorem for matrix products was established \cite{EH18}, and the following concentration inequality was proven by Henriksen and Ward \cite{HW19}. \begin{theorem}[\cite{HW19}]Assuming $\max\{3,Le^2\}\leq \log(n)+1\leq \big(\frac{16n}{\log(dne/\delta)}\big)^{1/3}$, we have that with probability greater than $1-2\delta$ the following holds: \begin{align*} \|f({X_1,\ldots,X_n)}-e^\mu\|\leq \frac{O(Le^L)\log(n)}{\sqrt{n}}\big(\sqrt{\log(d/\delta)+\log(n)^2}+\frac{\log(n)}{\sqrt{n}}\big)+\frac{L^2e^L}{n}. \end{align*} \end{theorem} Their proof groups the product into sums of $k$-wise products in a careful way, appealing to Baranyai's theorem, and applies the matrix Bernstein inequality to each partition. This approach loses a $(\log n)^2$ factor compared to the matrix Bernstein result for sums, and it is unclear whether this is necessary. In this note, we give a simple proof relying on the Matrix Freedman inequality \cite{TroppIntro15} which does not lose the $\log n$ factors, essentially matching the matrix Bernstein inequality for sums of matrices up to constants. \begin{theorem}\label{thm:mainthm}\begin{align*}\mathsf{Pr}\left[\left\|f({X}_1,\ldots,{X}_n)-e^\mu\right\|_\mathsf{op}\geq t\right] \leq 2d \cdot \exp(-cn t^2/L^2e^{2L}),\end{align*} whenever $t\leq Le^L\sqrt{\frac{\log d}{n}}$, for some absolute constant $c$. 
Equivalently, for every $\delta\in(0,1)$, with probability greater than $1-\delta$ we have \begin{align*} \|f({X_1,\ldots,X_n)}-e^\mu\|\leq \frac{O(Le^L)}{\sqrt{n}}\sqrt{\log(d/\delta)}. \end{align*} \end{theorem} The key difference between this result and the matrix Bernstein inequality for sums is the $L^2e^{2L}$ factor instead of $L^2$. We will later show that even for the special case of products of scalars, such an $e^{O(L)}$ dependence is necessary if the bound is written only in terms of $L$ and not $\mu$. \begin{remark}[Independent Work] The recently posted independent work \cite{newprod} gives a different proof of a more refined version of Theorem \ref{thm:mainthm}, which has slightly better constants and an $L^2e^{2\mu}$ term in the denominator rather than $L^2e^{2L}$ (see their Theorem I). Their approach is also martingale-based, but instead of Matrix Freedman it relies on certain smoothness properties of Schatten norms, also yielding more general results for Schatten norms of matrix products which our proof does not yield. \end{remark} \input{martingale.tex} \section{Lower Bound} In this section, we show that the tail bound needs to depend on $L$ as $L^2e^{O(L)}$, as given in Theorem \ref{thm:mainthm}, even for the case of scalars rather than matrices. Consider a two-point distribution which takes values $X_i = 0$ or $X_i = 2L$ with equal probability. $X_i$ can thus be represented as $X_i = L+LY_i$ where $Y_i$ is a Rademacher random variable. Thus $\mathbb{E}[X_i] = L$. For sufficiently large $n$, $\prod\limits_{i=1}^{n}\left(1+\frac{X_i}{n}\right)= \exp\left(\sum\limits_{i=1}^{n}\frac{X_i}{n}\right)(1+o_n(1))$. 
Taking $t=cLe^L$, we have: \begin{align*} \mathsf{Pr}\left[\exp\left(\sum\limits_{i=1}^{n}\frac{L+LY_i}{n}\right)-e^L\geq cLe^L\right]&=\mathsf{Pr}\left[\exp\left(\sum\limits_{i=1}^{n}\frac{LY_i}{n}\right)-1\geq cL\right]\\ &=\mathsf{Pr}\left[\sum\limits_{i=1}^{n}\frac{LY_i}{n}\geq \log(1+cL)\right]\\ &\geq \mathsf{Pr}\left[\sum\limits_{i=1}^{n}\frac{LY_i}{n}\geq cL\right]\\ &\geq \mathsf{Pr}\left[\sum\limits_{i=1}^{n}\frac{Y_i}{n}\geq c\right], \end{align*} where the first inequality follows since $\log(1+x)\leq x$ for all $x>-1$, so the event in the third line contains the event in the fourth. Hence, we obtain a lower bound on the probability which is independent of $L$, and so the $Le^{O(L)}$ term must indeed appear in the tail bound. Here we have $O(L)$ in the exponent because in the lower bound example the $X_i$ are bounded by $2L$ rather than $L$.
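The two-point construction is easy to simulate exactly, with no need for the $(1+o_n(1))$ approximation at finite $n$. The sketch below, with illustrative values of $n$, $c$, and sample size chosen by us, shows the empirical tail probability at threshold $t = cLe^L$ staying bounded away from zero as $L$ varies:

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials, c = 400, 5000, 0.05

def tail_prob(L):
    # X_i = L + L*Y_i with Rademacher Y_i; estimate
    # Pr[ prod(1 + X_i/n) - e^L >= c * L * e^L ] by Monte Carlo
    Y = rng.choice([-1.0, 1.0], size=(trials, n))
    prods = np.prod(1 + (L + L * Y) / n, axis=1)
    return float(np.mean(prods - np.e**L >= c * L * np.e**L))

p_small, p_large = tail_prob(0.5), tail_prob(2.0)
# the deviation probability at scale c*L*e^L stays bounded away from zero
# as L varies, so the L*e^{O(L)} scale in the tail bound is unavoidable
assert p_small > 0.05 and p_large > 0.05
```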
\section{Introduction} \subsection{Anyons and statistical gauge fields} An interesting feature of quantum physics in three-dimensional spacetime is the presence of identical particles with exotic statistics. The basic notion dates all the way back to Leinaas and Myrheim \cite{Leinaas:1977fm} and later Wilczek \cite{Wilczek:1982wy}, who provided specific models realizing such particles, which he referred to as Anyons, as flux-charge quanta. Subsequently, quantum field theories in flat spacetime containing Anyons were constructed in \cite{Semenoff:1988jr,Frohlich:1988di,Forte:1990vb}; see also \cite{Forte:1990hd} for an early review. Later, a group-theory approach based on wave-functions was developed in\footnote{Note also that equations of motion for massless, fractional-spin particles in four-dimensional Minkowski spacetime have been studied in \cite{Volkov1989,Klishevich:2001gy}. Although this formalism is covariant under infinitesimal Lorentz transformations, the four-dimensional Poincar\'e symmetry is violated for finite transformations \cite{Klishevich:2001gy}. } \cite{Jackiw:1990ka,Plyushchay:1991qd,Cortes:1992fa,Plyushchay:1997ie}; see \cite{oai:arXiv.org:1001.0274} for a summary and extensions of these models. More recently, the flux-charge realization has been generalized to models with non-Abelian gauge fields \cite{Itzhaki:2002rc}. Anyons can also be realized without integrating out any statistical gauge fields either as clusters of non-relativistic particles in two spatial dimensions kept together by external one-body potentials, such as a simple harmonic potential, and interacting with each other only via boundary conditions imposed on the multi-body wave functions \cite{Engquist:2008mc}, or as vertex operators in two-dimensional conformal field theories \cite{Jatkar:1990re}. For a more recent axiomatic treatise without gauge fields, see \cite{Mund:2008hn}. 
The key idea is that at fixed time the configuration space of a collection of massive particles whose trajectories cannot coincide as the result of their interactions has a non-trivial first homotopy group that is represented non-trivially on the multi-body wave-functions or correlation functions involving point-like operators \cite{Mund:2008hn}. These representations thus furnish representations of the braid group, which is why Anyon statistics is synonymous with braid statistics. The wave functions transform under rotations with phase factors which can be identified with the statistical phases under exchange of identical particles. Hence, one and the same phase characterizes the statistics of the particles as well as the representation of the spatial rotation group, which is the essence of the generalized spin-statistics theorem for massive particles in three-dimensional Minkowski space with exotic statistics and fractional spin\footnote{In the case of massless particles in three-dimensional Minkowski spacetime, for which there does not exist any notion of helicity, the statistics has been shown to instead be correlated directly to the Lorentz spin in the case of bosons and fermions \cite{Deser:1991mw}. To our best understanding, so far there does not exist any generalization of this result to fractional spins. } \cite{Mund:2008hn}. Thus, in 2+1 dimensions, the spin of a massive particle can be an arbitrary real number, thereby providing an interpolation between bosons and fermions. In the realization of anyons in quantum field theories, their fractional quantum numbers are typically quantum effects due to the presence of Chern-Simons fields, usually referred to as statistical gauge fields. Their realizations as charged vortices \cite{Wilczek:1982wy} and Hopf-interacting massive particles arise in effective descriptions of matter-coupled Abelian Chern--Simons systems \cite{Semenoff:1988jr}. 
Integrating out the statistical Chern--Simons gauge field produces effective topological non-local Hopf interactions among the matter fields that transmute their statistics; see also \cite{Polyakov:1988md,Forte:1990vb,Forte:1990hd} and \cite{Wu:1984kd,Wen:1988uw,Govindarajan:1992dr} for related works using the $CP^1$ formalism. As for non-Abelian generalizations, the conformal Chern-Simons-scalar \cite{Aharony:2012nh} and Chern-Simons-fermion \cite{Giombi:2011kc} vector models exhibit level-rank type dualities, providing examples of three-dimensional Bose-Fermi transmutation \cite{Polyakov:1988md}. In \cite{Aharony:2012nh} it is suggested that these models contain Anyons at finite couplings. Moreover, as proposed by Itzhaki \cite{Itzhaki:2002rc}, the statistical gauge fields can be taken to be non-minimally coupled Yang-Mills fields by using Wilson lines for connections shifted by the Hodge dual of the field strength to generate the flux-charge bound states. \subsection{Coupling of anyons to background fields} On general grounds, one may ask whether Anyons can be described by any quantum-effective field theory that facilitates their coupling to ordinary tensorial and tensor-spinorial particles and fields, including gravity. In an arbitrary curved background the description of Anyons requires the introduction of a Lorentz connection valued in non-(half-)integer spin representations of the Lorentz algebra, which are infinite dimensional. As such representations admit oscillator realizations, it seems natural to incorporate them into Vasiliev's general framework for higher-spin gravity \cite{Vasiliev:1992av, Vasiliev:1999ba,Vasiliev:2003ev}. The aim of this paper is to take a first step\footnote{See also the conference proceeding \cite{Boulanger:2013zla}.} in this direction. 
Vasiliev's equations provide a fully non-linear and background-independent description of a large class of higher-spin gravities in various dimensions, including models with internal symmetry algebras \cite{Konstein:1989ij,Prokushkin:1998bq} and fermions \cite{Konshtein:1988yg,Konstein:1989ij,Prokushkin:1998bq, Sezgin:1998gg,Sezgin:2001yf,Vasiliev:2004cm}, of which some exhibit standard spacetime supersymmetry; for a recent review in the case of four-dimensional higher-spin gravities, see \cite{Sezgin:2012ag}. As far as spin-statistics relations are concerned, with notable exceptions in the presence of a positive cosmological constant \cite{Vasiliev:1986ye,Vasiliev:1986td,Sezgin:2012ag}\footnote{As observed by Vasiliev, in Lorentzian signature and in the presence of a positive cosmological constant, supergravities \cite{Vasiliev:1986ye} and linearized higher-spin supergravities \cite{Vasiliev:1986td} admit twisted reality conditions compatible with $\mathbb Z_2\times \mathbb Z_2$ graded quantum algebras; for a recent review and the extension to fully non-linear $dS_4$ higher-spin supergravities, see \cite{Sezgin:2012ag}. } or in Kleinian spacetime signature \cite{Sezgin:2012ag}, Vasiliev's higher-spin gravities have so far been assumed to consist of fields that are either bosonic Lorentz tensors or fermionic tensor-spinors \footnote{To our best understanding, this assumption on spin and statistics is required for consistency only within the context of relativistic quantum field theories in flat spacetimes of dimension four or higher; see \emph{e.g.} \cite{Weinberg:1996kr}. Extensions to curved backgrounds of the spin-statistics correspondence are given in \cite{Verch:2001bv} and references therein.}. However, our key observation is: Vasiliev's higher-spin gravities are not formulated \emph{a priori} in terms of Lorentz tensors and tensor-spinors; rather they are formulated in terms of master fields living on products of space-time and fiber manifolds. 
The latter contain non-commutative twistor or twistor-like spaces whose coordinates generate the higher-spin and internal symmetry algebras. The full specification of a Vasiliev-type higher-spin gravity model thus requires the choice of a set of fiber functions that form an associative algebra. Hence, the incorporation of fractional-spin fields into the higher-spin framework can be reduced to the technical problem of determining in which ways Vasiliev's theory admits non-standard embeddings of the Lorentz connection leading to fractional-spin representations. The aim of this paper is to demonstrate within a simple class of models, namely topological models of Chern--Simons type which we refer to as fractional-spin gravities, how standard tensorial higher-spin gravities can be extended by fractional-spin fields by including additional sets of fiber functions that form Lorentz representations characterized by arbitrary real-valued Lorentz spins. As we shall see, the fractional-spin fields appear within a bi-module of one-forms acted upon by one-sided actions of the higher-spin algebra and an internal color gauge group of infinite rank. In doing so, a particular set of technical problems that we shall have to address concerns the nature of infinite-dimensional representations and how they are affected by different choices of bases. To this end, we are going to focus on the on-shell formulation of a class of Blencowe--Vasiliev models \cite{Blencowe:1988gj,Bergshoeff:1989ns} that arise within the Prokushkin--Vasiliev system \cite{Prokushkin:1998bq} as a consistent truncation. 
In Section 3, we then proceed with the main analysis of anyon representations in $AdS_3$ and their realizations using the Wigner-deformed Heisenberg oscillators. We shall stress details concerning the infinite-dimensional nature of these representations; in particular, the importance of keeping track of their indecomposable structures in critical limits, together with the related choices of bases, will be discussed in Section \ref{Sec:bases}. In Section 4, the fractional-spin Chern--Simons theory is formulated and some of its truncations are presented. We conclude in Section 5. \section{Preliminary remarks and summary} In this section we review some features of higher-spin gravity that are of conceptual interest and of importance for generalizations of our models. We then summarize our results, including some material concerning mainly the off-shell formulation to be presented elsewhere. The contents of this section are not crucial for the main analysis in the coming sections, to which the reader may skip immediately if so desired. \subsection{Preliminary remarks on three-dimensional higher-spin gravities} \paragraph{Three-dimensional higher-spin gravity landscape.} Three-dimensional topological higher-spin gravities with Lorentz-tensorial and tensor-spinorial gauge fields are described semi-classically by the Fradkin--Vasiliev-inspired Blencowe actions \cite{Blencowe:1988gj}. These theories are of Chern--Simons type and based on Lie algebras generated by ordinary Heisenberg oscillators, or equivalently, area preserving diffeomorphisms of two-spheres and two-hyperboloids \cite{Bergshoeff:1989ns}. As pointed out by Vasiliev \cite{Vasiliev:1989re}, these algebras admit deformations based on Wigner-deformed Heisenberg oscillators \cite{Wigner:50,Yang:51}, or equivalently, algebras of symplectomorphisms of fuzzy two-hyperboloids and two-spheres. 
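The Wigner-deformed oscillator algebra referred to here, written in terms of raising and lowering combinations $a^\pm$ of the oscillators as $[a^-,a^+]=1+\nu k$, $\{k,a^\pm\}=0$, $k^2=1$, admits a standard Fock-space realization that can be checked numerically in a truncated basis. The conventions and the truncation in the sketch below are ours, purely for illustration; the truncation spoils the commutation relation only at the top level:

```python
import numpy as np

# Fock-space realization (conventions ours): a+ |m> = sqrt(f(m+1)) |m+1>,
# with f(m) = m + nu*(1 - (-1)^m)/2, and k |m> = (-1)^m |m>.
nu, N = 0.7, 40                      # deformation parameter, truncation level

f  = lambda m: m + nu * (1 - (-1)**m) / 2
ap = np.diag([np.sqrt(f(m + 1)) for m in range(N - 1)], -1)   # a+
am = ap.T                                                      # a-
k  = np.diag([(-1)**m for m in range(N)]).astype(float)

comm = am @ ap - ap @ am
target = np.eye(N) + nu * k

# truncation spoils only the top entry; check [a-, a+] = 1 + nu*k below it
assert np.allclose(comm[:N-1, :N-1], target[:N-1, :N-1])
# {k, a+-} = 0 and k^2 = 1 hold exactly in this realization
assert np.allclose(k @ ap + ap @ k, 0)
assert np.allclose(k @ am + am @ k, 0)
assert np.allclose(k @ k, np.eye(N))
```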
These topological models sit inside a larger landscape of matter-coupled higher-spin gravities described by the Prokushkin--Vasiliev equations \cite{Vasiliev:1996hn,Prokushkin:1998bq}; see also \cite{Barabanshchikov:1996mc}. Although their structure resembles that of the higher-dimensional Vasiliev equations \cite{Vasiliev:1990en,Vasiliev:2003ev}, the three-dimensional higher-spin gravities exhibit a distinctive feature: their dynamical Weyl zero-forms are necessarily accompanied by topological Weyl zero-forms\footnote{As pointed out to us by D. Jatkar, it is natural to think of these topological degrees of freedom in higher-spin gravity as corresponding to two-dimensional conformal field theory defects.} while the corresponding sectors can be consistently truncated in four and higher dimensions. In any dimension, there exists a special topological zero-form (which is a singlet) that can acquire an expectation value, $\nu$ say, that deforms the higher-spin symmetries. However, it is only in three dimensions that $\nu$ does not deform the anti-de Sitter vacuum.\footnote{In four dimensions, the maximal finite sub-algebra of the higher-spin algebra that is preserved by $\nu$ is $so(1,3) $ or $so(2,2)$ depending on the choice of signature. This suggests that four-dimensional fractional-spin gravities correspond holographically to three-dimensional massive quantum field theories with anyons, and that these models are integrable in a suitable sense, as the higher-spin symmetries are deformed rather than broken.} The expansion around this $AdS_3$-vacuum, with its expectation value $\nu$, yields the aforementioned Chern--Simons models based on deformed oscillators as consistent truncations (upon setting all fluctuations in the Weyl zero-form to zero). 
In particular, for critical values of $\nu\,$, given conventionally by $\nu=-2\ell-1$ with $\ell=0,1,2,\dots\,$, the higher-spin algebras contain $gl(2\ell+1)$ subalgebras \cite{Vasiliev:1989re}, and the Chern--Simons models can be reduced further down to $sl(N|N\pm 1)$ and pure bosonic $sl(N)$ models studied in \cite{Bergshoeff:1989ns}. \paragraph{Prokushkin--Vasiliev system and Wigner-deformed Heisenberg oscillators.} The Prokushkin--Vasiliev system consists of a connection one-form $\widehat A$ and matter zero-form $\widehat B$ living on a base manifold given locally by the direct product of a commutative spacetime ${\cal M}$ and non-commutative twistor space ${\cal Z}$ with a closed and central two-form $\widehat J\,$. These master fields are valued in associative algebras consisting of functions on a fiber manifold ${\cal Y}\times {\cal I}\,$, the product of an additional twistor space ${\cal Y}$ and an internal manifold ${\cal I}$ whose coordinates generate a matrix algebra. The Prokushkin--Vasiliev field equations, \emph{viz.} $\widehat {\rm d}\widehat A+\widehat A^2+\widehat J \widehat B=0$ and $\widehat {\rm d}\widehat B+[\widehat A,\widehat B]=0$, state that $\widehat A=\widehat A|_{\cal M}+\widehat A|_{\cal Z}$ describes a flat connection on ${\cal M}$ and a pair of oscillators on ${\cal Z}\times {\cal Y}$ deformed by local as well as topological degrees of freedom contained in $\widehat B\,$. 
Working within the fully non-linear system, its constructors observed that models with sufficiently elaborate internal algebra admit $AdS_3$-vacuum expectation values \cite{Prokushkin:1998bq} \begin{eqnarray} \langle \widehat B \rangle =\nu\ , \end{eqnarray} and that the perturbative expansions around these vacua yield parity-invariant three-dimensional higher-spin gravities containing massive scalars.\footnote{These scalars behave as massive higher-spin fields for critical values of $\nu$; whether parity invariance can be broken within the Prokushkin--Vasiliev formalism remains an open issue.} After a suitable redefinition, the perturbatively-defined master fields become valued in associative algebras \begin{equation} {\cal A}(2;\nu;{\cal I})=\bigoplus_{\Sigma} {\cal A}_{\Sigma}\ ,\end{equation} where ${\cal I}$ refers to sets of internal generators (including the $\mathbb Z_2$-generator $\Gamma$ used to double $sl(2)$ to $sl(2)_+ \oplus sl(2)_-$), consisting of sectors ${\cal A}_\Sigma$ of suitable non-polynomial extensions of the universal enveloping algebra $Aq(2;\nu)$ \cite{Vasiliev:1989re} of the Wigner-deformed Heisenberg oscillator algebra \cite{Wigner:50,Vasiliev:1989re,Plyushchay:1997mx}\footnote{Blencowe's construction \cite{Blencowe:1988gj} makes use of the undeformed algebra $Aq(2;0)_+ \oplus Aq(2;0)_-\,$.} ($\alpha=1,2$) \begin{equation} [q_\alpha,q_\beta]=2i\epsilon_{\alpha\beta}(1+\nu k)\ ,\qquad \{k,q_\alpha \}=0\ ,\qquad k^2=1\ . 
\label{defy} \end{equation} Thus, as the fully non-linear formulation rests on associative differential algebras, one may ask whether these can be extended by adding sectors of composite operators and correspondingly refining the star-product composition rule so as to retain associativity, thus keeping the formal structure of the full master-field equations intact, with the aim of facilitating modified embeddings of the Lorentz algebra into the gauge algebra that produce perturbatively-defined field contents containing fractional-spin fields. Indeed, as we shall outline next, this can be done in a relatively straightforward fashion by adding sectors of non-polynomial operators corresponding to Fock-space endomorphisms. These operators are given essentially by star-product versions of vacuum-to-vacuum projectors dressed by left and right multiplications by arbitrary polynomials. The extended associative star-product rules can then be defined using a matrix structure. \subsection{Outline of fractional-spin gravities}\label{Sec:main} \paragraph{Matrix fusion rules\,.} The fractional-spin gravities that we shall consider are based on $\mathbb Z_2$-graded associative algebras that are formed by extending the enveloping algebra $Aq(2;\nu)$ by sectors of operators acting in the Fock space ${\cal F}$ consisting of states with distinct eigenvalues of the spatial spin generator $J_0$. 
More formally, we define \begin{equation} {\cal A}(2;\nu|\mathfrak{o}(2)_{J_0};{\cal F}):=\left[\begin{array}{cc} \overline{Aq}(2;\nu)_{++} & \rho_{\tiny {\cal F}}({\rm End}({\cal F}))_{+-}\\ \rho_{\tiny {\cal F}}({\rm End}({\cal F}))_{-+}& \rho_{\tiny {\cal F}}({\rm End}({\cal F}))_{--}\end{array}\right] \end{equation} where the injective homomorphism, or monomorphism, \begin{eqnarray}\label{rhomap} \rho_{\tiny {\cal F}} \quad : \quad {\rm End}({\cal F})\quad \hookrightarrow \quad \overline{Aq}(2;\nu) \end{eqnarray} maps the space \begin{equation} {\rm End}({\cal F}):=\left\{\check E:=\sum_{m,n\geqslant 0} E^{mn}|m\rangle\langle n|\right\}\cong {\rm Mat}_{\infty}(\mathbb{C})\end{equation} of endomorphisms of the Fock space \begin{equation} {\cal F}=\sum_{m=0}^\infty \mathbb{C}\otimes |m\rangle\ ,\quad (\check N-m)|m\rangle =0\ ,\end{equation} of the undeformed Heisenberg oscillator algebra $[\check b^-,\check b^+]=1$ with number operator \begin{eqnarray} \check N := {\check b}^+ \,{\check b}^- \end{eqnarray} into a non-polynomial completion \begin{equation}\label{fqk} \overline{Aq}(2;\nu)\,:=\left\{\ f(k;q)=\sum_{m=0,1; n\geq 0} k^m{} f_{m;(n)}(q)\ ,\quad f_{m;(n)}(q):=f_{m;(n)}^{\,\alpha_1 \cdots \alpha_n} {} q_{(\alpha_1 } {}\cdots{} q_{\alpha_n)} \,\right\}\,, \end{equation} of the enveloping algebra $Aq(2;\nu)$ of the Wigner-deformed Heisenberg oscillator algebra \eqref{defy}. 
The monomorphism $\rho_{\tiny {\cal F}}$ is defined by the rule \begin{equation} \label{rhoF} \rho_{\tiny {\cal F}}(|m\rangle\langle n|)=P_{m|n}\ ,\end{equation} where the generalized projectors $P_{m|n} \in \overline{Aq}(2;\nu)$ obey \begin{equation} P_{m|n}{} P_{k|l}=\delta_{nk} P_{m|l}\ ,\quad (N_\nu-m){} P_{m|n}=0=P_{m|n}{}(N_\nu-n)\ , \end{equation} with number operator $N_\nu:=\rho_{\tiny {\cal F}}(\check N)$ related to $J_0$ by \begin{eqnarray} \label{J01} N_\nu := 2 J^0 - \tfrac12 (1+\nu) \; , \quad J^0 := \tfrac{1}{4}\, \{ a^- , a^+\}\; , \end{eqnarray} expressed in terms of the \emph{deformed} oscillators $(a^-,a^+)$ obeying $[a^- , a^+] = 1 + k\,\nu\,$. The relationship \cite{Plyushchay:1997mx} between deformed and undeformed oscillators $(a^- , a^+)$ and $(b^- , b^+)$, respectively, is presented in Subsection \ref{subsec:RepresentationAq}. In defining ${\cal A}(2;\nu|\mathfrak{o}(2)_{J^0};{\cal F})$ we have also used \begin{equation}\label{projector} \overline{Aq}(2;\nu)_{\sigma\sigma'} :=\Pi_{\sigma}\,{}\,\overline{Aq}(2;\nu)\,{}\,\Pi_{\sigma'}\ ,\quad \Pi_\pm=\frac12(1\pm k)\ . \end{equation} The associative composition rule in ${\cal A}(2;\nu|\mathfrak{o}(2)_{J^0};{\cal F})$ is defined as follows: Let \begin{equation} \mathbb{M}_i=\left[\begin{array}{cc} A_i& \rho_{\tiny {\cal F}}(\check B_i)\\\rho_{\tiny {\cal F}}(\check{C}_i) & \rho_{\tiny {\cal F}}(\check{D}_i)\end{array}\right]\ , \quad i=1,2\ ,\end{equation} be two elements in ${\cal A}(2;\nu|\mathfrak{o}(2)_{J^0};{\cal F})$ with $A_i\in Aq(2;\nu)$ being finite polynomials and $\check B_i,\,\check C_i,\, \check{D_i} \in {\rm Mat}_{K}(\mathbb{C})\subset {\rm End}({\cal F})$ being finite matrices, that is, $\check B_i=\sum_{m,n=0}^K B^{mn}_i|m\rangle\langle n|$ \emph{idem} $\check C_i$ and $\check D_i$ ($i=1,2$). Then $\mathbb{M}_1 \mathbb{M}_2$ is defined by standard matrix multiplication followed by star-product compositions and expansions of the results in the appropriate bases. 
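Since $\rho_{\tiny {\cal F}}$ is injective, the defining relations of the generalized projectors $P_{m|n}$ can be checked directly on their Fock-space pre-images, where $P_{m|n}$ corresponds to $|m\rangle\langle n|$ and $N_\nu$ to the number operator. A minimal numerical sketch (the finite truncation $K$ and the helper names are our own, for illustration only):

```python
import numpy as np

K = 8  # finite truncation of the Fock space (assumption, for illustration)

def P(m, n):
    """Fock-space pre-image of the generalized projector P_{m|n} = |m><n|."""
    out = np.zeros((K, K))
    out[m, n] = 1.0
    return out

N = np.diag(np.arange(K, dtype=float))  # number operator check-N

# P_{m|n} P_{k|l} = delta_{nk} P_{m|l}
for m, n, kk, l in [(0, 1, 1, 2), (0, 1, 2, 2), (3, 0, 0, 3)]:
    lhs = P(m, n) @ P(kk, l)
    rhs = (1.0 if n == kk else 0.0) * P(m, l)
    assert np.allclose(lhs, rhs)

# (N - m) P_{m|n} = 0 = P_{m|n} (N - n)
m, n = 2, 5
assert np.allclose((N - m*np.eye(K)) @ P(m, n), 0)
assert np.allclose(P(m, n) @ (N - n*np.eye(K)), 0)
```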
In particular, the quantity $\rho_{\tiny {\cal F}}(\check B_1) \rho_{\tiny {\cal F}}(\check C_2) = \sum_{m,n,p=0}^K B_1^{mp} C_2^{pn} P_{m|n}$ is given by its expansion in the monomial basis of $\overline{Aq}(2;\nu)$, while $A_1 \rho_{\tiny {\cal F}}(\check B_2)$ and $\rho_{\tiny {\cal F}}(\check C_1) A_2$ are expanded in the matrix basis of ${\rm End}({\cal F})$. This composition rule is then extended to ${\rm End}({\cal F})$ by allowing the degree of $A_i$ and $K$ to be arbitrarily large. Thus, the fractional-spin algebra has a product rule that combines star-product compositions in initial bases followed by expansions of the results in a final basis, which one may refer to as a \emph{fusion rule}. We note that in the case at hand, the fusion rule does not require any expansion of $P\in Aq(2;\nu)$ in the matrix basis of ${\rm End}({\cal F})\,$. \paragraph{Hermitian conjugation\,.} Defining the hermitian conjugation operation $\dagger$ in $Aq(2;\nu)$ by \begin{equation} (q_\alpha)^\dagger=q_\alpha\ ,\quad (k)^\dagger=k\ ,\end{equation} it follows that the Fock-space realization $\check q_\alpha$ of the deformed oscillators \cite{Plyushchay:1997mx,Plyushchay:1997ty} with \begin{equation} \check k=\varepsilon (-1)^{\check N}=\varepsilon \cos (\pi \check N)\,,\quad (\check N-n)|n\rangle=0\ ,\quad \varepsilon^2=1\ , \end{equation} obeys \begin{equation} (\check q_\alpha)^{\check{\dagger}}=\check C \check q_\alpha \check C\ ,\end{equation} where $\check{\dagger}$ refers to the standard hermitian conjugation operation in ${\rm End}({\cal F})$ and the charge conjugation matrix $\check C$ is given by the identity in the unitary regime $\varepsilon \nu\geqslant -1$ and a non-trivial matrix in the non-unitary regime $\varepsilon \nu<-1$. 
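The $\Pi_\pm$-grading underlying the $2\times 2$ matrix fusion rule can likewise be illustrated with truncated Fock-space matrices, where $k$ acts as ${\rm diag}((-1)^n)$. The sketch below (our own notation; random matrices stand in for generic sector elements) confirms that $(+-)$ and $(-+)$ elements compose into the diagonal sectors, as in ordinary block-matrix multiplication:

```python
import numpy as np

K = 8
k = np.diag([(-1.0)**n for n in range(K)])
Pp, Pm = (np.eye(K) + k)/2, (np.eye(K) - k)/2   # Fock images of Pi_+, Pi_-

rng = np.random.default_rng(1)
X = Pp @ rng.standard_normal((K, K)) @ Pm   # a (+-) sector element, like psi
Y = Pm @ rng.standard_normal((K, K)) @ Pp   # a (-+) sector element, like psi-bar

# products land in the diagonal sectors, as in the 2x2 matrix fusion rule
assert np.allclose(X @ Y, Pp @ (X @ Y) @ Pp)   # (+-)(-+) -> (++)
assert np.allclose(Y @ X, Pm @ (Y @ X) @ Pm)   # (-+)(+-) -> (--)
assert np.allclose(X @ X, 0)                   # (+-)(+-) vanishes: Pm Pp = 0
```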
Assuming furthermore that $\check C^2=1$ and $(\check C)^{\check{\dagger}}=\check C\,$, it follows that \begin{equation} \dagger \circ \, \rho_{\tiny {\cal F}}=\rho_{\tiny {\cal F}} \circ {\rm Ad}_{\check C}\circ \check{\dagger}={\rm Ad}_{C}\circ \rho_{\tiny {\cal F}}\circ \check{\dagger}\ ,\quad C:=\rho_{\tiny {\cal F}}(\check C)\ ,\end{equation} or more explicitly, if $f=\rho_{\tiny {\cal F}}(\check f)$ then \begin{equation} f^\dagger=\rho_{\tiny {\cal F}}(\check C\check f^{\check{\dagger}}\check C)=C{} \rho_{\tiny {\cal F}}(\check f^{\check{\dagger}}){} C\ . \end{equation} \paragraph{Master gauge fields\,.} Starting from a Prokushkin--Vasiliev model with $\langle \widehat B\rangle =\nu$ and fiber algebra \begin{equation} {\cal A}_\sigma :=\left[{\cal A}(2;\nu|\mathfrak{o}(2)_{J^0};{\cal F})\otimes {\rm Cliff}_1(\Gamma)\otimes {\rm Cliff}_1(\xi)\right]_\sigma\ , \quad \sigma = \pm\ , \end{equation} where ${\rm Cliff}_1(\Gamma)$ and ${\rm Cliff}_1(\xi)$ denote, respectively, a bosonic Clifford algebra and a fermionic Clifford algebra with respective generators obeying \begin{equation} \Gamma{} \Gamma=1=\xi{} \xi\ ,\quad \epsilon_{\rm s}(\Gamma,\xi)=(0,1)\ \end{equation} and where $\epsilon_{\rm s}$ denotes the Grassmann parity, we may consider the consistent truncation $\widehat B=\nu$, leaving the flat connection \begin{equation} \mathbb{A}_\sigma =\left[ \begin{array}{cc} W & \psi_\sigma \\ \overline{\psi}_\sigma & U \end{array} \right]\quad \in \quad \Omega^{[1]}({\cal M}_3)\otimes {\cal A}_\sigma \ , \qquad \sigma = \pm\ . 
\end{equation} We demand the master fields to be Grassmann-even, \emph{i.e.} \begin{equation} \epsilon_{\rm s}(W,\psi_\pm,\overline{\psi}_\pm,U)=(0,0,0,0)\ ,\end{equation} and to have intrinsic parities \begin{equation} \sigma(W,\psi_\pm,\overline{\psi}_\pm,U)=(+1,\pm 1,\pm 1,+1)\ ,\end{equation} where $\sigma$ is defined on polynomials $f(q,k,\xi)$ of definite degrees in $q^{\alpha}$ and $\xi$ by \begin{equation} \pi_q \pi_\xi(f) =: \sigma(f) f\ ,\label{sigmadef}\end{equation} where we used the automorphisms \begin{equation} \pi_q(f(q,k,\xi)):=f(-q,k,\xi)\ ,\quad \pi_{\xi} f(q,k,\xi):=f(q,k,-\xi)\ .\end{equation} Taking into account the $\Pi_\pm$-projections and assigning the following Grassmann parity \begin{equation} \label{statisticsq} \epsilon_s(q_{\alpha}) = 0 \, \end{equation} it follows that $(W,U)$ and $(\psi_-,\overline{\psi}_-)$ are $\xi$-independent, hence consisting of Grassmann-even component fields, while $(\psi_+,\overline{\psi}_+)$ are linear in $\xi$, hence consisting of Grassmann-odd component fields. We note that $\psi_\pm$ and $\overline{\psi}_\pm$, respectively, transform under the left actions of $\overline{Aq}(2;\nu)_{++}\otimes {\rm Cliff}_1(\Gamma)$ and $\rho_{\tiny {\cal F}}({\rm End}({\cal F}))_{--}\otimes {\rm Cliff}_1(\Gamma)$ and under the right actions of $\rho_{\tiny {\cal F}}({\rm End}({\cal F}))_{--}\otimes {\rm Cliff}_1(\Gamma)$ and $\overline{Aq}(2;\nu)_{++}\otimes {\rm Cliff}_1(\Gamma)$. The reality conditions on $\mathbb{A}_\sigma$ will be chosen such that $W$ belongs to a non-compact real form of $\overline{Aq}(2;\nu)_{++} \otimes {\rm Cliff}_1(\Gamma)$ containing the Lorentz generators $\Lambda^{\alpha\beta}\Pi_+{} q_{(\alpha}{} q_{\beta)}{} \Pi_+$ with $(\Lambda^{\alpha\beta})^\dagger=\Lambda^{\alpha\beta}\,$, while $U\in u(\infty) \otimes {\rm Cliff}_1(\Gamma)\,$. 
We note that for generic $\nu$, the model may be level truncated such that \begin{equation} U\in u(K)\oplus u(K) \ ,\quad (\psi_\pm,\overline{\psi}_\pm)\in (K,\overline K)\ ,\end{equation} for $K=1,2,\dots,\infty$, but that the more interesting truncations arise spontaneously as $\nu$ assumes critical values. \paragraph{Embedding of Lorentz algebra\,.} A standard space-time formulation of the Chern--Simons field theory requires the choice of a canonical Lorentz connection $\omega \in sl(2)_{\rm Lor}$ associated with a principal Lorentz bundle over ${\cal M}_3$. In general, the Lorentz algebra can be embedded into the gauge algebra in several inequivalent ways leading to physically distinct models. In particular, one has \begin{itemize} \item the diagonal embedding \begin{equation} sl(2)_{\rm Lor}=sl(2)_{\rm diag}:={\rm span} \left\{ q_{(\alpha} {} q_{\beta)}\right\}\ , \label{standardso12} \end{equation} which yields standard higher-spin (super)gravities consisting of Lorentz tensors (and tensor spinors); \item the alternative non-diagonal embedding \begin{equation} sl(2)_{\rm Lor}=\Pi_+ {} sl(2)_{\rm diag}=\left\{\Pi_+ {} q_{(\alpha} {} q_{\beta)}\right\}\ ,\label{anyonso12} \end{equation} which yields the fractional-spin (super)gravities in which the canonical Lorentz connection $\omega$ is thus embedded in $W$ such that $\psi$ and $\bar \psi$, respectively, transform in left- and right-modules with fractional Lorentz spin. 
\end{itemize} \paragraph{Supertrace and action\,.} The non-polynomial completion $\overline{Aq}(2;\nu)$ of the enveloping algebra $Aq(2;\nu)$ admits the trace operation \begin{equation}\label{Trace} {\rm Tr}_\nu(f):= {\rm STr}_\nu(k{} f)\ ,\end{equation} where the supertrace operation ${\rm STr}_\nu$ is fixed uniquely by its defining properties \begin{equation} {\rm STr}_{\nu}(f{} g)=(-1)^{\tfrac{1-\sigma(f)}{2}}{\rm STr}_{\nu}(g{} f) = (-1)^{\tfrac{1-\sigma(g)}{2}}{\rm STr}_{\nu}(g{} f) \ ,\quad {\rm STr}_{\nu}(1):=1\ ,\end{equation} where the intrinsic parity $\sigma$ is defined in \eqref{sigmadef}, \emph{i.e.} $f(-q,k)=\sigma(f) f(q,k)\,$. Using the Weyl-ordered basis \eqref{fqk}, one has \cite{Vasiliev:1989re} \begin{equation} \label{STr}{\rm STr}_\nu (f(q,k))= f_{0;(0)} - \nu \, f_{1;(0)}\ .\end{equation} Upon including the Clifford algebras, we define \begin{equation} {\rm Tr}(f)={\rm Tr}_{\nu}(f)|_{\xi=0=\Gamma}\ ,\quad f\in \overline{Aq}(2;\nu)\otimes {\rm Cliff}_1(\Gamma)\otimes {\rm Cliff}_1(\xi)\ , \end{equation} and equip ${\cal A}_\pm$ with the trace operation \begin{equation} {\rm Tr}_\mp (\mathbb{M}_\pm)={\rm Tr}(A\mp D)\ ,\quad \mathbb{M}_\pm=\left[\begin{array}{cc} A&B_\pm\\ C_\pm&D\end{array}\right]\in{\cal A}_\pm\ ,\label{traceoperation}\end{equation} where thus $\sigma(B_\pm)=\sigma(C_\pm)=\pm 1$, which obeys\footnote{Expanding the zero-form $B_\mp=\sum_I B_\mp^{I} \Theta^\mp_I$ where $\Theta^\mp_I$ denote a basis of composite operators and $B_\mp^I$ component zero-form fields with $\epsilon_{\rm s}(\Theta^\mp_I)=\epsilon_{\rm s}(B_\mp^I)=(1\pm 1)/2\,$ \emph{idem} for $C_\pm$, it follows from ${\rm Tr}(\Theta^\mp_I{} \Theta^\mp_J)={\rm Tr}(\Theta^\mp_J{} \Theta^\mp_I)$ that ${\rm Tr}_\pm (\mathbb{M}_{1;\mp} {}\mathbb{M}_{2;\mp})={\rm Tr}(A_1{} A_2\pm D_1 {} D_2)\pm \sum_{I,J} (B_{1;\mp}^I C_{2;\mp}^{ J}+ B_{2;\mp}^I C_{1;\mp}^J){\rm Tr}(\Theta^\mp_I{} \Theta^\mp_J)$ is symmetric under the exchange $1 \longleftrightarrow2\,$.} \begin{equation} 
{\rm Tr}_\pm (\mathbb{M}_{1;\mp} {}\mathbb{M}_{2;\mp})= {\rm Tr}_\pm (\mathbb{M}_{2;\mp}{}\mathbb{M}_{1;\mp})\ .\end{equation} The Chern--Simons action reads \begin{eqnarray} S[\mathbb{A}_\pm]&=&\int \; {\rm Tr}_\mp \left( \tfrac12\mathbb{A}_\pm{} {\rm d} \mathbb{A}_\pm+\tfrac{1}{3} (\mathbb{A}_\pm)^{{} 3} \right) \label{CS1} \\ [5pt]&=&\int \; {\rm STr}_\nu\left( \tfrac12 W {} {\rm d} W+\tfrac{1}{3} W^{{} 3}+W{} \psi_\pm{} \bar\psi_\pm\right.\\ &&\left. \pm ( \tfrac{1}2 U {} {\rm d} U+\tfrac{1}{3} U^{{} 3}+U{} \bar\psi_\pm {} \psi_\pm) + \tfrac12(\psi_\pm {} {\rm d} \bar{\psi}_\pm\pm \bar{\psi}_\pm {} {\rm d} \psi_\pm)\right)|_{\Gamma=0=\xi}\qquad \label{CS2}\ , \end{eqnarray} as can be seen using the $\Pi_\pm$ projections and \begin{equation} {\rm STr}_{\nu}(\bar \psi_\pm {} W{} \psi_\pm)=\pm {\rm STr}_{\nu}(W{} \psi_\pm{} \bar \psi_\pm)\ ,\quad {\rm STr}_{\nu}(\psi_\pm {} U{} \bar\psi_\pm)=\pm {\rm STr}_{\nu}(U{} \bar\psi_\pm{} \psi_\pm)\ .\end{equation} On $\rho_{\tiny {\cal F}}({\rm End}({\cal F}))$ the operation ${\rm STr}_\nu$ reduces to the standard Fock-space supertrace, \emph{viz.} \begin{equation} {\rm STr}_\nu(\rho_{\tiny {\cal F}}(\check f))={\rm STr}_\nu\big(\rho_{\tiny {\cal F}}(|0\rangle\langle 0|)\big) \sum_{m=0}^\infty (-1)^m \langle m|\check f|m\rangle\ .\end{equation} Thus, the level of the internal gauge algebra is proportional to the $\nu$-dependent quantity ${\rm STr}_\nu\big(\rho_{\tiny {\cal F}}(|0\rangle\langle 0|)_{--}\big)\equiv {\rm STr}_\nu(\Pi_-{} P_{0|0})\,$. 
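On the Fock-space sector, the graded cyclicity of ${\rm STr}_\nu$ can be probed numerically: realizing the supertrace as ${\rm Tr}(\check k\,\check f)$ (up to the overall normalization ${\rm STr}_\nu\big(\rho_{\tiny{\cal F}}(|0\rangle\langle 0|)\big)$) and identifying the intrinsic parity with the $\check k$-grading, one recovers the sign rule in the defining properties of ${\rm STr}_\nu$ stated above. A sketch under these assumptions, with a finite truncation:

```python
import numpy as np

K = 10
k = np.diag([(-1.0)**n for n in range(K)])
Pp, Pm = (np.eye(K) + k)/2, (np.eye(K) - k)/2
strace = lambda f: np.trace(k @ f)   # Fock-space supertrace, up to normalization

rng = np.random.default_rng(2)
rand = lambda: rng.standard_normal((K, K))
even = lambda: Pp@rand()@Pp + Pm@rand()@Pm   # sigma = +1: commutes with k
odd  = lambda: Pp@rand()@Pm + Pm@rand()@Pp   # sigma = -1: anticommutes with k

f_e, g_e, f_o, g_o = even(), even(), odd(), odd()
assert np.isclose(strace(f_e @ g_e),  strace(g_e @ f_e))   # even-even: cyclic
assert np.isclose(strace(f_e @ g_o),  strace(g_o @ f_e))   # mixed: both sides vanish
assert np.isclose(strace(f_o @ g_o), -strace(g_o @ f_o))   # odd-odd: graded sign flip
```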
\paragraph{On-shell formulation in the Fock space\,.} The equations of motion take the form \begin{equation}\label{F} {\mathbb{F}}{}_\pm:={\rm d} {\mathbb{A}}{}_\pm + ({\mathbb{A}}{}_\pm)^2=0 \,, \end{equation} that is, \begin{equation} {\rm d} W + W^{{} 2} + \psi_\pm{} {\overline{\psi}}{}_\pm=0\ ,\quad {\rm d} U + U^{{} 2} + {\overline{\psi}}{}_\pm {} \psi_\pm = 0\ , \end{equation} \begin{equation} {\rm d} \psi_\pm + W {} \psi_\pm + \psi_\pm U=0\ ,\quad {\rm d} {\overline{\psi}}{}_\pm + U {} {\overline{\psi}}{}_\pm + {\overline{\psi}}{}_\pm {} W=0\ . \end{equation} Assuming that $W$ lies in the image of $\rho_{\tiny {\cal F}}$, one can thus equivalently work on-shell with the Fock-space presentation of the equations of motion, \emph{viz.} \begin{equation} {\rm d} \check W+\check W^2+\check \psi_\pm\check{\overline{\psi}}{}_\pm=0\ ,\quad {\rm d}\check U+\check U^2+\check{\overline{\psi}}{}_\pm \check \psi_\pm=0\ , \end{equation} \begin{equation} {\rm d} \check \psi_\pm+\check W\check \psi_\pm+\check \psi_\pm \check U=0\ ,\quad {\rm d} \check{\overline{\psi}}{}_\pm + \check U \check{\overline{\psi}}{}_\pm + \check{\overline{\psi}}{}_\pm\check W = 0\ , \end{equation} which we shall analyze in more detail below, though we note that the calculation of the action requires the star-product formalism. 
\paragraph{Fractional Lorentz spin\,.} By the construction outlined so far, and working in conventions where \begin{equation} sl(2)_{\rm diag}= \left\{ J^+, J^0, J^{-}\right\}=\left\{\tfrac{1}{2}\, a^+ {} a^+,\; \tfrac{1}{4}\,\{a^+,a^-\}_{}, \;\tfrac{1}{2}\,a^-{} a^-\right\}\ ,\end{equation} where the deformed ladder operators $a^\pm$ are linear combinations of $q_\alpha$ obeying \begin{equation} [a^-,a^+]_{}=1+\nu k\ ,\qquad \{k,a^\pm\}_{}=0\ ,\qquad (a^\pm)^\dagger=a^\mp\ ,\label{dha} \end{equation} the Lorentz spin of $(\psi,\bar\psi)$, say $\alpha$, defined to be the lowest weight of the generator \begin{equation} J^0=\tfrac12 N_{\nu}+\tfrac14(1+\nu)\ , \label{J0} \end{equation} is one of the roots of the quadratic Lorentz Casimir \begin{equation} \label{casimir} C_2(sl(2)_{\rm Lor}){} \psi_\pm=-\alpha(\alpha-1)\,\psi_\pm \;, \qquad \bar{\psi}_\pm{} C_2(sl(2)_{\rm Lor}) =-\alpha(\alpha-1)\,\bar{\psi}_\pm\,. \end{equation} Taking into account $k=\varepsilon (-1)^{N_\nu}\,$, one has \begin{equation}\label{alpha} \alpha=\frac{1+\nu}4+\frac{1-\varepsilon}{4}\ , \qquad \varepsilon = \pm 1\ .\end{equation} The Lorentz spin of $(\psi,\bar\psi)$ is thus fractional and hence $(\psi,\bar\psi)$ transform in an infinite-dimensional irreducible representation of $sl(2)_{\rm Lor}$ except for critical values of $\nu$. In the following, we will implicitly assume that $\varepsilon=+1\,$ unless explicitly mentioned otherwise. \paragraph{Critical $\nu$\,.} For the \begin{equation} \mbox{critical values}:~ \nu~=~\nu_\ell~:=~-2\ell-1\ ,\quad \ell=0,1,2,\dots\ ,\end{equation} the deformed Wigner-Heisenberg algebra is known \cite{Vasiliev:1989re,Plyushchay:1997mx,Plyushchay:1997ty} to admit $(2\ell+1)$-dimensional irreducible representations. 
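Before proceeding, a one-line consistency check (our own rewriting of relations already stated above, for $\varepsilon=+1$ acting on the $k=+1$ sector): the lowest weight of $J^0$ in \eqref{J0} reproduces the one-sided Casimir value \eqref{c2sp2},

```latex
\alpha \;=\; J^0\big|_{N_\nu=0} \;=\; \tfrac14\,(1+\nu)
\qquad\Longrightarrow\qquad
-\alpha(\alpha-1) \;=\; \tfrac{1}{16}\,(1+\nu)(3-\nu)
\;=\; \tfrac{1}{16}\left(3+2\nu k-\nu^2\right)\Big|_{k=+1}\,,
```

in agreement with \eqref{casimir}.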
As we shall see in Section \ref{subsec:RepresentationAq}, the algebra $Aq(2;\nu)$ can be represented in a number of ways on $\mathcal F$, leading to representations $\check{Aq}(2;\nu;\tau)$ whose indecomposable structures for critical $\nu$ depend on a parameter $\tau\in \mathbb{R}$. In particular, it is possible to choose the representation matrices in accordance with the direct-sum decomposition \begin{equation} \check{Aq}(2; - 2\ell -1;0 ) \cong gl(2\ell+1)\oplus \check{Aq}(2;2\ell+1;0)\ , \end{equation} where $\check{Aq}(2;2\ell+1;0)$ is isomorphic to the representation of ${Aq}(2;-2\ell - 1)$ in ${\cal F}$ built on the singular vector $\vert 2\ell+1\rangle$. Thus, in critical limits, the indecomposable structures of ${Aq}(2;-2\ell-1)_{++}$ and ${Aq}(2; - 2\ell -1 )_{--}$ differ from those of $\check{Aq}(2;-2\ell-1;\tau)_{++}$ and $\check{Aq}(2;- 2\ell -1;\tau )_{--}$, respectively, though in both cases the finite-dimensional sectors that remain after factoring out the ideals are isomorphic to $gl(2\ell+\tfrac12(1+\varepsilon))$ and $gl(2\ell+\tfrac12(1-\varepsilon))$, respectively. \paragraph{Generalized $\mathfrak h_{\rm 1-sided}$\,.} Finally, the fractional-spin gravity admits a natural generalization based on the Fock space ${\cal F}$, in which $\check J^0$ is diagonal, and an additional state space \begin{equation} \tilde{\cal F}=\bigoplus_{\lambda}\mathbb{C}\otimes |\lambda\rangle\ ,\qquad (\check H-\lambda)|\lambda\rangle=0\ ,\end{equation} where $\check H$ is a Hamiltonian with normalizable (bound) states. 
If there exists a star-product implementation with fusion rules corresponding to \begin{equation} W\in \overline{Aq}(2;\nu){}_{++}\ ,\qquad U\in \rho_{\tilde{\cal F}}({\rm End}(\tilde{\cal F}))\ , \qquad \psi\in \rho_{\tilde{\cal F}}({\rm End}(\tilde{\cal F}))\ ,\qquad \bar\psi\in\rho_{\tilde{\cal F}}({\rm End}(\tilde{\cal F}))\ , \end{equation} where $\rho_{\tilde{\cal F}}:{\rm End}(\tilde{\cal F})\rightarrow \overline{Aq}(2;\nu)$, then we propose a Chern--Simons action based on the Killing form \begin{equation} {\rm STr}\big(\rho_{\tilde{\cal F}}(\sum_{\lambda,\lambda'}|\lambda\rangle\langle \lambda'|f^{\lambda\lambda'})\big)=\sum_\lambda {\rm STr}(P_{\lambda|\lambda}) f^{\lambda\lambda}\ ,\end{equation} where $\sum_{\lambda,\lambda'}|\lambda\rangle\langle \lambda'|f^{\lambda\lambda'}\in{\rm End}(\tilde{\cal F})$ and $P_{\lambda|\lambda}$ is the star-product algebra element corresponding to $|\lambda\rangle\langle\lambda|$. \section{The Wigner-deformed Heisenberg oscillator algebra} \label{sec:Wigner-deformed} This section describes the explicit realization of the fractional-spin algebras using Wigner-deformed Heisenberg oscillators. \subsection{The enveloping algebra $Aq(2;\nu)$ and its derived Lie (super)algebras }\label{Aqsec} The universal enveloping algebra $Aq(2;\nu)$ of the Wigner-deformed Heisenberg oscillator algebra is the associative algebra spanned by arbitrary polynomials in the deformed oscillators $q_\alpha$ and the Kleinian $k$ modulo their relations \eqref{defy}. It contains two associative subalgebras given by its subspaces $Aq(2;\nu)_{\pm\pm}=\Pi_{\pm} Aq(2;\nu) \Pi_\pm$, where $\Pi_\pm=\frac12(1\pm k)\,$. By taking $[f_1,f_2]:=f_1 f_2 -f_2 f_1$, these algebras turn into Lie algebras, which we denote by $lq(2;\nu)$ and $lq_{\pm\pm}(2;\nu)$, in their turn containing the Lie subalgebras $slq(2;\nu)$ and $slq(2;\nu)_{\pm\pm}\,$, respectively, obtained by factoring out $\mathbb{C}\otimes \mathbf 1$ and $\mathbb{C}\otimes \Pi_\pm\,$. 
The algebra $Aq(2;\nu)$ can also be endowed with the structure of a $\mathbb{Z}_2$-graded Lie algebra, denoted by $q(2;\nu)$, with graded commutator \begin{equation} [f_1,f_2]_{\varepsilon} :=f_1f_2-(-1)^{\varepsilon(f_1)\varepsilon(f_2)}f_2 f_1\ , \label{gradedcomm} \end{equation} with degree defined by\footnote{We use (square) curved brackets to denote strength-one (anti) symmetrization.} \begin{equation} \varepsilon(k^B \, q_{(\alpha_1 } \cdots\, q_{\alpha_n)}):= \tfrac12(1-(-1)^n)\ ,\end{equation} that is, $\varepsilon(f(q,k)) =0$ if $f(-q,k)=f(q,k)$ and $ \varepsilon(f(q,k)) =1$ if $f(-q,k)=-f(q,k)$. Factoring out the identity from $q(2;\nu)$ yields a superalgebra, which we denote by $sq(2;\nu)$. The Lie algebras $lq(2;\nu)$ and $slq(2;\nu)$ as well as their graded counterparts $q(2;\nu)$ and $sq(2;\nu)$ contain $sl(2)$ subalgebras generated by \begin{equation}\label{Jyy} J_a:=\frac{i}{8} (\gamma_a)^{\alpha \beta} M_{\alpha\beta}\ ,\quad M_{\alpha\beta}:=q_{(\alpha}q_{\beta)} =\tfrac{1}{2}\,( q_{\alpha}{} q_{\beta} + q_{\beta}{} q_{\alpha} )\ , \end{equation} that obey \begin{equation} [J_a,J_b] = i\,\epsilon_{abc}\, J^c\ , \end{equation} using conventions where the matrices $(\gamma_a)^{\alpha \beta}=\epsilon^{ \beta\gamma} (\gamma_a)^{\alpha}{}_\gamma$ are normalized such that \begin{equation} \{\gamma_a , \gamma_b\}=-2\eta_{ab}\ ,\quad \rm {with}\quad \eta_{ab}={\rm diag}(-1,+1,+1)\ ,\quad \epsilon^{012}=1\ ,\end{equation} and the spinor indices are raised and lowered using the conventions \begin{equation} q^\alpha=\epsilon^{\alpha\beta}q_\beta\ ,\quad q_\alpha=q^\beta \epsilon_{\beta\alpha}\ ,\quad \epsilon^{\alpha\delta}\epsilon_{\beta\delta}= \delta^{\alpha}_{\beta}\ , \end{equation} together with the realization \begin{equation} ( \gamma_0 , \gamma_1 , \gamma_2 ){}^{\alpha}{}_\beta =(-\sigma_2\, , -i \sigma_1\, , -i \sigma_3)^{\alpha}{}_\beta\ ,\quad \epsilon^{12}=\epsilon_{12}=1\ .\end{equation} One has \begin{eqnarray} & 
J^0=\frac{1}{4}\{a^+,a^-\},\qquad J ^\pm := J _1\pm i J _2=\frac{1}{2}(a^\pm)^2 \,,& \label{J0,Ji} \end{eqnarray} where we have defined \begin{equation} q_1:=a^++ a^-\ ,\quad q_2:=i(a^+-a^-)\ ,\quad a^+=\tfrac12(q_1-i q_2)\ ,\quad a^-=\tfrac12(q_1+ i q_2)\ ,\end{equation} \begin{equation} \label{WHalg} [a^-,a^+]=1+\nu k\ ,\quad \{k,a^\pm\}=0\ ,\quad k^2=1\ .\end{equation} In the $\mathbb{Z}_2$-graded case, the $sl(2)$ algebra can be extended further to $osp(2|2)$ by taking the supercharges $Q_\alpha^i$ ($i=1,2$) and $so(2)$ generator $T_{12}$ to be given by \cite{Bergshoeff:1991dz} \begin{equation} Q^i_\alpha=(q_\alpha,ikq_\alpha)\ ,\quad T^{12}=-k-\nu\ ,\end{equation} using conventions in which $osp({\cal N}|2)$ has the following graded commutation rules ($i=1,\dots,{\cal N}$)\footnote{The structure coefficients of $osp({\cal N}|2)$ can be found using the realization $M_{\alpha\beta}=q_{(\alpha}q_{\beta)}$, $Q_\alpha^i=\xi^i q_\alpha$ and $T^{ij}=i\xi^i \xi^j$ where $q_\alpha$ obey \eqref{defy} with $\nu=0$ and $\xi^i$ are external operators that obey $\{\xi^i,\xi^j\}=2\delta^{ij}$.}: \begin{equation} \{Q^i_\alpha,Q^j_\beta\}=4\delta^{ij}M_{\alpha\beta} +4\epsilon_{\alpha\beta} T^{ij}\ ,\end{equation} \begin{equation} [M_{\alpha\beta},M^{\gamma\delta}]=8i\delta_{(\alpha}^{(\gamma}M^{\phantom{(}}_{\beta)}{}^{\delta)} \ ,\quad [T_{ij},T^{kl}]=8i\delta_{[j}^{[k} T^{\phantom{[}}_{i]}{}^{l]}\ ,\end{equation} \begin{equation} [M_{\alpha\beta},Q_\gamma^i]=-4i\epsilon_{\gamma(\alpha}Q^i_{\beta)}\ ,\quad [T^{ij},Q^k_\alpha]=-4i\delta^{k[i} Q^{j]}_\alpha\ .\end{equation} The quadratic Casimir operator is given by \begin{equation} C_2(osp({\cal N}|2)):=J^a J_a-\frac{i}{16} Q^{\alpha i} Q_{\alpha i}+\frac{1}{32}T^{ij}T_{ij}\ .\end{equation} For ${\cal N}=0,1,2$, the oscillator realization gives rise to one-sided representations in various left- or right-modules, as we shall discuss below, with Casimirs \begin{equation} \label{c2sp2}C_2(sl(2))|_{\rm 1-sided}=\frac1{16}(3+2\nu k-\nu^2)\ 
,\end{equation}\begin{equation} \label{c2osp12}C_2(osp(1|2))|_{\rm 1-sided}=\frac1{16}(1-\nu^2)\ ,\end{equation}\begin{equation} \label{c2osp22}C_2(osp(2|2))|_{\rm 1-sided}=0\ .\end{equation} The $sl(2)$ subalgebras can be extended to $sl(2)\oplus sl(2)$ by taking translations\footnote{The Lorentz generators \begin{equation}\nonumber L_{ab} := \epsilon_{abc} J^c\ , \qquad J_a=- \tfrac{1}{2} \epsilon_{abc} L^{bc}\ , \qquad [L_{ab} , L_{cd}] = i\,\eta_{bc}L_{ad} + 3 \ \rm{terms}\ , \end{equation} and the translation generators $P^a\,$ obey the commutation relations \begin{equation}\nonumber [J^a , J^b] = i\, \epsilon^{abc} J_c\;, \quad [J^a , P^b] = i\, \epsilon^{abc} P_c\;, \quad [P^a , P^b]= i L^{ab}\; . \end{equation} } $P_a$ to be realized as $P_a=J_a k$. Alternatively, by tensoring with the bosonic Clifford algebra ${\rm Cliff}_1(\Gamma)$ one can take $sl(2)\oplus sl(2)\cong sl(2)\otimes {\rm Cliff}_1(\Gamma)$, with translations \begin{equation} \label{boosts} P_a= J_a \Gamma\ .\end{equation} We shall use the latter realization in the construction of the anyonic models, as $\Gamma$ commutes with the projectors $\Pi_\pm=\tfrac12(1\pm k)$ used to define the tensorial, fractional-spin and Lorentz-singlet representations making up the fractional-spin gravity model. \subsection{Representation of $Aq(2;\nu)$ in Fock space: $\check{Aq}(2;\nu;\tau)$} \label{subsec:RepresentationAq} Following \cite{Plyushchay:1997mx}, one can represent the Wigner-deformed Heisenberg oscillator algebra \eqref{WHalg} in terms of undeformed oscillators obeying \begin{equation} [ b^-, b^+]=1\,. 
\end{equation} To this end, one represents the elements $f$ of the oscillator algebras by operators $\check f$ acting in a Fock space ${\cal F}$, \begin{equation} \mathcal{F}=\bigoplus_{n=0}^\infty \mathbb C\otimes |n\rangle\ ,\quad |n\rangle=\frac1{\sqrt{n!}} (\check b^+)^n|0\rangle\ ,\quad \check b^-|0\rangle=0\ ,\end{equation} \begin{equation} (\check N-n)|n\rangle=0\ ,\quad \check N:=\check b^+ \check b^-\ .\end{equation} In the Fock space, the deformed oscillators and the Klein operator can be represented by the following non-linear constructs: \begin{eqnarray} &&\check a^+= \left(\check G \;\sqrt{1+\frac{\nu}{\check N}}\, \check {\Pi}_- + \check H \,\check {\Pi}_+\right)\check b^+\,,\\[5pt] &&\check a^- = \check b^- \left(\check G^{-1}\sqrt{1+\frac{\nu}{\check N}}\, \check \Pi_-+ \check H^{-1}\, \check \Pi_+\right)\,,\\[5pt]&& \check k=(-1)^{\check N}\ ,\quad \check G=G(\check N)\ ,\quad \check H=H(\check N)\ , \end{eqnarray} where $\check \Pi_\pm=\frac{1}{2}(1\pm \check k)$ such that $\check J^0\equiv \tfrac14 \{\check a^+,\check a^-\}=\tfrac12 \check N+\tfrac14(1+\nu)$ as in \eqref{J0} with $\varepsilon=+1$. In particular, taking $ \check{H} = 1 $ and $\check{G} = ( 1 +\frac{\nu}{\check{N}})^{\tau}$ where $\tau\in\mathbb{R}$, one has \begin{equation}\label{a+a-F1} \check a^+ =\left(1+\frac{\nu}{\check N}\right)^{1/2+\tau} \;\,\check b^+\check {\Pi}_+ + \check b^+\check {\Pi}_-\,,\end{equation}\begin{equation} \label{a+a-F2} \check a^- = \left(1+\frac{\nu}{\check N+1}\right)^{1/2-\tau} \;\, \check b^- \check \Pi_-+ \check b^- \check \Pi_+\,, \end{equation} with formal inverse \begin{equation}\label{b+} \check b^+ =\left(1+\frac{\nu}{\check N}\right)^{-1/2-\tau} \;\,\check a^+\check {\Pi}_+ + \check a^+\check {\Pi}_-\,,\end{equation}\begin{equation} \label{b-} \check b^- = \left(1+\frac{\nu}{\check N+1}\right)^{-1/2+\tau} \;\,\check a^- \check \Pi_-+\check a^- \check \Pi_+\,. 
\end{equation} We denote the resulting representation of $Aq(2;\nu)$ in ${\cal F}$ by $\check{Aq}(2;\nu;\tau)$, which is thus the associative algebra consisting of arbitrary polynomials in $\check{a}^\pm$ and $\check{k}$ as given above with parameter $\tau\in \mathbb{R}$. For $\nu=0$ (and all $\tau$) one has $\check a^\pm=\check b^\pm$ and the representation of $Aq(2;0)$ in ${\cal F}$ is unitary if one chooses the hermitian conjugation rule $(\check b^+)^{\check{\dagger}}= \check b^{-}$\,. One has the standard sesquilinear form defined by \begin{equation} (|0\rangle)^{\check\dagger}:=\langle 0|\ ,\quad \langle 0|0\rangle:=1\ .\end{equation} Thus $(|n\rangle)^{\check\dagger} = \langle n|$ and the Klein operator is realized in the Fock space for all $\nu$ by \begin{equation} \label{klein} \check k= \sum_{n\geqslant 0} (-1)^n | n \rangle \langle n| \,. \end{equation} For finite $\nu$ there exist hermitian conjugation operations of the form \cite{Plyushchay:1997mx} \begin{equation} \label{conjn} (\check a^\pm)^{\check \dagger} =\check C^{-1}\,\check a^\mp \check C,\qquad (\check k)^{\check \dagger}=\check C^{-1}\,\check k\, \check C\ , \end{equation} such that \begin{equation}\label{Yconj} (\check q_\alpha)^{\check \dagger} = \check C^{-1} \,\check q_\alpha\, \check C \,, \end{equation} where the conjugation matrix $\check C\in {\rm End}({\cal F})$ depends on $\nu$, or rather, as we shall see, on the integral part $[\nu]\,$. We may further require that \begin{equation} \label{Creal}(\check f^{\check\dagger})^{\check \dagger}=\check f\quad \mbox{for any $\check f\in{\rm End}({\cal F})$}\quad \Leftrightarrow\quad \check C^{\check \dagger} = \check C\ . 
\end{equation} Imposing also \begin{equation} \check C|0\rangle=|0\rangle\,,\end{equation} it follows that the sesquilinear form \begin{equation} \langle \xi| \check C |\chi \rangle\equiv (|\xi \rangle)^{\check\dagger} \check C |\chi\rangle \,, \end{equation} is invariant under similarity transformations generated by the elements $\check f\in {\rm End}({\cal F})$ that satisfy the reality condition \begin{equation} \check f^{\check \dagger} \,=-\check C^{-1}\check f \check C \,, \end{equation} \emph{viz.} \begin{equation} \langle\widetilde{\xi}|\check C|\widetilde{\chi}\rangle = \langle{\xi}|\check C|{\chi}\rangle ,\quad \hbox{where} \quad |\widetilde{\xi}\rangle=\exp(\check f)|\xi\rangle\,, \quad |\widetilde{\chi}\rangle=\exp(\check f)|\chi\rangle\,. \end{equation} One may further restrict \begin{equation} \label{Cunimodular} \check C^2=1\quad \Leftrightarrow\quad \tau=0\ ,\end{equation} for which one has \begin{equation}\label{a+a-} \quad \check a^+= \sum_{n\geq 0} \mathpalette\DHLhksqrt{[n+1]_\nu}\; |n+1\rangle \langle n | , \qquad \check a^-= \sum_{n\geq 0} \mathpalette\DHLhksqrt{[n+1]_\nu}\; |n\rangle \langle n+1| , \end{equation} where \begin{equation} [n]_\nu :=n+\frac{1}{2}(1-(-1)^n)\nu \, . \end{equation} One may choose a diagonal conjugation matrix \begin{equation} \label{C1} \check C =\sum_{n\geq 0} C_n | n \rangle \langle n | \, . \end{equation} Factoring out the relations $\check a^-|0\rangle= 0 = (\check k-1)|0\rangle\,$ yields a generalized Verma module spanned by \begin{equation} |n):=(\check a^+)^n|0\rangle\,, \quad n=0,1,2,...\ , \end{equation} which are non-normalized eigenstates of $\check N\,$. The emergence of singular vectors, that is, states $|n)$ with $n>0$ that are annihilated by $\check a^-$, is associated with the existence of finite-dimensional representations of the Wigner-deformed Heisenberg algebra, that is, realizations of the algebra in terms of finite-dimensional matrices. 
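As a quick consistency check (a short computation, not part of the original derivation), the ket-bra realization \eqref{a+a-} reproduces the Wigner-deformed commutation relation $[\check a^-,\check a^+]=1+\nu\check k$ on number eigenstates:

```latex
% Check of \eqref{a+a-} against the deformed commutation relation,
% using [n+1]_\nu - [n]_\nu = 1 + (-1)^n \nu  and  \check k |n> = (-1)^n |n>:
\begin{equation}
[\check a^-,\check a^+]\,|n\rangle
=\big([n+1]_\nu-[n]_\nu\big)\,|n\rangle
=\big(1+(-1)^{n}\nu\big)\,|n\rangle
=\big(1+\nu\,\check k\big)\,|n\rangle\ .
\end{equation}
```

In particular, at critical $\nu=-2\ell-1$ one has $[2\ell+1]_\nu=2\ell+1+\nu=0$, so the norm $(2\ell+1|2\ell+1)=[2\ell+1]_\nu!$ vanishes and $|2\ell+1)$ becomes a singular vector.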
Defining \begin{equation}\label{bras} (n|:=\langle 0| (\check a^-)^n=\langle 0| ((\check a^+)^n)^{\check \dagger} \check C \,,\quad n=0,1,2,...\ , \end{equation} % such that $(n'|n)\equiv \langle 0| (\check a^-)^{n'} \check C (\check a^+)^n|0\rangle$, it follows that if $|n)$ is a singular vector then $(n'|n)=0$ for all $n'$. As \begin{eqnarray}\label{sprod} (n'|n) =\delta_{n',n}\:[n]_\nu! \ , \qquad [n]_\nu! \; := \; \: \prod_{m=1}^n [m]_\nu\: , \qquad n',n \geqslant 0\,, \end{eqnarray} the following cases arise: \begin{enumerate} \item[I.] \underline{$ \nu>-1$}: In this \emph{unitary} regime, the matrix elements $(n|n)=[n]_\nu ! >0 $ for all $n\,$, and hence \begin{equation} \check C=1\ ,\qquad \check a^\pm=(\check a^\mp)^\dagger\ .\end{equation} The representation of the deformed oscillators in $\mathcal{F}$ is thus unitary \cite{Plyushchay:1997mx,Plyushchay:1997ty}. \item[II.] \underline{$\nu=-1$}: In this \textit{hyper-critical} case, which is also unitary, one has \begin{equation} \check C=1\ ,\qquad \check a^+|0\rangle =0\ ,\quad \quad \check a^-|1\rangle=0\ , \end{equation} and the representation $\check{Aq}(2;-1;0)$ decomposes into \begin{equation} \check{Aq}(2;-1;0)= gl(1)\oplus \check{Aq}(2;1;0)\ , \end{equation} that is, $\mathcal F$ decomposes under $Aq(2;-1)$ represented as $\check{Aq}(2;-1;0)$ into a singlet $|0\rangle$ and an infinite-dimensional unitary representation of $Aq(2;1)$ in $\bigoplus_{n\geqslant 1} \mathbb{C} |n\rangle$ --- as shown below Eq. \eqref{flip}. \item[III.] \underline{$\nu=-2\ell-1,\:\: \ell=1,2,...$}: In these \emph{critical} cases, one has \begin{equation}\label{lhw} \check a^+ | 2\ell \rangle =0 \, ,\quad \check a^- |2\ell+1 \rangle =0 \,, \end{equation} and \begin{eqnarray} && {\rm sign}[(n|n)]=\left\{ \begin{array}{ll} \cos\Big(\frac{n \pi}{2} \Big) - \sin\Big(\frac{n \pi}{2} \Big) & 0\:\leqslant \:n\: \leqslant 2 \ell \,, \\[5pt] 0 & n\: \geqslant 2 \ell+1 \,, \end{array} \right. 
\end{eqnarray} where $sign(x):=x/|x|$ for $x\neq0$ and $sign(0):=0\,$. It follows that \begin{eqnarray} \label{Cnuni} \check C =\sum_{n} C_{n} \, |n\rangle \langle n| \,, \qquad C_{n} =\left\{ \begin{array}{ll} \: {\rm sign}[(n|n)] & 0\:\leqslant \:n\: \leqslant 2 \ell \\[5pt] 1 & n\: \geqslant 2 \ell+1 \,, \end{array} \right. \end{eqnarray} and that ${\cal F}$ decomposes into two irreducible representations of the deformed oscillators, \begin{equation} \mathcal{F}=\mathcal{F}_{\rm f}\, \oplus \mathcal{F}_\infty\ ,\qquad \mathcal{F}_{\rm f}=\bigoplus_{n=0}^{2\ell} ~\mathbb{C}\otimes |n\rangle \ ,\qquad \mathcal{F}_{\infty}=\bigoplus_{n\geqslant 2\ell+1}\mathbb{C}\otimes |n\rangle\ . \end{equation} Indeed, the projectors \begin{equation}\label{nullproj} \mathcal{P}_{\rm f}:=\sum_{n=0}^{2\ell} |n\rangle \langle n| \,, \qquad \mathcal{P}_{\infty}:=\sum_{n=2\ell +1}^{\infty} |n\rangle \langle n|\, , \end{equation} commute with $(\check a^\pm,\check k)$ iff $\nu$ is critical or hyper-critical, and hence \begin{equation} \check a^\pm=\check a^\pm_{\rm f}+\check a^\pm_\infty \ ,\quad \check k=\check k_{\rm f}+\check k_{\infty}\ ,\quad \check C=\check C_{\rm f}+\check C_\infty\ ,\end{equation} \begin{equation} \check a^\pm_{ \stackrel{{\rm f}} { \infty }} := \mathcal{P}_{ \stackrel{{\rm f}} { \infty }}\, \check a^\pm \, \mathcal{P}_{ \stackrel{{\rm f}} { \infty }}\, , \qquad \check k_{ \stackrel{{\rm f}} { \infty }} := \mathcal{P}_{ \stackrel{{\rm f}} { \infty }}\, \check k \, \mathcal{P}_{ \stackrel{{\rm f}} { \infty }}\, ,\qquad \check C_{ \stackrel{{\rm f}} { \infty }} := \mathcal{P}_{ \stackrel{{\rm f}} { \infty }}\, \check C \, \mathcal{P}_{ \stackrel{{\rm f}} { \infty }}\, , \end{equation} obey \begin{equation}\label{dhafinf} \big[\check a^-_{ \stackrel{{\rm f}} { \infty }},\check a^+_{ \stackrel{ {\rm f} } { \infty }} \big]=1-(2\ell+1) \check k_{ \stackrel{{\rm f}} { \infty }} \, , \qquad \{\check k^{\phantom{\pm}}_{ \stackrel{{\rm f}} { \infty }},\check a^\pm_{ 
\stackrel{{\rm f}} { \infty }}\big \}= 0 \,, \end{equation} and the hermiticity conditions \begin{equation} \check a^\mp_{\rm f}=\check C^{\phantom{\pm}}_{\rm f}\, (\check a^\pm_{\rm f})^{\check \dagger} \, \check C^{\phantom{\pm}}_{\rm f}\,,\qquad \check k^{\phantom{\pm}}_{\rm f}=\check C^{\phantom{\pm}}_{\rm f}\, \check k_{\rm f}^{\check \dagger} \, \check C^{\phantom{\pm}}_{\rm f}=\check k_{\rm f}^{\check \dagger} \,,\qquad \check a^\mp_{\infty}= \, (\check a^\pm_{\infty })^{\check \dagger} \,, \qquad \check k^{\phantom{\pm}}_{\infty}=\check k_{\infty}^{\check \dagger} \,. \end{equation} In terms of the bra-ket basis, one has \begin{equation}\label{a+a-f} \begin{array}{c} \check a^+_{{\rm f}}= \sum_{n= 0}^{2\ell} \mathpalette\DHLhksqrt{[n+1]_\nu} |n+1\rangle \langle n | ,\qquad \check a^-_{\rm f}= \sum_{n= 0}^{2\ell} \mathpalette\DHLhksqrt{[n+1]_\nu} |n\rangle \langle n+1| \, ,\\[10pt] \check k_{\rm f}= \sum_{n= 0}^{2\ell}\, (-1)^n \, | n \rangle \langle n| \,,\qquad \check C_{\rm f} =\sum_{n=0}^{2\ell} \left( \cos\Big(\frac{n \pi}{2} \Big) - \sin\Big(\frac{n \pi}{2} \Big) \right) \: |n\rangle \langle n| \,, \end{array} \end{equation} and \begin{equation}\label{a+a-inf} \begin{array}{c} \check a^+_{\infty}= \sum_{n \geqslant 2\ell+1} \mathpalette\DHLhksqrt{[n+1]_\nu} |n+1\rangle \langle n | ,\qquad \check a^-_{\infty}= \sum_{n \geqslant 2\ell+1} \mathpalette\DHLhksqrt{[n+1]_\nu} |n\rangle \langle n+1| ,\\[10pt] \check k_{\infty}= \sum_{n \geqslant 2\ell+1} \, (-1)^n \, | n \rangle \langle n| \,,\qquad \check C_\infty=1\ .
\end{array} \end{equation} Thus, $(\check a^\pm_{\rm f},\check k_{\rm f})$ provide a finite-dimensional non-unitary representation of the Wigner--Heisenberg algebra with deformation parameter $\nu = -2\ell-1$ whose enveloping algebra is isomorphic to $gl(2\ell+1)$, while $(\check a^\pm_{\infty},\check k_{\infty})$ provide an infinite-dimensional unitary representation of the Wigner--Heisenberg algebra with deformation parameter $2\ell+1$, as can be seen from \begin{equation} \check k_{\infty}|2\ell+1\rangle=-|2\ell+1\rangle\ ,\label{flip}\end{equation} which implies that the redefinition $\check k_{\infty} \rightarrow - \check k_{\infty} $ yields a representation of $Aq(2;2\ell+1)$ on ${\cal F}_\infty$. % Thus, at critical $\nu$ one has \begin{equation} \check{Aq}(2;-2\ell-1;0)\cong gl(2\ell+1)\oplus \check{Aq}(2;2\ell+1;0)\ .\label{caseIII}\end{equation} \item[IV.] \underline{$\nu \, < \, -1\,$, $\nu\notin \{-3,-5,\dots\}$}: For these non-critical values, the representation of the deformed oscillators in ${\cal F}$ is irreducible and \emph{non-unitary}, as can be seen from \begin{eqnarray} && {\rm sign}[(n|n)]=\left\{ \begin{array}{ll} \cos\Big(\frac{n \pi}{2} \Big) - \sin\Big(\frac{n \pi}{2} \Big) & 0\:\leqslant \:n\: \leqslant 2 \ell \\[5pt] (-1)^{\ell+1} & n\: \geqslant 2 \ell+1 \,, \end{array} \right. \end{eqnarray} where $\ell$ is the non-negative integer such that $2\ell+1$ is the largest odd integer smaller than $|\nu|$. The conjugation matrix is thus given by \begin{equation}\label{CII} \check C =\sum_{n\geqslant 0} C_{n} \, |n\rangle \langle n| \,, \qquad C_{n}= \: {\rm sign}[(n|n)] \,. \end{equation} \end{enumerate} \subsection{Fractional-spin representations of Lorentz and AdS algebras} The representation of the Lorentz algebra \eqref{standardso12} in terms of the deformed oscillators is reducible and can be projected as in \eqref{anyonso12}.
On top of this reducible structure there may arise another one depending on the value of the deformation parameter $\nu\,$. This will affect the field content in the higher-spin Chern--Simons theory presented in Section \ref{sec:CS}. {}From \eqref{a+a-} it follows that the representation of the Lorentz generators \eqref{J0,Ji} in the Fock space is given by \begin{eqnarray} \check J^0 &=& \sum_{n\geqslant 0} \Big(\frac{n}{2}+\frac{1+ \nu}{4} \Big) | n \rangle \langle n|\,,\label{J0F}\\[5pt] \check J^- &=& \sum_{n\geqslant 0} \mathpalette\DHLhksqrt{[n+2]_\nu[n+1]_\nu} | n\rangle \langle n+2|\,,\\[5pt] \check J^+ &=& \sum_{n\geqslant 0} \mathpalette\DHLhksqrt{[n+2]_\nu[n+1]_\nu} | n+2 \rangle \langle n|\,. \end{eqnarray} The quadratic Casimir operator \eqref{c2sp2} factorizes into \begin{equation}\label{JJ} C_2(sp(2)|{\cal F})\equiv -\check \alpha (\check \alpha -1)\, \quad\hbox{with}\quad \check \alpha =\frac{1}{4}(2+\nu-\check k) \,. \end{equation} Since the Klein operator enters this expression, the Casimir operator does not take a fixed value on ${\cal F}$ \cite{Vasiliev:1989re}. The Fock space thus decomposes into two invariant eigenspaces of $\check k$, \begin{equation} \label{calF+-} \mathcal{F}_\pm=\check\Pi_\pm \mathcal{F}=\bigoplus_{n=0}^\infty \mathbb C\otimes |2n+\tfrac{1}{2}(1\mp1)\rangle\,, \end{equation} where $\check \Pi_\pm$ are the projectors defined in \eqref{projector}. The projected Lorentz generators and spins are given by \begin{equation}\label{Jirrep1} \check J ^\pm_a := \check \Pi_\pm \check J _a\,, \quad \eta^{ab} \check J ^\pm_a \check J^\pm_b =-j^\pm(j^\pm-1)\, \check \Pi_\pm\,, \quad j^+=\frac{1}{4}(1+\nu)\,, \quad j^-=\frac{1}{4}(3+\nu)\,. \end{equation} The spins of the odd and even representations differ by half a unit, \begin{equation} j^--j^+=\frac12\ ,\end{equation} % thus forming superpartners. Hence, in the non-critical case, the Fock space carries two irreducible representations of the Lorentz algebra.
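The lowest-weight data in \eqref{Jirrep1} can be read off directly from the Fock-space realization above (a quick check of the quoted spins):

```latex
% Lowest-weight states of the two projected sectors, from \eqref{J0F}:
\begin{equation}
\check J^-|0\rangle=0=\check J^-|1\rangle\ ,\qquad
\check J^0\,|0\rangle=\tfrac{1+\nu}{4}\,|0\rangle=j^+|0\rangle\ ,\qquad
\check J^0\,|1\rangle=\tfrac{3+\nu}{4}\,|1\rangle=j^-|1\rangle\ ,
\end{equation}
```

so that ${\cal F}_+$ and ${\cal F}_-$ are lowest-weight modules built on $|0\rangle$ and $|1\rangle$, respectively.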
In the critical cases, there is a further sub-decomposition into a finite-dimensional and an infinite-dimensional irrep due to the additional projectors \eqref{nullproj}, \emph{viz.} \begin{equation}\label{Jirrep2} \check J ^{(f)\pm}_a := \mathcal{P}_{\rm f} \check \Pi_\pm \check J _a \,, \qquad \check J ^{(\infty)\pm}_a := \mathcal{P}_{\infty} \check \Pi_\pm \check J_a\,, \end{equation} with the spin in each irreducible sector given in the Table \ref{Tirrep}. This additional reducibility is reflected in the symmetry of the Lorentz Casimir operator under $j\rightarrow 1- j$, yielding different representations of the Lorentz algebra for which \begin{equation} j^+-j^-=\frac12\ .\end{equation} % Thus, for $\nu=-3,-5,-7,...$ one has two finite non-unitary and two infinite dimensional unitary representations of the Lorentz algebra. In the hyper-critical case $\nu=-1$, the finite dimensional subspace contains only one state, the ground state, which is invariant under the action of the full $Aq(2;-1)$ algebra represented as $\check{Aq}(2;-1;0)$. Indeed, the representation $\check{Aq}(2;-1;0)$ of the algebra $Aq(2;-1)$ is unitary since the conjugation matrix is the identity. \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|} \hline $\nu$ & Irreducible subspaces & Lorentz spin \\\hline\hline &&\\[-10pt] non-critical \\ $\nu > -1$ & $\begin{array}{c} |2n \rangle ,\quad n=0,1,2,...\\[4pt] |2n+1\rangle ,\quad n=0,1,2,... \end{array}$ & $\begin{array}{c} j^+=(1+\nu)/4 \\[4pt] j^-=(3+\nu)/4 \end{array}$\\ &&\\[-10pt] \hline &&\\[-10pt] \begin{tabular}{l} critical \\ $\nu=-(2\ell+1),$\\ $\ell=1,2,...$ \end{tabular} & \begin{tabular}{l} ${\cal F}_{\rm f}$ $\left\{ \begin{array}{c} |2n\rangle ,\quad n=0,1,2,...,\ell\qquad \quad \: \\[4pt] |2n+1\rangle ,\quad n=0,1,2,...,\ell-1 \end{array} \right.$\\[4pt] ${\cal F}_{\infty}$ $\left\{ \begin{array}{c} |2n\rangle ,\quad n=\ell+1,\ell+2,...\\[4pt] |2n+1\rangle,\quad n=\ell,\ell+1,... 
\end{array} \right.$ \end{tabular} & $\begin{array}{c} j^{(f)+}=-\ell/2 \\[4pt] j^{(f)-}=1/2-\ell/2 \\[4pt] j^{(\infty)+}=1+\ell/2 \\[4pt] j^{(\infty)-}=1/2+\ell/2 \end{array}$\\ &&\\[-10pt] \hline &&\\[-10pt] \begin{tabular}{l} hyper critical \\ $\nu=-1$ \end{tabular} & \begin{tabular}{l} ${\cal F}_{\rm f}$ $\left\{ |0\rangle \right. ,$ \\[4pt] ${\cal F}_{\infty}$ $\left\{ \begin{array}{c} |2n\rangle ,\quad n=1,2,...\\[4pt] |2n+1\rangle,\quad n=0,1,... \end{array} \right.$ \end{tabular} & $\begin{array}{c} j^{(f)+}=0 \\[4pt] j^{(\infty)+}=1 \\[4pt] j^{(\infty)-}=1/2 \end{array}$ \\ &&\\ \hline \end{tabular} \end{center} \caption{Representing $sl(2)$ in terms of Wigner-deformed Heisenberg oscillators in a standard Fock space yields reducible representations of $sl(2)$. The first column contains the values of the deformation parameter $\nu$ in the Wigner-deformed Heisenberg algebra. The second column contains the corresponding $sl(2)$-irreducible subspaces of the Fock space. The third column contains the corresponding values of the spins $j$, \emph{i.e.} the $J^0$ eigenvalue of the lowest weight state in each $sl(2)$-irrep. } \label{Tirrep} \end{table} \vspace{5mm} The classification of the unitary irreducible representations of $SL(2,\mathbb{R})$ was first done by Bargmann \cite{Bargmann:1946me}. Comparing with the unitary irreducible representations of $sl(2)$ in Fock space by Barut and Fronsdal \cite{Barut:1965} and adapting the notation to this paper, we see that the unitary irreducible representations appearing in the non-critical case $\nu > -1$ above furnish the discrete series ${\cal D}^+(j^\pm)\,$. For $\nu <-1$, but non-critical, these representations are still of a discrete type, but non-unitary. In reference \cite{oai:arXiv.org:1001.0274} it was shown that the latter were essential for the construction of anyon wave equations possessing standard boson/fermion limits. 
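As an illustration of Table \ref{Tirrep} (a worked example), consider the critical value $\nu=-3$, i.e. $\ell=1$. By \eqref{J0F} the $\check J^0$ eigenvalue of $|m\rangle$ is $\tfrac{m}{2}+\tfrac{1+\nu}{4}$, and the lowest-weight states of the four irreducible sectors are $|0\rangle$, $|1\rangle$ (in ${\cal F}_{\rm f}$) and $|4\rangle$, $|3\rangle$ (in ${\cal F}_{\infty}$), giving

```latex
% Spins of the four irreducible sectors at \nu=-3 (\ell=1):
\begin{equation}
j^{(f)+}=-\tfrac12\ ,\qquad
j^{(f)-}=0\ ,\qquad
j^{(\infty)+}=\tfrac32\ ,\qquad
j^{(\infty)-}=1\ ,
\end{equation}
```

in agreement with the table, and consistent with the map $j\rightarrow 1-j$ relating the finite-dimensional and infinite-dimensional sectors.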
\subsection{Irreducible representations of $sl(2)$ in two-sided Fock space} A generic operator $\check f\in \check{Aq}(2;\nu;\tau)$ can thus be represented in ket-bra form as \begin{equation} \label{op} \check f= \sum_{n,m\geq 0} f^{mn} | m \rangle \langle n| \,, \end{equation} where the matrix $\{f^{mn}\}$ becomes block diagonal for critical $\nu$ if $\tau=0$. The operators $| m \rangle \langle n| $ are products of harmonic oscillator states with $\nu$-dependent spin, \begin{equation} \check J^0 |m\rangle = s_m |m\rangle\, ,\qquad \langle m | \check J^0 = \langle m | s_m \, , \qquad s_m = \tfrac{m}2+\tfrac{1+ \nu}{4} \, ,\end{equation} transforming under a $2 \pi$ rotation by an anyonic statistical phase, \begin{equation} \exp(i 2\pi \check J^0 ) | m\rangle = e^{i\pi(m+\tfrac{1+\nu}{2})} \; | m\rangle \,. \end{equation} The tensor product $| m \rangle \langle n |$, which transforms in the adjoint representation of the rotation group generated by $\check J^0$, \begin{equation}\label{rotketbra} [\, \check J^0, | m \rangle \langle n|\,]= (s_m-s_n)\, | m \rangle \langle n| \,, \end{equation} transforms by a standard phase $+ 1$ or $-1$, \emph{viz.} \begin{equation} \exp(i2\pi \, \check J^0)| m \rangle \langle n| \exp(-i2\pi \, \check J^0) =(-1)^{m-n} | m \rangle \langle n| \,, \end{equation} hence corresponding to bosonic or fermionic statistics. For the alternative choice of the Lorentz generators \eqref{Jirrep1}, the ket-bra products transform as follows: \begin{equation} [\, \check J ^{\pm}_0, | m \rangle \langle n|\,]= \tilde{s}^\pm_{m,n} \, | m \rangle \langle n| \,, \qquad \tilde{s}^\pm_{m,n}:= s_m \frac{(1\pm(-1)^m)}2 - s_n \frac{(1\pm(-1)^n)}2 \,. \end{equation} It follows that if $n$ and $m$ have different parity then $| m \rangle \langle n |$ will transform under either the left or right action of the rotation group, and hence their spin will have a $\nu$-dependent fractional component. 
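Concretely, for $m$ even and $n$ odd one has $\tilde{s}^+_{m,n}=s_m$, so under a $2\pi$ rotation generated by the projected generator the ket-bra operator picks up a fractional phase:

```latex
% Anyonic phase of a mixed-parity ket-bra operator, using e^{i\pi m}=1 for m even:
\begin{equation}
e^{i2\pi\,\check J^{+}_0}\;|m\rangle\langle n|\;e^{-i2\pi\,\check J^{+}_0}
=e^{i2\pi s_m}\,|m\rangle\langle n|
=e^{i\pi\frac{1+\nu}{2}}\,|m\rangle\langle n|\ ,
\qquad m\ \mbox{even}\,,\quad n\ \mbox{odd}\ ,
\end{equation}
```

which is precisely the type of statistical phase that reappears for the intertwining fields in Section \ref{sec:CS}.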
This observation indicates that, in order to include particles with fractional spin into the higher-spin connection, one needs to identify the Lorentz connection, which gauges the local rotation symmetry, with the generators $J ^{\pm}_a$. The $AdS$ algebra $so(2,2)\cong sl(2)\oplus sl(2) $ is obtained by doubling the algebra as in \eqref{boosts}. \subsection{Polynomial versus Fock-space bases}\label{Sec:bases} In order to construct the fractional-spin algebras, we start from the anyon representations\footnote{To the best of our understanding, the complete classification of all possible representations of $Aq(2;\nu)$ is an open problem. Indeed, the classification of infinite-dimensional irreducible representations of finite-dimensional Lie algebras is an active field in pure mathematics \cite{KnappOverview}. Two key differences between finite- and infinite-dimensional irreps are that the former are completely decomposable and can be labelled by the Casimir operators, while the latter, which can exhibit different branches of indecomposable structures, cannot be labelled faithfully by Casimir operators alone. Additional ``Langlands parameters'' \cite{KnappOverview} are thus required to distinguish the infinite-dimensional irreducible representations, such as the parameter $\tau$ introduced in Eqs. \eqref{a+a-F1} and \eqref{a+a-F2}.} $\check{Aq}(2;\nu;\tau)$ of $Aq(2;\nu)$ in the Fock space ${\cal F}$, which have spins $\tfrac14(1+\nu)$ and $\tfrac34(1+\nu)$. As discussed in subsection \ref{Aqsec}, the actions of $Aq(2;\nu)$ on itself from the left or from the right provide faithful representations of the algebra.
Moreover, as we have seen in subsection \ref{subsec:RepresentationAq}, the representation $\check{Aq}(2;\nu;\tau)$ of $Aq(2;\nu)$ on ${\cal F}$ is isomorphic to ${\rm End}({\cal F})$ for generic values of $\nu$; for critical values, the algebra $\check{Aq}(2;\nu;\tau)$ becomes a subalgebra of ${\rm End}({\cal F})$ with an (in)decomposable structure determined by $\tau$. The algebras $\check{Aq}(2;\nu;\tau)$ are isomorphic to subalgebras $\rho_{\tiny {\cal F}}(\check{Aq}(2;\nu;\tau))$ inside the non-polynomial completion \eqref{fqk} $\overline{Aq}(2;\nu)$ of $Aq(2;\nu)$ by means of the deformed-oscillator realization of the vacuum-to-vacuum projector, in accordance with \eqref{rhoF}. The Fock space ${\cal F}$, viewed as an $sl(2)$ module, decomposes into two fractional-spin representations in the discrete series \cite{Bargmann:1946me}. In these representations the spin operator $J^0$ acts diagonally with real-valued eigenvalues. The Fock-space module can thus be identified with $\rho_{\tiny {\cal F}}(\check{Aq}(2;\nu;\tau))$ viewed as either a left module or a right module. On the other hand, the separate left and right actions of $Aq(2;\nu)$ on itself also give rise to $sl(2)$ modules, but of a different type, since the only generators of $Aq(2;\nu)$ that act diagonally on the algebra from one side are the identity $1$ and the Kleinian $k$. To illustrate the inequivalence between $Aq(2;\nu)$ and $\rho_{\tiny {\cal F}}(\check{Aq}(2;\nu;0))$ for critical values $\nu=-2\ell-1$, $\ell=0,1,2,\dots$, one may consider the $++$-projection defined in \eqref{projector}. For this projection, one has \begin{equation} \label{aq1}\check{Aq}(2;-2\ell-1;0)_{++}\cong gl(\ell+1)\oplus \check{Aq}(2;2\ell+1;0)_{++}\ , \end{equation} in agreement with the result obtained in \eqref{caseIII} (as a consequence of the existence of new projector operators \eqref{nullproj} which split the Fock space into two sectors of finite and infinite dimension).
On the other hand, the action of $Aq(2;-2\ell-1)_{++}$ on itself exhibits an indecomposable structure of the form \cite{Vasiliev:1989re} \begin{equation} \label{aq2} Aq(2;-2\ell-1)_{++} = \frac{Aq(2;-2\ell-1)_{++}} {Aq'(2;-2\ell-1)_{++}} \supset\!\!\!\!\!\!+ Aq'(2;-2\ell-1)_{++}\ , \end{equation} where the ideal $Aq'(2;-2\ell-1)_{++}$ is spanned by $\Pi_+ q_{(\alpha_1}\cdots q_{\alpha_{2n})}$ with $n=\ell+1,\ell+2,\dots$ and the quotient \begin{equation} \frac{Aq(2;-2\ell-1)_{++}} {Aq'(2;-2\ell-1)_{++}}\cong gl(\ell+1) \end{equation} is spanned by $\Pi_+ q_{(\alpha_1}\cdots q_{\alpha_{2n})}$ with $n=0,1,\dots,\ell$ (modulo elements in $Aq'(2;-2\ell-1)_{++}\,$). Thus, the indecomposable structures of $Aq(2;-2\ell-1)_{++}$ and $\check{Aq}(2;-2\ell-1;0)_{++}$ are of different types, with $Aq(2;-2\ell-1)_{++}$ containing a non-trivial ideal and $\check{Aq}(2;-2\ell-1;0)_{++}$ having a block-diagonal structure. By choosing other values for $\tau$ it is possible to alter the indecomposable structure of $\check{Aq}(2;\nu;\tau)_{++}$ in critical limits. In particular, for $\tau=-\frac12$ it follows that $\check{Aq}(2;-2\ell-1;-\frac12)_{++}$ has an indecomposable structure of the same type as $Aq(2;-2\ell-1)_{++}$, in the sense that both algebras have infinite-dimensional ideals and coset algebras given by $gl(\ell+1)\,$. Note, however, that $\check{Aq}(2;-2\ell-1;-\frac12)_{++}$ and $Aq(2;-2\ell-1)_{++}$ are not isomorphic as $sl(2)$ representations since the spin operator is diagonal in the former space but not in the latter. In fact, the above conclusions do not change considerably if one removes the $++$ projection.
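The identification of the coset with $gl(\ell+1)$ can also be checked by counting components: since the spinor index $\alpha$ takes two values, the symmetrized monomial $q_{(\alpha_1}\cdots q_{\alpha_{2n})}$ has $2n+1$ independent components, so the coset is spanned by

```latex
% Dimension count for the coset algebra:
\begin{equation}
\sum_{n=0}^{\ell}\,(2n+1)=(\ell+1)^2=\dim gl(\ell+1)
\end{equation}
```

elements, matching the dimension of the endomorphism algebra of the $(\ell+1)$-dimensional space of even states $|0\rangle,|2\rangle,\dots,|2\ell\rangle$.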
In a generalization of Feigin's notation \cite{Feigin88}, we define \begin{equation} gl(\lambda;J;\tau):=\left.\left.\frac{{\rm Env}(sl(2))}{I(\lambda)}\right\downarrow_{J}\right|_{\tau}\ ,\end{equation} where $I(\lambda)$ is the ideal generated by $C_2(sl(2)) + \lambda(\lambda-1)$; $(\cdot)\downarrow_J$ indicates that the elements in $(\cdot)$ are given in a basis where the generator $J\in sl(2)$ acts diagonally from both sides; and $\tau$ parameterizes the indecomposable structure. In particular, Feigin's original construction was performed in the basis of monomials in the generators of $sl(2)$ in which no generator $J$ can be diagonal; we denote this particular basis by $gl(\lambda;-;-)$. With this notation, it follows that \begin{equation} Aq(2;\nu)_{\sigma\sigma}\cong gl(\tfrac{2+\nu-\sigma}{4}\,;-;-)\ , \end{equation} \begin{equation} \check{Aq}(2;\nu;\tau)_{\sigma\sigma}\cong gl(\tfrac{2+\nu-\sigma}{4}\, ; \check J^0;\tau)\ , \end{equation} which are thus infinite-dimensional algebras for generic $\nu$ with critical limits given by semi-direct sums of a finite-dimensional and an infinite-dimensional sub-algebra with ideal structure controlled by $\tau$. Using this notation, one can write Eqs. \eqref{aq1} and \eqref{aq2} as \begin{equation} \label{gl2} Aq(2;-2\ell-1)_{++}\cong gl(\ell+1;-;-\tfrac{1}{2})\cong gl(\ell+1)\supset\!\!\!\!\!\!+ gl(-\ell;-;-\tfrac{1}{2})\ , \end{equation} \begin{equation} \label{gl1}\check{Aq}(2;-2\ell-1;0)_{++}\cong gl(\ell+1;J_0;0)\cong gl(\ell+1)\oplus gl(-\ell;J_0;0) \ .\end{equation} \subsection{Real forms of $Aq(2;\nu)$ and related Lie (super) algebras} There are two ways to impose reality conditions on the elements of the derived Lie algebras of $Aq(2;\nu)$ using either hermitian conjugations or complex conjugations, also known as star-maps, giving rise to infinite-dimensional analogs of the real forms $gl(n;\mathbb{R})$ and $u(p,q)$ of $gl(n;\mathbb C)$, respectively. 
Various such conjugations can be obtained by combining inner automorphisms $\varphi={\rm Ad}_S$ of $Aq(2;\nu)$ with the basic hermitian conjugation operation $\dagger$ defined in \eqref{conjn} and the linear anti-automorphism $\tau$ defined by \begin{equation}\label{tau1} \tau(f_1{} f_2):=\tau(f_2){}\tau(f_1)\ ,\quad \tau( \beta q_\alpha):=i\beta q_\alpha\ ,\quad \tau(\beta|0\rangle):=\beta\langle 0|\ , \end{equation} where $\beta\in\mathbb{C}$ (and we note that $\tau( a^\pm)=a^\mp$). As for the associative algebra itself, its real forms are defined using star-maps; for instance, the real form \begin{equation} Aq_{\varphi}(2;\nu;\mathbb{R}):=\left\{ f\in Aq(2;\nu): \varphi f^\ast =f\right\}\ ,\qquad f^\ast:=\tau(f^\dagger)\ .\end{equation} Assuming that $(f^\ast)^\ast=f$ for all $f$ it follows that $S S^\ast=1$. Assuming furthermore that $S=\widetilde S^2$ and that $\widetilde S \widetilde S^\ast=1$ it follows that if $f^\ast=\varphi(f)$ then $({\rm Ad}_{\widetilde S}(f))^\ast={\rm Ad}_{\widetilde S}(f)$, that is, \begin{equation} Aq_{\varphi}(2;\nu;\mathbb{R})\cong Aq_{\rm Id}(2;\nu;\mathbb{R}):=\left\{ f\in Aq(2;\nu): f^\ast=f\right\}\ .\end{equation} Starting from $Aq_{\rm Id}(2;\nu;\mathbb{R})$, various real forms of $slq(2;\nu;\mathbb{R})$ and $sq(2;\nu;\mathbb{R})$ can then be reached by generalizations of the Weyl unitarity trick as follows: \begin{eqnarray} lq(2;\nu;\mathbb{R})&&:= \left\{ h\in lq(2;\nu)\right\}\cap Aq_{\rm Id}(2;\nu;\mathbb{R})\ ,\\[5pt] uq(2;\nu)&&:= \left\{ h=f+ig\,,\ f,g\in lq(2;\nu)|\ \tau(f) =-f\,,\ \tau(g)=g\right\}\cap Aq_{\rm Id}(2;\nu;\mathbb{R})\qquad\\&&=\{h\in lq(2;\nu)|\ h^\dagger=-h\}\ ,\\[5pt] hosl(2|2;\nu)&&:= \left\{ h\in q(2;\nu)\right\}\cap Aq_{\rm Id}(2;\nu;\mathbb{R})\ ,\\[5pt] hosp(2|2;\nu)&& := \left\{h\in q(2;\nu)|h^\dagger = - i^{{\rm deg}(h)} h\,\right\}\ .\end{eqnarray} In the two cases projected out by hermitian conjugation, their Fock-space representations take the form \begin{equation} \check uq(2;\nu;\tau):=\left\{\check h\in \check
lq(2;\nu;\tau): \check h^{\check \dagger} =-\check C \check h \check C\right\} \, , \end{equation} \begin{equation} \check hosp(2|2;\nu;\tau):=\left\{ \check h\in \check q(2;\nu;\tau): \check h^{\check \dagger} =-i^{{\rm deg}(\check h)} \check C \check h \check C \,\right\} . \end{equation} Letting $(p,q)$ refer to the signature of $\check C$, it follows that $\check uq(2;\nu;\tau)$ is equivalent to a representation of $u(p,q;J_0;\tau)$ while $\check hosp(2|2;\nu;\tau)$ is equivalent to a representation of the superalgebra $u(p|q;J_0;\tau)$; the list of isomorphisms is given in Table \ref{Trealform}. \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|} \hline $\nu$ & $\check uq(2;\nu;\tau)\cong u(C)$ & $\check hosp(2|2;\nu;\tau)\cong u(C_+|C_-) $ \\\hline\hline & & \\[-10pt] $\nu \geq -1$ & $u(\infty_++\infty_-)$ & $u(\infty_+|\infty_-)$\\ \hline & & \\[-10pt] \begin{tabular}{l} $\nu=-(2\ell+1),$\\ $\ell=1,2,...$ \end{tabular} & \begin{tabular}{l} $u(\ell +\frac{ 1+(-1)^\ell }{ 2 },\ell +\frac{ 1-(-1)^\ell }{ 2 } ) $\\$\oplus ~ u(\infty'_++\infty'_-)$\end{tabular} & \begin{tabular}{l} $u(\ell +\frac{ 1+(-1)^\ell }{ 2 } | \ell +\frac{ 1-(-1)^\ell }{ 2 } )$\\$ \oplus~ u(\infty'_+|\infty'_-)$ \end{tabular} \\ \hline & &\\[-10pt] \begin{tabular}{l} $-2\ell+1 > \nu > -2\ell-1,$\\ $\ell=1,2,...$ \end{tabular} & $\begin{array}{c} u (\ell , \infty) ,\: \ell=even\\[4pt] u (\infty,\ell +1),\: \ell=odd \end{array}$ & $\begin{array}{c} u (\ell | \infty) ,\: \ell=even\\[4pt] u (\infty|\ell +1),\: \ell=odd \end{array}$ \\ \hline & & \\[-10pt] $\nu = -\infty $ & $u( \infty , \infty)$ & $u( \infty | \infty)$ \\ \hline \end{tabular} \end{center} \caption{This table displays the $\nu$-dependence of the real forms of the Lie (super)algebras $\check uq(2;\nu;\tau)\cong u(C)$ and $\check hosp(2|2;\nu;\tau)\cong u(C_+|C_-) $. In the above, $u(\eta):=u(p,q)$ if $\eta$ is a diagonal matrix with $p$ positive and $q$ negative entries, and \emph{idem} $u(\eta_1|\eta_2)$.
In the first row, $\infty_\pm$ refer to the dimensions of ${\cal F}_\pm$, and in the second row, $\infty'_\pm$ refer to the dimensions of ${\cal P}_{\infty}{\cal F}_\pm$. The real forms in the graded case (second column) are in agreement with \cite{Bergshoeff:1989ns}. }\label{Trealform} \end{table} \section{Chern--Simons formulation} \label{sec:CS} \subsection{Blencowe--Vasiliev higher-spin gravity sector} The Blencowe--Vasiliev higher-spin gravity sector of the fractional-spin gravity model consists of an $lq(2;\nu) \oplus lq(2;\nu)$-valued connection $W$. Its Fock-space representation reads \begin{equation}\label{W} \check W=\frac{1}{4i}\sum_{s=0,1} \Gamma^s \sum_{n\geqslant 0} \sum_{t=0,1} W_{s,t}^{\,\alpha_1 \cdots \alpha_n} \, \check k^t \, \check q_{(\alpha_1 } \cdots \check q_{\alpha_n)}\equiv \frac{1}{4i} \sum_{s=0,1} \Gamma^s \sum_{p,q\geqslant 0} W_s^{p,q} \, |p\rangle \langle q |\,, \end{equation} where the gauge-field components in the Fock-space basis are given in terms of those in the multi-spinorial basis via \begin{equation}\label{WmathC} W_s^{p,q} = \sum_{n \geqslant 0}\sum_{t=0,1} W_{s,t}^{\alpha_1 \cdots \alpha_n}(x)\, \mathcal{Q}_{\,\alpha_1 \cdots \alpha_n}{}^{t, p, q }\,, \end{equation} using the Fock-space representation matrix of the higher-spin algebra defined by \begin{equation}\label{Ycoef} \mathcal{Q}_{\alpha_1 \cdots \alpha_n}{}^{t,p,q }:= \langle q | \check k^t \, \check q_{(\alpha_1}\cdots \check q_{\alpha_n)} |p \rangle \,, \end{equation} % which one may think of as a generalized Dirac matrix.
As discussed in the previous section, the connection can be subjected to reality conditions using either complex or hermitian conjugations; for definiteness let us choose a reality condition of the latter type, namely \begin{equation}\label{Aconj2} \check W^{\check \dagger} = - \check C \check W \check C \,, \end{equation} where $\check C$ is the charge conjugation matrix in \eqref{Yconj} chosen such that \eqref{Creal} and \eqref{Cunimodular} hold. As a result, the multi-spinorial component fields obey \begin{equation}\label{Aconj1} (W_{s,t}^{\,\alpha_1 \cdots \alpha_n})^*=(-1)^{nt} \, W_{s,t}^{\,\alpha_1 \cdots \alpha_n}\, . \end{equation} As a consequence of \eqref{C1}, the representation matrix \eqref{Ycoef} obeys \begin{equation}\label{ycoefconj} \mathcal{Q}_{\, \alpha_1 \cdots \alpha_n}{}^{t, q , l }= (-1)^{nt} C_l \, \Big(\mathcal{Q}_{\, \alpha_1 \cdots \alpha_n}{}^{ t, l , q } \Big)^* \, C_q \,. \end{equation} Thus, the master gauge field $W$ obeying \eqref{Aconj2} is represented by a real matrix in the Fock-space basis, \emph{viz.} \begin{equation}\label{fockreal} W_s^{p,q}= (W_s^{p,q})^* \, . \end{equation} \subsection{Internal color gauge fields} The fractional-spin gravity also contains an internal color gauge field $U$ given in the bra-ket basis by \begin{equation}\label{U} \check U=\frac{1}{4i}\sum_{s=0,1} \Gamma^s \,\sum_{p,q\geqslant 0} U^{p}_{s,q} \check T_{p}^{q}\ ,\quad \check T_{p}^{q}:=|q\rangle\langle p|\ . \end{equation} It is taken to obey the following reality condition: \begin{equation} \check U^{\check \dagger} = - \check U\ ,\end{equation} such that $\check U$ formally becomes an element of $u(\infty)\oplus u(\infty)$, with $u(\infty)$ generated by $\check T_{p}^{q}$, \begin{equation} [\check T_{n}^m ,\check T_{q}^{l} ]=i ( \delta^m_q \check T_{n}^{l}- \delta^l_n \check T^m_q ) \,,\quad (\check T^m_n)^{\check \dagger}=\check T^n_m\ .
\end{equation} With these conventions, it follows that the internal component fields form a hermitian matrix, \begin{equation} (U_{s,p}^q)^\ast= U^p_{s,q}\,. \end{equation} \subsection{Hybrid theory with fractional-spin fields}\label{Hybthe} The higher-spin gravity connection $\check W$ given in \eqref{W} and the internal connection $\check U$ given in \eqref{U} can be coupled non-trivially via two intertwining one-forms, that we shall denote by $(\check {\overline{\psi}} , \check {\psi})$, whose gauge symmetries exchange the higher-spin gravity and internal gauge fields. In what follows, we present a simplified model exhibiting this feature in which the gauge fields are further projected using $\Pi_\pm$ as follows: \begin{eqnarray} && \check W_{++}=\check \Pi_+ \check W\, \check \Pi_+=\frac{1}{4i} \sum_{s=0,1}\Gamma^s\sum_{p,q\geqslant 0} W_s^{2p,2q} \, |2p\rangle \langle 2q |\,, \\[5pt] && \check U_{--}= \check \Pi_- \check U \,\check \Pi_-=\frac{1}{4i}\sum_{s=0,1}\Gamma^s\sum_{p,q\geqslant 0} U^{2q+1}_{s,2p+1 } |2p+1\rangle \langle 2q+1 |\,,\\[5pt] && \check \psi_{+-} = \check \Pi_+ \check \psi\, \check \Pi_-= \frac{1}{4i} \sum_{s=0,1}\Gamma^s\sum_{p,q\geqslant 0} \psi^{2p, 2q+1 }_s \, |2p\rangle \langle 2q+1 |\, ,\\[5pt] && \check{\overline{\psi}}_{-+} = \check \Pi_- \check {\bar{\psi}}\, \check \Pi_+=\frac{1}{4i} \sum_{s=0,1}\Gamma^s\sum_{p,q\geqslant 0} \overline{\psi}^{2p}_{s,2q+1} \, |2q+1\rangle \langle 2p| \, . 
\end{eqnarray} Arranging various master fields into a single two-by-two matrix \begin{equation}\label{} \check{\mathbb{A}}=\left[ \begin{array}{cc} \check W_{++} & \check \psi _{+-}\\ \check {\overline{\psi}}_{-+} & \check U_{--} \end{array} \right] \, , \end{equation} the equations of motion can be declared to be of the standard form: \begin{equation}\label{realF0} \check{\mathbb{F}}={\rm d} \check{\mathbb{A}} + \check{\mathbb{A}} \wedge \check{\mathbb{A}}=0 \,, \end{equation} that is, \begin{eqnarray} && {\rm d} \check W_{++}+ \check W_{++}\wedge \check W_{++}+\check \psi_{+-} \wedge \check {\overline{\psi}}{}_{-+}=0 \, , \\[5pt] && {\rm d} \check U_{--}+\check U_{--}\wedge \check U_{--}+ \check{\overline{\psi}}{}_{-+} \wedge \check \psi_{+-} =0 \, , \\[5pt] && {\rm d} \check \psi_{+-}+ \check W_{++}\wedge \check \psi_{+-} + \check \psi_{+-} \wedge \check U_{--} =0 \, , \\[5pt] && {\rm d} \check{\overline{\psi}}{}_{-+} + \check{\overline{\psi}}{}_{-+} \wedge \check W_{++} + \check U_{--} \wedge \check{\overline{\psi}}{}_{-+}=0 \,, \end{eqnarray} which form a non-trivial Cartan integrable system by virtue of the assignments that we have made so far. The equations of motion are thus symmetric under the gauge transformations \begin{equation} \check{\mathbb{A}} \quad \rightarrow \quad \check{\mathbb{A}}^{\check{\mathbb{G}}} = \check{\mathbb{G}}^{-1} ({\rm d} + \check{\mathbb{A}} )\, \check{\mathbb{G}} \,, \qquad \check{\mathbb{G}} = \exp (i\check{\mathbb{X}}) \, , \quad \check{\mathbb{X}} := \left[\begin{array}{cc} \check x_{++}&\check x_{+-}\\\check x_{-+}& \check x_{--} \end{array}\right]\ . 
\end{equation} Thus, $\check W_{++}$ is the connection belonging to the adjoint representation of the non-minimal bosonic higher-spin subalgebra $lq(2;\nu)_{++}\, \oplus \, lq(2;\nu)_{++}$ of $lq (2;\nu) \, \oplus \, lq(2;\nu)$; it consists of all integer spins and has the Fock-space representation \begin{eqnarray} \check W_{++}= \frac{1}{4i} \sum_{s=0,1} \Gamma^s \sum_{n\geqslant 0}\sum_{p,q\geqslant 0} W_{s,0}^{\alpha_1 \cdots \alpha_{2n}} \, \mathcal{Q}_{\alpha_1 \cdots \alpha_{2n}}{}^{0, 2p,2q }\, |2p\rangle \langle 2q | \,. \end{eqnarray} The internal gauge field $\check U_{--}$ belongs to the adjoint representation of $u_{--}(\infty)\oplus u_{--}(\infty)$ where $u_{--}(\infty):=\check\Pi_- u(\infty)\check \Pi_-$. The intertwining fields $\check \psi_{+-}$ and $\check{\overline{\psi}}{}_{-+} $ belong to bi-fundamental representations transforming on one side under the higher-spin algebra and on the other side under the internal color gauge algebra. Thus, the master connection $\mathbb A$ belongs to a hybrid higher-spin algebra, which we refer to as a fractional-spin algebra, consisting of a sector of ordinary higher-spin generators related to space-time symmetries glued to an internal sector of compact generators via a set of intertwining generators belonging to a bi-module. The action of the global rotation $\check{\mathbb{R}}{}_{2\pi}$ by $2\pi$ generated by $\check x_{++}=2\pi \, \check \Pi_+ \check J^0$ on the fields is given by \begin{eqnarray} (\check{\mathbb{R}}{}_{2\pi})^{-1} \, \check{\mathbb{A}} \check{\mathbb{R}}{}_{2\pi} = \left[ \begin{array}{cc} \check W_{++} & e^{-i \pi \frac{1+\nu}{2}} \check \psi _{+-}\\ e^{i \pi \frac{1+\nu}{2}} \check{\overline{\psi}}{}_{-+} & \check U_{--} \end{array} \right] \,, \end{eqnarray} from which it follows that in the semi-classical theory $(\check{\overline{\psi}}, \check \psi)$ have fractional statistical phases $e^{\mp i \pi \frac{1+\nu}{2}}$, whereas $\check W_{++} $ and $\check U_{--}$ have bosonic ones. 
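The phase assignment above can be checked with a few lines of code; the function name `statistical_phase` is chosen here purely for illustration. For generic $\nu$ the phase $e^{\mp i\pi\frac{1+\nu}{2}}$ is genuinely fractional, while at the critical values $\nu=-2\ell-1$ it collapses to $(-1)^\ell$:

```python
import cmath
import math

def statistical_phase(nu, sign=+1):
    """Semi-classical statistical phase exp(± i π (1+ν)/2) of the
    intertwiners (ψ̄, ψ) under a 2π rotation generated by J^0."""
    return cmath.exp(sign * 1j * math.pi * (1 + nu) / 2)

# Generic ν: the phase is genuinely fractional (neither +1 nor -1).
p = statistical_phase(0.3)
assert min(abs(p - 1), abs(p + 1)) > 0.1

# Critical ν = -2ℓ - 1: the phase collapses to (-1)^ℓ, so spin and
# Grassmann statistics can become correlated.
for ell in range(6):
    p = statistical_phase(-2 * ell - 1)
    assert abs(p - (-1) ** ell) < 1e-12
```

This is the pattern $\{1,-1,1,-1,\dots\}$ of the critical phases discussed next.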
Thus, the spins and the Grassmann statistics of $(\check{\overline{\psi}}, \check \psi)$ are not correlated in the semi-classical theory for generic values of $\nu$. Observe that for critical values, $\nu=-2\ell - 1$, the semi-classical statistical phases take the values \begin{equation}\label{criticalph} \left\{e^{\mp i \pi \, \ell }: \ell=0,1,2,\dots\right\}= \{ 1,-1,1,-1,\dots \} \, , \end{equation} such that the spins and the Grassmann statistics of $(\check{\overline{\psi}}, \check \psi)$ are correlated in the semi-classical limit for even and odd $\ell$, respectively, in the case of Grassmann even and Grassmann odd fields, in agreement with the discussion around \eqref{statisticsq}. The master connection obeys the following reality condition: \begin{equation}\label{realityhyb} \check{\mathbb{A}}^{\check\dagger} = - \check{\mathbf{C}} \, \check{\mathbb{A}} \, \check{\mathbf{C}} \, , \qquad \check{\mathbf{C}} := \left( \begin{array}{cc} \check C_{++}& 0 \\ 0 & \check {\Pi}_{-} \end{array}\right) \, , \end{equation} where $\check C_{++}=\check \Pi_+ \check C \check \Pi_+$, whose Fock-space representation is given by \begin{equation} \check C_{++}:=\check \Pi_+ \, \check C\, \check \Pi_+=\sum_{q \geqslant 0} C_{2q} \, |2q\rangle \langle 2q | \, , \end{equation} and $\check{\Pi}_-=\sum_{q \geq 0} |2q+1\rangle \langle 2q+1 |$. As discussed in Section \ref{Sec:bases}, the key issue is the choice of bases used to expand the various gauge fields. 
Strictly speaking, the fractional-spin gravity model for which we have an off-shell formulation is based on a master field \begin{equation}\label{} \mathbb{A} \in \left[ \begin{array}{cc} lq_{++}(2;\nu; \mathbb{R}) & \rho_{\tiny {\cal F}}\left({\rm Bi}(lq_{++}(2;\nu; \mathbb{R})|u_{--}(\infty))\right)\\[5pt] \rho_{\tiny {\cal F}}\left(\overline{{\rm Bi}}(u_{--}(\infty)|lq_{++}(2;\nu; \mathbb{R}))\right) & \rho_{\tiny {\cal F}}\left(u_{--}(\infty )\right) \end{array} \right]\otimes {\rm Cliff}_1(\Gamma) \, , \end{equation} where $\rho_{\tiny {\cal F}}$ denotes a morphism from ${\rm End}({\cal F})$ to the oscillator algebra and ${\rm Bi}(a|b)$ denotes a bi-module with a left action of $a$ and a right action of $b$. The higher-spin connection is thus expanded in the multi-spinorial basis, in which only the trivial element has a diagonal one-sided action, while the internal gauge field and the intertwiners are expanded in the Fock-space basis, in which the spin operator $\check{J}^0$ has diagonal one-sided actions. The role of the map $\rho_{\tiny {\cal F}}$ is to realize the latter basis elements as elements of the oscillator algebra rather than ${\rm End}({\cal F})$, so as to make sense of the source term $\psi\overline{\psi}$ in the equation for $W_{++}$. As far as the on-shell formulation is concerned, it follows from Eqs. \eqref{gl1} and \eqref{gl2} that undoing the map $\rho_{\tiny {\cal F}}$, by mapping $\mathbb A$ to its representation $\check{\mathbb{A}}$ in ${\cal F}$, yields a model that is equivalent to the original one only for non-critical $\nu$. However, as outlined in Section \ref{Sec:main}, the preference for the former model, formulated in terms of $\mathbb{A}$ rather than $\check{\mathbb{A}}$, stems from the fact that the construction of the standard Chern--Simons action \eqref{CS1} requires the introduction of a bi-linear form \eqref{traceoperation} on the fractional-spin algebra. 
This bi-linear form is based on the trace operation \eqref{Trace}, in its turn based on the supertrace operation \eqref{STr}, whose implementation is straightforward once all objects have been mapped to the star-product algebra.\footnote{Alternatively, it would be interesting to investigate whether it is possible to start from an implementation of the supertrace operation in ${\rm End}({\cal F})$ and seek a scheme for regularizing the supertraces of the multi-spinorial generators of $lq(2;\nu)$.} \subsection{Finite-dimensional truncations at critical $\nu$} \label{ftrunc} On account of the discussion surrounding Eq. \eqref{caseIII}, for critical values $\nu=- 2 \ell - 1$, the algebra $Aq(2;\nu)\oplus Aq(2;\nu)$ admits an additional decomposition into finite- and infinite-dimensional subsectors, so that in those cases the connection splits into \begin{equation} \check{\mathbb{A}}= \check{\mathbb{A}}_{\rm f}+ \check{\mathbb{A}}_\infty \, , \qquad \check{\mathbb{A}}_{\rm f}:= \mathcal{P}_{\rm f} \, \check{\mathbb{A}} \, \check{\mathcal{P}}_{\rm f} \, , \quad \check{\mathbb{A}}_\infty := \mathcal{P}_\infty \, \check{\mathbb{A}} \, \mathcal{P}_\infty \, , \end{equation} where, in the notation of Eq. \eqref{gl1}, one has \begin{equation} \check{\mathbb{A}}_{\rm f} \, \in \, gl(2\ell+1) \, \oplus \, gl(2\ell+1)\, , \quad \check{\mathbb{A}}_\infty \, \in \, gl(-\ell;J_0;0)\, \oplus \, gl(-\ell;J_0;0)\ .\label{AfAinf}\end{equation} With our choice of representation \eqref{a+a-} of the oscillator generators, the mixed projections vanish, \emph{i.e.} \begin{equation} \mathcal{P}_{ {\rm f} \atop \infty } \check{\mathbb{A}} \, \mathcal{P}_{ \infty \atop {\rm f} } = 0 \, . 
\end{equation} The basis elements of the subsectors $\mathbb{A}_{\rm f}$ and $\mathbb{A}_{ \infty}$ are given respectively by \begin{eqnarray} gl(2\ell+1)|_{2\ell+1} \, \oplus \, gl(2\ell+1)|_{2\ell+1} & = & \{ \Gamma^s \, k^t \, \mathcal{P}_{\rm f}\, q_{(\alpha_1}\cdots q_{\alpha_n)} \, \mathcal{P}_{\rm f} \, : \, s,t=0,1;\,n=0,...,2\ell\, \} \,, \nonumber\\[5pt] Aq(2;2\ell+1)|_{{\cal F}} \, \oplus \, Aq(2;2\ell+1)|_{{\cal F}} & = &\{ \Gamma^s \, k^t \, \mathcal{P}_{\infty}\, q_{(\alpha_1}\cdots q_{\alpha_n)} \, \mathcal{P}_{\infty} \, : \, s,t=0,1;\, n=0,... \, \} \,. \nonumber \end{eqnarray} One can verify that the associative algebra spanned by $ k^t \, \mathcal{P}_{\rm f}\,\, q_{(\alpha_1}\cdots q_{\alpha_n)} \, \mathcal{P}_{\rm f}$ with $n=0,...,2\ell \,$; $t=0,1\,$, is isomorphic \cite{Vasiliev:1989re} to Mat${}_{2\ell+1}(\mathbb{C})$, as follows by counting the number of independent generators, taking into account the identity \begin{equation} \mathcal{P}_{\rm f}\, q_{(\alpha_1}\cdots q_{\alpha_{2\ell})} \, \mathcal{P}_{\rm f} \equiv \, k \, \mathcal{P}_{\rm f}\, q_{(\alpha_1}\cdots q_{\alpha_{2\ell})} \, \mathcal{P}_{\rm f} \quad \Leftrightarrow\quad \Pi_- \, \mathcal{P}_{\rm f}\, q_{(\alpha_1}\cdots q_{\alpha_{2\ell})} \, \mathcal{P}_{\rm f} =0 \, . 
\end{equation} In this way, the hybrid model of Section \ref{Hybthe} constructed in Fock space, including the corresponding reality conditions \eqref{realityhyb}, thus decomposes into a finite-dimensional and an infinite-dimensional model\footnote{Thus, these equations for $\mathbb{A}$ decompose in a different way, such that $\mathbb{A}_{\rm f}$ sources $\mathbb{A}_{\infty}$.} \begin{equation}\label{} \check{\mathbb{F}}_{\rm f}=d\check{\mathbb{A}}_{\rm f}+ \check{\mathbb{A}}_{\rm f}\wedge \check{\mathbb{A}}_{\rm f}=0 \,, \qquad \check{\mathbb{F}}_\infty=d \check{\mathbb{A}}_\infty+ \check{\mathbb{A}}_\infty\wedge \check{\mathbb{A}}_\infty =0 \, , \qquad \check{\mathbb{A}}_{\rm f}\wedge \check{\mathbb{A}}_\infty=\check{\mathbb{A}}_\infty \wedge \check{\mathbb{A}}_{\rm f}=0 \, , \end{equation} for the corresponding algebras \eqref{AfAinf}; that is \begin{equation} \check{\mathbb{A}}_{\rm f} \quad \in \quad \left[ \begin{array}{cc} gl(\ell+1, \mathbb{R}) & {\rm Bi}(\ell+1\otimes \overline\ell) \\ \overline{{\rm Bi}}( \ell\otimes \ell+1) & u(\ell) \end{array} \right]\otimes {\rm Cliff}_1(\Gamma) \,, \end{equation} and \begin{equation} \check{\mathbb{A}}_{ \infty} \quad \in \quad \left[ \begin{array}{cc} gl(-\ell;J_0;0) & {\rm Bi}((-\ell;J_0;0)\otimes \overline\infty) \\ \overline{\rm Bi}(\infty\otimes (-\ell;J_0;0)) & u(\infty) \end{array} \right]\otimes {\rm Cliff}_1(\Gamma) \,, \end{equation} where ${\rm Bi}(v\otimes w)$ denotes a bi-module consisting of a left-module $v$ and a right-module $w$. \subsection{Truncations of color gauge fields} To begin with, for any $\nu$ and $N\in \mathbb{N}$, it is possible to choose $u(N)$ subalgebras of ${\rm End}({\cal F})$ and truncate $U\in u(N)$, while simultaneously taking $\psi$ and $\overline\psi$, respectively, to transform in $\bar N$ and $N$. For any given $N$, there exists an infinite number of such level truncations. Another type of truncation of the color gauge fields is possible in the non-unitary regime $\nu<-1$. 
Here one notes that if $\check \psi= | \sigma \rangle\langle c |$, where thus $\sigma$ is a spin and $c$ is a color, then $\check{\overline{\psi}} = - | c \rangle\langle \sigma | C$ and hence $\check{\psi} \check{\overline{\psi}} = |\sigma \rangle\langle \sigma | C$ while $\check{\overline{\psi}} \check \psi = |c\rangle\langle \sigma|C|\sigma\rangle\langle c|$, which can vanish in the non-unitary regime. Thus, the fractional-spin fields necessarily source the tensor-spinorial higher-spin gravity field $W$ (\emph{c.f.} positivity of energy in ordinary gravity), while the internal gauge field $\check U$ can be truncated consistently, leading to \begin{equation} {\rm d}\check W+\check W^2 +\check \psi \, \check{\overline{\psi}}=0\ ,\quad {\rm d}\check \psi +\check W \check \psi=0\ ,\quad {\rm d}\check{\overline{\psi}} + \check{\overline{\psi}} \,\check W=0\ , \quad \check{\overline{\psi}}\,\check \psi=0\ , \end{equation} which defines a quasi-free differential algebra. Thus, the last constraint above possesses non-trivial solutions owing to the non-definite signature of the invariant conjugation matrix of the representation of the higher-spin algebra carried by the fractional-spin fields, while $\check \psi\, \check{\overline{\psi}}=0$ does not have non-trivial solutions. \section{Conclusions} \label{sec:Ccls} In this paper, we have presented a new class of three-dimensional Chern--Simons higher-spin gravities that we refer to as fractional-spin gravities. These theories are extensions of ordinary Blencowe--Vasiliev \cite{Blencowe:1988gj,Vasiliev:1989re} higher-spin gravities and Chern--Simons gauge theories by bi-fundamental one-forms valued in direct products of fundamental representations of the higher-spin algebras and the internal compact gauge algebras. 
In effect, the fractional-spin models have been obtained by a non-standard embedding of the Lorentz algebra into an original enlarged Blencowe--Vasiliev model; in this sense one may interpret the fractional-spin gravities as describing new vacuum sectors of the Blencowe--Vasiliev theory, as we shall comment on more below. The fundamental representations of the higher-spin algebras are infinite-dimensional and characterized by a deformation parameter $\nu\in\mathbb{R}$: For non-critical $\nu$ they remain irreducible under the Lorentz sub-algebra with spin $\frac14(1+\nu)$; for critical $\nu=-1,-3,\dots$ they decompose into a finite-dimensional tensor or tensor-spinor and an infinite-dimensional representation with spin $-\nu$. The color indices, on the other hand, can be chosen to be finite-dimensional by level truncation, and if the fractional-spin representation is non-unitary, that is, if $\nu<-1$ then the internal gauge fields can be truncated; the theory then consists only of the higher-spin gravity fields and the fractional-spin fields. 
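The truncation mechanism in the non-unitary regime can be illustrated with a finite-dimensional toy model: when the invariant conjugation matrix $C$ is indefinite, there exist null spin states $|\sigma\rangle$ with $\langle\sigma|C|\sigma\rangle=0$, for which $\check{\overline\psi}\check\psi$ vanishes while $\check\psi\check{\overline\psi}$, which sources $W$, does not. The matrices and dimensions below are purely illustrative:

```python
# Pure-python toy model of the non-unitary truncation; matrices are
# nested lists, with a 3-dim "spin" sector and indefinite C.
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def outer(u, v):
    return [[a * b for b in v] for a in u]

def is_zero(A):
    return all(abs(x) < 1e-12 for r in A for x in r)

C = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]   # indefinite conjugation matrix
sigma = [1.0, 0.0, 1.0]                   # null spin state: sigma^T C sigma = 0
c = [1.0, 0.0, 0.0]                       # a color state

quad = sum(sigma[i] * C[i][j] * sigma[j] for i in range(3) for j in range(3))
assert abs(quad) < 1e-12                  # only possible for indefinite C

psi = outer(sigma, c)                                        # psi ~ |sigma><c|
psibar = mm([[-x for x in r] for r in outer(c, sigma)], C)   # psibar ~ -|c><sigma|C

assert is_zero(mm(psibar, psi))           # psibar psi ∝ <sigma|C|sigma> = 0
assert not is_zero(mm(psi, psibar))       # psi psibar still sources W
```

With a positive-definite $C$ (the unitary regime) no such null $|\sigma\rangle$ exists, which is why this truncation is specific to $\nu<-1$.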
Denoting the Blencowe--Vasiliev connection by $W$, which thus consists of a collection of Lorentz-tensorial gauge fields making up the adjoint representation of the higher-spin algebra, and the fractional-spin fields and internal connection by $(\psi,\overline\psi)$ and $U$, respectively, we have proposed to describe the fractional-spin gravities on-shell using the following integrable system of equations: \begin{eqnarray} {\rm d} W+{W}{}^2+ \psi {} {\overline\psi} = 0\ ,\quad {\rm d} \psi + W {} \psi+ \psi {} U=0\ ,\end{eqnarray} \begin{eqnarray} {\rm d}{\overline{\psi}} +{\overline{\psi}} {} W + U {} {\overline{\psi}} = 0\ ,\quad {\rm d} U+ U{} U +{\overline{\psi}}{} \psi=0\ ,\end{eqnarray} or more concisely, as \begin{eqnarray} {\rm d} {\mathbb{A}} + {\mathbb{A}}{} {\mathbb{A}} =0\ , \quad {\mathbb{A}} =\left[\begin{array}{cc} W & \psi\\ {\overline{\psi}} & U \end{array} \right]\ .\end{eqnarray} The underlying fractional-spin algebra carries a $\mathbb{Z}_2$-grading similar to that of ordinary superalgebras: The fractional-spin generators close onto higher-spin and internal generators, while the higher-spin and internal generators rotate the fractional-spin charges into themselves. Thus, the fractional-spin fields transform under one-sided actions of the higher-spin and internal Lie algebras, and the fractional-spin transformations can send higher-spin gauge fields into internal gauge fields and vice versa. We would like to stress that the simple appearance of the construction is due to the fact that it relies on the consistent fusion of two sectors of the enveloping algebra of the Wigner--Heisenberg deformed oscillators: The sector of arbitrary polynomials in deformed oscillators can be combined with the sector of Fock-space endomorphisms into an associative algebra by realizing the latter as elements of the enveloping algebra. 
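The block structure of the concise form of the equations of motion above can be verified mechanically. The sketch below multiplies the two-by-two block matrix with noncommutative symbolic entries (exterior derivatives and wedge signs suppressed, pure python, no external packages) and recovers the quadratic terms of the four component equations:

```python
# Words are tuples of symbol names; a "polynomial" maps words to
# coefficients, so multiplication preserves the order of factors.
def mul(p, q):
    out = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            w = w1 + w2
            out[w] = out.get(w, 0) + c1 * c2
    return out

def add(p, q):
    out = dict(p)
    for w, c in q.items():
        out[w] = out.get(w, 0) + c
    return {w: c for w, c in out.items() if c}

def sym(s):
    return {(s,): 1}

W, U, psi, psibar = sym('W'), sym('U'), sym('psi'), sym('psibar')

A = [[W, psi], [psibar, U]]
AA = [[add(mul(A[i][0], A[0][j]), mul(A[i][1], A[1][j])) for j in range(2)]
      for i in range(2)]

# (A·A)_{11} = W W + psi psibar sources the higher-spin curvature, etc.
assert AA[0][0] == add(mul(W, W), mul(psi, psibar))
assert AA[0][1] == add(mul(W, psi), mul(psi, U))
assert AA[1][0] == add(mul(psibar, W), mul(U, psibar))
assert AA[1][1] == add(mul(U, U), mul(psibar, psi))
```

The four assertions match, term by term, the quadratic parts of the four curvature equations displayed above.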
In this paper, we have demonstrated this algebraic structure at the level of Fock-space representations, which are sufficient for the on-shell formulation. The off-shell formulation requires, however, an implementation using enveloping-algebra techniques, so as to realize the bi-linear form going into the definition of the Chern--Simons action; we leave a more detailed description of the off-shell formulation as well as the construction of non-topological fractional-spin models for forthcoming works. In terms of $sl(2)$ representation theory, the fractional-spin representations belong to the discrete series \cite{Bargmann:1946me}, which are lowest-weight representations in the compact basis, labeled by the lowest eigenvalue of the spatial rotation generator $J^0$ of $so(2,1)\cong sl(2)$. Generic values of the lowest spin imply irreducibility, while negative integer or negative half-integer lowest spins, respectively, imply decomposability with finite-dimensional invariant tensor or tensor-spinorial subspaces. Hence, finite-dimensional higher-spin models can be singled out; by combining various reality conditions and working with fractional-spin fields that are either bosons or fermions one may arrive at models based on $sl(N)$, $su(p,q)$ or $su(p|q)$. The fact that the fractional-spin fields $(\psi,\overline\psi)$ are constructed from tensor-spinor higher-spin fields by a change of basis can be interpreted as meaning that the latter condense into the former in a new vacuum of the Prokushkin-Vasiliev system where color interactions emerge. This phenomenon is reminiscent of how new phases can be reached in strongly correlated systems by means of large gauge transformations, as for example in the confined phase of QCD according to 't Hooft's mechanism \cite{'tHooft:1977hy}. 
It is thus inspiring to entertain the idea that the new vacua of Blencowe--Vasiliev theory studied here arise in a similar fashion, namely, via a large gauge transformation of the Blencowe--Vasiliev vacuum formed by tensor and tensor-spinor fields. This physical picture also resembles the fractional quantum Hall effect \cite{Wilczek:1982wy,Laughlin:1983fy,Halperin:1984fn, dePicciotto:1997qc,Polychronakos:2001mi}, where many-electron systems exposed to strong magnetic fields become confined, giving rise to quasi-particle anyons. As mentioned already, anyons can be obtained in the form of a Wilson line coming in from infinity and attached at its endpoint to a charged particle \cite{Itzhaki:2002rc}, yielding the transmutation to braided statistics. Although we have not discussed these aspects in this paper, the analogy suggests a similar picture here, namely that the fractional-spin fields should correspond to Wilson lines attached to the AdS boundary and to some higher-spin particles with the usual boson or fermion statistics, although in the present state of our theory the latter particles must also be located at the boundary, as happens in Chern--Simons theory, where the dynamical degrees of freedom are confined to the boundary. It is worth mentioning that open higher-spin Wilson lines have been analyzed recently \cite{Ammon:2013hba,deBoer:2013vca} and their insertions have been argued to be dual to sending the dual conformal field theory to phases with finite entanglement entropies. One problem that one can investigate, starting from our model, is a particular type of classical solutions in fractional-spin gravity that may have an interpretation in terms of entanglement entropy. Along the same lines, a suitable approach could be suggested by the considerations made in the work \cite{Compere:2013nba}. 
\section*{Acknowledgements} We thank Fabien Buisseret, Andrea Campoleoni, Alejandra Castro, Johan Engquist, Matthias Gaberdiel, Dileep Jatkar, Soo-Jung Rey, Kostas Siampos and Philippe Spindel for discussions. The work of N.B. was supported in part by an ARC contract n$^{\rm o}$ AUWB-2010-10/15-UMONS-1. M. V. is supported by FONDECYT postdoctoral grant n$^{\rm o}$ 3120103. P.S. and M.V. acknowledge the support of the F.R.S.-FNRS ``Ulysse'' Incentive Grant for Mobility in Scientific Research during the first stage of this work, which was carried out at the Service de M\'ecanique et Gravitation at UMONS.
\section{Introduction} Phase transition phenomena are ubiquitous in nature, at very different scales in space and in energy. Therefore, from the theoretical viewpoint, understanding their origin, and the way of classifying them, is of central interest. In spite of a huge literature on this topic, a general theory is still lacking. In the framework of Landau's phenomenological theory, phase transitions are generally related to the mechanism of spontaneous symmetry breaking. However, Landau's theory is not all-encompassing. Indeed, many systems do not fit in this theory and undergo a phase transition lacking spontaneous symmetry breaking and lacking an order parameter. Some notable examples are: Kosterlitz-Thouless transitions, after the Mermin-Wagner theorem; systems with local gauge symmetries, after Elitzur's theorem; liquid-gas transitions; transitions in supercooled glasses and liquids; transitions in amorphous and disordered systems; folding transitions in homopolymers and proteins. Furthermore, to account for the loss of analyticity of thermodynamic observables, the mathematical description of phase transitions requires the limit of an infinite number of particles (thermodynamic limit), as is the case in the Yang-Lee theory \cite{YL} and in the Dobrushin-Lanford-Ruelle theory \cite{DLR}. However, the contemporary research on nanoscopic and mesoscopic systems, on the biophysics of polymers \cite{bachmann,PRL-Bachmann}, on Bose-Einstein condensation, Dicke superradiance in microlasers, and superconducting transitions in small metallic objects tackles transition phenomena in systems with a finite - and often very small - number of particles. 
Within all the hitherto developed theoretical frameworks, also including the monumental theory of the Renormalization Group and critical phenomena, it is assumed that the primitive object at the grounds of a theory is a given statistical measure; schematically: the grand canonical measure in the old Yang-Lee theory, the canonical measure in the Dobrushin-Lanford-Ruelle theory, and the microcanonical measure in a still somewhat open and more recent approach \cite{gross,bachmann,PRL-Bachmann}. However, there are several general results suggesting that the possibility for a system to undergo a phase transition depends on some measure-independent properties, such as its spatial dimension, the dimensionality of its order parameter, the range of its interactions, and the symmetry group (discrete or continuous) of its Hamiltonian. This hints at the possibility that the same information might be encoded already at a more fundamental level, completely determined by the internal interactions of a system, interactions described by their potential function. Therefore, looking for generalisations of the existing theories is a well motivated and timely purpose. The present paper puts forward a new starting point for a line of thought initiated several years ago and based on a variety of results which hitherto did not appear to fit in a coherent theoretical framework. The central idea of this line of thought is that the singular energy dependence of the thermodynamic observables at a phase transition is the ``shadow'' of some adequate \textit{change of topology} of the energy level sets in phase space (or of the potential level sets in configuration space, as well). \subsection{Why topology} \smallskip Recently, the study of equilibrium phase transitions in the microcanonical ensemble has attracted increasing interest, being very important in presence of ensemble inequivalence, when only the microcanonical ensemble gives the correct results. Two complementary approaches have been undertaken. 
One of these is of a statistical kind \cite{gross,bachmann}, recently summarized in a very interesting, powerful and rich classification of microcanonical phase transitions by M. Bachmann in Ref.\cite{PRL-Bachmann}. On another side, as the ergodic invariant measure of nonintegrable Hamiltonian systems is the microcanonical measure, the other approach resorts to the study of the Hamiltonian dynamics of systems undergoing phase transitions. This dynamical approach brings about interesting novelties with respect to the standard studies of phase transitions, eventually leading to the Topological Hypothesis (TH) through the following logical chain. The dynamics of a generic system of $N$ degrees of freedom described by a Hamiltonian $H = \frac{1}{2}\sum_{i=1}^N p_i^2 + V(q_1,\ldots,q_N)~,$ or equivalently by the corresponding Lagrangian function $L = \frac{1}{2}\sum_{i=1}^N {\dot q}_i^2 - V(q_1,\ldots,q_N)~,$ is chaotic. The degree of chaoticity of the dynamics is measured by the largest Lyapunov exponent, a new observable that has been proven useful to characterize phase transitions from a dynamical viewpoint \cite{book}. Then, the explanation of the origin of Hamiltonian chaos - encompassing the computation of the largest Lyapunov exponent - proceeds by identifying a Hamiltonian flow with a geodesic flow on an appropriate Riemannian differentiable manifold. This differential geometric framework is given by configuration space endowed with the non-Euclidean metric of components \cite{marco} $g_{ij} =2 [E- V(q)] \delta_{ij}$, whence the infinitesimal arc element $ds^2= 2[E- V(q)]\, dq_i\ dq^i$; then Newton's equations are retrieved from the geodesic equations $$ \frac{d^2q^i}{ds^2} + \Gamma^i_{jk}\frac{d q^j}{ds}\frac{d q^k}{ds} =0\ , $$ where $\Gamma^i_{jk}$ are the Christoffel connection coefficients of the manifold. 
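As a numerical aside, the largest Lyapunov exponent invoked above can be estimated by evolving a tangent vector along the flow and averaging its logarithmic growth rate (a Benettin-type scheme). The H\'enon-Heiles Hamiltonian used below is a standard two-degree-of-freedom example chosen here only for illustration; it does not appear in the text, and the initial condition is arbitrary:

```python
import math

# Hénon-Heiles: H = (px²+py²)/2 + (x²+y²)/2 + x²y - y³/3.
# State s = (x, y, px, py, wx, wy, wpx, wpy), with w the tangent vector.
def flow(s):
    x, y, px, py, wx, wy, wpx, wpy = s
    return [px, py,
            -x - 2*x*y,
            -y - x*x + y*y,
            wpx, wpy,
            -(1 + 2*y)*wx - 2*x*wy,      # linearized (variational) dynamics
            -2*x*wx + (2*y - 1)*wy]

def rk4(s, dt):
    def shift(a, k, h):
        return [ai + h*ki for ai, ki in zip(a, k)]
    k1 = flow(s)
    k2 = flow(shift(s, k1, dt/2))
    k3 = flow(shift(s, k2, dt/2))
    k4 = flow(shift(s, k3, dt))
    return [si + dt*(a + 2*b + 2*c + d)/6
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

def energy(s):
    x, y, px, py = s[:4]
    return 0.5*(px*px + py*py) + 0.5*(x*x + y*y) + x*x*y - y**3/3.0

def largest_lyapunov(s0, dt=0.02, steps=20000):
    """Benettin-style estimate from the growth of a renormalized tangent vector."""
    s = list(s0) + [1.0, 0.0, 0.0, 0.0]
    acc = 0.0
    for _ in range(steps):
        s = rk4(s, dt)
        norm = math.sqrt(sum(c*c for c in s[4:]))
        acc += math.log(norm)
        s[4:] = [c / norm for c in s[4:]]
    return acc / (steps * dt), s

s0 = [0.0, 0.3, 0.5113, 0.0]      # an orbit at energy E ≈ 1/6
E0 = energy(s0)
lam, s_end = largest_lyapunov(s0)
assert abs(energy(s_end) - E0) < 1e-3   # energy-conservation sanity check
assert lam > -0.05                      # nonnegative up to numerical error
```

For chaotic orbits at this energy the estimate is typically of order $10^{-1}$, while for regular orbits it decays toward zero with integration time; no particular value is asserted here.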
The degree of instability of the dynamics is described by means of the Jacobi--Levi-Civita equation for the geodesic spread $$ \frac{\nabla^2 J}{ds^2} + R(J, \dot\gamma )\dot\gamma =0\ , $$ where the vector field $J$ locally measures the distance between nearby geodesics, $\frac{\nabla}{ds}$ is the covariant derivative along the configuration space geodesic ${\dot\gamma}$, and $R(\cdot,\cdot)$ is the Riemann curvature tensor. The largest Lyapunov exponent for high dimensional Hamiltonian flows is found to depend on the curvature ``landscape'' of the configuration space manifold \cite{book}. Hence, a natural consequence has been to investigate whether the occurrence of phase transitions has some peculiar counterpart in geometrical changes of the manifolds underlying the flows. And it has been discovered that this is actually the case. Moreover, the peculiar geometrical changes associated with phase transitions were discovered to be the effects of deeper topological changes of the potential level sets $\Sigma_v^{V_N}: =\{ V_{N}(q_1,\dots,q_N) = v \in{{\cal I} \! \! R}\}$ in configurations space, and, equivalently, of the balls $\{M_{v}^{V_N}=V_N^{-1}((-\infty,v])\}_{v \in{{\cal I} \! \! R}}$ bounded by the $\Sigma_v^{V_N}$. A topological approach to the study of phase transitions has been considered for a variety of systems, ranging from those undergoing entropy driven transitions \cite{carlsson1,barish} (having also applications to robotics), and hard spheres systems \cite{mason}, to quantum phase transitions \cite{brody,BFS,volovik}, glasses and supercooled liquids \cite{angelani,stillinger}, classical models in statistical mechanics \cite{risau,schilling,fernando}, discrete spin models \cite{cimasoni}, DNA denaturation \cite{grinza}, peptide structure \cite{becker}, to quote just a few of them. 
In fact, in many contexts, well before an explicit formulation of the TH \cite{CCCP,CCCPPG}, topological concepts had implicitly entered the study of phase transitions through the language of energy landscapes \cite{brooks,wales} and of saddle points for disordered systems, glasses \cite{angelani,stillinger}, and spin glasses: saddle points are critical points in the language of Morse theory of differential topology. In a completely different field, more recently, the handling of Big Data - originating from complex systems - through methods referred to as Topological Data Analysis (TDA) has highlighted the existence of phase transition-like phenomena in the absence of a statistical measure. Here the concept of phase transition is intended as the emergence of qualitatively new properties when a control parameter crosses a critical value (the prototype dates back to the appearance of the Erd\"os-Renyi giant component in random graphs). To quote a fascinating example in this field, Ref.\cite{brain} reports the discovery of topological phase transitions in functional brain networks, obtained by merging concepts from TDA, topology, geometry, physics, and network theory. The present paper is organized as follows: in Section II the basic definitions and concepts of the extrinsic geometry of hypersurfaces are recalled for the sake of self-containedness, and the definition of asymptotic diffeomorphicity is also therein introduced. In Section III the Main theorem is formulated and proved; this theorem states that a topological change of the potential level sets of a physical system is a necessary condition for the appearance of a phase transition. In Section IV the problem raised by the counterexample to a preceding formulation of the Main theorem is resolved. Section V is devoted to some concluding remarks, two appendices contain computational details, and a third appendix addresses some past controversial points. 
\section{Topological origin of phase transitions} \smallskip On the one side, the study of the Hamiltonian dynamical counterpart of phase transitions, combined with the geometrization of Hamiltonian dynamics, has led to finding out the crucial role of topology at the grounds of these transition phenomena; on the other side, a mathematical relationship exists between macroscopic thermodynamics and topological properties of the manifolds $M_{v}^{V_N}$, as expressed by \cite{book} \begin{equation} S_N(v) =({k_B}/{N}) \log \left[ \int_{M_{v}^{V_N}}\ d^Nq\right] =\frac{k_B}{N} \log \left[ vol [{M_{v}^{V_N}\setminus\bigcup_{i=1}^{{\cal N}(v)} \Gamma(x^{(i)}_c)}]\ + \sum_{i=0}^N w_i\ \mu_i (M_{v}^{V_N})+ {\cal R} \right] ,\label{exactS} \end{equation} where $S_N$ is the configurational entropy, $v$ is the potential energy, and the $\mu_i(M_{v}^{V_N})$ are the Morse indexes (in one-to-one correspondence with topology) of the manifolds $M_{v}^{V_N}$; in square brackets: the first term is the result of the excision of certain neighborhoods of the critical points of the interaction potential from $M_{v}^{V_N}$; the second term is a weighted sum of the Morse indexes, and the third term is a smooth function of $N$ and $v$. \begin{figure}[h!] \centering \includegraphics[scale=0.4,keepaspectratio=true,angle=-90]{Figure1_compressed.pdf} \vskip -0.3truecm \caption{Low-dimensional pictorial representation of the transition between complex topologies as a metaphor of the origin of a phase transition. From the ground level up to the crossover level $v_c$, the manifolds $M_v$ have a genus which increases with $v$. Above the crossover level $v_c$, the manifolds $M_v$ have also a nonvanishing linking number which increases with $v$. 
} \vskip 0.3truecm \label{Cmplx} \end{figure} As a consequence, major topology changes with $v$ of the submanifolds $M_{v}^{V_N}$ - bringing about sharp changes of the potential-energy pattern of at least some of the $\mu_i (M_{v}^{V_N})$ - can affect the $v$-dependence of $S_N(v)$ and of its derivatives. Hence, it has been surmised \cite{book} that, at least for a broad class of physical systems, phase transitions stem from a suitable change of the topology of the potential level sets $\Sigma_v^{V_N}$ and, equivalently, of the manifolds $M_{v}^{V_N}$, when $v$, playing the role of the control parameter, takes a critical value $v_c$. This hypothesis has turned into the start of a new theory by putting together several studies on specific models \cite{book,physrep} and two theorems \cite{prl1,NPB1,NPB2}. These theorems state that an equilibrium phase transition is {\it necessarily} due to appropriate topological transitions in configuration space. However, a counterexample to these theorems has been found in Ref.\cite{kastner}, thus undermining this version of the topological theory of phase transitions. The counterexample is provided by the second-order phase transition of the $2D$ lattice $\phi^4$-model, which occurs at a critical value $v_c$ of the potential energy density belonging to a broad interval of $v$-values void of critical points of the potential function. The difficulty raised by this counterexample has stimulated a deeper investigation of the transition of the $\phi^4$-model, which led to the identification of a crucial point associated with the breaking of the ${\cal I} \! \!{Z}_2$ symmetry (and possibly with the breaking of discrete symmetries in general), namely, the possibility that a phase transition stems from an asymptotic loss of diffeomorphicity of the relevant manifolds \cite{vaff}. In what follows this fact is formalized into a new and more consistent version of the theory. 
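A minimal example of a topology change of the sublevel sets $M_v$ at a critical value of the potential, of the kind invoked above, is the two-dimensional double well $V(x,y)=(x^2-1)^2+y^2$: below the saddle value $V(0,0)=1$ the set $\{V\le v\}$ has two connected components, above it only one. The grid-based component count below is purely illustrative:

```python
from collections import deque

V = lambda x, y: (x*x - 1.0)**2 + y*y   # minima at (±1,0), saddle at (0,0)

def n_components(v, n=200, L=2.0):
    """Connected components of M_v = {V <= v} sampled on an n×n grid."""
    h = 2*L/(n - 1)
    inside = [[V(-L + i*h, -L + j*h) <= v for j in range(n)] for i in range(n)]
    seen = [[False]*n for _ in range(n)]
    comps = 0
    for i in range(n):
        for j in range(n):
            if inside[i][j] and not seen[i][j]:
                comps += 1                       # flood-fill a new component
                q = deque([(i, j)])
                seen[i][j] = True
                while q:
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        c, d = a + da, b + db
                        if 0 <= c < n and 0 <= d < n and inside[c][d] and not seen[c][d]:
                            seen[c][d] = True
                            q.append((c, d))
    return comps

# Below the saddle level the sublevel set has two components, above it one:
# a topology change at the critical value of the potential.
assert n_components(0.5) == 2
assert n_components(1.5) == 1
```

Here the change in the number of connected components (a change of $\mu_0$, in Morse-theoretic terms) occurs exactly at the saddle, i.e. at a critical point of $V$.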
Other alleged difficulties of the theory are briefly discussed in Appendix C. \subsection{Basic definitions and concepts} Consider now an open set of $v$-values $I\subseteq {\cal I} \! \!{R}$ such that the cylindrical subset of configuration space $\Gamma_I^N=\bigcup_{{v}\in {I}}\Sigma^{V_N}_{v}$ contains only non-singular level sets, that is, $\nabla V_N(q)\neq 0$ for any $q\in\Gamma^N_{I}$, meaning that $V_N$ has no critical points for any ${v}\in {I}$. For any ${v}_0,{v}_1\in{I}$, the two level sets $\Sigma^{V_N}_{v_0}\subset \Gamma^N_{{I}}$ and $\Sigma^{V_N}_{v_1} \subset \Gamma^N_{{I}}$ are diffeomorphic under the action of an explicitly known diffeomorphism generated by the integral lines of the vector field $\boldsymbol{\xi}_N={\nabla{V}_N}/{\|\nabla{V}_N\|^2}$, that is, any initial condition $\textit{\textbf{q}}_\alpha\in\Sigma^{V_N}_{v_0}$ is diffeomorphically mapped onto a point $\textit{\textbf{q}}_\beta\in\Sigma^{V_N}_{v_1}$ by the equation \cite{hirsch} \begin{equation}\label{Hirsch-eq} \frac{d{\textit{\textbf{q}}}}{dv} =\dfrac{\nabla{V}_N}{\|\nabla{V}_N\|^2} \qquad . \end{equation} \begin{figure}[h] \includegraphics[scale=0.32,keepaspectratio=true,angle=-90]{Figure2_compressed.pdf} \caption{Pictorial representation of the action of the vector field $\boldsymbol{\xi}$ in Eq.\eqref{Hirsch-eq} diffeomorphically mapping each point of the level set $\Sigma_a$ onto each point of the level set $\Sigma_b$.} \label{Hirsch-fig} \end{figure} \subsection{Extrinsic geometry of hypersurfaces} In this section some basic definitions and concepts are given about the extrinsic geometry of hypersurfaces of a Euclidean space. The basic tool consists in measuring how the normal direction changes from point to point on the surface, in order to describe how the $n$-surface $\Sigma$ curves around in ${{\cal I} \! \! R}^N$.
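The level-set mapping of Eq.\eqref{Hirsch-eq} is easy to illustrate numerically: integrating $d\textit{\textbf{q}}/dv=\nabla V/\|\nabla V\|^2$ carries a point of $\{V=v_0\}$ onto $\{V=v_1\}$. A minimal sketch, in which the two-dimensional confining potential, the starting point, and the Euler step count are illustrative choices not taken from the text:

```python
import numpy as np

def V(q):
    # Toy confining potential; the level values visited below contain no critical points.
    return np.sum(q**4 / 4 - q**2 / 2) + 1.0

def grad_V(q):
    return q**3 - q

def map_level_sets(q0, v0, v1, steps=1000):
    """Carry a point of {V = v0} onto {V = v1} by an Euler integration of
    dq/dv = grad V / ||grad V||^2 (the flow of Eq. (Hirsch-eq))."""
    q = np.array(q0, dtype=float)
    dv = (v1 - v0) / steps
    for _ in range(steps):
        g = grad_V(q)
        q += dv * g / np.dot(g, g)
    return q

q0 = np.array([1.7, 0.3])
v0 = V(q0)
q1 = map_level_sets(q0, v0, v0 + 0.5)
print(abs(V(q1) - (v0 + 0.5)))   # small residual: q1 lies on the target level set
```

The residual shrinks with the step count, since along the exact flow $V$ increases precisely at unit rate in the parameter $v$.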
The rate of change of the normal vector ${\cal N}$ at a point $x\in\Sigma$ in a given direction $\textit{\textbf{u}}$ is described by the {\it shape operator} (also known as Weingarten's map) $L_x(\textit{\textbf{u}}) = - \nabla_{\textit{\textbf{u}}}\ {\cal N}= - ({\textit{\textbf{u}}}\cdot\nabla){\cal N}$, where $\textit{\textbf{u}}$ is a tangent vector at $x$ and $\nabla_{\textit{\textbf{u}}}$ is the directional derivative; gradients and vectors are represented in ${{\cal I} \! \! R}^N$. \begin{figure} \begin{center} \includegraphics[scale=0.3,keepaspectratio=true,angle=-90]{Figure3_compressed.pdf} \end{center} \caption{Illustration of the items entering the construction of the shape operator of a surface. } \label{wein-map} \end{figure} The constant-energy hypersurfaces in the phase space of Hamiltonian systems, or the equipotential hypersurfaces in configuration space, are the level sets of regular functions; for a level set defined via a regular real-valued function $f$ as $\Sigma_a:=f^{-1}(a)$, the normal vector is ${\cal N}= \nabla f/\Vert\nabla f\Vert$. Let $\{{\bf e}_\mu\}_{\mu =1,\dots,N} =\{{{\bf e}_1,\dots,{\bf e}_n},{\cal N}\}$ be an orthonormal frame, with ${\bf e}_\alpha\cdot{\bf e}_\beta =\delta_{\alpha,\beta}$; Greek subscripts, $\alpha =1,\dots,N$, denote the components in the embedding space ${{\cal I} \! \! R}^N$, and Latin subscripts, $i =1,\dots,n$, the components on a generic tangent space $T_x\Sigma_a$ at $x\in\Sigma_a$. We consider the case of codimension one, that is, $N=n+1$. From $\partial_\mu ({\cal N}_\alpha {\cal N}_\alpha) =0= 2{\cal N}_\alpha \partial_\mu {\cal N}_\alpha$ we see that for any $\textit{\textbf{u}}$, we have ${\cal N}\cdot L_x(\textit{\textbf{u}})= - {\cal N}_\alpha \textit{\textbf{u}}_\mu \partial_\mu {\cal N}_\alpha =0$, which means that $L_x(\textit{\textbf{u}})$ projects on the tangent space $T_x\Sigma_a$.
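This tangency property, ${\cal N}\cdot L_x(\textit{\textbf{u}})=0$, is straightforward to verify by finite differences. A small sketch, assuming an arbitrary smooth confining function and test point (both illustrative choices, not from the text):

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Central finite-difference gradient of a scalar function f at x."""
    return np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(len(x))])

def shape_apply(f, x, u, h=1e-5):
    """L_x(u) = -(u . nabla) N, with N = grad f / ||grad f||, by central differences."""
    def unit_normal(y):
        g = grad(f, y)
        return g / np.linalg.norm(g)
    return -(unit_normal(x + h*u) - unit_normal(x - h*u)) / (2*h)

f = lambda x: np.sum(x**4) + np.sum(x**2)   # smooth toy function, gradient nonzero here
x = np.array([0.8, -0.5, 0.3])
n = grad(f, x); n /= np.linalg.norm(n)
u = np.array([n[1], -n[0], 0.0]); u /= np.linalg.norm(u)   # u . n = 0: a tangent vector
Lu = shape_apply(f, x, u)
print(abs(np.dot(n, Lu)))   # ~0: the shape operator maps u into the tangent plane
```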
Now the {\it principal curvatures} $\kappa_1,\dots,\kappa_n$ of $\Sigma_a$ at $x$ are the eigenvalues of the shape operator restricted to $T_x\Sigma_a$. Considering the matrix ${\cal L}_x$ to be the restriction of $L_x$ to $T_x\Sigma_a$ \[ {\cal L}_{ij}(x) = {\bf e}_i\cdot L_x({\bf e}_j) = - ({\bf e}_i)_\alpha ({\bf e}_j)_\beta \partial_\beta {\cal N}_\alpha\ , \] the {\it mean curvature} is defined as \begin{equation} H(x)= \frac{1}{n}{\rm Tr}^{(n)}{\cal L}_{ij}(x)= \frac{1}{n}\sum_{i=1}^n\kappa_i \ . \label{meancurvat} \end{equation} The computation of the analytic expression of the mean curvature $H$ proceeds from \begin{equation} H(x)= \frac{1}{n}{\rm Tr}^{(n)}{\cal L}_{ij}(x)= - \frac{1}{n}\sum_{i=1}^n ({\bf e}_i)_\alpha ({\bf e}_i)_\beta \partial_\beta {\cal N}_\alpha~. \label{meancurvatur} \end{equation} Defining $A_{\mu \nu}=({\bf e}_\mu)_\nu$, so that $A A^T={{\cal I} \! \! I}$, we have \[ \sum_{i=1}^n ({\bf e}_i)_\alpha ({\bf e}_i)_\beta =\delta_{\alpha\beta} - {\cal N}_\alpha {\cal N}_\beta \] and thus \begin{eqnarray} H(x)= -\frac{1}{n}(\delta_{\alpha\beta} - {\cal N}_\alpha {\cal N}_\beta)\partial_\beta {\cal N}_\alpha =- \frac{1}{n}\partial_\alpha {\cal N}_\alpha = - \frac{1}{n}\nabla \cdot\left( \frac{\nabla f}{\Vert\nabla f\Vert}\right)~. \label{M1} \end{eqnarray} \subsection{Asymptotic diffeomorphicity} In Ref.\cite{vaff} it has been numerically found that the phase transition undergone by the $2D$ lattice $\phi^4$ model actually corresponds to a major topological change of the potential level sets of the model, even in the absence of critical points of the potential. This topological change corresponds to an asymptotic breaking of the topological transitivity of the potential level sets, which can be formalized as an asymptotic loss of diffeomorphicity of the same manifolds in the broken symmetry phase. This provides a crucial hint for fixing the problem, stemming from the counterexample given by the $\phi^4$ model, that has hitherto been considered fatal.
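Before formalizing this, a quick numerical sanity check of the mean-curvature formula \eqref{M1}: for $f(x)=\Vert x\Vert^2$ the level sets are spheres, all principal curvatures coincide, and with the sign convention above $H=-1/r$ on the sphere of radius $r$. The function and the test point below are illustrative choices:

```python
import numpy as np

def mean_curvature(f, x, h=1e-5):
    """H = -(1/n) div( grad f / ||grad f|| ), n = N - 1, as in Eq. (M1),
    evaluated with nested central finite differences."""
    N = len(x)
    I = np.eye(N)
    def unit_normal(y):
        g = np.array([(f(y + h*e) - f(y - h*e)) / (2*h) for e in I])
        return g / np.linalg.norm(g)
    div = sum((unit_normal(x + h*e)[i] - unit_normal(x - h*e)[i]) / (2*h)
              for i, e in enumerate(I))
    return -div / (N - 1)

f = lambda x: np.sum(x**2)           # level sets: spheres of radius sqrt(v)
x = np.array([3.0, 0.0, 0.0, 0.0])   # a point on the sphere of radius 3
print(mean_curvature(f, x))          # ~ -1/3 with this sign convention
```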
The first step to fix the problem thus consists in defining asymptotic diffeomorphicity, which is easily done by observing that a vector valued function of several variables, $f:{\cal I} \! \!{R}^n\rightarrow{\cal I} \! \!{R}^n$, is of differentiability class ${\cal{C}}^{l}$ if all the partial derivatives $(\partial^lf/\partial x_{i_1}^{l_1}\dots\partial x_{i_k}^{l_k})$ exist and are continuous, where each of $i_1,\dots,i_k$ is an integer between $1$ and $n$ and each $l_1,\dots,l_k$ is an integer between $0$ and $l$, and $l_1+\dots +l_k=l$. Then, by taking advantage of the explicit analytic representation of the vector field generating the diffeomorphism $\boldsymbol\xi_N:\Gamma_{I}^{N}\rightarrow T\Gamma_{I}^{N}$ previously given, uniform convergence in $N$ of the sequence $\{\boldsymbol\xi_N\}_{N\in{\cal I} \! \!{N}}$ - and thus asymptotic diffeomorphicity in some class ${\cal{C}}^{l}$ - can be defined after the introduction of an appropriate norm containing all the derivatives up to $(\partial^l \boldsymbol\xi_N/\partial q_{i_1}^{l_1}\dots\partial q_{i_k}^{l_k})$. At any fixed $N\in {\cal I} \! \!{N}$, in the absence of critical points of a confining potential $V_N$, the level sets $\Sigma_v^{V_N}$ are non-singular $(N-1)$-dimensional hypersurfaces in ${\cal I} \! \!{R}^N$. Let us consider the already defined cylindrical subset of configuration space $\Gamma_I^N=\bigcup_{{v}\in {I}}\Sigma^{V_N}_{v}$ containing only non-singular level sets.
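As a numerical preview of such uniform-in-$N$ control, one can monitor the lowest-order ingredient of the norm, $\sup\|\boldsymbol\xi_N\|=\sup 1/\|\nabla V_N\|$, for a decoupled toy potential: since $\|\nabla V_N\|$ typically grows as $N^{1/2}$, this quantity stays bounded (in fact it decreases) as $N$ grows. The potential and the Gaussian sampling of configurations are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def xi_norm(q):
    """||xi_N|| = 1 / ||grad V_N|| for V_N(q) = sum_i (q_i^4/4 + q_i^2/2)."""
    g = q**3 + q
    return 1.0 / np.linalg.norm(g)

sups = []
for N in (10, 100, 1000):
    sample = rng.normal(size=(200, N))          # 200 random configurations
    sups.append(max(xi_norm(q) for q in sample))
print(sups)   # decreasing with N: no sign of asymptotic critical points here
```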
The lack of asymptotic breaking of diffeomorphicity is defined by introducing a norm for the $\boldsymbol{\xi}_N$ that allows one to compare the diffeomorphisms at different dimensions \begin{equation}\label{normaxi} \|\boldsymbol{\xi}_N \|_{C^k(\Gamma^N_{{I}_0})}=\sup_{\textit{{q}}_0\in\Gamma^N_{{I}_0}}\|\boldsymbol{\xi}_N\|+ \sum_{l=1}^{k}\sum_{\{ i_k\}}\sum_{j=1}^N\|{\nabla^l_{\{ i_k\}}}{\xi}_j\|_{\Gamma^N_{{I}_0}} \end{equation} where $\{ i_k\}$ stands for a multi-index and $\|{\nabla^l_{\{ i_k\}}}{\xi}_j\|_{\Gamma^N_{{I}_0}}$ is the norm of the $l$-th differential operator with $l_1+\dots +l_k=l$ \begin{equation} \|{\nabla^l_{\{ i_k\}}}{\xi}_j\|_{\Gamma^N_{{I}_0}}=\sup_{\textit{{q}}_0\in\Gamma^N_{{I}_0}}\left\vert \dfrac{\partial^l{\xi}_j}{\partial q_{i_1}^{l_1}\dots\partial q_{i_k}^{l_k}}\right\vert \ . \label{norma} \end{equation} The sequence of families of manifolds $\left\{\Gamma^N_{{I}_0}\right\}_{N\in{\cal I} \! \!{N}}$ is said to asymptotically preserve the $C^{k}$-diffeomorphicity among the hypersurfaces $\Sigma_v^{V_N}$ - foliating each family - if there exists $B\in{\cal I} \! \!{R}^{+}$ such that \begin{equation} \label{eq:AsympDiffCk} \|\boldsymbol{\xi}_N \|_{C^k\left(\Gamma^N_{{I}_0}\right)}\leq B<+\infty \qquad \forall N\in {\cal I} \! \!{N}. \end{equation} As a consequence, from this condition we get $\|\nabla{V}_N\|=\|{\boldsymbol\xi}_N\|^{-1}\geq 1/B=C>0$ for each $\textit{q}_0\in\Gamma_{{I}_0}^N$ and all $N\in{\cal I} \! \!{N}$, ruling out the existence of asymptotic critical points (i.e. $\|\nabla{V}_N\|\rightarrow 0$ for $N\rightarrow\infty$). The analytic condition \eqref{eq:AsympDiffCk} entails remarkable consequences on the extrinsic geometry of the potential level sets. In fact, using $\sum_i \Vert X_i\Vert \ge \Vert \sum_i X_i\Vert$, from Eq.
\eqref{norma} at the lowest order with the aid of a normalised vector $\textit{\textbf{u}}$ tangent at $\textit{q}_0$ to a $\Sigma_v^n\subset \Gamma^N_{{I}_0}$, that is, $\textit{\textbf{u}}\in T_{\textit{q}_0}\Sigma_v^n$, we can build the quadratic forms \begin{equation}\label{kappa} \sum_{i,j=1}^N\Vert (\partial_i\xi_j) u_i u_j \Vert \ge \left\Vert \sum_{i,j=1}^N \left(\partial_i \dfrac{\partial_j V_N}{\Vert\nabla V_N\Vert^2}\right) u_i u_j \right\Vert \end{equation} where $\partial_i=\partial/\partial q^i$. With implicit summation on repeated indices, the r.h.s. of Eq.\eqref{kappa} is rewritten as \begin{eqnarray}\label{Qform} && \left\Vert \left[\dfrac{1}{\Vert\nabla V_N\Vert}\left(\partial_i \dfrac{\partial_j V_N}{\Vert\nabla V_N\Vert}\right) + \dfrac{\partial_j V_N}{\Vert\nabla V_N\Vert}\partial_i \left( \dfrac{1}{\Vert\nabla V_N\Vert}\right)\right] u_i u_j \right\Vert \nonumber \\ &=& \left\Vert \dfrac{1}{\Vert\nabla V_N\Vert}\left(\partial_i \dfrac{\partial_j V_N}{\Vert\nabla V_N\Vert}\right) u_i u_j \right\Vert \end{eqnarray} where the orthogonality, at any point $\textit{q}_0$, between the vectors $\textit{\textbf{u}}$ and ${\cal N}=( \partial_1 V_N/\Vert\nabla V_N \Vert,\dots,\partial_N V_N/\Vert\nabla V_N\Vert )$, tangent and normal to $\Sigma_v^{V_N}$ respectively, has been used. Through the shape operator (Weingarten map) of $\Sigma_v^{V_N}$ \cite{thorpe} at $\textit{q}_0$ \begin{equation} L_{\textit{q}_0}(\textit{\textbf{u}}) = - \nabla_{\textit{\textbf{u}}} {\cal N} = - (\nabla {\cal N}_1\cdot \textit{\textbf{u}}\ ,\dots,\nabla {\cal N}_N\cdot \textit{\textbf{u}}) \label{shape} \end{equation} the quadratic form $\kappa(\textit{\textbf{u}},\textit{q}_0) = \langle \textit{\textbf{u}}, L_{\textit{q}_0}(\textit{\textbf{u}})\rangle$ is found to coincide with the one given in Eq.\eqref{Qform} (last term).
The quantity $\kappa(\textit{\textbf{u}},\textit{q}_0)$ is known as the \textit{normal curvature} of the level set $\Sigma_v^{V_N}$ at $\textit{q}_0$. Let $\{ \kappa_1(\textit{q}_0),\dots,\kappa_N(\textit{q}_0)\}$ denote the principal curvatures of $\Sigma_v^{V_N}$ at $\textit{q}_0$, with the corresponding orthogonal principal curvature directions $ \{ \textit{\textbf{v}}_1,\dots,\textit{\textbf{v}}_N\}$; then the normal curvature in the direction $\textit{\textbf{u}}\in T_{\textit{q}_0}\Sigma_v^N$ is given by \begin{equation} \kappa(\textit{\textbf{u}},\textit{q}_0)=\sum_{i=1}^N \kappa_i(\textit{q}_0)\langle \textit{\textbf{u}}, \textit{\textbf{v}}_i\rangle^2 = \sum_{i=1}^N \kappa_i(\textit{q}_0)\cos^2\theta_i \ . \end{equation} By choosing $\tilde{\textit{\textbf{u}}}\in T_{\textit{q}_0}\Sigma_v^N$ such that $\Vert\tilde{\textit{\textbf{u}}} \Vert =1$ and all the angles $\theta_i$ between $\tilde{\textit{\textbf{u}}}$ and $\textit{\textbf{v}}_i$ are equal to some $\tilde\theta$, we get \begin{equation}\label{kappatilde} \kappa(\tilde{\textit{\textbf{u}}},\textit{q}_0)=(\cos^2{\tilde\theta})\ \sum_{i=1}^N \kappa_i(\textit{q}_0) = (\cos^2{\tilde\theta})\ N\ H (\textit{q}_0) \end{equation} where $H (\textit{q}_0)$ is the mean curvature (the trace of the Weingarten map) at $\textit{q}_0$. Thus from Eqs.\eqref{Qform},\eqref{kappatilde} and \eqref{eq:AsympDiffCk} \begin{equation} \left\Vert \dfrac{1}{\Vert\nabla V_N\Vert} (\cos^2{\tilde\theta})\ N\, H (\textit{q}) \right\Vert \le B <+\infty \qquad \forall N\in {\cal I} \! \!{N} \end{equation} everywhere on $\Sigma_v^{V_N}$. Since $\Vert\nabla V_N\Vert\sim{\cal{O}}(N^{1/2})$, it follows that $H (\textit{q})\sim{\cal{O}}(N^{-1/2})$ everywhere on $\Sigma_v^{V_N}$ and uniformly in $N$.
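The scaling $H\sim{\cal{O}}(N^{-1/2})$ can be observed directly for a decoupled toy potential, for which the divergence in Eq.\eqref{M1} is available in closed form; averaging $\sqrt{N}\,H$ over random configurations, the product remains of order one while $H$ itself shrinks. The potential and the Gaussian sampling are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_curvature(q):
    """Closed-form H = -(1/(N-1)) div( grad V / ||grad V|| ) for the decoupled
    toy potential V(q) = sum_i (q_i^4/4 + q_i^2/2)."""
    g = q**3 + q                  # components of grad V
    w = 3 * q**2 + 1              # diagonal entries of the Hessian
    G = np.linalg.norm(g)
    return -(np.sum(w) / G - np.sum(g**2 * w) / G**3) / (len(q) - 1)

scaled = []
for N in (10, 100, 1000):
    sample = rng.normal(size=(100, N))
    scaled.append(np.mean([np.sqrt(N) * mean_curvature(q) for q in sample]))
print(scaled)   # sqrt(N)*H stays O(1), i.e. H ~ O(N^{-1/2})
```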
Therefore, the first remarkable consequence of asymptotic diffeomorphicity among the potential level sets is that their mean curvature \begin{equation} H (\textit{q})= \frac{1}{N}\sum_{i=1}^N\kappa_i(\textit{q}) \end{equation} is everywhere uniformly bounded in $N$. However, this does not ensure the boundedness of each principal curvature (whose sign is not definite). A priori, two or more principal curvatures of the same value but of opposite sign could diverge and mutually compensate, leaving $H (\textit{q})$ finite. In order to get this missing information about the asymptotic boundedness of all the principal curvatures, let us consider the scalar curvature ${\mathscr R}$ of a level set $ V(\textit{q}) = v$, embedded in a Euclidean space of arbitrary dimension, which reads \cite{Zhou} \begin{equation} {\mathscr R}(\textit{q}) = \frac{1}{N(N-1)}\sum_{i\neq j}^{1\dots N}\kappa_i(\textit{q})\kappa_j(\textit{q}) =\frac{1}{N(N-1)}\left\{ -\triangle\log\Vert\nabla V_N(\textit{q})\Vert + \nabla\cdot \left[ \triangle V_N(\textit{q}) \frac{\nabla V_N(\textit{q}) }{\Vert\nabla V_N(\textit{q})\Vert^2} \right] \right\} \label{scalar0} \end{equation} Let us notice that ${\mathscr R}$ is singular at the critical points of the potential, where $\nabla V_N(\textit{q})=0$, and can be arbitrarily large in their neighborhoods; then, using $\Vert\boldsymbol{\xi}\Vert = \Vert\nabla V_N(\textit{q})\Vert^{-1} $, this can be rewritten as \begin{equation} {\mathscr R} = \frac{1}{N(N-1)}\left\{ -\triangle\log \frac{1}{\Vert\boldsymbol{\xi}\Vert } + \nabla\cdot \left[ \triangle V_N(\textit{q})\ \boldsymbol{\xi} \right] \right\} \ , \label{scalar1} \end{equation} then, trivial computations (sketched in Appendix A) of the r.h.s.
of this equation under the assumption of asymptotic diffeomorphicity [Eqs.\eqref{normaxi},\eqref{norma} and \eqref{eq:AsympDiffCk}] yield uniform boundedness also of ${\mathscr R}(\textit{q})$, entailing uniform boundedness in $N$ of each $\kappa_i(\textit{q})$ everywhere on each potential level set. \begin{figure}[h!] \includegraphics[scale=0.3,keepaspectratio=true,angle=0]{Figure4.pdf} \caption{Sequence of diffeomorphic manifolds (of the same dimension) with a limit manifold which is not diffeomorphic to the members of the sequence. The infinitely tiny bridge between the two spheres of $S_\infty$ has infinite mean curvature.} \label{GHlimt} \end{figure} To help intuition get hold of the relationship between boundedness of mean and scalar curvatures and asymptotic diffeomorphicity, we qualitatively illustrate in Figure \ref{GHlimt} the opposite situation, known as a Gromov-Hausdorff limit \cite{sormani}, where a sequence of diffeomorphic manifolds of fixed dimension has a limit manifold which is not diffeomorphic to the other members of the sequence. The handles of these dumbbell-shaped manifolds shrink to an asymptotic infinitely tiny cylinder of vanishing radius and thus of diverging transversal principal curvature, that is, of divergent mean curvature. \begin{remark}.
Summarizing, the assumption of asymptotic diffeomorphicity means that, for any pair of potential energy densities $\bar{v}$ and $\bar{v}^\prime$ in some assigned interval $I_{\bar{v}}=[\bar{v}_0, \bar{v}_1]$ and $N$ arbitrarily large, the corresponding manifolds $\Sigma_{N\bar{v}}^{V_N}$ are diffeomorphic under the action of the diffeomorphism-generating vector fields $\boldsymbol{\xi}_{N_k}$ \begin{eqnarray} && \Sigma_{N_1\bar{v}}^{V_{N_1}}\quad \overset{\boldsymbol{\xi}_{N_1}}\longrightarrow \quad \Sigma_{N_1\bar{v}^\prime}^{V_{N_1}}\nonumber\\ &&\Sigma_{N_2\bar{v}}^{V_{N_2}}\quad \overset{\boldsymbol{\xi}_{N_2}}\longrightarrow \quad \Sigma_{N_2\bar{v}^\prime}^{V_{N_2}}\nonumber\\ &&\vdots \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \bar{v}, \bar{v}^\prime\in[\bar{v}_0,\bar{v}_1 ],\quad k\in{{\cal I} \! \! N}\\ && \Sigma_{N_k\bar{v}}^{V_{N_k}}\quad \overset{\boldsymbol{\xi}_{N_k}}\longrightarrow \quad \Sigma_{N_k\bar{v}^\prime}^{V_{N_k}}\nonumber\\ &&\vdots\nonumber \end{eqnarray} provided that the norm of the vector fields $\boldsymbol{\xi}_{N_k}$ is uniformly bounded according to Eq.\eqref{normaxi}. Under this condition, all the principal curvatures $\kappa_i(\textit{q})$ of every manifold in the above diagram are uniformly bounded with $N$. Moreover, after the {\rm Non-critical neck Theorem \cite{palais} } all the above manifolds $\Sigma_{N_k\bar{v}}^{V_{N_k}}$ for any $\bar{v}\in[\bar{v}_0,\bar{v}_1]$ are free of critical points of the potential functions $V_N$, that is, of points where $\nabla V_N=0$.
\end{remark} \section{A necessity theorem} In its original formulation, given in Refs.\cite{prl1,NPB1}, the theorem establishing the necessary topological origin of a phase transition was lacking a fundamental hypothesis; this led to the paradoxical situation of its being falsified \cite{kastner} by the example of a phase transition that is still related to a change of topology in configuration space, though an asymptotic one in the number of degrees of freedom \cite{vaff}, and that occurs in the absence of critical points of the potential. The missing hypothesis, suggested by the study of Ref. \cite{vaff}, is to require also asymptotic diffeomorphicity of the potential level sets in order to correspondingly get uniform convergence of the Helmholtz free energy in a differentiability class that rules out first and second order phase transitions. \begin{remark}. The notation $N\in{\cal I} \! \!{N}^\#$ means that $N\rightarrow\infty$ is included. \end{remark} \medskip \subsection*{\large Main Theorem} \begin{theorem}[Absence of phase transitions under diffeomorphicity] \noindent Let $V_N(q_1,\dots,q_N): {{\cal I} \! \! R}^N \rightarrow{{\cal I} \! \! R}$ be a smooth, nonsingular, finite-range potential. Denote by $\Sigma_v^{V_N}:= V_N^{-1}(v)$, $v\in{{\cal I} \! \! R}$, its {\em level sets}, or {\em equipotential hypersurfaces}, in configuration space.\\ Then let $\bar{v} =v/N$ be the potential energy per degree of freedom.\\ If for any pair of values $\bar{v}$ and $\bar{v}^\prime$ belonging to a given interval $I_{\bar{v}}=[\bar{v}_0, \bar{v}_1]$ and for any $N>N_0$ with $N\in{\cal I} \! \!{N}^\#$ we have\\ \centerline{$\Sigma_{N\bar{v}}^{V_N}\approx \Sigma_{N\bar{v}^\prime}^{V_N}$\ , } \vskip 0.2truecm \noindent that is, $\Sigma_{N\bar{v}}^{V_N}$ is {\em diffeomorphic} to $\Sigma_{N\bar{v}^\prime}^{V_N}$, including {\em asymptotically diffeomorphic}, then the sequence of the Helmholtz free energies $\{ F_N (\beta)\}_{N\in{{\cal I} \! \!
N}}$---where $\beta =1/T$ ($T$ is the temperature) and $\beta\in I_\beta =(\beta (\bar{v}_0), \beta (\bar{v}_1))$---is {\em uniformly} convergent at least in ${\mathscr C}^2(I_\beta\subset{{\cal I} \! \! R})$, so that $F_\infty \in{\mathscr C}^2(I_\beta\subset{{\cal I} \! \! R})$ and neither first- nor second-order phase transitions can occur in the (inverse) temperature interval $(\beta (\bar{v}_0), \beta (\bar{v}_1))$. \end{theorem} \begin{remark}. {\rm The configurational entropy $S_N(\bar{v})$ is related to the configurational canonical free energy, $f_N$ in \eqref{freeEnergy}, for any $N\in{{\cal I} \! \! N}$, $\bar{v}\in{{\cal I} \! \! R}$, and $\beta \in{{\cal I} \! \! R}$ through the Legendre transform \begin{eqnarray} - f_N(\beta) = \beta \cdot \bar{v}_N - S_N(\bar{v}_N) \label{legendre-tras} \end{eqnarray} where the inverse of the configurational temperature $T(v)$ is given by $\beta_N(\bar{v})= {\partial S_N(\bar{v})}/{\partial \bar{v}}$. Following Ref.\cite{dualising}, let us consider the function $\phi(\bar{v})=f_N[\beta(\bar{v})]$; from $\phi^\prime(\bar{v}) = -\bar{v}\ [d \beta_N(\bar{v})/d\bar{v}]$ it is evident that if $\beta_N(\bar{v})\in{\cal C}^k({\cal I} \! \! R)$ then also $\phi(\bar{v})\in{\cal C}^k({\cal I} \! \! R)$, and thus $S_N(\bar{v})\in{\cal C}^{k+1}({\cal I} \! \! R)$ while $f_N(\beta)\in{\cal C}^k({\cal I} \! \! R)$. First and second order phase transitions are associated with a discontinuity in the first or second derivatives of $f_\infty(\beta)$, that is, with $f_\infty(\beta)\in{\cal C}^0({\cal I} \! \! R)$ or $f_\infty(\beta)\in{\cal C}^1({\cal I} \! \! R)$, respectively. Hence a first order phase transition corresponds to a discontinuity of the second derivative of the entropy $ S_\infty(\bar{v})$, and a second order phase transition corresponds to a discontinuity of the third derivative of the entropy $S_\infty(\bar{v})$. } \label{diffclassS} \end{remark} \begin{remark}.
{\rm The proof of the Main Theorem follows the same conceptual path given in Refs.\cite{prl1,NPB1}: a {\it topological change} of the equipotential hypersurfaces $\Sigma_v^{V_N}$ of configuration space is a {\it necessary} condition for the occurrence of a thermodynamic phase transition if we prove the {\it equivalent proposition} that if any two hypersurfaces $\Sigma_v^{V_N}$ and ${\Sigma_{v^\prime}}^{V_N}$ with $v(N), v^\prime (N) \in (v_0(N),v_1(N))$ are {\it diffeomorphic} for all $N\in{\cal I} \! \!{N}^\#$, then {\it no phase transition} can occur in the (inverse) temperature interval $[\lim_{N\rightarrow\infty}\beta (\bar{v}_0(N)),\lim_{N\rightarrow\infty} \beta (\bar{v}_1(N))]$. } \end{remark} \noindent\textbf{Proof.} {\rm For standard Hamiltonian systems (i.e. quadratic in the momenta) the relevant information is carried by the configurational microcanonical ensemble, where the configurational canonical free energy is } \begin{equation}\label{freeEnergy} f_N(\beta)\equiv f_N(\beta; V_N)= \frac{1}{N} \log \int_{(\Lambda^d)^{\times n}}dq_1\dots dq_N\ \exp [-\beta V_N(q_1,\dots, q_N)] \end{equation} {\rm and the configurational microcanonical entropy (in units such that $k_B=1$) is \begin{equation} S_N(\bar{v}) \equiv S_N(\bar{v};V_N) =\frac{1}{N} \log{ \int_{(\Lambda^d)^{\times n}} dq_1\cdots dq_N\ \delta [V_N(q_1,\dots, q_N) - v] ~\label{pallaM}} \, .\nonumber \end{equation} Then $S_N(\bar{v})$ is related to the configurational canonical free energy, $f_N$, for any $N\in{{\cal I} \! \! N}$, $\bar{v}\in{{\cal I} \! \! R}$, and $\beta \in{{\cal I} \! \! R}$ through the Legendre transform in Eq.\eqref{legendre-tras}. } From Lemma 1, proved after Lemmas 2 to 9, we have that in the limit $N\rightarrow\infty$ and at constant particle density, ${\rm vol} (\Lambda^d)^{\times n}/N\ =\ {\rm const}$, in the interval $I_{\bar{v}}=[\bar{v}_0, \bar{v}_1]$ the sequence $\{S_N\}_{N\in{{\cal I} \! \!
N}^\#}$ is uniformly convergent in ${\mathscr C}^3(I_{\bar{v}}\subset{{\cal I} \! \! R})$ so that $S_\infty\in{\mathscr C}^3(I_{\bar{v}}\subset{{\cal I} \! \! R})$, that is, the thermodynamic limit of the entropy is three times differentiable, with continuous third-order derivative, in $I_{\bar{v}}=[\bar{v}_0, \bar{v}_1]$. Hence in the interval $I_\beta=[\lim_{N\rightarrow\infty}\beta (\bar{v}_0(N)),\lim_{N\rightarrow\infty} \beta (\bar{v}_1(N))]$ the sequence of configurational free energies $\{ f_N(\beta)\}_{N\in{\cal I} \! \!{N}^\#}$ is {\it uniformly convergent} at least in ${\mathscr C}^2(I_\beta\subset{{\cal I} \! \! R})$, so that we have \[ - f_\infty(\beta) = \beta(\bar{v}) \cdot \bar{v} - S_\infty(\bar{v}) \] that is, $f_\infty(\beta)\in {\mathscr C}^2(I_\beta\subset{{\cal I} \! \! R})$. Since a quadratic kinetic energy term of a standard Hamiltonian gives only a smooth contribution to the total Helmholtz free energy $F_N(\beta)$, the asymptotic function $F_\infty(\beta)$ also has differentiability class ${\mathscr C}^2(I_\beta\subset{{\cal I} \! \! R})$, so that we conclude that the corresponding physical system undergoes neither first- nor second-order phase transitions in the inverse-temperature interval $\beta\in I_\beta$. \ $\square$ \subsection*{\large Lemmas} \begin{lemma}[Uniform upper bounds] Let $V_N$ be a standard, short-range, stable, and confining potential function bounded below. Let $\left \{ \Sigma_v^{V_N} \right \}_{v\in{{\cal I} \! \! R}}$ be the family of $(N-1)$-dimensional equipotential hypersurfaces $\Sigma_v^{V_N}:=V_N^{-1}(v)$, $v\in{{\cal I} \! \! R}$, of ${{\cal I} \! \! R}^N$. If \begin{eqnarray} \forall N\in{\cal I} \!
\!{N}^\# ~~~and~\bar{v},\bar{v}' \in {I_{\bar{v}}}=[\bar{v}_0,\bar{v}_1],~~we~have~~ \Sigma_{N \bar{v}}^{V_N}~\approx~\Sigma_{N \bar{v}'}^{V_N}\ , \nonumber \end{eqnarray} then \begin{eqnarray} \sup_{N,\bar{v}\in I_{\bar{v}}} \left\vert S_N({\bar{v}})\right\vert < \infty~~~{\it and}~~~ \sup_{N,\bar{v}\in I_{\bar{v}}} \left\vert \frac{\partial^k S_N}{\partial {\bar{v}}^k}({\bar{v}})\right\vert < \infty,~~k=1,2,3,4. \nonumber \end{eqnarray} \label{derivees-majorees} \end{lemma} {\bf Proof.} The proof of this Lemma amounts to proving the Main Theorem and proceeds as follows. After Remark 2, the derivatives of the entropy are expressed in terms of the derivatives of the microcanonical configurational volume which, in turn, after Lemma 2, can be expressed as surface integrals of functions of $\overline{\zeta}_N= {\rm div} (\overline{\boldsymbol{\xi}}_N)$ and its Lie derivatives, where $\overline{\boldsymbol{\xi}}_N$ is the vector field generating the diffeomorphisms among the specific potential energy level sets. Then these integrals are replaced by averages along Monte Carlo Markov Chains (MCMC) that can be defined to have as invariant measure the microcanonical configurational measure (Lemma 3 and Remark 3). After Lemmas 4 and 5, $\overline{\zeta}_N$ is proved to behave as a random Gaussian process along the mentioned MCMCs; hence, after Remark 5 and Lemmas 6 to 9, the uniform bounds on the derivatives of the entropy, up to the fourth one, are derived. $\square$ \begin{lemma}[Derivation of integrals over regular level sets (\cite{federer}\cite{laurence})] Let $O \subset {\cal I} \! \!{R}^p$ be a bounded open set. Let $\psi \in {\mathscr C}^{n+1} (\overline O)$ be constant on each connected component of the boundary $\partial O$ and $f \in {\mathscr C}^n (O)$. Define $O_{t,t'}=\{x \in O\mid t<\psi (x) <t' \}$ and \begin{equation} F(v)= \int_{ \{ \psi=v \} } f~\mathrm{d}\sigma^{p-1} \end{equation} where $d\sigma^{p-1}$ represents the Lebesgue measure of dimension $p-1$.
If $~C>0$ exists such that for any $x \in O_{t,t'}, \Vert \nabla \psi (x) \Vert \geq C$, then for any $k$ such that $0 \leq k \leq n$, for any $v \in ]t,t'[$, one has \begin{equation} \label{eq: FedLau_Formula} \frac{\mathrm{d}^k F}{\mathrm{d} v^k}(v) =\int_{\{ \psi=v \}} A_{\psi}^k f~\mathrm{d}\sigma^{p-1} \ . \end{equation} with \begin{equation} \label{eq: def_FedLauOperator} A_{\psi} f = \nabla \cdot \left( \dfrac{\nabla \psi}{\|\nabla \psi \| }f \right) \dfrac{1}{\| \nabla \psi \|} \end{equation} \label{Cor:Federer} \end{lemma} This Lemma allows one to compute higher-order derivatives of the microcanonical volume $\Omega_{N}(\bar{v})$, and thus of the entropy, at any order by identifying $\psi$ with the potential $\overline{V}_N(\textit{\textbf{q}})= V(\textit{\textbf{q}})/N $. Let us introduce the following notations: $\overline{\zeta}_N= {\rm div} (\overline{\boldsymbol{\xi}}_N)$, \begin{equation} \label{eq:chi_def} \overline{\chi}_N=\|\overline{\boldsymbol{\xi}}_N\|=\dfrac{1}{\|\nabla\overline{V}_N\|}\,\, , \end{equation} for the norm of $\overline{\boldsymbol{\xi}}_N$, and \begin{equation} \label{eq:def_microcanonical_areaform} \mathrm{d}\mu^{N-1}_{\bar{v}}=\overline{\chi}_N \mathrm{d}\sigma_{\LevelSetfunc{\bar{v}}{\overline{V}_N}} \end{equation} for the microcanonical area $(N-1)$-form of non-critical energy level sets, and \begin{equation} \mathcal{L}_{\overline{\boldsymbol{\xi}}}(\cdot)=(\overline{\boldsymbol{\xi}}\cdot\nabla)(\cdot)=\sum_{i=1}^{N}\dfrac{\partial_{i}\overline{V}}{\|\nabla \overline{V}\|^2}\partial_i(\cdot) \end{equation} for the Lie derivative along the flow of $\overline{\boldsymbol{\xi}}_N$.
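Lemma 2 can be checked on a toy example: with $\psi(x,y)=x^2+y^2$ and $f\equiv 1$, one has $F(v)=2\pi\sqrt{v}$, so the $k=1$ formula must return $F'(v)=\pi/\sqrt{v}$. A numerical sketch, in which the function $\psi$ and the quadrature are illustrative choices:

```python
import numpy as np

psi = lambda p: p[0]**2 + p[1]**2      # toy level-set function: circles
f = lambda p: 1.0

def grad(F, p, h=1e-6):
    return np.array([(F(p + h*e) - F(p - h*e)) / (2*h) for e in np.eye(2)])

def A_psi_f(p, h=1e-4):
    """A_psi f = div( f * grad psi / ||grad psi|| ) / ||grad psi||, by finite differences."""
    G = lambda y: f(y) * grad(psi, y) / np.linalg.norm(grad(psi, y))
    div = sum((G(p + h*e)[i] - G(p - h*e)[i]) / (2*h) for i, e in enumerate(np.eye(2)))
    return div / np.linalg.norm(grad(psi, p))

v = 2.0
r = np.sqrt(v)
theta = np.linspace(0.0, 2*np.pi, 2001)[:-1]
pts = np.column_stack([r*np.cos(theta), r*np.sin(theta)])
dF = np.mean([A_psi_f(p) for p in pts]) * 2*np.pi*r   # surface integral over {psi = v}
print(dF, np.pi/r)   # both equal d/dv of F(v) = 2*pi*sqrt(v)
```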
Then, given the microcanonical configurational volume \begin{equation} \Omega_N(\bar{v})=\intSigmaV{\bar{v}}{N} \,\mathrm{d}\mu_{\bar{v}}^{N-1} \end{equation} its derivatives are computed through the formula \begin{equation} \label{eq:def_recursiveDer_Omega_microcan} \dfrac{\mathrm{d}^k \Omega_N}{\mathrm{d} \overline{v}^k}(\bar{v})=\intSigmaV{\bar{v}}{N}\, \dfrac{1}{\overline{\chi}} A^{k}_{V}(\overline{\chi})\,\mathrm{d}\mu_{\bar{v}}^{N-1} \end{equation} where $A^{k}_{V}(\overline{\chi})$ stands for a $k$-times repeated application of the operator \begin{equation} \label{eq:FedererOperator_microcan} A_{V}(f) = f\overline{\zeta}_N+\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(f)\ . \end{equation} \vspace{-0.3truecm} \begin{remark}\textbf{(Derivatives of the entropy)} The configurational microcanonical entropy density is given by \begin{equation} \label{eq: MicroCan_Entropy_xi} \overline{S}_{N}(\overline{v})=\dfrac{1}{N}\log\Omega_{N}(\overline{v})=\dfrac{1}{N}\log\intSigmaV{\bar{v}}{\overline{V}_N}\,\mathrm{d}\mu^{N-1}_{\overline{v}} \end{equation} and its derivatives are \begin{equation} \label{eq: microcanEntropy_Derivative} \begin{split} &\dfrac{\mathrm{d} \overline{S}_N}{\mathrm{d} \overline{v}}(\overline{v})=\dfrac{1}{N}\dfrac{\Omega^{'}_N(\overline{v})}{\Omega_N(\overline{v})}\\ &\dfrac{\mathrm{d}^2 \overline{S}_N }{\mathrm{d} \overline{v}^2}(\overline{v})=\dfrac{1}{N}\,\left[ \dfrac{\Omega^{''}_N(\overline{v})}{\Omega_N(\overline{v})}-\left(\dfrac{\Omega^{'}_N(\overline{v})}{\Omega_N(\overline{v})}\right)^2 \right]\\ &\dfrac{\mathrm{d}^3 \overline{S}_N }{\mathrm{d} \overline{v}^3}(\overline{v})=\dfrac{1}{N}\left[\dfrac{\Omega^{'''}_N(\overline{v})}{\Omega_N(\overline{v})}-3\dfrac{\Omega_N^{''}(\overline{v})}{\Omega_N(\overline{v})} \dfrac{\Omega_N^{'}(\overline{v})}{\Omega_N(\overline{v})}+2\left(\dfrac{\Omega_N^{'}(\overline{v})}{\Omega_N(\overline{v})}\right)^3\right]\\ &\dfrac{\mathrm{d}^4 \overline{S}_N }{\mathrm{d} 
\overline{v}^4}(\overline{v})=\dfrac{1}{N}\left[\dfrac{\Omega^{(iv)}_{N}(\overline{v})}{\Omega_N(\overline{v})}-4\dfrac{\Omega^{'''}_N(\overline{v})\Omega^{'}_N(\overline{v})}{\Omega_N^2(\overline{v})}+ 12\dfrac{\Omega^{'2}_N(\overline{v})\Omega^{''}_N(\overline{v})}{\Omega_N^3(\overline{v})}-3\left(\dfrac{\Omega^{''}_N(\overline{v})}{\Omega_N(\overline{v})}\right)^2- 6\left(\dfrac{\Omega^{'}_N(\overline{v})}{\Omega_N(\overline{v})}\right)^4\right]\,. \end{split} \end{equation} where, after Lemma 2, the derivatives of the configurational microcanonical volume $\Omega_N(\bar{v})$ up to the fourth order with respect to $\bar{v}$ are found to be \begin{equation} \label{eq:Omega_Derivatives_Xi} \begin{split} &\dfrac{\mathrm{d} \Omega_N }{\mathrm{d} \bar{v}}(\bar{v})=\intSigmaV{\bar{v}}{\overline{V}_N} \overline{\zeta}_N \,\mathrm{d}\mu^{N-1}_{\bar{v}}\\ &\dfrac{\mathrm{d}^2 \Omega_N }{\mathrm{d} \bar{v}^2}(\bar{v})=\intSigmaV{\bar{v}}{\overline{V}_N}\left[ \overline{\zeta}_N^2+\mathcal{L}_{\overline{\boldsymbol{\xi}}_N} (\overline{\zeta}_N)\right]\mathrm{d}\mu^{N-1}_{\bar{v}}\\ &\dfrac{\mathrm{d}^3 \Omega_N }{\mathrm{d} \bar{v}^3}(\bar{v})=\intSigmaV{\bar{v}}{\overline{V}_N}\left[ \overline{\zeta}_N^3+3\overline{\zeta}_N\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}\left(\overline{\zeta}_N\right)+ \mathcal{L}^{(ii)}_{\overline{\boldsymbol{\xi}}_N} (\overline{\zeta}_N)\right]\mathrm{d}\mu^{N-1}_{\bar{v}}\\ &\dfrac{\mathrm{d}^4 \Omega_N }{\mathrm{d}\overline{v}^4}(\overline{v})=\intSigmaV{\bar{v}}{\overline{V}_N}\left[\overline{\zeta}_N^4+6\overline{\zeta}_N^2\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)+4\overline{\zeta}_N \mathcal{L}^{(ii)}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)+3\left(\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right)^2+\mathcal{L}^{(iii)}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right]\mathrm{d}\mu^{N-1}_{\bar{v}} \end{split} \end{equation} \end{remark} On any $(N-1)$-dimensional hypersurface
$\Sigma_{N\bar{v}}^{V_N}=V^{-1}_N(N\bar{v} ) =\{X\in{{\cal I} \! \! R}^{N}\ \vert \ V_N(X) =N\bar{v}\}$ of ${{\cal I} \! \! R}^{N}$, we can define a homogeneous nonperiodic random Markov chain whose probability measure is the configurational microcanonical measure \cite{book}, namely $d\sigma /\Vert\nabla V_N\Vert$. We call this Markov chain a microcanonical-Monte Carlo Markov Chain (MCMC). In so doing, all the integrals giving configurational microcanonical averages are replaced by asymptotic averages along these MCMCs. Dropping the suffix $N$ of $V_N$ we have the following Lemma: \begin{lemma} [Monte Carlo Markov Chains over regular level sets] \label{mesure_ergodique} On each finite-dimensional level set $\Sigma_{N\bar{v}}=V^{-1}(N\bar{v} )$ of a standard, smooth, confining, short-range potential $V$ bounded below, and in the absence of critical points, there exists a random Markov chain of points $\{X_i\in{\Bbb R}^{N}\}_{i\in{\Bbb N_+}}$, constrained by the condition $V(X_i) = N{\bar{v}}$, which has \begin{equation} d\mu =\frac{d\sigma}{\Vert\nabla V\Vert} \left(\int_{\Sigma_{N\bar{v}}} \frac{d\sigma}{\Vert\nabla V\Vert}\right)^{-1} \label{prob} \end{equation} as its probability measure, so that for a smooth function $F :{\Bbb R}^{N}\rightarrow{\Bbb R}$ we have \begin{equation} \left(\int_{\Sigma_{N\bar{v}}} \frac{d\sigma}{\Vert\nabla V\Vert}\right)^{-1} \int_{\Sigma_{N\bar{v}}}\frac{d\sigma}{\Vert\nabla V\Vert}\ F = \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^n F(X_i)~. \label{mcmcmc} \end{equation} \end{lemma} {\bf Proof.} The level sets $\{\Sigma_{N\bar{v}}\}_{\bar{v}\in{\Bbb R}}$ are compact hypersurfaces of ${\Bbb R}^{N}$, therefore a partition of unity \cite{thorpe} can be defined on each hypersurface. 
Then, let $\{U_i\}$, $1\leq i \leq m$, be an arbitrary finite covering of $\Sigma_{N\bar{v}}$ by means of domains of coordinates (for example open balls); at any point of $\Sigma_{N\bar{v}}$ there exists an ensemble of smooth functions $\{\varphi_i\}$ such that $1\geq\varphi_i\geq 0$ and $\sum_i\varphi_i =1$. By means of the partition of unity $\{\varphi_i\}$ on $\Sigma_{N\bar{v}}$, associated to a collection $\{U_i\}$ of one-to-one local parametrizations of the compact and oriented hypersurfaces $\Sigma_{N\bar{v}}$, the integral of a given smooth $(N-1)$-form $\omega$ is given by: \[ \int_{\Sigma_{N\bar{v}}} \omega^{(N-1)} = \int_{\Sigma_{N\bar{v}}}\left(\sum_{i =1}^m\varphi_i (x)\right) \omega^{(N-1)}(x)= \sum_{i =1}^m \int_{U_i} \varphi_i\omega^{(N-1)}(x)~. \] The existence of a Monte Carlo Markov chain (MCMC) with assigned probability measure (\ref{prob}) on a given $\Sigma_{N\bar{v}}$ is constructively proved as follows. Let us consider sequences of random values $\{x_i : i\in\Lambda\}$, where $\Lambda$ is the finite set of indexes of the elements of the partition of unity on $\Sigma_{N\bar{v}}$, and where $x_i =(x^1_i, \dots,x^{N-1}_i)$ are local coordinates with respect to $U_i$ of an arbitrary representative point of the set $U_i$ itself. The weight $\pi (i)$ of the $i$th element of the partition is then defined by \begin{equation} \pi (i)=\left( \sum_{k=1}^m \int_{U_k}\varphi_k\ \frac{d\sigma}{\Vert\nabla V\Vert} \right)^{-1} \int_{U_i}\varphi_i\ \frac{d\sigma}{\Vert\nabla V\Vert} \label{peso} \end{equation} and the transition matrix elements \cite{mcmc} are given by \begin{equation} p_{ij} = \min \left[ 1, \frac{\pi (j)}{\pi (i)}\right] \label{pij} \end{equation} satisfying the detailed balance equation $\pi (i) p_{ij}= \pi (j) p_{ji}$. 
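The transition rule (\ref{pij}) is the familiar Metropolis acceptance step. As a minimal numerical sketch, with a hypothetical weight vector $\pi$ standing in for the integrals of (\ref{peso}) and a uniform symmetric proposal among the partition elements, one can check that a chain evolving with these acceptance probabilities visits the elements with frequencies converging to $\pi$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (normalized) weights pi(i) of the partition elements,
# standing in for the integrals of Eq. (peso).
pi = np.array([0.1, 0.4, 0.2, 0.3])
m = len(pi)

# Metropolis step: propose j uniformly (symmetric proposal), accept with
# probability min(1, pi(j)/pi(i)), i.e. the rule of Eq. (pij); detailed
# balance pi(i) p_ij = pi(j) p_ji then holds.
n_steps = 200_000
state = 0
counts = np.zeros(m)
for _ in range(n_steps):
    j = int(rng.integers(m))
    if rng.random() < min(1.0, pi[j] / pi[state]):
        state = j
    counts[state] += 1

empirical = counts / n_steps  # visit frequencies, converging to pi
```

With the seed fixed as above, the visit frequencies agree with $\pi$ to within a few parts in a thousand after $2\times 10^5$ steps.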
A random Markov chain $\{i_0, i_1,\dots, i_k, \dots\}$ of indexes induces a random Markov chain of corresponding elements of the partition, that is, of points $\{x_{i_0},x_{i_1},\dots, x_{i_k}, \dots\}$ on the hypersurface $\Sigma_{N\bar{v}}$. Denote by $(x^1_P, \dots,x^{N-1}_P)$ the local coordinates of a point $P$ on $\Sigma_{N\bar{v}}$ and define a local reference frame as $\{\partial/\partial x^1_P,\dots,\partial/\partial x^{N-1}_P, n(P)\}$, with $n(P)$ the outward unit normal vector at $P$; by means of the matrix that operates a point-dependent change from this reference frame to the canonical basis $\{e_1,\dots,e_{N}\}$ of ${\Bbb R}^{N}$ it is possible to associate to the Markov chain $\{x_{i_0},x_{i_1},\dots, x_{i_k}, \dots\}$ an equivalent chain $\{X_{i_0},X_{i_1},\dots, X_{i_k}, \dots\}$ of points specified through their coordinates in ${\Bbb R}^{N}$ but still constrained to belong to the subset $V(X) = N\bar{v}$, that is, to $\Sigma_{N\bar{v}}$. Consequently, the invariant probability measure \cite{mcmc} of the Markov chain so constructed is the probability density (\ref{prob}). Moreover, in the absence of critical points, for smooth functions $F$ and smooth potentials $V$, the variation of $F/\Vert\nabla V\Vert$ on each set $U_i$ is bounded. Therefore, the partition of unity can be refined as needed, while keeping it finite, to make the Lebesgue integration convergent; hence equation (\ref{mcmcmc}) follows.$\quad\quad\square$ \begin{remark} By introducing the following notation for the average of a generic measurable function $f:M^{N}\to{\cal I} \! 
\!{R}$ over the hypersurface $\LevelSetfunc{\bar{v}}{\overline{V}_N}$ endowed with the measure $\mathrm{d}\mu^{N-1}_{\bar{v}}$ \begin{equation} \label{def:averageonSigmav} \left\langle f \right\rangle_{\overline{v},\mu}=\dfrac{\displaystyle{\intSigmaV{\bar{v}}{\overline{V}_N}\,f\mathrm{d}\mu_{\overline{v}}^{N-1}}}{\displaystyle{\intSigmaV{\bar{v}}{\overline{V}_N}\,\mathrm{d}\mu_{\overline{v}}^{N-1}}}= \dfrac{\displaystyle{\intSigmaV{\bar{v}}{\overline{V}_N}\,f\mathrm{d}\mu_{\overline{v}}^{N-1}}}{\Omega_N(\overline{v})}\,\, , \end{equation} the quantities \begin{equation} \label{def: definition_statQuant} \begin{split} &\mathrm{Var}_{\overline{v},\mu}(f)=\mathrm{Cuml}^{(2)}_{\overline{v},\mu}(f)=\left\langle f^2 \right\rangle_{\overline{v},\mu}-\left\langle f \right\rangle_{\overline{v},\mu}^2\\ &\mathrm{Cov}_{\overline{v},\mu}(f;g)=\left\langle f g \right\rangle_{\overline{v},\mu}-\left\langle f\right\rangle_{\overline{v},\mu}\left\langle g \right\rangle_{\overline{v},\mu}\\ &\mathrm{Cuml}^{(3)}_{\overline{v},\mu}(f)=\left\langle f^3 \right\rangle_{\overline{v},\mu}-3\left\langle f \right\rangle_{\overline{v},\mu}\left\langle f^2 \right\rangle_{\overline{v},\mu}+2 \left\langle f \right\rangle_{\overline{v},\mu}^3\\ &\mathrm{Cuml}^{(4)}_{\overline{v},\mu}(f)=\left\langle f^4 \right\rangle_{\overline{v},\mu}-4\left\langle f^3 \right\rangle_{\overline{v},\mu}\left\langle f \right\rangle_{\overline{v},\mu}+12\left\langle f^2\right\rangle_{\overline{v},\mu}\left\langle f\right\rangle_{\overline{v},\mu}^2-3\left\langle f^2\right\rangle_{\overline{v},\mu}^2-6\left\langle f\right\rangle_{\overline{v},\mu}^4\\ \end{split} \end{equation} represent the variance, the correlation function, and the 3rd and 4th order cumulants on the hypersurface $\LevelSetfunc{\bar{v}}{\overline{V}_N}$ with measure $\mathrm{d}\mu^{N-1}_{\bar{v}}$, respectively. 
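As a quick numerical sanity check of the estimators \eqref{def: definition_statQuant} (a minimal sketch in which synthetic Gaussian samples stand in for the values of $f$ along a MCMC, and the sample mean plays the role of $\left\langle\,\cdot\,\right\rangle_{\overline{v},\mu}$), the plug-in formulas can be evaluated directly; for a Gaussian sample the third and fourth cumulants vanish, which is the property exploited in the Lemmas below:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic standard-Gaussian samples of f, standing in for its values
# along a MCMC; avg() plays the role of the average < . >_{v,mu}.
f = rng.standard_normal(1_000_000)

def avg(x):
    return x.mean()

var_f   = avg(f**2) - avg(f)**2
cuml3_f = avg(f**3) - 3*avg(f)*avg(f**2) + 2*avg(f)**3
cuml4_f = (avg(f**4) - 4*avg(f**3)*avg(f)
           + 12*avg(f**2)*avg(f)**2 - 3*avg(f**2)**2 - 6*avg(f)**4)

# For a Gaussian sample, var_f is close to 1 while the cumulants beyond
# the second (cuml3_f, cuml4_f) are numerically compatible with zero.
```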
With this notation, and substituting Eqs.\eqref{eq:Omega_Derivatives_Xi} in Eqs.\eqref{eq: microcanEntropy_Derivative}, the derivatives of the microcanonical entropy at a noncritical value $\bar{v}$ and at fixed $N$ are worked out as averages of functions of $\overline{\zeta}_N= {\rm div} (\overline{\boldsymbol{\xi}}_N)$, where the vector field $\overline{\boldsymbol{\xi}}_N$ generates the diffeomorphisms among the equipotential level sets, as follows \begin{equation} \begin{split} &\dfrac{\mathrm{d} \overline{S}_N}{\mathrm{d} \overline{v}}(\overline{v})=\dfrac{1}{N}\left\langle\overline{\zeta}_N\right\rangle_{\overline{v},\mu}\\ &\dfrac{\mathrm{d}^2 \overline{S}_N }{\mathrm{d} \overline{v}^2}(\overline{v})=\dfrac{1}{N}\left[\mathrm{Var}_{\overline{v},\mu}(\overline{\zeta}_N)+\left\langle\mathcal{L}_{\overline{\boldsymbol{\xi}}_N} (\overline{\zeta}_N)\right\rangle_{\overline{v},\mu}\right]\\ &\dfrac{\mathrm{d}^3 \overline{S}_N }{\mathrm{d}\overline{v}^3}(\bar{v})=\dfrac{1}{N}\left[\mathrm{Cuml}^{(3)}_{\overline{v},\mu}(\overline{\zeta}_N)+3\mathrm{Cov}_{\overline{v},\mu}\left(\overline{\zeta}_N;\mathcal{L}_{\overline{\boldsymbol{\xi}}_{N}}(\overline{\zeta}_N)\right)+\left\langle\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}^{(ii)}\left(\overline{\zeta}_N\right)\right\rangle_{\overline{v},\mu}\right]\\ &\dfrac{\mathrm{d}^4 \overline{S}_N }{\mathrm{d}\overline{v}^4}(\bar{v})=\dfrac{1}{N}\Biggl[\mathrm{Cuml}^{(4)}_{\overline{v},\mu}(\overline{\zeta}_N)+6\mathrm{Cov}_{\overline{v},\mu}\left(\overline{\zeta}_N^2;\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right)+ 3\mathrm{Var}_{\overline{v},\mu}\left(\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right)+\\ &+4\mathrm{Cov}_{\overline{v},\mu}\left(\overline{\zeta}_N;\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}^{(ii)}(\overline{\zeta}_N)\right) -12\left\langle\overline{\zeta}_N\right\rangle_{\overline{v},\mu}\mathrm{Cov}_{\overline{v},\mu}\left(\overline{\zeta}_N;\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right)+\left\langle\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}^{(iii)}\left(\overline{\zeta}_N\right)\right\rangle_{\overline{v},\mu}\Biggr]=\\ &=\dfrac{1}{N}\Biggl[\mathrm{Cuml}^{(4)}_{\overline{v},\mu}(\overline{\zeta}_N)+4\mathrm{Cov}_{\overline{v},\mu}\left(\overline{\zeta}_N;\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}^{(ii)}(\overline{\zeta}_N)\right)+3\mathrm{Var}_{\overline{v},\mu}\left(\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right)+\\ &+6\left\langle\overline{\zeta}_N\right\rangle_{\overline{v},\mu}\,\mathrm{Cov}_{\overline{v},\mu}\left(\Delta\overline{\zeta}_N;\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right)+\left\langle\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}^{(iii)}\left(\overline{\zeta}_N\right)\right\rangle_{\overline{v},\mu} \Biggr] \end{split} \label{eq:microcanEntropy_Derivative_Xi} \end{equation} where for the sake of simplicity we have introduced the quantity \begin{equation} \Delta \overline{\zeta}_N=\dfrac{\overline{\zeta}_N^2}{\left\langle \overline{\zeta}_N \right\rangle_{\overline{v},\mu}}-2\overline{\zeta}_N \,\,\,. \end{equation} \end{remark} Now the crucial step is to show that, under the hypothesis of diffeomorphicity that now includes \textit{asymptotic diffeomorphicity}, the function $\overline{\zeta}_N$ - considered along a MCMC spanning any given $\Sigma_v^{V_N}$ - is a Gaussian random process. This is achieved through an intermediate step, showing that the mean curvature $H$ - also considered along the same MCMC - is a Gaussian random process. To lighten the notation, in what follows we shall omit the suffix $N$ of $V_N$. 
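The mechanism behind this claim (a sum function of bounded variables with only finite-range dependence becomes Gaussian) can be previewed with a toy numerical experiment; the moving-sum construction below, with hypothetical bounded noise standing in for the principal curvatures, illustrates only the CLT mechanism, not the actual geometric computation:

```python
import numpy as np

rng = np.random.default_rng(2)
trials, N, n0 = 5000, 400, 3

# Bounded variables with finite-range (n0) dependence: moving sums of
# uniform noise, a toy stand-in for principal curvatures along a chain.
u = rng.uniform(-1.0, 1.0, (trials, N + n0))
kappa = sum(u[:, k:k + N] for k in range(n0 + 1))

# Sum function (mean-curvature-like), one value per realization.
H = kappa.mean(axis=1)
z = (H - H.mean()) / H.std()

skew = (z**3).mean()          # ~ 0 for a Gaussian
kurt = (z**4).mean() - 3.0    # excess kurtosis, ~ 0 for a Gaussian
```

Across realizations, the skewness and excess kurtosis of the standardized sum are numerically compatible with zero, as expected for a sum function of bounded, weakly dependent variables.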
\begin{lemma}[Mean curvature along a MCMC on a level set] The pointwise mean curvature of an $N$-dimensional manifold $\Sigma_{N\bar{v}}^{V}$ \begin{equation} H (\textit{\textbf{q}})= \frac{1}{N}\sum_{i=1}^N\kappa_i(\textit{\textbf{q}}) = - \frac{1}{N}\left[\frac{\Delta V}{\Vert\nabla V\Vert} - \frac{\partial^i V\partial^2_{ij} V \partial^j V}{\Vert\nabla V\Vert^3}\right] \label{proprioH} \end{equation} computed along a Monte Carlo Markov Chain $\{\textit{\textbf{q}}_k\}_{k\in{{\cal I} \! \! N}}\in \Sigma_{N\bar{v}}^{V}$ such that the stationary invariant density of the MCMC is the microcanonical configurational measure, where $\Sigma_{N\bar{v}}^{V}$ is free of critical points of $V$, is a Gaussian random process. \end{lemma} {\bf Proof.} Along a MCMC, the principal curvatures $\kappa_i(\textit{\textbf{q}})$ behave as independent random variables with probability densities $u_i(\kappa_i)$ which we do not need to know explicitly. Statistical independence means that $\left \langle \kappa_i(\textit{\textbf{q}}) \kappa_j(\textit{\textbf{q}})\right \rangle^{\mu c}_{N,v} = \left \langle \kappa_i(\textit{\textbf{q}}) \right \rangle^{\mu c}_{N,v}\left \langle \kappa_j(\textit{\textbf{q}})\right \rangle^{\mu c}_{N,v}$ and this can be understood as follows. 
Let $(M^n, g)$ be an $n$-dimensional Riemannian manifold, isometrically embedded as an $m$-codimensional submanifold of a Riemannian manifold $(\overline{M}^{m+n}, \overline g)$, let $R$ and $\overline R$ denote the Riemann curvature tensors of $M^n$ and $\overline{M}^{m+n}$, respectively, and denote by $h(\cdot, \cdot )$ the second fundamental form; then the Gauss equation reads \begin{equation} \overline {g}(\overline{ R} (X, Y) Z, W) = g(R(X, Y) Z, W) + \overline {g}(h(X,Z), h(Y, W)) - \overline {g}(h(X, W), h(Y, Z)) \label{gauss-eq} \end{equation} which, for sectional curvatures, reads \begin{equation} \overline {g}(\overline{ R} (X, Y) X, Y) = g(R(X, Y) X, Y) + \overline {g}(h(X,X), h(Y, Y)) - \overline {g}(h(X, Y), h(Y, X)) \ . \label{gauss-eq1} \end{equation} Now, specializing to the hypersurface case $m=1$, for any point $p\in M$ and basis $\{\textit{\textbf{e}}_1,\dots,\textit{\textbf{e}}_n\}$ of $T_pM$, it is possible to choose coordinates $(y^1,\dots,y^{n+1})$ in $\overline{M}$ such that the tangent vectors $\textbf{Y}^1,\dots,\textbf{Y}^n$ coincide with $\{\textit{\textbf{e}}_1,\dots,\textit{\textbf{e}}_n\}$ and $\textit{\textbf{n}}=\textbf{Y}^{n+1}\in N_pM$ is orthogonal to $T_pM$. Then $M$ is locally given as a graph manifold: $y^1=x^1,\dots,y^n=x^n, y^{n+1} = f(\textit{\textbf{x}})$, so that the second fundamental form has the components \cite{secondaforma,thorpe} \begin{equation} h(\textit{\textbf{e}}_i,\textit{\textbf{e}}_j) = \frac{\partial^2f}{\partial x^i\partial x^j} \textit{\textbf{n}} \label{gauss-eq3} \end{equation} where $\textit{\textbf{e}}_i = \partial /\partial x^i$. Considering the potential level sets $\Sigma_{N\bar{v}}^{V}$ as hypersurfaces of ${\cal I} \! 
\!{R}^{N+1}$, identifying $f(x^1,\dots,x^N)$ with $V(q^1,\dots,q^N)$, taking $\textit{\textbf{n}}= \nabla V/\Vert\nabla V\Vert$, from Eqs.\eqref{gauss-eq},\eqref{gauss-eq1},\eqref{gauss-eq3} we obtain \begin{equation} K(\textit{\textbf{e}}_i,\textit{\textbf{e}}_j) = \kappa_i\kappa_j = - \left(\frac{\partial^2V}{\partial q_i^2}\right) \left( \frac{\partial^2V}{\partial q_j^2}\right)\langle \textit{\textbf{n}},\textit{\textbf{n}}\rangle + \left(\frac{\partial^2V}{\partial q_i\partial q_j} \right)^2 \langle \textit{\textbf{n}},\textit{\textbf{n}}\rangle \label{sectional} \end{equation} hence \begin{equation} \left \langle \kappa_i(\textit{\textbf{q}}) \kappa_j(\textit{\textbf{q}})\right \rangle^{\mu c}_{N,v} = \left \langle \frac{1}{\Vert\nabla V\Vert}\left[ \left(\frac{\partial^2V}{\partial q_i\partial q_j} \right)^2 - \left(\frac{\partial^2V}{\partial q_i^2}\right) \left( \frac{\partial^2V}{\partial q_j^2}\right) \right] \right \rangle^{\mu c}_{N,v} \ . \label{sectional1} \end{equation} For short-range interactions with coordination number $n_0$, meaning that - with a suitable labelling of the variables - $q_i$ and $q_{j}$ do not interact if $\vert i - j\vert > n_0$, the entries of the Hessian of $V$ vanish if $\vert i - j\vert > n_0$. Thus, locally, for $n > n_0$, we have \begin{equation} \left \langle \kappa_i(\textit{\textbf{q}}) \kappa_{j=i+n}(\textit{\textbf{q}})\right \rangle^{\mu c}_{N,v} = \left \langle - \left[ \frac{1}{\Vert\nabla V\Vert^{1/2}}\left(\frac{\partial^2V}{\partial q_i^2}\right) \right] \left[ \frac{1}{\Vert\nabla V\Vert^{1/2}}\left( \frac{\partial^2V}{\partial q_j^2}\right) \right] \right \rangle^{\mu c}_{N,v} \ . 
\label{sectional2} \end{equation} With evident notation, we can write \begin{equation} \left \langle \kappa_i(\textit{\textbf{q}}) \kappa_{j=i+n}(\textit{\textbf{q}})\right \rangle^{\mu c}_{N,v} = \left \langle \left[ \left \langle \kappa_i(\textit{\textbf{q}}) \right \rangle^{\mu c}_{N,v} + \delta \kappa_i(\textit{\textbf{q}}) \right] \left[\left \langle \kappa_{j=i+n}(\textit{\textbf{q}})\right \rangle^{\mu c}_{N,v} + \delta \kappa_{j=i+n}(\textit{\textbf{q}}) \right] \right \rangle^{\mu c}_{N,v} \ . \label{sectional3} \end{equation} As we shall see in the following, $1/\Vert\nabla V\Vert^{1/2}$ tends to a constant value at increasing $N$, and along a MCMC sampling a potential level set with its configurational microcanonical measure, the fluctuations of $\partial^2V/\partial q_i^2$ and $\partial^2V/\partial q_j^2$ are independent and average to zero. In conclusion, under the condition $\vert i - j\vert > n_0$ we have \[ \left \langle \kappa_i(\textit{\textbf{q}}) \kappa_j(\textit{\textbf{q}})\right \rangle^{\mu c}_{N,v} = \left \langle \kappa_i(\textit{\textbf{q}}) \right \rangle^{\mu c}_{N,v}\left \langle \kappa_j(\textit{\textbf{q}})\right \rangle^{\mu c}_{N,v} \ . \] Having shown that the principal curvatures $\kappa_i(\textit{\textbf{q}})$ are everywhere uniformly bounded on any $\Sigma_{N\bar{v}}^{V}$ belonging to any cylindrical subset of the family $\left\{\Gamma^N_{{I}_0}\right\}_{N\in{\cal I} \! \!{N}}$, the consequence is that the moments of the distributions $u_i(\kappa_i)$ are finite and uniformly bounded too. Hence, the basic conditions are fulfilled to apply the Central Limit Theorem (CLT) formulated by Khinchin \cite{khinchin} for sum functions of independent random variables, arbitrarily distributed, with bounded moments up to the fifth order. Therefore, along the MCMC $\{\textit{\textbf{q}}_k\}_{k\in{{\cal I} \! \! 
N}}\in \Sigma_{N\bar{v}}^{V}$ the invariant measure of which is the configurational microcanonical one, the values of the mean curvature $H (\textit{\textbf{q}}_k)$ behave as Gaussian-distributed random variables. Notice that a finite range dependence is a weak dependence that does not prevent the CLT from applying \cite{clt-weak}. \begin{lemma}[$\zeta_N(\textit{\textbf{q}})$ along the MCMC on regular level sets] The quantity \begin{equation}\label{quasiH} {\zeta}_N(\textit{\textbf{q}}) = \frac{\Delta V}{\Vert\nabla V\Vert^2} -2 \frac{\partial^i V\partial^2_{ij} V \partial^j V}{\Vert\nabla V\Vert^4} \end{equation} as well as $\overline{\zeta}_N(\textit{\textbf{q}})$, computed along a Monte Carlo Markov Chain $\{\textit{\textbf{q}}_k\}_{k\in{{\cal I} \! \! N}}\in \Sigma_{N\bar{v}}^{V}$ the invariant measure of which is the configurational microcanonical one, is a Gaussian random process. \end{lemma} {\bf Proof.} After the preceding Lemma it follows that the two quantities ${\Delta V}/{\Vert\nabla V\Vert}$ and ${\partial^i V\partial^2_{ij} V \partial^j V}/{\Vert\nabla V\Vert^3}$ -- computed along a Monte Carlo Markov Chain $\{\textit{\textbf{q}}_k\}_{k\in{{\cal I} \! \! N}}\in \Sigma_{N\bar{v}}^{V}$ the invariant measure of which is the configurational microcanonical one -- are gaussian random processes: each of them is a sum function of bounded, weakly dependent variables of the same kind entering the mean curvature, so the CLT argument of the preceding Lemma applies to each of them separately, and sums of gaussian random processes are again gaussian. Now, if the quantity $\frac{\Delta V}{\Vert\nabla V\Vert}= \sum_i \partial_{ii}^2V/{\Vert\nabla V\Vert}$ is asymptotically gaussian, then the terms $\partial_{ii}^2V/{\Vert\nabla V\Vert}$ have to be i.i.d. random variables, and so do the terms $\partial_{ii}^2V$, because all of them are divided by the same number ${\Vert\nabla V\Vert}$ at each point of the MCMC; by the same token, the terms $\partial_{ii}^2V/{\Vert\nabla V\Vert^2}$ have to be i.i.d. 
random variables, because now all the terms $\partial_{ii}^2V$ are divided by the same number ${\Vert\nabla V\Vert^2}$ at each point of the MCMC. The same argument applies to ${\partial^i V\partial^2_{ij} V \partial^j V}/{\Vert\nabla V\Vert^3}$, so that in the end both ${\Delta V}/{\Vert\nabla V\Vert^2}$ and ${\partial^i V\partial^2_{ij} V \partial^j V}/{\Vert\nabla V\Vert^4}$ are gaussian distributed, and, consequently, $\overline{\zeta}_N(\textit{\textbf{q}})$ in Eq.\eqref{quasiH} is a gaussian random variable along a MCMC the invariant measure of which is the configurational microcanonical one. \begin{remark} Let us emphasize that the quantity $\overline{\zeta}_N(\textit{\textbf{q}})$ is a random variable along all the MCMCs $\{\textit{\textbf{q}}_k\}_{k\in{{\cal I} \! \! N}}\in \Sigma_{N_k\bar{v}}^{V_{N_k}}$, with vanishing deviations from a gaussian distribution at increasing $N$, under the hypothesis of asymptotic diffeomorphicity, because the principal curvatures $\kappa_i(\textit{\textbf{q}})$ are uniformly bounded from above with $N$ on any manifold, a crucial condition for the validity of Lemma 2. \end{remark} \begin{remark} In the hypotheses of the Main Theorem, $V$ contains only short-range interactions and its functional form does not change with $N$. In other words, we are tackling physically homogeneous systems, which, at any $N$, can be considered as the union of smaller and identical subsystems. If a system is partitioned into a number $k$ of sufficiently large subsystems, then the larger $N$, the more accurate the factorization of its configuration space. Therefore, the averages of functions of interacting variables belonging to a given block depend neither on the subsystems where they are computed (the potential functions are the same on each block after suitable relabelling of the variables) nor on the total number $N$ of degrees of freedom. 
\begin{itemize} \item[ ] {a)} Since the potential $V$ is assumed smooth and bounded below, one has \begin{eqnarray} \langle \mid \Delta V \mid \rangle^{\mu c}_{N,v}= \left \langle \left \vert \sum_{i=1}^N \partial_{ii}^2 V \right \vert \right \rangle^{\mu c}_{N,v}\ \leq \sum_{i=1}^N \langle \mid \partial_{ii}^2 V \mid \rangle^{\mu c}_{N,v} \ \leq N~\max_{i=1,\dots,N} \left \langle \left ( \mid \partial_{ii}^2 V \mid \right ) \right \rangle^{\mu c}_{N,v}\ . \nonumber \end{eqnarray} At large $N$ (when the fluctuations of the averages are vanishingly small) $\max_{i=1,\dots,N}\langle \mid \partial_{ii}^2 V \mid \rangle^{\mu c}_{N,v}$ does not depend on $N$, and the same holds for $\left \langle \mid \partial^i V\partial^2_{ij} V \partial^j V \mid\right\rangle^{\mu c}_{N,v}$ and $\max_{i,j =1,\dots,N} \left \langle \mid \partial^i V\partial^2_{ij} V \partial^j V \mid \right \rangle^{\mu c}_{N,v}$.\\ Hence we set \[ m_1=\max_{i=1,\dots,N} \langle \mid \partial_{ii}^2 V \mid \rangle^{\mu c}_{N,v} \] \begin{equation}\label{m1m2} m_2=\max_{i,j=1,\dots,N} \left \langle \mid \partial^i V\partial^2_{ij} V \partial^j V \mid \right \rangle^{\mu c}_{N,v} \ . \end{equation} \item[ ] {b)} Moreover, the absence of critical points of $V$, implied by the hypothesis of diffeomorphicity of the equipotential hypersurfaces, means that $\Vert\nabla V\Vert^2\geq C>0$. 
Hence, for the terms $\langle \| \nabla V \|^{2n} \rangle^{\mu c}_{N,v}$ with $n=1,\dots , 8$, we have \begin{eqnarray} \langle \| \nabla V \|^2 \rangle^{\mu c}_{N,v}&=& \left\langle\sum_{i=1}^N ( \partial_i V )^2 \right\rangle^{\mu c}_{N,v} \ = \sum_{i=1}^N \left\langle ( \partial_i V )^2\right\rangle^{\mu c}_{N,v} \geq N~\min_{i=1,\dots,N} \left \langle \left( \partial_i V \right)^2 \right \rangle^{\mu c}_{N,v}\ , \nonumber\\ \langle \| \nabla V \|^4 \rangle^{\mu c}_{N,v}&=& \left\langle \left[\sum_{i=1}^N ( \partial_i V )^2\right]^2 \right\rangle^{\mu c}_{N,v} \ = \sum_{i,j=1}^N \left\langle ( \partial_i V )^2( \partial_j V )^2\right\rangle^{\mu c}_{N,v} \nonumber\\ &\geq & N^2~\min_{i,j=1,\dots,N} \left \langle \left( \partial_i V \right)^2 \left( \partial_j V \right)^2 \right \rangle^{\mu c}_{N,v}\ , \nonumber \end{eqnarray} which can be iterated up to $\langle \| \nabla V \|^{16} \rangle^{\mu c}_{N,v}$. We then set \[ c_1=\min_{i=1,\dots,N}\left \langle \left( \partial_i V \right)^2 \right \rangle^{\mu c}_{N,v} \] \[ c_2=\min_{i,j=1,\dots,N} \left \langle \left( \partial_i V \right)^2 \left( \partial_j V \right)^2 \right \rangle^{\mu c}_{N,v} \] \[ \vdots 
\] \begin{equation}\label{c1c8} c_8=\min_{{i_1},\dots,{i_8}=1,\dots,N} \left \langle \left( \partial_{i_1} V \right)^2 \dots\left( \partial_{i_8} V \right)^2 \right \rangle^{\mu c}_{N,v} \ . \end{equation} \item[ ] {c)} By the same token put forward at the beginning of this Remark, we can define the following quantities independent of $N$: \begin{eqnarray}\label{m3m7} m_3&=&\max_{i,j,k,l=1,\dots,N}\left\langle (\partial_i V \partial^2_{ij}V \partial_j V) (\partial_k V \partial^2_{kl}V \partial_l V) \right \rangle_{N,v}^{\mu c}\ , \nonumber\\ m_4&=&\max_{i,j,k=1,\dots,N}\left\langle \partial_i V \partial^2_{ij}V \partial^2_{jk}V \partial_k V \right \rangle_{N,v}^{\mu c}\ , \nonumber\\ m_5&=&\max_{i,j,k=1,\dots,N}\left\langle (\partial_i V \partial^2_{ij}V \partial_j V) (\partial^2_{kk}V ) \right \rangle_{N,v}^{\mu c}\ , \nonumber\\ m_6&=&\max_{i,j=1,\dots,N}\left\langle \partial_i V \partial^3_{ijj}V \right \rangle_{N,v}^{\mu c}\ , \nonumber\\ m_7&=&\max_{i,j,k=1,\dots,N}\left\langle (\partial_i V \partial_jV \partial_k V) \partial^3_{ijk}V \right \rangle_{N,v}^{\mu c}\ ,\nonumber\\ m_8&=&\max_{i,j=1,\dots,N} \left \langle \mid \partial^2_{ij} V \partial^j V \mid \right \rangle^{\mu c}_{N,v} \ . \end{eqnarray} \end{itemize} \end{remark} \begin{lemma}[Upper bound of the first derivative of the entropy] After Lemmas 2 to 5 and Remarks 2 to 5 it is \[ \sup_{N,\bar{v}\in {I_{\bar{v}}}} \left | \frac{\partial S_N}{\partial {\bar{v}}}({\bar{v}}) \right | < \infty \ . \] \end{lemma} \textbf{Proof.} {\rm The first derivative of the entropy is equal to the inverse of the configurational temperature, thus it is necessarily uniformly bounded with $N$. 
In fact, the property of $\overline{\zeta}_N(\textit{\textbf{q}})$ - and of ${\zeta}_N(\textit{\textbf{q}})$ - of being a Gaussian distributed random variable along any MCMC defined above entails the following uniform bound } \begin{equation} \lim_{N\rightarrow+\infty}\left\langle \zeta_N \right\rangle_{N\bar{v},\mu}=\lim_{N\rightarrow+\infty}N^{-1}\left\langle \overline{\zeta}_N \right\rangle_{N\bar{v},\mu}\in{\cal I} \! \!{R}\ . \hskip 1truecm \square \end{equation} \begin{lemma}[Upper bound of the second derivative of the entropy] After Lemmas 2 to 5 and Remarks 2 to 5 it is \begin{equation} \sup_{N,\bar{v}\in {I_{\bar{v}}}} \left\vert \frac{\partial^2 S_N}{\partial {\bar{v}}^2}({\bar{v}}) \right\vert < \infty \end{equation} \end{lemma} \textbf{Proof.} {\rm Since $\overline{\zeta}_N(\textit{\textbf{q}})$ is a gaussian random process, the quantity $\mathrm{Var}_{\overline{v},\mu}(\overline{\zeta}_N)/N$ is uniformly bounded with $N$. Then the $N$-dependence of the average $\left\langle\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right\rangle_{N\bar{v},\mu}$ follows from the explicit expression of the quantity $\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)$ given by Eq.\eqref{derivLie} in Appendix B (by adapting it to quantities marked with an overline). 
Considering that the number of non-vanishing entries of the Hessian of the potential is ${\cal O}(n_pN)$, where $n_p$ is the number of nearest-neighbors in condensed matter systems and the average number of neighbors of a particle in a fluid system, and using the above defined $N$-independent quantities in Eqs.\eqref{m1m2},\eqref{c1c8},\eqref{m3m7}, a simple term-by-term estimation gives \begin{equation} \label{derivLiebound} \begin{split} \left\langle \mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right\rangle_{N\bar{v},\mu}&\leq \left\langle\left\vert\dfrac{\nabla \overline{V}\cdot\nabla(\Delta \overline{V})}{\|\nabla \overline{V}\|^4}\right\vert\right\rangle_{N\bar{v},\mu} +\left\langle\left\vert 8\dfrac{(\nabla \overline{V} \cdot\mathrm{Hess} \overline{V}\nabla \overline{V})^2}{\|\nabla \overline{V}\|^8}\right\vert\right\rangle_{N\bar{v},\mu} + \\ &+ \left\langle\left\vert2\dfrac{(\nabla \overline{V} \cdot\mathrm{Hess}(\overline{V})\nabla \overline{V})\Delta \overline{V}+2\|\mathrm{Hess} \overline{V}\nabla \overline{V}\|^2+\mathrm{D}^3\overline{V}(\nabla \overline{V},\nabla \overline{{V}},\nabla \overline{V})}{\|\nabla \overline{V}\|^6} \right\vert\right\rangle_{N\bar{v},\mu}\\ &\leq N\dfrac{m_6}{c_2} + 8\dfrac{m_2^2 n_p^2}{c_4} + 2N\dfrac{m_5 n_p +2m_8^2 n_p^2 +m_7 n_p}{c_3} \end{split} \end{equation} that is, $\left\langle\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right\rangle_{N\bar{v},\mu}/N$ remains uniformly bounded in the $N\to\infty$ limit. 
} $\square$ \begin{lemma} [Upper bound of the third derivative of the entropy] {\rm After Lemmas 2 to 5 and Remarks 2 to 5 it is} \[ \sup_{N,\bar{v}\in {I_{\bar{v}}}} \left | \frac{\partial^3 S_N}{\partial {\bar{v}}^3}({\bar{v}}) \right | < \infty \] \end{lemma} \textbf{Proof.} {\rm Since $\overline{\zeta}_N(\textit{\textbf{q}})$ is a gaussian random process, we have the following uniform bound \[ \lim_{N\rightarrow+\infty}N^{2}\,\mathrm{Cuml}^{(3)}_{N\bar{v},\mu}(\zeta_N)=\lim_{N\rightarrow+\infty}N^{-1}\,\mathrm{Cuml}^{(3)}_{N\bar{v},\mu}(\overline{\zeta}_N)=0 \ , \] and by considering the explicit expression of $\mathcal{L}^{(ii)}_{{\boldsymbol{\xi}}_N}({\zeta}_N)$ given by Eq.\eqref{deriv2Lie} in Appendix B, a tedious but straightforward counting of the $N$-dependence, term by term, of $\mathcal{L}^{(ii)}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)$ - as in the previous case - shows that $\left\langle\mathcal{L}^{(ii)}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right\rangle_{N\bar{v},\mu}$ turns out to be of order ${\cal O}(n_p^3N)$ and thus, divided by $N$, remains uniformly bounded in the $N\to\infty$ limit. 
Then, according to the definition \eqref{def: definition_statQuant}, we have \[ \mathrm{Cov}_{\overline{v},\mu}\left(\overline{\zeta}_N;\mathcal{L}_{\overline{\boldsymbol{\xi}}_{N}}(\overline{\zeta}_N)\right) = \left\langle \overline{\zeta}_N\ \mathcal{L}_{\overline{\boldsymbol{\xi}}_{N}}(\overline{\zeta}_N) \right\rangle_{\overline{v},\mu} - \left\langle \overline{\zeta}_N \right\rangle _{\overline{v},\mu} \left\langle \mathcal{L}_{\overline{\boldsymbol{\xi}}_{N}}(\overline{\zeta}_N) \right\rangle_{\overline{v},\mu } \] where the quantities $\overline{\zeta}_N $ and $\mathcal{L}_{\overline{\boldsymbol{\xi}}_{N}}(\overline{\zeta}_N)$ vary randomly along the MCMC whose probability measure is the configurational microcanonical measure; the random variations of $\overline{\zeta}_N $ and of its directional (Lie) derivative along a random direction ${\boldsymbol{\xi}}_{N}$ can be considered \textit{bona fide} statistically uncorrelated, thus their covariance vanishes.} $\square$ \begin{lemma}[Upper bound of the fourth derivative of the entropy] After Lemmas 2 to 5 and Remarks 2 to 5 it is \[ \sup_{N,\bar{v}\in {I_{\bar{v}}}} \left | \frac{\partial^4 S_N}{\partial {\bar{v}}^4}({\bar{v}}) \right | < \infty \] \end{lemma} \textbf{Proof.} {\rm Since $\overline{\zeta}_N(\textit{\textbf{q}})$ is a gaussian random process, we have the following uniform bound \[ \lim_{N\rightarrow+\infty}N^{3}\,\mathrm{Cuml}^{(4)}_{N\bar{v},\mu}(\zeta_N) =\lim_{N\rightarrow+\infty}N^{-1}\,\mathrm{Cuml}^{(4)}_{N\bar{v},\mu}(\overline{\zeta}_N) =0\ . 
\] Then, by considering the expression of $\mathcal{L}^{(iii)}_{{\boldsymbol{\xi}}_N}({\zeta}_N)$ given by Eq.\eqref{deriv3Lie} in Appendix B, a very tedious but straightforward counting of the $N$-dependence, term by term, of $\mathcal{L}^{(iii)}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)$ - as in the previous case - shows that $\left\langle\mathcal{L}^{(iii)}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right\rangle_{N\bar{v},\mu}$ turns out to be of order ${\cal O}(n_p^4N)$ and thus, divided by $N$, remains uniformly bounded in the $N\to\infty$ limit. Now, the term $N^{-1}\mathrm{Var}_{\overline{v},\mu}\left(\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right)$ is also uniformly bounded. In fact, along the MCMC spanning a given equipotential surface the terms $\overline{\xi}_i\partial_i {\overline{\zeta}}$ are uncorrelated random variables, each bringing a factor $N$ because ${\overline{\zeta}}\sim {\cal O}(N)$; thus \begin{eqnarray} \mathrm{Var}_{\overline{v},\mu}\left(\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right)&=&\mathrm{Var}_{\overline{v},\mu}\left( \sum_{i=1}^N{\overline{\xi}}_i\partial_i {\overline{\zeta}}\right)= \mathrm{Var}_{\overline{v},\mu}\left(\frac{1}{N}\sum_{i=1}^N{N \overline{\xi}}_i\partial_i \overline{\zeta} \right) = \frac{1}{N^2}\sum_{i=1}^N\mathrm{Var}_{\overline{v},\mu}\left(N \overline{\xi}_i\partial_i {\overline{\zeta}}\right)\nonumber\\ &\le&\frac{1}{N^2}N \sigma_m^2N^2 = N \sigma_m^2 \end{eqnarray} where $\sigma_m^2$ is the largest of the variances of the terms $\overline{\xi}_i\partial_i \overline{\zeta}$ along the MCMC. 
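The scaling step in this estimate can be illustrated by a toy computation, with i.i.d. unit-variance variables as hypothetical stand-ins for the (rescaled, ${\cal O}(1)$) uncorrelated terms $\overline{\xi}_i\partial_i \overline{\zeta}$: the variance of their sum grows only linearly with $N$, so the variance divided by $N$ stays bounded.

```python
import numpy as np

rng = np.random.default_rng(3)

# i.i.d. unit-variance stand-ins for the uncorrelated terms xi_i d_i(zeta):
# the variance of their sum grows linearly in N, so Var/N stays bounded.
ratios = {}
for N in (100, 400, 800):
    terms = rng.normal(0.0, 1.0, (5000, N))        # 5000 mock MCMC realizations
    ratios[N] = terms.sum(axis=1).var() / N        # ~ sigma_m**2 = 1
```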
For what concerns the two remaining terms in the fourth derivative of the entropy in Eq.\eqref{eq:microcanEntropy_Derivative_Xi}, we have \begin{equation}\label{cov1} \mathrm{Cov}_{\overline{v},\mu}\left(\overline{\zeta}_N;\mathcal{L}_{\overline{\boldsymbol{\xi}}_{N}}^{(ii)}(\overline{\zeta}_N)\right) = \left\langle \overline{\zeta}_N\ \mathcal{L}_{\overline{\boldsymbol{\xi}}_{N}}^{(ii)}(\overline{\zeta}_N) \right\rangle_{\overline{v},\mu} - \left\langle \overline{\zeta}_N \right\rangle _{\overline{v},\mu} \left\langle \mathcal{L}_{\overline{\boldsymbol{\xi}}_{N}}^{(ii)}(\overline{\zeta}_N) \right\rangle_{\overline{v},\mu } \end{equation} which vanishes when the microcanonical averages are computed as ``time'' averages along a MCMC: we take advantage of the resulting complete decorrelation between the random values taken by $\overline{\zeta}_N$ and the random values of its second-order Lie derivative taken in a random direction ${\boldsymbol{\xi}}_N$. Likewise, for \begin{equation}\label{cov2} \mathrm{Cov}_{\overline{v},\mu}\left(\Delta\overline{\zeta}_N;\mathcal{L}_{\overline{{\boldsymbol{\xi}}}_N}(\overline{\zeta}_N)\right) = \left\langle\Delta\overline{\zeta}_N\ \mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right\rangle - \left\langle\Delta\overline{\zeta}_N\right\rangle \left\langle\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right\rangle \end{equation} the same argument applies: the quantities $\Delta\overline{\zeta}_N$ and $\mathcal{L}_{\overline{{\boldsymbol{\xi}}}_N}(\overline{\zeta}_N)$ are uncorrelated random variables along a MCMC, thus their covariance vanishes. 
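The decorrelation argument used for Eqs.\eqref{cov1} and \eqref{cov2} boils down to the elementary fact that the sample covariance of two uncorrelated random sequences vanishes as the chain length grows. A minimal synthetic check (independent pseudo-random streams standing in for $\overline{\zeta}_N$ and its Lie derivative; this is not actual MCMC output):

```python
import numpy as np

# Two independent streams stand in for zeta_N and its Lie derivative
# along a random direction: their sample covariance <a b> - <a><b>
# vanishes as the number of samples grows.
rng = np.random.default_rng(1)
n = 500_000
a = rng.normal(size=n)
b = rng.normal(size=n)

cov = np.mean(a * b) - a.mean() * b.mean()   # close to 0 for large n
```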
$\quad\square$ } \section{Fixing the problem raised by the lattice $\phi^4$ model} A few years ago, an argument was raised \cite{kastner} against the topological theory of phase transitions, on the basis of the observation that the second-order phase transition of the $2D$ lattice $\phi^4$-model occurs at a critical value $v_c$ of the potential energy density which belongs to a broad interval of $v$-values void of critical points of the potential function. In other words, for any finite $N$ the $\{ \Sigma_{v<v_c}^{V_N}\}_{v \in{{\cal I} \! \! R}}$ are diffeomorphic to the $\{ \Sigma_{v>v_c}^{V_N}\}_{v \in{{\cal I} \! \! R}}$, so that no topological change seems to correspond to the phase transition. This appeared to be a counterexample to the theorem in Refs. \cite{prl1,NPB1}. A first reply was given in \cite{vaff} where, in spite of the absence of critical points of the potential at the transition energy, strong evidence was given relating the phase transition of this model to a change of topology of both the energy and potential level sets. In this case, however, the topology changes are asymptotic ($N\to\infty$). Let us see how the Main Theorem proved in the present work definitively resolves the problem, so that the $2D$ lattice $\phi^4$-model is no longer a counterexample to the topological necessity theorem. The model of interest, considered in Ref.\cite{CCP}, is defined by the Hamiltonian \begin{equation} {\cal H}_{N} ( p, q )= \sum_{{\textit{\textbf{i}} }} \frac{p_{{\textit{\textbf{i}}}}^2}{2} + V_{N}(q) \label{Hphi_2} \end{equation} where the potential function $V(q)$ is \begin{equation} V(q)=\sum_{{\textit{\textbf{i}}}\in{{\cal I} \! \! Z}^D}\left( - \frac{m^2}{2} q_{{\textit{\textbf{i}}}}^2 + \frac{\lambda}{4!} q_{{\textit{\textbf{i}}}}^4 \right) + \sum_{\langle {{\textit{\textbf{ik}}}\rangle\in{{\cal I} \! \! 
Z}^D}} \frac{1}{2}J (q_{{\textit{\textbf{i}}}}-q_{{\textit{\textbf{k}}}})^2\ , \label{potfi4} \end{equation} with $\langle {\textit{\textbf{ik}}}\rangle$ standing for nearest-neighbor sites on a $D$-dimensional lattice. This system has a discrete ${{\cal I} \! \! Z}_2$-symmetry and short-range interactions; therefore, according to the Mermin--Wagner theorem, in $D=1$ there is no phase transition, whereas in $D=2$ there is a second-order symmetry-breaking transition, with nonzero critical temperature, in the same universality class as the 2D Ising model. \\ In this Section we present the results of Monte Carlo numerical simulations on equipotential level sets of this model on a 2D lattice with periodic boundary conditions and the following parameters: $J=1$, $m^2 = 2$, and $\lambda = 0.6$. For these values of the parameters, the $2D$ system undergoes the symmetry-breaking phase transition at the critical potential energy density $v_c=\langle V\rangle_c/N\simeq 2.2$. This study has been performed in order to identify which of the terms composing the derivatives of the specific configurational microcanonical entropy with respect to the specific potential energy are not uniformly bounded in $N$, as expected after the present Main Theorem.\\ The simulations have been performed for systems with different numbers of degrees of freedom: $N=10\times 10=100$, $N=20 \times 20=400$, $N=30\times 30=900$, $N=40\times 40=1600$, $N=50\times 50=2500$ and $N=70\times 70=4900$. The computations were performed with vanishing magnetization as initial condition, for $2\times 10^{7}$ steps, a number sufficient to guarantee the convergence of the reported quantities.\\ \begin{figure}[h!] \includegraphics[scale=0.2,keepaspectratio=true]{VarZeta.pdf}\hskip 1truecm \includegraphics[scale=0.215,keepaspectratio=true]{LieXiZeta.pdf} \caption[10pt]{Quantities entering Eq.\protect\eqref{eq:microcanEntropy_Derivative_Xi} for lattices with different $N$. 
In particular, $N=100$ (red full circles), $N=400$ (blue squares), $N=900$ (black diamonds), $N=1600$ (green triangles), $N=2500$ (purple reversed triangles). } \label{VarLieXiZeta} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.2,keepaspectratio=true]{CorrZeta_LieXiZeta.pdf}\hskip 1truecm \includegraphics[scale=0.2,keepaspectratio=true]{LieXi2Zeta.pdf} \caption[10pt]{Quantities entering Eq.\protect\eqref{eq:microcanEntropy_Derivative_Xi} for lattices with different $N$. In particular, $N=100$ (red full circles), $N=400$ (blue squares), $N=900$ (black diamonds), $N=1600$ (green triangles), $N=2500$ (purple reversed triangles). } \label{CorrZeta_LieXiZeta} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.2,keepaspectratio=true]{CorrZeta_Lie2XiZeta.pdf}\hskip 1truecm \includegraphics[scale=0.2,keepaspectratio=true]{VarLieXiZeta.pdf} \includegraphics[scale=0.2,keepaspectratio=true]{Zeta_CorrDeltaZeta_LieXiZetapdf.pdf}\hskip 1truecm \includegraphics[scale=0.2,keepaspectratio=true]{LieXi3Zeta.pdf} \caption[10pt]{Quantities entering Eq.\protect\eqref{eq:microcanEntropy_Derivative_Xi} for lattices with different $N$. In particular, $N=100$ (red full circles), $N=400$ (blue squares), $N=900$ (black diamonds), $N=1600$ (green triangles), $N=2500$ (purple reversed triangles). } \label{LieXi2Zeta} \end{figure} \begin{figure}[ht!] \includegraphics[scale=0.2,keepaspectratio=true]{Cumul3Zeta.pdf}\hskip 1truecm \includegraphics[scale=0.35,keepaspectratio=true]{N3Cumul4vsN.pdf} \caption[10pt]{Quantities entering Eq.\protect\eqref{eq:microcanEntropy_Derivative_Xi} for lattices with different $N$. Left panel: third cumulant of $\overline{\zeta}_N$ for $N=100$ (red full circles), $N=400$ (blue squares), $N=900$ (black diamonds), $N=1600$ (green triangles), $N=2500$ (purple reversed triangles). 
Right panel: fourth cumulant of $\overline{\zeta}_N$ computed at the transition energy density $\bar{v}_c \simeq 2.2$; here a systematic deviation from the Gaussian scaling with $N$ is well evident.} \label{fgr:D3S_Cumul3Zeta} \label{Fourcumul} \end{figure} The results of numerical simulations are reported for each single term entering Eq.\eqref{eq:microcanEntropy_Derivative_Xi}. When properly rescaled with $N$, under the hypothesis of diffeomorphism (at any $N$ and also asymptotically) of the equipotential hypersurfaces, all these terms are expected to be uniformly bounded with $N$. Very interestingly, it is found that across the vertical dashed line denoting the phase transition point at the potential energy density $\bar{v}_c \simeq 2.2$ none of these terms shows any tendency to change with $N$, except for the case $N=10\times 10$ (for which $36\%$ of the total number of sites belong to the boundary, making the finite-size effects more relevant). There is only one very significant exception: the fourth cumulant of $\overline{\zeta}_N$ which, computed around the transition value, is found to grow systematically with $N$. This has been computed by means of the relation \begin{equation}\label{cml4} \mathrm{Cumul}^{(4)}_{N\bar{v},\mu}\overline{\zeta}= \frac{d}{dv}\left[ \mathrm{Cumul}^{(3)}_{N\bar{v},\mu}\overline{\zeta}_N\right] - 3\left\langle\overline{\zeta}_N\right\rangle_{\overline{v},\mu}\left(\mathrm{Corr}_{\overline{v},\mu}\left(\Delta\overline{\zeta}_N;\mathcal{L}_{\overline{\boldsymbol{\xi}}_N}(\overline{\zeta}_N)\right)\right) \end{equation} where the derivative of the third cumulant is evaluated numerically. This means that $\overline{\zeta}_N$ is not a Gaussian random process along a MCMC whose invariant measure is the microcanonical measure. 
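For concreteness, the lattice potential of Eq.\eqref{potfi4} used in the simulations above can be sketched as follows. This is a Python illustration of the energy function only, with the quoted parameters $J=1$, $m^2=2$, $\lambda=0.6$; the Monte Carlo sampling of the equipotential level sets is not reproduced, and the helper names are ours:

```python
import numpy as np

# 2D lattice phi^4 potential of Eq. (potfi4) with periodic boundary
# conditions and the simulation parameters quoted in the text:
# J = 1, m^2 = 2, lambda = 0.6.
J, m2, lam = 1.0, 2.0, 0.6

def phi4_potential(q):
    """V(q) = sum_i (-m2/2 q_i^2 + lam/4! q_i^4) + sum_<ik> J/2 (q_i - q_k)^2."""
    onsite = np.sum(-0.5 * m2 * q**2 + (lam / 24.0) * q**4)
    # nearest neighbours on the periodic square lattice: one shift per axis
    inter = np.sum(0.5 * J * (q - np.roll(q, 1, axis=0))**2)
    inter += np.sum(0.5 * J * (q - np.roll(q, 1, axis=1))**2)
    return onsite + inter

L = 10
v_zero = phi4_potential(np.zeros((L, L))) / L**2        # symmetric point: v = 0
q_star = np.sqrt(6.0 * m2 / lam)                        # uniform on-site minimum
v_min = phi4_potential(np.full((L, L), q_star)) / L**2  # = -1.5 m2^2/lam = -10
```

At a uniform configuration the interaction term vanishes, so the minimal potential energy density reduces to the on-site value $-\tfrac{3}{2}m^4/\lambda = -10$ for these parameters.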
As $\overline{\zeta}_N$ is proved to be a Gaussian random process under the constraining hypothesis of asymptotic diffeomorphicity of the level sets $\{\Sigma_{N\bar{v}}^{V_N}\}_{\bar{v}\in [\bar{v}_0, \bar{v}_1]}$, the growth with $N$ of $N^{-1}\mathrm{Cumul}^{(4)}_{N\bar{v},\mu}\overline{\zeta}$ entails the loss of asymptotic diffeomorphicity between the $\{\Sigma_{N\bar{v}}^{V_N}\}_{\bar{v}<\bar{v}_c}$ and the $\{\Sigma_{N\bar{v}}^{V_N}\}_{\bar{v}>\bar{v}_c}$ for some $\bar{v}_c$. This means that the 2D lattice $\phi^4$ model does not fulfil a basic requirement of the Main Theorem formulated in the present work. Therefore, the 2D lattice $\phi^4$ model is not a counterexample to the present version of the topological necessity theorem. As has been pointed out in Remark \ref{diffclassS}, a second-order phase transition in the microcanonical ensemble is associated with an asymptotic discontinuity of the third derivative of the entropy, and hence with an asymptotic divergence of the fourth derivative of the entropy. \bigskip\bigskip \section{Historical overview and challenges of the topological theory of phase transitions}\label{appC} After the investigation of specific, exactly solvable models \cite{fernando,exact1,exact2,exact3,exact4} corroborating the Topological Hypothesis, during the last decade and a half other systems have been studied, casting doubts on the general validity of the new theory. Even if all these studies are important to better define the validity limits of the Topological Hypothesis, some authors have too quickly drawn fatal verdicts against it. In fact, for the sake of clarity, let us begin by remarking that the proposed theory applies a priori to systems described by smooth, finite-range potentials such that the level sets $\Sigma_v^{V_N}$ and their associated balls $M_{v}^{V_N}$ qualify as \textit{good differentiable and compact manifolds}. 
Moreover, as we have preliminarily discussed in Ref.\cite{vaff}, and as is thoroughly tackled throughout the present paper, topology changes, understood as a loss of diffeomorphicity among differentiable manifolds, can take place also in the absence of critical points of the potential function. Thus, coming to specific examples, in \cite{kastner-cazz} the author tackled a one-dimensional model of a localization-delocalization transition of interfaces. This so-called Burkhardt model was also considered in \cite{romani1d}, where the pinning potential was considered in a modified version. The absence of coincidence between topological and thermodynamical transitions was reported in both Refs. \cite{kastner-cazz} and \cite{romani1d}. However, the Burkhardt model has two properties at odds with the conditions of the Topological Hypothesis, formalized by the theorems in Refs.\cite{book,prl1,NPB1,NPB2} and by the theorem proved in the present paper. In fact, the pinning potentials considered in Refs. \cite{kastner-cazz} and \cite{romani1d} are singular, as they need infinitely steep potential barriers to constrain the coordinates of the system on a semi-infinite positive line, and the configuration-space submanifolds are noncompact. In \cite{grinza}, and also in \cite{romani1d}, another one-dimensional model was considered, the so-called Peyrard--Bishop model describing DNA denaturation. In Ref. \cite{romani1d} the authors considered also a modified version of the Peyrard--Bishop model, and for both versions of the model the topological transition is found at a critical value of the potential energy which does not correspond to the critical energy value of the thermodynamic transition. The configuration-space submanifolds corresponding to these models are {\it noncompact}, and the critical manifolds, containing only one critical point at infinity, are infinitely large. 
Again, these models are outside the domain of validity of the theorems in Refs.\cite{book,prl1,NPB1,NPB2} and in the present paper. Another model, allegedly disproving the topological theory, is the mean-field Berlin--Kac spherical model. In Ref. \cite{stariolo} the two cases of zero and non-zero external field were considered. In the former case there is a continuous phase transition, and in the latter case there is no phase transition. The two cases did not display much difference when considered from the topological viewpoint. However, for this model there is a strong statistical ensemble inequivalence: in fact, the continuous phase transition for zero external field is only predicted in the framework of the canonical ensemble, whereas it is absent in the framework of the microcanonical ensemble \cite{kastnerRT,kastner-ineq}, which is the reference framework where the topological theory is formulated. Thus there is no contradiction. Moreover, considering that the ergodic invariant measure of Hamiltonian flows is the microcanonical measure, and considering that the objective reality of any physical system is its dynamics, the microcanonical ensemble has to be considered the fundamental statistical ensemble. By the way, this is not the only case of this kind of ensemble inequivalence; for example, though working in the opposite way, the clustering phase transition in a self-gravitating $N$-body system found in the microcanonical ensemble framework is completely absent in the canonical ensemble \cite{mnras}. In \cite{stariolo1} the authors tackle a modified version of the Berlin--Kac model, which is constrained by the introduction of a long-range correlation among the degrees of freedom. 
In spite of the claim that this model is the first case of a short-range, confining potential where the phase transition is not entailed by a topology change in phase space or configuration space, the spherical constraint, by limiting the freedom of mutually independent variation of all the degrees of freedom, makes this model a {\it long-range} interaction one. And even though the spherical constraint is, so to speak, a weak constraint, it plays a crucial role, because without it the model is trivial. Another system apparently going against the topological theory is the mean-field $\phi^4$ model. The phase transition point of this model does not correspond to the presence of critical points of the potential, that is, it has no topological counterpart \cite{romani,schilling,baroni}. Some correspondence between the topological and thermodynamic transitions was recovered for this model in Ref. \cite{romani} by introducing a suitable weakening of the Topological Hypothesis. However, the mean-field $\phi^4$ model undergoes a ${{\cal I} \! \! Z}_2$-symmetry-breaking phase transition, as in the case of the short-range $\phi^4$ model \cite{vaff}. As a consequence, the reason why this system is not a counterexample to the topological theory is twofold: on the one hand, it violates the condition of asymptotic diffeomorphicity of the level sets $\Sigma_v^{V_N}$ put forward in the present work; on the other hand, in the broken-symmetry phase, even in the absence of critical points of the potential in the $N\to\infty$ limit, the splitting of the configuration space into two disjoint submanifolds is actually a major change of topology in correspondence with the phase transition point. All the attempts at falsifying the Topological Hypothesis are useful to better outline the domain of validity of the theory, which, on the other hand, is not intended to apply to {\it all} possible phase transitions. 
The theory can leave outside its validity domain models with unbounded and long-range potentials without being invalidated by this kind of system. Summarizing, the Topological Hypothesis is coherent and now free of counterexamples; nevertheless, the above-mentioned alleged counterexamples have the merit of showing that some, perhaps much, work remains to be done, mainly in the case of long-range interactions. In fact, for example, for the exactly solvable model in Refs.\cite{exact2,exact3}, which is a mean-field XY model, thus with long-range interactions, a sharp topological transition in configuration space is clearly at the origin of the phase transition. At variance with the mean-field $\phi^4$ model, described by polynomial potentials, the mean-field XY model is described by a potential bounded from above. Whether and why this fact could explain the different conformity of these models to the topological description of their transitional behaviour is still a wide open question. \section{Concluding remarks} The present work is a substantial leap forward for the topological theory of phase transitions, which was seriously undermined by the counterexample mentioned in the previous sections. The theory is rooted in the study of thermodynamical phase transitions from the viewpoint of microscopic Hamiltonian dynamics. As Hamiltonian flows can be identified with geodesic flows on suitable differentiable manifolds, it turned out that across a phase transition point these manifolds undergo major geometrical changes of topological origin. This is to remark that topology is, so to speak, naturally implied by the fundamental/dynamical approach, and is not just conjectured to play a role. The first important consequence of this approach is that the occurrence of a phase transition is not the consequence of a loss of analyticity of statistical measures, but is already encoded in the potential function describing the interactions among the degrees of freedom of a system. 
This makes the \textit{thermodynamic limit dogma} no longer necessary, either from the conceptual side or from the mathematical-description side. And, of course, this is interesting when tackling phase transition phenomena in mesoscopic and nanoscopic systems. Moreover, phase transition phenomena in the absence of symmetry-breaking, and thus in the absence of an order parameter, have been successfully tackled in the topological framework, at present at least in the case of a model with a gauge symmetry \cite{dualising}, of a 2D model with an $O(2)$ symmetry undergoing a Kosterlitz--Thouless transition \cite{kosterlitz}, and of the protein folding transition \cite{proteinfold}. It is worth mentioning that in a recent paper \cite{loris}, partly based on some results given in \cite{gori2022}, a purely geometric theory of phase transitions has been put forward. In this work it is proposed that Bachmann's classification of phase transitions \cite{bachmann,PRL-Bachmann} for finite-size systems can be reformulated in terms of geometric properties of the energy level sets associated with a given Hamiltonian function; here the energy-derivatives of the entropy are associated with specific combinations of geometric curvature properties of the energy level sets. There is no contradiction between the geometric theory and the topological theory of phase transitions, mainly because sharp changes of the geometry of the leaves (energy level sets) of a foliation of phase space can be generically attributed to deeper changes of a topological kind. However, the precise relationship between geometry and topology is given by theorems in differential topology and, unfortunately, there are only a few of these theorems that can be used constructively (essentially the Gauss--Bonnet--Hopf, the Chern--Lashof, and Pinkall theorems \cite{book}). 
Therefore the geometric approach has some practical advantage with respect to the topological one, in that curvature properties of the energy level sets can always be explicitly computed. It is noteworthy that, in principle, the topological approach to classical phase transitions addressed in the present work, as well as the geometric approach in Ref.\cite{loris}, can be extended to the treatment of quantum transitions by means of Wick's analytic continuation to imaginary times of the path-integral generating functional of quantum field theory. This allows one to map a quantum system onto a formally classical one, described by a classical partition function written with the Euclidean Lagrangian action, on a lattice so as to have a countable number of degrees of freedom \cite{book}. Finally, recent developments of powerful computational methods in algebraic topology, like those of \textit{persistent homology} \cite{Carlsson,noi}, provide the topological description of phase transitions with new useful constructive tools in addition to the existing concepts and methods of differential topology. \bigskip\bigskip \subsection*{Appendix A. Uniform boundedness of ${\mathscr R}$}\label{appA} Let us now show how asymptotic diffeomorphicity entails uniform boundedness with $N$ of the Ricci scalar curvature defined in equation \eqref{scalar1}; using $\Vert\boldsymbol{\xi}\Vert = \Vert\nabla V(\textit{\textbf{q}})\Vert^{-1} $, it reads \begin{equation} {\mathscr R} = \frac{1}{N(N-1)}\left\{ -\triangle\log \Vert\boldsymbol{\xi}\Vert^{-1}+ \nabla\cdot \left[ \triangle V(\textit{\textbf{q}})\ \boldsymbol{\xi} \right] \right\} \ , \label{scalar1-append} \end{equation} where the second term in the r.h.s. 
is \begin{eqnarray} \nabla\cdot \left[ \triangle V(\textit{\textbf{q}})\ \boldsymbol{\xi} \right] &=& [ \nabla\triangle V(\textit{\textbf{q}})]\cdot \boldsymbol{\xi} + [\triangle V(\textit{\textbf{q}})] \nabla\cdot\boldsymbol{\xi}\label{array1a}\\ &=& \triangle V(\textit{\textbf{q}})\ \partial^i\left(\frac{\partial_i V(\textit{\textbf{q}})}{\Vert\nabla V(\textit{\textbf{q})}\Vert^2}\right) + \frac{\partial_j V(\textit{\textbf{q})}}{\Vert\nabla V(\textit{\textbf{q})}\Vert^2}\partial^j\partial^k\partial_kV(\textit{\textbf{q}})\label{array1b} \end{eqnarray} and using \cite{convenzione} \begin{eqnarray} \triangle V(\textit{\textbf{q}}) &=& \nabla\cdot[\nabla V(\textit{\textbf{q}})] = \nabla\cdot[\Vert\nabla V(\textit{\textbf{q}})\Vert^2 \boldsymbol{\xi} ] = \partial^i \left(\frac{\xi_i}{\Vert\boldsymbol{\xi}\Vert^2 }\right)\\ &=&\frac{1}{\Vert\boldsymbol{\xi}\Vert^2 } (\partial^i\xi_i) - \frac{4}{\Vert\boldsymbol{\xi}\Vert^2} \frac{\xi_i}{\Vert\boldsymbol{\xi}\Vert }\frac{\xi^j}{\Vert\boldsymbol{\xi}\Vert } (\partial^i\xi_j) \label{array2b} \end{eqnarray} after Eqs.\eqref{normaxi}, \eqref{norma}, and \eqref{eq:AsympDiffCk} all these terms are uniformly bounded in $N$, and so is $\nabla\cdot\boldsymbol{\xi}$; moreover, the factor $\Vert\boldsymbol{\xi}\Vert^{-4}\sim N^2$ is compensated by the pre-factor $1/N(N-1)$. Therefore the second term in the r.h.s. of Eq.\eqref{array1a} is also uniformly bounded in $N$. Then the first term of Eq.\eqref{array1a} is obtained by applying the operator $\boldsymbol{\xi}\cdot\nabla$ to Eq.\eqref{array2b}, and, after trivial algebra of the same kind as that leading to Eq.\eqref{array2b}, one obtains a lengthy expression, containing mixed second-order derivatives of the components of $\boldsymbol{\xi}$, which is uniformly bounded under the assumption of asymptotic diffeomorphicity. 
On the other hand, for smooth and regularized potentials, if $n$ is the coordination number of the potential, and ${\textgoth m}_0$ is the maximum value of $\partial^i\partial_i V$, then $ \triangle V(\textit{\textbf{q}})$ is bounded by $n\, {\textgoth m}_0N/[N(N-1)]$. By the same token, if ${\textgoth m}_1$ is the maximum value of $\partial_j\partial^i\partial_i V$ then $\boldsymbol{\xi}\cdot\nabla\triangle V(\textit{\textbf{q}})$ is uniformly bounded by $n\, {\textgoth m}_1 B$, where $B$ is the constant of Eq.\eqref{eq:AsympDiffCk}. Now, coming to the first term in the r.h.s. of Eq.\eqref{scalar1-append}, that is $\partial^i\left( \partial_i \log \Vert \nabla V(\textit{\textbf{q}}) \Vert\right)$, we have \begin{eqnarray} \partial^i\left( \partial_i \log \Vert\boldsymbol{\xi}\Vert^{-1}\right) &= & \partial^i(\Vert \boldsymbol{\xi}\Vert \partial_i\Vert\boldsymbol{\xi}\Vert^{-1}) = \partial^i\Vert \boldsymbol{\xi}\Vert \partial_i\Vert\boldsymbol{\xi}\Vert^{-1} + \Vert \boldsymbol{\xi}\Vert \ \partial^i\partial_i\Vert\boldsymbol{\xi}\Vert^{-1}\\ &=& - \frac{1}{\Vert\boldsymbol{\xi}\Vert^2 } (\partial_i \Vert \boldsymbol{\xi}\Vert )^2 + \left[ - \frac{1}{\Vert\boldsymbol{\xi}\Vert} (\partial^i \partial_i \Vert \boldsymbol{\xi}\Vert ) + \frac{2}{\Vert\boldsymbol{\xi}\Vert^2 } (\partial_i \Vert \boldsymbol{\xi}\Vert )^2\right]\\ &=& - \frac{1}{\Vert\boldsymbol{\xi}\Vert} (\partial^i \partial_i \Vert \boldsymbol{\xi}\Vert ) + \frac{1}{\Vert\boldsymbol{\xi}\Vert^2 } (\partial_i \Vert \boldsymbol{\xi}\Vert )^2\label{a8} \end{eqnarray} and \begin{equation} \frac{1}{\Vert\boldsymbol{\xi}\Vert^2 } (\partial_i \Vert \boldsymbol{\xi}\Vert )^2 = \frac{1}{\Vert\boldsymbol{\xi}\Vert^2 } (\partial_i \sqrt {\xi_j\xi^j})^2 = \frac{\xi_j\xi^j}{\Vert\boldsymbol{\xi}\Vert^4 } (\partial_i\xi_j)^2 \end{equation} which is uniformly bounded after Eqs.\eqref{normaxi},\eqref{norma},\eqref{eq:AsympDiffCk} and the pre-factor $1/N(N-1)$. 
Then for the first term in Eq.\eqref{a8} we get \begin{equation} \frac{1}{\Vert\boldsymbol{\xi}\Vert} (\partial^i \partial_i \Vert \boldsymbol{\xi}\Vert ) = \frac{1}{\Vert\boldsymbol{\xi}\Vert} \partial^i \left( \frac{\xi^j}{\Vert\boldsymbol{\xi}\Vert}\, \partial_i\xi_j\right) = \frac{1}{\Vert\boldsymbol{\xi}\Vert}\left[ \frac{\xi^j}{\Vert\boldsymbol{\xi}\Vert}\, \partial^i\partial_i\xi_j + \frac{(\partial_i\xi_j)^2}{\Vert\boldsymbol{\xi}\Vert} - \frac{\xi_j\xi^j\, (\partial_i\xi_j)^2}{\Vert\boldsymbol{\xi}\Vert^3} \right] \end{equation} which, under the same conditions mentioned above, is also uniformly bounded in $N$. The relevant consequence is that, under the assumption of asymptotic diffeomorphicity of the potential energy level sets, the scalar Ricci curvature ${\mathscr R}$ in Eq.\eqref{scalar1-append} is uniformly bounded in $N$, and so are all the principal curvatures of the manifolds transformed under the action of the vector field $\boldsymbol{\xi}$. \subsection*{Appendix B. Lie derivatives of the vector field $\boldsymbol{\xi}$}\label{appB} In the following we derive explicit expressions for the Lie derivatives of the one-parameter diffeomorphism-generating vector field $\boldsymbol{\xi}$ for a potential $V$ in ``critical-point-free'' regions of the configuration space $(\mathcal{X},g_{{\cal I} \! \!{R}^N})$ endowed with a Riemannian metric. Let $(q_1,\dots,q_N)$ be a set of coordinates in configuration space. In what follows we shall write $\partial_i=\partial/\partial q^i$, so that $(\nabla V)_i =\partial_i V$ and the Hessian is $(\mathrm{Hess} V)_{ij}=\partial^2_{ij}V$.\\ With these choices the divergence $\zeta=\mathrm{div}_{{\cal I} \! \!{R}^N}\boldsymbol{\xi}$ of the vector field reads: \begin{equation} \mathrm{div}_{{\cal I} \! 
\!{R}^N}\boldsymbol{\xi}=\dfrac{\Delta V}{\|\nabla V\|^2}-2\dfrac{\nabla V \cdot(\mathrm{Hess} V\nabla V)}{\|\nabla V\|^4} \end{equation} where $\Delta(\cdot)=\sum_{i=1}^N \partial^{i}\partial_{i}(\cdot)$ is the Laplacian operator in the Euclidean configuration space and $\|\boldsymbol X\|^2=g_{{\cal I} \! \!{R}^N}(\boldsymbol X,\boldsymbol X)$ is the Euclidean norm. As the Lie derivative operator along the flow generated by the vector field $\boldsymbol{\xi}$ is \begin{equation} \mathcal{L}_{\boldsymbol{\xi}}(\cdot)=(\boldsymbol{\xi}\cdot\nabla)(\cdot)=\sum_{i=1}^{N}\dfrac{\partial^{i}V}{\|\nabla V\|^2}\partial_i(\cdot) \end{equation} the first derivative reads \begin{equation}\label{derivLie} \begin{split} \mathcal{L}_{\boldsymbol{\xi}}(\zeta)=&\dfrac{\nabla V\cdot\nabla(\Delta V)}{\|\nabla V\|^4}-2\dfrac{(\nabla V \cdot\mathrm{Hess}(V)\nabla V)\Delta V+2\|\mathrm{Hess} V\nabla V\|^2+\mathrm{D}^3V(\nabla V,\nabla V,\nabla V)}{\|\nabla V\|^6}+\\ &+8\dfrac{(\nabla V \cdot\mathrm{Hess} V\nabla V)^2}{\|\nabla V\|^8} \end{split} \end{equation} where $\mathrm{D}^3V(\nabla V,\nabla V,\nabla V) = \partial^3_{ijk}V\, \partial_i V\, \partial_jV\, \partial_k V $. 
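The closed form above for $\zeta=\mathrm{div}_{{\cal I}\!\!{R}^N}\boldsymbol{\xi}$ can be verified numerically. The sketch below (a toy smooth potential and a finite-difference scheme of our own choosing, purely for verification) compares $\Delta V/\|\nabla V\|^2 - 2\,\nabla V\cdot(\mathrm{Hess}\,V\,\nabla V)/\|\nabla V\|^4$ against a direct central-difference divergence of $\boldsymbol{\xi}=\nabla V/\|\nabla V\|^2$:

```python
import numpy as np

# Numerical check of zeta = div(xi) with xi = grad V / ||grad V||^2,
# for the toy potential V(q) = sum(q^4)/4 + sum(q^2)/2 (our choice).
def grad_V(q):
    return q**3 + q

def hess_V(q):
    return np.diag(3.0 * q**2 + 1.0)

def xi(q):
    g = grad_V(q)
    return g / np.dot(g, g)

def div_xi_closed(q):
    # Delta V / ||grad V||^2 - 2 grad V . (Hess V grad V) / ||grad V||^4
    g, H = grad_V(q), hess_V(q)
    n2 = np.dot(g, g)
    return np.trace(H) / n2 - 2.0 * np.dot(g, H @ g) / n2**2

def div_xi_fd(q, h=1e-6):
    # direct divergence of xi by central finite differences
    d = 0.0
    for i in range(q.size):
        e = np.zeros_like(q)
        e[i] = h
        d += (xi(q + e)[i] - xi(q - e)[i]) / (2.0 * h)
    return d

q0 = np.array([0.7, -1.2, 0.4, 2.0])   # a point free of critical points
err = abs(div_xi_closed(q0) - div_xi_fd(q0))   # agreement to FD accuracy
```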
The second order derivative (with the aid of symbolic manipulation with Mathematica) reads \begin{eqnarray}\label{deriv2Lie} &&\mathcal{L}_{\boldsymbol{\xi}}^{(ii)}(\zeta)= \dfrac{\nabla(\Delta V)\cdot (\mathrm{Hess} V \nabla V)+\nabla V \cdot(\mathrm{Hess}(\Delta V)\nabla V)}{\|\nabla V\|^6}+\nonumber\\ &-&2\Biggr[\dfrac{\Delta V\mathrm{D}^3 V(\nabla V,\nabla V,\nabla V)+2\Delta V\|\mathrm{Hess} V\nabla V\|^2+4(\mathrm{Hess} V\nabla V)\cdot(\mathrm{Hess} V\mathrm{Hess} V\nabla V)}{\|\nabla V\|^8}+\nonumber\\ &+&\dfrac{7\mathrm{D}^3V(\mathrm{Hess} V\nabla V,\nabla V,\nabla V)+\mathrm{D}^4V(\nabla V,\nabla V,\nabla V,\nabla V)+3 (\nabla V\cdot \mathrm{Hess} V\nabla V)(\nabla V\cdot \nabla(\Delta V))}{\|\nabla V\|^8}\Biggr]+\nonumber\\ &+&\dfrac{28(\nabla V\mathrm{Hess} V\nabla V)\left[2\|\mathrm{Hess} V \nabla V\|^2+\mathrm{D}^3V(\nabla V,\nabla V,\nabla V)\right]+12(\nabla V\mathrm{Hess} V\nabla V)^2\Delta V}{\|\nabla V\|^{10}}+\nonumber\\ &-&64\dfrac{(\nabla V\mathrm{Hess} V\nabla V)^3}{\|\nabla V\|^{12}} \end{eqnarray} \vfill and the third order derivative (with the aid of symbolic manipulation with Mathematica) reads \begin{equation}\label{deriv3Lie} \begin{split} &\mathcal{L}_{\boldsymbol{\xi}}^{(iii)}(\zeta)=\dfrac{3 \nabla V \cdot\mathrm{Hess}(\Delta V)\mathrm{Hess} V\nabla V+\mathrm{D}^3\Delta V(\nabla V,\nabla V,\nabla V)+\mathrm{D}^{3}V(\nabla V,\nabla V,\nabla(\Delta V))}{\|\nabla V\|^8}+\\ &+\dfrac{\nabla (\Delta V)\cdot \mathrm{Hess} V\mathrm{Hess} V \nabla V}{\|\nabla V\|^8}-2\Biggr[\dfrac{4 \mathrm{D}^{3}V(\nabla V,\nabla V,\nabla V)(\nabla V \cdot \nabla(\Delta V))}{\|\nabla V\|^{10}}+\\ &+\dfrac{ 7 \mathrm{D}^{4}V(\nabla V,\nabla V,\nabla V,\mathrm{Hess} V\nabla V)}{\|\nabla V\|^{10}}+\\ &+\dfrac{15 \mathrm{D}^{3}V(\nabla V,\nabla V,\mathrm{Hess} V \mathrm{Hess} V \nabla V)+7\|\mathrm{D}^{3}V(\nabla V,\nabla V)\|^2+18 \mathrm{D}^{3}V(\mathrm{Hess} V\nabla V,\mathrm{Hess} V \nabla V,\nabla V)}{\|\nabla V\|^{10}}+\\ &+\dfrac{4\mathrm{D}V(\nabla 
V,\nabla V, \nabla V,\mathrm{Hess} V \nabla V)+\mathrm{D}^{5}V(\nabla V, \nabla V, \nabla V,\nabla V, \nabla V)+8(\nabla V \cdot \nabla (\Delta V))\|\mathrm{Hess} V \nabla V\|^2}{\|\nabla V\|^{10}}+\\ &+\dfrac{8\|\mathrm{Hess} V \mathrm{Hess} V\nabla V\|^2+7\mathrm{D}^3 V(\nabla V,\nabla V,\mathrm{Hess} V \nabla V)\Delta V}{\|\nabla V\|^{10}}+\\ &\dfrac{\Delta V\mathrm{D}^{4}V(\nabla V,\nabla V,\nabla V,\nabla V)+4\Delta V (\mathrm{Hess} V \nabla V)\cdot \mathrm{Hess} V \mathrm{Hess} V \nabla V}{\|\nabla V\|^{10}}+\\ &+\dfrac{6(\mathrm{Hess} V \cdot \mathrm{Hess} V \nabla V)(\nabla V\cdot \mathrm{Hess} (\Delta V)\nabla V)+6(\mathrm{Hess} V \cdot \mathrm{Hess} V \nabla V)(\nabla(\Delta V)\cdot \mathrm{Hess} V \nabla V)}{\|\nabla V\|^{10}}\Biggr]+\\ &+4\Biggr[\dfrac{7\left(\mathrm{D}^3 V(\nabla V,\nabla V, \nabla V)\right)^2+28 \mathrm{D}^{3}V(\nabla V, \nabla V, \nabla V) \|\mathrm{Hess} V\nabla V\|^2}{\|\nabla V\|^{12}}+\\ &+\dfrac{10 \Delta V \mathrm{D}^3 V(\nabla V, \nabla V,\nabla V)(\nabla V \cdot \mathrm{Hess} V \nabla V)}{\|\nabla V\|^{12}}+\\ &+\dfrac{28 \|\mathrm{Hess} V \nabla V\|^4+20 \Delta V\|\mathrm{Hess} V \nabla V\|^2(\nabla V\cdot \mathrm{Hess} V \nabla V)}{\|\nabla V\|^{12}}+\\ &+\dfrac{(\nabla V\cdot \mathrm{Hess} V \nabla V)[77 \mathrm{D}^3 V(\nabla V,\nabla V,\mathrm{Hess} V \nabla V)}{\|\nabla V\|^{12}}+\\ &+\dfrac{11 \mathrm{D}^4 V(\nabla V,\nabla V,\nabla V,\nabla V)+44 (\mathrm{Hess} V \nabla V)\cdot(\mathrm{Hess} V \mathrm{Hess} V \nabla V)}{\|\nabla V\|^{12}}+\\ &+\dfrac{15 (\mathrm{Hess} V \cdot \mathrm{Hess} V \nabla V)(\nabla V\cdot \nabla(\Delta V))]}{\|\nabla V\|^{12}}\Biggr]+\\ &-8\Biggr[\dfrac{59\mathrm{D}^{3}V(\nabla V,\nabla V,\nabla V)(\nabla V \cdot \mathrm{Hess} V \nabla V)^2}{\|\nabla V\|^{14}}+\\ &+\dfrac{(\nabla V \cdot \mathrm{Hess} V \nabla V)^2 [118 \|\mathrm{Hess} V \nabla V\|^2+15 \Delta V (\nabla V \cdot \mathrm{Hess} V \nabla V)]}{\|\nabla V\|^{14}}\Biggr]+768\dfrac{(\nabla V \cdot \mathrm{Hess} V \nabla 
V)^4}{\|\nabla V\|^{16}}
\end{split}
\end{equation}
\vfil
\section{Acknowledgments}
This work has been done within the framework of the project MOLINT, which has received funding from the Excellence Initiative of Aix-Marseille University - A*Midex, a French ``Investissements d'Avenir'' programme. This work was also partially supported by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement no. 964203 (FET-Open LINkS project). Roberto Franzosi acknowledges support from the QuantERA ERA-NET Co-fund 731473 (Project Q-CLOCKS) and from the National Group of Mathematical Physics (GNFM-INdAM). Matteo Gori gratefully acknowledges the financial support of DARPA (USA) for his long-term visit to Howard University in Washington, D.C., during which part of this work was done.
\section{Background and Challenges}
\label{sec:background}
\subsection{The Memory Model of CompCert}
A memory model is an indispensable component of program semantics. In CompCert's memory model, a memory state $m$ (of type \kmem) consists of a disjoint set of \emph{memory blocks} with unique identifiers, each of which has a linear address space. A memory address or pointer has the form $(b, o)$, pointing to the $o$-th byte of the block $b$, where $b$ has type \kblock and $o$ has type $\kz$ (integers). The value of the memory cell (one byte) at $(b,o)$ is denoted by $m[b,o]$. It may be an empty value $\none$ or some value $\some{v}$. Values have the sum type $\kval := \Vundef \sep \Vint{i_{32}} \sep \Vlong{i_{64}} \sep \Vsingle{f_{32}} \sep \Vfloat{f_{64}} \sep \vptr{b}{o}$, indicating that values are either undefined, 32- or 64-bit integers or floats, or pointers.~\footnote{We conflate regular values and in-memory values to simplify the discussion.} The usual memory operations, including allocation, free, read, and write, are provided. Whether these operations are allowed on certain memory cells is determined by their permissions.~\footnote{In fact, a memory cell has both maximum and current permissions; we conflate them for simplicity.} A memory cell may have the following permissions: \kfreeable (all operations are allowed), \kwritable (the cell is not freeable but can be written to), \kreadable (the cell is only readable) and \knonempty (the cell is in the footprint of valid memory). They are ordered by $\kfreeable \geqslant \kwritable \geqslant \kreadable \geqslant \knonempty$ such that if $p_1 \geqslant p_2$ then any cell with permission $p_1$ also implicitly has permission $p_2$. We shall write $\perm{m}{P}$ to denote the set of memory cells with at least permission $P$. For example, $(b, o) \in \perm{m}{\kreadable}$ iff the cell at $(b,o)$ in $m$ is readable. An address with no permission at all is not in the footprint of memory.
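The permission ordering and the $\perm{m}{P}$ operator can be sketched concretely. The following is a minimal Python sketch of this part of the memory model; the names (`Perm`, `perm_ge`, `perm_set`) and the flat dictionary encoding of a memory state are our own illustration, not CompCert's Coq definitions.

```python
from enum import IntEnum

# Permission ordering: Freeable >= Writable >= Readable >= Nonempty.
# Encoding permissions as an IntEnum makes the order directly comparable.
class Perm(IntEnum):
    NONEMPTY = 1
    READABLE = 2
    WRITABLE = 3
    FREEABLE = 4

def perm_ge(p1, p2):
    """A cell with permission p1 implicitly has permission p2 when p1 >= p2."""
    return p1 >= p2

# A tiny memory state: (block, offset) -> (value, permission).
# Addresses absent from the map have no permission and are outside the footprint.
mem = {
    (1, 0): ("Vint 42", Perm.FREEABLE),
    (1, 1): ("Vundef", Perm.READABLE),
}

def perm_set(m, p):
    """The set perm(m, p): cells whose permission is at least p."""
    return {addr for addr, (_, q) in m.items() if perm_ge(q, p)}
```

For instance, `perm_set(mem, Perm.READABLE)` contains both cells above, while `perm_set(mem, Perm.WRITABLE)` contains only the freeable one.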
Transformations of memory states are captured via \emph{memory injections}. A memory injection function $j: \kblock \to \some{\kblock \times \kz}$ is a partial mapping such that $j(b) = \none$ if $b$ is removed from the source memory and $j(b) = \some{(b', o)}$ if the block $b$ is shifted (injected) to $(b',o)$ in the target memory. We define $\kmeminj = \kblock \to \some{\kblock \times \kz}$ to denote the type of $j$. Two values $v_1$ and $v_2$ are related under $j$ (denoted by $\vinj{j}{v_1}{v_2}$) if any one of the following conditions is true: 1) $v_1$ is an undefined value, 2) $v_1 = v_2$ when they are not pointers, and 3) $v_1 = \vptr{b}{o}$, $j(b) = \some{(b',o')}$ and $v_2 = \vptr{b'}{o+o'}$. Note that the first condition means undefined values may be refined to concrete values, which is important for exploiting undefined behaviors in optimization passes. The third condition guarantees proper shifting of pointer values under injections. Given this relation, a memory injection holds between the source memory state $m_1$ and the target state $m_2$ under $j$ (denoted by $\minj{j}{m_1}{m_2}$) if the following properties are satisfied, which ensure preservation of permissions and values under injection:
\begin{tabbing}
\quad\=\;\;\=\kill
\> $\forall\app b_1\app b_2\app o\app o'\app p,\app j(b_1) = \some{(b_2,o')} \imply (b_1, o) \in \perm{m_1}{p} \imply (b_2, o+o') \in \perm{m_2}{p}$\\
\> $\forall\app b_1\app b_2\app o\app o',\app j(b_1) = \some{(b_2,o')} \imply (b_1,o) \in \perm{m_1}{\kreadable} \imply \vinj{j}{m_1[b_1,o]}{m_2[b_2,o+o']}$
\end{tabbing}
Memory injections are necessary for verifying compiler passes that transform the structure of memory (e.g., merging local variables into stack-allocated data and generating a concrete stack frame). For many passes that do not change the memory structure, a simpler relation called \emph{memory extension} is used instead, which employs an identity injection function.
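The three clauses of the value relation can be illustrated in a short Python sketch. The tagged-tuple encoding of values and the dictionary encoding of an injection are our own simplification of the Coq definitions.

```python
# An injection j is a dict: block -> (target_block, offset_shift);
# blocks absent from j are removed by the injection.
# Values are tagged tuples: ("Vundef",), ("Vint", n), ("Vptr", b, o), ...
def val_inject(j, v1, v2):
    """v1 is related to v2 under injection j (sketch of the three clauses)."""
    if v1 == ("Vundef",):
        return True                       # 1) undefined refines to anything
    if v1[0] != "Vptr":
        return v1 == v2                   # 2) non-pointers must be equal
    _, b, o = v1                          # 3) pointers are shifted by j
    tgt = j.get(b)
    if tgt is None:
        return False
    b2, delta = tgt
    return v2 == ("Vptr", b2, o + delta)

j = {1: (7, 16)}  # block 1 is injected into block 7 at offset 16
```

Here `val_inject(j, ("Vptr", 1, 4), ("Vptr", 7, 20))` holds because the offset is shifted by 16, matching clause 3.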
\subsection{CompCertO's Framework for Flexible Semantic Interfaces}
\begin{table}
\caption{Language Interfaces for CompCertO}
\small
\begin{tabular}{c c c c}
\hline
\textbf{Languages} & \textbf{Interfaces} & \textbf{Queries} & \textbf{Replies}\\
\hline
C/Clight & $\cli = \{\kval \times \ksig \times \kval^* \times \kmem, \kval \times \kmem\}$ & $\cquery{v_f}{\sig}{\vec{v}}{m}$ & $\creply{v'}{m'}$\\
LTL & $\ltlli = \{\kval \times \ksig \times \klocset \times \kmem, \klocset \times \kmem\}$ & $\ltlquery{v_f}{\sig}{\vec{\locset}}{m}$ & $\ltlreply{\locset'}{m'}$\\
Mach & $\machli = \{\kval \times \kval \times \kval \times \kregset \times \kmem, \kregset \times \kmem\}$ & $\machquery{v_f}{v_\spreg}{v_\rareg}{\regset}{m}$ & $\machreply{\regset'}{m'}$\\
Asm & $\asmli = \{\kregset \times \kmem, \kregset \times \kmem\}$ & $\asmquery{\regset}{m}$ & $\asmreply{\regset'}{m'}$
\end{tabular}
\label{tab:lang-interfaces}
\end{table}
CompCertO is an adaptation of CompCert with customizable interfaces for open semantics and open simulations. It compiles a variant of C known as \kwd{Clight} into assembly programs. A language interface $A = \linterface{A^q}{A^r}$ is a pair of sets $A^q$ and $A^r$ denoting acceptable queries and replies, respectively. Different language interfaces may be used at different stages of compilation. They are summarized in~\tabref{tab:lang-interfaces}, where the names and definitions of interfaces are displayed together with their abbreviations. The C interface is used for languages in the front-end and early back-end, in which a query is a function call with a signature (of type \ksig) together with a memory state, and a reply carries a return value and an updated memory state. The LTL interface is used for back-end languages where abstract locations (of type $\klocset$) for stack data (e.g., arguments) are defined.
The Mach interface is for Mach, an architecture-independent machine language before generation of assembly code, in which the concrete form of stack frames is determined. Its queries contain the values of the stack pointer $\spreg$ and return address $\rareg$ and a set $\regset$ of registers (of type $\kregset = \kreg \to \kval$). Finally, the Asm interface is for assembly languages where queries and replies carry registers and memory as program states. Open labeled transition systems (LTS) represent the semantics of modules. An open LTS $L : A \arrli B$ is a tuple $\olts{D}{S}{I}{\to}{F}{X}{Y}$. Here, $A$ ($B$) is the language interface for outgoing (incoming) queries and replies. $D \subseteq B^q$ is a set of initial queries. $S$ is the set of internal states. $I \subseteq D \times S$ and $F \subseteq S \times B^r$ determine transitions for incoming queries and replies: $(q^I, s) \in I$ iff $s$ is the internal state after the initial query $q^I$; $(s, r^I) \in F$ iff $s$ is a final state with reply $r^I$ to the initial query. $X \subseteq S \times A^q$ and $Y \subseteq S \times A^r \times S$ determine transitions for outgoing queries and replies: $(s, q^O) \in X$ iff an outgoing query $q^O$ happens at $s$; $(s, r^O, s') \in Y$ iff after the outgoing query returns with $r^O$ the execution continues with an updated state $s'$. Finally, $\to \subseteq S \times E^* \times S$ is the internal transition relation, which emits traces of events of type $E$. Two open LTS with compatible incoming and outgoing interfaces, i.e., $L_1, L_2: A \arrli A$, may be composed as $L_1 \semlink L_2 : A \arrli A$. In the composed LTS, an internal state $s = s_{i_1} :: \ldots :: s_{i_n}$ becomes a stack of internal states (known as core states in~\cite{stewart15}) from $L_1$ and $L_2$ (i.e., $s_{i_k} \in S_1$ or $s_{i_k} \in S_2$) as a result of mutual invocation between $L_1$ and $L_2$, which is hidden from the environment as shown in~\figref{fig:compose-semantics}.
A complete definition of $\semlink$ is in~\cite{compcerto}. A Kripke relation $R : W \to \pset{S}{S \subseteq A \times B}$ is a family of relations indexed by a \emph{Kripke world} $W$; for simplicity, we define $\krtype{W}{A}{B} = W \to \pset{S}{S \subseteq A \times B}$. A \emph{simulation convention} relating two language interfaces $A_1$ and $A_2$ is a tuple $\scname{R} = \simconv{W}{\scname{R}^q : \krtype{W}{A^q_1}{A^q_2}}{\scname{R}^r : \krtype{W}{A^r_1}{A^r_2}}$. We write $\scname{R}: \sctype{A_1}{A_2}$. Simulation conventions serve as interfaces of open simulations by relating source and target language interfaces. For example, the basic C-level convention $\kc: \sctype{\cli}{\cli} = \simconv{\kmeminj}{\scname{R}_{\kwd{cc}}^q}{\scname{R}_{\kwd{cc}}^r}$ relates C queries and replies as follows, where the Kripke world consists of injections and, in a given world $j$, the values and memory in queries and replies are related by $j$. \begin{tabbing} \quad\=$(\cquery{v_f}{sg}{\vec{v}}{m}, \cquery{v_f'}{sg}{\vec{v'}}{m'}) \in \scname{R}_{\kwd{cc}}^q(j)$\quad\=$\iff$\quad\=\kill \>$(\cquery{v_f}{sg}{\vec{v}}{m}, \cquery{v_f'}{sg}{\vec{v'}}{m'}) \in \scname{R}_{\kwd{cc}}^q(j)$ \>$\iff$\>$\vinj{j}{v_f}{v_f'} \land \vinj{j}{\vec{v}}{\vec{v'}} \land \minj{j}{m}{m'}$\\ \>$(\creply{v}{m}, \creply{v'}{m'}) \in {\scname{R}_{\kwd{cc}}^r(j)}$ \>$\iff$\>$\vinj{j}{v}{v'} \land \minj{j}{m}{m'}$ \end{tabbing} The transitive composition of two Kripke relations $R: \krtype{W_R}{A}{B}$ and $S: \krtype{W_S}{B}{C}$ is $\comp{R}{S} : \krtype{W_R \times W_S}{A}{C}$ s.t. $(a, c) \in \comp{R}{S}(w_R, w_S) \iff \exists\app b,\app (a, b) \in R(w_R) \land (b, c) \in S(w_S)$. Simulation conventions can also be transitively composed. 
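The transitive composition of Kripke relations above can be made concrete when worlds and relations are finite. The following Python sketch uses our own encoding (a Kripke relation as a dict from worlds to sets of pairs); none of these names come from CompCertO.

```python
# A Kripke relation over finite data: world -> set of related pairs.
# comp(R, S) is indexed by pairs of worlds (wR, wS) and relates a to c
# exactly when some interpolant b exists with (a, b) in R(wR) and
# (b, c) in S(wS).
def comp(R, S):
    out = {}
    for wR, rel_R in R.items():
        for wS, rel_S in S.items():
            out[(wR, wS)] = {(a, c)
                             for (a, b1) in rel_R
                             for (b2, c) in rel_S
                             if b1 == b2}
    return out
```

For example, if `R = {0: {("a", "b")}}` and `S = {1: {("b", "c"), ("x", "y")}}`, then `comp(R, S)[(0, 1)]` is `{("a", "c")}`: only the pair with interpolant `"b"` survives.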
Given two simulation conventions $\scname{R} : \sctype{A}{B}$ and $\scname{S} : \sctype{B}{C}$, their transitive composition is $\comp{\scname{R}}{\scname{S}} = \simconv{W_{\scname{R}} \times W_{\scname{S}}} {\comp{\scname{R}^q}{\scname{S}^q}} {\comp{\scname{R}^r}{\scname{S}^r}}$. This concept is critical for vertical composition of open simulations. Open simulations are used to represent compiler correctness. They are forward simulations parameterized by simulation conventions for relating incoming and outgoing queries and replies.
\begin{definition}\label{def:open-sim}
Given two open LTS $L_1: A_1 \arrli B_1$ and $L_2: A_2 \arrli B_2$, and given two simulation conventions $\scname{R}_A : \sctype{A_1}{A_2}$ and $\scname{R}_B : \sctype{B_1}{B_2}$, an open forward simulation between $L_1$ and $L_2$ is a Kripke relation $R \in \krtype{W_B}{S_1}{S_2}$ that satisfies the following properties:
%
\begin{tabbing}
\quad\=\quad\=\quad\=\kill
\>\cmnt{(* Initial queries match *)}\\
\>$\forall\app q_1\app q_2,\app (q_1, q_2) \in \scname{R}_B^q \imply (q_1 \in D_1 \iff q_2 \in D_2)$\\
\>\cmnt{(* Initial states satisfy simulation *)}\\
\>$\forall\app w_B\app q_1\app q_2\app s_1,\app (q_1, q_2) \in \scname{R}_B^q(w_B) \imply (q_1, s_1) \in I_1 \imply \exists\app s_2, (s_1, s_2) \in R(w_B) \land (q_2, s_2) \in I_2.$\\
\>\cmnt{(* Final states satisfy simulation *)}\\
\>$\forall\app w_B\app s_1\app s_2\app r_1,\app (s_1, s_2) \in R(w_B) \imply (s_1, r_1) \in F_1 \imply \exists\app r_2, (r_1, r_2) \in \scname{R}_B^r(w_B) \land (s_2, r_2) \in F_2.$\\
\>\cmnt{(* Stepping relations satisfy simulation *)}\\
\>$\forall\app w_B\app s_1\app s_2\app t,\app (s_1, s_2) \in R(w_B) \imply \step{s_1}{t}{s_1'} \imply \exists\app s_2', (s_1', s_2') \in R(w_B) \land s_2 \overset{t}{\rightarrow^*} s_2'.$\\
\>\cmnt{(* External calls satisfy simulation *)}\\
\>$\forall\app w_B\app s_1\app s_2\app q_1,\app (s_1, s_2) \in R(w_B) \imply (s_1, q_1) \in X_1 \imply$\\
\>\>$\exists w_A\app q_2,\app (q_1,
q_2) \in \scname{R}_A^q(w_A) \land (s_2, q_2) \in X_2 \land$\\ \>\>\>$\forall\app r_1\app r_2\app s_1', (r_1, r_2) \in \scname{R}_A^r(w_A) \imply (s_1, r_1, s_1') \in Y_1 \imply \exists\app s_2', (s_1', s_2') \in R(w_B) \land (s_2, r_2, s_2') \in Y_2.$ \end{tabbing} We write $\osim{\scname{R}_A}{\scname{R}_B}{L_1}{L_2}$ when such a relation exists. \end{definition} From the above definition, it is easy to prove the horizontal and vertical compositionality of open simulations and their conformance to syntactic linking: \begin{theorem}[H. Compositionality]\label{thm:h-comp} Given $L_1, L_1': A_1 \arrli A_1$, $L_2, L_2': A_2 \arrli A_2$ and $\scname{R}:\sctype{A_1}{A_2}$, % \[ \osim{\scname{R}}{\scname{R}}{L_1}{L_2} \imply \osim{\scname{R}}{\scname{R}}{L_1'}{L_2'} \imply \osim{\scname{R}}{\scname{R}}{L_1 \semlink L_1'}{L_2 \semlink L_2'}. \] \end{theorem} \begin{theorem}[V. Compositionality]\label{thm:v-comp} Given $L_1 : A_1 \arrli B_1$, $L_2 : A_2 \arrli B_2$ and $L_3: A_3 \arrli B_3$, and given $\scname{R}_A:\sctype{A_1}{A_2}$, $\scname{R}_B:\sctype{B_1}{B_2}$, $\scname{S}_A:\sctype{A_2}{A_3}$ and $\scname{S}_B:\sctype{B_2}{B_3}$, % \[ \osim{\scname{R}_A}{\scname{R}_B}{L_1}{L_2} \imply \osim{\scname{S}_A}{\scname{S}_B}{L_2}{L_3} \imply \osim{\comp{\scname{R}_A}{\scname{S}_A}} {\comp{\scname{R}_B}{\scname{S}_B}} {L_1}{L_3}. \] \end{theorem} \begin{theorem}[Conformance to Syntactic Linking]\label{thm:syn-link} Given assembly modules $M_1, M_2: \kwd{Asm}$, $\sem{\_}: \kwd{Asm} \to (\asmli \arrli \asmli)$ and the identity simulation convention $\kwd{id}: \sctype{\asmli}{\asmli}$, \[ \osim{\kwd{id}}{\kwd{id}}{\sem{M_1} \semlink \sem{M_2}}{\sem{M_1 \synlink M_2}} \] \end{theorem} \subsection{Refinement of Simulation Conventions} Note that in~\thmref{thm:v-comp}, the vertical composition of simulations is trivial: they are put side-by-side to form a single one. As a result, the interface of the composed simulation is not a unified definition. 
To alleviate this problem, CompCertO introduces refinement of simulation conventions: \begin{definition} Given two simulation conventions $\scname{R}, \scname{S}: \sctype{A_1}{A_2}$, we say $\scname{R}$ is \emph{refined} by $\scname{S}$ if \begin{tabbing} \quad\=$\forall\app w_{\scname{S}}\app q_1\app q_2,\app (q_1, q_2) \in \scname{S}^q(w_{\scname{S}}) \imply \exists\app$\=\kill \>$\forall\app w_{\scname{S}}\app q_1\app q_2,\app (q_1, q_2) \in \scname{S}^q(w_{\scname{S}}) \imply \exists\app w_{\scname{R}},\app (q_1, q_2) \in \scname{R}^q(w_{\scname{R}}) \land$\\ \>\>$\forall\app r_1\app r_2,\app (r_1,r_2) \in \scname{R}^r(w_{\scname{R}}) \imply (r_1, r_2) \in \scname{S}^r(w_{\scname{S}})$ \end{tabbing} which we write as $\screfine{\scname{R}}{\scname{S}}$. If both $\screfine{\scname{R}}{\scname{S}}$ and $\screfine{\scname{S}}{\scname{R}}$, then $\scname{R}$ and $\scname{S}$ are equivalent (written as $\scequiv{\scname{R}}{\scname{S}}$). \end{definition} An open simulation is stable under refinement: \begin{theorem}\label{thm:sim-refine} Given $L_1 : A_1 \arrli B_1$ and $L_2 : A_2 \arrli B_2$, if\; $\screfine{\scname{R}'_A}{\scname{R}_A}:\sctype{A_1}{A_2}$, $\screfine{\scname{R}_B}{\scname{R}'_B}:\sctype{B_1}{B_2}$ and $\osim{\scname{R}_A}{\scname{R}_B}{L_1}{L_2}$, then $\osim{\scname{R}'_A}{\scname{R}'_B}{L_1}{L_2}$. \end{theorem} \noindent By utilizing the above property, it is possible to refine a transitive composition of two simulation conventions $\comp{\scname{R}}{\scname{S}}$ into a single unified definition $\scname{Q}$ at both incoming and outgoing sides. In particular, given $\osim{\comp{\scname{R}}{\scname{S}}}{\scname{P}}{L_1}{L_2}$, if we can prove $\refine{\scname{Q}}{\comp{\scname{R}}{\scname{S}}}$ we get $\osim{\scname{Q}}{\scname{P}}{L_1}{L_2}$; dually, given $\osim{\scname{P}}{\comp{\scname{R}}{\scname{S}}}{L_1}{L_2}$, if we can prove $\refine{\comp{\scname{R}}{\scname{S}}}{\scname{Q}}$ we get $\osim{\scname{P}}{\scname{Q}}{L_1}{L_2}$. 
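For finite worlds and relations, the refinement condition above can be checked directly. The following Python sketch is our own encoding: a simulation convention is a triple of worlds, per-world query relations, and per-world reply relations, all given as explicit sets.

```python
# A convention is (worlds, Rq, Rr) where Rq[w] and Rr[w] are sets of
# (source, target) pairs of queries and replies, respectively.
def refines(R, S):
    """R is refined by S: every S-related query pair is R-related at some
    world wR whose R-related replies are all S-related at wS."""
    WR, Rq, Rr = R
    WS, Sq, Sr = S
    for wS in WS:
        for qpair in Sq[wS]:
            if not any(qpair in Rq[wR] and Rr[wR] <= Sr[wS] for wR in WR):
                return False
    return True
```

Note the direction of the reply check: refinement demands the $\scname{R}$-replies at the chosen world be contained in the $\scname{S}$-replies, mirroring the $\forall r_1\, r_2$ clause of the definition.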
\subsection{CompCert Kripke Logical Relations}
\label{ssec:cklr}
Given a simulation $\osim{\scname{R}}{\scname{S}}{L_1}{L_2}$, ${\scname{R}}$ characterizes the conditions that the effects of external calls must satisfy, i.e., it describes the \emph{rely-conditions} of the simulation; ${\scname{S}}$ characterizes the effects the simulation exhibits to the calling environment, i.e., it describes the \emph{guarantee-conditions}. For proving the correctness of memory transformations, it is critical for rely-guarantee conditions to capture requirements on how memory states may be modified between queries and replies. For this, a simulation convention is parameterized by a \emph{CompCert Kripke Logical Relation} or CKLR~\cite{compcerto}.
\begin{definition}
A CompCert Kripke Logical Relation is a tuple $\cklr{W}{f}{\accsymb}{R}$ where $W$ is a set of worlds, $f:W \to \kmeminj$ a function for extracting injections from worlds, $\accsymb \subseteq W \times W$ an accessibility relation between worlds, and $R: \krtype{W}{\kmem}{\kmem}$ a Kripke relation over memory states that is compatible with CompCert's memory operations.
%
We write ${w} \accsymb {w'}$ for $(w, w') \in \accsymb$.
\end{definition}
\noindent Here, compatibility means $R$ is preserved by memory operations of CompCert's memory model (see~\cite{compcerto}). Transitive composition of CKLRs is defined as follows:
\begin{definition}
Given two CKLRs $K_1 = \cklr{W_1}{f_1}{\accsymb_1}{R_1}$ and $K_2 = \cklr{W_2}{f_2}{\accsymb_2}{R_2}$, their composition is $\comp{K_1}{K_2} = \cklr{W_1 \times W_2}{f_1 \times f_2}{\accsymb_1 \times \accsymb_2}{R_1 \times R_2}$.
\end{definition}
CompCertO provides three CKLRs: $\kinjp$ is a relation on \emph{memory injections with protection on states}, $\kinj$ is basically $\kinjp$ without protection, and $\kext$ is a relation over memory extensions. $\kinjp$ is the most interesting for the memory protection it provides. To define \kinjp, we first need to define the unmapped and out-of-reach regions w.r.t.
injections and a notion of unchanged memory regions.
\begin{definition}\label{def:unmapped}
Given $\minj{j}{m_1}{m_2}$, the memory cells in $m_1$ not in the domain of $j$ are \emph{unmapped} by $j$; the memory cells in $m_2$ not in the image of $j$ are \emph{out-of-reach} by $j$ in $m_2$:
%
{\small
\begin{tabbing}
\quad\=$(b_2, o_2) \in \outofreach{j}{m_1}$\;\=\kill
\>$(b_1, o_1) \in \unmapped{j}$\>$\iff \; j(b_1) = \none$ \\
\>$(b_2, o_2) \in \outofreach{j}{m_1}$\>$\iff \; \forall\app b_1\app o_2', \app j(b_1) = \some{(b_2, o_2')} \imply (b_1, o_2-o_2') \not\in \perm{m_1}{\knonempty}.$
\end{tabbing}}
\end{definition}
\begin{definition}\label{def:unchanged-on-val}
The set of memory addresses whose values and permissions are unchanged between $m$ and $m'$ (denoted as $\unchangedon{m}{m'}$) is defined as follows:
\begin{tabbing}
\quad\=\kill
\>$\unchangedon{m}{m'} \; = \; \unchangedonperm{m}{m'} \cap \unchangedonval{m}{m'}.$
\end{tabbing}
Here, $\unchangedonperm{m}{m'}$ holds if any valid cell $(b,o)$ in $m$ has the same permission in $m$ and $m'$; $\unchangedonval{m}{m'}$ holds if any readable cell $(b,o)$ in $m$ has the same value in $m$ and $m'$.
\end{definition}
\begin{definition}[CKLR with Memory Protection]
$\kinjp = \cklr{W_{\kinjp}}{f_\kinjp}{\accsymb_{\kinjp}}{R_{\kinjp}}$ where $W_{\kinjp} = (\kmeminj \times \kmem \times \kmem)$, $f_\kinjp(j, \_,\_) = j$, $(m_1, m_2) \in R_\kinjp(j, m_1, m_2) \iff \minj{j}{m_1}{m_2}$ and
\begin{tabbing}
\quad\=$\injpacc{(j, m_1, m_2)}{(j', m_1', m_2')} \; \iff \;$\=\kill
\>$\injpacc{(j, m_1, m_2)}{(j', m_1', m_2')} \; \iff \;j \subseteq j' \land \unmapped{j} \subseteq \unchangedon{m_1}{m_1'}$\\
\>\>$\land\; \outofreach{j}{m_1} \subseteq \unchangedon{m_2}{m_2'}.$
\end{tabbing}
\end{definition}
\noindent An example is shown in~\figref{fig:injp}, where the regions with diagonal lines in $m_1$ are unmapped and unchanged, while those in $m_2$ are out-of-reach and unchanged.
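For a finite memory footprint, the unmapped and out-of-reach regions can be computed directly. This Python sketch uses our own encoding (footprints as sets of `(block, offset)` cells with at least \knonempty{} permission, injections as dicts); it is not the Coq definition.

```python
# j: dict block -> (target_block, delta); footprints are sets of (b, o).
def unmapped(j, footprint1):
    """Source cells whose block is not in the domain of j."""
    return {(b, o) for (b, o) in footprint1 if b not in j}

def out_of_reach(j, footprint1, footprint2):
    """Target cells that no nonempty source cell is injected onto:
    for every j(b1) = (b2, delta), (b1, o2 - delta) is not in footprint1."""
    oor = set()
    for (b2, o2) in footprint2:
        hit = any(b2 == tb and (b1, o2 - delta) in footprint1
                  for b1, (tb, delta) in j.items())
        if not hit:
            oor.add((b2, o2))
    return oor
```

With `j = {1: (7, 10)}`, a source block 2 is unmapped, and target cells of block 7 beyond the injected image of block 1 (as well as all of an unrelated block 8) are out-of-reach, matching the diagonally hatched regions described for \figref{fig:injp}.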
$m_1'$ and $m_2'$ may contain newly allocated blocks; those blocks are never protected by \kinjp. Note that $j \subseteq j'$ means there exists some injection $j''$ for new blocks such that $j' = j \oplus j''$ where $\oplus$ is disjoint union. It means injections only grow bigger and never change in values. When \kinjp is used at outgoing side, it denotes that the simulation relies on knowing that the unmapped and out-of-reach regions at the call side are not modified across external calls. When \kinjp is used at incoming side, it denotes that the simulation guarantees the unmapped and out-of-reach regions at initial queries are not modified by the simulation itself. \begin{figure} \begin{tikzpicture} \snode{0.5cm}{0.3cm}{0}{draw} (n1) {}; \snode{0.4cm}{0.3cm}{0}{draw, right = 0.3cm of n1, pattern=north east lines} (n2) {}; \snode{0.6cm}{0.3cm}{0}{draw, right = 0.3cm of n2} (n3) {}; \snode{0.4cm}{0.3cm}{0}{draw, right = 0.3cm of n3, pattern=north east lines} (n4) {}; \draw[dashed] ($(n4.north east) + (0.4cm, 0.1cm)$) --++(0,-1.5cm); \snode{0.2cm}{0.3cm}{0}{draw, right = 0.6cm of n4} (n5) {}; \snode{0.4cm}{0.3cm}{0}{draw, right = 0.3cm of n5} (n6) {}; \snode{2.5cm}{0.3cm}{0}{draw, below left = 0.6cm and -2.3cm of n1} (m1) {}; \snode{0.5cm}{0.3cm}{0}{draw, right = 0.3cm of m1, pattern=north east lines} (m2) {}; \snode{0.8cm}{0.3cm}{0}{draw, right = 0.3cm of m2} (m3) {}; \draw[-stealth] (n1.south west) -- ($(m1.north west)+(0.4cm,0)$); \draw[dotted] (n1.south east) -- ($(m1.north west)+(0.9cm,0)$); \draw[-stealth] (n3.south west) -- ($(m1.north west)+(1.5cm,0)$); \draw[dotted] (n3.south east) -- ($(m1.north west)+(2.1cm,0)$); \draw[-stealth] (n6.south west) -- ($(m3.north west)+(0.2cm,0)$); \draw[dotted] (n6.south east) -- ($(m3.north west)+(0.6cm,0)$); \fill[pattern=north east lines] (m1.north west) rectangle ($(m1.south west)+(0.4cm,0)$); \fill[pattern=north east lines] ($(m1.north west)+(0.9cm,0)$) rectangle ($(m1.south west)+(1.5cm,0)$); \fill[pattern=north east 
lines] ($(m1.north west)+(2.1cm,0)$) rectangle (m1.south east); \draw ($(m1.north west)+(0.4cm,0)$) -- ($(m1.south west)+(0.4cm,0)$); \draw ($(m1.north west)+(0.9cm,0)$) -- ($(m1.south west)+(0.9cm,0)$); \draw ($(m1.north west)+(1.5cm,0)$) -- ($(m1.south west)+(1.5cm,0)$); \draw ($(m1.north west)+(2.1cm,0)$) -- ($(m1.south west)+(2.1cm,0)$); \node[left = 0.4cm of n1] (txtm1) {\small $m_1$}; \path let \p1 = (txtm1) in let \p2 = (m1) in node (txtm2) at (\x1, \y2) {\small $m_2$}; \path let \p1 = (txtm1) in let \p2 = (txtm2) in node (txtj1) at (\x1, {(\y1+\y2)*0.5}) {\small $j$}; \node[right = 0.4cm of n6] (txtmp1) {\small $m_1'$}; \path let \p1 = (txtmp1) in let \p2 = (m1) in node (txtmp2) at (\x1, \y2) {\small $m_2'$}; \path let \p1 = (txtmp1) in let \p2 = (txtmp2) in node (txtjp1) at (\x1, {(\y1+\y2)*0.5}) {\small $j'$}; \end{tikzpicture} \caption{Kripke Worlds Related by \kinjp} \label{fig:injp} \end{figure} Simulation conventions parameterized over CKLR are defined as follows: \begin{definition} A simulation convention parameterized over $\cklr{W}{f}{\accsymb}{R}$ is $\scname{R} = \simconv{W}{\scname{R}^q}{\mkrel{\scname{R}^r}}$ where ${\mkrel{\scname{R}^r}}$ is a modal Kripke relation such that $r \in {\mkrel{\scname{R}^r}(w)} \iff \exists\app w', w \accsymb w' \land r \in \scname{R}^r(w')$. \end{definition} \noindent That is, $\scname{R}$ inherits the world of the CKLR and the replies are related at future worlds following the accessibility relation. 
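The modal reply relation just defined (replies related at some world accessible from the current one) can be sketched for a finite accessibility relation. The encoding below (accessibility as a set of world pairs, the reply relation as a dict) is our own illustration.

```python
# acc: finite accessibility relation, a set of (w, w') pairs.
# Rr: dict world -> set of related reply pairs.
# diamond(acc, Rr, w) is the modal relation <>Rr at w: all reply pairs
# related at some world w' with w ~> w'.
def diamond(acc, Rr, w):
    future = {w2 for (w1, w2) in acc if w1 == w}
    related = set()
    for w2 in future:
        related |= Rr.get(w2, set())
    return related
```

So a reply pair is accepted at world $w$ exactly when some future world (e.g., a grown injection with new blocks) relates it; at a world with no successors nothing is related.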
For example, the complete C-level simulation convention in CompCertO is parameterized over \kinjp and defined as $\kc_{\kinjp} : \sctype{\cli}{\cli} = \simconv{W_{\kinjp}}{\scname{R}_\kc^q}{\mkrel{\scname{R}_\kc^r}}$ such that:
\begin{tabbing}
\quad\=\qquad\=$\iff$\quad\=\kill
\>$(\cquery{v_{f_1}}{sg}{\vec{v_1}}{m_1}, \cquery{v_{f_2}}{sg}{\vec{v_2}}{m_2}) \in \scname{R}_{\kc}^q(j,m_1,m_2) \; \iff \; \vinj{j}{v_{f_1}}{v_{f_2}} \land \vinj{j}{\vec{v_1}}{\vec{v_2}} \land \minj{j}{m_1}{m_2}$\\
\>$(\creply{v_1'}{m_1'}, \creply{v_2'}{m_2'}) \in {\mkrel{\scname{R}_{\kc}}}^r(j, m_1, m_2) \; \iff$\\
\>\>$\exists\app j', \injpacc{(j,m_1,m_2)}{(j',m_1',m_2')} \land \vinj{j'}{v_1'}{v_2'} \land \minj{j'}{m_1'}{m_2'}$
\end{tabbing}
\noindent Note that ${\mkrel{\scname{R}_{\kc}}}^r$ provides protection over unmapped and out-of-reach regions. For instance, given $\osim{\kc_\kinjp}{\kc_\kinjp}{L_1}{L_2}$, the unmapped memory in $L_1$ and out-of-reach memory in $L_2$ at sites of external calls would not be modified when these calls return, while the unmapped and out-of-reach memory at initial queries would not be modified when final states are reached. Like simulation conventions, CKLRs may also be refined:
\begin{definition}\label{def:cklr-refine}
Given two CKLRs $K = \cklr{W_K}{f_K}{\accsymb_K}{R_K}$ and $L = \cklr{W_L}{f_L}{\accsymb_L}{R_L}$, $K$ is refined by $L$, written as $\refine{K}{L}$, if
\begin{tabbing}
\quad\=$\forall\app w_L,\app (m_1, m_2) \in R_L(w_L) \imply \exists\app$\=\quad\=\kill
\>$\forall\app w_L,\app (m_1, m_2) \in R_L(w_L) \imply \exists\app w_K,\app (m_1, m_2) \in R_K(w_K) \land f_L(w_L) \subseteq f_K(w_K) \land$\\
\>\>$\forall\app w_K'\app m_1'\app m_2',\app \acc{K}{w_K}{w_K'} \imply (m_1', m_2') \in R_K(w_K') \imply$\\
\>\>\>$\exists\app w_L',\app \acc{L}{w_L}{w_L'} \land (m_1', m_2') \in R_L(w_L') \land f_K(w_K') \subseteq f_L(w_L')$.
\end{tabbing}
$K$ and $L$ are equivalent, written as $K \equiv L$, iff $\refine{K}{L}$ and $\refine{L}{K}$.
\end{definition}
\subsection{Verified Compositional Compilation in CompCertO}
For every language in CompCert, from \kwd{Clight} down to assembly, CompCertO assigns appropriate language interfaces for incoming and outgoing queries and replies. For every compiler pass, it devises appropriate simulation conventions and proves an open simulation with these conventions. Finally, by vertical compositionality (\thmref{thm:v-comp}) and refinement of simulation conventions, the correctness of CompCertO as stated below is proved, where \kwd{Clight} and \kwd{Asm} represent source and target programs, respectively, and $\sccompcerto:\sctype{\cli}{\asmli}$ is a refined simulation convention between \kwd{Clight} and \kwd{Asm}.
\begin{theorem}\label{thm:compcerto-correct}
Compilation in CompCertO is correct in terms of open simulations,
\[ \forall\app (M:\kwd{Clight})\app (M':\kwd{Asm}),\app \kwd{CompCertO}(M) = M' \imply \osim{\sccompcerto}{\sccompcerto}{\sem{M}}{\sem{M'}}. \]
\end{theorem}
\subsection{Problem with the Semantic Interface for CompCertO}
Ideally, one would like $\sccompcerto$ to be regarded as a common interface for verified compilation of heterogeneous programs. For example, if we would like to prove that a hand-written assembly module $M_A$ satisfying a C calling convention can be linked with a program $M_C$ compiled by CompCertO, we would first design an open LTS $L_A: \cli \arrli \cli$ as a C-level specification for $M_A$, and then prove $\osim{\sccompcerto}{\sccompcerto}{L_A}{\sem{M_A}}$. By Theorems~\ref{thm:compcerto-correct},~\ref{thm:h-comp} and~\ref{thm:syn-link}, we can then establish a simulation between the composed source specification and the physically linked assembly program:
\[ \osim{\sccompcerto}{\sccompcerto}{\sem{M_C} \semlink L_A}{\sem{\kwd{CompCertO}(M_C) + M_A}} \]
The above simulation may be further refined.
For example, if $M_C$ is also given an abstract specification $L_C$ s.t. $\osim{\kid}{\kid}{L_C}{\sem{M_C}}$, we get $ \osim{\sccompcerto}{\sccompcerto}{L_C \semlink L_A}{\sem{\kwd{CompCertO}(M_C) + M_A}}. $ However, the above proof is actually quite difficult to complete because, despite the refinements of simulation conventions, $\sccompcerto$ still ends up being a composition of a series of simulation conventions:
\[ \sccompcerto = \kc_{\kinjp+\kinj+\kext}^* \cdot \kwt \cdot \kcl \cdot \klm \cdot \kma \cdot \kasm_{{\kinj}}. \]
Here, $\kc$ is parameterized over a disjoint union of all CKLRs; the star superscript denotes that $\kc_{\kinjp+\kinj+\kext}$ may be repeated an arbitrary number of times. $\kwt: \sctype{\cli}{\cli}$ denotes that C function calls and returns are well-typed (i.e., conforming to the signatures). $\kcl: \sctype{\cli}{\ltlli}$, $\klm: \sctype{\ltlli}{\machli}$ and $\kma: \sctype{\machli}{\asmli}$ successively relate C-level queries and replies to assembly ones through intermediate language interfaces; together they form the calling convention of CompCert. Finally, $\kasm: \sctype{\asmli}{\asmli}$ is like \kc but at the assembly level. Continuing with our example, to prove $\osim{\sccompcerto}{\sccompcerto}{L_A}{\sem{M_A}}$, we have to show that $L_A$ is related to $\sem{M_A}$ via a series of intermediate queries and replies. Constructing these intermediate states requires internal knowledge of CompCert's compilation chain and incurs a lot of complexity even for trivial examples. This complexity quickly gets out of control with more sophisticated programs.
\subsection{Challenges and Our Approaches}
To solve the above problem, we need to come up with a unified semantic interface for CompCertO that directly relates source and target program semantics and that can serve as an understandable contract to users. Non-trivial challenges exist for achieving this goal, as discussed below.
\subsubsection{Unification of the CKLRs} \label{sec:unify-cklr} \begin{figure} \begin{subfigure}{0.46\textwidth} \begin{tikzpicture} \node (m1) {\small $m_1$}; \node [below = 1.2cm of m1] (m2) {\small $m_2$}; \node [below = 1.2cm of m2] (m3) {\small $m_3$}; \path (m1) -- node[draw] (w1) {\small $w_{12}$} (m2); \path (m2) -- node[draw] (w2) {\small $w_{23}$} (m3); \draw (m1) -- (w1) -- (m2); \draw (m2) -- (w2) -- (m3); \node [right = 0.8cm of m1] (m11) {\small $m_1$}; \path let \p1 = (m11) in let \p2 = (m3) in node (m33) at (\x1, \y2) {\small $m_3$}; \path (m11) -- node[draw,red] (w) {\small $w_{13}$} (m33); \draw[red] (m11) -- (w) -- (m33); \node [right = 1cm of m11] (m11p) {\small $m_1'$}; \path let \p1 = (m11p) in let \p2 = (m33) in node (m33p) at (\x1, \y2) {\small $m_3'$}; \path (m11p) -- node[draw] (wp) {\small $w_{13}'$} (m33p); \draw (m11p) -- (wp) -- (m33p); \node [right = 0.8cm of m11p] (m1p) {\small $m_1'$}; \path let \p1 = (m1p) in let \p2 = (m2) in node[red] (m2p) at (\x1, \y2) {\small $m_2'$}; \path let \p1 = (m1p) in let \p2 = (m33) in node (m3p) at (\x1, \y2) {\small $m_3'$}; \path (m1p) -- node[draw,red] (w1p) {\small $w_{12}'$} (m2p); \path (m2p) -- node[draw,red] (w2p) {\small $w_{23}'$} (m3p); \draw[red] (m1p) -- (w1p) -- (m2p); \draw[red] (m2p) -- (w2p) -- (m3p); \path (m2) -- node {\small $\imply$} (w); \path (w) -- node {\small $\accsymb$} (wp); \path (wp) -- node {\small $\imply$} (m2p); \node[draw, dashed, right = 0.6cm of w1p] (w1pp) {\small $w_{12}$}; \path (w1pp) -- node[rotate=180,red] {\small $\accsymb$} (w1p); \node[draw, dashed, right = 0.6cm of w2p] (w2pp) {\small $w_{23}$}; \path (w2pp) -- node[rotate=180,red] {\small $\accsymb$} (w2p); \end{tikzpicture} \caption{$\refine{\kwd{K}}{\comp{\kwd{K}}{\kwd{K}}}$} \label{fig:trans-comp1} \end{subfigure} % \qquad % \begin{subfigure}{0.46\textwidth} \begin{tikzpicture} \node (m1) {\small $m_1$}; \node [below = 1.2cm of m1,draw] (w) {\small $w_{13}$}; \node [below = 1.2cm of w] (m3) {\small 
$m_3$}; \node [right = 0.8cm of m1] (m11) {\small $m_1$}; \path let \p1 = (m11) in let \p2 = (w) in node[red] (m2) at (\x1, \y2) {\small $m_2$}; \path let \p1 = (m11) in let \p2 = (m3) in node (m33) at (\x1, \y2) {\small $m_3$}; \path (m11) -- node[draw,red] (w1) {\small $w_{12}$} (m2); \path (m2) -- node[draw,red] (w2) {\small $w_{23}$} (m33); \draw (m1) -- (w) -- (m3); \draw[red] (m11) -- (w1) -- (m2); \draw[red] (m2) -- (w2) -- (m33); \node [right = 1cm of m11] (m1p) {\small $m_1'$}; \path let \p1 = (m1p) in let \p2 = (m2) in node (m2p) at (\x1, \y2) {\small $m_2'$}; \path let \p1 = (m1p) in let \p2 = (m33) in node (m3p) at (\x1, \y2) {\small $m_3'$}; \path (m1p) -- node[draw] (w1p) {\small $w_{12}'$} (m2p); \path (m2p) -- node[draw] (w2p) {\small $w_{23}'$} (m3p); \draw (m1p) -- (w1p) -- (m2p); \draw (m2p) -- (w2p) -- (m3p); \node [right = 0.8cm of m1p] (m11p) {\small $m_1'$}; \path let \p1 = (m11p) in let \p2 = (m3p) in node (m33p) at (\x1, \y2) {\small $m_3'$}; \path (m11p) -- node[draw,red] (w3p) {\small $w_{13}'$} (m33p); \draw[red] (m11p) -- (w3p) -- (m33p); \path (w) -- node {\small $\imply$} (m2); \path (w1) -- node {\small $\accsymb$} (w1p); \path (w2) -- node {\small $\accsymb$} (w2p); \path (m2p) -- node {\small $\imply$} (w3p); \node[draw, dashed, right = 0.6cm of w3p] (w3pp) {\small $w_{13}$}; \path (w3pp) -- node[rotate=180,red] {\small $\accsymb$} (w3p); \end{tikzpicture} \caption{$\refine{\comp{\kwd{K}}{\kwd{K}}}{\kwd{K}}$} \label{fig:trans-comp2} \end{subfigure} \caption{Transitivity of CKLRs} \label{fig:trans-comp} \end{figure} In CompCertO, although most of the simulation conventions parameterized over CKLRs are moved to the top level through refinement, the refined convention $\kc_{\kinjp+\kinj+\kext}^*$ still has two problems: 1) it requires users to work with all CKLRs, and 2) it requires users to deal with transitive composition of an arbitrary number of CKLRs. 
To solve the first problem, we use a \emph{uniform} definition \kwd{K} that subsumes all CKLRs. Conceptually, for this to be possible, \kwd{K} should capture rely-guarantee conditions for every compiler pass. To solve the second problem, we show that \kwd{K} is transitively composable, i.e., $\kwd{K} \equiv \comp{\kwd{K}}{\kwd{K}}$, so that given any simulation convention $\scname{R}$, $\comp{\scname{R}_{\kwd{K}}}{\scname{R}_{\kwd{K}}}$ can be merged into a single definition $\scname{R}_{\kwd{K}}$. Letting $\kwd{K} = \cklr{W}{f}{\accsymb}{R}$, this amounts to showing that refinement holds in both directions according to~\defref{def:cklr-refine}, as displayed in~\figref{fig:trans-comp}. In this figure, black symbols are $\forall$-quantified (assumptions we know) and red ones are $\exists$-quantified (conclusions we need to construct). The $\accsymb$ relation connecting Kripke worlds denotes evolution of worlds (and memory states) between queries and replies. Note that, for simplicity, we use $w$ not only to represent worlds but also to denote $R(w)$ when it connects memory states through vertical lines. In both cases in~\figref{fig:trans-comp}, we need to construct interpolating states for relating source and target memory (i.e., $m_2'$ in~\figref{fig:trans-comp1} and $m_2$ in~\figref{fig:trans-comp2}). The construction of $m_2'$ is especially difficult because we need to decompose the evolved world $w_{13}'$ into $w_{12}'$ and $w_{23}'$ such that they are accessible from the original worlds $w_{12}$ and $w_{23}$. This is in general considered very difficult if worlds are \emph{relations}, which naturally allow non-determinism. Transitivity of logical relations is difficult to prove precisely because the construction of interpolating states for relations is generally not possible. Even in the rare cases where this is feasible, the construction is quite involved~\cite{Ahmed06esop}.
However, we notice that we are working with first-order states in CompCert and that CKLRs are actually functional in nature. Based on these observations, we develop a complete solution to the above problems. This is discussed in~\secref{sec:injp}. \subsubsection{Unification of Simulation Conventions} With a uniform CKLR \kwd{K}, we still need to unify the simulation conventions for every compiler pass into one convention directly relating source and target semantics. In order to avoid changing the proofs of CompCert's compiler passes, we need to develop new techniques for refining simulation conventions parameterized over \kwd{K} and merging CompCert's calling convention with \kwd{K}. We discuss this work in~\secref{sec:compcerto}. We further demonstrate that the resulting semantic interface is indeed effective for VCC through a non-trivial application in~\secref{sec:eval}. \section{CompCertO with a Unified Semantic Interface} \label{sec:compcerto} In principle, given the uniform CKLR \kinjp, we should be able to merge the simulation conventions for individual passes of CompCertO into a uniform one that works at both the incoming and outgoing sides. In this section, we discuss how this merging is carried out.
\subsection{An Overview} \begin{table} \caption{Significant Passes of CompCertO} \small \begin{tabular}{c c} \hline \textbf{Language/Pass} & \textbf{Outgoing $\arrli$ Incoming}\\ \hline \textbf{\kwd{Clight}} & $\cli \arrli \cli$ \\ \selfsim{\kwd{Self-Sim}} & \selfsim{$\kc_{\kinjp} \arrli \kc_{\kinjp}$} \\ \kwd{SimplLocals} & $\kc_{\kinjp} \arrli \kc_{\kinj}$ \\ \textbf{\kwd{Csharpminor}} & $\cli \arrli \cli$ \\ \kwd{Cminorgen} & $\kc_{\kinjp} \arrli \kc_{\kinj}$ \\ \textbf{\kwd{Cminor}} & $\cli \arrli \cli$ \\ \kwd{Selection} & $\kwt \cdot \kc_{\kext} \arrli \kwt \cdot \kc_{\kext}$ \\ \textbf{\kwd{CminorSel}} & $\cli \arrli \cli$ \\ \kwd{RTLgen} & $\kc_{\kext} \arrli \kc_{\kext}$ \\ \textbf{\kwd{RTL}} & $\cli \arrli \cli$ \\ \kwd{Tailcall} & $\kc_{\kext} \arrli \kc_{\kext}$ \\ \selfsim{\kwd{Self-Sim}} & \selfsim{$\kc_{\kinj} \arrli \kc_{\kinj}$} \\ \kwd{Inlining} & $\kc_{\kinjp} \arrli \kc_{\kinj}$ \\ \selfsim{\kwd{Self-Sim}} & \selfsim{$\kc_{\kinjp} \arrli \kc_{\kinjp}$} \\ \selfsim{\kwd{Self-Sim}} & \selfsim{$\kc_{\kinj} \arrli \kc_{\kinj}$} \\ \kwd{Allocation} & $\kwt \cdot \kc_{\kext} \cdot \kcl \arrli \kwt \cdot \kc_{\kext} \cdot \kcl $ \\ \textbf{\kwd{LTL}} & $\ltlli \arrli \ltlli$ \\ \kwd{Tunneling} & $\kwd{ltl}_{\kext} \arrli \kwd{ltl}_{\kext}$ \\ \textbf{\kwd{Linear}} & $\ltlli \arrli \ltlli$ \\ \kwd{Stacking} & $\kwd{ltl}_{\kinjp} \cdot \klm \arrli \klm \cdot \kwd{mach}_{\kinj}$ \\ \textbf{\kwd{Mach}} & $\machli \arrli \machli$ \\ \kwd{Asmgen} & $\kmach_{\kext} \cdot \kma \arrli \kmach_{\kext} \cdot \kma$\\ \textbf{\kwd{Asm}} & $\asmli \arrli \asmli$ \\ \selfsim{\kwd{Self-Sim}} & \selfsim{$\kasm_{\kinj} \arrli \kasm_{\kinj}$} \\ \selfsim{\kwd{Self-Sim}} & \selfsim{$\kasm_{\kinjp} \arrli \kasm_{\kinjp}$} \end{tabular} \label{tab:compcerto} \end{table} CompCertO compiles \kwd{Clight} programs into \kwd{Asm} programs through 18 passes~\cite{compcerto}. We shall develop a unified simulation convention for 15 of them (excluding the 3 value analysis passes). 
In~\tabref{tab:compcerto}, we list passes whose simulations are not parameterized by the identity simulation convention, i.e., not of the form $\osim{\kid}{\kid}{L_1}{L_2}$.\footnote{The omitted passes are \kwd{Cshmgen}, \kwd{Renumber}, \kwd{Linearize}, \kwd{CleanupLabels} and \kwd{Debugvar}.} We have omitted identity simulation conventions because they do not affect the refinement process. For reference, we also list the source and target languages of these passes together with their language interfaces. Note that there is a collection of optimization passes working on the \kwd{RTL} language. Note also that rows in red are self-simulating passes inserted to facilitate the refinement of simulation conventions, which we will discuss in a moment. The simulation conventions $\kltl_K: \sctype{\ltlli}{\ltlli}$, $\kmach_K: \sctype{\machli}{\machli}$ and $\kasm_K: \sctype{\asmli}{\asmli}$ are like $\kc_K: \sctype{\cli}{\cli}$: they relate the same language interfaces and are parameterized by a CKLR $K$. The simulation conventions $\kcl: \sctype{\cli}{\ltlli}$, $\klm: \sctype{\ltlli}{\machli}$ and $\kma: \sctype{\machli}{\asmli}$ capture the calling convention of CompCert: $\kcl$ relates C-level queries and replies with those in the \kwd{LTL} language, where the arguments are distributed to abstract stack slots; $\klm$ further relates abstract stack slots with states on an architecture-independent machine; $\kma$ relates this state to registers and memory in an assembly language (X86 assembly in this case). An obvious approach to unifying simulation conventions is to modify the proof of each pass so that its incoming and outgoing conventions are all parameterized by \kinjp. This is possible in principle by the uniformity of \kinjp discussed in~\secref{sec:injp}. Then we can merge them together by transitivity of \kinjp. However, this requires non-trivial changes to CompCert.
We discover that, by combining appropriate lemmas for refining simulation conventions with \emph{self-simulations} at appropriate compilation stages, the merging can be carried out \emph{without any modification} to the existing proofs in CompCertO. We shall first present these lemmas below and then discuss this more lightweight approach. \subsection{Properties about Refinement of Simulation Conventions} We begin with a disclaimer: many of the properties we present in this subsection have already been proved in CompCertO. However, there are also important properties that are missing but necessary for building a unified interface. They are $\refine{\kltl_{\kinjp} \cdot \klm}{\klm \cdot \kmach_{\kinjp}}$ in~\lemref{lem:ca-com}, $\refine{\kwt \cdot \scname{R}_K} {\scname{R}_K \cdot \kwt}$ in~\lemref{lem:wt}, and properties (5-7) in~\lemref{lem:sim-refine}. We have proved these missing properties in this work. With that out of the way, we now give a complete list of the required properties below. \begin{lemma}\label{lem:ca-com} For $\kwd{XY} \in \{\kcl ,\klm ,\kma \}$ and $K \in \{\kext ,\kinj ,\kinjp \}$ we have $\refine{\kwd{X}_{K} \cdot \kwd{XY}} {\kwd{XY} \cdot \kwd{Y}_K }$. \end{lemma} Here, \kwd{X} and \kwd{Y} denote the simulation conventions for the source and target languages of \kwd{XY}, respectively. For instance, $\kwd{X} = \kc$ and $\kwd{Y} = \kltl$ when $\kwd{XY} = \kcl$. This lemma indicates that, at the outgoing side, a convention lower than $\kcl$, $\klm$ and $\kma$ may be lifted over them to a higher position. Conversely, at the incoming side, a convention higher than $\kcl$, $\klm$ and $\kma$ may be pushed down to a lower position. \begin{lemma}\label{lem:wt} For any\; $\scname{R}_K:\sctype{\cli}{\cli}$, we have $(1) \scequiv{\scname{R}_K \cdot \kwt}{\kwt \cdot \scname{R}_K \cdot \kwt}$ and\; $(2) \scequiv{\scname{R}_K \cdot \kwt}{\kwt \cdot \scname{R}_K}$.
\end{lemma} The first conclusion indicates that a well-typedness requirement at the $\cli$-level may be removed if it is followed immediately by another convention and a well-typedness requirement. The second conclusion indicates that well-typedness commutes with $\cli$-level simulation conventions. \begin{lemma}\label{lem:sim-refine} For any $\scname{R}$, $(1) \scname{R}_\kext \cdot \scname{R}_\kext \equiv \scname{R}_\kext$ $\; (2) \scname{R}_\kext \cdot \scname{R}_\kinj \equiv \scname{R}_\kinj$ $\; (3) \scname{R}_\kinj \cdot \scname{R}_\kext \equiv \scname{R}_\kinj$ \begin{tabbing} \quad\=\kill \>$(4) \refine{\scname{R}_\kinj \cdot \scname{R}_\kinj}{\scname{R}_\kinj}$ \;$(5) \scname{R}_\kinjp \cdot \scname{R}_\kinjp \equiv \scname{R}_\kinjp$ \;$(6) \refine{\scname{R}_\kinjp}{\scname{R}_\kinj}$ \;$(7) \refine{\scname{R}_\kinjp \cdot \scname{R}_\kinj \cdot \scname{R}_\kinjp}{\scname{R}_\kinjp}$. \end{tabbing} \end{lemma} This is a collection of refinement properties mirroring those for CKLRs. Property (1) indicates that $\scname{R}_\kext$ is transitive. Properties (2) and (3) indicate that $\scname{R}_\kext$ may be absorbed into $\scname{R}_\kinj$. Property (4) states that ${\scname{R}_\kinj \cdot \scname{R}_\kinj}$ is refined by $\scname{R}_\kinj$. Note that the refinement in the other direction is not provable because of the lack of memory protection needed for constructing interpolating states. By contrast, ${\scname{R}_\kinjp \cdot \scname{R}_\kinjp}$ is equivalent to $\scname{R}_\kinjp$ as indicated by property (5); it is a direct consequence of $\kinjp \cdot \kinjp \equiv \kinjp$, which we have proved in~\secref{sec:injp-trans}. The remaining two properties provide ways to absorb $\scname{R}_\kinj$ into $\scname{R}_\kinjp$ from two directions: property (6) is for reducing $\scname{R}_\kinj$ to $\scname{R}_\kinjp$ at the outgoing side, while property (7) is for absorbing $\scname{R}_\kinj$ into surrounding $\scname{R}_\kinjp$ at the incoming side.
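To see how these properties drive the upcoming unification, it helps to read them as a terminating rewriting system over composite conventions. The following Python sketch is our own toy model (not part of the Coq development): tokens stand for $\scname{R}_\kext$, $\scname{R}_\kinj$ and $\scname{R}_\kinjp$, and each rule rewrites a composite into a convention refining it at the outgoing side; the rule derived from property (6) is only sound there.

```python
# Toy rewriting system (our own illustration, not the Coq development)
# modeling the absorption properties of the lemma above for one fixed R.
RULES = [
    (["ext", "ext"], ["ext"]),     # (1)  R_ext . R_ext  ==  R_ext
    (["ext", "inj"], ["inj"]),     # (2)  R_ext . R_inj  ==  R_inj
    (["inj", "ext"], ["inj"]),     # (3)  R_inj . R_ext  ==  R_inj
    (["injp", "injp"], ["injp"]),  # (5)  R_injp . R_injp == R_injp
    (["inj"], ["injp"]),           # (6)  R_injp refines R_inj (outgoing only)
]

def normalize(seq):
    """Rewrite until no rule applies; terminates because every rule
    decreases (length of seq, number of 'inj' tokens) lexicographically."""
    again = True
    while again:
        again = False
        for i in range(len(seq)):
            for lhs, rhs in RULES:
                if seq[i:i + len(lhs)] == lhs:
                    seq = seq[:i] + rhs + seq[i + len(lhs):]
                    again = True
                    break
            if again:
                break
    return seq

# The C-level block of the outgoing refinement chain collapses to one injp:
print(normalize(["injp", "ext", "inj", "injp", "injp",
                 "inj", "ext", "ext", "injp", "ext", "inj"]))  # ['injp']
```

Running the toy on the $\cli$-level block of the outgoing chain collapses it to a single \kinjp token, mirroring the absorption steps of the outgoing refinement sequence presented later.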
In the upcoming refinement process, we need to insert self-simulations at appropriate levels for combining and absorbing simulation conventions; these self-simulations have already been proved in CompCertO: \begin{theorem}\label{thm:self-sim} If $p$ is a program written in $\kwd{Clight}$ or $\kwd{RTL}$ and $\scname{R} \in \{\kc_{\kext}, \kc_{\kinj}, \kc_{\kinjp}\}$, or $p$ is written in $\kwd{Asm}$ and $\scname{R} \in \{\kasm_{\kext}, \kasm_{\kinj}, \kasm_{\kinjp}\}$, then $\osim{\scname{R}}{\scname{R}}{\sem{p}}{\sem{p}}$ holds. \end{theorem} \subsection{Unification of Simulation Conventions} \label{sec:unification} To facilitate absorbing simulation conventions parameterized over CKLRs at the incoming and outgoing sides, we first insert several self-simulations into the compiler passes, as shown in~\tabref{tab:compcerto}. As we shall see below, this is to supply extra $\scname{R}_\kinj$ and $\scname{R}_\kinjp$ conventions for 1) absorbing $\scname{R}_\kext$ into $\scname{R}_\kinj$ and 2) absorbing $\scname{R}_\kinj$ into $\scname{R}_\kinjp$ (both by properties in~\lemref{lem:sim-refine}). We then unify the conventions at the incoming and outgoing sides.
That is, starting from the simulation $\osim {\scname{R}}{\scname{S}}{L_1}{L_2}$ resulting from the transitive composition of compiler passes in~\tabref{tab:compcerto} where \begin{tabbing} \quad\=$\scname{R}$ \== \=\kill \>$\scname{R}$\>=\>$\kc_{\kinjp} \cdot \kc_{\kinjp} \cdot \kc_{\kinjp} \cdot \kwt \cdot \kc_{\kext} \cdot \kc_{\kext} \cdot \kc_{\kext} \cdot \kc_\kinj \cdot \kc_{\kinjp} \cdot \kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kcl$\\ \>\>\>$\cdot \kltl_{\kext} \cdot \kltl_{\kinjp} \cdot \klm \cdot \kmach_{\kext} \cdot \kma \cdot \kasm_{\kinj} \cdot \kasm_{\kinjp}$\\ \>$\scname{S}$\>=\>$\kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kc_{\kext} \cdot \kc_{\kext} \cdot \kc_\kinj \cdot \kc_{\kinj} \cdot \kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kcl$\\ \>\>\>$\cdot \kltl_{\kext} \cdot \klm \cdot \kmach_{\kinj} \cdot \kmach_{\kext} \cdot \kma \cdot \kasm_{\kinj} \cdot \kasm_{\kinjp}$, \end{tabbing} we find two sequences of refinements $\sccompcerto \sqsubseteq \scname{R}_n \sqsubseteq \ldots \sqsubseteq \scname{R}_1 \sqsubseteq \scname{R}$ and $\scname{S} \sqsubseteq \scname{S}_1 \sqsubseteq \ldots \sqsubseteq \scname{S}_m \sqsubseteq \sccompcerto$. From these refinements and \thmref{thm:sim-refine}, we merge the simulation conventions on both sides into the unified convention $\sccompcerto$ and obtain the simulation $\osim {\sccompcerto} {\sccompcerto} {L_1}{L_2}$. For each side, the unification proceeds in the following steps: \begin{enumerate} \item We prove that simulation conventions parameterized over CKLRs can be shuffled to the $\cli$ and $\asmli$ levels, leaving the calling conventions $\kcl$, $\klm$ and $\kma$ in the middle. \item We prove that simulation conventions parameterized over CKLRs at the same level can be composed into a single simulation convention parameterized over \kinjp.
\item We prove that the $\cli$-level convention and the calling convention can be unified into a single convention \kcainjp whose precise definition we will discuss in~\secref{ssec:cainjp}. This requires us to prove ${\kc_\kinjp \cdot \kcl \cdot \klm \cdot \kma} \equiv \kcainjp$, which is a straightforward two-step composition of $\kcl \cdot \klm \cdot \kma$ into $\kwd{CA}$ and $\kc_\kinjp \cdot \kwd{CA}$ into \kcainjp. \end{enumerate} The unified simulation convention relates C and assembly queries and replies. It is \[ \sccompcerto : \sctype{\cli}{\asmli} = \kwt \cdot \kcainjp \cdot \kasm_{\kinjp}. \] Note that we have an $\asmli$-level convention $\kasm_\kinjp$, which is not a problem in verified compilation because we can always obtain an assembly-level program (either produced by compilation or already written by hand), and it is self-simulating over $\kasm_\kinjp$ by~\thmref{thm:self-sim}. See also \secref{sec:eval} for a concrete example. We now explain how the unification is carried out. \subsubsection{Unification at the Outgoing Side} The following displays the sequence of refined simulation conventions $\sccompcerto \sqsubseteq \scname{R}_n \sqsubseteq \ldots \sqsubseteq \scname{R}_1 \sqsubseteq \scname{R}$. It begins with the composition of all initial outgoing conventions (including those for self-simulations) and ends with the unified interface.
{\small \begin{tabbing} \quad\=$(11)$ \=\kill \>$(1)$\>$\textcolor{red}{\kc_{\kinjp} \cdot \kc_{\kinjp} \cdot \kc_{\kinjp}} \cdot \kwt \cdot \textcolor{red}{\kc_{\kext} \cdot \kc_{\kext} \cdot \kc_{\kext}} \cdot \kc_\kinj \cdot \kc_{\kinjp} \cdot \kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kcl$\\ \>\>$\cdot \kltl_{\kext} \cdot \kltl_{\kinjp} \cdot \klm \cdot \kmach_{\kext} \cdot \kma \cdot \kasm_{\kinj} \cdot \kasm_{\kinjp}$\\ \>$(2)$\>$\kc_{\kinjp} \cdot \kwt \cdot \kc_{\kext} \cdot \kc_\kinj \cdot \kc_{\kinjp} \cdot \kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kcl$\\ \>\>$\cdot \textcolor{red}{\kltl_{\kext} \cdot \kltl_{\kinjp}} \cdot \klm \cdot \textcolor{red}{\kmach_{\kext}} \cdot \kma \cdot \textcolor{red}{\kasm_{\kinj}} \cdot \kasm_{\kinjp}$ \\ \>$(3)$\>$\kc_{\kinjp} \cdot \kwt \cdot \kc_{\kext} \cdot \kc_\kinj \cdot \kc_{\kinjp} \cdot \kc_{\kinjp} \cdot \kc_{\kinj} \cdot \textcolor{red}{\kwt} \cdot \kc_{\kext} \cdot \kc_{\kext} \cdot \kc_{\kinjp} \cdot \kc_{\kext} \cdot \kc_{\kinj} \cdot \kcl \cdot \klm \cdot \kma \cdot \kasm_{\kinjp}$\\ \>$(4)$\>$\kc_{\kinjp} \cdot \textcolor{red}{\kwt \cdot \kc_{\kext} \cdot \kwt} \cdot \kc_\kinj \cdot \kc_{\kinjp} \cdot \kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kc_{\kext} \cdot \kc_{\kext} \cdot \kc_{\kinjp} \cdot \kc_{\kext} \cdot \kc_{\kinj} \cdot \kcl \cdot \klm \cdot \kma \cdot \kasm_{\kinjp}$\\ \>$(5)$\>$\kc_{\kinjp} \cdot \kc_{\kext} \cdot \textcolor{red}{\kwt} \cdot \kc_\kinj \cdot \kc_{\kinjp} \cdot \kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kc_{\kext} \cdot \kc_{\kext} \cdot \kc_{\kinjp} \cdot \kc_{\kext} \cdot \kc_{\kinj} \cdot \kcl \cdot \klm \cdot \kma \cdot \kasm_{\kinjp}$\\ \>$(6)$\>$\kwt \cdot \kc_{\kinjp} \cdot \textcolor{red}{\kc_{\kext} \cdot \kc_\kinj} \cdot \kc_{\kinjp} \cdot \kc_{\kinjp} \cdot \textcolor{red}{\kc_{\kinj} \cdot \kc_{\kext} \cdot \kc_{\kext}} \cdot \kc_{\kinjp} \cdot \textcolor{red}{\kc_{\kext} \cdot \kc_{\kinj}} \cdot \kcl \cdot \klm \cdot \kma \cdot 
\kasm_{\kinjp}$\\ \>$(7)$\>$\kwt \cdot \textcolor{red}{\kc_{\kinjp} \cdot \kc_\kinj \cdot \kc_{\kinjp} \cdot \kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kc_{\kinjp} \cdot \kc_{\kinj}} \cdot \kcl \cdot \klm \cdot \kma \cdot \kasm_{\kinjp}$\\ \>$(8)$\>$\kwt \cdot \textcolor{red}{\kc_\kinjp \cdot \kcl \cdot \klm \cdot \kma} \cdot \kasm_{\kinjp}$\\ \>$(9)$\>$\kwt \cdot \kcainjp \cdot \kasm_{\kinjp}$ \end{tabbing}} In each line, the conventions in red are those transformed by the refinement operation at that step. In step (1), we merge consecutive conventions by applying $\kc_{\kinjp} \equiv \comp{\kc_{\kinjp}}{\kc_{\kinjp}}$ and $\kc_{\kext} \equiv \comp{\kc_{\kext}}{\kc_{\kext}}$ provided by~\lemref{lem:sim-refine}. In step (2), we lift conventions over $\kcl$, $\klm$ and $\kma$ to higher positions by~\lemref{lem:ca-com}. In steps (3) and (4), we first move $\kwt$ to higher positions by property (2) in~\lemref{lem:wt}, then eliminate a \kwt by property (1) in~\lemref{lem:wt}. In step (5), we move the remaining \kwt to the top level. In step (6), we absorb $\kc_\kext$ into $\kc_\kinj$ by properties (2) and (3) in~\lemref{lem:sim-refine}. In step (7), we absorb $\kc_\kinj$ and $\kc_\kinjp$ into a single $\kc_\kinjp$ by applying $\refine{\kc_\kinjp}{\kc_\kinj}$ and $\kc_{\kinjp} \equiv \comp{\kc_{\kinjp}}{\kc_{\kinjp}}$ provided by~\lemref{lem:sim-refine}. Finally, ${\kc_\kinjp \cdot \kcl \cdot \klm \cdot \kma}$ is merged into $\kcainjp$. \subsubsection{Unification at the Incoming Side} \label{sec:unification-incoming} The original simulation conventions at the incoming side are parameterized by $\kinj$, which lacks the memory protection of $\kinjp$. One could modify the proofs of CompCert to make $\kinjp$ an incoming convention. However, we show that this is unnecessary: with the inserted self-simulations over \kinjp, conventions over \kinj may be absorbed into them.
The following is the refinement sequence $\scname{S} \sqsubseteq \scname{S}_1 \sqsubseteq \ldots \sqsubseteq \scname{S}_m \sqsubseteq \sccompcerto$ that realizes this idea. {\small \begin{tabbing} \quad\=$(11)$ \=\kill \>$(1)$\>$\kc_{\kinjp} \cdot \textcolor{red}{\kc_{\kinj} \cdot \kc_{\kinj}} \cdot \kwt \cdot \textcolor{red}{\kc_{\kext} \cdot \kc_{\kext} \cdot \kc_{\kext}} \cdot \textcolor{red}{\kc_\kinj \cdot \kc_{\kinj}} \cdot \kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kcl$\\ \>\>$\cdot \kltl_{\kext} \cdot \klm \cdot \kmach_{\kinj} \cdot \kmach_{\kext} \cdot \kma \cdot \kasm_{\kinj} \cdot \kasm_{\kinjp}$\\ \>$(2)$\>$\kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kc_{\kinj} \cdot \textcolor{red}{\kc_{\kinjp}} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kcl$\\ \>\>$\cdot \kltl_{\kext} \cdot \klm \cdot \kmach_{\kinj} \cdot \kmach_{\kext} \cdot \kma \cdot \kasm_{\kinj} \cdot \kasm_{\kinjp}$\\ \>$(3)$\>$\kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kc_{\kinj} \cdot \kc_{\kinjp} \cdot \kc_{\kinjp}\cdot \kc_{\kinj} \cdot \textcolor{red}{\kwt} \cdot \kc_{\kext} \cdot \kcl$\\ \>\>$\cdot \kltl_{\kext} \cdot \klm \cdot \kmach_{\kinj} \cdot \kmach_{\kext} \cdot \kma \cdot \kasm_{\kinj} \cdot \kasm_{\kinjp}$\\ \>$(4)$\>$\kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kwt \cdot \kc_{\kinj} \cdot \kc_{\kinjp} \cdot \textcolor{red}{\kc_{\kinjp}\cdot \kc_{\kinj} \cdot \kc_{\kext}} \cdot \kcl$\\ \>\>$\cdot \kltl_{\kext} \cdot \klm \cdot \kmach_{\kinj} \cdot \kmach_{\kext} \cdot \kma \cdot \kasm_{\kinj} \cdot \kasm_{\kinjp}$\\ \>$(5)$\>$\kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kwt \cdot \kc_{\kinj} \cdot \kc_{\kinjp} \cdot \kcl \cdot \textcolor{red}{\kltl_{\kinjp}\cdot \kltl_{\kinj} \cdot \kltl_{\kext}}$\\ \>\>$\cdot \textcolor{red}{\kltl_{\kext}} \cdot \klm \cdot \textcolor{red}{\kmach_{\kinj} \cdot \kmach_{\kext}} \cdot \kma \cdot \kasm_{\kinj} \cdot 
\kasm_{\kinjp}$\\ \>$(6)$\>$\kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kwt \cdot \kc_{\kinj} \cdot \kc_{\kinjp} \cdot \kcl \cdot \klm \cdot \kma$\\ \>\>$\cdot \textcolor{red}{\kasm_{\kinjp}\cdot \kasm_{\kinj} \cdot \kasm_{\kext} \cdot \kasm_{\kext} \cdot \kasm_{\kinj} \cdot \kasm_{\kext} \cdot \kasm_{\kinj} \cdot \kasm_{\kinjp}}$\\ \>$(7)$\>$\kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kwt \cdot \kc_{\kext} \cdot \kwt \cdot \kc_{\kinj} \cdot \kc_{\kinjp} \cdot \kcl \cdot \klm \cdot \kma \cdot \textcolor{red}{\kasm_{\kinjp} \cdot \kasm_{\kinj} \cdot \kasm_{\kinjp}}$\\ \>$(8)$\>$\kc_{\kinjp} \cdot \kc_{\kinj} \cdot \textcolor{red}{\kwt \cdot \kc_{\kext} \cdot \kwt} \cdot \kc_{\kinj} \cdot \kc_{\kinjp} \cdot \kcl \cdot \klm \cdot \kma \cdot \kasm_{\kinjp}$\\ \>$(9)$\>$\kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kc_{\kext} \cdot \textcolor{red}{\kwt} \cdot \kc_{\kinj} \cdot \kc_{\kinjp} \cdot \kcl \cdot \klm \cdot \kma \cdot \kasm_{\kinjp}$\\ \>$(10)$\>$\kwt \cdot \textcolor{red}{\kc_{\kinjp} \cdot \kc_{\kinj} \cdot \kc_{\kext} \cdot \kc_{\kinj} \cdot \kc_{\kinjp}} \cdot \kcl \cdot \klm \cdot \kma \cdot \kasm_{\kinjp}$\\ \>$(11)$\>$\kwt \cdot \textcolor{red}{\kc_{\kinjp} \cdot \kcl \cdot \klm \cdot \kma} \cdot \kasm_{\kinjp}$\\ \>$(12)$\>$\kwt \cdot \kcainjp \cdot \kasm_{\kinjp}$ \end{tabbing}} In step (1), we begin by merging consecutive conventions by $\kc_{\kext} \equiv \comp{\kc_{\kext}}{\kc_{\kext}}$ and $\refine{\kc_\kinj \cdot \kc_\kinj}{\kc_\kinj}$. In step (2), we split a $\kc_\kinjp$ into two, one will be used to absorb the $\kc_\kinj$ conventions at the $\asmli$-level (step (7)) and the other at the $\cli$-level (step (10)). In step (3), we move \kwt to the appropriate location like above. In step (4-5), we push all simulation conventions parameterized over CKLRs starting with the second split $\kc_\kinjp$ to $\asmli$-level by~\lemref{lem:ca-com}. 
In step (6), we first absorb $\kasm_\kext$ into $\kasm_\kinj$ and then compose the $\kasm_\kinj$ conventions together by properties (2-4) in~\lemref{lem:sim-refine}. In step (7), we absorb $\kasm_\kinj$ into the surrounding $\kasm_\kinjp$ by property (7) in~\lemref{lem:sim-refine}. In steps (8-9), we eliminate a redundant \kwt and move the remaining one to the top level as above. In step (10), we absorb $\kc_\kext$ into $\kc_\kinj$, which is in turn absorbed into the surrounding $\kc_\kinjp$ as at the $\asmli$-level. The last step is the same as above. \subsection{The Unified Simulation Convention} \label{ssec:cainjp} We now present the definition of $\sccompcerto : \sctype{\cli}{\asmli}$, which relates well-behaved $\kwd{Asm}$ modules with some $\cli$-level specification described as an open LTS. In $\sccompcerto = \kwt \cdot \kcainjp \cdot \kasm_{\kinjp}$, \kwt specifies well-typedness of arguments and return values as we have discussed. $\kasm_{\kinjp}$ has a similar definition to $\kc_{\kinjp}$ as described in~\secref{ssec:cklr}. As mentioned before, we do not need to reason about $\kasm_{\kinjp}$ because assembly programs are always available and they are self-simulating over $\kasm_{\kinjp}$. The significant part is \kcainjp, defined as follows. Note that, to simplify the presentation, we have omitted minor constraints such as that function values must not be undefined and that stack pointers must have a pointer type. Interested readers should consult the accompanying Coq artifact for a complete definition.
\begin{definition}\label{def:cainjp} $\kcainjp : \sctype{\cli}{\asmli}= \simconv{W_\kcainjp} {\scname{R}_\kcainjp^q} {\scname{R}_\kcainjp^r}$ where $W_\kcainjp = (W_{\kinjp} \times \ksig \times \kregset)$ and ${\scname{R}_\kcainjp^q:\krtype{W_\kcainjp}{\cli^q}{\asmli^q}}$ and ${\scname{R}_\kcainjp^r:\krtype{W_\kcainjp}{\cli^r}{\asmli^r}}$ are defined as: \begin{itemize} \item $(\cquery{v_f}{\sig}{\vec{v}}{m_1}, \asmquery{\regset}{m_2}) \in \scname{R}_\kcainjp^q((j,m_1,m_2),\sig,\regset)$ if \begin{tabbing} \quad\=\kill \> (1) $\minj{j}{m_1}{m_2},\quad \vinj{j}{v_f}{\regset(\pcreg)} \quad \vinj{j}{\vec{v}}{\kwd{get-args}(\sig, \regset(\spreg), \regset, m_2)}$\\ \> (2) $\kwd{outgoing-arguments}(\sig,\regset(\spreg)) \subseteq \outofreach{j}{m_1}$\\ \> (3) $\kwd{outgoing-arguments}(\sig,\regset(\spreg)) \subseteq \perm{m_2}{\kfreeable}$ \end{tabbing} ${\kwd{get-args}(\sig, \regset(\spreg), \regset, m_2)}$ is a list of values for arguments at the assembly level obtained by inspecting locations for arguments in $\regset$ and $m_2$ corresponding to the signature $\sig$ which are determined by CompCert's calling convention. % $\kwd{outgoing-arguments}(\sig,\regset(\spreg))$ is a set of addresses on the stack frame for outgoing function arguments computed from the given signature $\sig$ and the value of stack pointer. \item $(\creply{r}{m_1'}, \asmreply{\regset'}{m_2'}) \in \scname{R}_\kcainjp^r((j,m_1,m_2),\sig,\regset)$ if there is a $j'$ s.t. 
\begin{tabbing} \quad\=\kill \> (1) $\injpacc{(j,m_1,m_2)}{(j',m_1',m_2')}$\\ \> (2) $\minj{j'}{m_1'}{m_2'},\quad \vinj{j'}{r}{\kwd{get-result}(\sig,\regset')}$\\ \> (3) $\kwd{outgoing-arguments}(\sig,\regset(\spreg)) \subseteq \outofreach{j}{m_1}$\\ \> (4) $\regset'(\spreg) = \regset(\spreg),\quad \regset'(\pcreg) = \regset(\rareg), \quad \forall r \in \kwd{callee-saved-regs}, \regset'(r) = \regset(r)$ \end{tabbing} ${\kwd{get-result}(\sig,\regset')}$ is the return value stored in a register designated by CompCert's calling convention for the given signature $\sig$. \kwd{callee-saved-regs} is the set of callee-saved registers. \end{itemize} \end{definition} By definition, a world $((j, m_1, m_2), \sig, \regset):W_\kcainjp$ of \kcainjp is a tuple containing a world $(j, m_1, m_2)$ of \kinjp for characterizing source and target memory states and memory protection, a signature $\sig$ for identifying the stack memory to protect and for extracting values from stack frames at the target level, and a register state $\regset$ storing the values of registers. Given a $\cli$-query $\cquery{v_f}{\sig}{\vec{v}}{m_1}$ and an $\asmli$-query $\asmquery{rs}{m_2}$, they are related by $\scname{R}_\kcainjp^q$ at the Kripke world $((j,m_1,m_2),\sig,\regset)$ if the corresponding values and memory are related by the injection $j$. In particular, the function $v_f$ must be injected into the program counter (PC) and source arguments must be injected into target arguments stored in registers or memory as determined by CompCert's calling convention and the signature $\sig$. Moreover, the memory regions on stack frames storing outgoing function arguments must be protected (out-of-reach), and they must be freeable at the target level. The relation matching replies is indexed by the world of the corresponding query.
Given a $\cli$-reply $\creply{r}{m_1'}$ and an $\asmli$-reply $\asmreply{\regset'}{m_2'}$, they are related by $\scname{R}_\kcainjp^r$ at the world $((j,m_1,m_2),\sig,\regset)$ (the world at the corresponding query) if the corresponding values and memory are related by the injection $j'$ in the current world $((j',m_1',m_2'),\sig,\regset')$. In particular, $r$ must be related to the return value stored in registers according to CompCert's calling convention. Note that $\injpacc{(j,m_1,m_2)}{(j',m_1',m_2')}$ guarantees appropriate protection of private memory on the stack (in this case, the outgoing arguments). Moreover, the stack pointer must be restored upon returning; the program counter must contain the return address stored in $\rareg$; and all callee-saved registers must be restored. One may notice that in~\defref{def:cainjp} the only memory regions that need protection are those for outgoing arguments. This is because, when using \kcainjp, an $\asmli$-level LTS must be related to some $\cli$-level LTS. Since the only values in the language interface $\cli$ that can be mapped to the target memory are outgoing arguments, only their image in the target memory needs protection. Therefore, when working with assembly programs with a C-level specification, $\sccompcerto$ indeed faithfully captures the underlying calling convention of CompCert and the required memory protection. \section{Application and Evaluation} \label{sec:eval} In this section, we first present a non-trivial example for VCC to illustrate the effectiveness of our unified semantic interface. We then present an evaluation of our entire development. \subsection{Application of the Unified Interface to a Heterogeneous Program} \label{sec:application} The example program is borrowed from the CompCertM paper~\cite{compcertm}. It consists of a \kwd{Clight} module $M_C$ and a hand-written assembly module $M_A$.
Unlike CompCertM, which deals with user-level abstraction of memory states and refinement to abstract specifications with module-local invariants, we are only concerned with the composition of verified compilation. That is, we design a source-level specification $L_A : \cli \arrli \cli $ for $M_A$ and prove $\osim{\sccompcerto}{\sccompcerto}{L_A}{\sem{M_A}}$. By compiling $M_C$ with CompCertO and applying~\thmref{thm:syn-link}, we compose the verification results for the two heterogeneous modules and propagate the simulation to the syntactically linked assembly module ${\kwd{CompCertO}(M_C) + M_A}$: \[ \osim{\scname{C}}{\scname{C}} {\sem{M_C} \semlink L_A} {\sem{\kwd{CompCertO}(M_C) + M_A}} \] \begin{figure} \begin{subfigure}{0.45\textwidth} \begin{lstlisting}[language = C]
/* C implementation of M_C */
static int memoized[1000] = {0};
int f(int i) {
  int sum;
  if (i == 0) return 0;
  sum = memoized[i];
  if (sum == 0) {
    sum = g(i-1) + i;
    memoized[i] = sum;
  }
  return sum;
}
/* C code corresponding to M_A */
static int s[2] = {0,0};
int g(int i){
  int sum;
  if (i == 0) return 0;
  if (i == s[0]) {
    sum = s[1];
  } else {
    sum = f(i-1) + i;
    s[0] = i;
    s[1] = sum;
  }
  return sum;
}
\end{lstlisting} \end{subfigure} \begin{subfigure}{0.45\textwidth} \begin{lstlisting}[language = C]
/* Assembly implementation of M_A */
g:  Pallocframe 24 16 0
    Pmov RBX 8(RSP) // save RBX
    /* begin */
    Pmov RDI RBX
    Ptestl RBX RBX // i==0
    Pjne l0
    Pxorl_r RAX // rv=0
    Pjmp l1
l0: Pmov s[0] RAX
    Pcmpl RAX RBX // i==s[0]
    Pje l2
    Pleal -1(RBX) RDI
    Pcall f // f(i-1)
    Pleal (RAX,RBX) RAX // sum=f(i-1)+i
    Pmov RBX s[0] // s[0] = i
    Pmov RAX s[1] // s[1] = sum
    Pjmp l1
l2: Pmov s[1] RAX // rv=s[1]
    /* return */
l1: Pmov 8(RSP) RBX
    Pfreeframe 24 16 0
    Pret
\end{lstlisting} \end{subfigure} \caption{Heterogeneous Sum with Mutual Recursion} \label{fig:example} \end{figure} \subsubsection{The Example} The code of $M_A$ and $M_C$ is shown in~\figref{fig:example}.
Note that we have also shown a version of $M_A$ at the C level for reference and given its function the name \kwd{g}; this program does not actually exist in our example. We note that \kwd{f} and \kwd{g} collaborate to implement the summation from $0$ to $i$ given an integer $i$. We shall use $\exmsig$ to denote their signature. \kwd{f} performs caching of results for any $i$ in a global array, while \kwd{g} only caches the result for the most recent $i$. When they need to compute a fresh result, they mutually recursively call each other with a smaller argument. The assembly program uses pseudo X86 assembly instructions defined in CompCert, where every instruction begins with the letter \kwd{P}. The only real pseudo instructions are \kwd{Pallocframe} and \kwd{Pfreeframe}. \kwd{Pallocframe 24 16 0} allocates a stack block $b_s$ of 24 bytes (3 integers on 64-bit x86), saves \kwd{RSP} and \kwd{RA} to $(b_s,0)$ and $(b_s, 16)$ and sets \kwd{RSP} to $\vptr{b_s}{0}$. \kwd{Pfreeframe 24 16 0} recovers \kwd{RSP} and \kwd{RA} from $(b_s, 0)$ and $(b_s, 16)$ and frees the stack block $b_s$. By the calling convention and the signature of \kwd{g}, \kwd{RDI} is used to pass the only argument $i$. \kwd{RBX} is a callee-saved register that stores $i$ during internal execution. It is saved to $(b_s, 8)$ at the beginning of \kwd{g} and restored at the end. Therefore, the sole purpose of $b_s$ is to save and restore \kwd{RSP}, \kwd{RA} and \kwd{RBX}.
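For a quick sanity check of this behavior, the two modules can be transcribed into Python (our own test harness, not part of the development); it mirrors the C-level code in~\figref{fig:example}:

```python
# Python transcription of the C-level code (our own test harness):
# f memoizes all results in a global array, g caches only the most
# recent result, and the two call each other mutually recursively.

memoized = [0] * 1000   # global array of M_C
s = [0, 0]              # static cache s[0], s[1] of M_A

def f(i):
    if i == 0:
        return 0
    total = memoized[i]
    if total == 0:
        total = g(i - 1) + i
        memoized[i] = total
    return total

def g(i):
    if i == 0:
        return 0
    if i == s[0]:        # hit in g's one-entry cache
        return s[1]
    total = f(i - 1) + i
    s[0], s[1] = i, total
    return total

print(g(5))   # sum of 0..5  = 15
print(f(10))  # sum of 0..10 = 55
```

A later call with the same argument hits the respective cache and returns without any recursive descent.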
\subsubsection{Definition of $L_A : \cli \arrli \cli$}
We define $L_A = \olts{D_A}{S_A}{I_A}{\to_A}{F_A}{X_A}{Y_A}$ where $D_A$ only accepts queries with a function pointer $\vptr{b_g}{0}$ where $b_g$ is the code block for \kwd{g}, and
{\small
\begin{tabbing}
\quad\=$\to_A$\;\=$:=$ \=\kill
\>$S_A$ \>$:=$ \>$\rawset{\kwd{Callg}\app i\app m} \cup \rawset{\kwd{Callf}\app v_f\app i\app m} \cup \rawset{\kwd{Returnf}\app i\app r\app m} \cup \rawset{\kwd{Returng}\app r\app m}$;\\
\>$I_A$ \>$:=$ \>$\rawset{(\cquery{\vptr{b_g}{0}}{\exmsig}{[\Vint{i}]}{m}),(\kwd{Callg}\app i\app m)}$;\\
\>$\to_A$ \>$:=$ \>$\pset{(\kwd{Callg}\app i\app m, \kwd{Returng}\app 0\app m)}{i = 0} \cup$ \\
\>\>\>$\pset{(\kwd{Callg}\app i\app m,\kwd{Returng}\app r\app m)}{i \neq 0 \land i = s[0] \land r = s[1]} \cup$ \\
\>\>\>$\pset{(\kwd{Callg}\app i\app m, \kwd{Callf}\app v_f\app i\app m)}{i \neq 0 \land i \neq s[0] \land v_f = \kwd{find-func-pointer}(\kwd{f})} \cup$ \\
\>\>\>$\pset{(\kwd{Returnf}\app i\app res\app m, \kwd{Returng}\app (i+res)\app m')}{m' = m[s[0] \leftarrow i, s[1] \leftarrow (i+res)]}$;\\
\>$F_A$ \>$:=$ \>$\rawset{(\kwd{Returng}\app i\app m,(i,m))}$;\\
\>$X_A$ \>$:=$ \>$\rawset{(\kwd{Callf}\app v_f\app i\app m,\cquery{v_f}{\exmsig}{[\Vint{i-1}]}{m})}$;\\
\>$Y_A$ \>$:=$ \>$\rawset{(\kwd{Callf}\app v_f\app i\app m,(r,m'),\kwd{Returnf}\app i\app r\app m')}$.
\end{tabbing}}
By this definition, there are four kinds of internal states: \kwd{Callg} is the state right after the initial call to \kwd{g}; \kwd{Callf} is the state right before the external call to \kwd{f}; \kwd{Returnf} is the state right after returning from \kwd{f}; and \kwd{Returng} is the state right before returning from \kwd{g}. The definitions of the transition relations directly match the C-level version of \kwd{g} in~\figref{fig:example}, albeit in a big-step style.
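To make the big-step flavor of these transitions concrete, here is a small hypothetical C rendering of $L_A$'s state machine. The names (\kwd{tag}, \kwd{step}, \kwd{mem}, etc.) are ours and do not appear in the Coq development, and the \kwd{Callf}-to-\kwd{Returnf} move is deliberately absent: it is performed by the environment via the external call to \kwd{f}.

```c
#include <assert.h>

/* The four kinds of internal states of L_A. */
enum tag { CALLG, CALLF, RETURNF, RETURNG };

struct state {
    enum tag tag;
    int i;  /* argument, meaningful for CALLG/CALLF/RETURNF */
    int r;  /* result, meaningful for RETURNF/RETURNG       */
};

/* The only memory L_A touches: the two-cell cache s[0], s[1]. */
struct mem { int s0, s1; };

/* One internal transition of L_A.  Callable on CALLG and RETURNF
 * states; CALLF -> RETURNF happens externally (the call to f). */
struct state step(struct state st, struct mem *m) {
    if (st.tag == CALLG) {
        if (st.i == 0)
            return (struct state){RETURNG, 0, 0};
        if (st.i == m->s0)
            return (struct state){RETURNG, 0, m->s1};
        return (struct state){CALLF, st.i, 0};  /* will query f(i-1) */
    }
    /* RETURNF i r: cache and return the fresh result i + r. */
    m->s0 = st.i;
    m->s1 = st.i + st.r;
    return (struct state){RETURNG, 0, st.i + st.r};
}
```

For instance, starting from `(CALLG, 5)` with an empty cache, `step` reaches `CALLF` requesting `f(4)`; feeding back the reply `10` yields `RETURNG` with result `15` and cache `{5, 15}`.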
Note that when transitioning internally from $\kwd{Callg}\app i\app m$ to $\kwd{Callf}\app v_f\app i\app m$, $\kwd{find-func-pointer}$ is used to query the global symbol table for the function pointer to \kwd{f}. Also note that in $L_A$ the memory state $m$ is not changed from \kwd{Callg} to \kwd{Callf}, while in the assembly code $M_A$ a new stack frame is allocated by \kwd{Pallocframe}. This indicates that the stack frame is out-of-reach at the external call to \kwd{f} and should be protected during its execution. This point is also manifested in the proof below.
\subsubsection{Proof of $\osim{\sccompcerto}{\sccompcerto}{L_A}{\sem{M_A}}$}
The key is to identify a relation $R \in \krtype{W_\kcainjp}{S_A}{\kregset \times \kmem}$ satisfying all the properties in~\defref{def:open-sim}. Given $w \in W_\kcainjp = ((j,m_1,m_2), \sig, \regset)$, if $\sig \neq \exmsig$ then $R(w) = \emptyset$. Assume $\sig = \exmsig$; then $R(w)$ is defined as follows:
{\small
\begin{tabbing}
\quad\= (a.1)\quad\=\quad\=\kill
\>(a) \>$(\kwd{Callg}\app i\app m_1, (\regset, m_2)) \in R(w) \iff \textcolor{orange}{\kwd{(* initial state *)}}$\\
\>(a.1)\>\>$\regset(\rdireg)=i \land \regset(\pcreg) =\vptr{b_g}{0} \land \minj{j}{m_1}{m_2}$\\
%
\>(b) \>$(\kwd{Callf}\app v_f\app i\app m_1', (\regset', m_2')) \in R(w) \iff {\color{orange}\kwd{(* before external call *)}}$\\
\>(b.1)\>\>$\regset'(\rbxreg)=i \land \regset'({\color{red}\rareg}) =\vptr{b_g}{13} \land \minj{j'}{m_1'}{m_2'} \land \vinj{j'}{v_f}{\regset'(\pcreg)}$\\
\>(b.2)\>\>$\land \forall r, (r \in \kwd{callee-saved-regs} \land r \neq \rbxreg) \to \regset'(r) = \regset(r)$ \\
\>(b.3)\>\>$\land \regset'(\spreg) = \vptr{b_s}{0} \land {\color{blue}\lnot(\exists b\ o, j \ b = \some{b_s,o})}$ \\
\>(b.4)\>\>$\land \injpacc{(j,m_1,m_2)}{(j',m_1',m_2')}$\\
\>(b.5)\>\>$\land m_2'[b_s,0] = \regset(\spreg) \land m_2'[b_s,8] = \regset(\rbxreg) \land m_2'[b_s,16] = \regset(\rareg)$ \\
%
\>(c) \>$(\kwd{Returnf}\app i\app res\app m_1', (\regset', m_2')) \in R(w)
\iff {\color{orange}\kwd{(* after external call *)}}$ \\
\>(c.1)\>\>$\regset'(\rbxreg)=i \land \regset'({\color{red}\pcreg}) =\vptr{b_g}{13} \land \regset'(\raxreg) = res \land \minj{j'}{m_1'}{m_2'}$ \\
\>(c.2)\>\>$\land \forall r, (r \in \kwd{callee-saved-regs} \land r \neq \rbxreg) \to \regset'(r) = \regset(r)$ \\
\>(c.3)\>\>$\land \regset'(\spreg) = \vptr{b_s}{0} \land {\color{blue}\lnot(\exists b\ o, j' \ b = \some{b_s,o})}$ \\
\>(c.4)\>\>$\land \injpacc{(j,m_1,m_2)}{(j',m_1',m_2')}$\\
\>(c.5)\>\>$\land m_2'[b_s,0] = \regset(\spreg) \land m_2'[b_s,8] = \regset(\rbxreg) \land m_2'[b_s,16] = \regset(\rareg)$\\
%
\>(d) \>$(\kwd{Returng}\app res\app m_1', (\regset', m_2')) \in R(w) \iff {\color{orange}\kwd{(* final state *)}}$ \\
\>(d.1)\>\>$\regset'(\raxreg)=res \land \regset'(\spreg) = \regset(\spreg) \land \regset'(\pcreg) = \regset(\rareg) \land \minj{j'}{m_1'}{m_2'}$ \\
\>(d.2)\>\>$\land \forall r, r \in \kwd{callee-saved-regs} \to \regset'(r) = \regset(r)$ \\
\>(d.3)\>\>$\land \injpacc{(j,m_1,m_2)}{(j',m_1',m_2')}$
\end{tabbing}}
By definition, the relation between internal states of $L_A$ and assembly states evolves in four stages:
\begin{enumerate}[(a)]
\item Right after the initial call to \kwd{g}, (a.1) indicates that the argument $i$ is stored in $\rdireg$ and the program counter is at $(b_g, 0)$ (pointing to the first assembly instruction in~\figref{fig:example});
\item Right before the external call to \kwd{f}, (b.1) indicates $i$ is stored in $\rbxreg$, the return address is set to the 13th assembly instruction in~\figref{fig:example} (right after \kwd{Pcall f}) and $v_f$ matches the program counter. (b.2) indicates callee-saved registers---except for $\rbxreg$---are not modified since the initial call. (b.3) indicates the entire stack frame $b_s$ is out-of-reach. (b.4) maintains the properties in \kinjp. (b.5) indicates values on the stack frame are not modified since the initial call.
\item Right after the external call to \kwd{f}, we have almost the same conditions as above, except that the program counter points to the return address set at the call to \kwd{f}.
\item Right before returning from \kwd{g}, (d.1) indicates the return value is in $\raxreg$, the stack pointer is restored and the return address is set. (d.2) indicates all callee-saved registers are restored and (d.3) indicates the guarantee condition \kinjp is met.
\end{enumerate}
To prove that $R$ is indeed an invariant, we first show that condition (a) holds at the initial state. We then show that, by internal execution, condition (b) holds right before the call to \kwd{f}. Now, the source and target executions proceed by calling and returning from \kwd{f}, after which we need to show that (c) holds. This is the most interesting part: it is achieved by combining properties (b.1--5) with the rely-condition provided by \kcainjp for calling \kwd{f}. For example, because we know the stack frame $b_s$ is out-of-reach at the call to \kwd{f}, by the accessibility enforced by \kinjp in $\scname{R}_\kcainjp^r$ in~\defref{def:cainjp}, all values in $m_2'[b_s, o]$ are unchanged. Therefore, if $m_2'[b_s,0] = \regset(\spreg) \land m_2'[b_s,8] = \regset(\rbxreg) \land m_2'[b_s,16] = \regset(\rareg)$ (condition (b.5)) holds before calling \kwd{f}, it also holds after (hence condition (c.5) holds). Similarly, we can derive (c.2) from (b.2) by the protection over callee-saved registers enforced in $\scname{R}_\kcainjp^r$. Moreover, $\regset'({\color{red}\pcreg}) =\vptr{b_g}{13}$ in (c.1) is derived from $\regset'({\color{red}\rareg}) =\vptr{b_g}{13}$ in (b.1) via the relation between $\pcreg$ and $\rareg$ stated in $\scname{R}_\kcainjp^r$. After the external call, we show that condition (d) can be derived from (c) by the subsequent internal execution. We note that condition (d) provides exactly the guarantee-condition needed by $\scname{R}_\kcainjp^r$ for the incoming call to \kwd{g}.
Therefore, we successfully show that $\osim{\sccompcerto}{\sccompcerto}{L_A}{\sem{M_A}}$ indeed holds. With this, we can finally compose with CompCertO's correctness theorem to get $\osim{\scname{C}}{\scname{C}} {\sem{M_C} \semlink L_A} {\sem{\kwd{CompCertO}(M_C) + M_A}}$.
\subsection{Evaluation}
\label{sec:subeval}
\begin{table}
\caption{Statistics of Our Development in Coq}
\small
\begin{tabular}{|c|c|c|c|}
\hline
Files & CompCertO & This work & Changes (*/+) \\
\hline
Maps.v & 1647 & 1850 & 203 \\
\hline
Memory.v & 4382 & 5464 & 1146 \\
\hline
InjectFootprint.v & 325 & 2421 & 2096 \\
\hline
\hline
SimplLocalsproof.v & 2006 & 2517 & 574 \\
\hline
\hline
CA.v & (New) & 497 & 497 \\
\hline
Callconv.v & 1057 & 1191 & 134 \\
\hline
Compiler.v & 639 & 639 & 224 \\
\hline
\hline
Demo.v & (New) & 117 & 117 \\
\hline
Demospec.v & (New) & 87 & 87 \\
\hline
Demoproof.v & (New) & 1108 & 1108 \\
\hline
\end{tabular}
\label{tab:proof}
\end{table}
We discuss our proof effort in this section. It took us one person-month to prove the transitivity of \kinjp. The current formulation of the unified semantics interface with \kcainjp was discovered during the development of our example on heterogeneous summation. It took us another person-month to define $\kcainjp$, prove $\kcainjp \equiv \kc_{\kinjp} \cdot \kcl \cdot \klm \cdot \kma$, and prove $\osim{\sccompcerto}{\sccompcerto}{L_A}{\sem{M_A}}$. The unification of semantics interfaces described in~\secref{sec:unification} took us one extra person-week. Our framework is built on top of the most recent version of CompCertO, which is in turn based on CompCert 3.10. The statistics for our development relative to CompCertO are shown in Table~\ref{tab:proof}, in which we only list files that were modified, extended or newly introduced. Columns 2 and 3 show the lines of code (LOC) for every file (counted using $\kwd{coqwc}$) in the original CompCertO and in our updated version, respectively.
Column 4 shows the LOC that were modified or added in the updated version. Note that most files were extended without much modification to the existing code, except for $\kwd{Memory.v}$, $\kwd{Compiler.v}$ and $\kwd{SimplLocalsproof.v}$. The most significant proof effort is for proving the transitivity of \kinjp discussed in~\secref{sec:injp-trans}, which is summarized in rows 2--4. A technical detail is that the memory permission in CompCert was originally represented as a function from blocks and offsets to permissions. However, the domain of this function (the footprint of permissions in memory) is not known, which is critical for the construction of interpolating states. Therefore, we have changed the functional representation of permissions into a tree-like map from which we can extract the footprint of permissions. These changes are made in $\kwd{Maps.v}$ and $\kwd{Memory.v}$. The main proof of transitivity is in \kwd{InjectFootprint.v} with some relevant low-level memory properties implemented in $\kwd{Memory.v}$. The total LOC for this part is about $2.5k$. To test if \kinjp can indeed serve as a guarantee-condition for compiler passes as discussed in~\secref{sec:injp-guarantee}, we have experimented with \kwd{SimplLocals} by modifying its simulation to be $\osim{\kc_\kinjp}{\kc_\kinjp}{\sem{p}}{\sem{p'}}$ where $p$ and $p'$ are the source and target programs. We have proved this new simulation with a moderate amount of work as shown in row 5, which reaffirms the uniformality of \kinjp. The effort for unifying interfaces as described in~\secref{sec:compcerto} is shown in rows 6--8. We have added about 500 LOC for defining $\kcainjp$ and proving $\kcainjp \equiv \kc_{\kinjp} \cdot \kcl \cdot \klm \cdot \kma$ in $\kwd{CA.v}$. We have also added 358 LOC in $\kwd{Callconv.v}$ and $\kwd{Compiler.v}$ to unify the simulation conventions of CompCertO. The effort for developing our example is shown in rows 9--11.
The newly added files are \kwd{Demo.v} containing the definitions of $M_A$ and $M_C$, \kwd{Demospec.v} containing the definition of $L_A$, and \kwd{Demoproof.v} containing the proof of $\osim{\scname{C}}{\scname{C}}{L_A}{\sem{M_A}}$ and the simulation for the finally composed program. From the above discussion, we can see that our approach piggybacks on the existing proofs of CompCertO and is therefore quite lightweight. The main difficulty and technical contributions of this work lie not so much in the proofs, but in the discovery of \kinjp as a uniform and composable interface, figuring out the right proof structure for the transitivity of \kinjp, the formulation of the unified semantic interface $\sccompcerto$, and an approach to refining the transitive composition of individual simulation conventions into a unified convention.
\section{A Uniform and Composable CompCert Kripke Logical Relation}
\label{sec:injp}
An important discovery we make is that \kinjp can serve as a uniform and composable CKLR. First, \kinjp can directly capture the rely-guarantee conditions for memory protection for compiler passes in CompCertO. Second, it is actually transitive, i.e., $\kinjp \equiv \comp{\kinjp}{\kinjp}$. The proof is based on the discovery that constructing interpolating states for \kinjp is possible because it encodes a \emph{partial functional transformation} on memory states. Although the proof is quite involved, the result can be reused for all compiler passes thanks to \kinjp's uniformality. In this section, we first present a notion of public and private memory derived from injections, which provides an inherent notion of protection that is in turn precisely captured by \kinjp. We then elaborate on the above points.
\subsection{Public and Private Memory with Respect to Memory Injections}
\begin{definition}
Given $\minj{j}{m_1}{m_2}$, the public memory regions in $m_1$ and $m_2$ are defined as follows:
\begin{tabbing}
\quad\=$\pubtgtmem{j}{m_1}$\;\=\kill
\>$\pubsrcmem{j}$\>$ = \pset{(b,o)}{j(b) \neq \none};$\\
\>$\pubtgtmem{j}{m_1}$\>$ = \pset{(b,o)}{\exists b'\app o', j(b') = \some{(b, o')} \land (b',o-o') \in \perm{m_1}{\knonempty}}.$
\end{tabbing}
\end{definition}
\noindent By definition, a cell $(b, o)$ is public in the source memory if it is in the domain of the injection, and $(b,o)$ is public in the target memory if it is mapped from some public source cell with nonempty permission. With this definition and the preservation of pointer values enforced by memory injections, we can easily prove the following property:
\begin{lemma}\label{lem:pub-closure}
Given $\minj{j}{m_1}{m_2}$,
\begin{tabbing}
\quad\=$\forall\app b_1\app o_1,\app$\=\kill
\>$\forall\app b_1\app o_1,\app (b_1, o_1) \in \pubsrcmem{j} \imply (b_1, o_1) \in \perm{m_1}{\kreadable} \imply$\\
\>\>$\mcontents{m_1}{b_1}{o_1} = \vptr{b_1'}{o_1'} \imply (b_1', o_1') \in \pubsrcmem{j}.$
\end{tabbing}
\end{lemma}
That is, reading a pointer value from a readable public source location yields another public location. This property implies that readable public memory regions form a ``closure'' such that sequences of reads are bounded inside these regions, as shown in~\figref{fig:read-inj}.
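As a quick illustration, the two definitions can be approximated at block granularity. The real definitions work per cell $(b,o)$ and consult permissions; this simplified model and all names in it are ours:

```c
#include <assert.h>
#include <stdbool.h>

#define NBLOCKS 8

/* A block-granular toy model of a partial injection j. */
struct inj {
    bool mapped[NBLOCKS];  /* is j(b) defined?          */
    int  tgt[NBLOCKS];     /* target block of j(b)      */
    int  delta[NBLOCKS];   /* offset shift of j(b)      */
};

/* (b, _) is public in the source iff b is in the domain of j. */
bool pub_src(const struct inj *j, int b) {
    return j->mapped[b];
}

/* (b2, _) is public in the target iff some source block injects
 * into it (the per-cell permission check is elided here). */
bool pub_tgt(const struct inj *j, int b2) {
    for (int b = 0; b < NBLOCKS; b++)
        if (j->mapped[b] && j->tgt[b] == b2) return true;
    return false;
}
```

In this model, blocks outside the domain of `j` are exactly the unmapped (private) source blocks, and target blocks hit by no mapping are the out-of-reach (private) target blocks.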
\begin{figure}
\begin{center}
\begin{tikzpicture}
\path node (n1) {\small $(b_1, o_1)$} --++(3,0) node (n2) {\small $(b_2, o_2)$} --++(3,0) node (n3) {\small $(b_3, o_3)$} --++(3,0) node (n4) {$\ldots$};
\draw[-stealth] (n1) -- node[sloped, above] {\small \kwd{read}} (n2);
\draw[-stealth] (n2) -- node[sloped, above] {\small \kwd{read}} (n3);
\draw[-stealth] (n3) -- node[sloped, above] {\small \kwd{read}} (n4);
\node[below = 0.6cm of n1] (m1) {\small $(b_1', o_1')$};
\path let \p1 = (n2) in let \p2 = (m1) in node (m2) at (\x1,\y2) {\small $(b_2', o_2')$};
\path let \p1 = (n3) in let \p2 = (m1) in node (m3) at (\x1,\y2) {\small $(b_3', o_3')$};
\path let \p1 = (n4) in let \p2 = (m1) in node (m4) at (\x1,\y2) {$\ldots$};
\draw[-stealth] (m1) -- node[sloped, above] {\small \kwd{read}} (m2);
\draw[-stealth] (m2) -- node[sloped, above] {\small \kwd{read}} (m3);
\draw[-stealth] (m3) -- node[sloped, above] {\small \kwd{read}} (m4);
\draw[-stealth] (n1) -- node[left] {$j$} (m1);
\draw[-stealth] (n2) -- node[left] {$j$} (m2);
\draw[-stealth] (n3) -- node[left] {$j$} (m3);
\end{tikzpicture}
\end{center}
\caption{A Sequence of Reads from the Closed Public Memory}
\label{fig:read-inj}
\end{figure}
Here, the horizontal arrows indicate that a pointer $(b_{i+1}, o_{i+1})$ is obtained from reading the in-memory value at $(b_i, o_i)$ with possible adjustment using pointer arithmetic. Note that all memory cells $(b_i, o_i)$ and $(b_i', o_i')$ have at least \kreadable\ permission. By~\lemref{lem:pub-closure}, the $(b_i, o_i)$ are all in public regions. The mirroring reads at $(b_i', o_i')$ on the target level are also in public regions by definition.
\subsection{Uniformality of \kinjp}
We need to show that \kinjp is both a reasonable guarantee condition and a reasonable rely condition for the compiler passes in CompCert.
\subsubsection{\kinjp as a Guarantee Condition}
\label{sec:injp-guarantee}
Given any source and target programs of a compiler pass in CompCert, we show that their execution between an initial query and the final reply respects \kinjp. The critical point is that the initially unmapped and out-of-reach regions are not modified by internal execution. We argue by using an open simulation $\osim{\kc_\kinjp}{\kc_\kinjp}{\sem{M_1}}{\sem{M_2}}$ between C modules $M_1$ and $M_2$; note that the same argument applies to other stages of compilation. During the forward simulation, assume $M_1$ and $M_2$ take incoming queries $\cquery{v_{f_1}}{sg}{\vec{v_1}}{m_1}$ and $\cquery{v_{f_2}}{sg}{\vec{v_2}}{m_2}$, respectively. By definition, all incoming values and memories are related by some initial injection $j$ (i.e., $\vinj{j}{v_{f_1}}{v_{f_2}}$, $\vinj{j}{\vec{v_1}}{\vec{v_2}}$ and $\minj{j}{m_1}{m_2}$). In particular, the pointers in them are related by $j$ (e.g., $v_{f_1} = \vptr{b_1}{o_1}$, $j(b_1) = \some{(b_2,o_1')}$ and $v_{f_2} = \vptr{b_2} {o_1+o_1'}$). Therefore, any sequence of reads starting from pointers stored in the arguments of queries only inspects public memory in the source and target, as already shown in~\figref{fig:read-inj}. Note that the internal execution of $M_1$ and $M_2$ may write to the public regions of the initial memories $m_1$ and $m_2$ during these reads. However, those changes can only result in new pointer values related by some bigger injection $j' \supseteq j$ between expanded memory states $m_1'$ and $m_2'$ such that $j'(b) = j(b)$ for any valid block $b$ in $m_1$. Therefore, as long as these pointer values refer to locations in $m_1$ or $m_2$, they must still be related by $j$ and must point to the public regions of $m_1$ or $m_2$.
We further notice that by definition:
\begin{tabbing}
\quad\=$(b, o) \in \pubtgtmem{j}{m}$\;\=\kill
\>$(b, o) \in \pubsrcmem{j}$\>$\iff (b, o) \not\in \unmapped{j}$\\
\>$(b, o) \in \pubtgtmem{j}{m}$\>$\iff (b, o) \not\in \outofreach{j}{m}$
\end{tabbing}
That is, if only public memories are inspected or modified, then any value in the unmapped or out-of-reach regions of the initial memory is \emph{not modified}. Finally, $M_1$ and $M_2$ themselves may perform external calls that may modify memory. However, because the outgoing calls have \kinjp as a rely-condition and injections only grow but never change existing mappings, the initially unmapped and out-of-reach regions will be protected during external calls. Therefore, we conclude that $\kinjp$ is a reasonable guarantee condition for internal execution.
\subsubsection{\kinjp as a Rely Condition}
\begin{figure}
\begin{subfigure}{0.2\textwidth}
\center
\begin{verbatim}
void f() {
  int x, y;
  g(&y);
}
\end{verbatim}
\caption{Example}
\label{fig:injp-rely-code}
\end{subfigure}
%
%
\begin{subfigure}{0.31\textwidth}
\begin{tikzpicture}
\snode{0.4cm}{0.3cm}{0}{draw, pattern=north east lines} (x) {};
\node[left = 0 of x] {\small $b_x$};
\snode{0.4cm}{0.3cm}{0}{draw, right = 0.7cm of x} (y) {};
\node[left = 0 of y] {\small $b_y$};
\snode{0.4cm}{0.3cm}{0}{draw, below left = 0.6cm and 0.0cm of y} (yp) {};
\node[left = 0 of yp] {\small $b_y$};
\draw[-stealth] (y.south west) -- (yp.north west);
\draw[dotted] (y.south east) -- (yp.north east);
\draw[dashed] ($(y.north east) + (0.2cm, 0.1cm)$) --++(0,-1.5cm);
\snode{0.4cm}{0.3cm}{0}{right = 0.8cm of y} (b1) {$\ldots$};
\path let \p1 = (b1) in let \p2 = (yp) in node (b2) at (\x1, \y2) {$\ldots$};
\path let \p1 = (b1) in let \p2 = (b2) in node at (\x1, {(\y1+\y2)*0.5}) {\small \kwd{call to g}};
\end{tikzpicture}
\caption{\kwd{SimplLocals}}
\label{fig:injp-rely-simpl}
\end{subfigure}
%
%
\begin{subfigure}{0.31\textwidth}
\begin{tikzpicture}
\snode{0.4cm}{0.3cm}{0}{draw} (y) {};
\node[left = 0 of y] {\small $b_y$}; \snode{1cm}{0.3cm}{0}{draw, below left = 0.6cm and -0.5cm of y} (sf) {}; \node[left = 0 of sf] {\small $b_s$}; \draw[-stealth] (y.south west) -- ($(sf.north west)+(0.3cm, 0)$); \draw[dotted] (y.south east) -- ($(sf.north west)+(0.7cm,0)$); \fill[pattern=north east lines] (sf.north west) rectangle ($(sf.south west)+(0.3cm,0)$); \fill[pattern=north east lines] ($(sf.north west)+(0.7cm,0)$) rectangle (sf.south east); \draw ($(sf.north west)+(0.3cm,0)$) -- ($(sf.south west)+(0.3cm,0)$); \draw ($(sf.north west)+(0.7cm,0)$) -- ($(sf.south west)+(0.7cm,0)$); \draw[dashed] ($(y.north east) + (0.2cm, 0.1cm)$) --++(0,-1.5cm); \snode{0.4cm}{0.3cm}{0}{right = 0.8cm of y} (b1) {$\ldots$}; \path let \p1 = (b1) in let \p2 = (yp) in node (b2) at (\x1, \y2) {$\ldots$}; \path let \p1 = (b1) in let \p2 = (b2) in node at (\x1, {(\y1+\y2)*0.5}) {\small \kwd{call to g}}; \end{tikzpicture} \caption{\kwd{Stacking}} \label{fig:injp-rely-stacking} \end{subfigure} \caption{Protection of Private Memory by \kinjp} \label{fig:injp-rely} \end{figure} In fact, the conditions in \kinjp are exactly part of CompCert's assumptions on external calls~\cite{compcert}. We show that \kinjp provides adequate memory protection for preventing external calls from interfering with internal execution by inspecting two compiler passes as representative examples. Protection for the remaining compiler passes follows a similar pattern. We shall use the code in~\figref{fig:injp-rely-code} as an example where \kwd{g} is an external function. The first pass is called \kwd{SimplLocals} which turns local variables whose memory addresses are not taken from in-memory variables to temporary ones. As shown in~\figref{fig:injp-rely-simpl}, before \kwd{SimplLocals}, both $x$ and $y$ are allocated in memory. After \kwd{SimplLocals}, $x$ is turned into a temporary variable. The injection $j$ between source and target then projects $x$ out (i.e., $j(b_x) = \none$). 
It is critical to prevent \kwd{g} from modifying the value of $x$ at the source level, which may break the simulation as $x$ is not visible at the target level. We note that $b_x$ is unmapped by $j$ and thereby protected by \kinjp by default. The second pass is called \kwd{Stacking}, which lays out the structure of concrete stack frames. Before \kwd{Stacking}, a stack frame only contains stack-allocated data. After it, the frame is expanded with return addresses, spilled registers, arguments, etc. Those new locations are private to the function and should not be modified by external calls. Continuing with our example as shown in~\figref{fig:injp-rely-stacking}, the only stack-allocated data is $y$. Therefore, the injection $j$ for \kwd{Stacking} is defined as $j({b_y}) = \some{(b_s, o)}$ where $b_s$ is the concrete stack frame and $o$ is the offset at which $y$ resides in $b_s$. Since the image of stack-allocated data is disjoint from private data, all private data is out-of-reach in the target memory and thereby protected by \kinjp (it cannot be modified by calls to \kwd{g}).
\subsection{Transitivity of \kinjp}
\label{sec:injp-trans}
The goal is to show that the two refinements in~\figref{fig:trans-comp} hold when $\kwd{K} = \kinjp$. As discussed in~\secref{sec:unify-cklr}, the critical step is to construct interpolating memory states and Kripke worlds that transitively relate the interpolating states to source and target states. Conceptually, this is possible because given an injection $j$ and the source memory $m_1$, to build an interpolating memory $m_2$, we can project the values in $m_1$ through $j$ to $m_2$. Furthermore, any content of $m_2$ not in the image of $j$ is determined by the memory protection provided by $\kinjp$. Below we elaborate on these key ideas and steps. A complete proof of transitivity of \kinjp is given in~\apdxref{sec:injp-trans-proof}.
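The choice $j_{13} = \comp{j_{23}}{j_{12}}$ made below can be pictured as ordinary composition of partial maps with accumulating offset shifts. A small C sketch, with made-up sample injections $j_{12}$ and $j_{23}$ purely for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* A partial injection as a C function: returns true and fills
 * (*b2, *delta) when block b is mapped. */
typedef bool (*injection)(int b, int *b2, int *delta);

/* Sample injections (not taken from the paper): j12 maps block 0
 * to block 5 at offset 8; j23 maps block 5 to block 9 at offset 4. */
bool j12(int b, int *b2, int *d) {
    if (b == 0) { *b2 = 5; *d = 8; return true; }
    return false;
}

bool j23(int b, int *b2, int *d) {
    if (b == 5) { *b2 = 9; *d = 4; return true; }
    return false;
}

/* j13 = j23 . j12: defined only where both steps are defined,
 * with the offset shifts adding up along the composition. */
bool j13(int b, int *b3, int *d) {
    int b2, d12, d23;
    if (!j12(b, &b2, &d12)) return false;
    if (!j23(b2, b3, &d23)) return false;
    *d = d12 + d23;
    return true;
}
```

The interesting direction of the proof is the converse: given $m_1'$, $m_3'$ and $j_{13}'$, the interpolating $m_2'$, $j_{12}'$ and $j_{23}'$ must be reconstructed, which is where the footprint of permissions becomes essential.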
\begin{figure} \center \begin{tikzpicture} \node (m1) {$m_1$}; \node[below = 0.6cm of m1] (m2) {$m_2$}; \node[below = 0.6cm of m2] (m3) {$m_3$}; \draw[-stealth] (m1) -- node[left] {\small $j_{12}$} (m2); \draw[-stealth] (m2) -- node[left] {\small $j_{23}$} (m3); \node[right = 2cm of m1] (m1p) {$m_1'$}; \node[right = 2cm of m3] (m3p) {$m_3'$}; \path let \p1 = (m2) in let \p2 = (m1p) in node at ({(\x1+\x2)*0.5}, \y1) {\bf $\accsymb_{\kinjp}$}; \draw[-stealth] (m1) -- (m1p); \draw[-stealth] (m3) -- (m3p); \draw[-stealth] (m1p) -- node[left] {\small $j_{13}'$} (m3p); \node[right = 3cm of m2] {$\Longrightarrow$}; \node[right = 2cm of m1p] (m11) {$m_1$}; \path let \p1 = (m11) in let \p2 = (m2) in node (m22) at (\x1, \y2) {$m_2$}; \path let \p1 = (m11) in let \p2 = (m3) in node (m33) at (\x1, \y2) {$m_3$}; \draw[-stealth] (m11) -- node[left] {\small $j_{12}$} (m22); \draw[-stealth] (m22) -- node[left] {\small $j_{23}$} (m33); \node[right = 2cm of m11] (m11p) {$m_1'$}; \node[right = 2cm of m33] (m33p) {$m_3'$}; \draw[-stealth] (m11) -- (m11p); \draw[-stealth] (m33) -- (m33p); \draw[-stealth] (m11p) .. controls ($(m11p)+(1.5cm,0)$) and ($(m33p)+(1.5cm,0cm)$) .. node[right] {\small $j_{13}'$} (m33p); \node[right = 2cm of m22,red] (m22p) {$m_2'$}; \draw[-stealth,red] (m11p) -- node[left] {\small $j_{12}'$} (m22p); \draw[-stealth,red] (m22p) -- node[left] {\small $j_{23}'$} (m33p); \draw[-stealth,red] (m22) -- (m22p); \path let \p1 = (m11) in let \p2 = (m22p) in node[red] at ({(\x1+\x2)*0.5},{(\y1+\y2)*0.5}) {\bf $\accsymb_{\kinjp}$}; \path let \p1 = (m22) in let \p2 = (m33p) in node[red] at ({(\x1+\x2)*0.5},{(\y1+\y2)*0.5}) {\bf $\accsymb_{\kinjp}$}; \end{tikzpicture} \caption{Construction of Interpolating States} \label{fig:injp-int-st} \end{figure} \subsubsection{$\refine{\kinjp}{\comp{\kinjp}{\kinjp}}$} By definition, we need to prove the following lemma: \begin{lemma}\label{lem:injp-refine-injp-comp} $\refine{\kinjp}{\comp{\kinjp}{\kinjp}}$ holds. 
That is, \begin{tabbing} \quad\=\quad\=\quad\=$\exists m_2'\app j_{12}'\app j_{23}',$\=\kill \>$\forall j_{12}\app j_{23}\app m_1\app m_2\app m_3,\app \minj{j_{12}}{m_1}{m_2} \imply \minj{j_{23}}{m_2}{m_3} \imply \exists j_{13},\app \minj{j_{13}}{m_1}{m_3} \land$\\ \>\>$\forall m_1'\app m_3'\app j_{13}',\app \injpacc{(j_{13}, m_1, m_3)}{(j_{13}', m_1', m_3')} \imply \minj{j_{13}'}{m_1'}{m_3'} \imply$\\ \>\>\>$\exists m_2'\app j_{12}'\app j_{23}', \injpacc{(j_{12},m_1,m_2)}{(j_{12}',m_1',m_2')} \land \minj{j_{12}'}{m_1'}{m_2'}$\\ \>\>\>\>$\land \injpacc{(j_{23},m_2,m_3)}{(j_{23}',m_2',m_3')} \land \minj{j_{23}'}{m_2'}{m_3'}.$ \end{tabbing} \end{lemma} This lemma conforms to the graphic representation in~\figref{fig:trans-comp1}. To prove it, an obvious choice is to pick $j_{13} = \comp{j_{23}}{j_{12}}$. Then we are left to prove the existence of interpolating state $m_2'$ and the memory and accessibility relations as shown in~\figref{fig:injp-int-st}. \begin{figure}[ht] \def0{0} \def0{0} \def0.5cm{0.5cm} \def0.6cm{0.6cm} \def0.4cm{0.4cm} \def0.8cm{0.8cm} \def0.4cm{0.4cm} \def0.02cm{0.02cm} \begin{subfigure}[b]{0.4\textwidth} \begin{tikzpicture} \snode{0.8cm}{0.4cm}{0.1cm}{draw} (m1b1) {}; \fill[green] ($(m1b1.north west)+(0.2cm,-0.02cm)$) rectangle ($(m1b1.south west)+(0.5cm,0.02cm)$); \node[left = 0 of m1b1] {\small $b_1^1$}; \snode{1cm}{0.4cm}{0.1cm}{draw, right = 0.8cm of m1b1} (m1b2) {}; \fill[green] ($(m1b2.north west)+(0.2cm,-0.02cm)$) rectangle ($(m1b2.south west)+(0.4cm,0.02cm)$); \fill[green] ($(m1b2.north west)+(0.6cm,-0.02cm)$) rectangle ($(m1b2.south west)+(0.8cm,0.02cm)$); \node[left = 0 of m1b2] {\small $b_1^2$}; \snode{0.4cm}{0.4cm}{0.1cm}{draw, right = 0.6cm of m1b2} (m1b3) {}; \node[left = 0 of m1b3] {\small $b_1^3$}; \snode{0.4cm}{0.4cm}{0.1cm}{draw, below left = 0.8cm and 0.2cm of m1b1} (m2b1) {}; \node[left = 0 of m2b1] {\small $b_2^1$}; \snode{1.2cm}{0.4cm}{0.1cm}{draw, right = 0.6cm of m2b1} (m2b2) {}; \fill[green] ($(m2b2.north 
west)+(0.5cm,-0.02cm)$) rectangle ($(m2b2.south west)+(0.8cm,0.02cm)$); \draw[line width=0.02cm, -stealth,red] (m1b1.south west) -- ($(m2b2.north west)+(0.3cm,0)$); \draw[line width=0.02cm,red] ($(m2b2.north west)+(0.3cm,0)$) -- ($(m2b2.south west)+(0.3cm,0)$); \draw[line width=0.02cm,dotted,red] (m1b1.south east) -- ($(m2b2.north west)+(1.1cm,0)$); \draw[line width=0.02cm,dotted,red] ($(m2b2.north west)+(1.1cm,0)$) -- ($(m2b2.south west)+(1.1cm,0)$); \node[left = 0 of m2b2] {\small $b_2^2$}; \snode{1.4cm}{0.4cm}{0.1cm}{draw, right = 0.6cm of m2b2} (m2b3) {}; \fill[green] ($(m2b3.north west)+(0.4cm,-0.02cm)$) rectangle ($(m2b3.south west)+(1cm,0.02cm)$); \draw[line width=0.02cm, -stealth,red] (m1b2.south west) -- ($(m2b3.north west)+(0.2cm,0)$); \draw[line width=0.02cm,red] ($(m2b3.north west)+(0.2cm,0)$) -- ($(m2b3.south west)+(0.2cm,0)$); \draw[line width=0.02cm,dotted,red] (m1b2.south east) -- ($(m2b3.north west)+(1.2cm,0)$); \draw[line width=0.02cm,dotted,red] ($(m2b3.north west)+(1.2cm,0)$) -- ($(m2b3.south west)+(1.2cm,0)$); \node[left = 0 of m2b3] {\small $b_2^3$}; \snode{0.6cm}{0.4cm}{0.1cm}{draw, below left = 0.8cm and -0.8cm of m2b1} (m3b1) {}; \draw[line width=0.02cm, -stealth,blue] (m2b1.south west) -- ($(m3b1.north west)+(0.2cm,0)$); \draw[line width=0.02cm,blue] ($(m3b1.north west)+(0.2cm,0)$) -- ($(m3b1.south west)+(0.2cm,0)$); \draw[line width=0.02cm,dotted,blue] (m2b1.south east) -- (m3b1.north east); \node[left = 0 of m3b1] {\small $b_3^1$}; \snode{2.2cm}{0.4cm}{0.1cm}{draw, right = 1cm of m3b1} (m3b2) {}; \fill[green] ($(m3b2.north west)+(0.8cm,-0.02cm)$) rectangle ($(m3b2.south west)+(1.4cm,0.02cm)$); \draw[line width=0.02cm, -stealth,blue] (m2b3.south west) -- ($(m3b2.north west)+(0.4cm,0)$); \draw[line width=0.02cm,blue] ($(m3b2.north west)+(0.4cm,0)$) -- ($(m3b2.south west)+(0.4cm,0)$); \draw[line width=0.02cm,dotted,blue] (m2b3.south east) -- ($(m3b2.north west)+(1.8cm,0)$); \draw[line width=0.02cm,dotted,blue] ($(m3b2.north 
west)+(1.8cm,0)$) -- ($(m3b2.south west)+(1.8cm,0)$); \draw[line width=0.02cm, -stealth,red] ($(m2b3.south west)+(0.2cm,0)$) -- ($(m3b2.north west)+(0.6cm,0)$); \draw[line width=0.02cm,red] ($(m3b2.north west)+(0.6cm,0)$) -- ($(m3b2.south west)+(0.6cm,0)$); \draw[line width=0.02cm, dotted,red] ($(m2b3.south west)+(1.2cm,0)$) -- ($(m3b2.north west)+(1.6cm,0)$); \draw[line width=0.02cm,dotted,red] ($(m3b2.north west)+(1.6cm,0)$) -- ($(m3b2.south west)+(1.6cm,0)$); \node[left = 0 of m3b2] {\small $b_3^2$}; \node[left = 1cm of m1b1] (txtm1) {\small\color{purple} $m_1$:}; \path let \p1 = (txtm1) in let \p2 = (m2b1) in node (txtm2) at (\x1, \y2) {\small\color{purple} $m_2$:}; \path let \p1 = (txtm1) in let \p2 = (m3b1) in node (txtm3) at (\x1, \y2) {\small\color{purple} $m_3$:}; \path let \p1 = (txtm1) in let \p2 = (txtm2) in node (txtj12) at (\x1, {(\y1+\y2)*0.5}) {\small\color{purple} $j_{12}$}; \path let \p1 = (txtm2) in let \p2 = (txtm3) in node (txtj23) at (\x1, {(\y1+\y2)*0.5}) {\small\color{purple} $j_{23}$}; \end{tikzpicture} \caption{Before} \label{fig:inj-before} \end{subfigure} \begin{subfigure}[b]{0.55\textwidth} \begin{tikzpicture} \snode{0.8cm}{0.4cm}{0.1cm}{draw} (m1b1) {}; \fill[green] ($(m1b1.north west)+(0.2cm,-0.02cm)$) rectangle ($(m1b1.south west)+(0.5cm,0.02cm)$); \node[left = 0 of m1b1] {\small $b_1^1$}; \snode{1cm}{0.4cm}{0.1cm}{draw, right = 0.8cm of m1b1} (m1b2) {}; \fill[green] ($(m1b2.north west)+(0.2cm,-0.02cm)$) rectangle ($(m1b2.south west)+(0.4cm,0.02cm)$); \fill[green] ($(m1b2.north west)+(0.6cm,-0.02cm)$) rectangle ($(m1b2.south west)+(0.8cm,0.02cm)$); \node[left = 0 of m1b2] {\small $b_1^2$}; \snode{0.4cm}{0.4cm}{0.1cm}{draw, right = 0.6cm of m1b2} (m1b3) {}; \node[left = 0 of m1b3] {\small $b_1^3$}; \snode{0.4cm}{0.4cm}{0.1cm}{draw, below left = 0.8cm and 0.2cm of m1b1} (m2b1) {}; \node[left = 0 of m2b1] {\small $b_2^1$}; \snode{1.2cm}{0.4cm}{0.1cm}{draw, right = 0.6cm of m2b1} (m2b2) {}; \fill[green] ($(m2b2.north 
west)+(0.5cm,-0.02cm)$) rectangle ($(m2b2.south west)+(0.8cm,0.02cm)$); \draw[line width=0.02cm, -stealth,red] (m1b1.south west) -- ($(m2b2.north west)+(0.3cm,0)$); \draw[line width=0.02cm,red] ($(m2b2.north west)+(0.3cm,0)$) -- ($(m2b2.south west)+(0.3cm,0)$); \draw[line width=0.02cm,dotted,red] (m1b1.south east) -- ($(m2b2.north west)+(1.1cm,0)$); \draw[line width=0.02cm,dotted,red] ($(m2b2.north west)+(1.1cm,0)$) -- ($(m2b2.south west)+(1.1cm,0)$); \node[left = 0 of m2b2] {\small $b_2^2$}; \snode{1.4cm}{0.4cm}{0.1cm}{draw, right = 0.6cm of m2b2} (m2b3) {}; \fill[green] ($(m2b3.north west)+(0.4cm,-0.02cm)$) rectangle ($(m2b3.south west)+(1cm,0.02cm)$); \draw[line width=0.02cm, -stealth,red] (m1b2.south west) -- ($(m2b3.north west)+(0.2cm,0)$); \draw[line width=0.02cm,red] ($(m2b3.north west)+(0.2cm,0)$) -- ($(m2b3.south west)+(0.2cm,0)$); \draw[line width=0.02cm,dotted,red] (m1b2.south east) -- ($(m2b3.north west)+(1.2cm,0)$); \draw[line width=0.02cm,dotted,red] ($(m2b3.north west)+(1.2cm,0)$) -- ($(m2b3.south west)+(1.2cm,0)$); \node[left = 0 of m2b3] {\small $b_2^3$}; \snode{0.6cm}{0.4cm}{0.1cm}{draw, below left = 0.8cm and -0.8cm of m2b1} (m3b1) {}; \draw[line width=0.02cm, -stealth,blue] (m2b1.south west) -- ($(m3b1.north west)+(0.2cm,0)$); \draw[line width=0.02cm,blue] ($(m3b1.north west)+(0.2cm,0)$) -- ($(m3b1.south west)+(0.2cm,0)$); \draw[line width=0.02cm,dotted,blue] (m2b1.south east) -- (m3b1.north east); \node[left = 0 of m3b1] {\small $b_3^1$}; \snode{2.2cm}{0.4cm}{0.1cm}{draw, right = 1cm of m3b1} (m3b2) {}; \fill[green] ($(m3b2.north west)+(0.8cm,-0.02cm)$) rectangle ($(m3b2.south west)+(1.4cm,0.02cm)$); \draw[line width=0.02cm, -stealth,blue] (m2b3.south west) -- ($(m3b2.north west)+(0.4cm,0)$); \draw[line width=0.02cm,blue] ($(m3b2.north west)+(0.4cm,0)$) -- ($(m3b2.south west)+(0.4cm,0)$); \draw[line width=0.02cm,dotted,blue] (m2b3.south east) -- ($(m3b2.north west)+(1.8cm,0)$); \draw[line width=0.02cm,dotted,blue] ($(m3b2.north 
west)+(1.8cm,0)$) -- ($(m3b2.south west)+(1.8cm,0)$); \draw[line width=0.02cm, -stealth,red] ($(m2b3.south west)+(0.2cm,0)$) -- ($(m3b2.north west)+(0.6cm,0)$); \draw[line width=0.02cm,red] ($(m3b2.north west)+(0.6cm,0)$) -- ($(m3b2.south west)+(0.6cm,0)$); \draw[line width=0.02cm, dotted,red] ($(m2b3.south west)+(1.2cm,0)$) -- ($(m3b2.north west)+(1.6cm,0)$); \draw[line width=0.02cm,dotted,red] ($(m3b2.north west)+(1.6cm,0)$) -- ($(m3b2.south west)+(1.6cm,0)$); \node[left = 0 of m3b2] {\small $b_3^2$}; \path [draw,dashed] let \p1 = (m3b1.south) in let \p2 = (m1b3.north east) in ({\x2 + 0.1cm}, \y2) -- ({\x2+0.1cm}, \y1); \snode{0.4cm}{0.4cm}{0.1cm}{draw, right = 0.6cm of m1b3} (m1b4) {}; \node[left = 0 of m1b4] {\small $b_1^4$}; \snode{0.4cm}{0.4cm}{0.1cm}{draw, right = 1.1cm of m2b3} (m2b4) {}; \node[left = 0 of m2b4] {\small $b_2^4$}; \snode{0.8cm}{0.4cm}{0.1cm}{draw, right = 0.8cm of m3b2} (m3b3) {}; \node[left = 0 of m3b3] {\small $b_3^3$}; \snode{0.3cm}{0.4cm}{0.1cm}{draw, right = 0.6cm of m3b3} (m3b4) {}; \node[left = 0 of m3b4] {\small $b_3^4$}; \draw[line width=0.02cm, -stealth,red] (m1b4.south west) -- (m2b4.north west); \draw[line width=0.02cm, dotted,red] (m1b4.south east) -- (m2b4.north east); \draw[line width=0.02cm,-stealth,blue] (m2b4.south west) -- ($(m3b3.north west)+(0.2cm,0)$); \draw[line width=0.02cm,blue] ($(m3b3.north west)+(0.2cm,0)$) -- ($(m3b3.south west)+(0.2cm,0)$); \draw[line width=0.02cm,dotted,blue] (m2b4.south east) -- ($(m3b3.north west)+(0.6cm,0)$); \draw[line width=0.02cm,dotted,blue] ($(m3b3.north west)+(0.6cm,0)$) -- ($(m3b3.south west)+(0.6cm,0)$); \node[left = 1cm of m1b1] (txtm1) {\small\color{purple} $m_1'$:}; \path let \p1 = (txtm1) in let \p2 = (m2b1) in node (txtm2) at (\x1, \y2) {\small\color{purple} $m_2'$:}; \path let \p1 = (txtm1) in let \p2 = (m3b1) in node (txtm3) at (\x1, \y2) {\small\color{purple} $m_3'$:}; \path let \p1 = (txtm1) in let \p2 = (txtm2) in node (txtj12) at (\x1, {(\y1+\y2)*0.5}) 
{\small\color{purple} $j_{12}'$}; \path let \p1 = (txtm2) in let \p2 = (txtm3) in node (txtj23) at (\x1, {(\y1+\y2)*0.5}) {\small\color{purple} $j_{23}'$}; \fill[gray] ($(m1b2.north west)+(0.02cm,-0.02cm)$) rectangle ($(m1b2.south east)+(-0.02cm,0.02cm)$); \fill[gray] ($(m2b3.north west)+(0.2cm,-0.02cm)$) rectangle ($(m2b3.south west)+(1.2cm,0)$); \fill[gray] ($(m3b2.north west)+(0.6cm,-0.02cm)$) rectangle ($(m3b2.south west)+(1.6cm,0)$); \end{tikzpicture} \caption{After} \label{fig:inj-after} \end{subfigure} \caption{Construction of an Interpolating Memory State} \label{fig:inj} \end{figure} We use the concrete example in~\figref{fig:inj} to motivate the construction of $m_2'$. Here, the white and green areas correspond to locations in $\perm{\_}{\knonempty}$ (i.e., having at least some permission) and in $\perm{\_}{\kreadable}$ (i.e., having at least readable permission). Given $\minj{j_{12}}{m_1}{m_2}$, $\minj{j_{23}}{m_2}{m_3}$ and $\injpacc{(\comp{j_{23}}{j_{12}}, m_1, m_3)}{(j_{13}', m_1', m_3')}$, we need to build $m_2'$ satisfying $\minj{j_{12}'}{m_1'}{m_2'}$, $\minj{j_{23}'}{m_2'}{m_3'}$, $\injpacc{(j_{12},m_1,m_2)}{(j_{12}',m_1',m_2')}$ and $\injpacc{(j_{23},m_2,m_3)}{(j_{23}',m_2',m_3')}$. For now, let us assume no \kwd{free} operation is performed between $m_1$ and $m_1'$ (or between $m_3$ and $m_3'$). Therefore, $m_1'$ and $m_3'$ are expansions of $m_1$ and $m_3$ with new blocks and possible modifications to the public regions of $m_1$ and $m_3$ (according to the discussion in the previous two sections). Here, $m_1'$ has a new block $b_1^4$ and $m_3'$ has two new blocks $b_3^3$ and $b_3^4$. We first fix $j_{12}'$, $j_{23}'$ and the shape of blocks in $m_2'$. We begin with $m_2$ and introduce a newly allocated block $b_2^4$ whose shape matches that of $b_1^4$ in $m_1'$. Then, $j_{12}'$ is obtained by expanding $j_{12}$ with an identity mapping from $b_1^4$ to $b_2^4$.
Furthermore, $j_{23}'$ is also expanded with a mapping from $b_2^4$ to a block in $m_3'$; this mapping is determined by $j_{13}'$. We then set the values and permissions for memory cells in $m_2'$ so that it satisfies injection and the \kunchangedon properties for readable memory regions implied by $\injpacc{(j_{12},m_1,m_2)}{(j_{12}',m_1',m_2')}$ and $\injpacc{(j_{23},m_2,m_3)}{(j_{23}',m_2',m_3')}$. The values and permissions for newly allocated blocks are simply mapped from $m_1'$ by $j_{12}'$. Those for old blocks are fixed as follows. By the memory protection provided in $\injpacc{(\comp{j_{23}}{j_{12}}, m_1, m_3)}{(j_{13}', m_1', m_3')}$, the only memory cells in $m_1$ that may have been modified in $m_1'$ are those mapped all the way to $m_3$ by $\comp{j_{23}}{j_{12}}$, while the cells in $m_3$ that may be modified in $m_3'$ must be in the image of $\comp{j_{23}}{j_{12}}$. To match this fact, the only old memory regions in $m_2'$ whose values and permissions may be modified are those both in the image of $j_{12}$ and in the domain of $j_{23}$. In our example, these regions are the gray areas in~\figref{fig:inj-after}. The values and permissions in those regions are then projected from $m_1'$ by applying the injection function $j_{12}$. The remaining old memory regions keep the same values and permissions as in $m_2$. Finally, we need to deal with the case where \kwd{free} operations are performed when transiting from $m_1$ to $m_1'$ (and also from $m_3$ to $m_3'$). By the memory protection of \kinjp, only memory cells in the gray areas of $m_1'$ (and $m_3'$) in~\figref{fig:inj-after} may be freed (meaning that they are no longer in the memory footprint). To match this situation, the memory cells in $m_2'$ injected from the freed cells in $m_1'$ must also be freed. In summary, the values in $m_2'$ can be derived from $m_1'$ for the following two reasons.
First, as memory injections get composed, unmapped and out-of-reach regions get \emph{bigger}, meaning that more memory regions are protected. For example, in~\figref{fig:inj}, $b_1^1$ is mapped by $j_{12}$ but becomes unmapped by $\comp{j_{23}}{j_{12}}$; the image of $b_2^1$ in $b_3^1$ is in-reach by $j_{23}$ but becomes out-of-reach by $\comp{j_{23}}{j_{12}}$. Protected regions must have unchanged values and permissions. The only unprotected regions are those in the domain and image of the composed injection (i.e., the gray areas in~\figref{fig:inj}), whose values can easily be fixed thanks to the second reason. That reason is: because injections are partial functions, we can deterministically project the values and permissions from a source state into the interpolating state. Note that this is in general not possible when injections are relations. A complete description of the construction of $m_2'$ and the proof can be found in~\apdxref{sec:injp-trans-proof}. \subsubsection{$\refine{\comp{\kinjp}{\kinjp}}{\kinjp}$} By definition, we need to prove: \begin{lemma}\label{lem:injp-comp-refine-injp} $\refine{\comp{\kinjp}{\kinjp}}{\kinjp}$ holds. That is, \begin{tabbing} \quad\=\quad\=\quad\=$\exists m_2'\app j_{12}'\app j_{23}',$\=\kill \>$\forall j_{13}\app m_1\app m_3,\app \minj{j_{13}}{m_1}{m_3} \imply \exists j_{12}\app j_{23}\app m_2,\app \minj{j_{12}}{m_1}{m_2} \land \minj{j_{23}}{m_2}{m_3} \land$\\ \>\>$\forall m_1'\app m_2'\app m_3'\app j_{12}'\app j_{23}',\app \injpacc{(j_{12}, m_1, m_2)}{(j_{12}', m_1', m_2')} \imply \injpacc{(j_{23}, m_2, m_3)}{(j_{23}', m_2', m_3')} \imply$\\ \>\>\>$\minj{j_{12}'}{m_1'}{m_2'} \imply \minj{j_{23}'}{m_2'}{m_3'} \imply \exists j_{13}',\app \injpacc{(j_{13}, m_1, m_3)}{(j_{13}', m_1', m_3')} \land \minj{j_{13}'}{m_1'}{m_3'}.$ \end{tabbing} \end{lemma} This lemma conforms to the graphic representation in~\figref{fig:trans-comp2}. To prove it, we pick $j_{12}$ to be an identity injection, $j_{23} = j_{13}$ and $m_2 = m_1$.
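The choice of $j_{12}$ as an identity injection makes the decomposition trivial, since composing any injection with the identity returns the original mappings. This can be sanity-checked in a toy model of injections as partial functions from block ids to (block, offset) pairs; the Python sketch below is purely illustrative and is not CompCert's actual representation.

```python
# Toy model: an injection maps a block id to (target block id, offset),
# or None when the block is unmapped.  All names here are illustrative.

def compose(j23, j12):
    """Model of j23 . j12: follow a block through both maps, adding offsets."""
    def j(b):
        r = j12(b)
        if r is None:
            return None
        b2, d12 = r
        r2 = j23(b2)
        if r2 is None:
            return None
        b3, d23 = r2
        return (b3, d12 + d23)
    return j

def identity(b):
    return (b, 0)                # every block maps to itself with offset 0

j13 = {1: (7, 16), 2: (8, 0)}.get
composed = compose(j13, identity)

# Composing with the identity on the source side changes nothing:
for b in [1, 2, 3]:
    print(composed(b) == j13(b))  # True for every block
```

In the refinement proof this is exactly why picking $j_{12}$ as the identity and $j_{23} = j_{13}$ recovers $j_{13}$ as the composition.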
Then the lemma is reduced to proving the existence of $j_{13}'$ that satisfies $\injpacc{(j_{13}, m_1, m_3)}{(j_{13}', m_1', m_3')}$ and $\minj{j_{13}'}{m_1'}{m_3'}$. By picking $j_{13}' = \comp{j_{12}'}{j_{23}'}$, we can easily prove that these properties are satisfied by exploiting the guarantees provided by \kinjp (see~\apdxref{sec:injp-trans-proof}). \section{Transitivity of the CKLR with memory protection} \label{sec:injp-trans-proof} \subsection{More complete definitions of memory injection and $\kinjp$ accessibility} We have used simplified versions of the definitions of $\kwd{perm}$, $\hookrightarrow_m$ and $\leadsto_{\kinjp}$ in Sec.~\ref{sec:injp}. To present a more detailed proof of the CKLR with memory protection ($\kinjp$), we present the full definitions of $\kwd{perm}$ and $\leadsto_{\kinjp}$. We also present a more complete definition of $\hookrightarrow_m$. Note that this definition is still not 100\% complete, because we omit two properties for simplicity. They concern the alignment and range of the offset $\delta$ in a mapping $j(b) = \some{(b',\delta)}$. As preconditions they are not essential for this proof and can be handled similarly to the other properties of $\hookrightarrow_m$. Readers interested in these details can find them in our artifact. By the definition of the CompCert memory model, a memory cell has both maximum and current permissions such that $\permcur{m}{p} \subseteq \permmax{m}{p}$. During the execution of a program, the current permission of a memory cell may be lowered or raised by an external call. However, the maximum permission can only decrease in both internal and external calls. This invariant was defined in CompCert as: \begin{tabbing} \quad\=\quad\=\kill \> $\mpd{m_1}{m_2} \iff$\\ \>\> $\forall \app b \app o \app p, \app b \in \validblock{m_1} \imply (b,o) \in \permmax{m_2}{p} \imply (b,o) \in \permmax{m_1}{p}$ \end{tabbing} \begin{definition} The definition of $\hookrightarrow_m$ is presented in Fig.~\ref{fig:meminj}.\\ \begin{figure}[ht!]
\small \begin{tabbing} \quad\=\quad\=\;\;\=\kill \> $\minj{j}{m_1}{m_2} := \{|$\\ \>\> \cmnt{(* Preservation of permission under injection *)}\\ \>\> $(1) \app \forall\app b_1\app b_2\app o_1\app o_2 \app k\app p,\app j(b_1) = \some{(b_2,o_2 - o_1)} \imply (b_1, o_1) \in \permk{k}{m_1}{p} \imply (b_2, o_2) \in \permk{k}{m_2}{p}$\\ \>\> \cmnt{(* Preservation of memory values for currently readable cells under injection *)}\\ \>\> $(2) \app \forall\app b_1\app b_2\app o_1\app o_2,\app j(b_1) = \some{(b_2,o_2 - o_1)} \imply (b_1,o_1) \in \permcur{m_1}{\kreadable}$\\ \>\>\> $\imply \vinj{j}{m_1[b_1,o_1]}{m_2[b_2,o_2]}$\\ \>\> \cmnt{(* Invalid source blocks must be unmapped *)}\\ \>\> $(3) \app \forall\app b_1,\app b_1 \notin \validblock{m_1} \imply j(b_1) = \none$\\ \>\> \cmnt{(* The range of $j$ must only contain valid blocks *)}\\ \>\> $(4) \app \forall\app b_1\app b_2\app \delta,\app j(b_1) = \some{(b_2,\delta)} \imply b_2 \in \validblock{m_2}$\\ \>\> \cmnt{(* Two disjoint source cells with non-empty permission do not overlap with each other after injection *)}\\ \>\> $(5) \app \forall\app b_1\app b_2\app o_1\app o_2\app b_1'\app b_2'\app o_1'\app o_2',\app b_1 \neq b_1' \imply j(b_1) = \some{(b_2, o_2 - o_1)} \imply j(b_1') = \some{(b_2', o_2' - o_1')} \imply $\\ \>\>\> $(b_1, o_1) \in \permmax{m_1}{\knonempty} \imply (b_1', o_1') \in \permmax{m_1}{\knonempty} \imply b_2 \neq b_2' \lor o_2 \neq o_2'$\\ \>\> \cmnt{(* Given a target cell, its corresponding source cell either }\\ \>\>\> \cmnt{has the same permission or does not have any permission *)}\\ \>\> $(6) \app \forall\app b_1\app o_1\app b_2\app o_2\app k\app p,\app j(b_1) = \some{(b_2, o_2 - o_1)} \imply (b_2, o_2) \in \permk{k}{m_2}{p}$\\ \>\>\> $\imply (b_1,o_1) \in \permk{k}{m_1}{p} \lor (b_1,o_1) \not\in \permmax{m_1}{\knonempty}$ \end{tabbing} \caption{Definition of Memory Injection} \label{fig:meminj} \end{figure} \end{definition} For the complete definition of $\leadsto_{\kinjp}$, we define the
separation property for injections as: \begin{tabbing} \quad\=\quad\=\kill \> $\injsep{j}{j'}{m_1}{m_2} \iff$\\ \>\> $\forall \app b_1 \app b_2 \app \delta, \app j(b_1) = \none \imply j'(b_1) = \some{(b_2,\delta)} \imply b_1 \notin \validblock{m_1} \land b_2 \notin \validblock{m_2}$ \end{tabbing} This property states that, starting from $\minj{j}{m_1}{m_2}$, after executing the source and target semantics the future injection $j'$ only extends $j$ by relating newly allocated blocks. \begin{definition}\label{def:injpacc} Accessibility relation of $\kinjp$ \begin{tabbing} \quad\=$\injpacc{(j, m_1, m_2)}{(j', m_1', m_2')} \; \iff \;$\=\kill \>$\injpacc{(j, m_1, m_2)}{(j', m_1', m_2')} \; \iff \;j \subseteq j' \land \unmapped{j} \subseteq \unchangedon{m_1}{m_1'}$\\ \>\>$\land\; \outofreach{j}{m_1} \subseteq \unchangedon{m_2}{m_2'}$ \\ \>\>$\land\; \mpd{m_1}{m_1'} \land \mpd{m_2}{m_2'}$\\ \>\>$\land\; \injsep{j}{j'}{m_1}{m_2}.$ \end{tabbing} \end{definition} By definition, if $\unchangedon{m_1}{m_2}$ then $\validblock{m_1} \subseteq \validblock{m_2}$, a fact which was elided in Sec.~\ref{ssec:cklr}. As a result, $\injpacc{(j, m_1, m_2)}{(j', m_1', m_2')}$ implies that $\validblock{m_1} \subseteq \validblock{m_1'}$ and $\validblock{m_2} \subseteq \validblock{m_2'}$. These facts are used implicitly in the proofs later. \subsection{Auxiliary Properties} In this section we present several lemmas about properties of memory injection and $\kinjp$ accessibility. These lemmas are used in the proof of the $\kinjp$ refinement. First, memory injections are composable. \begin{lemma}\label{lem:inj-trans} Given $\minj{j_{12}}{m_1}{m_2}$ and $\minj{j_{23}}{m_2}{m_3}$, we have \[ \minj{j_{23} \cdot j_{12}}{m_1}{m_3} \] \end{lemma} This property is proved and used in CompCert; we do not repeat the proof here.
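At the level of block mappings, the composition $j_{23} \cdot j_{12}$ in Lemma \ref{lem:inj-trans} simply chains the two partial maps and adds their offsets. The Python sketch below models this; the block ids, dictionary representation, and helper names are illustrative assumptions, not CompCert code.

```python
# Toy model of memory injections as partial functions: an injection maps a
# source block id to (target block id, delta), or None when unmapped.

def compose(j23, j12):
    """(j23 . j12)(b1) follows b1 through both injections, adding offsets."""
    def j13(b1):
        r12 = j12(b1)
        if r12 is None:
            return None          # unmapped by j12 => unmapped by j23 . j12
        b2, d12 = r12
        r23 = j23(b2)
        if r23 is None:
            return None          # intermediate block unmapped by j23
        b3, d23 = r23
        return (b3, d12 + d23)   # offsets accumulate
    return j13

j12 = {1: (10, 4)}.get           # block 1 -> block 10 at offset 4
j23 = {10: (100, 8)}.get         # block 10 -> block 100 at offset 8
j13 = compose(j23, j12)

print(j13(1))   # (100, 12)
print(j13(2))   # None
```

Note that any block unmapped by either injection is unmapped by the composition, which matches the earlier observation that unmapped and out-of-reach regions can only grow under composition.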
\begin{lemma}\label{lem:readable-midvalue} Given $\minj{j_{23}\cdot j_{12}}{m_1}{m_3}$, $(b_1,o_1) \in \permcur{m_1}{\kreadable}$ and $j_{23}\cdot j_{12}(b_1) = \some{(b_3,o_3-o_1)}$, then \[ \exists v_2, \vinj{j_{12}}{m_1[b_1,o_1]}{v_2} \land \vinj{j_{23}}{v_2}{m_3[b_3,o_3]}. \] \end{lemma} Note that $j_{23}\cdot j_{12}(b_1) = \some{(b_3,o_3-o_1)}$ iff $\exists\app b_2\app o_2, j_{12}(b_1) = \some{(b_2,o_2 - o_1)} \land j_{23}(b_2) = \some{(b_3,o_3-o_2)}$. \begin{proof} According to property (2) in Fig.~\ref{fig:meminj}, we know that $\vinj{j_{23} \cdot j_{12}}{m_1[b_1,o_1]}{m_3[b_3,o_3]}$. We proceed by cases on the value $m_1[b_1,o_1]$: \begin{itemize} \item If $m_1[b_1,o_1] = \kVundef$, we take $v_2 = \kVundef$. Then $\vinj{j_{12}}{\kVundef}{\kVundef} \land \vinj{j_{23}}{\kVundef}{m_3[b_3,o_3]}$ trivially holds. \item If $m_1[b_1,o_1]$ is a concrete value, we take $v_2 = m_1[b_1,o_1]$. In this case we have $m_1[b_1,o_1] = v_2 = m_3[b_3,o_3]$. \item If $m_1[b_1,o_1] = \vptr{b_1'}{o_1'}$, we can derive that $\vinj{j_{23}\cdot j_{12}}{\vptr{b_1'}{o_1'}}{m_3[b_3,o_3]}$ implies $\exists b_3'\ o_3', m_3[b_3,o_3] = \vptr{b_3'}{o_3'}$ and $ j_{23} \cdot j_{12}(b_1') = \some{(b_3',o_3' - o_1')}$. Therefore \[ \exists b_2'\ o_2', j_{12}(b_1') = \some{(b_2',o_2' - o_1')} \land j_{23}(b_2') = \some{(b_3',o_3' - o_2')} \] We take $v_2 = \vptr{b_2'}{o_2'}$, and $\vinj{j_{12}}{m_1[b_1,o_1]}{v_2} \land \vinj{j_{23}}{v_2}{m_3[b_3,o_3]}$ can be derived from the formula above. \end{itemize} \end{proof} \begin{lemma}\label{lem:outofreach-reverse} Given $\minj{j_{12}}{m_1}{m_2}$, $\minj{j_{23}}{m_2}{m_3}$ and $j_{23}(b_2) = \some{(b_3,o_3 - o_2)}$.
If $(b_2,o_2) \in \outofreach{j_{12}}{m_1}$ and $(b_2,o_2) \in \permmax{m_2}{\knonempty}$, then \[ (b_3,o_3) \in \outofreach{j_{23}\cdot j_{12}}{m_1} \] \begin{proof} According to the definition of $\koutofreach$, if $j_{12}(b_1) = \some{(b_2',o_2' - o_1)}$ and $j_{23}(b_2') = \some{(b_3,o_3 - o_2')}$, we need to prove that $(b_1,o_1) \notin \permmax{m_1}{\knonempty}$. If $b_2 = b_2'$, from $(b_2,o_2) \in \outofreach{j_{12}}{m_1}$ we can directly prove $(b_1,o_1) \notin \permmax{m_1}{\knonempty}$. If $b_2 \neq b_2'$, suppose for contradiction that $(b_1,o_1) \in \permmax{m_1}{\knonempty}$; by property (1) of $\minj{j_{12}}{m_1}{m_2}$ we get $(b_2',o_2') \in \permmax{m_2}{\knonempty}$. Now $(b_2,o_2)$ and $(b_2',o_2')$ are two different positions in $m_2$ which are mapped to the same position $(b_3,o_3)$ in $m_3$. This scenario is prohibited by the non-overlapping property (5) of $\minj{j_{23}}{m_2}{m_3}$. So $(b_1,o_1) \notin \permmax{m_1}{\knonempty}$. \end{proof} \end{lemma} \subsection{Proof of Lemma \ref{lem:injp-refine-injp-comp}} Based on the definitions and lemmas above, we prove Lemma \ref{lem:injp-refine-injp-comp} in this section: \begin{tabbing} \quad\=\quad\=\quad\=$\exists m_2'\app j_{12}'\app j_{23}',$\=\kill \>$\forall j_{12}\app j_{23}\app m_1\app m_2\app m_3,\app \minj{j_{12}}{m_1}{m_2} \imply \minj{j_{23}}{m_2}{m_3} \imply \exists j_{13},\app \minj{j_{13}}{m_1}{m_3} \app \land$\\ \>\>$\forall m_1'\app m_3'\app j_{13}',\app \injpacc{(j_{13}, m_1, m_3)}{(j_{13}', m_1', m_3')} \imply \minj{j_{13}'}{m_1'}{m_3'} \imply$\\ \>\>\>$\exists m_2'\app j_{12}'\app j_{23}', \injpacc{(j_{12},m_1,m_2)}{(j_{12}',m_1',m_2')} \land \minj{j_{12}'}{m_1'}{m_2'}$\\ \>\>\>\>$\land \injpacc{(j_{23},m_2,m_3)}{(j_{23}',m_2',m_3')} \land \minj{j_{23}'}{m_2'}{m_3'}.$ \end{tabbing} Given $\minj{j_{12}}{m_1}{m_2}$ and $\minj{j_{23}}{m_2}{m_3}$, we take $j_{13} = j_{23} \cdot j_{12}$; by Lemma \ref{lem:inj-trans} we can prove $\minj{j_{13}}{m_1}{m_3}$.
After the external call, we are given $\injpacc{(j_{13}, m_1, m_3)}{(j_{13}', m_1', m_3')}$ and $\minj{j_{13}'}{m_1'}{m_3'}$. We present the construction and properties of $j_{12}', j_{23}'$ and $m_2'$ in Sec.~\ref{subsubsec:construction}. The proof then reduces to proving $\minj{j_{12}'}{m_1'}{m_2'}$, $\minj{j_{23}'}{m_2'}{m_3'}$, $\injpacc{(j_{12},m_1,m_2)}{(j_{12}',m_1',m_2')}$ and $\injpacc{(j_{23},m_2,m_3)}{(j_{23}',m_2',m_3')}$; these are proved in Sec.~\ref{subsubsec:detail-proof}. \subsubsection{Construction and properties of $j_{12}'$, $j_{23}'$ and $m_2'$}\label{subsubsec:construction} \begin{definition}\label{def:construction} We construct the memory state $m_2'$ by the following three steps; $j_{12}'$ and $j_{23}'$ are constructed in step (1). \begin{enumerate} \item We first extend $m_2$ by allocating new blocks; at the same time we extend $j_{12}$ and $j_{23}$ to get $j_{12}'$ and $j_{23}'$ such that $j_{13}' = j_{23}' \cdot j_{12}'$. Specifically, for each new block $b_1$ in $m_1'$ relative to $m_1$ which is mapped by $j_{13}'$ as $j_{13}'(b_1) = \some{(b_3,\delta)}$, we allocate a new memory block $b_2$ in $m_2$ and add the new mappings $(b_1,(b_2,0))$ and $(b_2,(b_3,\delta))$ to $j_{12}$ and $j_{23}$, respectively. \item We then copy the contents of new blocks in $m_1'$ into the corresponding new blocks in $m_2'$ as follows. For each \emph{mapped} new block $b_1$ in $m_1'$ where $j_{12}'(b_1) = \some{(b_2,0)}$, we enumerate all positions $(b_1,o_1) \in \permmax{m_1'}{\knonempty}$ and copy the permission of $(b_1,o_1)$ in $m_1'$ to $(b_2,o_1)$ in $m_2'$. If $(b_1,o_1) \in \permcur{m_1'}{\kreadable}$, we further set $\mcontents{m_2'}{b_2}{o_1}$ to $v_2$ where $\vinj{j_{12}'}{\mcontents{m_1'}{b_1}{o_1}}{v_2}$. The existence of $v_2$ here is provided by Lemma \ref{lem:readable-midvalue} with preconditions $\minj{j_{13}'}{m_1'}{m_3'}$, $(b_1,o_1) \in \permcur{m_1'}{\kreadable}$ and $j_{13}'(b_1) = \some{(b_3,\delta)}$ (because $b_1$ is a new block chosen in step (1)).
\item Finally, we update the old blocks of $m_2$. If a position $(b_2,o_2) \in \pubtgtmem{j_{12}}{m_1} \cap \pubsrcmem{j_{23}}$, the permission and value of this position in $m_2'$ should come from the corresponding position $(b_1,o_1)$ in $m_1'$ as depicted in Fig. \ref{fig:injp-int-st}(b). All other positions simply remain unchanged from $m_2$ to $m_2'$. To complete the construction, we have to enumerate the set $\pubtgtmem{j_{12}}{m_1}\cap \pubsrcmem{j_{23}}$. We claim that \[ \pubtgtmem{j_{12}}{m_1} \subseteq \permmax{m_2}{\knonempty} \] where $\permmax{m_2}{\knonempty}$ is enumerable. Note that $(b_2,o_2) \in \pubtgtmem{j_{12}}{m_1} \iff (b_2,o_2) \notin \outofreach{j_{12}}{m_1}$ by definition. If $(b_2,o_2) \in \pubtgtmem{j_{12}}{m_1}$, then there exists $(b_1,o_1) \in \permmax{m_1}{\knonempty}$ such that $j_{12}(b_1) = \some{(b_2,o_2 - o_1)}$. Property (1) of $\minj{j_{12}}{m_1}{m_2}$ ensures that $(b_2,o_2) \in \permmax{m_2}{\knonempty}$. The concrete algorithm can be described as follows. For $(b_2,o_2) \in \permmax{m_2}{\knonempty}$, we can enumerate $\permmax{m_1}{\knonempty}$ to find whether there exists a corresponding position $(b_1,o_1) \in \permmax{m_1}{\knonempty}$ such that $j_{12}(b_1) = \some{(b_2,o_2 - o_1)}$. Note that the non-overlapping property (5) of $\minj{j_{12}}{m_1}{m_2}$ ensures that we cannot find more than one such position. If such a $(b_1,o_1)$ exists and $j_{23}(b_2) = \some{(b_3,o_3 - o_2)}$, we copy the permission of position $(b_1,o_1)$ in $m_1'$ to $(b_2,o_2)$. If $(b_1,o_1) \in \permcur{m_1'}{\kreadable}$, we further set $\mcontents{m_2'}{b_2}{o_2}$ to $v_2$ where $\vinj{j_{12}'}{m_1'[b_1,o_1]}{v_2}$. Note that although $(b_1,o_1) \in \permmax{m_1}{\knonempty}$, it may be freed during the external call and have empty permission in $m_1'$; in that case we must also free $(b_2,o_2)$ in $m_2'$.
Otherwise $\minj{j_{23}'}{m_2'}{m_3'}$ would break, because $(b_2,o_2) \in \permmax{m_2'}{\knonempty}$ while $(b_3,o_3)$ \emph{may} have empty permission in $m_3'$. \end{enumerate} \end{definition} We present several lemmas about $j_{12}', j_{23}'$ and $m_2'$ according to Definition \ref{def:construction} as follows. \begin{lemma}\label{lem:constr-inj} \[ (1) j_{12} \subseteq j_{12}' \; (2) j_{23} \subseteq j_{23}' \; (3) \injsep{j_{12}}{j_{12}'}{m_1}{m_2} \; (4) \injsep{j_{23}}{j_{23}'}{m_2}{m_3} \] \end{lemma} \begin{proof} Directly from construction step (1). \end{proof} \begin{lemma}\label{lem:m2-unchangedon} \[ (1) \outofreach{j_{12}}{m_1} \subseteq \unchangedon{m_2}{m_2'} \; (2) \unmapped{j_{23}} \subseteq \unchangedon{m_2}{m_2'} \] \end{lemma} \begin{proof} For each position $(b_2,o_2)$ changed from $m_2$ to $m_2'$ in step (3), we enforce that $\exists b_1, j_{12}(b_1) = \some{(b_2,o_2 - o_1)} \land (b_1,o_1) \in \permmax{m_1}{\knonempty}$ and $(b_2,o_2) \notin \unmapped{j_{23}}$. Thus, if $(b_2,o_2) \in \outofreach{j_{12}}{m_1}$ or $(b_2,o_2) \in \unmapped{j_{23}}$, then $(b_2,o_2) \in \unchangedon{m_2}{m_2'}$. \end{proof} \begin{lemma}\label{lem:m2-mpd} \[ \mpd{m_2}{m_2'} \] \end{lemma} \begin{proof} For an unchanged position $(b_2,o_2)$ in $m_2$, we trivially have $(b_2,o_2) \in \permmax{m_2'}{p} \iff (b_2,o_2)\in \permmax{m_2}{p}$. If $(b_2,o_2)$ is changed in step (3), then the permission of $(b_2,o_2)$ in $m_2'$ is copied from some corresponding position $(b_1,o_1)$ in $m_1'$ ($j_{12}(b_1) = \some{(b_2,o_2 - o_1)}$). Given $(b_2,o_2) \in \permmax{m_2'}{p}$, we get $(b_1,o_1) \in \permmax{m_1'}{p}$. From $\mpd{m_1}{m_1'}$ we can further derive that $(b_1,o_1) \in \permmax{m_1}{p}$. Finally, by property (1) of $\minj{j_{12}}{m_1}{m_2}$ we can conclude that $(b_2,o_2) \in \permmax{m_2}{p}$.
\end{proof} \subsubsection{Proof of remaining formulas} \label{subsubsec:detail-proof} Recall that we are still proving Lemma \ref{lem:injp-refine-injp-comp}; we have constructed $j_{12}', j_{23}'$ and $m_2'$. Based on their construction and the properties presented above, we give complete proofs of the last four formulas separately in this section. \begin{lemma}\label{lem:constr-inj1} $\minj{j_{12}'}{m_1'}{m_2'}$ \end{lemma} \begin{proof} We check the properties in Fig. \ref{fig:meminj} as follows: \begin{enumerate} \item Given $ j_{12}'(b_1) = \some{(b_2,o_2 - o_1)} \land (b_1, o_1) \in \permk{k}{m_1'}{p}$. We prove $(b_2, o_2) \in \permk{k}{m_2'}{p}$ by cases on $j_{12}(b_1)$. Note that $j_{12}(b_1)$ is either $\none$ or the same as $j_{12}'(b_1)$ because of $j_{12} \subseteq j_{12}'$. \begin{itemize} \item If $j_{12} (b_1)= \none$, the mapping $j_{12}'(b_1) = \some{(b_2,o_2-o_1)}$ is added in step (1). As a result, we know $\exists b_3\app \delta, j_{13}'(b_1) = \some{(b_3,\delta)}$. Since $ \permk{k}{m_1'}{p} \subseteq \permmax{m_1'}{\knonempty}$, we know $(b_1,o_1) \in \permmax{m_1'}{\knonempty}$ and the permission of $(b_1,o_1)$ in $m_1'$ is copied to $(b_2,o_2)$ in $m_2'$ in step (2). Therefore $(b_2,o_2) \in \permk{k}{m_2'}{p}$. \item If $j_{12}(b_1) = \some{(b_2,o_2 - o_1)}$, we further distinguish whether $(b_2,o_2)$ is a public position by cases on $j_{23}(b_2)$: \begin{itemize} \item If $j_{23}(b_2) = \none$, i.e.\ $(b_2,o_2) \in \unmapped{j_{23}}$, then according to Lemma \ref{lem:m2-unchangedon} we know $(b_2,o_2) \in \unchangedon{m_2}{m_2'}$. At the same time, we also get $(b_1,o_1) \in \unmapped{j_{13}}$ because of $j_{13} = j_{23} \cdot j_{12}$. Together with $\injpacc{(j_{13},m_1,m_3)}{(j_{13}',m_1',m_3')}$, we can conclude that $(b_1,o_1) \in \unchangedon{m_1}{m_1'}$. Therefore, we get $(b_1,o_1) \in \permk{k}{m_1}{p}$. Using property (1) of $\minj{j_{12}}{m_1}{m_2}$ we get $(b_2,o_2) \in \permk{k}{m_2}{p}$.
Since $(b_2,o_2)$ is also unchanged between $m_2$ and $m_2'$, $(b_2, o_2) \in \permk{k}{m_2'}{p}$. \item If $j_{23}(b_2) = \some{(b_3,o_3 - o_2)}$, the permission of $(b_2,o_2)$ in $m_2'$ is set to be the same as that of $(b_1,o_1)$ in $m_1'$ in step (3). So $(b_2,o_2) \in \permk{k}{m_2'}{p}$ holds trivially. \end{itemize} \end{itemize} \item Given $j_{12}'(b_1) = \some{(b_2,o_2 - o_1)} \land (b_1, o_1) \in \permcur{m_1'}{\kreadable}$, following the method in (1) we can prove $\vinj{j_{12}'}{m_1'[b_1,o_1]}{m_2'[b_2,o_2]}$. \item Given $b_1 \notin \validblock{m_1'}$, we know $b_1 \notin \validblock{m_1}$ (since $\validblock{m_1} \subseteq \validblock{m_1'}$), therefore $j_{12}(b_1) = \none$. Since $b_1$ cannot be added to $j_{12}'$ in step (1), we can conclude that $j_{12}'(b_1) = \none$. \item Given $j_{12}'(b_1) = \some{(b_2,\delta)}$, it is easy to show that $b_2$ is either an old block in $m_2$ ($j_{12}(b_1) = \some{(b_2,\delta)}$) or a newly allocated block ($j_{12}(b_1)= \none$); therefore $b_2 \in \validblock{m_2'}$. \item Given $j_{12}'(b_1) = \some{(b_2,o_2 - o_1)} \land (b_1,o_1) \in \permmax{m_1'}{\knonempty}$ and $j_{12}'(b_1') = \some{(b_2',o_2' - o_1')} \land (b_1',o_1') \in \permmax{m_1'}{\knonempty}$ where $b_1 \neq b_1'$. We need to prove that these two positions do not overlap ($(b_2,o_2) \neq (b_2',o_2')$) by cases on whether $b_1$ and $b_1'$ are mapped by the old injection $j_{12}$. Note that $j_{12} \subseteq j_{12}'$, so $j_{12}(b)$ is either $\none$ or the same as $j_{12}'(b)$. \begin{itemize} \item $j_{12} (b_1) = j_{12} (b_1') = \none$. The $j_{12}'$ mappings of both are added in step (1). It is obvious that the newly added mappings in step (1) never map different blocks in $m_1'$ into the same block in $m_2'$. Therefore $b_2 \neq b_2'$. \item $j_{12}(b_1) = \some{(b_2,o_2 - o_1)}, j_{12}(b_1') = \none$. We can derive that $b_2 \in \validblock{m_2}$ by property (4) of $\minj{j_{12}}{m_1}{m_2}$, while $b_2'$ is newly allocated from $m_2$ in step (1). Therefore $b_2 \neq b_2'$.
\item $j_{12}(b_1) = \none, j_{12}(b_1') = \some{(b_2',o_2' - o_1')}$. Similarly we have $b_2 \neq b_2'$. \item $j_{12}(b_1) = \some{(b_2,o_2 - o_1)}, j_{12}(b_1') = \some{(b_2',o_2' - o_1')}$. We can prove $(b_2,o_2) \neq (b_2',o_2')$ using property (5) of $\minj{j_{12}}{m_1}{m_2}$ by showing $(b_1,o_1) \in \permmax{m_1}{\knonempty}$ and $(b_1',o_1') \in \permmax{m_1}{\knonempty}$. This follows from the $\mpd{m_1}{m_1'}$ invariant in $\injpacc{(j_{13},m_1,m_3)}{(j_{13}',m_1',m_3')}$. \end{itemize} \item Given $j_{12}'(b_1) = \some{(b_2, o_2 - o_1)} \land (b_2, o_2) \in \permk{k}{m_2'}{p}$. Similarly we prove $ (b_1,o_1) \in \permk{k}{m_1'}{p}$ or $(b_1,o_1) \not\in \permmax{m_1'}{\knonempty} $ by cases on $j_{12}(b_1)$: \begin{itemize} \item If $j_{12}(b_1) = \none$, then $b_1$ and $b_2$ are new blocks by $\injsep{j_{12}}{j_{12}'}{m_1}{m_2}$. According to the construction steps, every nonempty permission of $(b_2,o_2)$ in $m_2'$ is copied from $(b_1,o_1)$ in $m_1'$. Therefore $(b_1,o_1) \in \permk{k}{m_1'}{p}$. \item If $j_{12}(b_1) = \some{(b_2,o_2 - o_1)}$, then $b_1$ and $b_2$ are old blocks. We further divide $j_{23}(b_2)$ into two cases: \begin{itemize} \item $j_{23}(b_2) = \none$. In this case we have $(b_1,o_1) \in \unchangedon{m_1}{m_1'}$ and $(b_2,o_2) \in \unchangedon{m_2}{m_2'}$ (same as in (1)). We can derive $(b_2,o_2) \in \permk{k}{m_2}{p}$, then $ (b_1,o_1) \in \permk{k}{m_1}{p} \lor (b_1,o_1) \not\in \permmax{m_1}{\knonempty}$ by property (6) of $\minj{j_{12}}{m_1}{m_2}$. Finally $ (b_1,o_1) \in \permk{k}{m_1'}{p} \lor (b_1,o_1) \not\in \permmax{m_1'}{\knonempty}$ by $(b_1,o_1) \in \unchangedon{m_1}{m_1'}$. \item $j_{23}(b_2) = \some{(b_3,o_3 - o_2)}$. We assume that $(b_1,o_1) \in \permmax{m_1'}{\knonempty}$ (otherwise the conclusion holds trivially); by $\mpd{m_1}{m_1'}$ we can derive that $(b_1,o_1) \in \permmax{m_1}{\knonempty}$. Therefore $(b_2,o_2) \in \pubtgtmem{j_{12}}{m_1} \cap \pubsrcmem{j_{23}}$, and its permission is copied from $m_1'$ in step (3).
As a result, we get $(b_1,o_1) \in \permk{k}{m_1'}{p}$ from $(b_2,o_2) \in \permk{k}{m_2'}{p}$. \end{itemize} \end{itemize} \end{enumerate} \end{proof} \begin{lemma}\label{lem:constr-inj2} $\minj{j_{23}'}{m_2'}{m_3'}$ \end{lemma} \begin{proof} $\ $ \begin{enumerate} \item Given $j_{23}'(b_2) = \some{(b_3,o_3 - o_2)} \land (b_2,o_2) \in \permk{k}{m_2'}{p}$. We prove $(b_3,o_3) \in \permk{k}{m_3'}{p}$ by cases on whether $b_2 \in \validblock{m_2}$. \begin{itemize} \item If $b_2 \notin \validblock{m_2}$ is a new block relative to $m_2$, then the permission $(b_2,o_2) \in \permk{k}{m_2'}{p}$ is copied from $m_1'$ in step (2). Therefore we get $(b_1,o_1) \in \permk{k}{m_1'}{p}$ and $j_{23}' \cdot j_{12}'(b_1) = \some{(b_3,o_3 - o_1)}$ according to step (1). From property (1) of $\minj{j_{13}'}{m_1'}{m_3'}$ we get $(b_3,o_3) \in \permk{k}{m_3'}{p}$. \item If $b_2 \in \validblock{m_2}$, then $j_{23}(b_2) = \some{(b_3,o_3 - o_2)}$ by $\injsep{j_{23}}{j'_{23}}{m_2}{m_3}$. We further distinguish whether $(b_2,o_2) \in \outofreach{j_{12}}{m_1}$ using the same algorithm as in step (3). \begin{itemize} \item If $(b_2,o_2) \in \outofreach{j_{12}}{m_1}$, then according to Lemma \ref{lem:m2-unchangedon} we get $(b_2,o_2) \in \unchangedon{m_2}{m_2'}$ and $(b_2,o_2) \in \permk{k}{m_2}{p}$. From $\minj{j_{23}}{m_2}{m_3}$ we can derive $(b_3,o_3) \in \permk{k}{m_3}{p}$. By Lemma \ref{lem:outofreach-reverse}, $(b_3,o_3) \in \outofreach{j_{13}}{m_1}$. Therefore $(b_3,o_3)\in \unchangedon{m_3}{m_3'}$ and $(b_3,o_3) \in \permk{k}{m_3'}{p}$. \item If $(b_2,o_2) \notin \outofreach{j_{12}}{m_1}$, the permission of the public position $(b_2,o_2)$ in $m_2'$ is copied from $m_1'$ in step (3). Thus $(b_1,o_1) \in \permk{k}{m_1'}{p}$ and $j_{13}'(b_1) = \some{(b_3,o_3 - o_1)}$. From property (1) of $\minj{j_{13}'}{m_1'}{m_3'}$ we get $(b_3,o_3) \in \permk{k}{m_3'}{p}$. \end{itemize} \end{itemize} \item The proof is similar to (1).
Lemma \ref{lem:readable-midvalue} ensures that the constructed value $v_2$ in $m_2'$ can be related to the value in $m_3'$ as $\vinj{j_{23}'}{v_2}{m_3'[b_3,o_3]}$. \item Given $b_2 \notin \validblock{m_2'}$, we have $b_2 \notin \validblock{m_2}$ and $j_{23}(b_2) = \none$. Also $b_2$ is not added into the domain of $j_{23}'$ in step (1), so $j_{23}'(b_2) = \none$. \item Given $j_{23}'(b_2) = \some{(b_3,o_3)}$. Similarly $b_3$ is either an old block in $m_3$ ($j_{23}(b_2) = \some{(b_3,o_3)}$) or a new block in $m_3'$ ($j_{23}(b_2) = \none$). Therefore $b_3 \in \validblock{m_3'}$. \item Given $j_{23}'(b_2) = \some{(b_3,o_3 - o_2)} \land (b_2,o_2) \in \permmax{m_2'}{\knonempty}$ and $j_{23}'(b_2') = \some{(b_3',o_3' - o_2')} \land (b_2',o_2') \in \permmax{m_2'}{\knonempty}$ where $b_2 \neq b_2'$. We need to prove that $(b_3,o_3) \neq (b_3',o_3')$ by cases of whether $b_2$ and $b_2'$ are mapped by the old injection $j_{23}$. Note that $j_{23} \subseteq j_{23}'$, so $j_{23}(b)$ is either $\none$ or the same as $j_{23}'(b)$. \begin{itemize} \item $j_{23} (b_2) = j_{23} (b_2') = \none$. Their $j_{23}'$ mappings are added in step (1). It is obvious that newly added mappings in $j_{23}'$ never map different blocks in $m_2'$ into the same block in $m_3'$. Therefore $b_3 \neq b_3'$. \item $j_{23}(b_2) = \some{(b_3,o_3 - o_2)}, j_{23}(b_2') = \none$. We can derive that $b_3 \in \validblock{m_3}$ by property (4) of $\minj{j_{23}}{m_2}{m_3}$, while $b_3' \notin \validblock{m_3}$ can be derived from $\injsep{j_{23}}{j_{23}'}{m_2}{m_3}$. Therefore $b_3 \neq b_3'$. \item $j_{23}(b_2) = \none, j_{23}(b_2') = \some{(b_3',o_3' - o_2')}$. Similarly we have $b_3 \neq b_3'$. \item $j_{23}(b_2) = \some{(b_3,o_3 - o_2)}, j_{23}(b_2') = \some{(b_3',o_3' - o_2')}$. We can prove $(b_3,o_3) \neq (b_3',o_3')$ using property (5) of $\minj{j_{23}}{m_2}{m_3}$ by showing $(b_2,o_2) \in \permmax{m_2}{\knonempty}$ and $(b_2',o_2') \in \permmax{m_2}{\knonempty}$.
This follows from $\mpd{m_2}{m_2'}$ (Lemma \ref{lem:m2-mpd}). \end{itemize} \item Given $j_{23}'(b_2) = \some{(b_3, o_3 - o_2)} \land (b_3, o_3) \in \permk{k}{m_3'}{p}$. Similarly we prove $ (b_2,o_2) \in \permk{k}{m_2'}{p}$ or $(b_2,o_2) \not\in \permmax{m_2'}{\knonempty} $ by cases of $j_{23}(b_2)$: \begin{itemize} \item If $j_{23}(b_2) = \none$, then $b_2$ and $b_3$ are new blocks by $\injsep{j_{23}}{j_{23}'}{m_2}{m_3}$. According to step (1), we know that $\exists b_1 \app o_1, j_{13}'(b_1) = \some{(b_3,o_3 - o_1)}$. At the same time, we also know that the permission of $(b_2,o_2)$ in the new block of $m_2'$ is copied from $(b_1,o_1)$ in $m_1'$. Now from property (6) of $\minj{j_{13}'}{m_1'}{m_3'}$ we can derive that $(b_1,o_1) \in \permk{k}{m_1'}{p} \lor (b_1,o_1) \notin \permmax{m_1'}{\knonempty}$, therefore $(b_2,o_2) \in \permk{k}{m_2'}{p} \lor (b_2,o_2) \notin \permmax{m_2'}{\knonempty}$. \item If $j_{23}(b_2) = \some{(b_3,o_3 - o_2)}$, then $b_2$ and $b_3$ are old blocks. We further distinguish two cases for $(b_2,o_2)$: \begin{itemize} \item If $(b_2,o_2) \in \outofreach{j_{12}}{m_1}$, we have $(b_2,o_2) \in \unchangedon{m_2}{m_2'}$ (Lemma \ref{lem:m2-unchangedon}). If $(b_2,o_2) \in \permmax{m_2'}{\knonempty}$ (otherwise the conclusion holds trivially), then $(b_2,o_2) \in \permmax{m_2}{\knonempty}$ holds ($\mpd{m_2}{m_2'}$). According to Lemma \ref{lem:outofreach-reverse}, we get $(b_3,o_3) \in \unchangedon{m_3}{m_3'}$ and $(b_3,o_3)\in \permk{k}{m_3}{p}$. Then we can derive that $ (b_2,o_2) \in \permk{k}{m_2}{p} \lor (b_2,o_2) \not\in \permmax{m_2}{\knonempty}$ by property (6) of $\minj{j_{23}}{m_2}{m_3}$. Finally we can prove that \[(b_2,o_2) \in \permk{k}{m_2'}{p} \lor (b_2,o_2) \not\in \permmax{m_2'}{\knonempty}.\] \item If $(b_2,o_2) \notin \outofreach{j_{12}}{m_1}$, we know that $\exists b_1 \app o_1, j_{13}'(b_1) = \some{(b_3,o_3 - o_1)}$. From $\minj{j_{13}'}{m_1'}{m_3'}$ we can derive that $(b_1,o_1) \in \permk{k}{m_1'}{p} \lor (b_1,o_1) \notin \permmax{m_1'}{\knonempty}$.
Meanwhile, the permission of $(b_2,o_2) \in \pubtgtmem{j_{12}}{m_1} \cap \pubsrcmem{j_{23}}$ is copied from $m_1'$ in step (3). Therefore \[(b_2,o_2) \in \permk{k}{m_2'}{p} \lor (b_2,o_2) \notin \permmax{m_2'}{\knonempty}.\] \end{itemize} \end{itemize} \end{enumerate} \end{proof} \begin{lemma}$\injpacc{(j_{12},m_1,m_2)}{(j_{12}',m_1',m_2')}$\label{lem:injp-acc1} \end{lemma} \begin{proof} According to Definition \ref{def:injpacc}, most of the properties of $\injpacc{(j_{12},m_1,m_2)}{(j_{12}',m_1',m_2')}$ have been proved in Lemma \ref{lem:constr-inj}, Lemma \ref{lem:m2-unchangedon} and Lemma \ref{lem:m2-mpd}. From $\injpacc{(j_{13},m_1,m_3)}{(j_{13}',m_1',m_3')}$ we can get $\mpd{m_1}{m_1'}$ and $\unmapped{j_{13}} \subseteq \unchangedon{m_1}{m_1'}$. To get the last remaining property $\unmapped{j_{12}} \subseteq \unchangedon{m_1}{m_1'}$ we only need to show \[ \unmapped{j_{12}} \subseteq \unmapped{j_{13}} \] where $j_{13} = j_{23} \cdot j_{12}$. This relation holds simply because $\forall b, j_{12}(b) = \none \imply j_{23} \cdot j_{12}(b) = \none$. In other words, more regions in $m_1$ are protected in $\injpacc{(j_{13},m_1,m_3)}{(j_{13}',m_1',m_3')}$ than in $\injpacc{(j_{12},m_1,m_2)}{(j_{12}',m_1',m_2')}$. \end{proof} \begin{lemma}$\injpacc{(j_{23},m_2,m_3)}{(j_{23}',m_2',m_3')}$\label{lem:injp-acc2} \end{lemma} \begin{proof} Similarly, we only need to show \[ \outofreach{j_{23}}{m_2} \subseteq \outofreach{j_{23}\cdot j_{12}}{m_1} \] Given $(b_3,o_3) \in \outofreach{j_{23}}{m_2}$, i.e. \[\forall b_2 \app o_2, j_{23}(b_2) = \some{(b_3,o_3)} \imply (b_2,o_2) \notin \permmax{m_2}{\knonempty}\] we need to prove $(b_3,o_3) \in \outofreach{j_{23}\cdot j_{12}}{m_1}$ as follows. If $j_{23} \cdot j_{12} (b_1) = \some{(b_3,o_3)}$, i.e. $\exists b_2, j_{12}(b_1) = \some{(b_2,o_2)} \land j_{23}(b_2) = \some{(b_3,o_3)}$, we can derive that $(b_2,o_2) \notin \permmax{m_2}{\knonempty}$.
By property (1) of $\minj{j_{12}}{m_1}{m_2}$, we can get $(b_1,o_1) \notin \permmax{m_1}{\knonempty}$. Therefore $ (b_3,o_3) \in \outofreach{j_{23}\cdot j_{12}}{m_1}$. \end{proof} \subsection{Proof of Lemma \ref{lem:injp-comp-refine-injp}} We prove Lemma \ref{lem:injp-comp-refine-injp} in this section: \begin{tabbing} \quad\=\quad\=\quad\=$\exists m_2'\app j_{12}'\app j_{23}',$\=\kill \>$\forall j_{13}\app m_1\app m_3,\app \minj{j_{13}}{m_1}{m_3} \imply \exists j_{12}\app j_{23}\app m_2,\app \minj{j_{12}}{m_1}{m_2} \land \minj{j_{23}}{m_2}{m_3} \land$\\ \>\>$\forall m_1'\app m_2'\app m_3'\app j_{12}'\app j_{23}',\app \injpacc{(j_{12}, m_1, m_2)}{(j_{12}', m_1', m_2')} \imply \injpacc{(j_{23}, m_2, m_3)}{(j_{23}', m_2', m_3')} \imply$\\ \>\>\>$\minj{j_{12}'}{m_1'}{m_2'} \imply \minj{j_{23}'}{m_2'}{m_3'} \imply \exists j_{13}',\app \injpacc{(j_{13}, m_1, m_3)}{(j_{13}', m_1', m_3')} \land \minj{j_{13}'}{m_1'}{m_3'}.$ \end{tabbing} \begin{proof} Given $\minj{j_{13}}{m_1}{m_3}$, take $j_{12} = \{(b,(b,0))| j_{13}(b) \neq \none\}$, $j_{23} = j_{13}$ and $m_2 = m_1$. As a result, $\minj{j_{23}}{m_2}{m_3}$ holds trivially. We show $\minj{j_{12}}{m_1}{m_1}$ as follows: \begin{enumerate} \item Given $j_{12}(b_1) = \some{(b_2,o_2 - o_1)} \land (b_1,o_1) \in \permk{k}{m_1}{p}$, according to the definition of $j_{12}$ we know that $b_2 = b_1$ and $o_2 = o_1$. Therefore $(b_2,o_2) \in \permk{k}{m_1}{p}$. \item Given $j_{12}(b_1) = \some{(b_2,o_2 - o_1)} \land (b_1,o_1) \in \permcur{m_1}{p}$, similar to (1) we know $b_2 = b_1$ and $o_2 = o_1$. Therefore $m_1[b_1,o_1] = m_1[b_2,o_2]$. If $m_1[b_1,o_1]$ is not in the form of $\vptr{b_1'}{o_1'}$, $\vinj{j_{12}}{m_1[b_1,o_1]}{m_1[b_2,o_2]}$ holds trivially. If $m_1[b_1,o_1] = \vptr{b_1'}{o_1'}$, from $j_{12}(b_1) = \some{(b_1,0)}$ we get $j_{13}(b_1) \neq \none$. According to property (2) of $\minj{j_{13}}{m_1}{m_3}$, $\exists v_3, \vinj{j_{13}}{\vptr{b_1'}{o_1'}}{v_3}$.
This means that $j_{12}(b_1') = \some{(b_1',0)}$, therefore $\vinj{j_{12}}{\vptr{b_1'}{o_1'}}{\vptr{b_1'}{o_1'}}$. \item Given $b_1 \notin \validblock{m_1}$, we can derive that $j_{13}(b_1) = \none$ by $\minj{j_{13}}{m_1}{m_3}$. Therefore $j_{12}(b_1) = \none$ holds by definition. \item Given $j_{12}(b_1) = \some{(b_2,\delta)}$, we know that $j_{13}(b_1) \neq \none$. Therefore $b_1 \in \validblock{m_1}$ by $\minj{j_{13}}{m_1}{m_3}$. Since $b_1 = b_2$, $b_2 \in \validblock{m_1}$. \item Given $b_1 \neq b_1'$, $j_{12}(b_1) = \some{(b_2,o_2 - o_1)}$ and $j_{12}(b_1') = \some{(b_2', o_2' - o_1')}$. It is straightforward that $b_2 = b_1$ and $b_2' = b_1'$, therefore $b_2 \neq b_2'$. \item Given $j_{12}(b_1) = \some{(b_2,o_2 - o_1)}$ and $(b_2,o_2) \in \permk{k}{m_1}{p}$. Similarly we have $b_2 = b_1$, $o_2 = o_1$ and $(b_1,o_1) \in \permk{k}{m_1}{p}$. \end{enumerate} After external calls, given the preconditions $\injpacc{(j_{12}, m_1, m_2)}{(j_{12}', m_1', m_2')}$, $\injpacc{(j_{23}, m_2, m_3)}{(j_{23}', m_2', m_3')}$, $\minj{j_{12}'}{m_1'}{m_2'}$ and $\minj{j_{23}'}{m_2'}{m_3'}$, we can get $\minj{j_{13}'}{m_1'}{m_3'}$ directly by Lemma \ref{lem:inj-trans}. For $\injpacc{(j_{13}, m_1, m_3)}{(j_{13}', m_1', m_3')}$, \begin{enumerate} \item We can easily show $j_{13} = j_{23} \cdot j_{12}$ by the definition of $j_{12}$. Since $j_{12} \subseteq j_{12}'$, $j_{23} \subseteq j_{23}'$, we can conclude that $j_{23} \cdot j_{12} \subseteq j_{23}' \cdot j_{12}'$, i.e. $j_{13} \subseteq j_{13}'$. \item \[ \unmapped{j_{13}} \subseteq \unchangedon{m_1}{m_1'} \] By definition of $j_{12}$, we have $\unmapped{j_{12}} = \unmapped{j_{13}}$. Therefore the result comes directly from $\injpacc{(j_{12}, m_1, m_2)}{(j_{12}', m_1', m_2')}$. \item \[ \outofreach{j_{13}}{m_1} \subseteq \unchangedon{m_3}{m_3'} \] Since $j_{23} = j_{13}$ and $m_2 = m_1$, the result comes directly from $\injpacc{(j_{23}, m_2, m_3)}{(j_{23}', m_2', m_3')}$.
\item $\mpd{m_1}{m_1'}$ comes from $\injpacc{(j_{12}, m_1, m_2)}{(j_{12}', m_1', m_2')}$. \item $\mpd{m_3}{m_3'}$ comes from $\injpacc{(j_{23}, m_2, m_3)}{(j_{23}', m_2', m_3')}$. \item \[\injsep{j_{13}}{j_{13}'}{m_1}{m_3}\] If $j_{13}(b_1) = \none$ and $j_{13}'(b_1) = \some{(b_3,o_3 - o_1)}$, we get \[j_{12}(b_1) = \none \text{ and } \exists b_2, j_{12}'(b_1) = \some{(b_2,o_2 - o_1)} \land j_{23}'(b_2) = \some{(b_3,o_3 - o_2)}.\] By $\injsep{j_{12}}{j_{12}'}{m_1}{m_2}$ we get $b_1 \notin \validblock{m_1}$ and $b_2 \notin \validblock{m_2}$. By property (3) of $\minj{j_{23}}{m_2}{m_3}$ we can derive that $j_{23}(b_2) = \none$. Finally we get $b_3 \notin \validblock{m_3}$ by $\injsep{j_{23}}{j_{23}'}{m_2}{m_3}$. \end{enumerate} \end{proof} \section{Introduction} \label{sec:intro} Verified compilation has been under active investigation for decades~\cite{patterson-icfp-2019}. The state-of-the-art is CompCert~\cite{compcert,leroy09,Leroy09jar}, a verified compiler for translating a substantial part of C into assembly languages. CompCert supports a syntactic approach to separate compilation that allows modules written in the same language and separately compiled by the same compiler to preserve semantics after being linked into whole programs~\cite{hur16}. In reality, a program is usually composed of modules written in different languages and compiled through different compiler passes. This heterogeneity renders the syntactic approach to verified separate compilation ineffective. To address this challenge, various techniques have been proposed to support a more general notion of modular verification of compilers known as \emph{verified compositional compilation} (VCC) in which both semantics for heterogeneous programs and correctness proofs for compilers are composable at the boundaries of modules~\cite{stewart15,compcertm,compcerto,cascompcert,wang2019,dscal15}.
\subsection{Semantic Interfaces for Verified Compositional Compilation} Most of the existing work on VCC is based on a well-known formulation of composable semantics named \emph{interaction semantics}~\cite{beringer14,stewart15} and its variants~\cite{hur16,compcerto,cascompcert}. Interaction semantics generalizes the labeled transition systems (LTS) used to describe operational semantics of closed programs in CompCert into open transition systems. For example, the LTS $L_1$ in \figref{fig:compose-semantics} represents the semantics of a module which may be triggered by incoming queries $q^I_1$. During its execution, $L_1$ may perform outgoing queries $q^O_1$ and get a reply $r^O_1$. Finally, $L_1$ returns with a reply $r^I_1$ for the initial query. The formats of incoming (and outgoing) queries and replies constitute the \emph{language interfaces} for interaction with other modules. Two open LTSs with compatible interfaces may be composed by a semantic linking operator $\semlink$ that hides the mutual queries and replies between them, as shown in~\figref{fig:compose-semantics} where the dashed line represents hidden interaction. \begin{figure}[ht!]
\begin{tikzpicture} \snode {0.6cm}{1cm}{0.2cm}{draw, black, rectangle} (l1) {$L_1$}; \draw [-stealth] ($([yshift=-0.2cm]l1.north east)+(0.4cm,0)$) node[anchor=west] {$q^I_1$} --([yshift=-0.2cm]l1.north east); \draw [-stealth] ([yshift=0.2cm]l1.south east) --++(0.4cm, 0) node[anchor=west] {$r^I_1$}; \draw [-stealth] ([yshift=-0.2cm]l1.north west) --++(-0.4cm, 0) node[anchor=east] {$q^O_1$}; \draw [-stealth] ($([yshift=0.2cm]l1.south west)-(0.4cm,0)$) node[anchor=east] {$r^O_1$} -- ([yshift=0.2cm]l1.south west); \node[right = 1cm of l1] (link) {\large $\semlink$}; \snode {0.6cm}{1cm}{0.2cm}{draw, black, rectangle,right=1cm of link} (l2) {\small $L_2$}; \draw [-stealth] ($([yshift=-0.2cm]l2.north east)+(0.4cm,0)$) node[anchor=west] {$q^I_2$} --([yshift=-0.2cm]l2.north east); \draw [-stealth] ([yshift=0.2cm]l2.south east) --++(0.4cm, 0) node[anchor=west] {$r^I_2$}; \draw [-stealth] ([yshift=-0.2cm]l2.north west) --++(-0.4cm, 0) node[anchor=east] {$q^O_2$}; \draw [-stealth] ($([yshift=0.2cm]l2.south west)-(0.4cm,0)$) node[anchor=east] {$r^O_2$} -- ([yshift=0.2cm]l2.south west); \node[right = 1cm of l2] (equal) {\large $=$}; \snode {0.6cm}{1cm}{0.2cm}{draw, black, rectangle,right=1.6cm of equal} (l11) {$L_1$}; \draw [-stealth] ([yshift=-0.2cm]l11.north west) --++(-0.8cm, 0) node[anchor=east] {$q^O$}; \draw [-stealth] ($([yshift=0.2cm]l11.south west)-(0.8cm,0)$) node[anchor=east] {$r^O$} -- ([yshift=0.2cm]l11.south west); \snode {0.6cm}{1cm}{0.2cm}{draw, black, rectangle,right=1.5cm of l11} (l22) {\small $L_2$}; \draw [-stealth] ($([yshift=-0.2cm]l22.north east)+(0.8cm,0)$) node[anchor=west] {$q^I$} --([yshift=-0.2cm]l22.north east); \draw [-stealth] ([yshift=0.2cm]l22.south east) --++(0.8cm, 0) node[anchor=west] {$r^I$}; \draw [-stealth] ([xshift=0.4cm,yshift=-0.2cm]l22.north east) --++(0,0.4cm) --([xshift=0.4cm,yshift=0.2cm]l11.north east) --++(0,-0.4cm) --([yshift=-0.2cm]l11.north east); \draw [-stealth] ([yshift=0.2cm]l11.south east) --++(0.4cm,0) 
--([xshift=0.4cm,yshift=-0.2cm]l11.south east) --([xshift=0.4cm,yshift=-0.2cm]l22.south east) --([xshift=0.4cm,yshift=0.2cm]l22.south east); \draw [-stealth] ([yshift=-0.2cm]l22.north west) --++(-0.4cm,0) --([xshift=-0.4cm,yshift=0.4cm]l22.north west) --([xshift=-0.4cm,yshift=0.4cm]l11.north west) --([xshift=-0.4cm,yshift=-0.2cm]l11.north west); \draw [-stealth] ([xshift=-0.4cm,yshift=0.2cm]l11.south west) --++(0,-0.6cm) --([xshift=-0.4cm,yshift=-0.4cm]l22.south west) --([xshift=-0.4cm,yshift=0.2cm]l22.south west) --([yshift=0.2cm]l22.south west); \draw [stealth-stealth,line width=1pt, dashed] (l11) -- (l22); \end{tikzpicture} \caption{Composition of Interaction Semantics} \label{fig:compose-semantics} \end{figure} Note that the language interfaces of an open LTS may deviate from the languages implementing its underlying module. For example, an assembly module may be formalized as an LTS receiving and performing C-style function calls. This makes interaction semantics a uniform framework for describing semantics of heterogeneous modules. In the existing work on VCC, the correctness of compilation is described via \emph{open simulation relations} between interaction semantics of source and target modules~\cite{stewart15,compcertm,cascompcert,compcerto}. To deal with mutually recursive calls between modules, simulations must encode \emph{rely-guarantee conditions} between semantics of different modules. Then, to prove the correctness of compilation, it must be shown that the internal invariants and the rely-guarantee conditions imposed by simulations hold throughout the execution of modules. In summary, to support VCC, the key is to build interaction semantics with appropriate language interfaces and simulation relations, which we collectively call the \emph{semantic interfaces} for VCC. 
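To make the interaction discipline concrete, the following is a toy executable sketch of semantic linking (our simplification in Python, not CompCertO's actual Coq definitions; the names `semantic_link`, `m1`, `m2` are hypothetical). Each module answers the queries it handles and may issue outgoing queries; linking routes mutual queries internally, hiding them from the environment exactly as the dashed line in \figref{fig:compose-semantics} suggests.

```python
def semantic_link(m1, m2):
    """Compose two open modules, hiding their mutual queries and replies."""
    merged = {**m1, **m2}

    def call(name, *args):
        if name not in merged:
            raise RuntimeError(f"unresolved external query: {name}")
        return merged[name](call, *args)  # internal queries answered here

    return call

# Mutually recursive modules: a parity test split across two "compilation units".
m1 = {"even": lambda call, n: True if n == 0 else call("odd", n - 1)}
m2 = {"odd": lambda call, n: False if n == 0 else call("even", n - 1)}

run = semantic_link(m1, m2)
print(run("even", 10))  # True: the even/odd queries between m1 and m2 are hidden
```

Only queries that neither module handles remain visible to the environment, mirroring how $\semlink$ leaves $q^O$ and $r^O$ exposed at the boundary.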
The effectiveness of semantic interfaces can be evaluated against the properties below, where we write $\sim$ to represent a simulation relation, $\synlink$ to represent the syntactic linking of programs and $\sem{M}$ to represent the interaction semantics of a module $M$: \begin{align*} \mbox{\bf Horizontal Compositionality:} & \quad L_1 \sim L_1' \imply L_2 \sim L_2' \imply L_1 \semlink L_2 \sim L_1' \semlink L_2'\\ \mbox{\bf Vertical Compositionality:} & \quad L_1 \sim L_2 \imply L_2 \sim L_3 \imply L_1 \sim L_3\\ \mbox{\bf Syntactic Linking for Targets:} & \quad \sem{T_1} \semlink \sem{T_2} \sim \sem{T_1 \synlink T_2} \end{align*} The first property guarantees that simulation is preserved by semantic linking. It is essential for composition of correctness theorems for different compilation chains represented by $\sim$. The second one states that simulation is transitive. It is essential for composing proofs for multi-pass compilers. The last one ensures that, for any target modules $T_1$ and $T_2$ (e.g., assembly modules) of compilation, their semantic linking simulates their syntactic linking. This is essential to propagate semantics preservation to the final target programs. By proving that every compiler pass respects $\sim$ (even hand-written assembly modules may be related to LTS specifications by simulations), it is straightforward to apply the above properties to get a simulation between linked semantics of heterogeneous source programs and the physically linked target programs, from which semantics preservation can be derived. \subsection{Evolution of Approaches to Building Semantic Interfaces} \begin{table} \caption{Comparison Between Different Compilers Supporting VCC} \small \begin{tabular}{c c c c c} \hline & \textbf{H. Comp.} & \textbf{V.
Comp.} & \textbf{Syntactic Linking} & \textbf{Unified Interface}\\ \hline CompComp & YES & YES & \textcolor{red}{NO} & YES\\ CompCertM & YES & \textcolor{brown}{RUSC} & YES & \textcolor{brown}{RUSC}\\ CompCertO & YES & YES & YES & \textcolor{red}{NO}\\ \bf This Work & \bf YES & \bf YES & \bf YES & \bf YES \end{tabular} \label{tab:compare} \end{table} Researchers have been striving to develop \emph{unified} semantic interfaces---i.e., integrated semantic interfaces that directly relate source and target programs---for VCC that simultaneously satisfy the essential properties above. Unified semantic interfaces are important because users of verified compilers can understand them without knowing the internal mechanisms of compilers. \tabref{tab:compare} displays projects that made significant progress in this direction. Compositional CompCert (CompComp) is the earliest attempt~\cite{stewart15}. It provides a single language interface for modeling queries and replies as C-style function calls and returns, and a uniform simulation relation called \emph{structured simulation}, which is proved both horizontally and vertically composable. However, every language in CompComp (even the assembly language) must use C-style function calls and returns to interact with other modules. Because of this semantic gap, CompComp does not support semantic preservation down to syntactically linked assembly code. To solve the above problem, CompCertM extends the language interfaces to accept both C and assembly calls and fixes the problems in interaction between C and assembly in CompComp's interaction semantics~\cite{compcertm}. However, its simulation relations do not have native support for vertical composition. Instead, an extra layer of framework called \emph{Refinement Under Self-related Contexts} (RUSC) is added to recover vertical compositionality. Its semantic interface is also built upon RUSC.
In CompComp and CompCertM, the language interfaces and simulation relations are first fixed and then enforced down to every source, target and intermediate language and down to every single compiler pass. As a result, their simulation relations are not only quite complex, but also carry information which does not \emph{directly reflect} the rely-guarantee requirements \emph{actually needed} by individual compiler passes. In fact, significant changes are made to CompCert's verification framework to make it behave correctly with respect to the monolithic semantic interfaces. For instance, CompComp changes every language of CompCert to carry \emph{effect annotations} to make vertical composition of simulations possible and introduces a complex \emph{memory leakage protocol} to make horizontal composition possible. CompCertM avoids introducing \emph{effect annotations} by delegating vertical composition to RUSC. However, it needs to enrich memory injections with \emph{private memory} for horizontal composition and introduces \emph{mixed forward and backward simulation} to deal with changes to CompComp's interaction semantics. To avoid the high complexity resulting from forcing every compiler pass to fit into pre-defined semantic interfaces, CompCertO adopts a \emph{bottom-up} approach to building semantic interfaces~\cite{compcerto}. It parameterizes interaction semantics over language interfaces, thereby enabling languages at different stages of compilation to adopt appropriate language interfaces. It also parameterizes simulation relations over \emph{simulation conventions} for relating different language interfaces. Therefore, it becomes possible to define customized simulation relations for individual compiler passes. The rely-guarantee conditions specific to every compiler pass are captured in simulation conventions and accumulated to the top-level along with the composition of simulations.
Following this approach, only minimal modifications to CompCert's verification framework are required to satisfy the essential properties. In fact, the entire theory and semantic framework of CompCertO is built on top of the existing framework of CompCert, modulo the mandatory change of closed LTS into open LTS. \subsection{Challenges with the Bottom-up Approach} Despite the above benefits, the semantic interface obtained from the bottom-up approach is \emph{not unified} (as shown in~\tabref{tab:compare}). In particular, letting $\sim_1, \sim_2, \ldots, \sim_n$ be simulation relations for the compiler passes, their vertical composition $\sim_1 \simconvcomp \sim_2 \simconvcomp \ldots \simconvcomp \sim_n$ is not a single definition, but $n$ parallel relations with different simulation conventions working at the same time. Although the final simulation relation in CompCertO is simplified to a certain extent by refinement of simulation conventions, it still does not relate source and target semantics directly. The difficulty in getting a unified semantic interface for CompCertO is tied to the well-known challenge of proving ``real'' vertical compositionality for open simulations where the composed simulation must converge to a unified definition. This is considered ``in general very technical and involved'' (see~\cite{hur16}) because of the difficulty of constructing \emph{interpolating} program states for transitively relating evolving source and target states across \emph{external calls} of open simulations. This problem also manifests in proving transitivity for \emph{logical relations} where construction of interpolating terms of higher-order types (whose arguments at \emph{negative positions} are related by logical relations) is not in general possible~\cite{Ahmed06esop}.
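As a toy first-order illustration of the interpolation ingredient (our simplification in Python; CompCert's actual \texttt{Mem.inject} is far richer), memory injections can be modeled as finite partial maps from blocks to (block, offset shift) pairs. Composition $j_{13} = j_{23} \cdot j_{12}$ is then ordinary relational composition of partial functions: offset shifts add along the chain, and any block unmapped by $j_{12}$ stays unmapped in the composite, which is the containment $\unmapped{j_{12}} \subseteq \unmapped{j_{13}}$ used when splitting a composed injection into interpolating pieces.

```python
def compose_inj(j12, j23):
    """(j23 . j12)(b): follow j12 then j23, accumulating the offset shifts."""
    j13 = {}
    for b1, (b2, d12) in j12.items():
        if b2 in j23:
            b3, d23 = j23[b2]
            j13[b1] = (b3, d12 + d23)  # shifts add along the chain
    return j13

j12 = {1: (10, 0), 2: (11, 4)}  # block 2 maps into block 11 ...
j23 = {10: (100, 8)}            # ... but block 11 is dropped by j23
j13 = compose_inj(j12, j23)
assert j13 == {1: (100, 8)}
# unmapped(j12) is contained in unmapped(j13): whatever j12 drops, so does j23 . j12
assert all(b in j12 for b in j13)
```

The composite is again a partial function of the same shape, which is the \emph{functional} nature of injections exploited by our construction of interpolating memory states.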
Even in our setting where simulations only relate first-order states, the folklore is that real vertical compositionality is impossible without introducing intrusive changes to deal with complex rely-guarantee conditions on interleaving program states from different modules. Indeed, CompComp instruments the semantics of languages with effect annotations to make the construction of interpolating states feasible, while CompCertM avoids the problem by only allowing vertical composition of \emph{closed} simulations with the help of RUSC. The above discussion leaves us with the question: \emph{Is it possible to construct unified semantic interfaces for VCC with a bottom-up approach, without any intrusive changes to the existing verification framework?} \subsection{Our Contributions} We give an affirmative answer to the above question. An important observation we make is that interpolating states for proving vertical compositionality of simulation conventions can be constructed by exploiting properties on memory injections already present in CompCert. Following this observation, we discover that a \emph{Kripke relation with memory protection} in CompCertO can serve as a \emph{uniform and composable} interface for characterizing evolution of memory states across external calls. By vertically composing CompCert's calling convention with simulation conventions using this uniform interface, a unified semantic interface for CompCertO is constructed. We summarize our technical contributions below: \begin{itemize} \item We prove that \kinjp---a CompCert Kripke Logical Relation with a notion of protection on evolving memory states across external calls---is both uniform (i.e., memory transformation in every compiler pass respects this relation) and composable (i.e., transitive modulo an equivalence relation).
The critical observation making this proof possible is that interpolating memory states can be constructed by exploiting memory protection \emph{inherent} to memory injections and the \emph{functional} nature of injections (\secref{sec:injp}). \item Based on the above discovery, we develop a unified semantic interface for CompCertO's open simulations that directly relates open semantics for C and assembly. This results in a compiler that supports all of the properties listed in~\tabref{tab:compare} for VCC; contrary to the folklore, it shows that vertical compositionality for open simulations can be achieved without fundamental changes to the verification framework of CompCert (\secref{sec:compcerto}). \item We demonstrate the conciseness and usefulness of our unified interface by applying it to a non-trivial example: the verified compositional compilation of a C module and a hand-written assembly program that mutually recursively call each other (\secref{sec:eval}). \end{itemize} All of our developments are formalized in Coq based on CompCertO~\cite{compcerto} (see the supplementary materials). Note that we have left a treatment of three optimization passes using value analysis to future work because their rely-guarantee conditions have an ad-hoc interpretation in CompCertO. Nevertheless, this work shows that, with the bottom-up approach, we can build compilers supporting VCC with interfaces faithfully reflecting the requirements of underlying compiler passes. Therefore, it provides a promising direction for further evolving the verification techniques for VCC. \subsection{Structure of the Paper} In~\secref{sec:background}, we introduce background knowledge necessary to understand CompCertO's bottom-up approach to VCC; we also discuss challenges to developing a unified semantic interface and our approaches. We present our technical contributions in~\secref{sec:injp},~\secref{sec:compcerto} and~\secref{sec:eval}.
\secref{sec:eval} also provides an evaluation of our entire development. We discuss related work and conclude in~\secref{sec:related}. \section{An Overview of Our Approach} \label{sec:overview} \section{Related Work} \label{sec:related} \subsection{Modular Verification of Compilers for First-Order Languages} In this work, we are concerned with verified compositional compilation of imperative programs with first-order states and support of pointers, e.g., C programs. In this setting, we need to deal with a memory state shared by different modules, and the private memory of one module may be leaked to others through manipulation of pointers. Existing work on this topic is mostly based on CompCert and its extensions. We compare our work with them below. \paragraph{Compositional CompCert} CompComp is the first work that supports VCC based on interaction semantics~\cite{stewart15}. As we have mentioned in the introduction, because it adopts C-style function calls as a uniform interface, it cannot support simulation down to syntactically linked assembly programs. By contrast, we have the flexibility to choose and combine different interfaces, and our unified interface relates C-level specifications directly to well-behaved assembly implementations, therefore automatically supports syntactic linking. Also note that if we adopt the C-level simulation convention with \kinjp (i.e., $\kc_\kinjp$) as a uniform interface for every compiler pass, we basically get an alternative implementation of CompComp without CompComp's extra mechanisms or complexity. \paragraph{CompCertM} A distinguishing feature of CompCertM is Refinement Under Self-related Contexts~\cite{compcertm}. A RUSC relation is defined based on a \emph{fixed} collection of simulation relations. By exploiting the property of contexts that are self-related under all of these simulation relations, horizontal and vertical compositionality are achieved.
However, the simulation relations by themselves are not vertically composable, and are forced down to individual compiler passes as in CompComp. By contrast, we show that there is no need to introduce predefined simulations that are not vertically composable; instead, simulations can be accumulated from each compiler pass and vertically composed into a unified form. This results in a much more concise and usable semantic interface for VCC. On the other hand, CompCertM has many features that we do not yet support. First, we require that every assembly module conforms to a C-level specification via our semantic interface, which may not be the case for assembly programs with ill-defined behaviors. CompCertM provides support for some of the ill-defined interactions. Second, CompCertM supports abstraction over memory states to a certain extent, which we do not. Finally, CompCertM derives refinements of external behaviors from simulations, which we do not yet support. To address the first problem, we will need to relax our semantic interface to accommodate ill-defined behaviors. For the second problem, we need to introduce a notion of abstract states and state hiding in the interface. For the last problem, we need to ``close'' the interaction semantics after simulations are established to get behavioral semantics and to derive their refinement. We believe all of them can be resolved by further extending our framework, which we plan to do in the future. \paragraph{CompCertX} CompCertX~\cite{dscal15,wang2019} realizes a weaker form of VCC that only allows assembly contexts to invoke C programs, but not the other way around. Therefore, it does not support horizontal composition of modules with mutual recursions. However, its top-level semantic interface is similar to our unified interface, despite not carrying a symmetric rely-guarantee condition for memory protection. This indicates that our work is a natural evolution of VCC realized in CompCertX.
\paragraph{CASCompCert} CASCompCert is an extension of CompComp that supports compositional compilation of concurrent programs with no (or benign) data races~\cite{cascompcert}. To make CompComp's approach to VCC work in a concurrent setting, CASCompCert imposes some restrictions, including not supporting stack-allocated data and allowing only nondeterminism in scheduling threads. A recent advancement based on CASCompCert is about verifying concurrent programs~\cite{zha2022} running on weak memory models using the promising semantics~\cite{promising1,promising2}. We believe the ideas in CASCompCert are complementary to this work and can be adopted to achieve VCC for concurrency with a cleaner interface and fewer restrictions. \subsection{Modular Verification of Compilers for Higher-Order Languages} Another class of work on VCC focuses on compilation of higher-order languages. In this setting, the main difficulty comes from complex language features (possibly with both functional and imperative features) together with higher-order states. A prominent example is the Pilsner compiler~\cite{neis15icfp} that compiles a higher-order language into some form of assembly programs. The technique Pilsner adopts is called \emph{parametric simulations}, which evolved from earlier work on reasoning about program equivalence by combining bisimulation and logical relations~\cite{hur12}. The vertical compositionality proof is technical in order to support higher-order functions. Another line of work is based on multi-language semantics~\cite{patterson-icfp-2019,funtal,perconti14,scherer18} where a language combining all source, intermediate and target languages is used to formalize the semantics, and compiler correctness is stated using contextual equivalence or logical relations. It seems that our technique will not be directly applicable to VCC for higher-order programs because of the limitation of relational reasoning on higher-order states.
However, an interesting direction to explore is to combine our technique with existing ones to deal with linear memory space with pointers in higher-order states.
\section{Nonparaxial Bogoliubov action} \label{sec:nonpar_bogo_act} The starting point for the formulation of the Bogoliubov theory for a nonparaxial fluid of light is its Lagrangian density [Eq.~(2) of the main text]. First, one has to rewrite this quantity in terms of the density-phase variables. The resulting expression, which can be found in Ref.~\cite{Martone2021}, should then be expanded up to second order in the small fluctuations about the background field. After switching to Fourier space, it is convenient to decompose the quadratic Lagrangian as $\tilde{\mathcal{L}}^{(2)} = \tilde{\mathcal{L}}_z^{(2)} + \tilde{\mathcal{L}}_\perp^{(2)}$, where $\tilde{\mathcal{L}}_z^{(2)}$ is the term that contains the longitudinal field $\tilde{\mathcal{E}}_z$, while $\tilde{\mathcal{L}}_\perp^{(2)}$ depends only on the transverse components. In the case of the elliptically polarized background considered in the present work one has \begin{equation} \begin{split} \tilde{\mathcal{L}}_z^{(2)} = {}&{} - \frac{1}{2\beta_0} \left[ (q_\perp^2 - k_0^2) \tilde{\mathcal{E}}_z^*(\vec{q}_\perp,z) \tilde{\mathcal{E}}_z(\vec{q}_\perp,z) + \frac{\Delta k_0^2}{2} \tilde{\mathcal{E}}_z(\vec{q}_\perp,z) \tilde{\mathcal{E}}_z(-\vec{q}_\perp,z) + \frac{\Delta k_0^2}{2} \tilde{\mathcal{E}}_z^*(\vec{q}_\perp,z) \tilde{\mathcal{E}}_z^*(-\vec{q}_\perp,z) \right] \\ &{} + \frac{\mathrm{i} q_\perp}{2\beta_0}\sqrt{\frac{I_0}{2}} \left[ \mathcal{A}(\vec{q}_\perp,z) \tilde{\mathcal{E}}_z^*(\vec{q}_\perp,z) - \mathcal{A}^*(\vec{q}_\perp,z) \tilde{\mathcal{E}}_z(\vec{q}_\perp,z) \right] \, , \end{split} \label{eq:lagr_z_2} \end{equation} where $k_0^2 = \beta_0^2 - 2 \beta_0 (g_d + g_s) I_0$, $\Delta k_0^2 = 2 \beta_0 g_s I_0 \sin\vartheta_0$, and \begin{equation} \begin{split} \mathcal{A}(\vec{q}_\perp,z) = {}&{} \left( \cos\frac{\vartheta_0}{2} \, \mathrm{e}^{\mathrm{i} \varphi} - \sin\frac{\vartheta_0}{2} \, \mathrm{e}^{- \mathrm{i} \varphi} \right)
\frac{\delta\dot{\tilde{I}}(\vec{q}_\perp,z)}{2 I_0} - \left( \sin\frac{\vartheta_0}{2} \, \mathrm{e}^{\mathrm{i} \varphi} + \cos\frac{\vartheta_0}{2} \, \mathrm{e}^{- \mathrm{i} \varphi} \right) \frac{\delta\dot{\tilde{\vartheta}}(\vec{q}_\perp,z)}{2} \\ {}&{} + \mathrm{i} \left( \cos\frac{\vartheta_0}{2} \, \mathrm{e}^{\mathrm{i} \varphi} - \sin\frac{\vartheta_0}{2} \, \mathrm{e}^{- \mathrm{i} \varphi} \right) \dot{\tilde{\Theta}}(\vec{q}_\perp,z) + \mathrm{i} \left( \cos\frac{\vartheta_0}{2} \, \mathrm{e}^{\mathrm{i} \varphi} + \sin\frac{\vartheta_0}{2} \, \mathrm{e}^{- \mathrm{i} \varphi} \right) \frac{\delta\dot{\tilde{\chi}}(\vec{q}_\perp,z)}{2} \\ {}&{} + \mathrm{i} \left( k_+ \cos\frac{\vartheta_0}{2} \, \mathrm{e}^{\mathrm{i} \varphi} - k_- \sin\frac{\vartheta_0}{2} \, \mathrm{e}^{- \mathrm{i} \varphi} \right) \frac{\delta\tilde{I}(\vec{q}_\perp,z)}{2 I_0} - \mathrm{i} \left( k_+ \sin\frac{\vartheta_0}{2} \, \mathrm{e}^{\mathrm{i} \varphi} + k_- \cos\frac{\vartheta_0}{2} \, \mathrm{e}^{- \mathrm{i} \varphi} \right) \frac{\delta\tilde{\vartheta}(\vec{q}_\perp,z)}{2} \\ {}&{} - \left( k_+ \cos\frac{\vartheta_0}{2} \, \mathrm{e}^{\mathrm{i} \varphi} - k_- \sin\frac{\vartheta_0}{2} \, \mathrm{e}^{- \mathrm{i} \varphi} \right) \tilde{\Theta}(\vec{q}_\perp,z) - \left( k_+ \cos\frac{\vartheta_0}{2} \, \mathrm{e}^{\mathrm{i} \varphi} + k_- \sin\frac{\vartheta_0}{2} \, \mathrm{e}^{- \mathrm{i} \varphi} \right) \frac{\delta\tilde{\chi}(\vec{q}_\perp,z)}{2} \, . \end{split} \label{eq:lagr_z_2_coeff} \end{equation} We recall that $\varphi = \varphi_q + \Delta k \;\!\!\! z / 2$. 
Since the Bogoliubov Lagrangian density and the corresponding action $\mathcal{S}^{(2)} = \int \mathrm{d} z \int {\mathrm{d}^2 \vec{q}_\perp}/{(2\pi)^2} \tilde{\mathcal{L}}^{(2)}$ do not depend on the effective-time derivatives of $\tilde{\mathcal{E}}_z$, the Euler-Lagrange equation $\delta \mathcal{S}^{(2)} / \delta \tilde{\mathcal{E}}_z^*(\vec{q}_\perp) = 0$ and its complex conjugate yield the relation \begin{equation} \tilde{\mathcal{E}}_z(\vec{q}_\perp,z) = \sqrt{\frac{I_0}{2}} \frac{\mathrm{i} q_\perp \left[(q_\perp^2 - k_0^2) \mathcal{A}(\vec{q}_\perp,z) + \Delta k_0^2 \mathcal{A}^*(-\vec{q}_\perp,z) \right]}{(q_\perp^2 - k_0^2)^2 - (\Delta k_0^2)^2} \, , \label{eq:Ez} \end{equation} which allows us to eliminate the longitudinal field from the Lagrangian in favor of the variables related to the transverse components and their derivatives. The final result corresponds to Eq.~(4) of the main text, i.e., \begin{equation} \tilde{\mathcal{L}}^{(2)} = \dot{X}^\dagger \Lambda_2 \dot{X} + \dot{X}^\dagger \Lambda_1 X + X^\dagger \Lambda_1^T \dot{X} - X^\dagger \Lambda_0 X \, , \label{eq:lagr_2} \end{equation} where we have introduced three $4 \times 4$ matrices having the structure \begin{equation} \Lambda_k = -\frac{I_0}{2 \beta_0} \begin{bmatrix} (\Lambda_k)_{1,1} & (\Lambda_k)_{1,2} & (\Lambda_k)_{1,3} & (\Lambda_k)_{1,4} \\ (\Lambda_k)_{2,1} & (\Lambda_k)_{2,2} & (\Lambda_k)_{2,3} & (\Lambda_k)_{2,4} \\ (\Lambda_k)_{3,1} & (\Lambda_k)_{3,2} & (\Lambda_k)_{3,3} & (\Lambda_k)_{3,4} \\ (\Lambda_k)_{4,1} & (\Lambda_k)_{4,2} & (\Lambda_k)_{4,3} & (\Lambda_k)_{4,4} \\ \end{bmatrix} \qquad (k=0,1,2) \, . \label{eq:Lambda_mat} \end{equation} We now give the expressions of all the entries of these matrices. 
We first define \begin{subequations} \label{eq:Lambda_mat_coeff} \begin{align} A(q_\perp) {}&{} = \frac{q_\perp^2}{2} \frac{q_\perp^2 - k_0^2}{(q_\perp^2 - k_0^2)^2 - (\Delta k_0^2)^2} \, , \label{eq:Lambda_mat_coeff_1} \\ B(q_\perp) {}&{} = \frac{q_\perp^2}{2} \frac{\Delta k_0^2}{(q_\perp^2 - k_0^2)^2 - (\Delta k_0^2)^2} \, . \label{eq:Lambda_mat_coeff_2} \end{align} \end{subequations} Then, the entries of $\Lambda_0$ are \begin{subequations} \label{eq:Lambda_0} \begin{align} \begin{split} &{} (\Lambda_0)_{1,1} = \begin{aligned}[t] {}&{} - \frac{q_\perp^2}{2} - 4 \beta_0 (g_d + g_s \cos^2 \vartheta_0) I_0 + A(q_\perp) \left(k_+^2 \cos^2 \frac{\vartheta_0}{2} + k_-^2 \sin^2 \frac{\vartheta_0}{2} \right) - B(q_\perp) k_+ k_- \sin\vartheta_0 \\ {}&{} - \left[ \frac{q_\perp^2}{2} \sin\vartheta_0 + A(q_\perp) k_+ k_- \sin\vartheta_0 - B(q_\perp) \left(k_+^2 \cos^2 \frac{\vartheta_0}{2} + k_-^2 \sin^2 \frac{\vartheta_0}{2}\right) \right] \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_0_11} \\ \begin{split} &{} (\Lambda_0)_{1,2} = (\Lambda_0)_{2,1} = \begin{aligned}[t] {}&{} 2 \beta_0 g_s I_0 \sin 2\vartheta_0 - A(q_\perp) \frac{k_+^2 - k_-^2}{2} \sin\vartheta_0 - B(q_\perp) k_+ k_- \cos\vartheta_0 \\ {}&{} - \left[ \frac{q_\perp^2}{2} \cos\vartheta_0 + A(q_\perp) k_+ k_- \cos\vartheta_0 + B(q_\perp) \frac{k_+^2 - k_-^2}{2} \sin\vartheta_0 \right] \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_0_12} \\ &{} (\Lambda_0)_{1,3} = (\Lambda_0)_{3,1} = - B(q_\perp) \left(k_+^2 \cos^2 \frac{\vartheta_0}{2} - k_-^2 \sin^2 \frac{\vartheta_0}{2}\right) \sin 2\varphi \, , \label{eq:Lambda_0_13} \\ &{} (\Lambda_0)_{1,4} = (\Lambda_0)_{4,1} = \left[ \frac{q_\perp^2}{2} \sin\vartheta_0 + A(q_\perp) k_+ k_- \sin\vartheta_0 - B(q_\perp) \left(k_+^2 \cos^2 \frac{\vartheta_0}{2} + k_-^2 \sin^2 \frac{\vartheta_0}{2}\right) \right] \sin 2\varphi \, , \label{eq:Lambda_0_14} \\ \begin{split} &{} (\Lambda_0)_{2,2} = \begin{aligned}[t] {}&{} - \frac{q_\perp^2}{2} - 
4 \beta_0 g_s I_0 \sin^2 \vartheta_0 + A(q_\perp) \left(k_+^2 \sin^2 \frac{\vartheta_0}{2} + k_-^2 \cos^2 \frac{\vartheta_0}{2} \right) + B(q_\perp) k_+ k_- \sin\vartheta_0 \\ {}&{} + \left[ \frac{q_\perp^2}{2} \sin\vartheta_0 + A(q_\perp) k_+ k_- \sin\vartheta_0 + B(q_\perp) \left(k_+^2 \sin^2 \frac{\vartheta_0}{2} + k_-^2 \cos^2 \frac{\vartheta_0}{2}\right) \right] \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_0_22} \\ &{} (\Lambda_0)_{2,3} = (\Lambda_0)_{3,2} = \left[ \frac{q_\perp^2}{2} + A(q_\perp) k_+ k_- + B(q_\perp) \frac{k_+^2 + k_-^2}{2} \sin\vartheta_0 \right] \sin 2\varphi \, , \label{eq:Lambda_0_23} \\ &{} (\Lambda_0)_{2,4} = (\Lambda_0)_{4,2} = \left[ \frac{q_\perp^2}{2} \cos\vartheta_0 + A(q_\perp) \, k_+ k_- \cos\vartheta_0 + B(q_\perp) \frac{k_+^2 - k_-^2}{2} \sin\vartheta_0 \right] \sin 2\varphi \, , \label{eq:Lambda_0_24} \\ \begin{split} &{} (\Lambda_0)_{3,3} = \begin{aligned}[t] {}&{} - \frac{q_\perp^2}{2} + A(q_\perp) \left(k_+^2 \cos^2 \frac{\vartheta_0}{2} + k_-^2 \sin^2 \frac{\vartheta_0}{2}\right) + B(q_\perp) k_+ k_- \sin\vartheta_0 \\ {}&{} - \left[ \frac{q_\perp^2}{2} \sin\vartheta_0 + A(q_\perp) k_+ k_- \sin\vartheta_0 + B(q_\perp) \left(k_+^2 \cos^2 \frac{\vartheta_0}{2} + k_-^2 \sin^2 \frac{\vartheta_0}{2}\right) \right] \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_0_33} \\ \begin{split} &{} (\Lambda_0)_{3,4} = (\Lambda_0)_{4,3} = \begin{aligned}[t] {}&{} - \frac{q_\perp^2}{2} \cos\vartheta_0 + A(q_\perp) \left(k_+^2 \cos^2 \frac{\vartheta_0}{2} - k_-^2 \sin^2 \frac{\vartheta_0}{2}\right) \\ {}&{} - B(q_\perp) \left(k_+^2 \cos^2 \frac{\vartheta_0}{2} - k_-^2 \sin^2 \frac{\vartheta_0}{2}\right) \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_0_34} \\ \begin{split} &{} (\Lambda_0)_{4,4} = \begin{aligned}[t] {}&{} - \frac{q_\perp^2}{2} + A(q_\perp) \left(k_+^2 \cos^2 \frac{\vartheta_0}{2} + k_-^2 \sin^2 \frac{\vartheta_0}{2}\right) - B(q_\perp) k_+ k_- \sin\vartheta_0 \\ {}&{} + \left[ 
\frac{q_\perp^2}{2} \sin\vartheta_0 + A(q_\perp) k_+ k_- \sin\vartheta_0 - B(q_\perp) \left(k_+^2 \cos^2 \frac{\vartheta_0}{2} + k_-^2 \sin^2 \frac{\vartheta_0}{2}\right) \right] \cos 2\varphi \, . \end{aligned} \end{split} \label{eq:Lambda_0_44} \end{align} \end{subequations} The entries of $\Lambda_1$ are \begin{subequations} \label{eq:Lambda_1} \begin{align} &{} (\Lambda_1)_{1,1} = - \left[ A(q_\perp) \frac{k_+ - k_-}{2} \sin\vartheta_0 + B(q_\perp) \left( k_+ \cos^2\frac{\vartheta_0}{2} - k_- \sin^2\frac{\vartheta_0}{2} \right) \right] \sin 2\varphi \, , \label{eq:Lambda_1_11} \\ &{} (\Lambda_1)_{1,2} = \left[ A(q_\perp) \left( k_+ \sin^2\frac{\vartheta_0}{2} + k_- \cos^2\frac{\vartheta_0}{2} \right) + B(q_\perp) \frac{k_+ + k_-}{2} \sin\vartheta_0 \right] \sin 2\varphi \, , \label{eq:Lambda_1_12} \\ \begin{split} &{} (\Lambda_1)_{1,3} = \begin{aligned}[t] {}&{} A(q_\perp) \left(k_+ \cos^2 \frac{\vartheta_0}{2} + k_- \sin^2 \frac{\vartheta_0}{2}\right) + B(q_\perp) \frac{k_+ + k_-}{2} \sin\vartheta_0 \\ {}&{} - \left[ A(q_\perp) \frac{k_+ + k_-}{2} \sin\vartheta_0 + B(q_\perp) \left(k_+ \cos^2 \frac{\vartheta_0}{2} + k_- \sin^2 \frac{\vartheta_0}{2}\right) \right] \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_1_13} \\ \begin{split} &{} (\Lambda_1)_{1,4} = \begin{aligned}[t] {}&{} A(q_\perp) \left(k_+ \cos^2 \frac{\vartheta_0}{2} - k_- \sin^2 \frac{\vartheta_0}{2}\right) + B(q_\perp) \frac{k_+ - k_-}{2} \sin\vartheta_0 \\ {}&{} - \left[ A(q_\perp) \frac{k_+ - k_-}{2} \sin\vartheta_0 + B(q_\perp) \left(k_+ \cos^2 \frac{\vartheta_0}{2} - k_- \sin^2 \frac{\vartheta_0}{2}\right) \right] \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_1_14} \\ &{} (\Lambda_1)_{2,1} = \left[ - A(q_\perp) \left( k_+ \cos^2\frac{\vartheta_0}{2} + k_- \sin^2\frac{\vartheta_0}{2} \right) + B(q_\perp) \frac{k_+ + k_-}{2} \sin\vartheta_0 \right] \sin 2\varphi \, , \label{eq:Lambda_1_21} \\ &{} (\Lambda_1)_{2,2} = \left[ A(q_\perp) \frac{k_+ - k_-}{2} 
\sin\vartheta_0 - B(q_\perp) \left( k_+ \sin^2\frac{\vartheta_0}{2} - k_- \cos^2\frac{\vartheta_0}{2} \right) \right] \sin 2\varphi \, , \label{eq:Lambda_1_22} \\ \begin{split} &{} (\Lambda_1)_{2,3} = \begin{aligned}[t] {}&{} - A(q_\perp) \frac{k_+ - k_-}{2} \sin\vartheta_0 + B(q_\perp) \left(k_+ \cos^2 \frac{\vartheta_0}{2} - k_- \sin^2 \frac{\vartheta_0}{2}\right) \\ {}&{} - \left[A(q_\perp) \left(k_+ \cos^2 \frac{\vartheta_0}{2} - k_- \sin^2 \frac{\vartheta_0}{2}\right) - B(q_\perp) \frac{k_+ - k_-}{2} \sin\vartheta_0\right] \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_1_23} \\ \begin{split} &{} (\Lambda_1)_{2,4} = \begin{aligned}[t] {}&{} - A(q_\perp) \frac{k_+ + k_-}{2} \sin\vartheta_0 + B(q_\perp) \left(k_+ \cos^2 \frac{\vartheta_0}{2} + k_- \sin^2 \frac{\vartheta_0}{2}\right) \\ {}&{} - \left[A(q_\perp) \left(k_+ \cos^2 \frac{\vartheta_0}{2} + k_- \sin^2 \frac{\vartheta_0}{2}\right) - B(q_\perp) \frac{k_+ + k_-}{2} \sin\vartheta_0 \right] \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_1_24} \\ \begin{split} &{} (\Lambda_1)_{3,1} = \begin{aligned}[t] {}&{} \left[2 - A(q_\perp)\right] \left(k_+ \cos^2\frac{\vartheta_0}{2} + k_- \sin^2\frac{\vartheta_0}{2}\right) + B(q_\perp) \frac{k_+ + k_-}{2} \sin\vartheta_0 \\ {}&{} + \left[A(q_\perp) \frac{k_+ + k_-}{2} \sin\vartheta_0 - B(q_\perp) \left(k_+ \cos^2 \frac{\vartheta_0}{2} + k_- \sin^2 \frac{\vartheta_0}{2}\right)\right] \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_1_31} \\ \begin{split} &{} (\Lambda_1)_{3,2} = \begin{aligned}[t] {}&{} - \left[2 - A(q_\perp)\right] \frac{k_+ - k_-}{2} \sin\vartheta_0 - B(q_\perp) \left(k_+ \sin^2 \frac{\vartheta_0}{2} - k_- \cos^2 \frac{\vartheta_0}{2}\right) \\ {}&{} - \left[ A(q_\perp) \left(k_+ \sin^2 \frac{\vartheta_0}{2} - k_- \cos^2 \frac{\vartheta_0}{2}\right) - B(q_\perp) \frac{k_+ - k_-}{2} \sin\vartheta_0 \right] \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_1_32} \\ &{} (\Lambda_1)_{3,3} = \left[ - 
A(q_\perp) \frac{k_+ - k_-}{2} \sin\vartheta_0 + B(q_\perp) \left( k_+ \cos^2\frac{\vartheta_0}{2} - k_- \sin^2\frac{\vartheta_0}{2} \right) \right] \sin 2\varphi \, , \label{eq:Lambda_1_33} \\ &{} (\Lambda_1)_{3,4} = \left[ - A(q_\perp) \frac{k_+ + k_-}{2} \sin\vartheta_0 + B(q_\perp) \left( k_+ \cos^2\frac{\vartheta_0}{2} + k_- \sin^2\frac{\vartheta_0}{2} \right) \right] \sin 2\varphi \, , \label{eq:Lambda_1_34} \\ \begin{split} &{} (\Lambda_1)_{4,1} = \begin{aligned}[t] {}&{} \left[2 - A(q_\perp)\right] \left(k_+ \cos^2\frac{\vartheta_0}{2} - k_- \sin^2\frac{\vartheta_0}{2}\right) - B(q_\perp) \frac{k_+ - k_-}{2} \sin\vartheta_0 \\ {}&{} - \left[A(q_\perp) \frac{k_+ - k_-}{2} \sin\vartheta_0 + B(q_\perp) \left(k_+ \cos^2 \frac{\vartheta_0}{2} - k_- \sin^2 \frac{\vartheta_0}{2}\right) \right] \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_1_41} \\ \begin{split} &{} (\Lambda_1)_{4,2} = \begin{aligned}[t] {}&{} - \left[2 - A(q_\perp)\right] \frac{k_+ + k_-}{2} \sin\vartheta_0 + B(q_\perp) \left(k_+ \sin^2 \frac{\vartheta_0}{2} + k_- \cos^2 \frac{\vartheta_0}{2}\right) \\ {}&{} + \left[ A(q_\perp) \left(k_+ \sin^2 \frac{\vartheta_0}{2} + k_- \cos^2 \frac{\vartheta_0}{2}\right) + B(q_\perp) \frac{k_+ + k_-}{2} \sin\vartheta_0 \right] \cos 2\varphi \, , \end{aligned} \end{split} \label{eq:Lambda_1_42} \\ &{} (\Lambda_1)_{4,3} = \left[ A(q_\perp) \frac{k_+ + k_-}{2} \sin\vartheta_0 + B(q_\perp) \left( k_+ \cos^2\frac{\vartheta_0}{2} + k_- \sin^2\frac{\vartheta_0}{2} \right) \right] \sin 2\varphi \, , \label{eq:Lambda_1_43} \\ &{} (\Lambda_1)_{4,4} = \left[ A(q_\perp) \frac{k_+ - k_-}{2} \sin\vartheta_0 + B(q_\perp) \left( k_+ \cos^2\frac{\vartheta_0}{2} - k_- \sin^2\frac{\vartheta_0}{2} \right) \right] \sin 2\varphi \, . 
\label{eq:Lambda_1_44} \end{align} \end{subequations} Finally, the entries of $\Lambda_2$ are \begin{subequations} \label{eq:Lambda_2} \begin{align} &{} (\Lambda_2)_{1,1} = 1 - A(q_\perp) - B(q_\perp) \sin\vartheta_0 + \left[A(q_\perp) \sin\vartheta_0 + B(q_\perp)\right] \cos 2\varphi \, , \label{eq:Lambda_2_11} \\ &{} (\Lambda_2)_{1,2} = (\Lambda_2)_{2,1} = - B(q_\perp) \cos\vartheta_0 + A(q_\perp) \cos\vartheta_0 \cos 2\varphi \, , \label{eq:Lambda_2_12} \\ &{} (\Lambda_2)_{1,3} = (\Lambda_2)_{3,1} = - B(q_\perp) \cos\vartheta_0 \sin 2\varphi \, , \label{eq:Lambda_2_13} \\ &{} (\Lambda_2)_{1,4} = (\Lambda_2)_{4,1} = - \left[A(q_\perp) \sin\vartheta_0 + B(q_\perp)\right] \sin 2\varphi \, , \label{eq:Lambda_2_14} \\ &{} (\Lambda_2)_{2,2} = 1 - A(q_\perp) + B(q_\perp) \sin\vartheta_0 - \left[A(q_\perp) \sin\vartheta_0 - B(q_\perp)\right] \cos 2\varphi \, , \label{eq:Lambda_2_22} \\ &{} (\Lambda_2)_{2,3} = (\Lambda_2)_{3,2} = - \left[A(q_\perp) - B(q_\perp) \sin\vartheta_0\right] \sin 2\varphi \, , \label{eq:Lambda_2_23} \\ &{} (\Lambda_2)_{2,4} = (\Lambda_2)_{4,2} = - A(q_\perp) \cos\vartheta_0 \sin 2\varphi \, , \label{eq:Lambda_2_24} \\ &{} (\Lambda_2)_{3,3} = 1 - A(q_\perp) + B(q_\perp) \sin\vartheta_0 + \left[A(q_\perp) \sin\vartheta_0 - B(q_\perp)\right] \cos 2\varphi \, , \label{eq:Lambda_2_33} \\ &{} (\Lambda_2)_{3,4} = (\Lambda_2)_{4,3} = \left[1 - A(q_\perp)\right] \cos\vartheta_0 - B(q_\perp) \cos\vartheta_0 \cos 2\varphi \, , \label{eq:Lambda_2_34} \\ &{} (\Lambda_2)_{4,4} = 1 - A(q_\perp) - B(q_\perp) \sin\vartheta_0 - \left[A(q_\perp) \sin\vartheta_0 + B(q_\perp)\right] \cos 2\varphi \, . \label{eq:Lambda_2_44} \end{align} \end{subequations} \section{Paraxial limit and sound velocities} \label{sec:paraxial_lim} As stated in the main text, the paraxial Bogoliubov Lagrangian can be deduced by expanding the nonparaxial one [Eq.~\eqref{eq:lagr_2}] up to first order in $\dot{X}/\beta_0$, $(q_\perp/\beta_0)^2$, and $g_{d,s}I_0 / \beta_0$. 
The final result is \begin{equation} \tilde{\mathcal{L}}_\mathrm{par}^{(2)} = \dot{X}^\dagger \Lambda_{\mathrm{par},1} X + X^\dagger \Lambda_{\mathrm{par},1}^T \dot{X} - X^\dagger \Lambda_{\mathrm{par},0} X \, , \label{eq:paraxial_lagr} \end{equation} where \begin{equation} \begin{gathered} \Lambda_{\mathrm{par},1} = I_0 \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ -\cos\vartheta_0 & \sin\vartheta_0 & 0 & 0 \\ \end{bmatrix} \, , \\ \Lambda_{\mathrm{par},0} = I_0 \begin{bmatrix} \frac{q_\perp^2}{2\beta_0} + 2(g_d + g_s\cos^2\vartheta_0)I_0 & - g_s I_0 \sin 2\vartheta_0 & 0 & 0 \\ - g_s I_0 \sin 2\vartheta_0 & \frac{q_\perp^2}{2\beta_0} + 2 g_s I_0 \sin^2\vartheta_0 & 0 & 0 \\ 0 & 0 & \frac{q_\perp^2}{2\beta_0} & \frac{q_\perp^2}{2\beta_0} \cos\vartheta_0 \\ 0 & 0 & \frac{q_\perp^2}{2\beta_0} \cos\vartheta_0 & \frac{q_\perp^2}{2\beta_0} \end{bmatrix} \, . \end{gathered} \label{eq:paraxial_lagr_mat} \end{equation} The Euler-Lagrange equation for $X$ takes the simple form \begin{equation} \dot{X} = - \left[(\Lambda_{\mathrm{par},1} - \Lambda_{\mathrm{par},1}^T)\right]^{-1} \Lambda_{\mathrm{par},0} X \, . \label{eq:paraxial_el_eq} \end{equation} By making use of the Ansatz $X(\vec{q}_\perp,z) = X_0(\vec{q}_\perp) \mathrm{e}^{- \mathrm{i} \Omega(\vec{q}_\perp) z}$, one can reduce this equation to a four-dimensional eigenvalue problem. One finds four solutions characterized by the oscillation frequencies $\pm \Omega_d$ and $\pm \Omega_s$, which exhibit the standard Bogoliubov form \begin{equation} \Omega_{d(s)}(q_\perp) = \sqrt{\frac{q_\perp^2}{2 \beta_0} \left[\frac{q_\perp^2}{2 \beta_0} + 2 \beta_0 c_{d(s)}^2\right]} \, . \label{eq:paraxial_bogo_freq} \end{equation} These are the counterparts of the density and spin mode of a binary mixture of atomic Bose-Einstein condensates, featuring in-phase and out-of-phase oscillations of the densities of the two spin components, respectively. 
The corresponding sound velocities are given by \begin{equation} c_{d(s)}^2 = \frac{(g_d + g_s \pm \sqrt{g_d^2 + g_s^2 + 2 g_d g_s \cos 2\vartheta_0}) I_0}{2\beta_0} \, , \label{eq:paraxial_sound} \end{equation} where the upper (lower) sign refers to the density (spin) mode. In Fig.~\ref{fig:sound_vel} we plot the velocity of (left) density and (right) spin sound waves as a function of the background polarization. The paraxial prediction~\eqref{eq:paraxial_sound} (dashed curves) is compared with the values obtained in the nonparaxial description of this work (solid curves) by numerically computing the slope of the linear bands appearing in the low-$q_\perp$ part of the Bogoliubov spectrum (see Fig.~2 of the main text). We consider several values of the nonlinear coupling $g_d I_0 / \beta_0$, keeping the ratio $g_s / g_d$ fixed. We notice that the qualitative behavior does not change between the two frameworks. On the other hand, the quantitative discrepancy is negligible at small $g_d I_0 / \beta_0$, but it grows significantly with increasing nonlinearity. We further mention that $c_d^2$ approaches the value $g_d I_0 / (\beta_0 - 3 g_d I_0)$ predicted in Ref.~\cite{Martone2021} in the $\cos\vartheta_0 \to 0$ limit (linearly polarized background). Concerning the spin sound velocity, we first recall that its value in the case of a linearly polarized background field was computed in Ref.~\cite{Martone2021} and found to be anisotropic, i.e., depending on $\varphi$. Here we checked that $c_s^2 \to g_s I_0 [\beta_0 - 2 (g_d + g_s) I_0] / \{[\beta_0 - (2 g_d + g_s) I_0] [\beta_0 - 2 (g_d + 2 g_s) I_0]\}$ as $\cos\vartheta_0 \to 0$, which is the prediction of Ref.~\cite{Martone2021} [see Eq.~(47) therein] evaluated at $\varphi = \pi / 4$.
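As a concrete cross-check of the paraxial dispersion relation, the eigenvalue problem~\eqref{eq:paraxial_el_eq} can be solved numerically and compared with Eqs.~\eqref{eq:paraxial_bogo_freq} and~\eqref{eq:paraxial_sound}. The following minimal sketch (Python with NumPy; all parameter values are arbitrary illustrative choices, and it is not part of the code used for the figures) builds the matrices of Eq.~\eqref{eq:paraxial_lagr_mat} and verifies that the eigenvalues of $-\mathrm{i}\,(\Lambda_{\mathrm{par},1} - \Lambda_{\mathrm{par},1}^T)^{-1} \Lambda_{\mathrm{par},0}$ are $\pm\Omega_d$, $\pm\Omega_s$:

```python
import numpy as np

# Illustrative parameters (arbitrary units, beta0 = 1 sets the scale).
beta0, I0, gd, gs, theta0, q = 1.0, 1.0, 0.05, 0.02, 0.7, 0.3
c, s = np.cos(theta0), np.sin(theta0)
eps = q**2 / (2 * beta0)  # q_perp^2 / (2 beta0)

# Lambda_{par,1} and Lambda_{par,0} of Eq. (paraxial_lagr_mat).
L1 = I0 * np.array([[0, 0, 0, 0],
                    [0, 0, 0, 0],
                    [-1, 0, 0, 0],
                    [-c, s, 0, 0]])
L0 = I0 * np.array([[eps + 2*(gd + gs*c**2)*I0, -gs*I0*np.sin(2*theta0), 0, 0],
                    [-gs*I0*np.sin(2*theta0), eps + 2*gs*s**2*I0, 0, 0],
                    [0, 0, eps, eps*c],
                    [0, 0, eps*c, eps]])

# Euler-Lagrange equation (L1 - L1^T) dX/dz = -L0 X with X ~ exp(-i Omega z):
# the Omega's are the eigenvalues of -i (L1 - L1^T)^{-1} L0.
# (L1 - L1^T is singular for sin(theta0) = 0, so theta0 must be nonzero here.)
omega = np.sort(np.linalg.eigvals(-1j * np.linalg.solve(L1 - L1.T, L0)).real)

# Analytic Bogoliubov frequencies with the density/spin sound velocities.
root = np.sqrt(gd**2 + gs**2 + 2*gd*gs*np.cos(2*theta0))
cd2 = (gd + gs + root) * I0 / (2*beta0)   # density mode, upper sign
cs2 = (gd + gs - root) * I0 / (2*beta0)   # spin mode, lower sign
Od = np.sqrt(eps * (eps + 2*beta0*cd2))
Os = np.sqrt(eps * (eps + 2*beta0*cs2))
print(np.allclose(omega, np.sort([-Od, -Os, Os, Od])))
```

Repeating the same comparison over a range of $q$ and $\vartheta_0$ reproduces the slopes used for the dashed paraxial curves of Fig.~\ref{fig:sound_vel}.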
In this respect we point out that, unlike the frequency spectrum of a linearly polarized background field studied in Ref.~\cite{Martone2021}, the quasi-frequency spectrum computed in the present work is independent of $\varphi_q$, that is, it is isotropic. However, it should be noticed that frequency and quasi-frequency have different definitions that only make sense in the $\Delta k = 0$ and $\Delta k \neq 0$ case, respectively; hence, it is not inconsistent that the two spectra have qualitatively different features. \begin{figure} \includegraphics[scale=1]{figure1sm_sound_vel.pdf} \caption{Velocity of (left) density and (right) spin sound waves as a function of the background polarization quantified by $\cos\vartheta_0$. The solid curves represent the results of the full nonparaxial theory, while the dashed ones correspond to the paraxial predictions~\eqref{eq:paraxial_sound}. We take $g_d I_0 / \beta_0 = 0.02$ (blue lines), $0.1$ (red lines), $0.2$ (green lines), with the same ratio $g_s / g_d = 0.25$ for all the curves.} \label{fig:sound_vel} \end{figure} \section{Solution of Bogoliubov equations. Hill's method} \label{sec:bogo_hill} Let us rewrite Eq.~(5) of the main text as \begin{equation} \mathrm{i} \begin{bmatrix} \dot{X} \\ \dot{\Pi}^* \end{bmatrix} = \mathcal{B} \begin{bmatrix} X \\ \Pi^* \end{bmatrix} \, , \label{eq:bogo_eq} \end{equation} where the $8 \times 8$ matrix \begin{equation} \mathcal{B} = \mathrm{i} \begin{bmatrix} - \Lambda_2^{-1} \Lambda_1 & \Lambda_2^{-1} \\ - (\Lambda_1^T \Lambda_2^{-1} \Lambda_1 + \Lambda_0) & (\Lambda_2^{-1} \Lambda_1)^T \end{bmatrix} \label{eq:bogo_mat} \end{equation} is a $\pi$-periodic function of $\varphi = \varphi_q + \Delta k \;\! z / 2$. 
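The block structure of $\mathcal{B}$ in Eq.~\eqref{eq:bogo_mat} reflects the Hamiltonian character of the Bogoliubov problem: with $J = \bigl[\begin{smallmatrix} 0 & \mathrm{i}\mathbb{I}_4 \\ -\mathrm{i}\mathbb{I}_4 & 0 \end{smallmatrix}\bigr]$, the product $J\mathcal{B}$ is Hermitian whenever $\Lambda_0$ and $\Lambda_2$ are real and symmetric, which is what guarantees conservation along $z$ of the symplectic norm $\mathrm{i}(X^\dagger \Pi^* - \Pi^T X)$. A minimal sketch of this structural property (Python with NumPy, using random placeholder matrices in place of the actual $\Lambda_k$ given above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # block size (4 transverse variables)

# Placeholder matrices with the structural properties of the text:
# Lambda_0, Lambda_2 real symmetric (Lambda_2 invertible), Lambda_1 arbitrary.
L0 = rng.standard_normal((n, n)); L0 = L0 + L0.T
L2 = rng.standard_normal((n, n)); L2 = L2 + L2.T + 10*np.eye(n)
L1 = rng.standard_normal((n, n))

L2inv = np.linalg.inv(L2)
# The Bogoliubov matrix of Eq. (bogo_mat).
B = 1j * np.block([
    [-L2inv @ L1,                 L2inv],
    [-(L1.T @ L2inv @ L1 + L0),   (L2inv @ L1).T],
])

# The structure matrix appearing in Eq. (bogo_mat_symm); J B Hermitian implies
# d/dz [i (X^dag Pi^* - Pi^T X)] = 0 along solutions of i d/dz (X, Pi^*) = B (X, Pi^*).
J = np.block([[np.zeros((n, n)), 1j*np.eye(n)],
              [-1j*np.eye(n), np.zeros((n, n))]])

JB = J @ B
print(np.allclose(JB, JB.conj().T))  # B itself is not Hermitian, but J B is
```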
As discussed in the main text, the Floquet theorem allows us to write the general integral of Eq.~\eqref{eq:bogo_eq} as a linear combination of eight independent solutions of the form \begin{equation} \begin{bmatrix} X_\ell(\vec{q}_\perp,z) \\ \Pi_\ell^*(\vec{q}_\perp,z) \end{bmatrix} = \mathrm{e}^{- \mathrm{i} \Omega_\ell(q_\perp) z} \begin{bmatrix} X_{0,\ell}(q_\perp,\varphi) \\ \Pi_{0,\ell}^*(q_\perp,\varphi) \end{bmatrix} \, , \label{eq:bogo_ansatz} \end{equation} where $\Omega_\ell(q_\perp)$ is the quasi-frequency and the amplitudes $X_{0,\ell}$ and $\Pi_{0,\ell}^*$ are themselves $\pi$-periodic in $\varphi$. In order to determine these solutions we adopt Hill's method~\cite{Deconinck2006}. First, we formally represent $\mathcal{B}$ as a Fourier expansion, $\mathcal{B} = \sum_{n = - \infty}^{+ \infty} \mathcal{B}_n \mathrm{e}^{- 2 \mathrm{i} n \varphi}$, where the matrix expansion coefficients $\mathcal{B}_n = 0$ when $n \neq 0, \pm 1$, and $\mathcal{B}_{-1} = - \mathcal{B}_1^*$. Similarly, we Fourier expand the amplitudes entering the Ansatz~\eqref{eq:bogo_ansatz}, $X_{0,\ell}(q_\perp,\varphi) = \sum_{n = - \infty}^{+ \infty} X_{0,\ell,n}(q_\perp) \mathrm{e}^{- 2 \mathrm{i} n \varphi}$ and $\Pi_{0,\ell}^*(q_\perp,\varphi) = \sum_{n = - \infty}^{+ \infty} \Pi_{0,\ell,n}^*(q_\perp) \mathrm{e}^{- 2 \mathrm{i} n \varphi}$, where the $X_{0,\ell,n}(q_\perp)$'s and $\Pi_{0,\ell,n}^*(q_\perp)$'s are the $4$-component expansion coefficients. Plugging these expansions into Eq.~\eqref{eq:bogo_eq} and equating the terms on the two sides having the same oscillating behavior one finds \begin{equation} \sum_{n' = - \infty}^{+ \infty} \mathcal{F}_{n n'} \begin{bmatrix} X_{0,\ell,n'} \\ \Pi_{0,\ell,n'}^* \end{bmatrix} = \Omega_\ell \begin{bmatrix} X_{0,\ell,n} \\ \Pi_{0,\ell,n}^* \end{bmatrix} \, . 
\label{eq:bogo_eq_exp} \end{equation} Here we have introduced the infinite-dimensional matrix $\mathcal{F}$ whose entries $\mathcal{F}_{n n'} = \mathcal{B}_{n-n'} - n \Delta k \mathbb{I}_8 \delta_{n,n'}$ are themselves $8 \times 8$ matrices, with $\mathbb{I}_8$ the $8 \times 8$ identity matrix. Equation~\eqref{eq:bogo_eq_exp} is thus an eigenvalue problem for $\mathcal{F}$. One can immediately see that for each solution with quasi-frequency $\Omega_\ell$ there exists another one having quasi-frequency $\Omega_\ell + n \Delta k$ for arbitrary $n \in \mathbb{Z}$. However, all these infinitely many solutions of Eq.~\eqref{eq:bogo_eq_exp} are physically equivalent, as they correspond to a single Bogoliubov mode of the form~\eqref{eq:bogo_ansatz}. For this reason, one can restrict the real part of $\Omega_\ell$ to the first Brillouin zone, $-|\Delta k|/2 < \Re \Omega_\ell \leq |\Delta k|/2$, as done in Fig.~2 of the main text. In addition, from the identity $\mathcal{F}_{-n, -n'} = - \mathcal{F}_{n n'}^*$ it follows that both $\Omega_\ell$ and $- \Omega_\ell^*$ belong to the spectrum of $\mathcal{F}$. Finally, the relation \begin{equation} \begin{bmatrix} 0 & \mathrm{i} \mathbb{I}_4 \\ - \mathrm{i} \mathbb{I}_4 & 0 \end{bmatrix} \mathcal{F}_{n n'} \begin{bmatrix} 0 & \mathrm{i} \mathbb{I}_4 \\ - \mathrm{i} \mathbb{I}_4 & 0 \end{bmatrix}^{-1} = \mathcal{F}_{-n, -n'}^\dagger \, , \label{eq:bogo_mat_symm} \end{equation} with $\mathbb{I}_4$ the $4 \times 4$ identity matrix, ensures that $\mathcal{F}$ and $\mathcal{F}^\dagger$ have the same spectrum, meaning that quasi-frequencies occur in complex conjugate pairs.
Combining Eqs.~\eqref{eq:bogo_eq_exp} and~\eqref{eq:bogo_mat_symm}, after a few appropriate manipulations, we obtain the amplitude orthonormalization relations \begin{equation} \sum_{n' = - \infty}^{+ \infty} \mathrm{i} \left[X_{0,\ell',n'}^\dagger(q_\perp) \Pi_{0,\ell,n'+n}^*(q_\perp) - \Pi_{0,\ell',n'}^T(q_\perp) X_{0,\ell,n'+n}(q_\perp) \right] = \mathcal{N}_\ell(q_\perp) \delta_{\ell' \bar{\ell}} \delta_{n,0} \, , \label{eq:bogo_amp_norm} \end{equation} where $\bar{\ell}$ labels the solution with quasi-frequency $\Omega_{\bar{\ell}} = \Omega_\ell^*$ and $\mathcal{N}_\ell$ is the mode norm. From Eq.~\eqref{eq:bogo_amp_norm}, we easily obtain the relation \begin{equation} \mathrm{i} [X_{0,\ell}^\dagger(q_\perp,\varphi) \Pi_{0,\ell'}^*(q_\perp,\varphi) - \Pi_{0,\ell}^T(q_\perp,\varphi) X_{0,\ell'}(q_\perp,\varphi)] = \mathcal{N}_\ell(q_\perp)\delta_{\ell'\bar{\ell}} \, , \label{eq:bogo_amp_norm_phi} \end{equation} holding at arbitrary $\varphi$. Then, taking Eq.~(6) of the main text at $z=0$ and projecting it onto each separate mode by means of Eq.~\eqref{eq:bogo_amp_norm_phi} at $z=0$, we end up with an expression for the weights as functions of the input field: \begin{equation} C_\ell(\vec{q}_\perp) = \frac{\mathrm{i}}{\mathcal{N}_\ell(q_\perp)} \left[ X_{0,\bar{\ell}}^\dagger(q_\perp,\varphi_q) \Pi^*(\vec{q}_\perp,z=0) - \Pi_{0,\bar{\ell}}^T(q_\perp,\varphi_q) X(\vec{q}_\perp,z=0) \right] \, , \label{eq:bogo_weight} \end{equation} which is precisely Eq.~(8) of the main text. Up to now no approximation has been made. In order to perform the numerical computation of the Bogoliubov spectrum and amplitudes, the Hill method prescribes one to choose a (sufficiently large) positive integer $N$ and truncate $\mathcal{F}_{n n'}$ to the square block with $-(2N+1) \leq n,n' \leq 2N+1$. After checking the convergence of the numerical results, we have chosen $N = 20$ for the figures of the Letter. 
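In practice, the truncation step of Hill's method amounts to assembling a finite block matrix and diagonalizing it. The sketch below (Python with NumPy; the tiny placeholder blocks $\mathcal{B}_0$, $\mathcal{B}_1$ stand in for the actual $8 \times 8$ Fourier coefficients, and the truncation index is kept small) assembles $\mathcal{F}_{n n'} = \mathcal{B}_{n-n'} - n \Delta k \, \mathbb{I} \, \delta_{n,n'}$ for $|n|, |n'| \leq N$ and folds the real parts of the eigenvalues into the first Brillouin zone; for $\mathcal{B}_1 = 0$ the truncated spectrum is exactly $\{\lambda_i - n \Delta k\}$, which provides a simple consistency check of the construction:

```python
import numpy as np

def hill_matrix(B0, B1, dk, N):
    """Truncated Hill matrix F_{nn'} = B_{n-n'} - n*dk*I*delta_{nn'},
    with the only nonzero Fourier blocks B_0, B_1 and B_{-1} = -conj(B_1),
    kept for -N <= n, n' <= N."""
    d = B0.shape[0]
    Bm1 = -np.conj(B1)
    F = np.zeros(((2*N + 1) * d, (2*N + 1) * d), dtype=complex)
    for i, n in enumerate(range(-N, N + 1)):
        F[i*d:(i+1)*d, i*d:(i+1)*d] = B0 - n * dk * np.eye(d)
        if i < 2*N:
            F[i*d:(i+1)*d, (i+1)*d:(i+2)*d] = Bm1  # n - n' = -1
            F[(i+1)*d:(i+2)*d, i*d:(i+1)*d] = B1   # n - n' = +1
    return F

def fold_bz(omega, dk):
    """Fold Re(omega) into the first Brillouin zone (-|dk|/2, |dk|/2]."""
    re = np.real(omega)
    folded = -((-re + abs(dk)/2) % abs(dk) - abs(dk)/2)
    return folded + 1j * np.imag(omega)

# Consistency check: with B_1 = 0 the spectrum is {lambda_i - n*dk} exactly,
# and folding maps every copy back onto {lambda_i}.
dk, N = 2.0, 2
B0 = np.diag([0.3, -0.7]).astype(complex)
ev = np.sort(np.linalg.eigvals(hill_matrix(B0, np.zeros((2, 2), complex), dk, N)).real)
expected = np.sort([lam - n*dk for n in range(-N, N + 1) for lam in (0.3, -0.7)])
print(np.allclose(ev, expected), np.unique(np.round(fold_bz(ev, dk).real, 9)))
```

For the actual computation one replaces the placeholder blocks by the $\mathcal{B}_n$ built from the $\Lambda_k$ matrices and increases $N$ until convergence, as described above.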
\section{Input conjugate momenta} \label{sec:input_conj_mom} The study of a nonparaxial light beam propagating inside a bulk nonlinear medium requires one to solve the Bogoliubov equations~\eqref{eq:bogo_eq} for a given input field profile $X(\vec{q}_\perp,z=0)$, such as that of Eq.~(7) of the main text. However, this is not enough to uniquely determine the solution to the problem, as the input value of the conjugate momenta $\Pi^*(\vec{q}_\perp,z=0)$ is also required. In order to fix this value, we first recall that the Bogoliubov spectrum of the system includes eight branches, only half of which are associated with transmitted field modes propagating inside the medium; the remaining branches describe reflected modes at the air-medium interface, which do not appear in the paraxial framework. To know whether a given branch $\ell$ is transmitted or reflected we define an average frequency $\bar{\Omega}_\ell = \Omega_\ell + \mathcal{N}_\ell^{-1} \Delta k \sum_{n = - \infty}^{+ \infty} \mathrm{i} n \left(X_{0,\bar{\ell},n}^\dagger \Pi_{0,\ell,n}^* - \Pi_{0,\bar{\ell},n}^T X_{0,\ell,n} \right)$, which can be regarded as the expectation value of the effective energy operator $\mathrm{i}\nabla_z$ [cf. the orthonormalization condition~\eqref{eq:bogo_amp_norm}]. Transmitted (reflected) modes are those having the smallest (largest) value of $|\Re \bar{\Omega}_\ell|$. We take $\Pi^*(\vec{q}_\perp,z=0)$ such that only the transmitted modes, whose frequencies will be denoted by $\Omega_{-,d}$ and $\Omega_{-,s}$ (where $d$ and $s$ stand for ``density'' and ``spin'', respectively), are excited; conversely, their reflected counterparts of quasi-frequency $\Omega_{+,d}$ and $\Omega_{+,s}$ do not appear in the solution of Eq.~\eqref{eq:bogo_eq}.
The practical implementation of the above criterion requires as a preliminary step to consider the operator evolving the solution of the Bogoliubov equations~\eqref{eq:bogo_eq} with respect to effective time by one oscillation period $2\pi / \Delta k$, \begin{equation} \begin{bmatrix} X(\vec{q}_\perp,z=2\pi/\Delta k) \\ \Pi^*(\vec{q}_\perp,z=2\pi/\Delta k) \end{bmatrix} = \mathcal{U}(2\pi/\Delta k,0) \begin{bmatrix} X(\vec{q}_\perp,z=0) \\ \Pi^*(\vec{q}_\perp,z=0) \end{bmatrix} \, . \label{eq:inp_evol_XP} \end{equation} One can decompose this operator as $\mathcal{U}(2\pi/\Delta k,0) = \mathcal{P} \mathcal{U}_D \mathcal{P}^{-1}$, with \begin{equation} \mathcal{U}_D = \begin{bmatrix} \mathrm{e}^{- 2 \pi \mathrm{i} \mathcal{B}_{D+} / \Delta k} & 0 \\ 0 & \mathrm{e}^{- 2 \pi \mathrm{i} \mathcal{B}_{D-} / \Delta k} \end{bmatrix} \, , \qquad \mathcal{P} = \begin{bmatrix} \mathcal{P}_{X+} & \mathcal{P}_{X-} \\ \mathcal{P}_{\Pi+} & \mathcal{P}_{\Pi-} \end{bmatrix} \, , \qquad \mathcal{P}^{-1} = \begin{bmatrix} \left(\mathcal{P}^{-1}\right)_{X+} & \left(\mathcal{P}^{-1}\right)_{\Pi+} \\ \left(\mathcal{P}^{-1}\right)_{X-} & \left(\mathcal{P}^{-1}\right)_{\Pi-} \end{bmatrix} \, . \label{eq:inp_diag_mat} \end{equation} The $4 \times 4$ blocks entering $\mathcal{U}_D$ and $\mathcal{P}$ are given by \begin{subequations} \label{eq:inp_mat_block} \begin{align} \mathcal{B}_{D\pm} &{} = \operatorname{diag}\left( \Omega_{d,\pm}(q_\perp), -\Omega_{d,\pm}^*(q_\perp), \Omega_{s,\pm}(q_\perp), -\Omega_{s,\pm}^*(q_\perp) \right) \, , \label{eq:inp_mat_block_bogo} \\ \mathcal{P}_{X\pm} &{} = \begin{bmatrix} X_{d,\pm}(\vec{q}_\perp,z=0) & X_{d,\pm}^*(-\vec{q}_\perp,z=0) & X_{s,\pm}(\vec{q}_\perp,z=0) & X_{s,\pm}^*(-\vec{q}_\perp,z=0) \end{bmatrix} \, , \label{eq:inp_mat_block_X} \\ \mathcal{P}_{\Pi\pm} &{} = \begin{bmatrix} \Pi_{d,\pm}^*(\vec{q}_\perp,z=0) & \Pi_{d,\pm}(-\vec{q}_\perp,z=0) & \Pi_{s,\pm}^*(\vec{q}_\perp,z=0) & \Pi_{s,\pm}(-\vec{q}_\perp,z=0) \end{bmatrix} \, . 
\label{eq:inp_mat_block_Pi} \end{align} \end{subequations} Notice that the columns of $\mathcal{P}$ correspond to the eight wave functions defined in Eq.~\eqref{eq:bogo_ansatz} taken at $z=0$. Consequently, the blocks of $\mathcal{P}^{-1}$ can be found from the orthonormalization conditions discussed in Sec.~\ref{sec:bogo_hill}. We now define the new variables \begin{equation} \begin{bmatrix} Y_{D+} \\ Y_{D-} \end{bmatrix} = \mathcal{P}^{-1} \begin{bmatrix} X \\ \Pi^* \end{bmatrix} \, , \label{eq:inp_def_Y} \end{equation} whose effective-time evolution over one oscillation period is simply given by a phase factor, $Y_{D\pm}(\vec{q}_\perp,z=2\pi / \Delta k) = \mathrm{e}^{- 2 \pi \mathrm{i} \mathcal{B}_{D\pm} / \Delta k} Y_{D\pm}(\vec{q}_\perp,z=0)$. Requiring $Y_{D+}(\vec{q}_\perp,z=0) = 0$ is a sufficient condition to ensure that the reflected Bogoliubov modes are not excited. When expressing this constraint in terms of the original variables one obtains \begin{equation} \Pi^*(\vec{q}_\perp,z=0) = - \left[ \left(\mathcal{P}^{-1}\right)_{\Pi+} \right]^{-1} \left(\mathcal{P}^{-1}\right)_{X+} X(\vec{q}_\perp,z=0) \, . \label{eq:inp_init_cond_Pi} \end{equation} We checked that in the paraxial limit this condition is consistent with the relation~\eqref{eq:paraxial_el_eq}, connecting $X$ to its first-order derivative, taken at $z = 0$.
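The selection rule~\eqref{eq:inp_init_cond_Pi} is a purely algebraic operation on the blocks of $\mathcal{P}^{-1}$. A minimal sketch (Python with NumPy, with a random placeholder $\mathcal{P}$ standing in for the actual matrix of Bogoliubov eigenvectors) verifying that the momenta constructed in this way leave the reflected components $Y_{D+}$ unexcited:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4  # size of each block; P is 2n x 2n

# Placeholder eigenvector matrix P (columns = Bogoliubov modes at z = 0);
# a random complex matrix is invertible with probability one.
P = rng.standard_normal((2*n, 2*n)) + 1j * rng.standard_normal((2*n, 2*n))
Pinv = np.linalg.inv(P)

# Blocks of P^{-1} in the notation of Eq. (inp_diag_mat):
# the first n rows of P^{-1} give the "+" (reflected) components Y_{D+}.
PinvX_plus = Pinv[:n, :n]    # (P^{-1})_{X+}
PinvPi_plus = Pinv[:n, n:]   # (P^{-1})_{Pi+}

X_in = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # input field

# Eq. (inp_init_cond_Pi): choose the input momenta so that Y_{D+} = 0.
Pi_in = -np.linalg.solve(PinvPi_plus, PinvX_plus @ X_in)

Y = Pinv @ np.concatenate([X_in, Pi_in])
print(np.allclose(Y[:n], 0))  # reflected modes are not excited
```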
\section{Intensity-intensity correlation function} \label{sec:g2} In order to compute the intensity-intensity correlation function one has to start from the input field [Eq.~(7) of the main text] and write down the corresponding expression of the four-component vector $X$, \begin{equation} X(\vec{q}_\perp,z=0) = \epsilon \sum_{\alpha = r,i} \sum_{\sigma = \pm} X_{\alpha,\sigma} \tilde{\phi}_{\alpha,\sigma}(\vec{q}_\perp) \, , \label{eq:speckle_in_X} \end{equation} where \begin{equation} X_{r,+} = \begin{bmatrix} \cos\frac{\vartheta_0}{2} \\ -\sin\frac{\vartheta_0}{2} \\ 0 \\ 0 \end{bmatrix} \, , \quad X_{r,-} = \begin{bmatrix} \sin\frac{\vartheta_0}{2} \\ \cos\frac{\vartheta_0}{2} \\ 0 \\ 0 \end{bmatrix} \, , \quad X_{i,+} = \begin{bmatrix} 0 \\ 0 \\ \frac{1}{2\cos\frac{\vartheta_0}{2}} \\ \frac{1}{2\cos\frac{\vartheta_0}{2}} \end{bmatrix} \, , \quad X_{i,-} = \begin{bmatrix} 0 \\ 0 \\ \frac{1}{2\sin\frac{\vartheta_0}{2}} \\ -\frac{1}{2\sin\frac{\vartheta_0}{2}} \end{bmatrix} \, . \label{eq:speckle_in_X_as} \end{equation} The correlators of the Fourier transform of the speckle fields are given by $\langle \tilde{\phi}_{r,\sigma}(\vec{q}_\perp) \tilde{\phi}_{r,\sigma'}^*(\vec{q}_\perp')\rangle = \langle \tilde{\phi}_{i,\sigma}(\vec{q}_\perp) \tilde{\phi}_{i,\sigma'}^*(\vec{q}_\perp')\rangle = (2\pi)^2 \delta(\vec{q}_\perp' - \vec{q}_\perp) \tilde{\gamma}(q_\perp) \delta_{\sigma\sigma'} / 2$ and $\langle \tilde{\phi}_{r,\sigma}(\vec{q}_\perp) \tilde{\phi}_{i,\sigma'}^*(\vec{q}_\perp')\rangle = 0$, with $\tilde{\gamma}(q_\perp) = 4 \pi \sigma^2 \exp(- \sigma^2 q_\perp^2)$. 
From the discussion of Sec.~\ref{sec:input_conj_mom}, we know that the input conjugate momenta take the form \begin{equation} \Pi^*(\vec{q}_\perp,z=0) = \epsilon \sum_{\alpha = r,i} \sum_{\sigma = \pm} \Pi_{\alpha,\sigma}^*(\vec{q}_\perp) \tilde{\phi}_{\alpha,\sigma}(\vec{q}_\perp) \, , \label{eq:speckle_in_Pi} \end{equation} where the $\Pi_{\alpha,\sigma}^*$'s can be directly deduced from Eq.~\eqref{eq:inp_init_cond_Pi}. After solving the problem as discussed in the previous sections and in the main text, one can write the intensity fluctuation at arbitrary $\vec{r}_\perp$ and $z$ as \begin{equation} \delta I(\vec{r}_\perp,z) = \int \frac{\mathrm{d}^2 q_\perp}{(2\pi)^2} \sum_\ell C_\ell(\vec{q}_\perp) \delta I_{0,\ell}(q_\perp,\varphi(z)) \mathrm{e}^{\mathrm{i}[\vec{q}_\perp \cdot \vec{r}_\perp - \Omega_\ell(q_\perp) z]} \, , \label{eq:speckle_sol_int} \end{equation} where $\delta I_{0,\ell}$ is (up to a factor $2 I_0$) the first component of the amplitude vector $X_{0,\ell}$ [see Eq.~\eqref{eq:bogo_ansatz}] and the weights are given by Eq.~\eqref{eq:bogo_weight}. 
To evaluate $g_2(z)\equiv \langle \delta I(\vec{r}_\perp,0) \delta I(\vec{r}_\perp,z) \rangle$ one needs the expression of the weight correlator \begin{equation} \langle C_\ell(\vec{q}_\perp) C_{\ell'}^*(\vec{q}_\perp') \rangle = \epsilon^2 (2\pi)^2 \delta(\vec{q}_\perp' - \vec{q}_\perp) \tilde{\gamma}(q_\perp) \frac{\Xi_{\ell\ell'}^*(\vec{q}_\perp)}{2} \, , \label{eq:speckle_weight_corr} \end{equation} where \begin{equation} \begin{split} \Xi_{\ell\ell'}^*(\vec{q}_\perp) = {}&{} \sum_{\alpha = r,i} \sum_{\sigma = \pm} \left[ X_{0,\bar{\ell}}^\dagger(q_\perp,\varphi_q) \Pi_{\alpha,\sigma}^*(\vec{q}_\perp) - \Pi_{0,\bar{\ell}}^T(q_\perp,\varphi_q) X_{\alpha,\sigma} \right] \\ &{} \times \left[ X_{0,\bar{\ell}'}^\dagger(q_\perp,\varphi_q) \Pi_{\alpha,\sigma}^*(\vec{q}_\perp) - \Pi_{0,\bar{\ell}'}^T(q_\perp,\varphi_q) X_{\alpha,\sigma} \right]^* \left[ \mathcal{N}_\ell(q_\perp) \mathcal{N}_{\ell'}^*(q_\perp) \right]^{-1} \, . \end{split} \label{eq:speckle_weight_corr_coeff} \end{equation} Then, a straightforward calculation yields \begin{equation} \begin{split} g_2(z) = \epsilon^2 I_0^2 \sum_\ell \int_0^{\infty} \frac{\mathrm{d} q_\perp}{2\pi} \, q_\perp \tilde{\gamma}(q_\perp) K_\ell(q_\perp, z) \mathrm{e}^{- \mathrm{i} \Omega_\ell(q_\perp) z} \, , \end{split} \label{eq:speckle_2corr} \end{equation} where we have introduced the modulated coefficients \begin{equation} K_{\ell}(q_\perp,z) = \frac{1}{2 I_0^2} \int_{- \pi}^\pi \frac{\mathrm{d}\varphi_q}{2\pi} \sum_{\ell'} \Xi_{\ell'\ell}(\vec{q}_\perp) \delta I_{0,\ell'}^*(q_\perp,\varphi_q) \delta I_{0,\ell}(q_\perp,\varphi(z)) \, . 
\label{eq:speckle_2corr_coeff} \end{equation} Equation~\eqref{eq:speckle_2corr} also holds in the paraxial framework, with the coefficients taking the $q_\perp$- and $z$-independent values \begin{equation} K_{d(s)} = \frac{1}{2} \pm \frac{g_d + g_s \cos 2 \vartheta_0}{2\sqrt{g_d^2 + g_s^2 + 2 g_d g_s \cos 2\vartheta_0}} \, , \label{eq:speckle_2corr_coeff_par} \end{equation} where the upper (lower) sign refers to the two density (spin) branches of the Bogoliubov spectrum. In addition, at large $z$ one can approximate $\Omega_{d(s)}(q_\perp) \simeq c_{d(s)} q_\perp$ in the integral~\eqref{eq:speckle_2corr}, with $c_{d(s)}$ the paraxial sound velocities~\eqref{eq:paraxial_sound}, yielding the asymptotic behavior $g_2(z) \simeq - \epsilon^2 I_0^2 (\sqrt{g_d I_0/\beta_0} {z}/{2 \sigma})^{-2}$. \begin{figure} \includegraphics[scale=1]{figure2sm_g2_fourier.pdf} \caption{Discrete Fourier transform of $g_2$ (in modulus) as a function of the effective frequency $q_z$ (we can restrict to $q_z > 0$ because of symmetry). The parameters are the same as the blue solid curves of Figs.~3 and 4(a) of the main text, namely the background polarization $\vartheta_0 = \pi / 4$, the nonlinear couplings $g_d I_0 / \beta_0 = 0.2$, $g_s I_0 / \beta_0 = 0.05$, and the correlation length $\beta_0\sigma = 15$.} \label{fig:g2_fourier} \end{figure} In the nonparaxial regime the $K_\ell$'s are periodic in $z$, giving rise to the peculiar oscillating behavior shown in Figs.~3 and~4(a) of the main text. To identify the dominant oscillation frequencies we numerically evaluate $g_2(z)$ in a window of fairly large $z$ and then compute its (discrete) Fourier transform $\tilde{g}_2(q_z)$. At large $\sigma$ we find two symmetric peaks at some $q_z$ close to $\pm |\Delta k|$, meaning that the oscillations are practically harmonic. However, as $\sigma$ is decreased (at fixed values of the other parameters) additional peaks occur, and the oscillations become more and more anharmonic. 
As an example, in Fig.~\ref{fig:g2_fourier} we show the Fourier transform of $g_2$ evaluated in the $1000 \, z_\mathrm{NL} \leq z \leq 2000 \, z_\mathrm{NL}$ range with the same parameters as the blue curves of Figs.~3 and~4(a) of the main text. One can clearly see the main peaks at $q_z \simeq \pm 0.8 \, |\Delta k|$ and the secondary peaks at $q_z \simeq \pm 0.5 \, |\Delta k|$ and $q_z \simeq \pm 0.2 \, |\Delta k|$. We also noticed that, once a new peak has appeared, its position remains essentially $\sigma$-independent as $\sigma$ is lowered further.
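The peak-extraction procedure itself is straightforward. The sketch below applies it to a toy signal (a $1/z^2$ decay modulated at a known frequency, standing in for the actual $g_2$; the numbers are illustrative only) and recovers the modulation frequency from the discrete Fourier transform:

```python
import numpy as np

# Toy stand-in for g2(z): a 1/z^2 decay modulated at a known frequency q0,
# mimicking the large-z oscillating behavior discussed above.
q0 = 0.8                                  # stand-in for ~0.8 |Delta k|
z = np.linspace(1000.0, 2000.0, 4096)     # large-z window, as in the text
g2 = -np.cos(q0 * z) / z**2

# Discrete Fourier transform over the window; locate the dominant peak
# (restricting to q_z > 0, as in the figure).
spectrum = np.abs(np.fft.rfft(g2 - g2.mean()))
qz = 2 * np.pi * np.fft.rfftfreq(z.size, d=z[1] - z[0])
q_peak = qz[np.argmax(spectrum)]
print(q_peak)  # ~ 0.8, within the frequency resolution 2*pi/(z window)
```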
\section{Introduction}\label{sec1} We will consider, in this paper, the topological and ergodic dynamics of Bohr almost periodic motions of topological abelian semigroups acting continuously on a compact metric space $(X,d)$. Let $f\colon\mathbb{Z}_+\times X\rightarrow X;\ (t,x)\mapsto f(t,x)$ be a discrete-time semiflow on the space $X$. Recall that a point $x\in X$ or a motion $f(\centerdot,x)$ is called \textit{almost periodic of Bohr} if and only if \begin{itemize} \item for any $\varepsilon>0$ there exists a relatively dense subset $\{\tau_n\}$ of $\mathbb{Z}_+$ which possesses the following property: \begin{equation*}d(f(t,x),f(t+\tau_n,x))<\varepsilon\quad\forall t\in\mathbb{Z}_+.\end{equation*} Here a subset $S$ of $\mathbb{Z}_+$ is called `relatively dense' if one can find an integer $L>0$ such that $S\cap[n,n+L)\not=\varnothing\ \forall n\in\mathbb{Z}_+$; cf.~\cite[Definition~V7.08]{NS}. \end{itemize} This is equivalent to saying that $f_x\colon\mathbb{Z}_+\rightarrow X$, defined by $t\mapsto f(t,x)$, is a Bohr almost periodic function (Bohr 1925, 1926). See \cite[Definition~V8.01]{NS} for the $\mathbb{R}$-action system case. By a topological semigroup, like $\mathbb{R}_+^d, \mathbb{Z}_+^d$ and $\mathbb{N}^d$, we mean a semigroup endowed with a Hausdorff topology that renders the algebraic operation, sum $+$ or product $\circ$, continuous. We will now extend Bohr almost periodicity to topological semigroups acting continuously on the compact metric space $X$. From here on out, assume $G$ is a topological semigroup with a product operation $\circ$ and an identity $e$, which acts continuously from the left on the compact metric space $X$, \begin{gather*} \pi\colon G\times X\rightarrow X, \end{gather*} simply written as \begin{equation*} G\curvearrowright X;\quad (g,x)\mapsto g(x)=\pi(g,x). \end{equation*} Here are the basic notions we study in this paper. \begin{Adefn}\label{Adef1} Let $(G,\circ)$ be a topological semigroup with an identity element $e$. 
\begin{enumerate} \item[(a)] A subset $T$ of $G$ is called \textit{left syndetic} in $G$ if there exists a compact subset $L$ of $G$ such that $T\circ L=G$ (cf.~\cite[Definition~2.02]{GH}). \item[(b)] Further, a point $x\in X$ is called \textit{Bohr almost periodic for $G\curvearrowright X$} if for any $\varepsilon>0$ the $\varepsilon$-error period set \begin{equation*} \mathbb{P}(\varepsilon)=\left\{\tau\in G\,|\, d(g(x),\tau\circ g(x))<\varepsilon\ \forall g\in G\right\} \end{equation*} is left syndetic in $G$ in the sense of (a). \end{enumerate} Clearly, if $x$ is Bohr almost periodic for $G\curvearrowright X$, then each point of the orbit $G(x)$ is also Bohr almost periodic for $G\curvearrowright X$. Thus we may say $G(x)$ is Bohr almost periodic and, moreover, the subsystem $G\curvearrowright\textrm{cls}_X{G(x)}$ is von Neumann almost periodic (cf.~\cite[Remark~4.32]{GH} and \cite{ChD}). \end{Adefn} Since every syndetic subset of $\mathbb{Z}_+$ is also relatively dense, Def.~\ref{Adef1}(b) is a generalization of the classical Bohr almost periodic motion. However, for a $\mathbb{Z}_+$-action, an almost periodic motion in the sense of \cite{NS} is not necessarily Bohr almost periodic in the sense of Def.~\ref{Adef1}(b); this is because a relatively dense subset $S$ of $\mathbb{Z}_+$ like $S=3\mathbb{N}$ need not be syndetic in the sense of Def.~\ref{Adef1}(a). See, e.g.,~\cite{ChD} for some comparison with uniform recurrence and von Neumann almost periodicity. In particular, since $G$ is not necessarily a group here, we cannot even be sure that the orbit closure $\textrm{cls}_X{G(x)}$ of a Bohr almost periodic point $x$ for $G\curvearrowright X$ is minimal when $G\not=\mathbb{Z}_+^d$ and $\not=\mathbb{R}_+^d$ (cf.~\cite[Corollary~2.7]{ChD}). It should be noted here that the $\varepsilon$-error period set $\mathbb{P}(\varepsilon)$ in Def.~\ref{Adef1}(b) is not required to be a \textit{sub-semigroup} of $G$. 
Otherwise it is named ``uniform regular almost periodicity''; the latter is systematically studied in \cite{Eg, MR} when $G$ is assumed to be a topological group. In this paper, we shall consider the topological structure (Theorem~\ref{Athm4} in $\S\ref{sec2}$), the probabilistic structure (Theorem~\ref{Athm6} in $\S\ref{sec3}$), and the pointwise ergodic behavior (Theorem~\ref{Athm9} in $\S\ref{sec4}$), of Bohr almost periodic motions of topological abelian semigroups acting continuously on the compact metric space $X$. The theory of Bohr almost periodic motions of abelian semigroups presents some essential difficulties compared with that of abelian groups. For example, this theory will involve the Haar measure of a locally compact second countable abelian semigroup; but such a semigroup, like $G=\mathbb{R}_+$ and $\mathbb{Z}_+$, need not have a classical Haar measure, since the Lebesgue and the counting measures are not translation invariant; for example, for $L\colon x\mapsto x+1$ from $\mathbb{Z}_+$ to $\mathbb{Z}_+$, $3=\#\{0,1,2\}\not=\#L^{-1}\{0,1,2\}=2$. \section{Topological structures}\label{sec2} The following theorem establishes the equicontinuity of Bohr almost periodic motions of a topological semigroup $G$ acting continuously on a compact metrizable space $X$; it generalizes A.A. Markov's Lyapunov stability theorem for continuous-time flows with $G=\mathbb{R}$ (cf.~\cite[Theorem~V8.05]{NS}). \begin{Athm}\label{Athm2} Let $y\in X$ be Bohr almost periodic for $G\curvearrowright X$. Then the family of transformations \begin{gather*} \left\{g\colon\mathrm{cls}_X{G(y)}\rightarrow\mathrm{cls}_X{G(y)};\ x\mapsto g(x)\right\}_{g\in G} \end{gather*} is equicontinuous; that is to say, \begin{itemize} \item for any $\varepsilon>0$ there exists a $\delta>0$ such that if $x,z\in\mathrm{cls}_X{G(y)}$ with $d(x,z)<\delta$ then $d(g(x),g(z))<\varepsilon$ for every $g\in G$. 
\end{itemize} \end{Athm} \begin{rem} See~\cite[Theorem~4.37]{GH} for the group case under the guise that $G\curvearrowright X$ is (Bohr) almost periodic. If the compactness of the underlying space $X$ is replaced by the uniform continuity of the transformations $g\colon X\rightarrow X$ for all $g\in G$ (cf.~\cite[Definition~4.36]{GH}) but $G$ is discrete there, then the statement of this theorem still holds by the same argument below. \end{rem} \begin{proof} Let $\varepsilon>0$ be arbitrarily given. Then, because of the Bohr almost periodicity of the point $y$ for $G\curvearrowright X$, there exists a compact subset $L=L(\varepsilon)$ of $G$ for which the $\frac{\varepsilon}{3}$-error period set $$ \mathbb{P}(\varepsilon/3)=\left\{\tau\in G\,|\,d(g(y),\tau\circ g(y))<\frac{\varepsilon}{3}\ \forall g\in G\right\} $$ is such that $\mathbb{P}(\varepsilon/3)\circ L=G$. Since $\textrm{cls}_X{G(y)}$ and $L$ are both compact and $G$ acts continuously on $X$, there exists a number $\delta>0$ such that for any two points $x,z\in\textrm{cls}_X{G(y)}$ it will follow from $d(x,z)<\delta$ that $$ d(\ell(x),\ell(z))<\frac{\varepsilon}{3}\quad\forall \ell\in L. $$ To prove Theorem~\ref{Athm2}, it is sufficient to show that if $d(t_1(y),t_2(y))<\delta$ for two elements $t_1,t_2\in G$, then $d(g\circ t_1(y),g\circ t_2(y))<\varepsilon$ for all $g\in G$. For that, we choose an arbitrary $g\in G$ and can then pick two elements $\ell\in L$ and $\tau\in\mathbb{P}(\varepsilon/3)$ such that $g=\tau\circ\ell$. Hence \begin{equation*}\begin{split} d(g\circ t_1(y),g\circ t_2(y))&=d(\tau\circ\ell\circ t_1(y),\tau\circ\ell\circ t_2(y))\\ &<d(\ell\circ t_1(y),\ell\circ t_2(y))+\frac{2\varepsilon}{3}\\ &<\varepsilon. \end{split}\end{equation*} This concludes the proof of Theorem~\ref{Athm2}. \end{proof} For our further result, we need a lemma in which we assume $G$ is commutative. 
\begin{Alem}\label{Alem3} Let $y\in X$ be a Bohr almost periodic point for $G\curvearrowright X$ and $t_n,s_n\in G$, where $G$ is a topological abelian semigroup. Then if $\{t_n(y)\}_1^\infty$ and $\{s_n(y)\}_1^\infty$ are two Cauchy sequences in $X$, the sequence $\{t_n\circ s_n(y)\}_1^\infty$ is also Cauchy in $X$. \end{Alem} \begin{proof} Let $\varepsilon>0$ be given. Then by Theorem~\ref{Athm2}, one can find some number $\delta=\delta(\varepsilon/2)>0$ so that for any $x,z\in\textrm{cls}_X{G(y)}$ with $d(x,z)<\delta$ there follows \begin{equation*} d(t(x),t(z))<\frac{\varepsilon}{2}\quad \forall t\in G. \end{equation*} Now, according to the hypotheses of the lemma, there exists an $N>0$ such that for any $n\ge N$ and $m\ge N$ we have $$ d(t_n(y), t_m(y))<\delta\quad \textrm{and}\quad d(s_n(y), s_m(y))<\delta. $$ From these inequalities we get $$ d(s_n\circ t_n(y), s_n\circ t_m(y))<\frac{\varepsilon}{2},\quad d(t_m\circ s_n(y), t_m\circ s_m(y))<\frac{\varepsilon}{2}, $$ and hence, by the commutativity of $G$, we obtain that $$ d(t_n\circ s_n(y), t_m\circ s_m(y))<\varepsilon. $$ This completes the proof of Lemma~\ref{Alem3}. \end{proof} The structure of a set consisting of a Bohr almost periodic motion is characterized by the following theorem, whose $\mathbb{R}$-action case is due to V.V.~Nemytskii (cf.~\cite[Theorem~V8.16]{NS}). \begin{Athm}\label{Athm4} Let $G\curvearrowright X$ be a continuous action of a topological abelian semigroup $G$ on the compact metric space $X$. 
If $y\in X$ is a Bohr almost periodic point for $G\curvearrowright X$, then $\mathrm{cls}_X{G(y)}$ is a compact abelian semigroup having a binary operation $\diamond$ with \begin{equation*} g(y)\diamond h(y)=(g\circ h)(y)\quad \forall g,h\in G, \end{equation*} such that if $G$ has an identity $e$, then $\mathrm{cls}_X{G(y)}$ has the identity $y$ and that $G\curvearrowright\mathrm{cls}_X{G(y)}$ is characterized by the translation $(g,x)\mapsto g(x)=g(y)\diamond x$ for all $g\in G$ and $x\in\mathrm{cls}_X{G(y)}$. \end{Athm} \begin{rem} See \cite{ST} in the case $G=\mathbb{R}$ and see \cite[Theorem~4.48]{GH} in the abelian group case for the related results. \end{rem} \begin{proof} Let $y$ be Bohr almost periodic for $G\curvearrowright X$ and simply write $K=\textrm{cls}_X{G(y)}$. We will define in $K$ a commutative binary operation $\diamond$ as follows: First, let $x,z\in G(y)$, i.e., $x=t_x(y)$ and $z=t_z(y)$ for some $t_x,t_z\in G$; the identity of the semigroup $K$ will be the point $y=e(y)$ if $G$ contains an identity $e$; we then define the commutative binary operation as $x\diamond z=t_x\circ t_z(y)=z\diamond x$. If $x=g(y)=g^\prime(y)$ for some pair $g,g^\prime\in G$ with $g\not=g^\prime$ and $z=t_z(y)$, then $$ g\circ t_z(y)=t_z(g(y))=t_z(g^\prime(y))=g^\prime\circ t_z(y). $$ Thus $x\diamond z$ is well defined and commutative in $G(y)$. This binary operation $\diamond$ in $G(y)$ clearly satisfies the semigroup axioms and it is continuous. We now need to extend this operation $\diamond$ by continuity to the whole of $K$. For this, let $x\in K$ with $x=\lim_{n\to\infty}t_n^x(y)$ and let $z\in K$ with $z=\lim_{n\to\infty}t_n^z(y)$. Then, by definition, $$ x\diamond z=\lim_{n\to\infty}t_n^x\circ t_n^z(y)=z\diamond x. $$ The above limit exists, since by Lemma~\ref{Alem3} the sequence $t_n^x\circ t_n^z(y)$ is Cauchy in $X$ and $X$ is complete. Clearly this binary operation satisfies the algebraic axioms required by a semigroup. 
Now we shall prove the continuity of the operation $\diamond$ defined above. For this, we let $x=\lim_{n\to\infty}t_n^x(y)$ and $z=\lim_{n\to\infty}t_n^z(y)$, and let there be given any $\varepsilon>0$. We define $\delta=\delta(\varepsilon/3)$ by Theorem~\ref{Athm2}. Assume $$ d(x,x^\prime)<\frac{\delta}{3},\ x^\prime=\lim_{n\to\infty}t_n^{x^\prime}(y)\quad \textrm{and}\quad d(z,z^\prime)<\frac{\delta}{3},\ z^\prime=\lim_{n\to\infty}t_n^{z^\prime}(y). $$ There is some $N>0$ such that for any $n\ge N$, we have $$ d(x,t_n^x(y))<\frac{\delta}{3},\ d(x^\prime,t_n^{x^\prime}(y))<\frac{\delta}{3}, $$ and $$ d(z,t_n^z(y))<\frac{\delta}{3},\ d(z^\prime,t_n^{z^\prime}(y))<\frac{\delta}{3}. $$ Then we get $$ d(t_n^x(y),t_n^{x^\prime}(y))<\delta\ \textrm{ and }\ d(t_n^z(y),t_n^{z^\prime}(y))<\delta\quad \forall n\ge N. $$ Further, by the triangle inequality and the equicontinuity, we get \begin{equation*}\begin{split} d(x\diamond z,x^\prime\diamond z^\prime)&\le d(x\diamond z,x^\prime\diamond z)+d(x^\prime\diamond z,x^\prime\diamond z^\prime)\\ &=\lim_{n\to\infty}d(t_n^x\circ t_n^z(y), t_n^{x^\prime}\circ t_n^z(y))+\lim_{n\to\infty}d(t_n^{x^\prime}\circ t_n^z(y), t_n^{x^\prime}\circ t_n^{z^\prime}(y))\\ &\le\frac{\varepsilon}{3}+\frac{\varepsilon}{3}<\varepsilon, \end{split}\end{equation*} because each term under the limits is smaller than $\frac{\varepsilon}{3}$ whenever $n\ge N$. Therefore, under this binary operation $\diamond$, $K$ is a compact abelian semigroup with the required properties. This completes the proof of Theorem~\ref{Athm4}. \end{proof} The above proof is an improvement of the necessity of \cite[Theorem~V.8.16]{NS} for $G=\mathbb{R}$. When $G=\mathbb{Z}$, then $G\curvearrowright K$ in Theorem~\ref{Athm4} is exactly a Kronecker system; cf.~\cite[Theorem~1.9]{Fur}. In fact, we note here that if $G\curvearrowright X$ is topologically transitive and equicontinuous, then the statement of Theorem~\ref{Athm4} still holds by analogous arguments. 
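For the simplest concrete instance of Theorem~\ref{Athm4}, take $G=\mathbb{Z}_+$ acting on the circle $X=\mathbb{R}/\mathbb{Z}$ by an irrational rotation, with $y=0$: the orbit closure is all of $X$, and the operation $\diamond$ extends to addition mod $1$. The sketch below (our illustration, using a brute-force search for approximating orbit times) checks the defining relation $x\diamond z=\lim_n t_n^x\circ t_n^z(y)$ numerically:

```python
import math

alpha = math.sqrt(2) - 1                    # irrational rotation angle
act = lambda g, x: (x + g * alpha) % 1.0    # G = Z_+ acting on X = R/Z

def approx_time(x, N=200000, tol=1e-4):
    """Return t in Z_+ whose orbit point t(y) = t*alpha mod 1 is tol-close
    to x; such t exists since the orbit of y = 0 is dense in [0, 1)."""
    for t in range(N):
        if abs(act(t, 0.0) - x) < tol:
            return t
    raise ValueError("no approximating time found")

# Two points of cls(G(y)) = [0, 1) and approximating orbit times t_x, t_z:
x, z = 0.3, 0.55
tx, tz = approx_time(x), approx_time(z)

# The operation of Theorem 4: x <> z = lim (t_x o t_z)(y); for the
# rotation this is just addition mod 1.
diamond = act(tx + tz, 0.0)
print(abs(diamond - (x + z) % 1.0) < 1e-3)  # -> True
```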
In addition, if $G$ is assumed to be a topological abelian group, one can further show that $\mathrm{cls}_X{G(y)}$ is a compact abelian group. \section{Probabilistic structures}\label{sec3} We now turn to the probabilistic, or ergodic, theory of Bohr almost periodic motions of topological abelian semigroups acting continuously on compact metric spaces in these last two sections. The classical Haar measure on a locally compact second countable group possesses the property that it assigns positive measure to every open subset of the group. However, even for a compact countable abelian semigroup, this is no longer the case. For example, let $\bar{\mathbb{Z}}_+=\mathbb{Z}_+\cup\{\infty\}$ be the one-point compactification of the discrete additive semigroup $(\mathbb{Z}_+,+)$, endowed with the multiplicative binary operation $\circ$ as follows: \begin{equation*} s\circ t=s+t\quad \forall s,t\in\bar{\mathbb{Z}}_+. \end{equation*} Then the atomic probability measure $\delta_{\infty}$ concentrated at the point $\infty$ is the unique translation-invariant probability measure on $(\bar{\mathbb{Z}}_+,\circ)$. As mentioned before, the classical Haar-Weil theorem, which asserts the existence and uniqueness of Haar measures for locally compact second countable (abbreviated lcsc) groups, does not work in our situation. However, we will need the following preliminary lemma. \begin{Alem}[Haar measure]\label{Alem5} Let $(K,\diamond)$ be a compact second countable abelian \textit{semigroup}. Then $K\curvearrowright K$, given by $(g,k)\mapsto g(k)=g\diamond k$ for all $g,k\in K$, has a unique $K$-invariant probability measure $m_K$, called the \textit{Haar measure} of $K$ as in the lcsc group case. 
\end{Alem} \begin{proof} \textit{Existence of Haar measure}: First of all, by the Markov-Kakutani fixed-point theorem, there exists at least one common invariant Borel probability measure, written $m_K$, on $K$ for all the commuting continuous transformations \begin{equation*} \left\{L_g\colon K\rightarrow K;\quad k\mapsto g(k)\right\}_{g\in K}. \end{equation*} Since $K$ is commutative, clearly $m_K$ is both left- and right-invariant for all $g\in K$; in other words, \begin{gather*} m_K\circ L_g^{-1}=m_K=m_K\circ R_g^{-1}\quad \forall g\in K. \end{gather*} \textit{Unicity of Haar measure}: Suppose there exists another such Borel probability measure $\mu$ on $K$. Using the invariance of $m_K$ and $\mu$ together with Fubini's theorem, we get for any $\varphi\in C(K)$ \begin{equation*}\begin{split} \int_K\varphi d\mu&=\int_K\left(\int_K\varphi(y)d\mu(y)\right)dm_K(x)\\ &=\int_K\left(\int_K\varphi(R_x(y))d\mu(y)\right)dm_K(x)\\ &=\int_K\left(\int_K\varphi(L_y(x))dm_K(x)\right)d\mu(y)\\ &=\int_K\left(\int_K\varphi(x)dm_K(x)\right)d\mu(y)\\ &=\int_K\varphi dm_K. \end{split}\end{equation*} Since $\varphi$ is arbitrary, we conclude that $m_K=\mu$ and the asserted uniqueness follows. This thus completes the proof of Lemma~\ref{Alem5}. \end{proof} The existence proofs of the classical Haar-Weil theorem presented in the available literature for an lcsc group $G$ are complicated, and the inversion $g\mapsto g^{-1}$ from $G$ onto $G$ plays an important role in them (cf.~\cite[Theorem~14.14]{Roy}). To get around the difficulty caused by the absence of inversion, we have employed the Markov-Kakutani theorem in the above proof of Lemma~\ref{Alem5}. In fact, this is not a new method for proving the existence of invariant measures. The credit goes back to at least Mahlon Day (1961). 
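The $\bar{\mathbb{Z}}_+$ example above has a finite analogue (our toy example, not from the text) on which Lemma~\ref{Alem5} can be tested directly: the saturating-addition semigroup $K=\{0,1,\dotsc,n\}$ with $s\diamond t=\min(s+t,n)$, whose unique invariant measure is the point mass $\delta_n$. The sketch below finds it by iterating the average of the pushforwards $(L_g)_*\mu$, in the spirit of the Markov-Kakutani argument:

```python
import numpy as np

n = 5
op = lambda s, t: min(s + t, n)   # saturating addition s <> t on K = {0,...,n}

def pushforward(g, mu):
    """Pushforward (L_g)_* mu of a measure mu under the translation k -> g <> k."""
    out = np.zeros(n + 1)
    for k in range(n + 1):
        out[op(g, k)] += mu[k]
    return out

# Markov-Kakutani-style iteration: average the pushforwards over all g in K,
# starting from the uniform measure; the limit is the unique invariant measure.
mu = np.full(n + 1, 1.0 / (n + 1))
for _ in range(500):
    mu = sum(pushforward(g, mu) for g in range(n + 1)) / (n + 1)

print(np.round(mu, 6))  # -> [0. 0. 0. 0. 0. 1.], the point mass at n
```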
Recall that a $\mathbb{Z}_+$-action dynamical system $f\colon\mathbb{Z}_+\times X\rightarrow X$ on the compact metric space $X$ is called \textit{strictly ergodic} if it consists of a unique ergodic set and all points of the underlying space $X$ are density points with respect to this measure; see \cite[Definition~VI.9.33]{NS}. It is a well-known interesting fact that every minimal set consisting of Bohr almost periodic motions $f(\centerdot,x)$ is strictly ergodic (cf.~\cite[Theorem~VI.9.34]{NS}). To prove this one needs the equicontinuity of a Bohr almost periodic motion $f(\centerdot,x)$ and an ergodic theorem of Bohr that says the time-mean value \begin{equation*} \lim_{N\to\infty}\frac{1}{N}\sum_{t=0}^{N-1}\varphi(f(t,x)) \end{equation*} exists for every $\varphi\in C(X)$. Although there is no Bohr theorem at hand here, we can extend this to abelian semigroups in another way as follows: \begin{Athm}\label{Athm6} Let $G\curvearrowright X$ be a continuous action of a topological abelian semigroup $G$ on the compact metric space $X$. If $y$ is a Bohr almost periodic point with $X=\mathrm{cls}_XG(y)$, then $G\curvearrowright X$ is \textit{uniquely ergodic}; i.e., it consists of a unique ergodic set (and all points of $X$ are density points with respect to this measure if $G$ has inversion). In particular, if additionally $G=\mathbb{Z}_+^d$ or $\mathbb{R}_+^d$, then $G\curvearrowright X$ is strictly ergodic. \end{Athm} \begin{proof} Let $\mathscr{M}_{\textsl{inv}}(G\curvearrowright X)$ denote the weak-* compact convex set of all the $G$-invariant Borel probability measures on $X$. Since $G$ is commutative, it follows from the Markov-Kakutani fixed-point theorem that $\mathscr{M}_{\textsl{inv}}(G\curvearrowright X)\not=\varnothing$. 
It is easy to see that $\mu\in\mathscr{M}_{\textsl{inv}}(G\curvearrowright X)$ is ergodic if and only if it is an extremal point of the convex set $\mathscr{M}_{\textsl{inv}}(G\curvearrowright X)$ (see, e.g., \cite[Proposition~3.4]{Fur}). Next, let $\mu\in\mathscr{M}_{\textsl{inv}}(G\curvearrowright X)$ be arbitrarily given. According to Theorem~\ref{Athm4}, for every $g\in G$, the left-translation $$ L_g\colon X\rightarrow X;\quad x\mapsto g(y)\diamond x $$ leaves the measure $\mu$ invariant, where $\diamond$ is the commutative multiplication in $X$ defined by Theorem~\ref{Athm4}. Since $G(y)$ is dense in $X$, we get that for every $z\in X$ the left-translation $L_z\colon X\rightarrow X$, given by $x\mapsto z\diamond x$, leaves $\mu$ invariant as well; this is because if $g_n(y)\to z$ then $\varphi(L_{g_n}(x))\to\varphi(L_z(x))$ for any $\varphi\in C(X)$. Therefore, by Lemma~\ref{Alem5}, $\mu$ is just the Haar probability measure $m_X$ of $(X,\diamond)$, which is left and right invariant since $X$ is commutative and compact. This means that $\mathscr{M}_{\textsl{inv}}(G\curvearrowright X)$ consists of exactly a single point $\mu$ (such that all points of $X$ are density points with respect to this measure if $G$ has inversion). In particular, if $G=\mathbb{Z}_+^d$ or $\mathbb{R}_+^d$, then from \cite[Corollary~2.7]{ChD} it follows that $X$ is $G$-minimal and hence $G\curvearrowright X$ is strictly ergodic. This completes the proof of Theorem~\ref{Athm6}. \end{proof} \section{Bohr pointwise convergence theorem}\label{sec4} Conversely, our Theorem~\ref{Athm6} now yields Bohr's pointwise convergence theorem for abelian semigroups acting continuously on a compact metric space. From now on, let $(G,\circ)$ be an lcsc semigroup, where $\circ$ denotes the binary operation in $G$. By a \textit{Radon measure} on $G$ we mean here a Borel measure that is finite on each compact subset and positive on some compact subset of $G$. 
The following concept is a generalization of the classical Haar measure of lcsc groups. \begin{Adefn}\label{def7} A Radon measure $\lambda_G$ on $G$ is called a (left) \textit{quasi-Haar measure} of $G$ (for discrete $G$, we take this to be the counting measure $\#$ on $G$), if for any compact subset $K$ of $G$ there holds $\lambda_G(K)=\lambda_G\left(g\circ K\right)$ for any $g\in G$. \end{Adefn} For example, the standard Lebesgue measure on $G=\mathbb{R}_+^d$ is a left and right quasi-Haar measure, but not a Haar measure. Clearly, the counting measure $\#$ on $G=\mathbb{Z}_+^d$ is a quasi-Haar, but not Haar, measure. Both $\mathbb{R}_+^d$ and $\mathbb{Z}_+^d$ are such that the left translation $L_g\colon G\rightarrow G$ is continuous and injective for each $g\in G$. In fact, if $G$ is discrete and if $L_g\colon t\mapsto g\circ t$ is continuous and injective for all $g\in G$, then $\lambda_G=\#$ is a left quasi-Haar measure on $G$. We consider another lcsc discrete semigroup with no inversion. Let $G$ be the set of all nonsingular, nonnegative, integer $n\times n$ matrices. Since $G$ may be thought of as a discrete subspace of $\mathbb{R}_+^{n\times n}$, it is an lcsc discrete semigroup under the standard matrix multiplication such that $L_g\colon G\rightarrow G$ is continuous and injective for each $g\in G$. Since the inverse of a nonnegative nonsingular matrix is not necessarily nonnegative, for example, \begin{equation*} \left[\begin{matrix}1&1\\0&1\end{matrix}\right]^{-1}=\left[\begin{matrix}1&-1\\0&1\end{matrix}\right], \end{equation*} $G$ is only a semigroup, but not a group, with the standard matrix multiplication operation. \begin{Adefn}\label{def8} Let $\lambda_G$ be a left quasi-Haar measure on $G$. We refer to a sequence of compact subsets $\mathcal{F}=(F_n)_{n=1}^\infty$ of $G$ as a \textit{F{\o}lner sequence} w.r.t. 
$\lambda_G$, if \begin{equation*} \lim_{n\to\infty}\frac{\lambda_G\left(F_n\vartriangle g\circ F_n\right)}{\lambda_G(F_n)}=0\quad \forall g\in G. \end{equation*} Here $\vartriangle$ means the symmetric difference of sets. \end{Adefn} It is well known that if $G$ is an \textsl{amenable lcsc} group, then one can always find a F{\o}lner sequence $\mathcal{F}=(F_n)_1^\infty$ of compact subsets of $G$ w.r.t. the left Haar measure $\lambda_G$. An abelian lcsc group is amenable. For any Borel probability measure $\mu$ in the compact metric space $X$, let $\mathscr{C}_\textsl{b}^\mu(X)$ denote the set of all bounded Borel real/complex functions defined on the space $X$ whose discontinuities form a set of $\mu$-measure zero. \begin{Athm}\label{Athm9} Let $G$ be an lcsc abelian semigroup with a quasi-Haar measure $\lambda_G$ such that $L_g\colon G\rightarrow G$ is injective for each $g\in G$. Let $G\curvearrowright X$ be a Bohr almost periodic, continuous action of $G$ on the compact metric space $X$, which preserves a Borel probability measure $\mu$ in $X$. Then for any F{\o}lner sequence $\mathcal{F}$ w.r.t. $\lambda_G$ of $G$ and any $\varphi\in \mathscr{C}_\textsl{b}^\mu(X)$, \begin{equation*} \frac{1}{\lambda_G(F_n)}\int_{F_n}\varphi(g(x))d\lambda_G(g)\to\int_X\varphi d\mu\quad \textrm{as }n\to\infty, \end{equation*} uniformly for $x\in\mathrm{supp}(\mu)$. \end{Athm} \begin{proof} First of all, we may assume that $X=\mathrm{supp}(\mu)$. Let $\mathcal{F}=(F_n)_{n=1}^\infty$ be an arbitrary F{\o}lner sequence of $G$ with respect to the quasi-Haar measure $\lambda_G$ and let $\varphi\in \mathscr{C}_\textsl{b}^\mu(X)$ be given. For any point $x\in X$ and $n\ge1$, using the Riesz representation theorem, we first define an empirical probability measure $\mu_{x,n}$ in $X$ as follows: $$ \int_X\psi(y)d\mu_{x,n}(y)=\frac{1}{\lambda_G(F_n)}\int_{F_n}\psi(t(x))d\lambda_G(t),\quad \forall\psi\in C(X). 
$$ Let $\tilde{\mu}$ be an arbitrary limit point of the sequence $(\mu_{x,n})_{n=1}^\infty$ under the weak-* topology; then from the basic property of F{\o}lner sequences it follows that $\tilde{\mu}$ is $G$-invariant. Thus $\tilde{\mu}=\mu$ by Theorem~\ref{Athm6} and then $\mu_{x,n}$ converges weakly-* to $\mu$ for all $x\in X$. This implies by \cite[Lemma~2.1]{Dai} that \begin{equation*} \frac{1}{\lambda_G(F_n)}\int_{F_n}\varphi(t(x))d\lambda_G(t)\to\int_X\varphi d\mu\quad \textrm{as }n\to\infty \end{equation*} for each point $x\in X$. To prove the desired uniformity, we may assume $\int_X\varphi d\mu=0$ without loss of generality. Suppose, to the contrary, that there exist some $\varepsilon>0$ and a sequence of points $(x_{n_k})_{k=1}^\infty$ in $X$ so that \begin{equation*} \left|\frac{1}{\lambda_G(F_{n_k})}\int_{F_{n_k}}\varphi(t(x_{n_k}))d\lambda_G(t)\right|\ge\varepsilon,\quad \forall k>0. \end{equation*} By passing to a subsequence of $(n_k)$ if necessary, we can assume $$ \mu_{x_{n_k},n_k}\xrightarrow[]{\textrm{weakly-*}}\mu\quad \textrm{as }k\to\infty. $$ Then by \cite[Lemma~2.1]{Dai} once again $$\frac{1}{\lambda_G(F_{n_k})}\int_{F_{n_k}}\varphi(t(x_{n_k}))d\lambda_G(t)\to0,$$ which is a contradiction. This concludes the proof of Theorem~\ref{Athm9}. \end{proof} We note that if the F{\o}lner sequence $\mathcal{F}$ of $G$ in Theorem~\ref{Athm9} satisfies the additional essential \textit{Shulman Condition}, that is, for some $C>0$ and all $n>0$ we have \begin{equation*} \lambda_G\left({\bigcup}_{k<n}F_k^{-1}F_n\right)\le C\lambda_G(F_n), \end{equation*} where $G$ is required to be a \textit{group} and $F_k^{-1}=\{g^{-1}\colon g\in F_k\}$, then from the pointwise ergodic theorem of Lindenstrauss~\cite[Theorem~1.2]{Lin} it follows that the pointwise convergence holds for any $\varphi\in L^1(X,\mathscr{B}(X),\mu)$. 
Since the Shulman condition is lacking in our context, however, we cannot generalize the pointwise convergence in Theorem~\ref{Athm9} to $L^1$-functions, nor even to $L^\infty$-functions. Indeed, for $G=\mathbb{Z}$, let $\mathcal{F}$ be chosen as $$ F_n=\left\{n^2, n^2+1, \dotsc,n^2+n\right\}; $$ then A.~del Junco and J.~Rosenblatt showed in \cite{JR} that there always exists some $\varphi\in L^\infty(X,\mathscr{B}(X),\mu)$ such that $$\frac{1}{\#F_n}\sum_{g\in F_n}\varphi(g(x))$$ does not have a limit almost everywhere, if $(X,\mathscr{B}(X),\mu)$ is nontrivial. \section*{\textbf{Acknowledgments}} This work was partly supported by National Natural Science Foundation of China grants $\#$11431012 and $\#$11271183 and by PAPD of Jiangsu Higher Education Institutions.
\section{Introduction} The dynamics of many systems can be represented by matrices and matrix products. The analysis of such systems leads to solving reachability questions in matrix semigroups, which is an essential part of verification procedures, control theory questions, biological systems predictability, security, etc. \cite{BN16,BM04,COW13,COW16,ding2015,GOW15,KLZ16,MT17,OPW15,OW14,OW14a}. Many nontrivial algorithms for decision problems on matrix semigroups have been developed for matrices under different constraints on the dimension, the size of a generating set or for specific subclasses of matrices: e.g. commutative matrices \cite{BBC+96}, row-monomial matrices \cite{LP04} or $2 \times 2$ matrix semigroups generated by non-singular integer matrices~\cite{PS17}, upper-triangular integer matrices \cite{Honkala14}, matrices from the special linear group \cite{BHP17,CK05}, etc. Despite visible interest in this research domain, we still see a significant lack of algorithms and complexity results for answering decision problems in matrix semigroups. Many computational problems for matrix (semi)groups are computationally hard starting from dimension two and very often become undecidable from dimension three or four even in the case of integer matrices. The central decision problem in matrix semigroups is the membership problem, which was originally considered by A. Markov in 1947 \cite{Markov47}. Let $S = \langle G \rangle$ be a matrix semigroup finitely generated by a generating set of square matrices~$G$. The {\em membership problem} is to decide whether or not a given matrix $M$ belongs to the matrix semigroup $S$. By restricting~$M$ to be the identity matrix we call the problem the {\em identity problem}. \begin{problem}[Identity problem] Let $S = \langle G \rangle$, where $G$ is a finite set of $n$-dimensional matrices over $\mathbb{K}=\mathbb{Z},\mathbb{Q},\mathbb{R},\mathbb{C},\ldots$. Is the identity matrix in the semigroup, i.e., does $\bm{I}\in S$ hold?
\end{problem} The identity problem is computationally equivalent to another fundamental problem -- the \emph{subgroup problem} (i.e., to decide whether a semigroup contains a subgroup) -- since any set of matrices whose product equals the identity also generates a group \cite{CK05}.\footnote{The product of matrices which is equal to the identity is still the identity element after a cyclic shift, so every element of this product has an inverse.} The decidability status of the identity problem was unknown for a long time for matrix semigroups of any dimension, see Problem 10.3 in ``Unsolved Problems in Mathematical Systems and Control Theory'' \cite{BM04}, but it was shown in \cite{BP10} to be undecidable for $48$ matrices from $\mathbb{Z}^{4 \times 4}$ by proving that the identity correspondence problem (a variant of the Post correspondence problem over a group alphabet) is undecidable, and by embedding pairs of words over a free group alphabet into {\rm SL}\ensuremath{(4,\mathbb{Z})}\xspace as two blocks on the main diagonal via the morphism $f$ defined by $f(a)=\begin{psmallmatrix}1&2\\0&1\end{psmallmatrix}$, $f(a^{-1})=\begin{psmallmatrix}1&-2\\0&1\end{psmallmatrix}$, $f(b)=\begin{psmallmatrix}1&0\\2&1\end{psmallmatrix}$ and $f(b^{-1})=\begin{psmallmatrix}1&0\\-2&1\end{psmallmatrix}$. In the seminal paper of Paterson in 1970, see \cite{Paterson70}, an injective morphism from pairs of words over the alphabet $\Sigma=\{a,b\}$ into $3\times3$ integral matrices, $g(u,v)=\begin{psmallmatrix}n^{|u|}&0&0\\0&n^{|v|}&0\\\sigma(u)&\sigma(v)&1\end{psmallmatrix}$ (where $\sigma$ represents each word as an $n$-adic number), was used to prove the undecidability of mortality; this later led to many undecidability results for matrix problems in dimension three, e.g., \cite{CHK99,HHH07}. Finding new injective morphisms is hard, but having them gives an opportunity to prove new undecidability results.
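As a quick sanity check of the morphism property behind Paterson's construction, the following sketch (not part of the paper; the digit assignment for $\sigma$ and the choice $n=3$ are illustrative) verifies that $g(u_1,v_1)\,g(u_2,v_2)=g(u_1u_2,v_1v_2)$:

```python
# Illustrative sketch: Paterson-style morphism from pairs of words over
# {a, b} into 3x3 integer matrices.  sigma reads a word as an n-adic
# number; the digit assignment (a -> 1, b -> 2, n = 3) is a sample choice.
N = 3
DIGIT = {"a": 1, "b": 2}

def sigma(u):
    # n-adic value of u, most significant letter first
    val = 0
    for ch in u:
        val = N * val + DIGIT[ch]
    return val

def g(u, v):
    # the morphism g(u, v) quoted in the text
    return [[N ** len(u), 0, 0],
            [0, N ** len(v), 0],
            [sigma(u), sigma(v), 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# morphism property: g(u1, v1) g(u2, v2) = g(u1 u2, v1 v2), since
# sigma(u1 u2) = sigma(u1) * N**len(u2) + sigma(u2)
assert matmul(g("ab", "b"), g("a", "ba")) == g("aba", "bba")
```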
In 1999, Cassaigne, Harju and Karhum\"aki significantly boosted the research on finding algorithmic solutions for $2 \times 2$ matrix semigroups by showing that there is no injective semigroup morphism from pairs of words over any finite alphabet (with at least two elements) into complex $2 \times 2$ matrices \cite{CHK99}. This result led to substantial interest in finding algorithmic solutions for such problems as the identity problem, mortality, membership, vector reachability, freeness etc. for $2 \times 2$ matrices. For example, in 2007 Gurevich and Schupp~\cite{GS07} showed that the membership problem is decidable in polynomial time for finitely generated subgroups of the modular group, and later, in 2017, Bell, Hirvensalo and Potapov proved that the identity problem for a semigroup generated by matrices from {\rm SL}\ensuremath{(2,\mathbb{Z})}\xspace is $\NP$-complete by developing a new effective technique to operate with compressed word representations of matrices, closing the complexity gap by improving the original $\EXPSPACE$ solution proposed in 2005 \cite{CK05}. The first algorithm for the membership problem which covers cases beyond {\rm SL}\ensuremath{(2,\mathbb{Z})}\xspace and {\rm GL}\ensuremath{(2,\mathbb{Z})}\xspace was proposed in \cite{PS17} and provides the solution for a semigroup generated by non-singular $2 \times 2$ integer matrices. Later, these techniques were applied to build another algorithm to solve the membership problem in {\rm GL}\ensuremath{(2,\mathbb{Z})}\xspace extended by singular matrices~\cite{PS17a}. The current limit of decidability stands at $2 \times 2$ matrices defined over hypercomplex numbers (quaternions), for which most of the problems have been shown to be undecidable in \cite{BP08} and correspond to reachability problems for 3-sphere rotations.
In our paper, we show that there is no embedding from pairs of words into $3\times3$ integer matrices with determinant one (i.e., into {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace). This is strong evidence that computational problems in {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace are decidable, as all known undecidability techniques for low-dimensional matrices are based on encoding Turing machine computations via the Post correspondence problem (\PCP), which cannot be applied in {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace following our result. In the case of the \PCP encoding, the matrix products extended by right multiplication correspond to the Turing machine simulation, and the only known proof alternatives are recursively enumerable sets and Hilbert's tenth problem, which provide undecidability for matrix equations, but of very high dimensions \cite{BHH+08,CH14,Honkala15}. So, in analogy to the 1999 result from \cite{CHK99} on the non-existence of an embedding into $2 \times 2$ matrix semigroups over complex numbers, we expand the horizon of the decidability area for matrix semigroups and show that there is no embedding from a set of pairs of words over a semigroup alphabet to any matrix semigroup in {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace. It follows almost immediately that there is no embedding from a set of pairs of group words into $\mathbb{Z}^{3 \times 3}$.\footnote{The idea that such a result may hold was motivated by an analogy from combinatorial topology, where the identity problem is decidable for the braid group $B_3$, which is the universal central extension of the modular group {\rm PSL}\ensuremath{(2,\mathbb{Z})}\xspace~\cite{Potapov13}, an embedding for a set of pairs of words into the braid group $B_5$ exists, see \cite{BD99}, and non-existence of embeddings was proved for $B_4$ in \cite{Akimenkov91}.
So {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace was somewhere in the goldilocks zone between $B_3$ and $B_5$.} The matrix semigroup in {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace has attracted a lot of attention recently as it can be represented by a set of generators and relations~\cite{CRW92,Conder16}, similar to {\rm SL}\ensuremath{(2,\mathbb{Z})}\xspace, where it was possible to convert numerical problems into symbolic ones and solve them with novel computational techniques; see \cite{BHP17,CK05,PS17,PS17a}. Compared to the relatively simple representation ${\rm SL}\ensuremath{(2,\mathbb{Z})}\xspace=\langle S,T \mid S^4= \bm{I}_2, (ST)^6=\bm{I}_2 \rangle$, where $S=\begin{psmallmatrix}0 & -1 \\ 1 & 0\end{psmallmatrix}$ and $T=\begin{psmallmatrix}1 & 1 \\ 0 & 1\end{psmallmatrix}$, the case of ${\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace =\langle X,Y,Z \mid X^3=Y^3=Z^2=(XZ)^3=(YZ)^3=(X^{-1}ZXY)^2=(Y^{-1}ZYX)^2=(XY)^6=\bm{I}_3 \rangle$, where \begin{align*} X=\begin{psmallmatrix}0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0\end{psmallmatrix},\ Y=\begin{psmallmatrix}1 & 0 & 1 \\ 0 & -1 & -1 \\ 0 & 1 & 0\end{psmallmatrix} \text{ and }Z= \begin{psmallmatrix}0 & 1 & 0 \\ 1 & 0 & 0 \\ -1 & -1 & -1\end{psmallmatrix}, \end{align*} looks more challenging, containing both non-commutative and partially commutative elements. As the decidability status of the \emph{identity problem} in dimension three is still a long-standing open problem, we turn to an important subgroup of ${\rm SL}(3,\mathbb{Z})$, the Heisenberg group ${\rm H}(3,\mathbb{Z})$, for which the \emph{identity problem} could be decidable following our result on the non-existence of an embedding. The Heisenberg group is an important subgroup of {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace which is useful in the description of one-dimensional quantum mechanical systems~\cite{Brylinski93,GU14,Kostant70}.
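The presentations quoted above can be checked numerically. The following sketch (an illustration, not part of the formal development) verifies $S^4=(ST)^6=\bm{I}_2$ and a few of the listed relations for the {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace generators:

```python
# Illustrative check of some presentation relations for SL(2,Z) and
# SL(3,Z), using the generator matrices quoted in the text.
def matmul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def power(A, k):
    # k-th power of A (k = 0 gives the identity matrix)
    n = len(A)
    R = tuple(tuple(1 if i == j else 0 for j in range(n)) for i in range(n))
    for _ in range(k):
        R = matmul(R, A)
    return R

I2 = power(((1, 0), (0, 1)), 0)
I3 = power(((1, 0, 0), (0, 1, 0), (0, 0, 1)), 0)

S = ((0, -1), (1, 0))
T = ((1, 1), (0, 1))
assert power(S, 4) == I2 and power(matmul(S, T), 6) == I2

X = ((0, 1, 0), (0, 0, 1), (1, 0, 0))
Y = ((1, 0, 1), (0, -1, -1), (0, 1, 0))
Z = ((0, 1, 0), (1, 0, 0), (-1, -1, -1))
assert power(X, 3) == I3 and power(Y, 3) == I3 and power(Z, 2) == I3
assert power(matmul(X, Z), 3) == I3 and power(matmul(Y, Z), 3) == I3
```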
We show that the \emph{identity problem} for a matrix semigroup generated by matrices from ${\rm H}(3,\mathbb{Z})$, and even ${\rm H}(3,\mathbb{Q})$, is decidable in polynomial time. Furthermore, we extend the decidability result to ${\rm H}(n,\mathbb{Q})$ for any dimension $n$. Moreover, we tighten the gap between (un)decidability results for the \emph{identity problem} by substantially reducing the bound on the size of the generating set from $48$ (see \cite{BP10}) to $8$ in {\rm SL}\ensuremath{(4,\mathbb{Z})}\xspace by developing a novel reduction technique. \section{Preliminaries} A {\em semigroup} is a set equipped with an associative binary operation. Let $S$ be a semigroup and $G$ be a subset of $S$. We say that a semigroup $S$ is {\em generated} by a subset~$G$ of~$S$ if each element of $S$ can be expressed as a composition of elements of $G$. In this case, we call $G$ the {\em generating set} of $S$. Given an \emph{alphabet} $\Sigma = \{a_1,a_2, \ldots, a_m\}$, a finite \emph{word} $u$ is an element of the free monoid $\Sigma^*$. The \emph{empty word} is denoted by $\varepsilon$. The length of a finite word~$u$ is denoted by $|u|$ and $|\varepsilon|=0$. Let $\Gamma=\{a_1,a_2,\ldots,a_\ell,a^{-1}_1,a^{-1}_2,\ldots,a^{-1}_\ell\}$ be a generating set of a free group $\FG$. The elements of $\FG$ are all \emph{reduced} words over $\Gamma$, i.e., words not containing $a_ia_i^{-1}$ or $a_i^{-1}a_i$ as a subword. In this context, we call $\Gamma$ a finite \emph{group alphabet}, i.e., an alphabet with an involution. The multiplication of two elements (reduced words) $u,v\in \FG$ corresponds to the unique reduced word of the concatenation $uv$. This multiplication is called \emph{concatenation} throughout the paper. Later, in the encoding of words over a group alphabet, we denote $a^{-1}$ by $\overline{a}$, and the alphabet of inverse letters is denoted by $\Sigma^{-1}=\{a^{-1}\mid a\in \Sigma\}$.
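The reduced-word multiplication just described can be sketched as follows; the string representation, with $\overline{a}$ written as "A" and $\overline{b}$ as "B", is an illustrative choice:

```python
# Illustrative sketch: elements of the free group over {a, b, a^{-1}, b^{-1}}
# as reduced strings, with a^{-1} written "A" and b^{-1} written "B".
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def concat(u, v):
    # multiplication in the free group: concatenate the reduced words u and v,
    # cancelling x x^{-1} pairs at the seam (cancellation cascades)
    out = list(u)
    for ch in v:
        if out and out[-1] == INV[ch]:
            out.pop()          # a a^{-1} -> empty word
        else:
            out.append(ch)
    return "".join(out)

assert concat("ab", "BA") == ""    # (ab)(b^{-1}a^{-1}) is the empty word
assert concat("aB", "ba") == "aa"
```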
In the next lemma, we present an encoding from an arbitrary group alphabet to a binary group alphabet used in Section~\ref{sec:IPfour}. The result is crucial as it allows us to present the results of the above section over the smallest domain. \begin{lemma}[Birget, Margolis \cite{BM08}]\label{lem:groupEnc} Let $\Gamma = \{z_1,\ldots, z_\ell,\overbar{z_1},\ldots,\overbar{z_\ell}\}$ be a group alphabet and $\Gamma_2=\{c,d,\overbar{c},\overbar{d}\}$ be a binary group alphabet. Define the mapping $\alpha:\Gamma \to \FG[\Gamma_2]$ by: \begin{align*} \alpha (z_i) = c^i d\overbar{c}^{i}, \qquad \alpha (\overbar{z_i}) = c^i\overbar{d}\overbar{c}^{i}, \end{align*} where $1 \leq i \leq \ell$. Then $\alpha$ is a monomorphism, that is, an injective morphism. Note that $\alpha$ can be extended to the domain $\FG$ in the usual way. \end{lemma} The \emph{Post correspondence problem} (\PCP) is a famous undecidable problem. In Section~\ref{sec:IPfour}, we use the \PCP to reduce the number of generators needed to prove the undecidability of the identity problem for {\rm SL}\ensuremath{(4,\mathbb{Z})}\xspace. An \emph{instance} of the \PCP consists of two morphisms $g,h: \Sigma^*\rightarrow B^*$, where $\Sigma$ and $B$ are alphabets. A nonempty word $u\in \Sigma^*$ is a \emph{solution} of an instance $(g,h)$ if it satisfies $g(u)=h(u)$. The problem is undecidable for all domain alphabets~$\Sigma$ with $|\Sigma|\geq 5$ \cite{Neary15}. The special linear group is ${\rm SL}(n,\mathbb{K})=\{M\in \mathbb{K}^{n\times n}\mid \det(M)=1\}$, where $\mathbb{K}=\mathbb{Z},\mathbb{Q},\mathbb{R},\mathbb{C},\ldots$. The \emph{identity matrix} is denoted by $\bm{I}_n$ and the \emph{zero matrix} is denoted by $\bm{0}_n$. The \emph{Heisenberg group} {\rm H}\ensuremath{(3,\mathbb{K})}\xspace is formed by the $3 \times 3$ matrices of the form $M = \begin{psmallmatrix} 1 & a & c\\ 0 & 1 & b\\ 0 & 0 & 1 \end{psmallmatrix}$, where $a,b,c \in \mathbb{K}$.
It is easy to see that the Heisenberg group is a non-commutative subgroup of ${\rm SL}(3,\mathbb{K})$. We can consider the Heisenberg group as the set of all triples with the following group law: \begin{align*} (a_1, b_1, c_1)\otimes (a_2, b_2, c_2) = (a_1 + a_2, b_1 + b_2, c_1 + c_2 + a_1 b_2). \end{align*} By $\psi(M)$ we denote the triple~$(a,b,c) \in \mathbb{K}^3$ which corresponds to the upper-triangular coordinates of $M$. Let $M$ be a matrix in {\rm H}\ensuremath{(3,\mathbb{K})}\xspace such that $\psi(M) = (a,b,c)$. We define the {\em superdiagonal vector} of $M$ to be $\vec{v}(M) = (a,b)$. Given two vectors~$\mathbf u = (u_1,u_2)$ and $\mathbf v = (v_1,v_2)$, the {\em cross product} of $\mathbf u$ and $\mathbf v$ is defined as $\mathbf u \times \mathbf v = u_1v_2 - u_2v_1$. Two vectors are {\em parallel} if the cross product is zero. The Heisenberg group can also be defined in higher dimensions. The Heisenberg group of dimension $n$ over $\mathbb{K}$ is denoted by ${\rm H}(n,\mathbb{K})$ and is the group of square matrices in $\mathbb{K}^{n \times n}$ of the following form: $\begin{psmallmatrix} 1 & \mathbf a^\mathsf{T} & c\\ 0 & {\bm{I}}_{n-2} & \mathbf b\\ 0 & 0 & 1 \end{psmallmatrix}$, where $\mathbf a,\mathbf b \in \mathbb{K}^{n-2}, c \in \mathbb{K}$. Similar to the Heisenberg group in dimension three, we can also consider the Heisenberg group in dimension $n$ for any integer $n \ge 3$ as the set of all triples with the following group law: $(\mathbf a_1, \mathbf b_1, c_1)\otimes (\mathbf a_2, \mathbf b_2, c_2) = (\mathbf a_1 + \mathbf a_2, \mathbf b_1 + \mathbf b_2, c_1 + c_2 + \mathbf a_1 \cdot \mathbf b_2)$, where $\mathbf a_1, \mathbf a_2, \mathbf b_1, \mathbf b_2 \in \mathbb{K}^{n-2}$ and $\mathbf a_1\cdot\mathbf b_2$ is the dot product of vectors $\mathbf a_1$ and $\mathbf b_2$.
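The correspondence between the matrix form and the triple coordinates can be checked directly; a minimal sketch (illustrative, with sample integer triples):

```python
# Illustrative check in H(3,Z): the group law on triples
# (a1,b1,c1) x (a2,b2,c2) = (a1+a2, b1+b2, c1+c2+a1*b2)
# agrees with matrix multiplication of the corresponding matrices.
def H(a, b, c):
    # the Heisenberg matrix with upper-triangular coordinates (a, b, c)
    return ((1, a, c), (0, 1, b), (0, 0, 1))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def law(t1, t2):
    (a1, b1, c1), (a2, b2, c2) = t1, t2
    return (a1 + a2, b1 + b2, c1 + c2 + a1 * b2)

t1, t2 = (2, 3, 5), (7, 11, 13)
assert matmul(H(*t1), H(*t2)) == H(*law(t1, t2))
```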
We extend the function $\psi$ to the $n$-dimensional Heisenberg group: For a matrix $M$, $\psi(M)$ is the triple~$(\mathbf a,\mathbf b,c) \in (\mathbb{K}^{n-2})^2 \times \mathbb{K}$ which corresponds to the upper-triangular coordinates of $M$. Next, we prove a simple necessary and sufficient condition for commutation of two matrices from the Heisenberg group. \begin{lemma}\label{lem:commute} Let $M_1$ and $M_2$ be two matrices from the Heisenberg group~${\rm H}(n,\mathbb{K})$ and $\psi(M_i)=(\mathbf a_i,\mathbf b_i,c_i)$ for $i=1,2$. Then $M_1M_2 = M_2M_1$ holds if and only if $\mathbf a_1\cdot\mathbf b_2=\mathbf a_2\cdot\mathbf b_1$.\footnote{Note that, in dimension three, the condition can be stated as superdiagonal vectors of $M_1$ and $M_2$ being parallel.} \end{lemma} \begin{proof} The product $M_1M_2$ has $c_1+ c_2 + \mathbf a_1\cdot\mathbf b_2$ in the upper-right corner whereas $M_2M_1$ has $c_1+ c_2 + \mathbf a_2\cdot\mathbf b_1$. The other coordinates are identical, as the same numbers are added in each coordinate. It is easy to see that the two products are equal if and only if $\mathbf a_1\cdot\mathbf b_2 = \mathbf a_2\cdot\mathbf b_1$ holds. \end{proof} The main results of the paper, Theorem~\ref{thm:ptime} and Theorem~\ref{thm:heisnq}, reduce to solving systems of linear homogeneous Diophantine equations. In the next lemma, we show that solving a system of linear homogeneous Diophantine equations is in polynomial time. Note that the polynomial time complexity is not important for Theorem~\ref{thm:heisnq} as there is a part of the decision procedure that requires exponential time.
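The commutation criterion of the lemma above is easy to test numerically in dimension three; a small sketch (not part of the proof):

```python
# Illustrative check of the commutation criterion above in H(3,Z):
# M1 M2 = M2 M1 iff a1*b2 = a2*b1, i.e. iff the superdiagonal vectors
# (a1, b1) and (a2, b2) have zero cross product (are parallel).
def H(a, b, c):
    return ((1, a, c), (0, 1, b), (0, 0, 1))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def commute(t1, t2):
    return matmul(H(*t1), H(*t2)) == matmul(H(*t2), H(*t1))

# parallel superdiagonal vectors (2,4) and (3,6): cross product 2*6 - 4*3 = 0
assert commute((2, 4, 1), (3, 6, 9))
# non-parallel (1,0) and (0,1): cross product 1, so the matrices do not commute
assert not commute((1, 0, 0), (0, 1, 0))
```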
\begin{lemma}\label{lem:DiophantineP} Deciding whether a system of linear homogeneous Diophantine equations \begin{align}\label{eq:slhDe} \begin{psmallmatrix}a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{psmallmatrix}\begin{psmallmatrix} x_1\\ \vdots \\ x_n \end{psmallmatrix} &= \begin{psmallmatrix} 0 \\ \vdots \\ 0 \end{psmallmatrix}, \end{align} where $a_{ij}\in\mathbb{Q}$, has a positive integer solution is in $\P$. \end{lemma} \begin{proof} We prove the claim by converting the system of linear homogeneous Diophantine equations into an instance of the linear programming problem, which is known to be solvable in polynomial time \cite{Khachiyan80}. Indeed, let us convert \eqref{eq:slhDe} into an instance of the linear programming problem \begin{align}\label{eq:lp} \begin{psmallmatrix}a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \\ -a_{11} & \cdots & -a_{1n} \\ \vdots & \ddots & \vdots \\ -a_{m1} & \cdots & -a_{mn} \\ -1 & \cdots & -1 \end{psmallmatrix}\begin{psmallmatrix} y_1\\ \vdots \\ y_n \end{psmallmatrix} &\leq \begin{psmallmatrix} 0 \\ \vdots \\ 0 \\ 0 \\ \vdots \\ 0 \\ -1 \end{psmallmatrix}. \end{align} The idea is that the inequalities \begin{align*} (a_{i1},\ldots,a_{in})\cdot(y_1,\ldots,y_n)^\mathsf{T} &\leq 0 \\ (-a_{i1},\ldots,-a_{in})\cdot(y_1,\ldots,y_n)^\mathsf{T} &\leq 0 \end{align*} ensure that if $(y_1,\ldots,y_n)$ satisfies both of them, it in fact satisfies \begin{align*} (a_{i1},\ldots,a_{in})\cdot(y_1,\ldots,y_n)^\mathsf{T} = 0. \end{align*} The final inequality guarantees that a solution is not a zero vector. Let $(\xi_1,\ldots,\xi_n)\in\mathbb{Q}^n$ be a solution of \eqref{eq:lp}. We write $\xi_i=\frac{p_i}{q_i}$ as an irreducible fraction, where $p_i,q_i\in\mathbb{N}$. Now $(\xi'_1,\ldots,\xi'_n)$, where $\xi'_i=\prod_{j=1}^n q_j \xi_i$, is a solution to the system of linear homogeneous Diophantine equations.
First, observe that $(\xi'_1,\ldots,\xi'_n)$ is in $\mathbb{Z}^n$ and satisfies the matrix equation. Indeed, \begin{align*} \begin{psmallmatrix}a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{psmallmatrix}\begin{psmallmatrix} \xi'_1\\ \vdots \\ \xi'_n \end{psmallmatrix} = \prod_{j=1}^n q_j \begin{psmallmatrix}a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{psmallmatrix}\begin{psmallmatrix} \xi_1\\ \vdots \\ \xi_n \end{psmallmatrix} = \prod_{j=1}^n q_j \begin{psmallmatrix} 0 \\ \vdots \\ 0 \end{psmallmatrix} =\begin{psmallmatrix} 0 \\ \vdots \\ 0 \end{psmallmatrix}. \end{align*} Also $(\xi'_1,\ldots,\xi'_n)$ is not a trivial solution (i.e., $(0,\ldots,0)$). Indeed, assume that $\xi'_i=0$ for all $i$. Now, since $\xi'_i=\prod_{j=1}^n q_j \xi_i$ and all $q_j$ are non-zero, $\xi_i=0$. That is, $(\xi_1,\ldots,\xi_n)=(0,\ldots,0)$ which does not satisfy the last equation, i.e., $-1\cdot0-\ldots-1\cdot0=0\leq -1$ does not hold. Finally, note that $\xi'_i\geq0$ for all $i$. That is, $(\xi'_1,\ldots,\xi'_n)$ is a non-trivial integer solution to the system of linear homogeneous Diophantine equations \eqref{eq:slhDe}. \end{proof} \section{On embedding from pairs of words into {\rm SL}\ensuremath{(3,\mathbb{K})}\xspace} Let $\Sigma=\{0,1\}$. The monoid~$\Sigma^* \times \Sigma^*$ has a generating set $S=\{(0, \varepsilon), (1, \varepsilon), (\varepsilon, 0), (\varepsilon, 1)\}$, where $\varepsilon$ is the empty word. We simplify the notation by setting $a = (0, \varepsilon)$, $b = (1, \varepsilon)$, $c = (\varepsilon, 0)$ and $d = (\varepsilon, 1)$. It is easy to see that we have the following relations: \begin{align}\label{relation} ac&=ca, & bc &=cb, & ad &= da, & bd &= db. \end{align} In other words, $a$ and $b$ commute with $c$ and $d$. Furthermore, these are the only relations. That is, $a$ and $b$ do not commute with each other, and neither do $c$ and $d$. 
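The denominator-clearing step used in the proof of Lemma~\ref{lem:DiophantineP} above can be sketched as follows; the sample rational solution is assumed to come from an LP solver, and taking the least common multiple of the denominators suffices in place of the product used in the proof:

```python
from fractions import Fraction
from math import lcm

# Illustrative sketch of the scaling step in the proof above: a rational
# solution of the relaxed LP is turned into a nonnegative integer solution
# of the homogeneous Diophantine system by clearing denominators.
def clear_denominators(xi):
    # lcm of the denominators suffices (the proof uses their product)
    m = lcm(*(x.denominator for x in xi))
    return [int(x * m) for x in xi]

# system x1 - 2*x2 = 0 with sample rational solution (2/3, 1/3)
xi = [Fraction(2, 3), Fraction(1, 3)]
sol = clear_denominators(xi)
assert sol == [2, 1] and sol[0] - 2 * sol[1] == 0
```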
The monoid $\Sigma^*\times\Sigma^*$ is a partially commutative monoid, or a trace monoid. A necessary and sufficient condition for the existence of an embedding of trace monoids into $\mathbb{N}^{2\times2}$ was given in \cite{Choffrut90} but, to the authors' best knowledge, there are no similar results even for $\mathbb{N}^{3\times3}$. Let $\varphi:\Sigma^*\times\Sigma^*\to {\rm SL}\ensuremath{(3,\mathbb{K})}\xspace$ be an injective morphism and denote $A = \varphi(a)$, $B=\varphi(b)$, $C = \varphi(c)$ and $D = \varphi(d)$. Our goal is to show that $\varphi$ does not exist for $\mathbb{K}=\mathbb{Z}$. Additionally, we also show that an embedding does exist for $\mathbb{K}=\mathbb{Q}$. Unfortunately, the technique developed in \cite{CHK99}, where the contradiction was derived from simple relations resulting from matrix multiplication, cannot be used for the case of~{\rm SL}\ensuremath{(3,\mathbb{K})}\xspace as it creates a large number of equations which do not directly limit the existence of $\varphi$. In contrast to \cite{CHK99}, we found new techniques to show the non-existence of $\varphi$ by an analysis of eigenvalues and Jordan normal forms. We consider Jordan normal forms of matrices and show that some normal forms result in additional relations besides relations \eqref{relation}. Let $\varphi$ be an injective morphism from $S$ into {\rm SL}\ensuremath{(3,\mathbb{C})}\xspace. Because of obvious symmetries, it suffices to prove the claim for $A=\varphi((0,\varepsilon))$. Now, the only relations in {\rm SL}\ensuremath{(3,\mathbb{C})}\xspace are $AC=CA$, $AD=DA$, $BC=CB$ and $BD=DB$. Since conjugation by an invertible matrix does not influence injectivity, we can conjugate the four matrices by some $X\in\mathbb{C}^{3\times3}$ such that $A$ is in the Jordan normal form. For a $3 \times 3$ matrix, there are six different types of matrices in the Jordan normal form.
If $A$ has three different eigenvalues, then \begin{align}\label{eq:threeCigen} A = \begin{pmatrix} \lambda & 0 & 0\\ 0 & \mu & 0\\ 0 & 0 & \nu \end{pmatrix}. \end{align} If $A$ has two eigenvalues, then \begin{align}\label{eq:twoCigen} A = \begin{pmatrix} \lambda & 0 & 0\\ 0 & \mu & 0\\ 0 & 0 & \mu \end{pmatrix} \mbox{ or } A = \begin{pmatrix} \lambda & 0 & 0\\ 0 & \mu & 1\\ 0 & 0 & \mu \end{pmatrix}. \end{align} Finally, if $A$ has only one eigenvalue, then \begin{align}\label{eq:oneCigen} A = \begin{pmatrix} \lambda & 0 & 0\\ 0 & \lambda & 0\\ 0 & 0 & \lambda \end{pmatrix} \mbox{ or } A = \begin{pmatrix} \lambda & 1 & 0\\ 0 & \lambda & 0\\ 0 & 0 & \lambda \end{pmatrix} \mbox{ or } A = \begin{pmatrix} \lambda & 1 & 0\\ 0 & \lambda & 1\\ 0 & 0 & \lambda \end{pmatrix}. \end{align} \begin{lemma}\label{lem:3eval} Let $\Sigma=\{0,1\}$. If there is an injective morphism $\varphi: \Sigma^* \times \Sigma^* \to {\rm SL}(3,\mathbb{C})$ and the matrices~$A,B,C$ and $D$ correspond to $\varphi((0,\varepsilon))$, $\varphi((1,\varepsilon))$, $\varphi((\varepsilon,0))$ and $\varphi((\varepsilon,1))$ respectively, then the Jordan normal form of matrices $A,B,C$ and $D$ is not $ \begin{psmallmatrix} \lambda & 0 & 0\\ 0 & \mu & 0\\ 0 & 0 & \nu \end{psmallmatrix}$. \end{lemma} \begin{proof} This form can be easily ruled out since $A=\begin{psmallmatrix} \lambda & 0 & 0\\ 0 & \mu & 0\\ 0 & 0 & \nu \end{psmallmatrix}$ only commutes with diagonal matrices. Then $C$ and $D$ must commute with $A$ by the required relations and, as a result, $C$ and $D$ commute with each other. \end{proof} \begin{lemma}\label{lem:2evalNotDiag} Let $\Sigma=\{0,1\}$.
If there is an injective morphism $\varphi: \Sigma^* \times \Sigma^* \to {\rm SL}(3,\mathbb{C})$ and the matrices~$A,B,C$ and $D$ correspond to $\varphi((0,\varepsilon))$, $\varphi((1,\varepsilon))$, $\varphi((\varepsilon,0))$ and $\varphi((\varepsilon,1))$ respectively, then the Jordan normal form of matrices $A,B,C$ and $D$ is not $ \begin{psmallmatrix} \lambda & 0 & 0\\ 0 & \mu & 1\\ 0 & 0 & \mu \end{psmallmatrix}$. \end{lemma} \begin{proof} Let $A=\begin{psmallmatrix}\lambda&0&0\\0&\mu&1\\0&0&\mu\end{psmallmatrix}$ and let $C=\begin{psmallmatrix}a&b&c \\ d&e&f \\ g&h&\ell\end{psmallmatrix}$. Now \begin{align*} AC&=\begin{pmatrix}\lambda&0&0\\0&\mu&1\\0&0&\mu\end{pmatrix} \begin{pmatrix}a&b&c\\d&e&f\\g&h&\ell\end{pmatrix} = \begin{pmatrix}\lambda a&\lambda b&\lambda c\\g + \mu d&h +\mu e& \ell +\mu f\\ \mu g&\mu h & \mu \ell\end{pmatrix} \text{ and } \\ CA&=\begin{pmatrix}a&b&c\\d&e&f\\g&h&\ell\end{pmatrix}\begin{pmatrix}\lambda&0&0\\0&\mu&1\\0&0&\mu\end{pmatrix}=\begin{pmatrix}\lambda a&\mu b&b+\mu c\\\lambda d&\mu e&e+\mu f\\\lambda g&\mu h& h+\mu \ell\end{pmatrix}. \end{align*} Since these matrices are equal, and since $\lambda\neq\mu$, we have that $b=c=d=g=h=0$ and $e=\ell$. Similar calculation gives us $D=\begin{psmallmatrix}a'&0&0\\0&e'&f'\\0&0&e'\end{psmallmatrix}$. Now, matrices $C$ and $D$ commute as follows: \begin{align*} \begin{pmatrix}a&0&0\\0&e&f\\0&0&e\end{pmatrix}\begin{pmatrix}a'&0&0\\0&e'&f'\\0&0&e'\end{pmatrix}=\begin{pmatrix}aa'&0&0\\0&ee'&ef'+fe'\\0&0&ee'\end{pmatrix} = \begin{pmatrix}a'&0&0\\0&e'&f'\\0&0&e'\end{pmatrix}\begin{pmatrix}a&0&0\\0&e&f\\0&0&e\end{pmatrix}, \end{align*} which is not one of the allowed relations. \end{proof} \begin{lemma}\label{lem:1eval} Let $\Sigma=\{0,1\}$. 
If there is an injective morphism $\varphi: \Sigma^* \times \Sigma^* \to {\rm SL}(3,\mathbb{C})$ and the matrices~$A,B,C$ and $D$ correspond to $\varphi((0,\varepsilon))$, $\varphi((1,\varepsilon))$, $\varphi((\varepsilon,0))$ and $\varphi((\varepsilon,1))$ respectively, then the Jordan normal form of matrices $A,B,C$ and $D$ is not $ \begin{psmallmatrix} \lambda & 0 & 0\\ 0 & \lambda & 0\\ 0 & 0 & \lambda \end{psmallmatrix}$ nor $ \begin{psmallmatrix} \lambda & 1 & 0\\ 0 & \lambda & 1\\ 0 & 0 & \lambda \end{psmallmatrix}$. \end{lemma} \begin{proof} In the first case, the matrix~$A$ is diagonal and it is easy to see that then $A$ commutes with all matrices, including $B$. Let us then consider the second case, where the matrix~$A$ is in the following form~$\begin{psmallmatrix}\lambda &1&0 \\ 0&\lambda&1 \\ 0&0&\lambda\end{psmallmatrix}$ and let $C=\begin{psmallmatrix}a&b&c \\ d&e&f \\ g&h&\ell\end{psmallmatrix}$. Now \begin{align*} AC&=\begin{pmatrix}\lambda&1&0\\0&\lambda&1\\0&0&\lambda\end{pmatrix}\begin{pmatrix}a&b&c\\d&e&f\\g&h&\ell\end{pmatrix}=\begin{pmatrix}d+a\lambda&e+b\lambda&f+c\lambda\\g + d \lambda&h + e \lambda& \ell + f \lambda\\g \lambda&h \lambda&\ell \lambda\end{pmatrix} \text{ and}\\ CA&=\begin{pmatrix}a&b&c\\d&e&f\\g&h&\ell\end{pmatrix}\begin{pmatrix}\lambda&1&0\\0&\lambda&1\\0&0&\lambda\end{pmatrix}=\begin{pmatrix}a \lambda&a+b \lambda&b+c \lambda\\d \lambda&d+e \lambda&e+f \lambda\\g \lambda&g+h \lambda&h+\ell \lambda\end{pmatrix}. \end{align*} Since these matrices are equal, we have that $d=g=h=0$, $a=e=\ell$ and $b=f$. Let $D=\begin{psmallmatrix}a'&b'&c'\\d'&e'&f'\\g'&h'&\ell'\end{psmallmatrix}$. Solving $D$ from equation $AD=DA$, gives us $D=\begin{psmallmatrix}a'&b'&c'\\0&a'&b'\\0&0&a'\end{psmallmatrix}$ and now matrices $C$ and $D$ commute by Lemma~\ref{lem:commute}. 
Indeed, matrix $C$ can be expressed as $C=a\begin{psmallmatrix}1&\frac{b}{a}&\frac{c}{a}\\0&1&\frac{b}{a}\\0&0&1\end{psmallmatrix}\in {\rm H}\ensuremath{(3,\mathbb{C})}\xspace$ and matrix $D$ has an analogous expression. Then it is clear that $\frac{b}{a}\frac{b'}{a'}=\frac{b'}{a'}\frac{b}{a}$ and thus matrices $C$ and $D$ commute. \end{proof} In the above lemmas, we ruled out four out of six possible Jordan normal forms. In the next theorem, we give an embedding from $\Sigma^*\times\Sigma^*$ into {\rm SL}\ensuremath{(3,\mathbb{Q})}\xspace. \begin{theorem} Let $\Sigma=\{0,1\}$. The morphism $\varphi: \Sigma^* \times \Sigma^* \to {\rm SL}(3,\mathbb{Q})$, defined by $\varphi((0,\varepsilon))=\begin{psmallmatrix} 4 & 0 & 0\\ 0 & \frac{1}{2} & 0\\ 0 & 0 & \frac{1}{2} \end{psmallmatrix}$, $\varphi((1,\varepsilon))=\begin{psmallmatrix} 9 & \frac{1}{3} & 0\\ 0 & \frac{1}{3} & 0\\ 0 & 0 & \frac{1}{3} \end{psmallmatrix}$, $\varphi((\varepsilon,0))=\begin{psmallmatrix} \frac{1}{2} & 0 & 0\\ 0 & \frac{1}{2} & 0\\ 0 & 0 & 4 \end{psmallmatrix}$ and $\varphi((\varepsilon,1))=\begin{psmallmatrix} \frac{1}{3} & 0 & 0\\ 0 & \frac{1}{3} & 0\\ 0 & \frac{1}{3} & 9 \end{psmallmatrix}$ is an embedding. \end{theorem} \begin{proof} Let $A=\varphi((0,\varepsilon))=\begin{psmallmatrix} 4 & 0 & 0\\ 0 & \frac{1}{2} & 0\\ 0 & 0 & \frac{1}{2} \end{psmallmatrix}$, $B=\varphi((1,\varepsilon))=\begin{psmallmatrix} 9 & \frac{1}{3} & 0\\ 0 & \frac{1}{3} & 0\\ 0 & 0 & \frac{1}{3} \end{psmallmatrix}$, $C=\varphi((\varepsilon,0))=\begin{psmallmatrix} \frac{1}{2} & 0 & 0\\ 0 & \frac{1}{2} & 0\\ 0 & 0 & 4 \end{psmallmatrix}$ and $D=\varphi((\varepsilon,1))=\begin{psmallmatrix} \frac{1}{3} & 0 & 0\\ 0 & \frac{1}{3} & 0\\ 0 & \frac{1}{3} & 9 \end{psmallmatrix}$. It is easy to see that the relations of \eqref{relation} hold.
For example, the relation $AB\neq BA$ holds since \begin{align*} AB=\begin{pmatrix} 4 & 0 & 0\\ 0 & \frac{1}{2} & 0\\ 0 & 0 & \frac{1}{2} \end{pmatrix}\begin{pmatrix} 9 & \frac{1}{3} & 0\\ 0 & \frac{1}{3} & 0\\ 0 & 0 & \frac{1}{3} \end{pmatrix} &= \begin{pmatrix} 36&\frac{4}{3}&0\\0&\frac{1}{6}&0\\0&0&\frac{1}{6} \end{pmatrix} \\ BA=\begin{pmatrix} 9 & \frac{1}{3} & 0\\ 0 & \frac{1}{3} & 0\\ 0 & 0 & \frac{1}{3} \end{pmatrix}\begin{pmatrix} 4 & 0 & 0\\ 0 & \frac{1}{2} & 0\\ 0 & 0 & \frac{1}{2} \end{pmatrix} &= \begin{pmatrix} 36 & \frac{1}{6} & 0 \\ 0 & \frac{1}{6} & 0 \\ 0 & 0 & \frac{1}{6} \end{pmatrix}. \end{align*} On the other hand, \begin{align*} AD=\begin{pmatrix} 4 & 0 & 0\\ 0 & \frac{1}{2} & 0\\ 0 & 0 & \frac{1}{2} \end{pmatrix}\begin{pmatrix} \frac{1}{3} & 0 & 0\\ 0 & \frac{1}{3} & 0\\ 0 & \frac{1}{3} & 9 \end{pmatrix} &= \begin{pmatrix} \frac{4}{3}&0&0\\0&\frac{1}{6}&0\\0&\frac{1}{6}&\frac{9}{2} \end{pmatrix} = \begin{pmatrix} \frac{1}{3} &0& 0\\ 0 & \frac{1}{3} & 0\\ 0 & \frac{1}{3} & 9 \end{pmatrix}\begin{pmatrix} 4 & 0 & 0\\ 0 & \frac{1}{2} & 0\\ 0 & 0 & \frac{1}{2} \end{pmatrix}=DA. \end{align*} Next, we show that the pairs $\{A,B\}$ and $\{C,D\}$ generate free semigroups. Denote by $A'=\begin{psmallmatrix}4&0\\0&\frac{1}{2}\end{psmallmatrix}$ and $B'=\begin{psmallmatrix}9&\frac{1}{3}\\0&\frac{1}{3}\end{psmallmatrix}$ the top left 2-by-2 blocks of $A$ and $B$ respectively. By Lemma 3 of \cite{CHK99}, $\{A',B'\}^+$ is free if and only if $\{\lambda A',\mu B'\}^+$ is free for some $\lambda,\mu\in \mathbb{Q}\setminus \{0\}$. Let $\lambda=2$ and $\mu=3$ and denote $A''=\lambda A'=\begin{psmallmatrix}8&0\\0&1\end{psmallmatrix}$ and $B''=\mu B'=\begin{psmallmatrix}27&1\\0&1\end{psmallmatrix}$. Further, by Proposition 3 of \cite{CHK99}, if $\frac{1}{|a|}+\frac{1}{|b|}\leq1$, then $\{A'',B''\}^+$ is free, where $a$ and $b$ are the elements in the top left corners of $A''$ and $B''$ respectively.
In our case, the condition holds as $\frac{1}{8}+\frac{1}{27}=\frac{35}{216}<1$ and hence $\{A'',B''\}^+$ and $\{A',B'\}^+$ are free semigroups. It is easy to see that then also $\{A,B\}^+$ is a free semigroup. Proving that $C$ and $D$ generate a free semigroup is done analogously. As the matrices $A$, $B$, $C$ and $D$ satisfy the relations of \eqref{relation} and the pairs $\{A,B\}$ and $\{C,D\}$ generate free semigroups, we conclude that $\varphi$ is an embedding of $\Sigma^*\times\Sigma^*$. \end{proof} Note that in the previous theorem, all matrices have a Jordan normal form of $\begin{psmallmatrix}\lambda&0&0\\0&\mu&0\\0&0&\mu\end{psmallmatrix}$, where $\lambda\neq\mu$. Next, we consider the existence of an embedding into {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace. Lemmas~\ref{lem:3eval}, \ref{lem:2evalNotDiag} and \ref{lem:1eval} can be applied to matrices of {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace with the caveat that the matrices are no longer in {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace after $A$ is transformed into a Jordan normal form. This does not cause problems, as the contradictions derived in {\rm SL}\ensuremath{(3,\mathbb{C})}\xspace do not rely on properties of complex numbers and thus apply to integer matrices as well. In {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace, we can prove that an embedding, where the Jordan normal form of matrices is $\begin{psmallmatrix}\lambda&0&0\\0&\mu&0\\0&0&\mu\end{psmallmatrix}$, does not exist. \begin{lemma}\label{lem:another2eval} Let $\Sigma=\{0,1\}$. If there is an injective morphism $\varphi: \Sigma^* \times \Sigma^* \to {\rm SL}(3,\mathbb{Z})$ and the matrices~$A,B,C$ and $D$ correspond to $\varphi((0,\varepsilon))$, $\varphi((1,\varepsilon))$, $\varphi((\varepsilon,0))$ and $\varphi((\varepsilon,1))$ respectively, then the Jordan normal form of matrices $A,B,C$ and $D$ is not $ \begin{psmallmatrix} \lambda & 0 & 0\\ 0 & \mu & 0\\ 0 & 0 & \mu \end{psmallmatrix}$.
\end{lemma} \begin{proof} As in the previous proofs, we assume that $A=\begin{psmallmatrix} \lambda & 0 &0 \\ 0&\mu&0 \\ 0&0&\mu \end{psmallmatrix}$. Note that $A,B,C,D\in{\rm SL}\ensuremath{(3,\mathbb{C})}\xspace$. Observe that $\det(A)=\lambda\mu^2=1$ and $\tr(\varphi(0,\varepsilon))=\tr(A)=\lambda+2\mu\in\mathbb{Z}$. We claim that $\lambda\in\mathbb{Z}$ and $\mu\in\mathbb{Q}$. Let $A_1=\varphi(0,\varepsilon)$. First, we can rule out that the eigenvalues are non-real. Indeed, $\lambda$ and $\mu$ are the roots of the characteristic polynomial \begin{align*} x^3-\tr(A_1)x^2+\frac{\tr(A_1)^2-\tr(A_1^2)}{2}x-\frac{\tr(A_1)^3+2\tr(A_1^3)-3\tr(A_1)\tr(A_1^2)}{6} \end{align*} with rational coefficients. It is well known that a cubic equation with real coefficients has non-real roots only if its three roots are distinct. In our case, $\mu$ is a double root, and hence both $\lambda$ and $\mu$ are real. Next, we show that $\lambda$ and $\mu$ are not irrational. From the general solution for a cubic equation, it follows that \begin{align*} \lambda &=\frac{2\tr(A_1)(\tr(A_1)^2-\tr(A_1^2))-9\frac{\tr(A_1)^3+2\tr(A_1^3)-3\tr(A_1)\tr(A_1^2)}{6}-\tr(A_1)^3}{-\tr(A_1)^2+3\frac{\tr(A_1)^2-\tr(A_1^2)}{2}} \text{ and} \\ \mu &= \frac{-9\frac{\tr(A_1)^3+2\tr(A_1^3)-3\tr(A_1)\tr(A_1^2)}{6}+\tr(A_1)\frac{\tr(A_1)^2-\tr(A_1^2)}{2}}{2\tr(A_1)^2-3(\tr(A_1)^2-\tr(A_1^2))}. \end{align*} It is clear that both eigenvalues are rational. Finally, we prove that $\lambda$ is in fact an integer. Assume to the contrary that $\lambda$ is not an integer and write it as an irreducible fraction $\frac{n}{m}$ with $m>1$. Since $\det(A)=\lambda\mu^2=1$, it follows that $\mu=\pm\frac{\sqrt{m}}{\sqrt{n}}$. Now $\lambda+2\mu=\frac{n}{m}\pm2\frac{\sqrt{m}}{\sqrt{n}}\in\mathbb{Z}$ only if the denominators are equal, that is, $\sqrt{n}=m$. Then $m^2=n$ and $\lambda=\frac{n}{m}=\frac{m^2}{m}=m\in\mathbb{Z}$, which contradicts our assumption that $\lambda$ is not an integer. Hence, $\lambda$ is an integer.
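The closed-form expressions above, and the integrality obstruction that completes the argument, can be checked with exact rational arithmetic. The sketch below (illustrative only; the sample eigenvalue pair $\lambda=4$, $\mu=\frac{1}{2}$ is the one that is forced later in the proof) evaluates the formulas from the traces:

```python
from fractions import Fraction as F

def eigenvalues_from_traces(t1, t2, t3):
    # Closed forms for lambda and mu in terms of t1 = tr(A1),
    # t2 = tr(A1^2), t3 = tr(A1^3), as in the proof.
    d6 = t1**3 + 2 * t3 - 3 * t1 * t2            # equals 6 * det(A1)
    lam = (2 * t1 * (t1**2 - t2) - F(3, 2) * d6 - t1**3) \
        / (-t1**2 + F(3, 2) * (t1**2 - t2))
    mu = (-F(3, 2) * d6 + t1 * (t1**2 - t2) / 2) \
        / (2 * t1**2 - 3 * (t1**2 - t2))
    return lam, mu

# Sample pair lambda = 4, mu = 1/2.
lam, mu = F(4), F(1, 2)
t1, t2, t3 = lam + 2 * mu, lam**2 + 2 * mu**2, lam**3 + 2 * mu**3
assert eigenvalues_from_traces(t1, t2, t3) == (lam, mu)
# The obstruction: tr(A^2) = lambda^2 + 2*mu^2 = 33/2 is not an integer.
assert t2 == F(33, 2) and t2.denominator != 1
```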
From $\lambda+2\mu\in\mathbb{Z}$ it follows that $2\mu\in \mathbb{Z}$. Denote $\mu=\frac{k}{2}$ for some $k\in\mathbb{Z}$. Now, $\lambda \frac{k^2}{4}=1$ and thus $\lambda k^2=4$. The only integer solutions for $\lambda$ are 1 and 4. If $\lambda=1$, then $\mu=\pm1$: the case $\mu=1$ contradicts our assumption that $\lambda$ and $\mu$ are distinct, while the case $\mu=-1$ gives $A^2=I$, so $\varphi((00,\varepsilon))=\varphi((0000,\varepsilon))$, contradicting the injectivity of $\varphi$. That is, we have concluded that $\lambda=4$ and $\mu=\pm\frac{1}{2}$. Consider then the trace of $A^2$, which is also an integer. Indeed, $\tr(A^2)=\tr(A_1^2)\in\mathbb{Z}$. However, $\lambda^2+2\mu^2=16+\frac{1}{2}\notin\mathbb{Z}$, which is a contradiction. \end{proof} With the previous lemmas, we have ruled out the possible Jordan normal forms of potential embeddings into {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace. The final Jordan normal form is ruled out in the next lemma. \begin{lemma}\label{lem:finalform} Let $\Sigma=\{0,1\}$. If there is an injective morphism $\varphi: \Sigma^* \times \Sigma^* \to {\rm SL}(3,\mathbb{Z})$ and the matrices~$A,B,C$ and $D$ correspond to $\varphi((0,\varepsilon))$, $\varphi((1,\varepsilon))$, $\varphi((\varepsilon,0))$ and $\varphi((\varepsilon,1))$ respectively, then the Jordan normal form of matrices $A,B,C$ and $D$ is not $ \begin{psmallmatrix} \lambda & 1 & 0\\ 0 & \lambda & 0\\ 0 & 0 & \lambda \end{psmallmatrix}$. \end{lemma} \begin{proof} Assume to the contrary that there exists an injective morphism $\varphi$ from $\Sigma^* \times \Sigma^*$ into {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace. Since conjugation by an invertible matrix does not influence the injectivity, we suppose that the image of $a$ is in the Jordan normal form $\begin{psmallmatrix} \lambda & 1 & 0\\ 0 & \lambda & 0\\ 0 & 0 & \lambda \end{psmallmatrix}$ as the other forms have been ruled out in the previous lemmas. By $A$, $B$, $C$ and $D$ we denote the images of the generators, $a$, $b$, $c$ and $d$, conjugated by the matrix transforming $A$ into the Jordan normal form.
Then we have the following matrices corresponding to the generators~$a$, $b$, $c$ and $d$ as follows: {\small\begin{align*} \!\!\!\!\!A &=\begin{pmatrix} \lambda & 1 & 0\\ 0 & \lambda & 0\\ 0 & 0 & \lambda \end{pmatrix}, & B &= \begin{pmatrix} a_B & b_B & c_B\\ d_B & e_B & f_B\\ g_B & h_B & \ell_B \end{pmatrix}, & C&=\begin{pmatrix} a_C & b_C & c_C\\ d_C & e_C & f_C\\ g_C & h_C & \ell_C \end{pmatrix}, & D &=\begin{pmatrix} a_D & b_D & c_D\\ d_D & e_D & f_D \\ g_D & h_D & \ell_D \end{pmatrix}. \end{align*}} Note again that $B,C,D\in{\rm SL}\ensuremath{(3,\mathbb{C})}\xspace$. Since $A$ and $C$ commute with each other by one of the given relations in~\eqref{relation}, we have {\small\begin{align*} AC = \begin{pmatrix} \lambda a_C + d_C & \lambda b_C + e_C & \lambda c_C + f_C\\ \lambda d_C & \lambda e_C & \lambda f_C\\ \lambda g_C & \lambda h_C & \lambda\ell_C \end{pmatrix} = \begin{pmatrix} \lambda a_C & a_C + \lambda b_C & \lambda c_C\\ \lambda d_C & d_C + \lambda e_C & \lambda f_C\\ \lambda g_C & g_C + \lambda h_C & \lambda \ell_C \end{pmatrix} = CA. \end{align*}} It is easy to see that $d_C = g_C = f_C = 0$ and $a_C = e_C$. Therefore, {\small\begin{align*} C = \begin{pmatrix} a_C & b_C & c_C\\ 0 & a_C & 0\\ 0 & h_C &\ell_C \end{pmatrix} \text{ and } D = \begin{pmatrix} a_D & b_D & c_D\\ 0 & a_D & 0\\ 0 & h_D & \ell_D \end{pmatrix}. \end{align*}} Since $\varphi(c)$ and $\varphi(d)$ are in {\rm SL}\ensuremath{(3,\mathbb{C})}\xspace, the determinants of $C$ and $D$ are 1. Now, the determinant of $C$ is $a_C^2 \ell_C$ and the eigenvalues are $a_C$ and $\ell_C$. As $C$ is similar to $\varphi(c)$, the matrices have the same eigenvalues. Now, $a_C = \ell_C$ as other Jordan normal forms have been ruled out previously. Analogously, we can also see that $a_D = \ell_D$. Next, we observe that the matrices $C$ and $D$ commute if and only if $c_C h_D = c_D h_C$. 
Indeed, \begin{align*} \!\!\!\!\!CD = \begin{pmatrix} a_C & b_C & c_C\\ 0 & a_C & 0\\ 0 & h_C & a_C \end{pmatrix} \begin{pmatrix} a_D & b_D & c_D\\ 0 & a_D & 0\\ 0 & h_D & a_D \end{pmatrix} &= \begin{pmatrix} a_Ca_D & b_Ca_D + a_Cb_D + c_C h_D & c_Ca_D + a_Cc_D\\ 0 & a_Ca_D & 0\\ 0 & h_Ca_D + a_Ch_D &a_Ca_D \end{pmatrix} \text{ and} \\ DC = \begin{pmatrix} a_D & b_D & c_D\\ 0 & a_D & 0\\ 0 & h_D &a_D \end{pmatrix} \begin{pmatrix} a_C & b_C & c_C\\ 0 & a_C & 0\\ 0 & h_C &a_C \end{pmatrix} &= \begin{pmatrix} a_Da_C & b_Da_C + a_Db_C + c_D h_C & c_Da_C + a_Dc_C\\ 0 & a_Da_C & 0\\ 0 & h_Da_C + a_Dh_C &a_Da_C \end{pmatrix}. \end{align*} By relations~\eqref{relation}, $C$ and $D$ do not commute and hence there are three cases to be considered: \begin{enumerate} \item $c_C = 0$ and $h_C \ne 0$; \item $c_C \ne 0$ and $h_C = 0$; \item $c_C \ne 0$ and $h_C \ne 0$. \end{enumerate} We prove that each case leads to a contradiction by forcing $C$ and $D$ to commute, contrary to the relations~\eqref{relation}. Let us examine the three cases in more detail. First, let us consider the case where $c_C = 0$ and $h_C \ne 0$. Then $c_D$ must be non-zero, because otherwise $C$ and $D$ commute with each other since $c_C h_D = c_D h_C = 0$. We have the following calculations: \begin{align*} BC &= \begin{pmatrix} a_B & b_B & c_B\\ d_B & e_B & f_B\\ g_B & h_B & \ell_B \end{pmatrix} \begin{pmatrix} a_C & b_C & 0\\ 0 & a_C & 0\\ 0 & h_C &a_C \end{pmatrix} = \begin{pmatrix} a_Ba_C & a_B b_C + b_B a_C + c_B h_C & c_Ba_C\\ d_Ba_C & d_B b_C + e_Ba_C + f_B h_C & f_Ba_C\\ g_Ba_C & g_B b_C + h_Ba_C + \ell_B h_C & \ell_Ba_C \end{pmatrix} \text{ and} \\ CB &= \begin{pmatrix} a_C & b_C & 0\\ 0 & a_C & 0\\ 0 & h_C &a_C \end{pmatrix} \begin{pmatrix} a_B & b_B & c_B\\ d_B & e_B & f_B\\ g_B & h_B & \ell_B \end{pmatrix} = \begin{pmatrix} a_Ba_C + d_B b_C & b_Ba_C + e_B b_C & c_Ba_C + f_B b_C\\ d_Ba_C & e_Ba_C & f_Ba_C\\ d_B h_C + g_Ba_C & e_B h_C + h_Ba_C & f_B h_C + \ell_Ba_C \end{pmatrix}.
\end{align*} Since $BC = CB$, we have $d_B b_C = 0$, $d_B h_C = 0$, $f_B b_C = 0$, and $f_B h_C = 0$. By the supposition $h_C \ne 0$, we further deduce that $d_B = f_B = 0$. Then $ B = \begin{psmallmatrix} a_B & b_B & c_B\\ 0 & e_B & 0\\ g_B & h_B & \ell_B \end{psmallmatrix}. $ Note that we also have \begin{align}\label{eq:fromBC} a_Bb_C + c_Bh_C = e_B b_C \mbox{ and }g_Bb_C + \ell_Bh_C = e_Bh_C \end{align} by the equality~$BC = CB$. The characteristic polynomial of $B$ is \begin{align*} P(x) = -x^3 + \tr(B) x^2 - (a_Be_B + a_B\ell_B + e_B\ell_B - c_Bg_B)x + \det(B) \end{align*} which has roots $\lambda = e_B$ and $\lambda = \frac{1}{2}(a_B + \ell_B \pm \sqrt{(a_B -\ell_B)^2 + 4c_Bg_B})$. By the previous considerations, we know that $B$ has only one eigenvalue and therefore, we have $a_B =e_B = \ell_B$ and $c_Bg_B = 0$. Moreover, it follows from \eqref{eq:fromBC} that $c_B = 0$ and $g_B b_C = 0$. Note that $g_B \ne 0$ because otherwise the matrix $B$ commutes with $A$. Finally, we consider {\small\begin{align*} BD &= \begin{pmatrix} a_B & b_B & 0\\ 0 & a_B & 0\\ g_B & h_B & a_B \end{pmatrix} \begin{pmatrix} a_D & b_D & c_D\\ 0 & a_D & 0\\ 0 & h_D &a_D \end{pmatrix} = \begin{pmatrix} a_Ba_D & b_Ba_D + a_Bb_D & a_Bc_D\\ 0 & a_Ba_D & 0\\ g_Ba_D & g_Bb_D + h_Ba_D + a_Bh_D & g_Bc_D + a_Ba_D \end{pmatrix}\\ DB&=\begin{pmatrix} a_D & b_D & c_D\\ 0 & a_D & 0\\ 0 & h_D &a_D \end{pmatrix} \begin{pmatrix} a_B & b_B & 0\\ 0 & a_B & 0\\ g_B & h_B & a_B \end{pmatrix}= \begin{pmatrix} a_Da_B + c_D g_B & a_Db_B + b_Da_D + c_D h_B & c_Da_B\\ 0 & a_Da_B & 0\\ g_Ba_D & a_Dh_B + h_Da_B & a_Da_B \end{pmatrix}. \end{align*}} It is easy to see that $g_Bb_D = g_Bc_D = 0$ and thus $b_D=c_D=0$, and then $D$ commutes with $C$. Therefore, we have a contradiction. Let us consider the second case where $c_C \ne 0$ and $h_C = 0$. It is quite similar to the previous case.
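Incidentally, the commutation criterion for $C$ and $D$ used throughout this proof ($CD=DC$ precisely when $c_Ch_D=c_Dh_C$) is easy to confirm numerically for matrices of the derived shape; a small sketch with arbitrary sample integer entries:

```python
def tri(a, b, c, h):
    # Matrices of the shape derived for C and D: equal diagonal entries a,
    # and off-diagonal entries b, c, h.
    return [[a, b, c], [0, a, 0], [0, h, a]]

def mul(X, Y):
    # 3x3 matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

C1, D1 = tri(2, 1, 3, 6), tri(5, 7, 1, 2)   # 3 * 2 == 1 * 6: they commute
assert mul(C1, D1) == mul(D1, C1)

C2, D2 = tri(2, 1, 3, 6), tri(5, 7, 1, 3)   # 3 * 3 != 1 * 6: they do not
assert mul(C2, D2) != mul(D2, C2)
```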
Consider the matrix~$B$ which commutes with $C$ as follows: \begin{align*} BC &= \begin{pmatrix} a_B & b_B & c_B\\ d_B & e_B & f_B\\ g_B & h_B & \ell_B \end{pmatrix} \begin{pmatrix} a_C & b_C & c_C\\ 0 & a_C & 0\\ 0 & 0 &a_C \end{pmatrix} = \begin{pmatrix} a_Ba_C & a_B b_C + b_Ba_C & a_B c_C + c_Ba_C\\ d_Ba_C & d_B b_C + e_Ba_C & d_B c_C + f_Ba_C\\ g_Ba_C & g_B b_C + h_Ba_C &g_B c_C + \ell_Ba_C \end{pmatrix}\\ &= \begin{pmatrix} a_Ba_C + d_B b_C + g_B c_C & b_Ba_C + e_B b_C + h_B c_C & c_Ba_C + f_B b_C + \ell_B c_C\\ d_Ba_C & e_Ba_C & f_Ba_C\\ g_Ba_C & h_Ba_C & \ell_Ba_C \end{pmatrix}\\& = \begin{pmatrix} a_C & b_C & c_C\\ 0 & a_C & 0\\ 0 & 0 &a_C \end{pmatrix} \begin{pmatrix} a_B & b_B & c_B\\ d_B & e_B & f_B\\ g_B & h_B & \ell_B \end{pmatrix} = CB. \end{align*} By the equality, we have $d_B b_C = 0$, $g_B b_C = 0$, $g_B c_C = 0$, and $d_B c_C = 0$. By the supposition $c_C \ne 0$, we further deduce that $d_B = g_B = 0$. Then $B$ is of the following form: $ B = \begin{psmallmatrix} a_B & b_B & c_B\\ 0 & e_B & f_B\\ 0 & h_B & \ell_B \end{psmallmatrix}. $ Note that we also have \begin{align}\label{eq:fromBC2} a_B b_C = e_Bb_C + h_Bc_C \mbox{ and }a_Bc_C = f_Bb_C + \ell_Bc_C \end{align} by the equality~$BC = CB$. The characteristic polynomial of $B$ is $P(x) = -x^3 + \tr(B) x^2 - (a_Be_B + a_B\ell_B + e_B\ell_B - f_Bh_B)x + \det(B)$ which has roots $\lambda = a_B$ and $\lambda = \frac{1}{2}(e_B + \ell_B \pm \sqrt{(e_B -\ell_B)^2 + 4f_Bh_B})$. We know that $B$ has only one eigenvalue by the previous considerations and therefore, we have $a_B =e_B = \ell_B$ and $f_Bh_B = 0$. We can further deduce from \eqref{eq:fromBC2} that $h_B = 0$ and $f_Bb_C = 0$. Applying to the matrices $B$ and $D$, which must commute with each other, an argument similar to that of the first case, we again reach a contradiction. Finally, consider the third case where $c_C \ne 0$ and $h_C \ne 0$. It is obvious that $c_D$ and $h_D$ are also non-zero because otherwise $C$ and $D$ would commute.
Now consider the matrix~$B$ which is commuting with $C$ and $D$. We can deduce from the relation~$BC = CB$ that $d_B = g_B = f_B = 0$ and $a_B = e_B = \ell_B$ since they are eigenvalues of $B$. Hence, $ B = \begin{psmallmatrix} a_B & b_B & c_B\\ 0 & a_B & 0\\ 0 & h_B & a_B \end{psmallmatrix}. $ Now we have $c_C h_B = c_B h_C$ since $B$ and $C$ commute with each other. Note that $h_B$ and $c_B$ are both non-zero since $A$ and $B$ commute if $h_B = c_B = 0$. Let us denote $\frac{c_C}{h_C} = \frac{c_B}{h_B} = x$. We also have $c_D h_B = c_B h_D$ from the relation~$BD = DB$ and have $\frac{c_D}{h_D} = \frac{c_B}{h_B} = x$. From $x = \frac{c_C}{h_C} = \frac{c_D}{h_D}$, we have $c_C h_D = c_D h_C$ which results in the relation~$CD = DC$. Therefore, we also have a contradiction. \end{proof} \begin{theorem} \label{NoIntoSL3Z} There is no injective morphism $ \varphi: \Sigma^* \times \Sigma^* \to {\rm SL}(3,\mathbb{Z}) $ for any binary alphabet~$\Sigma$. \end{theorem} \begin{proof} Since we have examined all possible cases in Lemmas~\ref{lem:3eval}, \ref{lem:2evalNotDiag}, \ref{lem:1eval}, \ref{lem:another2eval} and \ref{lem:finalform} and found contradictions for every case, we can conclude that there is no injective morphism from $\Sigma^* \times \Sigma^*$ into the special linear group {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace. \end{proof} \begin{corollary} There is no injective morphism $ \varphi: \FG \times \FG \to \mathbb{Z}^{3\times 3} $ for any binary group alphabet $\Gamma$. \end{corollary} \begin{proof} We proceed by contradiction. Assume that there exists such an injective morphism~$\varphi$ from the set of pairs of words over a group alphabet to the set of matrices in $\mathbb{Z}^{3\times 3}$. Suppose that $A = \varphi((a,\varepsilon))$, where $a \in \Gamma$. Then the inverse matrix $A^{-1}$ corresponding to $(\overbar{a},\varepsilon)$ must be in $\mathbb{Z}^{3\times 3}$. 
This implies that the determinant of $A$ is $\pm1$ because otherwise the determinant of $A^{-1}$ would not be an integer. Consider then the morphism~$\psi$ defined by $\psi(x)=\varphi(x)\varphi(x)$ for each generator $x$ of $\FG\times\FG$. It is clear that $\psi$ is also injective and that every matrix in its image has determinant 1. By Theorem~\ref{NoIntoSL3Z}, such an injective morphism $\psi$ does not exist even for semigroup alphabets and hence neither does~$\varphi$. \end{proof} \section{Decidability of the identity problem in the Heisenberg group} The decidability of the identity problem in dimension three is a long-standing open problem. Following our findings on the non-existence of an embedding into {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace, in this section we consider the decidability of the identity problem for an important subgroup of {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace, the Heisenberg group, which is well-known in the context of quantum mechanical systems \cite{Brylinski93,GU14,Kostant70}. Recently, a few decidability results have been obtained for a knapsack variant of the membership problem in dimension three (i.e., {\rm H}\ensuremath{(3,\mathbb{Z})}\xspace), where the goal was to solve a single matrix equation with a specific order of matrices \cite{KLZ16}. In this section, we prove that the identity problem is decidable for the Heisenberg group over rational numbers. First, we provide a more intuitive solution for dimension three, i.e., {\rm H}\ensuremath{(3,\mathbb{Q})}\xspace, which still requires a number of techniques to estimate the possible values of matrix entries under permutations of matrix products. At the end of the section, we generalize the result to the {\rm H}\ensuremath{(n,\mathbb{Q})}\xspace case using analogies in the solution for dimension three. Here we prove that the identity problem for matrix semigroups in the Heisenberg group over the rationals is decidable by analysing the behaviour of products, especially in the upper-right coordinate of the matrices.
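For concreteness, the multiplication law of ${\rm H}(3,\mathbb{Q})$ in the coordinates $\psi(M)=(a,b,c)$ can be sketched as follows (a minimal illustration with arbitrary sample values; not part of the formal development):

```python
from fractions import Fraction as F

def heis(a, b, c):
    # Upper unitriangular matrix of H(3, Q) with psi(M) = (a, b, c).
    return [[1, a, c], [0, 1, b], [0, 0, 1]]

def mul(X, Y):
    # Exact 3x3 matrix product over the rationals.
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def psi(M):
    return (M[0][1], M[1][2], M[0][2])

# Composition law: (a1,b1,c1)*(a2,b2,c2) = (a1+a2, b1+b2, c1+c2+a1*b2).
M1, M2 = heis(F(1, 2), F(3), F(1)), heis(F(2), F(1, 3), F(5))
assert psi(mul(M1, M2)) == (F(1, 2) + 2, F(3) + F(1, 3),
                            F(1) + F(5) + F(1, 2) * F(1, 3))

# Two matrices commute iff their superdiagonal vectors (a, b) are parallel.
assert mul(M1, M2) != mul(M2, M1)      # (1/2, 3) and (2, 1/3): not parallel
M3 = heis(F(1), F(6), F(0))            # (1, 6) is parallel to (1/2, 3)
assert mul(M1, M3) == mul(M3, M1)
```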
From Lemma~\ref{lem:commute}, it follows that matrix multiplication is commutative in the Heisenberg group if and only if the matrices have pairwise parallel superdiagonal vectors. We thus analyse two cases of products, for matrices whose superdiagonal vectors are pairwise parallel and for matrices where not all superdiagonal vectors are pairwise parallel, and then provide algorithms that solve the problem in polynomial time. The most difficult part is showing that only a limited number of conditions must be checked to guarantee the existence of a product that results in the identity. \begin{lemma}\label{lem:single} Let $G = \{ M_1, M_2, \ldots, M_r \} \subseteq {\rm H}(3,\mathbb{Q})$ be a set of matrices from the Heisenberg group such that the superdiagonal vectors of the matrices are pairwise parallel. If there exists a sequence of matrices $M = M_{i_1} M_{i_2} \cdots M_{i_k},$ where $i_j \in [1,r]$ for all $1 \le j \le k$, such that $\psi(M) = (0,0,c)$ for some $c \in \mathbb{Q}$, then \begin{align*} c = \sum_{j=1}^k (c_{i_j} - \frac{q}{2}a_{i_j}^2 ) \end{align*} for some $q \in \mathbb{Q}$ dependent only on $G$. \end{lemma} \begin{proof} Consider the sequence $M_{i_1} M_{i_2} \cdots M_{i_k}$ and let $M_i = \begin{psmallmatrix} 1 & a_i & c_i\\ 0 & 1 & b_i\\ 0 & 0 & 1 \end{psmallmatrix}$ for each $i \in [1,r]$. Since the superdiagonal vectors are parallel, i.e., $a_ib_j=b_ia_j$ for any $i,j\in[1,r]$, we have $q = \frac{b_i}{a_i} \in \mathbb{Q}$ and thus $a_iq = b_i$ for all $i \in [1,r]$. Let us consider the product of the matrices. Then the value~$c$ is equal to \begin{align*} c &= \sum_{j=1}^k c_{i_j} + \sum_{\ell=1}^{k-1} \Bigg( \sum_{j=1}^{\ell}a_{i_j} \Bigg) a_{i_{\ell+1}} q = \sum_{j=1}^k c_{i_j} + \frac{1}{2} \Bigg( \sum_{\ell=1}^{k}\sum_{j=1}^{k} a_{i_\ell} a_{i_j} q - \sum_{j=1}^k a_{i_j}^2 q \Bigg) \\ &= \sum_{j=1}^k (c_{i_j} - \frac{q}{2}a_{i_j}^2).
\end{align*} The second equality follows from the direct computation {\small\begin{align}\label{eq:sum} \!\!\!\sum_{\ell=1}^k\sum_{j=1}^k a_{i_\ell}a_{i_j} &=\sum_{\ell=1}^{k-1} \Bigg( \sum_{j=1}^{\ell}a_{i_j} \Bigg) a_{i_{\ell+1}}+a_{i_1}(a_{i_1}+\ldots+a_{i_k})+a_{i_2}(a_{i_2}+\ldots+a_{i_k})+\ldots+a_{i_k}a_{i_k} \nonumber \\ &=\sum_{\ell=1}^{k-1} \Bigg( \sum_{j=1}^{\ell}a_{i_j} \Bigg) a_{i_{\ell+1}}+a_{i_1}a_{i_1}+a_{i_2}(a_{i_1}+a_{i_2})+\ldots+a_{i_k}(a_{i_1}+\ldots+a_{i_k}) \\ &=\sum_{\ell=1}^{k-1} \Bigg( \sum_{j=1}^{\ell}a_{i_j} \Bigg) a_{i_{\ell+1}}+\sum_{\ell=1}^{k-1} \Bigg( \sum_{j=1}^{\ell}a_{i_j} \Bigg) a_{i_{\ell+1}}+\sum_{j=1}^k a_{i_j}^2. \nonumber \end{align}} Note that $\sum_{j=1}^k a_{i_j}=0$ by our choice of the sequence of matrices. The value $c$ is preserved under reordering of the matrices due to their commutativity. \end{proof} Note that the previous lemma also holds for {\rm H}\ensuremath{(3,\mathbb{R})}\xspace. It is worth mentioning that the identity problem in the Heisenberg group is decidable if all the generators have pairwise parallel superdiagonal vectors, since then the problem reduces to solving a system of two linear homogeneous Diophantine equations. Hence, it remains to consider the case when there exist two matrices with non-parallel superdiagonal vectors in the sequence generating the identity matrix. In the following, we prove that the identity matrix is always constructible if we can construct some matrix with the zero superdiagonal vector using matrices with non-parallel superdiagonal vectors. \begin{lemma}\label{lem:nonparallel} Let $S = \langle M_1, \ldots, M_r \rangle \subseteq {\rm H}(3,\mathbb{Q})$ be a finitely generated matrix semigroup.
Then the identity matrix exists in $S$ if there exists a sequence of matrices $M_{i_1} M_{i_2} \cdots M_{i_k},$ where $i_j \in [1,r]$ for all $1 \le j \le k$, satisfying the following properties: \begin{enumerate}[(i)] \item $\psi(M_{i_1} M_{i_2} \cdots M_{i_k}) = (0,0,c)$ for some $c \in \mathbb{Q}$, and \item $\vec{v}(M_{i_{j_1}})$ and $\vec{v}(M_{i_{j_2}})$ are not parallel for some $j_1, j_2 \in [1,k]$. \end{enumerate} \end{lemma} \begin{proof} Let $M = M_{i_1} M_{i_2} \cdots M_{i_k}$ and $\psi(M) = (0,0,c)$ for some $c \in \mathbb{Q}$. If $c=0$, then $M$ is the identity matrix, hence we assume that $c > 0$ as the case of $c < 0$ is symmetric. Given that $M_i$ is the $i$th generator and $\psi(M_i) = (a_i,b_i,c_i)$, we have $\sum_{j=1}^k a_{i_j} = 0$ and $ \sum_{j=1}^k b_{i_j} = 0$. Since $c > 0$, the following also holds: \begin{align}\label{eq:cvalue} c = \sum_{\ell=1}^{k-1}\sum_{j=1}^\ell a_{i_j} b_{i_{\ell +1}} + \sum_{j=1}^k c_{i_j} >0. \end{align} If the matrix semigroup~$S\subseteq {\rm H}\ensuremath{(3,\mathbb{Q})}\xspace$ has two different matrices $N_1$ and $N_2$ such that $\psi(N_1) = (0,0,c_1)$ and $\psi(N_2) = (0,0,c_2)$ and $c_1 c_2 < 0$, then the identity matrix exists in $S$. Let $\psi(N_1) = (0,0,\frac{p_1}{q_1})$ and $\psi(N_2) = (0,0,\frac{p_2}{q_2})$, where $p_1,q_1, q_2 \in \mathbb{Z}$ are positive and $p_2 \in \mathbb{Z}$ is negative. Then it is easy to see that the matrix $N_1^{-q_1p_2}N_2^{q_2p_1}$ exists in $S$ and that $\psi(N_1^{-q_1p_2}N_2^{q_2p_1}) = (0,0,0).$ Now we will prove that if $S$ contains a matrix $M$ such that $\psi(M) = (0,0,c)$, where $c > 0$, then there also exists a matrix $M'$ such that $\psi(M') = (0,0,c')$, where $c' < 0$. First, we classify the matrices into four types as follows. 
A matrix with a superdiagonal vector~$(a,b)$ is classified as \begin{enumerate}[1)] \item the $({\scriptstyle +,+})$-type if $a,b >0$, \item the $({\scriptstyle +,-})$-type if $a\ge 0$ and $b\le0$, \item the $({\scriptstyle -,-})$-type if $a,b < 0$, and \item the $({\scriptstyle -,+})$-type if $a\le0$ and $b>0$. \end{enumerate} Note that the four types cover all possible superdiagonal vectors. Let $G = \{M_1, M_2, \ldots, M_r\}$ be the generating set of the matrix semigroup~$S$. Then $G = G_{({\scriptscriptstyle +,+})} \sqcup G_{({\scriptscriptstyle +,-})} \sqcup G_{({\scriptscriptstyle -,-})} \sqcup G_{({\scriptscriptstyle -,+})}$, where $ G_{(\xi_1,\xi_2)}$ is the set of matrices of the $(\xi_1,\xi_2)$-type, for $\xi_1, \xi_2 \in \{+,-\}$. Recall that we assume $M = M_{i_1} \cdots M_{i_k}$ and $\psi(M) = (0,0,c)$ for some $c>0$. The main idea of the proof is to generate a matrix $M'$ such that $\psi(M') = (0,0,c')$ for some $c' < 0$ by duplicating the matrices in the sequence $M = M_{i_1} \cdots M_{i_k}$ multiple times and reshuffling. Note that any permutation of the sequence generating the matrix $M$ such that $\psi(M) = (0,0,c)$ still generates a matrix $M'$ with $\psi(M') = (0,0,c')$, since matrix multiplication acts on the first two coordinates by addition, which is commutative. Moreover, we can still obtain matrices $M''$ such that $\psi(M'') = (0,0,c'')$ for some $c'' \in \mathbb{Q}$ if we shuffle two different permutations of the sequence $M_{i_1}\cdots M_{i_k}$, for the same reason.
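The permutation invariance of the shape $(0,0,\cdot)$, together with the dependence of the third coordinate on the ordering, can be sketched as follows (the generator values are arbitrary, chosen so that the `$a$'- and `$b$'-entries each sum to zero):

```python
from fractions import Fraction as F
from itertools import permutations

def heis(a, b, c):
    return [[1, a, c], [0, 1, b], [0, 0, 1]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Hypothetical generators whose superdiagonal entries sum to zero.
mats = [heis(F(1), F(2), F(0)), heis(F(1), F(-3), F(0)), heis(F(-2), F(1), F(0))]

values = set()
for perm in permutations(mats):
    P = perm[0]
    for M in perm[1:]:
        P = mul(P, M)
    # Every ordering yields psi(P) = (0, 0, c) ...
    assert (P[0][1], P[1][2]) == (0, 0)
    values.add(P[0][2])

# ... but the value c itself depends on the ordering.
assert len(values) > 1
```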
\begin{figure}[htb] \centering\begin{tikzpicture}[xscale=0.7,yscale=0.5,every node/.style={scale=1}] \draw (0,10.5) -- (0,0); \draw (0,0) -- (17,0); \draw[pattern=dots, pattern color=blue] (2,0) node (v13) {} rectangle (3,3); \draw[pattern=dots, pattern color=blue] (3,0) node (v15) {} rectangle (4,5.5); \draw [dashed] (4,0) -- (4,10.5); \draw [pattern=north east lines, pattern color=red] (4,0) rectangle (5,7.5); \draw [pattern=north east lines, pattern color=red] (5,0) rectangle (6.5,9); \draw [pattern=north east lines, pattern color=red] (6.5,0) rectangle (8,10); \draw [dashed] (12,10.5) -- (12,0); \draw (16,0) -- (16,10.5); \draw [dashed] (8,10.5) -- (8,0); \draw[pattern=north east lines, pattern color=red] (8,9) rectangle (10.5,0); \draw[pattern=north east lines, pattern color=red] (10.5,5) rectangle (12,0); \draw [pattern=dots, pattern color=blue] (12,4) rectangle (13,0); \draw [pattern=dots, pattern color=blue] (13,3) rectangle (14,0); \draw [pattern=dots, pattern color=blue] (14,2) rectangle (15,0); \draw [pattern=dots, pattern color=blue] (15,1) rectangle (16,0); \draw (1,0) -- (1,-1); \draw (2,0) -- (2,-1); \draw (3,0) -- (3,-1); \draw (4,0) -- (4,-1); \draw (5,0) -- (5,-1); \draw (6.5,0) -- (6.5,-1); \draw (8,0) -- (8,-1); \draw (10.5,0) -- (10.5,-1); \draw (12,0) -- (12,-1); \draw (13,0) -- (13,-1); \draw (14,0) -- (14,-1); \draw (15,0) -- (15,-1); \draw (16,0) -- (16,-1); \draw [<->] (1,-0.5) -- (2,-0.5); \draw [<->] (2,-0.5) -- (3,-0.5); \draw [<->] (3,-0.5) -- (4,-0.5); \draw [<->] (4,-0.5) -- (5,-0.5); \draw [<->] (5,-0.5) -- (6.5,-0.5); \draw [<->] (6.5,-0.5) -- (8,-0.5); \draw [<->] (8,-0.5) -- (10.5,-0.5); \draw [<->] (10.5,-0.5) -- (12,-0.5); \draw [<->] (12,-0.5) -- (13,-0.5); \draw [<->] (13,-0.5) -- (14,-0.5); \draw [<->] (14,-0.5) -- (15,-0.5); \draw [<->] (15,-0.5) -- (16,-0.5); \node at (0.5,-1) {$b_{1}$}; \node at (1.5,-1) {$b_{2}$}; \node at (2.5,-1) {$b_{3}$}; \node at (3.5,-1) {$b_{4}$}; \node at (4.5,-1) {$|b_{5}|$}; \node at (5.75,-1) 
{$|b_{6}|$}; \node at (7.25,-1) {$|b_{7}|$}; \node at (9.25,-1) {$|b_{8}|$}; \node at (11.25,-1) {$|b_{9}|$}; \node at (12.5,-1) {$b_{{10}}$}; \node at (13.5,-1) {$b_{{11}}$}; \node at (14.5,-1) {$b_{{12}}$}; \node at (15.5,-1) {$b_{{13}}$}; \draw [pattern=dots, pattern color=blue] (2,2) rectangle (1,0); \draw (-1,0) -- (1,0); \draw (-1,2) -- (0,2); \draw (-1,3) -- (0,3); \draw (-1,5.5) -- (0,5.5); \draw (-1,7.5) -- (0,7.5); \draw (-1,9) -- (0,9); \draw (-1,10) -- (0,10); \draw (16,1) -- (17,1); \draw (16,2) -- (17,2); \draw (16,3) -- (17,3); \draw (16,4) -- (17,4); \draw (16,5) -- (17,5); \draw (16,9) -- (17,9); \draw (16,10) -- (17,10); \draw [<->] (-0.5,10) -- (-0.5,9); \draw [<->] (-0.5,9) -- (-0.5,7.5); \draw [<->] (-0.5,7.5) -- (-0.5,5.5); \draw [<->] (-0.5,5.5) -- (-0.5,3); \draw [<->] (-0.5,3) -- (-0.5,2); \draw[<->] (-0.5,2) -- (-0.5,0); \node at (-1,9.5) {$a_6$}; \node at (-1,8.25) {$a_5$}; \node at (-1,6.5) {$a_4$}; \node at (-1,4.25) {$a_3$}; \node at (-1,2.5) { $a_2$}; \node at (-1,1) {$a_1$}; \draw [<->](16.5,10) -- (16.5,9); \draw[<->] (16.5,9) -- (16.5,5); \draw [<->](16.5,5) -- (16.5,4); \draw [<->] (16.5,4) -- (16.5,3); \draw [<->] (16.5,3) -- (16.5,2); \draw [<->] (16.5,2) -- (16.5,1); \draw [<->] (16.5,1) -- (16.5,0); \node at (17.3,9.5) {$|a_7|$}; \node at (17.3,7) {$|a_8|$}; \node at (17.3,4.5) {$|a_9|$}; \node at (17.3,3.5) {$|a_{10}|$}; \node at (17.3,2.5) {$|a_{11}|$}; \node at (17.3,1.5) {$|a_{12}|$}; \node at (17.3,0.5) {$|a_{13}|$}; \draw [draw=none,pattern=dots, pattern color=blue] (0.4,10.5) rectangle (1.15,9.75); \draw [draw=none,pattern=north east lines, pattern color=red] (0.4,9.5) rectangle (1.15,8.75); \node [align=left] at (2.4,10.1) {: positive}; \node [align=left] at (2.49,9.1) {: negative}; \draw (0,0) -- (0,-1); \draw [<->] (0,-0.5) -- (1,-0.5); \end{tikzpicture} \caption{The histogram describes how the upper-right corner of $M_{1}\cdots M_{13}$ is computed by multiplications. 
The blue dotted (red lined) area implies the value which will be added to (subtracted from) the upper-right corner of the final matrix after multiplications of matrices in the sequence.} \label{fig:example} \end{figure} \begin{figure}[htb] \centering \begin{tikzpicture}[xscale=0.6,yscale=0.5,every node/.style={scale=1.0}] \draw (0,11) -- (0,0); \draw (0,0) -- (16,0); \draw [dashed] (4,0) -- (4,11); \draw [dashed] (12,0) -- (12,11); \draw (16,0) -- (16,11); \draw [dashed] (8,0) -- (8,11); \draw [draw=none,pattern=dots, pattern color=blue] (0.25,10.5) rectangle (1,9.75); \draw [draw=none,pattern=north east lines, pattern color=red] (0.25,9.5) rectangle (1,8.75); \node [align=left] at (2.25,10.1) {: positive}; \node [align=left] at (2.35,9.1) {: negative}; \draw [pattern=dots, pattern color=blue] (0,1) rectangle (0.5,0); \draw [pattern=dots, pattern color=blue] (0.5,0) rectangle (1,2); \draw [pattern=dots, pattern color=blue] (1,0) rectangle (1.5,3); \draw (0,0) -- (4,8) -- (8,10); \draw (8,10) -- (12,4) -- (12,4) -- (16,0); \draw [pattern=dots, pattern color=blue] (1.5,4) rectangle (2,0); \draw [pattern=dots, pattern color=blue] (2,5) rectangle (2.5,0); \draw [pattern=dots, pattern color=blue] (2.5,6) rectangle (3,0); \draw [pattern=dots, pattern color=blue] (3,7) rectangle (3.5,0); \draw [pattern=dots, pattern color=blue] (3.5,8) rectangle (4,0); \draw [pattern=north east lines, pattern color=red] (4,8.0) rectangle (4.5,0); \draw [pattern=north east lines, pattern color=red] (4.5,8.25) rectangle (5,0); \draw [pattern=north east lines, pattern color=red] (5,8.5) rectangle (5.5,0); \draw [pattern=north east lines, pattern color=red] (5.5,8.75) rectangle (6,0); \draw [pattern=north east lines, pattern color=red] (6,9) rectangle (6.5,0); \draw [pattern=north east lines, pattern color=red] (6.5,9.25) rectangle (7,0); \draw [pattern=north east lines, pattern color=red] (7,9.5) rectangle (7.5,0); \draw [pattern=north east lines, pattern color=red] (7.5,9.75) rectangle 
(8,0) ; \draw [pattern=north east lines, pattern color=red] (8,9.25) rectangle (8.5,0); \draw [pattern=north east lines, pattern color=red] (8.5,8.5) rectangle (9,0); \draw [pattern=north east lines, pattern color=red] (9,7.75) rectangle (9.5,0); \draw [pattern=north east lines, pattern color=red] (9.5,7) rectangle (10,0); \draw [pattern=north east lines, pattern color=red] (10,6.25) rectangle (10.5,0); \draw [pattern=north east lines, pattern color=red] (10.5,5.5) rectangle (11,0); \draw [pattern=north east lines, pattern color=red] (11,4.75) rectangle (11.5,0); \draw [pattern=north east lines, pattern color=red] (11.5,4) rectangle (12,0); \draw [pattern=dots, pattern color=blue] (12,4) rectangle (12.5,0); \draw [pattern=dots, pattern color=blue] (12.5,3.5) rectangle (13,0); \draw [pattern=dots, pattern color=blue] (13,3) rectangle (13.5,0); \draw [pattern=dots, pattern color=blue] (13.5,2.5) rectangle (14,0); \draw [pattern=dots, pattern color=blue] (14,2) rectangle (14.5,0); \draw [pattern=dots, pattern color=blue] (14.5,1.5) rectangle (15,0); \draw [pattern=dots, pattern color=blue] (15,1) rectangle (15.5,0); \draw [pattern=dots, pattern color=blue] (15.5,0.5) rectangle (16,0); \draw (0,0) -- (0,-1); \draw (4,0) -- (4,-1); \draw (8,0) -- (8,-1); \draw (12,0) -- (12,-1); \draw (16,0) -- (16,-1); \draw [<->] (0,-0.5) -- (4,-0.5); \draw [<->] (8,-0.5) -- (4,-0.5); \draw [<->](8,-0.5) -- (12,-0.5); \draw [<->] (12,-0.5) -- (16,-0.5); \draw (-1,0) -- (0,0); \draw (-1,8) -- (0,8); \draw (-1,10) -- (0,10); \draw (16,0) -- (17,0); \draw (16,4) -- (17,4); \draw [<->](-0.5,10) -- (-0.5,8); \draw [<->](-0.5,8) -- (-0.5,0); \draw [<->](16.5,4) -- (16.5,0); \draw (16,10) -- (17,10); \draw [<->](16.5,10) -- (16.5,4); \node at (2,-1) {$b_{({\scriptscriptstyle +,+})}m$}; \node at (6,-1) {$|b_{({\scriptscriptstyle +,-})}|m$}; \node at (10,-1) {$|b_{({\scriptscriptstyle -,-})}|m$}; \node at (14,-1) {$b_{({\scriptscriptstyle -,+})}m$}; \node at (18,2) {$|a_{({\scriptscriptstyle 
-,+})}|m$}; \node at (18,7) {$|a_{({\scriptscriptstyle -,-})}|m$}; \node at (-1.7,4) {$a_{({\scriptscriptstyle +,+})}m$}; \node at (-1.7,9) {$a_{({\scriptscriptstyle +,-})}m$}; \end{tikzpicture} \caption{The histogram describes how the value in the upper-right corner of the matrix $M_{({\scriptscriptstyle +,+})}^m M_{({\scriptscriptstyle +,-})}^m M_{({\scriptscriptstyle -,-})}^m M_{({\scriptscriptstyle -,+})}^m$ is computed by multiplications. Here $m = 8$.} \label{fig:repeat} \end{figure} Let us illustrate the idea with the following example. See Figure~\ref{fig:example} and Figure~\ref{fig:repeat} for pictorial descriptions of the idea. Let $\{M_i \mid 1\le i\le 4\} \subseteq G_{({\scriptscriptstyle +,+})}$, $\{M_i \mid 5\le i\le 7\} \subseteq G_{({\scriptscriptstyle +,-})}$, $\{M_i \mid 8\le i\le 9\} \subseteq G_{({\scriptscriptstyle -,-})}$, and $\{M_i \mid 10\le i\le 13\} \subseteq G_{({\scriptscriptstyle -,+})}$. Then assume that $M_1M_2\cdots M_{13}=\begin{psmallmatrix}1&0&x\\0&1&0\\0&0&1\end{psmallmatrix}$, where $x$ is computed by \eqref{eq:cvalue}. As we mentioned above, $x$ changes if we change the order of multiplicands. In this example, we first multiply $({\scriptstyle +,+})$-type matrices and accumulate the values in the superdiagonal coordinates since these matrices have positive values in these coordinates. Indeed, the blue dotted area implies the value we add to the upper-right corner by multiplying such matrices. Then we multiply $({\scriptstyle +,-})$-type matrices and still increase the `$a$'-value. The `$b$'-values in $({\scriptstyle +,-})$-type matrices are negative; thus, the red lined area is subtracted from the upper-right corner. We still subtract by multiplying $({\scriptstyle -,-})$-type matrices since the accumulated `$a$'-value is still positive and the `$b$'-values are negative. Then we finish the multiplication by adding exactly the last blue dotted area to the upper-right corner.
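The order-dependence of the corner value is easy to check computationally. The following sketch represents each Heisenberg matrix by its triple $\psi(M)=(a,b,c)$, which determines the matrix completely; the concrete triples are hypothetical, chosen only so that all four sign types occur and the superdiagonal entries sum to zero.

```python
from functools import reduce

def mul(p, q):
    """Multiply two Heisenberg matrices given as psi-triples (a, b, c).
    The superdiagonal entries add, and the corner picks up the cross term a1*b2."""
    a1, b1, c1 = p
    a2, b2, c2 = q
    return (a1 + a2, b1 + b2, c1 + c2 + a1 * b2)

def corner(seq):
    """Upper-right corner of the product of a sequence of psi-triples."""
    return reduce(mul, seq)[2]

# Hypothetical matrices, one of each sign type; the a's and the b's sum to zero:
pp = (2, 3, 0)    # (+,+)-type
pm = (1, -2, 0)   # (+,-)-type
mm = (-2, -3, 0)  # (-,-)-type
mp = (-1, 2, 0)   # (-,+)-type

# The superdiagonal of the product is (0, 0) in any order,
# but the corner value depends on the order of multiplication:
print(corner([pp, pm, mm, mp]))  # -11
print(corner([mp, mm, pm, pp]))  # 3
```

The two orders accumulate the cross terms with different signs, which is exactly the effect depicted by the blue and red areas in the figures.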
It is easy to see that the total subtracted value is larger than the total added value. However, we cannot guarantee that $x$ is negative since $\sum_{i=1}^{13} c_i$ could be larger than the contribution from the superdiagonal coordinates. This is why we need to copy the sequence of matrices generating the matrix corresponding to the triple~$(0,0,c)$ for some $c \in \mathbb{Q}$. In Figure~\ref{fig:repeat}, we describe an example where we duplicate the sequence eight times and permute the copies in order to minimize the value in the upper-right corner. Now the lengths of both axes are $m$ times larger than before ($m=8$ in this example), and it follows that the area grows quadratically in $m$. Since the summation $m \cdot \sum_{i=1}^{13} c_i$ grows only linearly in $m$, we have $x < 0$ when $m$ is large enough. For each $\xi_1,\xi_2\in\{+,-\}$, let us define multisets $S_{(\xi_1, \xi_2)}$ that are obtained from the sequence~$M_{i_1} \cdots M_{i_k}$ by partitioning the product according to the matrix types. That is, $S_{(\xi_1, \xi_2)}$ contains exactly the matrices of $(\xi_1,\xi_2)$-type in the product (possibly with several copies of each matrix). For each $\xi_1, \xi_2 \in \{+,-\}$, let us define $a_{(\xi_1,\xi_2)},b_{(\xi_1,\xi_2)},c_{(\xi_1,\xi_2)}$ such that \begin{align*} (a_{(\xi_1, \xi_2)} ,b_{(\xi_1, \xi_2)}, c_{(\xi_1, \xi_2)})=\sum_{M \in S_{(\xi_1, \xi_2)}} \psi(M). \end{align*} In other words, $a_{(\xi_1, \xi_2)}$ ($b_{(\xi_1, \xi_2)}$ and $c_{(\xi_1, \xi_2)}$, respectively) is the sum of the values in the `$a$' (`$b$' and `$c$', respectively) coordinate from the matrices in the multiset $S_{(\xi_1, \xi_2)}$.
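The partition into the multisets $S_{(\xi_1,\xi_2)}$ and the corresponding componentwise sums of $\psi$-values can be sketched as follows; this is a minimal illustration (not part of the proof), assuming every matrix is given by its $\psi$-triple with non-zero superdiagonal entries.

```python
def sign_type(t):
    """Type (xi1, xi2) of a psi-triple (a, b, c): the signs of a and b."""
    a, b, _ = t
    return ('+' if a > 0 else '-', '+' if b > 0 else '-')

def type_sums(triples):
    """Componentwise sum of psi over each multiset S_(xi1, xi2)."""
    sums = {}
    for a, b, c in triples:
        key = sign_type((a, b, c))
        sa, sb, sc = sums.get(key, (0, 0, 0))
        sums[key] = (sa + a, sb + b, sc + c)
    return sums

# Hypothetical psi-triples; two share the type (+,+) and two the type (-,-):
triples = [(1, 2, 5), (3, 1, 7), (2, -1, 4), (-4, -2, 6), (-2, -1, 3)]
print(type_sums(triples))
```

Each dictionary value is the triple $(a_{(\xi_1,\xi_2)}, b_{(\xi_1,\xi_2)}, c_{(\xi_1,\xi_2)})$ for the corresponding type.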
Now consider a permutation of the sequence $M_{i_1}\cdots M_{i_k}$, where the first part of the sequence only consists of the $({\scriptstyle +,+})$-type matrices, the second part only consists of the $({\scriptstyle +,-})$-type matrices, the third part only consists of the $({\scriptstyle -,-})$-type, and finally the last part only consists of the $({\scriptstyle -,+})$-type. Let us denote by $M_{({\scriptscriptstyle +,+})}$ the matrix which results from the multiplication of the first part, namely, $M_{({\scriptscriptstyle +,+})} = \prod_{M \in S_{({\scriptscriptstyle +,+})}} M.$ Then $\psi(M_{({\scriptscriptstyle +,+})}) = (a_{({\scriptscriptstyle +,+})}, b_{({\scriptscriptstyle +,+})}, x_{({\scriptscriptstyle +,+})})$ holds, where $x_{({\scriptscriptstyle +,+})} < c_{({\scriptscriptstyle +,+})} + a_{({\scriptscriptstyle +,+})} b_{({\scriptscriptstyle +,+})}.$ Let us define $M_{({\scriptscriptstyle +,-})}$, $M_{({\scriptscriptstyle -,-})}$ and $M_{({\scriptscriptstyle -,+})}$ in a similar fashion. Note that for $M_{({\scriptscriptstyle +,-})}$ and $M_{({\scriptscriptstyle -,+})}$, the term $x$ is bounded from below. Now we claim that there exists an integer~$m > 0$ such that $M_{({\scriptscriptstyle +,+})}^m M_{({\scriptscriptstyle +,-})}^m M_{({\scriptscriptstyle -,-})}^m M_{({\scriptscriptstyle -,+})}^m$ corresponds to the triple~$(0,0,c')$ for some $c' < 0$. Let $N$ be a matrix in $ {\rm H}(3,\mathbb{Q})$ and $\psi(N) = (a,b,c)$. Then the upper-triangular coordinates of the $m$th power of $N$ are calculated as follows: $\psi(N^m) = (am, bm, cm + ab \cdot \frac{1}{2}m(m-1))$. Next, we consider how the upper-triangular coordinates are affected by multiplication of matrices $M_{({\scriptscriptstyle +,+})}^m$, $M_{({\scriptscriptstyle +,-})}^m$, $M_{({\scriptscriptstyle -,-})}^m$ and $M_{({\scriptscriptstyle -,+})}^m$.
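The closed form for $\psi(N^m)$ can be checked against repeated multiplication; a small sketch (again representing matrices by their $\psi$-triples, with arbitrary sample values) might look as follows.

```python
from functools import reduce

def mul(p, q):
    """psi-triple product of two Heisenberg matrices."""
    a1, b1, c1 = p
    a2, b2, c2 = q
    return (a1 + a2, b1 + b2, c1 + c2 + a1 * b2)

def power(t, m):
    """psi(N^m) by repeated multiplication."""
    return reduce(mul, [t] * m)

def power_closed(t, m):
    """Closed form: psi(N^m) = (a*m, b*m, c*m + a*b * m*(m-1)/2)."""
    a, b, c = t
    return (a * m, b * m, c * m + a * b * (m * (m - 1) // 2))

# The two computations agree, e.g. for the hypothetical triple (3, -2, 5):
for m in range(1, 12):
    assert power((3, -2, 5), m) == power_closed((3, -2, 5), m)
```

The quadratic term $ab \cdot \frac{1}{2}m(m-1)$ arises because the $j$th factor contributes a cross term with the $j-1$ accumulated copies before it.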
Let us consider the first part of the product, $M_{({\scriptscriptstyle +,+})}^m$, that is, $\psi(M_{({\scriptscriptstyle +,+})}^m) = (a_{({\scriptscriptstyle +,+})}m, b_{({\scriptscriptstyle +,+})}m, x_{({\scriptscriptstyle +,+})}m + z_1)$, where $z_1$ can be found in Table~\ref{tab:z1234}. Now we multiply $M_{({\scriptscriptstyle +,+})}^m$ by the second part $M_{({\scriptscriptstyle +,-})}^m$. Then the resulting matrix $M_{({\scriptscriptstyle +,+})}^mM_{({\scriptscriptstyle +,-})}^m$ corresponds to \begin{align*} \psi(M_{({\scriptscriptstyle +,+})}^mM_{({\scriptscriptstyle +,-})}^m)=((a_{({\scriptscriptstyle +,+})}+a_{({\scriptscriptstyle +,-})})m, (b_{({\scriptscriptstyle +,+})}+b_{({\scriptscriptstyle +,-})})m, (x_{({\scriptscriptstyle +,+})}+x_{({\scriptscriptstyle +,-})})m + z_1 - z_2), \end{align*} where $z_2$ can be found in Table~\ref{tab:z1234}. Similarly, we compute the contributions $z_3$ and $z_4$ to the upper-right corner that result from multiplying $M_{({\scriptscriptstyle -,-})}^m$ and $M_{({\scriptscriptstyle -,+})}^m$, and present them in Table~\ref{tab:z1234}. \begin{table}[htb] \begin{align*} z_1 &= | a_{({\scriptscriptstyle +,+})}|| b_{({\scriptscriptstyle +,+})}| \cdot \frac{1}{2}m(m-1), \\ z_2 &= m^2 |a_{({\scriptscriptstyle +,+})}| |b_{({\scriptscriptstyle +,-})}| + |a_{({\scriptscriptstyle +,-})} ||b_{({\scriptscriptstyle +,-})}| \cdot \frac{1}{2}m(m-1), \\ z_3 &= |a_{({\scriptscriptstyle -,+})}|| b_{({\scriptscriptstyle -,-})}| m^2 + |a_{({\scriptscriptstyle -,-})} || b_{({\scriptscriptstyle -,-})}| \cdot \frac{1}{2}m(m-1) \ \text{and} \\ z_4 &= |a_{({\scriptscriptstyle -,+})}| |b_{({\scriptscriptstyle -,+})}| \cdot \frac{1}{2}m(m-1).
\end{align*} \caption{Values $z_1$, $z_2$, $z_3$ and $z_4$ in the product $M_{({\scriptscriptstyle +,+})}^m M_{({\scriptscriptstyle +,-})}^m M_{({\scriptscriptstyle -,-})}^m M_{({\scriptscriptstyle -,+})}^m$ \label{tab:z1234}} \end{table} After multiplying all four parts, we have \begin{multline*} \psi(M_{({\scriptscriptstyle +,+})}^m M_{({\scriptscriptstyle +,-})}^m M_{({\scriptscriptstyle -,-})}^m M_{({\scriptscriptstyle -,+})}^m) = \\ (0,0, (x_{({\scriptscriptstyle +,+})}+x_{({\scriptscriptstyle +,-})} + x_{({\scriptscriptstyle -,-})} + x_{({\scriptscriptstyle -,+})})m + z_1 - z_2 - z_3 + z_4). \end{multline*} Denote $z = z_1 - z_2 - z_3 + z_4$. From the above equations, we can see that $z$ can be represented as a quadratic polynomial in $m$ and that the coefficient of $m^2$ is always negative if $S_{(\xi_1, \xi_2)} \ne \emptyset$ for all $\xi_1, \xi_2 \in \{ +,-\}$. That is, the coefficient of $m^2$ is \begin{multline*} \frac{1}{2}(|a_{({\scriptscriptstyle +,+})}|| b_{({\scriptscriptstyle +,+})}| + |a_{({\scriptscriptstyle -,+})}|| b_{({\scriptscriptstyle -,+})}|) - \frac{1}{2}(|a_{({\scriptscriptstyle +,-})}|| b_{({\scriptscriptstyle +,-})}| + |a_{({\scriptscriptstyle -,-})}|| b_{({\scriptscriptstyle -,-})}|) \\ {} - |a_{({\scriptscriptstyle +,+})}|| b_{({\scriptscriptstyle +,-})}| - |a_{({\scriptscriptstyle -,+})}|| b_{({\scriptscriptstyle -,-})}|. \end{multline*} Let us simplify the equation by denoting $|a_{({\scriptscriptstyle +,+})}| + |a_{({\scriptscriptstyle +,-})}| = |a_{({\scriptscriptstyle -,+})}| + |a_{({\scriptscriptstyle -,-})}| = a'$ and $|b_{({\scriptscriptstyle +,+})}| + |b_{({\scriptscriptstyle -,+})}| = |b_{({\scriptscriptstyle +,-})}| + |b_{({\scriptscriptstyle -,-})}| = b'$. Note that these equations hold because we are considering a product in which the `$a$' and `$b$' values add up to zero.
Then \begin{align*} a'b' &= a' (|b_{({\scriptscriptstyle +,-})}| + |b_{({\scriptscriptstyle -,-})}|) = a' |b_{({\scriptscriptstyle +,-})}| + a'|b_{({\scriptscriptstyle -,-})}| \\ &= (|a_{({\scriptscriptstyle +,+})}| + |a_{({\scriptscriptstyle +,-})}|)|b_{({\scriptscriptstyle +,-})}| + (|a_{({\scriptscriptstyle -,+})}| + |a_{({\scriptscriptstyle -,-})}|)|b_{({\scriptscriptstyle -,-})}|. \end{align*} Now the coefficient of $m^2$ in $z$ can be written as \begin{align}\label{eq:negative} -a'b' + \frac{1}{2}(|a_{({\scriptscriptstyle +,+})}|| b_{({\scriptscriptstyle +,+})}| + |a_{({\scriptscriptstyle -,+})}|| b_{({\scriptscriptstyle -,+})}| + |a_{({\scriptscriptstyle -,-})}|| b_{({\scriptscriptstyle -,-})}| + |a_{({\scriptscriptstyle +,-})}|| b_{({\scriptscriptstyle +,-})}| ). \end{align} Without loss of generality, suppose that $|a_{({\scriptscriptstyle +,+})}| \ge |a_{({\scriptscriptstyle -,+})}|$. Then we have \begin{align*} \!\!\!\!\!|a_{({\scriptscriptstyle +,+})}|| b_{({\scriptscriptstyle +,+})}| + |a_{({\scriptscriptstyle -,+})}|| b_{({\scriptscriptstyle -,+})}| \le |a_{({\scriptscriptstyle +,+})}|b' \mbox{ and } |a_{({\scriptscriptstyle -,-})}|| b_{({\scriptscriptstyle -,-})}| + |a_{({\scriptscriptstyle +,-})}|| b_{({\scriptscriptstyle +,-})}| \le |a_{({\scriptscriptstyle -,-})}|b'. \end{align*} From $ (|a_{({\scriptscriptstyle +,+})}|+ |a_{({\scriptscriptstyle -,-})}|)b' \le 2a'b', $ we can see that the coefficient of the highest power of the variable is negative in $z$ if $|a_{({\scriptscriptstyle +,+})}|+ |a_{({\scriptscriptstyle -,-})}| < 2a'$. By comparing two terms in \eqref{eq:negative}, we can see that the coefficient is negative if all subsets $S_{({\scriptscriptstyle -,+})}$, $S_{({\scriptscriptstyle +,-})}$, $S_{({\scriptscriptstyle +,+})}$ and $S_{({\scriptscriptstyle -,-})}$ are not empty. Indeed, in that case $|a_{({\scriptscriptstyle +,-})}| > 0$ and $|a_{({\scriptscriptstyle -,+})}| > 0$, so $|a_{({\scriptscriptstyle +,+})}| < a'$ and $|a_{({\scriptscriptstyle -,-})}| < a'$, and hence $|a_{({\scriptscriptstyle +,+})}| + |a_{({\scriptscriptstyle -,-})}| < 2a'$. Since the coefficient of the highest power of the variable is negative, $z$ becomes negative when $m$ is large enough.
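The quadratic decay can also be observed numerically. The sketch below uses hypothetical $\psi$-triples of the four types whose superdiagonal entries sum to zero, and computes the upper-right corner of $M_{({\scriptscriptstyle +,+})}^m M_{({\scriptscriptstyle +,-})}^m M_{({\scriptscriptstyle -,-})}^m M_{({\scriptscriptstyle -,+})}^m$ for growing $m$; the corner eventually turns negative.

```python
from functools import reduce

def mul(p, q):
    """psi-triple product of two Heisenberg matrices."""
    a1, b1, c1 = p
    a2, b2, c2 = q
    return (a1 + a2, b1 + b2, c1 + c2 + a1 * b2)

def block_corner(types, m):
    """Upper-right corner of T1^m T2^m ... Tk^m for psi-triples T1, ..., Tk."""
    return reduce(mul, [t for t in types for _ in range(m)])[2]

# One hypothetical triple per sign type; the a's and the b's each sum to zero,
# and the large c's make the corner positive for small m:
types = [(1, 2, 10), (2, -1, 10), (-1, -3, 10), (-2, 2, 10)]
print([block_corner(types, m) for m in (1, 2, 4, 8)])  # [34, 55, 58, -92]
```

For these values the corner equals $40.5\,m - 6.5\,m^2$, so the negative leading coefficient dominates once $m$ is large enough.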
Therefore, we obtain a matrix corresponding to the triple~$(0,0,c')$ for some $c'<0$ as a product of matrices from the generating set, and hence the identity matrix is also reachable. It should be noted that there are some subcases where some of the subsets from $S_{({\scriptscriptstyle +,+})}$, $S_{({\scriptscriptstyle -,+})}$, $S_{({\scriptscriptstyle +,-})}$, and $S_{({\scriptscriptstyle -,-})}$ are empty. We examine all possible cases and prove that the coefficient of $m^2$ is negative in every case and that a matrix with a negative number in the corner is constructible. First, we prove that the coefficient of $m^2$ in $z$ is negative when only one of the subsets from $S_{({\scriptscriptstyle +,+})}$, $S_{({\scriptscriptstyle +,-})}$, $S_{({\scriptscriptstyle -,-})}$, and $S_{({\scriptscriptstyle -,+})}$ is empty as follows: Assume that only $S_{({\scriptscriptstyle +,+})} = \emptyset$. In this case, note that $|a_{({\scriptscriptstyle +,-})}| = a'$ and $|b_{({\scriptscriptstyle -,+})}| = b'$ since $|a_{({\scriptscriptstyle +,+})}| = |b_{({\scriptscriptstyle +,+})}| = 0$ follows from $S_{({\scriptscriptstyle +,+})}$ being empty. Then the coefficient of $m^2$ becomes \begin{align*} -a'b' + \frac{ |a_{({\scriptscriptstyle -,+})}| b' + |a_{({\scriptscriptstyle -,-})}|| b_{({\scriptscriptstyle -,-})}| + a' | b_{({\scriptscriptstyle +,-})}|}{2}. \end{align*} We can see that the coefficient is at most 0 since each of $|a_{({\scriptscriptstyle -,+})}| b' $ and $|a_{({\scriptscriptstyle -,-})}|| b_{({\scriptscriptstyle -,-})}| + a' | b_{({\scriptscriptstyle +,-})}|$ is at most $a'b'$. If we maximize $|a_{({\scriptscriptstyle -,+})}| b' $ by setting $|a_{({\scriptscriptstyle -,+})}| = a'$, then $|a_{({\scriptscriptstyle -,-})}|$ is 0 since $|a_{({\scriptscriptstyle -,+})}| + |a_{({\scriptscriptstyle -,-})}| = a'$.
Then $|a_{({\scriptscriptstyle -,-})}|| b_{({\scriptscriptstyle -,-})}| + a' | b_{({\scriptscriptstyle +,-})}|$ can be $a'b'$ only when $| b_{({\scriptscriptstyle +,-})}| = b'$. This leads to the set $S_{({\scriptscriptstyle -,-})}$ being empty since we have $| a_{({\scriptscriptstyle -,-})}| = 0$ and $| b_{({\scriptscriptstyle -,-})}| = 0$, and therefore we have a contradiction. The remaining cases, $S_{({\scriptscriptstyle +,-})} = \emptyset$, $S_{({\scriptscriptstyle -,-})} = \emptyset$, and $S_{({\scriptscriptstyle -,+})} = \emptyset$, are proven analogously. \begin{figure}[htb] \begin{subfigure}{0.49\textwidth} \centering \caption{The case of $S_{(-,+)}$ being empty.} \begin{tikzpicture}[xscale=0.5,yscale=0.3,every node/.style={scale=1.0}] \tikzset{>=latex} \draw (0,9) -- (0,0); \draw (0,0) -- (12,0); \draw [dashed] (6,9) -- (6,0); \draw (12,0) -- (12,9); \draw [dashed] (9,0) -- (9,9); \draw [draw=none,pattern=dots, pattern color=blue] (0.25,8.5) rectangle (1,7.75); \draw [draw=none,pattern=north east lines, pattern color=red] (0.25,7.5) rectangle (1,6.75); \node at (2.7,8.1) {: positive}; \node at (2.8,7.1) {: negative}; \draw (0,0) -- (6,6) -- (9,8); \draw (9,8) -- (12,0); \fill[pattern=dots, pattern color=blue] (0,0)--(6,6)--(6,0)--cycle; \fill[pattern=north east lines, pattern color=red] (6,0)--(6,6)--(9,8)--(9,0)--cycle; \fill[pattern=north east lines, pattern color=red] (9,0)--(9,8)--(12,0)--cycle; \end{tikzpicture} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \caption{The case of $S_{(+,+)}$ being empty.} \begin{tikzpicture}[xscale=0.5,yscale=0.3,every node/.style={scale=1.0}] \tikzset{>=latex} \draw (0,9) -- (0,0); \draw (0,0) -- (12,0); \draw [dashed] (3,0) -- (3,9); \draw (12,0) -- (12,9); \draw [dashed] (6,0) -- (6,9); \draw [draw=none,pattern=dots, pattern color=blue] (7.75,8.25) rectangle (8.5,7.5); \draw [draw=none,pattern=north east lines, pattern color=red] (7.75,7.25) rectangle (8.5,6.5); \node at (10.2,7.85) {: positive};
\node at (10.3,6.85) {: negative}; \draw (0,0) -- (3,7) -- (6,5.5); \draw (6,5.5) -- (12,0); \fill[pattern=north east lines, pattern color=red] (0,0)--(3,7)--(3,0)--cycle; \fill[pattern=north east lines, pattern color=red] (3,0)--(3,7)--(6,5.5)--(6,0)--cycle; \fill[pattern=dots, pattern color=blue] (6,0)--(6,5.5)--(12,0)--cycle; \end{tikzpicture} \end{subfigure} \\ \begin{subfigure}{0.49\textwidth} \centering \caption{The case of $S_{(+,-)}$ being empty.} \begin{tikzpicture}[xscale=0.5,yscale=0.3,every node/.style={scale=0.8}] \tikzset{>=latex} \draw (0,9) -- (0,0); \draw (0,0) -- (12,0); \draw [dashed] (3,0) -- (3,9); \draw (12,0) -- (12,9); \draw [dashed] (9,0) -- (9,9); \draw [draw=none,pattern=dots, pattern color=blue] (4.75,8.75) rectangle (5.5,8); \draw [draw=none,pattern=north east lines, pattern color=red] (4.75,7.75) rectangle (5.5,7); \node at (7.2,8.35) {: positive}; \node at (7.3,7.35) {: negative}; \draw (0,0) -- (3,7) -- (9,5.5); \draw (9,5.5) -- (12,0); \fill[pattern=dots, pattern color=blue] (0,0)--(3,7)--(3,0)--cycle; \fill[pattern=north east lines, pattern color=red] (3,0)--(3,7)--(9,5.5)--(9,0)--cycle; \fill[pattern=dots, pattern color=blue] (9,0)--(9,5.5)--(12,0)--cycle; \end{tikzpicture} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \caption{The case of $S_{(-,-)}$ being empty.} \begin{tikzpicture}[xscale=0.5,yscale=0.3,every node/.style={scale=0.8}] \tikzset{>=latex} \draw (0,9) -- (0,0); \draw (0,0) -- (12,0); \draw [dashed] (3,0) -- (3,9); \draw (12,0) -- (12,9); \draw [dashed] (9,0) -- (9,9); \draw [draw=none,pattern=dots, pattern color=blue] (4.25,8.25) rectangle (5,7.5); \draw [draw=none,pattern=north east lines, pattern color=red] (4.25,7.25) rectangle (5,6.5); \node [align=left] at (6.7,7.85) {: positive}; \node [align=left] at (6.8,6.85) {: negative}; \draw (0,0) -- (3,4.5) -- (9,6.5); \draw (9,6.5) -- (12,0); \fill[pattern=dots, pattern color=blue] (0,0)--(3,4.5)--(3,0)--cycle; \fill[pattern=north east lines, pattern 
color=red] (3,0)--(3,4.5)--(9,6.5)--(9,0)--cycle; \fill[pattern=dots, pattern color=blue] (9,0)--(9,6.5)--(12,0)--cycle; \end{tikzpicture} \end{subfigure} \caption{Subcases where one of the subsets from $S_{(+,+)}$, $S_{(-,+)}$, $S_{(+,-)}$, and $S_{(-,-)}$ is empty.} \label{fig:oneempty} \end{figure} Figure~\ref{fig:oneempty} shows the cases when one of the subsets from $S_{(+,+)}$, $S_{(-,+)}$, $S_{(+,-)}$, and $S_{(-,-)}$ is empty. Lastly, it remains to consider the cases where two of the subsets are empty. Note that we do not consider the cases where three of the subsets are empty because the sums of the $a$'s and the $b$'s cannot both be zero in such cases. Here, by the statement of this lemma, we assume that one of the two non-empty subsets contains two matrices whose superdiagonal vectors are not parallel. Then we can always make the negative contribution larger by using matrices with different superdiagonal vectors. See Figure~\ref{fig:twoempty} for an example. More formally, we consider the two cases as follows: Assume first that $S_{({\scriptscriptstyle +,+})} = \emptyset$ and $S_{({\scriptscriptstyle -,-})} = \emptyset$. Without loss of generality, assume that $S_{({\scriptscriptstyle -,+})}$ contains two matrices~$M_1$ and $M_2$ with non-parallel superdiagonal vectors. Let $\vec{v}(M_1) = (a_1,b_1)$ and $\vec{v}(M_2) = (a_2,b_2)$ be superdiagonal vectors for $M_1$ and $M_2$, respectively, such that $|\frac{a_1}{b_1}| > |\frac{a_2}{b_2}|$. To simplify the proof, we assume that the set $S_{({\scriptscriptstyle +,-})}$ contains only one matrix~$M_3$, where $\vec{v}(M_3) = (a_3,b_3)$, which is used to generate a matrix with a zero superdiagonal vector. This implies that $a_1 x + a_2 y + a_3 = 0$ and $b_1 x + b_2 y + b_3 = 0$ for some $x,y \in \mathbb{Q}$. Here the idea is that we first multiply the matrix~$M_1$ and then multiply $M_2$ later. For instance, we first multiply $M_1^m$ and then $M_2^m$.
Then the coefficient of the highest power in $z$ becomes $\frac{- a'b' + 2|a_2||b_1| + |a_1||b_1| + |a_2||b_2|}{2}$. Since $a' = |a_1| + |a_2|$ and $b' = |b_1| + |b_2|$, the coefficient of $m^2$ is now $\frac{|a_2||b_1| - |a_1||b_2|}{2}$. By the supposition $|\frac{a_1}{b_1}| > |\frac{a_2}{b_2}|$, we prove that the coefficient of the highest power in $z$ is always negative. The second case, where $S_{({\scriptscriptstyle +,-})} = \emptyset$ and $S_{({\scriptscriptstyle -,+})} = \emptyset$, is proven analogously. \begin{figure}[htb] \centering \begin{subfigure}{0.50\textwidth} \centering \caption{When $S_{(+,+)}$ and $S_{(-,-)}$ are empty.} \begin{tikzpicture}[xscale=0.5,yscale=0.3,every node/.style={scale=0.8}] \tikzset{>=latex} \draw (0,9) -- (0,0); \draw (0,0) -- (12,0); \draw (12,0) -- (12,9); \draw [dashed] (6,0) -- (6,9); \draw [draw=none,pattern=dots, pattern color=blue] (7.25,7.9) rectangle (8,7.15); \draw [draw=none,pattern=north east lines, pattern color=red] (7.25,6.9) rectangle (8,6.15); \node [align=left] at (9.7,7.5) {: positive}; \node [align=left] at (9.8,6.5) {: negative}; \draw (0,0) -- (3,5) -- (6,6.5); \draw (6,6.5) -- (12,0); \fill[pattern=north east lines, pattern color=red] (0,0)--(3,5)--(3,0)--cycle; \fill[pattern=north east lines, pattern color=red] (3,0)--(3,5)--(6,6.5)--(6,0)--cycle; \fill[pattern=dots, pattern color=blue] (6,0)--(6,6.5)--(12,0)--cycle; \end{tikzpicture} \end{subfigure}\begin{subfigure}{0.50\textwidth} \centering \caption{When $S_{(-,+)}$ and $S_{(+,-)}$ are empty.} \begin{tikzpicture}[xscale=0.5,yscale=0.3,every node/.style={scale=0.8}] \tikzset{>=latex} \draw (0,9) -- (0,0); \draw (0,0) -- (12,0); \draw (12,0) -- (12,9); \draw [dashed] (6,0) -- (6,9); \draw [draw=none,pattern=dots, pattern color=blue] (1.25,8.25) rectangle (2,7.5); \draw [draw=none,pattern=north east lines, pattern color=red] (1.25,7.25) rectangle (2,6.5); \node [align=left] at (3.7,7.85) {: positive}; \node [align=left] at (3.8,6.85) {: negative}; \draw 
(0,0) -- (6,6.5); \draw (6,6.5) -- (9,5.5) -- (12,0); \fill[pattern=dots, pattern color=blue] (0,0)--(6,0)--(6,6.5)--cycle; \fill[pattern=north east lines, pattern color=red] (6,0)--(6,6.5)--(9,5.5)--(12,0)--cycle; \end{tikzpicture} \end{subfigure} \caption{Subcases where two of the subsets from $S_{(+,+)}$, $S_{(-,+)}$, $S_{(+,-)}$, and $S_{(-,-)}$ are empty.} \label{fig:twoempty} \end{figure} As we have proven that it is always possible to construct a matrix~$M'$ such that $\psi(M') = (0,0,c')$ for some $c' < 0$, we complete the proof. \end{proof} Note that in the above proof, we do not give optimal bounds on the number of repetitions of a sequence. We illustrate Lemma~\ref{lem:nonparallel} in the next example. \begin{example} Consider a semigroup $S$ generated by the matrices \begin{align*} &\begin{pmatrix} 1&-4&20\\0&1&-6\\0&0&1 \end{pmatrix}, &&\begin{pmatrix} 1&3&20\\0&1&-2\\0&0&1 \end{pmatrix}, &&\begin{pmatrix} 1&-1&20\\0&1&1\\0&0&1 \end{pmatrix}, &&\begin{pmatrix} 1&2&20\\0&1&7\\0&0&1 \end{pmatrix}. \end{align*} A simple calculation shows that a product of the four matrices (in any order) is a matrix $M$ such that $\psi(M)=(0,0,80+x)$ for some $x\in\mathbb{Z}$. Our goal is to minimize $x$ by multiplying the matrices in a different order. Denote the given matrices by $M_{({\scriptscriptstyle +,+})}=\begin{psmallmatrix} 1&2&20\\0&1&7\\0&0&1 \end{psmallmatrix}$, $M_{({\scriptscriptstyle +,-})}=\begin{psmallmatrix} 1&3&20\\0&1&-2\\0&0&1 \end{psmallmatrix}$, $M_{({\scriptscriptstyle -,-})}=\begin{psmallmatrix} 1&-4&20\\0&1&-6\\0&0&1 \end{psmallmatrix}$ and $M_{({\scriptscriptstyle -,+})}=\begin{psmallmatrix} 1&-1&20\\0&1&1\\0&0&1 \end{psmallmatrix}$, and \begin{align*} N_1=M_{({\scriptscriptstyle +,+})}M_{({\scriptscriptstyle +,-})}M_{({\scriptscriptstyle -,-})}M_{({\scriptscriptstyle -,+})}=\begin{psmallmatrix} 1&0&47\\0&1&0\\0&0&1 \end{psmallmatrix}. \end{align*} That is, $x=-33$.
By considering several copies of the product, we can have a negative value in the top-right corner. Indeed, consider the product of $16$ matrices \begin{align*} N_2=M_{({\scriptscriptstyle +,+})}^4M_{({\scriptscriptstyle +,-})}^4M_{({\scriptscriptstyle -,-})}^4M_{({\scriptscriptstyle -,+})}^4=\begin{psmallmatrix} 1&0&-22\\0&1&0\\0&0&1 \end{psmallmatrix}. \end{align*} Since we have a matrix with a negative value in the top corner, the identity matrix can be generated, for example, by the product $N_1^{22}N_2^{47}$. \end{example} \begin{theorem}\label{thm:ptime} The identity problem for a semigroup generated by matrices from {\rm H}\ensuremath{(3,\mathbb{Q})}\xspace is decidable in polynomial time. \end{theorem} \begin{proof} Let $S$ be the matrix semigroup in {\rm H}\ensuremath{(3,\mathbb{Q})}\xspace generated by the set $G = \{M_1, \ldots, M_r\}$. There are two possible cases in which the identity matrix occurs in the matrix semigroup in {\rm H}\ensuremath{(3,\mathbb{Q})}\xspace. Either the identity matrix is generated by a product of matrices with pairwise parallel superdiagonal vectors, or there are at least two matrices with non-parallel superdiagonal vectors. Consider the first case. Lemma~\ref{lem:single} provides a formula to compute the value in the top corner regardless of the order of the multiplications. That is, we need to solve a system of linear homogeneous Diophantine equations with solutions over non-negative integers. We partition the set $G$ into several disjoint subsets~$G_1, G_2, \ldots,G_s$, where $s$ is at most $r$, and each subset contains matrices with parallel superdiagonal vectors. Since superdiagonal vectors being parallel is a transitive and symmetric property, each matrix needs to be compared only to a representative of each subset. If there are no matrices with parallel superdiagonal vectors, then there are $r$ subsets $G_i$ containing exactly one matrix each, and $O(r^2)$ tests are performed.
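The partitioning step just described can be sketched as follows. This is an illustrative fragment, not the authors' implementation, assuming each generator is given by its superdiagonal vector $(a_i, b_i) \neq (0,0)$ and testing parallelism via a $2 \times 2$ determinant.

```python
from fractions import Fraction

def parallel(u, v):
    """Non-zero vectors u and v are parallel iff u1*v2 - u2*v1 vanishes."""
    return u[0] * v[1] - u[1] * v[0] == 0

def partition_by_parallel(vectors):
    """Group superdiagonal vectors into classes of pairwise parallel vectors.

    Each vector is compared against one representative per existing class,
    so at most O(r^2) parallelism tests are performed for r vectors."""
    classes = []
    for v in vectors:
        for cls in classes:
            if parallel(cls[0], v):
                cls.append(v)
                break
        else:
            classes.append([v])
    return classes

# Hypothetical superdiagonal vectors; rational entries are supported:
vecs = [(Fraction(1, 2), 1), (1, 2), (1, 3), (-2, -4)]
print(partition_by_parallel(vecs))
```

Here the vectors $(\frac{1}{2},1)$, $(1,2)$ and $(-2,-4)$ fall into one class and $(1,3)$ into another, giving two subsets $G_i$ for the subsequent Diophantine systems.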
Let us consider $G_i = \{ M_{k_1}, \ldots, M_{k_{s_i}}\}$, i.e., one of the subsets containing $s_i$ matrices, and $\psi(M_{k_j}) = (a_{k_j},b_{k_j},c_{k_j})$. By Lemma~\ref{lem:single}, the value $c_{k_{j}} - \frac{q_i}{2} a_{k_{j}}^2$, for a fixed $q_i \in \mathbb{Q}$, is added to the top corner when the matrix $M_{k_{j}}$ is multiplied. We solve the system of two linear homogeneous Diophantine equations~$A \mathbf y = \bm{0}$, where \begin{align*} A = \begin{pmatrix} a_{k_1} & a_{k_2}& \cdots & a_{k_{s_i}}\\ c_{k_{1}} - \frac{q_i}{2} a_{k_1}^2 & c_{k_{2}} - \frac{q_i}{2} a_{k_2}^2 & \cdots & c_{k_{s_i}} -\frac{q_i}{2} a_{k_{s_i}}^2 \end{pmatrix} \end{align*} and $\mathbf y^\mathsf{T} \in \mathbb{N}^{s_i}$. The first row is the constraint that guarantees that the first component of the superdiagonal is zero in the matrix product constructed from a solution. Since the superdiagonal vectors are parallel, it also implies that the whole vector is zero. The second row guarantees that the upper corner is zero. It is obvious that the identity matrix is in the semigroup if we have a non-trivial solution to the system of linear homogeneous Diophantine equations for any subset $G_i$. That is, we need to solve at most $r$ systems of two linear homogeneous Diophantine equations. Next, we consider the second case, where, by Lemma~\ref{lem:nonparallel}, it is enough to check whether there exists a sequence of matrices generating a matrix with a zero superdiagonal vector and containing two matrices with non-parallel superdiagonal vectors. Let us say that $M_{i_1}, M_{i_2} \in G$, where $1 \le i_1, i_2 \le r$, are the two matrices. Recall that $G = \{M_1, M_2, \ldots, M_r\}$ is a generating set of the matrix semigroup and let $\psi(M_i) = (a_i, b_i, c_i)$ for all $1 \le i \le r$.
We can see that there exists such a product containing the two matrices by solving a system of two linear homogeneous Diophantine equations of the form $B \mathbf y = \bm{0}$, where \begin{align*} B = \begin{pmatrix} a_{1} & a_{2}& \cdots & a_{r}\\ b_{1} & b_{2}& \cdots & b_{r} \end{pmatrix}, \end{align*} with an additional constraint that the numbers in the solution~$\mathbf y$ that correspond to $M_{i_1}$ and $M_{i_2}$ are non-zero since we must use these two matrices in the product. We repeat this process at most $r(r-1)$ times until we find a solution. Therefore, the problem reduces again to solving at most $O(r^2)$ systems of two linear homogeneous Diophantine equations. Finally, we conclude the proof by mentioning that the identity problem for matrix semigroups in the Heisenberg group over rationals {\rm H}\ensuremath{(3,\mathbb{Q})}\xspace can be decided in polynomial time as, by Lemma~\ref{lem:DiophantineP}, the problem of existence of a positive integer solution to a system of linear homogeneous Diophantine equations is in polynomial time. Note that if the system is non-homogeneous, then solvability of a system of linear Diophantine equations with solutions over positive integers is an $\NP$-complete problem; see for example \cite{Papadimitriou81}. \end{proof} Next, we generalize the above algorithm for the identity problem in the Heisenberg group~{\rm H}\ensuremath{(3,\mathbb{Q})}\xspace to the domain of the Heisenberg groups for any dimension over the rational numbers. Similarly to the case of dimension three, we establish the following result for the case of matrices where multiplication is commutative. \begin{lemma}\label{lem:single2} Let $G = \{ M_1, M_2, \ldots, M_r \} \subseteq {\rm H}(n,\mathbb{Q})$ be a set of matrices from the Heisenberg group such that $\psi(M_i) = (\mathbf a_i,\mathbf b_i,c_i)$ and $\psi(M_j) = (\mathbf a_j,\mathbf b_j,c_j)$ and $\mathbf a_i \cdot \mathbf b_j = \mathbf a_j \cdot \mathbf b_i$ for any $1 \le i \ne j \le r$. 
If there exists a sequence of matrices $M = M_{i_1} M_{i_2} \cdots M_{i_k},$ where $i_j \in [1,r]$ for all $1 \le j \le k$, such that $\psi(M) = (\bm{0},\bm{0},c)$ for some $c \in \mathbb{Q}$, then \begin{align*} c = \sum_{j=1}^k (c_{i_j} - \frac{1}{2} \mathbf a_{i_j} \cdot \mathbf b_{i_j}). \end{align*} \end{lemma} \begin{proof} Consider the sequence $M_{i_1} M_{i_2} \cdots M_{i_k}$ and let $\psi(M_i) = (\mathbf a_i, \mathbf b_i, c_i)$ for each $i \in [1,r]$. From the multiplication of matrices, we have the following equation: \begin{align*} c &= \sum_{j=1}^k c_{i_j} + \sum_{\ell=1}^{k-1} \left( \sum_{j=1}^{\ell}\mathbf a_{i_j} \right) \cdot \mathbf b_{i_{\ell+1}} = \sum_{j=1}^k c_{i_j} + \frac{1}{2} \left( \sum_{\ell=1}^{k}\sum_{j=1}^{k} \mathbf a_{i_\ell} \cdot \mathbf b_{i_j} - \sum_{j=1}^k \mathbf a_{i_j} \cdot \mathbf b_{i_j} \right) \\ &= \sum_{j=1}^k (c_{i_j} - \frac{1}{2} \mathbf a_{i_j} \cdot \mathbf b_{i_j}). \end{align*} Note that the first equality follows from a direct computation as in equation~\eqref{eq:sum}, the second equality uses the assumption $\mathbf a_i \cdot \mathbf b_j = \mathbf a_j \cdot \mathbf b_i$, and the last equality holds since $\sum_{j=1}^k \mathbf a_{i_j} = \bm{0}$ implies that the double sum vanishes. From the above equation, we prove the statement claimed in the lemma. Moreover, due to the commutativity of multiplication, the value~$c$ does not change even if we change the order of multiplicands. \end{proof} Lemma~\ref{lem:nonparallel} does not generalize to {\rm H}\ensuremath{(n,\mathbb{Q})}\xspace in the same way, as we cannot classify matrices according to types to control the value in the upper-right corner. Instead, we use a different technique and prove that the value in the upper-right corner can be made to diverge quadratically, to either positive or negative infinity, as we repeat the same sequence generating any matrix $M$ such that $\psi(M)=(\bm{0},\bm{0},c)$. \begin{lemma}\label{lem:nonparallel2} Let $S = \langle M_1, \ldots, M_r \rangle \subseteq {\rm H}(n,\mathbb{Q})$ be a finitely generated matrix semigroup.
Then the identity matrix exists in $S$ if there exists a sequence of matrices $M_{i_1} M_{i_2} \cdots M_{i_k},$ where $i_j \in [1,r]$ for all $1 \le j \le k$, satisfying the following properties: \begin{enumerate}[(i)] \item $\psi(M_{i_1} M_{i_2} \cdots M_{i_k}) = (\bm{0},\bm{0},c)$ for some $c \in \mathbb{Q}$, and \item $\mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_2}} \ne \mathbf a_{i_{j_2}} \cdot \mathbf b_{i_{j_1}}$ for some $j_1, j_2 \in [1,k]$, where $\psi(M_i) = (\mathbf a_i, \mathbf b_i, c_i)$ for $1 \le i \le r$. \end{enumerate} \end{lemma} \begin{proof} From the first property claimed in the lemma, we know that any permutation of the sequence of matrix multiplications of $M_{i_1} \cdots M_{i_k}$ results in matrices $M'$ such that $\psi(M') = (\bm{0}, \bm{0}, y)$ for some $y \in \mathbb{Q}$, since multiplication of matrices in the Heisenberg group acts by vector addition, which is commutative, on the top row and the rightmost column, excluding the upper-right corner. From the commutative behaviour in the horizontal and vertical vectors of matrices in the Heisenberg group, we also know that if we duplicate the matrices in the sequence $M_{i_1} \cdots M_{i_k}$ and multiply the matrices in any order, then the only upper-triangular coordinate of the resulting matrix that can be non-zero is the upper-right corner. Now let $j_1, j_2 \in [1,k]$ be two indices such that $\mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_2}} \ne \mathbf a_{i_{j_2}} \cdot \mathbf b_{i_{j_1}}$ as claimed in the lemma. Then consider the following matrix~$M_d$ that can be obtained by duplicating the sequence $M_{i_1} \cdots M_{i_k}$ of matrices into $\ell$ copies and shuffling the order as follows: $M_d = M_{i_{j_1}}^\ell M_{i_{j_2}}^\ell M_x^\ell,$ where $M_x$ is a matrix that is obtained by multiplying the matrices in $M_{i_1}\cdots M_{i_k}$ except the two matrices $M_{i_{j_1}}$ and $M_{i_{j_2}}$. Then it is clear that $\psi(M_d) = (\bm{0}, \bm{0}, z)$ for some $z$.
Let us say that $\psi(M_x) = (\mathbf a_x, \mathbf b_x, c_x)$. Then it is easy to see that $\mathbf a_{i_{j_1}} + \mathbf a_{i_{j_2}} + \mathbf a_x = \bm{0}$ and $\mathbf b_{i_{j_1}} + \mathbf b_{i_{j_2}} + \mathbf b_x = \bm{0}$. Now we show that we can always construct two matrices that have only one non-zero rational number in the upper-right corner, with different signs. First, let us consider the $\ell$th power of the matrix $M_{i_{j_1}}$ as follows: \begin{align*} \psi(M_{i_{j_1}}^\ell) = (\mathbf a_{i_{j_1}} \ell, \mathbf b_{i_{j_1}} \ell, c_{i_{j_1}} \ell + \sum_{h=1}^{\ell -1} h (\mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_1}} ) ) = (\mathbf a_{i_{j_1}} \ell, \mathbf b_{i_{j_1}} \ell, c_{i_{j_1}} \ell + \mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_1}} \frac{(\ell-1) \ell}{2}). \end{align*} Writing $y = c_{i_{j_1}} + c_{i_{j_2}} + c_x$, it follows that the matrix~$M_d$ satisfies the equation~$\psi(M_d) = (\bm{0},\bm{0}, z)$ such that \begin{align*} z &= y \ell + (\mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_1}} + \mathbf a_{i_{j_2}} \cdot \mathbf b_{i_{j_2}} + \mathbf a_x \cdot \mathbf b_x ) \frac{(\ell-1) \ell}{2} + (\mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_2}} + (\mathbf a_{i_{j_1}} + \mathbf a_{i_{j_2}}) \cdot \mathbf b_x) \ell^2\\ &= \frac{1}{2}((\mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_1}} + \mathbf a_{i_{j_2}} \cdot \mathbf b_{i_{j_2}} + \mathbf a_x \cdot \mathbf b_x ) + 2 (\mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_2}} + (\mathbf a_{i_{j_1}} + \mathbf a_{i_{j_2}}) \cdot \mathbf b_x)) \ell^2 \\ & \quad {} + \frac{1}{2}(2y - (\mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_1}} + \mathbf a_{i_{j_2}} \cdot \mathbf b_{i_{j_2}} + \mathbf a_x \cdot \mathbf b_x ) ) \ell.
\end{align*} Now the coefficient of the highest term $\ell^2$ in $z$ can be simplified as follows: \begin{align*} &\frac{1}{2}((\mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_1}} + \mathbf a_{i_{j_2}} \cdot \mathbf b_{i_{j_2}} + \mathbf a_x \cdot \mathbf b_x ) + 2 (\mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_2}} + (\mathbf a_{i_{j_1}} + \mathbf a_{i_{j_2}}) \cdot \mathbf b_x))\\ &\qquad= \frac{1}{2}((\mathbf a_{i_{j_1}} + \mathbf a_{i_{j_2}}) \cdot (\mathbf b_{i_{j_1}} + \mathbf b_{i_{j_2}}) + \mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_2}} - \mathbf a_{i_{j_2}} \cdot \mathbf b_{i_{j_1}} + (\mathbf a_{i_{j_1}} + \mathbf a_{i_{j_2}}) \cdot \mathbf b_x) \\ &\qquad= \frac{1}{2} ((- \mathbf a_x) \cdot (- \mathbf b_x) + \mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_2}} - \mathbf a_{i_{j_2}} \cdot \mathbf b_{i_{j_1}} + (- \mathbf a_x) \cdot \mathbf b_x)\\ &\qquad= \frac{1}{2} (\mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_2}} - \mathbf a_{i_{j_2}} \cdot \mathbf b_{i_{j_1}}). \end{align*} By the second property claimed in the lemma, the coefficient of the highest term~$\ell^2$ in $z$ cannot be zero. Moreover, the value of $z$ diverges to negative or positive infinity depending on the sign of $\mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_2}} - \mathbf a_{i_{j_2}} \cdot \mathbf b_{i_{j_1}}$. Now we consider a different matrix~$M_e$, defined as the product $M_{i_{j_2}}^\ell M_{i_{j_1}}^\ell M_x^\ell$, and say that $\psi(M_e) = (\bm{0}, \bm{0}, e)$ for some $e \in \mathbb{Q}$. Since we have exchanged the roles of the two matrices $M_{i_{j_1}}$ and $M_{i_{j_2}}$, the value of $e$ is given by a quadratic polynomial in $\ell$ whose leading coefficient is $\frac{1}{2}(\mathbf a_{i_{j_2}} \cdot \mathbf b_{i_{j_1}} - \mathbf a_{i_{j_1}} \cdot \mathbf b_{i_{j_2}})$. Therefore, for $\ell$ large enough, we obtain two matrices that have only one non-zero rational number in the upper-right corner, with different signs.
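The quadratic divergence with opposite signs can be observed concretely; the following Python sketch uses hypothetical generators in ${\rm H}(4,\mathbb{Z})$ (chosen so that the $a$- and $b$-parts sum to zero while $\mathbf a_1\cdot\mathbf b_2 = 1 \ne 2 = \mathbf a_2\cdot\mathbf b_1$), not an example from the paper.

```python
# Hypothetical generators in H(4, Z): a- and b-parts sum to zero, and the
# pair (M1, M2) does not commute since a1.b2 = 1 != 2 = a2.b1.
def h4(a, b, c):
    (a1, a2), (b1, b2) = a, b
    return [[1, a1, a2, c],
            [0, 1, 0, b1],
            [0, 0, 1, b2],
            [0, 0, 0, 1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

M1 = h4((1, 0), (0, 2), 0)
M2 = h4((0, 1), (1, 0), 0)
M3 = h4((-1, -1), (-1, -2), 0)

def corner(order, l):
    # corner entry of first^l * second^l * third^l
    P = [[int(i == j) for j in range(4)] for i in range(4)]
    for M in order:
        for _ in range(l):
            P = matmul(P, M)
    # psi(product) = (0, 0, z): all other upper-triangular entries vanish
    assert P[0][1] == P[0][2] == P[1][3] == P[2][3] == 0
    return P[0][3]

for l in (5, 10, 20):
    z_d = corner((M1, M2, M3), l)  # leading coefficient (a1.b2 - a2.b1)/2 < 0
    z_e = corner((M2, M1, M3), l)  # opposite leading coefficient
    assert z_d < 0 < z_e
```

For these generators the two corner values grow like $-\tfrac12\ell^2$ and $+\tfrac12\ell^2$ respectively, so for $\ell$ large the two products have corners of opposite signs, as in the proof.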
Then, as in the proof of Lemma~\ref{lem:nonparallel}, the identity matrix exists in the semigroup, since we can multiply these two matrices the correct numbers of times to obtain zero in the upper-right coordinate as well. \end{proof} Next, we prove that the identity problem is decidable for $n$-dimensional Heisenberg matrices. In contrast to Theorem~\ref{thm:ptime}, we do not claim that the problem is decidable in polynomial time, since one of the steps of the proof is to partition matrices according to dot products, and this step does not extend to dimensions higher than three: for higher dimensions, partitioning matrices according to dot products takes time exponential in the number of matrices in the generating set. Note that if the size of the generating set is fixed, i.e., only the matrices are part of the input, then the problem remains in $\P$. \begin{theorem}\label{thm:heisnq} The identity problem for finitely generated matrix semigroups in the Heisenberg group~${\rm H}(n,\mathbb{Q})$ is decidable. \end{theorem} \begin{proof} Similarly to the proof of Theorem~\ref{thm:ptime}, there are two ways in which the identity matrix can be generated: either all the matrices in the product commute, or at least two of them do not. Let $S$ be the matrix semigroup in {\rm H}\ensuremath{(n,\mathbb{Q})}\xspace generated by the set $G = \{M_1, M_2, \ldots, M_r\}$. Consider matrices $N_1,N_2$ and $N_3$ such that $\psi(N_1)=(\mathbf a_1,\mathbf b_1,c_1)$, $\psi(N_2)=(\mathbf a_2,\mathbf b_2,c_2)$ and $\psi(N_3)=(\mathbf a_3,\mathbf b_3,c_3)$. If $\mathbf a_1\cdot \mathbf b_2=\mathbf a_2\cdot\mathbf b_1$ and $\mathbf a_2\cdot \mathbf b_3=\mathbf a_3\cdot \mathbf b_2$, this does not imply that $\mathbf a_1\cdot \mathbf b_3=\mathbf a_3\cdot \mathbf b_1$. Therefore, the number of subsets of $G$ in which each matrix commutes with every other matrix of the same subset can be exponential in $r$, as two different subsets are not necessarily disjoint.
Now we examine whether it is possible to generate the identity matrix by multiplying matrices in each such subset, using Lemma~\ref{lem:single2}. If it is not possible, we need to consider the case of having two matrices that do not commute with each other in a product with zero values in the upper-triangular coordinates except the corner. Let $M_{i_1}, M_{i_2} \in G$, with $1 \le i_1,i_2 \le r$, be the two matrices. Recall that $G = \{M_1, M_2, \ldots, M_r\}$ is a generating set of the matrix semigroup and let $\psi(M_i) = (\mathbf a_i, \mathbf b_i, c_i)$ for all $1 \le i \le r$. We also denote the $m$th element of the vector~$\mathbf a_i$ (respectively, $\mathbf b_i$) by $\mathbf a_i[m]$ (respectively, $\mathbf b_i[m]$) for $1 \le m \le n-2$. Then we can decide whether such a product exists by solving a system of $2(n-2)$ linear homogeneous Diophantine equations of the form $B \mathbf y = \bm{0}$, where \begin{align*} B = \begin{pmatrix} \mathbf a_1[1] & \cdots & \mathbf a_r[1] \\ \vdots & \ddots & \vdots \\ \mathbf a_1[n-2] & \cdots & \mathbf a_r[n-2] \\ \mathbf b_1[1] & \cdots & \mathbf b_r[1] \\ \vdots & \ddots & \vdots \\ \mathbf b_1[n-2] & \cdots & \mathbf b_r[n-2] \\ \end{pmatrix}, \end{align*} with the additional constraint that the entries of the solution~$\mathbf y$ that correspond to $M_{i_1}$ and $M_{i_2}$ are non-zero, since we must use these two matrices in the product. We repeat this process at most $r(r-1)$ times until we find a solution. Hence, we can view the identity problem in ${\rm H}(n,\mathbb{Q})$ for $n \ge 3$ as the problem of solving systems of $2(n-2)$ linear homogeneous Diophantine equations with some constraints on the solution. By Lemma~\ref{lem:DiophantineP}, we can solve systems of linear homogeneous Diophantine equations in polynomial time, and thus we conclude that the identity problem in ${\rm H}(n,\mathbb{Q})$ is decidable.
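The shape of the system $B\mathbf y=\bm 0$ can be illustrated on a toy generating set; the sketch below is a brute-force stand-in over small multiplicities, not the polynomial-time Diophantine solver referenced in Lemma~\ref{lem:DiophantineP}, and the vectors are hypothetical illustrative values.

```python
from itertools import product

# Toy generating set (hypothetical): psi(M_i) = (a_i, b_i, c_i) in H(4, Q),
# so the vectors have n - 2 = 2 coordinates each.
a = [(1, 0), (0, 1), (-1, -1)]
b = [(0, 2), (1, 0), (-1, -2)]
r, m = 3, 2  # r generators, vectors of length m = n - 2

# Rows of B: the m coordinates of the a_i's, then of the b_i's
# (one column per generator, as in the matrix B above).
B = [[a[i][row] for i in range(r)] for row in range(m)] \
  + [[b[i][row] for i in range(r)] for row in range(m)]

i1, i2 = 0, 1  # the two non-commuting generators must both be used
solutions = [y for y in product(range(4), repeat=r)
             if y[i1] > 0 and y[i2] > 0
             and all(sum(row[i] * y[i] for i in range(r)) == 0 for row in B)]
print(solutions[0])  # (1, 1, 1): use each generator once
```

A nonzero solution $\mathbf y$ gives the multiplicities with which each generator must appear in a product whose image under $\psi$ is of the form $(\bm 0,\bm 0,c)$.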
\end{proof} \section{The identity problem in matrix semigroups in dimension four}\label{sec:IPfour} In this section, we prove that the identity problem is undecidable for $4\times4$ matrices when the generating set has eight matrices, by introducing a new technique exploiting the anti-diagonal entries. \begin{theorem}\label{thm:identity4} Given a semigroup $S$ generated by eight $4 \times 4$ integer matrices with determinant one, determining whether the identity matrix belongs to $S$ is undecidable. \end{theorem} \begin{proof} We prove the claim by reducing from the \PCP. We shall use an encoding to embed an instance of the \PCP into a set of $4 \times 4$ integer matrices. Let $\alpha$ be the mapping of Lemma~\ref{lem:groupEnc} that maps elements of an arbitrary group alphabet into the binary group alphabet $\Gamma_2=\{a,b,\overbar{a},\overbar{b}\}$. We also define a monomorphism $f : \FG[\Gamma_2] \to \mathbb{Z}^{2\times 2}$ as $f(a) = \begin{psmallmatrix}1 & 2 \\ 0 & 1 \end{psmallmatrix}$, $f(\overbar{a}) = \begin{psmallmatrix}1 & -2 \\ 0 & 1 \end{psmallmatrix}$, $f(b) = \begin{psmallmatrix}1 & 0 \\ 2 &1 \end{psmallmatrix}$ and $f(\overbar{b}) = \begin{psmallmatrix}1 & 0 \\ -2 & 1 \end{psmallmatrix}$. Recall that the matrices $\begin{psmallmatrix}1 & 2 \\ 0 & 1 \end{psmallmatrix}$ and $\begin{psmallmatrix}1 & 0 \\ 2 &1 \end{psmallmatrix}$ generate a free subgroup of {\rm SL}\ensuremath{(2,\mathbb{Z})}\xspace~\cite{LS77}. The composition $f \circ \alpha$ of the two monomorphisms gives an embedding of an arbitrary group alphabet into the special linear group~{\rm SL}\ensuremath{(2,\mathbb{Z})}\xspace; we use it to encode a set of pairs of words over an arbitrary group alphabet into a set of $4 \times 4$ integer matrices in {\rm SL}\ensuremath{(4,\mathbb{Z})}\xspace and denote the resulting encoding by $\beta$. Let $(g,h)$ be an instance of the \PCP, where $g,h:\{a_1,\ldots,a_n\}^*\to \Sigma_2^*$, with $\Sigma_2=\{a,b\}$.
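The defining properties of the monomorphism $f$ above (inverse letters map to inverse matrices, all images lie in ${\rm SL}(2,\mathbb{Z})$, and the two positive generators do not commute) are easy to verify numerically; a minimal Python sketch, with the bar letters written as capitals:

```python
import numpy as np

# The monomorphism f on the binary group alphabet {a, b, a-bar, b-bar};
# capital letters stand for the barred (inverse) letters.
f = {
    'a': np.array([[1,  2], [0, 1]]),
    'A': np.array([[1, -2], [0, 1]]),
    'b': np.array([[1,  0], [2, 1]]),
    'B': np.array([[1,  0], [-2, 1]]),
}

def enc(word):
    # image of a word over the group alphabet under f
    M = np.eye(2, dtype=int)
    for ch in word:
        M = M @ f[ch]
    return M

# inverse letters really invert ...
assert (enc('aA') == np.eye(2, dtype=int)).all()
assert (enc('bB') == np.eye(2, dtype=int)).all()
# ... every image has determinant one ...
assert all(round(np.linalg.det(M)) == 1 for M in f.values())
# ... and the two generators do not commute (spot check of freeness)
assert not (enc('ab') == enc('ba')).all()
```
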
Without loss of generality, we can assume that a solution starts with the letter $a_1$. Moreover, we assume that this is the only occurrence of $a_1$. We define the alphabet $\Gamma = \Sigma_2\cup \Sigma_2^{-1} \cup \Sigma_B\cup \Sigma_B^{-1}$, where $\Sigma_B = \{q_0, q_1, p_0,p_1\}$ is the alphabet of border letters that enforce the form of a solution. Let us define the following sets of words $W_1 \cup W_2 \subseteq \FG \times \FG$, where \begin{align*} W_1 &= \left\{ (q_0 x\overbar{q_0},\,p_0 x\overbar{p_0}) \mid x \in \Sigma_2 \right\} \text{ and} \\ W_2 &= \left\{ (q_0\overbar{g(a_1)}\overbar{q_1}, p_0\overbar{h(a_1)}\overbar{p_1})\right\} \cup \left\{ (q_1\overbar{g(a_i)}\overbar{q_1}, p_1\overbar{h(a_i)}\overbar{p_1})\mid 1 < i \le n \right\}. \end{align*} Intuitively, the words from the set $W_1$ are used to construct words over $\Sigma_2$ and the words from the set $W_2$ to cancel them according to the instance of the \PCP. Let us prove that $(q_0 \overbar{q_1}, p_0 \overbar{p_1}) \in \FG[W_1 \cup W_2]$ if and only if the \PCP has a solution. It is easy to see that any pair of non-empty words in $\FG[W_1]$ is of the form~$(q_0 w \overbar{q_0}, p_0 w \overbar{p_0})$ for some $w \in \Sigma_2^+$. Then there exists a pair of words in $\FG[W_2]$ of the form $(q_0 \overbar{w} \overbar{q_1}, p_0 \overbar{w}\overbar{p_1})$ for some word~$w \in \Sigma_2^+$ if and only if the \PCP has a solution. Therefore, the pair of words $(q_0 \overbar{q_1}, p_0 \overbar{p_1})$ can be constructed by concatenating pairs of words in $W_1$ and $W_2$ if and only if the \PCP has a solution. For each pair of words~$(u, v) \in \FG[W_1 \cup W_2]$, we define a matrix~$A_{u,v} $ to be $\begin{psmallmatrix} \beta(u) & \bm{0}_2 \\ \bm{0}_2 & \beta(v) \end{psmallmatrix} \in {\rm SL}(4,\mathbb{Z}),$ where $\bm{0}_2$ is the zero matrix in $\mathbb{Z}^{2 \times 2}$.
Moreover, we define the following matrix \begin{align*} B_{q_1 \overbar{q_0}, p_1 \overbar{p_0}} = \begin{pmatrix} \bm{0}_2 & \beta(q_1 \overbar{q_0}) \\ \beta(p_1 \overbar{p_0}) & \bm{0}_2 \end{pmatrix} \in {\rm SL}(4,\mathbb{Z}). \end{align*} Let $S$ be a matrix semigroup generated by the set $ \{ A_{u,v}, B_{q_1 \overbar{q_0}, p_1 \overbar{p_0}} \mid (u,v) \in W_1 \cup W_2 \}. $ We already know that the pair $(q_0 \overbar{q_1}, p_0 \overbar{p_1})$ of words can be generated by concatenating words in $W_1$ and $W_2$ if and only if the \PCP has a solution. The matrix semigroup $S$ has the corresponding matrix~$A_{q_0 \overbar{q_1}, p_0 \overbar{p_1}}$ and thus, \begin{align*} \begin{pmatrix} \beta(q_0 \overbar{q_1}) & \bm{0}_2 \\ \bm{0}_2 & \beta(p_0 \overbar{p_1}) \end{pmatrix} \begin{pmatrix} \bm{0}_2 & \beta(q_1 \overbar{q_0}) \\ \beta(p_1 \overbar{p_0}) & \bm{0}_2 \end{pmatrix} = \begin{pmatrix} \bm{0}_2 & \beta(\varepsilon) \\ \beta(\varepsilon) & \bm{0}_2 \end{pmatrix} \in S. \end{align*} Then we see that the identity matrix~$\bm{I}_4$ exists in the semigroup~$S$ as follows: \begin{align*} \begin{pmatrix} \bm{0}_2 & \beta(\varepsilon) \\ \beta(\varepsilon) & \bm{0}_2 \end{pmatrix} \begin{pmatrix} \bm{0}_2 & \beta(\varepsilon) \\ \beta(\varepsilon) & \bm{0}_2 \end{pmatrix} = \begin{pmatrix} \beta(\varepsilon) & \bm{0}_2 \\ \bm{0}_2 & \beta(\varepsilon) \end{pmatrix} = \begin{pmatrix} \bm{I}_2 & \bm{0}_2 \\ \bm{0}_2 & \bm{I}_2 \end{pmatrix} = \bm{I}_4 \in S. \end{align*} Now we prove that the identity matrix does not exist in $S$ if the \PCP has no solution. It is easy to see that we cannot obtain the identity matrix only by multiplying `$A$' matrices since there is no possibility of cancelling every border letter. We need to multiply the matrix~$B_{q_1 \overbar{q_0}, p_1 \overbar{p_0}}$ with a product of `$A$' matrices at some point to reach the identity matrix. 
Note that the matrix~$B_{q_1 \overbar{q_0}, p_1 \overbar{p_0}}$ cannot be the first matrix of the product, followed by the `$A$' matrices, because the upper right block of $B_{q_1 \overbar{q_0}, p_1 \overbar{p_0}}$, which corresponds to the first word of the pair, would then be multiplied with the lower right block of an `$A$' matrix, which corresponds to the second word of the pair. Suppose that the `$A$' matrix is of the form $\begin{psmallmatrix} \beta(q_0 u \overbar{q_1}) & \bm{0}_2 \\ \bm{0}_2 & \beta(p_0 v \overbar{p_1}) \end{psmallmatrix}$. Since the \PCP instance has no solution, either $u$ or $v$ is not the empty word. We multiply this matrix by $B_{q_1 \overbar{q_0}, p_1 \overbar{p_0}}$ and obtain the following matrix: {\small\begin{align*} \begin{pmatrix} \beta(q_0 u \overbar{q_1}) & \bm{0}_2 \\ \bm{0}_2 & \beta(p_0 v \overbar{p_1}) \end{pmatrix} \begin{pmatrix} \bm{0}_2 & \beta(q_1 \overbar{q_0}) \\ \beta(p_1 \overbar{p_0}) & \bm{0}_2 \end{pmatrix} = \begin{pmatrix} \bm{0}_2 & \beta(q_0 u \overbar{q_0}) \\ \beta(p_0 v \overbar{p_0}) & \bm{0}_2 \end{pmatrix}. \end{align*}} We can see that either the upper right part or the lower left part cannot be $\beta(\varepsilon)$, which corresponds to the identity matrix in $\mathbb{Z}^{2\times 2}$. Now the only possibility of reaching the identity matrix is to multiply matrices which have ${\rm SL}(2,\mathbb{Z})$ matrices in the anti-diagonal coordinates, like $B_{q_1 \overbar{q_0}, p_1 \overbar{p_0}}$. However, we cannot cancel these parts because the upper right block (the lower left block) of the left matrix is multiplied with the lower left block (the upper right block) of the right matrix as follows: {\small\begin{align*} \begin{pmatrix} \bm{0}_2 & A \\ B & \bm{0}_2 \end{pmatrix} \begin{pmatrix} \bm{0}_2 & C \\ D & \bm{0}_2 \end{pmatrix} = \begin{pmatrix} AD & \bm{0}_2 \\ \bm{0}_2 & BC \end{pmatrix}, \end{align*}} where $A,B,C$ and $D$ are matrices in $\mathbb{Z}^{2 \times 2}$.
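The block identity just displayed is a one-line computation; a minimal Python check with random integer blocks (the values are arbitrary illustrations):

```python
import numpy as np

# Random 2x2 integer blocks standing in for A, B, C, D
rng = np.random.default_rng(0)
A, B, C, D = (rng.integers(-3, 4, size=(2, 2)) for _ in range(4))

Z = np.zeros((2, 2), dtype=int)
X = np.block([[Z, A], [B, Z]])   # anti-diagonal block matrix
Y = np.block([[Z, C], [D, Z]])

P = X @ Y
# the product is block diagonal with blocks AD and BC, as claimed
assert (P == np.block([[A @ D, Z], [Z, B @ C]])).all()
```
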
As the first word of the pair is encoded in the upper right block of the matrix and the second word is encoded in the lower left block, it is not difficult to see that we cannot cancel the remaining blocks. Currently, the undecidability bound for the \PCP is five \cite{Neary15} and thus the semigroup $S$ is generated by eight matrices. Recall that at the beginning of the proof we assumed that the letter $a_1$ of the \PCP is used exactly once and is the first letter of a solution. This property is in fact present in \cite{Neary15}. \end{proof} Consider the membership problem called the {\em special diagonal membership problem}, where the task is to determine whether a scalar multiple of the identity matrix exists in a given matrix semigroup. The most recent undecidability bound in $\mathbb{Z}^{4\times4}$ is 14, shown by Halava et al.~\cite{HHH07}. We improve the bound to eight, as the identity matrix is the only diagonal matrix of the semigroup $S$ in the proof of Theorem~\ref{thm:identity4}. We also prove that the identity problem is undecidable in $\mathbb{H}(\mathbb{Q})^{2\times 2}$\xspace by replacing the composition $f \circ \alpha$ of mappings with a mapping from a group alphabet to the set of rational quaternions; see \cite{BP08}. \begin{corollary} For a given semigroup $S$ generated by eight $4 \times 4$ integer matrices, determining whether there exists any diagonal matrix in $S$ is undecidable. \end{corollary} \begin{corollary} For a given semigroup $S$ generated by eight $2 \times 2$ rational quaternion matrices, determining whether the identity matrix exists in $S$ is undecidable. \end{corollary} \section{Concluding remarks} In this paper, we considered the identity problem in matrix semigroups and provided a better bound on the number of matrices in the generating set (reducing it from 48 to 8) for $4\times4$ integer matrices, for which the problem is undecidable.
More importantly, we showed that there is no embedding of pairs of words into ${\rm SL}(3,\mathbb{Z})$. While this does not imply that the identity problem is decidable, it does provide strong evidence towards the decidability of computational problems in {\rm SL}\ensuremath{(3,\mathbb{Z})}\xspace. Then we showed that the identity problem is decidable for the Heisenberg group {\rm H}\ensuremath{(3,\mathbb{Z})}\xspace, which is an important subgroup of ${\rm SL}(3,\mathbb{Z})$, and generalized the result to {\rm H}\ensuremath{(n,\mathbb{Q})}\xspace for any $n\in\mathbb{N}$. The natural follow-up question is whether other standard matrix problems, such as membership, are decidable in ${\rm H}\ensuremath{(3,\mathbb{Z})}\xspace$, or whether the identity problem is decidable for ${\rm H}(3,\mathbb{C})$. \bibliographystyle{plainurl}
\section{Introduction and notation} \label{sec:introduction} As is well known, many practical applications require the numerical solution of linear systems of Toeplitz type and of large dimensions. As a consequence, a number of iterative techniques, such as preconditioned Krylov methods, multigrid procedures, and sophisticated combinations of them, have been designed (see \cite{ChJi2007,Ng2004} and the references therein). Linear systems with Toeplitz coefficient matrices of large dimension arise when dealing with the numerical solution of (integro-)differential equations and of problems with Markov chains. More recently, new examples of real world problems have emerged. \textcolor{black}{ The first focus of this paper is the characterization of the spectrum and the singular values of the coefficient matrix stemming from the discretization with a space-time grid for a parabolic diffusion problem. More specifically, we consider the diffusion equation in one space dimension, \begin{equation*} u_t=u_{xx}, \quad x\in (a,b),\ t\in [0, T], \end{equation*} and we approximate our parabolic model problem on a rectangular space-time grid consisting of $N_t$ time intervals and $N_x$ space intervals. } The second focus concerns the matrix-sequences involved in the discretization of distributed-order fractional differential equations (FDEs), which have gained a lot of attention. Owing to the nonlocal nature of fractional operators, independently of the locality of the approximation methods, the matrix structures are dense and, under assumptions of uniform step-sizing and of constant coefficients in the involved operators, the matrices are again of Toeplitz type (unilevel or multilevel, according to the dimensionality of the considered domains).
When the fractional order is fixed, the spectral analysis of such matrices (conditioning, extremal eigenvalues, etc.) can be performed by exploiting the well-established analysis of the spectral features of Toeplitz matrix-sequences generated by Lebesgue integrable functions and the more recent Generalized Locally Toeplitz (GLT) theory \cite{GLT-bookI}; see for instance \cite{DoMa2016, DoMa2018}. However, in the case of the numerical approximation of distributed-order fractional operators, the spectral analysis of the resulting matrices is more involved. We recall that distributed-order FDEs can be interpreted as a parallel distribution of derivatives of fractional orders, whose most immediate application consists in the physical modeling of systems characterized by a superposition of different processes operating in parallel. As an example, we mention the application of fractional distributed-order operators as a tool for accounting for memory effects in composite materials \cite{CaFa2017} or multi-scale effects \cite{CaGi2018}. For a detailed review on the topic we refer the reader to \cite{DiPa2021}. \textcolor{black}{In order to study the involved structured linear systems of both integral and differential equations, we will use the classical theory of GLT matrix-sequences \cite{GLT-bookI,GLT-bookII} and the new concept of GLT momentary symbols.} The former permits us to describe the asymptotic singular value or eigenvalue distribution of the sequence of coefficient matrices; the latter permits us to derive a function which describes the singular value or eigenvalue distribution of a fixed matrix of the sequence, even for small matrix-sizes, under given assumptions. This paper is organized as follows. The remaining part of this section is devoted to definitions, notation, and the necessary background for our analysis: in particular, we provide a formal definition of GLT momentary symbols.
Section \ref{sec:problem} is devoted to setting up the problem and to deriving the relevant matrix structures. The distributional analysis, both for the eigenvalues and the singular values, is the main focus of Subsection \ref{sec:coefficientmatrix}, while Section \ref{sec:fractional} contains similar results for specific matrix-structures, with generating function depending on the matrix-size, which arise in the context of fractional differential equations with distributed orders. Section \ref{sec:conclusions} contains conclusions and a list of open problems, including the use of our machinery in the study of iteration matrices, especially those concerning multigrid-type techniques. \subsection{Background and definitions} \label{sec:introduction:background} Throughout this paper, we will use the following notation. Let ${f}:G\to\mathbb{C}$ be a function belonging to $L^1(G)$, with $G\subseteq\mathbb R^\ell$, $\ell\ge 1$, a measurable set. We denote by $\{A_{n}\}_{n}$ the matrix-sequence whose elements are the matrices $A_{n}$ of dimension $n \times n$. Let $s,d \in \mathbb{N}$ and let $\mathbf{n}=(n_1,n_2,\dots,n_d)$ be a multi-index; we indicate by $\{A_{\mathbf{n}}\}_{\mathbf{n}}$ the $d$-level $s\times s$ block matrix-sequence whose elements are the matrices $A_\mathbf{n}$ of size $d(\mathbf{n},s)=sn_1n_2\cdots n_d$. \subsection{Toeplitz and circulant matrix-sequences} \label{sec:introduction:tep_circ} In the following we report the main background concerning the concepts of Toeplitz and circulant matrices, for simplicity in the scalar unilevel setting. We only provide the generalization to the block multilevel case for the results that will be exploited for the purpose of the paper.
\begin{defn} \label{def:toeplitz_scalar} An $n\times n$ Toeplitz matrix $A_n$ is a matrix that has equal entries along each diagonal, and can be written as \begin{equation*} A_n=\left[a_{i-j}\right]_{i,j=1}^{n}=\left[\begin{smallmatrix} a_0 & a_{-1} & a_{-2} & \cdots & \cdots & a_{1-n}\vphantom{\ddots}\\ a_1 & \ddots & \ddots & \ddots & & \vdots\\ a_2 & \ddots & \ddots & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & a_{-2}\\ \vdots & & \ddots & \ddots & \ddots & a_{-1}\\ a_{n-1} & \cdots & \cdots & a_2 & a_1 & a_0\vphantom{\ddots}\\ & \end{smallmatrix}\right], \ \ \ a_j\in \mathbb{C}, \ j=1-n,\ldots,n-1. \end{equation*} \end{defn} In the following we focus on the two important sub-classes given by the Toeplitz matrices $T_{n}(f) \in \mathbb{C}^{n \times n}$ and the circulant matrices $C_{n}(f) \in \mathbb{C}^{n \times n}$, associated with a function $f$, called the \textbf{generating function}. \begin{defn}\label{def:toeplitz_scalar_generating} Given $f$ belonging to $L^1([-\pi,\pi])$ and periodically extended to the whole real line, the matrix $T_{n}(f)$ is defined as \begin{equation*} T_n(f)=\left[\hat f_{i-j}\right]_{i,j=1}^n, \end{equation*} where \begin{equation} \hat{f}_{k}\coloneqq\frac1{2\pi}\int_{-\pi}^{\pi}\!\!f(\theta)\,\mathrm{e}^{-k\mathbf{i} \theta}\!\mathrm{d}\theta,\qquad k\in\mathbb Z,\qquad \mathbf{i}^2=-1,\label{eq:introduction:background:fourier} \end{equation} are the Fourier coefficients of $f$, and \begin{equation*} f(\theta)=\!\!\sum_{k=-\infty}^{\infty}\!\!\hat{f}_{k}\mathrm{e}^{k\mathbf{i} \theta},\label{eq:introduction:fourierseries} \end{equation*} is the Fourier series of $f$. \end{defn} \begin{defn}\label{def:Cir} Let the Fourier coefficients of a given function ${f}\in L^1([-\pi,\pi])$ be defined as in formula (\ref{eq:introduction:background:fourier}).
Then, we can define the $n \times n$ circulant matrix $C_{n}(f)$ associated with $f$, as \begin{equation} C_{n}(f)=\!\!\!\!\!\!\sum_{j=-(n-1)}^{n-1}\!\!\!\!\!\!\hat{f}_{j}Z_{n}^{j}=\mathbb{F}_{n} D_{n}(f) \mathbb{F}_{n}^{*},\label{eq:introduction:background:circulant:schur} \end{equation} where $^*$ denotes the transpose conjugate, $Z_{n}$ is the $n \times n$ matrix defined by \begin{equation*} \left(Z_{n}\right)_{ij}=\begin{cases} 1,&\text{if }\mathrm{mod}(i-j,n)=1,\\ 0,&\text{otherwise}. \end{cases} \end{equation*} Moreover, \begin{equation*} D_{n}(f)=\diag\left(s_n(f(\theta_{j,n}^c))\right),\quad j=1,\ldots,n,\label{eig-circ} \end{equation*} where \begin{equation} \theta_{j,n}^c=\frac{(j-1)2\pi}{n},\quad j=1,\ldots,n,\label{eq:introduction:background:circulant:grid-circ} \end{equation} and $s_{n}(f(\theta))$ is the $ n$th Fourier sum of $f$ given by \begin{equation*} s_{n}(f({\theta}))= \sum_{k=1-n}^{n-1} \hat{f}_{k} \mathrm{e}^{k\mathbf{i}\theta}.\label{fourier-sum} \end{equation*} The matrix $\mathbb{F}_n$ is the so-called Fourier matrix of order $n$, given by \begin{align*} (\mathbb{F}_{n})_{i,j}=\frac{1}{\sqrt{n}} \mathrm{e}^{\mathbf{i}(i-1)\theta_{j,n}^c}, \quad i,j=1,\ldots,n. \end{align*} In the case of the Fourier matrix, we have $\mathbb{F}_{n}\mathbb{F}_{n}^*=\mathbb{I}_{n}$, that is, $\mathbb{F}_{n}$ is complex-symmetric and unitary, with $\mathbb{I}_{n}$ being the identity of size $n$. The proof of the second equality in (\ref{eq:introduction:background:circulant:schur}), which implies that the columns of the Fourier matrix $\mathbb{F}_n$ are the eigenvectors of $C_{n}(f)$, can be found in \cite[Theorem 6.4]{GLT-bookI}. \end{defn} Note that it follows from the definition that if ${f}$ is a trigonometric polynomial of degree less than $n$, then the entries of $D_n(f)$ are the eigenvalues of $C_n(f)$, explicitly given by sampling the generating function $f$ on the grid $\theta_{j,n}^c$:
\begin{equation} \begin{split} \lambda_j(C_n(f))&=f\left(\theta_{j,n}^c\right),\quad j=1,\ldots,n,\\ D_{n}(f)&=\diag\left(f\left(\theta_{j,n}^c\right)\right),\quad j=1,\ldots,n. \label{eq:introduction:background:circulant:eig-circSEE} \end{split} \end{equation} The type of domain (either one-dimensional $[-\pi,\pi]$ or $d$-dimensional $[-\pi,\pi]^d$) and codomain (either the complex field or the space of $s \times s$ complex matrices) of $f$ gives rise to different kinds of Toeplitz matrices, see Table~\ref{ttt} for a complete overview. \begin{table}[] \begin{center} \small \caption{Different types of generating function and the associated Toeplitz matrix.} \label{ttt} \begin{tabular}{ll|ll} \toprule \multicolumn{2}{c|}{\textbf{Type of generating function}} & \multicolumn{2}{c}{\textbf{Associated Toeplitz matrix}}\\ \midrule univariate scalar & $f(\theta):[-\pi,\pi]\to \mathbb{C}$ & unilevel scalar & $T_{n}(f)\in \mathbb{C}^{n\times n}$\\ $d$-variate scalar & $f(\boldsymbol{\theta}):[-\pi,\pi]^d\to \mathbb{C}$ & $d$-level scalar & $T_{\mathbf{n}}(f)\in \mathbb{C}^{d(\mathbf{n},1)\times d(\mathbf{n},1)}$\\ univariate matrix-valued & $\mathbf{f}(\theta):[-\pi,\pi]\to \mathbb{C}^{s\times s}$& unilevel block & $T_{n}(\mathbf{f})\in \mathbb{C}^{d({n},s)\times d({n},s)}$\\ $d$-variate matrix-valued & $\mathbf{f}(\boldsymbol{\theta}):[-\pi,\pi]^d\to \mathbb{C}^{s\times s}$ & $d$-level block & $T_{\mathbf{n}}(\mathbf{f})\in \mathbb{C}^{d(\mathbf{n},s)\times d(\mathbf{n},s)}$\\ \bottomrule \end{tabular} \end{center} \end{table} In particular, we provide the definition of a $d$-level $s\times s$ block Toeplitz matrix $T_\mathbf{n}(\mathbf{f})$ starting from a $d$-variate matrix-valued function $\mathbf{f}:[-\pi,\pi]^{d}\rightarrow \mathbb{C}^{s\times s}$, with $\mathbf{f}\in L^1([-\pi,\pi]^d)$.
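The eigenvalue-sampling property of circulant matrices stated above is easy to verify numerically; a minimal Python sketch with the illustrative symbol $f(\theta)=2-2\cos\theta$:

```python
import numpy as np

# Circulant C_n(f) for f(theta) = 2 - 2cos(theta), whose Fourier
# coefficients are hat f_0 = 2 and hat f_{+-1} = -1.
n = 16
col = np.zeros(n)
col[0], col[1], col[-1] = 2.0, -1.0, -1.0          # first column of C_n(f)
C = np.array([[col[(i - j) % n] for j in range(n)] for i in range(n)])

theta = 2 * np.pi * np.arange(n) / n               # grid theta_{j,n}^c
samples = 2 - 2 * np.cos(theta)                    # f sampled on the grid

eigs = np.linalg.eigvalsh(C)                       # C is real symmetric here
assert np.allclose(np.sort(eigs), np.sort(samples))
```

Up to ordering, the spectrum of $C_n(f)$ coincides exactly with the samples $f(\theta_{j,n}^c)$, since $f$ here is a trigonometric polynomial of degree $1 < n$.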
\begin{defn}\label{def:toeplitz_block_generating} Given a function $\mathbf{f}:[-\pi,\pi]^{d}\rightarrow \mathbb{C}^{s\times s}$ its Fourier coefficients are given by \begin{equation*} \hat{\mathbf{f}}_{\mathbf{k}}\coloneqq \frac1{(2\pi)^d} \int_{[-\pi,\pi]^d}\mathbf{f}(\boldsymbol{\theta})\mathrm{e}^{-\mathbf{i}\left\langle {\mathbf{k}},\boldsymbol{\theta}\right\rangle}\mathrm{d}\boldsymbol{\theta}\in\mathbb{C}^{s\times s}, \qquad \mathbf{k}=(k_1,\ldots,k_d)\in\mathbb{Z}^d,\label{fhat} \end{equation*} where $\boldsymbol{\theta}=(\theta_1,\ldots,\theta_d)$, $\left\langle \mathbf{k},\boldsymbol{\theta}\right\rangle=\sum_{i=1}^dk_i\theta_i$, and the integrals of matrices are computed elementwise. The associated generating function can be defined via its Fourier series as \begin{equation*} \mathbf{f}(\boldsymbol{\theta})=\sum_{\mathbf{k}\in \mathbb{Z}^d}\hat{\mathbf{f}}_{\mathbf{k}}\mathrm{e}^{\mathbf{i}\left\langle {\mathbf{k}},\boldsymbol{\theta}\right\rangle}.\label{eq:introduction:matrixvaluedsymbol} \end{equation*} The $d$-level $s\times s$ block Toeplitz matrix associated with $\mathbf{f}$ is the matrix of dimension $d({\mathbf{n},s})$, where $\mathbf{n}=(n_1,\ldots,n_d)$, given by \begin{equation*} T_\mathbf{n}(\mathbf{f})= \sum_{\mathbf{e}-\mathbf{n}\le \mathbf{k}\le \mathbf{n}-\mathbf{e}} T_{n_1}(\mathrm{e}^{\mathbf{i}k_1\theta_1})\otimes \cdots \otimes T_{n_d}(\mathrm{e}^{\mathbf{i}k_d\theta_d})\otimes \hat{\mathbf{f}}_{\mathbf{k}}, \end{equation*} where $\mathbf{e}$ is the vector of all ones and where $\mathbf{s}\le \mathbf{t}$ means that $s_j\le t_j$ for any $j=1,\ldots,d$.
\end{defn} \begin{defn} If $\textbf{n}\in \mathbb{N}^d$ and $\textbf{a} : [0,1]^d \to \mathbb{C}^{s\times s}$, we define the $\textbf{n}$-th $d$-level and $s\times s$ block diagonal sampling matrix as the following multilevel block diagonal matrix of dimension $d(\textbf{n},s)$: \begin{equation*}\label{eq:def_diagonal_sampling} D_{\textbf{n}}(\textbf{a}) = \diag_{\textbf{e}\le \textbf{j} \le \textbf{n}} \textbf{a}\left(\frac{\textbf{j}}{\textbf{n}}\right), \end{equation*} where we recall that $\textbf{e}\le \textbf{j} \le \textbf{n}$ means that $\textbf{j}$ varies from $\textbf{e}$ to $\textbf{n}$ following the lexicographic ordering. \end{defn} The following result provides an important relation between tensor products and multilevel Toeplitz matrices. \begin{lem}{\rm \cite{GLT-bookII}}\label{lemm:tensor_prod} Let $f_1 , \dots, f_d \in L^1([-\pi,\pi])$, $\mathbf{n}=(n_1,n_2, \dots, n_d) \in \mathbb{N}^d$. Then, \begin{equation*} T_{ n_1} ( f_1 ) \otimes \dots \otimes T_{n_d} ( f_d )= T_{\mathbf{n}} ( f_1 \otimes \dots \otimes f_d ), \end{equation*} where the Fourier coefficients of $f_1 \otimes \dots \otimes f_d$ are given by \begin{equation*} ( f_1 \otimes \dots \otimes f_d)_{\mathbf{k}} = ( f_1 )_{k_1} \dots ( f_d )_{k_d}, \quad \mathbf{k} \in \mathbb{Z}^d. \end{equation*} \end{lem} \subsection{Asymptotic distributions} \label{sec:introduction:distributions} In this subsection we introduce the definition of \textit{asymptotic distribution} in the sense of the eigenvalues and of the singular values, first for a generic matrix-sequence $\{A_{n}\}_{n}$, and then we report specific results concerning the distributions of Toeplitz and circulant matrix-sequences. Finally, we recall the notion of the GLT algebra and introduce a general notion of GLT momentary symbols. We recall that the notion of momentary symbols was given in \cite{Momentary_1} in a more specific and limited setting: here we generalize that definition.
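Lemma~\ref{lemm:tensor_prod} can be illustrated numerically on the classical two-level example $T_{n_1}(f)\otimes I_{n_2} + I_{n_1}\otimes T_{n_2}(f)$ with $f(\theta)=2-2\cos\theta$, which is the 2-level Toeplitz matrix of the separable symbol $f(\theta_1)+f(\theta_2)$ (the 2D discrete Laplacian); a minimal Python sketch, using the known closed-form spectrum of the tridiagonal $T_n(f)$:

```python
import numpy as np

def T(n):
    # unilevel Toeplitz T_n(f) with f(theta) = 2 - 2cos(theta): tridiag(-1, 2, -1)
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

n1, n2 = 6, 5
# Kronecker sum: the 2-level Toeplitz matrix of f(theta_1) + f(theta_2)
L2 = np.kron(T(n1), np.eye(n2)) + np.kron(np.eye(n1), T(n2))

# eigenvalues of T_n(2 - 2cos) are 2 - 2cos(j*pi/(n+1)); the Kronecker sum
# then has exactly all pairwise sums as eigenvalues
ev1 = 2 - 2 * np.cos(np.pi * np.arange(1, n1 + 1) / (n1 + 1))
ev2 = 2 - 2 * np.cos(np.pi * np.arange(1, n2 + 1) / (n2 + 1))
expected = np.sort(np.add.outer(ev1, ev2).ravel())

assert np.allclose(np.sort(np.linalg.eigvalsh(L2)), expected)
```
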
\begin{defn} {\rm \cite{GLT-bookI,GLT-bookII,gsz,TyZ}} \label{def:introduction:background:distribution} Let $f,{\mathfrak{f}}:G\to\mathbb{C}$ be measurable functions, defined on a measurable set $G\subset\mathbb{R}^\ell$ with $\ell\ge 1$, $0<\mu_\ell(G)<\infty$. Let $\mathcal{C}_0(\mathbb{K})$ be the set of continuous functions with compact support over $\mathbb{K}\in \{\mathbb{C}, \mathbb{R}_0^+\}$ and let $\{A_{\mathbf{n}}\}_{\mathbf{n}}$, be a sequence of matrices with eigenvalues $\lambda_j(A_{\mathbf{n}})$, $j=1,\ldots,{d_\mathbf{n}}$ and singular values $\sigma_j(A_{\mathbf{n}})$, $j=1,\ldots,{d_\mathbf{n}}$. Then, \begin{itemize} \item The matrix-sequence $\{A_{\mathbf{n}}\}_\mathbf{n}$ is \textit{distributed as $\s$ in the sense of the \textbf{singular values}}, and we write \begin{align} \{A_{\mathbf{n}}\}_{\mathbf{n}}\sim_\sigma \s,\nonumber \end{align} if the following limit relation holds for all $F\in\mathcal{C}_0(\mathbb{R}_0^+)$: \begin{equation} \lim_{\mathbf{n}\to\infty}\frac{1}{{d_\mathbf{n}}}\sum_{j=1}^{{d_\mathbf{n}}}F(\sigma_j(A_\mathbf{n}))= \frac1{\mu_\ell(G)}\int_G F({|\s(\boldsymbol{\theta})|})\,\!\mathrm{d}{\boldsymbol{\theta}}.\label{eq:introduction:background:distribution:sv} \end{equation} The function $\s$ is called the \textbf{singular value symbol} which describes asymptotically the singular value distribution of the matrix-sequence $ \{A_{\mathbf{n}}\}_{\mathbf{n}}$. 
\item The matrix-sequence $\{A_{\mathbf{n}}\}_{\mathbf{n}}$ is \textit{distributed as $ \es$ in the sense of the \textbf{eigenvalues}}, and we write \begin{equation*} \{A_{\mathbf{n}}\}_{\mathbf{n}}\sim_\lambda \es, \end{equation*} if the following limit relation holds for all $F\in\mathcal{C}_0(\mathbb{C})$: \begin{equation} \lim_{\mathbf{n}\to\infty}\frac{1}{{d_\mathbf{n}}}\sum_{j=1}^{{d_\mathbf{n}}}F(\lambda_j(A_{\mathbf{n}}))= \frac1{\mu_\ell(G)}\int_G \displaystyle F({\es(\boldsymbol{\theta})})\,\!\mathrm{d}{\boldsymbol{\theta}}.\label{eq:introduction:background:distribution:ev} \end{equation} The function $\mathfrak{f}$ is called the \textbf{eigenvalue symbol} which describes asymptotically the eigenvalue distribution of the matrix-sequence $ \{A_{\mathbf{n}}\}_{\mathbf{n}}$. \end{itemize} \end{defn} \begin{rem} \label{rem:introduction:background:1} Note that, if $A_\mathbf{n}$ is normal for any $\mathbf{n}$, or at least for all sufficiently large $\mathbf{n}$, then $\{A_{\mathbf{n}}\}_{\mathbf{n}}\sim_\sigma \s$ and $\{A_{\mathbf{n}}\}_{\mathbf{n}}\sim_\lambda \es$ imply that $\s=\es$. Of course this is true for Hermitian Toeplitz matrix-sequences, as emphasized in Theorem \ref{thm:prel_toepl_szeg} and Theorem \ref{thm:prel_toepl_szeg_multi}. Moreover, considering the case $d=1$, if $\s$ (or $\es$) is smooth enough, then the informal interpretation of the limit relation \eqref{eq:introduction:background:distribution:sv} (or \eqref{eq:introduction:background:distribution:ev}) is that, for $n$ sufficiently large, the $n$ singular values (or eigenvalues) of $A_{n}$ can be approximated by a sampling of $|\s(\theta)|$ (or $\es(\theta)$) on an equispaced grid of the interval~$G$, up to the presence of possibly $o(n)$ outliers. It is worth noticing that in most of the Toeplitz and PDE/FDE applications the number of actual outliers is limited, often to $O(1)$ (see \cite{GLT-bookI,GLT-bookII,GLT-bookIII,GLT-bookIV} and references therein).
The generalization of Definition \ref{def:introduction:background:distribution} and Remark \ref{rem:introduction:background:1} to the block setting and multilevel block setting can be found in \cite{GLT-bookIII,GLT-bookIV} and in the references therein. \end{rem} In the case where the matrix-sequence is a Toeplitz matrix-sequence generated by a function, the singular value distribution and the spectral distribution have been well studied in the past few decades. Szeg{\H{o}} \cite{gsz} first showed that the eigenvalues of the Toeplitz matrix $T_n(f)$ generated by a real-valued $f\in L^{\infty}([-\pi,\pi])$ are asymptotically distributed as~$\s$. Moreover, under the same assumption on $f$, Avram and Parter \cite{MR952991,MR851935} proved that the singular values of $T_n(f)$ are distributed as $|\s|$. This result has undergone many generalizations and extensions over the years (see \cite{GLT-bookI,GLT-bookII,GLT-bookIII,GLT-bookIV} and the references therein). The generalized Szeg{\H{o}} theorem that describes the singular value and spectral distribution of Toeplitz sequences generated by a scalar $f\in L^1([-\pi, \pi])$ is given as follows \cite{MR1481397}. \begin{thm}\label{thm:prel_toepl_szeg} Suppose $f \in L^{1}([-\pi,\pi])$. Let $T_n(f)$ be the Toeplitz matrix generated by $f$. We have \begin{equation*} \{T_n(f)\}_n \sim_{\sigma} f. \end{equation*} Moreover, if $f$ is real-valued almost everywhere (a.e.), then \begin{equation*} \{T_n(f)\}_n \sim_{\lambda} f. \end{equation*} \end{thm} Tilli \cite{MR1671591} generalized the proof to the block-Toeplitz setting, and we report the extension of the eigenvalue result to the case of multivariate Hermitian matrix-valued generating functions. \begin{thm}\label{thm:prel_toepl_szeg_multi} Suppose $\mathbf{f}\in L^1([-\pi,\pi]^d,s)$ with positive integers $d,s$. Let $T_{\bf n}(\mathbf{f})$ be the Toeplitz matrix generated by $\mathbf{f}$. We have \[ \{T_{\bf n}(\mathbf{f})\}_{{\bf n}}\sim_\sigma~\mathbf{f}.
\] Moreover, if $\mathbf{f}$ is a Hermitian matrix-valued function a.e., then \[ \{T_{\bf n}(\mathbf{f})\}_{{\bf n}}\sim_\lambda~\mathbf{f}. \] \end{thm} Concerning the circulant matrix-sequences, though the eigenvalues of a $C_{\bf n}(\textbf{f})$ are explicitly known, a result like Theorem \ref{thm:prel_toepl_szeg} and Theorem \ref{thm:prel_toepl_szeg_multi} does not hold for sequences $\left\{C_{\bf n}(\textbf{f})\right\}_{{\bf n}}$ in general. Indeed, the Fourier sum of $\textbf{f}$ converges to $\textbf{f}$ under quite restrictive assumptions (see \cite{zygmund}). In particular, if $\textbf{f}$ belongs to the Dini-Lipschitz class, then $\left\{C_{\bf n}(\textbf{f})\right\}_{{\bf n}}\sim_\lambda\textbf{f}$ (see \cite{estatico-serra} for more relationships between circulant sequences and spectral distribution results). \subsection{Matrix algebras}\label{sec:matrix_algebra} Apart from the circulant algebra, introduced in Section \ref{sec:introduction:tep_circ}, we recall that other particular matrix algebras have interesting properties and can be exploited for our purpose. In particular, we mention the well-known $\tau$-algebras, see \cite{bozzo} and references therein. Here, we restrict the analysis to the case of the matrix algebras $ {\tau_{\varepsilon,\varphi}}$, introduced in~\cite{bozzo}, where an element of the algebra is a matrix \begin{align} T_{n,\varepsilon,\varphi}(g)=\left[ \begin{array}{ccccc} a+\varepsilon b &b\\ b&a&b\\ &\ddots&\ddots&\ddots\\ &&b&a&b\\ &&&b&a+\varphi b\ \end{array} \right].\nonumber \end{align} We can associate to this matrix a function $g$ of the form $g(\theta)=a+2b\cos\theta$. For some values of $\varepsilon$ and $\varphi$ the exact eigenvalues of $T_{n,\varepsilon,\varphi}(g)$ are given by sampling with specific grids; see~\cite{Momentary_1} for detailed examples and \cite{taueconomy} for asymptotic results.
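As a quick numerical illustration (a sketch of ours, with sample values $a,b$ that are not taken from the text), the sampling property can be verified for two classical cases: $\tau_{0,0}$, whose eigenvalues are $g(j\pi/(n+1))$, and $\tau_{1,1}$, whose eigenvalues are $g((j-1)\pi/n)$.

```python
import numpy as np

# Sample coefficients of our own choosing for g(theta) = a + 2b cos(theta).
n, a, b = 40, 3.0, 0.5

# tau_{0,0}: plain tridiagonal, eigenvalues g(j*pi/(n+1)), j = 1, ..., n.
T00 = a*np.eye(n) + b*np.eye(n, k=1) + b*np.eye(n, k=-1)
grid00 = np.arange(1, n + 1)*np.pi/(n + 1)
assert np.allclose(np.linalg.eigvalsh(T00), np.sort(a + 2*b*np.cos(grid00)))

# tau_{1,1}: corners modified to a + b, eigenvalues g((j-1)*pi/n), j = 1, ..., n.
T11 = T00.copy()
T11[0, 0] = T11[-1, -1] = a + b
grid11 = np.arange(0, n)*np.pi/n
assert np.allclose(np.linalg.eigvalsh(T11), np.sort(a + 2*b*np.cos(grid11)))
```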
In Table~\ref{tbl:taugrids} we provide the proper grids $\theta^{(\varepsilon,\varphi)}_{j,n}$ and $\Theta_{i,j,n}^{(\varepsilon,\varphi)}$ to give the exact eigenvalues and eigenvectors respectively for $\varepsilon, \varphi \in\{-1,0,1\}$. \begin{table}[!ht] \centering \caption{Grids for {$\tau_{\varepsilon,\varphi}$}-algebras, $\varepsilon,\varphi\in\{-1,0,1\}$; $\theta_{j,n}^{(\varepsilon,\varphi)}$ and $\Theta_{j,n}^{(\varepsilon,\varphi)}$ are the grids used to compute the eigenvalues and eigenvectors, respectively. {The standard naming convention (\texttt{dst-*} and \texttt{dct-*}) in parenthesis; see, e.g., \cite[Appendix 1]{CeccheriniSilberstein2008}.}} \label{tbl:taugrids} \begin{tabular}{|r|ccc||c|} \hline &&$\theta_{j,n}^{(\varepsilon,\varphi)}$&&$\Theta_{i,j,n}^{(\varepsilon,\varphi)}$\\[0.2em] \hline \diaghead{\theadfont MMMMM}{ $\varepsilon$ }{ $\varphi$ }& \thead{-1}&\thead{0}&\thead{1}&\thead{-1, 0, 1}\\[0.2em] \hline \thead{-1}&\makecell{{{\tiny (\texttt{dst-2})}}\\$\frac{j\pi}{n}$}&\makecell{{{\tiny (\texttt{dst-6})}}\\$\frac{j\pi}{n+1/2}$}&\makecell{{{\tiny (\texttt{dst-4})}}\\$\frac{(j-1/2)\pi}{n}$}&$(i-1/2)\theta_{j,n}^{(\varepsilon,\varphi)}$\\[0.2em] \thead{0}&\makecell{{{\tiny (\texttt{dst-5})}}\\$\frac{j\pi}{n+1/2}$}&\makecell{{{\tiny (\texttt{dst-1})}}\\$\frac{j\pi}{n+1}$}&\makecell{{{\tiny (\texttt{dst-7})}}\\$\frac{(j-1/2)\pi}{n+1/2}$}&$i\theta_{j,n}^{(\varepsilon,\varphi)}$\\[0.2em] \thead{1}&\makecell{{{\tiny (\texttt{dct-4})}}\\$\frac{(j-1/2)\pi}{n}$}&\makecell{{{\tiny (\texttt{dct-8})}}\\$\frac{(j-1/2)\pi}{n+1/2}$}&\makecell{{{\tiny (\texttt{dct-2})}}\\$\frac{(j-1)\pi}{n}$}&$(i-1/2)\theta_{j,n}^{(\varepsilon,\varphi)}+\frac{\pi}{2}$\\[0.2em] \hline \end{tabular} \end{table} Since all grids $\theta_{j,n}^{(\varepsilon,\varphi)}$ associated with {$\tau_{\varepsilon,\varphi}$}-algebras where $\varepsilon,\varphi\in\{-1,0,1\}$ are uniformly spaced grids, we know that \begin{alignat*}{7} 
\theta_{j,n}^{(1,1)}&<\theta_{j,n}^{(0,1)}&&=\theta_{j,n}^{(1,0)}\nonumber\\ &&&<\theta_{j,n}^{(-1,1)}&&=\theta_{j,n}^{(1,-1)}\nonumber\\ &&&&&<\theta_{j,n}^{(0,0)}\nonumber\\ &&&&&<\theta_{j,n}^{(-1,0)}&&=\theta_{j,n}^{(0,-1)}\nonumber\\ &&&&&&&<\theta_{j,n}^{(-1,-1)},\qquad\qquad \forall j=1,\ldots,n.\label{eq:gridcomparison} \end{alignat*} \subsection{Theory of Generalized Locally Toeplitz (GLT) sequences} \label{sec:introduction:glt} In this subsection we introduce the main properties of the theory of Generalized Locally Toeplitz (GLT) sequences, limited to the practical features which are sufficient for our purposes, see~\cite{GLT-bookI,GLT-bookII,GLT-bookIII,GLT-bookIV}. In particular, we consider the multilevel and block setting with $d$ being the number of levels. \begin{description} \item[GLT1] Each GLT sequence has a singular value symbol $\mathbf{f}(\boldsymbol{\theta},\mathbf{x})$, which is measurable with respect to the Lebesgue measure and describes the singular value distribution in the sense of Definition \ref{def:introduction:background:distribution} with $\ell=2d$. In addition, if the sequence is Hermitian, then the distribution also holds in the eigenvalue sense. We specify that a GLT sequence $\{A_\mathbf{n}\}_\mathbf{n}$ has GLT symbol $\mathbf{f}(\boldsymbol{\theta},\mathbf{x})$ by writing $\{A_\mathbf{n}\}_\mathbf{n}\sim_{\textsc{glt}} \mathbf{f}(\boldsymbol{\theta},\mathbf{x})$, $(\boldsymbol{\theta},\mathbf{x})\in [-\pi,\pi]^d\times [0,1]^d$. \item[GLT2] The set of GLT sequences forms a $*$-algebra, i.e., it is closed under linear combinations, products, inversion (whenever the symbol is singular at most on a set of zero Lebesgue measure), and conjugation. Hence, we obtain the GLT symbol of algebraic operations of a finite set of GLT sequences by performing the same algebraic manipulations on the symbols of the considered GLT sequences.
\item[GLT3] Every Toeplitz sequence $\{T_\mathbf{n}(\mathbf{f})\}_\mathbf{n}$ generated by a function $\mathbf{f}(\boldsymbol{\theta})$ belonging to $L^1([-\pi,\pi]^d)$ is a GLT sequence with GLT symbol given by $\mathbf{f}$. Every diagonal sampling sequence $\{D_\mathbf{n}(\mathbf{a})\}_\mathbf{n}$ generated by a Riemann integrable function $\mathbf{a}(\boldsymbol{x})$, $\boldsymbol{x}\in [0,1]^d$, is a GLT sequence with GLT symbol given by $\mathbf{a}$. \item[GLT4] Every sequence which is distributed as the constant zero in the singular value sense is a GLT sequence with symbol~$0$. In particular: \begin{itemize} \item every sequence whose rank divided by the size tends to zero, as the matrix-size tends to infinity; \item every sequence whose trace-norm (i.e., sum of the singular values) divided by the size tends to zero, as the matrix-size tends to infinity. \end{itemize} \end{description} From a practical viewpoint, on the one hand, one of the main advantages of belonging to the GLT class is that, under certain hypotheses, crucial spectral and singular value information can be derived using the concept of GLT symbol. On the other hand, the above properties imply the following important feature of the GLT symbol. Given a sequence $\{A_\mathbf{n}\}_\mathbf{n}$ obtained by algebraic operations on a finite set of GLT sequences, the small-norm and low-rank terms which compose the sequence can be neglected in the computation of the GLT symbol. Consequently, for small matrix-sizes $n$, the resulting approximations may not be as accurate as desirable. For this reason, the concept of (singular value and spectral) ``momentary symbols'' was introduced and exploited in \cite{Momentary_1}, starting from a special case of Toeplitz structures.
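The accuracy gap just described can be seen on a minimal example of our own making: for $X_n=T_n(2-2\cos\theta)+h^2 I_n$ with $h=1/(n+1)$ (the matrix from $-u''+u=f$ with constant coefficients), the GLT symbol $2-2\cos\theta$ misses the vanishing $h^2$ term, while keeping it recovers the eigenvalues exactly on the grid $j\pi/(n+1)$.

```python
import numpy as np

# Toy example (ours): X_n = T_n(2 - 2cos(theta)) + h^2 I_n, h = 1/(n+1).
# GLT symbol: 2 - 2cos(theta); keeping the vanishing term gives the
# "momentary" symbol 2 - 2cos(theta) + h^2.
n = 40
h = 1.0/(n + 1)
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
X = K + h**2*np.eye(n)

grid = np.arange(1, n + 1)*np.pi/(n + 1)
eigs = np.sort(np.linalg.eigvalsh(X))
err_glt = np.max(np.abs(eigs - np.sort(2 - 2*np.cos(grid))))
err_mom = np.max(np.abs(eigs - np.sort(2 - 2*np.cos(grid) + h**2)))
assert np.isclose(err_glt, h**2)   # GLT symbol sampling misses the h^2 shift
assert err_mom < 1e-12             # the corrected symbol is exact here
```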
Here we generalize the notion to that of ``GLT momentary symbols'': the construction stems from that of the symbol in the GLT sense, but in practice the information of the small-norm contributions is kept in the symbol, and this may lead to higher accuracy, at least in some emblematic cases, when approximating the singular values and eigenvalues of Toeplitz-like matrices, even for small dimensions. \subsection{The GLT momentary symbol sequence} \label{sec:momentary} For clarity, in this subsection we treat the matrix-sequences in detail only in the unilevel scalar setting. This avoids cumbersome notation, and the ideas extend in a plain manner to the case where the involved GLT symbols are also matrix-valued and multivariate, as briefly sketched below. As an example, we take the following second-order differential equation with Dirichlet boundary conditions \begin{equation*} \label{FD} \begin{cases} -(a(x)u'(x))' +b(x)u'(x) + c(x)u(x) = f(x), & x\in (0,1),\\ u(0) = \alpha, \qquad u(1) =\beta. & \end{cases} \end{equation*} The well-posedness of the previous diffusion-convection-advection problem holds in the case where $a(x)\in C^1(0,1)$. Furthermore, the existence and uniqueness of the solution are guaranteed in the case where $a(x) > 0$, $c(x) \ge 0$, the functions $b(x),c(x)$ are continuous on $[0,1]$, and $f(x)\in L^2([0,1])$ (see \cite{Brezis}). For a more exhaustive discussion regarding the conditions of existence and uniqueness, even in the multidimensional case, we refer to \cite{Pozio,Punzo} and references therein.
From a GLT viewpoint, we only require the following much weaker assumptions: \begin{itemize} \item $a(x), c(x)$ are real-valued functions, continuous almost everywhere, defined in $[0,1]$, \item $b(x)$ is a real-valued function on $[0,1]$, such that $|b(x)x^{\alpha}|$ is bounded for some $\alpha<3/2$, \end{itemize} while $f(x)$ is a general function.\\ We employ central second-order finite differences for approximating the given equation. We define the stepsize $h=\frac{1}{n+1}$ and the points $x_k=kh$ for (possibly non-integer) $k\in[0,n+1]$. Let $a_k:= a(x_{\frac k2})$ for any integer $k\in[0,2n+2]$ and set $b_j:=b(x_j)$, $c_j:=c(x_j)$, $f_j:=f(x_j)$ for every $j=0,\ldots,n+1$. We compute approximations $u_j$ of the values $u(x_j)$ for $j=1,\ldots,n$ by solving the following linear system \begin{equation}\label{linear-sys} A_n\begin{pmatrix} u_1\\ u_2\\ \vdots\\ u_{n-1}\\ u_n \end{pmatrix} + B_n \begin{pmatrix} u_1\\ u_2\\ \vdots\\ u_{n-1}\\ u_n \end{pmatrix} + C_n\begin{pmatrix} u_1\\ u_2\\ \vdots\\ u_{n-1}\\ u_n \end{pmatrix} =h^2 \begin{pmatrix} f_1 + \frac{1}{h^2}a_1\alpha + \frac{1}{2h}b_1\alpha\\ f_2\\ \vdots\\ f_{n-1}\\ f_n+ \frac{1}{h^2}a_{2n+1}\beta - \frac{1}{2h}b_{n}\beta \end{pmatrix}, \end{equation} where \[ A_n = \begin{pmatrix} a_1 + a_3 & - a_3 & & & \\ -a_3 & a_3+a_5 & -a_5 & & \\ & \ddots&\ddots &\ddots & \\ & & -a_{2n-3}& a_{2n-3}+a_{2n-1}& -a_{2n-1} \\ & & & -a_{2n-1}&a_{2n-1}+a_{2n+1} \end{pmatrix}, \] \[ B_n = \frac{h}{2} \begin{pmatrix} 0 & b_1 & & & \\ -b_2 & 0 & b_2 & & \\ & \ddots&\ddots &\ddots & \\ & & -b_{n-1} & 0 & b_{n-1} \\ & & & -b_{n} & 0 \end{pmatrix}, \quad C_n = h^2 \diag(c_1,\ldots,c_n).
\] In the case where $a(x)\equiv 1$ and $b(x)\equiv 1$, we find the basic Toeplitz structures \begin{equation*}\label{basic toep1} K_n = T_n(2-2\cos\theta)= \begin{pmatrix} 2 & - 1 & & & \\ -1 & 2 & -1 & & \\ & \ddots&\ddots &\ddots & \\ & & -1& 2 & -1 \\ & & & -1& 2 \end{pmatrix}, \end{equation*} \begin{equation*}\label{basic toep2} H_n = T_n(\mathbf{i}\sin\theta)= \frac{1}{2} \begin{pmatrix} 0 & 1 & & & \\ -1 & 0 & 1 & & \\ & \ddots&\ddots &\ddots & \\ & & -1 & 0 & 1 \\ & & & -1 & 0 \end{pmatrix}, \end{equation*} which are of importance since $A_n=D_n(a)K_n + E_n$ and $B_n=hD_n(b)H_n$ with $\{E_n\}_n\sim_\sigma 0$, as an immediate check shows (see \cite{MR4157202}). Therefore, by using the GLT axioms (as done in detail in \cite{MR4157202}) we obtain \[ \{ A_n\}_n \sim_{\textsc{glt}} a(x)(2-2\cos\theta), \quad \left\{ \frac{1}{h} B_n\right\}_n \sim_{\textsc{glt}}\mathbf{i} b(x)\sin\theta, \quad \left\{\frac{1}{h^2} C_n\right\}_n \sim_{\textsc{glt}} c(x). \] As a conclusion, $\left\{B_n\right\}_n \sim_{\textsc{glt}} 0$, $\ \left\{C_n\right\}_n \sim_{\textsc{glt}} 0$ and hence, setting $X_n=A_n+B_n+C_n$ the actual coefficient matrix of the linear system in (\ref{linear-sys}), again by the $*$-algebra structure of the GLT matrix-sequences, we deduce \[ \{ X_n\}_n \sim_{\textsc{glt}} a(x)(2-2\cos\theta). \] Now, following \cite{Momentary_1}, the idea is to consider not only the asymptotic setting, but also the case of moderate sizes. As a consequence, to increase the precision of the evaluation of eigenvalues and singular values, we can associate to \[ X_n=A_n+B_n+C_n \] the specific symbol $f_n(x,\theta)= a(x)(2-2\cos\theta)+h\mathbf{i} b(x)\sin\theta+ h^2c(x)$. We are now in a position to give a formal definition of GLT momentary symbols.
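The splittings used above can be checked numerically. The sketch below uses sample coefficients of our own choosing ($a(x)=1+x^2$, $b(x)=\sin x$) and samples the diagonal matrices on the finite difference grid $x_j=j/(n+1)$; the $O(h)$ mismatch with the sampling $a(j/n)$ of the formal definition of $D_n(a)$ is absorbed in $E_n$ either way.

```python
import numpy as np

n = 200
h = 1.0/(n + 1)
a = lambda x: 1 + x**2          # sample coefficient functions of ours
b = np.sin
x = h*np.arange(n + 2)                       # grid x_0, ..., x_{n+1}
ah = a(h*np.arange(2*n + 3)/2)               # a_k = a(x_{k/2}), k = 0, ..., 2n+2

# A_n and B_n assembled as in the linear system above.
A = (np.diag(ah[1:2*n:2] + ah[3:2*n + 2:2])
     - np.diag(ah[3:2*n:2], 1) - np.diag(ah[3:2*n:2], -1))
B = (h/2)*(np.diag(b(x[1:n]), 1) - np.diag(b(x[2:n + 1]), -1))

K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # K_n from the text
H = 0.5*(np.eye(n, k=1) - np.eye(n, k=-1))           # H_n from the text
D = lambda g: np.diag(g(x[1:n + 1]))                 # diagonal sampling on the grid

assert np.allclose(B, h*D(b) @ H)                    # B_n = h D_n(b) H_n exactly
E = A - D(a) @ K                                     # correction term E_n
assert np.linalg.norm(E, 2) < 10*h                   # small norm: {E_n}_n ~_sigma 0
```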
\begin{defn}[GLT momentary symbols] \label{def:momentarysymbols} Let $\{X_n\}_n$ be a matrix-sequence and assume that there exist matrix-sequences $\{A_n^{(j)}\}_n$, scalar sequences $c_n^{(j)}$, $j=0,\ldots,t$, and measurable functions $f_j$ defined over $[-\pi,\pi]\times [0,1]$, $t$ nonnegative integer independent of $n$, such that \begin{eqnarray} \nonumber \left\{ \frac{A_n^{(j)}}{ c_n^{(j)}}\right\}_n &\sim_{\textsc{glt}} & f_j, \\ \nonumber c_n^{(0)}=1, & & c_n^{(s)}=o(c_n^{(r)}), \ \ t\ge s>r, \\ \label{eq:sequence} \{X_n\}_n & = & \{A_n^{(0)}\}_n + \sum_{j=1}^t \{A_n^{(j)}\}_n. \end{eqnarray} Then, by a slight abuse of notation, \begin{equation}\label{eq:GLT_momentary_1D} f_n=f_0+ \sum_{j=1}^t c_n^{(j)} f_j \end{equation} is defined as the GLT momentary symbol for $X_n$ and $\{f_n\}$ is the sequence of GLT momentary symbols for the matrix-sequence $\{X_n\}_n$. \end{defn} Of course, in line with Section \ref{sec:introduction:glt}, the momentary symbol could be matrix-valued with a number of variables equal to $2d$ and domain $[-\pi,\pi]^d\times [0,1]^d$ if the basic matrix-sequences appearing in Definition \ref{def:momentarysymbols} are, up to proper scaling, matrix-valued and multilevel GLT matrix-sequences. \textcolor{black}{For example in the scalar $d$-variate setting relation (\ref{eq:GLT_momentary_1D}) takes the form \[f_{\textbf{n}}= \sum_{\textbf{j}=\textbf{0}}^\textbf{t} c_\textbf{n}^{(\textbf{j})} f_\textbf{j},\] which is a plain multivariate (possibly block) version of (\ref{eq:GLT_momentary_1D}).} Clearly there is a link with the GLT theory stated in the next result. \begin{thm}\label{moment-vs-glt} Assume that the matrix-sequence $\{X_n\}_n$ satisfies the requirements in Definition \ref{def:momentarysymbols}. 
Then $\{X_n\}_n$ is a GLT matrix-sequence, and the GLT symbol $f_0$ of the main term $A_n^{(0)}$ is the GLT symbol of $\{X_n\}_n$, that is, $\{X_n\}_n \sim_{\textsc{glt}} f_0$ and $ \lim_{n\to \infty} f_n=f_0$ uniformly on the definition domain. \end{thm} The given definition of momentary symbols is inspired, as is clear from the initial diffusion-convection-advection example, by approximated differential equations, where the presence of differential operators of different orders induces, after a possible proper scaling, a structure like that reported in (\ref{eq:sequence}). The idea is that the momentary symbol can be used to give a more precise evaluation either of the singular values or of the eigenvalues for moderate matrix-sizes, and not only asymptotically. However, we should be aware that, intrinsically, there is no general recipe, especially for the eigenvalues. In fact, as already proven in \cite{MR2176808}, a rank-one perturbation of infinitesimal spectral norm can actually change the spectra of matrix-sequences sharing the same GLT symbol and even sharing the same sequence of momentary symbols. \begin{description} \item[Example 1] Take the matrices $T_n(e^{\mathbf{i} \theta})$ and $X_n=T_n(e^{\mathbf{i} \theta})+ e_1 e_n^T c_n^{(1)}$ with $c_n^{(1)}=n^{-\alpha}$, $\alpha>0$ any positive number independent of the matrix-size $n$. By direct inspection $\{e_1 e_n^T c_n^{(1)}\}_n\sim_\sigma 0$ and hence it is a GLT matrix-sequence with zero symbol, independently of the parameter $\alpha$. If we look at the GLT momentary symbols, then they coincide with the GLT symbol for both $\{T_n(e^{\mathbf{i} \theta})\}_n$ and $\{X_n\}_n$: however, while in the first case the eigenvalues are all equal to zero, in the second case they are distributed asymptotically as the GLT symbol $e^{\mathbf{i} \theta}$ (which is also the GLT momentary symbol for any $n$).
\item[Example 2] Take a positive function $a$ defined on $[0,1]$ and the matrices $D_n(a)T_n(e^{\mathbf{i} \theta})$ and $X_n=D_n(a)T_n(e^{\mathbf{i} \theta})+ e_1 e_n^T c_n^{(1)}$ with $c_n^{(1)}=n^{-\alpha}$, $\alpha>0$ any positive number independent of the matrix-size $n$. Since $\{e_1 e_n^T c_n^{(1)}\}_n$ is a GLT matrix-sequence with zero symbol, independently of the parameter $\alpha$, we deduce that both $\{D_n(a)T_n(e^{\mathbf{i} \theta})\}_n$ and $\{X_n\}_n$ share the same GLT symbol $a(x)e^{\mathbf{i} \theta}$ (which is also the momentary symbol for any $n$). Again there is a dramatic change: while in the first case the eigenvalues are all equal to zero, in the second case they are distributed asymptotically as the function $\hat a e^{\mathbf{i} \theta}$, where $\hat a$ is the limit (if it exists) of the geometric mean of the sampling values present in $D_n(a)$, as $n$ tends to infinity: since $n^{-\alpha/n}$ converges to $1$ independently of the parameter $\alpha$ as $n$ tends to infinity, $\hat a$ will depend only on the diagonal values of $D_n(a)$. In conclusion, the eigenvalue distributions do not coincide with the GLT momentary symbols: this is a warning that the present tool may be ineffective and even misleading when highly non-normal matrices are considered. In this setting it must be emphasized that the asymptotic eigenvalue distribution is discontinuous with respect to the standard norms or metrics widely considered in the context of matrix-sequences. \end{description} \section{All-at-once solution of parabolic problems} \label{sec:problem} The aim of this section is to describe as accurately as possible the spectra and singular values of the sequence of structured linear systems stemming from the space-time discretization of a parabolic diffusion problem.
We consider the diffusion equation in one space dimension, \begin{equation*} u_t=u_{xx}, \quad x\in (a,b),\ t\in [0, T], \end{equation*} where we prescribe $u$ at $t= 0$ and impose the periodicity condition $u(x\pm(b-a),t)=u(x,t)$. We approximate our parabolic model problem on a rectangular space-time grid consisting of $N_t$ time intervals and $N_x$ space intervals. We obtain a sequence of linear systems, in which each component is of the form \begin{equation} A_{\mathbf{n}}x=b, \quad A_\mathbf{n}=J_{N_t}\oplus Q_{N_x}=J_{N_t}\otimes \mathbb{I}_{N_x}+\mathbb{I}_{N_t}\otimes Q_{N_x}\in\mathbb{R}^{N\times N}, \quad x,b\in\mathbb{R}^{N}, \label{eq:system} \end{equation} where $N=N_tN_x$, $\mathbf{n}=(N_t,N_x),$ $\mathbb{I}_{m}$ is the identity matrix of size $m$, and the matrices $J_{N_t}$ and $Q_{N_x}$ come from the discretization in time and space, respectively. In the following, we describe the time and space discretization and, in particular, how this leads to structured components of the matrix $A_{\mathbf{n}}$. \subsection{Time discretization} \label{sec:problem:time} The principal ingredients of the time discretization are: \begin{itemize} \item Choosing $N_t$ equispaced points in $[0,T]$ with stepsize $h_t=T/N_t$, that is, $t_j=jh_t$, for $j=1,\ldots,N_t$. \item Discretizing in time by the standard backward Euler method. \end{itemize} Regarding notation, for the sake of simplicity, since we are considering a 2D problem, the symbols will have $(\theta, \xi)$ as Fourier variables instead of the standard choice $(\theta_1,\theta_2)$ indicated in the notations of Section \ref{sec:introduction:tep_circ} (see Definition \ref{def:toeplitz_block_generating}).
The resulting matrix is $J_{N_{t}}$, which has the following unilevel scalar Toeplitz structure: \begin{equation} J_{N_t}=\frac{1}{h_t} \begin{bmatrix} 1 \\ -1 & 1\\ & \ddots & \ddots \\ & & -1 & 1 \end{bmatrix}=\frac{1}{h_t}T_{N_t}(f_J),\label{eq:problem:time:J} \end{equation} where $f_J$ is the generating function of the matrix-sequence $\{T_{n}(f_J)\}_{n}$ with \begin{equation*} f_J(\theta)=1-\mathrm{e}^{\mathbf{i}\theta}.\label{eq:problem:f_J} \end{equation*} \subsection{Space discretization} \label{sec:problem:space} The principal elements of the space discretization are: \begin{itemize} \item Choosing $N_x$ equispaced points in $[a,b]$. Since we are considering periodic boundary conditions, we have step size $h_x=(b-a)/N_x$ and $x_j=h_x(j-1)$, for $j=1,\ldots,N_x$. \item Discretizing in space using second-order finite differences. \end{itemize} Consequently, the space discretization matrix will be the circulant matrix $Q_{N_x}$ of the form: \begin{equation*} Q_{N_x}= \frac{1}{h_x^2} \begin{bmatrix} 2 &-1&&&-1\\ -1 & 2& -1\\ & \ddots & \ddots &\ddots \\ & & -1 & 2 & -1\\ -1& & & -1 & 2 \end{bmatrix}= \frac{1}{h_x^2}C_{N_x}(f_{Q}),\label{eq:problem:space:Q} \end{equation*} where \begin{equation*} f_Q(\xi)=2-2\cos\xi\label{eq:problem:f_Q} \end{equation*} is the generating function of the matrix. Of course, a different choice of Dirichlet boundary conditions would lead to the standard discrete Laplacian $T_{N_x}(f_Q)$: the analysis is equivalent, since this matrix also admits a well-known diagonalizing matrix, namely the sine transform matrix of type I, which is real, orthogonal and symmetric. \subsection{Analysis of the coefficient matrix $A_\mathbf{n}$} \label{sec:coefficientmatrix} We have seen that discretizing the problem of interest for a sequence of discretization parameters $h_x$ and $h_t$ leads to a sequence of linear systems, whose approximation error tends to zero as the coefficient matrix-size grows to infinity.
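The Kronecker-sum structure \eqref{eq:system} can be assembled directly for a concrete sanity check; the grid sizes and steps in the sketch below are sample values of our own choosing.

```python
import numpy as np

# Direct assembly of A_n = J (x) I + I (x) Q; Nt, Nx, ht, hx are sample values.
Nt, Nx = 5, 8
ht, hx = 1.0/Nt, 1.0/Nx

J = (np.eye(Nt) - np.eye(Nt, k=-1))/ht                  # (1/h_t) T_{N_t}(1 - e^{i theta})
Q = (2*np.eye(Nx) - np.eye(Nx, k=1) - np.eye(Nx, k=-1))/hx**2
Q[0, -1] = Q[-1, 0] = -1/hx**2                          # periodic wrap-around
A = np.kron(J, np.eye(Nx)) + np.kron(np.eye(Nt), Q)     # Kronecker sum J (+) Q

assert A.shape == (Nt*Nx, Nt*Nx)
assert np.allclose(A[:Nx, :Nx], np.eye(Nx)/ht + Q)      # diagonal block
assert np.allclose(A[Nx:2*Nx, :Nx], -np.eye(Nx)/ht)     # subdiagonal block
```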
The $\mathbf{n}$th coefficient matrix component is of the form \begin{equation}\label{to be diagonalized} A_\mathbf{n}=\frac{1}{h_t}T_{N_t}(f_J)\otimes \mathbb{I}_{N_x}+\mathbb{I}_{N_t}\otimes \frac{1}{h_x^2}C_{N_x}(f_Q). \end{equation} In order to design efficient solvers for the considered linear systems, it is of crucial importance to know the spectral properties of the matrix-sequence $\{A_\mathbf{n}\}_\mathbf{n}$. Hence, this section is devoted to the analysis of the structure of the matrix-sequence $\{{A}_\mathbf{n}\}_\mathbf{n}$ in \eqref{eq:system}. In particular, we provide the singular value and spectral analysis using algebraic tricks, the GLT theory, and the concept of GLT momentary symbols. \subsection{GLT analysis of the coefficient sequence $\{{A}_\mathbf{n}\}_{\mathbf{n}}$}\label{sec:general_glt_analysis} The asymptotic spectral and singular value distributions of the matrix-sequence $\{{A}_\mathbf{n}\}_\mathbf{n}$, for the matrix-size $d(\mathbf{n})$ sufficiently large, depend on how $h_x$ and $h_t$ approach zero. Let $c_h\coloneqq h_x^2/h_t$; we have three different cases to consider. \begin{itemize} \item[\textsc{Case} 1.] $\left[c_h\to\infty\right]:$ If $h_t\to 0$ faster than $C_1h_x^2$, where $C_1$ is a constant, then we can consider the matrix \begin{equation*} h_t{A}_\mathbf{n}=T_{N_t}(f_J)\otimes \mathbb{I}_{N_x}+\mathbb{I}_{N_t}\otimes \underbrace{\frac{h_t}{h_x^2}}_{c_h^{-1}\to 0}C_{N_x}(f_Q). \end{equation*} Then the sequence satisfies $\{h_t A_\mathbf{n}\}_\mathbf{n}=\{T_{N_t}(f_J)\otimes \mathbb{I}_{N_x}+\mathbb{N}_\mathbf{n}\}_\mathbf{n}$, where $\mathbb{N}_\mathbf{n}$ is a small-norm matrix in the sense of item 2 of property \textbf{GLT4}, with $\|\mathbb{N}_\mathbf{n}\|<C_2$, $C_2$ constant.
Consequently, from \textbf{GLT4}, $\{\mathbb{N}_\mathbf{n}\}_\mathbf{n}$ is a matrix-sequence distributed in the singular value sense as $0$, which implies that $\{\mathbb{N}_\mathbf{n}\}_\mathbf{n}$ is zero-distributed in the GLT sense, as described in \textbf{GLT4}. Moreover, since $f_J$ is a trigonometric polynomial, Theorem \ref{thm:prel_toepl_szeg_multi}, properties \textbf{GLT1}--\textbf{GLT4} and Lemma \ref{lemm:tensor_prod} imply that \begin{equation*} \{h_t A_\mathbf{n}\}_\mathbf{n}\sim_{\textsc{glt}}f_J(\theta)\otimes 1+1\otimes0= f_A^{(1)}(\theta,\xi). \end{equation*} The $1$ present in $ f_J(\theta)\otimes 1$ should be interpreted as $1\mathrm{e}^{0\mathbf{i}\xi}$, and $1\otimes0$ should be interpreted as $1\mathrm{e}^{0\mathbf{i}\theta}\otimes0\mathrm{e}^{0\mathbf{i}\xi}$. Hence, the GLT symbol of the sequence $ \{h_t A_\mathbf{n}\}_\mathbf{n}$ is the bivariate function \begin{equation*} f_A^{(1)}(\theta,\xi)= f_J(\theta)=1-\mathrm{e}^{\mathbf{i}\theta},\label{eq:coefficientmatrix:fA1} \end{equation*} which should be interpreted as the function $f_J(\theta)\otimes 1$, constant in the second variable. From property \textbf{GLT1}, the function $f_A^{(1)}(\theta,\xi)$ describes the singular value distribution in the sense of relation (\ref{eq:introduction:background:distribution:sv}). In more detail, \begin{equation*} \{h_t A_\mathbf{n}\}_\mathbf{n}\sim_{\textsc{glt},\sigma}1-\mathrm{e}^{\mathbf{i}\theta}. \end{equation*} However, the matrix-sequence $\{h_t A_\mathbf{n}\}_\mathbf{n}$ is not symmetric, hence the distribution does not hold in the eigenvalue sense (see also {\bf Example 1} and {\bf Example 2} at the end of Section \ref{sec:momentary}).
Because of the structure of $J_{N_t}=\frac{1}{h_t}T_{N_t}(f_J)$ in equation (\ref{eq:problem:time:J}), it is straightforward to see that the asymptotic spectral distribution is given by ${\mathfrak{f}({\theta},\xi)}=1,$ in accordance with relation (\ref{eq:introduction:background:distribution:ev}), that is \begin{equation*} \{h_t A_\mathbf{n}\}_\mathbf{n}\sim_{\lambda}1. \end{equation*} \item[\textsc{Case} 2.] $\left[c_h\to0\right]:$ If $h_x^2\to 0$ faster than $C_1h_t$, where $C_1$ is a constant, then we have \begin{equation*} h_x^2A_\mathbf{n}=\underbrace{\frac{h_x^2}{h_t}}_{c_h\to 0}T_{N_t}(f_J)\otimes \mathbb{I}_{N_x}+\mathbb{I}_{N_t}\otimes C_{N_x}(f_Q). \end{equation*} Then the sequence satisfies $\{h_x^2 A_\mathbf{n}\}_\mathbf{n}=\{\mathbb{N}_\mathbf{n}+\mathbb{I}_{N_t}\otimes C_{N_x}(f_Q)\}_\mathbf{n}$, where $\mathbb{N}_\mathbf{n}$ is a small-norm matrix in the sense of item 2 of property \textbf{GLT4}, with $\|\mathbb{N}_\mathbf{n}\|<C_2$, $C_2$ constant. Then, $\{\mathbb{N}_\mathbf{n}\}_\mathbf{n}$ is a matrix-sequence distributed in the singular value sense, and consequently in the GLT sense, as $0$. Moreover, since ${f_Q}$ belongs to the Dini-Lipschitz class, properties \textbf{GLT2}--\textbf{GLT4} and Lemma \ref{lemm:tensor_prod} imply that \begin{equation*} \{h_x^2 A_\mathbf{n}\}_\mathbf{n}\sim_{\textsc{glt}}0\otimes 1+1\otimes f_Q(\xi)=1\otimes f_Q(\xi)=f_A^{(2)}(\theta,\xi), \end{equation*} where the GLT symbol is given by \begin{equation*} f_A^{(2)}(\theta,\xi)= f_Q(\xi)=2-2\cos\xi.\label{eq:coefficientmatrix:fA2} \end{equation*} In this case the function $f_A^{(2)}(\theta,\xi)$ is a singular value symbol for the sequence $\{h_x^2 A_\mathbf{n}\}_\mathbf{n}$, and also an eigenvalue symbol, since the matrices $C_{N_x}(f_{Q})$ are Hermitian for each $N_x$. Hence we have \begin{equation*} \{h_x^2 A_\mathbf{n}\}_\mathbf{n}\sim_{\textsc{glt},\sigma, \lambda} 2-2\cos\xi. \end{equation*} \item[\textsc{Case} 3.]
$\left[c_h=c= \text{constant}\right]:$ The last case is when $h_x^2$ and $h_t$ are proportional and related by the constant $c_h=c=\frac{h_x^2}{h_t}$, independent of the various step-sizes. In this setting we have \begin{equation*} h_x^2A_\mathbf{n}=\underbrace{\frac{h_x^2}{h_t}}_{c_h}T_{N_t}(f_J)\otimes \mathbb{I}_{N_x}+\mathbb{I}_{N_t}\otimes C_{N_x}(f_Q). \end{equation*} Consequently, from \textbf{GLT2}, \textbf{GLT3} and Lemma \ref{lemm:tensor_prod}, the following relationship holds when $c_h$ is a constant, \begin{equation*} \{h_x^2A_\mathbf{n}\}_\mathbf{n}\sim_{\textsc{glt}}c f_J(\theta)\otimes 1+1\otimes f_Q(\xi)=f_A^{(3)}(\theta,\xi). \end{equation*} From considerations analogous to Cases 1 and 2 we have \begin{equation*} \{h_x^2A_\mathbf{n}\}_\mathbf{n}\sim_{\textsc{glt},\sigma}c(1-\mathrm{e}^{\mathbf{i}\theta})+(2-2\cos\xi).\label{eq:coefficientmatrix:fA3} \end{equation*} Since the matrix $h_x^2A_\mathbf{n}$ is not Hermitian, the eigenvalue symbol $\es({\theta},\xi)$ cannot be directly derived from $f_A^{(3)}(\theta,\xi)$ (see again the discussion in the examples after Definition \ref{def:momentarysymbols}). In this setting the situation is simple because the involved two-level structure can be block-diagonalized, while the notion of GLT momentary symbol is useful for approximating the singular values of the sequence $\{h_x^2A_\mathbf{n}\}_\mathbf{n}$. \end{itemize} \subsection{Analysis of the coefficient matrix-sequence $\{{A}_\mathbf{n}\}_{\mathbf{n}}$ by algebraic manipulations and GLT momentary symbols}\label{sec:general_momentary_analysis} The first observation is that the matrix in (\ref{to be diagonalized}) admits an explicit decomposition which reveals a lower triangular matrix, similar to the original one, so that all the eigenvalues are known exactly.
In fact, by looking carefully at (\ref{to be diagonalized}), we obtain that \[ \frac{1}{h_t}T_{N_t}(f_J)\otimes \mathbb{I}_{N_x}= \mathbb{I}_{N_t}\left[\frac{1}{h_t}T_{N_t}(f_J)\right]\mathbb{I}_{N_t}\otimes \mathbb{F}_{N_x}\mathbb{I}_{N_x} \mathbb{F^*}_{N_x} \] and \[ \mathbb{I}_{N_t} \otimes \frac{1}{h_x^2}C_{N_x}(f_Q)= \mathbb{I}_{N_t}\mathbb{I}_{N_t}\mathbb{I}_{N_t} \otimes \mathbb{F}_{N_x}\frac{1}{h_x^2}D_{N_x}\mathbb{F^*}_{N_x}, \] where $\mathbb{F}_{N_x}$ is the unitary Fourier matrix of size $N_x$, $\mathbb{F^*}_{N_x}$ is its transpose conjugate and hence its inverse, and $D_{N_x}$ is the diagonal matrix containing the eigenvalues of $C_{N_x}(f_Q)$, that is, $f_Q(2\pi j/N_x)=2-2\cos(2\pi j/N_x)$, $j=0,1,\ldots, N_x-1$. Since $T_{N_t}(f_J)$ is a lower bidiagonal matrix with $1$ on the main diagonal, it is easily seen that the eigenvalues of $A_\mathbf{n}$ in (\ref{to be diagonalized}) are exactly \[ \frac{1}{h_t}+\frac{1}{h_x^2}(2-2\cos(2\pi j/N_x)),\ \ \ j=0,1,\ldots, N_x-1, \] each of them with multiplicity $N_t$. As a consequence, after the normalization by $h_x^2$, the spectral radius $\rho(h_x^2 A_{\mathbf{n}})$ coincides simply with $4+c_h$. It is clear that, in this context, due to the high non-normality of the term $T_{N_t}(f_J)$, after proper scalings depending on $h_t$ and $h_x$, the eigenvalues are a uniform sampling of a function which is not the GLT symbol and is not the associated GLT momentary symbol. This is not surprising given the discussion regarding the asymptotic behaviour of the matrix-sequences reported in {\bf Example 1} and in {\bf Example 2}, when discussing the potential and the limitations of the notion of GLT momentary symbols.
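The exact spectral formula above is easy to verify numerically. In the following numpy sketch the step-sizes are illustrative choices of ours (the small imaginary parts and spread returned by the eigensolver are a symptom of the high non-normality discussed above).

```python
import numpy as np

# Numerical check of the exact spectrum of A_n; illustrative step-sizes.
Nt, Nx = 4, 6
ht, hx = 0.5, 1.0                # so that c_h = h_x^2/h_t = 2

T = np.eye(Nt) - np.eye(Nt, k=-1)                     # T_{N_t}(f_J)
col = np.zeros(Nx); col[0], col[1], col[-1] = 2.0, -1.0, -1.0
C = np.array([np.roll(col, k) for k in range(Nx)]).T  # C_{N_x}(f_Q)

A = np.kron(T, np.eye(Nx)) / ht + np.kron(np.eye(Nt), C) / hx**2

# Predicted eigenvalues: 1/h_t + (2 - 2 cos(2 pi j/N_x))/h_x^2,
# j = 0, ..., N_x - 1, each with multiplicity N_t.
j = np.arange(Nx)
predicted = np.repeat(1/ht + (2 - 2*np.cos(2*np.pi*j/Nx))/hx**2, Nt)

lam = np.linalg.eigvals(A)
# The spectral radius of h_x^2 A_n is 4 + c_h (here N_x is even).
rho = np.max(np.abs(np.linalg.eigvals(hx**2 * A)))
```

The computed eigenvalues agree with the predicted sampling up to the (non-normality-driven) accuracy of the eigensolver.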
Also in this setting, by imposing (quite artificial) periodic boundary conditions in time, the term $T_{N_t}(f_J)$ changes into $C_{N_t}(f_J)$: this rank-one correction, repeated $N_t$ times in the matrix $A_\mathbf{n}$, produces a new matrix with the same GLT and momentary symbols as before, whose eigenvalues, however, are exactly the samplings of such functions. This is a further confirmation of the sensitivity of the eigenvalues, which can undergo dramatic changes due to minimal corrections in a context of highly non-normal matrices. \subsubsection{Singular values of $h_x^2A_\mathbf{n}$ (exact)} The singular values $\sigma_{1}(h_x^2A_\mathbf{n}),\dots,$ $\sigma_{d(\mathbf{n})}(h_x^2A_\mathbf{n}) $ of the matrix $h_x^2A_\mathbf{n}$ are given by the square roots of the eigenvalues of the Hermitian matrix $h_x^4A_\mathbf{n}A_\mathbf{n}^{\textsc{t}}$. Hence, in order to provide exactly $\sigma_{i}(h_x^2A_\mathbf{n})$, $i=1,\dots, d(\mathbf{n})$, we are interested in the spectrum of the matrix \begin{equation*} \begin{split} &h_x^4A_\mathbf{n}A_\mathbf{n}^{\textsc{t}}=\\ &\left[ \begin{array}{cccccccccc} \tilde{Q}_{N_x}^2&-c_h\tilde{Q}_{N_x}\\ -c_h\tilde{Q}_{N_x}&\tilde{Q}_{N_x}^2+c_h^2\mathbb{I}_{N_x}&-c_h\tilde{Q}_{N_x}\\ &-c_h\tilde{Q}_{N_x}&\tilde{Q}_{N_x}^2+c_h^2\mathbb{I}_{N_x}&\ddots&\\ & & & &\\ & &\ddots&\ddots&-c_h\tilde{Q}_{N_x}\\ & & &-c_h\tilde{Q}_{N_x}&\tilde{Q}_{N_x}^2+c_h^2\mathbb{I}_{N_x} \end{array} \right], \end{split} \end{equation*} where $\tilde{Q}_{N_x}=C_{N_x}(f_Q)+c_h\mathbb{I}_{N_x}$. Note that $h_x^4A_\mathbf{n}A_\mathbf{n}^{\textsc{t}}$ is not a pure block-tridiagonal Toeplitz matrix, because the constant $c_h^2$ is missing in the top left block. However, for each fixed $N_t$ and $N_x$, the matrix $\tilde{Q}_{N_x}$ is a circulant matrix with generating function $f_{\tilde{Q}_{N_x}}(\xi)=2-2\cos\xi +c_h$, which is also its GLT momentary symbol.
Thus we infer that $h_x^4A_\mathbf{n}A_\mathbf{n}^{\textsc{t}}$ is similar to a matrix $X_{d(\mathbf{n})}$, whose explicit expression is reported below \begin{equation*} \begin{split} & h_x^4A_\mathbf{n}A_\mathbf{n}^{\textsc{t}}\sim X_{d(\mathbf{n})}=\\ &\left[ \begin{array}{cccccccccc} D_{\tilde{Q}}^{2}&-c_hD_{\tilde{Q}}\\ -c_hD_{\tilde{Q}}&D_{\tilde{Q}}^{2}+c_h^2\mathbb{I}_{N_x}&-c_hD_{\tilde{Q}}\\ &-c_hD_{\tilde{Q}}&D_{\tilde{Q}}^{2}+c_h^2\mathbb{I}_{N_x}&\ddots\\ & &\ddots&\ddots&-c_hD_{\tilde{Q}}\\ & & &-c_hD_{\tilde{Q}}&D_{\tilde{Q}}^{2}+c_h^2\mathbb{I}_{N_x} \end{array} \right], \end{split} \end{equation*} with $D_{\tilde{Q}}=\diag_{\ell=1,\dots, N_x}\left(f_{\tilde{Q}_{N_x}}(\xi_{\ell,N_x}) \right)$. Consequently, we study the spectrum of $X_{d(\mathbf{n})}$ to attain formulas for the exact singular values of $h_x^2A_{\mathbf{n}}$. Let us consider a permutation matrix $P$ that transforms $X_{d(\mathbf{n})}$ into a block diagonal matrix $PX_{d(\mathbf{n})}P^{\textsc{t}}$, which has on the main diagonal, for $k=1,\ldots,N_x$, $N_t\times N_t$ blocks of the form \begin{equation} \begin{split} &\left(PX_{d(\mathbf{n})}P^{\textsc{t}}\right)_{i,j=(k-1)N_t+1}^{\textcolor{black}{kN_t}}=\\ &\left[ \begin{array}{cccccccccc} C_k^2&-c_hC_k\\ -c_hC_k&C_k^2+c_h^2&-c_hC_k\\ &-c_hC_k&C_k^2+c_h^2&\ddots\\ & &\ddots&\ddots&-c_hC_k\\ & & &-c_hC_k&C_k^2+c_h^2 \end{array} \right],\label{eq:coefficientmatrix:submatrix} \end{split} \end{equation} where $C_k=D_{\tilde{Q}}(k,k)=f_{\tilde{Q}_{N_x}}(\xi_{k,N_x})$. Hence, the union of the eigenvalues of all blocks of $PX_{d(\mathbf{n})}P^{\textsc{t}}$ coincides with the full spectrum of $h_x^4A_\mathbf{n}A_\mathbf{n}^{\textsc{t}}$. These local eigenvalue problems can be solved analytically (or numerically) independently of each other.
For example, for $N_t=2$ we have, for every $k=1,\ldots,N_x$, the characteristic equation \begin{equation*} \left| \begin{array}{cccccccccc} \left(f_{\tilde{Q}_{N_x}}(\xi_{k,N_x})\right)^2-\lambda&-c_h(f_{\tilde{Q}_{N_x}}(\xi_{k,N_x}))\\ -c_h(f_{\tilde{Q}_{N_x}}(\xi_{k,N_x}))&\left(f_{\tilde{Q}_{N_x}}(\xi_{k,N_x})\right)^2+c_h^2-\lambda \end{array} \right|=0. \end{equation*} Thus, the singular values are given by the union, for $k=1,\dots, N_x$, of the quantities \begin{equation*} \begin{split} \sigma^{(1)}(k,c_h)&=\sqrt{\frac{2(f_{\tilde{Q}_{N_x}}(\xi_{k,N_x}))^2+c_h^2}{2}-\frac{c_h}{2}\sqrt{4(f_{\tilde{Q}_{N_x}}(\xi_{k,N_x}))^2+c_h^2}},\\ \sigma^{(2)}(k,c_h)&=\sqrt{\frac{2(f_{\tilde{Q}_{N_x}}(\xi_{k,N_x}))^2+c_h^2}{2}+\frac{c_h}{2}\sqrt{4(f_{\tilde{Q}_{N_x}}(\xi_{k,N_x}))^2+c_h^2}}. \end{split} \end{equation*} Clearly, solving the characteristic equation for $k=1,\ldots,N_x$ becomes more and more complex as $N_t$ grows. Hence, in the next section we provide two possible approximations, given by the GLT theory and by the GLT momentary formulation. \subsubsection{Singular values of $h_x^2A_{\mathbf{n}}$ (approximation) via GLT momentary symbols} For Case 2, in Section \ref{sec:general_glt_analysis} and in Section \ref{sec:general_momentary_analysis}, we have already shown that \begin{equation*} \{h_x^2 A_\mathbf{n}\}_\mathbf{n}\sim_{\sigma}\s_{A}^{(2)}(\theta,\xi)= 2-2\cos\xi. \end{equation*} \textcolor{black}{On the other hand, the sequence $\{f_\mathbf{n}^{(2)}\}_{\textbf{n}}$ with \begin{equation*} f_\mathbf{n}^{(2)}(\theta,\xi)= c_h(1-\mathrm{e}^{\mathbf{i}\theta})+(2-2\cos\xi), \end{equation*} is the sequence of GLT momentary functions. } Remark \ref{rem:introduction:background:1} suggests exploiting these relations in order to obtain a better approximation of the singular values of $h_x^2A_{\mathbf{n}}$ than the one provided by the pure GLT symbol.
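For $N_t=2$ the closed-form singular values above can be checked against a direct SVD. In the following numpy sketch the size $N_x$ and the value of $c_h$ are illustrative choices of ours.

```python
import numpy as np

# Check of the closed-form singular values for N_t = 2 (illustrative sizes).
Nt, Nx, ch = 2, 8, 0.5

T = np.eye(Nt) - np.eye(Nt, k=-1)
col = np.zeros(Nx); col[0], col[1], col[-1] = 2.0, -1.0, -1.0
C = np.array([np.roll(col, k) for k in range(Nx)]).T

# h_x^2 A_n = c_h T_{N_t}(f_J) (x) I + I (x) C_{N_x}(f_Q)
hx2A = ch * np.kron(T, np.eye(Nx)) + np.kron(np.eye(Nt), C)

xi = 2 * np.pi * np.arange(Nx) / Nx
f = 2 - 2 * np.cos(xi) + ch              # f_{Q~}(xi_k)

# sigma^{(1,2)}(k, c_h) from the 2x2 characteristic equation
disc = (ch / 2) * np.sqrt(4 * f**2 + ch**2)
sigma = np.concatenate([np.sqrt((2 * f**2 + ch**2) / 2 - disc),
                        np.sqrt((2 * f**2 + ch**2) / 2 + disc)])

sv = np.linalg.svd(hx2A, compute_uv=False)
```

The two lists of values agree to machine precision.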
In the following, we compute the quantities $|\s_A^{(2)}(\theta,\xi)|$ and $|f_\mathbf{n}^{(2)}(\theta,\xi)|$ on the specific grid described below \begin{equation} \theta_{j,N_t}=\frac{j\pi}{N_t\textcolor{black}{+1}}, \, j=1,\dots, N_t\qquad \xi_{\ell,N_x}=\frac{2\pi(\ell-1)}{N_x}\, \ell=1,\dots, N_x. \label{eq:ANperfectgrids} \end{equation} \textcolor{black}{ We can observe in Figure \ref{fig:sigma} that the singular values of $h_x^2A_{\mathbf{n}}$ (blue circles) are well approximated by the samplings of $|f_\mathbf{n}^{(2)}(\theta,\xi)|$ on the grid \eqref{eq:ANperfectgrids} (red stars). The approximation by using $|\s_A^{(2)}(\theta,\xi)|$, instead, is good when $c_h$ is small, see the top panel of Figure \ref{fig:sigma} for $N_t=2$ and $N_x=10$, but it tends to become a substantially less accurate approximation otherwise, see the bottom panel of Figure \ref{fig:sigma} and Figure \ref{fig:error_sigma} where $N_t=10$ and $N_x=10$.} \begin{figure}[!h] \centering \includegraphics[width=0.7\textwidth]{svd_abs_f.eps} \includegraphics[width=0.7\textwidth]{svd_abs_f_bad.eps} \caption{\textcolor{black}{Singular values of $h_x^2A_{\mathbf{n}}$ and samplings of $|f_A^{(2)}(\theta,\xi)|$ and $|f_\mathbf{n}^{(2)}(\theta,\xi)|$ on the grid \eqref{eq:ANperfectgrids}, for $N_t=2$ and $N_x=10$ (top) and $N_t=10$ and $N_x=10$ (bottom). }} \label{fig:sigma} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=1\textwidth]{err_sigma.eps} \caption{\textcolor{black}{Singular values and samplings of $|f_A^{(2)}(\theta,\xi)|$ and $|f_\mathbf{n}^{(2)}(\theta,\xi)|$ for $N_t=10$ and $N_x=10$ on the grid in \eqref{eq:ANperfectgrids}.}}\label{fig:error_sigma} \end{figure} \subsubsection{2-norm of $h_x^2A_{\mathbf{n}}$ (approximation)} In the following we are interested in providing a bound for the $2$-norm of the matrix $h_x^2A_{\mathbf{n}}$. By definition it is given by $\|h_x^2A_{\mathbf{n}}\|_2=\max_{j=1\dots,d(\mathbf{n})}|\sigma_{j}(h_x^2A_{\mathbf{n}})|$.
From the previous section we know that it can be computed by taking the square root of the maximum eigenvalue of the block in \eqref{eq:coefficientmatrix:submatrix} corresponding to $\xi_{N_x/2+1,N_x}=\pi$. Since $\max_k \left( f_{\tilde{Q}_{N_x}}(\xi_{k,N_x})\right)=4+c_h$, we are interested in estimating the maximum eigenvalue of \begin{equation}\label{eq:symmetriz_4_sing} \left[ \begin{array}{cccccccccc} (4+c_h)^2&-c_h(4+c_h)\\ -c_h(4+c_h)&(4+c_h)^2+c_h^2&-c_h(4+c_h)\\ &-c_h(4+c_h)&(4+c_h)^2+c_h^2&\ddots\\ & &\ddots&\ddots&-c_h(4+c_h)\\ & & &-c_h(4+c_h)&(4+c_h)^2+c_h^2 \end{array} \right]. \end{equation} For this purpose we exploit the concept of {$\tau_{\varepsilon,\varphi}$}-algebras of Subsection \ref{sec:matrix_algebra}. In our case $$a=(4+c_h)^2+c_h^2, \quad b=-c_h(4+c_h),$$ and the matrix belongs to the $\tau_{\frac{c_h}{4+c_h},0}$-algebra, since the entry with indices $i=j=1$ is $a+(c_h/(4+c_h))b$. Hence, we have $g(\theta)=(4+c_h)^2+c_h^2-2c_h(4+c_h)\cos\theta$, which coincides with the eigenvalue symbol $\mathfrak{g}_{\mathbf{n}}$ of the matrix (\ref{eq:symmetriz_4_sing}).
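As a numerical sanity check of the last statements, the following numpy sketch (sizes of our own choosing, with $N_x$ even so that $\xi=\pi$ is a grid point) verifies that the squared 2-norm of $h_x^2A_\mathbf{n}$ coincides with the maximum eigenvalue of the matrix (\ref{eq:symmetriz_4_sing}) and stays below $\sqrt{\max\mathfrak{g}_{\mathbf{n}}}=4+2c_h$.

```python
import numpy as np

# Illustrative sizes; N_x even so that xi = pi is a grid point.
Nt, Nx, ch = 10, 10, 1.0

T = np.eye(Nt) - np.eye(Nt, k=-1)
col = np.zeros(Nx); col[0], col[1], col[-1] = 2.0, -1.0, -1.0
C = np.array([np.roll(col, k) for k in range(Nx)]).T
hx2A = ch * np.kron(T, np.eye(Nx)) + np.kron(np.eye(Nt), C)

norm2 = np.linalg.norm(hx2A, 2)          # 2-norm of h_x^2 A_n

# Tridiagonal block: a on the diagonal, b off-diagonal, a - c_h^2 in (1,1).
a = (4 + ch)**2 + ch**2
b = -ch * (4 + ch)
M = a * np.eye(Nt) + b * (np.eye(Nt, k=1) + np.eye(Nt, k=-1))
M[0, 0] = (4 + ch)**2

max_eig = np.max(np.linalg.eigvalsh(M))  # equals norm2**2
```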
Due to the interlacing theorem (see~\cite{interlacing}) and the specific relation between the algebras \cite{Momentary_1}, the following relationships can be derived \begin{align} &\underbrace{\mathfrak{g}_{\mathbf{n}}\left(\frac{\pi(N_t-1/2)}{N_t+1/2}\right)}_{\mathrm{max}(\lambda_j(T_{N_t,1,0}(g)))}< \underbrace{\|h_x^2A_{\mathbf{n}}\|_2^2}_{\mathrm{max}(\lambda_j(T_{N_t,c_h/(4+c_h),0}(g)))} <\\ & \underbrace{\mathfrak{g}_{\mathbf{n}}\left(\frac{\pi N_t}{N_t+1}\right)}_{\mathrm{max}(\lambda_j(T_{N_t,0,0}(g)))} < \underbrace{\mathfrak{g}_{\mathbf{n}}(\pi)}_{\mathrm{max}(\lambda_j(T_{N_t,-1,-1}(g)))=\mathrm{max}(\mathfrak{g}_{\mathbf{n}})}.\nonumber \end{align} As a consequence, good upper and lower bounds for the 2-norm of $h_x^2A_{\mathbf{n}}$ are reported in the following set of inequalities \begin{equation*} \sqrt{\mathfrak{g}_{\mathbf{n}}\left(\frac{\pi(N_t-1/2)}{N_t+1/2}\right)} < \|h_x^2A_{\mathbf{n}}\|_2 < \sqrt{\mathfrak{g}_{\mathbf{n}}\left(\frac{\pi N_t}{N_t+1}\right)}. \end{equation*} In Table~\ref{tbl:2normAN} we present approximations of the 2-norm of $h_x^2A_{\mathbf{n}}$ using the grid sampling from $\tau_{1,0}$ (lower bound), $\tau_{0,0}$ (upper bound), and $\tau_{-1,-1}$. Note that sampling on the latter grid is equivalent to sampling the singular value momentary symbols $f_\mathbf{n}^{(2)}(\theta,\xi)$ at their maximum point. The two-norm $\|h_x^2A_\mathbf{n}\|_2$ is computed numerically. We see that the 2-norm is well described by the two bounds given above, as $N_t$ increases. Hence, for this type of examples, the GLT momentary symbols provide, at least for moderate sizes, a more precise alternative to the pure GLT symbol. \begin{table}[!h] \centering \caption{Approximations of $2$-norm for different $N_t$ and $c_h$.
The maximum bound is $\sqrt{\max \mathfrak{g}_{\mathbf{n}}}=4+2c_h$.} \label{tbl:2normAN} \begin{tabular}{rrrrrr} \toprule $N_t$&$c_h$&$\sqrt{\mathfrak{g}_{\mathbf{n}}\left(\frac{\pi(N_t-1/2)}{N_t+1/2}\right)}$&$\|h_x^2A_\mathbf{n}\|_2$&$\sqrt{\mathfrak{g}_{\mathbf{n}}\left(\frac{\pi N_t}{N_t+1}\right)}$&$4+2c_h$\\ \midrule 1&1/8&4.06394205&4.12500000&4.12689350&4.25\\ 10&1/8&4.24460651&4.24505679&4.24508270&4.25\\ 100&1/8&4.24994073&4.24994128&4.24994131&4.25\\ 1000&1/8&4.24999940&4.24999940&4.24999940&4.25\\ \midrule 1&1&4.58257569&5.00000000&5.09901951&6.00\\ 10&1&5.96286240&5.96511172&5.96614865&6.00\\ 100&1&5.99959287&5.99959555&5.99959689&6.00\\ 1000&1&5.99999589&5.99999589&5.99999590&6.00\\ \midrule 1&8&10.58300524&12.00000000&14.42220510&20.00\\ 10&8&19.78560029&19.78964627&19.80461186&20.00\\ 100&8&19.99765486&19.99765952&19.99767802&20.00\\ 1000&8&19.99997634&19.99997634&19.99997636&20.00\\ \bottomrule \end{tabular} \end{table} \section{The case of approximations of distributed order differential operators via asymptotic expansion and GLT momentary symbols} \label{sec:fractional} \textcolor{black}{In this last section we focus on the matrix-sequences arising from the numerical approximation of distributed-order operators. In detail, such a procedure consists of two steps:} \begin{enumerate} \item Employ a quadrature formula to discretize the distributed-order operator into a multi-term constant-order fractional derivative; \item discretize each constant-order fractional derivative. 
\end{enumerate} In particular, we focus on the case where the matrices under consideration take the form \textcolor{black}{ \begin{equation}\label{eq_sum} \frac{h^{\alpha_\ell}}{\Delta {\alpha}}\mathcal{T}_{n}=c_\ell T_n(g_{\alpha_\ell})+c_{\ell-1} h^{\Delta\alpha} T_n(g_{\alpha_{\ell-1}})+\dots+ c_1h^{\Delta\alpha({\ell-1})}T_n(g_{\alpha_1}), \end{equation} where $\ell$ is a positive integer, all the coefficients $c_j$ are positive, independent of $n$, and contained in a specific positive range $[c_*,c^*]$. Moreover, $\Delta\alpha=\frac{1}{\ell}$ and the terms $1<\alpha_1<\alpha_2<\cdots<\alpha_\ell<2$ are defined by $\alpha_k=1+\left(k-\frac{1}{2}\right)\Delta \alpha$, $k=1,\dots, \ell$. More importantly, all the functions $g_{\alpha_k}$ are globally continuous, monotonically increasing in the interval $[0,\pi]$, and even in the whole definition domain $[-\pi,\pi]$. } \textcolor{black}{ The goal is to exploit the notion of GLT momentary symbols and use it in combination with the asymptotic expansions derived in a recent line of research (see \cite{EkFuSe18} and the references reported therein), in order to have a very precise description of the spectrum of such matrices. } \textcolor{black}{ Indeed, under specific hypotheses on the generating function $f$, and fixing an integer $\nu\ge0$, it is possible to give an accurate description of the eigenvalues of $T_n(f)$ via the following asymptotic expansion: \begin{align*} \lambda_j(T_n(f))=w_0(\theta_{j,n})+hw_1(\theta_{j,n})+h^2w_2(\theta_{j,n})+\ldots +h^\nu w_\nu(\theta_{j,n})+E_{j,n,\nu}, \end{align*} where the eigenvalues of $T_n(f)$ are arranged in ascending order, $h=\frac{1}{n}$, $\theta_{j,n}=\frac{j\pi}{n}=j\pi h$ for $j=1,\ldots,n$, and $E_{j,n,\nu}=O(h^{\nu+1})$ is the error. Moreover, $\{w_k\}_{k=1,2,\ldots}$ is a sequence of functions from $[0,\pi]$ to $\mathbb R$.
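To make the structure of the expansion concrete, consider the symbol $f(\theta)=2-2\cos\theta$, for which the eigenvalues of $T_n(f)$ are known in closed form, $\lambda_j=f(j\pi/(n+1))$. The first-order coefficient $w_1(\theta)=-\theta f'(\theta)=-2\theta\sin\theta$ used below is our own elementary derivation for this special case (not a formula taken from the cited works); the sketch only illustrates how each added term lowers the error by one power of $h$.

```python
import numpy as np

n = 64
h = 1.0 / n
j = np.arange(1, n + 1)
theta = j * np.pi / n                          # theta_{j,n} = j pi h

exact = 2 - 2 * np.cos(j * np.pi / (n + 1))    # eigenvalues of T_n(2 - 2 cos)
order0 = 2 - 2 * np.cos(theta)                 # w_0 = f
order1 = order0 - 2 * h * theta * np.sin(theta)  # + h * w_1

err0 = np.max(np.abs(exact - order0))          # O(h)
err1 = np.max(np.abs(exact - order1))          # O(h^2)
```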
The idea of such a procedure is that a numerical approximation of the value $w_k(\theta_{j,n})$ can be obtained by fast interpolation-extrapolation algorithms (see \cite{EkGa2019} and references therein). In particular, choosing $\nu$ proper grids $\theta_{j,n_1}$, $\theta_{j,n_2}$, $\dots$ $\theta_{j,n_\nu}$ with $n\gg n_\nu>\dots>n_1$, approximations $\tilde{w}_k({\theta}_{j,n})\approx w_k({\theta}_{j,n})$ of these quantities can be obtained. In the Hermitian case, we find that $\tilde{w}_0$ coincides with the generating function. } \textcolor{black}{ Concerning the example in (\ref{eq_sum}), the idea is to link the functions $\tilde{w}^{c_i g_{\alpha_i}}_k$, $k=1,\dots,\nu$, associated with each $c_i g_{\alpha_i}$ with the GLT momentary symbols of $\frac{h^{\alpha_\ell}}{\Delta {\alpha}}\mathcal{T}_{n}$. Precisely, for $j=1,\dots,n$ and a fixed $\nu$, we approximate the eigenvalues of $\frac{h^{\alpha_\ell}}{\Delta {\alpha}}\mathcal{T}_{n}$ by } \textcolor{black}{ \begin{equation}\label{eq:momentary_asymp_exp} \begin{split} &\lambda_j\left(\frac{h^{\alpha_\ell}}{\Delta {\alpha}}\mathcal{T}_{n}\right)\approx\\ &c_\ell g_{\alpha_\ell}(\theta_{j,n})+\sum_{t=1}^{\nu}h^t\left(\tilde{w}_t^{{\alpha_\ell}}(\theta_{j,n})+\sum_{i=\ell-1}^1\tilde{w}_{t-1}^{{\alpha_i}}(\theta_{j,n})h^{-(1-\Delta\alpha(\ell-i))}\right), \end{split} \end{equation} where, for the sake of notation, we denote by $\tilde{w}_t^{\alpha_i}$ the approximation of the $t$-th asymptotic expansion coefficient associated with $c_i g_{\alpha_i}$, and the term $\tilde{w}_0^{\alpha_i}$ coincides with the evaluations of $c_ig_{\alpha_i}$. } \textcolor{black}{ We highlight that the terms in brackets on the right-hand side of the equality act as possible asymptotic expansion coefficients associated with the GLT momentary symbols $g_n$ of $\frac{h^{\alpha_\ell}}{\Delta {\alpha}}\mathcal{T}_{n}$.
Note that formula (\ref{eq:momentary_asymp_exp}) can be rewritten in compact form as \begin{equation*} \begin{split} \lambda_j\left(\frac{h^{\alpha_\ell}}{\Delta {\alpha}}\mathcal{T}_{n}\right)\approx\sum_{t=1}^{\nu}h^t\left(\tilde{w}_t^{{\alpha_\ell}}(\theta_{j,n})+\sum_{i=\ell}^1\tilde{w}_{t-1}^{{\alpha_i}}(\theta_{j,n})h^{-(1-\Delta\alpha(\ell-i))}\right). \end{split} \end{equation*} Hence, it is easy to see that the GLT momentary symbols correspond to taking $\nu=1$ in the asymptotic expansion. } \textcolor{black}{ In the following we consider the cases $\ell=2$, $\ell=5$, and $\ell=n$, as in Section 4 of \cite{MaSe2021}, confirming at least numerically the conjecture in (\ref{eq:momentary_asymp_exp}) for a fixed $\nu=4$. } \textcolor{black}{ \subsection{Examples} For $\ell=2$, $\Delta\alpha$ is $\frac{1}{2}$ and the matrix in (\ref{eq_sum}) becomes \[2h^{\alpha_2}\mathcal{T}_{n}=c_2 T_n(g_{\alpha_2})+c_{1} h^{\frac{1}{2}} T_n(g_{\alpha_1}),\] where $\alpha_1=\frac{5}{4}$ and $\alpha_2=\frac{7}{4}$. } \textcolor{black}{ Exploiting the procedure based on formula (\ref{eq:momentary_asymp_exp}) with $\nu=4$, we compute an approximation of the eigenvalues of $c_2 T_n(g_{\alpha_2})+c_{1} h^{\frac{1}{2}} T_n(g_{\alpha_1})$ by \begin{equation*}\label{eq:momentary_asymp_exp_l2} \begin{split} c_2g_{\alpha_2}({{\theta}_{j,n}})+&h\left[\tilde{w}^{{\alpha_2}}_{1}({{\theta}_{j,n}})+h^{-\frac{1}{2}}c_1g_{\alpha_1}({{\theta}_{j,n}})\right]+\\ & \sum_{t=2}^{\nu} h^{t}\left[ \tilde{w}^{{\alpha_2}}_{t}({{\theta}_{j,n}})+h^{-\frac{1}{2}}\tilde{w}^{{\alpha_1}}_{t-1}({{\theta}_{j,n}})\right], \end{split} \end{equation*} for $j=1, \dots, n$. We consider the cases $n=100,500,1000$, using an initial grid with $n_1=10$ points, and we compare the aforementioned approximations with those obtained by the evaluations of the GLT and GLT momentary symbols associated with the sequence described by the matrices in (\ref{eq_sum}).
In Figure \ref{fig:frac_expansion_l2} we can observe that the approximation of the spectrum obtained by computing the evaluations of the GLT momentary symbols is better than that provided by the evaluations $c_\ell g_{\alpha_\ell}(\theta_{j,n})$. Moreover, the approximation error is significantly reduced for almost all the eigenvalues when combining the notion of GLT momentary symbols with the asymptotic expansion described before, see Figure \ref{fig:err_frac_expansion_l2}. Note that the particular shape of the asymptotic expansion error depends on the fact that, at the grid points $\theta_{j,n_t}$, $t=1,\dots, \nu$, the quantities $\tilde{w}_t^{\alpha_i}$ are calculated exactly by the extrapolation-interpolation procedure. Moreover, note that the accuracy of the approximation via the combination of GLT momentary symbols and spectral asymptotic expansion seems to decrease near the maximum eigenvalue. Actually, this behavior is expected from the theory of the asymptotic expansion. Indeed, it is a consequence of the fact that the involved symbols are not trigonometric polynomials and, in particular, they become non-smooth when periodically extended to the real line. } \begin{figure}[!h] \centering \includegraphics[width=0.49\textwidth]{frac_expansion_l2_100.eps}\\ \includegraphics[width=0.49\textwidth]{frac_expansion_l2_500.eps} \includegraphics[width=0.49\textwidth]{frac_expansion_l2_1000.eps} \caption{\textcolor{black}{Approximation of the eigenvalues of $\frac{h^{\alpha_\ell}}{\Delta \alpha}\mathcal{T}_{n}$, $\ell=2$ by the samplings of the GLT and GLT momentary symbols, and making use of the momentary asymptotic expansion (MAE) with $\nu=4$ for $n=100,500,1000$ with an initial grid of $n_1=10$ points.
}} \label{fig:frac_expansion_l2} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.6\textwidth]{err_frac_expansion_l2_100.eps}\\ \includegraphics[width=0.49\textwidth]{err_frac_expansion_l2_500.eps} \includegraphics[width=0.49\textwidth]{err_frac_expansion_l2_1000.eps} \caption{\textcolor{black}{Absolute errors of the approximation of the eigenvalues of $\frac{h^{\alpha_\ell}}{\Delta \alpha}\mathcal{T}_{n}$, $\ell=2$ by the samplings of the GLT and GLT momentary symbols, and making use of the momentary asymptotic expansion (MAE) with $\nu=4$ for $n=100,500,1000$ with an initial grid of $n_1=10$ points. }} \label{fig:err_frac_expansion_l2} \end{figure} \textcolor{black}{ Following an analogous procedure, we consider the cases $\ell=5$ and $\ell=n$, which are associated with $\Delta \alpha=\frac{1}{5}$ and $\Delta \alpha=\frac{1}{n}$, respectively. In Figures \ref{fig:frac_expansion_l5} and \ref{fig:frac_expansion_ln} we plot the approximations of the eigenvalues given by the three presented strategies for $\ell=5$ and $\ell=n$. } \textcolor{black}{ Again, we obtain numerical confirmation that the combination of the notions of GLT momentary symbols and asymptotic expansion provides accurate results even for moderate sizes, as shown by the error plots in Figures \ref{fig:err_frac_expansion_l5} and \ref{fig:err_frac_expansion_ln}, for $n=100,500,1000$. } The good outcome of the presented numerical tests gives ground for a finer analysis of the spectral features of the matrices considered in the case left open in \cite{MaSe2021}, which arises when the integral partition width is asymptotic to the adopted discretization step. That is, when in formula (\ref{eq_sum}) we take $\alpha_j=jh$, $j=1,\ldots,\ell$, \textcolor{black}{$\ell=n$}.
Moreover, efficient and fast algorithms exploiting the concept of momentary symbols can be devised for computing the singular values and eigenvalues of $T_n(f)$ and of its possible block and variable-coefficient generalizations; this will be investigated in the future. \begin{figure}[!h] \centering \includegraphics[width=0.49\textwidth]{frac_expansion_l5_100.eps}\\ \includegraphics[width=0.49\textwidth]{frac_expansion_l5_500.eps} \includegraphics[width=0.49\textwidth]{frac_expansion_l5_1000.eps} \caption{\textcolor{black}{Approximation of the eigenvalues of $\frac{h^{\alpha_\ell}}{\Delta \alpha}\mathcal{T}_{n}$, $\ell=5$ by the samplings of the GLT and GLT momentary symbols, and making use of the momentary asymptotic expansion (MAE) with $\nu=4$ for $n=100,500,1000$ with an initial grid of $n_1=10$ points. }} \label{fig:frac_expansion_l5} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.49\textwidth]{err_frac_expansion_l5_100.eps}\\ \includegraphics[width=0.49\textwidth]{err_frac_expansion_l5_500.eps} \includegraphics[width=0.49\textwidth]{err_frac_expansion_l5_1000.eps} \caption{\textcolor{black}{Absolute errors of the approximation of the eigenvalues of $\frac{h^{\alpha_\ell}}{\Delta \alpha}\mathcal{T}_{n}$, $\ell=5$ by the samplings of the GLT and GLT momentary symbols, and making use of the momentary asymptotic expansion (MAE) with $\nu=4$ for $n=100,500,1000$ with an initial grid of $n_1=10$ points.
}} \label{fig:err_frac_expansion_l5} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.49\textwidth]{frac_expansion_ln_100.eps}\\ \includegraphics[width=0.49\textwidth]{frac_expansion_ln_500.eps} \includegraphics[width=0.49\textwidth]{frac_expansion_ln_1000.eps} \caption{\textcolor{black}{Approximation of the eigenvalues of $\frac{h^{\alpha_\ell}}{\Delta \alpha}\mathcal{T}_{n}$, $\ell=n$ by the samplings of the GLT and GLT momentary symbols, and making use of the momentary asymptotic expansion (MAE) with $\nu=4$ for $n=100,500,1000$ with an initial grid of $n_1=10$ points. }} \label{fig:frac_expansion_ln} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.49\textwidth]{err_frac_expansion_ln_100.eps}\\ \includegraphics[width=0.49\textwidth]{err_frac_expansion_ln_500.eps} \includegraphics[width=0.49\textwidth]{err_frac_expansion_ln_1000.eps} \caption{\textcolor{black}{Absolute errors of the approximation of the eigenvalues of $\frac{h^{\alpha_\ell}}{\Delta \alpha}\mathcal{T}_{n}$, $\ell=n$ by the samplings of the GLT and GLT momentary symbols, and making use of the momentary asymptotic expansion (MAE) with $\nu=4$ for $n=100,500,1000$ with an initial grid of $n_1=10$ points. }} \label{fig:err_frac_expansion_ln} \end{figure} \section{Concluding remarks} \label{sec:conclusions} The main focus of this paper has been the characterization of the spectrum and the singular values of the coefficient matrices stemming from the approximation on a space-time grid of a parabolic diffusion problem and from the approximation of distributed-order fractional equations. For this purpose we employed the classical GLT theory and the new concept of GLT momentary symbols. The former has allowed us to describe the singular value or eigenvalue asymptotic distribution of the sequence of the coefficient matrices.
The latter has allowed us to derive a function able to describe the singular value or eigenvalue distribution of the matrices of the sequence, even for small matrix-sizes, under given assumptions. \textcolor{black}{In particular, we exploited the notion of GLT momentary symbols and we used it in combination with the interpolation-extrapolation algorithms based on the spectral asymptotic expansion of the involved matrices.} Many questions remain open, and below we list some problems to be considered in future research. \begin{itemize} \item More examples of the use of GLT momentary symbols in a non-Toeplitz setting; \item The application of GLT momentary symbols in a pure Toeplitz setting, but of very involved nature, like that expressed in relation (\ref{eq_sum}); \item The use of GLT momentary symbols for the analysis of efficient iterative solvers, also of multigrid type, of linear systems as those appearing in (\ref{eq:system}), also with the inclusion of variable coefficients. \end{itemize} \subsection*{Acknowledgment} This work was partially supported by INdAM-GNCS. Moreover, the work of Isabella Furci was also supported by the Young Investigator Training Program 2020 (YITP 2019) promoted by ACRI. \clearpage
\section{Introduction} Let $C=\{\textbf{c}_0,\textbf{c}_1,\cdots, \textbf{c}_{N-1}\}$ be a set of $N$ unit-norm complex vectors $\textbf{c}_l\in \mathbb{C}^K$ over an alphabet $A$, where $l=0, 1,\cdots , N-1$. The size of $A$ is called the alphabet size of $C$. Such a set $C$ is called an $(N, K)$ codebook (also called a signal set). The maximum cross-correlation amplitude of the $(N, K)$ codebook $C$, which is a performance measure of a codebook in practical applications, is defined as \begin{eqnarray*} I_{\max}(C) &=& \max_{0\leq i<j\leq N-1} |\textbf{c}_i\textbf{c}_j^H|, \end{eqnarray*} where $\textbf{c}_j^H$ denotes the conjugate transpose of the complex vector $\textbf{c}_j$. For a given length $K$, it is desirable to design a codebook such that the number $N$ of codewords is as large as possible and the maximum cross-correlation amplitude $I_{\max}(C)$ is as small as possible. To evaluate a codebook $C$ with parameters $(N, K)$, it is important to find the minimum achievable $I_{\max}(C)$ or a lower bound on it. For $I_{\max}(C)$, we have the following well-known Welch bound. \begin{lem}\label{lem1} \cite{LW} For any $(N, K)$ codebook $C$ with $N\geq K$, \begin{eqnarray}\label{eq1} I_{\max}(C) &\geq& I_w=\sqrt{\frac{N-K}{(N-1)K}}. \end{eqnarray} Furthermore, the equality in (\ref{eq1}) is achieved if and only if \begin{eqnarray*} |\textbf{c}_i\textbf{c}_j^H| &=& \sqrt{\frac{N-K}{(N-1)K}} \end{eqnarray*} for all pairs $(i, j)$ with $i\neq j$. \end{lem} A codebook is referred to as a maximum-Welch-bound-equality (MWBE) codebook \cite{DS} or an equiangular tight frame \cite{JK} if it meets the Welch bound with equality in ($\ref{eq1}$). Codebooks meeting the Welch bound are used to distinguish among the signals of different users in code-division multiple-access (CDMA) systems \cite{MM}.
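As a quick illustration of these notions, the following numpy sketch computes the Welch bound and $I_{\max}(C)$ for a codebook stored as the rows of an $N\times K$ matrix; the random codebook is purely illustrative.

```python
import numpy as np

def welch_bound(N, K):
    # I_w = sqrt((N - K) / ((N - 1) K)), valid for N >= K
    return np.sqrt((N - K) / ((N - 1) * K))

def i_max(codebook):
    # Gram matrix of pairwise inner products c_i c_j^H; take the largest
    # off-diagonal magnitude.
    G = codebook @ codebook.conj().T
    return np.max(np.abs(G - np.diag(np.diag(G))))

rng = np.random.default_rng(0)
N, K = 16, 4
Cb = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
Cb /= np.linalg.norm(Cb, axis=1, keepdims=True)   # unit-norm codewords
```

Any $(N,K)$ codebook with $N\geq K$ satisfies $I_{\max}(C)\geq I_w$; an orthonormal basis ($N=K$) attains the bound trivially, since both sides vanish.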
In addition, codebooks that are optimal (or asymptotically optimal) with respect to the Welch bound are much preferred in many practical applications, such as multiple description coding over erasure channels \cite{SH}, communications \cite{DS}, compressed sensing \cite{CW}, space-time codes \cite{TK}, coding theory \cite{DGS} and quantum computing \cite{RBSC}. In general, it is very difficult to construct optimal codebooks achieving the Welch bound (i.e. MWBE codebooks). Hence, many researchers have attempted to construct asymptotically optimal codebooks, i.e., codebooks whose minimum achievable $I_{\max}(C)$ nearly achieves the Welch bound for large $N$. There are many results on optimal or almost optimal codebooks with respect to the Welch bound; interested readers may refer to [1, 2, 4-6, 9-12, 14-16, 18, 28, 29]. The construction method of codebooks is important. To date, many researchers have constructed codebooks based on difference sets, almost difference sets, relative difference sets, binary row selection sequences and cyclotomic classes. The additive characters, multiplicative characters and Gauss sums over finite fields, together with their good properties, are well known \cite[Chapter 5]{LNC}. In particular, they have rich applications in coding theory. It is worth mentioning that some researchers have constructed codebooks by using character sums over finite fields \cite{DF, LC, ZF}. Later, G. Luo and X. Cao proposed two constructions of complex codebooks from character sums over the Galois ring $GR(p^2, r)$ in \cite{LC2}, based on existing results \cite{LZF}. In fact, many scholars have carried out extensive research on local rings \cite{GSF, LL, SWLS}.
Motivated by \cite{LC2} and \cite{LZF}, a natural question is to explore character sums over the ring $R=\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$: is it possible to construct codebooks over the ring $R$ based on such character sums and obtain several classes of asymptotically optimal codebooks with respect to the Welch bound? This paper gives a positive answer to this question. This manuscript has three main contributions. The first contribution is to give an explicit description of the additive and multiplicative characters, and to establish a Gauss sum, over a local ring for the first time. The second contribution is the construction of codebooks over the ring $R=\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$ by using these character sums. Finally, we show that the maximum cross-correlation amplitudes $I_{\max}(C)$ of these codebooks asymptotically meet the Welch bound, and we obtain new parameters by comparing with the parameters of some known classes of asymptotically optimal codebooks. The rest of this paper is arranged as follows. Section 2 presents some notations and basic results which will be needed in subsequent sections. In Section 3, we give an explicit description of the additive and multiplicative characters of a local ring. In Section 4, we compute Gauss sums over a local ring. Section 5 introduces two generic families of codebooks that are asymptotically optimal with respect to the Welch bound. In Section 6, we conclude this paper and present several open problems. \section{Preliminaries} Let $\mathbb{F}_q$ denote the finite field with $q$ elements and $q=p^m$, where $p$ is a prime and $m$ is a positive integer. We consider the chain ring $R=\mathbb{F}_q+u\mathbb{F}_q=\{a+bu: a, b\in \mathbb{F}_q\}~(u^2=0)$ with the unique maximal ideal $M=\langle u\rangle$.
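To make the ring concrete, one can model elements of $R$ as pairs $(a, b)$ standing for $a+ub$; a minimal computational sketch (taking $q=p=5$ for concreteness) confirms that the units are exactly the elements with nonzero first coordinate and that the non-units form the maximal ideal $M=\langle u\rangle$:

```python
p = 5  # model R = F_p + u F_p (u^2 = 0); the pair (a, b) stands for a + u b

def rmul(x, y):
    # (a0 + u a1)(b0 + u b1) = a0 b0 + u (a0 b1 + a1 b0), since u^2 = 0
    return ((x[0] * y[0]) % p, (x[0] * y[1] + x[1] * y[0]) % p)

R = [(a, b) for a in range(p) for b in range(p)]
units = [x for x in R if any(rmul(x, y) == (1, 0) for y in R)]

print(len(units) == p * (p - 1))                   # |R^*| = q(q - 1)
print(all(x[0] != 0 for x in units))               # units have nonzero a
print(sorted(set(R) - set(units)) == [(0, b) for b in range(p)])  # M = <u>
```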
In fact, $R=\mathbb{F}_q\oplus u\mathbb{F}_q \simeq \mathbb{F}_q^2$ is a two-dimensional vector space over $\mathbb{F}_q$ and $|R|=q^2.$ The set of invertible elements of $R$ is $$R^*=R\backslash M=\mathbb{F}_q^*+u\mathbb{F}_q=\{a+bu: a\in \mathbb{F}_q^*, b\in \mathbb{F}_q\}$$ with $|R^*|=q(q-1)$. In fact, $R^*$ can also be represented as $\mathbb{F}_q^*\times (1+M)~~({\rm direct~product}).$ A character $\chi$ of a finite abelian group $G$ is a homomorphism from $G$ into the multiplicative group $U$ of complex numbers of absolute value 1, that is, a mapping from $G$ into $U$ with $\chi(g_1g_2)=\chi(g_1)\chi(g_2)$ for all $g_1, g_2\in G.$ Next, we recall the additive characters and multiplicative characters of the finite field $\mathbb{F}_q$. $\bullet$ The additive character $\chi$ of $\mathbb{F}_q$ is defined by $$\chi(c)=e^{\frac{2\pi i{\rm Tr}(c)}{p}}$$ for all $c\in \mathbb{F}_q$, where Tr: $\mathbb{F}_q\longrightarrow \mathbb{F}_p$ is the absolute trace function from $\mathbb{F}_q$ to $\mathbb{F}_p$~(see Definition 2.22 in \cite{LNC}). For any $c_1, c_2\in \mathbb{F}_q$, we have \begin{eqnarray}\label{den2} \chi(c_1+c_2) &=&\chi(c_1)\chi(c_2). \end{eqnarray} Moreover, for $b\in \mathbb{F}_q$, the function $\chi_b$ is defined as $\chi_b(c)=\chi(bc)$ for all $c\in \mathbb{F}_q$.\\ $\bullet$ The multiplicative character $\psi_j$ of $\mathbb{F}_q$ is defined by $$\psi_j(g^k)=e^{\frac{2\pi ijk}{q-1}}$$ for each $j=0,1,\cdots, q-2$, where $k=0,1,\cdots, q-2$ and $g$ is a fixed primitive element of $\mathbb{F}_q$. For any $c_1, c_2\in \mathbb{F}_q^*$, we have \begin{eqnarray}\label{den3} \psi_j(c_1c_2) &=&\psi_j(c_1)\psi_j(c_2). \end{eqnarray} Now, let $\psi$ be a multiplicative character and $\chi$ an additive character of $\mathbb{F}_q$. Then the Gauss sum $G(\psi, \chi)$ of $\mathbb{F}_q$ is defined by \begin{eqnarray*} G(\psi, \chi)&=&\sum\limits_{c\in \mathbb{F}_q^*}\psi(c)\chi(c).
\end{eqnarray*} We now need to study the additive and multiplicative characters of the local ring $R=\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$; they can be described in detail in a way analogous to the characters of the finite field $\mathbb{F}_q$. Furthermore, the additive and multiplicative characters of $R$ that we present satisfy properties similar to equalities (\ref{den2}) and (\ref{den3}), respectively. In addition, we establish the Gauss sum of $R$ via the Gauss sum of $\mathbb{F}_q$. Hence, we present the additive and multiplicative characters of $R$ in the following section based on the characters of finite fields, and propose the Gauss sum of $R$ in Section 4. \section{Characters} In this section, we give the additive characters and multiplicative characters of $R$.\\\\ $\blacktriangle$\textbf{ \large Additive characters of $R$} The group of additive characters of $(R, +)$ is $$\widehat{R}:=\{\lambda: R\longrightarrow \mathbb{C}^*| \lambda(\alpha+\beta)=\lambda(\alpha)\lambda(\beta), \alpha, \beta \in R\}.$$ Any additive character $\lambda$ of $R$ is a map $$\lambda: R \longrightarrow\mathbb{C}^*.$$ Since $\lambda(a_0+ua_1)=\lambda(a_0)\lambda(ua_1)$ for any $a_0, a_1\in \mathbb{F}_q,$ we define two maps as follows: \begin{itemize} \item $$\lambda^{'}: \mathbb{F}_q \longrightarrow \mathbb{C}^*$$ by $\lambda^{'}(c):=\lambda(c)$ for $c\in \mathbb{F}_q.$ \item $$\lambda^{''}: \mathbb{F}_q \longrightarrow \mathbb{C}^*$$ by $\lambda^{''}(c):=\lambda(uc)$ for $c\in \mathbb{F}_q.$ \end{itemize} It is easy to verify that $\lambda^{'}(c_1+c_2)=\lambda^{'}(c_1)\lambda^{'}(c_2)$ and $\lambda^{''}(c_1+c_2)=\lambda^{''}(c_1)\lambda^{''}(c_2)$ for $c_1, c_2 \in \mathbb{F}_q.$ Hence $\lambda^{'}$ and $\lambda^{''}$ are additive characters of $(\mathbb{F}_q, +)$, so there exist $b, c \in \mathbb{F}_q$ such that
$$\lambda^{'}(x)=\zeta_p^{{\rm Tr}(bx)}=\chi_{b}(x), \lambda^{''}(x)=\zeta_p^{{\rm Tr}(cx)}=\chi_{c}(x)$$ for all $x\in \mathbb{F}_q$, where $\zeta_p=e^{\frac{2\pi i}{p}}$ is a primitive $p$th root of unity. Hence, we obtain the additive characters of $R$: \begin{eqnarray*} \lambda(a_0+ua_1) &=& \lambda'(a_0)\lambda''(a_1)\\ &=& \chi_{b}(a_0)\chi_{c}(a_1). \end{eqnarray*} Thus, there is a one-to-one correspondence: \begin{eqnarray*} \tau : \widehat{(R,+)} &\longrightarrow& \widehat{(\mathbb{F}_q,+)}\times \widehat{(\mathbb{F}_q,+)},\\ \lambda &\longmapsto& (\chi_b, \chi_c). \end{eqnarray*} It is easy to prove that the mapping $\tau$ is an isomorphism.\\ $\blacktriangle$\textbf{ \large Multiplicative characters of $R$} The structure of the multiplicative group $R^*$ is $$R^*=\mathbb{F}_q^*\times (1+M)~~({\rm direct~product}).$$ Now, we have \begin{eqnarray*} R^* &=& \{a_0+ua_1: a_0\in \mathbb{F}_q^*, a_1\in \mathbb{F}_q\} \\ &=& \{b_0(1+ub_1): b_0\in \mathbb{F}_q^*, b_1\in \mathbb{F}_q\}. \end{eqnarray*} The group of multiplicative characters of $R$ is denoted by $\widehat{R}^*$ and $\widehat{R}^*=\widehat{\mathbb{F}}_q^*\times\widehat{(1+M)}$.
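For $q=p$ prime the trace map is the identity, so the correspondence above can be verified directly; a minimal sketch checking that $\lambda(a_0+ua_1)=\chi_b(a_0)\chi_c(a_1)$ is indeed additive on $R\simeq \mathbb{F}_p^2$:

```python
import cmath

p = 5  # take q = p prime, so Tr: F_p -> F_p is the identity map

def chi(b, x):
    # additive character chi_b(x) = e^{2 pi i b x / p} of F_p
    return cmath.exp(2j * cmath.pi * ((b * x) % p) / p)

def lam(b, c, a0, a1):
    # candidate additive character of R: lambda(a0 + u a1) = chi_b(a0) chi_c(a1)
    return chi(b, a0) * chi(c, a1)

# additivity on R: (a0 + u a1) + (b0 + u b1) = (a0 + b0) + u (a1 + b1)
b, c = 2, 3
ok = all(abs(lam(b, c, (x0 + y0) % p, (x1 + y1) % p)
             - lam(b, c, x0, x1) * lam(b, c, y0, y1)) < 1e-9
         for x0 in range(p) for x1 in range(p)
         for y0 in range(p) for y1 in range(p))
print(ok)
```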
We define $$\widehat{R}^*:=\{\varphi: R^*\longrightarrow \mathbb{C}^*| \varphi(\alpha\beta)=\varphi(\alpha)\varphi(\beta), \alpha, \beta \in R^*\}.$$ Any multiplicative character $\varphi$ of $R$ is a map $$\varphi: R^* \longrightarrow\mathbb{C}^*.$$ Since $\varphi(b_0(1+ub_1))=\varphi(b_0)\varphi(1+ub_1)$ for any $b_0\in \mathbb{F}_q^*, b_1\in \mathbb{F}_q,$ we define two maps as follows: \begin{itemize} \item $$\varphi^{'}: \mathbb{F}_q^*\longrightarrow \mathbb{C}^*$$ by $\varphi^{'}(c):=\varphi(c)$ for $c\in \mathbb{F}_q^*.$ \item $$\varphi^{''}: \mathbb{F}_q\longrightarrow \mathbb{C}^*$$ by $\varphi^{''}(c):=\varphi(1+uc)$ for $c\in \mathbb{F}_q.$ \end{itemize} For any $c_1, c_2 \in \mathbb{F}_q^*$, we have $\varphi'(c_1c_2)=\varphi'(c_1)\varphi'(c_2)$, and for any $c_1, c_2 \in \mathbb{F}_q$, \begin{eqnarray*} \varphi''(c_1+c_2)&=& \varphi(1+u(c_1+c_2)) \\ &=& \varphi((1+uc_1)(1+uc_2)) \\ &=& \varphi(1+uc_1)\varphi(1+uc_2)\\ &=&\varphi''(c_1)\varphi''(c_2). \end{eqnarray*} Based on this, we see that $\varphi'$ is a multiplicative character of $\mathbb{F}_q$ and $\varphi''$ is an additive character of $\mathbb{F}_q$. Hence, we obtain the multiplicative characters of $R$: $$\varphi(b_0(1+ub_1))=\varphi'(b_0)\varphi''(b_1),$$ where $\varphi'\in \widehat{\mathbb{F}}_q^*$ and $\varphi''\in \widehat{\mathbb{F}}_q.$ Since $\varphi''$ is an additive character of $\mathbb{F}_q$, there exists $a\in \mathbb{F}_q$ such that $\varphi''=\chi_a.$ Moreover, we have the one-to-one correspondence \begin{eqnarray*} \sigma : \widehat{(R^*,\ast)} &\longrightarrow& \widehat{({\mathbb{F}}_q^*,\ast)}\times \widehat{(\mathbb{F}_q,+)}, \\ \varphi &\longmapsto& (\psi, \chi_a), \end{eqnarray*} where $\psi=\varphi'$ is a multiplicative character of $\mathbb{F}_q$. One can show that the mapping $\sigma$ is an isomorphism. \section{Gaussian sums} Let $\lambda$ and $\varphi$ be an additive character and a multiplicative character of $R$, respectively.
The Gaussian sum for $\lambda$ and $\varphi$ of $R=\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$ is defined by \begin{eqnarray*} G_R(\varphi, \lambda) &=&\sum\limits_{t\in R^*}\varphi(t)\lambda(t). \end{eqnarray*} In this section, we calculate the value of $G_R(\varphi, \lambda)$. For convenience, following Section 3 we write $\varphi:=\psi\star\chi_a~({\rm namely,}~\varphi(t)=\psi(t_0)\chi_a(t_1))$ and $\lambda:=\chi_b\star\chi_c~({\rm namely,}~\lambda(t)=\chi_b(t_0)\chi_c(t_0t_1))$, where $a, b, c\in \mathbb{F}_q$ and $t=t_0(1+ut_1)\in R^*.$ Hence, we denote $G_R(\varphi, \lambda):=G(\psi\star\chi_a, \chi_b\star\chi_c)$. \begin{thm}\label{thm1} Let $\varphi$ be a multiplicative character and $\lambda$ be an additive character of $R$, where $\varphi:=\psi\star\chi_a, \lambda:=\chi_b\star\chi_c$ and $a, b, c\in \mathbb{F}_q.$ Then the Gaussian sum $G_R(\varphi, \lambda)$ satisfies \begin{equation*}\label{den1} G_R(\varphi, \lambda)=\begin{cases} \emph{ }qG(\psi, \chi_b), ~~~~~~~~~{\rm if}~a=0, c=0;\\ \emph{ }0, ~~~~~~~~~~~~~~~~~~~{\rm if}~a=0, c\neq0; \\ \emph{ }0, ~~~~~~~~~~~~~~~~~~~{\rm if}~a\neq0, c=0; \\ \emph{ }q\psi(-\frac{a}{c})\chi(-\frac{ab}{c}), ~~~{\rm if}~a\neq0, c\neq 0, \\ \end{cases} \end{equation*} where \begin{equation*} G(\psi, \chi_b)=\begin{cases} \emph{ }q-1, ~~~~{\rm if}~\psi~{\rm is~trivial}, b=0;\\ \emph{ }-1, ~~~~~{\rm if}~\psi~{\rm is~trivial}, b\neq0; \\ \emph{ }0,~~~~~~~~~{\rm if}~\psi~{\rm is~nontrivial}, b=0.\\ \end{cases} \end{equation*}If $\psi$ is nontrivial and $b\neq0$, then $|G(\psi, \chi_b)|=q^\frac{1}{2}$.
\end{thm} \begin{proof} Now, let $\varphi:=\psi\star\chi_a$ and $\lambda:=\chi_b\star\chi_c$ with $a,b,c\in \mathbb{F}_q.$ Assume that $t=t_0(1+ut_1)$, where $t_0\in \mathbb{F}_q^*$ and $t_1\in \mathbb{F}_q.$ Then \begin{eqnarray*} G_R(\varphi, \lambda) &=& \sum\limits_{t\in R^*}\varphi(t)\lambda(t)\\ &=& \sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\varphi(t_0(1+ut_1))\lambda(t_0(1+ut_1)) \\ &=& \sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\psi(t_0)\chi_a(t_1)\chi_b(t_0)\chi_c(t_0t_1)\\ &=& \sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\psi(t_0)\chi(at_1+bt_0+ct_0t_1)\\ &=& \sum\limits_{t_0\in \mathbb{F}_q^*}\psi(t_0)\chi(bt_0)\sum\limits_{t_1\in \mathbb{F}_q}\chi((a+ct_0)t_1)\\ &=& q\sum\limits_{t_0\in \mathbb{F}_q^*, a+ct_0=0}\psi(t_0)\chi(bt_0)\\ &=&\begin{cases} \emph{ }qG(\psi, \chi_b), ~~~~~~~~~{\rm if}~a=0, c=0;\\ \emph{ }0, ~~~~~~~~~~~~~~~~~~~~{\rm if}~a=0, c\neq0; \\ \emph{ }0, ~~~~~~~~~~~~~~~~~~~~{\rm if}~a\neq0, c=0; \\ \emph{ }q\psi(-\frac{a}{c})\chi(-\frac{ab}{c}),~~{\rm if}~a\neq0, c\neq 0, \\ \end{cases} \end{eqnarray*} where we used the orthogonality relation $\sum_{t_1\in \mathbb{F}_q}\chi(st_1)=q$ if $s=0$ and $0$ otherwise, and where $G(\psi, \chi_b)$ is the Gauss sum of $\mathbb{F}_q.$ \end{proof} \section{Two families of asymptotically optimal codebooks} In this section, we construct two classes of codebooks that asymptotically achieve the Welch bound by using character sums over the local ring $R=\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$. Note that $|R^*|=q(q-1)$, and we set $K=q(q-1)$.
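Before building the codebooks, Theorem~\ref{thm1} can be sanity-checked by brute force for small parameters; a sketch for $q=p=5$ (so the trace map is the identity) with the primitive element $g=2$:

```python
import cmath

p, g = 5, 2                                      # F_5 with primitive element g = 2
e = lambda x: cmath.exp(2j * cmath.pi * x / p)   # chi(x) = e^{2 pi i x / p}
dlog = {pow(g, k, p): k for k in range(p - 1)}   # discrete log table base g
psi = lambda j, c: cmath.exp(2j * cmath.pi * j * dlog[c % p] / (p - 1))

def G_R(j, a, b, c):
    # brute-force Gauss sum over R^*: t = t0(1 + u t1), t0 in F_p^*, t1 in F_p
    return sum(psi(j, t0) * e(a * t1 + b * t0 + c * t0 * t1)
               for t0 in range(1, p) for t1 in range(p))

# case a != 0, c != 0: G_R = q psi(-a/c) chi(-ab/c)
j, a, b, c = 1, 2, 3, 4
inv_c = pow(c, p - 2, p)
formula = p * psi(j, (-a * inv_c) % p) * e((-a * b * inv_c) % p)
print(abs(G_R(j, a, b, c) - formula) < 1e-9)
print(abs(G_R(j, 0, b, c)) < 1e-9)               # case a = 0, c != 0 vanishes
```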
Let $\varphi:=\psi\star\chi_a$ and $\lambda:=\chi_b\star\chi_c$ with $a,b,c\in \mathbb{F}_q,$ and write $t=t_0(1+ut_1)$, where $t_0\in \mathbb{F}_q^*$ and $t_1\in \mathbb{F}_q.$ Then we can define a set $C_0(R)$ of vectors of length $K$ as \begin{eqnarray*} C_0(R) &=& \{\frac{1}{\sqrt{K}}(\varphi(t)\lambda(t))_{t\in R^*}, \varphi\in \widehat{R}^*, \lambda\in \widehat{R}\} \\ &=& \{\frac{1}{\sqrt{K}}(\psi(t_0)\chi_a(t_1)\chi_b(t_0)\chi_c(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}, \psi\in \widehat{\mathbb{F}}_q^*,\chi_a, \chi_b, \chi_c \in \widehat{\mathbb{F}}_q\}. \end{eqnarray*} Next, we give two constructions of codebooks over the ring $R$. \subsection{The first construction of codebooks} The codebook $C_1(R)$ of length $K$ over $R$ is constructed as \begin{eqnarray*} C_1(R) &=& \{\frac{1}{\sqrt{K}}(\psi(t_0)\chi_a(t_1)\chi_b(t_0)\chi_c(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}, \\ && \psi~{\rm is~a~fixed~multiplicative~character~over}~ \mathbb{F}_q,\chi_a, \chi_b, \chi_c \in \widehat{\mathbb{F}}_q\}. \end{eqnarray*} Based on this construction, we have the following theorem. \begin{thm}\label{thm2} Let $C_1(R)$ be the codebook defined above. Then $C_1(R)$ is a $(q^3, q(q-1))$ codebook with maximum cross-correlation amplitude $I_{\max}(C_1(R))=\frac{1}{q-1}$. \end{thm} \begin{proof} According to the definition of $C_1(R)$, it is easy to see that $C_1(R)$ has $N=q^3$ codewords of length $K=q(q-1)$. Next, our task is to determine the maximum cross-correlation amplitude $I_{\max}$ of the codebook $C_1(R)$. Let $\textbf{c}_1$ and $\textbf{c}_2$ be any two distinct codewords in $C_1(R)$, where $\textbf{c}_1=\frac{1}{\sqrt{K}}(\psi(t_0)\chi_{a_1}(t_1)\chi_{b_1}(t_0)\chi_{c_1}(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}$ and $\textbf{c}_2=\frac{1}{\sqrt{K}}(\psi(t_0)\chi_{a_2}(t_1)\chi_{b_2}(t_0)\chi_{c_2}(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}$.
In the following, $\psi_0$ denotes the trivial multiplicative character of $\mathbb{F}_q$; note that $\psi(t_0)\overline{\psi(t_0)}=1=\psi_0(t_0)$ for any $t_0\in \mathbb{F}_q^*$. Then we have \begin{eqnarray*} \textbf{c}_1\textbf{c}_2^H &=&\frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\psi(t_0)\chi_{a_1}(t_1)\chi_{b_1}(t_0)\chi_{c_1}(t_0t_1)\overline{\psi(t_0)\chi_{a_2}(t_1)\chi_{b_2}(t_0)\chi_{c_2}(t_0t_1)}\\ &=&\frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\psi_0(t_0)\chi((a_1-a_2)t_1+(b_1-b_2)t_0+(c_1-c_2)t_0t_1) \\ &=&\frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*}\psi_0(t_0)\chi{((b_1-b_2)t_0)}\sum\limits_{t_1\in \mathbb{F}_q}\chi((a_1-a_2)t_1+(c_1-c_2)t_0t_1)\\ &=& \frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*}\psi_0(t_0)\chi{(bt_0)}\sum\limits_{t_1\in \mathbb{F}_q}\chi((a+ct_0)t_1)~({\rm Set}~a=a_1-a_2, b=b_1-b_2, c=c_1-c_2)\\ &=&\frac{q}{K}\sum\limits_{t_0\in \mathbb{F}_q^*, a+ct_0=0}\psi_0(t_0)\chi_b(t_0)\\ &=& \frac{1}{K}G_R(\varphi, \lambda)~({\rm by~the~proof~of~Theorem~\ref{thm1},~where}~\varphi:=\psi_0\star\chi_a, \lambda:=\chi_b\star\chi_c). \end{eqnarray*} Since $\textbf{c}_1\neq \textbf{c}_2$, the elements $a, b$ and $c$ are not all equal to $0$. In view of Theorem \ref{thm1}, we have \begin{equation*} K\textbf{c}_1\textbf{c}_2^H=\begin{cases} \emph{ }-q, ~~~~~~~~~~~{\rm if}~a=0, c=0, b\neq 0;\\ \emph{ }q\chi(-\frac{ab}{c}), ~~~~~{\rm if}~a\neq0, c\neq0;\\ \emph{ }0, ~~~~~~~~~~~~~~~{\rm otherwise}.\\ \end{cases} \end{equation*} Consequently, we infer that $|\textbf{c}_1\textbf{c}_2^H|\in \{0, \frac{1}{q-1}\}$ for any two distinct codewords $\textbf{c}_1, \textbf{c}_2$ in $C_1(R)$. Hence, $I_{\max}(C_1(R))=\frac{1}{q-1}.$ \end{proof} By Theorem \ref{thm2}, we can calculate the ratio $\frac{I_{\max}(C_1(R))}{I_w}$ in order to show that the codebook $C_1(R)$ is asymptotically optimal. \begin{thm}\label{thm3} Let the symbols be the same as those in Theorem \ref{thm2}. Then the codebook $C_1(R)$ asymptotically meets the Welch bound.
\end{thm} \begin{proof} In view of Theorem \ref{thm2}, note that $N=q^3$ and $K=q(q-1)$. Then the corresponding Welch bound of the codebook $C_1(R)$ is \begin{eqnarray*} I_w &=& \sqrt{\frac{N-K}{(N-1)K}} \\ &=& \sqrt{\frac{q^3-q(q-1)}{(q^3-1)q(q-1)}}\\ &=&\sqrt{\frac{q^2-q+1}{q^4-q^3-q+1}}. \end{eqnarray*} It then follows from Theorem \ref{thm2} that \begin{equation*} \frac{I_{\max}(C_1(R))}{I_w}=\sqrt{\frac{q^4-q^3-q+1}{(q^2-q+1)(q-1)^2}}=\sqrt{\frac{q^2+q+1}{q^2-q+1}}, \end{equation*} where we used the factorization $q^4-q^3-q+1=(q-1)^2(q^2+q+1)$. Obviously, $\lim\limits_{q\longrightarrow \infty}\frac{I_{\max}(C_1(R))}{I_w}=1$, which implies that $C_1(R)$ asymptotically meets the Welch bound. \end{proof} \subsection{The second construction of codebooks} The codebook $C_2(R)$ of length $K$ over $R$ is constructed as \begin{eqnarray*} C_2(R) &=& \{\frac{1}{\sqrt{K}}(\psi(t_0)\chi_a(t_1)\chi_b(t_0)\chi_c(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}, \\ && \psi\in \widehat{\mathbb{F}}_q^*, \chi_b~{\rm is~a~fixed~additive~character~over}~\mathbb{F}_q,\chi_a, \chi_c \in \widehat{\mathbb{F}}_q\}. \end{eqnarray*} With this construction, we determine the maximum cross-correlation amplitude $I_{\max}(C_2(R))$ as follows. \begin{thm}\label{thm4} Let $C_2(R)$ be the codebook defined above. Then $C_2(R)$ is a $(q^2(q-1), q(q-1))$ codebook with maximum cross-correlation amplitude $I_{\max}(C_2(R))=\frac{1}{q-1}$. \end{thm} \begin{proof} According to the definition of $C_2(R)$, it is obvious that $C_2(R)$ has $N=q^2(q-1)$ codewords of length $K=q(q-1)$. Next, our goal is to determine the maximum cross-correlation amplitude $I_{\max}$ of the codebook $C_2(R)$. Let $\textbf{c}_1$ and $\textbf{c}_2$ be any two distinct codewords in $C_2(R)$, where $\textbf{c}_1=\frac{1}{\sqrt{K}}(\psi_1(t_0)\chi_{a_1}(t_1)\chi_{b}(t_0)\chi_{c_1}(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}$ and $\textbf{c}_2=\frac{1}{\sqrt{K}}(\psi_2(t_0)\chi_{a_2}(t_1)\chi_{b}(t_0)\chi_{c_2}(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}$.
Then we have \begin{eqnarray*} \textbf{c}_1\textbf{c}_2^H &=&\frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\psi_1(t_0)\chi_{a_1}(t_1)\chi_{b}(t_0)\chi_{c_1}(t_0t_1)\overline{\psi_2(t_0)\chi_{a_2}(t_1)\chi_{b}(t_0)\chi_{c_2}(t_0t_1)}\\ &=&\frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\psi_1\overline{\psi}_2(t_0)\chi((a_1-a_2)t_1+(c_1-c_2)t_0t_1)\\ &=& \frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*}\psi(t_0)\sum\limits_{t_1\in \mathbb{F}_q}\chi((a+ct_0)t_1)~({\rm Set}~\psi=\psi_1\overline{\psi}_2, a=a_1-a_2, c=c_1-c_2)\\ &=&\frac{q}{K}\sum\limits_{t_0\in \mathbb{F}_q^*, a+ct_0=0}\psi(t_0). \end{eqnarray*} \begin{itemize} \item If $a=c=0,$ then, since $\textbf{c}_1\neq \textbf{c}_2$, the character $\psi$ is nontrivial. Then we have $$K\textbf{c}_1\textbf{c}_2^H=q\sum\limits_{t_0\in \mathbb{F}_q^*}\psi(t_0)=0;$$ \item If $a=0, c\neq 0$ or $a\neq0, c=0$, then $K\textbf{c}_1\textbf{c}_2^H=0$; \item If $a\neq0, c\neq 0$, then $K\textbf{c}_1\textbf{c}_2^H=q\psi(-\frac{a}{c})$. \end{itemize} In summary, \begin{equation*} \textbf{c}_1\textbf{c}_2^H=\begin{cases} \emph{ }\frac{q}{K}\psi(-\frac{a}{c}), ~~~~~{\rm if}~a\neq0, c\neq0;\\ \emph{ }0, ~~~~~~~~~~~~~~~{\rm otherwise}.\\ \end{cases} \end{equation*} Consequently, we infer that $|\textbf{c}_1\textbf{c}_2^H|\in \{0, \frac{1}{q-1}\}$ for any two distinct codewords $\textbf{c}_1, \textbf{c}_2$ in $C_2(R)$. Hence, $I_{\max}(C_2(R))=\frac{1}{q-1}.$ \end{proof} Similarly, we show the near-optimality of the codebook $C_2(R)$ in the following theorem. \begin{thm}\label{thm5} Let the symbols be the same as those in Theorem \ref{thm4}. Then the codebook $C_2(R)$ asymptotically meets the Welch bound. \end{thm} \begin{proof} In view of Theorem \ref{thm4}, note that $N=q^2(q-1)$ and $K=q(q-1)$. Then the corresponding Welch bound of the codebook $C_2(R)$ is \begin{eqnarray*} I_w &=& \sqrt{\frac{N-K}{(N-1)K}} \\ &=& \sqrt{\frac{q^2(q-1)-q(q-1)}{(q^3-q^2-1)q(q-1)}}\\ &=&\sqrt{\frac{q-1}{q^3-q^2-1}}.
\end{eqnarray*} It then follows from Theorem \ref{thm4} that \begin{equation*} \frac{I_{\max}(C_2(R))}{I_w}=\sqrt{\frac{q^3-q^2-1}{(q-1)^3}}. \end{equation*} Obviously, $\lim\limits_{q\longrightarrow \infty}\frac{I_{\max}(C_2(R))}{I_w}=1$, which implies that $C_2(R)$ asymptotically meets the Welch bound. \end{proof} \section{Conclusions} In this paper, we gave a detailed description of the additive characters and multiplicative characters of the ring $R=\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$. Based on these characters, we computed Gauss sums over the ring $R$ explicitly. The purpose of studying the characters of $R$ is to present an application to codebooks. Following this idea, we proposed two constructions of codebooks and determined the maximum cross-correlation amplitude $I_{\max}(C)$ of the codebooks generated by these constructions. Moreover, we showed that these codebooks are asymptotically optimal with respect to the Welch bound and that their parameters are new. In future research, it would be interesting to find new constructions of codebooks that are optimal with respect to the Welch bound or the Levenshtein bound. In addition, we hope and believe that further properties of Gauss and Jacobi sums over rings will be studied and that the results will be useful in applications.
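As a closing numerical sanity check of Theorem~\ref{thm2} (a sketch, with $q=p=5$ so that $K=q(q-1)=20$, the trace map is the identity, and a fixed nontrivial multiplicative character is used), one can generate $C_1(R)$ explicitly and verify $N=q^3$ and $I_{\max}=1/(q-1)$:

```python
import numpy as np

p, g = 5, 2                               # q = p = 5, primitive element g = 2
K = p * (p - 1)                           # codeword length K = q(q - 1) = 20
dlog = {pow(g, k, p): k for k in range(p - 1)}
e = lambda x: np.exp(2j * np.pi * x / p)
psi = lambda t0: np.exp(2j * np.pi * dlog[t0 % p] / (p - 1))  # fixed nontrivial psi

def word(a, b, c):
    # codeword of C_1(R) indexed by (a, b, c) in F_p^3
    v = np.array([psi(t0) * e(a * t1 + b * t0 + c * t0 * t1)
                  for t0 in range(1, p) for t1 in range(p)])
    return v / np.sqrt(K)

C = np.array([word(a, b, c) for a in range(p) for b in range(p) for c in range(p)])
G = np.abs(C @ C.conj().T)
np.fill_diagonal(G, 0.0)
print(C.shape == (p**3, K))               # N = q^3 codewords of length q(q - 1)
print(abs(G.max() - 1 / (p - 1)) < 1e-9)  # I_max = 1/(q - 1), as in Theorem 2
```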
\section{Introduction} Achieving fairness when allocating resources in communication and computing systems has been a subject of extensive research, and has been successfully applied in numerous practical problems. Fairness is leveraged to perform congestion control in the Internet~\cite{kellyORS98, mo2000fair}, to select transmission power in multi-user wireless networks~\cite{srikantJSAC06, tassiulasFnT06}, and to allocate multidimensional resources in cloud computing platforms~\cite{carleeToN13, baochuninfocom14, bonaldsigmetrics15}. Depending on the problem at hand, the criterion of fairness can be expressed in terms of how the service performance is distributed across the end-users, or in terms of how the costs are balanced across the servicing nodes. The latter case exemplifies the natural link between fairness and load balancing in resource-constrained systems~\cite{loadbalacing1,loadbalacing2}. A prevalent fairness metric is $\alpha$-fairness, which encompasses the utilitarian principle (Bentham-Edgeworth solution~\cite{edgeworth1881mathematical}), proportional fairness (Nash bargaining solution~\cite{Nash1950}), max-min fairness (Kalai–Smorodinsky bargaining solution~\cite{kalai1975other}), and, under some conditions, Walrasian equilibrium~\cite{iosifidis-sigmetrics15}. All these fairness metrics have been used in different cases for the design of resource management mechanisms~\cite{radunovic2007unified, Nace2008}. A common limitation of the above works is that they consider \emph{static} environments. That is, the resources to be allocated and, importantly, the users' utility functions, are fixed and known to the decision maker. This assumption is very often unrealistic for today's communication and computing systems. For instance, in small-cell mobile networks the user churn is typically very high and unpredictable, thus hindering the fair allocation of spectrum to cells~\cite{andrews5g}.
Similarly, placing content files at edge caches to balance the latency gains across the served areas is non-trivial due to the non-stationary and fast-changing patterns of requests~\cite{paschosComMag16}. At the same time, the increasing virtualization of these systems introduces cost and performance volatility, as extensive measurement studies have revealed~\cite{traverso2013temporal,leconte2016placing,elayoubi2015performance}. This uncertainty is exacerbated for services that process user-generated data (e.g., streaming data applications) where the performance (e.g., inference accuracy) depends also on a priori unknown input data and dynamically selected machine learning libraries~\cite{jose-conext,Liu2019Aug,Alipourfard2017Mar}. \subsection{Contributions} This paper makes the next step towards enabling long-term fairness in dynamic systems. We consider a system that serves a set of agents $\mathcal I$, where a controller selects at each timeslot $t \in \mathbb N$ a resource allocation profile $\vec{x}_t$ from a set of eligible allocations~$\mathcal X$, based on the agents' past utility functions ${\u}_{t'}: \mathcal{X} \to \mathbb{R}^\mathcal{I}_{\geq 0}$, $t' < t$, and on the $\alpha$-fairness function $F_{\alpha} : \mathbb{R}^\mathcal{I}_{\geq 0} \to \mathbb{R}$. The utilities might change due to unknown, unpredictable, and (possibly) non-stationary perturbations that are revealed to the controller only after it decides $\vec{x}_t$. We employ the terms \emph{horizon-fairness}~(HF) and \emph{slot-fairness}~(SF) to distinguish the different ways fairness can be enforced in such a time-slotted dynamic system. Under horizon-fairness, the controller enforces fairness on the aggregate utilities for a given time horizon~$T$, whereas under slot-fairness, it enforces fairness on the utilities at each timeslot separately.
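The distinction between the two notions can be illustrated with a toy computation (our sketch, using the standard $\alpha$-fair welfare $F_\alpha(\vec u)=\sum_i u_i^{1-\alpha}/(1-\alpha)$, which reduces to $\sum_i \log u_i$ for $\alpha=1$): two agents whose utilities swap across two slots are perfectly balanced in the horizon sense, but not slot by slot.

```python
import numpy as np

def F(u, alpha=1.0):
    # standard alpha-fair welfare; alpha = 1 is proportional fairness
    u = np.asarray(u, dtype=float)
    return np.log(u).sum() if alpha == 1.0 else (u**(1 - alpha) / (1 - alpha)).sum()

# two agents, two slots: utilities swap across slots
u1, u2 = np.array([2.0, 1.0]), np.array([1.0, 2.0])

horizon = F((u1 + u2) / 2)        # fairness of the time-averaged utilities
per_slot = (F(u1) + F(u2)) / 2    # time-average of per-slot fairness values

print(horizon > per_slot)  # HF credits imbalances that cancel out over time
```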
Both metrics have been studied in previous work, e.g., see~\cite{gupta2021individual,Liao2022Feb,jalota2022online,Sinclair2022} and the discussion in Section~\ref{s:related_work}. Our focus is on horizon-fairness, which raises novel technical challenges and subsumes slot-fairness as a special case. We design the \emph{online horizon-fair}~(\textsc{OHF}) policy by leveraging \emph{online convex optimization}~(OCO)~\cite{Hazanoco2016} to handle this reduced-information setting under a powerful \emph{adversarial} perturbation model. In this context, the performance of a resource allocation policy $\vec \mathcal{A}$ is evaluated by the \emph{fairness regret}, which is defined as the difference between the $\alpha$-fairness, over the time-averaged utilities, achieved by a static optimum-in-hindsight (\emph{benchmark}) and the one achieved by the policy: \begin{align} \mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A} \triangleq \sup_{ \set{\u_t}^T_{t=1} \in {{\U^T}}} \set{\max_{\vec{x} \in \mathcal{X}} F_{\alpha}\left({\frac{1}{T}\sum_{t \in \mathcal{T}}\vec u_{t}(\vec{x})}\right) -F_{\alpha}\left({\frac{1}{T}\sum_{t \in \mathcal{T}}\vec u_{t}(\vec{x}_t)}\right)}. \label{e:b_regret1} \end{align} If the fairness regret vanishes over time (i.e., $\lim_{T\to \infty} \mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A} = 0$), policy $\vec \mathcal{A}$ will attain the same fairness value as the static benchmark under any possible sequence of utility functions. A policy that achieves sublinear regret under these adversarial conditions can also succeed in more benign conditions where the perturbations are not adversarial, or the utility functions are revealed at the beginning of each slot. The fairness regret metric~\eqref{e:b_regret1} departs from the template of~OCO.
In particular, the scalarization of the vector-valued utilities through the $\alpha$-fairness function is not applied at every timeslot, which would allow the controller to easily adapt its allocations; instead, it is applied only at the end of the time horizon $T$. Our first result characterizes the challenges in tackling this learning problem. Namely, Theorem~\ref{theorem:impossibility} proves that, when utility perturbations are only subject to four mild technical conditions, such as in standard OCO, it is impossible to achieve vanishing fairness-regret. Similar negative results were obtained under different setups of primal-dual learning and online saddle point learning~\cite{mannor2009online,anderson2022lazy, rivera2018online}, but they have been devised for specific problem structures (e.g., online matrix games) and thus do not apply to our setting. In light of this negative result, we introduce additional \emph{necessary} conditions on the adversary to obtain a vanishing regret guarantee. Namely, the adversary can only induce perturbations to the time-averaged utilities that we call budgeted-severity or partitioned-severity constrained. These conditions capture several practical utility patterns, such as non-stationary corruptions, ergodic and periodic inputs~\cite{Liao2022Feb,balseiro2022best,zhou2019robust, duchi2012ergodic}. We proceed to propose the \textsc{OHF}{} policy, which dynamically adapts the allocation decisions and provably achieves $\mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A} = o(1)$ (see Theorem~\ref{th:maintheorem}). The \textsc{OHF}{} policy employs a novel learning approach that operates concurrently, and in a synchronized fashion, in a primal and a dual (conjugate) space. Intuitively, \textsc{OHF}{} learns the weighted time-varying utilities in a primal space, and learns the weights accounting for the global fairness metric in some dual space.
To achieve this, we develop novel techniques through a convex conjugate approach (see Lemmas~\ref{lemma:convex_conjugate},~\ref{l:recover_f}, and~\ref{l:saddle_problem} in the Appendix). Finally, we apply our fairness framework to a representative resource management problem in virtualized caching systems, where different caches cooperate by jointly serving the received content requests. We compare the performance of \textsc{OHF}{} with that of its slot-fairness counterpart policy through numerical examples. We also evaluate the price of fairness of \textsc{OHF}, which quantifies the efficiency loss due to fairness, across different network topologies and participating agents. Lastly, we apply \textsc{OHF}{} to a Nash bargaining scenario, a concept that has been widely used in resource allocation to distribute to a set of agents the utility of their cooperation~\cite{Boche2011,Iosifidis2017,Wenjie2009,Liang2017}. \subsection{Outline of Paper} The paper is organized as follows. The related literature is discussed in Section~\ref{s:related_work}. The definitions and background are provided in Section~\ref{s:fairness}. The adversarial model and the proposed algorithm are presented in Section~\ref{s:OHF}. Extensions to the fairness framework are provided in Section~\ref{s:extensions}. The application to the resource management problem in virtualized caching systems is presented in Section~\ref{s:experiments}. Finally, we conclude the paper and provide directions for future work in Section~\ref{s:conclusion}. \section{Literature Review} \label{s:related_work} \subsection{Fairness in Resource Allocation} Fairness has found many applications in wired and wireless networking~\cite{kellyORS98, mo2000fair,srikantJSAC06,tassiulasFnT06,altman2008generalized}, and cloud computing platforms~\cite{carleeToN13, baochuninfocom14, bonaldsigmetrics15}.
Prevalent fairness criteria are max-min fairness and proportional fairness, which are rooted in axiomatic bargaining theory, namely the Kalai–Smorodinsky~\cite{kalai1975other} and Nash bargaining solutions~\cite{Nash1950}, respectively. On the other side of the spectrum, a controller might opt to ignore fairness and maximize the aggregate utility of users, i.e., to follow the \emph{utilitarian principle}, also referred to as the Bentham-Edgeworth solution~\cite{edgeworth1881mathematical}. The \emph{Price of Fairness} (PoF)~\cite{bertsimas2011price} is now an established metric for assessing how much the social welfare (i.e., the aggregate utility) is affected when enforcing some fairness metric. Ideally, we would like this price to be as small as possible, bridging in a way these two criteria. Atkinson~\cite{ATKINSON1970244} proposed the unifying $\alpha$-fairness criterion, which yields different fairness criteria based on the value of~$\alpha$, i.e., the utilitarian principle~($\alpha=0$), proportional fairness~($\alpha =1$), and max-min fairness~($\alpha\to \infty$). Due to the generality of the $\alpha$-fairness criterion, we use it to develop our theory, which in turn renders our results transferable to all the above fairness and bargaining problems. In this work, the PoF, together with the metric of fairness-regret, are the two criteria we use to characterize our fairness solution. \subsection{Fairness in Dynamic Resource Allocation} Several works consider slot-fairness in dynamic systems~\cite{jalota2022online,Sinclair2022, Talebi2018}. Jalota and Ye~\cite{jalota2022online} proposed a weighted proportional fairness algorithm for a system where new users arrive at each slot with linear i.i.d. perturbed utility functions, unknown at the time of selecting an allocation, and are allocated resources from an i.i.d. varying budget.
Sinclair et al.~\cite{Sinclair2022} consider a similar setup, but assume the utilities are known at the time of selecting an allocation, and the utility parameters (number of agents and their type) are drawn from some fixed known distribution. They propose an adaptive threshold policy, which achieves a target efficiency (amount of consumed resources' budget) and fairness tradeoff, where the latter is defined w.r.t. an offline weighted proportional fairness benchmark. Finally, Talebi and Proutiere~\cite{Talebi2018} study dynamically arriving tasks that are assigned to a set of servers with unknown and stochastically-varying service rates. Using a stochastic multi-armed bandit model, the authors achieve proportional fairness across the service rates assigned to different tasks at each slot. All these important works, however, do not consider the more practical horizon-fairness metric, where fairness is enforced throughout the entire operation of the system and not over each slot separately. Horizon-fairness has been recently studied through the lens of competitive analysis~\cite{kawase2021online, banerjee2022online, bateni2022fair}, where the goal is to design a policy that achieves online fairness within a constant factor from the fairness of a suitable benchmark. Kawase and Sumita~\cite{kawase2021online} consider the problem of allocating arriving items irrevocably to one agent who has additive utilities over the items. The arrival of the items is arbitrary and can even be selected by an adversary. The authors consider known utilities at the time of allocation, and design policies under the max-min fairness criterion. Banerjee et al.~\cite{banerjee2022online} consider a similar problem under the proportional fairness criterion, and they allow the policies to exploit available predictions.
We observe that the competitive ratio guarantees, while theoretically interesting, may not be informative about the fairness of the actual approximate solution achieved by the algorithm for ratios different from one. For instance, when maximizing a Nash welfare function under the proportional fairness criterion, the solution achieves some axiomatic fairness properties~\cite{Nash1950} (e.g., Pareto efficiency, individual rationality, etc.), but this welfare function is meaningless for ``non-optimal'' allocations~\cite{Sinclair2022}, i.e., a policy with a high competitive ratio is not necessarily less fair than a policy with a lower competitive ratio. For this reason, our work considers regret as a performance metric: when regret vanishes asymptotically, the allocations of the policy indeed achieve the exact same objective as the adopted benchmark. A different line of work~\cite{gupta2021individual,Liao2022Feb,Cayci2020,benade2018make,zeng2020fairness,sinclair2020sequential,baek2021fair} considers horizon-fairness through regret analysis. Gupta and Kamble~\cite{gupta2021individual} study individual fairness criteria that advocate that similar individuals should be treated similarly. They extend the notion of individual fairness to online contextual decision-making, and introduce: (1)~fairness-across-time and (2)~fairness-in-hindsight. The fairness-across-time criterion requires the treatment of individuals to be individually fair relative to the past as well as the future, while fairness-in-hindsight only requires individual fairness at the time of the decision. The utilities are known at the time of selecting an allocation and are i.i.d. and drawn from an unknown fixed distribution. In this work, we make a similar distinction on the fairness criterion in the online setting, where we define the horizon-fairness and slot-fairness.
Liao et al.~\cite{Liao2022Feb} consider a similar setup to ours, with a limited adversarial model and time-varying but {known} utilities, and focus on proportional fairness. They consider adversarial perturbations added to a fixed item distribution where the demand of items generally behaves predictably, but for some time steps, the demand behaves erratically. Our approach departs significantly from these interesting works in that we consider unknown utility functions, a broader adversarial model (in fact, as broad as possible while still achieving vanishing fairness regret), and the general $\alpha$-fairness criterion that encompasses all the above criteria as special cases. This makes, we believe, our \textsc{OHF}{} algorithm applicable to a wider range of practical problems. Table~\ref{tab:related_work} summarizes the differences between our contribution and the related works. \begin{table}[t] \caption{Summary of related work under online fairness in resource allocation.} \begin{footnotesize} \begin{center} \begin{tabular}{clc>{\centering\arraybackslash}p{4.6em} >{\centering\arraybackslash}p{4.6em} c} \hline \textbf{Paper} & \textbf{Criterion}&\textbf{HF/SF} & \textbf{Unknown utilities} & \textbf{Adversarial utilities} & \textbf{Metric}\\ \hline \cite{jalota2022online} & Weighted proportional fairness & SF & \cmark& \xmark& Regret \\ \cite{Sinclair2022} & Weighted proportional fairness & SF &\xmark&\xmark& Envy, Efficiency\\ \cite{Talebi2018} & Proportional fairness& SF &\xmark&\xmark & Regret \\ \cite{gupta2021individual} & Individual fairness& HF/SF &\xmark&\xmark& Regret \\ \cite{Liao2022Feb} & Proportional fairness& HF&\xmark& \cmark& Regret \\ \cite{Cayci2020} & $\alpha$-fairness &HF & \cmark& \xmark& Regret \\ \cite{benade2018make} & Envy-freeness & HF&\xmark&\cmark& Envy \\ \cite{zeng2020fairness} & Weighted proportional fairness & HF&\xmark&\cmark& Envy, Pareto Efficiency \\ \cite{baek2021fair} & Proportional fairness &
HF&\xmark&\xmark& Regret\\ \cite{kawase2021online} & Max-Min fairness & HF& \xmark &\cmark& Competitive ratio \\ \cite{banerjee2022online} & Proportional fairness & HF&\xmark& \cmark& Competitive ratio \\ \cite{bateni2022fair} & Proportional fairness & HF &\xmark&\xmark& Competitive ratio \\ \hline This work & Weighted $\alpha$-fairness & HF/SF & \cmark& \cmark& Fairness Regret\\ \hline \end{tabular} \end{center} \end{footnotesize} \label{tab:related_work} \end{table} \subsection{Online Learning} Achieving horizon-fairness in our setup requires technical extensions to the theory of OCO~\cite{Hazanoco2016}. The basic template of OCO-learning (in terms of resource allocation) considers that a decision maker selects repeatedly a vector $\vec{x}_t$ from a convex set $\mathcal{X}$, without having access to the $t$-th slot scalar utility function $u_t(\vec{x})$, with the goal to maximize the aggregate utility $\sum^T_{t=1} u_t(\vec{x}_t)$. The decision maker aims to have vanishing time-averaged regret, i.e., the time-averaged distance of the aggregate utility $\sum^T_{t=1} u_t(\vec{x}_t)$ from the aggregate utility of the optimal-in-hindsight allocation $\max_{\vec{x} \in \mathcal{X}}\sum_{t=1}^T u_t(\vec{x})$ for some time horizon $T$. OCO models are robust, expressive, and can be tackled with several well-studied learning algorithms~\cite{Hazanoco2016,ShalevOnlineLearning, mcmahan2017survey}. However, none of those is suitable for the fairness problem at hand, as we need to optimize a global function $F_\alpha(\,\cdot\,)$ of the time-averaged vector-valued utilities. This subtle change creates additional technical complications. Indeed, optimizing functions of time-averaged utility/cost functions in learning is an open and challenging problem. In particular, Even-Dar et al.~\cite{evan2009colt} introduced the concept of global functions in online learning, and devised a policy with vanishing regret using the theory of approachability~\cite{blackwell1956analog}. 
However, their approach can handle only norms as global functions, and this limitation is not easy to overcome: the authors themselves stress that characterizing when a global function enables vanishing regret is an open problem (see~\cite[Sec.~7]{evan2009colt}). Rakhlin et al.~\cite{rakhlin11} extend this work to non-additive global functions. However, the $\alpha$-fairness function considered in our work is not supported by their framework. To generalize the results to $\alpha$-fairness global functions, we employ a convex conjugate approach conceptually similar to the one taken by Agrawal and Devanur~\cite{agrawal2014bandits} to obtain a regret guarantee with a concave global function under a stationary setting and linear utilities. In this work, we consider an adversarial setting (i.e., utilities are picked by an adversary after we select an allocation) that encompasses general concave utilities, and this requires learning over the primal space as well as the dual (conjugate) space. \section{Online Fairness: Definitions and Background} \label{s:fairness} \subsection{Static Fairness} \label{s:fairness+static} Consider a system $\S$ that serves a set of agents $\mathcal{I}$ by selecting allocations from the set of eligible allocations $\mathcal{X}$. In the general case, this set is defined as the Cartesian product of agent-specific eligible allocation sets $\mathcal{X}_i$, i.e., $\mathcal{X} \triangleq \bigtimes_{i \in \mathcal{I}} \mathcal{X}_i$. We assume that each set $\mathcal{X}_i$ is convex.
The utility of each agent $i \in \mathcal{I}$ is a concave function $ u_i: \mathcal{X} \to \mathbb{R}_{\geq 0}$, and depends, possibly, not only on $\vec{x}_i \in \mathcal{X}_i$, but on the entire vector $\vec{x}\in \mathcal{X}$.\footnote{For example, in TCP congestion control, the performance of each end-node depends not only on the rate that is directly allocated to that node, but also, through the induced congestion in shared links, by the rate allocated to other nodes~\cite{kellyORS98}. Similar couplings arise in wireless transmissions over shared channels~\cite{tassiulasFnT06}.} The vector $\vec u(\vec{x}) \triangleq \parentheses{u_{i} (\vec{x})}_{i \in \mathcal{I}} \in \U$ is the vectorized form of the agents' utilities, where $\U$ is the set of possible utility functions. The joint allocation $\vec{x}_{\star} \in \mathcal{X}$ is an $\alpha$-fair allocation for some $\alpha \in \mathbb{R}_{\geq 0}$, if it solves the following convex problem: \begin{align} \max_{\vec{x} \in \mathcal{X}} F_{\alpha}\parentheses{\vec u (\vec{x})},\label{e:prob2} \end{align} where $F_{\alpha}$ is the $\alpha$-fairness criterion the system employs (e.g., when $\alpha = 1$, problem~\eqref{e:prob2} corresponds to an Eisenberg-Gale convex problem~\cite{eisenberg1959consensus}). The $\alpha$-fairness function is defined as follows~\cite{ATKINSON1970244}: \begin{definition} An $\alpha$-fairness function $F_\alpha:\U\to \mathbb{R}$ is parameterized by the inequality aversion parameter $\alpha \in \mathbb{R}_{\geq 0}$, and it is given by \begin{align} F_\alpha(\u) &\triangleq \sum_{i \in \mathcal{I}} f_\alpha(u_i), &\text{where}\qquad f_\alpha (u) \triangleq \begin{cases} \frac{u^{1-\alpha} - 1}{1-\alpha},& \text{ for $\alpha \in \mathbb{R}_{\geq 0} \setminus \set{1}$},\\ \log(u),& \text{ for $\alpha = 1$}, \end{cases}\label{e:alpha-fair} \end{align} for every $\u \in \U$. 
Note that $\U \subset \mathbb{R}^\mathcal{I}_{\geq 0}$ for $\alpha <1$, and $\U \subset \mathbb{R}^\mathcal{I}_{>0}$ for $\alpha \geq 1$. \end{definition} Note that we use the most general version of utility-based fairness where the fairness rule is defined w.r.t. accrued utilities (as opposed to allocated resources only), i.e., in our system $\S$, the utility vector $\u \in \U$ can be a function of the selected allocations in~$\mathcal{X}$. The $\alpha$-fairness function is concave and component-wise increasing, and thus exhibits diminishing returns~\cite{bertsimas2012efficiency}. An increase in utility to an agent with a low utility results in a higher $\alpha$-fairness objective. Thus, such an increase is desirable to the system controller. Moreover, the rate at which the marginal increases diminish is controlled by $\alpha$, and hence the nomenclature of the \emph{inequality aversion parameter}. \subsection{Online Fairness} We consider an online variant of system~$\S$, in which the performance of the system is tracked over a time horizon spanning $T\in\mathbb N$ timeslots. At the beginning of each timeslot $t \in \mathcal{T} \triangleq \set{1,2, \dots, T}$, a policy selects an allocation $\vec{x}_t \in \mathcal{X}$ \emph{before} $\u_{t}: \mathcal{X} \to \mathbb{R}^{\mathcal{I}}_{\geq 0}$ is revealed to the policy. The goal is to approach the performance of a properly-selected fair allocation benchmark. We consider the following two cases: \paragraph{Slot-Fairness.} An offline benchmark in hindsight, with access to the utilities revealed at every timeslot $t \in \mathcal{T}$, can ensure fairness at every timeslot, satisfying a \emph{slot-fairness} (SF) objective; for instance, this approach has been followed by~\cite{jalota2022online,Sinclair2022, Talebi2018}.
Formally, the benchmark selects the joint allocation $\vec{x}_{\star} \in \mathcal{X}$ satisfying \begin{align} \mathrm{SF}:\qquad \vec{x}_\star \in \arg\max_{\vec{x} \in \mathcal{X}}\frac{1}{T} \sum_{t \in \mathcal{T}} F_{\alpha}\parentheses{\vec u_{t}(\vec{x})}.\label{eq:sf_objective} \end{align} \paragraph{Horizon-Fairness.} Enforcing fairness at every timeslot can be quite restrictive, and this is especially evident for large time horizons. An alternative formulation is to consider that the agents can accept a momentary violation of fairness at a given timeslot $t \in \mathcal{T}$ as long as in the long run fairness over the total incurred utilities is achieved. Therefore, it is more natural (see Example~\ref{e:example}) to ensure a horizon-fairness criterion over the entire period $\mathcal{T}$. Formally, the benchmark selects the allocation $\vec{x}_{\star} \in \mathcal{X}$ satisfying \begin{align} \mathrm{HF}:\qquad \vec{x}_{\star} \in \arg\max_{\vec{x} \in \mathcal{X}} F_{\alpha}\parentheses{\frac{1}{T}\sum_{t \in \mathcal{T}}\vec u_{t}(\vec{x})}.\label{eq:hf_objective} \end{align} \paragraph{Price of fairness.} Bertsimas et al.~\cite{bertsimas2012efficiency} defined the \emph{price of fairness} (PoF) metric to quantify the efficiency loss due to fairness as the difference between the maximum system efficiency and the efficiency under the fair scheme. In the case of $\alpha$-fairness, it is defined for some utility set $\U$ as \begin{align} \mathrm{PoF} (\U; \alpha) \triangleq\frac{ \max_{\u \in \U} F_{0} (\u) - F_{0} \parentheses{\u_{\max, \alpha}}}{ \max_{\u \in \U} F_{0} (\u) }, \end{align} where $\u_{\max, \alpha} \in \arg\max_{\u \in \U } F_{\alpha} (\u)$ and $ F_0(\u)=\sum_{i \in \mathcal{I}} u_i$ measures the achieved social welfare. Note that by definition the utilitarian objective achieves maximum efficiency, i.e., $\mathrm{PoF} (\U; 0)=0$. 
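To make the static criterion and the PoF metric above concrete, the following minimal Python sketch (ours, with an illustrative two-agent utility set, not part of the original model) evaluates $F_\alpha$ and the PoF over a finite set of achievable utility vectors, so that the maximizations reduce to enumeration:

```python
import numpy as np

def f_alpha(u, alpha):
    # Per-agent alpha-fairness value from Eq. (2); log branch for alpha = 1.
    if alpha == 1.0:
        return np.log(u)
    return (u ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

def F_alpha(u_vec, alpha):
    # System-wide alpha-fairness criterion: sum of per-agent values.
    return float(np.sum(f_alpha(np.asarray(u_vec, dtype=float), alpha)))

def price_of_fairness(utility_set, alpha):
    # PoF over a finite utility set: relative welfare loss of the
    # alpha-fair point w.r.t. the utilitarian optimum.  Welfare is
    # F_0(u) = sum_i u_i (the additive -1 offsets in f_0 cancel out).
    welfare = [float(np.sum(u)) for u in utility_set]
    fair_point = max(utility_set, key=lambda u: F_alpha(u, alpha))
    return (max(welfare) - float(np.sum(fair_point))) / max(welfare)
```

For the toy set $\{(4,1),(2,2)\}$, the utilitarian optimum picks $(4,1)$ (welfare $5$), while a strongly inequality-averse criterion (large $\alpha$) picks $(2,2)$, giving $\mathrm{PoF} = 1/5$.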
Naturally, in our online setting, the metric is extended as follows: \begin{align} \mathrm{PoF} (\mathcal{X}; \mathcal{T}; \alpha) \triangleq\frac{ \max_{\vec{x}\in \mathcal{X}} \sum_{t \in \mathcal{T}} F_{0} \parentheses{\u_t (\vec{x})}- \sum_{t \in \mathcal{T}} F_{0} \parentheses{\u_t (\vec{x}_\star)}}{ \max_{\vec{x}\in \mathcal{X}} \sum_{t \in \mathcal{T}} F_{0} \parentheses{\u_t (\vec{x})}}, \end{align} where $\vec{x}_\star$ is obtained through either SF~\eqref{eq:sf_objective} or HF~\eqref{eq:hf_objective}. We provide the following example to further motivate our choice of horizon-fairness as a performance objective. A similar argument is provided in~\cite[Example 7]{lodi2021fairness}. \begin{example} \label{e:example} Consider a system with two agents $\mathcal{I} = \set{1,2}$, an allocation set $\mathcal{X} = [0, x_{\max}]$ with $x_{\max} > 1$, $\alpha$-fairness criterion with $\alpha =1$, even $T \in \mathbb N$, and the following sequence of utilities $$\{\u_{t} (x)\}^T_{t=1} = \set{\parentheses{1+x, 1-x}, \parentheses{1+x, 1+x},\dots}.$$ It can easily be verified that $\mathrm{PoF} = 0$ for the HF objective~\eqref{eq:hf_objective} because the HF optimal allocation is $x_{\max}$, which matches the optimal allocation under the utilitarian objective. However, under the SF objective~\eqref{eq:sf_objective} we have $\mathrm{PoF} = \frac{x_{\max} - 0.5}{ x_{\max} + 2} \approx 1$ when $x_{\max}$ is large. Remark that the two objectives have different domains of definition; in particular, the allocations in the set $[1, x_{\max}] \subset \mathcal{X}$ are unachievable by the SF objective because they would lead to $u_{t,2}(x) \leq 0$. The HF objective achieves lower PoF (hence, larger aggregate utility), and it allows a much larger set of eligible allocations (in particular all the allocations in the set $\mathcal{X}$), as shown in Fig.~\ref{e:example_motivating}.
Indeed, when the controller has the freedom to achieve fairness over a time horizon, there is an opportunity for more efficient allocations during the system operation. This example provides intuition on the robustness and practical importance of the horizon-fairness objective. \begin{figure}[t] \centering \includegraphics[width =220pt]{figs/PoF.pdf} \caption{Price of Fairness under HF and SF objectives for Example~\ref{e:example} for $x_{\max} =2$. The green shaded area provides the set of allocations unachievable by the SF objective but achievable by the HF objective.} \label{e:example_motivating} \end{figure} \end{example} In the following section, we provide the description of an online learning model and our performance metric of interest under the HF objective. \subsection{Online Policies and Performance Metric} The agents' allocations are determined by an online policy $\vec\mathcal{A} = \set{\mathcal{A}_1, \mathcal{A}_2, \dots, \mathcal{A}_{T-1}}$, i.e., a sequence of mappings. For every timeslot $t \in \set{1,2,\dots, T-1}$, $\mathcal{A}_t: \mathcal{X}^t \times \U^t \to \mathcal{X}$ maps the sequence of past allocations~$\set{\vec{x}_s}^{t}_{s=1} \in \mathcal{X}^{t}$ and utility functions $\set{\u_s}^t_{s=1} \in \U^t$ to the next allocation $\vec{x}_{t+1} \in \mathcal{X}$. We assume the initial decision $\vec{x}_1$ is feasible (i.e., $\vec{x}_1 \in \mathcal{X}$). We measure the performance of policy $\vec\mathcal{A}$ in terms of the \emph{fairness regret}, i.e., the difference between the fairness objective experienced by $\vec\mathcal{A}$ at the time horizon $T$ and that of the best static decision $\vec{x}_{\star} \in \mathcal{X}$ in hindsight.
We restate the regret metric here to streamline the presentation: \begin{align} \mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A} \triangleq \sup_{ \set{\u_t}^T_{t=1} \in {{\U^T}}} \set{ F_{\alpha}\left({\frac{1}{T}\sum_{t \in \mathcal{T}}\vec u_{t}(\vec{x}_{\star})}\right) -F_{\alpha}\left({\frac{1}{T}\sum_{t \in \mathcal{T}}\vec u_{t}(\vec{x}_t)}\right)}. \label{e:b_regret} \end{align} If the fairness regret becomes negligible for large $T$ (i.e., $ \mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A}= \SmallO{1}$), then $\vec\mathcal{A}$ attains the same fairness objective as the optimal static decision in hindsight. Note that under the utilitarian objective ($\alpha=0$), this fairness regret coincides with the classic time-averaged regret in OCO~\cite{Hazanoco2016}. However, for general values of $\alpha \neq 0$, the metric is completely different, as we aim to compare $\alpha$-fair functions evaluated at time-averaged vector-valued utilities. \section{Online Horizon-Fair (\textsc{OHF}) Policy} \label{s:OHF} We first present in Sec.~\ref{s:adversarial_model} the adversarial model considered in this work and provide a result on the impossibility of guaranteeing vanishing fairness regret~\eqref{e:b_regret} under general adversarial perturbations. We also provide a powerful family of adversarial perturbations for which a vanishing fairness regret guarantee is attainable. Second, we present the \textsc{OHF}{} policy in Sec.~\ref{s:algorithm_description} and provide its performance guarantee. Finally, we provide in Sec.~\ref{s:examples} a set of adversarial examples captured by our fairness framework. \subsection{Adversarial Model and Impossibility Result} \label{s:adversarial_model} We begin by formally introducing the adversarial model that characterizes the utility perturbations.
In particular, we consider $\vec\delta_t(\vec{x}) \triangleq { \left( \frac{1}{T}\sum_{s \in \mathcal{T}}\u_s(\vec{x}) \right) - \u_t(\vec{x})}$ to quantify how much the adversary \emph{perturbs} the average utility by selecting a utility function $\u_t$ at timeslot $t \in \mathcal{T}$. Recall that $\vec{x}_{\star} \in \mathcal{X}$ denotes the optimal allocation under the HF objective~\eqref{eq:hf_objective}. We denote by $\Xi(\mathcal{T})$ the set of all possible decompositions of $\mathcal{T}$ into sets of contiguous timeslots, i.e., for every $\set{\mathcal{T}_1, \mathcal{T}_2, \dots, \mathcal{T}_K} \in \Xi(\mathcal{T})$ it holds $\mathcal{T}= \dot \bigcup_{k \in \set{1,2,\dots, K}} \mathcal{T}_k$ and $\max \mathcal{T}_k < \min \mathcal{T}_{k+1}$ for $k \in \set{1,2, \dots, K-1}$. We define two types of adversarial perturbations: \begin{align} &\!\!\!\!\!\text{Budgeted-severity: }&& \!\!\!\!\mathbb{V}_{\T}\!\triangleq\!\!\!\!\!\! \sup_{ \set{\u_t}^T_{t=1} \in {{\U^T}}} \set{\sum_{t \in \mathcal{T}} \sum_{i \in \mathcal{I}} \abs{\delta_{t,i} (\vec{x}_{\star})}}\label{e:adv1} \\ &\!\!\!\!\!\text{Partitioned-severity: }&& \!\!\!\!\mathbb{W}_{\T} \! \triangleq\!\!\!\!\!\! \sup_{ \set{\u_t}^T_{t=1} \in {{\U^T}}} \! \set{ \inf_{\substack{\set{\mathcal{T}_1,\mathcal{T}_2,\dots,\mathcal{T}_K}\\ \in\, \Xi(\mathcal{T})}}\! \set{ \sum^K_{k=1} \sum_{i \in \mathcal{I}} \abs{\sum_{t \in \mathcal{T}_k}\! \! \delta_{t,i}(\vec{x}_{\star})}\! +\! \sum^K_{k=1}\! \frac{\card{\mathcal{T}_k}^2}{\sum_{k' < k } \card{\mathcal{T}_{k'}}+1}}}\label{e:adv2}. \end{align} Our result in Theorem~\ref{th:maintheorem} implies that when either $\mathbb{V}_{\T}$ or $\mathbb{W}_{\T}$ grows sublinearly in the time horizon (i.e., the perturbations satisfy at least one of these two conditions), the regret of the \textsc{OHF}{} policy in Algorithm~\ref{alg:primal_dual_ogaogd} vanishes over time.
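For a fixed utility sequence (i.e., before taking the outer supremum), the inner sum of Eq.~\eqref{e:adv1} can be computed directly from the definition of $\vec\delta_t$; a short illustrative sketch (ours), where the array `U` stacks the utility vectors $\u_t(\vec{x}_{\star})$:

```python
import numpy as np

def budgeted_severity(U):
    # Inner sum of the budgeted-severity definition for one fixed
    # sequence: U[t, i] holds u_{t,i}(x_star); delta_t is the gap
    # between the time-averaged utility and the slot-t utility.
    U = np.asarray(U, dtype=float)
    delta = U.mean(axis=0, keepdims=True) - U
    return float(np.abs(delta).sum())
```

A constant sequence gives severity $0$, while a sequence alternating between two fixed utility vectors grows linearly in $T$, and therefore satisfies the sublinearity required below only through the partitioned-severity condition.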
The \emph{budgeted-severity} $\mathbb{V}_{\T}$ in Eq.~\eqref{e:adv1} bounds the total amount of perturbations of the time-averaged utility. When $\mathbb{V}_{\T} = 0$ the adversary is only able to select a fixed function; otherwise, the adversary is able to select time-varying utilities, while keeping the total deviation no more than $\mathbb{V}_{\T}$. Moreover, the adversary is allowed to pick the timeslots opportunely to maximize performance degradation for the controller. This model is similar to the \emph{adversarial corruption} setting considered in~\cite{Liao2022Feb,balseiro2022best}, and it captures realistic scenarios where the utilities incurred at different timeslots are predictable, but can be perturbed for some fraction of the timeslots. For instance, when demands are manipulated to favor a subset of the agents, or there are unpredictable traffic spikes due to breaking news or similar events in Internet advertising~\cite{esfandiari2015online}, or caching applications~\cite{traverso2013temporal}. The \emph{partitioned-severity} $\mathbb{W}_{\T}$ is at first less easy to understand than the budgeted-severity condition~\eqref{e:adv1}, but equally important from a practical point of view. For simplicity, consider a uniform decomposition of the timeslots, i.e., $\card{\mathcal{T}_k} = M$ for every $k \in \set{1,2,\dots, T/M}$ assuming w.l.o.g. that $M$ divides $T$. Then the r.h.s. term in Eq.~\eqref{e:adv2} can be bounded as follows: \begin{align} \sum^{T/M}_{k=1} \frac{\card{\mathcal{T}_k}^2}{\sum_{k' < k } \card{\mathcal{T}_{k'}}+1} = \sum^{T/M}_{k=1} \frac{M^2}{M (k-1)+1} =\BigO{M^2 + M \log(T / M)}.\label{e:uniform-partitioned-severity} \end{align} Hence, when $M = o(\sqrt{T})$ it holds $\sum^{T/M}_{k=1} \frac{\card{\mathcal{T}_k}^2}{\sum_{k' < k } \card{\mathcal{T}_{k'}}+1} = o(T)$. Since this term grows sublinearly in time, it remains to characterize the growth of the l.h.s.
term $ \sum^K_{k=1} \sum_{i \in \mathcal{I}} \abs{\sum_{t \in \mathcal{T}_k} \delta_{t,i} (\vec{x}_{\star})}$ in Eq.~\eqref{e:adv2}. This term is related to the perturbations selected by the adversary; however, the absolute value is only evaluated at the end of each contiguous subperiod $\mathcal{T}_k$, i.e., the positive and negative deviations from the average utilities can cancel out. For example, a periodic selection of utilities from some set with cardinality $M$ would have zero deviation for this term. This type of adversary is similar to the periodic adversary considered in~\cite{duchi2012ergodic,balseiro2022best}, but also includes adversarial selection of utilities from some finite set (see Example~\ref{example:periodic} in Sec.~\ref{s:examples}). The partitioned-severity adversary can model real-life applications that exhibit seasonal properties, e.g., the traffic may be completely different throughout the day, but daily traffic is self-similar~\cite{zhou2019robust}. This condition also unlocks the possibility of obtaining high probability guarantees under stochastic utilities (see Corollary~\ref{corollary:stochastic}). We formally make the following assumptions: \begin{enumerate}[label=(A\arabic*)] \item \label{a:1} The allocation set $\mathcal{X}$ is convex with diameter $\diam{\mathcal{X}} < \infty$. \item The utilities are bounded, i.e., $\u_{t}(\vec{x}) \in \brackets{u_{\mathrm{min}}, u_{\mathrm{max}}}^{\mathcal{I}} \subset \mathbb{R}^{\mathcal{I}}$ for every $t \in \mathcal{T}$. \item The supergradients of the utilities are bounded over $\mathcal{X}$, i.e., it holds $\norm{\vec g}_2 \leq L_{\mathcal{X}} < \infty$ for any $\vec g \in \partial_{\vec{x}} u_{t,i}(\vec{x})$ and $\vec{x} \in \mathcal{X}$.
\item\label{a:4} The average utility of the optimal allocation~\eqref{eq:hf_objective} is bounded such that $\frac{1}{T} \sum_{t \in \mathcal{T}}\u_t(\vec{x}_{\star}) \in \brackets{u_{\star, \min}, u_{\star, \max}}^{\mathcal{I}} \subset \mathbb{R}^{\mathcal{I}}_{>0}$. \item\label{a:5} The adversary is restricted to select utilities such that \begin{align} \min\set{\mathbb{V}_{\T}, \mathbb{W}_{\T}} = o(T). \end{align} \end{enumerate} We first show that an adversary solely satisfying the mild assumptions~\ref{a:1}--\ref{a:4} can arbitrarily degrade the performance of any policy $\vec\mathcal{A}$. Formally, we have the following negative result: \begin{theorem} When Assumptions~\ref{a:1}--\ref{a:4} are satisfied, there is no online policy $\vec\mathcal{A}$ attaining $\mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A} = \SmallO{1}$ for $\card{\mathcal{I}} > 1$ and $\alpha > 0$. Moreover, there exists an adversary where Assumption~\ref{a:5} is necessary for $\mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A}=o(1)$. \label{theorem:impossibility} \end{theorem} The proof can be found in Appendix~\ref{proof:impossibility}. We design an adversary with a choice over two sequences of utilities against two agents. We show that no policy can have vanishing fairness regret w.r.t. the time horizon under both sequences. 
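Returning to the partitioned-severity condition, the uniform-decomposition bound in Eq.~\eqref{e:uniform-partitioned-severity} is easy to check numerically; a small sketch (function name ours):

```python
def uniform_partition_cost(T, M):
    # r.h.s. term of the partitioned-severity definition when
    # |T_k| = M for all k: block k contributes M^2 / (M*(k-1) + 1).
    assert T % M == 0, "M must divide T"
    return sum(M * M / (M * (k - 1) + 1) for k in range(1, T // M + 1))
```

For $M = o(\sqrt{T})$ the cost is $o(T)$: e.g., $T = 10^4$ and $M = 10$ give a cost below $2\%$ of $T$, consistent with the $\BigO{M^2 + M\log(T/M)}$ estimate.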
\begin{algorithm}[t] \caption{\textsc{OHF}{} policy} \label{alg:primal_dual_ogaogd} \begin{algorithmic}[1] \begin{footnotesize} \Require \Statex $\mathcal{X}$, $\alpha \in \mathbb{R}_{\geq 0}$, $ \brackets{u_{\star, \min}, u_{\star, \max}}$ \State $\Theta \gets\brackets{-1/{u^\alpha_{\star, \min}},- 1/{u^\alpha_{\star, \max}}}^\mathcal{I}$ \Comment{initialize the dual (conjugate) subspace} \State $\vec{x}_1 \in \mathcal{X}$; $\vec{\theta}_1\in \Theta$; \Comment{Initialize primal decision $\vec{x}_1$ and dual decision $\vec{\theta}_1$} \For{$t \in \mathcal{T}$} \State Reveal $\Psi_{t,\alpha}(\vec{\theta}_t, \vec{x}_t) = \parentheses{-F_{\alpha}}^\star(\vec{\theta}_t) - \vec{\theta}_t \cdot \vec u_t (\vec{x}_t)$ \Comment{Incur reward $\Psi_{t,\alpha}(\vec{\theta}_t, \underline{\vec{x}_t})$ and loss $\Psi_{t,\alpha}(\underline{\vec{\theta}_t}, \vec{x}_t)$} \State$\vec g_{\mathcal{X}, t} \in \partial_{\vec{x}} \Psi_{t,\alpha}(\vec{\theta}_t, \vec{x}_t) = -\sum_{i \in \mathcal{I}} \theta_{t,i} \partial_{\vec{x}} u_{t,i}(\vec{x}_t)$ \Comment{Compute supergradient $\vec g_{\mathcal{X}, t}$ at $\vec{x}_t$ of reward $\Psi_{t,\alpha}(\vec{\theta}_t, \,\cdot\,)$} \State $\vec g_{\Theta, t} =\nabla_{\vec \theta} \Psi_{t,\alpha}(\vec{\theta}_t, \vec{x}_t) = \parentheses{\parentheses{-\theta_{t,i}}^{-1/\alpha} - u_{t,i} (\vec{x}_t)}_{i \in \mathcal{I}}$\Comment{Compute gradient $\vec g_{\Theta, t}$ at $\vec{\theta}_t$ of loss $\Psi_{t,\alpha}(\,\cdot\,, \vec{x}_t)$} \State $\eta_{\mathcal{X}, t} = \frac{\diam{\mathcal{X}}}{\sqrt{\sum^t_{s=1} \norm{\vec g_{\mathcal{X}, s}}^2_2}}$; $\eta_{\Theta,t} = \frac{\alpha u_{\mathrm{min}}^{-1-1/\alpha}}{t}$ \Comment{Compute adaptive learning rates} \State $\vec{x}_{t+1} = \Pi_{\mathcal{X}}\parentheses{\vec{x}_t + \eta_{\mathcal{X}, t} \vec g_{\mathcal{X}, t}}$; $\vec{\theta}_{t+1} =\Pi_{\Theta} \parentheses{\vec{\theta}_t - \eta_{\Theta, t} \vec g_{\Theta, t}}$ \Comment{Compute a new allocation through OGA and a new dual decision through
OGD} \EndFor \end{footnotesize} \end{algorithmic} \end{algorithm} \subsection{\textsc{OHF}{} Policy} \label{s:algorithm_description} Our policy employs a convex-concave function, composed of a convex conjugate term that tracks the global fairness metric in a dual (conjugate) space, and a weighted sum of utilities term that tracks the appropriate allocations in the primal space. This function is used by the policy to compute a gradient and a supergradient to adapt its internal state. In detail, we define the function $\Psi_{t,\alpha}: \Theta \times \mathcal{X} \to \mathbb{R} $ given by \begin{align} \Psi_{t,\alpha} (\vec{\theta}, \vec{x}) \triangleq \parentheses{-F_\alpha}^\star(\vec \theta) - \vec \theta \cdot \vec u_t(\vec{x}),\label{e:convex_convcave_function} \end{align} where $\Theta= \brackets{-1/{u^\alpha_{\star, \min}},- 1/{u^\alpha_{\star, \max}}}^\mathcal{I} \subset \mathbb{R}_{<0}^\mathcal{I}$ is a subspace of the dual (conjugate) space, and $\parentheses{-F_\alpha}^\star$ is the \emph{convex conjugate} (see Definition~\ref{def:conjugate} in Appendix) of $-F_{\alpha}$ given, for any $\vec{\theta} \in \Theta$, by \begin{align} \parentheses{-F_{\alpha}}^\star(\vec \theta) &= \begin{cases} \sum_{i \in \mathcal{I}} \frac{\alpha(-\theta_i)^{1-1/\alpha} - 1}{1-\alpha} & \text{ for $\alpha \in \mathbb{R}_{\geq 0} \setminus \set{1}$}, \\ \sum_{i \in \mathcal{I}} \parentheses{- \log(-\theta_i) - 1} &\text{ for $\alpha = 1$}. \end{cases} \end{align} The policy is summarized in Algorithm~\ref{alg:primal_dual_ogaogd}. The algorithm only requires as input: the set of eligible allocations $\mathcal{X}$, the $\alpha$-fairness parameter in $\mathbb{R}_{\geq 0}$, and the range $\brackets{u_{\star, \min}, u_{\star, \max}}$ of values of the average utility obtained by the optimal allocation~\eqref{eq:hf_objective}, i.e., $\frac{1}{T} \sum_{t \in \mathcal{T}}\u_t(\vec{x}_{\star}) \in \brackets{u_{\star, \min}, u_{\star, \max}}^{\mathcal{I}} \subset \mathbb{R}^{\mathcal{I}}_{>0}$.
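As a sanity check, the per-agent term of the conjugate above can be compared against its defining supremum $\sup_{u>0}\set{\theta u + f_\alpha(u)}$ evaluated on a grid; a sketch (ours, single agent; the full conjugate sums these terms over $\mathcal{I}$):

```python
import numpy as np

def neg_f_conjugate(theta, alpha):
    # Closed form of the per-agent term of (-F_alpha)^*, for theta < 0.
    if alpha == 1.0:
        return -np.log(-theta) - 1.0
    return (alpha * (-theta) ** (1.0 - 1.0 / alpha) - 1.0) / (1.0 - alpha)

def neg_f_conjugate_grid(theta, alpha, u_grid):
    # Defining supremum sup_u { theta * u + f_alpha(u) }, on a grid.
    if alpha == 1.0:
        f = np.log(u_grid)
    else:
        f = (u_grid ** (1.0 - alpha) - 1.0) / (1.0 - alpha)
    return float(np.max(theta * u_grid + f))
```

The maximizer of the supremum is $u = (-\theta)^{-1/\alpha}$, which is exactly the conjugate-gradient expression appearing in line 6 of Algorithm~\ref{alg:primal_dual_ogaogd}.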
We stress that the target time horizon $T$ is \emph{not} an input to the policy. The policy uses this input to initialize the dual (conjugate) subspace $\Theta = \brackets{-1/{u^\alpha_{\star, \min}},- 1/{u^\alpha_{\star, \max}}}^\mathcal{I}$, an allocation $\vec{x}_1 \in \mathcal{X}$, and a dual decision $\vec{\theta}_1 \in \Theta$ (lines~1--2 in Algorithm~\ref{alg:primal_dual_ogaogd}). At a given timeslot $t \in \mathcal{T}$, the allocation $\vec{x}_t$ is selected; then a vector-valued utility $\vec u_t (\,\cdot\,)$ is revealed, and in turn $\Psi_{t, \alpha} (\,\cdot\,, \,\cdot\,)$ is determined (line 4 in Algorithm~\ref{alg:primal_dual_ogaogd}). The supergradient $\vec g_{\mathcal{X},t}$ of $\Psi_{t, \alpha} (\vec{\theta}_t, \,\cdot\,)$ at point $\vec{x}_t \in \mathcal{X}$, and the gradient $\vec g_{\Theta, t}$ of $\Psi_{t, \alpha} (\,\cdot\,, \vec{x}_t)$ at point $\vec{\theta}_t \in \Theta$ are computed (lines~5--6 in Algorithm~\ref{alg:primal_dual_ogaogd}). The policy then performs an adaptation of its state variables $(\vec{x}_t, \vec{\theta}_t)$ through a descent step in the dual space and an ascent step in the primal space via online gradient descent (OGD) and online gradient ascent (OGA) policies\footnote{Note that a different OCO policy can be used as long as it has a no-regret guarantee, e.g., online mirror descent (OMD), follow the regularized leader (FTRL), or follow the perturbed leader (FTPL)~\cite{Hazanoco2016, mcmahan2017survey}; moreover, one could even incorporate optimistic versions of such policies~\cite{rakhlin2013online}, to improve the regret rates when the controller has access to accurate predictions.}, respectively (line 8 in Algorithm~\ref{alg:primal_dual_ogaogd}). The learning rates (step sizes) used are ``self-confident''~\cite{AUER200248} as they depend on the experienced gradients.
Such a learning rate schedule is compelling because it can adapt to the adversary and provides tighter regret guarantees for ``easy'' utility sequences; moreover, it allows attaining an \emph{anytime} regret guarantee, i.e., a guarantee holding for any time horizon $T$. In particular, the \textsc{OHF}{} policy in Algorithm~\ref{alg:primal_dual_ogaogd} enjoys the following fairness regret guarantee. \begin{theorem} Under Assumptions~\ref{a:1}--\ref{a:5}, the \textsc{OHF}{} policy in Algorithm~\ref{alg:primal_dual_ogaogd} attains the following fairness regret guarantee: \begin{align} \mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A} &\le\!\!\! \sup_{ \set{\u_t}^T_{t=1} \in {{\U^T}}} \set{{\frac{1.5\diam{\mathcal{X}}}{T}\sqrt{\sum_{t \in \mathcal{T}}\! \norm{\vec g_{\mathcal{X}, t}}^2_2}}\! +\! \sum^T_{t=1}\!\frac{\alpha \norm{\vec g_{\Theta, t}}^2_2}{2 u_{\star,\min}^{1 + \frac{1}{\alpha }} T t}}+\mathcal{O}\parentheses{ \frac{\min\set{\mathbb{V}_{\T}, \mathbb{W}_{\T}}}{T}} \label{e:th:u1}\\ &\leq \frac{1.5\diam{\mathcal{X}} L_{\mathcal{X}}}{{u^{\alpha}_{\star, \min}} \sqrt{T}} + \frac{\alpha L^2_{\Theta} (\log(T) + 1)}{u_{\star,\min}^{1 + \frac{1}{\alpha }} T} +\BigO{\frac{\min{\set{\mathbb{V}_{\T}, \mathbb{W}_{\T}}}}{T} } \label{e:th:u2}\\ &= \BigO{\frac{1}{\sqrt{T}} + \frac{\min\set{\mathbb{V}_{\T}, \mathbb{W}_{\T}}}{T}} = o(1). \end{align} \label{th:maintheorem} \end{theorem} The proof is provided in Appendix~\ref{proof:t:maintheorem}. We prove that the fairness regret can be upper bounded by the time-averaged regrets of the primal policy operating over the set $\mathcal{X}$ and the dual policy operating over the set $\Theta$, combined with an extra term that is upper bounded by $\min\set{\mathbb{V}_{\T}, \mathbb{W}_{\T}}$. Note that the fairness regret upper bound in Eq.~\eqref{e:th:u1} can be much tighter than the one in Eq.~\eqref{e:th:u2}, because the gradients' norms can be lower than their upper bound at a given timeslot $t \in \mathcal{T}$.
Furthermore, remark that the fairness regret has an anytime guarantee since the target time horizon $T$ is not an input to the policy, and the learning schedule is ``self-confident''~\cite{AUER200248} and adapts to the observed utilities. Additionally, we show that no policy can have a better dependency on the time horizon $T$ for the fairness regret~\eqref{e:b_regret} than the one established in Theorem~\ref{th:maintheorem} under Assumptions~\ref{a:1}--\ref{a:5}. Formally, \begin{theorem} \label{theorem:lowerbound} Any policy $\vec\A$ incurs $ \mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A} = \Omega\parentheses{\frac{1}{\sqrt{T}}}$ fairness regret~\eqref{e:b_regret} for~$\alpha \geq 0$. \end{theorem} The proof can be found in Appendix~\ref{proof:lowerbound}. We show that the lower bound on regret in online convex optimization~\cite{Hazanoco2016} can be transferred to the fairness regret. \subsection{Adversarial Examples} \label{s:examples} In this section, we provide examples of adversaries satisfying Assumptions~\ref{a:1}--\ref{a:5}: adversaries with $\mathbb{V}_{\T} = o(T)$, adversaries with $\mathbb{W}_{\T} = o(T)$, and stochastic adversaries. \begin{example} (Adversaries satisfying $\mathbb{V}_{\T} = o(T)$) Consider an adversary selecting utilities such that \begin{align} \u_t(\vec{x}) = \u(\vec{x}) +\vec \gamma_t\odot\vec p_t(\vec{x}), \end{align} where $\u: \mathcal{X} \to \mathbb{R}^{\mathcal{I}}$ is a fixed utility. The time-dependent function $\vec p_t : \mathcal{X} \to \mathbb{R}^{\mathcal{I}}$ is an adversarially selected perturbation with $\norm{\vec p_t}_{\infty} < \infty $, and $\vec \gamma_t \in \mathbb{R}^{\mathcal{I}}$ quantifies the severity of the perturbations, where $\vec \gamma_t \odot \vec p_t(\vec{x}) = \parentheses{\gamma_{t,i} p_{t,i}(\vec{x})}_{i \in \mathcal{I}}$ is the Hadamard product. The severity of the perturbations grows sublinearly in time $T$, i.e., $\sum^T_{t=1} \gamma_{t,i} = o(T)$ for every $i \in \mathcal{I}$.
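As a minimal sketch (sampling choices and variable names are ours), such an adversary can be simulated, and the time-averaged severity indeed vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
T, s = 10_000, 0.5
I = 2  # two agents

# severities gamma_{xi_i(t), i} = t^{-s}, assigned via a random permutation xi_i
gamma = np.empty((T, I))
for i in range(I):
    perm = rng.permutation(T)
    gamma[perm, i] = np.arange(1, T + 1) ** (-s)

def u_t(x, t):
    """Perturbed utility u_t(x) = u(x) + gamma_t * p_t(x) on X = [0, 1]."""
    base = np.array([1 - x ** 2, 1 + x])   # fixed utility u(x) (toy choice)
    a = rng.uniform(-1, 1, size=I)         # perturbation p_t(x) = (a_{t,i} * x)_i
    return base + gamma[t] * (a * x)

# sum_t t^{-s} = O(T^{1-s}), so the time-averaged severity vanishes
print(gamma.sum(axis=0) / T)
```

With $s = 1/2$ and $T = 10^4$, the per-agent average severity is about $2/\sqrt{T} \approx 0.02$, consistent with $\mathbb{V}_{\T} = o(T)$.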
It is easy to check that, in this setting, $\mathbb{V}_{\T} = o(T)$ holds. We provide a simple-yet-illustrative example of such an adversary. We take $\mathcal{X} = [0,1] \subset \mathbb{R}$, two agents $\mathcal{I} = \set{1,2}$, fixed utilities $\u (x) = \parentheses{1 - x^2, 1+x}$, and adversarial perturbations $\vec p_t(x) =\parentheses{a_{i,t}\cdot x}_{i \in \mathcal{I}}$, where $\vec a_t$ is selected uniformly at random from $[-1,1]^{\mathcal{I}}$ for every $t \in \mathcal{T}$. The perturbations' severity is selected as $\gamma_{\xi_i(t),i} = t^{-s}$, where $\xi_i: \mathcal{T} \to \mathcal{T}$ is a random permutation of the elements of $\mathcal{T}$ for $i \in \mathcal{I}$. The performance of Algorithm~\ref{alg:primal_dual_ogaogd} is provided in Fig.~\ref{fig:example1}. We observe that for smaller values of $s$, corresponding to lower perturbation severity, the policy attains the utilities of the HF benchmark~\eqref{eq:hf_objective} faster. \end{example} \begin{figure}[t] \centering \subcaptionbox{$s =\frac{1}{100} $}{\includegraphics[width=.24\linewidth]{figs/adv-example-1-0.01.pdf}} \subcaptionbox{$s = \frac{1}{10}$}{\includegraphics[width=.24\linewidth]{figs/adv-example-1-0.1.pdf}} \subcaptionbox{$s = \frac{1}{2}$}{\includegraphics[width=.24\linewidth]{figs/adv-example-1-0.5.pdf}} \subcaptionbox{Time-averaged utility}{\includegraphics[width=.26\linewidth]{figs/adv-example-1-algo.pdf}} \caption{Subfigures~(a)--(c) provide the utilities of agent 2 for different values of the perturbations' severity parameter $s \in \set{\frac{1}{100}, \frac{1}{10}, \frac{1}{2}}$ under the benchmark's allocation $x_{\star}$. Subfigure~(d) provides the time-averaged utility of the two agents.
The dark dashed lines represent the utilities obtained by the HF objective~\eqref{eq:hf_objective}.} \label{fig:example1} \end{figure} \begin{example} (Adversaries satisfying $\mathbb{W}_{\T} = o(T)$) \label{example:periodic} Consider a multiset denoted by $\mathcal{M}_t$, and an adversary that selects utilities $\vec u_t:\mathcal{X}\to \mathbb{R}^{\mathcal{I}}$ such that $\u_t \in \mathcal{M}_t$. The multiset is then updated as follows: if $\mathcal{M}_t \setminus \set{\u_t}\neq \emptyset$, then $\mathcal{M}_{t+1} = \mathcal{M}_t \setminus \set{\u_t}$; otherwise, $\mathcal{M}_{t+1} = \mathcal{M}_1$. In words, the adversary irrevocably selects elements (utilities) from the multiset $\mathcal{M}_1$. When all the elements have been selected, the replenished $\mathcal{M}_1$ is offered again to the adversary. Consider the following decomposition of the horizon $\mathcal{T}$: $\set{1, 2, \dots, \card{\mathcal{M}_1}}\cup \set{\card{\mathcal{M}_1}+1, \card{\mathcal{M}_1}+2, \dots, 2 \card{\mathcal{M}_1}}\cup \dots = \mathcal{T}_1\cup \mathcal{T}_2\cup\dots\cup \mathcal{T}_{{T}/{\card{\mathcal{M}_1}}}$, assuming without loss of generality a time horizon $T$ divisible by $\card{\mathcal{M}_1}$. By construction, it holds for every $\vec{x} \in \mathcal{X}$ \begin{align} \sum_{i \in \mathcal{I}} \abs{\sum_{t \in \mathcal{T}_k} \delta_{t,i}(\vec{x})} = 0, \forall k \in \set{1,2,\dots, \frac{T}{\card{\mathcal{M}_1}}},\label{e:adv_eg_a1} \end{align} because when the multiset is fully consumed by the adversary, the average experienced utility is a fixed function.
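Reading $\delta_{t,i}(\vec{x})$ as the deviation of $u_{t,i}(\vec{x})$ from the average utility over $\mathcal{M}_1$ (an assumption of this sketch), the per-cycle cancellation can be checked numerically on a toy multiset:

```python
import numpy as np

# a toy multiset of vector utilities on X = [-1, 1], two agents
funcs = ([lambda x: np.array([1 - x, 1 - (1 - x) ** 2])] * 10
         + [lambda x: np.array([1 - (1 - x) ** 2, 1 - 4 * x])] * 20
         + [lambda x: np.array([1.0, -2 * x])] * 10)

def cycle_deviation(x, order):
    """Sum over one full cycle T_k of deviations from the multiset average;
    zero for any selection order, since every element is used exactly once."""
    vals = np.array([funcs[t](x) for t in order])
    mean = vals.mean(axis=0)                 # fixed average utility over M_1
    return float(np.abs((vals - mean).sum(axis=0)).sum())

order = np.random.default_rng(1).permutation(len(funcs))
print(cycle_deviation(0.3, order))   # zero up to floating-point error
```

The cancellation holds regardless of the order in which the adversary consumes the multiset, which is exactly why only the within-cycle variability matters for $\mathbb{W}_{\T}$.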
When $\card{\mathcal{M}_1} = \Theta\parentheses{T^{\epsilon}}$ for $\epsilon \in [0,1/2)$, it holds (see Eq.~\eqref{e:uniform-partitioned-severity}) \begin{align} \sum^{T/\card{\mathcal{M}_1}}_{k=1} \frac{\card{\mathcal{T}_k}^2}{\sum_{k' < k } \card{\mathcal{T}_{k'}}+1} = \sum^{T/\card{\mathcal{M}_1}}_{k=1} \frac{\card{\mathcal{M}_1}^2}{\card{\mathcal{M}_1} (k-1)+1} =\BigO{T^{2\epsilon}} = o(T).\label{e:adv_eg_a2} \end{align} Thus, Eq.~\eqref{e:adv_eg_a1} and Eq.~\eqref{e:adv_eg_a2} give $\mathbb{W}_{\T} = o(T)$. We provide a simple example of such an adversary. Consider $\mathcal{X} = [-1,1]$ and two agents $\mathcal{I} = \set{1,2}$. The adversary selects utilities from the multiset \begin{align*} \mathcal{M}_1 = \{\vGroup{(1\!-\!x, 1\!-\!(1\!-\!x)^2)}{\text{repeated $10$ times}}, \vGroup{({ 1\!-\!(1\!-\!x)^2, 1\!-\!4x})}{\text{repeated $20$ times}}, \vGroup{({ 1, -2x})}{\text{repeated $10$ times}}\}.\label{e:m1} \end{align*} For this particular choice of the multiset $\mathcal{M}_1$ we have $\card{\mathcal{M}_1} =40$, and hence $\mathbb{W}_{\T} = o(T)$. The performance of Algorithm~\ref{alg:primal_dual_ogaogd} under different choice patterns over $\mathcal{M}_1$ is provided in Fig.~\ref{fig:example2}. We observe that the cyclic choice of utilities is more harmful than the u.a.r. one, as it leads to slower convergence. Nonetheless, under both settings, the policy asymptotically yields the same utilities as the HF benchmark~\eqref{eq:hf_objective}.
\end{example} \begin{figure}[t] \centering \subcaptionbox{Allocations (cyclic)}{\includegraphics[width=.24\linewidth]{figs/adv-example-2-primal-decision-oscillating.pdf}} \subcaptionbox{Allocations (u.a.r.)}{\includegraphics[width=.24\linewidth]{figs/adv-example-2-primal-decision-uar.pdf}} \subcaptionbox{Time-averaged utilities (cyclic)}{\includegraphics[width=.23\linewidth]{figs/adv-example-2-utilities-oscillating.pdf}} \subcaptionbox{Time-averaged utilities (u.a.r.)}{\includegraphics[width=.23\linewidth]{figs/adv-example-2-utilities-random-shuffle.pdf}} \caption{Subfigures~(a)--(b) provide the allocations of the different agents under cyclic and u.a.r. choice of utilities over the multiset $\mathcal{M}_1$, respectively. Subfigures~(c)--(d) provide the corresponding time-averaged utilities. } \label{fig:example2} \end{figure} \begin{example} (Stochastic Adversary) \label{e:example3} Consider a scenario where the utilities $\vec u_t:\mathcal{X} \to \mathbb{R}^{\mathcal{I}}$ are drawn i.i.d. from an unknown distribution $\mathcal{D} (\U)$. Formally, the following corollary is obtained from Theorem~\ref{th:maintheorem}. \begin{corollary} When the utilities $\vec u_t:\mathcal{X} \to \mathbb{R}^{\mathcal{I}}$ are drawn i.i.d. from an unknown distribution $\mathcal{D}(\U)$ with support $\U$ satisfying Assumptions~\ref{a:1}--\ref{a:4}, the policy \textsc{OHF}{} in Algorithm~\ref{alg:primal_dual_ogaogd} attains the following expected fairness regret guarantee: \begin{align} \bar{\mathfrak{R}}_T \parentheses{F_{\alpha}, \vec\A} \triangleq\sup_{ \mathcal{D}(\U)} \set{ \underset{\substack{\u_t \sim \mathcal{D}(\U) \\ t \in \mathcal{T}}}{\mathbb E} \brackets{\max_{\vec{x} \in \mathcal{X}} F_{\alpha}\left({\frac{1}{T}\sum_{t \in \mathcal{T}}\vec u_{t}(\vec{x})}\right) -F_{\alpha}\left({\frac{1}{T}\sum_{t \in \mathcal{T}}\vec u_{t}(\vec{x}_t)}\right)}} = \BigO{\frac{1}{\sqrt{T}}}.
\end{align} Moreover, it holds with probability one that $\mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A} \leq 0$ as $T \to \infty$. \label{corollary:stochastic} \end{corollary} The proof is available in Appendix~\ref{proof:stochastic}. The expected fairness regret guarantee follows from Theorem~\ref{th:maintheorem} and observing that $\mathbb E \brackets{\vec \delta_t (\vec{x})} = \vec 0$ holds for any $t \in \mathcal{T}$ and $\vec{x} \in \mathcal{X}$. The high-probability fairness regret guarantee for large enough time horizons is obtained through Hoeffding's inequality paired with Eq.~\eqref{e:adv2}. \end{example} Note that we provide additional examples of adversaries, in the context of the application of our policy to a virtualized caching system, in Sec.~\ref{s:experiments}. \section{Extensions} \label{s:extensions} In this section, we first show that our algorithmic framework extends to cooperative bargaining settings, in particular Nash bargaining~\cite{Nash1950}. Secondly, we show that our framework also extends to the weighted $\alpha$-fairness criterion. \subsection{Nash Bargaining} The Nash bargaining solution (NBS), proposed in the seminal paper~\cite{Nash1950}, is a fairness criterion for distributing among a set of agents the utility of their cooperation. The solution guarantees that, whenever the agents cooperate, each achieves an individual performance that exceeds the one attainable when operating independently; the latter is known as the disagreement point. NBS comes from the area of cooperative game theory, and it is self-enforcing, i.e., the agents will agree to apply this solution without the need for an external authority to enforce compliance.
NBS has been extensively applied in communication networks, e.g., to transmission power control~\cite{Boche2011}, mobile Internet sharing among wireless users~\cite{Iosifidis2017}, content delivery in ISP-CDN partnerships~\cite{Wenjie2009}, and cooperative caching in information-centric networks~\cite{Liang2017}. Nash bargaining can be incorporated in our fairness framework by taking $\alpha = 1$ and redefining the utilities for every $t \in \mathcal{T}$ as $ \u_t' (\vec{x}) = \u_t(\vec{x}) - \u^d_t$, where $u^d_{t,i}$ is the disagreement point of agent $i \in \mathcal{I}$ at timeslot $t$. In particular, \textsc{OHF}{} provides the same guarantees. We also note that the dynamic model generalizes the NBS solution by allowing both the utilities and the disagreement points to change over time, while the benchmark is defined using~\eqref{eq:hf_objective} and $\alpha=1$. Hence, the proposed \textsc{OHF}{} allows the agents to collaborate without knowing in advance the benefits of their cooperation nor their disagreement points, in a way that guarantees they will achieve the commonly agreed NBS at the end of the horizon $T$ (asymptotically). \subsection{The $(\vec w, \alpha)$-Fairness} The weighted $\alpha$-fairness, or simply $(\vec w, \alpha)$-fairness, with $\alpha \geq 0$ and $\vec w \in \Delta_{\mathcal{I}}\subset \mathbb{R}^{\mathcal{I}}_{\geq 0}$, where $\Delta_{\mathcal{I}}$ is the probability simplex with support $\mathcal{I}$, is defined as~\cite{mo2000fair}: \begin{definition} A $(\vec w, \alpha)$-fairness function $F_{\vec w, \alpha}:\U\to \mathbb{R}$ is parameterized by the inequality aversion parameter $\alpha \in \mathbb{R}_{\geq 0}$ and weights $\vec w \in \Delta_{\mathcal{I}}$, and it is given by $ F_{\vec w, \alpha}(\u) \triangleq \sum_{i \in \mathcal{I}} w_i f_{\alpha}(u_i)$ for every $\u \in \U$. Note that $\U \subset \mathbb{R}^\mathcal{I}_{\geq 0}$ for $\alpha <1$, and $\U \subset \mathbb{R}^\mathcal{I}_{>0}$ for $\alpha \geq 1$.
\end{definition} It is easy to check that our $\alpha$-fairness framework captures the $(\vec w, \alpha)$-fairness by simply redefining the utilities incurred at time $t \in \mathcal{T}$ for agent $i \in \mathcal{I}$ as follows: \begin{align} u'_{t,i}(\vec{x}) =\begin{cases} w^{\frac{1}{1-\alpha}}_i u_{t,i}(\vec{x}) \qquad \text{for } \alpha \neq 1, \\ \parentheses{u_{t,i}(\vec{x})}^{w_i} \qquad \text{for } \alpha = 1. \end{cases} \end{align} Note that for $\alpha = 1$ and uniform weights, we recover Nash bargaining as discussed previously; otherwise, we recover asymmetric Nash bargaining, in which the different weights correspond to the bargaining powers of the players~\cite{harsanyi1972generalized}. \section{Application} \label{s:experiments} In order to demonstrate the applicability of the proposed fairness framework, we target a representative resource management problem in virtualized caching systems, where different caches cooperate by jointly serving the received content requests. This problem has been studied extensively in its static version, where the request rates for each content file are a priori known and the goal is to decide which files to store at each cache to maximize a fairness metric (such as the Nash bargaining solution) of cache hits across different caches; see for instance \cite{Liang2017, LIU2020102138}. We study the more realistic version of the problem where the request patterns are unknown. This online caching model has been recently studied as a learning problem in a series of papers \cite{paschos2019learning, sisalem2021no, mhaisen2022online, paria2021texttt,bura2021learning,Li2021}, yet none of them focuses on (or is able to handle) fairness metrics. \subsection{Multi-Agent Cache Networks} \paragraph{Cache network.} We assume that time is slotted and the set of timeslots is denoted by $\mathcal{T} \triangleq \set{1, 2, \dots, T}$.
We consider a catalog of equally-sized files $\mathcal{F} \triangleq\set{1, 2, \dots, F}$.\footnote{Note that we assume equally-sized files to streamline the presentation. Our model supports unequally-sized files by replacing the cardinality constraint in Eq.~\eqref{e:constraint} with a knapsack constraint and the set $\mathcal{X}_c$ remains convex.} We model a cache network at timeslot $t \in \mathcal{T}$ as an undirected weighted graph $G_t(\C, \mathcal{E})$, where $\C \triangleq\set{1,2,\dots ,C}$ is the set of caches, and $(c, c') \in \mathcal{E}$ denotes the link connecting cache $c$ to $c'$ with associated weight $w_{t, (c,c')} \in \mathbb{R}_{>0}$. Let $\P_{t, (c,c')} =\set{c_1, c_2,\dots, c_{\card{\P_{t, (c,c')}}}} \in \C ^{\card{\P_{t, (c,c')}}}$ be the shortest path at timeslot $t \in \mathcal{T}$ from cache $c$ to cache $c'$ with associated weight $ w^{\mathrm{sp}}_{t, (c,c')} \triangleq \sum^{\card{\P_{t, (c,c')}}-1}_{k =1} w_{t, (c_k, c_{k+1})}$. We assume for each file $f \in\mathcal{F}$ there is a set $\Lambda_f (\C) \subset \C$ of designated repository servers that store it permanently. We denote by $x_{t,c,f} \in [0,1]$ the fraction of file $f \in\mathcal{F}$ stored at cache $c \in \C$ at timeslot $t \in \mathcal{T}$. The state of cache $c \in \C$ is given by $\vec{x}_{t,c}$ drawn from the set \begin{align} \mathcal{X}_c \triangleq \set{\vec{x} \in \brackets{0, 1}^ \mathcal{F}: \sum_{f \in\mathcal{F}} x_f \leq k_c, x_f \geq \mathds{1}{\parentheses{c \in \Lambda_f (\C)}}, \forall f \in\mathcal{F}},\label{e:constraint} \end{align} where $k_c \in \mathbb N$ is the capacity of cache $c \in \C$, and $\mathds{1}{\parentheses{\chi}} \in \set{0,1}$ is the indicator function set to 1 when condition $\chi$ is true. Thus, the state of the cache network belongs to $\mathcal{X} \triangleq \bigtimes_{c \in \C} \mathcal{X}_c$. 
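A membership test for the per-cache set $\mathcal{X}_c$ can be sketched as follows (the function name and numerical tolerances are ours):

```python
import numpy as np

def in_cache_set(x, k_c, repo_files, tol=1e-9):
    """Check x in X_c: fractional placements in [0, 1], total mass at most
    the capacity k_c, and permanently pinned repository files (x_f = 1)."""
    x = np.asarray(x, dtype=float)
    in_box = np.all(x >= -tol) and np.all(x <= 1 + tol)
    within_capacity = x.sum() <= k_c + tol
    repo_pinned = all(x[f] >= 1 - tol for f in repo_files)
    return bool(in_box and within_capacity and repo_pinned)

print(in_cache_set([1.0, 0.5, 0.5, 0.0], k_c=2, repo_files=[0]))  # True
print(in_cache_set([0.0, 0.5, 0.5, 0.0], k_c=2, repo_files=[0]))  # False: pinned file dropped
```

Since each $\mathcal{X}_c$ is an intersection of box, half-space, and pinning constraints, it is convex, and so is the product $\mathcal{X}$.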
The system model is summarized in Fig.~\ref{fig:system_model}, and it is aligned with many recent papers focusing on learning for caching~\cite{Ioannidis16,paschos2019learning,paria2021texttt}. \begin{figure}[t] \centering \includegraphics[width=.75\linewidth]{figures_paper/model.pdf} \caption{System model: a network comprised of a set of caching nodes $\C$. A request arrives at a cache node $c \in \C$; it can be partially served locally and, if needed, forwarded along the shortest retrieval path to another node to retrieve the remaining part of the file; a utility is incurred by the cache owner $i \in \mathcal{I}$. A set of permanently allocated files is spread across the network, guaranteeing that requests are served even in the absence of caching.} \label{fig:system_model} \end{figure} \paragraph{Requests.} We denote by $r_{t,c,f} \in \mathbb N \cup \set{0}$ the number of requests for file $f \in \mathcal{F}$ submitted by users associated to cache $c \in \C$ during slot $t \in \mathcal{T}$. The request batch arriving at timeslot $t \in \mathcal{T}$ is denoted by $\vec r_{t} = \parentheses{r_{t,c,f}}_{(c,f) \in \C \times \mathcal{F}}$ and belongs to the set \begin{align} \mathcal{R}_t \triangleq \set{\vec r \in \parentheses{\mathbb N \cup \set{0}}^{ \C \times \mathcal{F}}: \sum_{c \in \C}\sum_{f \in\mathcal{F}} r_{c,f} =R_t}, \end{align} where $R_t \in \mathbb N$ is the total number of requests arriving at the system at timeslot $t \in \mathcal{T}$. \paragraph{Caching gain.} We consider that each agent $i \in \mathcal{I}$ holds a set of caches $ \Gamma_i (\C) \subset \C$, with $\dot\bigcup_{i \in \mathcal{I}} \Gamma_i (\C) = \C$. Hence, the allocation set of agent $i$ is given by $\mathcal{X}_i = \bigtimes_{c \in \Gamma_i(\C)} \mathcal{X}_c$. Requests arriving at cache $c\in \C$ can be partially served locally and, if needed, forwarded along the shortest path to a nearby cache $c'\in \C$ storing the file, incurring a retrieval cost $w^{\mathrm{sp}}_{t, (c,c')}$.
Let $\phi_{t,i, c} \triangleq \arg\min_{c' \in \Lambda_i (\C)} \set{w^{\mathrm{sp}}_{t, (c, c')}}$ and $\Phi_{t,i, c}: \set{1, 2, \dots, \phi_{t, i, c}} \subset \C \to\C$ be a map providing a retrieval cost ordering for every $k \in \set{1,2, \dots, \phi_{t,i, c}}$, $t \in \mathcal{T}$, and $i \in \mathcal{I}$, i.e., \begin{align} w^{\mathrm{sp}}_{t, (c, \Phi_{t, i,c}(\phi_{t,i, c}))} = \min\set{w^{\mathrm{sp}}_{t, (c,c')}: c' \in \Lambda_f(\C)} \geq \dots \geq w^{\mathrm{sp}}_{t, (c, \Phi_{t, i,c}(2))} \geq w^{\mathrm{sp}}_{t, (c, \Phi_{t, i,c}(1))} = 0. \end{align} When a request batch $\vec r_t \in \mathcal{R}_t$ arrives at timeslot $t \in \mathcal{T}$, agent $i \in \mathcal{I}$ incurs the following cost: \begin{align*} \mathrm{cost}_{t, i}(\vec{x}) \triangleq\!\!\!\! \sum_{c \in \Gamma_i(\C)} \sum_{f \in\mathcal{F}} r_{t,c,f}\sum^{\phi_{t,i, c}-1}_{k=1} \parentheses{w^{\mathrm{sp}}_{t, (c, \Phi_{t,i, c} (k+1))} - w^{\mathrm{sp}}_{t, (c, \Phi_{t,i, c} (k))}} \parentheses{1 - \min\set{1, \sum^k_{k'=1} x_{\Phi_{t,i, c}(k'), f}}}. \end{align*} This can be interpreted as a QoS cost paid by a user for the additional delay to retrieve part of the file from another cache, or it can represent the load on the network to provide the missing file.
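The incremental-coverage structure of this cost can be sketched for a single query cache (function name and array layout are our assumptions): `w_sp` lists the ordered path costs $w^{\mathrm{sp}}_{t,(c,\Phi(k))}$ in ascending order starting at $0$, and row $k$ of `x_along_path` holds the placements $x_{\Phi(k),f}$.

```python
import numpy as np

def retrieval_cost(r, w_sp, x_along_path):
    """Cost incurred at one query cache: r[f] requests per file, w_sp[k] the
    sorted path cost to the k-th cache in the ordering Phi (w_sp[0] = 0,
    last entry = nearest repository), x_along_path[k][f] its fraction of f."""
    r = np.asarray(r, dtype=float)
    w = np.asarray(w_sp, dtype=float)
    X = np.asarray(x_along_path, dtype=float)
    cum = np.minimum(1.0, np.cumsum(X, axis=0))   # coverage by the first k caches
    incr = np.diff(w)                             # w[k+1] - w[k] >= 0
    return float(np.sum(r * np.sum(incr[:, None] * (1.0 - cum[:-1]), axis=0)))

# empty caches (repository only) pay the full cost r . w_repo:
print(retrieval_cost([2, 1], [0, 1, 3], [[0, 0], [0, 0], [1, 1]]))  # -> 9.0
# storing file 0 at the local cache removes its entire retrieval cost:
print(retrieval_cost([2, 1], [0, 1, 3], [[1, 0], [0, 0], [1, 1]]))  # -> 3.0
```

Each cost increment $w^{\mathrm{sp}}(k{+}1) - w^{\mathrm{sp}}(k)$ is paid only for the portion of the file not yet covered by the first $k$ caches, which is what makes the expression a sum of concave terms in $\vec{x}$.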
Note that by construction, the maximum cost is achieved for a network state where all the caches are empty except for the repository allocations; formally, such a state is given by $\vec{x}_0 \triangleq \parentheses{\mathds{1}\parentheses{c \in \Lambda_f(\C)}}_{(c,f) \in \C \times \mathcal{F}} \in \mathcal{X}$, and the cost of the agent at this state is given by \begin{align} \mathrm{cost}_{t, i}(\vec{x}_0) &= \sum_{c \in \Gamma_i(\C)} \sum_{f \in\mathcal{F}} r_{t,c,f} \min\set{w^{\mathrm{sp}}_{t, (c, c')}: c' \in \Lambda_f(\C)} \\&= \sum_{c \in \Gamma_i(\C)} \sum_{f \in\mathcal{F}} r_{t,c,f}\sum^{\phi_{t,i, c}-1}_{k=1} \parentheses{w^{\mathrm{sp}}_{t, (c, \Phi_{t,i, c} (k+1))} - w^{\mathrm{sp}}_{t, (c, \Phi_{t,i, c} (k))}}. \end{align} We can define the caching utility at timeslot $t\in \mathcal{T}$ as the cost reduction due to caching: \begin{align} u_{t, i}(\vec{x}) &\triangleq \mathrm{cost}_{t,i}(\vec{x}_0) - \mathrm{cost}_{t,i}(\vec{x}) \\ &= \sum_{c \in \Gamma_i(\C)} \sum_{f \in\mathcal{F}} r_{t,c,f}\sum^{\phi_{t,i, c}-1}_{k=1} \parentheses{w^{\mathrm{sp}}_{t, (c, \Phi_{t,i, c} (k+1))} - w^{\mathrm{sp}}_{t, (c, \Phi_{t,i, c} (k))}} \min\set{1, \sum^k_{k'=1} x_{\Phi_{t,i, c}\parentheses{k'} , f}}. \end{align} The caching utility is a weighted sum of concave functions with positive weights, and thus concave in $\vec{x} \in \mathcal{X}$. It is straightforward to check that this problem satisfies Assumptions~\ref{a:1}--\ref{a:4}. Also, note that the request batches and the time-varying retrieval costs determine whether Assumption~\ref{a:5} holds; e.g., when request batches are drawn i.i.d. from a fixed unknown distribution, the setting corresponds to the stochastic adversary of Example~\ref{e:example3}. \subsection{Results} Below we describe the experimental setup of the multi-agent cache networks problem, the request traces, and the competing policies.
Our results are summarized as follows: \begin{enumerate} \item Under stationary requests and small batch sizes (leading to large utility deviations from one timeslot to another), \textsc{OHF}{} achieves the same time-averaged utilities as the offline benchmark, whereas \textsc{OSF}, a counterpart policy to \textsc{OHF}{} targeting slot-fairness~\eqref{eq:sf_objective}, diverges and is unable to reach the Pareto front. \item In the Nash bargaining scenario, \textsc{OHF}{} achieves the Nash bargaining solution in all cases, while \textsc{OSF}{} fails when the disagreement points are exigent, i.e., when an agent can guarantee itself a high utility. \item The widely used \textsc{LFU}{} and \textsc{LRU}, as expected, might perform arbitrarily badly w.r.t. fairness, and might not even achieve any point on the Pareto front (hence, they are not only unfair, but also inefficient). \item Fairness comes at a higher price when $\alpha$ is increased or when the number of agents is increased. This observation on the price of fairness provides experimental evidence for previous work~\cite{bertsimas2012efficiency}. \item \textsc{OHF}{} is robust to different network topologies and is able to obtain time-averaged utilities that match the offline benchmark. \item Under non-stationary requests, the \textsc{OHF}{} policy achieves the same time-averaged utilities as the offline benchmark, whereas \textsc{OSF}{} can perform arbitrarily badly, providing allocations that are both unfair and inefficient. \end{enumerate} \paragraph{General Setup.} We consider three synthetic network topologies (\textsc{Cycle}, \textsc{Tree}, and \textsc{Grid}), and two real network topologies (\textsc{Abilene}{} and \textsc{GEANT}). A visualization of the network topologies is provided in Figure~\ref{fig:topologies}. The specifications of the network topologies used across the experiments are provided in Table~\ref{table:topologies} in the Appendix. A repository node permanently stores the entire catalog of files.
The retrieval costs along the edges are sampled u.a.r. from $\set{1,2, \dots, 5}$, except for edges directly connected to a repository node, which are sampled u.a.r. from $\set{6, 7, \dots, 10}$. All the retrieval costs remain fixed for every $t \in \mathcal{T}$. The capacity of each cache is sampled u.a.r. from $\set{1,2, \dots, 5}$, except for the \textsc{Cycle}{} topology, in which each cache has capacity 5. An agent $i \in \mathcal{I}$ has a set of query nodes denoted by $\mathcal{Q}_i \subset \Gamma_i(\C)$, and a query node can generate a batch of requests from a catalog with $\card{\mathcal{F}} = 20$ files. Unless otherwise stated, we consider $u_{\star, \min}= 0.1$ and $u_{\star, \max} = 1.0$. The fairness benchmark refers to the maximizer of the HF objective~\eqref{eq:hf_objective}, and the utilitarian benchmark refers to the maximizer of the HF objective~\eqref{eq:hf_objective} for $\alpha = 0$. \begin{figure}[t] \centering \subcaptionbox*{}{\includegraphics[trim={0 5.3cm 0 0},clip,width=0.6\linewidth]{figures_paper/2players-topology-tree-multiplayer-legend.pdf}}\vspace{-2em}\\ \subcaptionbox{\textsc{Cycle}\label{sfig:1}}{\includegraphics[width=.135\linewidth]{figures_paper/twoplayers_topology_b1.pdf}} \subcaptionbox{\textsc{Tree}-1\label{sfig:2}}{\includegraphics[width=0.135\linewidth]{figures_paper/2players-topology-tree-multiplayer-2.pdf}} \subcaptionbox{\textsc{Tree}-2\label{sfig:3}}{\includegraphics[width=0.135\linewidth]{figures_paper/2players-topology-tree-multiplayer-3.pdf}} \subcaptionbox{\textsc{Tree}-3\label{sfig:4}}{\includegraphics[width=0.135\linewidth]{figures_paper/2players-topology-tree-multiplayer-4.pdf}} \subcaptionbox{\label{sfig:5}\textsc{Grid}}{\includegraphics[width=.135\linewidth]{figures_paper/topology-grid_2d.pdf}} \subcaptionbox{\label{sfig:6}\textsc{Abilene}}{\includegraphics[width=.135\linewidth]{figures_paper/topology-abilene.pdf}}
\subcaptionbox{\label{sfig:7}\textsc{GEANT}}{\includegraphics[width=.135\linewidth]{figures_paper/topology-geant.pdf}} \caption{Network topologies used in experiments.} \label{fig:topologies} \end{figure} \paragraph{Traces.} Each query node generates requests according to the following: \begin{itemize} \item \textbf{Stationary trace (parameters: $\sigma, R, T, F$).} Requests are sampled i.i.d. from a Zipf distribution with exponent $\sigma \in \mathbb{R}_{\geq 0}$ from a catalog of files of size $F$. The requests are grouped into batches of size $R_{t} = R, \forall t \in \mathcal{T}$. \item \textbf{Non-Stationary trace (parameters: $\sigma, R, T, F, D$).} Similarly, requests are sampled i.i.d. from a catalog of $F$ files according to a Zipf distribution with exponent $\sigma \in \mathbb{R}_{\geq 0}$. Every $D$ requests, the popularity distribution is modified in the following fashion: file $f \in \mathcal{F} = \set{1,2, \dots, F}$ assumes the popularity of file $f' = (f +F/2) \mod F$ ($F$ is even). The requests are grouped into batches of size $R_{t} = R, \forall t\in \mathcal{T}$. \end{itemize} Two sampled traces are depicted in Figure~\ref{fig:traces} in the Appendix. Unless otherwise stated, query nodes generate \emph{Stationary} traces. The default parameters are $\sigma = 1.2$, $T = 10^4$, $R = 50$, and $D = 50$. \paragraph{Policies.} We implement the following policies and use them as comparison benchmarks for \textsc{OHF}. \begin{itemize} \item The classic \textsc{LRU}{} and \textsc{LFU}{} policies. A request is routed to the cache with the least retrieval cost that stores the requested file, and that cache's state is updated according to the corresponding policy upon a cache hit. In addition, upon a cache hit, every cache with a lower retrieval cost inserts the requested file, performing an eviction according to the corresponding policy.
This corresponds to the popular path replication algorithm~\cite{pathreplication,Ioannidis16}, equipped with \textsc{LRU}{} or \textsc{LFU}{}, applied to our setting. \item Online slot-fairness (\textsc{OSF}) policy: an instance of \textsc{OHF}{} in Algorithm~\ref{alg:primal_dual_ogaogd} configured with a dual (conjugate) subspace $\Theta = \set{(-1)_{i \in \mathcal{I}}}$ comprised of a single point (i.e., taking $\alpha \to 0$). This configuration strips away the dual policy in Algorithm~\ref{alg:primal_dual_ogaogd}. The revealed utilities at timeslot $t \in \mathcal{T}$ are the $\alpha$-fairness transformed utilities $\u'_t(\,\cdot\,) = (f_{\alpha} \parentheses{u_{t,i} (\,\cdot\,)})_{i \in \mathcal{I}}$. This policy is the slot-fairness~\eqref{eq:sf_objective} counterpart of \textsc{OHF}, where the primal allocations are determined by the same self-confident learning rate schedule as in \textsc{OHF}{} for a fair comparison. This policy is a no-regret policy (see Lemma~\ref{lemma:ogd_regret} in Appendix) w.r.t. the slot-fairness benchmark~\eqref{eq:sf_objective} for some $\alpha \in \mathbb{R}_{\geq 0}$. \end{itemize} \paragraph{Static analysis of symmetry-breaking parameters.} We start with a numerical investigation of the potential caching gains, and how these are affected by the fairness parameter~$\alpha$. In Figure~\ref{fig:static_exploration}, we consider the \textsc{Cycle}{} topology and different values of $\alpha \in [0,2]$. We show the impact on the fairness benchmark of varying the request patterns ($\sigma \in \set{0.6, 0.8, 1.0, 1.2}$) for agent~2 under the \emph{Stationary} trace in Fig.~\ref{fig:static_exploration}~(a), and of varying the retrieval costs between agent 1's cache and the repository ($w_{(1,3)} \in [2.5,4]$) in Fig.~\ref{fig:static_exploration}~(b). In Figure~\ref{fig:static_exploration}~(a), we observe that decreasing the skewness of the popularity distribution decreases the utility of agent~2, as reflected by the downward shift of the Pareto front.
We note that, as long as the file popularity distribution at agent~2 is close to the one at agent~1 ($\sigma = 1.2$), different values of $\alpha$ still provide similar utilities. However, in highly asymmetric scenarios, different values of $\alpha$ lead to clearly distinct utilities for each agent. We also note that higher values of $\alpha$ guarantee more fairness, thereby increasing the utility of agent~2. Similarly, in Figure~\ref{fig:static_exploration}~(b), we observe that increasing the retrieval cost for agent~1 decreases the utility achieved by the same agent, as reflected by the leftward shift of the Pareto front; moreover, increasing the retrieval costs (higher asymmetry) highlights the difference between different values of~$\alpha$. \begin{figure}[t] \centering \subcaptionbox{\label{sfig:static_exploration2}}{\includegraphics[width=0.35\linewidth]{figures_paper/pareto_explore_exponents.pdf}} \subcaptionbox{\label{sfig:static_exploration1}}{\includegraphics[width=0.35\linewidth]{figures_paper/pareto_explore_retrievalcosts.pdf}} \caption{Pareto front and fairness benchmark's utilities for different values of $\alpha \in [0,2]$ under different request patterns~(a) ($\sigma \in \set{0.6, 0.8, 1.0, 1.2}$) for agent~2, and different retrieval costs~(b) between agent~1's cache and the repository ($w_{(1,3)} \in [2.5,4.0]$). } \label{fig:static_exploration} \end{figure} \paragraph{Online analysis of symmetry-breaking parameters.} In Figure~\ref{fig:online_setting1}, we consider the \textsc{Cycle}{} topology, and different values of $\alpha \in \set{0,1,2}$. In Figure~\ref{fig:online_setting1}~(a)--(b), we consider the retrieval cost $w_{(1,3)}= 3.5$ between agent~1's cache node and the repository node. In Figure~\ref{fig:online_setting1}~(c)--(d), the query node of agent~1 generates a \emph{Stationary} trace ($\sigma = 1.2$) and the query node of agent~2 generates a \emph{Stationary} trace ($\sigma = 0.6$). We consider two fixed request batch sizes $R \in \set{1, 50}$.
We observe in Figures~\ref{fig:online_setting1}~(a) and~(c) that, for the batch size $R=1$, \textsc{OHF}{} approaches the fairness benchmark's utilities for different values of $\alpha$, whereas \textsc{OSF}{} diverges for values of $\alpha \neq 0$. For the increased request batch size $R = 50$, \textsc{OHF}{} and \textsc{OSF}{} exhibit similar behavior. This is expected under stationary utilities; increasing the batch size reduces the variability in the incurred utilities at every timeslot, and the horizon-fairness and slot-fairness objectives become closer, yielding similar allocations. Note that this observation implies that \textsc{OSF}{} is only able to converge for utilities with low variability, which is rarely the case in realistic scenarios. The \textsc{LFU}{} policy outperforms \textsc{LRU}{}, but neither policy approaches the Pareto front; thus, the allocations selected by such policies are inefficient and unfair. \begin{figure}[t] \centering \subcaptionbox*{}{\includegraphics[trim={0 7.4cm 0 0},clip,width=0.55\linewidth]{figures_paper/two_players_b_50-legend.pdf}}\vspace{-2em}\\ \subcaptionbox{$R=1$ \label{sfig:online_setting11}}{\includegraphics[width=0.24\linewidth]{figures_paper/2players-online-alphas-1.pdf}} \subcaptionbox{$R=50$\label{sfig:online_setting12}}{\includegraphics[width=0.24\linewidth]{figures_paper/2players-online-alphas-50.pdf}} \subcaptionbox{$R=1$\label{sfig:online_setting21}}{\includegraphics[width=0.24\linewidth]{figures_paper/2players-online-exponents-1.pdf}} \subcaptionbox{$R=50$\label{sfig:online_setting22}}{\includegraphics[width=0.24\linewidth]{figures_paper/2players-online-exponents-50.pdf}} \caption{Time-averaged utilities of policies \textsc{OHF}, \textsc{OSF}, \textsc{LRU}, and \textsc{LFU}{} under \textsc{Cycle}{} topology. Subfigures~(a)--(b) are obtained under retrieval cost $w_{(1,3)} = 3.5$ for agent 1's query node. Subfigures~(c)--(d) are obtained when agent 2's query node generates \emph{Stationary trace} ($\sigma = 0.6$).
The markers represent the iterations in the set $\set{100, 200, \dots, 10^4}$. } \label{fig:online_setting1} \end{figure} \paragraph{Nash bargaining.} In Figure~\ref{fig:nash_bargainig}, we consider the \textsc{Cycle}{} topology and $\alpha=1$ (Nash bargaining). We select different disagreement utilities for agent~2 in $\set{0.0,0.5,0.7,0.75}$, i.e., different utility values agent~2 expects to guarantee itself even in the absence of cooperation. Note how higher values of the disagreement utility for agent~2 lead to higher utilities for this agent at the fairness benchmark. We select $u_{\star,\min} = 0.01$. We observe that for the smaller batch size $R = 1$, \textsc{OHF}{} approaches the same utilities achieved by the fairness benchmark for different disagreement points, whereas \textsc{OSF}{} fails to approach the Pareto front. Similarly, for the larger batch size $R = 50$, \textsc{OHF}{} approaches the fairness benchmark for different disagreement points, and the Pareto front is reached faster than with the batch size $R=1$. \textsc{OSF}{} diverges for non-zero disagreement points and the large batch size $R=50$, because the disagreement utility may initially exceed the utility achieved by the agent, so the allocations selected by the agent can easily yield utilities outside the domain of definition of the $\alpha$-fairness function ($\mathbb{R}^{\mathcal{I}}_{> 0}$ for $\alpha =1$), i.e., the selected utilities at timeslot $t\in\mathcal{T}$ for agent~$i\in\mathcal{I}$ can fall below the disagreement utility ($u_{t,i}(\vec{x}_t) - u_{t,i}< 0$).
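The domain issue above can be made concrete with a minimal Python sketch (our own illustration; the helper name \texttt{f\_alpha} and the numeric values are hypothetical, not taken from the evaluation code): the $\alpha$-fairness function is defined only on positive utilities, so a disagreement-shifted utility that is non-positive has no valid image for $\alpha = 1$.

```python
import math

def f_alpha(u: float, alpha: float) -> float:
    """alpha-fairness function: log(u) for alpha = 1, else (u^(1-alpha) - 1)/(1 - alpha)."""
    if u <= 0.0:
        raise ValueError("f_alpha is defined only for u > 0")
    if alpha == 1.0:
        return math.log(u)  # Nash bargaining case
    return (u ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

# Hypothetical slot utility and disagreement utility for one agent.
u_slot, u_disagreement = 0.6, 0.75
try:
    f_alpha(u_slot - u_disagreement, alpha=1.0)
except ValueError:
    # A slot-fairness policy feeds such shifted per-slot utilities to f_alpha
    # directly; a horizon-fairness policy only evaluates f_alpha at
    # time-averaged utilities, which stay positive at the benchmark.
    print("shifted utility <= 0: alpha-fairness undefined")
```

This mirrors the divergence observed for \textsc{OSF}{} with non-zero disagreement points: individual slot utilities can fall below the disagreement utility even when the time-averaged utility does not.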
\begin{figure}[t] \centering \subcaptionbox*{}{\includegraphics[trim={0 7.4cm 0 0},clip,width=0.55\linewidth]{figures_paper/two_players_b_50-legend.pdf}}\vspace{-2em}\\ \subcaptionbox{$R=1$}{\includegraphics[width=.24\linewidth]{figures_paper/two_players_b_1.pdf}} \subcaptionbox{$R=50$}{\includegraphics[width=.24\linewidth]{figures_paper/two_players_b_50.pdf}} \caption{Time-averaged utilities obtained for policies \textsc{OHF}, \textsc{OSF}, \textsc{LRU}, and \textsc{LFU}{} for batch sizes (a)~$R = 1$ and (b)~$R=50$, under \textsc{Cycle}{} network topology. The markers represent the iterations in the set $\set{100, 200, \dots, 10^4}$. } \label{fig:nash_bargainig} \end{figure} \begin{figure}[t] \centering \subcaptionbox*{}{\includegraphics[trim={0 7.5cm 0 0},clip,width=0.4\linewidth]{figures_paper/2players-topology-tree-multiplayer-x-alpha-legend.pdf}}\vspace{-2em}\\ \subcaptionbox{$\alpha=1$\label{sfig:multiple_player11}}{\includegraphics[width=.24\linewidth]{figures_paper/2players-topology-tree-multiplayer-x-alpha-1.pdf}} \subcaptionbox{$\alpha=2$\label{sfig:multiple_player12}}{\includegraphics[width=.24\linewidth]{figures_paper/2players-topology-tree-multiplayer-x-alpha-2.pdf}} \subcaptionbox{$\alpha=3$\label{sfig:multiple_player13}}{\includegraphics[width=.24\linewidth]{figures_paper/2players-topology-tree-multiplayer-x-alpha-3.pdf}} \subcaptionbox{PoF\label{sfig:multiple_player14}}{\includegraphics[width=.3\linewidth]{figures_paper/2players-topology-tree-multiplayer-x-alpha-POF.pdf}} \caption{Subfigures~(a)--(c) provide the average utility for different agents obtained by \textsc{OHF}{}, fairness benchmark~(OPT for $\alpha \neq 0$), and utilitarian benchmark (OPT for $\alpha =0$); and Subfigure~(d) provides the PoF for different values of $\alpha \in \set{0, 1, 2, 3}$ under an increasing number of agents in~$\set{2,3,4}$ and \textsc{Tree}~1--3 network topology.} \label{fig:multiple_player1} \end{figure} \begin{figure}[t] \centering
\subcaptionbox*{}{\includegraphics[trim={0 7.1cm 0 0},clip,width=0.6\linewidth]{figures_paper/2players-topology-tree-multiplayer-x-player-4-legend.pdf}}\vspace{-2em}\\ \subcaptionbox{}{\includegraphics[width=0.26\linewidth]{figures_paper/2players-topology-tree-multiplayer-x-player-2.pdf}} \subcaptionbox{}{\includegraphics[width=0.26\linewidth]{figures_paper/2players-topology-tree-multiplayer-x-player-3.pdf}} \subcaptionbox{}{\includegraphics[width=0.26\linewidth]{figures_paper/2players-topology-tree-multiplayer-x-player-4.pdf}} \caption{Subfigures (a)--(c) provide the time-averaged utility across different agents obtained by the \textsc{OHF}{} policy and \textsc{OPT}{} for $\alpha = 2$ under an increasing number of agents in $\set{2,3,4}$ and \textsc{Tree}~1--3 network topology.} \label{fig:multiple_player2} \end{figure} \begin{figure} \centering \subcaptionbox{\label{sfig:multiple_topologies}}{\includegraphics[width=0.28\linewidth]{figures_paper/2players-topology-topologies.pdf}} \subcaptionbox{\label{sfig:adversarial_trace}}{\includegraphics[width =0.32\linewidth]{figures_paper/2players-online-adversarial-50.pdf}} \caption{Subfigure~(a) provides the average utility of \textsc{OHF}{} and fairness benchmark under network topologies \textsc{Tree}, \textsc{Grid}, \textsc{Abilene}, \textsc{GEANT}, \emph{Stationary} trace ($\sigma \in \set{0.6, 0.8, 1.2}$), and $\alpha =3$. Subfigure~(b) provides the time-averaged utilities obtained for \textsc{OHF}, \textsc{OSF}{} and batch size $R = 50$ for $t \in \mathcal{T}$, under network topology Tree~(a) and \emph{Non-stationary} trace. The markers represent the iterations in the set $\set{100, 200, \dots, 10^4}$.} \label{fig:multiple_topologies_adversarial_trace} \end{figure} \paragraph{Impact of agents on the price of fairness.} In Figures~\ref{fig:multiple_player1} and~\ref{fig:multiple_player2}, we consider the \textsc{Tree}{}~1--3 topology, $\alpha \in \set{1, 2, 3}$, and number of agents in $\set{2, 3, 4}$.
Agents' query nodes generate \emph{Stationary} traces~($\sigma \in \set{1.2,0.8,0.6}$). In Figures~\ref{fig:multiple_player1}~(a)--(c), we observe that, as the number of agents increases, the division of utilities differs between the fairness benchmark and the utilitarian benchmark; moreover, this difference is more evident for larger values of $\alpha$. Figure~\ref{fig:multiple_player1}~(d) provides the price of fairness, and we observe that the price of fairness increases with the number of agents and $\alpha$. Nonetheless, under the different settings the price of fairness remains below $4\%$, i.e., we experience at most a $4\%$ drop in social welfare to provide a fair utility distribution across the different agents. Figure~\ref{fig:multiple_player2} gives the time-averaged utilities obtained by running \textsc{OHF}{} for $\alpha =2$. We observe that the utilities obtained by \textsc{OHF}{} quickly converge to the utilities obtained by the fairness benchmark. In this figure, we also highlight the difference between the utilities achieved by the fairness benchmark and the utilitarian benchmark, which is reflected by the increasing utility gap for a higher number of participating agents. \paragraph{Different network topologies.} In Figure~\ref{fig:multiple_topologies_adversarial_trace}~(a), we consider the network topologies \textsc{Tree}, \textsc{Grid}, \textsc{Abilene}, \textsc{GEANT}{} under the \emph{Stationary} trace ($\sigma \in \set{0.6, 1.0, 1.2}$) and $\alpha =3$. \textsc{OHF}{} achieves the same utilities as the fairness benchmark across the different topologies; we note that for larger network topologies the utility achieved by the different agents is higher due to an increase in available resources. \paragraph{Impact of non-stationarity.} In Figure~\ref{fig:multiple_topologies_adversarial_trace}~(b), we consider the \textsc{Cycle}{} topology and $\alpha = 2$.
The query node of agent~1 generates the \emph{Non-Stationary} trace, while the query node of agent~2 generates a shuffled \emph{Non-Stationary} trace, i.e., we remove the non-stationarity from the trace for agent~2 while preserving the overall popularity of the requests. Therefore, on average the agents are symmetric and experience the same utilities. We observe in Figure~\ref{fig:multiple_topologies_adversarial_trace}~(b) that this is indeed the case for the \textsc{OHF}{} policy; however, because \textsc{OSF}{} aims to ensure fairness across the different timeslots, the agents are not treated as symmetric and the average utilities deviate from the Pareto front (not efficient). \textsc{OSF}{} favors agent~1 by increasing its utility by $20\%$ compared to the utility of agent~2. \section{Conclusion and Future Work} \label{s:conclusion} In this work, we proposed a novel \textsc{OHF}{} policy that achieves horizon-fairness in dynamic resource allocation problems under different adversarial perturbations. We demonstrated the applicability of this policy in virtualized caching systems where different agents can cooperate to increase their caching gain. Our work paves the way for several interesting next steps. A future research direction is to consider decentralized versions of the policy under which each agent selects an allocation with limited information exchange across agents. For the application to virtualized caching systems, the message exchange techniques in~\cite{Ioannidis16,Li2021} to estimate subgradients can be exploited. Another important future research direction is to bridge the horizon-fairness and slot-fairness criteria to target applications where the agents are interested in ensuring fairness within a target time window. We observed that \textsc{OHF}{} can encapsulate the two criteria; however, it remains an open question whether a policy can smoothly transition between them.
A final interesting research direction is to consider a limited feedback scenario where only part of the utility is revealed to the agents. Our policy could be extended to this setting through gradient estimation techniques~\cite{Hazanoco2016}. \section{Technical Lemmas and Definitions} \subsection{Convex Conjugate} \begin{definition} \label{def:conjugate} Let $F: \U \subset \mathbb{R}^\mathcal{I} \to \mathbb{R} \cup \{-\infty,+\infty\}$ be a function. Define ${F}^\star: \mathbb{R}^\mathcal{I} \to \mathbb{R} \cup \{-\infty,+\infty\}$ by \begin{align} {F}^\star (\vec \theta ) = \sup_{\u \in \U} \left\{\u \cdot \vec{\theta} - F(\u)\right\}, \end{align} for $\vec{\theta} \in \mathbb{R}^\mathcal{I}$. This is the \emph{convex conjugate} of $F$. \end{definition} \subsection{Convex Conjugate of $\alpha$-Fairness Function} \begin{lemma} \label{lemma:convex_conjugate} Let $\U = \brackets{u_{\star, \min}, u_{\star, \max}}^\mathcal{I} \subset \mathbb{R}^\mathcal{I}_{> 0}$, $\Theta = \brackets{-1/u^\alpha_{\star, \min},- 1/u^\alpha_{\star, \max}}^\mathcal{I} \subset \mathbb{R}^\mathcal{I}_{<0}$, and $F_\alpha: \U \to \mathbb{R}$ be an $\alpha$-fairness function~\eqref{e:alpha-fair}. The convex conjugate of $-F_\alpha$ is given by \begin{align} \parentheses{-F_{\alpha}}^\star(\vec \theta) &= \begin{cases}\displaystyle \sum_{i \in \mathcal{I}} \frac{\alpha(-\theta_i)^{1-1/\alpha} - 1}{1-\alpha} & \text{ for $\alpha \in \mathbb{R}_{\geq 0} \setminus \set{1}$}, \\ \displaystyle \sum_{i \in \mathcal{I}} - \log(-\theta_i) - 1 &\text{ for $\alpha = 1$}, \end{cases} \end{align} where $\vec{\theta} \in \Theta$. 
\end{lemma} \begin{proof} The convex conjugate of $-f_{\alpha}(u) \triangleq -\frac{u^{1-\alpha} - 1}{1-\alpha}$ for $u \in \brackets{u_{\star,\min}, u_{\star,\max}}$ and $\alpha \in \mathbb{R}_{\geq 0}\setminus \set{1}$ is given by \begin{align} \parentheses{-f_{\alpha}}^\star(\theta) = \max_{u \in \brackets{u_{\star,\min}, u_{\star,\max}}} \set{u \theta +\frac{u^{1-\alpha} - 1}{1-\alpha}}.\label{e:sup-fenchel} \end{align} We evaluate the derivative to characterize the maximum of the r.h.s. term in the above equation \begin{align} \frac{\partial}{\partial u} \parentheses{u \theta +\frac{u^{1-\alpha} - 1}{1-\alpha}} = \theta + \frac{1}{u^\alpha}. \end{align} The function $\theta + \frac{1}{u^\alpha}$ is a decreasing function in $u$; thus $\theta + \frac{1}{u^\alpha} \geq 0$ when $u \leq \parentheses{-\frac{1}{\theta}}^{\frac{1}{\alpha}}$, and $\theta + \frac{1}{u^\alpha} < 0$ otherwise. The maximum is achieved at $u = \parentheses{-\frac{1}{\theta}}^{\frac{1}{\alpha}}$, which lies in $\brackets{u_{\star,\min}, u_{\star,\max}}$ for every $\theta \in \brackets{-1/u^\alpha_{\star, \min},-1/u^\alpha_{\star, \max}}$. Substituting this maximizer into Eq.~\eqref{e:sup-fenchel} gives \begin{align} \parentheses{-f_{\alpha}}^\star(\theta) &= \frac{\alpha(-\theta)^{1-1/\alpha} - 1}{1-\alpha} &\text{for}\qquad \theta \in \brackets{-1/u^\alpha_{\star, \min},-1/u^\alpha_{\star, \max}}.\label{e:fenchel-conjugate1} \end{align} Moreover, it is easy to check that the same argument holds for $f_{1} (u) = \log(u)$ and we have \begin{align} \parentheses{-f_{1}}^\star(\theta) &= -1 - \log(-\theta)&\text{for}\qquad \theta \in \brackets{-{1}/{u_{\star, \min}},-{1}/{u_{\star, \max}}}.\label{e:fenchel-conjugate2} \end{align} The convex conjugate of $-F_\alpha(\u) = -\sum_{i \in \mathcal{I}} f_{\alpha} (u_i)$ for $\u \in \U $, using Eq.~\eqref{e:fenchel-conjugate1} and Eq.~\eqref{e:fenchel-conjugate2}, is given by \begin{align} \parentheses{-F_{\alpha}}^\star(\vec \theta) = \sum_{i \in \mathcal{I}} \parentheses{-f_{\alpha}}^\star( \theta_i) &= \begin{cases}\displaystyle \sum_{i \in \mathcal{I}} \frac{\alpha(-\theta_i)^{1-1/\alpha} - 1}{1-\alpha} & \text{ for
$\alpha \in \mathbb{R}_{\geq 0} \setminus \set{1}$}, \\ \displaystyle \sum_{i \in \mathcal{I}} - \log(-\theta_i) - 1 &\text{ for $\alpha = 1$}, \end{cases} \end{align} for $\vec{\theta} \in \Theta$, because $F_\alpha(\u)$ is separable in $\u \in \U$. \end{proof} \subsection{Convex Biconjugate of $\alpha$-Fairness Functions} The following lemma provides a stronger condition on $\vec \theta$ compared to \cite[Lemma 2.2]{agrawal2014bandits}, i.e., we restrict $\vec{\theta} \in \Theta$ instead of $\norm{\vec{\theta}}_{\star} \leq L$ where $L \geq \norm{ \nabla_{\u} F_\alpha(\u)}_{\star}$ for all $\u \in \U$ and $\norm{\,\cdot\,}_\star$ is the dual norm of $\norm{\,\cdot\,}$. \begin{lemma} \label{l:recover_f} Let $\U = \brackets{u_{\star,\min}, u_{\star,\max}}^\mathcal{I} \subset \mathbb{R}^\mathcal{I}_{> 0}$, $\Theta = \brackets{-1/u_{\star, \min}^\alpha,- 1/u_{\star, \max}^\alpha}^\mathcal{I}\subset \mathbb{R}^\mathcal{I}_{<0}$, and $F_\alpha: \U \to \mathbb{R}$ be an $\alpha$-fairness function~\eqref{e:alpha-fair}. The function $F_{\alpha}$ can always be recovered from the convex conjugate $\parentheses{-F_\alpha}^{\star}$, i.e., \begin{align} F_{\alpha}(\u) = \min_{\vec{\theta} \in \Theta} \left\{ \parentheses{-F_{\alpha}}^\star(\vec \theta) - \vec \theta \cdot \u\right\}, \end{align} for $\u \in \U$. \end{lemma} \begin{proof} This proof follows the same lines as the proof of~\cite[Lemma 2.2]{agrawal2014bandits}. Since $\vec u \in \U$, the gradient of $F_\alpha$ at the point $\vec u$ is given by $\nabla_{\u} F_\alpha(\vec u) = \brackets{1/{u^{\alpha}_i}}_{i \in \mathcal{I}} \in -\Theta =\brackets{1/u_{\star, \max}^\alpha, 1/u_{\star, \min}^\alpha}^\mathcal{I}$.
Moreover, it holds \begin{align} \min_{\vec{\theta} \in \Theta} \left\{ \parentheses{-F_\alpha}^\star(\vec\theta) - \vec \theta \cdot \u\right\} &= \min_{\vec{\theta} \in \Theta} \left\{ \max_{\vec u' \in \U}\left\{ \vec \theta\cdot \u' + F_\alpha(\u') \right\}- \vec \theta \cdot \u\right\}\\ &= \max_{\vec u' \in \U} \min_{\vec{\theta} \in \Theta} \left\{ \vec \theta\cdot \u' + F_\alpha(\u') - \vec \theta \cdot \u\right\}. &\text{Minmax theorem} \label{e:minmax_eq} \end{align} Next, we bound the inner minimization \begin{align*} \min_{\vec{\theta} \in \Theta} \left\{ \vec \theta\cdot \u' + F_\alpha(\u') - \vec \theta \cdot \u\right\} &= \min_{\vec{\theta} \in \Theta} \left\{F_\alpha(\u') + \vec \theta \cdot (\u' - \u)\right\}\\ &\leq F_\alpha(\u') -\nabla F_\alpha(\u) \cdot (\u' - \u) &\text{Because $-\nabla F_\alpha(\u) \in \Theta$}\\ &\leq F_\alpha(\vec u). &\text{Use concavity of $F_\alpha$} \end{align*} Equality is achieved when $\u' = \u$, and the maximum value in \eqref{e:minmax_eq} is attained at this value. We conclude the proof. \end{proof} \subsection{Online Gradient Descent (OGD) with Self-Confident Learning Rates} Lemma~\ref{lemma:ogd_regret} provides a regret guarantee for OGD that requires no prior knowledge of the time horizon $T$ or of a bound on the subgradients' norms for $t \in \mathcal{T}$. This adopts the idea of~\cite{AUER200248}, where such learning schemes are termed \emph{self-confident}. \begin{lemma} \label{lemma:ogd_regret} Consider a convex set $\mathcal{X}$, a sequence of $\sigma$-strongly convex functions $f_t: \mathcal{X} \to \mathbb{R}$ with subgradient $\vec g_t \in \partial f_t(\vec{x}_t)$ at $\vec{x}_t$, and OGD update rule $\vec{x}_{t+1} = \Pi_{\mathcal{X}} \parentheses{\vec{x}_{t} - \eta_t \vec g_t } = \arg\min_{\vec{x} \in \mathcal{X}} \norm{\vec{x} - \parentheses{\vec{x}_{t} - \eta_t\vec g_t}}_2$ initialized at $\vec{x}_1 \in \mathcal{X}$. Let $\diam {\mathcal{X}} \triangleq \max\set{\norm{\vec{x} - \vec{x}'}_2: \vec{x}, \vec{x}' \in \mathcal{X} }$.
Selecting the learning rates as $\vec \eta: \mathcal{T} \to \mathbb{R}$ such that $\eta_{t} \leq \eta_{t-1}$ for all $t > 1$ gives the following regret guarantee against a fixed decision $\vec{x} \in \mathcal{X}$: \begin{align} \sum_{t \in \mathcal{T}} f_t(\vec{x}_t)-f_t(\vec{x}) \leq \mathrm{diam}^2(\mathcal{X}) \sum^T_{t=1} \parentheses{\frac{1}{\eta_t} - \frac{1}{\eta_{t-1}} - \sigma} + \sum^T_{t=1}\eta_t \norm{\vec g_t}^2_2. \end{align} \begin{itemize} \item When $\sigma >0$, selecting the learning rate schedule $\eta_t = \frac{1}{\sigma t}$ for $t \in \mathcal{T}$ gives \begin{align} \sum_{t \in \mathcal{T}} f_t(\vec{x}_t)-f_t(\vec{x}) &\leq \sum^T_{t=1}\frac{\norm{\vec g_t}^2_2}{t\sigma} = \BigO{\log(T)}. \end{align} \item When $\sigma = 0$, selecting the learning rate schedule $\eta_t = \frac{\diam{\mathcal{X}}}{\sqrt{ \sum^t_{s=1} \norm{\vec g_s}^2_2}}$ for $t \in \mathcal{T}$ gives \begin{align} \sum_{t \in \mathcal{T}} f_t(\vec{x}_t)-f_t(\vec{x}) &\leq 1.5 \diam{\mathcal{X}} \sqrt{ \sum_{t\in \mathcal{T}} \norm{\vec g_t}^2_2} = \BigO{\sqrt{T}}. \end{align} \end{itemize} \end{lemma} \begin{proof} This proof follows the same lines as the proof in~\cite{Hazanoco2016}. We assume that neither a bound on the gradients nor the time horizon $T$ is known beforehand. Take a fixed $\vec{x} \in \mathcal{X}$. Applying the definition of $\sigma$-strong convexity to the pair of points $\vec{x}_t$ and $\vec{x}$, we have \begin{align} 2 \parentheses{f_t(\vec{x}_t) - f_t(\vec{x}) }\leq 2 \vec g_t \cdot (\vec{x}_t - \vec{x}) -\sigma \norm{\vec{x}_t - \vec{x}}^2_2.\label{eq:ogd1} \end{align} The Pythagorean theorem implies \begin{align} \norm{\vec{x}_{t+1} - \vec{x}}_2^2 = \norm{\Pi_{\mathcal{X}}\parentheses{\vec{x}_t - \eta_t \vec g_t} - \vec{x}}_2^2 \leq \norm{\vec{x}_t - \eta_t \vec g_t - \vec{x}}_2^2. \end{align} Expanding the r.h.s.
term gives \begin{align} \norm{\vec{x}_{t+1} - \vec{x}}_2^2 &\leq \norm{\vec{x}_t - \vec{x}}_2^2 + \eta^2_t \norm{\vec g_t}^2_2 - 2\eta_t \vec g_t \cdot \parentheses{\vec{x}_t - \vec{x}}, \\ 2\vec g_t \cdot \parentheses{\vec{x}_t - \vec{x}}&\leq\frac{ \norm{\vec{x}_t - \vec{x}}_2^2 - \norm{\vec{x}_{t+1} - \vec{x}}_2^2}{\eta_t} + \eta_t \norm{\vec g_t}^2_2.\label{eq:ogd2} \end{align} Combining Eq.~\eqref{eq:ogd1} and Eq.~\eqref{eq:ogd2} and summing from $t =1$ to $t = T$: \begin{align*} 2\sum^T_{t =1} f_t(\vec{x}_t) - f_t(\vec{x}) &\leq \sum^T_{t=1} \frac{ \norm{\vec{x}_t - \vec{x}}_2^2 (1-\sigma \eta_t) - \norm{\vec{x}_{t+1} - \vec{x}}_2^2}{\eta_t} + \sum^T_{t=1}\eta_t \norm{\vec g_t}^2_2 \\ &\leq \sum^T_{t=1} \norm{\vec{x}_t - \vec{x}}_2^2 \parentheses{\frac{1}{\eta_t} - \frac{1}{\eta_{t-1}} - \sigma} + \sum^T_{t=1}\eta_t \norm{\vec g_t}^2_2 &\text{$\frac{1}{\eta_0} \triangleq 0$}\\ &\leq \mathrm{diam}^2(\mathcal{X}) \parentheses{\frac{1}{\eta_T} -\sigma T } + \sum^T_{t=1}\eta_t \norm{\vec g_t}^2_2.&\text{Telescoping series} \label{eq:generic_bound} \end{align*} \noindent When $\sigma > 0$ and $\eta_t = \frac{1}{\sigma t}$, from Eq.~\eqref{eq:generic_bound} we have \begin{align} \sum^T_{t =1} f_t(\vec{x}_t) - f_t(\vec{x}) \leq 0 + \sum^T_{t=1} \frac{\norm{\vec g_t}^2_2}{2\sigma t} \leq \max_{t \in \mathcal{T}} \set{\norm{\vec g_t}^2_2} \sum^T_{t =1} \frac{1}{2\sigma t} = \frac{\max_{t \in \mathcal{T}} \set{\norm{\vec g_t}^2_2}}{2\sigma} \mathrm{H}_T = \BigO{\log(T)}, \end{align} where $\mathrm{H}_T$ is the $T$-th harmonic number.
\noindent When $\sigma = 0$ and $\eta_t = \frac{\diam{\mathcal{X}}}{\sqrt{ \sum^t_{s=1} \norm{\vec g_s}^2_2}}$, from Eq.~\eqref{eq:generic_bound} we have \begin{align} \sum^T_{t =1} f_t(\vec{x}_t) - f_t(\vec{x}) &\leq \frac{\diam{\mathcal{X}}}{2} {\sqrt{ \sum^T_{t=1} \norm{\vec g_t}^2_2}} + \frac{ \diam{\mathcal{X}} }{2} \sum^T_{t=1} \frac{\norm{\vec g_t}^2_2}{\sqrt{ \sum^t_{s=1} \norm{\vec g_s}^2_2}}\\ &\leq 1.5 \diam{\mathcal{X}}{\sqrt{ \sum^T_{t=1} \norm{\vec g_t}^2_2}} = \BigO{\sqrt{T}}. \end{align} The last inequality is obtained using \cite[Lemma~3.5]{AUER200248}, i.e., $\textstyle{\sum^T_{t=1} \frac{\abs{a_t}}{\sqrt{\sum^t_{s=1} \abs{a_s}}}} \leq 2 \sqrt{\sum^T_{t=1} \abs{a_t}}$. This concludes the proof. \end{proof} \subsection{Saddle-Point Problem Formulation of $\alpha$-Fairness} \begin{lemma} \label{lemma:properties_saddle_function} \label{l:saddle_problem} Let $\mathcal{X}$ be a convex set, $\U = \brackets{u_{\star,\min}, u_{\star,\max}}^\mathcal{I} \subset \mathbb{R}^\mathcal{I}_{> 0}$, $u_i: \mathcal{X} \to \U$ be a concave function for every $i \in \mathcal{I}$, $\Theta = \brackets{-1/u_{\star, \min}^\alpha,- 1/u_{\star, \max}^\alpha}^\mathcal{I}\subset \mathbb{R}^\mathcal{I}_{<0}$, and $\Psi_\alpha: \Theta \times \mathcal{X} \to \mathbb{R} $ be a function given by \begin{align} \Psi_{\alpha} (\vec{\theta}, \vec{x}) \triangleq \parentheses{-F_\alpha}^\star(\vec \theta) - \vec \theta \cdot \vec u(\vec{x}). \end{align} The following holds: \begin{itemize} \item The solution of the saddle-point problem formed by $\Psi_{\alpha}$ is a maximizer of the $\alpha$-fairness function \begin{align} \max_{\vec{x} \in \mathcal{X}} \min_{\vec{\theta} \in \Theta} \Psi_{\alpha} (\vec{\theta}, \vec{x}) = \max_{\vec{x} \in \mathcal{X}} F_{\alpha} (\u(\vec{x})).\label{e:sp-1} \end{align} \item The function $\Psi_{\alpha}: \Theta \times \mathcal{X} \to \mathbb{R} $ is concave over $\mathcal{X}$.
\item The function $\Psi_{\alpha}: \Theta \times \mathcal{X} \to \mathbb{R} $ is $ \frac{u_{\star, \min}^{1+1/\alpha}}{\alpha }$-strongly convex over $\Theta$ w.r.t. $\norm{\,\cdot\,}_2$ for $\alpha > 0$. \end{itemize} \end{lemma} \begin{proof} Equation~\eqref{e:sp-1} is a direct result of Lemma~\ref{l:recover_f}. The function $\Psi_{\alpha}$ is concave over $\mathcal{X}$ because $ - \vec \theta \cdot \vec u(\vec{x})$ is a weighted sum of concave functions with non-negative weights. To prove the strong convexity of $\Psi_{\alpha}$ w.r.t. $\norm{\,\cdot\,}_2$, a sufficient condition~\cite[Lemma~14]{Shalev2007} is given by ${\vec{\theta}'}^T \parentheses{\nabla_{{\vec{\theta}}}^2 \Psi_{\alpha} ({\vec{\theta}}, \vec{x})} {\vec{\theta}}' \geq \sigma \norm{ {\vec{\theta}}'}^2_2$ for all $\vec{\theta}, {\vec{\theta}}' \in \Theta$, and it holds \begin{align} {\vec{\theta}'}^T \parentheses{\nabla_{{\vec{\theta}}}^2 \Psi_{\alpha} ({\vec{\theta}}, \vec{x})} {\vec{\theta}}' = \sum_{i \in \mathcal{I}} {\theta_i'}^2 \frac{\partial^2}{\partial \theta_i^2} \parentheses{-F_{\alpha}}^\star({\vec{\theta}}) = \sum_{i \in \mathcal{I}} \frac{{\theta'_i}^2}{\alpha (-\theta_i)^{1+1/\alpha}} \geq \frac{u_{\star, \min}^{1+1/\alpha}}{\alpha } \norm{\vec{\theta}'}^2_2. \end{align} This concludes the proof. \end{proof} \section{Proof of Theorem~\ref{theorem:impossibility}} \label{proof:impossibility} \begin{proof} Consider two players $\mathcal{I} = \set{1,2}$ and the allocation set $\mathcal{X} = [-1,1]$ for all $t \in \mathcal{T}$. We define $\gamma_T \in [0.4,1]$ and $\psi_T\triangleq\frac{1}{T}\sum^{\gamma_T T}_{t=1} {x_t}$. We assume w.l.o.g. that $\gamma_T T$ is a natural number. We consider two strategies selected by the adversary: \noindent\textbf{Strategy 1.} The adversary reveals the following utilities: \begin{align} \vec u_t (x) &= \begin{cases} (1 + x, 2 - x) & \text{if } t \leq \gamma_T T, \\ (1, 1) & \text{otherwise}.
\end{cases} \end{align} Under the selected utilities, the static optimum attains the following objective \begin{align} \mathrm{OPT}^{\mathrm{S1}} &= \max_{x \in \mathcal{X}}~ f_{\alpha}((1+x) \gamma_T + (1-\gamma_T)) +f_{\alpha}((2-x) \gamma_T + (1-\gamma_T))\\ &=\max_{x \in \mathcal{X}}~ f_{\alpha}(1 +\gamma_T x) +f_{\alpha}( 1 +\gamma_T - \gamma_T x). \end{align} The above objective is concave in $x$. We can perform a derivative test to characterize its maximum \begin{align} \frac{\partial f_{\alpha}(1 +\gamma_T x) +f_{\alpha}( 1 +\gamma_T - \gamma_T x)}{\partial x} = \frac{\gamma_T}{ \parentheses{1 + \gamma_T x}^{\alpha}} - \frac{\gamma_T}{\parentheses{1 + \gamma_T - \gamma_T x}^\alpha} = 0,\qquad \text{for}~x = \frac{1}{2}. \end{align} Thus, it holds \begin{align} \mathrm{OPT}^{\mathrm{S1}} = 2 f_{\alpha}( 1 + 0.5\gamma_T). \end{align} The fairness regret $\mathfrak{R}_T^{\mathrm{S1}} (F_\alpha, \vec A)$ of a policy $\vec A$ under this strategy is given by \begin{align} \mathfrak{R}_T^{\mathrm{S1}} (F_\alpha, \vec A) &= \mathrm{OPT}^{\mathrm{S1}} - f_{\alpha}\left(\frac{1}{T}\left(\sum^{\gamma_T T}_{t=1} \parentheses{1 + x_t} \right) + 1 - \gamma_T\right) - f_{\alpha}\left(\frac{1}{T}\left(\sum^{\gamma_T T}_{t=1} \parentheses{2 - x_t} \right) +1 -\gamma_T\right)\\ &= 2 f_{\alpha}( 1 + 0.5\gamma_T) - f_{\alpha}(1 +\psi_T) - f_{\alpha}(1 + \gamma_T - \psi_T). \end{align} \noindent\textbf{Strategy 2.} The adversary reveals the following utilities: \begin{align} \vec u_t (x) &= \begin{cases} (1 + x, 2 - x) & \text{if } t \leq \gamma_T T, \\ (2, 0) & \text{otherwise}. \end{cases} \end{align} Under the selected utilities, the static optimum attains the following objective \begin{align} \mathrm{OPT}^{\mathrm{S2}} &= \max_{x \in \mathcal{X}} f_{\alpha} ((1+x)\gamma_T + (1-\gamma_T)2) + f_{\alpha} ((2-x)\gamma_T)\\ &=\max_{x \in \mathcal{X}} f_{\alpha} (2 - \gamma_T + \gamma_T x) + f_{\alpha} (2\gamma_T - \gamma_T x ).
\end{align} Similar to the previous strategy, we can perform a derivative test to characterize the maximum of the above objective \begin{align*} \frac{\partial f_{\alpha} (2 - \gamma_T + \gamma_T x) + f_{\alpha} (2\gamma_T - \gamma_T x )}{\partial x} = \frac{\gamma_T}{\parentheses{2 - \gamma_T + \gamma_T x}^\alpha} -\frac{\gamma_T}{\parentheses{2\gamma_T - \gamma_T x}^\alpha} = 0, \qquad \text{for}~ x = 1.5 - \frac{1}{\gamma_T}. \end{align*} Therefore, it holds \begin{align} \mathrm{OPT}^{\mathrm{S2}} = 2f_{\alpha}( 1 + 0.5 \gamma_T), \end{align} and the fairness regret $\mathfrak{R}_T^{\mathrm{S2}} (F_\alpha, \vec A)$ under this strategy is \begin{align} \mathfrak{R}_T^{\mathrm{S2}} (F_\alpha, \vec A) &= \mathrm{OPT}^{\mathrm{S2}} - f_{\alpha}\left(\frac{1}{T}\left(\sum^{\gamma_T T}_{t=1} \parentheses{1 + x_t} \right) + 2 -2 \gamma_T\right) - f_{\alpha}\left(\frac{1}{T}\left(\sum^{\gamma_T T}_{t=1} \parentheses{2 - x_t} \right)\right)\\ &= 2 f_{\alpha}( 1 + 0.5 \gamma_T) - f_{\alpha}(2 - \gamma_T + \psi_T) - f_{\alpha}(2 \gamma_T - \psi_T). \end{align} We take the average fairness regret $\frac{1}{2}\left( \mathfrak{R}_T^{\mathrm{S1}} (F_\alpha, \vec A) + \mathfrak{R}_T^{\mathrm{S2}} (F_\alpha, \vec A)\right)$ across the two strategies \begin{align} &\frac{1}{2}\left( \mathfrak{R}_T^{\mathrm{S1}} (F_\alpha, \vec A) + \mathfrak{R}_T^{\mathrm{S2}} (F_\alpha, \vec A)\right)\\ &= 2 f_{\alpha}(1 + 0.5 \gamma_T) - \frac{1}{2} \left(f_{\alpha}(2 - \gamma_T + \psi_T) + f_{\alpha}(2 \gamma_T - \psi_T) + f_{\alpha}(1 +\psi_T) + f_{\alpha}(1 + \gamma_T - \psi_T)\right).\label{e:two_regrets} \end{align} The r.h.s.
of the above equation is convex in $\psi_T$, so its minimizer can be characterized through the derivative as follows \begin{align} &\frac{\partial f_{\alpha}(2 - \gamma_T + \psi) + f_{\alpha}(2 \gamma_T - \psi) + f_{\alpha}(1 +\psi) + f_{\alpha}(1 + \gamma_T - \psi)}{\partial \psi} \\ &= \frac{1}{\parentheses{2 - \gamma_T + \psi}^\alpha} - \frac{1}{\parentheses{2 \gamma_T - \psi}^\alpha} + \frac{1}{\parentheses{1 +\psi}^\alpha }- \frac{1}{\parentheses{1 + \gamma_T - \psi}^\alpha} = 0, \qquad \text{for } \psi = \gamma_T - 0.5. \end{align} We replace $\psi_T = \gamma_T - 0.5$ in Eq.~\eqref{e:two_regrets} to get \begin{align} \frac{1}{2}\left( \mathfrak{R}_T^{\mathrm{S1}} (F_\alpha, \vec A) + \mathfrak{R}_T^{\mathrm{S2}} (F_\alpha, \vec A)\right) &\geq 2 f_{\alpha}(1 + 0.5 \gamma_T) - \left( f_{\alpha}(1.5) + f_{\alpha}(0.5 + \gamma_T) \right). \label{e:two_regrets2} \end{align} We take the derivative of the lower bound \begin{align} \frac{\partial 2 f_{\alpha}(1 + 0.5 \gamma_T) - \left(f_{\alpha}(1.5) + f_{\alpha}(0.5 + \gamma_T) \right)}{\partial \gamma_T} = \frac{\parentheses{0.5 + \gamma_T}^\alpha - \parentheses{1 + 0.5 \gamma_T}^\alpha}{\parentheses{0.5 + \gamma_T}^\alpha \parentheses{1 + 0.5 \gamma_T}^\alpha}. \end{align} Note that the sign of the derivative is determined by the numerator $\parentheses{0.5 + \gamma_T}^\alpha - \parentheses{1 + 0.5 \gamma_T}^\alpha$. It holds $\parentheses{0.5 + \gamma_T}^\alpha - \parentheses{1 + 0.5 \gamma_T}^\alpha < 0$ for $\gamma_T<1$, otherwise $\parentheses{0.5 + \gamma_T}^\alpha - \parentheses{1 + 0.5 \gamma_T}^\alpha = 0$. Hence, the lower bound in Eq.~\eqref{e:two_regrets2} is strictly decreasing for $\gamma_T < 1$, and for any $\epsilon >0$ and $\gamma_T \leq 1 - \epsilon$ it holds that (the last inequality follows from the strict concavity of $f_\alpha$, since $1.5 - 0.5\epsilon$ is the midpoint of $1.5$ and $1.5 - \epsilon$) \begin{align} \frac{1}{2}\left( \mathfrak{R}_T^{\mathrm{S1}} (F_\alpha, \vec A) + \mathfrak{R}_T^{\mathrm{S2}} (F_\alpha, \vec A)\right)&\geq 2 f_\alpha (1.5 - 0.5 \epsilon) - (f_\alpha(1.5) + f_\alpha (1.5 - \epsilon)) > 0.
\end{align} In other words, the fairness regret guarantee is not attainable\footnote{Note that the fairness regret must vanish for any adversarial choice of the sequence of utilities.} for values of $\gamma_T \leq 1 - \epsilon$ for any $T$ and $\epsilon > 0$. We can also verify that Assumption~\ref{a:5} is violated when $\gamma_T \leq 1 - \epsilon$ for any $T$ and $\epsilon > 0$. Note that $\gamma_T$ is defined to be in the set $[0.4, 1]$. Under strategy 1 we have $x_{\star} = \frac{1}{2}$ and it holds \begin{align} \frac{1}{T} \sum^T_{t=1} \vec u_t(x_{\star}) &= (1 + 0.5\gamma_T,1 + 0.5\gamma_T ), \text{ and } \vec u_t (x_{\star}) = \begin{cases} (1.5, 1.5) & \text{if } t \leq \gamma_T T, \\ (1, 1) & \text{otherwise}. \end{cases} \end{align} Then, it holds \begin{align} \mathbb{V}_{\T} &\geq 2 ( 1 -\gamma_T) \gamma_T T \geq 2 \epsilon \gamma_T T \geq 0.8 \epsilon T = \Omega(T). \end{align} Moreover, it can easily be checked that $\mathbb{W}_{\T} = \Omega(T)$ because there is no decomposition $\set{1,2,\dots, T} = \mathcal{T}_1\cup \mathcal{T}_2\cup\dots\cup\mathcal{T}_K$ where $\max\set{\abs{\mathcal{T}_k}: k \in [K]} = o \left(T^{\frac{1}{2}}\right)$ under which $\sum^K_{k=1} \sum_{i \in \mathcal{I}} \abs{\sum_{t \in \mathcal{T}_k} \delta_{t,i}(\vec{x}_{\star})}= o (T)$. To conclude, when $\gamma_T = 1 - o(1)$, we have $\min\{\mathbb{V}_{\T}, \mathbb{W}_{\T}\} \leq \mathbb{V}_{\T} = o(T)$; thus, Assumption~\ref{a:5} only holds when $\gamma_T = 1 - o(1)$, for which the vanishing fairness regret guarantee is attainable. Figure~\ref{fig:impossibility_example} provides a summary of the connection between the fairness regret under scenarios~1 and~2 and Assumption~\ref{a:5}.
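As a numerical sanity check on this argument (our own illustration; the function names are hypothetical), the following Python sketch evaluates the lower bound $2 f_{\alpha}(1 + 0.5 \gamma_T) - (f_{\alpha}(1.5) + f_{\alpha}(0.5 + \gamma_T))$ on the average fairness regret: it is strictly positive whenever $\gamma_T$ is bounded away from $1$, and vanishes at $\gamma_T = 1$, matching the $\gamma_T = 1 - o(1)$ condition.

```python
import math

def f_alpha(u: float, alpha: float) -> float:
    # alpha-fairness function for u > 0
    return math.log(u) if alpha == 1.0 else (u ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

def regret_lower_bound(gamma: float, alpha: float) -> float:
    # lower bound on the average fairness regret across the two adversarial
    # strategies, obtained at the minimizer psi_T = gamma_T - 0.5
    return 2.0 * f_alpha(1.0 + 0.5 * gamma, alpha) - (
        f_alpha(1.5, alpha) + f_alpha(0.5 + gamma, alpha)
    )

for a in (0.5, 1.0, 2.0):  # strict concavity of f_alpha requires alpha > 0
    # positive for gamma_T < 1: the adversary forces non-vanishing regret
    assert all(regret_lower_bound(g, a) > 0.0 for g in (0.4, 0.6, 0.9, 0.99))
    # vanishes at gamma_T = 1, matching gamma_T = 1 - o(1)
    assert abs(regret_lower_bound(1.0, a)) < 1e-12
```

For $\alpha = 0$ the transform is linear and the bound is identically zero, consistent with the bound being driven by the strict concavity of $f_\alpha$.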
\begin{figure}[t] \centering \includegraphics[width=.4\linewidth]{figs/impossibility_example.pdf} \caption{Assumption~\ref{a:5} and fairness regret~\eqref{e:b_regret} under scenarios~1 and 2.} \label{fig:impossibility_example} \end{figure} \end{proof} \section{Proof of Theorem~\ref{th:maintheorem}} \label{proof:t:maintheorem} \begin{proof} Recall that $\Psi_{\alpha,t}: \Theta \times \mathcal{X} \to \mathbb{R}$ is the function given by \begin{align} \Psi_{\alpha, t}(\vec{\theta}, \vec{x}) = \parentheses{-F_{\alpha}}^\star(\vec{\theta}) - \vec{\theta} \cdot \u_{t}(\vec{x}), \label{e:primal_dual_loss_gain} \end{align} where $F_\alpha: \U \to \mathbb{R}$ is an $\alpha$-fairness function~\eqref{e:alpha-fair}. From Lemma~\ref{lemma:ogd_regret}, OGD operating over the set $\Theta$ under the $ \frac{u_{\star, \min}^{1+1/\alpha}}{\alpha }$-strongly convex (Lemma~\ref{lemma:properties_saddle_function}) cost functions $\Psi_{\alpha, t}(\underline{\vec \theta_t}, \vec{x}_t)$ has the following regret guarantee against any fixed $\vec{\theta} \in \Theta$: \begin{align} \label{e:algo_obj_p1} \frac{1}{T}\sum^T_{t=1} \Psi_{\alpha, t}(\vec{\theta}_t, \vec{x}_t) - \frac{1}{T}\sum^T_{t=1} \Psi_{\alpha, t}(\vec{\theta}, \vec{x}_t) &\leq \frac{1}{T} \,\cdot\, \vGroup{ \frac{1}{2}\sum^T_{t=1}\frac{\alpha}{u_{\star, \min}^{1+1/\alpha} t} \norm{\vec g_{\Theta, t}}^2_2}{\mathfrak{R}_{T, \Theta}}. \end{align} From Lemma~\ref{l:recover_f}, it holds \begin{align} \min_{\vec{\theta} \in \Theta} \frac{1}{T}\sum^T_{t=1} \Psi_{\alpha, t}(\vec{\theta}, \vec{x}_t) &= F_{\alpha}\left(\frac{1}{T} \sum^T_{t=1}\vec u_t(\vec{x}_t)\right).\label{e:algo_obj_p2} \end{align} Combining Eq.~\eqref{e:algo_obj_p1} and Eq.~\eqref{e:algo_obj_p2} yields the lower bound \begin{align} {F_{\alpha}\left(\frac{1}{T} \sum^T_{t=1}\u_t(\vec{x}_t)\right) }+ \frac{\mathfrak{R}_{T, \Theta}}{T} \geq { \frac{1}{T}\sum^T_{t=1} \Psi_{\alpha, t}(\vec{\theta}_t, \vec{x}_t)}.
\label{e:dual_part1} \end{align} From Lemma~\ref{lemma:ogd_regret}, OGD operating over the set $\mathcal{X}$ under the reward functions $\Psi_{\alpha, t}(\vec \theta_t, \vec{x})$ has the following regret guarantee for any fixed $\vec{x}_{\star} \in \mathcal{X}$: \begin{align} \frac{1}{T}\sum^T_{t=1} \Psi_{\alpha, t}(\vec{\theta}_t, \vec{x}_{\star}) - \frac{1}{T}\sum^T_{t=1} \Psi_{\alpha, t}(\vec{\theta}_t, \vec{x}_t) \leq \frac{1}{T} \,\cdot\, \vGroup{1.5 \diam{\mathcal{X}} \sqrt{\sum_{t \in \mathcal{T}} \norm{\vec g_{\mathcal{X},t}}^2_2}}{\mathfrak{R}_{T, \mathcal{X}}}. \end{align} Hence, we have the following \begin{align} &{\frac{1}{T}\sum^T_{t=1} \Psi_{\alpha, t}(\vec{\theta}_t, \vec{x}_t)} + \frac{\mathfrak{R}_{T, \mathcal{X}}}{T} \geq {\frac{1}{T}\sum^T_{t=1} \Psi_{\alpha, t}(\vec{\theta}_t, \vec{x}_{\star}) }\nonumber\\ &= { \frac{1}{T}\sum^T_{t=1} \parentheses{-F_{\alpha}}^\star(\vec{\theta}_t) -\frac{1}{T} \sum^T_{t=1} \vec{\theta}_t \cdot \u_t(\vec{x}_{\star}) }&\text{Replace $\Psi_{\alpha, t}(\vec{\theta}_t, \vec{x}_{\star})$ using Eq.~\eqref{e:primal_dual_loss_gain}}\nonumber\\ &\geq {\parentheses{-F_{\alpha}}^\star\left(\frac{1}{T} \sum^T_{t=1}\vec{\theta}_t\right) - \frac{1}{T} \sum^T_{t=1}\vec{\theta}_t \cdot \u_t(\vec{x}_{\star})} &\text{Jensen's inequality \& convexity of $\parentheses{-F_{\alpha}}^\star$}\nonumber\\ &\geq { \parentheses{-F_{\alpha}}^\star\left(\bar{\vec{\theta}}\right) - \bar{\vec{\theta}} \cdot \left(\frac{1}{T} \sum^T_{t=1}\u_t(\vec{x}_{\star})\right)} - {\frac{1}{T} \sum^T_{t=1} (\vec{\theta}_t - \bar{\vec{\theta}}) \cdot \u_t(\vec{x}_{\star})}\nonumber\\ &\geq \min_{\vec{\theta} \in \Theta} \set{ {\parentheses{-F_{\alpha}}^\star\left({\vec{\theta}}\right) - {\vec{\theta}} \cdot \left(\frac{1}{T} \sum^T_{t=1}\u_t(\vec{x}_{\star})\right) }}- { \frac{1}{T} \sum^T_{t=1} (\vec{\theta}_t - \bar{\vec{\theta}}) \cdot \u_t(\vec{x}_{\star}) }\nonumber\\ &= F_{\alpha}\left( {\frac{1}{T} \sum^T_{t=1} \u_t(\vec{x}_{\star})}\right) - { \frac{1}{T} \sum^T_{t=1}\left(\vec{\theta}_t - \bar{\vec{\theta}}\right) \cdot \u_t(\vec{x}_{\star})} .\label{e:primal_p2}
\end{align} We combine the above equation and Eq.~\eqref{e:dual_part1} to obtain \begin{align} {F_{\alpha}\left( {\frac{1}{T} \sum^T_{t=1}\u_t(\vec{x}_{\star})}\right) - F_{\alpha}\left( {\frac{1}{T} \sum^T_{t=1}\u_t(\vec{x}_t)}\right)} &\leq {\frac{\mathfrak{R}_{T, \mathcal{X}}}{T} + \frac{\mathfrak{R}_{T, \Theta}}{T} + { \frac{1}{T} \sum^T_{t=1} \left(\vec{\theta}_t - \bar\vec{\theta}\right) \cdot \u_t(\vec{x}_{\star})}}\nonumber\\ &= { \frac{\mathfrak{R}_{T, \mathcal{X}}}{T} + \frac{\mathfrak{R}_{T, \Theta}}{T} } + \vGroup{{ \frac{1}{T} \sum^T_{t=1} \left( \bar\vec{\theta} - \vec{\theta}_t\right) \cdot \vec \delta_t(\vec{x}_{\star})}}{ \Sigma}\label{e:primal_dual_with_extra_term} \end{align} We provide two approaches to bound the r.h.s. term $\Sigma$ in Eq.~\eqref{e:primal_dual_with_extra_term}, and this gives the two conditions in Assumption~\ref{a:5}: \noindent\textbf{Bound 1.} Since the deviations satisfy $\sum^T_{t=1} \vec\delta_t(\vec{x}_{\star}) = \vec 0$, we can bound the r.h.s. term $\Sigma$ as follows \begin{align} \Sigma &= { \bar\vec{\theta}\cdot \sum^T_{t=1} \vec\delta_t(\vec{x}_{\star})}- { { \sum^T_{t=1} \vec{\theta}_t\cdot \vec\delta_t(\vec{x}_{\star}) }}= {-{ \sum^T_{t=1} \vec{\theta}_t\cdot \vec\delta_t(\vec{x}_{\star}) }} \\ &\leq\frac{1}{u_{\star, \min}}{\sum_{i \in \mathcal{I}} \sum^T_{t=1} \delta_{t,i}(\vec{x}_{\star}) \mathds{1}_{\set{\delta_{t,i}(\vec{x}_{\star}) \geq 0}}} = \BigO{\mathbb{V}_{\T}}.\label{eq:_a1} \end{align} \noindent\textbf{Bound 2.} We alternatively bound $ \Sigma$ as follows \begin{align} \Sigma &= { \sum^K_{k=1} \sum_{t \in \mathcal{T}_k} \left(\bar\vec{\theta} - \vec{\theta}_t \right) \cdot \vec \delta_t(\vec{x}_{\star})}= {\sum^K_{k=1} \sum_{t \in \mathcal{T}_k} \left(\bar\vec{\theta} - \vec{\theta}_{\min\parentheses{\mathcal{T}_k}}\right) \cdot \vec \delta_t(\vec{x}_{\star})+ \sum^K_{k=1} \sum_{t \in \mathcal{T}_k} \left( \vec{\theta}_{\min\parentheses{\mathcal{T}_k}} - \vec{\theta}_{t}\right) \cdot \vec \delta_t(\vec{x}_{\star})}\nonumber\\ &\leq \Delta_{\alpha}\sum^K_{k=1} \norm{\sum_{t \in \mathcal{T}_k} \vec \delta_t(\vec{x}_{\star})}_1 + { u_{\max} \sum^K_{k=1} \sum_{t \in \mathcal{T}_k} \norm{ \vec{\theta}_{\min\parentheses{\mathcal{T}_k}} - \vec{\theta}_{t}}_1 }, \label{e:w_rhs_proof_0} \end{align} where $\Delta_\alpha = \max \set{\norm{\vec{\theta} - \vec{\theta}'}_\infty : \vec{\theta}, \vec{\theta}' \in \Theta}$. We bound the term $\sum^K_{k=1} \sum_{t \in \mathcal{T}_k} \norm{ \vec{\theta}_{\min\parentheses{\mathcal{T}_k}} - \vec{\theta}_{t}}_1$ in the above equation as \begin{align} \sum^K_{k=1} \sum_{t \in \mathcal{T}_k} \norm{ \vec{\theta}_{\min\parentheses{\mathcal{T}_k}} - \vec{\theta}_{t}}_1 &\leq L_{\Theta} \sum^K_{k=1} \eta_{\Theta,\,{\min \parentheses{\mathcal{T}_k}}}\sum_{t \in \mathcal{T}_k} (t- \min\parentheses{\mathcal{T}_k})\leq L_{\Theta} \sum^K_{k=1} \eta_{\Theta,\,{\min \parentheses{\mathcal{T}_k}}} \card{\mathcal{T}_k}^2\\ &= L_{\Theta}\frac{\alpha}{u_{\star, \min}^{1 + \frac{1}{\alpha}}} \sum^K_{k=1} \frac{\card{\mathcal{T}_k}^2}{\min \parentheses{\mathcal{T}_k}} \label{e:w_rhs_proof}, \end{align} where we used the step size $\eta_{\Theta, t} = \frac{\alpha}{u_{\star, \min}^{1 + 1/\alpha}\, t}$, and replacing this upper bound in Eq.~\eqref{e:w_rhs_proof_0} gives \begin{align} \Sigma \leq \Delta_{\alpha} \sum^K_{k=1} \norm{\sum_{t \in \mathcal{T}_k} \vec \delta_t(\vec{x}_{\star})}_1 + u_{\max} L_{\Theta}\frac{\alpha}{u_{\star, \min}^{1 + \frac{1}{\alpha}}} \sum^K_{k=1} \frac{\card{\mathcal{T}_k}^2}{\vGroup{\min \parentheses{\mathcal{T}_k}}{\sum_{k' < k } \card{\mathcal{T}_{k'}}+1}} = \BigO{\mathbb{W}_{\T}}\label{eq:_a2}.
\end{align} We combine Eq.~\eqref{eq:_a1}, Eq.~\eqref{eq:_a2}, and Eq.~\eqref{e:primal_dual_with_extra_term} to obtain \begin{align} \mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A} &\leq \sup_{ \set{\u_t}^T_{t=1} \in {{\U^T}}} \set{\frac{1}{T} \parentheses{{\mathfrak{R}_{T, \mathcal{X}} + \mathfrak{R}_{T, \Theta}}} }+ \BigO{\frac{\min\set{\mathbb{V}_{\T}, \mathbb{W}_{\T}}}{T} }\\ &\leq \sup_{ \set{\u_t}^T_{t=1} \in {{\U^T}}} \set{\frac{1}{T} \parentheses{{1.5 \diam{\mathcal{X}} \sqrt{\sum_{t \in \mathcal{T}} \norm{\vec g_{\mathcal{X},t}}^2_2} +\frac{\alpha}{u_{\star, \min}^{1 + \frac{1}{\alpha}}} \sum^T_{t=1}\frac{ \norm{\vec g_{\Theta,t}}^2_2}{t} } }}+ \BigO{\frac{\min\set{\mathbb{V}_{\T}, \mathbb{W}_{\T}}}{T} }.\label{e:adaptive_bound} \end{align} The following upper bounds hold \begin{align*} \norm{\vec g_{\Theta,t}}_2 &= \norm{ \parentheses{\frac{1}{\parentheses{-\theta_{t,i}}^{1/\alpha}} - u_{t,i} (\vec{x}_t)}_{i \in \mathcal{I}}}_2 \leq \sqrt{I} \max\set{\frac{1}{u_{\star, \min}^{1/\alpha}} - u_{\min}, u_{\max} - \frac{1}{u_{\star, \max}^{1/\alpha}}} = L_{\Theta},\\ \norm{\vec g_{\mathcal{X},t}}_2 &= {\norm{ \vec{\theta}_t \cdot \partial_{\vec{x}}\vec u_t (\vec{x}_t)}}_2 \leq \frac{1}{u_{\star, \min}^{\alpha}} \norm{\partial_{\vec{x}}\vec u_t (\vec{x}_t)}_2 \leq \frac{L_{\mathcal{X}}}{u_{\star, \min}^{\alpha}}.
\end{align*} Thus, the regret bound in Eq.~\eqref{e:adaptive_bound} can be upper bounded as \begin{align} \mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A} &\leq \frac{1}{T} \sup_{ \set{\u_t}^T_{t=1} \in {{\U^T}}} \set{{{1.5\diam{\mathcal{X}} \frac{L_{\mathcal{X}} \sqrt{T}}{u_{\star, \min}^{\alpha}} +\frac{\alpha}{u_{\star, \min}^{1 + \frac{1}{\alpha}}} \sum^T_{t=1}\frac{L^2_\Theta}{t}}}} + \frac{\min{\set{\mathbb{V}_{\T}, \mathbb{W}_{\T}}}}{T} \nonumber\\ &\leq \frac{1}{T} \sup_{ \set{\u_t}^T_{t=1} \in {{\U^T}}} \set{{1.5\diam{\mathcal{X}} \frac{L_{\mathcal{X}} \sqrt{T}}{u_{\star, \min}^{\alpha}} +\frac{\alpha}{u_{\star, \min}^{1 + \frac{1}{\alpha}}} L^2_{\Theta} (\log(T) + 1) }} + \frac{\min{\set{\mathbb{V}_{\T}, \mathbb{W}_{\T}}}}{T} \nonumber\\ &= \BigO{\frac{1}{\sqrt{T}} + \frac{\min \set{\mathbb{V}_{\T}, \mathbb{W}_{\T}}}{T}}. \label{e:final_eq} \end{align} This concludes the proof. \end{proof} \section{Proof of Theorem~\ref{theorem:lowerbound} (Lower Bound)} \label{proof:lowerbound} \begin{proof} Consider a scenario with a single player $\mathcal{I}=\{1\}$, $\mathcal{X} = \set{x \in \mathbb{R} : \abs{x} \leq 1}$, and the utility selected by an adversary at time slot $t \in \mathcal{T}$ given by \begin{align} u_t(x) = w_t x + 1, \quad\text{where}~ w_t \in \set{-1,+1}. \end{align} The weight $w_{t}$ is selected in $\{-1,+1\}$ uniformly at random for $t \in \mathcal{T}$.
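The Khintchine-type step used in inequality (b) of the derivation below, $\mathbb{E}\,\abs{\sum^T_{t=1} w_t} \geq \sqrt{T/2}$ for i.i.d. Rademacher weights, can be checked with a quick Monte Carlo experiment (an illustrative sketch, not part of the proof):

```python
# Monte Carlo illustration of the Khintchine lower bound
# E|sum_t w_t| >= sqrt(T/2) for i.i.d. Rademacher weights w_t.
import math
import random

random.seed(0)
T, runs = 100, 5000
mean_abs_sum = sum(
    abs(sum(random.choice((-1, 1)) for _ in range(T)))
    for _ in range(runs)
) / runs

assert mean_abs_sum >= math.sqrt(T / 2)   # Khintchine lower bound
```

For $T = 100$ the empirical mean concentrates near $\sqrt{2T/\pi} \approx 7.98$, comfortably above the Khintchine bound $\sqrt{T/2} \approx 7.07$.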
A policy $\mathcal{A}$ selects the sequence of decisions $\set{x_t}^T_{t=1}$ and has the following fairness regret \begin{align*} &\mathbb{E} \brackets{\max_{x \in \mathcal{X}}f_{\alpha}\parentheses{ \frac{1}{T}\sum^T_{t=1} u_t(x)} -f_{\alpha}\parentheses{ \frac{1}{T}\sum^T_{t=1} u_t(x_t)} } \geq \mathbb{E} \brackets{\max_{x \in \mathcal{X}}f_{\alpha}\parentheses{ \frac{1}{T}\sum^T_{t=1} u_t(x)}} -\vGroup{f_{\alpha}\parentheses{\mathbb{E}\brackets{ \frac{1}{T}\sum^T_{t=1} u_t(x_t)}} }{\text{$= 0$}}\\ &= \mathbb{E} \brackets{f_{\alpha}\parentheses{\max_{x \in \mathcal{X}} \frac{1}{T}\sum^T_{t=1} u_t(x)}} = \mathbb{E} \brackets{f_{\alpha}\parentheses{\frac{1}{T}\abs{\sum^T_{t=1} w_{t}}+ 1} } \stackrel{ \mathrm{(a)}}{\geq}{\mathbb E \brackets{\frac{1}{T}\abs{\sum^T_{t=1} w_{t}}}} \parentheses{\frac{2^{1-\alpha}-1}{1-\alpha}}\stackrel{\mathrm{(b)}}{\geq} \frac{\parentheses{\frac{2^{1-\alpha}-1}{1-\alpha}}}{\sqrt{2T}}\\ &= \Omega\parentheses{\frac{1}{\sqrt{T}}}. \end{align*} Inequality (a) holds because $f_{\alpha}(1+x)$ is concave in $x$ with $f_{\alpha}(1)=0$, so $f_{\alpha}(1+x) \geq f_{\alpha}(2)\, x$ for $x \in [0, 1]$. Inequality~(b) follows from the Khintchine inequality. A lower bound on the fairness regret~\eqref{e:b_regret1} can thus be established: \begin{align} \mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A} \geq \mathbb{E} \brackets{\max_{x \in \mathcal{X}}f_{\alpha}\parentheses{ \frac{1}{T}\sum^T_{t=1} u_t(x)} -f_{\alpha}\parentheses{ \frac{1}{T}\sum^T_{t=1} u_t(x_t)} } = \Omega\parentheses{\frac{1}{\sqrt{T}}}. \end{align} This concludes the proof. \end{proof} \section{Proof of Corollary~\ref{corollary:stochastic}} \label{proof:stochastic} \begin{proof} \noindent \textbf{Expected regret.} When the utilities are i.i.d., we have \begin{align} \mathbb E \brackets{ \u_t (\vec{x})} = \u(\vec{x}), \qquad \forall t \in \mathcal{T},\ \vec{x} \in \mathcal{X}, \end{align} for some fixed utility $\u \in \U$.
In the proof of Theorem~\ref{th:maintheorem}, in particular in Eq.~\eqref{e:primal_dual_with_extra_term}, it holds \begin{align} {F_{\alpha}\left( {\frac{1}{T} \sum^T_{t=1}\u_t(\vec{x}_{\star})}\right) - F_{\alpha}\left( {\frac{1}{T} \sum^T_{t=1}\u_t(\vec{x}_t)}\right)} &\leq {\frac{\mathfrak{R}_{T, \mathcal{X}}}{T} + \frac{\mathfrak{R}_{T, \Theta}}{T} + { \frac{1}{T} \sum^T_{t=1} \left(\vec{\theta}_t - \bar\vec{\theta}\right) \cdot \u_t(\vec{x}_{\star})}}. \end{align} Taking the expectation of both sides gives \begin{align} \mathbb E \brackets{F_{\alpha}\left( {\frac{1}{T} \sum^T_{t=1}\u_t(\vec{x}_{\star})}\right) - F_{\alpha}\left( {\frac{1}{T} \sum^T_{t=1}\u_t(\vec{x}_t)}\right)} \leq \mathbb E \brackets{ \frac{\mathfrak{R}_{T, \mathcal{X}}}{T} + \frac{\mathfrak{R}_{T, \Theta }}{T}} + \mathbb E \brackets{{ \frac{1}{T} \sum^T_{t=1} \left(\vec{\theta}_t - \bar\vec{\theta}\right) \cdot \u_t(\vec{x}_{\star})}}. \end{align} The variables $\vec{\theta}_t$ and $\u_t$ are independent for every $t \in \mathcal{T}$, since the iterate $\vec{\theta}_t$ depends only on $\u_1, \dots, \u_{t-1}$; thus we have \begin{align} \mathbb E \brackets{{ \frac{1}{T} \sum^T_{t=1} \left(\vec{\theta}_t - \bar\vec{\theta}\right) \cdot \u_t(\vec{x}_{\star})}} = \mathbb E \brackets{{ \parentheses{\bar\vec{\theta} - \bar\vec{\theta} }\cdot \u(\vec{x}_{\star})}} = 0. \end{align} Through Eq.~\eqref{e:final_eq}, it holds \begin{align} \mathbb E \brackets{F_{\alpha}\left( {\frac{1}{T} \sum^T_{t=1}\u_t(\vec{x}_{\star})}\right) - F_{\alpha}\left( {\frac{1}{T} \sum^T_{t=1}\u_t(\vec{x}_t)}\right)} = \BigO{\frac{1}{\sqrt{T}}}, \end{align} for any distribution $\mathcal{D}(\U)$. This concludes the first part of the proof. \noindent\textbf{Almost-sure zero-regret.} Let $\Delta = \parentheses{u_{\max} - u_{\min}}$, $\mathcal{T} = \mathcal{T}_1\cup\mathcal{T}_2\cup \dots \cup \mathcal{T}_K$ where $K = T^{2/3}$ and $\card{\mathcal{T}_k} = \kappa = T^{1/3}$ for $k \in \set{1,2,\dots, K}$, and let $\beta \in (0,1/6)$. Employing Hoeffding's inequality we can bound the l.h.s.
term in Eq.~\eqref{e:adv1} for $i \in \mathcal{I}$ as \begin{align} \mathbb P \parentheses{ \abs{\sum_{t \in \mathcal{T}_k} \delta_{t,i} (\vec{x})} \leq \Delta T^{1/6+\beta}} &\geq 1 -2 \exp\parentheses{\frac{-2T^{1/3+2\beta}}{\parentheses{(T-\kappa) \kappa^2/T^2 + \kappa (\kappa / T -1)^2}}} = 1 -2 \exp\parentheses{\frac{-2T^{1/3+2\beta}}{\parentheses{\kappa -\kappa^2 / T}}} \\ &= 1 - 2\exp\parentheses{\frac{-2T^{1/3+2\beta}}{\parentheses{T^{1/3} -T^{ - 1/3}}}}. \end{align} Hence, it follows \begin{align*} \mathbb{P} \parentheses{\sum^K_{k=1} \sum_{i \in \mathcal{I}} \abs{\sum_{t \in \mathcal{T}_k} \delta_{t,i} (\vec{x})} \leq \Delta T^{5/6+\beta}} &\geq \parentheses{1 -2 \exp\parentheses{\frac{-2T^{1/3+2\beta}}{\parentheses{T^{1/3} - T^{ - 1/3}}}}}^{I T^{2/3}}\\ &\geq1 - 2I T^{2/3} \exp\parentheses{\frac{-2T^{1/3+2\beta}}{\parentheses{T^{1/3} -T^{ - 1/3}}}} &\text{Bernoulli's inequality}\\ & \geq 1 -2 I T^{2/3} \exp\parentheses{\frac{-2T^{1/3+2\beta}}{{T^{1/3}}}}\\ &\geq 1 - 2I T^{2/3} \exp(-2T^{2\beta}). \end{align*} It follows from the above equation paired with Eq.~\eqref{e:adv1} \begin{align} \mathbb{W}_{\T} = \BigO{T^{5/6+\beta} + T^{2/3}} = \BigO{T^{5/6+\beta}}, \qquad \text{w.p.}\qquad p \geq 1 - 2I T^{2/3} \exp(-2T^{2\beta}). \end{align} Thus, for any $\beta \in (0, 1/6)$, the probability $p$ converges to $1$ as $T \to \infty$, and since $\mathbb{W}_{\T} \geq 0$ by Eq.~\eqref{e:adv2}, it holds \begin{align} \lim_{T \to \infty} \frac{\mathbb{W}_{\T}}{T} = 0, \qquad \text{w.p.}\qquad 1. \end{align} Therefore, it follows from Theorem~\ref{th:maintheorem} that, as $T \to \infty$, \begin{align} \mathfrak{R}_T \parentheses{F_{\alpha}, \vec\A} = \BigO{\frac{1}{\sqrt{T}} + \frac{\min\set{\mathbb{V}_{\T}, \mathbb{W}_{\T}}}{T}} = \BigO{\frac{1}{\sqrt{T}} + \frac{\mathbb{W}_{\T}}{T}} \to 0, \qquad \text{w.p.}\qquad 1. \end{align} This concludes the proof.
\end{proof} \section{Additional Experimental Details} \begin{table}[h] \caption{Specification of the network topologies used in experiments.} \label{t:setting} \begin{footnotesize} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Topologies & $\card{\C}$ & $\card{\mathcal{E}}$ & $k_c$ & $\card{\mathcal{Q}_i}$ & $\card{\cup_{f \in \mathcal{F}} \Lambda_f(\C)}$ & $w$ & Figure \\ \hline \textsc{Cycle} & 3 & 3 & 5--5 & 1 & 1 & 1--2 & Fig.~\ref{fig:topologies}~(a) \\ \textsc{Tree}-1--\textsc{Tree}-3 & 13 & 12 & 1--5 & 2--5 & 1 & 1--9 & Fig.~\ref{fig:topologies}~(b)--(d) \\ \textsc{Grid} & 9 & 12 & 1--5 & 2 & 1 & 1--7 & Fig.~\ref{fig:topologies}~(e) \\ \textsc{Abilene} & 12 & 13 & 1--5 & 2 & 2 & 1--8 & Fig.~\ref{fig:topologies}~(f) \\ \textsc{GEANT} & 22 & 33 & 1--5 & 3 & 2 & 1--9 & Fig.~\ref{fig:topologies}~(g) \\ \hline \end{tabular} \end{center} \end{footnotesize} \label{table:topologies} \end{table} \begin{figure}[h] \centering \subcaptionbox{Stationary }{\includegraphics[width=.2\linewidth]{figures_paper/trace-stationary.pdf}} \subcaptionbox{Non-Stationary }{\includegraphics[width=.2\linewidth]{figures_paper/trace-non-stationary.pdf}} \caption{Request traces stationary~(a) and non-stationary~(b) configured with $\sigma = 1.2$, $T = 5000$, $F = 20$, $D = 100$. Each dot indicates a requested file.} \label{fig:traces} \end{figure} \section{Nash Bargaining}
\section{\label{sec:intro}Introduction} Observations of the cosmic microwave background anisotropies indicate that our universe contains only about 5\% of normal or luminous matter, about 27\% dark matter (DM), and the rest as dark energy, which accounts for the accelerated expansion of the universe \cite{planck}. DM, as the name implies, has the characteristic feature that it rarely interacts with normal matter, if at all, to produce a measurable signal in a detector. The DM particles can interact with (scatter off) the electrons or nuclei in the detector medium. The resulting recoiling nuclei or electrons would dump their energy in the detector material producing scintillation, ionization and also heat pulses (such as phonons in solids) \cite{LBaudis,schumann}, depending on the chosen detector material. It is well known that there will also be electron recoils generated by gamma rays, beta particles and other ambient radiation quanta, which will produce undesirable background signals. Neutrons are an important background that affects the outcome of highly sensitive experiments such as direct searches for DM \cite{LBaudis}, neutrinoless double beta decay (NDBD) \cite{CAlduino}, and neutrino oscillation experiments \cite{FPAn}. Neutrons undergo elastic and inelastic scattering with nuclei of the active medium, resulting in nuclear recoil. The same kind of nuclear recoil may result from the interaction of DM particles. Therefore, the neutron background limits the sensitivity of direct dark matter search experiments. Dark matter search at INO (DINO) is a proposed direct dark matter search experiment in India, to be set up at a future facility of the India-based Neutrino Observatory (INO) \cite{Whitepaper}. The experiment will be designed to look for the signature of DM candidates through observation of an extremely small number of nuclear recoil or electron recoil events in a suitable crystalline detector medium.
Inorganic scintillators, such as Cesium Iodide (CsI) and Gadolinium Gallium Aluminium Garnet (${\rm Gd}_3{\rm Ga}_3{\rm Al}_2{\rm O}_{12}$), are possible detector materials to explore for the DINO experiment. Both scintillators exhibit a good light yield of around 50 photons$\thinspace$keV$^{-1}$ \cite{csi,ggag}. The first phase of the experiment involves exploration and evaluation of the background effects caused by ambient radiation and other sources at the specific site. To begin with, a small underground laboratory (approximately 5 m $\times$ 5 m $\times$ 2.2 m), named the Jaduguda Underground Science Laboratory (JUSL), has already been established at a depth of 555 m ($\sim$1600~mwe vertical rock overburden) in an existing mine of Uranium Corporation of India Limited (UCIL), located at Jaduguda in India, for exploring the feasibility of the experiment. The main reason for setting up such experiments at deep underground sites is to reduce the direct effects of cosmic secondary particles. However, muons, the most abundant and deeply penetrating component of cosmic-ray secondaries, can still reach the underground experimental cavern, producing neutrons and hadronic showers. Neutrons are generated by spallation reactions of the muons with the underground rock and shielding materials. Neutrons are also generated by hadronic showers. These neutrons are known as cosmogenic neutrons. While the energy spectrum and flux of these neutrons depend mostly on the depth of the site and the rock composition, the other source of neutrons, known as radiogenic neutrons, caused by ($\alpha,n$) reactions and spontaneous fission of naturally occurring $^{238}$U, $^{232}$Th and their daughters from the decay chains, depends on both the natural radioactivity and the elemental composition of the surrounding rocks.
Quantitative estimates for the two sources of neutron background are therefore highly dependent on the site, specifically on the abundance of U/Th and the rock composition. Although the flux of cosmogenic neutrons is a few orders of magnitude lower than that of the radiogenic neutrons, cosmogenic neutrons are more energetic and therefore penetrate deeper inside the passive shielding materials, such as lead, copper and plastic, used in dark matter experiments. It is important to compare these backgrounds for different passive shielding configurations to arrive at the best possible configuration for their effective reduction. The present study estimates the neutron background at the JUSL site using the GEANT4 toolkit \cite{geant}, with the radiogenic neutron sources generated from standard compilations \cite{Mei} and the cosmogenic neutron source generated from muons sampled using the Gaisser parametrization \cite{gaisser}. Propagation of these neutrons through the shielding materials is also studied to determine the best possible shielding configuration for setting up a DM search experiment at the site. In section \ref{sec:Sources}, the various sources of neutrons in underground laboratories are discussed. In section \ref{sec:estgeant}, the estimation of the radiogenic, cosmogenic and total neutron flux at the site is explained in detail. In section \ref{sec:shield}, various shielding combinations to reduce the neutron flux for a typical direct dark matter search setup assuming a CsI detector are discussed. The estimated sensitivity of a CsI detector based experiment at JUSL for direct dark matter search is presented in section \ref{sec:sens}. We then conclude with a consolidation of the results and how they compare with other experimental locations around the world.
\section{Sources of neutrons in underground laboratories\label{sec:Sources}} Neutrons in underground laboratories are produced in reactions induced either by natural radioactivity or by cosmic-ray muons. \paragraph{Radiogenic neutrons:} Radiogenic neutrons are mainly produced in $(\alpha,n)$ reactions caused by the $\alpha$ particles from the U/Th traces present in the surrounding rocks and the detector materials. Neutrons produced by spontaneous fission of U or Th and by neutron-induced fission also contribute to the radiogenic neutron flux. Neutron-induced fission, being a secondary process, contributes negligibly to the radiogenic neutron flux. The expected energy spectra of the radiogenic neutrons extend up to about 14 MeV. The radiogenic neutrons from the rock due to ($\alpha, n$) reactions and due to spontaneous fission of $^{238}$U are considered in the present work. \paragraph{Cosmogenic neutrons:} Cosmogenic neutrons are produced in cosmic-ray muon induced interactions inside the rock or shielding materials. Muons produce neutrons via the following interactions: \begin{enumerate} \item interaction with nuclei producing nuclear disintegration, \item muon capture by a nucleus followed by neutron emission, \item neutron production by hadrons from muon-generated showers, \item neutron production by gammas from muon-generated electromagnetic showers. \end{enumerate} Cosmogenic neutrons have a harder energy spectrum, with energies up to a few GeV. These neutrons can reach the detector after propagating a large distance away from the parent muon track. Because of their high energy, they are very hard to shield against. Some of these neutrons can be vetoed if the associated muons produce a hit in the muon veto; neutrons produced deep inside the rock, however, cannot be vetoed. \section{Estimation of neutron flux using GEANT4\label{sec:estgeant}} Simulations have been performed using GEANT4 \cite{geant} version 10.02.
The reference physics list ``\texttt{Shielding}'', which is recommended for shielding applications at high energies, is used for describing the physics processes \cite{shield}. The high-energy part of this physics list is taken from the \texttt{FTFP\_BERT} physics list, and it uses the high-precision neutron model. It was originally developed for neutron penetration studies and ion--ion collisions, and has been used for the simulation of high-energy calorimetry and underground or low-background experiments. Secondary particle production threshold cuts are set to 0.7 mm for gammas and $e^+/e^-$. \subsection{Simulation of radiogenic neutron flux\label{sec:alpha}} For the calculation of the neutron yield from $(\alpha,n)$ reactions, we have used the composition of the rock surrounding JUSL obtained by wet-chemical, radiometric and Inductively Coupled Plasma -- Optical Emission Spectrometry (ICP-OES) analyses \cite{JR} (given in Table~\ref{tab:table1}). The rock sample was collected by core drilling at 555 m depth. ICP-OES and wet-chemical analyses give the elemental composition of the rock, whereas the radiometric analysis only gives an estimate of its K, U and Th content. Due to the variation among rock samples and the results obtained from different methods, the uncertainty in the elemental composition is around 10--20\%. The rock, having a density $\sim2.89\pm0.09$ g$\thinspace$cm$^{-3}$, contains 8 ppm of U and 16 ppm of Th \cite{JR}.
\begin{table}[h] \centering \caption{\label{tab:table1}The composition of Jaduguda rock as obtained by wet-chemical, radiometric and ICP-OES analyses \cite{JR}.} \begin{tabular}{cc|cc} \hline Element & Conc (\%) & Element & Conc (\%)\\ \hline U & 0.0008 & Na & 1.2\\ Th & 0.0016 & K & 2.2\\ $^{40}$K & 0.00034 & Ti & 0.34\\ Si & 31.0 & P & 0.079\\ O & 47.8 & Mn & 0.023\\ Al & 9.6 & Mo & 0.002\\ Fe & 3.8 & H & 0.028\\ Ca & 1.3 & S & 0.3\\ Mg & 0.83 & Others & $<1.5$\\ \hline \end{tabular} \end{table} The neutron yield $Y_{i}(E_{n})$ from ($\alpha , n$) reactions for a thick target $i$ has been taken from \cite{Mei}. The total neutron yield from the rock is calculated by adding the individual neutron yields from the elements weighted by their mass ratio in the rock. The rock composition given in Table~\ref{tab:table1} is used for this calculation\footnote{An intuitive program developed by the authors of Ref. \cite{Mei} available at \url{http://neutronyield.usd.edu/} has been used to obtain the neutron yield by providing the Jaduguda rock composition as input.}. The neutron yields from ($\alpha,n$) reactions in the rock have been obtained as $6.77\pm1.12$ yr$^{-1}$g$^{-1}$ from $^{238}$U and $5.33\pm0.90$ yr$^{-1}$g$^{-1}$ from $^{232}$Th. The specific neutron activity due to ($\alpha,n$) reactions in 1 g of rock, normalized to the U and Th content, is given in Figure~\ref{fig:anflux}. It can be noticed that the neutrons from ($\alpha,n$) reactions have energies less than 12 MeV. The natural abundance of $^{238}$U is 99.27\% with a spontaneous fission half-life of $8.2\times10^{15}$ years \cite{nubase,thiel}. The spontaneous fission half-life of $^{232}$Th is around 6 orders of magnitude higher, so its contribution can be neglected \cite{nubase}. To simulate neutrons due to spontaneous fission, the Watt function has been used. The Watt function was initially used to describe fission induced by thermal neutrons in $^{235}$U \cite{watt}.
It holds equally well for spontaneous fission of other heavy nuclei. The Watt function is given as \begin{equation} W(a,b,E^\prime)=a\sqrt{\frac{4a}{\pi b}}\exp\left(-\frac{b}{4a}-aE^\prime\right)\sinh(\sqrt{bE^\prime}), \end{equation} where $a=1.54245$ MeV$^{-1}$ and $b=6.81057$ MeV$^{-1}$ are constants (for $^{238}$U) and $E^\prime$ is the secondary neutron energy \cite{verbeke}. The average neutron multiplicity ($\bar{\nu}$) for spontaneous fission of $^{238}$U is 2.01 \cite{verbeke}. The neutron yield due to spontaneous fission of $^{238}$U in the rock is estimated to be $3.43\pm0.55$ yr$^{-1}$g$^{-1}$. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{JGrockyield.pdf} \caption{\label{fig:anflux}Expected specific neutron activity of Jaduguda rock due to ($\alpha,n$) reactions and spontaneous fission of $^{238}$U.} \end{figure} \subsubsection{Transmission of radiogenic neutrons through rock} \label{sub:alntrans} The JUSL has a vertical rock overburden of 555 m. Radiogenic neutrons that are produced more than a few meters away from the rock--laboratory boundary do not contribute to the neutron flux in the lab. A study was performed to determine the rock thickness that needs to be considered for the calculation; restricting the source volume in this way reduces statistical fluctuations and makes the flux calculation efficient. The rock composition given in Table~\ref{tab:table1} is used for defining the rock in GEANT4. The radiogenic neutron energy distribution for the simulation is generated using the specific neutron activity given in Figure~\ref{fig:anflux}. To calculate the neutron transmission probability, $10^5$ neutrons are randomly generated on the surface of a rock slab in an area of 0.5 m $\times$ 0.5 m with momentum along the $-Z$ direction. The thickness of the rock slab is varied, while the length and breadth are fixed to 1 m.
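For a standalone cross-check of the spontaneous-fission source term outside GEANT4, the Watt spectrum above can be sampled with simple rejection sampling. The sketch below is illustrative; the 14 MeV energy cutoff and the grid-based maximum are assumptions of the sketch, not choices made in this work:

```python
# Illustrative rejection sampler for the Watt spectrum
# W(E) proportional to exp(-a*E) * sinh(sqrt(b*E)) with the quoted
# 238U spontaneous-fission parameters a and b (both in 1/MeV).
import math
import random

A, B = 1.54245, 6.81057           # MeV^-1, 238U spontaneous fission
EMAX = 14.0                        # MeV, practical cutoff (assumption)

def watt(e):
    """Unnormalized Watt spectral shape at energy e (MeV)."""
    return math.exp(-A * e) * math.sinh(math.sqrt(B * e))

# Approximate maximum of the shape on a 10 keV grid (mode is near 0.75 MeV).
W_MAX = max(watt(0.01 * i) for i in range(1, int(EMAX * 100) + 1))

def sample_watt(rng=random):
    """Draw one energy by accept-reject against a flat envelope."""
    while True:
        e = rng.uniform(0.0, EMAX)
        if rng.uniform(0.0, W_MAX) < watt(e):
            return e

random.seed(1)
energies = [sample_watt() for _ in range(20000)]
mean_e = sum(energies) / len(energies)
```

For these parameters the sample mean comes out near 1.7 MeV, the expected average energy of the $^{238}$U spontaneous-fission neutron spectrum.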
\begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{rock_trans.pdf} \caption{The rock slab model used in GEANT4 to calculate the transmission probability of neutrons.} \label{fig:radNtrans} \end{figure} As neutrons propagate through the rock, they lose energy and are absorbed or scattered. Neutrons coming out of the other side of the rock are recorded. The transmission probability, which is the ratio of the number of transmitted neutrons to the number of incident neutrons, is calculated. It is shown as a function of rock thickness in Figure~\ref{fig:TRradN}. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{RadioTrans_new.pdf} \caption{Radiogenic neutron transmission probability as a function of rock thickness.} \label{fig:TRradN} \end{figure} Only about 0.5\% of the neutrons are transmitted through 1.0 m of rock. Therefore, for our simulation estimates, we consider the 1 m of rock thickness surrounding the laboratory cavern as the active material contributing to the radiogenic neutron flux. \subsubsection{Flux of radiogenic neutrons in JUSL} \label{sec:radflux} We consider the cavern to be a hollow cube of outer side 8 m and inner side 4 m. In the cavern, the volume made up of rock is called the `Outer cavern' and the hollow volume is called the `Inner cavern'. To estimate the neutron flux in the laboratory, contributions from ($\alpha,n$) reactions and spontaneous fission are taken into account. In the rock, neutron events with the total energy distribution given in Figure~\ref{fig:anflux} and an isotropic angular distribution are generated in the 1 m thick grey region surrounding the Inner cavern, as shown in Figure~\ref{fig:alneve}(a). The energy distribution of 10$^7$ neutron events is shown in Figure~\ref{fig:alneve}(b). Some of these neutrons propagate through the rock and reach the experimental setup; while propagating through the rock, they can produce further neutrons.
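The thickness scan described above is roughly exponential; treating it as a single-exponential attenuation (an illustrative approximation, not the simulation itself), the quoted $\sim$0.5\% transmission through 1 m of rock corresponds to an effective attenuation length of about 0.19 m:

```python
# Illustrative single-exponential attenuation model for the radiogenic
# neutron transmission result quoted above (~0.5% through 1 m of rock).
# The exponential form is an approximation used only for this estimate.
import math

p_1m = 0.005                     # transmission probability at 1 m (from text)
lam = -1.0 / math.log(p_1m)      # effective attenuation length, ~0.19 m

def transmission(x_m):
    """Approximate transmission probability through x_m metres of rock."""
    return math.exp(-x_m / lam)

# Neutrons born deeper than ~1 m contribute far below the percent level,
# which motivates restricting the source volume to a 1 m shell of rock.
deep_contribution = transmission(2.0)   # ~2.5e-5
```

Under this approximation the contribution from rock deeper than 2 m is at the $10^{-5}$ level, consistent with truncating the active rock volume at 1 m.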
\begin{figure}[h] \centering \subfigure[\label{alneve:a}]{\includegraphics[width=0.35\textwidth]{Cavern_neutron_radio.pdf}} \subfigure[\label{alneve:b}]{\includegraphics[width=0.5\textwidth]{RadioEnergyDist_new.pdf}} \caption{(a) The side view schematic of the cavern as implemented in GEANT4 to calculate the radiogenic neutron flux. (b) Energy distribution of radiogenic neutrons generated for $10^7$ events using the specific neutron activity shown in Figure \ref{fig:anflux}.} \label{fig:alneve} \end{figure} Neutrons reaching the Inner cavern are recorded. The flux of radiogenic neutrons reaching the laboratory is obtained as 1.12($\pm0.11)\times 10^{-5}\thinspace$cm$^{-2}\thinspace$s$^{-1}$ above 100 keV (mean energy of 1.34 MeV) and 5.75($\pm0.58)\times 10^{-6}\thinspace$cm$^{-2}\thinspace$s$^{-1}$ above 1 MeV (mean energy of 2.18 MeV). The uncertainties shown in parentheses include both statistical and systematic uncertainties; the same convention is followed throughout the paper unless otherwise specified. The systematic uncertainty arises from the variation of the rock density and is around 10\%. The flux of radiogenic neutrons reaching the laboratory as a function of energy is shown in Figure~\ref{fig:alnflux}. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{NeutEnergyDist_total_new_wfission.pdf} \caption{Flux of radiogenic neutrons reaching the laboratory, shown as a function of energy.} \label{fig:alnflux} \end{figure} \subsection{Simulation of cosmogenic neutron flux} \label{sec:cosmo} Cosmic ray muons are created in the upper atmosphere by the interaction of primary cosmic rays with atmospheric nuclei. They are the most abundant charged particles at sea level, with an average energy of about 4 GeV and an intensity of $\sim$1 cm$^{-2}\thinspace$min$^{-1}$ \cite{pdg2018}. Since they are minimum ionizing particles, they can penetrate the rock overburden and reach the cavern.
Cosmic ray muon events can be tagged using active veto shielding, but stopping them completely is not possible. Even though the flux of cosmic muons is small in underground laboratories, high energy muons can penetrate the rock and shielding materials and generate hadrons, while low energy muons are stopped in the rock. \subsubsection{Cosmic muon event generation\label{sec:mugen}} To generate a realistic neutron flux, the muon flux at the experimental site needs to be determined. The minimum energy of muons required to reach the cavern and their maximum lateral displacement need to be calculated. \subsubsection{Muon lateral displacement and maximum distance} To reach the cavern from the surface, cosmic muons have to be quite energetic. Moreover, muons interact with the earth/rock and undergo scattering, so their initial direction of propagation is altered and their expected position at a given depth is displaced. By calculating the maximum distance traversed by muons of a given energy, we can estimate the minimum energy required by muons to reach the cavern. Muons with different fixed energies were made to pass through a cube of rock of side 6 km along the $-Z$ direction, and the maximum distance traversed and the lateral displacement were calculated. The results are shown in Figure \ref{fig:latdisp}(a-d). From Figures \ref{fig:latdisp}(a) and \ref{fig:latdisp}(b), it can be seen that the average lateral displacement saturates at $\sim2.3$ m, but the maximum lateral displacement can be as high as 30 m. From Figures \ref{fig:latdisp}(c) and \ref{fig:latdisp}(d) we see that the minimum energy of a muon that can reach the cavern (depth 555 m) is around 300 GeV. Therefore, we do not consider cosmic muons of energy less than 300 GeV in our simulation. The lateral displacement helps in making the simulation faster, as discussed in the next section.
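The $\sim$300 GeV threshold can be sanity-checked with the standard continuous-slowing-down approximation $\langle dE/dx\rangle = a + bE$, which gives a mean range $X = \ln(1 + bE/a)/b$. The sketch below uses textbook standard-rock coefficients ($a \approx 2$ MeV cm$^2\,$g$^{-1}$, $b \approx 4\times10^{-6}$ cm$^2\,$g$^{-1}$; these are our assumed values, not from the paper). It yields a mean-range threshold of a few hundred GeV, consistent with the simulated $\sim$300 GeV once range straggling (which lets the fluctuation tail of lower-energy muons survive) is allowed for:

```python
import math

# Continuous-slowing-down approximation: <dE/dx> = a + b*E
# Assumed standard-rock coefficients (textbook order of magnitude)
a = 2.0e-3   # GeV per g/cm^2
b = 4.0e-6   # 1 / (g/cm^2)

depth_m = 555.0
rho = 2.89                    # g/cm^3, Jaduguda rock density
X = depth_m * 100.0 * rho     # vertical overburden in g/cm^2

# Energy whose mean range equals the overburden: E = (a/b) * (e^{bX} - 1)
E_min = (a / b) * math.expm1(b * X)
print(f"overburden   = {X:.3g} g/cm^2 (~{X / 100.0:.0f} m.w.e.)")
print(f"E_min (CSDA) ~ {E_min:.0f} GeV")
```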
\begin{figure}[h] \begin{minipage}{0.5\linewidth} \includegraphics[width=\linewidth]{LateralDisplacementDistribution.pdf} \end{minipage} \begin{minipage}{0.5\linewidth} \includegraphics[width=\linewidth]{LateralDisplacement.pdf} \end{minipage}\\ \begin{minipage}{0.5\linewidth} \includegraphics[width=\linewidth]{DistanceDistribution.pdf} \end{minipage} \begin{minipage}{0.5\linewidth} \includegraphics[width=\linewidth]{Distance_new.pdf} \end{minipage} \caption{\label{fig:latdisp}(a) Lateral displacement distribution of muons from their initial direction of propagation in the rock. (b) Average lateral displacement as a function of muon energy. (c) Distribution of distance traversed in the rock by muons of different incident energies. (d) Maximum and average distance traversed by muons as a function of energy.} \end{figure} \subsubsection{Calculation of muon flux at the cavern} The latitude, longitude and elevation information of the Jaduguda area has been obtained using Google Earth Pro \cite{jgmap}, and the topographic profile is shown in Figure~\ref{fig:jmap}. The density of the rock is taken to be 2.89~g$\thinspace$cm$^{-3}$ \cite{JR}.
\begin{figure}[h] \centering \includegraphics[width=0.65\linewidth]{JUSLelevationprofile.png} \caption{Elevation map of the area around JUSL (12.2 km $\times$ 5.45 km).} \label{fig:jmap} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.65\linewidth]{Cosmogenic_muon_strategy.pdf} \caption{Schematic describing the methodology of calculating the muon flux at the cavern.} \label{fig:jmap_strat} \end{figure} The energy and angular distributions of muons at the ground surface are generated using Gaisser's parameterization \cite{gaisser} \begin{equation} \label{eq:f0} \frac{\mathrm{d}^2N_{\mu}}{\mathrm{d}E_{\mu}\mathrm{d}\Omega} \approx \frac{0.14 E_{\mu}^{-2.7}}{\hbox{cm}^2\thinspace\hbox{s}\thinspace\hbox{sr}\thinspace\hbox{GeV}} \times \left[\dfrac{1}{1 + \dfrac{1.1 E_{\mu} \cos\theta_z}{\epsilon_\pi}} + \dfrac{\eta}{1 + \dfrac{1.1 E_{\mu}\cos\theta_z}{\epsilon_K}}\right], \end{equation} where $\theta_z$ is the zenith angle, $E_{\mu}$ is the muon energy, $N_\mu$ is the number of muons, $\Omega$ is the solid angle, $\epsilon_\pi$ = 115 GeV, $\epsilon_K$ = 850 GeV and $\eta$ = 0.054. This parameterization is valid when muon decay is negligible and the curvature of the earth can be neglected (i.e. for zenith angles $\theta_z < 70^\circ$). The azimuthal angle $\phi$ is distributed uniformly. Muons are propagated to the experimental hall/cavern 555 m below the surface. As muons propagate through the earth/rock, they scatter and lose energy. Muon events are generated in the energy range 330 GeV to 15 TeV, and both $\mu^{+}$ and $\mu^{-}$ events are generated. Since muons impinging on the surface at $\theta_z < 70^\circ$ are considered, they are generated over a circular area of radius $\sim$1.5 km centered at the cavern location, as shown in Figure \ref{fig:jmap_strat}. The altitude ($z$) at each point $(x,y)$ is obtained from the topographic profile (Figure \ref{fig:jmap}). The altitude dependence of the muon flux is ignored.
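Equation~\eqref{eq:f0} is straightforward to evaluate when sampling surface muons. A minimal sketch; the energies and angles used below are illustrative inputs, not values from the paper:

```python
import math

EPS_PI, EPS_K, ETA = 115.0, 850.0, 0.054  # GeV, GeV, dimensionless

def gaisser_flux(E_mu, theta_z):
    """Differential muon flux dN/(dE dOmega) in cm^-2 s^-1 sr^-1 GeV^-1.

    Valid for theta_z < 70 deg (radians here), where muon decay and the
    earth's curvature can be neglected.
    """
    c = math.cos(theta_z)
    pion_term = 1.0 / (1.0 + 1.1 * E_mu * c / EPS_PI)
    kaon_term = ETA / (1.0 + 1.1 * E_mu * c / EPS_K)
    return 0.14 * E_mu ** (-2.7) * (pion_term + kaon_term)

# Example: vertical muons at 100 GeV and 1 TeV
for E in (100.0, 1000.0):
    print(f"E = {E:6.0f} GeV : {gaisser_flux(E, 0.0):.3e} cm^-2 s^-1 sr^-1 GeV^-1")
```

At fixed high energy the flux grows with zenith angle (the familiar sec$\,\theta$ enhancement from pion decay in the thinner inclined atmosphere), which the bracketed terms reproduce.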
Since a huge number of events\footnote{The number of muons expected to fall over a circular area of radius 1.5 km in a day is $\sim20\times10^9$.} needs to be generated on the surface to have a reasonable number of muons reaching the cavern, the simulation becomes time consuming and computationally intensive. To avoid this laborious simulation, we use the lateral displacement discussed in the previous section. Since the maximum lateral displacement in Figure \ref{fig:latdisp}(a) is about 30 m, only muons whose incident direction lies within a cone of radius 30 m about the line connecting the point of incidence to the center of the cavern (cone opening angle $\alpha\sim3.1^{\circ}$) are simulated (see Figure \ref{fig:jmap_strat}). The other muons are not simulated and are counted as incident but not reaching the cavern. The energy distribution of muons before and after passing through the earth/rock to the cavern surface is shown in Figure \ref{fig:munprod}(a). The flux of muons at the top surface of the Outer cavern is found to be $4.49(\pm0.25)\times10^{-7}\thinspace$cm$^{-2}\thinspace$s$^{-1}$. Muon information such as position and momentum is recorded. Since the energy of neutrons produced by muon interactions is attenuated in the rock, similar to the case of radiogenic neutrons, only a rock element of finite size contributes to the cosmogenic neutron flux in the laboratory. A study similar to that described in Section~\ref{sub:alntrans} is done to find the rock thickness to be considered when simulating the muon induced neutron flux in the experimental hall. The rock geometry given in Figure~\ref{fig:radNtrans} is used for the simulation. The energy distribution of muons at the surface of the Outer cavern, shown in Figure~\ref{fig:munprod}(a) (blue histogram), is used. The average energy of these muons is around 200 GeV.
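Two of the geometric numbers above follow directly from the 555 m depth: the cone opening angle from the 30 m maximum lateral displacement, and the $\sim$1.5 km generation radius from the $\theta_z < 70^\circ$ cut. A quick illustrative check:

```python
import math

depth = 555.0        # m, vertical rock overburden
max_disp = 30.0      # m, maximum lateral displacement of muons in rock

# Opening angle of the simulated cone (axis: incidence point -> cavern center)
alpha = math.degrees(math.atan(max_disp / depth))
print(f"cone opening angle ~ {alpha:.1f} deg")

# Surface footprint of muons with zenith angle up to 70 degrees
radius = depth * math.tan(math.radians(70.0))
print(f"generation radius  ~ {radius / 1000.0:.1f} km")
```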
Muons are propagated from random positions on a plane of dimension 0.5 m $\times$ 0.5 m (Figure~\ref{fig:radNtrans}) through the rock in the $-Z$ direction. The muon interactions with the rock generate neutrons, and the neutrons coming out on the other side of the rock are recorded. The simulation is repeated for different rock thicknesses ($t$ = 10 cm, 25 cm, 50 cm, 75 cm, 100 cm, 150 cm). \begin{figure}[h] \includegraphics[width=0.50\linewidth]{MuEnDist_new.pdf} \includegraphics[width=0.50\textwidth]{Production_650k.pdf} \includegraphics[width=0.50\textwidth]{Transmission_650k.pdf} \includegraphics[width=0.50\textwidth]{TransNeutDistComp_new.pdf} \caption{(a) Energy distribution of muons at the surface and after reaching the cavern. (b) Neutrons produced from muon interactions as a function of rock thickness. (c) Neutron transmission probability as a function of rock thickness. (d) The spectra of neutron energy at production ($N_\mathrm{prod}$) and after transmission through rock of 150 cm thickness ($N_\mathrm{out}$), shown up to 14 GeV.} \label{fig:munprod} \end{figure} The number of neutrons produced ($N_\mathrm{prod}$) as a function of rock thickness is given in Figure~\ref{fig:munprod}(b). As the rock thickness increases, the neutron production also increases due to the increase in the probability of interaction. \subsubsection{Transmission of cosmogenic neutrons through rock} Neutron energy spectra at production ($N_\mathrm{prod}$) and after transmission through rock ($N_\mathrm{out}$) of thickness 150 cm are shown in Figure~\ref{fig:munprod}(d). The neutron production rate for 150 cm of rock is around 0.1 neutrons per muon. The neutron transmission probability, defined as the ratio of the number of neutrons coming out on the other side of the rock to the number of neutrons produced in the rock ($R = N_\mathrm{out}/N_\mathrm{prod}$), is shown as a function of rock thickness in Figure~\ref{fig:munprod}(c).
As the rock thickness increases, the transmission probability decreases; about 7\% of the neutrons are transmitted through rock of thickness 200 cm. \subsubsection{Flux of muon induced neutrons in JUSL} To estimate the cosmogenic neutron background at the JUSL, the muon flux obtained at the surface of the Outer cavern is used. Muon events are generated on five surfaces of the 2 m thick rock around the cavern, as shown in Figure~\ref{fig:mungeo}(a) (the front and back surfaces are not shown in the figure). Muons are allowed to propagate through the rock and reach the cavern; no muons are propagated from the bottom side. While going through the rock, they generate neutrons and other shower particles such as hadrons, gammas and electrons, which then enter the laboratory. Some of the neutrons are absorbed in the rock itself. \begin{figure}[h] \begin{minipage}{0.5\linewidth} \centering \includegraphics[width=0.8\linewidth]{Cavern_neutron.pdf}\\ \small{(a)} \end{minipage} \begin{minipage}{0.5\linewidth} \centering \includegraphics[width=\linewidth]{NeutronEnergy_Cosmogenic_10days_normalized.pdf}\\ \small{(b)} \end{minipage} \caption{(a) Geometry used for simulation. (b) Flux of muon induced neutrons in the Inner cavern.} \label{fig:mungeo} \end{figure} The flux of neutrons reaching the cavern is shown in Figure~\ref{fig:mungeo}(b). It can be seen that the neutrons produced in the rock have energies up to tens of GeV. The muon induced neutron flux in the cavern is found to be $0.93(\pm0.08)\times 10^{-8}\thinspace$cm$^{-2}\thinspace$s$^{-1}$ with no energy threshold and $7.25(\pm0.65)\times 10^{-9}\thinspace$cm$^{-2}\thinspace$s$^{-1}$ above 1 MeV. The systematic uncertainty is due to the variation of the rock density and the fact that around 7\% of the neutrons can still come from distances greater than 2 m from the cavern boundary.
\subsection{Total neutron flux in JUSL} \label{sec:totN} \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{TotalNeutronFlux_new_1.pdf} \caption{Total neutron flux due to radiogenic and cosmogenic sources expected at the cavern, shown as a function of energy.} \label{fig:totnf} \end{figure} The total neutron flux from radiogenic origin and muon interactions reaching the laboratory in the energy range 1 MeV - 100 GeV is shown in Figure~\ref{fig:totnf}. Above a 1 MeV energy threshold, the flux of radiogenic neutrons is 5.75$(\pm0.58)\times$ 10$^{-6}\thinspace$cm$^{-2}\thinspace$s$^{-1}$, whereas the flux of neutrons produced by muon interactions in the rock is 7.25($\pm0.65)\times$ 10$^{-9}\thinspace$cm$^{-2}\thinspace$s$^{-1}$. For energies less than $\sim$10 MeV, the radiogenic neutron flux is around three orders of magnitude greater than the muon induced neutron flux; for energies above 10 MeV, only muon induced neutrons contribute to the spectrum. Therefore, the total neutron flux reaching the cavern/laboratory above a 1 MeV energy threshold is found to be 5.76($\pm0.58)\times$ 10$^{-6}\thinspace$cm$^{-2}\thinspace$s$^{-1}$. Our values are comparable with the neutron flux estimations done for dark matter experiments at the Boulby and WIPP salt mines \cite{boulby,WIPP}. \section{Shielding combinations to reduce neutron flux\label{sec:shield}} A typical experimental setup usually consists of several layers of active and passive shielding. Active shielding can veto muons and the associated neutrons. Passive shielding systems consist of lead (Pb) or iron (Fe) for shielding gammas, hydrocarbons for moderating neutrons and copper for attenuating gammas. For simplicity, a rectangular shielding geometry ($x~=~$1 m, $y~=~$1 m for all the layers) is considered, as shown in Figure~\ref{fig:shield1}(a) (side view). There is an air gap of 1 cm between each layer. The thicknesses of the materials are varied.
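The statement that radiogenic neutrons dominate by about three orders of magnitude follows directly from the two quoted fluxes above 1 MeV; the total is likewise just their sum. A trivial check:

```python
import math

radiogenic = 5.75e-6   # cm^-2 s^-1, radiogenic flux above 1 MeV
cosmogenic = 7.25e-9   # cm^-2 s^-1, muon-induced flux above 1 MeV

ratio = radiogenic / cosmogenic
print(f"radiogenic/cosmogenic ~ {ratio:.0f} (~10^{round(math.log10(ratio))})")
print(f"total flux ~ {radiogenic + cosmogenic:.2e} cm^-2 s^-1")
```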
Different combinations of Pb and polypropylene are studied to find the optimal shielding composition for neutron reduction. \subsection{Reduction of radiogenic neutron flux} Radiogenic neutrons reaching the cavern, as obtained in Section~\ref{sec:radflux} (Figure \ref{fig:alnflux}), are allowed to pass through the shielding. All the radiogenic neutrons are stopped by the 40 cm thick polypropylene shielding, and the number of neutrons reaching the Pb surface is found to be zero. Radiogenic neutrons generated in the rock have energies up to a few MeV and can be shielded using the outer polypropylene layer alone. \subsection{Reduction of muon induced neutron flux} The flux of neutrons produced by muon interactions in the rock (Figure~\ref{fig:shield1}(b)) is used to generate 10$^4$ neutrons on a plane of area 0.5 m $\times$ 0.5 m placed 1 cm above the polypropylene layer (PP1) of thickness 40 cm. Neutrons crossing the boundary of each layer are recorded. \begin{figure}[h] \subfigure[\label{shield1:a}]{\includegraphics[width=0.45\textwidth]{Shielding_calc.pdf}} \subfigure[\label{shield1:b}]{\includegraphics[width=0.5\textwidth]{NeutronEnergy_Cosmogenic_10days.pdf}} \caption{(a) Rectangular shielding layers used for simulation. The thicknesses of Pb and PP2 are varied. (b) Energy distribution of generated cosmogenic neutrons.} \label{fig:shield1} \end{figure} \begin{table}[h] \centering \caption{\label{tab:shielddes}Different shielding configurations and their effectiveness.
Uncertainties shown are statistical only.} \begin{tabular}{c|p{1 cm}|p{1 cm}|p{1 cm}|c} \hline & \multicolumn{3}{c|}{Thicknesses of different} & \\ Configuration & \multicolumn{3}{c|}{shielding layers (cm)} & Transmission (\%)\\ \cline{2-4} & \centering PP1 & \centering Pb & \centering PP2 & \\ \hline CFG-1 & \centering 40 & \centering -- & \centering -- & $52.31 \pm 0.72$ \\ CFG-2 & \centering 40 & \centering 30 & \centering -- & $136.3 \pm 1.17$ \\ CFG-3 & \centering 40 & \centering 30 & \centering 10 & $32.52 \pm 0.57$ \\ CFG-4 & \centering 40 & \centering 30 & \centering 20 & $10.44 \pm 0.32$ \\ CFG-5 & \centering 40 & \centering 25 & \centering 20 & $12.36 \pm 0.35$ \\ \hline \end{tabular} \end{table} About 47\% of the input neutrons are stopped by the 40 cm thick polypropylene. It is observed that a large number of neutrons are produced in Pb through interactions initiated by the incoming neutrons; neutron back-scattering at the Pb boundary is also large. A second polypropylene layer (PP2) is therefore needed to attenuate the neutrons produced in Pb. From Table \ref{tab:shielddes}, we see that shielding configuration CFG-4 provides the best neutron reduction. Neutrons, gammas, muons and electrons are the major backgrounds in a typical experimental setup. Muons and neutrons can also interact with the detector and shielding materials to produce more neutrons. We investigate the effect of shielding and calculate the neutron flux at a detector using a simple geometry. The shielding design is based on the dimensions of the shielding materials in CFG-4, shown in Table \ref{tab:shielddes}. The geometry of the rectangular rock element and the experimental setup is given in Figure~\ref{fig:detful}. The experimental setup consists of a cylindrical CsI crystal with a radius of 2.2 cm and a height of 4 cm.
Surrounding the crystal there are cylindrical layers of covering and shielding materials with various thicknesses: Teflon (0.05 cm), copper (0.6 cm), polypropylene (20 cm, PP2), Pb (30 cm) and polypropylene (40 cm, PP1). A rock block of thickness 200 cm surrounds this experimental setup, with $\sim100$ cm of air gap between them. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{Cavern_neutron_full.pdf} \caption{{Schematic diagram of the geometry used for simulation. The crystal, teflon and copper layers are shown together with a wave pattern; PP2 is the black and light gray checkered region; Pb is the light grey region; PP1 is the black and white checkered region. The dark grey rectangular region is responsible for the radiogenic neutron background and the black rectangular region for the cosmogenic neutron flux.}} \label{fig:detful} \end{figure} Muons can interact with the shielding materials to produce high energy neutrons with energies up to a few GeV. Shielding against these neutrons is difficult, so knowledge of the muon flux is very important for the shielding design. The muon flux reaching the outer boundary of each layer of the experimental setup, obtained from this simulation for 10 days of data, is shown in Table~\ref{tab:table4}. During the passage of cosmic muons through the shielding materials, almost no reduction of the muon flux is seen. \begin{table}[h] \centering \caption{\label{tab:table4}The flux of muons at the top surface of different layers.} \begin{tabular}{c|c} \hline Material & Flux ($\mathrm{cm^{-2}\thinspace s^{-1}}$) \\ \hline PP1 & 4.45($\pm$0.24) $\times$ 10$^{-7}$ \\ \hline Pb & 4.52($\pm$0.26) $\times$ 10$^{-7}$ \\ \hline PP2 & 4.67($\pm$0.30) $\times$ 10$^{-7}$ \\ \hline Cu & 3.82($\pm$1.12) $\times$ 10$^{-7}$ \\ \hline \end{tabular} \end{table} The muons and muon induced neutrons are tracked through the successive layers of shielding, and the neutrons produced in each layer of shielding are recorded.
Neutrons are produced by both muon initiated interactions and neutron interactions. The neutron production rate in each layer is shown in Table~\ref{tab:table2}. \begin{table}[h] \centering \caption{\label{tab:table2}The production rate of neutrons in different layers. Uncertainties shown are statistical only.} \begin{tabular}{c|c} \hline Material & Neutron production rate ($\mathrm{cm^{-3}\thinspace s^{-1}}$) \\ \hline PP1 & 2.05($\pm$0.04) $\times$ 10$^{-10}$ \\ \hline Pb & 1.72($\pm$0.01) $\times$ 10$^{-8}$ \\ \hline PP2 & 6.64($\pm$0.06) $\times$ 10$^{-10}$ \\ \hline \end{tabular} \end{table} Only a fraction of the neutrons produced in each layer reach the inner layers of the detector setup; the others are either absorbed or scattered away. Neutrons reflected back from a layer can (i) be absorbed by the previous layer, (ii) be transmitted out of the setup, or (iii) be reflected back again into the same layer. These effects are taken into consideration to avoid multiple counting. Neutrons reaching each layer of the detector include the neutrons produced in all previous layers; for instance, neutrons reaching the copper layer include neutrons produced in the rock, polypropylene and lead. The flux of cosmogenic neutrons estimated at the top surface of the different layers for 20 days of data is shown in Figure~\ref{fig:munelayer}. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{Flux_Neutrons_Cosmogenic_Shielding.pdf} \caption{Comparison of neutron energy distributions at different layers of the experimental setup.} \label{fig:munelayer} \end{figure} The flux of neutrons reaching the top surface of each layer of the experimental setup for 20 days of data is shown in Table~\ref{tab:table3}. \begin{table}[h] \centering \caption{\label{tab:table3}The flux of neutrons at the top surface of each layer.
Uncertainties shown are statistical only.} \begin{tabular}{c|c|c} \hline Material & $E_\mathrm{neutron}^\mathrm{mean}$ (MeV) & Flux ($\mathrm{cm^{-2}\thinspace s^{-1}}$) \\ \hline PP1 & 81 & 8.19($\pm$0.11) $\times$ 10$^{-9}$\\ \hline Pb & 280 & 3.04($\pm$0.12) $\times$ 10$^{-9}$\\ \hline PP2 & 8 & 1.44($\pm$0.02) $\times$ 10$^{-7}$\\ \hline Cu & 19 & 7.44($\pm$3.72) $\times$ 10$^{-8}$\\ \hline CsI & 9 & 6.15($\pm$4.35) $\times$ 10$^{-8}$\\ \hline \end{tabular} \end{table} The increase in the neutron flux at the boundary of PP2 is due to the production of new neutrons in the Pb layer. The increase in the mean energy of neutrons at the Cu layer can be attributed to the absorption of lower energy neutrons by PP2. For neutron energies below $\sim$10 MeV, the mean scattering length of neutrons is smaller in hydrogen than in other materials such as C, Pb or Fe, whereas at higher neutron energies the mean scattering length in hydrogen increases relative to those materials \cite{CBungau}. Hence, higher energy neutrons cannot be moderated easily using hydrogen-based shielding material. \section{WIMP sensitivity estimate based on neutron background\label{sec:sens}} We estimate the sensitivity following the method suggested in Ref. \cite{lewin} and used in the KIMS dark matter search \cite{kimsfirst}. Considering a dark matter halo model with a Maxwellian velocity distribution as described in Ref.
\cite{lewin}, the total WIMP event-rate in the recoil energy range between $E_{R_1}$ and $E_{R_2}$ is given by \cite{kimsfirst} \begin{equation}\begin{split} R(v_E,v_\mathrm{esc}) &= \frac{k_0}{k_1}\int_{E_{R_1}}^{E_{R_2}} dE_R\left\{c_1\frac{R_0}{E_0r}e^{-c_2E_R/E_0r}-\frac{R_0}{E_0r}e^{-v^2_\mathrm{esc}/v_0^2}\right\},\\ R_0&=5.47\left(\frac{\mathrm{GeV}/c^2}{m_\chi}\right)\left(\frac{\mathrm{GeV}/c^2}{m_t}\right)\left(\frac{\sigma_0}{\mathrm{pb}}\right)\left(\frac{\rho_\chi}{\mathrm{GeV}/c^2/\mathrm{cm}^3}\right)\left(\frac{v_0}{\mathrm{km/s}}\right),\\ E_0&=\frac{1}{2}m_\chi v_0^2,\quad r=\frac{4m_\chi m_t}{(m_\chi+m_t)^2}, \end{split} \end{equation} where $m_\chi$ is the dark matter mass, $m_t$ is the mass of a target nucleus, $\rho_\chi = 0.3$ GeV$\thinspace$cm$^{-3}$ is the local dark matter density, $v_0 = 220$ km$\thinspace$s$^{-1}$ is the Maxwell velocity parameter, $v_\mathrm{esc} = 650$ km$\thinspace$s$^{-1}$ is the local galactic escape velocity of the WIMP, $k_0/k_1\approx 1$, and $c_1 , c_2$ are constants which depend on the Earth (target) velocity $v_E$ relative to the dark matter distribution, as discussed in Ref. \cite{lewin}. $\sigma_0$ is the WIMP-nucleus `zero momentum transfer' cross-section, and $R_0$ is the total event rate (in kg$^{-1}$day$^{-1}$) for $v_E$ = 0 and $v_\mathrm{esc}=\infty$. Radiogenic neutrons are completely stopped by the shielding; only cosmogenic neutrons are able to reach the crystal. From the simulation, the total number of nuclear recoil events due to neutrons within the energy range 8-60 keV in the CsI crystal is estimated to be $\sim$6 kg$^{-1}$year$^{-1}$ (corresponding to a 90\% Poisson CL of 12 kg$^{-1}$year$^{-1}$). The nuclear recoil energy scale can be converted into the electron equivalent energy scale using the quenching factors for CsI crystals reported in Ref. \cite{park}; this turns out to be $\sim$1.5 to 6.5 keV.
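The prefactor expressions for $R_0$, $E_0$ and $r$ above can be coded directly. A minimal sketch; the 100 GeV WIMP mass, the iodine target and the 1 pb cross-section are illustrative inputs, not values from the paper:

```python
def kinematic_factor(m_chi, m_t):
    """r = 4 m_chi m_t / (m_chi + m_t)^2 (dimensionless)."""
    return 4.0 * m_chi * m_t / (m_chi + m_t) ** 2

def total_rate_R0(m_chi, m_t, sigma0_pb=1.0, rho_chi=0.3, v0=220.0):
    """R_0 in kg^-1 day^-1 for v_E = 0 and v_esc = infinity.

    m_chi, m_t in GeV/c^2; sigma0 in pb; rho_chi in GeV/c^2/cm^3; v0 in km/s.
    """
    return 5.47 * (1.0 / m_chi) * (1.0 / m_t) * sigma0_pb * rho_chi * v0

def E0_keV(m_chi, v0=220.0):
    """Characteristic WIMP energy E_0 = m_chi v_0^2 / 2, converted to keV."""
    c = 299792.458  # speed of light, km/s
    return 0.5 * m_chi * (v0 / c) ** 2 * 1e6  # GeV -> keV

# Illustrative: 100 GeV WIMP on an iodine target (A = 127, m_t ~ 118.3 GeV)
m_chi, m_t = 100.0, 127 * 0.9315
print(f"r  = {kinematic_factor(m_chi, m_t):.3f}")
print(f"R0 = {total_rate_R0(m_chi, m_t):.3f} /kg/day (sigma0 = 1 pb)")
print(f"E0 = {E0_keV(m_chi):.1f} keV")
```

Note how $r$ approaches 1 when the WIMP and target masses match, which is why heavy targets such as Cs and I are kinematically well matched to $\sim$100 GeV WIMPs.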
The WIMP-nucleon cross-section can be obtained from the WIMP-nucleus cross-section using the formula \cite{kimsfirst} \begin{equation} \sigma_{W-n}=\sigma_{W-A}\frac{\mu_n^2}{\mu_A^2}\frac{C_n}{C_A} \end{equation} where $\sigma_{W-n}$ is the WIMP-nucleon cross-section, $\sigma_{W-A}$ is the WIMP-nucleus cross-section for a nucleus of mass number $A$, $\mu_n$ and $\mu_A$ are the reduced masses of the WIMP-nucleon and WIMP-nucleus systems respectively, and $C_A/C_n=A^2$ for spin independent interactions \cite{kimsfirst}. Since the CsI crystal has two different target nuclei, the combined limit on the WIMP-nucleon cross-section ($\sigma$) is given by \cite{kimsfirst} \begin{equation} \frac{1}{\sigma}=\frac{1}{\sigma_\mathrm{Cs}}+\frac{1}{\sigma_\mathrm{I}}, \end{equation} where $\sigma_\mathrm{Cs}$ and $\sigma_\mathrm{I}$ are the WIMP-nucleon cross-section limits for the Cs and I nuclei respectively. Neglecting the background contribution from gamma and $\alpha$ particles, the estimated sensitivity of a CsI based direct dark matter search experiment at JUSL is shown in Figure \ref{fig:sensitive}. Spin independent interactions have been considered and the WIMP-nucleon cross-section is shown as a function of the WIMP mass. The blue line corresponds to a 272 g detector, as considered in the simulation, running for 1 year. The red dotted line corresponds to a 200 kg detector running for 1 year assuming the same level of background. The sensitivity estimates in the present study only assume background events from neutrons; this leads to an optimistic estimate of the sensitivity. A more realistic calculation will require consideration of other backgrounds, which are not calculated in this study, along with neutrons. Parameters such as the quenching factors have been assumed to be similar to those of KIMS. A better estimate of the sensitivity will be obtained once all the detector parameters are measured and understood specifically for DINO.
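The $A^2$ coherent-enhancement scaling and the harmonic combination of the Cs and I limits can be sketched as follows; the 100 GeV WIMP mass and the 1 pb WIMP-nucleus cross-sections are placeholder inputs for illustration:

```python
def wimp_nucleon_sigma(sigma_WA, m_chi, A, m_nucleon=0.9315):
    """Scale a WIMP-nucleus cross-section to WIMP-nucleon (spin independent).

    sigma_Wn = sigma_WA * (mu_n / mu_A)^2 / A^2, with mu the reduced mass
    and C_A/C_n = A^2. Masses in GeV/c^2, cross-sections in any fixed unit.
    """
    m_A = A * m_nucleon
    mu_n = m_chi * m_nucleon / (m_chi + m_nucleon)
    mu_A = m_chi * m_A / (m_chi + m_A)
    return sigma_WA * (mu_n / mu_A) ** 2 / A ** 2

def combined_limit(sigma_Cs, sigma_I):
    """Combine per-nucleus limits: 1/sigma = 1/sigma_Cs + 1/sigma_I."""
    return 1.0 / (1.0 / sigma_Cs + 1.0 / sigma_I)

# Illustrative placeholder inputs: 100 GeV WIMP, 1 pb WIMP-nucleus cross-section
m_chi = 100.0
s_Cs = wimp_nucleon_sigma(1.0, m_chi, A=133)
s_I = wimp_nucleon_sigma(1.0, m_chi, A=127)
print(f"sigma_Wn (Cs) = {s_Cs:.3e} pb")
print(f"sigma_Wn (I)  = {s_I:.3e} pb")
print(f"combined      = {combined_limit(s_Cs, s_I):.3e} pb")
```

The combined limit is always stronger (smaller) than either single-nucleus limit, reducing to half the common value when the two limits coincide.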
Nevertheless, setting up a direct dark matter search experiment at JUSL is feasible. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{Sensitivity_DINO.pdf} \caption{Sensitivity of a CsI based detector at JUSL. The blue line shows the sensitivity for a 100 kg$\thinspace$day exposure (272 g detector for 1 year) and the red line shows the sensitivity for a 200 kg$\thinspace$year exposure (200 kg detector for 1 year).} \label{fig:sensitive} \end{figure} \section{Conclusion} \label{sec:conc} An estimation of the neutron flux due to radiogenic and cosmogenic sources for a dark matter search experiment at JUSL, an underground laboratory in India with a 555 m rock overburden, has been performed using the GEANT4 framework. The radiogenic neutrons have energies up to a few MeV, whereas the muon-induced neutron energies extend up to a few tens of GeV. A study has also been performed to find the optimal shielding combination for effective reduction of these neutron backgrounds. The specific neutron activity due to ($\alpha,n$) reactions from the surrounding rock materials has been obtained as $6.77\pm1.12\thinspace$yr$^{-1}\thinspace$g$^{-1}$ of rock from $^{238}$U and $5.33\pm0.90\thinspace$yr$^{-1}\thinspace$g$^{-1}$ of rock from $^{232}$Th. The specific neutron activity due to spontaneous fission of $^{238}$U is obtained as $3.43\pm0.55\thinspace$yr$^{-1}\thinspace$g$^{-1}$. The flux of radiogenic neutrons reaching the Inner cavern is obtained as $1.12(\pm0.11)\times10^{-5}\thinspace$cm$^{-2}\thinspace$s$^{-1}$ above a 100 keV energy threshold with a mean energy of 1.34 MeV, and $5.75(\pm0.58)\times10^{-6}\thinspace$cm$^{-2}\thinspace$s$^{-1}$ above a 1 MeV energy threshold with a mean energy of 2.18 MeV. Cosmic muon events are generated on the surface using Gaisser's parameterization and propagated through the rock material. Only about 7\% of the generated cosmogenic neutrons pass through the 2 m of rock.
The muon flux at the Outer cavern is found to be $4.49(\pm0.25)\times10^{-7}$ cm$^{-2}\thinspace$s$^{-1}$ with an average energy of about 200 GeV. These muons are propagated through the rock to obtain the muon and muon induced neutron fluxes at the cavern. The muon flux at the boundary of the Inner cavern is obtained as $4.45(\pm 0.24)\times 10^{-7}$ cm$^{-2}\thinspace$s$^{-1}$. The muon induced neutron flux from the rock in the Inner cavern is found to be $0.93(\pm0.08)\times10^{-8}\thinspace$cm$^{-2}\thinspace$s$^{-1}$ without any energy threshold and $7.25(\pm0.65)\times10^{-9}\thinspace$cm$^{-2}\thinspace$s$^{-1}$ above 1 MeV. Our estimated values of the neutron and muon fluxes are comparable with calculations for dark matter experiments in the Boulby mine \cite{boulby}. The measured muon flux at the WIPP salt mine, which is at a similar depth ($\sim1580$ m.w.e.), is $4.77\times10^{-7}$ cm$^{-2}\thinspace$s$^{-1}$ \cite{WIPP}, and our estimate of the muon induced neutron flux is comparable with their calculation ($1.6\times10^{-8}\thinspace$cm$^{-2}\thinspace$s$^{-1}$) reported in the same paper. The total neutron flux reaching the Inner cavern/laboratory from both radiogenic and muon induced reactions in the rock is found to be 5.76($\pm0.58)\times$ 10$^{-6}\thinspace$cm$^{-2}\thinspace$s$^{-1}$, dominated by radiogenic neutrons. Neutrons produced by muon and neutron interactions with the shielding materials also contribute to the neutron flux at the detector, and a large number of neutrons are produced in Pb. Radiogenic neutrons are easily stopped by the 40 cm thick polypropylene layer, whereas cosmogenic neutrons can penetrate the shielding and reach the detector. Moreover, muons generate neutrons while traversing the shielding. Muon and neutron fluxes have been estimated at the various layers of shielding.
It is seen that a shielding configuration comprising polypropylene (40 cm) + Pb (30 cm) + polypropylene (20 cm), from the outside towards the experimental setup, gives the best reduction in the neutron background. Since the neutron production rate is high in Pb, a second layer of polypropylene is needed for effective shielding against these neutrons. With the best shielding configuration, the flux of muon induced neutrons at the detector is found to be 6.15($\pm$4.35) $\times$ 10$^{-8}$ cm$^{-2}\thinspace$s$^{-1}$. The radiogenic neutron yield from contamination in the detector materials has not been considered in this calculation; D.M. Mei et al. \cite{Mei} showed that there is no $(\alpha,n)$ neutron yield in lead due to its very high Coulomb barrier. The sensitivity of a CsI based WIMP dark matter search experiment at the Jaduguda mine has been estimated, and the results show that a direct WIMP dark matter search experiment is feasible at JUSL. \section*{Acknowledgements} We would like to thank the Department of Atomic Energy (DAE), Government of India, for support through the project Research in Basic Sciences (Dark Matter). BM acknowledges the support of the JC Bose National Fellowship of the Department of Science and Technology (DST), Government of India. PB and SS acknowledge support under the Raja Ramanna Fellowship of the Department of Atomic Energy (DAE), Government of India.
\section{Introduction} Understanding the dependence among data components is an important goal in high-dimensional data analysis, as different dependence structures lead to different inference procedures. For instance, in Hotelling's test for the mean \citep{Hotelling_1931} and in Fisher's linear discriminant analysis, the pooled covariance estimate is used under the assumption that the two samples share the same covariance matrix. For high-dimensional data, covariance matrices are utilized in the form of their inverses to enhance the signal strength in the innovated Higher Criticism test for high-dimensional means \citep{Hall_Jin_2010} and in Gaussian graphical models \citep{Liu_2013, Ren_2015}. In genetic studies, covariances are widely used to understand the interactions among genes, to study functionally related genes \citep{Yi_2007}, and to construct and compare co-expression genetic networks \citep{Fuente_2010}. As a multivariate statistical procedure is typically constructed for a specific dependence structure of the data, testing for the equality of two covariance matrices ${\bf\Sigma}_1$ and ${\bf\Sigma}_2$ from two populations has been an enduring task. \cite{John_1971}, \cite{Gupta_Giri_1973}, \cite{Nagao_1973} and \cite{Perlman_1980} presented studies under the conventional fixed-dimensional setting; see \cite{Anderson_2003} for a comprehensive review. Modern high-dimensional data have generated a renewed interest under the so-called ``large $p$, small $n$'' paradigm. For Gaussian data with the dimension $p$ and the sample size $n$ of the same order, \cite{Schott_2007} and \cite{Srivastava_Yanagihara_2010} proposed two-sample tests based on the distance measure $\|{\bf \Sigma}_1-{\bf \Sigma}_2\|^2_F$, the squared Frobenius norm of the difference between the two covariances. \cite{Bai_2009} considered a corrected likelihood ratio test via large-dimensional random matrix theory. 
For nonparametric settings without explicit restrictions on $p$ and the sample sizes, \cite{Li_Chen_2012} proposed an $\ell_{2}$-test based on a linear combination of $U$-statistics which is an unbiased estimator of $\|{\bf \Sigma}_1-{\bf \Sigma}_2\|^2_F$. \cite{QC_2012} studied an $\ell_{2}$-test for the bandedness of a covariance. \cite{Cai_Liu_Xia_2013} proposed a test based on the maximal standardized differences (an $\ell_{\max}$-type formulation) between the entries of two sample covariance matrices. \cite{Chang_2017} constructed a simulation-based approach to approximate the distribution of the maximal statistic. Studies have shown that the $\ell_{2}$-tests are powerful for detecting dense and weak differences in the covariances, while the $\ell_{\max}$-formulation is powerful in sparse and strong signal settings. Detecting rare and faint signals has attracted much attention in high-dimensional statistical inference. The studies have largely concentrated on mean problems (\citealp{Fan_1996}; \citealp{Donoho_Jin_2004}; \citealp{Delaigle_2011_JRSSB}; \citealp{Zhong_Chen_Xu_2013}; \citealp{Qiu_Chen_Nettleton_2016}), while studies for covariance matrices are much scarcer. \cite{AC_2012} investigated near-optimal testing rules for detecting nonzero correlations in a one-sample setting for Gaussian data with clustered nonzero signals. The aim of this paper is to enhance the power in testing differences between two covariances when the differences are both sparse and faint, which is the most challenging setting for signal detection and brings about the issue of the optimal detection boundary for covariance matrices. We introduce thresholding on the $\ell_2$-formulation of \cite{Li_Chen_2012} to remove the non-signal-bearing entries of the covariances, which reduces the overall noise (variance) level of the test statistic and increases the signal-to-noise ratio of the testing problem. 
The formulation may be viewed as a parallel development to the thresholding methods for detecting differences in the means, for instance the Higher Criticism (HC) test of \cite{Donoho_Jin_2004, Hall_Jin_2010} and \cite{Delaigle_2011_JRSSB}, and the $\ell_2$-thresholding formulation in \cite{Fan_1996}, \cite{Zhong_Chen_Xu_2013} and \cite{Qiu_Chen_Nettleton_2016}. However, compared with the studies on thresholding tests for the means, there is little work on thresholding tests for covariance matrices beyond a discussion in \cite{Donoho_Jin_2015}, largely due to the difficulty in treating the dependence among the entries of the sample covariance matrices. To overcome the theoretical difficulty, we adopt a matrix version of the blocking method to partition the matrix entries into big square blocks separated by small rectangular blocks. The coupling technique is used to construct a U-statistic equivalent to the thresholding test statistic based on the covariance matrix block partition. The equivalent U-statistic formulation allows establishing the asymptotic distribution of the test statistic via the martingale central limit theorem \citep{Hall_Heyde_1980}. A multi-thresholding test procedure is proposed to make the test adaptive to the unknown signal strength and sparsity. Under the setting of rare and faint differences between the two covariances, the power of the proposed test is studied and its detection boundary is derived, which shows the benefits of the multi-thresholding over existing two-sample covariance tests. The paper is organized as follows. We introduce the setting of the covariance testing in Section 2. The thresholding statistic and the multi-level thresholding test are proposed in Sections 3 and 4, with the power and detection boundary established in Section 5. Simulation studies and discussions are presented in Sections 6 and 7, respectively. Proofs and a real data analysis are relegated to the appendix and the supplementary material (SM). 
\setcounter{equation}{0} \section{Preliminary} Suppose that there are two independent samples of $p$-dimensional random vectors ${\mathbf X}_{1},\dots,{\mathbf X}_{n_1} \overset{i.i.d.}{\sim} \text{F}_1$ and ${\mathbf Y}_{1},\dots, {\mathbf Y}_{n_2} \overset{i.i.d.}{\sim} \text{F}_2$ drawn from two distributions $\text{F}_1$ and $\text{F}_2$, respectively, where ${\mathbf X}_{k}=(X_{k 1},\dots,X_{k p})^{{ \mathrm{\scriptscriptstyle T} }}$, ${\mathbf Y}_{k}=(Y_{k 1},\dots,Y_{k p})^{{ \mathrm{\scriptscriptstyle T} }}$, $n_1$ and $n_2$ are the sample sizes, and ``i.i.d.'' stands for ``independent and identically distributed''. Let ${\bf \mu}_1 = (\mu_{11}, \ldots, \mu_{1p})^{{ \mathrm{\scriptscriptstyle T} }}$ and ${\bf\mu}_2 = (\mu_{21}, \ldots, \mu_{2p})^{{ \mathrm{\scriptscriptstyle T} }}$ be the means of $\text{F}_1$ and $\text{F}_2$, and ${\bf\Sigma}_1=(\sigma_{ij1})_{p\times p}$ and ${\bf \Sigma}_2=(\sigma_{ij2})_{p\times p}$ be the covariance matrices of $\text{F}_1$ and $\text{F}_2$, respectively. Let $\bPsi_{1} = (\rho_{ij1})_{p\times p}$ and $\bPsi_{2} = (\rho_{ij2})_{p\times p}$ be the corresponding correlation matrices. We consider testing \begin{equation} H_0:{\bf \Sigma}_1={\bf \Sigma}_2 \quad \text{vs.} \quad H_a:{\bf \Sigma}_1\neq{\bf \Sigma}_2 \label{H0} \end{equation} under a high-dimensional setting where $p \gg n_1,n_2$. Let ${\bf \Delta} = {\bf \Sigma}_1 - {\bf \Sigma}_2 = (\delta_{ij})$ where $\delta_{ij} = \sigma_{ij1} - \sigma_{ij2}$ are component-wise differences between ${\bf \Sigma}_1$ and ${\bf \Sigma}_2$, $q=p(p+1)/2$ be the number of distinct parameters and $n = n_1n_2 / (n_1 + n_2)$ be the effective sample size in the testing problem. 
While hypothesis (\ref{H0}) offers all possible alternatives against the equality of the two covariances, we consider in this study a subset of the alternatives that constitutes the most challenging setting, with the number of non-zero $\delta_{ij}$ being rare and the magnitude of the non-zero $\delta_{ij}$ being faint; see \cite{Donoho_Jin_2004} and \cite{Hall_Jin_2010} for similar settings in the context of testing means. Let $m_a$ denote the number of nonzero $\delta_{ij}$ for $i \leq j$. We assume a sparse setting such that $m_a = \lfloor q^{(1-\beta)} \rfloor$ for a $\beta\in(1/2,1)$, where $\beta$ is the sparsity parameter and $\lfloor \cdot \rfloor$ is the integer truncation function. We note that $\beta \in (0,1/2]$ corresponds to the dense case, under which the testing is easier. The faintness of signals is characterized by \begin{equation} \delta_{ij} = \sqrt{2r_{0, ij}\log(q)/n} = \sqrt{4r_{0, ij}\log(p)/n}\{1 + o(1)\} ~~ \mbox{if~~ $\delta_{ij} \ne 0$} \label{eq:Sstrength}\end{equation} for $r_{0, ij}>0$. As shown in Theorem \ref{tm3}, $\sqrt{\log(p) / n}$ in (\ref{eq:Sstrength}) is the minimum rate for successful signal detection under the sparse setting. Specifically, our analysis focuses on a special case of (\ref{H0}) such that \begin{equation}\begin{split} &H_{0}: \delta_{ij} = 0 \mbox{ \ for all $1\leq i \leq j \leq p$ \ vs.} \\ &H_{a}: \mbox{ $m_a =\lfloor q^{(1-\beta)} \rfloor$ nonzero $\delta_{ij}$ with strength specified in (\ref{eq:Sstrength}).} \end{split} \label{SparseH1}\end{equation} Here, the signal strength $r_{0, ij}$ together with $\beta \in (1/2,1)$ constitutes the rare and faint signal setting, which has been used to evaluate tests on means and regression coefficients (\citealp{Donoho_Jin_2004}; \citealp{Hall_Jin_2010}; \citealp{Zhong_Chen_Xu_2013}; \citealp{Qiu_Chen_Nettleton_2016}). 
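To make the rare-and-faint regime concrete, the following Python sketch constructs a symmetric difference matrix ${\bf \Delta}$ with $m_a = \lfloor q^{1-\beta} \rfloor$ nonzero upper-triangular entries of common magnitude $\sqrt{2r\log(q)/n}$. The dimensions, the parameter values and all variable names are illustrative choices of ours, not quantities taken from the paper.

```python
import numpy as np

# Sketch: build a rare-and-faint alternative Delta = Sigma_1 - Sigma_2 with
# m_a = floor(q^(1-beta)) nonzero entries of size sqrt(2 r log(q) / n).
rng = np.random.default_rng(0)
p, n1, n2 = 200, 80, 80
n = n1 * n2 / (n1 + n2)              # effective sample size
q = p * (p + 1) // 2                 # number of distinct covariance entries
beta, r = 0.6, 0.8                   # sparsity and strength parameters (illustrative)
m_a = int(np.floor(q ** (1 - beta))) # number of nonzero differences
delta = np.sqrt(2 * r * np.log(q) / n)

Delta = np.zeros((p, p))
iu, ju = np.triu_indices(p)          # the q upper-triangular positions
pick = rng.choice(q, size=m_a, replace=False)
Delta[iu[pick], ju[pick]] = delta    # place the m_a faint signals
Delta = np.triu(Delta) + np.triu(Delta, 1).T   # keep Delta symmetric
```

With $\beta > 1/2$ the number of signals $m_a$ is of smaller order than $\sqrt{q}$, which is what makes the dense-signal $\ell_2$-tests lose power in this regime.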
Our proposed test is designed to achieve high power under $H_a$ of (\ref{SparseH1}), which offers the most challenging setting for detecting unequal covariances, as shown in Theorem \ref{tm3}. Hypotheses (\ref{SparseH1}) are composite null versus composite alternative. Under the null, although the two covariances are the same, they can take different values; and under the alternative no prior distribution is assumed on the locations of the nonzero $\delta_{ij}$. This is different from the simple null versus simple alternative setting of \cite{Donoho_Jin_2004}. The derivation of the optimal detection boundary for such composite hypotheses is more difficult, as shown in the later analysis. Let $\{\pi_{\ell, p}\}_{\ell=1}^{p !}$ denote all possible permutations of $\{1, \dots, p\}$, and let ${{\mathbf X}}_{k}(\pi_{\ell, p})$ and ${{\mathbf Y}}_{k}(\pi_{\ell, p})$ be the reorderings of ${{\mathbf X}}_{k}$ and ${{\mathbf Y}}_{k}$ corresponding to a permutation $\pi_{\ell, p}$. We assume that there is a permutation $\pi_{\ell_{*}, p}$ such that ${{\mathbf X}}_{k}(\pi_{\ell_{*}, p})$ and ${{\mathbf Y}}_{k}(\pi_{\ell_{*}, p})$ are weakly dependent, defined via the $\beta$-mixing \citep{Bradley_2005}. As the proposed statistic in (\ref{SingleTest}) is of the $\ell_2$-type and is invariant to permutations of ${{\mathbf X}}_{k}$ and ${{\mathbf Y}}_{k}$, there is no need to know $\pi_{\ell_{*}, p}$. Let $\{{{\mathbf X}}_{k}\} = \{{{\mathbf X}}_{k}(\pi_{\ell_{*}, p})\}$ and $\{{{\mathbf Y}}_{k}\} = \{{{\mathbf Y}}_{k}(\pi_{\ell_{*}, p})\}$ to simplify notation. Let $\mathcal{F}_{m_a}^{m_b}({{\mathbf X}}_{k})=\sigma\{X_{kj}: m_a \leq j \leq m_b\}$ and $\mathcal{F}_{m_a}^{m_b}({{\mathbf Y}}_{k})=\sigma\{Y_{kj}: m_a \leq j \leq m_b\}$ be the $\sigma$-fields generated by $\{{{\mathbf X}}_{k}\}$ and $\{{{\mathbf Y}}_{k}\}$ for $1 \leq m_a \leq m_b \leq p$. 
The $\beta$-mixing coefficients are $\zeta_{x, p}(h) = \sup_{1 \leq m \leq p - h} \zeta\{\mathcal{F}_{1}^{m}({{\mathbf X}}_{k}), \mathcal{F}_{m+h}^{p}({{\mathbf X}}_{k})\}$ and $\zeta_{y, p}(h) = \sup_{1 \leq m \leq p - h} \zeta\{\mathcal{F}_{1}^{m}({{\mathbf Y}}_{k}), \mathcal{F}_{m+h}^{p}({{\mathbf Y}}_{k})\}$ \citep{Bradley_2005}, where for two $\sigma$-fields $\mathcal{A}$ and $\mathcal{B}$, $$\zeta(\mathcal{A}, \mathcal{B}) = \frac{1}{2}\sup \sum_{l_1 = 1}^{u_1}\sum_{l_2 = 1}^{u_2}\big| P(A_{l_1} \cap B_{l_2}) - P(A_{l_1})P(B_{l_2}) \big|.$$ Here, the supremum is taken over all finite partitions $\{A_{l_1} \in \mathcal{A} \}_{l_1=1}^{u_1}$ and $\{B_{l_2} \in \mathcal{B}\}_{l_2=1}^{u_2}$ of the sample space, and $u_1,u_2 \in \mathbb{Z}^{+}$, the set of positive integers. Let $\bar{{\mathbf X}}=\sum_{k=1}^{n_1}{\mathbf X}_k/n_{1}$ and $\bar{{\mathbf Y}}=\sum_{k=1}^{n_2}{\mathbf Y}_k/n_{2}$ be the two sample means where $\bar{{\mathbf X}} = (\bar{X}_1, \dots, \bar{X}_p)^{{ \mathrm{\scriptscriptstyle T} }}$ and $\bar{{\mathbf Y}} = (\bar{Y}_1, \dots, \bar{Y}_p)^{{ \mathrm{\scriptscriptstyle T} }}$. Let \begin{eqnarray} \widehat{{\bf\Sigma}}_1 &=& (\hat{\sigma}_{ij1}) \ = \ \frac{1}{n_1}\sum_{k=1}^{n_1}({\mathbf X}_{k}-\bar{{\mathbf X}})({\mathbf X}_{k}-\bar{{\mathbf X}})^{{ \mathrm{\scriptscriptstyle T} }} \ \mbox{and } \nonumber \\ \widehat{{\bf\Sigma}}_2 &=& (\hat{\sigma}_{ij2}) \ = \ \frac{1}{n_2}\sum_{k=1}^{n_2}({\mathbf Y}_{k}-\bar{{\mathbf Y}})({\mathbf Y}_{k}-\bar{{\mathbf Y}})^{{ \mathrm{\scriptscriptstyle T} }}, \nonumber \end{eqnarray} and $\kappa = \lim_{n_1,n_2 \to \infty} {n_1}/(n_1+n_2)$. 
Moreover, let $\theta_{ij1}=\text{var}\{(X_{ki}-\mu_{1i})(X_{kj}-\mu_{1j})\}$, $\theta_{ij2}=\text{var}\{(Y_{ki}-\mu_{2i})(Y_{kj}-\mu_{2j})\}$; $\rho_{ij,lm}^{{ \mathrm{\scriptscriptstyle (1)} }} = \text{Cor}\{(X_{ki} - \mu_{1i})(X_{kj} - \mu_{1j}), (X_{kl} - \mu_{1l})(X_{km} - \mu_{1m})\}$, and $\rho_{ij,lm}^{{ \mathrm{\scriptscriptstyle (2)} }} = \text{Cor}\{(Y_{ki} - \mu_{2i})(Y_{kj} - \mu_{2j}), (Y_{kl} - \mu_{2l})(Y_{km} - \mu_{2m})\}$. Both $\theta_{ij1}$ and $\theta_{ij2}$ can be estimated by \begin{eqnarray} \hat{\theta}_{ij1} &=& \frac{1}{n_1}\sum_{k=1}^{n_1}\{(X_{ki}-\bar{X}_i)(X_{kj}-\bar{X}_j)-\hat{\sigma}_{ij1}\}^2 \ \mbox{and} \nonumber \\ \hat{\theta}_{ij2} &=& \frac{1}{n_2}\sum_{k=1}^{n_2}\{(Y_{ki}-\bar{Y}_i)(Y_{kj}-\bar{Y}_j)-\hat{\sigma}_{ij2}\}^2. \nonumber \end{eqnarray} As $\hat{\theta}_{ij1}/n_1+\hat{\theta}_{ij2}/n_2$ is ratio-consistent for the variance of $\hat{\sigma}_{ij1}-\hat{\sigma}_{ij2}$, we define a standardized difference between $\hat{\sigma}_{ij1}$ and $\hat{\sigma}_{ij2}$ as \[ M_{ij}=F_{ij}^{2} \mbox{ \ for \ } F_{ij}=\frac{\hat{\sigma}_{ij1}-\hat{\sigma}_{ij2}}{(\hat{\theta}_{ij1}/n_1+\hat{\theta}_{ij2}/n_2)^{1/2}}, \ 1\leq i\leq j \leq p. \] \cite{Cai_Liu_Xia_2013} proposed a maximum statistic $M_n=\max_{1\leq i\leq j\leq p} M_{ij}$ that targets the largest signal between ${\bf\Sigma}_1$ and ${\bf\Sigma}_2$. \cite{Li_Chen_2012} proposed an $\ell_2$-test that aims at $\|{\bf \Sigma}_1-{\bf \Sigma}_2\|^2_F$. \cite{Donoho_Jin_2015} briefly discussed the possibility of applying the Higher Criticism (HC) statistic for testing $H_0: {\bf\Sigma} = {\bf I}_p$ with Gaussian data. We propose a test that carries out multi-level thresholding on $\{M_{ij}\}$ to filter out potential signals via an $\ell_2$-formulation, and show that such thresholding leads to a more powerful test than both the maximum test and the $\ell_2$-type tests when the signals are rare and faint. 
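As a concrete illustration, a minimal Python implementation of the matrix of standardized differences $M_{ij}=F_{ij}^2$, and of the maximum statistic $M_n$ of \cite{Cai_Liu_Xia_2013}, might look as follows. The $1/n$-normalized sample covariances and the variance estimates $\hat\theta_{ij1},\hat\theta_{ij2}$ follow the definitions above; the function name and the toy data are ours.

```python
import numpy as np

def standardized_diff(X, Y):
    """Return the p x p matrix of M_ij = F_ij^2 for samples X (n1 x p), Y (n2 x p)."""
    n1, n2 = X.shape[0], Y.shape[0]
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    S1, S2 = Xc.T @ Xc / n1, Yc.T @ Yc / n2      # sigma-hat_{ij1}, sigma-hat_{ij2}
    # theta-hat_{ij} = mean_k {(centered cross-product) - sigma-hat_{ij}}^2,
    # computed as E[P^2] - (E[P])^2 with P_k the centered cross-product
    T1 = (Xc**2).T @ (Xc**2) / n1 - S1**2
    T2 = (Yc**2).T @ (Yc**2) / n2 - S2**2
    F = (S1 - S2) / np.sqrt(T1 / n1 + T2 / n2)   # standardized differences F_ij
    return F**2

rng = np.random.default_rng(1)
X = rng.standard_normal((80, 50))
Y = rng.standard_normal((100, 50))
M = standardized_diff(X, Y)
M_n = M[np.triu_indices(50)].max()               # the maximum statistic M_n
```

The thresholding statistics discussed next operate entry-wise on this matrix, so a vectorized computation of $\{M_{ij}\}$ is the natural building block.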
\setcounter{equation}{0} \section{Thresholding statistics for covariance matrices} By the moderate deviation result in Lemma 2 in the SM, under Assumptions \ref{as1} (or \ref{as1poly}), \ref{as2}, \ref{as3} and $H_{0}$ of (\ref{H0}), $P\big\{\max_{1\leq i\leq j\leq p} M_{ij}>4\log (p)\big\} \to 0$ as $n, p \to \infty$. This implies that a threshold level of $4\log (p)$ is asymptotically too large under the null hypothesis, and suggests a smaller threshold $\lambda_{p}(s)=4s\log (p)$ for a thresholding parameter $s\in(0,1)$. This leads to the thresholding statistic \begin{equation}\label{SingleTest} T_n(s)=\sum_{1\leq i\leq j\leq p} M_{ij}\mathbb{I}\{M_{ij}>\lambda_{p}(s)\}, \end{equation} where $\mathbb{I}(\cdot)$ denotes the indicator function. The statistic $T_n(s)$ removes the small standardized differences $M_{ij}$ between $\widehat{{\bf\Sigma}}_1$ and $\widehat{{\bf\Sigma}}_2$. Compared with the $\ell_2$-statistic of \cite{Li_Chen_2012}, $T_n(s)$ keeps only large $M_{ij}$ after filtering out the potentially insignificant ones. By removing the smaller $M_{ij}$'s, the variance of $T_n(s)$ is much reduced from that of \cite{Li_Chen_2012}, which translates to larger power, as shown in the next section. Compared with the $\ell_{\max}$-test of \cite{Cai_Liu_Xia_2013}, whose power is determined by the maximum of the $M_{ij}$, the thresholding statistic uses not only the largest $M_{ij}$ but also all relatively large entries. This enhances the ability to detect weak signals, as reflected in the power and the detection boundary in Section 5. Let $C$ be a positive constant whose value may change from context to context. For two real sequences $\{a_n\}$ and $\{b_n\}$, $a_n \sim b_n$ means that there are two positive constants $c_1$ and $c_2$ such that $c_1\leq a_n/b_n\leq c_2$ for all $n$. We make the following assumptions in our analysis. \begin{customas}{1A} \label{as1} As $n\to \infty$, $p\to \infty$, $\log p \sim n^{\varpi}$ for a $\varpi \in (0, 1/5)$. 
\end{customas} \begin{customas}{1B} \label{as1poly} As $n\to \infty$, $p\to \infty$, $n \sim p^{\xi}$ for a $\xi \in (0, 2)$. \end{customas} \begin{customas}{2} \label{as2} There exists a positive constant $\tau$ such that \begin{eqnarray} &\tau < \min_{1\leq i \leq p}\{\sigma_{ii1}, \sigma_{ii2}\} \leq \max_{1\leq i \leq p}\{\sigma_{ii1}, \sigma_{ii2}\} < \tau^{-1} \ \text{and} \label{Assum2-1} \\ &\min_{i,j}\{ \theta_{ij1}/(\sigma_{ii1}\sigma_{jj1}),\theta_{ij2}/(\sigma_{ii2}\sigma_{jj2})\} >\tau.\label{Assum2-2} \end{eqnarray} \end{customas} \begin{customas}{3} \label{as3} There exist positive constants $\eta$ and $C$ such that for all $|t|<\eta$, \[ E[\exp\{t(X_{ki}-\mu_{1i})^2\}]\leq C~~\text{and}~~E[\exp\{t(Y_{ki}-\mu_{2i})^2\}]\leq C \quad \text{for}~~i=1,\dots, p. \] \end{customas} \begin{customas}{4} \label{as4} There exists a small positive constant $\rho_0$ such that \begin{equation} \max\{|\rho_{ij1}|, |\rho_{ij2}|\} < 1 - \rho_0 \mbox{ \ for any $i \neq j$}, \label{Assum4-1}\end{equation} and $\max\{|\rho_{ij,lm}^{{ \mathrm{\scriptscriptstyle (1)} }}|, |\rho_{ij,lm}^{{ \mathrm{\scriptscriptstyle (2)} }}|\} < 1 - \rho_0$ for any $(i, j) \neq (l, m)$. \end{customas} \begin{customas}{5} \label{as5} There is a permutation ($\pi_{\ell_{*}, p}$) of the data sequences $\{X_{kj}\}_{j=1}^{p}$ and $\{Y_{kj}\}_{j=1}^{p}$ such that the permuted sequences are $\beta$-mixing with the mixing coefficients satisfying $\max\{ \zeta_{x, p}(h), \zeta_{y, p}(h)\}\leq C \gamma^{h}$ for a constant $\gamma \in (0, 1)$, any $p \in \mathbb{Z}^{+}$ and positive integer $h \leq p - 1$. \end{customas} Assumptions \ref{as1} and \ref{as1poly} specify the exponential and polynomial growth rates of $p$ relative to $n$, respectively. Assumption \ref{as2} prescribes that $\theta_{ij1}$ and $\theta_{ij2}$ are bounded away from zero, which ensures that the denominators of $M_{ij}$ are bounded away from zero with probability approaching 1. 
Assumption \ref{as3} assumes that the distributions of $X_{ki}$ and $Y_{ki}$ are sub-Gaussian. Sub-Gaussianity is commonly assumed in the high-dimensional literature \citep{BL_2008a, Xue_Ma_Zou_2012, Cai_Liu_Xia_2013}. Assumption \ref{as4} regulates the correlations among the variables in ${\bf X}_{k}$ and ${\bf Y}_{k}$, and subsequently the correlations among $\{ F_{ij}\}$, where $M_{ij} = F_{ij}^{2}$. The $\beta$-mixing Assumption \ref{as5} is made for the unknown variable permutation $\pi_{\ell_{*}, p}$. Similar mixing conditions on the column-wise dependence were made in \cite{Delaigle_2011_JRSSB} and \cite{Zhong_Chen_Xu_2013} for thresholding tests of means. If $\{X_{kj}\}_{j = 1}^{p}$ and $\{Y_{kj}\}_{j = 1}^{p}$ are both Markov chains (the vector sequence under the variable permutation), Theorem 3.3 in \cite{Bradley_2005} provides conditions for the processes being $\beta$-mixing. If $\{X_{kj}\}_{j = 1}^{p}$ and $\{Y_{kj}\}_{j = 1}^{p}$ are linear processes with i.i.d.\ innovation processes $\{\epsilon_{x, kj}\}_{j = 1}^{p}$ and $\{\epsilon_{y, kj}\}_{j = 1}^{p}$, which include the ARMA processes as a special case, then they are $\beta$-mixing provided the innovation processes are absolutely continuous \citep{Mokkadem_1988}. The latter condition is particularly weak. Under the Gaussian distribution, any covariance that matches the covariance of an ARMA process up to a permutation will be $\beta$-mixing. Furthermore, normally distributed data with a banded covariance or a block-diagonal covariance after a certain variable permutation also satisfy this assumption. The $\beta$-mixing coefficients are assumed to decay at an exponential rate in Assumption \ref{as5} to simplify proofs, while arithmetic rates can be entertained at the expense of more technical details. There are implications of the $\beta$-mixing on $\sigma_{ij 1}$ and $\sigma_{ij 2}$ due to Davydov's inequality, which potentially restricts the signal level $\delta_{ij} = \sqrt{2 r_{0, ij} \log(q)/n}$. 
However, as the $\beta$-mixing is assumed for the unknown permutation $\pi_{\ell_{*}, p}$, which is likely not the ordering of the observed data, the restriction would be minimal. In the unlikely event that the observed order of the data matches that under $\pi_{\ell_{*}, p}$, the $\beta$-mixing would imply that the signals would appear near the main diagonal of $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$. However, as the power of the test is determined by the detectable signal strength at or larger than the order $\sqrt{\log(q) / n}$, the effect of the $\beta$-mixing on the alternative hypothesis and the power is limited as long as there exists a portion of differences with the standardized strength above the detection boundary $\rho^{*}(\beta,\xi)$ established in Propositions \ref{pn3} and \ref{pn4}. Let $\mu_{{T}_n,0}(s)$ and $\sigma^2_{{T}_n,0}(s)$ be the mean and variance of the thresholding statistic ${T}_n(s)$, respectively, under $H_0$. Let $\phi(\cdot)$ and $\bar{\Phi}(\cdot)$ be the density and survival functions of $N(0,1)$, respectively. Recall that $q=p(p+1)/2$. The following proposition provides expansions of $\mu_{{T}_n,0}(s)$ and $\sigma^2_{{T}_n,0}(s)$. \begin{pn} \label{pn1} Under Assumptions \ref{as1} or \ref{as1poly} and Assumptions \ref{as2}-\ref{as5}, we have $\mu_{{T}_n,0}(s)=\tilde{\mu}_{{T}_n,0}(s)\{1+O(\lambda_{p}^{3/2}(s) n^{-1/2})\}$ where \[ \tilde{\mu}_{{T}_n,0}(s) = q\{2\lambda_p^{1/2}(s)\phi(\lambda_p^{1/2}(s))+2\bar{\Phi}(\lambda_p^{1/2}(s))\}. \] In addition, under either \text{(i)} Assumption \ref{as1} with $s>1/2$ or \text{(ii)} Assumption \ref{as1poly} with $s>1/2-\xi/4$, $\sigma^2_{{T}_n,0}(s)=\tilde{\sigma}^2_{{T}_n,0}(s)\{1+o(1)\}$, where $ \tilde{\sigma}^2_{{T}_n,0}(s) = q[2\{\lambda_p^{3/2}(s)+3\lambda_p^{1/2}(s)\}\phi(\lambda_p^{1/2}(s))+6\bar{\Phi}(\lambda_p^{1/2}(s))]. 
$ \end{pn} From Proposition \ref{pn1}, we see that the main orders $\tilde{\mu}_{{T}_n,0}(s)$ and $\tilde{\sigma}^2_{{T}_n,0}(s)$ of $\mu_{{T}_n,0}(s)$ and $\sigma^2_{{T}_n,0}(s)$ are known and solely determined by $p$ and $s$, and hence can be readily used to estimate the mean and variance of $T_{n}(s)$. The smaller order term $\lambda_{p}^{3/2}(s) n^{-1/2}$ in $\mu_{{T}_n,0}(s)$ is useful in analyzing the performance of the thresholding test, as in (\ref{EstMeanDifference}) later. Compared with the variance of the thresholding statistic for the means \citep{Zhong_Chen_Xu_2013}, the exact main order $\tilde{\sigma}^2_{{T}_n,0}(s)$ of $\sigma^2_{{T}_n,0}(s)$ requires a minimum bound on the threshold level, due to the more complex dependence among $\{M_{ij}\mathbb{I}(M_{ij} > \lambda_p(s))\}$. More discussion regarding this is provided after Theorem \ref{tm1}. Next, we derive the asymptotic distribution of $T_{n}(s)$ at a given $s$. The testing for covariances involves a more complex dependence structure than those in time series and spatial data. In particular, although the data vector is $\beta$-mixing under a permutation, the vectorization of $(M_{ij})_{p \times p}$ is not necessarily a mixing sequence, as the sample covariances in the same row or column are dependent since they share common segments of data. As a result, the conventional blocking plus coupling approach \citep{Berbee_1979} for mixing series is insufficient to establish the asymptotic distribution of $T_{n}(s)$. To tackle the challenge, we first use a combination of the matrix blocking, as illustrated in Figure \ref{Fig_Demo} in the Appendix, and the coupling method. Due to the circular dependence of the sample covariances, this only produces independence among the big matrix blocks with no overlapping indices, and the matrix blocks that share common indices remain dependent. 
To respect this reality, we introduce a novel U-statistic representation (\ref{Ustat}), which allows the use of the martingale central limit theorem on the U-statistic representation to attain the asymptotic normality of $T_n(s)$. \begin{tm} \label{tm1} Suppose Assumptions \ref{as2}-\ref{as5} are satisfied. Then, under the $H_0$ of (\ref{H0}), and either \text{(i)} Assumption \ref{as1} with $s>1/2$ or \text{(ii)} Assumption \ref{as1poly} with $s>1/2-\xi/4$, we have \[ \sigma^{-1}_{{T}_n,0}(s)\{{T}_n(s)-\mu_{{T}_n,0}(s)\}\stackrel{d}\to N(0,1) \quad \mbox{as~$n, p \to \infty$.} \] \end{tm} As the dependence between $M_{i_1j_1}\mathbb{I}\{M_{i_1j_1}>\lambda_{p}(s)\}$ and $M_{i_2j_2}\mathbb{I}\{M_{i_2j_2}>\lambda_{p}(s)\}$ decreases as the threshold level $s$ increases, the restriction on $s$ in Theorem \ref{tm1} serves to control the dependence among the thresholded sample covariances in $T_n(s)$. Under Assumption \ref{as1poly}, which prescribes the polynomial growth of $p$, the minimum threshold level that guarantees the Gaussian limit of $T_n(s)$ approaches 0 as $\xi$ approaches 2. Compared with the thresholding statistic for the means \citep{Zhong_Chen_Xu_2013}, thresholding on the covariance matrices requires a larger threshold level in order to control the dependence among the entries of the sample covariances. \section{Multi-Thresholding test} To formulate the multi-thresholding test, we first need to construct a single-level thresholding test based on Theorem \ref{tm1}. From Proposition \ref{pn1}, we note that $\tilde{\sigma}^2_{{T}_n,0}(s)/\sigma^2_{{T}_n,0}(s) \to 1$. Let $\hat{\mu}_{{T}_n,0}(s)$ be an estimate of ${\mu}_{{T}_n,0}(s)$ that satisfies \begin{equation} \label{sufficient} \hat{\mu}_{{T}_n,0}(s) -{\mu}_{{T}_n,0}(s) =o_p\{\tilde{\sigma}_{{T}_n,0}(s)\}. 
\end{equation} By Slutsky's theorem, under (\ref{sufficient}), the conclusion of Theorem \ref{tm1} remains valid if $\mu_{{T}_n,0}(s)$ and $\sigma^2_{{T}_n,0}(s)$ are replaced by $\hat{\mu}_{{T}_n,0}(s)$ and $\tilde{\sigma}^2_{{T}_n,0}(s)$, respectively. A natural choice of $\hat{\mu}_{{T}_n,0}(s)$ is the main order term $\tilde{\mu}_{{T}_n,0}(s)$ given in Proposition \ref{pn1}. According to the expansion of $\mu_{{T}_n,0}(s)$, \begin{equation} \frac{\mu_{{T}_n,0}(s)-\tilde{\mu}_{{T}_n,0}(s)}{\tilde{\sigma}_{{T}_n,0}(s)}=O_p\{\lambda_p^{5/4}(s)p^{1-s}n^{-1/2}\}, \label{EstMeanDifference}\end{equation} which converges to zero under Assumption \ref{as1poly} and $s>1-\xi/2$. Therefore, we reject the null hypothesis of (\ref{H0}) if \begin{equation} {T}_n(s) > \tilde{\mu}_{{T}_n,0}(s) + z_{\alpha}{\tilde{\sigma}_{{T}_n,0}(s)}, \label{test} \end{equation} where $z_{\alpha}$ is the upper $\alpha$ quantile of $N(0, 1)$. We call (\ref{test}) the single-level thresholding test, since it is based on a single $s$. Condition (\ref{sufficient}) simplifies the analysis of the thresholding statistic. When estimators satisfying (\ref{sufficient}) are not available, we may choose $\hat{\mu}_{T_n,0}(s)=\tilde{\mu}_{T_n,0}(s)$, while the lower threshold bound has to be chosen as $1 - \xi / 2$ to make (\ref{EstMeanDifference}) converge to 0. A more accurate estimator of $\mu_{T_{n}, 0}(s)$ can be constructed by establishing expansions for ${\mu}_{{T}_n,0}(s)$ and then correcting for the bias empirically. \cite{Delaigle_2011_JRSSB} found that more precise moderate deviation results can be derived for bootstrap-calibrated t-statistics, which provides a more accurate estimator of the mean. Existing works (\citealp{Donoho_Jin_2004}; \citealp{Delaigle_2011_JRSSB}) have shown that, for detecting rare and faint signals in means, a single thresholding level cannot make the testing procedure adaptive to the unknown signal strength and sparsity. 
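For illustration, the single-level test can be sketched in a few lines of Python using the main-order null mean $\tilde{\mu}_{T_n,0}(s)$ and variance $\tilde{\sigma}^2_{T_n,0}(s)$ from Proposition \ref{pn1}. This is a sketch under our own conventions, not the authors' implementation: `M` stands for the matrix of $M_{ij}$, the toy data are i.i.d.\ squared normals, and the critical value $z_{0.05}\approx 1.645$ is hard-coded.

```python
import math
import numpy as np

def phi(x):      # N(0,1) density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def surv(x):     # N(0,1) survival function (bar-Phi)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def single_level_test(M, s, z_alpha=1.645):
    """Reject H0 when T_n(s) > mu-tilde + z_alpha * sigma-tilde."""
    p = M.shape[0]
    q = p * (p + 1) / 2.0
    lam = 4.0 * s * math.log(p)          # threshold lambda_p(s) = 4 s log(p)
    rt = math.sqrt(lam)
    Mu = M[np.triu_indices(p)]
    T = Mu[Mu > lam].sum()               # thresholding statistic T_n(s)
    # main-order null mean and variance from Proposition 1
    mu0 = q * (2.0 * rt * phi(rt) + 2.0 * surv(rt))
    var0 = q * (2.0 * (lam * rt + 3.0 * rt) * phi(rt) + 6.0 * surv(rt))
    stat = (T - mu0) / math.sqrt(var0)
    return stat, stat > z_alpha

# toy "null" matrix: symmetric with chi-square(1)-type entries
rng = np.random.default_rng(2)
p = 100
F = rng.standard_normal((p, p))
M = (np.triu(F) + np.triu(F, 1).T) ** 2
stat, reject = single_level_test(M, s=0.6)
```

For i.i.d.\ standard normal $F_{ij}$ the two formulas are in fact exact moments of $T_n(s)$, which is why they serve as the main-order approximations under the dependence conditions of Section 3.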
However, utilizing many thresholding levels can capture the underlying sparse and faint signals. This is the path we take for the covariance testing problem. Let $\mathcal{T}_n(s)=\tilde{\sigma}^{-1}_{{T}_n,0}(s)\{{T}_n(s)-\hat{\mu}_{{T}_n,0}(s)\}$ be the standardization of ${T}_n(s)$. We construct a multi-level thresholding statistic by maximizing $\mathcal{T}_n(s)$ over a range of thresholds, in the same spirit as the HC test of \cite{Donoho_Jin_2004} and the multi-thresholding test of \cite{Zhong_Chen_Xu_2013} for the means. Define the multi-level thresholding statistic \begin{equation}\label{MultiThreshold} {\mathcal{V}}_n(s_0)=\sup_{s\in \mathcal{S}(s_0)}\mathcal{T}_n(s),\end{equation} where $\mathcal{S}(s_0)=(s_0,1-\eta]$ for a lower bound $s_0$ and an arbitrarily small positive constant $\eta$. From Theorem \ref{tm1}, a choice of $s_0$ is either $1/2$ or $1/2 - \xi/4$, depending on whether $p$ has the exponential or the polynomial growth rate. Define \begin{equation} \mathcal{S}_n(s_0)=\{s_{ij}: s_{ij}=M_{ij}/(4\log (p))~\text{and}~s_0< s_{ij}\leq (1-\eta)\}. \label{thresholdset} \end{equation} Since both $\hat{\mu}_{{T}_n,0}(s)$ and $\tilde{\sigma}_{{T}_n,0}(s)$ are monotone decreasing in $s$, ${\mathcal{V}}_n(s_0)$ can be attained on $\mathcal{S}_n(s_0)$ such that \begin{equation} {\mathcal{V}}_n(s_0)=\sup_{s\in \mathcal{S}_n(s_0)}\mathcal{T}_n(s). \label{sup_test} \end{equation} This reduces the computation to a finite number of threshold levels. The asymptotic distribution of ${\mathcal{V}}_n(s_0)$ is given in the following theorem. \begin{tm} \label{tm2} Suppose the conditions of Theorem \ref{tm1} and (\ref{sufficient}) hold. Then, under $H_0$ of (\ref{H0}), \[ P\{ a(\log (p)){\mathcal{V}}_n(s_0)-b(\log (p), s_0, \eta)\leq x \} \to \exp(-e^{-x}), \] where $a(y)=(2\log (y))^{1/2}$ and $b(y,s_0,\eta)=2\log (y)+2^{-1}\log\log(y)-2^{-1}\log(\pi)+\log(1-s_0-\eta)$. 
\end{tm} This leads to an asymptotic $\alpha$-level multi-thresholding test (MTT) that rejects $H_0$ if \begin{equation} {\mathcal{V}}_n(s_0)>\{ q_{\alpha}+b(\log (p),s_0,\eta)\}/a(\log (p)), \label{testproc}\end{equation} where $q_{\alpha}$ is the upper $\alpha$ quantile of the Gumbel distribution. The test is adaptive to the unknown signal strength and sparsity, as revealed in the next section. However, the convergence of ${\mathcal{V}}_n(s_0)$ can be slow, which may cause a certain degree of size distortion. To speed up the convergence, we present a parametric bootstrap procedure with estimated covariances to approximate the null distribution of ${\mathcal{V}}_n(s_0)$ in Section 6. \setcounter{equation}{0} \section{Power and detection boundary} We evaluate the power performance of the proposed thresholding test (\ref{testproc}) under the alternative hypothesis (\ref{SparseH1}) by deriving its detection boundary, and demonstrate its superiority over the $\ell_2$-type and $\ell_{\max}$-type tests. A detection boundary is a phase transition diagram in terms of the signal strength and sparsity parameters $(r, \beta)$. We first outline the notion in the context of testing for high-dimensional means. \cite{Donoho_Jin_2004} considered testing hypotheses for the means of $p$ independent $N(\mu_j, 1)$-distributed observations, \begin{equation} H_{0}^{(m)}: \mu_{j} = 0 \mbox{ for all $j$ \ vs. \ } H_{a}^{(m)}: \mu_1,\ldots,\mu_p \overset{i.i.d.}{\sim} (1 - \epsilon) \nu_{0} + \epsilon \nu_{\mu_{a}} \label{eq:TestMean}\end{equation} where $\epsilon = p^{-\beta}$, $\mu_{a} = \sqrt{2 r \log (p)}$, $\beta \in (0, 1)$ and $r \in (0, 1)$, and $\nu_{0}$ and $\nu_{\mu_{a}}$ denote the point mass distributions at $0$ and $\mu_{a}$, respectively. The high dimensionality is reflected by $p \to \infty$. 
Let \begin{equation} \rho(\beta) = \left\{ \begin{array}{l l} \max\{0, \beta - 1/2\} & \quad \text{if $0 < \beta \leq 3/4$,} \\ (1 - \sqrt{1 - \beta})^{2} & \quad \text{if $3/4 < \beta < 1$.} \\ \end{array} \right. \label{eq: DBoundaryMean}\end{equation} \cite{Ingster_1997} showed that $r=\rho(\beta)$ is the optimal detection boundary for hypotheses (\ref{eq:TestMean}) under the Gaussian distributed data setting of \cite{Donoho_Jin_2004}, in the sense that (i) for any test of hypothesis (\ref{eq:TestMean}), \begin{equation} P(\mbox{Reject $H_{0}^{(m)} | H_{0}^{(m)}$}) + P(\mbox{Not reject $H_{0}^{(m)} | H_{a}^{(m)}$}) \to 1 \quad \mbox{ \ if \ $r < \rho(\beta)$;} \label{eq: belowDB}\end{equation} and (ii) there exists a test such that \begin{equation} P(\mbox{Reject $H_{0}^{(m)} | H_{0}^{(m)}$}) + P(\mbox{Not reject $H_{0}^{(m)} | H_{a}^{(m)}$}) \to 0 \mbox{ \ if \ $r > \rho(\beta)$,} \label{eq: aboveDB}\end{equation} as $n, p \to \infty$. \cite{Donoho_Jin_2004} showed that the HC test attains this detection boundary, and thus is optimal. They also derived phase transition diagrams for non-Gaussian data. See \cite{Zhong_Chen_Xu_2013} and \cite{Qiu_Chen_Nettleton_2016} for other constructions for testing means and regression coefficients that also have $r = \rho(\beta)$ as the detection boundary, which is not necessarily optimal under nonparametric data distributions. Define the standardized signal strength \begin{equation} r_{ij} = r_{0, ij} / \{(1 - \kappa)\theta_{ij1} + \kappa \theta_{ij2}\} \quad \hbox{for} \quad \sigma_{ij1} \neq \sigma_{ij2}, \label{SSignal}\end{equation} by recognizing that the denominator is the main order term of the variance of $\sqrt{n}(\hat{\sigma}_{ij1}-\hat{\sigma}_{ij2})$. Under Gaussian distributions, $\theta_{ij1} = \sigma_{ii1}\sigma_{jj1} + \sigma_{ij1}^2$ and $\theta_{ij2} = \sigma_{ii2}\sigma_{jj2} + \sigma_{ij2}^2$.
Under the alternative hypothesis in (\ref{SparseH1}), since the difference between $\sigma_{ij1}$ and $\sigma_{ij2}$ is at most of the order $\sqrt{\log(p) / n}$, we have $r_{ij} = r_{0, ij} / (\sigma_{ii1}\sigma_{jj1} + \sigma_{ij1}^{2}) \{1 + O(\sqrt{\log(p) / n})\}$. Define the maximal and minimal standardized signal strengths \begin{equation} \bar{r} = \max_{(i, j): \sigma_{ij1} \neq \sigma_{ij2}} r_{ij} \mbox{ \ and \ } \munderbar{r} = \min_{(i, j): \sigma_{ij1} \neq \sigma_{ij2}} r_{ij}. \label{MSSignal}\end{equation} Let \vspace{-0.5cm} \begin{equation}\begin{split} \mathcal{C}(\beta, \bar{r}, \munderbar{r}) =& \big\{({\bf \Sigma}_{1}, {\bf \Sigma}_{2}): \mbox{under $H_{a}$ of (\ref{SparseH1}) such that $m_a= \lfloor q^{1-\beta} \rfloor$, the maximal } \\ & \ \ \ \mbox{and minimal standardized signal strengths are $\bar{r}$ and $\munderbar{r}$,} \\ & \ \ \ \mbox{respectively, and satisfy Assumptions \ref{as2}, \ref{as4} and \ref{as5}} \big\} \end{split}\nonumber\end{equation} be the class of covariance matrices with sparse and weak differences. {For any $({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) \in \mathcal{C}(\beta, \bar{r}, \munderbar{r})$, } let $\mu_{{T}_n,1}(s)$ and $\sigma^2_{{T}_n,1}(s)$ be the mean and variance of ${T}_n(s)$ under $H_a$ in (\ref{SparseH1}), and let \begin{equation} \mbox{Power}_{n}({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) = P\big[{\mathcal{V}}_n(s_0) > \{ q_{\alpha}+b(\log(p),s_0,\eta)\}/a(\log (p)) | {\bf \Sigma}_{1}, {\bf \Sigma}_{2}\big] \nonumber \end{equation} be the power of the MTT in (\ref{testproc}). Let $\mbox{SNR}(s) = \frac{\mu_{T_n,1}(s) - \mu_{T_n,0}(s)}{{\sigma}_{{T}_n,1}(s)}$ denote the signal-to-noise ratio under $H_{a}$ in (\ref{SparseH1}). Note that \[{\mathcal{V}}_n(s_{0}) = \max_{s\in \mathcal{S}(s_0) } \frac{{\sigma}_{{T}_n,1}(s)}{\tilde{\sigma}_{{T}_n,0}(s)} \bigg\{\frac{{{T}}_n(s)-{\mu}_{{T}_n,1}(s)}{{\sigma}_{{T}_n,1}(s)} - \frac{\hat{\mu}_{{T}_n,0}(s) - {\mu}_{{T}_n,0}(s)}{{\sigma}_{{T}_n,1}(s)} +\text{SNR}(s) \bigg\}.
\] Thus, the power of the MTT is critically determined by $\mbox{SNR}(s)$. The next proposition gives the mean and variance of $T_{n}(s)$ under $H_{a}$ of (\ref{SparseH1}) with the same standardized signal strength ${r}^{*}$, corresponding to the cases that $r_{ij} = \munderbar{r}$ for all $\sigma_{ij1} \neq \sigma_{ij2}$ (${r}^{*} = \munderbar{r}$) and $r_{ij} = \bar{r}$ for all $\sigma_{ij1} \neq \sigma_{ij2}$ (${r}^{*} = \bar{r}$). Let $L_p$ be a multi-$\log (p)$ term which may change from case to case. \begin{pn} \label{pn2} Under Assumptions \ref{as1} or \ref{as1poly}, \ref{as2}-\ref{as5} and $H_{a}$ in (\ref{SparseH1}) with $r_{ij} = {r}^{*}$ for all $\sigma_{ij1} \neq \sigma_{ij2}$, $\mu_{{T}_n,1}(s) = \mu_{T_n,0}(s)+\mu_{T_n,a}(s)$, where \[ \mu_{{T}_n,a}(s) = L_pq^{(1-\beta)}\mathbb{I}(s<{r}^{*})+L_pq^{(1-\beta)}p^{-2(\sqrt{s}-\sqrt{{r}^{*}})^2}\mathbb{I}(s>{r}^{*}). \] In addition, under either \text{(i)} Assumption \ref{as1} with $s>1/2$ or \text{(ii)} Assumption \ref{as1poly} with $s>1/2-\xi/4$, $\sigma^2_{{T}_n,1}(s)=L_pq^{(1-\beta)}p^{-2(\sqrt{s}-\sqrt{{r}^{*}})^2}\mathbb{I}(s>{r}^{*})+L_pq^{(1-\beta)}\mathbb{I}(s<{r}^{*})+L_pqp^{-2s}$. \end{pn} From Proposition \ref{pn2}, via the maximal and minimal signal strengths defined in (\ref{MSSignal}), the detection boundary of the proposed MTT is established in Propositions \ref{pn3} and \ref{pn4} below. As shown in the previous section, a lower threshold bound $s_{0}$ is needed to control the dependence among the entries of the sample covariance matrices. The restriction on the threshold levels leads to a slightly higher detection boundary as compared with that given in (\ref{eq: DBoundaryMean}).
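To illustrate how Proposition \ref{pn2} drives the detection boundary, the sketch below (ours, in Python) records only the polynomial-in-$p$ exponents of $\mbox{SNR}(s)$, dropping the multi-$\log(p)$ factors $L_p$ and using $q \approx p^2/2$; both simplifications are our own, and only the signs of the exponents are meaningful.

```python
import numpy as np

def snr_exponent(s, r_star, beta):
    # Exponents (powers of p) read off Proposition 2, with q ~ p^2 / 2:
    # mu_{T_n,a}(s) ~ p^{2(1-beta)} for s < r*, with the extra factor
    # p^{-2(sqrt(s)-sqrt(r*))^2} for s > r*; similarly for sigma^2_{T_n,1}(s),
    # whose third term contributes the exponent 2 - 2s.
    shift = 2.0 * (np.sqrt(s) - np.sqrt(r_star)) ** 2 if s > r_star else 0.0
    mean_exp = 2.0 * (1.0 - beta) - shift
    var_exp = max(2.0 * (1.0 - beta) - shift, 2.0 - 2.0 * s)
    return mean_exp - var_exp / 2.0

def detectable(r_star, beta, s0, eta=0.05, ngrid=2000):
    # Signals are detectable when SNR(s) diverges for some s in (s0, 1 - eta]
    grid = np.linspace(s0 + 1e-3, 1.0 - eta, ngrid)
    return max(snr_exponent(s, r_star, beta) for s in grid) > 0.0
```

For instance, with $\beta = 0.7$ and $s_0 = 1/2$ the exponent is positive somewhere on the threshold range for $r^{*} = 0.3$ but nowhere for $r^{*} = 0.1$, in line with the boundary value $0.2$ derived below.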
Before proceeding further, let us define a family of detection boundaries indexed by $\xi \in [0,2]$ that connects $p$ and $n$ via $n \sim p^{\xi}$: \begin{equation} \label{detect_real} \rho^{*}(\beta,\xi)=\left\{\begin{matrix} \frac{(\sqrt{4-2\xi}-\sqrt{6-8\beta-\xi})^2}{8},& 1/2<\beta\leq 5/8-\xi/16,\\ \beta-1/2,& 5/8-\xi/16<\beta\leq 3/4,\\ (1-\sqrt{1-\beta})^2, & 3/4<\beta<1. \end{matrix}\right. \end{equation} It is noted that the phase diagrams $\rho^{*}(\beta,\xi)$ are only defined over $\beta \in (1/2,1)$, the sparse signal range. It can be checked that $\rho^{*}(\beta,\xi) \ge \rho(\beta)$ for $\beta \in (1/2,1)$ and any $\xi \in [0,2]$. The following proposition considers the case of $n \sim p^{\xi}$ for $\xi \in (0, 2)$ as prescribed in Assumption \ref{as1poly}, a case considered in \cite{Delaigle_2011_JRSSB} in the context of mean testing. \begin{figure}[h] \centering \includegraphics[scale=0.46]{DB2.eps} \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{-5pt} \caption{The detection boundary $\rho^{*}(\beta,\xi)$ in (\ref{detect_real}) of the proposed multi-level thresholding test with $s_0 = 1/2 - \xi / 4$ for $\xi = 0, 0.75, 1.5$ and $n = p^{\xi}$, and the two pieces (in dashed and dotted curves) that constitute the optimal detection boundary $\rho(\beta)$ for testing means given in (\ref{eq: DBoundaryMean}). 
} \label{Fig_DB} \end{figure} \begin{pn} \label{pn3} Under Assumptions \ref{as1poly}, \ref{as2}-\ref{as5}, (\ref{sufficient}) and the alternative hypothesis (\ref{SparseH1}), for $s_0 = 1/2 - \xi / 4$, an arbitrarily small $\epsilon>0$, and a series of nominal sizes $\alpha_n=\bar{\Phi}((\log p)^{\epsilon})\to 0$, as $n, p \to \infty$, \text{(i)} if $\munderbar{r} > \rho^{*}(\beta,\xi)$, $\inf_{({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) \in \mathcal{C}(\beta, \bar{r}, \munderbar{r})} \mbox{Power}_{n}({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) \to 1$; \text{(ii)} if $\bar{r} < \rho^{*}(\beta,\xi)$, $\sup_{({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) \in \mathcal{C}(\beta, \bar{r}, \munderbar{r})} \mbox{Power}_{n}({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) \to 0$. \end{pn} {Proposition \ref{pn3} shows that the power of the proposed MTT over the class $\mathcal{C}(\beta, \bar{r}, \munderbar{r})$ is determined by $\beta$, and the minimum and maximum standardized signal strength. More importantly, $\rho^{*}(\beta,\xi)$ in (\ref{detect_real}) is the detection boundary of the MTT.} The power converges to 1 if $\munderbar{r}$ is above this boundary, and diminishes to 0 if $\bar{r}$ is below it. The detection boundaries $\rho^{*}(\beta,\xi)$ are displayed in Figure \ref{Fig_DB} for three values of $\xi$. Note that $\rho(\beta)$ in (\ref{eq: DBoundaryMean}) is the detection boundary of the MTT for $s_0 = 0$, which corresponds to $\xi=2$ and is the lowest one in the family. It can be shown that $\rho^{*}(\beta,\xi)$ approaches $\rho(\beta)$ as $\xi \to 2$; namely, if $n \sim p^2$, we have $\rho^{*}(\beta,2) = \rho(\beta)$, which is the optimal detection boundary for testing the means with uncorrelated Gaussian data. Restricting $s \geq s_0= 1/2 - \xi / 4$ elevates the detection boundary $\rho^{*}(\beta,\xi)$ of the proposed MTT for $1/2<\beta\leq 5/8-\xi/16$ as a price for controlling the size of the test.
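The boundary (\ref{detect_real}) and the mean-testing boundary (\ref{eq: DBoundaryMean}) are simple piecewise functions; coding them up (a small sketch of ours, in Python) makes the comparisons stated above easy to verify numerically, e.g.\ $\rho^{*}(\beta,\xi) \ge \rho(\beta)$ on $(1/2,1)$ and $\rho^{*}(\beta,2) = \rho(\beta)$.

```python
import numpy as np

def rho_star(beta, xi):
    """Detection boundary rho*(beta, xi) of the MTT in (detect_real),
    defined for beta in (1/2, 1) and xi in [0, 2]."""
    if 0.5 < beta <= 0.625 - xi / 16.0:
        return (np.sqrt(4.0 - 2.0 * xi) - np.sqrt(6.0 - 8.0 * beta - xi)) ** 2 / 8.0
    if beta <= 0.75:
        return beta - 0.5
    return (1.0 - np.sqrt(1.0 - beta)) ** 2

def rho(beta):
    """Optimal boundary rho(beta) for testing means, (eq: DBoundaryMean)."""
    return max(0.0, beta - 0.5) if beta <= 0.75 else (1.0 - np.sqrt(1.0 - beta)) ** 2
```

One can also check continuity at the knot $\beta = 5/8 - \xi/16$, where both branches of $\rho^{*}$ equal $(2-\xi)/16$.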
Similar results on the influence of the lower threshold bound on testing means were given in \cite{Delaigle_2011_JRSSB}. The following proposition shows that $\rho^{*}(\beta,0)$ is the detection boundary when the dimension $p$ grows exponentially fast with $n$, which can be viewed as a degenerate polynomial growth case with $\xi = 0$. \begin{pn} \label{pn4} Under Assumptions \ref{as1}, \ref{as2}-\ref{as5}, (\ref{sufficient}) and the alternative hypothesis (\ref{SparseH1}), for $s_0 = 1/2$, an arbitrarily small $\epsilon>0$, and a series of nominal sizes $\alpha_n=\bar{\Phi}((\log p)^{\epsilon})\to 0$, as $n, p \to \infty$, \text{(i)} if $\munderbar{r} > \rho^{*}(\beta,0)$, $\inf_{({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) \in \mathcal{C}(\beta, \bar{r}, \munderbar{r})} \mbox{Power}_{n}({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) \to 1$; \text{(ii)} if $\bar{r} < \rho^{*}(\beta,0)$, $\sup_{({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) \in \mathcal{C}(\beta, \bar{r}, \munderbar{r})} \mbox{Power}_{n}({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) \to 0$. \end{pn} As $\rho^{*}(\beta,0) \ge \rho^{*}(\beta,\xi)$ for any $\xi \in (0,2]$, the result also shows that a higher growth rate of $p$ leads to a higher detection boundary, which may be viewed as a sacrifice of power due to the higher dimensionality. From \cite{Cai_Liu_Xia_2013}, the power of the $\ell_{\max}$-test converges to 1 if $$\max_{1 \leq i \leq j \leq p} \frac{|\sigma_{ij1} - \sigma_{ij2}|}{(\theta_{ij1} / n_1 + \theta_{ij2} / n_2)^{1/2}} > 4 \sqrt{\log p},$$ which is equivalent to $\bar{r} > 4$ in our context. Hence, the signal strength required by the $\ell_{\max}$-test is stronger than the $r_{ij} \in (0, 1)$ required by the MTT in this paper. Also, the $\ell_2$-test of \cite{Li_Chen_2012} does not have non-trivial power for $\beta > 1/2$. Hence, the proposed MTT is more powerful than both the $\ell_2$- and $\ell_{\max}$-tests in detecting sparse and weak signals.
Propositions \ref{pn3} and \ref{pn4} indicate that the MTT can detect the differences between the unequal covariances in ${\bf \Sigma}_{1}$ and ${\bf \Sigma}_{2}$ at the order of $c_{a}\sqrt{\log(p) / n}$ for some positive constant $c_{a}$. We now show that the order $\sqrt{\log(p) / n}$ is minimax optimal. Let $\mathcal{W}_{\alpha}$ be the collection of all $\alpha$-level tests for hypotheses (\ref{H0}) under Gaussian distributions and Assumptions \ref{as2}, \ref{as4} and \ref{as5}, namely, $P(W_{\alpha} = 1 | H_{0}) \leq \alpha$ for any $W_{\alpha} \in \mathcal{W}_{\alpha}$. Note that (\ref{Assum2-1}) and (\ref{Assum4-1}) are sufficient conditions for (\ref{Assum2-2}) and the second part of Assumption \ref{as4} under Gaussian distributions, respectively. Define a class of covariance matrices with the differences being at least of order $\{\log(p) / n\}^{1/2}$: \begin{eqnarray} \underline{\mathcal{C}}(\beta, c) &=& \big\{({\bf \Sigma}_{1}, {\bf \Sigma}_{2}): \mbox{under $H_{a}$ of (\ref{SparseH1}) such that} \ m_{a} = \lfloor q^{1-\beta} \rfloor, r_{0, ij} \geq c \nonumber \\ && \ \ \ \mbox{for all $\sigma_{ij1} \neq \sigma_{ij2}$, and satisfy Assumptions \ref{as2}, \ref{as4} and \ref{as5}} \big\}. \nonumber \end{eqnarray} Assumptions \ref{as4} and \ref{as5} are imposed in $\underline{\mathcal{C}}(\beta, c_{0})$ and $\mathcal{W}_{\alpha}$ to facilitate comparing the power performance of the MTT with the minimax rate. Compared with the covariance class $\mathcal{C}(\beta, \bar{r}, \munderbar{r})$, $\underline{\mathcal{C}}(\beta, c)$ has no constraint on the maximal signal strength. For Gaussian data, $\theta_{ij1}, \theta_{ij2} \leq 2 \tau^{-2}$, where $\tau$ specifies the bounds in (\ref{Assum2-1}). Thus, the standardized signal strength $r_{ij} \geq c\tau^{2} / 2$ for all $\sigma_{ij1} \neq \sigma_{ij2}$.
For the MTT, from Propositions \ref{pn3} and \ref{pn4}, $\inf_{({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) \in \underline{\mathcal{C}}(\beta, c)} \mbox{Power}_{n}({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) \to 1$ as $n, p \to \infty$ for a large constant $c$. The following theorem shows that the lower bound $\{\log(p) / n\}^{1/2}$ for the signal in $\underline{\mathcal{C}}(\beta, c)$ is the optimal rate, namely, there is no $\alpha$-level test that can distinguish $H_{a}$ from $H_{0}$ in (\ref{SparseH1}) with probability approaching 1 uniformly over the class $\underline{\mathcal{C}}(\beta, c_{0})$ for some $c_{0} > 0$. \begin{tm} \label{tm3} For the Gaussian distributed data, under Assumptions \ref{as1poly}, \ref{as2}, \ref{as4} and \ref{as5}, for any $\tau > 0$, $0 < \omega < 1 - \alpha$ and $\max\{2/3, (3 - \xi) / 4\} < \beta < 1$, there exists a constant $c_{0} > 0$ such that, as $n, p \to \infty$, \[ \sup_{W_{\alpha} \in \mathcal{W}_{\alpha}} \inf_{({\bf \Sigma}_{1}, {\bf \Sigma}_{2}) \in \underline{\mathcal{C}}(\beta, c_{0})} P(W_{\alpha} = 1) \leq 1 - \omega. \] \end{tm} As Propositions \ref{pn3} and \ref{pn4} have shown that the proposed MTT can detect signals at the rate of $\{\log(p) / n\}^{1/2}$ for $\beta > 1/2$, the MTT is at least minimax rate-optimal for $\beta > \max\{2/3, (3 - \xi) / 4\}$. Compared to Theorem 4 of \cite{Cai_Liu_Xia_2013}, by studying the alternative structures in $\underline{\mathcal{C}}(\beta, c_{0})$, we extend the minimax result from the highly sparse signal regime $3/4 < \beta < 1$ to $\max\{2/3, (3 - \xi) / 4\} < \beta < 1$, which covers a wider range of signal sparsity. The optimality under $1/2 < \beta \leq \max\{2/3, (3 - \xi) / 4\}$ requires investigation in a separate effort. Obtaining the lower and upper bounds of the detectable signal strength at the rate $\sqrt{\log(p) / n}$ requires a more sophisticated derivation. These two bounds could be the same under certain conditions for testing one-sample covariances.
However, for the two-sample test, the lower and upper bounds may not match. This is due to the composite null hypothesis in (\ref{SparseH1}). {More discussion on this issue is given in Section 7.} \setcounter{equation}{0} \section{Simulation Results} We report results from simulation experiments which were designed to evaluate the performance of the proposed two-sample MTT under high dimensionality with sparse and faint signals. We also compare the proposed test with the tests in \cite{Srivastava_Yanagihara_2010} (SY), \cite{Li_Chen_2012} (LC) and \cite{Cai_Liu_Xia_2013} (CLX). In the simulation studies, the two random samples $\{{\mathbf X}_k\}_{k=1}^{n_1}$ and $\{{\mathbf Y}_k\}_{k=1}^{n_2}$ were respectively generated from \begin{equation} {\mathbf X}_k={\bf \Sigma}_1^{\frac{1}{2}}{\bf Z}_{1k} \quad \text{and}\quad {\mathbf Y}_k={\bf \Sigma}_2^{\frac{1}{2}}{\bf Z}_{2k}, \label{DataGenerate} \end{equation} where $\{{\bf Z}_{1k}\}$ and $\{{\bf Z}_{2k}\}$ are i.i.d.\ random vectors from a common population. We considered two distributions for the innovation vectors ${\bf Z}_{1k}$ and ${\bf Z}_{2k}$: (i) $N(0,{\bf I}_p)$; (ii) the Gamma distribution, where components of ${\bf Z}_{1k}$ and ${\bf Z}_{2k}$ were i.i.d.\ standardized Gamma(4,2) with mean 0 and variance 1. To design the covariances ${\bf \Sigma}_1$ and ${\bf \Sigma}_2$, let ${\bf \Sigma}_1^{(0)}={\bf{D}}_0^{\frac{1}{2}}{\bf \Sigma}^{(*)}{\bf{D}}_0^{\frac{1}{2}}$, where ${\bf{D}}_0 = \mbox{diag}(d_{1}, \ldots, d_{p})$ with elements generated according to the uniform distribution $\text{U}(0.1,1)$, and ${\bf \Sigma}^{(*)}=(\sigma_{ij}^{*})$ was a positive definite correlation matrix. Once generated, ${\bf{D}}_0$ was held fixed throughout the simulation.
The following two designs of ${\bf \Sigma}^{(*)}$ were considered in the simulation: \begin{eqnarray} &\text{Design 1:}& \sigma_{ij}^{*} = 0.4^{|i-j|}; \label{Model1} \\ &\text{Design 2:}& \sigma_{ij}^{*} = 0.5 \mathbb{I}(i = j) + 0.5 \mathbb{I}(i, j \in [4 k_0 - 3, 4 k_0] ), \label{Model2} \end{eqnarray} for $k_0 = 1, \ldots, \lfloor p/4\rfloor$. Design 1 has an auto-regressive structure and Design 2 is block diagonal with block size 4. Matrix ${\bf{D}}_0$ created heterogeneity across the different dimensions of the data. To generate scenarios of sparse and weak signals under the alternative hypothesis, we chose \begin{equation} {\bf \Sigma}_1^{(\star)} = {\bf \Sigma}_1^{(0)} + \epsilon_c {\bf I_p} \quad \hbox{and} \quad {\bf \Sigma}_2^{(\star)} = {\bf \Sigma}_1^{(0)} + {\bf U} + \epsilon_c {\bf I_p}, \label{Sigma_simu} \end{equation} {where ${\bf U} = (u_{kl})_{p\times p}$ is a banded symmetric matrix and $\epsilon_c$ is a positive number to guarantee the positive definiteness of ${\bf \Sigma}_2^{(\star)}$. Specifically, let $k_0 = \lfloor m_p / p \rfloor$, where $m_p = \lfloor q^{1-\beta} / 2\rfloor$ is the number of distinct pairs with nonzero $u_{kl}$. Let $u_{l + k_0 + 1\, l} = u_{l\, l + k_0 + 1} = \sqrt{4r\log p/n}$ for $l = 1, \ldots, k_1$ and $k_1 = m_p - p k_0 + k_0(k_0 + 1) / 2$, and let $u_{kl} = \sqrt{4r\log p/n}$ for $|k - l| \leq k_0$ and $k \neq l$ if $k_0 \geq 1$.} Set $\epsilon_c = | \min\{\lambda_{\min}({\bf\Sigma}_{1}^{(0)} + {\bf U}), 0\} | +0.05$, where $\lambda_{\min}(A)$ denotes the minimum eigenvalue of a matrix $A$. Since $\epsilon_c > 0$ and $\lambda_{\min}({\bf \Sigma}_2^{(\star)}) \geq \lambda_{\min}({\bf\Sigma}_{1}^{(0)} + {\bf U}) + \epsilon_c > 0$, both ${\bf \Sigma}_1^{(\star)}$ and ${\bf \Sigma}_2^{(\star)}$ were positive definite under both Designs 1 and 2.
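The two correlation designs (\ref{Model1})-(\ref{Model2}) and the scaling by ${\bf D}_0$ can be generated as follows (a sketch of ours in Python; the $\text{U}(0.1,1)$ draws for ${\bf D}_0$ follow the description above):

```python
import numpy as np

def design_correlation(p, design):
    """Sigma^(*) under Design 1 (AR-type) and Design 2 (block diagonal)."""
    if design == 1:
        idx = np.arange(p)
        return 0.4 ** np.abs(idx[:, None] - idx[None, :])   # 0.4^{|i-j|}
    S = 0.5 * np.eye(p)                                     # 0.5 I(i = j)
    for k0 in range(p // 4):                                # + 0.5 within blocks of 4
        S[4 * k0:4 * k0 + 4, 4 * k0:4 * k0 + 4] += 0.5
    return S

def sigma1_0(p, design, rng):
    """Sigma_1^(0) = D_0^{1/2} Sigma^(*) D_0^{1/2} with D_0 = diag(U(0.1, 1))."""
    d_half = np.sqrt(rng.uniform(0.1, 1.0, size=p))
    return d_half[:, None] * design_correlation(p, design) * d_half[None, :]
```

Both designs are positive definite, so $\epsilon_c$ in (\ref{Sigma_simu}) only needs to offset the perturbation ${\bf U}$.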
{Under the null hypothesis, we chose ${\bf \Sigma}_1 = {\bf \Sigma}_2 = {\bf \Sigma}_1^{(0)}$ in (\ref{DataGenerate}), while under the alternative hypothesis, ${\bf \Sigma}_1 = {\bf \Sigma}_1^{(\star)}$ and ${\bf \Sigma}_2 = {\bf \Sigma}_2^{(\star)}$. The simulated data were generated as a reordering of ${\mathbf X}_k$ and ${\mathbf Y}_k$ from (\ref{DataGenerate}) according to a randomly selected permutation $\pi_p$ of $\{1, \ldots, p\}$. Once $\pi_p$ was generated, it was held fixed throughout the simulation.} To mimic the regime of sparse and faint signals, we generated a set of $\beta$ and $r$ values. First, we fixed $\beta=0.6$ and set $r=0.1, 0.2, \ldots, 1$ to create different signal strengths utilized in the simulation results shown in Figure \ref{Fig_DGP_1_r}. Then, $r=0.6$ was fixed while $\beta$ was varied from $0.3$ to $0.9$ to show the impacts of sparsity levels on the tests in Figure \ref{Fig_DGP_2_r}. We chose the sample sizes $(n_1,n_2)$ as $(60,60)$, $(80,80)$, $(100,100)$ and $(120,120)$ respectively, and the corresponding dimensions $p=175, 277, 396$ and $530$ according to $p=\lfloor 0.25n_1^{1.6}\rfloor$. We set $s_0 =0.5$ according to Theorem \ref{tm1} and the discussion following (\ref{MultiThreshold}), and $\eta$ was chosen as 0.05 in (\ref{thresholdset}). {We chose $\hat{\mu}_{{T}_n,0}(s) = \tilde{\mu}_{{T}_n,0}(s)$.} The process was replicated 500 times for each setting of the simulation. Since the convergence of $\mathcal{V}_n(s_0)$ to the Gumbel distribution given in (\ref{testproc}) can be slow when the sample size was small, we employed a bootstrap procedure in conjunction with a consistent covariance estimator proposed by \cite{Rothman_2012}, which ensures the positive definiteness of the estimated covariance. Since ${\bf\Sigma}_1={\bf\Sigma}_2$ under the null hypothesis, the two samples $\{{\mathbf X}_k\}_{k=1}^{n_1}$ and $\{{\mathbf Y}_k\}_{k=1}^{n_2}$ were pooled together to estimate ${\bf\Sigma}_1$. 
Denote the estimator of \cite{Rothman_2012} as $\widehat{{\bf \Sigma}}$. For the $b$-th bootstrap resample, we drew $n_1$ samples of ${{\mathbf X}}^{*}$ and $n_2$ samples of ${{\mathbf Y}}^{*}$ independently from $N(0,\widehat{{\bf \Sigma}})$. Then, the bootstrap test statistic $\mathcal{V}_n^{*(b)}(s_0)$ was obtained based on ${{\mathbf X}}^{*}$ and ${{\mathbf Y}}^{*}$. This procedure was repeated $B=250$ times to obtain the bootstrap sample of the proposed multi-thresholding statistic $\{\mathcal{V}_n^{*(1)}(s_0),\ldots,\mathcal{V}_n^{*(B)}(s_0)\}$ under the null hypothesis. The bootstrap empirical null distribution of the proposed statistic was $\widehat{F}_0(x) = \frac{1}{B}\sum_{b = 1}^{B} \mathbb{I}\{\mathcal{V}_n^{*(b)}(s_0) \leq x\}$ and the bootstrap p-value was $1 - \widehat{F}_0(\mathcal{V}_n(s_0))$, where $\mathcal{V}_n(s_0)$ was the multi-thresholding statistic from the original sample. We reject the null hypothesis if this p-value is smaller than the nominal significance level $\alpha=0.05$. The validity of the bootstrap approximation can be justified in two key steps. First, if we generate the ``parametric bootstrap samples'' from the two normal distributions with the true population covariance matrices, by Theorems \ref{tm1} and \ref{tm2}, the bootstrap versions of the single thresholding and multi-thresholding statistics will have the same limiting Gaussian distribution and extreme value distribution, respectively. Second, we can replace the true covariance above by a consistently estimated covariance matrix $\widehat{{\bf \Sigma}}$ \citep{Rothman_2012}, which is {positive definite}. The justification of the bootstrap procedure can be made by showing the consistency of $\widehat{{\bf \Sigma}}$ by extending the results in \cite{Rothman_2012}.
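The bootstrap calibration above can be sketched as follows; here \texttt{statistic} stands for the map from the two samples to $\mathcal{V}_n(s_0)$ and \texttt{Sigma\_hat} for the positive-definite pooled estimator of \cite{Rothman_2012}, both treated as assumed inputs in this sketch.

```python
import numpy as np

def bootstrap_pvalue(X, Y, statistic, Sigma_hat, B=250, rng=None):
    """Parametric bootstrap p-value for the multi-thresholding statistic.
    statistic(X, Y) computes V_n(s0); Sigma_hat is a positive-definite
    estimate of the common null covariance (an assumed input here)."""
    rng = np.random.default_rng() if rng is None else rng
    n1, n2, p = X.shape[0], Y.shape[0], Sigma_hat.shape[0]
    V_obs = statistic(X, Y)
    V_boot = np.empty(B)
    for b in range(B):
        Xb = rng.multivariate_normal(np.zeros(p), Sigma_hat, size=n1)
        Yb = rng.multivariate_normal(np.zeros(p), Sigma_hat, size=n2)
        V_boot[b] = statistic(Xb, Yb)
    # p-value = 1 - F_hat_0(V_obs), with F_hat_0 the bootstrap empirical cdf
    return float(np.mean(V_boot > V_obs))
```

Rejection at level $\alpha = 0.05$ then amounts to the returned p-value falling below $0.05$.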
Table \ref{size_gaussian} reports the empirical sizes of the proposed multi-thresholding test using the limiting Gumbel distribution for the critical value (denoted as MTT) and the bootstrap calibration procedure described above (MTT-BT), together with the three existing methods, at the nominal level 0.05, under the Gaussian and Gamma distributed random vectors. We observe that the MTT based on the asymptotic distribution exhibited some size distortion when the sample size was small. However, as the sample size increased, the sizes of the MTT became closer to the nominal level. Meanwhile, the CLX and SY tests also experienced some size distortion under the Gamma scenario in smaller samples. It is observed that the proposed multi-thresholding test with the bootstrap calibration (MTT-BT) performed consistently well under all the scenarios with accurate empirical sizes. This shows that the bootstrap distribution offered a more accurate approximation than the limiting Gumbel distribution to the distribution of the test statistic $\mathcal{V}_n(s_0)$ under the null hypothesis. \newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}} \begin{table}[!htb] \caption{Empirical sizes for the tests of \cite{Srivastava_Yanagihara_2010} (SY), \cite{Li_Chen_2012} (LC), \cite{Cai_Liu_Xia_2013} (CLX) and the proposed multi-level thresholding test based on the limiting distribution calibration in (\ref{testproc}) (MTT) and the bootstrap calibration (MTT-BT) for Designs 1 and 2 under the Gaussian and Gamma distributions with the nominal level of $5\%$.
} \label{size_gaussian} \begin{center} \begin{tabular}{cc|C{1.4cm}C{1.4cm}C{1.4cm}C{1.4cm}C{1.4cm}} \hline \hline $ p $ & $(n_1, n_2)$ & SY & LC & CLX & MTT & MTT-BT \\ \hline & &\multicolumn{5}{c}{Gaussian Design 1} \\ \hline 175 & (60, 60) & 0.048 & 0.058 & 0.054 & 0.088 & 0.058 \\ \cline{3-7} 277 & (80, 80) & 0.052 & 0.052 & 0.058 & 0.064 & 0.056 \\ \cline{3-7} 396 & (100, 100) & 0.042 & 0.046 & 0.058 & 0.064 & 0.054 \\ \cline{3-7} 530 & (120, 120) & 0.056 & 0.048 & 0.050 & 0.056 & 0.046 \\ \hline & &\multicolumn{5}{c}{Gaussian Design 2} \\ \hline 175 & (60, 60) & 0.060 & 0.048 & 0.052 & 0.094 & 0.048 \\ \cline{3-7} 277 & (80, 80) & 0.040 & 0.060 & 0.040 & 0.064 & 0.052 \\ \cline{3-7} 396 & (100, 100) & 0.052 & 0.042 & 0.044 & 0.090 & 0.048 \\ \cline{3-7} 530 & (120, 120) & 0.050 &0.046&0.044& 0.060 & 0.054 \\ \hline & &\multicolumn{5}{c}{Gamma Design 1} \\ \hline 175 & (60, 60) & 0.046 & 0.060 & 0.066 & 0.110 & 0.056 \\ \cline{3-7} 277 & (80, 80) & 0.060 & 0.050 & 0.044 & 0.076 & 0.044 \\ \cline{3-7} 396 & (100, 100) & 0.046 & 0.052 & 0.046& 0.066 & 0.054 \\ \cline{3-7} 530 & (120, 120) & 0.060 & 0.056 & 0.048 & 0.060 & 0.048 \\ \hline & &\multicolumn{5}{c}{Gamma Design 2} \\ \hline 175 & (60, 60) & 0.070 & 0.056 & 0.066 & 0.108 & 0.056 \\ \cline{3-7} 277 & (80, 80) & 0.060 & 0.058 & 0.068 & 0.112 & 0.044 \\ \cline{3-7} 396 & (100, 100) & 0.060 & 0.050 & 0.044 & 0.068 & 0.046 \\ \cline{3-7} 530 & (120, 120) & 0.054 & 0.056 & 0.048& 0.056 & 0.048 \\ \hline \hline \end{tabular} \end{center} \end{table} Figure \ref{Fig_DGP_1_r} displays the empirical powers with respect to different signal strengths $r$ for covariance matrix Designs 1 and 2 with $n_1=n_2=80$, $p=277$ and $n_1=n_2=100$, $p=396$ under the Gaussian distribution, respectively. Figure \ref{Fig_DGP_2_r} reports the empirical powers under different sparsity ($\beta$) levels when the signal strength $r$ was fixed at 0.6. 
Simulation results on the powers under the Gamma distribution are available in the supplementary material. It is noted that at $\beta=0.6$, there were only 68 and 90 unequal entries between {the upper triangles of} ${\bf \Sigma}_1$ and ${\bf \Sigma}_2$ among a total of $q=38503$ and $78606$ unique entries for $p=277$ and $396$, respectively. To make the powers comparable for different methods, we adjusted the critical values of the tests by their respective empirical null distributions so that the actual sizes were approximately equal to the nominal level 5\%. Due to the size adjustment, the MTT based on the limiting distribution and the MTT-BT based on the bootstrap calibration had the same test statistic, and hence the same power. Here, we only report the numerical power results for the MTT-BT. Figure \ref{Fig_DGP_1_r} reveals that the power of the proposed MTT-BT was the highest among all the tests under all the scenarios. Even though the powers of the other tests improved as the signal strength $r$ was increased, the proposed MTT-BT maintained a lead over the whole range of $r \in [0.1, 1]$. The extra power advantage of the MTT-BT over the other three tests became larger as the signal strength $r$ increased. We observe from Figure \ref{Fig_DGP_2_r} that the proposed test also had the highest empirical power across the range of $\beta$. The powers of the MTT-BT at the high sparsity level ($\beta \ge 0.7$) were higher than those of the CLX test. The latter test is known to perform well when the signals are sparse. We take this as an empirical confirmation of the attractive detection boundary of the proposed {MTT} established in the theoretical analysis reported in Section 5. The monotone decreasing pattern in the power profiles of the four tests reflected the reduction in the number of signals as $\beta$ increased. It is noted that the two $\ell_2$ norm based tests SY and LC are known to have good powers when the signals are dense, i.e.\ $\beta\leq 0.5$.
This was well reflected in Figure \ref{Fig_DGP_2_r}, which indicates that the two tests had comparable powers to the MTT-BT when $\beta =0.3$ and $0.4$. However, once $\beta$ was larger than 0.5, the powers of both SY and LC started to decline quickly and were surpassed by the CLX, which was consistent with the results of Figure \ref{Fig_DGP_1_r} in that the $\ell_2$-tests without regularization incorporated too many uninformative dimensions and lowered their signal-to-noise ratios. We also observe that when the level of sparsity was in the range of $[0.4,0.7]$, the extent of the power advantage of the proposed test over the other three tests became larger, which may be viewed as another confirmation of the theoretical results of the {MTT}. \begin{figure} \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{10pt} \caption{Empirical powers with respect to the signal strength $r$ for the tests of \cite{Srivastava_Yanagihara_2010} (SY), \cite{Li_Chen_2012} (LC), \cite{Cai_Liu_Xia_2013} (CLX) and the proposed multi-level thresholding test with the bootstrap calibration (MTT-BT) for Designs 1 and 2 with Gaussian innovations under $\beta=0.6$ when $p=277$, $n_1=n_2=80$ and $p=396$, $n_1=n_2=100$ respectively.}\centering \includegraphics[scale=0.23]{Gaussian_p_277_model1_fixedbeta.eps} \includegraphics[scale=0.23]{Gaussian_p_277_model2_fixedbeta.eps} \includegraphics[scale=0.23]{Gaussian_p_396_model1_fixedbeta.eps} \includegraphics[scale=0.23]{Gaussian_p_396_model2_fixedbeta.eps} \label{Fig_DGP_1_r} \end{figure} \begin{figure} \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{10pt} \caption{Empirical powers with respect to the sparsity level $\beta$ for the tests of \cite{Srivastava_Yanagihara_2010} (SY), \cite{Li_Chen_2012} (LC), \cite{Cai_Liu_Xia_2013} (CLX) and the proposed multi-level thresholding test with the bootstrap calibration (MTT-BT) for Designs 1 and 2 with Gaussian innovations under $r=0.6$ when $p=277$, $n_1=n_2=80$ and $p=396$, $n_1=n_2=100$
respectively. }\centering \includegraphics[scale=0.23]{Gaussian_p_277_model1_fixedr.eps} \includegraphics[scale=0.23]{Gaussian_p_277_model2_fixedr.eps} \includegraphics[scale=0.23]{Gaussian_p_396_model1_fixedr.eps} \includegraphics[scale=0.23]{Gaussian_p_396_model2_fixedr.eps} \label{Fig_DGP_2_r} \end{figure} \setcounter{equation}{0} \section{Discussion} For establishing the asymptotic normality of the thresholding statistic $T_{n}(s)$ in (\ref{SingleTest}), the $\beta$-mixing condition (Assumption \ref{as5}) can be weakened. A polynomial rate of the $\beta$-mixing coefficients can be assumed at the expense of more involved proofs. In this case, to prove Theorem \ref{tm1}, the lengths of the small and big segments of the matrix blocking ($b$ and $a$ in Figure \ref{Fig_Demo}) need to be chosen at polynomial rates of $p$, where the orders depend on the decaying rate of the $\beta$-mixing coefficients. Although Theorem \ref{tm3} provides the minimax rate $\sqrt{\log(p) / n}$ of the signal strength for testing hypotheses (\ref{SparseH1}), the lower and upper bounds at this rate may not match due to the composite nature of the hypotheses for the two-sample test. To illustrate this point, let $W \in \mathcal{W}_{\alpha}$ be the critical function of a test for the hypotheses (\ref{SparseH1}). Let $\rm E_{0, \boldsymbol{\Sigma}_{1}}$ and $\rm E_{\boldsymbol{\Sigma}_{1}, \boldsymbol{\Sigma}_{2}}$ be the expectations with respect to the data distribution under the null and alternative hypotheses, respectively.
The derivation of the minimax bound starts from the following inequality: \begin{eqnarray} 1 + \alpha - \sup_{W \in \mathcal{W}_{\alpha}} \inf_{\boldsymbol{\Sigma}_{1}, \boldsymbol{\Sigma}_{2}} \rm E_{\boldsymbol{\Sigma}_{1}, \boldsymbol{\Sigma}_{2}}(W) &\geq& \inf_{W \in \mathcal{W}_{\alpha}} \sup_{\boldsymbol{\Sigma}_{1}, \boldsymbol{\Sigma}_{2}} \{ \rm E_{0, \boldsymbol{\Sigma}_{1}} W + \rm E_{\boldsymbol{\Sigma}_{1}, \boldsymbol{\Sigma}_{2}} (1 - W) \} \nonumber \\ &\geq& \sup_{\boldsymbol{\Sigma}_{1}} \inf_{W} \sup_{\boldsymbol{\Sigma}_{2}} \{ \rm E_{0, \boldsymbol{\Sigma}_{1}} W + \rm E_{\boldsymbol{\Sigma}_{1}, \boldsymbol{\Sigma}_{2}} (1 - W) \}. \nonumber \end{eqnarray} In the last inequality above, the infimum over the test $W$ is taken under fixed ${\bf\Sigma}_{1}$. This essentially reduces to one-sample hypothesis testing. Then, a least favorable prior will be constructed on ${\bf\Sigma}_{2}$ given that ${\bf\Sigma}_{1}$ is known. As the test under known ${\bf\Sigma}_{1}$ cannot control the type I error for the two-sample hypotheses (\ref{SparseH1}), the bound on the maximin power for hypotheses (\ref{SparseH1}) is not tight. It is for this reason that one may derive the tight minimax bound for the one-sample spherical hypothesis $H_{0}: {\bf \Sigma} = \sigma \mathbf{I}$. We leave this problem as future work, especially for the unexplored region $1/2 < \beta < \max\{2/3, (3 - \xi) / 4\}$ in Theorem \ref{tm3}. The proposed thresholding tests can be extended to testing for correlation matrices between the two populations. Recall that $\bPsi_{1} = (\rho_{ij1})_{p\times p}$ and $\bPsi_{2} = (\rho_{ij2})_{p\times p}$ are the correlation matrices of ${\mathbf X}_{k}$ and ${\mathbf Y}_{k}$.
Consider the hypotheses $$H_0:{\bf \Psi}_1={\bf \Psi}_2 \quad \text{vs.} \quad H_a:{\bf \Psi}_1\neq{\bf \Psi}_2.$$ Let $\hat{\rho}_{ij1} = \hat{\sigma}_{ij1} / (\hat{\sigma}_{ii1}\hat{\sigma}_{jj1})^{1/2}$ and $\hat{\rho}_{ij2} = \hat{\sigma}_{ij2} / (\hat{\sigma}_{ii2}\hat{\sigma}_{jj2})^{1/2}$ be the sample correlations of the two groups. As for $M_{ij}$, the squared standardized difference $M_{ij}^{\star}$ between the sample correlations can be constructed based on $\hat{\rho}_{ij1}$ and $\hat{\rho}_{ij2}$ and their estimated variances. Let $$T_n^{\star}(s) = \sum_{1\leq i\leq j\leq p} M_{ij}^{\star}\mathbb{I}\{M_{ij}^{\star} > \lambda_{p}(s)\}$$ be the single-level thresholding statistic based on the sample correlations. As in the case of sample covariances, moderate deviation results for $\hat{\rho}_{ij1} - \hat{\rho}_{ij2}$ can be derived. It can be shown that $T_n^{\star}(s)$ has the same asymptotic distribution as $T_n(s)$. The multi-thresholding test can be constructed similarly to (\ref{sup_test}) and (\ref{testproc}). \setcounter{equation}{0} \def\theequation{A.\arabic{equation}} \section{Appendix} In this section, we provide the proof of Theorem \ref{tm1}. The proofs of all other propositions and theorems are relegated to the supplementary material. Without loss of generality, we assume $E({\mathbf X}_1)=0$ and $E({\mathbf Y}_1)=0$. Let $C$ and $L_p$ denote a generic constant and a multi-$\log(p)$ term, respectively, both of which may change from case to case. \medskip \noindent {\bf Proof of Theorem \ref{tm1}}. To prove Theorem \ref{tm1}, we propose a novel technique that constructs a U-statistic equivalent to $T_{n}(s)$, based on a partition of the covariance matrix into a group of big square blocks separated by small strips as shown in Figure \ref{Fig_Demo}.
Specifically, the indices $\{1, \ldots, p\}$ are grouped into a sequence of big segments of length $a$ and small segments of length $b$: $$\{ 1, \dots, a\}, \{a+1, \dots, a + b\}, \{a + b + 1, \dots, 2a + b\}, \{2a + b + 1, \dots, 2a + 2b\}, \dots$$ where $b = o(a)$. Let $d = \lfloor p / (a + b) \rfloor$ be the total number of pairs of large and small segments. The sets of indices for the large segments and the small segments are, respectively, $$S_{m} = \{(m - 1)(a + b) + 1, \dots, m a + (m - 1)b\} \mbox{ \ and \ }$$ $$ R_{m} = \{m a + (m - 1)b + 1, \dots, m(a + b)\}$$ for $m = 1, \dots, d$, and a remainder set $R_{d + 1} = \{d(a + b) + 1, \dots, p\}$. For the two-dimensional array $\{(i, j): 1 \leq i \leq j \leq p\}$, the above index partition results in $d (d-1)/2$ square index blocks of size $a \times a$: $\{\mathcal{I}_{m_1 m_2} = S_{m_1} \times S_{m_2}: 1 \leq m_1 < m_2 \leq d\}$, colored in blue in Figure \ref{Fig_Demo}. They are separated by smaller horizontal and vertical rectangles of size $a \times b$ and square blocks of size $b \times b$. There are also $d$ residual triangular blocks with $a (a + 1) / 2$ elements along the main diagonal. The blocking scheme is demonstrated in Figure \ref{Fig_Demo}. \begin{figure}[h] \centering \includegraphics[scale=0.5]{Cov-Demo-new.eps} \setlength{\abovecaptionskip}{-30pt} \setlength{\belowcaptionskip}{0pt} \caption{Matrix partition in the upper triangle of a covariance matrix. The square sub-matrices (in blue color) of size $a$ are the bigger blocks, which are separated by smaller size strips of width $b$ (marked by the 45-degree lines). There are $d$ triangular blocks along the diagonal plus remaining smaller size blocks {in the residual set $R_{d+1}$} which are not shown in the diagram. } \label{Fig_Demo} \end{figure} Let $A_{ij}(s) = L_{ij}(s) - \mu_{0,ij}(s)$, where $\mu_{0,ij}(s) = E(L_{ij}(s) | H_{0})$ and $L_{ij}(s) = M_{ij}\mathbb{I}(M_{ij} > \lambda_{p}(s))$.
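The segment construction above is purely combinatorial and can be checked directly; the following is a minimal Python sketch of the index sets $S_m$, $R_m$, and $R_{d+1}$ (the values of $p$, $a$, $b$ are illustrative only, not those used in the paper).

```python
# Build the big segments S_m, small segments R_m, and the remainder R_{d+1}
# exactly as defined in the text; p, a, b are illustrative values.
def partition_indices(p, a, b):
    d = p // (a + b)  # number of (big, small) segment pairs
    S = [list(range((m - 1) * (a + b) + 1, m * a + (m - 1) * b + 1))
         for m in range(1, d + 1)]                 # big segments of length a
    R = [list(range(m * a + (m - 1) * b + 1, m * (a + b) + 1))
         for m in range(1, d + 1)]                 # small segments of length b
    R_rem = list(range(d * (a + b) + 1, p + 1))    # remainder set R_{d+1}
    return S, R, R_rem

S, R, R_rem = partition_indices(p=100, a=10, b=2)
print(len(S), S[1], R[1], R_rem)
```

With $p=100$, $a=10$, $b=2$ this gives $d=8$ pairs, and the segments tile $\{1,\dots,p\}$ without gaps or overlaps.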
Then, $T_{n}(s) - E\{T_{n}(s)\} = \sum_{1 \leq i \leq j \leq p} A_{ij}(s)$ under the null hypothesis. Here, we drop the threshold level $s$ in the notations $A_{ij}(s)$, $L_{ij}(s)$ and $\mu_{0,ij}(s)$ for simplicity when there is no confusion. Based on the matrix partition in Figure \ref{Fig_Demo}, $T_{n}(s) - E\{T_{n}(s)\}$ can be divided into sums of $A_{ij}(s)$ over the big square blocks of size $a \times a$, the small strips and the triangular blocks along the main diagonal. Let $R = \cup_{m=1}^{d} R_{m}$ be the collection of the indices in the small segments. From Figure \ref{Fig_Demo}, $T_{n}(s) - E\{T_{n}(s)\}$ can be divided into four parts such that \begin{equation} T_{n}(s) - E\{T_{n}(s)\} = B_{1,n} + B_{2,n} + B_{3,n} + B_{4,n}, \label{eq:decom} \end{equation} where \begin{eqnarray} B_{1, n} = \sum_{1\leq m_1<m_2 \leq d} \ \sum_{i \in S_{m_1},~ j \in S_{m_2}} A_{ij}, && B_{2, n} = \sum_{i \in R ~ or ~ j \in R, ~i \leq j} A_{ij}, \nonumber \\ B_{3, n} = \sum_{1 \leq m \leq d} \ \sum_{i,j \in S_{m}, ~i \leq j} A_{ij}, && B_{4, n} = \sum_{j \in R_{d+1}, ~i \leq j} A_{ij}. \label{eq:Decomposition}\end{eqnarray} Here, $B_{1, n}$ is the sum of $A_{ij}$ over the $d(d-1)/2$ big $a \times a$ square blocks in Figure \ref{Fig_Demo}, $B_{2, n}$ is the sum over all the smaller rectangular and square blocks, $B_{3, n}$ is over the $d$ triangular blocks along the main diagonal, and $B_{4, n}$ is over the remaining segments including the residual blocks in $R_{d + 1}$ towards the right-end columns of the matrix. For the decomposition of $T_{n}(s) - E\{T_{n}(s)\}$ in (\ref{eq:Decomposition}), let $N_{l}$ be the number of summands in $B_{l, n}$ for $l = 1, \dots, 4$. Note that $N_{1} = a^2 d (d - 1) / 2 = q(1 + o(1))$, $N_2 \leq d p b \leq p^2 b / (a + b) = o(p^{2})$, $N_{3} = d a^2 / 2 \leq p a / 2 = o(p^2)$ and $N_{4} \leq |R_{d+1}| p \leq (a + b)p = o(p^2)$.
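As a sanity check on the decomposition, one can enumerate the upper-triangular pairs and verify that the four index sets behind $B_{1,n},\dots,B_{4,n}$ tile $\{(i,j): 1 \le i \le j \le p\}$ with $N_1 = a^2 d(d-1)/2$. Below is a small Python enumeration; overlaps are resolved by assigning $R_{d+1}$ first and the small strips next, consistent with the decomposition being exact, and $p$, $a$, $b$ are illustrative values.

```python
# Classify each pair (i, j), i <= j, into the four parts of the decomposition
# and count the numbers of summands N_1, ..., N_4; p, a, b are illustrative.
def block_counts(p, a, b):
    d = p // (a + b)
    seg = {}                                  # index -> ('S', m) or ('R', m)
    for m in range(1, d + 1):
        for i in range((m - 1) * (a + b) + 1, m * a + (m - 1) * b + 1):
            seg[i] = ('S', m)
        for i in range(m * a + (m - 1) * b + 1, m * (a + b) + 1):
            seg[i] = ('R', m)
    for i in range(d * (a + b) + 1, p + 1):
        seg[i] = ('R', d + 1)                 # remainder set R_{d+1}
    N = [0, 0, 0, 0]
    for i in range(1, p + 1):
        for j in range(i, p + 1):
            (ti, mi), (tj, mj) = seg[i], seg[j]
            if tj == 'R' and mj == d + 1:
                N[3] += 1                     # B_{4,n}: j in R_{d+1}
            elif ti == 'R' or tj == 'R':
                N[1] += 1                     # B_{2,n}: i or j in a small strip
            elif mi == mj:
                N[2] += 1                     # B_{3,n}: triangles on the diagonal
            else:
                N[0] += 1                     # B_{1,n}: big a x a square blocks
    return N, d

N, d = block_counts(p=60, a=8, b=2)
print(N, d)
```

The counts confirm the orders stated above: $N_1$ equals $a^2 d(d-1)/2$ exactly, each diagonal triangle contributes $a(a+1)/2$ pairs, and the four parts sum to $p(p+1)/2$.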
Similar to the derivation of $\text{Var}\{T_{n}(s)\}$ in Proposition \ref{pn1}, we have \begin{eqnarray} \text{Var}(B_{2, n}) &=& \sum_{i \in R ~or~ j \in R, ~i \leq j} \text{Var}(A_{ij}) \label{eq:B2Var} \\ &+& \mbox{Cov}\bigg( \sum_{\substack{i_{1} \in R ~or~ j_{1} \in R \\ i_{1} \leq j_{1}}} A_{i_1j_1}, \sum_{\substack{i_{2} \in R ~or~ j_{2} \in R \\ i_{2} \leq j_{2}}} A_{i_2j_2} \bigg), \label{eq:B2Cov} \end{eqnarray} where $\mbox{Var}(A_{ij}) = \mbox{Var}(L_{ij}) = v(0, s)\{1 + o(1)\} \sim L_{p}p^{-2s}$ under the null hypothesis, which is given in Lemma 5 in the SM. Notice that the sum of the variances on the right-hand side of (\ref{eq:B2Var}) is bounded by $L_{p}p^{-2s} N_{2} = o\big(\text{Var}\{T_{n}(s)\}\big)$. For the covariance terms in (\ref{eq:B2Cov}), let $d_{i_1j_1,i_2j_2} = \min(|i_1 -i_2|, |i_1 -j_2|, |j_1-j_2|, |j_1-i_2|)$ be the minimum coordinate distance between $(i_1,j_1)$ and $(i_2,j_2)$, and between $(i_1,j_1)$ and $(j_2,i_2)$, where $i_{1} \leq j_{1}$ and $i_{2} \leq j_{2}$. For any fixed $(i_1,j_1)$ and a large positive constant $M$, by Assumption \ref{as5} and Davydov's inequality (Corollary 1.1 of \cite{Bosq_1998}, p.21), there exists a constant $c > 0$ such that $|\mbox{Cov}(L_{i_1j_1}, L_{i_2j_2})| \leq C\gamma_{1}^{d_{i_1j_1,i_2j_2}} \leq p^{-M}$ for $\gamma_{1} \in (0, 1)$ and any $d_{i_1j_1,i_2j_2} > c \log p$.
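The constant $c$ in this Davydov-type bound can be made explicit: for a geometric rate $\gamma_1^h$, taking $c = M / \log(1/\gamma_1)$ gives $\gamma_1^{h} \leq p^{-M}$ whenever $h \geq c \log p$. A short numeric check (the values of $\gamma_1$, $M$, $p$ are illustrative):

```python
import math

# A geometric mixing rate gamma^h falls below p^{-M} once the separation h
# exceeds c*log(p) with c = M / log(1/gamma); gamma, M, p are illustrative.
gamma, M, p = 0.8, 10.0, 500
c = M / math.log(1.0 / gamma)
h = math.ceil(c * math.log(p))     # minimal separation of order log(p)
print(h, gamma ** h, p ** (-M))
```

This is the arithmetic behind treating summands separated by more than $c\log p$ as negligible throughout the proof.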
Therefore, \begin{eqnarray} &&\bigg| \mbox{Cov}\bigg( \sum_{i_{1} \in R ~or~ j_{1} \in R, ~i_{1} \leq j_{1}} A_{i_1j_1}, \sum_{i_{2} \in R ~or~ j_{2} \in R, ~i_{2} \leq j_{2}} A_{i_2j_2} \bigg) \bigg| \nonumber \\ &\leq& \sum_{i_{1} \in R ~or~ j_{1} \in R, ~i_{1} \leq j_{1}} \bigg\{ N_{2}p^{-M} + \sum_{\substack{d_{i_1j_1,i_2j_2} \leq c \log p \\ i_{2} \in R ~or~ j_{2} \in R, ~i_{2} \leq j_{2}}} \big| \mbox{Cov}(A_{i_1j_1}, A_{i_2j_2} )\big| \bigg\}, \nonumber \end{eqnarray} where by Lemmas 5 and 6 in the SM, $\big|\mbox{Cov}(A_{i_1j_1}, A_{i_2j_2})\big| \leq L_{p}|\rho_{i_1j_1,i_2j_2}| p^{-\frac{4s}{2 - \epsilon}} + \mu_{0,i_1j_1}\mu_{0,i_2j_2}L_{p}n^{-1/2}$ for a small $\epsilon > 0$. It follows that \begin{eqnarray} \sum_{\substack{d_{i_1j_1,i_2j_2} \leq c \log p \\ i_{2} \in R ~or~ j_{2} \in R, ~i_{2} \leq j_{2}}} \big| \mbox{Cov}(A_{i_1j_1}, A_{i_2j_2} )\big| &\leq& \sum_{|i_2 - i_1| \leq c \log p}\sum_{j_2 = 1}^{p} \big| \mbox{Cov}(A_{i_1j_1}, A_{i_2j_2} )\big| \nonumber \\ &+& \sum_{|j_2 - j_1| \leq c\log p}\sum_{i_2 = 1}^{p} \big| \mbox{Cov}(A_{i_1j_1}, A_{i_2j_2} )\big|, \nonumber \end{eqnarray} which is bounded by $L_{p}\sum_{j_2 = 1}^{p} |\rho_{i_1j_1,i_2j_2}|p^{-\frac{4s}{2 - \epsilon}} + L_{p}p^{1-4s}n^{-1/2}$. It has been shown that $\sum_{j_2 = 1}^{p} |\rho_{i_1j_1,i_2j_2}| \leq C < \infty$ in (S.23) in the SM. By choosing $M$ large, the covariance term in (\ref{eq:B2Cov}) is bounded by $L_{p}N_{2}p^{-\frac{4s}{2 - \epsilon}} + L_{p}N_{2}p^{1-4s}n^{-1/2}$, which is of smaller order than $\text{Var}\{T_{n}(s)\}$ if $s > 1/2$ under Assumption \ref{as1} or $s > 1/2 - \xi / 4$ under Assumption \ref{as1poly}. Therefore, $\text{Var}(B_{2, n})= o\big(\text{Var}\{T_{n}(s)\}\big)$. For $B_{3, n}$, note that the triangles $\{(i, j)\in S_{m}, i \leq j\}$ along the diagonal are at least $b$ apart from each other, where $b \sim \log(p)$. The covariances between $\sum_{i,j \in S_{m_{1}}} A_{ij}$ and $\sum_{i,j \in S_{m_{2}}} A_{ij}$ are negligible for $m_1 \neq m_2$.
It follows that \begin{eqnarray} \mbox{Var}(B_{3, n}) &=& \sum_{1 \leq m \leq d} \mbox{Var}\bigg(\sum_{i,j \in S_{m}, ~i \leq j} A_{ij}\bigg) \{1 + o(1)\} \nonumber \\ &=& \sum_{1 \leq m \leq d}\sum_{i_{1},j_{1} \in S_{m}, ~i_{1} \leq j_{1}}\sum_{i_{2},j_{2} \in S_{m}, ~i_{2} \leq j_{2}} \mbox{Cov}(A_{i_{1}j_{1}}, A_{i_{2}j_{2}}) \{1 + o(1)\}, \nonumber \end{eqnarray} which is bounded by $C d a^{4} v(0, s) = O( L_{p} a^{3} p^{1-2s})$. This shows that $\mbox{Var}(B_{3, n}) = o\big(\text{Var}\{T_{n}(s)\}\big)$ when $a \ll p^{1/3}$. Here, for two positive sequences $\{c_{1, n}\}$ and $\{c_{2, n}\}$, $c_{1, n} \ll c_{2, n}$ means that $c_{1, n} = o(c_{2, n})$. For $B_{4, n}$, we have \begin{eqnarray} \mbox{Var}(B_{4, n}) &\leq& N_{4}v(0, s)\{1 + o(1)\} + \sum_{j_{1} \in R_{d+1}}\sum_{j_{2} \in R_{d+1}}|\mbox{Cov}(A_{i_{1}j_{1}}, A_{i_{2}j_{2}})| \nonumber \\ &=& o(p^{2 - 2s}) + \sum_{j_{1} \in R_{d+1}}\bigg( N_{4}p^{-M} + \sum_{\substack{d_{i_1j_1,i_2j_2} \leq c \log p \\ j_{2} \in R_{d+1}}}|\mbox{Cov}(A_{i_{1}j_{1}}, A_{i_{2}j_{2}})|\bigg). \nonumber \end{eqnarray} Similar to the case of $\mbox{Var}(B_{2, n})$, the last summation term above is bounded by $N_{4}^{2}p^{-M} + L_{p}N_{4}p^{-\frac{4s}{2 - \epsilon}} + L_{p}N_{4}p^{1-4s}n^{-1/2}$, which is of smaller order than $\text{Var}\{T_{n}(s)\}$. Meanwhile, since $N_{1} = q(1 + o(1))$, following the same derivation of Proposition \ref{pn1}, it can be shown that $\text{Var}(B_{1, n}) = \text{Var}(T_{n}(s))\{1 + o(1)\}$. Combining the above results, we see that $\text{Var}(B_{l, n})$ is of smaller order than $\text{Var}\{T_{n}(s)\}$ for $l = 2, 3, 4$. This together with (\ref{eq:decom}) implies \begin{equation}\frac{T_{n}(s) - E(T_{n}(s))}{\sqrt{\text{Var}(T_{n}(s))}} = \frac{B_{1, n}}{\sqrt{\text{Var}(T_{n}(s))}} + o(1).\label{eq:BlockMainOrder}\end{equation} Therefore, to show the asymptotic normality of $T_{n}(s) - E\{T_{n}(s)\}$, it suffices to focus on its main order term $B_{1, n}$.
Let ${\bf Z}_{S_{m}} = \{{\bf X}_{S_{m}}, {\bf Y}_{S_{m}}\}$ for $m = 1, \ldots, d$, where ${\bf X}_{S_{m}} = \{X_{ki}: 1 \leq k \leq n_1, i \in S_{m}\}$ and ${\bf Y}_{S_{m}} = \{Y_{ki}: 1 \leq k \leq n_2, i \in S_{m}\}$ are the segments of the two data matrices with the columns in $S_{m}$. Notice that the summation of $A_{ij}$ in $B_{1, n}$ can be expressed as $$\sum_{i \in S_{m_1}, j \in S_{m_2}} A_{ij} = f({\bf Z}_{S_{m_1}}, {\bf Z}_{S_{m_2}})$$ for some function $f(\cdot, \cdot)$. Let $\mathcal{F}_{m_a}^{m_b}({\bf Z})=\sigma\{{\bf Z}_{S_{m}}: m_a \leq m \leq m_b\}$ be the $\sigma$-algebra generated by $\{{\bf Z}_{S_{m}}\}$ for $1 \leq m_a \leq m_b \leq d$. Let $\zeta_{z}(h) = \sup_{1 \leq m \leq d - h} \zeta\{\mathcal{F}_{1}^{m}({\bf Z}), \mathcal{F}_{m+h}^{d}({\bf Z})\}$ be the $\beta$-mixing coefficient of the sequence ${\bf Z}_{S_{1}}, \dots, {\bf Z}_{S_{d}}$. By Theorem 5.1 in \cite{Bradley_2005} and Assumption \ref{as5}, we have $$\zeta_{z}(h) \leq \sum_{k_1=1}^{n_1}\zeta_{x, p}(hb) + \sum_{k_2=1}^{n_2}\zeta_{y, p}(hb) \leq C(n_1 + n_2)\gamma^{hb}$$ for some $\gamma\in(0,1)$. Choosing $b = b_{0} - \log(n_1 + n_2) / \log(\gamma)$ leads to $\zeta_{z}(h) \leq C \gamma^{hb_{0}}(n_1 + n_2)^{1-h} \leq C \gamma^{hb_{0}}$. By Berbee's theorem (page 516 in \cite{AthreyaLahiri}), there exists ${\bf Z}^{\ast}_{S_{2}}$ independent of ${\bf Z}_{S_{1}}$ such that $P({\bf Z}_{S_{2}} \neq {\bf Z}^{\ast}_{S_{2}}) = \zeta\{\sigma({\bf Z}_{S_{1}}), \sigma({\bf Z}_{S_{2}})\} \leq \zeta_{z}(1) \leq C\gamma^{b_{0}}$. By applying this theorem recursively, there exist ${\bf Z}_{S_{1}}, {\bf Z}^{\ast}_{S_{2}}, \dots, {\bf Z}^{\ast}_{S_{d}}$ that are mutually independent, and $P({\bf Z}_{S_{m}} \neq {\bf Z}^{\ast}_{S_{m}}) \leq C\gamma^{b_{0}}$ for $m = 2, \dots, d$. Let $D = \cup_{m=2}^{d}\{{\bf Z}_{S_{m}} \neq {\bf Z}^{\ast}_{S_{m}}\}$, then $P(D) \leq C d \gamma^{b_{0}}$.
By choosing $b_{0} = -c_{0} \log(p) / \log(\gamma)$ for a large positive number $c_{0}$, we have that $P(D)$ converges to $0$ at the rate $p^{1 - c_{0}} / (a + b)$. Notice that $\text{Var}(B_{1, n})$ is of order $p^{2 - 2s}$. Since $E(|B_{1, n} \mathbb{I}_{D}|) \leq \{\text{Var}(B_{1, n}) P(D)\}^{1/2}$ converges to $0$ for a sufficiently large $c_{0}$, it follows that $B_{1, n} \mathbb{I}_{D} \to 0$ in probability. Thus, by letting $b = c_{1}\max\{\log(p), \log(n_1+n_2)\}$ for a large constant $c_{1}>0$, there exists an array of mutually independent random vectors ${\bf Z}_{S_{1}}^{\ast}, \dots, {\bf Z}_{S_{d}}^{\ast}$ such that ${\bf Z}_{S_{m}}^{\ast} = {\bf Z}_{S_{m}}$ with overwhelming probability for $m = 1, \ldots, d$ and $B_{1, n}$ can be expressed as a $U$-statistic formulation on a sequence of mutually independent random vectors as \begin{equation} B_{1, n} = \sum_{m_1 < m_2} f({\bf Z}_{S_{m_1}}^{\ast},{\bf Z}_{S_{m_2}}^{\ast}). \label{Ustat}\end{equation} For simplicity of notations, we will drop the superscript $\ast$ in (\ref{Ustat}) in the following proof. Now, we only need to establish the asymptotic normality of $B_{1, n}$ under the expression (\ref{Ustat}). To this end, we first study the conditional distribution of $M_{ij}$ given the $j$th variable, where $i \in S_{m_1}$, $j \in S_{m_2}$ and $m_1 \neq m_2$. Recall that $$F_{ij}=(\hat{\sigma}_{ij1}-\hat{\sigma}_{ij2})(\hat{\theta}_{ij1}/n_1+\hat{\theta}_{ij2}/n_2)^{-1/2}$$ is the standardization of $\hat{\sigma}_{ij1}-\hat{\sigma}_{ij2}$, where $\hat{\sigma}_{ij1}=\tilde{\sigma}_{ij1}-\bar{X}_i\bar{X}_j$ and $\hat{\sigma}_{ij2} = \tilde{\sigma}_{ij2}-\bar{Y}_i\bar{Y}_j$ for $\tilde{\sigma}_{ij1}=\sum_{k=1}^{n_1}X_{ki}X_{kj} / n_1$ and $\tilde{\sigma}_{ij2}=\sum_{k=1}^{n_2}Y_{ki}Y_{kj} / n_2$. Then, $M_{ij} = F_{ij}^{2}$. Note that the unconditional asymptotic distribution of $F_{ij}$ is standard normal.
Let $E_{j}(\cdot)$, $\text{Var}_{j}(\cdot)$ and $\text{Cov}_{j}(\cdot)$ be the conditional mean, variance and covariance given the $j$th variable, respectively. From the proof of Lemma 7 in the SM, we have that $E_{j}(\hat{\sigma}_{ij1}) = E_{j}(\hat{\sigma}_{ij2}) = 0$, $\text{Var}_{j}(\tilde{\sigma}_{ij1}) = \sigma_{ii1}\tilde{\sigma}_{jj1} / n_1$ and $\text{Cov}_{j}(\tilde{\sigma}_{ij1}, \bar{X}_i\bar{X}_j) = \text{Var}_{j}(\bar{X}_i\bar{X}_j) = \sigma_{ii1}(\bar{X}_{j})^{2} / n_1$. It follows that $\text{Var}_{j}(\hat{\sigma}_{ij1}) = \sigma_{ii1}\hat{\sigma}_{jj1} / n_1$ and $\text{Var}_{j}(\hat{\sigma}_{ij2}) = \sigma_{ii2}\hat{\sigma}_{jj2} / n_2$. In the proof of Lemma 7, it has also been shown that $\hat{\theta}_{ij1} = \sigma_{ii1}\hat{\sigma}_{jj1} + O_{p}(\sqrt{\log(p) / n})$ and $\hat{\theta}_{ij2} = \sigma_{ii2}\hat{\sigma}_{jj2} + O_{p}(\sqrt{\log(p) / n})$ given the $j$th variable. Similar results hold given the $i$th variable. Therefore, $F_{ij}$ is still asymptotically standard normal given either the $i$th or the $j$th variable. Hence, the moderate deviation results from Lemma 2.3 and Theorem 3.1 in \cite{Saulis_1991} for independent but non-identically distributed variables can be applied to $F_{ij}$ conditionally on either one of the variables. Let $\mathscr{F}_0=\{\emptyset,\Omega\}$ and $\mathscr{F}_m=\sigma\{{\bf Z}_{S_{1}},\dots,{\bf Z}_{S_{m}}\}$ for $m=1,2,\dots,d$ be the sequence of $\sigma$-fields generated by $\{{\bf Z}_{S_{1}},\dots,{\bf Z}_{S_{m}}\}$. Let $E_{\mathscr{F}_m}(\cdot)$ denote the conditional expectation with respect to $\mathscr{F}_m$. Write $B_{1, n}=\sum^d_{m=1}D_{m}$, where $D_{m}=(E_{\mathscr{F}_m} - E_{\mathscr{F}_{m-1}})B_{1, n}$. Then for every $n, p$, $\{D_{m},1\leq m\leq d\}$ is a martingale difference sequence with respect to the $\sigma$-fields $\{\mathscr{F}_m\}_{m=0}^{\infty}$. Let $\sigma^2_{m}=E_{\mathscr{F}_{m-1}}(D^2_{m})$.
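The defining property of the decomposition $B_{1,n} = \sum_m D_m$ is that the differences telescope: $\sum_m (E_{\mathscr{F}_m} - E_{\mathscr{F}_{m-1}}) B_{1,n} = B_{1,n} - E(B_{1,n})$. A toy Python check with the illustrative kernel $f(z_1, z_2) = z_1 z_2$ and i.i.d. mean-zero scalars, for which $E_{\mathscr{F}_m} B = \sum_{m_1 < m_2 \leq m} z_{m_1} z_{m_2}$ in closed form (this $f$ is a stand-in for illustration only, not the paper's kernel):

```python
import random

# Telescoping check: for B = sum_{m1<m2} f(z_{m1}, z_{m2}) with f(x, y) = x*y
# and independent mean-zero z's, the martingale difference reduces to
# D_m = (E_{F_m} - E_{F_{m-1}}) B = z_m * sum_{m1<m} z_{m1},
# and the D_m sum back to B (here E(B) = 0).
random.seed(0)
d = 12
z = [random.gauss(0.0, 1.0) for _ in range(d)]

B = sum(z[i] * z[j] for i in range(d) for j in range(i + 1, d))
D = [z[m] * sum(z[:m]) for m in range(d)]
print(sum(D), B)
```

The first difference $D_1$ is identically zero in this toy example since it has no earlier terms, mirroring the general fact that $D_m$ collects only the interactions of the $m$th block with its predecessors and its conditional expectations of future blocks.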
By the martingale central limit theorem \citep[Chapter 3 in][]{Hall_Heyde_1980}, to show the asymptotic normality of $B_{1, n}$, it suffices to show \begin{equation} \frac{\sum^d_{m=1}\sigma^2_{m}}{\text{Var}(B_{1, n})}\xrightarrow[]{p}1\quad\mbox{ \ and \ }\quad\frac{\sum^d_{m=1} E(D^4_{m})}{\text{Var}^{2}(B_{1, n})}\xrightarrow[]{}0. \label{MCLT}\end{equation} By the independence among $\{{\bf Z}_{S_{1}},\dots,{\bf Z}_{S_{d}}\}$, we have \begin{eqnarray} D_{m} &=& \sum_{m_1 = 1}^{m-1}f({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}}) \ + \sum_{m_2>m} E_{\mathscr{F}_m} f({\bf Z}_{S_{m}},{\bf Z}_{S_{m_2}}) \label{eq:MDiff1} \\ &-& \sum_{m_1 = 1}^{m - 1}E_{\mathscr{F}_{m-1}} f({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}}),\nonumber \end{eqnarray} where for any $m_1 < m_2$, $$E_{\mathscr{F}_{m_1}} f({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m_2}}) = E_{\mathscr{F}_{m_1}} \sum_{i \in S_{m_1}, j \in S_{m_2}} A_{ij} = \sum_{i \in S_{m_1}, j \in S_{m_2}} E_{i} A_{ij}.$$ For $m_1 < m_2$, let \begin{equation}\begin{split} &\tilde{f}({\bf Z}_{S_{m_1}}, {\bf Z}_{S_{m_2}}) = \sum_{i \in S_{m_1}, j \in S_{m_2}} \tilde{A}_{ij} \mbox{ \ for} \\ \tilde{A}_{ij} = M_{ij}&\mathbb{I}(M_{ij} > \lambda_{p}(s)) - E_{\mathscr{F}_{m_1}}\{M_{ij}\mathbb{I}(M_{ij} > \lambda_{p}(s))\}, \end{split}\nonumber\end{equation} where $i \in S_{m_1}$ and $j \in S_{m_2}$. We can decompose $D_{m} = D_{m, 1} + D_{m, 2}$, where \begin{equation} D_{m, 1} = \sum_{m_1 = 1}^{m-1}\tilde{f}({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}}) \mbox{ \ and \ } D_{m, 2} = \sum_{m_2>m} E_{\mathscr{F}_m} f({\bf Z}_{S_{m}},{\bf Z}_{S_{m_2}}). \label{eq:MDiff2}\end{equation} Let $G_{0} = \{\max_{k_{1}, k_{2}, i}(|X_{k_{1}i}|, |Y_{k_{2}i}|) \leq c \sqrt{\log p}\}$ for a positive constant $c$. Under Assumption \ref{as3}, $P(G_{0}^{c})\to 0$ at a polynomial rate in $p$ for large $c$. To study $\{\sigma_{m}^{2}\}$, we focus on the set $G_{0}$.
By Lemma 7 in the SM, we have $$E_{i} \{M_{ij}\mathbb{I}(M_{ij} > \lambda_{p}(s)) \} = \mu_{0, ij}\big\{1 + O(L_pn^{-1/2})\big\},$$ which implies $E_{i} A_{ij} = \mu_{0, ij} O(L_pn^{-1/2})$. This leads to $E_{\mathscr{F}_m} f({\bf Z}_{S_{m}},{\bf Z}_{S_{m_2}}) = L_p O(a^{2} p^{-2s} n^{-1/2})$ for $m_2 > m$, and $D_{m, 2} = (d - m) L_p O(a^{2} p^{-2s} n^{-1/2})$. From (\ref{eq:MDiff2}), we can write $D_{m}^{2} = D_{m, 1}^{2} + 2 D_{m, 1} D_{m, 2} + D_{m, 2}^{2}$, where $D_{m, 2}^{2} = (d - m)^{2} L_p O(a^{4} p^{-4s} n^{-1})$. Note that $E_{\mathscr{F}_{m-1}}(D_{m, 1} D_{m, 2})$ is equal to \begin{eqnarray} \sum_{m_1 = 1}^{m-1}\sum_{m_2 > m}\sum_{j_1 \in S_{m_1},j_2\in S_{m}}\sum_{j_3 \in S_{m},j_4\in S_{m_2}} E_{\mathscr{F}_{m-1}}\{\tilde{A}_{j_1j_2} E_{\mathscr{F}_{m}}(A_{j_3j_4})\}. \nonumber \end{eqnarray} Similar to the application of the coupling method to the big segments ${\bf Z}_{S_{1}}, \dots, {\bf Z}_{S_{d}}$ of the variables, the $j_2$th and $j_3$th variables can be effectively viewed as independent when $|j_2 - j_3| > c \log p$ for some constant $c > 0$. Therefore, given $\mathscr{F}_{m-1}$, $E_{\mathscr{F}_{m-1}}\{\tilde{A}_{j_1j_2} E_{\mathscr{F}_{m}}(A_{j_3j_4})\}$ is negligible when $|j_2 - j_3| > c \log p$. Meanwhile, notice that $\big| E_{\mathscr{F}_{m-1}}\{\tilde{A}_{j_1j_2} E_{\mathscr{F}_{m}}(A_{j_3j_4})\} \big| \leq O(L_p p^{-2s} n^{-1/2}) E_{\mathscr{F}_{m-1}}(|\tilde{A}_{j_1j_2}|)$ and $E_{\mathscr{F}_{m-1}}(|\tilde{A}_{j_1j_2}|) \leq 2 E_{\mathscr{F}_{m-1}}\{M_{j_1j_2}\mathbb{I}(M_{j_1j_2} > \lambda_{p}(s))\}$, which is of order $L_{p}p^{-2s}$.
Therefore, we have $$\big| E_{\mathscr{F}_{m-1}}(D_{m, 1} D_{m, 2}) \big| \leq O(L_{p} d^{2} a^{3} p^{-4s} n^{-1/2}).$$ Based on the above results, by choosing $a \ll \sqrt{n}$, $\sigma^2_{m} = E_{\mathscr{F}_{m-1}}(D^2_{m})$ can be expressed as $\sigma^2_{m} = E_{\mathscr{F}_{m-1}}(D_{m, 1}^{2}) + O(L_p d^{2} a^{3} p^{-4s} n^{-1/2})$, where \begin{equation} E_{\mathscr{F}_{m-1}}(D_{m, 1}^{2}) = \sum_{m_1,m_2 = 1}^{m-1} \sum_{\substack{j_1\in S_{m_1} \\ j_2 \in S_{m}}} \sum_{\substack{j_3\in S_{m_2} \\ j_4 \in S_{m}}} E_{\mathscr{F}_{m-1}}(\tilde{A}_{j_1j_2}\tilde{A}_{j_3j_4}).\label{eq:MDiffVar}\end{equation} For the above summation in (\ref{eq:MDiffVar}), note that when $j_1 = j_3$ and $j_2 = j_4$, $$E_{\mathscr{F}_{m-1}}(\tilde{A}_{j_1j_2}^2) = E_{j_1}\{M_{j_1j_2}^{2}\mathbb{I}(M_{j_1j_2} > \lambda_{p}(s))\} - \mu_{0, j_1j_2}^{2}(1 + o_p(1)).$$ By Lemma 7 in the SM, we have $E_{j_1}\{M_{j_1j_2}^{2}\mathbb{I}(M_{j_1j_2} > \lambda_{p}(s))\} = E(L_{j_1j_2}^{2} | H_{0})\{1 + o_p(1)\}$, which implies $E_{\mathscr{F}_{m-1}}\tilde{A}_{j_1j_2}^{2} = \text{Var}\{A_{j_1j_2} | H_{0}\}(1 + o_p(1))$, where $L_{j_1j_2} = M_{j_1j_2}\mathbb{I}(M_{j_1j_2} > \lambda_{p}(s))$. Let $\rho_{j_1j_2 1} = \mbox{Cor}(X_{kj_1}, X_{kj_2})$ and $\rho_{j_1j_2 2} = \mbox{Cor}(Y_{kj_1}, Y_{kj_2})$ be the correlations. Let $\tilde{\rho}_{j_1j_2 1} = \tilde{\sigma}_{j_1j_2 1} / (\tilde{\sigma}_{j_1j_1 1}\tilde{\sigma}_{j_2j_2 1})^{1/2}$ and $\tilde{\rho}_{j_1j_2 2} = \tilde{\sigma}_{j_1j_2 2} / (\tilde{\sigma}_{j_1j_1 2}\tilde{\sigma}_{j_2j_2 2})^{1/2}$. For $j_1 \neq j_3$ and $j_2 = j_4$, by Lemma 7 in the SM, $$\big| \text{Cor}_{(j_1,j_3)}(\tilde{\sigma}_{j_1j_2 1} - \tilde{\sigma}_{j_1j_2 2}, \tilde{\sigma}_{j_3j_2 1} - \tilde{\sigma}_{j_3j_2 2}) \big| \leq \tilde{\rho}_{j_1j_3},$$ where $\tilde{\rho}_{j_1j_3} = \max\{|\tilde{\rho}_{j_1j_3 1}|, |\tilde{\rho}_{j_1j_3 2}|\}$.
By Lemmas 6 and 7 in the SM, we have \begin{eqnarray} |E_{(j_1,j_3)}(\tilde{A}_{j_1j_2}\tilde{A}_{j_3j_2})| &\leq& L_p\tilde{\rho}_{j_1j_3} p^{-\frac{4s}{1 + \tilde{\rho}_{j_1j_3}}} \{1 + o_{p}(1)\} + O_{p}(L_p p^{-4s}n^{-1/2}). \nonumber \end{eqnarray} Similarly, for $j_1 = j_3$ and $j_2 \neq j_4$, by Lemma 7, we have that $\big| \text{Cor}_{j_1}(\tilde{\sigma}_{j_1j_2 1} - \tilde{\sigma}_{j_1j_2 2}, \tilde{\sigma}_{j_1j_4 1} - \tilde{\sigma}_{j_1j_4 2}) \big| \leq \rho_{j_2j_4}$ and $$|E_{j_1}(\tilde{A}_{j_1j_2}\tilde{A}_{j_1j_4})| \leq L_{p}\rho_{j_2j_4} p^{-{4s} / (1 + \rho_{j_2j_4})} \{1 + o_{p}(1)\} + O_{p}(L_{p} p^{-4s}n^{-1/2}),$$ where $\rho_{j_2j_4} = \max\{|\rho_{j_2j_4 1}|, |\rho_{j_2j_4 2}|\}$. For $j_1 \neq j_3$ and $j_2 \neq j_4$, we have $$\big| \text{Cor}_{(j_1,j_3)}(\tilde{\sigma}_{j_1j_2 1} - \tilde{\sigma}_{j_1j_2 2}, \tilde{\sigma}_{j_3j_4 1} - \tilde{\sigma}_{j_3j_4 2}) \big| \leq \tilde{\rho}_{j_1j_3}\rho_{j_2j_4}.$$ By Assumption \ref{as5} and Davydov's inequality, for any positive constant $M$, there exists a constant $c > 0$ such that \begin{equation} |E_{(j_1,j_3)}(\tilde{A}_{j_1j_2}\tilde{A}_{j_3j_4})| \leq C \gamma_{2}^{|j_2 - j_4|} \leq p^{-M} \label{eq:LargeDist}\end{equation} for a constant $\gamma_2 \in (0, 1)$ and $|j_2 - j_4| > c \log p$. 
When $j_2$ and $j_4$ are close, by Lemmas 6 and 7, it follows that $$|E_{(j_1,j_3)}(\tilde{A}_{j_1j_2}\tilde{A}_{j_3j_4})| \leq L_p \tilde{\rho}_{j_1j_3}\rho_{j_2j_4} p^{-{4s} / (1 + \tilde{\rho}_{j_1j_3}\rho_{j_2j_4})} \{1 + o_{p}(1)\} + O_{p}(L_{p} p^{-4s}n^{-1/2}).$$ Combining all the cases above for the indices $(j_1,j_2,j_3,j_4)$, equation (\ref{eq:MDiffVar}) can be decomposed as \begin{eqnarray} E_{\mathscr{F}_{m-1}}(D_{m, 1}^{2}) &=& a^{2}(m-1)\text{Var}\{A_{12} | H_{0}\}\{1 + o_{p}(1)\} \nonumber \\ &+& \sum_{m_1,m_2 = 1}^{m-1} \sum_{j_1\in S_{m_1} \neq j_3 \in S_{m_2}} \sum_{j_2 \in S_{m}} E_{(j_1,j_3)}(\tilde{A}_{j_1j_2}\tilde{A}_{j_3j_2}) \label{eq:MDiffVar1} \\ &+& \sum_{m_1 = 1}^{m-1} \sum_{j_1\in S_{m_1}} \sum_{j_2 \neq j_4 \in S_{m}} E_{j_1}(\tilde{A}_{j_1j_2}\tilde{A}_{j_1j_4}) \label{eq:MDiffVar2} \\ &+& \sum_{m_1,m_2 = 1}^{m-1} \sum_{j_1\in S_{m_1} \neq j_3 \in S_{m_2}} \sum_{j_2 \neq j_4 \in S_{m}} E_{\mathscr{F}_{m-1}}(\tilde{A}_{j_1j_2}\tilde{A}_{j_3j_4}). \label{eq:MDiffVar3} \end{eqnarray} Note that $\rho_{j_1j_3} = 0$ for $m_1 \neq m_2$ due to the independence between ${\bf Z}_{S_{m_1}}$ and ${\bf Z}_{S_{m_2}}$. Under Assumption \ref{as3}, we also have $|\tilde{\rho}_{j_1j_3} - \rho_{j_1j_3}| \leq L_p n^{-1/2}$ for $j_1\in S_{m_1}$ and $j_3\in S_{m_2}$. The term in (\ref{eq:MDiffVar1}) is bounded by \begin{eqnarray} a^3(m-1)^2 L_p n^{-1/2} p^{-4s} + a(m-1) \sum_{j_1\neq j_3 \in S_{m_1}} L_p \rho_{j_1j_3} p^{-\frac{4s}{1 + \rho_{j_1j_3}}}. \nonumber \end{eqnarray} By Assumption \ref{as5} and Davydov's inequality, for any $M > 0$, there exists a constant $c > 0$ such that $\rho_{j_1j_3} \leq p^{-M}$ for $|j_1 - j_3| > c \log p$.
Therefore, the summation of $L_p \rho_{j_1j_3} p^{-{4s} / (1 + \rho_{j_1j_3})}$ over $j_1 \neq j_3 \in S_{m_1}$ is bounded by $$\sum_{|j_1 - j_3| \leq c \log p}L_p \rho_{j_1j_3} p^{-4s / (1 + \rho_{j_1j_3})} + \sum_{|j_1 - j_3| > c \log p} L_p p^{-M-4s} \leq aL_pp^{-4s / (2 - \epsilon)}$$ for a small positive constant $\epsilon > 0$. For (\ref{eq:MDiffVar2}), similarly, we have \vspace{-0.75cm} \begin{equation}\begin{split} \bigg|\sum_{m_1 = 1}^{m-1} \sum_{j_1\in S_{m_1}} \sum_{j_2 \neq j_4 \in S_{m}} E_{j_1}(\tilde{A}_{j_1j_2}\tilde{A}_{j_1j_4})\bigg| &\leq a^3(m-1)O_{p}(L_p n^{-1/2} p^{-4s}) \\ + \ a(m - 1) &\sum_{j_2 \neq j_4 \in S_{m}} L_p \rho_{j_2j_4} p^{-{4s} / (1 + \rho_{j_2j_4})}, \end{split}\nonumber\end{equation} which is bounded by $a^3(m-1)O_{p}(L_p n^{-1/2} p^{-4s}) + a^{2}(m - 1)L_p p^{-4s / (2 - \epsilon)}$. For the last term in (\ref{eq:MDiffVar3}), by choosing $M$ in (\ref{eq:LargeDist}) sufficiently large, it is bounded by $a^3(m-1)^2 L_p n^{-1/2} p^{-4s} + a^2(m-1) L_p p^{-4s / (2 - \epsilon)}$. Notice that $\sigma^2_{m} = E_{\mathscr{F}_{m-1}}(D_{m, 1}^{2}) + O_{p}(L_p d^{2} a^{3} p^{-4s} n^{-1/2})$ by choosing $a \ll \sqrt{n}$. Summing up all the terms in (\ref{eq:MDiffVar1}) -- (\ref{eq:MDiffVar3}), up to a multiplication of $1 + o_p(1)$, we have that \begin{eqnarray} \sum_{m=1}^{d}\sigma^2_{m} &=& \frac{a^{2}d(d-1)}{2}\text{Var}\{A_{12} | H_{0}\} + O_{p}(p^3 L_p n^{-1/2} p^{-4s}) + O(p^2 L_pp^{-\frac{4s}{2 - \epsilon}}), \nonumber \end{eqnarray} where $a^{2}d(d-1)\text{Var}\{A_{12} | H_{0}\} / 2 = \text{Var}(B_{1, n} | H_{0})(1 + o(1))$. Since $L_pp^{2-\frac{4s}{2 - \epsilon}} = o \{\text{Var}(B_{1, n} | H_{0})\}$, it follows that $$\sum_{m=1}^{d}\sigma^2_{m} = \text{Var}(B_{1, n} | H_{0})(1 + o(1)) + O_{p}(p^3 L_p n^{-1/2} p^{-4s}).$$ Note that $p^3 L_p n^{-1/2} p^{-4s} = o(L_pp^{2 - 2s})$ for any $n$ and $p$ when $s > 1/2$. 
Given $n = p^{\xi}$ for $\xi\in(0, 2)$, $p^3 L_p n^{-1/2} p^{-4s}$ is of smaller order than $\text{Var}(B_{1, n} | H_{0}) = L_pp^{2 - 2s}$ if $s > 1/2 - \xi/4$, which proves the first claim of (\ref{MCLT}). For the second claim of (\ref{MCLT}), notice that $D_{m} = D_{m, 1} + D_{m, 2}$ where $|D_{m, 2}| \leq d L_p O(a^{2} p^{-2s} n^{-1/2})$. Given $a \ll \sqrt{n}$, we have $\sum_{m=1}^{d}d^{4} a^{8} p^{-8s} n^{-2} \ll \text{Var}^{2}(B_{1, n} | H_{0})$ when $s > 1/4 - \xi/8$. Since $D_{m}^{4} \leq 8 (D_{m, 1}^{4} + D_{m, 2}^{4})$, to show the second claim of (\ref{MCLT}), we only need to focus on $D_{m, 1}^{4}$, which is \begin{eqnarray} && \sum_{m_1=1}^{m-1} \tilde{f}({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}})^4 + c_{2}\sum_{m_1, m_2}^{\ast} \tilde{f}({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}})^2 \tilde{f}({\bf Z}_{S_{m_2}},{\bf Z}_{S_{m}})^2 \nonumber \\ && \ \ \ \ \ + \ c_{3}\sum_{m_1, m_2, m_3}^{\ast} \tilde{f}({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}})^2 \tilde{f}({\bf Z}_{S_{m_2}},{\bf Z}_{S_{m}}) \tilde{f}({\bf Z}_{S_{m_3}},{\bf Z}_{S_{m}}) \label{Dm4} \\ && \ \ + \sum_{m_1, m_2, m_3, m_4}^{\ast} \tilde{f}({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}}) \tilde{f}({\bf Z}_{S_{m_2}},{\bf Z}_{S_{m}}) \tilde{f}({\bf Z}_{S_{m_3}},{\bf Z}_{S_{m}}) \tilde{f}({\bf Z}_{S_{m_4}},{\bf Z}_{S_{m}}) \nonumber \end{eqnarray} where $\sum^{\ast}$ indicates summation over distinct indices smaller than $m$, and $c_2$ and $c_3$ are two positive constants.
Note that the expectation of the last term in (\ref{Dm4}) equals the expectation of its conditional expectation given ${\bf Z}_{S_{m}}$, where the conditional expectation is bounded by \begin{equation}\begin{split} & \bigg| \sum_{m_1, m_2, m_3, m_4}^{\ast} \{E_{S_{m}}\tilde{f}({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}})\} \{E_{S_{m}}\tilde{f}({\bf Z}_{S_{m_2}},{\bf Z}_{S_{m}})\}\\ &~~~~ ~~~~~~~~~~~~~\times\{E_{S_{m}}\tilde{f}({\bf Z}_{S_{m_3}},{\bf Z}_{S_{m}})\} \{E_{S_{m}}\tilde{f}({\bf Z}_{S_{m_4}},{\bf Z}_{S_{m}})\} \bigg| \nonumber \\ \leq \ & \bigg( \sum_{m_1} | E_{S_{m}}\tilde{f}({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}}) | \bigg)^{4} = O\{a^{8}(m-1)^4 L_p p^{-8s} n^{-2}\}. \end{split}\end{equation} Note that the summation of this quantity over $1 \leq m \leq d$ is a small order term of $\text{Var}^{2}(B_{1, n} | H_{0})$ given $s > 1/4 - \xi/8$. Next, following the derivation of $\sigma_{m}^{2}$, $E_{S_{m}} \tilde{f}({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}})^2$ is equal to $$\sum_{j_1 \in S_{m_1}, j_2 \in S_{m}}\sum_{j_3 \in S_{m_1}, j_4 \in S_{m}} E_{S_{m}}(\tilde{A}_{j_1j_2}\tilde{A}_{j_3j_4}) = O\{L_pa^2 p^{-2s}\},$$ which leads to \begin{equation}\begin{split} E_{S_{m}} \sum_{m_1, m_2}^{\ast} \tilde{f}({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}})^2 \tilde{f}({\bf Z}_{S_{m_2}},{\bf Z}_{S_{m}})^2 &= O\{a^4(m-1)^2L_pp^{-4s}\} \mbox{ \ and } \nonumber \\ E_{S_{m}} \sum_{m_1, m_2, m_3}^{\ast} \tilde{f}({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}})^2 \tilde{f}({\bf Z}_{S_{m_2}},{\bf Z}_{S_{m}}) &\tilde{f}({\bf Z}_{S_{m_3}},{\bf Z}_{S_{m}}) \nonumber \\ &= O\{a^{6} (m-1)^{3} L_p p^{-6s} n^{-1}\}. \nonumber \end{split}\end{equation} The summations of the two terms above over $1 \leq m \leq d$ are of smaller order than $\text{Var}^{2}(B_{1, n} | H_{0})$. Also notice that \begin{eqnarray} E \sum_{m_1=1}^{m-1} \tilde{f}({\bf Z}_{S_{m_1}},{\bf Z}_{S_{m}})^4 \leq \sum_{m_1=1}^{m-1} a^{6} \sum_{i \in S_{m_1}, j \in S_{m}} E \tilde{A}_{ij}^{4} = a^{8}(m-1)L_pp^{-2s}.
\nonumber \end{eqnarray} Since $\sum_{m=1}^{d}a^{8}(m-1)L_pp^{-2s} = a^{6}L_pp^{2-2s} \ll p^{4-4s}$ if $a \ll p^{(1-s)/3}$, the second claim of (\ref{MCLT}) is valid given $a \ll \min\{n^{1/2}, p^{(1-s)/3}\}$ and $s > 1/4 - \xi/8$. This proves the asymptotic normality of $T_{n}(s)$ for $s > 1/2 - \xi/4$ under $H_{0}$ of (\ref{H0}) by choosing $a \ll \min\{n^{1/2}, p^{(1-s)/3}\}$, $b \sim \max\{\log(p), \log(n_1+n_2)\}$ and $b \ll a$. $\square$
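For readers who wish to experiment, the statistic can be simulated under $H_0$ along the lines below. Two ingredients are assumptions on our part, since they are defined elsewhere in the paper rather than in this section: the threshold is taken as $\lambda_p(s) = 2s\log p$, a common choice for squared standardized statistics in this literature, and the variance estimators $\hat\theta_{ij1}, \hat\theta_{ij2}$ are taken as the plug-in sample variances of the entrywise covariance estimates. This is a sketch under those assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def thr_stat(X, Y, s):
    """Single-level thresholding statistic T_n(s) (sketch).

    Assumes lambda_p(s) = 2s*log(p) and plug-in variance estimators for the
    entrywise covariance differences; both are assumptions, see the lead-in.
    """
    n1, p = X.shape
    n2 = Y.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    S1, S2 = Xc.T @ Xc / n1, Yc.T @ Yc / n2          # sample covariances
    # plug-in variances of the entries of S1 and S2 (assumed form)
    th1 = np.einsum('ki,kj->ij', Xc**2, Xc**2) / n1 - S1**2
    th2 = np.einsum('ki,kj->ij', Yc**2, Yc**2) / n2 - S2**2
    M = (S1 - S2)**2 / (th1 / n1 + th2 / n2)         # M_ij = F_ij^2
    lam = 2.0 * s * np.log(p)                        # assumed lambda_p(s)
    Mu = M[np.triu_indices(p)]                       # pairs with i <= j
    return float(np.sum(Mu[Mu > lam]))

X = rng.standard_normal((200, 40))                   # illustrative H_0 data
Y = rng.standard_normal((200, 40))
T = thr_stat(X, Y, s=0.6)
print(T)
```

Since every retained summand is positive, the statistic is non-increasing in the threshold level $s$, which is easy to verify on simulated data.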
\section{Introduction} Galaxy clusters are the largest gravitationally bound structures in the universe. They typically contain several hundreds to thousands of galaxies orbiting in a gravitational potential well formed primarily by dark matter (e.g., \citealt{bah77}). They are also filled with hot gas with $T\sim2-10$ keV that loses thermal energy prolifically by emitting X-rays. In the absence of any heat sources, the radiative cooling in the cores of rich clusters would result in a cooling flow in which gas settles in the gravitational potential and drops out as cold condensations (e.g., \citealt{fab94}). However, recent high resolution {\it Chandra} and {\it XMM-Newton} observations have not shown the expected signature of gas cooling below one-third of the virial temperature \citep{bla01,pet01,pet03,tam01,joh02,all01}. This strongly suggests that there must be some source of heat that manages to balance the radiative cooling and thus prevents the mass dropout at the central regions of galaxy clusters. Most theoretical models proposed so far for cluster heating fall into the following two groups: (1) energy injection via radiation, jets, outflows, bubbles, or sound waves from a central active galactic nucleus (\citealt{cio01,chu02,kai03,rus04,roy04,vec04} and references therein); (2) diffusive transport of heat from the outer regions of the cluster to the center via conduction \citep{tuc83,bre88,nar01,voi02,zak03,kim03a} or turbulent mixing \citep{cho03,kim03b,voi04,den05}. Although less well studied than those mentioned above, there are clearly other energy sources in the cluster environments. These include kinetic energy in the orbital motions of galaxies \citep{mil86,jus90}, gravitational potential energy of the gas \citep{mar01,fab03}, feedback from intracluster supernovae \citep{dom04}, dissipation of turbulent energy driven by infall of subclusters \citep{fuj04}, etc.
The amount of available energy in each source is comparable to or larger than the thermal energy of the intracluster medium (ICM), so that radiative cooling would easily be offset if there exist physical mechanisms that can tap the energy from the sources and convert it into thermal energy in the ICM. One such process is dynamical friction (DF), which occurs when a galaxy moving around a cluster center interacts with its gravitationally induced wake in the ICM (e.g., \citealt{dok64,hun71,rud71,rep80}). Because of the gravitational drag, the galaxy loses some of its kinetic energy. Turbulence in the ICM generated by the superposition of the wakes of many galaxies or by other processes will absorb the lost kinetic energy and, in the presence of tangled magnetic fields and/or viscosity, turn it into heat at the dissipation scales (e.g., \citealt{dei96}). \citet{mil86} and \citet{jus90} independently estimated the heating rates by DF of galaxies and found that DF-induced heating can fairly well balance the radiative cooling of a rich cluster, provided the mass-to-light ratio is about 20. Using updated data for the properties of galaxies and gas in the Perseus cluster and employing Monte-Carlo approaches, \citet[hereafter EKK04]{elz04} very recently showed that the total power generated by DF within the cooling radius is comparable to the radiative loss from that region if the average mass-to-light ratio of galaxies is about 10. They also noted that this mechanism is self-regulating in the sense that the gas would not be overheated, since the DF of galaxies becomes ineffective once the gas attains a temperature high enough that galaxy motions become subsonic (see also \citealt{mil86}). While the results of the aforementioned work suggest that the {\it total} power supplied by the motions of member galaxies is comparable to the X-ray luminosity of a typical rich cluster, it is still uncertain whether it can balance the radiative cooling {\it locally} as well.
If the DF of galaxy motions is to serve as a primary heat supplier in clusters in equilibrium, the observed density and temperature profiles of clusters should be a natural consequence of the local heat balance between radiative cooling and DF-induced heating. Since optically thin, X-ray emitting gas is prone to thermal instability (e.g., \citealt{fab94}), it is also an interesting question whether the DF-induced heating is able to suppress thermal instability completely, or at least lengthen its growth time to a level comparable to the ages of clusters; even an equilibrium cluster would otherwise be subject to a mass dropout at its center \citep{kim03a}. In this paper, we take one step further from \citet{mil86} and EKK04 to investigate equilibrium cluster models in which DF-induced heating is the main heating mechanism. We will first construct density and temperature distributions of the gas in clusters that are in strict hydrostatic and thermal equilibrium and compare them with observed profiles. We note that recent X-ray data from {\it BeppoSAX}, {\it Chandra}, and {\it XMM-Newton} observations indicate that gas temperature in rich clusters rapidly increases with radius for $r\simlt (0.05-0.08)r_{180}$, remains roughly isothermal up to $r\sim 0.2r_{180}$, and then gradually declines farther out (\citealt{ett00,deg02,pif05,vik05}, see also \citealt{mar98} for {\it ASCA} results). Here, $r_{180}$ is the virial radius, interior to which the mean density equals 180 times the critical density. While declining temperature profiles at large radii may provide important clues to cosmological structure formation, they appear not to be directly relevant to the cooling flow problem. This is because gas density in clusters decreases as or faster than $\sim r^{-2}$ at the outer parts, so that the radiative cooling time in the $r\simgt 0.2r_{180}$ regions is much longer than the Hubble time.
Indeed, the results of numerical simulations of cluster formation show that the shapes of the declining temperature profiles are essentially independent of the presence of radiative cooling and/or supernova feedback (e.g., \citealt{lok02,mot04}). For this reason, when we compare our model results with observations, we pay attention to density and temperature profiles only in cooling regions with $r\simlt 0.2r_{180}$ (typically $\sim0.5-1$ Mpc), beyond which the effect of radiative cooling is not serious and thus heating mechanisms are not required. The remainder of this paper is organized as follows: In \S2 we evaluate the total heating rate resulting from the DF of galaxies using the formula given by \citet{ost99} for the gravitational drag force in a gaseous medium. While \citet{mil86} and EKK04 approximated galaxy motions as being highly supersonic, we allow for both subsonic and supersonic motions of galaxies. In \S3 we calculate equilibrium density and temperature profiles of galaxy clusters by assuming that the DF-induced heating is deposited at the locations of galaxies. The effects of distributed heating by DF and of the radial mass variation of galaxies are discussed in \S4. In \S5 we analyze the local thermal stability of the gas, while in \S6 we present the results of numerical hydrodynamic simulations that follow the evolution of initially out-of-equilibrium configurations. Finally, in \S7 we conclude this work and discuss other potential consequences of the DF of galaxies in clusters. \section{Total Heating Rate} Galaxies orbiting around the center of a galaxy cluster gravitationally induce wakes in the ICM. The gravitational interaction between the galaxies and their wakes causes the former to lose orbital kinetic energy and converts it into thermal energy of the ICM via either compressional heating (including shocks) or turbulent dissipation of the gas kinetic energy in the wakes.
Although the notion that DF of galaxies can generate heat in the hot ICM has been well recognized and widely studied, most previous work on this subject assumed galaxies in highly supersonic motion, estimated only the total heating rate, and compared it with the total radiative loss rate in clusters (\citealt{rud71,rep80,mil86}; EKK04). However, the motions of galaxies are only slightly supersonic, with an average Mach number of about 1.5 (e.g., \citealt{sar88,fal04}), and even become subsonic at the outer parts of clusters. In this paper we consider both subsonic and supersonic galaxy motions and adopt the general formula of \citet{ost99} for the DF force in a gaseous medium. Following \citet{ost99}, we consider a galaxy of mass $M_g$ moving at velocity ${\bf v}$ through a uniform medium with density $\rho$ and sound speed $c_s$. The dynamical-friction force that the galaxy experiences is given by \begin{equation} \label{Fdf} \FDF = -\frac{4\pi\rho(GM_g)^2I}{v^3} {\bf v}, \end{equation} where the efficiency factor $I$ is defined by \begin{eqnarray} \label{I} I \equiv \left\{ \begin{array}{ll} \frac{1}{2} \ln\left(\frac{1+\cal{M}}{1-\cal{M}}\right) - \cal{M},& {\cal M}<1 \\ \frac{1}{2} \ln\left(1-{\cal M}^{-2}\right) + \ln\left({vt}/{r_{\rm min}}\right),& {\cal M}>1, \end{array} \right. \end{eqnarray} with ${\cal M}\equiv v/c_s$ being the Mach number of the galaxy motion and $r_{\rm min}$ the effective size of a galaxy \citep{ost99}. Note that for ${\cal M}\gg1$, equations (\ref{Fdf}) and (\ref{I}), with $vt=r_{\rm max}$ corresponding to the maximum system size, become identical to the \citet{cha43} formula for the drag force due to collisionless particles. Although equation (\ref{Fdf}) is valid for a perturber moving on a rectilinear trajectory through a uniform-density medium, numerical simulations show that it is a good approximation even in a radially stratified, spherical system \citep{san01}.
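As an aside for the numerically inclined reader, equations (\ref{Fdf}) and (\ref{I}) are straightforward to code up. The Python sketch below is our own illustration, not part of any published pipeline; it works in CGS units, adopts as a default the value $\ln(vt/r_{\rm min})\approx4.6$ used later in this section, and reproduces the subsonic limit $I\rightarrow{\cal M}^3/3$:

```python
import math

def df_efficiency(mach, ln_vt_rmin=4.6):
    """Efficiency factor I of the Ostriker (1999) drag formula,
    with mach = v/c_s.  The default ln_vt_rmin = ln(vt/r_min) ~ 4.6
    corresponds to vt = 1 Mpc and r_min = 10 kpc (see below)."""
    if mach < 1.0:
        # subsonic branch: (1/2) ln[(1+M)/(1-M)] - M  ->  M^3/3 for M << 1
        return 0.5 * math.log((1.0 + mach) / (1.0 - mach)) - mach
    # supersonic branch: (1/2) ln(1 - M^-2) + ln(vt/r_min)
    return 0.5 * math.log(1.0 - mach**-2) + ln_vt_rmin

def df_force(rho, M_g, v, c_s, G=6.674e-8):
    """Magnitude of the DF force of eq. (Fdf), all quantities in CGS."""
    return 4.0 * math.pi * rho * (G * M_g)**2 * df_efficiency(v / c_s) / v**2
```

Both branches of $I$ diverge logarithmically as ${\cal M}\rightarrow1$; the divergence is integrable, so it causes no trouble once $I/{\cal M}$ is averaged over a velocity distribution as done below.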
We assume that the orbital velocities of the $N_g$ galaxies that contribute to heating of the gas are described by the Maxwellian distribution, \begin{equation} \label{Maxwell} f(v) = \frac{4\pi N_g}{(2\pi\sigma_r^2)^{3/2}}v^2e^{-v^2/(2\sigma_r^2)}, \end{equation} with a one-dimensional velocity dispersion $\sigma_r$. The total heating rate due to the DF is then given by \begin{equation} \label{Edot} \langle\dot{E}\rangle = N_g \langle -\FDF \cdot {\bf v} \rangle = \frac{4\pi\rho(GM_g)^2N_g}{c_s} \left\langle\frac{I}{\cal M}\right\rangle, \end{equation} where the angular brackets denote an average over equation (\ref{Maxwell}). Using equations (\ref{I}) and (\ref{Maxwell}), one can evaluate $\langle I/{\cal M}\rangle$ numerically. Figure \ref{I_over_M} plots $\langle I/{\cal M}\rangle$ as a function of $m\equiv \sigma_r/c_s$ for several values of $\ln{(vt/\rmin)}$. For supersonic cases, the density perturbations in the wake are highly asymmetric and the regions influenced by the perturber shrink as $m$ increases. This results in a smaller heating rate for larger $m$. In this case, one can show $\langle I/{\cal M}\rangle \approx (2/\pi)^{1/2}m^{-1}\exp(-0.5/m^2)\ln{(vt/\rmin)}$ for $m\gg1$. For a subsonically moving perturber, on the other hand, the perturbed density in the front and back sides of the perturber becomes more or less symmetric, producing a smaller heating rate with decreasing $m$. In the limit of vanishingly small $m$, $\langle I/{\cal M}\rangle \rightarrow m^2$. In general, $\langle I/{\cal M}\rangle$ peaks at $m=1$, as Figure \ref{I_over_M} indicates. Taking $vt=1$ Mpc, a typical size of clusters, and $\rmin=10$ kpc as the galaxy size, we have $\ln{(vt/\rmin)}\approx 4.6$. Since $m$ usually varies from 0.8 to 3, $\langle I/{\cal M}\rangle\approx 2$.
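With $u\equiv v/\sigma_r$, the average entering equation (\ref{Edot}) reduces to the one-dimensional integral $\langle I/{\cal M}\rangle=\sqrt{2/\pi}\int_0^\infty u^2e^{-u^2/2}\,I(mu)/(mu)\,du$. The following self-contained Python sketch (a simple midpoint rule of our own; the helper \texttt{I\_factor} implements eq.\ [\ref{I}]) reproduces the limits quoted above for $\ln(vt/\rmin)=4.6$: $\langle I/{\cal M}\rangle\rightarrow m^2$ for $m\ll1$ and $\langle I/{\cal M}\rangle\approx2$ near $m\approx1$:

```python
import math

def I_factor(mach, ln_vt_rmin=4.6):
    """Ostriker (1999) efficiency factor of eq. (I)."""
    if mach < 1.0:
        return 0.5 * math.log((1.0 + mach) / (1.0 - mach)) - mach
    return 0.5 * math.log(1.0 - mach**-2) + ln_vt_rmin

def avg_I_over_M(m, ln_vt_rmin=4.6, n=8000, umax=8.0):
    """<I/M>: average of I/M over the Maxwellian of eq. (Maxwell),
    written with u = v/sigma_r so that M = m*u.  Midpoint rule;
    the logarithmic singularity of I at M = 1 is integrable, and the
    midpoints avoid hitting u = 0 or M = 1 exactly."""
    du = umax / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        mach = m * u
        total += u * u * math.exp(-0.5 * u * u) \
                 * I_factor(mach, ln_vt_rmin) / mach * du
    return math.sqrt(2.0 / math.pi) * total
```

For $m=1$ the quadrature gives $\langle I/{\cal M}\rangle\approx2$, in line with the estimate adopted in the text, and the value decreases toward both smaller and larger $m$.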
Therefore, we have the total heating rate \begin{equation} \label{num} \langle\dot{E}\rangle \approx4\times 10^{44}\;{\rm ergs\;s^{-1}} \left(\frac{N_g} {500}\right) \left(\frac{M_g} {10^{11}\; \Msun}\right)^2 \left(\frac{n_e} {0.01\pcc}\right) \left(\frac{T} {5 \;\rm keV}\right)^{-1/2} \left(\frac{\langle I/\mathcal M \rangle}{2}\right), \end{equation} where $n_e$ and $T$ denote the electron number density and gas temperature, respectively, and we adopt the solar abundance. Notice that the rate of gas heating given in equation (\ref{num}) is similar to the typical X-ray luminosity of rich clusters \citep[e.g.,][]{sar88,ros02}, implying that there is a sufficient amount of energy available in the orbital motions of galaxies. This is essentially what led EKK04 to conclude that the dynamical-friction coupling between cluster galaxies and gas can provide enough thermal energy to compensate for the radiative loss. However, it is still questionable whether DF of galaxies heats the gas in the right manner. That is, are the observed density and temperature profiles of the ICM a direct consequence of energy balance between heating by DF and radiative cooling? Is the intracluster gas heated by DF thermally stable? In what follows, we shall study models of clusters with DF-induced heating in detail by assuming hydrostatic equilibrium and thermal energy balance. \section{Model With Localized Heating} DF of galaxies is mediated by gravity, which is a long-range force, so that heating of gas due to a single galaxy is likely to be well distributed throughout its wake. The functional form of the heat distribution is unknown and its derivation may require numerical simulations, which are beyond the scope of the present work. In this section, we instead make the simplifying assumption that heat is deposited {\it locally} at the galaxy position. The effects of heat distribution will be discussed in \S4.
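Before turning to localized heating, it is worth checking the coefficient of equation (\ref{num}) directly. The sketch below evaluates equation (\ref{Edot}) in CGS units; the mean molecular weights $\mu=0.62$ and $\mu_e=1.17$ (per electron, solar abundance) and the use of the adiabatic sound speed are our own assumptions, chosen for consistency with the constants quoted elsewhere in the text:

```python
import math

# assumed CGS constants
G     = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
m_p   = 1.6726e-24     # proton mass [g]
keV   = 1.6022e-9      # 1 keV in erg
M_sun = 1.989e33       # solar mass [g]

def total_heating_rate(N_g, M_g_sun, n_e, T_keV, IM, mu=0.62, mu_e=1.17):
    """Total DF heating rate of eq. (Edot), in erg/s.
    mu, mu_e: assumed mean molecular weights for solar abundance."""
    rho = mu_e * n_e * m_p                                  # gas mass density
    c_s = math.sqrt(5.0 / 3.0 * T_keV * keV / (mu * m_p))   # adiabatic sound speed
    return 4.0 * math.pi * rho * (G * M_g_sun * M_sun)**2 * N_g * IM / c_s

rate = total_heating_rate(N_g=500, M_g_sun=1e11, n_e=0.01, T_keV=5.0, IM=2.0)
```

For the fiducial parameters of equation (\ref{num}) this returns $\approx4\times10^{44}\;{\rm ergs\;s^{-1}}$, confirming the quoted coefficient.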
\subsection{Local Heating Function} Consider a spherically symmetric distribution of galaxies, with the number density given by $n_g(r)$ at radius $r$. The local heating rate per unit volume due to DF is given by \begin{equation} \label{edotr} \edotr = \frac{4\pi\rho(GM_g)^2\IM}{c_s} n_g(r), \end{equation} where we assume that all galaxies have equal mass. To find $n_g(r)$, we assume that the orbits of galaxies are isotropic and isothermal. The usual Jeans equation (e.g., eq.\ [4-55] of \citealt{bin87}) then reads \begin{equation} \label{Jeans} \sigma_r^2 \frac{d\ln n_g}{dr} = -\frac{d\Phi}{dr}, \end{equation} where $\Phi$ is the gravitational potential. Under the NFW distribution of dark matter \begin{equation} \label {rho_dm} \rho_{\rm NFW} = \frac{M_0/2\pi}{r(r+r_s)^2}, \end{equation} with a scale radius $r_s$ and a characteristic mass $M_0$, the gravitational acceleration is given by \begin{equation} \label{dphi_dr} -\frac{d\Phi}{dr} = -\frac{2GM_0}{r_s^2} \left[\frac{\ln(1+x)}{x^2} - \frac{1}{x(1+x)}\right], \end{equation} where $x\equiv r/r_s$ is the dimensionless radius \citep{nav97,kly01}. Combining equations (\ref{Jeans}) and (\ref{dphi_dr}), one can easily find \begin{equation} \label{ng} n_g(x) = n_g(0)g(x) = N_g \frac{g(x)}{r_s^3\int g(x')d^3x'}, \end{equation} where \begin{equation}\label{g} g(x) \equiv \left[\frac{(1+x)^{1/x}}{e}\right]^\eta, \end{equation} with the dimensionless parameter $\eta$ defined by \begin{equation} \label{eta} \eta \equiv \frac{2GM_0}{r_s\sigma_r^2}. \end{equation} Note that $\eta$ measures the ratio of the gravitational potential energy of a galaxy in a cluster to that galaxy's kinetic energy at $r=r_s$. Although the functional form of $g(x)$ may look unusual, it is well behaved near $x\sim0$ and gives density profiles similar to the isothermal $\beta$-model of gas distributions \citep{mak98} or to the King model of the observed galaxy distributions \citep[e.g.,][]{gir98}.
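The galaxy distribution of equations (\ref{ng})--(\ref{eta}) is easy to evaluate numerically. As an illustration (the constants below are standard CGS values), the following sketch confirms that the A1795 parameters discussed in the next paragraph, $M_0=6.6\times10^{14}\Msun$, $r_s=460$ kpc, and $\sigma_r\approx800\kms$, give $\eta\approx19$:

```python
import math

# assumed CGS constants
G     = 6.674e-8       # cm^3 g^-1 s^-2
M_sun = 1.989e33       # g
kpc   = 3.086e21       # cm

def g_profile(x, eta):
    """Shape function g(x) of eq. (g); regular at x -> 0, where
    (1+x)^(1/x) -> e and hence g -> 1."""
    if x == 0.0:
        return 1.0
    return ((1.0 + x)**(1.0 / x) / math.e)**eta

def eta_param(M0_sun, r_s_kpc, sigma_r_kms):
    """Dimensionless eta = 2 G M0 / (r_s sigma_r^2) of eq. (eta)."""
    return (2.0 * G * M0_sun * M_sun) / (r_s_kpc * kpc * (sigma_r_kms * 1e5)**2)

eta_A1795 = eta_param(6.6e14, 460.0, 800.0)   # ~19 for the A1795 values
```

The monotonic decline of $g(x)$ away from the center, steeper for larger $\eta$, is what makes it resemble the King-type profiles fitted to observed galaxy distributions.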
For example, \citet{gir98} found that the best-fit distribution of galaxies in A1795 is $n_g(r)=n_g(0)[1+(r/R_c)^2]^{-1.27}$ with a core radius $R_c=43$ kpc, which is plotted as a solid line in Figure \ref{den_comp}. On the other hand, X-ray and optical observations indicate $M_0=6.6\times10^{14}\Msun$, $r_s=460$ kpc, and $\sigma_r\approx800\kms$ for A1795 \citep{gir98,ett02}, corresponding to $\eta\approx 19$. Figure \ref{den_comp} plots as a dotted line the $g(x)$ curve with $\eta=19$, which is in good agreement with the observed galaxy distribution. We take $2r_s$ as the outer boundary of a cluster. Inserting equation (\ref{ng}) into equation (\ref{edotr}), one obtains \begin{eqnarray} \label{edot} \edotx &=& \frac{4\pi\rho(GM_g)^2N_g}{c_sr_s^3\int g(x')d^3x'} \left\langle\frac{I}{\cal M}\right\rangle g(x) \nonumber \\ &=& 1.7\times 10^{-25} r_{s,460}^{-3} N_{g,500}M_{g,11}^2 n_e \TkeV^{-1/2}\IM g(x) \ergcms, \end{eqnarray} where $r_{s,460} = r_s/(460$ kpc), $N_{g,500}=N_g/500$, $M_{g,11} = M_g/(10^{11}\Msun)$, and $\TkeV$ is the temperature of the gas in units of keV. Equation (\ref{edot}) is our desired expression for the volume heating rate due to the DF of galaxies. Note that $\IM\approx2$ at $m\sim1$ and depends on temperature. \subsection{Equilibrium Model} We look for cluster density and temperature distributions in which the hot gas satisfies both hydrostatic equilibrium and energy balance between radiative cooling and DF-induced heating. For thermal bremsstrahlung, the rate of energy loss per unit volume is given by \begin{equation}\label{j} j = 7.2\times 10^{-24} n_e^2\TkeV^{1/2}\ergcms, \end{equation} \citep{ryb79,zak03}. From equations (\ref{edot}) and (\ref{j}), the condition of thermal energy balance yields \begin{equation} \label{neT} n_e\TkeV = 2.4 \times10^{-2} r_{s,460}^{-3}N_{g,500}M_{g,11}^2 \IM g(x).
\end{equation} Neglecting the weak temperature dependence of $\IM$ and assuming that $M_g$ does not vary with radius, equation (\ref{neT}) states that the gas pressure in equilibrium should trace the distribution of galaxies. Hydrostatic equilibrium of the gas requires \begin{equation} \label{HSE} \frac{c_0^2}{n_e}\frac{d(n_e\TkeV)}{dr} = - \frac{d\Phi}{dr}, \end{equation} where $c_0 = 390\kms$ is the {\it isothermal} sound speed at $\TkeV=1$. Substituting equations (\ref{dphi_dr}) and (\ref{neT}) into equation (\ref{HSE}) and using equation (\ref{g}), we obtain \begin{equation} \label{diff} \frac{d}{dx}\ln\IM = -\eta \left(\frac{\sigma_r^2}{c_0^2\TkeV} - 1 \right) \left[\frac{\ln(1+x)}{x^2} - \frac{1}{x(1+x)}\right]. \end{equation} Since $\IM$ depends on $\TkeV$, equation (\ref{diff}) can be integrated as an initial value problem to yield $\TkeV(x)$. The equilibrium profile for $n_e(x)$ can then be found from equation (\ref{neT}). Since $n_e(x)$ depends rather sensitively on less well known quantities such as $N_g$ and $M_g^2$ and is easily influenced by the mass profile of galaxies, we concentrate on $\TkeV(x)$, which is independent of the properties of galaxies other than $\sigma_r$. Equation (\ref{diff}) has a trivial isothermal solution \begin{equation} \label{Tsol} \Tiso = 6.5 \left(\frac{\sigma_{r}}{10^3\kms}\right)^2\;\;{\rm keV}, \end{equation} independent of the radius. The corresponding density profile is \begin{equation} \label{nsol} n_e = 7.4\times10^{-3} r_{s,460}^{-3} \sigma_{r,3}^{-2} N_{g,500}M_{g,11}^2 g(x), \end{equation} where $\sigma_{r,3}=\sigma_{r}/(10^3 \kms)$, indicating that the gas density follows the galaxy distribution exactly.
Considering uncertainties in the values of $r_{s,460}$, $\sigma_{r,3}$, $N_{g,500}$, and $M_{g,11}$,\footnote{For example, the average mass of galaxies would increase up to $\sim 5\times 10^{11}\Msun$ if the contribution from dark halos is taken into account \citep[e.g.,][]{zen03}.} equation (\ref{nsol}) gives $n_e\sim 0.01-0.2\pcc$ at the cluster centers, which is not much different from observed values. Note that the Mach number corresponding to $\Tiso$ is $m=\gamma^{-1/2}$. For later purposes, we define the transonic temperature \begin{equation}\label{Tsonic} \Tsonic \equiv \frac{1}{\gamma}\Tiso, \end{equation} corresponding to a Mach number of unity for the galaxy motions. The isothermal solution (\ref{Tsol}) is consistent with what has long been known as the scaling relation between the gas temperature and the velocity dispersion of galaxies for many clusters (e.g., \citealt{mus84,edg91,lub93,wu99}, see also \citealt{sar88,ros02} for reviews). It explains why the values of the $\beta$ $(\equiv \mu m_p \sigma_r^2/kT)$ parameter in the $\beta$ models for gas distributions are close to unity \citep{lub93,bah94,gir96b,gir98}. Physically, the scaling relation implies that the plasma and the galaxies are well relaxed under the common gravitational potential. It is uncertain whether the observed scaling law truly results from the DF coupling between the ICM and galaxies, but equation (\ref{Tsol}) suggests that this coupling would at least make the relation tighter. Although gas in the regions surrounding the cooling cores of clusters has nearly constant temperatures close to the virial values, recent X-ray observations exhibit positive radial gradients of gas temperature in the central $\sim300$ kpc regions \citep{pet01,pet03}, while showing declining temperature distributions at far outer regions \citep{mar98,deg02,pif05,vik05}. Thus, the isothermal solution cannot describe the observed temperature distributions over the whole range of radii.
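In practical units, equations (\ref{Tsol}) and (\ref{Tsonic}) amount to the following two-line calculation (using $c_0=390\kms$ from equation [\ref{HSE}]); for $\sigma_r=10^3\kms$ it recovers $\Tiso\approx6.5$ keV and $\Tsonic=0.6\,\Tiso\approx3.9$ keV:

```python
C0_KMS = 390.0      # isothermal sound speed at 1 keV, from the text [km/s]
GAMMA = 5.0 / 3.0

def T_iso_keV(sigma_r_kms):
    """Isothermal equilibrium temperature of eq. (Tsol), in keV."""
    return (sigma_r_kms / C0_KMS)**2

def T_sonic_keV(sigma_r_kms):
    """Transonic temperature of eq. (Tsonic): T_iso / gamma."""
    return T_iso_keV(sigma_r_kms) / GAMMA
```

The ratio $\Tsonic/\Tiso=1/\gamma=0.6$ is what sets the narrow temperature window discussed below.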
We are particularly interested in the regions within and adjacent to the cooling radius, in which gas density is high enough to experience significant radiative cooling. To check whether the general solutions of equation (\ref{diff}) produce temperature and density distributions similar to those observed in such regions, we solve it numerically. We adopt $\eta=19$, $r_s=460$ kpc, $N_g=500$, $M_g=10^{11} \Msun$, and $\sigma_r=10^3\kms$, corresponding to $\Tiso= 6.5$ keV. We choose a value for the central temperature $T(0)$ and then integrate equation (\ref{diff}) from $r=0$ to $2r_s$. The resulting temperature and density profiles for a few selected values of $T(0)$ are plotted in Figure \ref{local}. The spatial behavior of the solutions depends critically on $T(0)$. When $T(0)>\Tsonic$, galaxy motions are subsonic and $d\IM/dm>0$ from Figure \ref{I_over_M}. If $T(0)>\Tiso$ (or $m<\gamma^{-1/2}$), equation (\ref{diff}) gives $d\IM/dr= -(m/2T)(d\IM/dm)(dT/dr)>0$, implying $dT/dr<0$. As temperature monotonically decreases with radius, $m$ increases until it reaches the isothermal value $\gamma^{-1/2}$, where $T=\Tiso$. One can similarly show that $dT/dr>0$ for $\Tiso>T(0)>\Tsonic$, and thus temperature slowly increases toward $\Tiso$. As long as $T(0)>\Tsonic$, the solutions asymptote to the isothermal ones (eqs.\ [\ref{Tsol}] and [\ref{nsol}]) as the radius increases. On the other hand, $dT/dr<0$ when $T(0)<\Tsonic$ (or $m>1$). Since $m$ increases with decreasing $T$ and since $d\IM/dr$ does not change its sign for $m>1$, $dT/dr<0$ is satisfied at all radii. The decrease of temperature is much faster than that of $g(x)$, resulting in an unrealistic distribution of electron number density that increases with radius. As Figure \ref{local} shows, the local heat balance with DF-induced heating yields rising temperature profiles if and only if $\Tsonic<T<\Tiso$.
While this is a tantalizing result considering the central depression of temperatures seen in cooling-flow clusters, the required range of temperatures is very narrow. If we identify $\Tiso$ with the virial temperature, the central temperatures in our equilibrium models must be larger than $\Tsonic=0.6\Tiso$ (for $\gamma=5/3$), which is about twice the observed values of $T(0)\sim(0.3-0.4)\Tiso$ (e.g., \citealt{pet01,pet03,all01}). Because of these overly stringent upper and lower temperature limits, therefore, heating by DF of galaxies alone is unlikely to explain the observed temperature and density distributions of galaxy clusters. We note, however, that the tight range of temperatures for the rising temperature profiles may be caused by the basic assumptions of this section, namely that the DF-induced heating of the ICM is all localized at the positions of galaxies, and that all galaxies have equal mass. We relax these assumptions in the next section. \section{Effects of Distributed Heating} In the presence of viscosity and/or turbulent magnetic fields in the ICM, the kinetic energy lost by a galaxy via DF will eventually be converted into heat, rather than remaining in the bulk motions of the gravitational wake of the galaxy. \citet{dei96} used a quasi-linear fluctuation theory to derive the spatial structure (in Fourier space) of mechanical heating in a turbulent medium self-consistently driven by DF of many galaxies. To our knowledge, there is no published work that studies the spatial heat distribution (in real space) caused by DF. If ICM heating by DF of galaxies occurs through turbulent dissipation of gas kinetic energy, finding the functional form of the heat distribution would not be viable unless the characteristics of ICM turbulence and related processes are prescribed \citep{dei90}.
Instead of attempting to derive a realistic heat distribution, in this work we simply adopt a Gaussian function to parametrize the spatial extent over which heat is distributed.\footnote{We also tried the logarithmic heat distribution functions used in EKK04 and found that the results are similar to those with the Gaussian functions presented in this section.} Our aim in this section is to examine the effects of heat distribution on equilibrium structures in comparison with the localized-heating models. Since galaxy masses are likely to vary with radius within a cluster, we also allow for spatially varying galaxy masses. Assuming that the heat distribution follows a Gaussian profile centered at the location of the galaxy, we write the heating rate per unit volume as \begin{equation} \label{dist_edotr} \edotr = \frac{4\pi\rho G^2\IM}{c_s} h(r), \end{equation} where \begin{equation} \label{conv} h(r) \equiv \frac{1}{\pi^{1/2} l_h} \int n_g(r')M_g(r')^2 e^{-(r-r')^2/l_h^2} dr', \end{equation} is a convolution of $n_gM_g^2$ with a Gaussian of scale length $l_h$. In writing equations (\ref{dist_edotr}) and (\ref{conv}), we assume that density and temperature of the gas do not change significantly inside the wake of a galaxy, which is acceptable within the framework of equation (\ref{Fdf}). The conditions of hydrostatic equilibrium and thermal energy balance then reduce to \begin{equation} \label{diff2} \frac{d}{dx}\ln\IM = -\eta \frac{\sigma_r^2}{c_0^2\TkeV} \left[\frac{\ln(1+x)}{x^2} - \frac{1}{x(1+x)}\right] - \frac{d\ln h(x)}{dx}. \end{equation} It can be easily verified that equation (\ref{diff2}) recovers equation (\ref{diff}) in the limit of $l_h\rightarrow 0$. Figure \ref{disth} plots the solutions of equation (\ref{diff2}) for the case of constant $M_g$.
More widely distributed heating is equivalent to localized heating with a flatter distribution of galaxy number density, causing the $|d\ln h/dx|$ term in equation (\ref{diff2}) to be smaller. Consequently, an equilibrium cluster with a larger value of $l_h$ tends to have $\IM$ decrease faster with increasing $r$. This implies that for $T(0)<\Tsonic$ ($T(0)>\Tsonic$), the temperature in models with non-zero $l_h$ decreases (increases) more rapidly than in the $l_h=0$ counterpart, although the difference between models with $l_h/r_s=0.1$ and 1.0 is negligible at small $r$. Note that even when $T(0)>\Tiso$, for which localized-heating models have declining temperature profiles, the equilibrium temperature in distributed-heating models increases in the regions with $r<l_h$ and eventually converges to $\Tiso$ at $r\simgt(2-3)l_h$. We see that distributed heating does not change the condition $\Tsonic<T<\Tiso$ for the existence of rising temperature distributions. The distributed-heating models do not work better than the localized-heating models in terms of producing temperature distributions similar to observations; in fact, they aggravate the situation by making all of the gas in clusters nearly isothermal except in the central $\sim 1$ kpc regions. Another factor that may affect the equilibrium structure is the radial mass variation of cluster galaxies. Although galaxies located near the central parts of clusters tend to have a large fraction of luminous matter (e.g., \citealt{biv02,mer03}), the strong tidal stripping of dark-matter halos is likely to cause the total (luminous + dark) masses of individual galaxies to decrease toward the cluster center (\citealt{zen03}; EKK04; \citealt{nag05}). Motivated by this consideration, we adopt several algebraic forms of $M_g(r)$ that either monotonically decrease or increase by about a factor of 20 or less from the cluster center to $r=2r_s$, and then solve equation (\ref{diff2}).
The resulting temperature profiles are almost identical to those of the constant-mass cases presented in Figure \ref{disth}. Because the distribution of galaxy masses enters equation (\ref{diff2}) only logarithmically, it does not considerably affect the temperature structure, although it dramatically changes the electron number-density profiles. \section{Thermal Stability} We now discuss the thermal stability of a system in which radiative cooling is locally balanced by DF-mediated heating. For spatially localized heating, the net loss function $\rho\calL$ is given by \begin{equation} \label{calL} \rho\calL = j-\edot = \alpha \rho^2T^{1/2} - \beta \rho T^{-1/2+p}, \end{equation} where the power-law index $p$ accounts for the local temperature dependence of $\IM$ in a piecewise manner, and the positive constants $\alpha$ and $\beta$ contain all information on the cluster properties other than gas density and temperature. Figure \ref{slope} plots $p=d\ln\IM/d\ln T$ as a function of $m$, which gives $p\approx 0$ around the transonic temperature ($m\approx 1$), while $p\approx 0.3-0.5$ for supersonic temperatures ($m>1$). The curves asymptote to $-1$ for $m\ll1$ and to $0.5$ for $m\gg1$, as the asymptotic formulae given in \S2 suggest. Note that $p$ is insensitive to the choice of $\ln(vt/r_{\rm min})$ as long as $m>0.8$. Thermal stability of the system can be easily checked by using the generalized Field criterion, which states that the system is thermally unstable to isobaric perturbations if \begin{equation}\label{gfield} \left.\frac{\partial (\calL/T)}{\partial T}\right|_P = \frac{1}{\rho T} \left[ \left.\frac{\partial (\rho\calL)}{\partial T}\right|_\rho - \frac{\rho}{T} \left.\frac{\partial(\rho\calL)}{\partial \rho}\right|_T \right] = - p \frac{\alpha \rho}{T^{3/2}} < 0, \end{equation} where the second equality assumes $\rho\calL=0$, corresponding to thermal equilibrium \citep{fie65,bal86}.
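The slope $p=d\ln\IM/d\ln T$ shown in Figure \ref{slope} can be computed directly from equations (\ref{I}) and (\ref{Maxwell}): since $m\propto T^{-1/2}$ at fixed $\sigma_r$, one has $p=-\frac{1}{2}\,d\ln\IM/d\ln m$. The self-contained Python sketch below (midpoint quadrature plus a centered finite difference, our own illustration with the quadrature helper included for self-containment) recovers the asymptotic values $p\rightarrow-1$ for $m\ll1$ and $p\rightarrow1/2$ for $m\gg1$ quoted above:

```python
import math

def I_factor(mach, ln_vt_rmin=4.6):
    """Ostriker (1999) efficiency factor of eq. (I)."""
    if mach < 1.0:
        return 0.5 * math.log((1.0 + mach) / (1.0 - mach)) - mach
    return 0.5 * math.log(1.0 - mach**-2) + ln_vt_rmin

def avg_I_over_M(m, n=8000, umax=8.0):
    """<I/M> over the Maxwellian of eq. (Maxwell), midpoint rule."""
    du = umax / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        total += u * u * math.exp(-0.5 * u * u) * I_factor(m * u) / (m * u) * du
    return math.sqrt(2.0 / math.pi) * total

def p_slope(m, eps=0.02):
    """p = d ln<I/M> / d ln T = -(1/2) d ln<I/M> / d ln m,
    via a centered finite difference in ln m."""
    hi = math.log(avg_I_over_M(m * (1.0 + eps)))
    lo = math.log(avg_I_over_M(m * (1.0 - eps)))
    return -0.5 * (hi - lo) / (math.log(1.0 + eps) - math.log(1.0 - eps))
```

The sign change of $p$ near $m\approx1$ is what drives the stability dichotomy discussed next.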
Since $p>0$ for $m>1$ (and $p<0$ for $m<1$), this implies that the ICM is thermally unstable (stable) if heating is caused preferentially by galaxies moving at supersonic (subsonic) speeds. Dense inner parts of galaxy clusters, where the cooling time is less than the Hubble time, are filled with low-temperature gas such that galaxy motions are readily supersonic, implying that DF-induced heating is unable to quench thermal instability in those cooling cores. Whether thermal instability has important dynamical consequences for cluster evolution can be judged by considering its growth time, which amounts to \begin{equation}\label{tgrow} t_{\rm grow} = -\frac{\gamma}{\gamma-1} \frac{P}{\rho T^2} \left.\frac{\partial(\calL/T)}{\partial T}\right|_P^{-1} =0.96\; p^{-1}\; {\rm Gyr}\; \left(\frac{n_e}{0.05 \pcc}\right)^{-1} \left(\frac{k_BT}{2\; {\rm keV}}\right)^{1/2}, \end{equation} (cf.\ \citealt{kim03a}). Even if cooling cores are thermally unstable, therefore, the virulence of thermal instability may or may not manifest itself during the lifetime of clusters, depending on the value of $p$. For example, for a cluster with $\sigma_r=10^3\kms$ and $T=2$ keV at the central regions, $m=1.4$ and $p=0.23$ from Figure \ref{slope}, giving $t_{\rm grow}=4.2$ Gyr. This is almost comparable to the ages of clusters since the last major merger event ($\sim7$ Gyr for massive clusters; \citealt{kit96}), suggesting that thermal instability may be dynamically unimportant for practical purposes. The inability of DF-induced heating to suppress thermal instability seems not to be a fatal problem as long as the following two conditions are met: i) cooling cores are in thermal equilibrium, a basic premise of the stability analysis; and ii) galaxies near the central parts move at near-transonic speeds ($m\simlt2$), so as to have a sufficiently small value of $p$.
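For reference, the growth time of equation (\ref{tgrow}) is reproduced by the one-liner below; with the fiducial $n_e=0.05\pcc$ and $T=2$ keV, the value $p=0.23$ quoted above indeed gives $t_{\rm grow}\approx4.2$ Gyr:

```python
def t_grow_gyr(p, n_e=0.05, T_keV=2.0):
    """Thermal-instability growth time of eq. (tgrow), in Gyr."""
    return 0.96 / p / (n_e / 0.05) * (T_keV / 2.0)**0.5
```

Larger $p$, i.e., more strongly supersonic galaxy motions, shortens the growth time, consistent with the instability criterion of equation (\ref{gfield}).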
While the second condition appears to be easily satisfied in clusters, the results of the preceding section suggest that the first condition may not be, if DF-induced heating is to balance radiative cooling over the {\it whole} range of radii. Since the cooling time outside the cooling radius is longer than the Hubble time, one may argue that material beyond the cooling radius does not need any heating source and that it is sufficient for DF of galaxies to supply heat to gas only in cooling cores. We address this possibility in the next section. As a final remark of this section, we compare our results with previous work. Formally, equation (\ref{gfield}) implies that gas subject to DF-induced heating has a critical temperature above (below) which it becomes thermally stable (unstable), and that this critical temperature corresponds exactly to a Mach number of unity for the galaxy motions. This result is quite different from those of previous work, which made various approximations to the heating function. For instance, \citet{mil86} argued that gas with drag heating is always thermally unstable and rapidly turns into a multi-phase medium, a consequence of his neglecting the temperature dependence of the heating function (corresponding to $p=1/2$ in eq.\ [\ref{calL}]). Although \citet{bre89} found a similar critical gas temperature, they considered only the supersonic regime and used an approximate heating function from the shock jump conditions, which makes their critical temperature smaller than ours by a factor of about three. Our result, based on the general formula of \citet{ost99} for the drag force, applicable to both supersonic and subsonic cases, represents the situation better. \section{Time Dependent Approach} We have shown in the preceding sections that an equilibrium cluster subject to DF-mediated heating has a temperature profile that either decreases (for $T>\Tiso$ or $T<\Tsonic$) or increases (for $\Tsonic<T<\Tiso$) with radius.
While the radially increasing temperature profile is attractive, the requirement that thermal energy be balanced locally over the entire region out to $2r_s$ causes the central gas temperature to be no less than 0.6 times the virial temperature, about a factor of two larger than observed. Since radiative cooling is important only in the dense central parts inside the cooling radius, however, it would be sufficient for DF of galaxies to provide heat only in such a cooling core instead of over the entire region. In addition, thermal instability would not be an issue provided that gas is in thermal equilibrium and that galaxy motions are near transonic (\S5). Therefore, it may be possible for a cluster to start from an arbitrary non-equilibrium state and evolve slowly under both radiative cooling and DF-mediated heating, ending up with an equilibrium core in which thermal instability has a very long growth time. The resulting temperature profile, albeit consistent with observations, need not represent an equilibrium for the whole range of radii. To test this idea, we have run a number of one-dimensional hydrodynamic simulations in spherical polar coordinates. We set up a logarithmically spaced radial grid, with 400 zones, from 1 kpc to 1 Mpc, and explicitly implement the net cooling-heating function described by equation (\ref{calL}). We also include the effect of the gravitational potential due to {\it passive} dark matter using equation (\ref{dphi_dr}) with $M_0=6.6\times 10^{14}\Msun$ and $r_s=460$ kpc; the DF coupling between gas and the dark matter and the back reaction on the latter component are not taken into account. As initial conditions, we consider spherically symmetric clusters and take $n_e=n_e(0)g(x)\pcc$ with the central electron density $n_e(0)= 0.002$, 0.005, 0.01, 0.02 cm$^{-3}$, and fix $T=6.5$ keV independent of radius for all models.
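The radial grid described above can be set up as follows (a minimal NumPy sketch; the actual ZEUS grid construction may differ in detail):

```python
import numpy as np

# 401 zone edges bound 400 logarithmically spaced zones from 1 kpc to 1 Mpc.
r_edges = np.logspace(0.0, 3.0, 401)          # kpc; 10^0 = 1, 10^3 = 1000
r_cent = np.sqrt(r_edges[:-1] * r_edges[1:])  # geometric zone centers
ratio = r_edges[1] / r_edges[0]               # constant edge ratio on a log grid
```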
We adopt $r_s=460$ kpc, $N_g=500$, and $M_g=10^{11} \Msun$, and consider both localized heating and distributed heating with $l_h/r_s=$0.1 and 1. Note that the corresponding equilibrium density profile has $n_e(0)= 0.0074$ cm$^{-3}$ when heating is localized. We adopt outflow boundary conditions for scalar variables (i.e., density, energy, etc.) that assign the same values in the ghost zones as in the corresponding active zones. For the radial velocity, we allow it to vary as a linear function of radius at the inner boundary, while fixing it to be zero at the outer boundary (e.g., \citealt{kim03a}). Using the ZEUS hydrodynamic code \citep{sto92}, we solve the basic hydrodynamic equations and follow the nonlinear evolution of each model cluster. The resulting radial profiles of electron number density and temperature of model A with $n_e(0)=0.02\pcc$ and model B with $n_e(0)=0.005\pcc$, both of which assume spatially localized heating, are shown in Figure \ref{evol} for a few time epochs. Model A, which initially has a larger density than the equilibrium value everywhere, immediately develops radial mass inflows. As time evolves, the temperature drops and the density increases. Compared with pure cooling cases, which maintain nearly isobaric conditions fairly well (e.g., \citealt{kim03a}), DF-induced heating is found to cause thermal pressure to increase with density as $P\propto\rho^{0.3}$. Since $j\propto T^{-2.4}$ and $\edot \propto T^{-1.9+p}$ with $p>0$ for supersonic galaxy motions, however, the system is thermally unstable and cooling occurs at a much faster rate as the temperature decreases. Model A experiences a cooling catastrophe in less than 4 Gyr. On the other hand, model B with initial overheating becomes hotter and less dense, which in turn tends to increase the ratio of the heating rate to the cooling rate for $p\simgt -0.5$. As the gas temperature increases, the Mach number of galaxy motions becomes smaller, reducing the value of $p$.
The cooling rate will therefore eventually exceed the heating rate and the cluster evolution will be reversed. In the case of model B, this turnaround will occur at $t=70$ Gyr when $T\sim18$ keV (or $m\sim0.5$) is reached. The evolution of models with different densities and different length scales for heat distribution is qualitatively similar to that of models A and B, namely that clusters catastrophically evolve toward vanishing central temperature if cooling dominates initially, while clusters with initial excessive heating heat up steadily. This suggests that DF-induced heating does not naturally lead non-equilibrium clusters to thermally stable, equilibrium cores. As shown in \S3 and \S4, DF of galaxies does not explain the observed temperature distributions of clusters if the condition of thermal energy balance is imposed at all radii. Although it would be enough for DF-induced heating to balance radiative cooling only in cooling cores, such cooling cores do not naturally form from non-equilibrium states. Unless the properties of the galaxies and the ICM are fine-tuned, small departures from an equilibrium state rapidly evolve into an extreme configuration. Therefore, we conclude that DF-induced heating {\it alone} is not likely to account for the absence of cold gas in the centers of galaxy clusters. Although heating by DF from galaxies does not appear to provide a complete solution to the cooling flow problem, DF-induced heating can still offset a considerable amount of radiative cooling. The isobaric cooling time is given by \begin{equation}\label{tcool} t_{\rm cool} = \frac{\gamma}{\gamma-1} \left(\frac{P}{\rho\calL}\right) = \frac{0.96}{1-\mathcal{R}/(n_e\TkeV)} \left(\frac{n_e}{0.05\pcc}\right)^{-1} \left(\frac{kT}{2\;{\rm keV}}\right)^{1/2} \;{\rm Gyr}, \end{equation} where $\mathcal{R} \equiv 0.024 r_{s,460}^{-3} N_{g,500} M_{g,11}^2\IM$ represents the contribution of the DF-induced heating.
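Equation (\ref{tcool}) can be evaluated numerically as follows (an illustrative Python sketch; the choice $r_{s,460}=N_{g,500}=M_{g,11}=\IM=1$, giving $\mathcal{R}=0.024$, is purely for illustration since $\IM$ depends on the Mach number):

```python
def t_cool_gyr(n_e, kT_keV, R=0.0):
    """Isobaric cooling time in Gyr from eq. (tcool); R is the
    DF-heating term (R = 0 recovers the pure cooling time)."""
    return (0.96 / (1.0 - R / (n_e * kT_keV))
            * (n_e / 0.05) ** (-1.0) * (kT_keV / 2.0) ** 0.5)

# Pure cooling at the fiducial n_e = 0.05 cm^-3 and k_B T = 2 keV:
print(t_cool_gyr(0.05, 2.0))                     # -> 0.96
# With the illustrative R = 0.024, the cooling time lengthens:
print(round(t_cool_gyr(0.05, 2.0, R=0.024), 2))  # -> 1.26
```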
Figure \ref{tscale} plots as solid lines the cooling time as a function of $n_e$ for $N_{g,500} M_{g,11}^2=1$ or as a function of $N_{g,500} M_{g,11}^2$ with varying density. The cases without any heating are shown for comparison as dotted lines. The time epochs at which clusters experience a cooling catastrophe in the numerical simulations are marked in Figure \ref{tscale}$a$ as a solid circle, a triangle, a cross, and an open circle for no heating, distributed heating with $l_h/r_s=1.0$ and 0.1, and the localized heating case, respectively. The numerical results are in good agreement with the theoretical prediction. Models with distributed heating tend to have longer cooling times. When $N_{g,500} M_{g,11}^2=1$ and $n_e(0)=0.05\pcc$, Figure \ref{tscale}$a$ shows that DF-induced heating lengthens the cooling time by about 13\% compared to the pure cooling case. However, as Figure \ref{tscale}$b$ indicates, the offset of cooling by DF-induced heating becomes increasingly larger as $N_{g,500}M_{g,11}^2$ increases. The increment of the cooling time could be as large as 140\% when $n_e(0)\sim0.03-0.1\pcc$ and $N_{g,500}M_{g,11}^2\sim3-7$, suggesting that the thermal energy supplied by DF of galaxies is by no means negligible. \section{Conclusions and Discussion} Friction of galaxy motions via the gravitational interaction with their own gravitationally induced wakes in the ICM has often been invoked as an efficient heating mechanism of the gas (e.g., \citealt{mil86,jus90,dei96}; EKK04). This idea of DF-mediated heating of the ICM is quite attractive because there is sufficient energy available in galaxy motions and the mechanism is self-regulating; it operates effectively only when the temperature of the gas is in a certain range, which happens to be the typical gas temperature in the cooling cores of galaxy clusters. In this paper, we take one step further from \citet{mil86} and EKK04 to calculate equilibrium density and temperature profiles of the hot gas heated by DF of galaxies.
Instead of restricting ourselves to cases where galaxy motions are all supersonic, we use the general formula derived by \citet{ost99} for the drag force, which makes allowance for both supersonic and subsonic galaxy motions in a gaseous medium. We show that the total heating rate due to the DF of galaxies in a typical rich cluster is comparable to its total X-ray luminosity, confirming the results of \citet{mil86} and EKK04 (see \S2). Next, we derive the local heating function (eq.\ [\ref{edot}]) assuming that the orbits of galaxies are isotropic and isothermal under the NFW distribution of dark matter and that the kinetic energy lost by a galaxy is deposited into heat at the location of the galaxy. The condition that the gas is in hydrostatic equilibrium and maintains energy balance requires the temperature profile to be one of the following three kinds: isothermal, with $T=\Tiso$; a profile decreasing with radius when $T<\Tsonic$ or $T>\Tiso$; or an increasing profile when $\Tsonic<T<\Tiso$, where $\Tiso$ and $\Tsonic$ denote the temperatures corresponding to unity Mach number of galaxy motions with respect to the isothermal and adiabatic sound speeds, respectively (eqs.\ [\ref{Tsol}] and [\ref{Tsonic}]). The isothermal solution is interesting because it describes quite well the observed scaling relationship among cluster properties. Although the rising temperature solution is attractive since clusters usually show a temperature drop in the central regions, it strictly requires the central temperature to be no lower than 0.6 times the virial temperature, which is roughly twice as large as the observed values (\S3). We also consider cases in which DF-induced heating is distributed in space according to a Gaussian form, and/or the masses of cluster galaxies vary over radius, and find that the stringent limit of $T>\Tsonic=0.6\Tiso$ for rising temperature distributions remains unchanged (\S4).
Using the local heating function we have derived, we analyze the thermal stability of the gas subject to radiative cooling and DF-mediated heating (\S5). When galaxy motions are subsonic, the heating rate, which decreases steeply with temperature, suppresses thermal instability completely. On the other hand, supersonic galaxy motions, for which the heating rate is relatively insensitive to temperature, are unable to stop the growth of thermal instability. The growth time in the presence of DF-induced heating is at least twice that in the pure cooling case, and becomes progressively longer as the Mach number of galaxies decreases. When galaxies move at slightly supersonic velocities with Mach number less than 2, which is very likely at cluster centers, the growth time becomes comparable to the ages of clusters. This implies that thermal instability, even if it operates, may have insignificant dynamical consequences on cluster evolution. Noting that regions outside the cooling radius need not be in strict thermal equilibrium, we use numerical hydrodynamic simulations to explore the possibility that clusters evolve from an arbitrary non-equilibrium state and form current cooling cores in which DF-induced heating balances radiative cooling (\S6). We find that clusters that were initially dominated by cooling unavoidably develop radial mass inflows and decrease their central temperatures in a runaway fashion, whereas clusters with initial overheating slowly heat up and end up with radially decreasing temperature profiles. Equilibrium solutions therefore do not appear to form an attracting set for galaxy and gas configurations, suggesting that when DF from galaxies is the sole heating source, it is extremely difficult to obtain equilibrium cores by smoothly evolving non-equilibrium clusters (even if in some cases the cooling catastrophe can be deferred so as to occur on a longer timescale).
Putting together all the results of this paper, we conclude that DF of galaxies, albeit an interesting heating mechanism with a lot of available energy, cannot be the {\it lone} heating agency that balances radiative cooling in rich galaxy clusters. We nonetheless note that the heating due to DF of galaxies could considerably lengthen the cooling time, depending on the value of $N_g M_g^2$, and thus should not be neglected in the energetics of galaxy clusters. One of the key assumptions made in this paper is that all the kinetic energy lost by galaxies via DF is transferred to the thermal energy of the surrounding gas. The rationale behind this assumption is that the superposition of the wakes produces turbulence in the ICM, and the kinetic energy injected into the ICM turbulence at saturation cascades down along a Kolmogorov-like energy spectrum, eventually transforming into heat through viscous dissipation at small scales. Another possibility is that a large fraction of the kinetic energy of galaxies merely enhances the level of the turbulence instead of being converted into heat. This may occur when the turbulence is not yet fully developed. The characteristics of the ICM turbulence are not well known, yet observations and numerical simulations suggest an average velocity dispersion $\vturb\sim 200-400\kms$ on scales of $\lambda\sim5-20$ kpc \citep{ric01,car02,chu04,sch04,fal04}. The associated turbulent kinetic energy is $(1/2)M_{\rm gas}\vturb^2 \sim 9\times 10^{61}$ erg for the total mass $M_{\rm gas}\sim 10^{14}\Msun$ of the ICM in a rich cluster. Let us assume that this energy is supplied solely by DF of galaxies at a rate given by equation (\ref{num}). If the level of turbulence keeps increasing without dissipation, it would take about 7 Gyr for the DF of galaxies to feed the ICM turbulence to the observed level, which is almost comparable to the lifetime of the cluster.
Conversely, if the turbulence is fully developed and in a steady state such that the energy injection rate by DF equals the rate of dissipation, the turbulence would have $\vturb = (2\lambda\dot{E}/M_{\rm gas})^{1/3}\sim 60 \kms$ for $\lambda\sim 20$ kpc, which is too small to explain the observations. All of this implies that the contribution of the DF of galaxies to the ICM turbulence is not considerable (see also \citealt{san99b}). If the dissipation of the (fully developed) turbulence is to provide enough heat to balance radiative cooling, as suggested by \citet{chu04} and \citet{fuj04}, the energy injection cannot be entirely due to the DF of galaxies; other energy sources are required, including jets from active galactic nuclei and mergers with smaller groups or clusters. The assertion that the conversion of galaxy kinetic energy into thermal energy of the gas may not be so effective is supported by the results of \citet{fal04}, who used numerical simulations to study cluster formation in a $\Lambda$CDM cosmology with the DF of galaxies included explicitly. They found that (1) the motions of galaxies at the present epoch are slightly supersonic, with an average Mach number of ${\cal M}\approx 1.4$; (2) gas within the virial radius has a velocity dispersion $\vturb\sim(0.3-0.5)c_s$, probably resulting from infall motions of galaxies and small groups, with a small contribution from the DF of galaxies; and (3) the clusters still suffer from the cooling catastrophe.
Although higher-resolution, more-controlled simulations are required to make a definitive statement, the last point of their results suggests that the system is thermally unstable or that heating by DF may be quite inefficient: as we have seen in the previous section, systems evolved from arbitrary initial conditions do not generally tend toward the equilibrium solutions (which therefore do not form an attracting set), and for some initial parameters the delay in the cooling catastrophe produced by the DF mechanism is not significant. We finally note that several issues of potential importance were not investigated in this paper. One of these relates to the galaxy velocity anisotropy. It is easy to show that, for an isothermal distribution with constant anisotropy, it is possible to obtain equilibrium solutions with smaller central gas temperatures, more in line with observations. However, these tend to have unrealistic galaxy number density distributions. The equilibrium and stability of gas configurations in more realistic anisotropic models have not been investigated. We have adopted the general formula of \citet{ost99} for the drag force on a galaxy moving at an arbitrary speed. This is an improvement over previous studies (e.g., \citealt{mil86}; EKK04) that considered only supersonic galaxy motions. While its explicit dependence on the Mach number enabled us to explore temperature profiles of equilibrium clusters, the formula still assumes a galaxy moving in a {\it straight-line} orbit through a {\it uniform} gaseous medium. Clearly, the ICM is radially stratified and galaxies follow curved rather than rectilinear trajectories.
In a collisionless system, \citet{jus04} recently showed that the density inhomogeneity reduces the Coulomb logarithm of the Chandrasekhar formula by limiting the maximum impact parameter to the local scale length of density variation (see also \citealt{has03,spi03}), and induces an additional drag force in the lateral direction of the galaxy motion (see also \citealt{bin77}). They further showed that the reduction of the Coulomb logarithm makes the orbital energy loss 15\% less effective, inhibiting the circularization of the orbit, while the additional tangential drag force has a negligible effect on the orbital evolution. Similar effects are expected to occur in a gaseous medium, yet their consequences are not known quantitatively. A related important question is how heat (or turbulent) energy is distributed in the wakes that form in the turbulent, inhomogeneous, magnetized ICM. A real assessment of the effects of DF on the dynamical evolution of galaxy clusters requires answers to the above questions, which will direct our future research. \acknowledgments We are pleased to thank P. Goldreich, M.\ G.\ Lee, C.\ McKee, R.\ Narayan, and E.\ Ostriker for many helpful discussions and communications. We also thank an anonymous referee for useful comments and suggestions. W.-T.\ Kim was supported in part by Korea Science and Engineering Foundation (KOSEF) grant R01-2004-000-10490-0 at the SNU and in part by NSF grant AST 0307433 at the CfA. M.\ Kamionkowski was supported by NASA NNG05GF69G at Caltech.
\subsubsection*{\bibname}} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage{url} \usepackage{booktabs} \usepackage{amsfonts} \usepackage{nicefrac} \usepackage{microtype} \usepackage{amsmath} \usepackage{amsthm} \usepackage{algpseudocode,algorithm,algorithmicx} \usepackage{graphicx} \graphicspath{ {images/} } \usepackage[usenames, dvipsnames]{color} \def\UrlBreaks{\do\/\do-} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \usepackage{mathtools} \definecolor{dark-gray}{gray}{.35} \definecolor{myorange}{RGB}{246, 164, 16} \definecolor{mygreen}{RGB}{1, 100, 3} \usepackage{footnote} \makesavenoteenv{table} \makesavenoteenv{tabular} \usepackage{cleveref}[2012/02/15] \crefformat{footnote}{#2\footnotemark[#1]#3} \begin{document} \twocolumn[ \aistatstitle{Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs} \aistatsauthor{ Alexia Jolicoeur-Martineau \And Ioannis Mitliagkas} \aistatsaddress{ Mila, University of Montreal \And Mila, University of Montreal} ] \begin{abstract} We generalize the concept of maximum-margin classifiers (MMCs) to arbitrary norms and non-linear functions. Support Vector Machines (SVMs) are a special case of MMC. We find that MMCs can be formulated as Integral Probability Metrics (IPMs) or classifiers with some form of gradient norm penalty. This implies a direct link to a class of Generative adversarial networks (GANs) which penalize a gradient norm. We show that the Discriminator in Wasserstein, Standard, Least-Squares, and Hinge GAN with Gradient Penalty is an MMC. We explain why maximizing a margin may be helpful in GANs. We hypothesize and confirm experimentally that $L^\infty$-norm penalties with Hinge loss produce better GANs than $L^2$-norm penalties (based on common evaluation metrics).
We derive the margins of Relativistic paired (Rp) and average (Ra) GANs. \end{abstract} \section{Introduction} Support Vector Machines (SVMs) \cite{cortes1995support} are a very popular type of maximum-margin classifier (MMC). The margin can be conceptualized as the minimum $L^p$ distance between the decision boundary of the classifier and any data-point. An SVM is a linear classifier which maximizes the minimum $L^2$ margin. A significant body of work has been done on generalizing SVMs beyond a simple linear classifier through the kernel trick \cite{aizerman1964theoretical}. However, until very recently, SVMs had not been generalized to arbitrary norms with non-linear classifiers (e.g., neural networks). In this paper, we describe how to train MMCs (which generalize SVMs) through different approximations of the $L^p$-norm margin, and we show that this results in loss functions with a gradient norm penalty. Generative adversarial networks (GANs) \citep{GAN} are a very successful class of generative models. Their most common formulation involves a game played between two competing neural networks, the discriminator $D$ and the generator $G$. $D$ is a classifier trained to distinguish real from fake examples, while $G$ is trained to generate fake examples that will confuse $D$ into recognizing them as real. When the discriminator's objective is maximized, it yields the value of a specific divergence (i.e., a distance between probability distributions) between the distributions of real and fake examples. The generator then aims to minimize that divergence (although this interpretation is not perfect; see \citet{jolicoeur2018beyonddivergence}). Importantly, many GANs apply some form of gradient norm penalty to the discriminator \citep{WGAN-GP,fedus2017many,mescheder2018training,karras2019style}. This penalty is motivated by a Wasserstein distance formulation in \citet{WGAN-GP}, or by numerical stability arguments \cite{mescheder2018training,karras2019style}.
In this paper, we show that discriminator loss functions that use a gradient penalty correspond to specific types of MMCs. Our contributions are the following: \begin{enumerate} \item We define the concept of expected margin maximization and show that Wasserstein-, Standard-, Least-Squares-, and Hinge-GANs can be derived from this framework. \item We derive a new method from this framework, a GAN that penalizes $L^\infty$ gradient norm values above 1 (instead of penalizing all values unequal to 1 as done by \citet{WGAN-GP}). We hypothesize and experimentally show that this method leads to better generated outputs. \item We describe how margin maximization (and thereby gradient penalties) helps reduce vanishing gradients at fake (generated) samples, a known problem in many GANs. \item We derive the margins of Relativistic paired and average GANs \citep{jolicoeur2018relativistic}. \end{enumerate} It is worth noting that \cite{lim2017geometric} explore a similar connection between GANs and SVMs, which they use to propose Geometric GANs. The main difference from our work is that they assume a linear classifier working on the {\em feature space of a neural network's output} instead of the {\em input space}. Furthermore, that work does not exploit the duality theory of SVMs. Thereby, it does not draw a connection to gradient penalty terms. Our work explores this new connection, which motivates an $L^\infty$-norm gradient penalty and shows great promise over the standard $L^2$-norm gradient penalty. The paper is organized as follows. In Section~\ref{sec:2}, we review SVMs and GANs. In Section~\ref{sec:3}, we generalize the concept of maximum-margin classifiers (MMCs). In Section~4, we explain the connections between MMCs and GANs with gradient penalty. In Section~\ref{sec:4.1}, we note that enforcing the 1-Lipschitz property is equivalent to assuming a bounded gradient; this implies that Wasserstein's distance can be approximated with an MMC formulation.
In Section~\ref{sec:4.2}, we describe the benefits of using MMCs in GANs. In Section~\ref{sec:4.3}, we hypothesize that $L^1$-norm margins may lead to more robust classifiers. In Section~\ref{sec:4.4}, we derive margins for Relativistic paired and average GANs. Finally, in Section~\ref{sec:5}, we provide experiments to support the hypotheses in our contributions. \section{Review of SVMs and GANs} \label{sec:2} \subsection{Notation} \label{sec:notation} We focus on binary classifiers. Let $f$ be the classifier and $(x,y) \sim \mathbb{D}$ the distribution (of a dataset $D$) with $n$ data samples $x$ and labels $y$. As in the SVM literature, $y=1$ when $x$ is sampled from class 1 and $y=-1$ when $x$ is sampled from class 2. Furthermore, we denote $x_1 = x | (y = 1) \sim \mathbb{P}$ and $x_2 = x | (y = -1) \sim \mathbb{Q}$ as the data samples from class 1 and class 2 respectively (with distributions $\mathbb{P}$ and $\mathbb{Q}$). When discussing GANs, $x_1 \sim \mathbb{P}$ (class 1) refers to real data samples and $x_2 \sim \mathbb{Q}$ (class 2) refers to fake data samples (produced by the generator). The $L^{\infty}$-norm is defined as $|| x ||_\infty = \max(|x_1|,|x_2|,\ldots,|x_k|)$, where $x_i$ here denotes the $i$-th coordinate of $x$. Note that we sometimes refer to a function $F$; this is an objective function to be maximized (not to be confused with the classifier $f$). The critic (C) is the discriminator (D) before applying any activation function (i.e., $D(x)=a(C(x))$, where $a$ is the activation function). For consistency with existing literature, we will generally refer to the critic rather than the discriminator. \subsection{SVMs} In this section, we explain how to obtain a linear maximum-margin classifier (MMC) which maximizes the minimum $L^2$-norm margin (i.e., an SVM). \subsubsection{Decision boundary and margin} The \em decision boundary \em of a classifier is defined as the set of points $x_0$ such that $f(x_0)=0$.
The margin is either defined as i) the minimum distance between a sample and the boundary, or ii) the minimum distance between the \em closest sample \em to the boundary and the boundary. The former thus corresponds to the \em margin of a sample \em and the latter corresponds to the \em margin of a dataset\em. In order to disambiguate the two cases, we refer to the former as the \em margin \em and the latter as the \em minimum margin\em. The first step towards obtaining a linear MMC is to define the $L^p$-norm margin: \begin{align}\label{eq:10} \gamma(x) = &\min_{x_0} || x_0 - x ||_p \hspace{5pt} \text{ s.t. } \hspace{5pt} f(x_0)=0 \end{align} With a linear classifier (i.e., $f(x) = w^T x - b$) and $p=2$, we have: $$\gamma(x) = \frac{|w^T x - b|}{||w||_2} = \frac{\alpha(x)}{\beta}$$ Our goal is to maximize this margin, but we also want to obtain a classifier. To do so, we simply replace $\alpha(x)=|w^T x - b|$ by $\widetilde{\alpha}(x,y) = y(w^T x - b)$. We call $\widetilde{\alpha}$ the \em functional margin\em. After replacement, we obtain the \em geometric margin\em: $$\widetilde{\gamma}(x,y) = \frac{y(w^T x - b)}{||w||_2} = \frac{\widetilde{\alpha}(x,y)}{\beta}$$ The specific goal of SVMs is to find a linear classifier which maximizes the \em minimum \em $L^2$-norm geometric margin (in each class): \begin{align}\label{eqn:1} \max_{w,b} \min_{(x,y) \in D} \widetilde{\gamma}(x,y). \end{align} \subsubsection{Formulations} Directly solving equation \eqref{eqn:1} is an ill-posed problem for multiple reasons. Firstly, the numerator and denominator are dependent on one another; increasing the functional margin also increases the norm of the weights (and vice-versa). Thereby, there are infinitely many solutions which maximize the geometric margin. Secondly, maximizing $\widetilde{\gamma}$ can be achieved by minimizing the denominator, which can cause numerical issues (division by a near-zero norm). Thirdly, it makes for a very difficult optimization given the max-min formulation.
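A quick numeric sketch of the geometric margin (illustrative Python; names are ours) also shows why the max-min problem is ill-posed: rescaling $(w, b)$ leaves $\widetilde{\gamma}$ unchanged:

```python
import numpy as np

def geometric_margin(w, b, x, y):
    """L2 geometric margin of a sample (x, y) under the linear
    classifier f(x) = w^T x - b: y (w^T x - b) / ||w||_2."""
    return y * (np.dot(w, x) - b) / np.linalg.norm(w)

w, b = np.array([3.0, 4.0]), 1.0
x, y = np.array([2.0, 1.0]), 1.0
print(geometric_margin(w, b, x, y))            # (10 - 1) / 5 -> 1.8
print(geometric_margin(10 * w, 10 * b, x, y))  # rescaled -> 1.8 as well
```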
For these reasons, we generally prefer to i) constrain the numerator and minimize the denominator, or ii) constrain the denominator and maximize the numerator. The classical approach is to minimize the denominator and constrain the numerator using the following formulation: \begin{align}\label{eqn:2} \min_{w,b} {||w||^2_2} \hspace{5pt} \text{ s.t. } \hspace{5pt} y(w^T x - b) \ge 1 \hspace{2pt}\forall\hspace{2pt} (x,y) \in D \end{align} This formulation corresponds to the Hard-Margin SVM. The main limitation of this approach is that it only works when the data are separable. However, if we take the opposite approach of maximizing a function of $y(w^T x - b)$ and constraining the denominator $|| w ||_2$, we can still solve the problem with non-separable data. For this reason, we prefer to solve the following Soft-Margin SVM: \begin{align*} \min_{w,b} \frac{1}{n} \sum_{(x,y)\in D}\left[\max(0,1-y(w^T x - b))\right] \hspace{1pt} \text{ s.t. } \hspace{1pt} ||w||_2 = 1 \end{align*} This can be rewritten equivalently with a KKT multiplier $\lambda$ in the following way: \begin{align*} \min_{w,b} \frac{1}{n} \sum_{(x,y)\in D}\left[\max(0,1-y(w^T x - b))\right] + \lambda (||w||^2_2 - 1). \end{align*} Note that the Hinge function $\max(0,1-y(w^T x - b))$ is simply a relaxation of the hard constraint $y(w^T x - b) \ge 1 \hspace{2pt}\forall\hspace{2pt} (x,y) \in D$. Thereby, we are not actually solving equation \eqref{eqn:1} anymore.
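The penalized soft-margin objective above can be evaluated directly (an illustrative Python sketch; not the solver one would use in practice):

```python
import numpy as np

def soft_margin_objective(w, b, X, Y, lam):
    """KKT-multiplier form of the Soft-Margin SVM objective:
    mean hinge loss plus lam * (||w||_2^2 - 1)."""
    hinge = np.maximum(0.0, 1.0 - Y * (X @ w - b))
    return hinge.mean() + lam * (np.dot(w, w) - 1.0)

# Separable toy data: both samples sit outside the margin, so the hinge
# term vanishes, and ||w||_2 = 1 makes the penalty term vanish too.
X = np.array([[2.0], [-2.0]])
Y = np.array([1.0, -1.0])
print(soft_margin_objective(np.array([1.0]), 0.0, X, Y, lam=0.5))  # -> 0.0
```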
\subsection{GANs}\label{sec:GAN} GANs can be formulated in the following way: \begin{align} &\max_{C: \mathcal{X} \to \mathbb{R}} \mathbb{E}_{x_1 \sim \mathbb{P}}\left[ f_1(C(x_1)) \right] + \mathbb{E}_{z \sim \mathbb{Z}} \left[ f_2(C(G(z))) \right], \label{eqn:3} \\ &\min_{G: Z \to \mathcal{X}} \mathbb{E}_{z \sim \mathbb{Z}} \left[ f_3(C(G(z))) \right], \label{eqn:4} \end{align} where $f_1, f_2, f_3:\mathbb{R} \to \mathbb{R}$, $\mathbb{P}$ is the distribution of real data with support $\mathcal{X}$, $\mathbb{Z}$ is a multivariate normal distribution with support $Z$, $C(x)$ is the critic evaluated at $x$, $G(z)$ is the generator evaluated at $z$, and $G(z) \sim \mathbb{Q}$, where $\mathbb{Q}$ is the distribution of fake data. Many variants exist; to name a few: Standard GAN (SGAN) \citep{GAN} corresponds to $f_1(z)=\log(\text{sigmoid}(z))$, $f_2(z)=\log(\text{sigmoid}(-z))$, and $f_3(z)=-f_1(z)$. Least-Squares GAN (LSGAN) \citep{LSGAN} corresponds to $f_1(z)=-(1-z)^2$, $f_2(z)=-(1+z)^2$, and $f_3(z)=-f_1(z)$. HingeGAN \citep{lim2017geometric} corresponds to $f_1(z)=-\max(0,1-z)$, $f_2(z)=-\max(0,1+z)$, and $f_3(z)=-z$. An important class of GANs are those based on Integral probability metrics (IPMs) \citep{muller1997integral}. IPMs are statistical divergences (distances between probability distributions) defined in the following way: \[ IPM_{F} (\mathbb{P} || \mathbb{Q}) = \sup_{C \in \mathcal{F}} \mathbb{E}_{x_1 \sim \mathbb{P}}[C(x_1)] - \mathbb{E}_{x_2 \sim \mathbb{Q}}[C(x_2)], \] where $\mathcal{F}$ is a class of real-valued functions. Of note, certain connections between IPMs and SVMs have been identified in \citet{sriperumbudur2009integral}.
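For concreteness, the $f_1$ and $f_2$ choices listed above can be written out directly (an illustrative Python sketch; the function names are ours, not from any GAN library):

```python
import math

# f_1 (term on real samples) and f_2 (term on fake samples) of the
# critic objective, for the variants listed above.
def sgan_f1(z):  return math.log(1.0 / (1.0 + math.exp(-z)))  # log sigmoid(z)
def sgan_f2(z):  return math.log(1.0 / (1.0 + math.exp(z)))   # log sigmoid(-z)
def lsgan_f1(z): return -(1.0 - z) ** 2
def lsgan_f2(z): return -(1.0 + z) ** 2
def hinge_f1(z): return -max(0.0, 1.0 - z)
def hinge_f2(z): return -max(0.0, 1.0 + z)
def wgan_f1(z):  return z                 # IPM case: f_1(z) = z,
def wgan_f2(z):  return -z                # f_2(z) = -z

# The hinge terms saturate once the critic is confident by a margin of 1,
# while the WGAN terms keep growing:
print(hinge_f1(2.0) == 0.0, wgan_f1(2.0))  # -> True 2.0
```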
IPM-based GANs attempt to solve the following problem $$ \min_{G} \max_{C \in \mathcal{F}} \mathbb{E}_{x_1 \sim \mathbb{P}}[C(x_1)] - \mathbb{E}_{z \sim \mathbb{Z}}[C(G(z))].$$ There are many GANs based on IPMs \citep{mroueh2017sobolev,Fisher}, but we will focus on two of them: WGAN \citep{WGAN} and WGAN-GP \citep{WGAN-GP}. WGAN is an IPM-based GAN which uses the first-order Wasserstein distance ($W_1$), the IPM restricted to the class of all 1-Lipschitz functions. This corresponds to the set of functions $C$ such that $\frac{C(x_1)-C(x_2)}{d(x_1,x_2)} \leq 1$ for all $x_1$,$x_2$, where $d(x_1,x_2)$ is a metric. $W_1$ also has a primal form which can be written in the following way: $$ W_1 (\mathbb{P}, \mathbb{Q}):= \inf_{\pi \in \Pi (\mathbb{P}, \mathbb{Q})} \int_{M \times M} d(x_1, x_2) \, \mathrm{d} \pi (x_1, x_2),$$ where $\Pi (\mathbb{P}, \mathbb{Q})$ is the set of all distributions with marginals $\mathbb{P}$ and $\mathbb{Q}$, and we call $\pi$ a coupling. The original way to enforce the 1-Lipschitz property on the critic was to clamp its weights after each update. This was later shown to be problematic \citep{WGAN-GP}. Despite its issues, WGAN improves the stability of GANs and reduces the incidence of mode collapse (when the generator produces less diversity than the training dataset) \citep{WGAN}. \citet{WGAN-GP} showed that if the optimal critic $C^{*}(x)$ is differentiable everywhere and $\hat{x} = \alpha x_1 + (1-\alpha)x_2$ for $0 \leq \alpha \leq 1$, we have that $||\nabla C^{*}(\hat{x})|| = 1$ almost everywhere for all pairs $(x_1,x_2)$ which come from the optimal coupling $\pi^{*}$. Sampling from the optimal coupling is difficult, so they suggest softly penalizing $\mathbb{E}_{\tilde{x}}{(||\nabla_{\tilde{x}} C(\tilde{x})||_2-1)^2}$, where $\tilde{x} = \alpha x_1 + (1-\alpha)x_2$, $\alpha \sim U(0,1)$, $x_1 \sim \mathbb{P}$, and $x_2 \sim \mathbb{Q}$. This penalty works well in practice and is a popular way to approximate Wasserstein's distance.
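The penalty term above can be sketched with a toy critic whose gradient is known in closed form (illustrative Python; in practice the gradient comes from automatic differentiation, not a hand-supplied function):

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_penalty(x1, x2, grad_C, n=64):
    """Monte Carlo estimate of E[(||grad C(x~)||_2 - 1)^2] at
    interpolates x~ = a x1 + (1 - a) x2 with a ~ U(0, 1)."""
    a = rng.uniform(size=(n, 1))
    x_tilde = a * x1 + (1.0 - a) * x2
    norms = np.array([np.linalg.norm(grad_C(xt)) for xt in x_tilde])
    return np.mean((norms - 1.0) ** 2)

# Toy critic C(x) = w^T x: its gradient is w everywhere, so the penalty
# equals (||w||_2 - 1)^2 regardless of the interpolation points.
w = np.array([1.0, 0.0])  # exactly unit norm -> zero penalty
x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(gradient_penalty(x1, x2, lambda x: w))        # -> 0.0
print(gradient_penalty(x1, x2, lambda x: 2.0 * w))  # -> 1.0
```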
However, this is not equivalent to estimating Wasserstein's distance since we are not sampling from $\pi^{*}$ and $C^{*}$ does not need to be differentiable everywhere \citep{petzka2017regularization}. Importantly, gradient norm penalties of the form $\mathbb{E}_{x}\left[(||\nabla_{x} D(x)||_2-\delta)^2\right]$, for some $\delta \in \mathbb{R}$, are very popular in GANs. Remember that $D(x)=a(C(x))$; in the case of IPM-based GANs, we have that $D(x)=C(x)$. It has been shown that the GP-1 penalty ($\delta=1$), as in WGAN-GP, also improves the performance of non-IPM-based GANs \citep{ManyPaths}. Another successful variant is GP-0 ($\delta=0$ and $x \sim \mathbb{P}$) \citep{mescheder2018training,karras2019style}. Although there are explanations of why gradient penalties may be helpful \citep{mescheder2018training, kodali2017convergence, WGAN-GP}, the theory is still lacking. There are other GAN variants which improve the stability of training and will be relevant to our discussion. The first one is HingeGAN \citep{lim2017geometric}, which uses the hinge loss as its objective function. This corresponds to using equations \eqref{eqn:3} and \eqref{eqn:4} with $f_1(z)=-\max(0,1-z)$, $f_2(z)=-\max(0,1+z)$, and $f_3(z)=-z$.
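As a concrete illustration of such gradient-norm-penalized objectives, here is a minimal NumPy sketch (toy data and hyperparameters are our assumptions, not the paper's setup) that trains a linear critic $C(x)=w^\top x + b$ with the hinge loss and a GP-1-style penalty $(||\nabla_x C(x)||_2 - 1)^2$; for a linear critic the input gradient is simply $w$, so the penalty has a closed form and no interpolation sampling is needed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2D data: "real" (y=+1) around (2, 0), "fake" (y=-1) around (-2, 0).
n = 200
x = np.vstack([rng.normal([2.0, 0.0], 0.2, (n, 2)),
               rng.normal([-2.0, 0.0], 0.2, (n, 2))])
y = np.hstack([np.ones(n), -np.ones(n)])

w, b = np.array([0.1, 0.1]), 0.0   # linear critic C(x) = w.x + b
lam, lr = 5.0, 0.02                # penalty weight and step size (assumed)

for _ in range(3000):
    # Hinge term: only margin violators (y*C(x) < 1) contribute a gradient.
    viol = y * (x @ w + b) < 1.0
    grad_w = (y[viol, None] * x[viol]).sum(axis=0) / len(x)
    grad_b = y[viol].sum() / len(x)
    # GP-1 penalty: for a linear critic, grad_x C(x) = w everywhere.
    norm_w = np.linalg.norm(w) + 1e-12
    grad_w -= lam * 2.0 * (norm_w - 1.0) * w / norm_w
    w += lr * grad_w
    b += lr * grad_b

scores_real, scores_fake = x[:n] @ w + b, x[n:] @ w + b
```

The critic separates the two clusters while its gradient norm settles near 1, which is the behavior the GP-1 penalty is meant to induce.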
Another class of GANs relevant to our discussion are Relativistic paired GANs (RpGANs) \citep{jolicoeur2018relativistic,jolicoeur2019relativistic}: \begin{align*} \max\limits_{C:\mathcal{X} \to \mathbb{R}} \hspace{1pt} &\underset{ \substack{x_1 \sim \mathbb{P} \\ x_2 \sim \mathbb{Q}}}{\mathbb{E}\vphantom{p}} \left[ f_1 \left( C(x_1) - C(x_2) \right) \right], \\ \min_{G} \hspace{1pt}& \underset{ \substack{x_1 \sim \mathbb{P} \\ x_2 \sim \mathbb{Q}}}{\mathbb{E}\vphantom{p}} \left[ f_2 \left( C(x_1) - C(x_2) \right) \right], \end{align*} and Relativistic average GANs (RaGANs) \citep{jolicoeur2018relativistic,jolicoeur2019relativistic}: \begin{align*} \max\limits_{C:\mathcal{X} \to \mathbb{R}} &\mathbb{E}_{x_1 \sim \mathbb{P}}\left[ f_1\left( C(x_1)-\mathbb{E}_{x_2 \sim \mathbb{Q}} \hspace{1pt} C(x_2) \right) \right] + \\ &\mathbb{E}_{x_2 \sim \mathbb{Q}} \left[ f_2 \left( C(x_2)-\mathbb{E}_{x_1 \sim \mathbb{P}} \hspace{1pt} C(x_1) \right) \right], \\ \max_{G} \hspace{2pt} &\mathbb{E}_{x_2 \sim \mathbb{Q}}\left[ f_1\left( C(x_2)-\mathbb{E}_{x_1 \sim \mathbb{P}} \hspace{1pt} C(x_1) \right) \right] + \\ &\mathbb{E}_{x_1 \sim \mathbb{P}} \left[ f_2 \left( C(x_1)-\mathbb{E}_{x_2 \sim \mathbb{Q}} \hspace{1pt} C(x_2) \right) \right]. \end{align*} Most loss functions can be represented as RaGANs or RpGANs; SGAN, LSGAN, and HingeGAN all have relativistic counterparts. \section{Generalizing SVMs} \label{sec:3} The main approach used to generalize SVMs beyond the linear classifier is to apply the kernel trick \citep{aizerman1964theoretical}. This simply consists of replacing $f(x)=w^T x - b$ by $f(x) = w^T \phi(x) - b$, where $\phi$ is a feature map (whose inner products define the kernel). The feature map can be chosen a priori or learned \citep{goodfellow2016deep}. In this section, we generalize SVMs to arbitrary classifiers $f(x)$, $L^p$-norms, and loss functions. We start by showing how to derive an $L^p$-norm geometric margin.
Then, we present the concept of maximizing the \emph{expected} margin, rather than the \emph{minimum} margin. \subsection{Approximating the geometric margin} Calculating the geometric margin involves computing a projection. For general $L^p$-norms, this projection has no closed form. One way to approximate it is with a Taylor expansion. Depending on when the Taylor expansion is applied (before or after solving for the projection), we get two different approximations: one new and one existing. \subsubsection{Taylor approximation (After solving)}\label{sec:3.1.1} The formulation of the $L^p$-norm margin \eqref{eq:10} has no closed form for arbitrary non-linear classifiers. However, when $p=2$, if we use a Taylor expansion, we can show that \begin{align*} \gamma_2(x) &= \frac{|\nabla_{x_0} f(x_0)^T (x-x_0)|}{|| \nabla_{x_0} f(x_0) ||_2} \\ &\approx \frac{|f(x)|}{|| \nabla_{x_0} f(x_0) ||_2} \hspace*{2pt} \text{ (Taylor expansion)} \\ &\approx \frac{|f(x)|}{|| \nabla_x f(x) ||_2} \hspace*{2pt} \text{ (if $f(x)\approx w^T x - b$)} \end{align*} This approximation depends on approximate linearity of the classifier. If $f(x)=w^T x - b$, we have that $|| \nabla_x f(x) ||_2 = ||w||_2$ (and vice-versa). This means that if we enforce $|| \nabla_x f(x) ||_2 \approx 1$ for all $x$, we have $|| \nabla_{x_0} f(x_0) ||_2 \approx 1$ for all points $x_0$ on the boundary. This may appear to bring us back to the original scenario with a linear classifier. However, in practice, we only penalize the gradient norm in expectation, which means that we do not obtain a linear classifier. Thus, we can use the following pseudo-margin: $$ \gamma_2^{+}(x) = \frac{yf(x)}{|| \nabla_x f(x) ||_2}.
$$ \subsubsection{Taylor approximation (Before solving)} An alternative approach to derive a pseudo-margin is to use the Taylor approximation before solving the problem rather than after (as done by \citet{matyasko2017margin} and \citet{elsayed2018large}): \begin{align*} \gamma_p(x) &= \min_{r} || r ||_p \hspace{5pt} \text{ s.t. } \hspace{5pt} f(x+r)=0 \\ &\approx \min_{r} || r ||_p \hspace{5pt} \text{ s.t. } \hspace{5pt} f(x)+\nabla_x f(x)^T r=0 \\ &=\frac{|f(x)|}{|| \nabla_x f(x) ||_q}, \end{align*} where $||\cdot||_q$ is the dual norm \citep{boyd2004convex} of $||\cdot||_p$. By Hölder's inequality \citep{holder1889ueber, rogers1888extension}, we have that $1/p + 1/q=1$. This means that if $p=2$, we still get $q=2$; if $p=\infty$, we get $q=1$; if $p=1$, we get $q=\infty$. We can then define the geometric margin as: $$ \gamma_p^{-}(x) = \frac{yf(x)}{|| \nabla_x f(x) ||_q}.$$ \subsection{Maximizing the expected margin} As previously discussed, the goal of hard-margin SVMs is to maximize the \emph{minimum margin}, as in equation \eqref{eqn:1}. However, this problem is infeasible for non-linearly separable datasets. In these cases, the soft-margin formulation of SVMs is most common: \begin{align}\label{eqn:4andahalf} \max_f \mathbb{E}_{(x,y)\sim D}\left[ F(\gamma(x,y)) \right], \end{align} where $F:\mathbb{R}\to \mathbb{R}$ is an objective to be maximized (not to be confused with the classifier $f$) and the expectation represents the empirical average over a sampled dataset $D$. For large datasets, the empirical average is a good approximation of the expectation over the data distribution $\mathbb{D}$. This is an easier optimization problem to solve than equation \eqref{eqn:1}, and it is also always feasible. If $F$ is chosen to be the negative hinge function (i.e., $F(z)=-\max(0,1-z)$), we ignore samples far from the boundary (as in SVMs). For general choices of $F$, every sample may influence the solution.
The identity function $F(z)=z$, cross entropy with sigmoid activation $F(z)=\log(\text{sigmoid}(z))$, and least-squares $F(z)=-(1-z)^2$ are also valid choices. However, as before, we prefer to separate the numerator from the denominator of the margin. Furthermore, the denominator (the norm of the gradient) is now a random variable. To make things as general as possible, we use the following formulation: \begin{align}\label{eqn:6} \max_f \mathbb{E}_{(x,y)\sim \mathbb{D}}\left[F(yf(x)) - \lambda g(||\nabla_x f(x)||_q)\right], \end{align} where $F,g:\mathbb{R}\to \mathbb{R}$ and $\lambda$ is a scalar penalty term. There are many potential choices of $F$ and $g$ which we can use. The standard choice of $g$ (in SVMs) is $g(z)=(z^2-1)$. This corresponds to constraining $|| \nabla_x f(x) || = 1$ or $|| \nabla_x f(x) || \leq 1$ for all $x$ (by the KKT conditions). Since the gradient norm is a random variable, we do not want it to be equal to one everywhere. For this reason, we will generally work with softer constraints of the form $g(z)=(z-1)^2$ or $g(z)=\max(0,z-1)$. The first function enforces a soft equality constraint so that $z\approx 1$, while the second function enforces a soft inequality constraint so that $z \leq 1$. Of note, under perfect separation of the data and with a linear classifier, it has been shown that the empirical version of equation \eqref{eqn:6} (integrating over a dataset $D$ drawn from distribution $\mathbb{D}$) divided by its norm is equivalent to \eqref{eqn:1} under the constraint $||w||=1$ when $\lambda \to 0$ \citep{rosset2004margin}. This is true for the cross-entropy and hinge loss functions, but not for least-squares. This implies that, under strong assumptions, maximizing the expected margin could also maximize the minimum margin. \subsection{Better approximation of the margin} In Section~\ref{sec:3.1.1}, we showed an approximation to the $L^2$-norm geometric margin.
To reach a closed form, we had to assume that the classifier was approximately linear. This approximation is problematic since samples are pushed away from the boundary, so we may never minimize the gradient norm at the boundary (as needed to actually maximize the geometric margin). Given that we separate the problem of estimating an MMC into maximizing a function of the numerator ($yf(x)$) and minimizing a function of the denominator (the gradient norm), we do not need to make this approximation. Rather than finding the closest element of the decision boundary $x_0$ for a given sample $x$, we can simply apply the penalty on the decision boundary. However, working on the boundary is intractable given the infinite size of the decision boundary. Although sampling from the decision boundary is difficult, sampling around it is easy. Rather than working on the decision boundary, we can instead apply the constraint in a larger region encompassing all points of the decision boundary. A simple way to do so is to sample from all linear interpolations between samples from classes 1 and 2. This can be formulated as: \begin{align}\label{eqn:7} \max_f \mathbb{E}_{(x,y)\sim \mathbb{D}}\left[F(yf(x)) \right] - \lambda \mathbb{E}_{\tilde{x}} \left[g(||\nabla_{\tilde{x}} f(\tilde{x})||_2)\right], \end{align} where $\tilde{x} = \alpha x + (1-\alpha)y$, $\alpha \sim U(0,1)$, $x \sim \mathbb{P}$, and $y \sim \mathbb{Q}$. This is the same interpolation as used in WGAN-GP; this provides an additional argument in favor of that practice. \section{Connections to GANs} Let $f(x)=C(x)$. Although not immediately clear given the different notations, the objective functions of the discriminator/critic in many penalized GANs are equivalent to the ones from MMCs based on \eqref{eqn:7}.
If $g(z)=(z-1)^2$, we have that $F(z)=z$ corresponds to WGAN-GP, $F(z)=\log(\text{sigmoid}(z))$ corresponds to SGAN, $F(z)=-(1-z)^2$ corresponds to LSGAN, and $F(z)=-\max(0,1-z)$ corresponds to HingeGAN with gradient penalty. Thus, all of these penalized GANs maximize an expected $L^2$-norm margin. \subsection{Equivalence between gradient norm constraints and Lipschitz functions} \label{sec:4.1} As stated in Section~\ref{sec:GAN}, the popular approach of softly enforcing $|| \nabla_x f(x) ||_2 \approx 1$ at all interpolations between real and fake samples does not ensure that we estimate the Wasserstein distance ($W_1$). On the other hand, we show here that enforcing $|| \nabla_x f(x) ||_2 \leq 1$ is sufficient in order to estimate $W_1$. Assuming $d(x_1,x_2)$ is an $L^p$-norm, $p \ge 2$, and $f(x)$ is differentiable, we have that: \begin{align*} || \nabla f(x) ||_p \leq K \iff f \text{ is $K$-Lipschitz on $L^p$}. \end{align*} See the appendix for the proof. \citet{adler2018banach} showed a similar result on dual norms. This suggests that, in order to work on the set of Lipschitz functions, we can penalize violations of $|| \nabla_x f(x) || \leq 1$ for all $x$. This can be done easily through \eqref{eqn:6} by choosing $g(z)=\max(0,z-1)$. \citet{petzka2017regularization} also suggested using a similar function (the squared hinge) in order to only penalize gradient norms above 1. If we let $F(z)=z$ and $g(z)=\max(0,z-1)$, we have an IPM over all Lipschitz functions; thus, we effectively approximate $W_1$. This means that $W_1$ can be found through maximizing a geometric margin. Importantly, most successful GANs \citep{brock2018large,karras2019style,karras2017progressive} either enforce the 1-Lipschitz property using spectral normalization \citep{miyato2018spectral} or use some form of gradient norm penalty \citep{WGAN-GP,mescheder2018training}.
Since being 1-Lipschitz is equivalent to enforcing a gradient norm constraint (as shown above), we have that most successful GANs effectively train a discriminator/critic to maximize a geometric margin. This suggests that the key contributor to stable and effective GAN training may not be having a 1-Lipschitz discriminator, but rather maximizing a geometric margin. \subsection{Why do maximum-margin classifiers make good GAN discriminators/critics?} \label{sec:4.2} To answer this question, we focus on a simple two-dimensional example where $x=(x_{(1)},x_{(2)})$. Let real data (class 1) be uniformly distributed on the line between $(1,-1)$ and $(1,1)$. Let fake data (class 2) be uniformly distributed on the line between $(-1,-1)$ and $(-1,1)$. This is represented in Figure~\ref{fig:fig1}. Clearly, the maximum-margin boundary is the line $x_{(1)}=0$, and any classifier should learn to ignore $x_{(2)}$. For a classifier of the form $f(x)=c + w_1 x_{(1)} + w_2 x_{(2)}$, the maximum-margin classifier is $f^{*}(x)=w_1 x_{(1)}$ for any choice of $w_1 > 0$. We can see this by looking at the expected geometric margin: \begin{align*} \mathbb{E}_{(x,y)\sim \mathbb{D}}\left[ \gamma(x,y) \right] &= \mathbb{E}_{(x,y)\sim \mathbb{D}}\left[\frac{y w_1 x_{(1)}}{|w_1|} \right] \\ &= \frac{w_1}{|w_1|} \hspace{4pt} \text{(since $y x_{(1)} = 1$ for every sample)} \\ &= \begin{cases} 1 &\text{if $w_1 > 0$}\\ -1 &\text{if $w_1 < 0$}. \end{cases} \end{align*} \vspace*{-15pt} \begin{figure}[!ht] \centering \includegraphics[scale=0.59]{fig1.pdf} \caption{Two-dimensional GAN example with different choices of boundaries.} \label{fig:fig1} \end{figure} This means that the problem is overparameterized (there are infinitely many solutions). We will show that this is problematic in GANs. In GANs, the dynamics of the game depend in large part on $||\nabla_{x_2} f(x_2)||$, where the $x_2$'s are samples from the fake, or generated, distribution (not to be confused with $x_{(2)}$; see Section~\ref{sec:notation} for the definition).
This is because the generator only learns through the discriminator/critic: it uses $\nabla_{x_2} f(x_2)$ in order to improve its objective function. Thus, for stable training, $||\nabla_{x_2} f(x_2)||$ should not be too large or too small. There are two ways of ensuring this property in this example: we can either i) fix the gradient norm to 1 or ii) fix $y w_1 x_{(1)} = 1$. Both solutions lead to $w_1=1$. The former is the approach taken by soft-margin SVMs and the latter is the approach taken by hard-margin SVMs. This means that, in order to get stable GAN training, maximizing a margin is not enough. We need to ensure that we obtain a solution with a stable non-zero gradient around fake samples. Thus, it is preferable to solve the penalized formulation from equation \eqref{eqn:7} and choose a large penalty term $\lambda$ in order to obtain a small-gradient solution. When the gradient norm is 1 everywhere, the only solution is a linear classifier, which leads to the gradient being fixed everywhere. In this case, the placement of the margin may not be particularly important for GAN stability since the gradient is the same everywhere (although we do still obtain an MMC). When we have a non-linear classifier and we impose $||\nabla_{x} f(x)|| \leq 1$ through $g(z)=\max(0,z-1)$, the gradient norm will fade toward zero as we move away from the boundary. Thus, in this case, obtaining a maximum-margin solution is important because it reduces the risk of vanishing gradients at fake samples. To see this, we can consider our simple example, but assume $f(x)=\text{sigmoid}(w_1 x_{(1)}+w_0)$ (see Figure~\ref{fig:fig2}). \begin{figure}[!ht] \centering \includegraphics[scale=0.59]{fig3.pdf} \caption{$\nabla f(x_{(1)})$ at different values of $x_{(1)}$ for the two-dimensional example assuming a sigmoid function.} \label{fig:fig2} \end{figure} We can enforce $||\nabla_{x} f(x)|| \leq 1$ by choosing $w_1 \leq 4$. We let $w_1 = 4$ because it leads to the best classifier.
The maximum-margin boundary is at $x_{(1)}=0$ (which we get by taking $w_0=0$; blue curve in Figure~\ref{fig:fig2}); for this choice, we have that $f(x_1)=.98$ and $f(x_2)=.02$ for real and fake samples respectively. Meanwhile, if we take a slightly worse margin with the boundary at $x_{(1)} = \frac{1}{4}$ (equivalent to choosing $w_0=-1$; red curve in Figure~\ref{fig:fig2}), we have that $f(x_1)=.95$ and $f(x_2)=.01$ for real and fake samples respectively. Thus, both solutions almost perfectly classify the samples. However, the optimal margin has gradient $.07$, while the worse margin has gradient $.03$ at fake samples. Thus, the maximum-margin solution provides a stronger signal for the generator. Had we not imposed a gradient penalty constraint, we could have chosen $w_1 = 8$ (green curve in Figure~\ref{fig:fig2}) and we would have ended up with vanishing gradients at fake samples while still using a maximum-margin classifier. In summary, imposing $||\nabla_{x} f(x)|| \approx 1$, as done in WGAN-GP, may be helpful because it approximately fixes the gradient to 1 everywhere, which stabilizes the generator's training. However, it imposes a strong constraint (a linear discriminator) which only leads to a lower bound on Wasserstein's distance. Meanwhile, imposing $||\nabla_{x} f(x)|| \leq 1$, as we suggested in order to properly estimate Wasserstein's distance, may be helpful because it reduces the risk of having no gradient at fake samples. \subsection{Are certain margins better than others?} \label{sec:4.3} It is well known that $L^p$-norms (with $p\ge 1$) become more sensitive to outliers as $p$ increases, which is why many robust methods minimize the $L^1$-norm \citep{bloomfield1983least}. Furthermore, minimizing the $L^1$-norm loss results in a median estimator \citep{bloomfield1983least}. This suggests that the $L^2$ gradient norm penalty ($p=2$) may not lead to the most robust classifier.
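The gradient values quoted in this example are easy to reproduce; below is a small NumPy check of our own (fake samples sit at $x_{(1)}=-1$, as in Figure~\ref{fig:fig1}):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_at(x1, w1, w0):
    """|df/dx_(1)| for f(x) = sigmoid(w1 * x_(1) + w0)."""
    s = sigmoid(w1 * x1 + w0)
    return abs(w1) * s * (1.0 - s)

x_fake = -1.0                               # fake samples in the example

g_opt   = grad_at(x_fake, w1=4.0, w0=0.0)   # boundary at x_(1) = 0
g_worse = grad_at(x_fake, w1=4.0, w0=-1.0)  # boundary at x_(1) = 1/4
g_big   = grad_at(x_fake, w1=8.0, w0=0.0)   # no gradient-norm constraint

print(round(g_opt, 2), round(g_worse, 2))   # 0.07 0.03, as in the text
```

The unconstrained choice $w_1=8$ gives an even smaller gradient at fake samples ($\approx .003$), illustrating the vanishing-gradient risk.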
We hypothesize that $L^\infty$ gradient norm penalties may improve robustness in comparison to $L^2$ gradient norm penalties since they correspond to maximizing the $L^1$-norm margin. In Section~\ref{sec:experiments}, we provide experimental evidence in support of our hypothesis. \subsection{Margins in Relativistic GANs} \label{sec:4.4} Relativistic paired GANs (RpGANs) and Relativistic average GANs (RaGANs) \citep{jolicoeur2018relativistic} are GAN variants which tend to be more stable than their non-relativistic counterparts. Below, we explain how we can link both approaches to MMCs. \subsubsection{Relativistic average GANs} From the loss function of RaGAN, we can deduce its decision boundary. Contrary to typical classifiers, we define two boundaries, depending on the label. The two surfaces are defined as the two sets of points $(x_0,y_0)$ such that: \begin{align*} f(x_0) &= \mathbb{E}_{x \sim \mathbb{Q}}[f(x)] \text{, when } y_0 = 1 \ (\text{real}), \\ f(x_0) &= \mathbb{E}_{x \sim \mathbb{P}}[f(x)] \text{, when } y_0 = -1 \ (\text{fake}). \end{align*} It can be shown that the relativistic average geometric margin is approximated as: \begin{align*} \gamma_p^{Ra-}(x,y) = &\frac{((y+1)/2)(f(x)-\mathbb{E}_{x \sim \mathbb{Q}}[f(x)])}{|| \nabla_x f(x) ||_q} + \\ &\frac{ ((y-1)/2)(f(x)-\mathbb{E}_{x \sim \mathbb{P}}[f(x)])}{|| \nabla_x f(x) ||_q} \\ = &\frac{\alpha_{Ra}(x,y)}{\beta(x)}. \end{align*} Maximizing the margin of RaGANs can be done in the following way: \begin{align*} \max_f \mathbb{E}_{(x,y)\sim \mathbb{D}}\left[F(\alpha_{Ra}(x,y)) - \lambda g(||\nabla_x f(x)||_q)\right]. \end{align*} Of note, when $F(z)=z$ (the identity function), the loss is equivalent to its non-relativistic counterpart (WGAN-GP). Of all the RaGAN variants presented by \citet{jolicoeur2018relativistic}, RaHingeGAN with $F(z)=-\max(0,1-z)$ is the only one which maximizes the relativistic average geometric margin when using a gradient norm penalty.
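To illustrate these quantities, here is a minimal NumPy sketch (toy samples and an arbitrary fixed linear critic, both our assumptions) computing the numerator $\alpha_{Ra}(x,y)$ and the resulting relativistic average margin; for a linear critic $f(x)=w^\top x$, the denominator $||\nabla_x f(x)||_q$ is just the constant $||w||_q$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy samples and a fixed linear critic f(x) = w.x (illustrative values).
x_real = rng.normal(1.0, 0.5, (1000, 3))   # class y = +1
x_fake = rng.normal(-1.0, 0.5, (1000, 3))  # class y = -1
w = np.array([1.0, 0.5, -0.25])

f_real, f_fake = x_real @ w, x_fake @ w
mean_real, mean_fake = f_real.mean(), f_fake.mean()

def alpha_ra(f_x, y):
    """Numerator of the relativistic average margin gamma_p^{Ra-}."""
    return ((y + 1) / 2) * (f_x - mean_fake) + ((y - 1) / 2) * (f_x - mean_real)

# For a linear critic, ||grad_x f(x)||_q is the constant ||w||_q.
q = 2
margin_real = alpha_ra(f_real, +1) / np.linalg.norm(w, ord=q)
margin_fake = alpha_ra(f_fake, -1) / np.linalg.norm(w, ord=q)
```

Both average margins come out positive here: real samples score above the fake average and fake samples score below the real average, which is exactly the relativistic criterion.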
\subsubsection{Relativistic paired GANs} From its loss function (as described in Section~\ref{sec:GAN}), it is not clear what the boundary of RpGANs might be. However, through reverse engineering, it is possible to realize that the boundary is the same as the one from non-relativistic GANs, but with a different margin. We previously derived that the approximated margin (non-geometric) for any point is $\gamma_p(x) \approx \frac{|f(x)|}{|| \nabla_x f(x) ||_q}$. We defined the geometric margin as the margin after replacing $|f(x)|$ by $yf(x)$ so that it depends on both $x$ and $y$. However, there is an alternative way to transform the margin in order to achieve a classifier. We call it the \emph{relativistic paired margin}: \begin{align*} \gamma^{*}_p(x_1,x_2) &= \gamma_p(x_1) - \gamma_p(x_2) \\ & = \frac{f(x_1)}{|| \nabla_{x_1} f(x_1) ||_q} - \frac{f(x_2)}{|| \nabla_{x_2} f(x_2) ||_q}, \end{align*} where $x_1$ is a sample from $\mathbb{P}$ and $x_2$ is a sample from $\mathbb{Q}$. This alternative margin does not depend on the label $y$; it only asks that, for any pair of class 1 (real) and class 2 (fake) samples, we maximize the relativistic paired margin. This margin is hard to work with, but if we enforce $|| \nabla_{x_1} f(x_1) ||_q \approx || \nabla_{x_2} f(x_2) ||_q$ for all $x_1 \sim \mathbb{P}$, $x_2 \sim \mathbb{Q}$, we have that: \begin{align*} \gamma^{*}_p(x_1,x_2) \approx \frac{f(x_1) - f(x_2)}{|| \nabla_{x} f(x) ||_q}, \end{align*} where $x$ is any sample (from class 1 or 2). Thus, we can train an MMC to maximize the relativistic paired margin in the following way: \begin{align*} \max_f \underset{ \substack{x_1 \sim \mathbb{P} \\ x_2 \sim \mathbb{Q}}}{\mathbb{E}\vphantom{p}}&\left[F(f(x_1)-f(x_2))\right] - \\ \lambda& \mathbb{E}_{(x,y)\sim \mathbb{D}}\left[g(||\nabla_x f(x)||_q)\right], \end{align*} where $g$ must constrain $||\nabla_x f(x)||_q$ to a constant.
This means that maximizing $F(f(x_1)-f(x_2))$ without a gradient penalty can be problematic if we have different gradient norms at samples from class 1 (real) and class 2 (fake). This provides an explanation as to why RpGANs do not perform very well unless a gradient penalty is used \citep{jolicoeur2018relativistic}. \section{Experiments} \label{sec:5} \label{sec:experiments} Following our analysis and discussion in the previous sections, we hypothesized that $L^1$ margins, corresponding to an $L^\infty$ gradient norm penalty, would perform better than $L^2$ margins ($L^2$ gradient norm penalty). As far as we know, researchers have not yet tried using an $L^\infty$ gradient norm penalty in GANs. In addition, we showed that it would be more sensible to penalize violations of $||\nabla f(x)||_q \leq 1$ rather than to enforce $||\nabla f(x)||_q \approx 1$. To test these hypotheses, we ran experiments on CIFAR-10 \citep{krizhevsky2009learning} using the HingeGAN ($F(z)=-\max(0,1-z)$) and WGAN ($F(z)=z$) loss functions with $L^1$, $L^2$, and $L^\infty$ gradient norm penalties. We enforce either $||\nabla f(x)||_q \approx 1$ using Least Squares (LS) $(g(z)=(z-1)^2)$ or $||\nabla f(x)||_q \leq 1$ using Hinge $(g(z)=\max(0,z-1))$. We used the ADAM optimizer \citep{Adam} with $\beta=(.5,.99)$ and a DCGAN architecture \citep{DCGAN}. As per convention, we report the Fréchet Inception Distance (FID) \citep{heusel2017gans}; lower values correspond to better generated outputs (higher quality and diversity). We ran all experiments using seed 1 and with gradient penalty $\lambda=20$. Details on the architectures are in the Appendix. Code is available at \emph{https://github.com/AlexiaJM/MaximumMarginGANs}. The results are shown in Table~\ref{tab:1}.
\begin{table}[!ht] \caption{Fréchet Inception Distance (FID) after 100k generator iterations on CIFAR-10.} \label{tab:1} \centering \begin{tabular}{ccc} \toprule $g(||\nabla_x f(x)||_q)$ & WGAN & HingeGAN \\ \cmidrule(lr){1-3} $(||\nabla_x f(x)||_1-1)^2$ & 99.7 & 88.9 \\ $\max(0,||\nabla_x f(x)||_1-1)$ & 65.6 & 77.3 \\ \cmidrule(lr){1-3} $(||\nabla_x f(x)||_2-1)^2$ & 37.6 & 32.8 \\ $\max(0,||\nabla_x f(x)||_2-1)$ & 37.8 & 33.9 \\ \cmidrule(lr){1-3} $(||\nabla_x f(x)||_{\infty}-1)^2$ & 33.4 & 33.6 \\ $\max(0,||\nabla_x f(x)||_{\infty}-1)$ & 36.0 & \textbf{27.1} \\ \bottomrule \end{tabular} \end{table} Due to space constraints, we only show the previously stated experiments in Table~\ref{tab:1}. However, we also ran additional experiments on CIFAR-10 with 1) Relativistic paired and average HingeGAN, 2) $\beta=(0,.90)$, and 3) the standard CNN architecture from \citet{miyato2018spectral}. Furthermore, we ran experiments on CAT \citep{cat} with 1) the standard CNN (in 32x32) and 2) DCGAN (in 64x64). These experiments correspond to Tables~\ref{tab:2},~\ref{tab:3},~\ref{tab:4},~\ref{tab:5}, and~\ref{tab:6} in the appendix. In all sets of experiments, we generally observed that we obtain smaller FIDs by using: i) a larger $q$ (as theorized), ii) the Hinge penalty to enforce an inequality gradient norm constraint (in both WGAN and HingeGAN), and iii) HingeGAN instead of WGAN. \section{Conclusion} This work provides a framework from which to derive MMCs that result in very effective GAN loss functions. In the future, this framework could be used to derive new gradient norm penalties which further improve the performance of GANs. Rather than trying to devise better ways of enforcing the 1-Lipschitz property, researchers may instead want to focus on constructing better MMCs (possibly by devising better margins). This research shows a strong link between GANs with gradient penalties, Wasserstein's distance, and SVMs.
Maximizing the minimum $L^2$-norm geometric margin, as done in SVMs, has been shown to lower bounds on the VC dimension, which implies a lower generalization error \citep{vapnik1998statistical,mount2015sure}. This paper may help researchers bridge the gap needed to derive PAC bounds on Wasserstein's distance and on GANs/IPMs with gradient penalty. Furthermore, it may be of interest to theoreticians whether certain margins lead to lower bounds on the VC dimension. \section{Acknowledgements} This work was supported by Borealis AI through the Borealis AI Global Fellowship Award. We would also like to thank Compute Canada and Calcul Québec for the GPUs which were used in this work. This work was also partially supported by the FRQNT new researcher program (2019-NC-257943), the NSERC Discovery grant (RGPIN-2019-06512), a startup grant by IVADO, a grant by Microsoft Research, and a Canada CIFAR AI chair. \bibliographystyle{unsrtnat}
\section{Introduction} The existence of dark matter (DM) is now well established from various observations, such as rotation curves, structure formation, the elliptical galaxy NGC 720, gravitational lensing, the Bullet Cluster, temperature fluctuations of the cosmic microwave background, {\it etc}. However, all of the above evidence results from the gravitational interactions of DM, and the nature of DM is still not well known. Thus the search for DM interactions, especially non-gravitational ones, is one of the hot topics in theoretical and experimental physics. The recently observed 3.5 keV X-ray line signal in a stacked spectrum of galaxies and clusters of galaxies~\cite{Xray_exp}, if it is confirmed, can be a first strong hint of non-gravitational DM interactions. The conventional scenario for the X-ray line in terms of DM models is the decay of a sterile neutrino with mass $m_s=7.06 \pm 0.5$ keV into a 3.5 keV photon and an active neutrino. The observed flux~\cite{Xray_exp} \begin{eqnarray} \Phi_{\rm X-ray} \propto n_s \Gamma_s &=& 1.39 \times 10^{-22}\, {\rm s}^{-1} \sin^2 2\th \left(m_s \over {\rm keV}\right)^5 \frac{\rho_{\rm DM}}{m_s} \nonumber\\ &=& (1.5 \times 10^{-25} - 2.7 \times10^{-24})\, {\rm cm^{-3} s^{-1}}, \label{eq:flux} \end{eqnarray} can be explained by a mixing angle in the range $\sin^2 2\th =(2-20) \times 10^{-11}$. It would be interesting to consider a model with an interplay between DM and other sectors of the SM, for example the neutrino sector. Then measurement of one sector may predict or constrain the other sector. One such scenario has been studied in~\cite{Baek:2012ub}. In \cite{Baek:2012ub}, we introduced scalar dark matter coupled to the Zee-Babu model, which generates neutrino masses radiatively at the two-loop level~\cite{Zee-Babu}. We showed that the model can successfully explain the Fermi-LAT 130 GeV gamma-ray line.
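Reading off the decay rate from the bracketed factor in the flux formula above, $\Gamma_s = 1.39\times 10^{-22}\,{\rm s}^{-1}\,\sin^2 2\theta\,(m_s/{\rm keV})^5$ (the factor multiplying $n_s = \rho_{\rm DM}/m_s$), a quick numeric check of our own confirms that the implied sterile-neutrino lifetime vastly exceeds the age of the universe for the quoted mixing-angle range, as a decaying-DM interpretation requires:

```python
# Decay rate read off from the flux formula: the factor multiplying
# n_s = rho_DM / m_s is Gamma_s = 1.39e-22 s^-1 * sin^2(2 theta) * (m_s/keV)^5.
m_s = 7.06                 # sterile-neutrino mass in keV
age_universe_s = 4.35e17   # ~13.8 Gyr in seconds

for sin2_2theta in (2e-11, 20e-11):             # quoted mixing-angle range
    gamma_s = 1.39e-22 * sin2_2theta * m_s**5   # decay rate in 1/s
    tau = 1.0 / gamma_s                         # lifetime in seconds
    assert tau > 1e9 * age_universe_s           # stable on cosmological scales
```

Even at the largest quoted mixing, the lifetime is of order $10^{27}$ s, so only a tiny fraction of the sterile neutrinos has decayed by today.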
In this talk, based on~\cite{Baek:2014}, we gauge the global $U(1)_{B-L}$ symmetry of \cite{Baek:2012ub}. To cancel the gauge anomaly we need to introduce three right-handed neutrinos $N_{R_i} (i=1,2,3)$ with $B-L=-1$. We also introduce a complex scalar field $\varphi$ with $B-L=2$ which breaks the local $U(1)_{B-L}$ symmetry when $\varphi$ acquires a vacuum expectation value (VEV), $v_\varphi$. The $\varphi$ field also dynamically generates the soft lepton-number-breaking term of the original Zee-Babu model. The $U(1)_{B-L}$ symmetry allows the Yukawa interaction $\ell H N_{R_i}$ and the Majorana mass terms $ N_{R_i} N_{R_i} \varphi$, which would generate neutrino masses through the usual seesaw mechanism after the $U(1)_{B-L}$ symmetry is broken. Since we want to generate the neutrino masses only through the Zee-Babu mechanism~\cite{Zee-Babu}, we forbid the above Yukawa interaction by introducing a global $Z_2$ symmetry under which only the $N_{R_i}$ are odd and all other particles are even. We introduce Dirac fermionic dark matter candidates $\psi_i (i=1,2,3)$ to explain the X-ray line signal. The $\psi_i$ are neutral under the SM gauge group but charged under the local $U(1)_{B-L}$ symmetry. They are vector-like under the $U(1)_{B-L}$ symmetry, so no gauge anomaly is generated. If we assign the $U(1)_{B-L}$ charges of the $\psi_i$ fields in such a way that $\Delta Q_{\psi} \equiv Q_{\psi_2} - Q_{\psi_1} = Q_{\psi_3} -Q_{\psi_2}=2$, the {\it off-diagonal} Yukawa interactions, $\ol{\psi_1} \psi_2 \varphi^*$ and $\ol{\psi_2} \psi_3 \varphi^*$, are allowed. After $\varphi$ acquires its VEV, off-diagonal terms in the mass matrix of the $\psi$'s are generated, which induce dark-flavor-changing $Z'$ couplings at tree level, and flavor-changing radiative decays of the DM are allowed. We can also see that a discrete symmetry remains after the $U(1)_{B-L}$ symmetry is broken.
This {\it local} discrete symmetry guarantees the absolute stability of the lightest state of the $\psi_i$~\cite{local_DM}, as opposed to a global symmetry, which can be broken by quantum gravity. Quantum gravity effects can break the global $Z_2$ symmetry, so the right-handed neutrinos can decay very fast, without causing cosmological problems, e.g., with big bang nucleosynthesis (BBN). The light singlet particle also decays before BBN. We show that the transition magnetic dipole operator (TMDO) $\ol{\psi'_1} \sigma_{\mu\nu} \psi'_2 F^{\mu\nu}/\Lambda$ (where the $\psi'_i$ are mass eigenstates) can be generated by two-loop diagrams involving the Zee-Babu scalars, the $\varphi$ scalar, and the $B-L$ gauge boson. The heavier state $\psi'_2$ can decay into the lighter state and a photon through this TMDO. If the mass difference between the two states is about $3.5$ keV, we can explain the observed X-ray line signal. Since the TMDO is generated at the two-loop level, the effective cut-off scale $\Lambda$ of the operator can be very high, even if all the particles running inside the loop have (sub-)electroweak scale masses. As a consequence, $\psi'_2$ can live much longer than the age of the universe and can be a decaying DM candidate. In our model there appear some small parameters, such as $v_\eta/v_\varphi$ and $\Delta m_{21}/m_\psi$, which may seem fine-tuned at first sight. However, we will show that they are technically natural in the sense of 't Hooft: \begin{eqnarray} \text{\it ``A parameter is naturally small if setting it to zero increases the symmetry of the theory.''} \label{itm:tHooft} \end{eqnarray} \section{The model and 3.5 keV line signal} \label{sec:model} The model contains two electrically charged Zee-Babu scalar fields $h^+$ and $k^{++}$, a SM-singlet complex dark scalar $\varphi$, a singlet real scalar $\eta$, three right-handed neutrinos $N_{R_i} (i=1,2,3)$, and three SM-singlet Dirac fermion dark matter candidates $\psi_i$, in addition to the SM fields.
In Table~\ref{tab:B-L}, we show the charge assignments of the fields under $U(1)_{B-L}$ and $Z_2$. \begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Fields & $q_{i}$ & $\ell_i$ &$h^+, k^{++}$ & $\varphi$ & $\eta$ & $N_{R_i}$ & $\psi_i$ \\ \hline\hline $B-L$ & $1/3$ & $-1$ & $2$ & $2$ & $0$& $-1$ & $1/3,7/3,13/3$ \\ \hline $Z_2$ & $+$ & $+$ & $+$ & $+$ & $+$ & $-$ & $\pm$ \\ \hline \end{tabular} \end{center} \caption{The assignment of $B-L$ and $Z_2$ charges ($i=1,2,3$).} \label{tab:B-L} \end{table} The Lagrangian for the model can be written as~\cite{Zee-Babu} \begin{eqnarray} {\cal L}&=& {\cal L}_{\rm SM}+{\cal L}_{\rm Zee-Babu} +{\cal L}_{\rm kin}+{\cal L}_{N_R}+ {\cal L}_\Psi -V, \nonumber\\ {\cal L}_{\rm Zee-Babu} &=& f_{ab} l_{aL}^{Ti} C l_{bL}^{j} \epsilon_{ij} h^+ + h^\prime_{ab} l_{aR}^{T} C l_{bR} k^{++} + {\rm h.c.}, \nonumber\\ {\cal L}_{N_R} &=& \ol{N_{R_i}} i \gamma^\mu D_\mu N_{R_i} -{1 \over 2} \Big(\lambda_{N_{ij}} \varphi \ol{N^c_{R_i}} N_{R_j} + {\rm h.c.}\Big), \nonumber\\ {\cal L}_\Psi &=& \ol{\psi_i} i \gamma^\mu D_\mu \psi_i- m_{\psi_i} \ol{\psi_i} \psi_i -f_{12} \Big(\ol{\psi_1} \psi_2 \varphi^* + \ol{\psi_2} \psi_1 \varphi \Big) -f_{23} \Big(\ol{\psi_2} \psi_3 \varphi^* + \ol{\psi_3} \psi_2 \varphi \Big) \nonumber\\ && -\eta( y_1 \ol{\psi_1} \psi_1 + y_2 \ol{\psi_2} \psi_2 + y_3 \ol{\psi_3} \psi_3), \nonumber\\ {\cal L}_{\rm kin} &=& |{\cal D}_\mu h^+|^2 +|{\cal D}_\mu k^{++}|^2 +|{\cal D}_\mu \varphi|^2 { +{1 \over 2} \left(\partial_\mu \eta\right)^2 +\sum_{i=1}^{3} \ol{\psi_i} i \gamma^\mu {\cal D}_\mu \psi_i} -{1 \over 4 } \hat{Z}^\prime_{\mu\nu} \hat{Z}^{\prime \mu\nu} -{\sin\chi \over 2 } \hat{Z}'_{\mu\nu} \hat{B}^{\mu\nu}, \label{eq:Lag} \end{eqnarray} where $V$ is the scalar potential, ${\cal D}_\mu =\partial_\mu + i \hat{e} Q \hat{A}_\mu + i \hat{g}_{Z'} Q' \hat{Z}'_\mu$, and $\hat{B}_{\mu\nu}$, $\hat{Z}'_{\mu\nu}$ are the field strength tensors of the $U(1)_Y$ and $U(1)'_{B-L}$ gauge fields, respectively.
We need non-trivial mixing among the $\psi_i$ states to generate the TMDO. After $U(1)_{B-L}$ symmetry breaking, the mass terms of the Dirac dark fermions are given by \begin{eqnarray} {\cal L}_{\text{$\psi$ mass}} &=& - \left(\begin{array}{ccc}\ol{\psi_1} & \ol{\psi_2} & \ol{\psi_3} \end{array}\right) \left(\begin{array}{ccc} m_{\psi_1} & \frac{f_{12} v_\varphi}{\sqrt{2}} & 0\\ \frac{f_{12} v_\varphi}{\sqrt{2}} & m_{\psi_2} & \frac{f_{23} v_\varphi}{\sqrt{2}} \\ 0 &\frac{f_{23} v_\varphi}{\sqrt{2}} & m_{\psi_3} \end{array} \right) \left(\begin{array}{c} {\psi_1} \\ {\psi_2} \\ {\psi_3} \end{array} \right), \end{eqnarray} which provides the required mixing. The lightest state $\psi_1^\prime$ is absolutely stable due to the local $Z_6$ symmetry and becomes a DM candidate. { We take $m_{\psi'_1} \sim {\cal O}(1) \, {\rm TeV}$, because this gives not only the correct relic density but also the self-scattering cross section needed to solve the small-scale structure problems of the CDM, when the coupling of the DM with the light scalar is of order one.} The $\psi'_2$ can decay into $\psi_1^\prime$ and a photon through the TMDO, which can explain the 3.5 keV X-ray line signal. It can also be a DM component if its lifetime is much longer than the age of the universe. To get the 3.5 keV X-ray line in the decay process $\psi_2^\prime \to \psi_1^\prime \gamma$ we fix the mass difference \begin{eqnarray} \Delta m_{21} \equiv m_{\psi_2^\prime} - m_{\psi_1^\prime} = \frac{2 m_{\psi_2^\prime} E_\gamma}{m_{\psi_2^\prime} + m_{\psi_1^\prime}} \simeq E_\gamma =3.5\, {\rm keV}, \end{eqnarray} where we assumed $m_{\psi_i^\prime} \gg 3.5\, {\rm keV}$. The effective operator for the magnetic transition $\psi'_2 \to \psi'_1 \gamma$ is given by \begin{eqnarray} {\cal L}_{\rm eff} &=& {1 \over \Lambda} \overline{\psi'_1} \sigma_{\mu\nu} \psi'_2 F^{\mu\nu}, \label{eq:MDO} \end{eqnarray} which is generated by so-called ``Barr-Zee'' type two-loop diagrams~\cite{Arhrib:2001xx}.
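A minimal numerical sketch of this structure: diagonalizing the symmetric mass matrix above yields mass eigenstates $\psi'_i$ with non-trivial mixing, and two-body kinematics gives the photon energy in $\psi'_2 \to \psi'_1 \gamma$ as $E_\gamma = (m_{\psi'_2}^2 - m_{\psi'_1}^2)/(2 m_{\psi'_2})$, which rearranges to the mass-difference relation above. The diagonal masses and off-diagonal entries $f_{ij} v_\varphi/\sqrt{2}$ below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

# Illustrative TeV-scale inputs (assumptions, not fitted values).
m1, m2, m3 = 1000.0, 1100.0, 1250.0   # diagonal masses m_psi_i in GeV
f12v, f23v = 50.0, 50.0               # f_ij * v_phi / sqrt(2) in GeV

# Symmetric dark-fermion mass matrix from the Lagrangian above.
M = np.array([[m1,   f12v, 0.0 ],
              [f12v, m2,   f23v],
              [0.0,  f23v, m3  ]])

masses, U = np.linalg.eigh(M)   # eigenvalues ascending: psi'_1, psi'_2, psi'_3

mp1, mp2 = masses[0], masses[1]
E_gamma = (mp2**2 - mp1**2) / (2.0 * mp2)   # photon energy in psi'_2 decay
dm21 = mp2 - mp1                             # mass splitting Delta m_21
```

For nearly degenerate states $E_\gamma \simeq \Delta m_{21}$, exactly the approximation quoted in the text.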
The state $\psi'_2$ decays almost 100\% via the operator in (\ref{eq:MDO}). Given that the $\chi$ and $h-\phi(n)$~\cite{Baek:2011aa} mixings are strongly constrained, and that the Barr-Zee type diagrams are generated even in the limit where those mixings vanish, we can treat the effects of non-vanishing mixings as small perturbations. The leading contribution of the two-loop Barr-Zee type diagrams to $1/\Lambda$ is obtained to be \begin{eqnarray} {1 \over \Lambda}&\simeq&-\sum_{s=h^+,k^{++}}\frac{8 e g^2_{Z'} \Delta Q_\psi Q_s Q_{s}^\prime \lambda_{\varphi s} \delta^2 \cos 2\theta_{12} s_{13} s_{23} }{(4\pi)^4} \nonumber\\ &\times&\int_0^1 dx \int [d\beta] \frac{x \beta_4^2 m_{\psi'_3}^3} {\left(\beta_1 m_{Z'}^2 + \beta_2 m_\phi^2 +\beta_3 m_s^2/(x(1-x)) +\beta_4^2 m_{\psi'_1}^2\right)^2}, \label{eq:d_M} \end{eqnarray} where $[d\beta] \equiv d\beta_1 d\beta_2 d\beta_3 d\beta_4 \delta(1-\beta_1 -\beta_2 -\beta_3 -\beta_4)$, $\Delta Q_\psi = Q_{\psi'_3}-Q_{\psi'_2}= Q_{\psi'_2}-Q_{\psi'_1}=2$, $\delta=\Delta m_{31}/m_{\psi'_3}$, and we neglected the small contribution proportional to $\Delta m_{21} (\simeq 3.5\, {\rm keV})$. In Fig.~\ref{fig:X-ray}, the red-colored region explains the observed X-ray line signal in the $(m_{\psi'_1},g_{Z'})$-plane. For the left (right) panel we have taken $M_{Z'}=10 \,(20)$ TeV. For the other parameters we have fixed $\delta=0.2$, $\theta_{12}=\theta_{23} =0.2$, $m_\phi= m_{h^+}= m_{k^{++}}=1$ TeV, $\lambda_{\varphi h} =\lambda_{\varphi k} =1$. We have checked that the signal region is not very sensitive to the mass parameters $m_\phi$, $m_{h^+}$, and $m_{k^{++}}$. The black solid (dashed) lines reproduce the observed relic abundance of dark matter, $\Omega_\psi h^2 =0.1199 \pm 0.0027$, for $y_i=2 \, (1)$. The vertical lines come from the annihilation channel $\psi_{1(2)} \overline{\psi_{1(2)}} \to \eta \eta$ and are therefore sensitive to the Yukawa couplings $y_i$. There are also resonance regions when $m_{\psi'_1} \approx M_{Z'}/2$.
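The Feynman-parameter integral in Eq.~(\ref{eq:d_M}) is easy to estimate numerically. The sketch below Monte-Carlo samples the 3-simplex (a flat Dirichlet distribution is uniform on the simplex, whose volume is $1/3! = 1/6$) and evaluates only the loop integral, not the coupling prefactor; the mass values are assumptions loosely modeled on the benchmark above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed benchmark masses in GeV (illustrative, not fitted).
m_Zp, m_phi, m_s = 10000.0, 1000.0, 1000.0
m_psi1, m_psi3 = 1000.0, 1250.0

N = 200_000
x = rng.uniform(1e-6, 1.0 - 1e-6, N)              # outer Feynman parameter
b1, b2, b3, b4 = rng.dirichlet(np.ones(4), N).T   # uniform on the simplex

# Integrand of Eq. (eq:d_M); units work out to GeV^-1 overall.
f = x * b4**2 * m_psi3**3 / (
    b1 * m_Zp**2 + b2 * m_phi**2
    + b3 * m_s**2 / (x * (1.0 - x)) + b4**2 * m_psi1**2
) ** 2

# Mean of f times the simplex volume 1/6 estimates the integral.
I = f.mean() / 6.0
```

Multiplying `I` by the coupling prefactor in Eq.~(\ref{eq:d_M}) then gives an estimate of $1/\Lambda$; the two-loop factor $(4\pi)^{-4}$ is what drives $\Lambda$ to very high scales.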
The dark gray region is excluded because it does not satisfy the longevity requirement for the decaying DM. The light gray region is excluded by the LUX direct search experiment, and the blue line shows the sensitivity of the future XENON1T experiment. In our case the direct detection of DM is dominated by the $Z'$ boson exchange diagram even though the $Z'$ is very heavy, $M_{Z'}=10 \, (20) \, {\rm TeV}$. {We note that $m_{h^+}= m_{k^{++}}=1$ TeV can easily evade the constraints from lepton flavor violating processes with $f_{ab} , h'_{ab} \sim {\cal O}(0.01)$, while still being able to explain the neutrino masses. } \begin{figure} \includegraphics[width=5cm]{figs/MDM3_10.eps} \includegraphics[width=5cm]{figs/MDM3_20.eps} \caption{Plots in the $(m_{\psi'_1},g_{Z'})$-plane. The red-colored region can explain the 3.5 keV X-ray line signal. The dark gray region is excluded because the lifetime of $\psi'_2$ is shorter than the age of the universe. The light gray region is excluded by the LUX direct detection experiment. The blue line is the sensitivity the next XENON1T experiment can reach. The black solid (dashed) line gives the correct relic abundance of DM for $y_i=2\,(1)$. For the left (right) plot we set $M_{Z'}=10 \, (20)$ TeV.} \label{fig:X-ray} \end{figure} For a light $\eta$ ($m_\eta \sim 1-10 \, {\rm MeV}$), the $\eta$-exchanging (in)elastic self-interaction processes $\psi'_{1(2)},\psi'_{1(2)} \to \psi'_{1(2)},\psi'_{1(2)}$ can be strong. When they have cross sections \begin{eqnarray} \sigma_T/m_{\psi'_1} \sim 0.1 - 10 \, {\rm cm^2/g}, \end{eqnarray} our model can solve small-scale structure problems such as the core-vs-cusp problem and the too-big-to-fail problem. In our model, we need relatively large ($y_i \sim {\cal O}(1)$) Yukawa couplings of $\eta$ with the DM to get the correct relic density. We find that the DM scattering cross section can be in the $0.1-10 \, {\rm cm^2/g}$ range for $m_{\psi'_1}=0.1 - 10$ TeV and $m_\eta=0.1-10$ MeV.
\section{Conclusions\label{sec:Conclusions}} We extended the Zee-Babu model for neutrino masses to have a $U(1)_{B-L}$ gauge symmetry and to incorporate Dirac dark matter to explain the X-ray line signal. We also introduced a $U(1)_{B-L}$-breaking scalar, a singlet scalar, and right-handed neutrinos. The charges of the particle content are assigned in such a way that after the $U(1)_{B-L}$-breaking scalar acquires a VEV, the local $U(1)_{B-L}$ symmetry is broken down to a discrete symmetry. The lightest Dirac dark fermion $\psi'_1$, whose mass is TeV scale, transforms non-trivially under this discrete symmetry and becomes stable. { The heavier $\psi'_2$ particle decays almost 100\% through the magnetic dipole transition operator $\overline{\psi'_1} \sigma_{\mu\nu} \psi'_2 F^{\mu\nu}/\Lambda$. Since this operator is generated by two-loop, so-called Barr-Zee, diagrams, the cut-off scale $\Lambda$ is very high, $\sim 10^{15}$ GeV, and the lifetime of $\psi'_2$ is much longer than the age of the universe. Thus $\psi'_2$ can be a decaying dark matter candidate. If $\Delta m_{21} =m_{\psi'_2}-m_{\psi'_1} \simeq 3.5$ keV, the recently claimed X-ray line signal~\cite{Xray_exp} can be accommodated for a wide range of dark matter masses. } The relic abundance of dark matter in the current universe can be explained by dark matter annihilation into two singlet scalars and also by $Z'$-resonance annihilation. Although our $Z'$ is very heavy, $\gtrsim 10$ TeV, it can still mediate dark matter scattering off atomic nuclei at a level that can be probed by the next generation of dark matter direct search experiments. The singlet scalar can be very light ($m_\eta =0.1-10$ MeV) and mediate strong self-interactions of dark matter with cross section $\sigma_T = 0.1 -10 \, {\rm cm^2/g}$, which can solve small-scale structure problems, such as the core-vs-cusp and too-big-to-fail problems, of the standard $\Lambda$CDM model.
{ The small mass difference $\Delta m_{21}$ and the small VEV of $\eta$ are technically natural in the sense of 't Hooft. The singlet scalar and the right-handed neutrinos decay fast without causing any cosmological problems.} \begin{acknowledgments} This work was supported in part by NRF Grant 2012R1A2A1A01006053. \end{acknowledgments} \bigskip
\section{Introduction} The final helium core mass usually determines the fate of very massive stars (zero-age main sequence mass $\ge 80 {\ensuremath{{M}_{\odot} }}$). When the mass of the helium core exceeds $35$ {\ensuremath{{M}_{\odot} }}, some pressure-supporting photons start to convert into electron-positron pairs during the central carbon/oxygen burning, which softens the equation of state by reducing the adiabatic index to $\gamma < 4/3$. This results in a contraction of the core, and the non-hydrostatic burning drives temperature pulsations on the dynamical time scale of a few hundred seconds. At this stage, the temperature pulsations produce only sonic waves without any destructive explosions. For stars having more massive helium cores of $45 - 65$ {\ensuremath{{M}_{\odot} }}, several strong eruptions triggered by the pair instability occur before the star finally collapses into a black hole. The later eruptions usually gain more energy from the explosive burning but carry less ejecta, which makes them collide with the earlier ejecta and results in many interesting outcomes. These events are called ``pulsational pair-instability supernovae (PPISNe),'' first introduced by \citet{Bar67}, and are the subject of detailed studies by \citet{Woo07,Woo17}. As the helium core mass increases, the pulses become more energetic and eject less mass, and the duration between pulses increases from a few hundred seconds to several years. Such an eruption starts to generate an SN-like transient at a helium core mass of about 45 {\ensuremath{{M}_{\odot} }}. If the massive star still has a substantial hydrogen envelope when the eruption occurs, the first strong pulse carries an energy of $10^{49} - 10^{50}$ {\ensuremath{\mathrm{erg}}}\ and can easily blow out the hydrogen envelope, whose binding energy is only $10^{43} - 10^{44}$ {\ensuremath{\mathrm{erg}}}, and produce a faint Type IIp SN.
However, subsequent pulses colliding with the first one can produce a much brighter Type IIn SN. If no hydrogen envelope remains at the time of the eruptions, the collision of helium mass shells may produce a luminous Type I SN. For helium core masses ranging from 45 to 55 {\ensuremath{{M}_{\odot} }}, the duration between pulsations becomes several years, and the shell collisions happen at $10^{15}-10^{16}$ cm from the center of the star, assuming the ejecta are moving at a speed of $\sim$ 1000 km sec$^{-1}$. In this circumstance, much of the collision energy dissipates in optical emission. An energetic collision of a 10$^{51}$ erg pulse may result in a superluminous supernova \citep[SLSNe;][]{Ins16,Tak18} such as SN 2007bi \citep{2007bi}. Recent stellar evolution models by \citet{Woo17,Leu19} confirmed a broad variety of outcomes of PPISNe. Their consequent radiation properties, studied by \citet{Woo07,Des15,Mor15,Smi15,Jer16,Woo17}, showed that PPISNe can provide outcomes ranging from several faint transient events to 1-2 SLSNe. The newly discovered SN iPTF14hls \citep{14hls}, which has a long-duration and multi-peak light curve, poses a grand challenge to our understanding of its emission mechanism. It is likely associated with multiple collisions of circumstellar shells that can be explained naturally by PPISNe \citep{Woo18}. Another possibility for such dramatic changes in the light curves (LCs) may be the asymmetry of ejecta, which leads to the formation of clumpy structures and results in inhomogeneous emission. \citet{Woo07} modeled a PPISN of a 110 {\ensuremath{{M}_{\odot} }}\ star of solar metallicity with the 1D \texttt{KEPLER}{} code \citep{kepler,Heg01} to explain the light curve of SN 2007bi. A detailed follow-up study \citep{Woo17} showed the diversified and enriched outcomes of PPISNe. However, in these 1D simulations, a large density spike always formed during the shell collisions due to gas piling up where fluid instabilities would otherwise develop.
However, a fundamental deficiency of 1D simulations is that they cannot address fluid instabilities from first principles. \cite{Che14b} performed two-dimensional simulations of PPISNe and showed that the development of Rayleigh-Taylor (RT) instabilities drives mixing between the colliding shells. The intrinsic nature of this problem requires radiation hydrodynamic (rad-hydro) simulations in at least two dimensions (2D), with sufficient spatial resolution to model the thin radiating region from the shell collision, before the observational signatures of PPISNe can be properly calculated. The results from previous hydrodynamic simulations may change significantly in the context of rad-hydro, in which radiation co-evolves with the gas dynamics through radiative heating and cooling. Highly resolved radiation transport simulations of colliding shells push the envelope of state-of-the-art astrophysical simulations. They require robust numerical algorithms and substantial supercomputing resources of millions of CPU hours. Since the detailed emission spectra of PPISNe are uncertain, we assume the radiation comes mainly from thermal emission during the shell collisions. As a first step, we use a single group of radiation transport (the gray approximation) based on the frequency-integrated formulation of the rad-hydro equations. It provides a good starting point for studying the radiation emission and its impact on the dynamics of the ejecta. To examine the radiation properties of PPISNe in detail, we performed one-, two-, and three-dimensional radiation transport simulations and compared them with the previous hydro simulations. This comparison provides a deeper understanding of PPISNe and their observational signatures. We describe our numerical approaches to the rad-hydro simulations in Section 2 and then present the results of the 1D, 2D, and 3D models in Sections 3, 4, and 5, respectively.
The significance and applications of our results are discussed in Section 6, and the conclusions are given in Section 7. \section{PPISN Model / Numerical Method} We take as our fiducial case the PPISN studied in \citet{Woo07, Che14b}, whose progenitor was a solar-metallicity star with a zero-age main sequence mass of 110 {\ensuremath{{M}_{\odot} }}. When this star evolved to the post-main sequence with a helium core mass of 49.9 {\ensuremath{{M}_{\odot} }}\,, the first energetic pair-instability eruption occurred and ejected most of its hydrogen envelope, making a faint Type II supernova and leaving a residual star of 50.7 {\ensuremath{{M}_{\odot} }}. The helium core again encountered the pair instability, twice in rapid succession, 6.8 years after the first eruption. The second and third eruptions ejected 5.1 {\ensuremath{{M}_{\odot} }}\ of gas with a kinetic energy of $6 \times 10^{50}$ erg. This star produced three major pair-instability (PI) pulses, P1, P2, and P3. P3 collided with P2, and the merged ejecta started to catch up to P1 at $r\sim$ 10$^{15}$ cm at a speed of a few 1000 km sec$^{-1}$. During the catastrophic collisions that occurred at this time, part of the kinetic energy of the ejecta was transformed into radiation. To study the collisions among the three eruptions, we mapped this progenitor star, calculated with the 1D stellar evolution code \texttt{KEPLER}{} \citep{kepler,Heg02}, onto the rad-hydro code \texttt{CASTRO}{} at a time when all three eruptions are present. At this time, the forward shock of P1 has propagated to $r \sim 2 \times$ 10$^{16}$ cm. \citet{Woo07} suggested that most of the radiation from the collision between P2/P3 and P1 is emitted by the time the shock of P2+P3 has reached $r \sim 1 \times 10^{16}$ cm; that marks the end point of our simulations. \subsection{\texttt{CASTRO}{}} \texttt{CASTRO}\ is a multidimensional adaptive mesh refinement (AMR) code designed for astrophysical simulations \citep{Alm10,Zha11}.
It uses an unsplit piecewise parabolic method (PPM) hydro scheme \citep{Woo84} with multi-species advection. We use a $\gamma$-law equation of state (EOS) ($\gamma = 5/3$ for an ideal gas), which is sufficient for the colliding-shell problem. The ideal gas EOS is a good approximation for the colliding PPISN gas, which is not as extreme as the gas in the cores of stellar explosions, which requires treatment of degenerate electrons and Coulomb corrections, as in the Helmholtz EOS \citep{Tim00}. Densities, velocities, temperatures, and mass fractions of elements from the 1D \texttt{KEPLER}{} model are mapped onto the \texttt{CASTRO}{} AMR grids with a conservative scheme developed by \citet{Che11, Che14a}, which conserves physical quantities such as energy and mass during the mapping. Self-gravity uses a monopole approximation: a 1D gravitational potential is constructed from the radial average of the density, and the gravitational force is then calculated for each AMR grid. Such an approximation remains valid as long as the global spherical symmetry of the ejecta is not strongly broken. To track the mixing of elements, we follow the evolution of four different species, {\ensuremath{^{1} \mathrm{H}}} , {\ensuremath{^{4} \mathrm{He}}}, {\ensuremath{^{12}\mathrm{C}}}, and {\ensuremath{^{16}\mathrm{O}}}, which are the dominant elements in the ejecta. Note that the eruptions of PPISNe are mainly driven by the central {\ensuremath{^{16}\mathrm{O}}}\ burning; therefore, elements from {\ensuremath{^{28}\mathrm{Si}}}\ to {\ensuremath{^{56}\mathrm{Ni}}}\ are negligible in the ejecta. The detailed setup for the 1D, 2D, and 3D rad-hydro simulations is described in the following sections. \subsection{Radiation Hydrodynamics} The rad-hydro solver in \texttt{CASTRO}\ uses the mixed-frame approach and adopts the flux-limited diffusion (FLD) and local thermodynamic equilibrium assumptions. The detailed formulation can be found in \citet{Zha11, Zha13}.
It uses a second-order explicit Godunov method for the hyperbolic part of the system and a first-order backward Euler method for the parabolic part. The rad-hydro version of \texttt{CASTRO}\ has been used to address a wide range of rad-hydro problems in astrophysics, such as neutrino-driven explosions in core-collapse SNe \citep{Dol15} and SN shock breakout problems \citep{Lov17}. The major advantage of \texttt{CASTRO}\ is its efficiency, due to the use of AMR combined with good scaling behavior on modern supercomputers up to 30,000 CPU cores, which makes it more efficient than other rad-hydro codes, for example, {\tt ZEUS-MP} \citep{Hay06}, {\tt Orion} \citep{Kru07}, {\tt HERACLES} \citep{Gon07}, {\tt V2D} \citep{Swe09}, {\tt RAMSES} \citep{Com11}, and {\tt CRASH} \citep{crash}. A unique strength of \texttt{CASTRO}\ is that its hyperbolic solver uses an unsplit version of the PPM method, which avoids spurious noise caused by dimensional splitting. \texttt{CASTRO}\ is based on a mixed-frame formulation, similar to that of {\tt Orion}. A main advantage of the mixed-frame approach is its strict energy conservation, whereas its drawback is its limited applicability to line transport. However, line transport cannot be treated by a gray radiation solver regardless of the choice of frame. Compared with the two-moment approach, the FLD approach is computationally cheaper, and it uses much less memory for multi-group radiation. However, it is less accurate for optically thin flows. For the current setup, we use a gray approximation based on the frequency-integrated formulation of the rad-hydro equations. A multi-group rad-hydro treatment is much more computationally expensive and needs to be supported by realistic opacities; therefore, we defer it to a future study. \subsection{Opacities} Opacities determine how the radiation interacts with gas through emission, absorption, and scattering, which is important for any rad-hydro simulation.
Realistic opacities can be calculated based on the gas temperature, density, composition of elements, and ionization states. As a starting point, we employ simple scattering and absorption opacities by assuming \begin{equation} \kappa = \kappa_0\ \rho^{m} T^{-n} \nu^{p}, \label{eq:kappa} \end{equation} where $\kappa$ is either the Planck or Rosseland mean opacity, $ \kappa_0$ is a constant, $\rho$ is the gas density, $T$ is the gas temperature, $\nu$ is the radiation frequency, and $m$, $n$, and $p$ are constants. $\kappa$ has units of $\mathrm{cm}^{-1}$. For the gray solver, the opacities are made independent of radiation frequency by setting $p$ to zero. \texttt{CASTRO}\ allows for two temperatures (different radiation and gas temperatures, so $E_\mathrm{r} \ne a T_\mathrm{gas}^4$). Correspondingly, \texttt{CASTRO}\ takes both the Planck mean, $\kappa_P$, and Rosseland mean, $\kappa_R$, opacities, which have different weightings. If we set $\kappa_P \Delta x \gg 1$, where $\Delta x$ is the zone size in the simulation, the two temperatures become the same. Scattering contributes to the Rosseland mean but not to the Planck mean. In our \texttt{CASTRO}\ runs, we set $\kappa_R$ to Thomson scattering at a sufficiently high temperature, which can be expressed as \begin{equation} \kappa_T = 0.2(1+X) \; {\rm cm^{2}g^{-1}} , \label{eq:ka} \end{equation} where $X$ is the hydrogen mass fraction. Since opacities in \texttt{CASTRO}\ have units of cm$^{-1}$, $\kappa_R = \kappa_T\rho$, where $\rho$ is the gas density, and we parameterize $\kappa_T =$ 0.1, 0.2, 0.3, and 0.4 by assuming different hydrogen mass fractions or ionization environments. Ideally, we would like to use realistic opacities based on a table or to solve for the ionization states of the multi-species gas; however, doing so creates a great challenge in simulations of colliding shells. In PPISNe, radiation originates from the colliding shells and likely propagates inside the clumpy ejecta.
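As a quick numerical illustration of these conventions (the density and hydrogen mass fraction below are assumed, representative values, not taken from the simulations):

```python
def kappa_T(X):
    """Thomson scattering opacity per unit mass, 0.2*(1+X) cm^2/g."""
    return 0.2 * (1.0 + X)

def kappa_R(X, rho):
    """Rosseland opacity in CASTRO's cm^-1 convention: kappa_T * rho."""
    return kappa_T(X) * rho

X = 0.7                # hydrogen mass fraction (assumed)
rho = 1.0e-13          # g/cm^3, illustrative shell density
k = kappa_R(X, rho)    # opacity in cm^-1
mfp = 1.0 / k          # photon mean free path in cm
```

For these values $\kappa_T = 0.34$ cm$^{2}$g$^{-1}$ and the photon mean free path is a few $10^{13}$ cm, i.e., small compared with the $2 \times 10^{16}$ cm simulation domain but comparable to the thin radiating shells, which is what makes resolving them numerically demanding.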
Therefore, the gas density can be highly inhomogeneous and anisotropic, which causes the opacities to vary significantly along the line of sight of the outgoing photons. Because the opacities in the FLD scheme are derived in the diffusion approximation to the radiation transport equation, the scheme works best for an isotropic and homogeneous gas field on length scales comparable to the photon mean free path. Sudden changes in opacities easily make the rad-hydro simulations numerically unstable, which can lead to a crash of the simulations. Therefore, as a first attempt, we use the simple Thomson scattering opacities. Even so, the advancing rad-hydro time step remains tiny. Evolving our 2D/3D rad-hydro simulations over a time scale of $\sim 300 - 400$ days requires $10^5 -10^6 $ time steps per run, which makes these simulations very computationally expensive. We have spent about three million CPU hours running these models (mainly the 2D and 3D simulations) on two powerful Cray XC-40 supercomputers: Edison ($\sim134,000$ CPU cores) at the National Energy Research Scientific Computing Center (NERSC) and Aterui ($\sim25,000$ CPU cores) at the National Astronomical Observatory of Japan (NAOJ). The single 3D run consumed more than 1.5 million CPU hours. \subsubsection{1D Setup} 1D \texttt{CASTRO}\ uses a spherically symmetric coordinate, $r$, with reflecting and outflow boundary conditions at the inner and outer boundaries, respectively. In the 1D runs, we use different resolutions of 1024, 4096, and 8192 uniform zones with a domain size of $r =2 \times$ 10$^{16}$ cm to examine the 1D results under different resolutions and opacities. \subsubsection{2D Setup} 2D \texttt{CASTRO}\ uses a cylindrical grid, $r$ and $z$. We simulate only one quadrant of the star; therefore, outflow and reflecting boundary conditions are set on the upper and lower boundaries in $r$ and $z$, respectively.
The root grid has 512$^2$ zones with three levels of refinement for an additional factor of up to 8 (2$^3$) in the spatial resolution. The grids are refined based on gradients of density and velocity. This setup provides an effective simulation domain of $4,096 \times 4,096$ zones to cover a domain of $[2 \times$10$^{16}]^2$ cm$^2$. \subsubsection{3D Setup} 3D \texttt{CASTRO}\ uses a Cartesian grid, $x$, $y$, and $z$. We simulate a full star with $4\pi$ geometry in 3D. Thus, outflow boundary conditions are set on all boundaries. The root grid has 512$^3$ zones and up to two levels of refinement for an additional factor of four in spatial resolution. The refinement criteria are the same as in the 2D cases. This setup provides $2,048 \times 2,048 \times 2,048$ zones for the simulation domain of $[2 \times 10^{16}]^3$ cm$^3$, with the star located at the center of the simulated box. \subsection{Light Curve Calculation} We calculate the LCs by collecting the radiation flux at the boundary of our simulation domain, assuming the photons free-stream from there. The LCs can be expressed as \begin{equation} L = 4\pi r_p^2 F_{rad}, \end{equation} where $L$ is the luminosity in erg $s^{-1}$, $F_{rad}$ is the radiation flux in erg cm$^{-2}$ sec$^{-1}$ at the photosphere, and $r_p$ is the radius (in cm) of the boundary of the simulated box. The bolometric LCs can be calculated directly from our simulations. \section{1D Results} \subsection{1D PPISN Evolution} 1D simulations provide a direct physical picture of the dynamics of the ejecta and serve as a good starting point for our rad-hydro runs. At the beginning of the 1D simulations, the peak velocities of P2 and P3 are $\sim 3.9 \times 10^7$ cm sec$^{-1}$ at $r \approx 7.9 \times 10^{14}$ cm and $\sim 4.8 \times 10^7$ cm sec$^{-1}$ at $r \approx 3.2 \times 10^{14}$ cm, respectively.
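A one-line numerical check of this conversion (the luminosity value is a representative $\sim 10^{43}$ erg sec$^{-1}$ scale, an assumption for illustration rather than a simulation output):

```python
import math

def bolometric_luminosity(r_p, F_rad):
    """L = 4*pi*r_p^2*F_rad, with r_p in cm and F_rad in erg cm^-2 s^-1."""
    return 4.0 * math.pi * r_p**2 * F_rad

r_p = 2.0e16      # cm, outer boundary of the simulation domain
L_peak = 1.0e43   # erg/s, representative peak luminosity (assumed)

# Flux at r_p implied by that luminosity, about 2e9 erg cm^-2 s^-1.
F_needed = L_peak / (4.0 * math.pi * r_p**2)
```

Inverting the relation this way is a useful sanity check that the fluxes collected at the domain boundary are of the right order for the luminosities reported.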
In the previous pure-hydro simulation of \cite{Che14b}, the faster-moving P3 overtakes P2 within 50 days and completely merges with it at $r \approx 2.3 \times 10^{15}$ cm; together they then collide with P1. In the rad-hydro simulations, the photons diffusing from P2 and P3 produce radiative cooling and lead to the formation of two thin shells (density spikes in 1D) before their merger. This radiative cooling also decelerates the shock velocities of P2 and P3. Radiation starts diffusing out and powering the LCs. The merger of the two pulses takes place at $r \approx 5.1 \times 10^{15}$ cm at 130 days. There is a sudden jump in the radiation flux ahead of the shock front of P2, carried by free-streaming photons, before it merges with P3 or P2+P3. We show the evolution of velocity, gas density, radiation energy, and radiation flux of the 1D PPISN in \Fig{fig:1d_evol}. Density spikes appear shortly after the simulation starts, and their density contrast relative to the surroundings is $\Delta_S = \langle\frac{\delta \rho}{\rho}\rangle \sim 1,000 - 2,000$. Most of the radiation flux comes from the same location as the density spike, which indicates that the collision of the shells produces a great deal of thermal radiation, and the ejecta ahead of the shell are heated up to a few $10^5$ K. Depending on the energetics, masses, and duration between eruptions, the electromagnetic spectrum of the colliding shells may range from radio to UV. \begin{figure*} \begin{center} \includegraphics[width=.8\textwidth]{figs/f1} \caption{Evolution of a 1D PPISN. Curves of different colors show the profiles of gas density, velocity, radiation energy density, and radiation flux from 5 - 333 days after the shell collisions. The high-resolution model with 8192 zones is employed in this example. The collisions between the two major shells can be seen in the velocity profiles. Two large density spikes of P2 and P3 first appear, then merge into one.
The peaks of the radiation energy density and flux are also located around the density spikes. \lFig{fig:1d_evol}} \end{center} \end{figure*} \begin{figure}[h] \begin{center} \includegraphics[width=\columnwidth]{figs/f2} \caption{Zoom-in of the density spike at $t$ = 300 days. The red-dashed line indicates the peak of the density spike. The width of the density spike is about $1.4 \times 10^{14}$ cm, and it is resolved by 50 zones; the zone number for this model is 8192. The spike sits at the shock. Most of the radiation flux originates from this density spike. \lFig{fig:spike_zoom}} \end{center} \end{figure} 1D Lagrangian rad-hydro codes, such as \texttt{KEPLER}\ \citep{kepler,Heg02} or \texttt{STELLA}\ \citep{stella}, usually fail to resolve the structure of the density spike because of their limited spatial resolution of a few hundred to a thousand zones per model. An Eulerian code with many zones is required to achieve a high enough spatial resolution to probe the structure of the density spike. We use resolutions of 1024, 4096, and 8192 zones in the 1D runs and are able to reveal the detailed structure of the density spike. \Fig{fig:spike_zoom} shows the close-up of the spike at the instant the two pulses merge. The width of the density spike is about $1.4 \times 10^{14}$ cm, and densities range from $\sim 8 \times 10^{-16} \ensuremath{\mathrm{g}\,\mathrm{cm}^{-3}}$ to $\sim 1.1 \times 10^{-12} \ensuremath{\mathrm{g}\,\mathrm{cm}^{-3}}$. The $\Delta_S$ is $\sim 1,375$, making it appear as a spike on the scale of the entire simulation domain. This spike is located at the shock front, which creates a discontinuity in density, velocity, and radiation flux. Radiation streams directly out of the spike. The formation of a density spike in 1D usually implies that fluid instabilities would develop because of the piling up of gas. Due to the dimensional limitation, however, 1D models cannot follow the fluid instabilities from first principles.
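A quick arithmetic check of the quoted resolution, using the stated domain size and zone count: the uniform zone width is $2\times10^{16}\,{\rm cm}/8192 \approx 2.4\times10^{12}$ cm, so the $1.4 \times 10^{14}$ cm spike spans roughly 50-60 zones, consistent with the caption:

```python
domain = 2.0e16          # cm, radial extent of the 1D grid
zones = 8192             # zone count of the highest-resolution run
dx = domain / zones      # uniform zone width in cm

spike_width = 1.4e14     # cm, measured width of the density spike
zones_across = spike_width / dx   # ~57 zones across the spike
```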
We discuss how this spike transforms in the multidimensional rad-hydro simulations in later sections. \subsection{1D Light Curves} A crucial goal of rad-hydro simulations is calculating the observational signatures. With the single-group rad-hydro, we can obtain the bolometric LCs (the total radiation energy emitted, integrated over frequency). However, the color information of the LCs and the full spectra are unavailable. We show the effect of resolution on the LCs in \Fig{fig:reso}. The 1D LCs are from simulations of 1024, 4096, and 8192 zones with the same parameters. The LC results suggest that the radiation flux is sensitive to the resolution. The lower-resolution run (1024 zones) tends to yield a higher peak luminosity with a shorter light curve duration. The second peak in the light curve appears only in the higher-resolution runs (4096 and 8192 zones), and the rise of the second peak is earlier in the 4096-zone run. The 1D results at different resolutions suggest that the peak luminosity of this 110 {\ensuremath{{M}_{\odot} }}\ PPISN is about $9 - 19.5 \times 10^{42}$ erg sec$^{-1}$, with a light curve duration of $100 - 200$ days. The second peak should result from the collision of the two density spikes of P2 and P3; however, it is not resolved in the 1024-zone run. The resolution may affect the LCs through the numerical diffusion of photons, which affects the flux from the photosphere. Since our opacity setting is not realistic, we employ several constant opacities, $\kappa =$ 0.1, 0.2, and 0.4, to examine how the opacities change the 1D LCs in \Fig{fig:1d_lc}. In all three cases, we use the highest resolution of 8192 zones. With increased opacities, the peak of the LCs becomes broader and lower. The second peak feature again appears in the highly resolved runs regardless of opacity. The LCs peak at $7 - 11.5 \times 10^{42}$ erg sec$^{-1}$ with a duration of $150 - 200$ days.
The separation between the first and second peaks is about 150 days. Increasing opacities result in a longer photon diffusion time, which is reflected in the observational signatures by broadening the LCs and lowering their peak luminosity. Opacities determine the decoupling of photons and gas; therefore, they affect the LCs by setting the location of the photosphere and its emitted radiation flux. In realistic situations, LCs should be even more sensitive to opacities. The 1D results suggest that our PPISN from a 110 {\ensuremath{{M}_{\odot} }}\ star has a characteristic peak luminosity of $7 - 19.5 \times 10^{42}$ erg sec$^{-1}$ with a duration of $150 - 200$ days, and a second peak feature of $5-6 \times 10^{42}$ erg sec$^{-1}$ with a duration of about $50$ days.

\begin{figure}[h]
\begin{center}
\includegraphics[width=\columnwidth]{figs/f3}
\caption{ 1D LCs at different resolutions. As the spatial resolution increases, the peak luminosity decreases and the peak becomes broader. The second peak feature appears only in the higher-resolution runs. These 1D runs use $\kappa = 0.2$. \lFig{fig:reso}}
\end{center}
\end{figure}

\begin{figure}[h]
\begin{center}
\includegraphics[width=\columnwidth]{figs/f4}
\caption{ 1D LCs for different opacities in 8192-zone runs. Two-peak signatures appear in these 1D LCs. The peak luminosity is $\sim 10^{43} \, {\ensuremath{\mathrm{erg}}}\,\mathrm{sec}^{-1}$ and decreases as $\kappa$ increases. The LCs broaden as the peak luminosity drops, and the location of the peak shifts to a later time. \lFig{fig:1d_lc}}
\end{center}
\end{figure}

\section{2D Results}
\subsection{2D PPISN Evolution}

The 1D results suggest the formation of a density spike in the colliding shells, which is consistent with previous 1D rad-hydro results \citep{Woo07,Woo17}. In this section, we investigate how the 1D spikes evolve in the 2D simulations. We first show the 2D evolution of the $\kappa = 0.2$ model in \Fig{fig:2d_den}.
At the beginning of the simulation, the original density structure is perfectly spherically symmetric. As time goes on, two distinct shells form from P2 and P3 (\Fig{fig:2d_den} (b)). The thin shells form at the shock front due to radiative cooling. The shells are pushed by the hot ejecta. The velocity of the P2 shell is $\sim 2 \times 10^8$ cm sec$^{-1}$. We compare the early phase of the P2 and P3 collision for both the rad-hydro and pure-hydro runs in \Fig{fig:comp2d}. In the pure-hydro simulations \citep{Che14b}, the merged shock from P2 and P3 collides with the P1 ejecta within 50 days. Due to the snowplowed mass ahead of the shock, a reverse shock forms and drives RT instabilities that mix the ejecta behind it. However, for the rad-hydro model, the energy of the radiative shock quickly dissipates as it propagates. Thermal energy from the collision between P2 and P3, together with that from the collision between P2 and P1, is converted into photons that power the light curve of PPISNe. The outer shell where the radiation originates lacks spherical symmetry and deforms as it moves out. See \Fig{fig:comp2d} for the comparison of the 2D rad-hydro and pure-hydro models at 256 days. The shock features in the rad-hydro simulations seem to disappear. In the rad-hydro simulations, the shock propagating in nearly optically thin regions quickly dissipates its energy because the radiation diffuses away from the shock front. The RT fingers of colliding shells found in the previous pure-hydro simulations are not prominent in the rad-hydro model due to the absence of a reverse shock to drive RT instabilities. However, non-uniform heating and radiative cooling distort the spherical symmetry of the shell. We compare 2D simulations with different opacities in \Fig{fig:2d_comp1}. The density shell shows a different width and inhomogeneity for different opacities, which suggests that the dynamics of the shell depend on the opacity. The radiation bursts out from the shell surface, which is deformed by radiative cooling.
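A rough kinematic estimate (ignoring deceleration) indicates when the shell irregularities become observable, using the shell speed above together with the shell position at $\sim$250 days and the $r \sim 10^{16}$ cm photosphere radius reported for the multidimensional runs:

```python
DAY = 86400.0   # s

v_shell = 2e8   # cm s^-1, P2 shell velocity quoted above
r_shell = 8e15  # cm, shell position at ~250 days (2D/3D comparison)
r_photo = 1e16  # cm, photosphere radius / outer boundary of the box

# Time for the shell to coast from its ~250-day position to the photosphere:
extra_days = (r_photo - r_shell) / v_shell / DAY
print(f"shell reaches the photosphere ~{extra_days:.0f} days after day 250")
```

The shell thus coasts to the photosphere roughly 120 days later, broadly consistent with the late-time, viewing-angle-sensitive phase of the LCs described below.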
If the photosphere gets close to the emitting surface, the radiation becomes inhomogeneous and anisotropic, as shown in \Fig{fig:2d_flux}, and makes PPISN LCs sensitive to viewing angles.

\begin{figure*}
\begin{center}
\includegraphics[width=.8\textwidth]{figs/f5}
\caption{ Evolution of the gas density and velocity of the $\kappa = 0.2$ model in 2D. Panels (a), (b), (c), and (d) show the density and velocity at snapshots of 0, 100, 200, and 300 days, respectively. A dense shell forms, and its shape deviates from spherical symmetry with small-scale irregular structures. This shell travels at a velocity of $\sim 2 \times 10^8$ cm sec$^{-1}$. \lFig{fig:2d_den}}
\end{center}
\end{figure*}

\begin{figure*}[h]
\begin{center}
\includegraphics[width=\textwidth]{figs/f6_a}
\includegraphics[width=\textwidth]{figs/f6_b}
\caption{ Density profiles of the rad-hydro and pure-hydro runs at 50 and 256 days after the P3 eruption. In the rad-hydro model ($\kappa = 0.2$), the shock front does not produce a visible density discontinuity. Fluid instabilities are also smeared out in the rad-hydro model. At 256 days, the pure-hydro results show visible RT fingers and the shock front (the outer green arc); however, these features are invisible in the rad-hydro run. \lFig{fig:comp2d}}
\end{center}
\end{figure*}

\subsection{2D Light Curves}

We show the 2D LCs in \Fig{fig:2d_lc}. The 2D LCs differ from the 1D LCs in several respects. The peaks of the LCs from simulations with $\kappa =$ 0.1, 0.2, and 0.4 all exceed $10^{43}$ {\ensuremath{\mathrm{erg}}}\ sec$^{-1}$. They are slightly more luminous than their 1D counterparts, and the second peak is smoothed out in 2D. Once the merged shell approaches the boundary of the simulated box (the assumed location of the photosphere), we start to see variations in the LCs that reflect the wiggly structure of the shell, which modulates the radiation flux and the resulting LCs.
If we use realistic opacities, these angle-dependent effects may appear at earlier times.

\begin{figure}[h]
\begin{center}
\includegraphics[width=\columnwidth]{figs/f7}
\caption{ 2D density profiles of $\kappa = 0.1$ and $\kappa = 0.2$ at 300 days after the collisions. Both show the formation of an irregular dense shell. The shell in the $\kappa = 0.1$ run is slightly thicker than that in the $\kappa = 0.2$ run. \lFig{fig:2d_comp1}}
\end{center}
\end{figure}

\begin{figure}[h]
\begin{center}
\includegraphics[width=\columnwidth]{figs/f8}
\caption{ 2D radiation flux at 212 days. The hot spots of radiation flux appear right in front of the colliding regions. These spots are randomly distributed and suggest inhomogeneous emission of the PPISNe. \lFig{fig:2d_flux}}
\end{center}
\end{figure}

\begin{figure}[h]
\begin{center}
\includegraphics[width=\columnwidth]{figs/f9}
\caption{ LCs of the 2D models, calculated from rad-hydro models with $\kappa = 0.1$, $\kappa = 0.2$, and $\kappa = 0.4$. For each model, there are ten LCs from different viewing angles of $\theta = 0 - 90$ degrees, shown in graduated colors. The peak luminosity is around $1 - 3 \times 10^{43}$ {\ensuremath{\mathrm{erg}}}\,sec$^{-1}$ and lasts for $50 - 120$ days. The sharp double-peak features seen in 1D are smoothed out in 2D. The late-time luminosity becomes more sensitive to the viewing angle. The luminosity drops quickly when the colliding shell passes $r \sim 10^{16}$ cm. \lFig{fig:2d_lc} }
\end{center}
\end{figure}

\section{3D Results}
\subsection{3D PPISN Evolution}

The 1D and 2D results provide a quantitative understanding of the PPISNe. We now compare the more realistic 3D model with the 1D and 2D results discussed above. \Fig{fig:3d_flux} shows the 3D radiation flux originating from the irregular surface, which is similar to its 2D counterpart. It suggests that non-uniform and anisotropic radiation flux is emitted from this surface.
Large flux sites (hot spots) come from the dents of the shell, which reflect the impact of radiative cooling on the dynamics of the colliding shells. Our results also suggest that the region near the core is very turbulent. The turbulence may not affect the observational signatures of the colliding shells, but it may affect the subsequent core evolution. Finally, we compare the results from the 2D and 3D rad-hydro models and find that the 3D shell is thicker and smoother than the shell of the 2D model, as shown in \Fig{fig:2d_3d}.

\subsection{3D Light Curves}

We show the 3D LCs from different viewing angles in \Fig{fig:3d_lc}. The peak luminosity of the 3D LCs is about $1.8 - 2.3 \times 10^{43}$ erg sec$^{-1}$, and the second peak in the LC is also smoothed out. The peak of the light curve lasts about 100 days; then it enters a plateau region of $L \sim 2-3 \times 10^{42}$ erg sec$^{-1}$. When the photosphere is close to the colliding shell, the radiation flux becomes more sensitive to the viewing angles.

\begin{figure}[h]
\begin{center}
\includegraphics[width=\columnwidth]{figs/f10}
\caption{ Density and radiation vector field from the 3D model at 260 days. A dense shell again forms in the 3D model. The radiation field arrows are scaled by the flux magnitude. The larger flux tends to originate from the dented regions of the shell. \lFig{fig:3d_flux}}
\end{center}
\end{figure}

\begin{figure}[h]
\begin{center}
\includegraphics[width=\columnwidth]{figs/f11}
\caption{ Comparison of density structures from the 2D and 3D rad-hydro simulations at 250 days. The 2D shell is thinner than the 3D one, and there are fewer fine structures in 3D. The positions of the 2D and 3D shells are both at $r \approx 8\times10^{15}$ cm. \lFig{fig:2d_3d}}
\end{center}
\end{figure}

\begin{figure}[h]
\begin{center}
\includegraphics[width=\columnwidth]{figs/f12}
\caption{ LCs from the 3D model.
Different colors represent 25 LCs from different viewing polar angles $\theta = 0 - 180^{\circ}$ and azimuthal angles $\phi = 0 - 360^{\circ}$. The peak luminosity is $\sim 2\times 10^{43} \, {\ensuremath{\mathrm{erg}}}\,\mathrm{sec}^{-1}$ and lasts about 100 days. The late-time LCs start to vary greatly as the shell approaches our photosphere at $r \sim 10^{16}$ cm. \lFig{fig:3d_lc}}
\end{center}
\end{figure}

\section{Discussion}

The 1D evolution shows the formation of a large density spike, which turns into a deformed shell in 2D and an irregular surface in 3D. Regardless of the dimensionality of the rad-hydro simulations, a thin dense shell eventually forms due to radiative cooling. However, the multidimensional simulations suggest that this shell deviates from spherical symmetry. To understand the intrinsic properties of the shells, we plot the 1D angle-averaged density profiles from the multi-D rad-hydro models and compare them with the 1D results in \Fig{fig:den_comp}. The 1D density spike forms regardless of the opacities chosen; $\Delta_S$ in the density spike easily exceeds 1000 within a few days after the launch of P3. The spike breaks up into multiple bumps of $\Delta_S \sim 100$ in 2D and 3D. These bumps also imply inhomogeneous and anisotropic radiation emission of PPISNe in multi-D. Finer structures are found in the bumps of the $\kappa = 0.2$ run than in those of $\kappa = 0.1$. A comparison of the same $\kappa = 0.2$ model in 2D and 3D shows that the 3D run develops a thicker shell with fewer fine structures. The total kinetic energy of P1 and P2 is $\sim 6 \times 10^{50}$ erg. Based on the LC results of the 3D model, the total radiation emitted until the shock passes $r \sim 10^{16}$ cm is about $1.64 \times 10^{50}$ erg. Therefore, about $27\%$ of the kinetic energy of P1+P2 is converted into radiation. The radiation released from the colliding shells can be estimated by treating the collision as perfectly inelastic (momentum conservation). Consider pulses A and B with masses and velocities $M_a$, $V_a$ and $M_b$, $V_b$, respectively.
If the thermal energy generated during the collision is fully converted into radiation, the amount of radiation can be expressed as
\begin{equation}
E_r \sim \frac{1}{2}\frac{M_a M_b}{M_a+M_b}(V_a-V_b)^2.
\label{eq:ka}
\end{equation}
Estimating the masses of P2+P3 and P1 to be $\sim 5 {\ensuremath{{M}_{\odot} }}$ and $\sim 50 {\ensuremath{{M}_{\odot} }}$ with $V_a - V_b \sim 2\times10^8$ cm sec$^{-1}$, we obtain a radiation energy of $\sim 1.86\times 10^{50}$ erg, which is consistent with our results. We also compared the mixing of different elements in the rad-hydro and pure-hydro simulations. The mixing in the rad-hydro runs shows distinctive features from those found in the previous multidimensional pure-hydro simulations. PPISNe eject heavy elements only up to {\ensuremath{^{28}\mathrm{Si}}}. The iron-group elements frequently seen in other types of SNe, such as {\ensuremath{^{56}\mathrm{Ni}}}, are not seen in PPISNe. In the pure-hydro simulation, we found the mixing to be distributed over a region of $\sim 2-3\times 10^{15}$ cm. However, in the multidimensional rad-hydro runs, the mixing of {\ensuremath{^{12}\mathrm{C}}}\ and {\ensuremath{^{16}\mathrm{O}}}\ occurs in a thin shell of width $<5 \times 10^{14}$ cm, and {\ensuremath{^{12}\mathrm{C}}}/{\ensuremath{^{16}\mathrm{O}}}\ are dredged up to the shock front, as shown in \Fig{fig:mixing}. The unique spectral signature of PPISNe is thus richness in {\ensuremath{^{12}\mathrm{C}}}\ and {\ensuremath{^{16}\mathrm{O}}}\ but a deficiency in {\ensuremath{^{28}\mathrm{Si}}}\ and {\ensuremath{^{56}\mathrm{Fe}}}. Our results suggest that this rapid mixing in the thin shell should also be reflected in the spectral evolution around the peak of the LC.

\begin{figure}[h]
\begin{center}
\includegraphics[width=\columnwidth]{figs/f13}
\caption{ Angle-averaged density profiles of the 1D, 2D, and 3D rad-hydro runs at 250 days (offset for clarity).
As the dimensionality increases, the density spike in 1D transforms into a broader and noisier bump in 2D, and it is even smoother in 3D. The 2D results suggest that a higher opacity makes the shell thinner with finer structures. \lFig{fig:den_comp}}
\end{center}
\end{figure}

\begin{figure}[h]
\begin{center}
\includegraphics[width=\columnwidth]{figs/f14}
\caption{ Angle-averaged elemental and velocity profiles of the 2D rad-hydro runs at 135 days. The dark region indicates the position of the forward shock as well as the thin shell. Elements in the post-shock region (gray shade) have been homogeneously mixed. \lFig{fig:mixing}}
\end{center}
\end{figure}

\subsection{Toward Realistic Observational Signatures of PPISNe}

The constant opacities in our simulations do not represent realistic situations. Opacities may vary dramatically along the photon trajectories due to sudden changes in the ionization states of the gas. To provide sophisticated predictions of PPISNe, the first task is to improve the opacities of the simulations by calculating the electron fraction with multi-species Saha equations or by using comprehensive opacity tables. Once realistic opacities become available, we can use multi-group rad-hydro simulations to obtain color LCs, which are very useful for probing the nature of PPISNe and guiding observational strategies. Due to the lower accuracy of the FLD method in optically thin regions, new radiation transfer schemes, such as the Variable Eddington Tensor method \citep[VET;][]{vet1, vet2}, may address the radiative fluid instabilities in the optically thin limit more accurately. In this paper, we present only one PPISN, from a progenitor star of 110 {\ensuremath{{M}_{\odot} }}. Next, we plan to carry out a grid of models for different progenitor stars based on \cite{Woo17}. Depending on the energy and duration of the eruptions, PPISNe can provide observational signatures from radio to UV, and they will be exciting targets for current and upcoming observatories.
However, the above improvements increase the technical difficulty of running rad-hydro simulations and raise the computational expense to tens of millions of CPU hours. The planned simulations will push the envelope of state-of-the-art computational astrophysics, and they will become feasible in the years to come.

\section{Conclusion}

We have presented 1D, 2D, and 3D models of a PPISN from a 110 {\ensuremath{{M}_{\odot} }}\ solar-metallicity star using rad-hydro simulations with \texttt{CASTRO}. Earlier 1D rad-hydro results \citep{Woo07, Woo17} suggested the formation of a large density spike that implies fluid instabilities, and follow-up 2D pure-hydro simulations \citep{Che14b} showed the development of RT instabilities. Our 1D rad-hydro simulations resolve the structure of the spike found in previous studies, and the resulting LCs are smoother. Both resolution and opacity can affect the ejecta dynamics and emission. There is a second peak feature in the 1D LCs, which comes from the subsequent collision of the two thin shells of P2 and P3. The 1D density spikes break up into multiple bumps, and the second peak feature of the LC is smoothed out in the 2D and 3D runs. The results of the 2D rad-hydro runs are different from their pure-hydro counterparts: the forward-shock feature and the reverse-shock-driven mixing are less prominent than those in previous 2D pure-hydro models. Radiative cooling dissipates the kinetic energy of the pulses' forward shocks and transforms them into thin shells, resulting in a weak reverse shock and weak mixing. An irregular shell then emits inhomogeneous and anisotropic flux that powers the LCs. We conclude by stating the characteristics of a representative PPISN of a 110 {\ensuremath{{M}_{\odot} }}\ star.
The peak of the bolometric LCs is about $8-20 \times 10^{42}$ erg sec$^{-1}$ with a duration of $100 - 200$ days, with a second peak possibly appearing $100 - 150$ days after the first peak, although it may appear only as a broad bump in the LCs. There is a long plateau phase of $2-3\times 10^{42}$ {\ensuremath{\mathrm{erg}}}\ sec$^{-1}$ lasting for $150-200$ days, and it is possibly sensitive to the viewing angle. About $27\%$ of the kinetic energy of P1 and P2 is converted into radiation. The ejecta are dominated by {\ensuremath{^{4} \mathrm{He}}}, {\ensuremath{^{12}\mathrm{C}}}, and {\ensuremath{^{16}\mathrm{O}}}. Luminous PPISNe provide a powerful probe of the early universe and also open new windows on massive star formation in both the local and primordial universe. With the advancement of models and new data from upcoming transient facilities, such as the {\it Zwicky Transient Facility} (ZTF) and the {\it Large Synoptic Survey Telescope} (LSST), we shall soon gain a better understanding of PPISNe. Our multidimensional radiation transport simulations shed light on the important characteristics of PPISNe. In future papers, we will build new models, including new radiation schemes, realistic opacities, and a large grid of progenitors, to obtain sophisticated light curves and spectra of PPISNe.

\acknowledgements We thank the members of CCSE at LBNL for help with \texttt{CASTRO}{}. We also thank Stan Woosley and Alexander Heger for providing the \texttt{KEPLER}\ models. KC thanks Dan Kasen, Ann Almgren, Lars Bildsten, and Ken Nomoto for many useful discussions. This research is supported by an EACOA Fellowship and by the Ministry of Science and Technology, Taiwan, R.O.C., under Grant No. MOST 107-2112-M-001-044-MY3. KC thanks the hospitality of the Aspen Center for Physics, which is supported by NSF PHY-1066293, and the Kavli Institute for Theoretical Physics, which is supported by NSF PHY-1748958.
Numerical simulations are supported by the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231; the Center for Computational Astrophysics (CfCA) at National Astronomical Observatory of Japan (NAOJ); and the TIARA Cluster at the Academia Sinica Institute of Astronomy and Astrophysics (ASIAA).
\section{In-hand Object Scanning with Aria Glasses~\cite{aria_pilot_dataset}}
\label{sec:aria}

The recently introduced Aria glasses~\cite{aria_pilot_dataset} provide a first-person capture of the environment using cameras mounted on the glasses. A head-mounted camera provides an intuitive and simple way of scanning an object using both hands and has several applications in AR/VR. Here, we show that our method can be applied to reconstruct unknown objects from sequences captured using the Aria glasses. Figure~\ref{fig:aria_ex} shows the Aria glasses and an egocentric view of two hands manipulating an object from the YCB dataset. We first linearize the fish-eye images from the Aria sequence and use Detic~\cite{zhou2022detecting} to obtain the hand and unknown-object masks in the images. The object reconstruction result on the mustard bottle sequence captured using the Aria glasses, shown in Figure~\ref{fig:aria_res}, demonstrates that our proposed method can be applied to this in-hand scanning scenario as well.

\begin{figure}[b] \centering \includegraphics[trim=100 180 205 240,clip,width=1\linewidth]{figures/supplementary/aria_glasses.pdf} \caption{\textbf{Aria glasses.} Aria glasses~\cite{aria_pilot_dataset} provide egocentric views of the environment using cameras mounted on the glasses.} \label{fig:aria_ex} \end{figure}

\begin{figure*} \centering \includegraphics[trim=200 180 205 140,clip,width=1\linewidth]{figures/supplementary/aria_result.pdf} \caption{\textbf{In-hand object scanning with Aria glasses.} Our method can be applied to reconstruct (bottom left) objects and estimate their poses (bottom right) from RGB sequences captured using Aria glasses (top).
The different segments created by our approach are color-coded in the pose trajectory.} \label{fig:aria_res} \end{figure*}

\section{More Qualitative Results}

We show more qualitative results from the HO-3D sequences in Figure~\ref{fig:suppl_results}. The corresponding quantitative results are provided in Tables~1-3 of the main paper. Our method can reconstruct partially textured and texture-less objects such as the mustard bottle and mug in Figure~\ref{fig:suppl_results}. Fingers grasping the object are reconstructed on the mustard bottle sequence (second column of Figure~\ref{fig:suppl_results}) due to inaccurate hand masks and the static grasp pose of the hand throughout the sequence. Parts of the object that are always occluded by the hand, as in the mug sequence (third column of Figure~\ref{fig:suppl_results}), are also inaccurately reconstructed, as we do not assume any prior on the object shape.

\begin{figure*} \centering \includegraphics[page=1,trim=10 80 10 60,clip,width=0.9\linewidth]{figures/supplementary/suppl_results.pdf} \caption{\textbf{More qualitative results (reconstructed models and pose trajectories) on the HO3D dataset.} Left column: Our method and COLMAP obtain high-quality reconstructions on textured objects. Middle column: Our method manages to return a complete reconstruction of this partially textured object, while COLMAP fails to reconstruct the back. The fingers reconstructed as part of the mustard bottle are due to inaccurate hand masks. Third column: We achieve reasonable results on this very challenging texture-less object, on which COLMAP fails completely. We could not reconstruct the parts of the object that are always occluded by the hand.
The reconstruction quality of our method is similar to the quality obtained when using the ground-truth poses.} \label{fig:suppl_results} \end{figure*}

\section{More Implementation Details}

As discussed in Section~3.4.1 of the main paper, we divide the input RGB sequence into multiple overlapping segments, and incrementally reconstruct the object shape and estimate its pose in each segment. As reconstructing the object from every frame of the entire RGB sequence is not feasible, we first subsample the input RGB sequence and manually select the frame interval of the entire sequence on which we run our method. The interval is selected such that all parts of the object are visible during scanning. In Table~\ref{tab:ho3d_seq_info}, we provide the names of the sequences from the HO-3D dataset which are used for reconstruction, the chosen frame intervals, and the number of segments the RGB sequence is divided into by our method. Additionally, in Figure~\ref{fig:obj_area}, we show the frame interval on which the reconstruction is performed for two objects in the HO-3D dataset, along with the segment boundaries.

\begin{figure} \centering \begin{minipage}{0.33\linewidth} \subcaptionbox{Image} {\includegraphics[page=1,trim=100 200 620 120,clip,width=1\linewidth]{figures/supplementary/masks.pdf}} \end{minipage}~ \begin{minipage}{0.33\linewidth} \subcaptionbox{Object mask} {\includegraphics[page=1,trim=275 200 445 120,clip,width=1\linewidth]{figures/supplementary/masks.pdf}} \end{minipage}~ \begin{minipage}{0.33\linewidth} \subcaptionbox{Hand mask} {\includegraphics[page=1,trim=455 200 265 120,clip,width=1\linewidth]{figures/supplementary/masks.pdf}} \end{minipage} \caption{\textbf{Hand and object segmentation masks.} We obtain foreground masks from \cite{boerdijk2020learning} and hand masks from \cite{wu2021seqformer}.} \label{fig:hand_obj_masks} \end{figure}

\begin{figure} \centering \includegraphics[page=1,trim=160 120 120
100,clip,width=1\linewidth]{figures/supplementary/failure.pdf} \caption{\textbf{Failure scenarios.} Our method fails to estimate poses and reconstruct texture-less symmetrical objects (left) and thin objects (right).} \label{fig:limitations} \end{figure}

\begin{figure*}[t] \centering \begin{minipage}{0.7\linewidth} \subcaptionbox{Bleach bottle} {\includegraphics[page=1,trim=170 100 200 120,clip,width=1\linewidth]{figures/supplementary/area_curves.pdf}} \end{minipage} \begin{minipage}{0.7\linewidth} \subcaptionbox{Pitcher base} {\includegraphics[page=2,trim=175 110 235 140,clip,width=1\linewidth]{figures/supplementary/area_curves.pdf}} \end{minipage}~ \caption{\textbf{Object area curves and segment boundaries.} We show the segment boundaries for two objects (bleach bottle and pitcher base), which are calculated from the object area curves. In each segment, the incremental object reconstruction and pose tracking starts at the local maximum of the object area and ends at the local minimum.} \label{fig:obj_area} \end{figure*}

\begin{table}[t!] \footnotesize \begin{center} \begin{tabularx}{\columnwidth}{l Y Y Y Y} \toprule Object & Sequence ID & Start Frame ID & End Frame ID & No. of Segments\\ \midrule 3: cracker box &MC1 & 210 & 650 & 8\\ 4: sugar box &ShSu12 & 276 & 1552& 8\\ 6: mustard &SM2 & 300 & 576 & 4\\ 10: potted meat &GPMF12 & 70 & 334 & 4\\ 21: bleach &ABF14 & 686 & 960 & 4\\ 35: power drill &MDF14 & 270 & 772 & 3\\ 19: pitcher base &AP13 & 60 & 370 & 4\\ 11: banana &BB12 & 1100&1260 & 4\\ 25: mug &SMu1 & 702 &1236 & 5\\ \bottomrule \end{tabularx} \caption{\textbf{Sequence IDs and frame intervals chosen for reconstruction from the HO-3D dataset, and the number of segments created by our approach.}} \label{tab:ho3d_seq_info} \end{center} \end{table}

\section{Hand and Object Masks}

We show some object and hand masks used by our method in Figure~\ref{fig:hand_obj_masks}.
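Isolating the object mask from a combined foreground (hand + object) mask and a hand-only mask reduces to boolean array operations; a minimal sketch with synthetic masks (the actual masks come from the pre-trained segmentation networks cited below, not from this toy setup):

```python
import numpy as np

# Synthetic stand-ins for the two segmentation outputs:
# `foreground` marks hand+object pixels, `hand` marks hand-only pixels.
foreground = np.zeros((4, 6), dtype=bool)
foreground[1:4, 1:5] = True          # hand + object region (12 px)
hand = np.zeros((4, 6), dtype=bool)
hand[3, 1:5] = True                  # hand-only region (4 px)

# Object mask = dynamic foreground minus hand pixels:
object_mask = foreground & ~hand
print(foreground.sum(), hand.sum(), object_mask.sum())
```

The same subtraction applies per frame regardless of which networks produce the two input masks.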
We rely on the pre-trained network from \cite{boerdijk2020learning}, which segments the dynamic foreground from the static background and segments the hand and object as one class. We then obtain hand-only masks from \cite{wu2021seqformer} and combine them with the foreground mask from \cite{boerdijk2020learning} to obtain the hand and object masks. For the Aria sequences discussed in Section~\ref{sec:aria}, which do not have a static background, we use Detic~\cite{zhou2022detecting} to obtain the hand and object masks.

\section{Limitations}

Our method relies on the geometric or texture features of the object to incrementally reconstruct and estimate its pose within a segment. The proposed approach results in inaccurate pose estimates for texture-less and nearly symmetrical objects, such as the banana, leading to erroneous reconstructions, as shown in Figure~\ref{fig:limitations}. Our method also fails to estimate the poses of thin objects such as scissors, leading to inaccurate reconstructions, as also shown in Figure~\ref{fig:limitations}. We believe hand pose information can provide additional cues for estimating the object poses in more challenging scenarios, and this is a potential future direction for our approach.

\section{Introduction}

\begin{figure}[h!] \begin{center} \includegraphics[trim=80 190 540 115, clip, width=\columnwidth]{figures/teaser_v2.pdf}\vspace{-1.0ex} \captionof{figure}{Given an RGB sequence of a hand manipulating an unknown object, our method reconstructs the 3D shape and color of the object, even if the object surface is non-Lambertian or poorly textured. We first split the input sequence into multiple overlapping segments (two in this figure) in which the object can be reliably reconstructed and tracked. We then use the tracked object-camera relative poses to initialize a global optimization that produces the final model of the object and the camera pose trajectory.
} \label{fig:teaser} \end{center} \end{figure}

Reconstructing 3D models of unknown objects from multi-view images is a long-standing computer vision problem which has received considerable attention~\cite{han2019image}. With a single camera, a user can capture multiple views of an object by manually moving the camera around a static object~\cite{runz2020frodo, unisurf, yariv2020multiview} or by turning the object in front of the camera~\cite{rusinkiewicz-02-realtime3dmodelacquisition, Weise, Weise2, tzionas-iccv15-3dobjectreconstruction}. The latter approach is often referred to as \textit{in-hand object scanning} and is particularly convenient for reconstructing objects while providing complete $360^{\circ}$ views. In-hand object scanning also has several applications in AR/VR head-mounted devices such as Microsoft HoloLens or Meta Quest headsets. Recent 3D reconstruction methods rely on neural representations~\cite{park-cvpr19-deepsdf, mescheder2019occupancy, yariv2020multiview, unisurf, DVR, yariv2021volume}. In contrast to earlier reconstruction methods~\cite{hartley-00-multipleviewsgeometry}, these recent methods can provide an accurate dense 3D reconstruction even in non-Lambertian conditions and without any prior knowledge of the object shape. However, most of these methods assume that images are associated with known camera poses, typically obtained by Structure-from-Motion~(SfM) methods such as COLMAP~\cite{schonberger2016pixelwise}. Applying SfM methods to in-hand object scanning is problematic, as these methods require a sufficient number of distinct visual features and can thus handle only textured objects well. NeRF-based methods such as~\cite{barf, scnerf, nerfmm, neroic}, which simultaneously estimate the radiance field of the object and the camera poses without requiring initialization from COLMAP, are restricted to forward-facing camera captures.
As we experimentally demonstrate, these methods fail to converge if the images cover a larger range of viewpoints, which is typical for in-hand scanning. We propose a method for in-hand object scanning from an RGB image sequence with unknown camera-object relative poses. We rely on a neural representation that captures both the geometry and the appearance of the object and therefore enables reconstructing even poorly textured objects, as shown in Fig.~\ref{fig:teaser}. In contrast to most NeRF-based methods, we do not assume that the camera poses are available and instead simultaneously optimize both the object model and the camera trajectory. As global optimization over all input frames is prone to failure, we propose an incremental optimization approach. We start by splitting the sequence into carefully selected overlapping segments within which the optimization is likely to succeed. We then optimize our objective for incremental object reconstruction and pose tracking within each segment independently. The segments are then combined by aligning the poses estimated at the overlapping frames, and we finally optimize the objective globally over all frames of the input sequence to achieve a complete object reconstruction. We experimentally demonstrate that the proposed method is able to reconstruct the shape and color of both textured and challenging texture-less objects. We evaluate the method on the HO-3D~\cite{hampali-cvpr20-honnotate} and RGBD-Obj~\cite{wang_dataset} datasets and on newly captured sequences with challenging texture-less objects. We show that the proposed method achieves higher-quality reconstructions than COLMAP~\cite{schonberger2016pixelwise}, which fails to estimate the object poses in the case of poorly textured objects, and is on par with a strong baseline method which uses ground-truth object poses.
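The segment-combination step described above (aligning poses at overlapping frames) amounts to composing rigid transforms; a minimal sketch with hypothetical $4\times4$ camera-to-object pose matrices (the names and frame indices are illustrative, not from our implementation):

```python
import numpy as np

def align_segment(poses_a, poses_b, shared):
    """Map segment B's poses into segment A's frame, using a frame
    index `shared` that both segments contain."""
    # Rigid transform that makes B agree with A at the shared frame:
    T = poses_a[shared] @ np.linalg.inv(poses_b[shared])
    return {k: T @ P for k, P in poses_b.items()}

def rot_z(a):
    """Homogeneous 4x4 rotation about z by angle a (toy pose)."""
    c, s = np.cos(a), np.sin(a)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    return M

# Toy example: segment B equals segment A up to a global offset transform.
poses_a = {10: rot_z(0.3)}
offset = rot_z(1.1)
poses_b = {10: offset @ poses_a[10], 11: offset @ rot_z(0.5)}

aligned = align_segment(poses_a, poses_b, shared=10)
print(np.allclose(aligned[10], poses_a[10]))  # True: segments agree at the overlap
```

After this alignment, all segments share one object frame, giving the initialization for the global optimization over the full sequence.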
Our method also outperforms a very recent single-image-based object reconstruction method~\cite{cmu}, even though this method is trained on sequences of the same object. \section{Related Work} This section reviews previous work on in-hand scanning and general object reconstruction from color images. \subsection{In-Hand Object Scanning} Using an RGB-D sensor, several early in-hand scanning systems~\cite{rusinkiewicz-02-realtime3dmodelacquisition,Weise,Weise2,Weise3} rely on tracking and are able to recover the shape of small objects manipulated by hands. Later, \cite{tzionas-iccv15-3dobjectreconstruction} showed how to use the motion of the hand and its contact points with the object to add constraints useful to deal with texture-less and highly symmetric objects, while restricting the contact points to stay fixed during the scanning. Unfortunately, the requirement for an RGB-D sensor limits the applicability of these techniques. More recently, with the development of deep learning, several methods have shown that it is possible to infer the object shape from a single image~\cite{hasson-cvpr19-learningjointreconstruction,karunratanakul-20-graspingfield,cmu} after training on images of hands manipulating an object with annotations of the object pose and shape. Given the fact that the geometry is estimated from a single image, the results are impressive. However, the reconstruction quality is still limited, especially because these methods do not see the back of the object and cannot provide a good prediction of the appearance of the object for all possible viewpoints. In this paper, we propose an approach for in-hand object scanning which estimates the shape and color of a completely unknown object from a sequence of RGB images, without any pre-training on annotated images.
\subsection{Reconstruction from Color Images} Recovering the 3D geometry of a static scene and the camera poses from multiple RGB images has a long history in computer vision~\cite{faugeras-93-book,hartley-00-multipleviewsgeometry,snavely2006photo,furukawa2009accurate}. Structure-from-Motion (SfM) methods are now very robust and accurate; however, they are limited to textured scenes, which is not the case for many common objects. In the past few years, with the emergence of neural implicit representations as effective means of modeling 3D geometry and appearance, many methods~\cite{DVR, unisurf, nerf, yariv2020multiview, yariv2021volume} reconstruct a 3D scene by optimizing a neural implicit representation from multi-view images by minimizing the discrepancy between the observed and rendered images. These methods achieve impressive reconstructions on many scenes, but they still need near-perfect camera poses, which are typically estimated by Structure-from-Motion methods. Several NeRF-based methods have attempted to retrieve the camera poses while reconstructing the scene. Methods such as NeRF-\,\!-~\cite{nerfmm}, SCNeRF~\cite{scnerf} and BARF~\cite{barf} show that camera poses can be estimated even when initialized with the identity matrix or random poses while simultaneously estimating the radiance field. However, these methods are shown to converge only on forward-facing scenes and require coarse initialization of poses for $360^{\circ}$ captures as in in-hand object scanning. More recently, SAMURAI~\cite{samurai} used manual rough quadrant annotations for coarse-level pose initialization and showed that object shape and material can be recovered along with accurate camera poses. In this work, we propose to estimate the camera-object relative pose from an RGB image sequence and reconstruct the object shape without any prior information about the object or its poses.
Unlike previous methods, we rely on the temporal information and incrementally reconstruct the object shape and estimate its pose. \section{Proposed Method} \label{sec:method} In this section, we first describe the considered setup and how we represent the object with a neural representation. Then, we derive an objective function for estimating the object reconstruction and the camera pose trajectory, and explain how we optimize this function. \subsection{In-Hand Object Scanning Setup} \label{sec:reconstruction_setup} \paragraph{Input and Output.} Our input is a sequence of RGB images showing an unknown rigid object being manipulated by one or two hands in the field of view of the camera.
The output is a color 3D model of the manipulated object. The input sequence is captured by an egocentric camera or a camera mounted on a tripod. In both cases, the relative pose between the camera and the object is not known for any of the input images. In order to achieve full reconstruction of the object, the image sequence is assumed to show the object from all sides. \customparagraph{Available Object and Hand Masks.} The segmentation masks of the object and hand are assumed to be available for all input images. In our experiments, we obtain the masks by off-the-shelf networks -- either by Detic~\cite{zhou2022detecting} which can segment unknown objects in a single RGB image, or by DistinctNet~\cite{boerdijk2020learning} which can segment an unknown moving object from a pair of images with a static background. We additionally use segmentation masks from SeqFormer~\cite{wu2021seqformer} to ignore pixels of hands that manipulate the object. \customparagraph{Phong Reflection Model and Distant Lights.} The object to reconstruct is assumed to be solid (\ie, non-translucent), and we model the reflectance properties of the object surface with the Phong reflection model~\cite{HughesDamEtAl13} and assume that the light sources are far from the object and the camera. Under the Phong model, the observed color at a surface point depends on the viewing direction, the surface normal direction, and the light direction. If the light sources are far, the incoming light direction can be approximated to remain unchanged, which allows us to use the standard neural radiance field~\cite{nerf} to model the object appearance. This assumption is reasonable as rotation is the primary transformation of the object during in-hand manipulation--on the HO-3D dataset, which contains sequences of a hand manipulating objects, the maximum standard deviation of the object's 3D location is only 7.9\,cm.
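To give a sense of scale for the distant-light approximation, the following back-of-the-envelope sketch (plain Python; the 2\,m light distance is an illustrative assumption, not a dataset value) bounds the change in incoming light direction caused by a 7.9\,cm object displacement:

```python
import math

def max_light_direction_change_deg(displacement_m, light_distance_m):
    # Worst case: the object moves perpendicular to the light direction,
    # so the incoming light direction tilts by atan(displacement / distance).
    return math.degrees(math.atan2(displacement_m, light_distance_m))

# 7.9 cm displacement (std. dev. on HO-3D) vs. a hypothetical light 2 m away:
angle = max_light_direction_change_deg(0.079, 2.0)
print(f"light direction changes by at most ~{angle:.1f} degrees")
```

Under these assumptions the incoming light direction varies by only a couple of degrees, supporting the approximation of a fixed light direction.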
\subsection{Object Representation} \label{sec:object_representation} \noindent\textbf{Implicit Neural Fields.} As in UNISURF~\cite{unisurf}, we represent the object geometry by an occupancy field and the object appearance by a color field, with each realized by a neural network. The occupancy field is defined as a mapping: $o_\theta({\bf x}): \mathbb{R}^3 \rightarrow [0,1]$, where $\theta$ represents the parameters of the network and ${\bf x}$ is a 3D point in the object coordinate system. The object surface is represented by 3D points ${\cal S} = \{{\bf x}\,|\, o_\theta({\bf x})=0.5\}$, and the surface mesh can be recovered by the Marching Cubes algorithm~\cite{mcubes}. The color field is a mapping: $c_\theta({\bf x}; {\bf d}, {\bf n}, {\bf h}): \mathbb{R}^3 \times \mathbb{R}^3 \times \mathbb{R}^3 \times \mathbb{R}^n \rightarrow \mathbb{R}^3$ that represents the color at a surface point ${\bf x} \in {\cal S}$ and is conditioned on the viewing direction ${\bf d}$ (\ie, the direction from the camera center to the point ${\bf x}$), the normal vector ${\bf n}$ at ${\bf x}$, and the geometry feature ${\bf h}$ at ${\bf x}$ which has $n$ dimensions and is extracted from the occupancy field network. The color for a particular pixel/ray ${\bf r}$ is defined as $\hat{C}_i(\textbf{r}) = c_\theta({\bf x}_s)$, where ${\bf x}_s$ is the closest point on the object surface along ray ${\bf r}$ (the object is assumed non-translucent). To simplify the notation, we include in $\theta$ both the parameters of the occupancy field and of the color field as these two networks are optimized together. \customparagraph{Rendering.} As in~\cite{unisurf}, the rendered color at a pixel in a frame $i$ is obtained by integrating colors along the ray $\textbf{r}$ originating from the camera center and passing through the pixel. 
The continuous integration is approximated as: \begin{gather} \hat{C}_i(\textbf{r}) = \sum_{k=1}^M \gamma({\bf x}_k) c_\theta({\bf x}_k; {\bf d}_k, {\bf h}_k, {\bf n}_k) \\ \text{with } \gamma({\bf x}_k) = o_\theta({\bf x}_k)\prod_{l<k}\big(1-o_\theta({\bf x}_l)\big) \> , \label{eq:rend_unisurf} \end{gather} where $\{{\bf x}_k\}$ are $M$ samples along the ray $\textbf{r}$. The alpha-blending coefficient $\gamma({\bf x}_k)$, defined as in~\cite{unisurf}, is $1$ if point ${\bf x}_k$ is on the visible surface of the object and $0$ otherwise. \subsection{Reconstruction Objective} In UNISURF~\cite{unisurf}, the network parameters $\theta$ are estimated by solving the following optimization problem: \begin{gather} \theta^* = \argmin_\theta \sum_i \sum_{\textbf{r} \in {\cal R}_i} \mathcal{L}_{\col}^i(\textbf{r}) \> ,\\ \mathcal{L}_{\col}^i(\textbf{r}) = \lvert\lvert \hat{C}_i(\textbf{r}) - C_i(\textbf{r})\rvert\rvert \>, \label{eq:loss_col} \end{gather} where $\mathcal{L}_{\col}^i(\textbf{r})$ is the photometric loss measuring the difference between the rendered color $\hat{C}_i(\textbf{r})$ and the observed color $C_i(\textbf{r})$ at a pixel intersected by the ray $\textbf{r}$ in the frame $i$, and ${\cal R}_i$ is the set of rays sampled in the frame $i$. In our case, we additionally optimize the camera poses: \begin{align} \label{eq:complete_loss} \theta^*, \{\mathcal{T}_i^*\} = \argmin_{\theta, \{\mathcal{T}_i\}} \sum_i \Big( \sum_{\textbf{r}\in {\cal H}_i} \mathcal{L}_{\col}^i(\textbf{r})+\!\!\!\sum_{\textbf{r}\in {\cal M}_i} \mathcal{L}_{\seg}^i(\textbf{r})\Big) \> , \end{align} where $\mathcal{T}_i$ is the camera pose of frame $i$ expressed by a rigid transformation from the camera coordinate system to the object coordinate system. ${\cal H}_i$ is the set of object rays in frame $i$, and ${\cal M}_i$ is the set of object and background rays in frame $i$. We only use rays passing through the object and background pixels and ignore the hand pixels.
The term $\mathcal{L}_{\seg}^i(\textbf{r})$ is a segmentation loss for ray $\textbf{r}\in{\cal M}_i$: \begin{align} \label{eq:seg_loss} \mathcal{L}_{\seg}^i(\textbf{r}) &= \BCE\Big(\max_k~\{o_\theta({\bf x}_k)\}, S_i(\textbf{r}) \Big) \> , \end{align} where $\BCE(\cdot)$ is the binary cross-entropy loss, and $S_i(\textbf{r})$ is the object mask value for ray $\textbf{r}$ in the frame $i$ (the mask is obtained as described in Sec.~\ref{sec:reconstruction_setup}). The value of $S_i(\textbf{r})$ is 1 if the pixel corresponding to ray $\textbf{r}$ lies in the provided object mask, and 0 otherwise. The term $\max_k~\{o_\theta({\bf x}_k)\}$ is the maximum occupancy along the ray $\textbf{r}$ according to the estimated occupancy field $o_\theta(.)$, and is expected to be 1 if $\textbf{r}$ intersects the object and 0 otherwise. \subsection{Optimization} \label{sec:opt_obj} Directly optimizing Eq.~\eqref{eq:complete_loss} is prone to fail. As we show in Sec.~\ref{sec:experiments}, a random (or a fixed) initialization of poses followed by an optimization procedure similar to the one used in BARF~\cite{barf} leads to degenerate solutions. Instead, we propose an incremental optimization approach which starts by splitting the sequence into carefully selected overlapping segments, within which the optimization is more likely to succeed (Sec.~\ref{sec:splitting}). We optimize the objective in each segment by incremental frame-by-frame reconstruction and tracking, with the objective being extended by additional loss terms to stabilize the tracking (Sec.~\ref{sec:segment_tracking}). Then, we merge the segments by aligning poses estimated at the overlapping frames (Sec.~\ref{sec:stitch}), and finally optimize the objective globally over all frames of the sequence (Sec.~\ref{sec:stitch}). 
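To make the rendering and loss definitions concrete, here is a minimal numerical sketch (plain Python with scalar ``colors'' for brevity; an illustration, not the actual implementation) of the alpha-blending weights of Eq.~\eqref{eq:rend_unisurf} and the segmentation loss of Eq.~\eqref{eq:seg_loss} for a single toy ray:

```python
import math

def blend_weights(occupancies):
    # gamma(x_k) = o(x_k) * prod_{l<k} (1 - o(x_l))
    weights, transmittance = [], 1.0
    for o in occupancies:
        weights.append(o * transmittance)
        transmittance *= (1.0 - o)
    return weights

def render_color(occupancies, colors):
    # \hat{C}(r) = sum_k gamma(x_k) * c(x_k)
    return sum(w * c for w, c in zip(blend_weights(occupancies), colors))

def seg_loss(occupancies, mask_value):
    # BCE between the maximum occupancy along the ray and the mask value.
    p = min(max(max(occupancies), 1e-7), 1.0 - 1e-7)
    return -(mask_value * math.log(p) + (1 - mask_value) * math.log(1 - p))

# Toy ray: the 3rd sample sits on the surface (occupancy close to 1).
occ = [0.0, 0.1, 0.95, 0.9]
col = [0.2, 0.4, 0.8, 0.1]
print(render_color(occ, col))  # dominated by the 3rd sample's color (0.8)
print(seg_loss(occ, 1))        # small: the ray does hit the object
```

Note how the weights $\gamma({\bf x}_k)$ concentrate on the first sample with high occupancy, and how the segmentation loss is small only when the maximum occupancy agrees with the mask.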
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/method/sequence_segmentation.pdf} \caption{\textbf{Splitting a sequence into easy-to-track segments.} The segment boundaries are defined at frames with locally maximal or minimal area of the object mask (the start and the end of each segment is shifted by a few frames from the extremum to make the segments overlap). Note that we can track backwards in time, from a local maximum to a local minimum.} \label{fig:seg_area} \end{figure} \subsubsection{Input Sequence Segmentation} \label{sec:splitting} We observed in our early experiments that frame-to-frame tracking is prone to fail when previously observed parts of the object start disappearing and new parts start appearing. This is not surprising as there is no 3D knowledge about the new parts yet, and the current reconstruction of the object is disappearing and cannot be used to track these new parts. We therefore propose to split the sequence into segments so that tracking on each segment is unlikely to drift much. How can we detect when new parts are appearing? We observe that this can be done based on the apparent area of the object: Under the assumption that the distance of the object to the camera and the occlusions by the hand do not change much, large parts of the object disappear when the apparent area goes through a minimum. This is illustrated in Figure~\ref{fig:seg_area}. As mentioned earlier, we obtain a mask of the object by segmenting the image, so we can easily compute its apparent area. We therefore split the input sequence into multiple segments such that each segment starts at a frame where the area of the object mask reaches a local maximum and ends at a frame where it reaches a local minimum. The start and the end of each segment is shifted by a few frames from the extremum to introduce overlaps with the neighboring segments (the overlaps are used in Sec.~\ref{sec:stitch} to merge the estimated per-segment pose trajectories). 
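The splitting heuristic can be sketched as follows (a simplified illustration assuming a pre-computed, noise-free mask-area curve; in practice the area signal would need smoothing, and the overlap length is a hypothetical parameter):

```python
def split_by_mask_area(areas, overlap=2):
    """Split frame indices into segments bounded by local extrema of the
    object-mask area, shifting each boundary by `overlap` frames so that
    neighboring segments share frames."""
    # Interior indices where the area curve changes direction (max or min).
    extrema = [0] + [
        i for i in range(1, len(areas) - 1)
        if (areas[i] - areas[i - 1]) * (areas[i + 1] - areas[i]) < 0
    ] + [len(areas) - 1]
    segments = []
    for a, b in zip(extrema, extrema[1:]):
        segments.append((max(a - overlap, 0), min(b + overlap, len(areas) - 1)))
    return segments

areas = [9, 10, 8, 6, 4, 5, 7, 9, 8, 6]   # toy apparent-area curve
print(split_by_mask_area(areas))           # overlapping (start, end) segments
```

Each returned segment runs between consecutive extrema of the apparent area, so tracking within a segment never crosses a point where large parts of the object disappear.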
An additional advantage of this approach is that tracking a segment starts with a point of view where the object is relatively large in the image, which facilitates bootstrapping the tracking, especially thanks to our shape regularization loss. \subsubsection{Per-Segment Optimization} \label{sec:segment_tracking} Within each segment, we iteratively optimize the following objective on a progressively larger portion of the segment, allowing us to incrementally reconstruct the object and track its pose. The $T$ frames in a segment are denoted by the set $\{{\cal S}_i\}_{i=1}^T$. Over the course of the optimization, the index $t$ of the currently considered frame progresses from the first to the last frame of the segment, and for each step we solve: \begin{align} \label{eq:inc_loss} &\theta^*, \{\mathcal{T}_i^*\}_{i=1}^t = \argmin_{\theta, \{\mathcal{T}_i\}_{i=1}^t} \quad \sum_{i=1}^t \sum_{\textbf{r}\in{\cal H}_i} \mathcal{L}_\col^i (\textbf{r}) + \text{...} \\ & \sum_{i=1}^t \sum_{\textbf{r}\in{\cal M}_i} \Big(\mathcal{L}_\seg^i(\textbf{r}) + \mathcal{L}_\opf^i(\textbf{r}) + \mathcal{L}_\reg^i(\textbf{r})\Big) \;+ \sum_{i=1}^{t-1}\sum_{\textbf{r}\in{\cal M}_i}\mathcal{L}_\dep^i(\textbf{r}) \> . \nonumber \end{align} The terms $\mathcal{L}_\col^i$ and $\mathcal{L}_\seg^i$ are the color and mask losses defined in Eq.~\eqref{eq:complete_loss}, $\mathcal{L}_\opf^i$ is a loss based on optical flow that provides constraints on the poses, $\mathcal{L}_\reg^i$ is a shape regularization term that prevents degenerate object shapes, and $\mathcal{L}_\dep^i$ is a synthetic-depth loss that stabilizes the tracking. ${\cal H}_i$ is the set of rays going through pixels on the object in frame $i$, and ${\cal M}_i$ the set of rays going through pixels on the object or the background in frame $i$. More details on the loss terms $\mathcal{L}_\opf^i, \mathcal{L}_\reg^i$ and $\mathcal{L}_\dep^i$ are provided later in this section.
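The index structure of Eq.~\eqref{eq:inc_loss} can be summarized with a small sketch (an illustrative schedule only; the optimizer steps and the actual loss evaluations are omitted, and the loss names are shorthand):

```python
def per_segment_schedule(num_frames):
    """Illustrative schedule for the incremental objective: at step t, the
    color, mask, optical-flow and regularization losses are summed over
    frames 1..t, while the synthetic-depth loss only covers frames 1..t-1."""
    steps = []
    for t in range(1, num_frames + 1):
        frames = list(range(1, t + 1))
        losses = {"col": frames, "seg": frames, "opf": frames, "reg": frames,
                  "dep": frames[:-1]}  # depth loss excludes the newest frame
        steps.append(losses)
    return steps

for step in per_segment_schedule(3):
    print(step)
```

The schedule makes explicit that every step re-optimizes all frames seen so far, while the synthetic-depth term always lags one frame behind the current one.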
The network parameters $\theta$ are initialized by their estimate from the previous iteration $t-1$. The camera pose ${\cal T}_t$ for $t>1$ is initialized using a second-order motion model applied to the previous poses $\{{\cal T}_i\}_{i=0}^{t-1}$. The first camera pose ${\cal T}_0$ is initialized to a fixed distance from the origin of the object coordinate system and oriented such that the image plane faces the origin. At each iteration, we sample a fixed percentage of rays from the new frame (set to $15\%$ empirically) and the rest from the previous frames. \customparagraph{Optical Flow Loss.} This term provides additional constraints on the camera poses and is defined as: \noindent \resizebox{0.94\linewidth}{!}{ \begin{minipage}{\linewidth} \vspace{-0.5mm} \begin{eqnarray} \mathcal{L}_\opf^i(\textbf{r}) = \sum_k \gamma({\bf x}_k) \big( \pi_i({\bf x}_k) - \pi_{i\text{-}1}({\bf x}_k) - \OF_i(\pi_{i\text{-}1}({\bf x}_k)) \big)^2 \> , \end{eqnarray} \vspace{0.5mm} \end{minipage} } \noindent where $\{{\bf x}_k\}$ are 3D points sampled along ray $\textbf{r}$, $\pi_i({\bf x})$ is the 2D reprojection of the point ${\bf x}$ in frame $i$, $\OF_i$ is the optical flow between frame $i-1$ and frame $i$, and $\gamma(\cdot)$ is as defined in Eq.~\eqref{eq:rend_unisurf} and evaluates to one for points on the object surface and zero elsewhere. Fig.~\ref{fig:optical_flow} shows the effect of the optical flow loss on the trajectory after several optimization steps. We use \cite{yang2019vcn} to compute the optical flow. \begin{figure}[h!] \begin{center} \includegraphics[width=\linewidth]{figures/ablation/flow/flow_ablation.pdf}\vspace{-1ex} \caption{\textbf{Effect of the optical flow loss $\mathcal{L}_\opf$.} Pose estimates (red) are more stable when the loss $\mathcal{L}_\opf$ is applied (right).
The ground-truth poses are shown in blue.} \label{fig:optical_flow} \end{center} \end{figure} \customparagraph{Shape Regularization Loss.} During early iterations~(\ie, when $t$ is small), the occupancy field is under-constrained and needs to be regularized to avoid degenerate object shapes. We introduce a regularization that encourages reconstruction near the origin of the object coordinate system: \begin{equation} \label{eq:reg_loss} \mathcal{L}_\reg^i(\textbf{r}) = \sum_k o_\theta({\bf x}_k) \exp{(\alpha \cdot \|{\bf x}_k\|_2)} \> , \end{equation} where $\alpha$ is a hyperparameter. At $t=0$, $\mathcal{L}_\reg^i$ results in an object surface that is parallel to the image plane. This can be seen by considering an orthographic projection of the rays and noting that for each ray $\textbf{r}$, $\mathcal{L}_\reg^i(\textbf{r})$ is minimized when $\|{\bf x}_k\|$ is minimized, \ie, when ${\bf x}_k$ is on a plane perpendicular to the ray direction and passing through the origin. Encouraging a planar proxy as an approximation of the object shape helps to stabilize the early stage of the optimization. Fig.~\ref{fig:regularization} shows reconstructions obtained with and without the regularization. \begin{figure}[h!] \begin{center} \includegraphics[width=\linewidth]{figures/reg_term/reg_term_v3.pdf}\vspace{-1ex} \caption{\textbf{Effect of the shape regularization loss $\mathcal{L}_\reg$.} Left to right: the ground-truth object mesh, the implicit surface reconstructed without the regularization term at $t=1$, and the implicit surface reconstructed with the regularization term at $t=1$. } \label{fig:regularization} \end{center} \end{figure} \customparagraph{Synthetic-Depth Loss.} We also introduce a loss based on synthetic depth maps rendered from the current object shape estimate. The motivation for this term is to regularize the evolution of the shape estimate and prevent its drift.
It is defined as: \begin{equation} \mathcal{L}_\dep^i (\textbf{r}) = \big(\sum_k \gamma({\bf x}_k) \text{dep}_i({\bf x}_k)-\hat{d}_i(\textbf{r})\big)^2 \> , \end{equation} where $\text{dep}_i({\bf x}_k)$ is the depth of the point ${\bf x}_k$ along the ray $\textbf{r}$, $\gamma(\cdot)$ is as defined in Eq.~\eqref{eq:rend_unisurf}, and $\hat{d}_i$ is the depth map rendered using the previous estimates of the object model and the camera pose for frame $i$. Note that at optimization step $t$ of Eq.~\eqref{eq:inc_loss}, $\mathcal{L}_\dep^i$ is only applied to rays from frames $1$ to $t-1$, whose synthetic depth maps are pre-computed. Figure~\ref{fig:depth_term} illustrates the contribution of this term. \begin{figure}[h!] \begin{center} \includegraphics[width=\linewidth]{figures/method/depth_ablation_v3.pdf}\vspace{-1ex} \caption{\textbf{Effect of the synthetic-depth loss $\mathcal{L}_\dep$.} Large pose changes (highlighted by black boxes) can deform previously reconstructed parts of the object if the depth loss $\mathcal{L}_\dep$ is not used (left). } \label{fig:depth_term} \end{center} \end{figure} \subsubsection{Global Optimization} \label{sec:stitch} The camera trajectories and the object reconstruction for each segment are recovered up to a rigid motion and a scaling factor. To express the overlapping segments in a common coordinate frame, we align the pose estimates at the overlapping frames with the following procedure. Let $\mathcal{T}_i^k =[\phi_i, t_i]$ be the rotation and translation of the camera for frame $i$ in segment $k$, which has $N_s$ frames. We obtain a normalized pose by taking $\hat{\mathcal{T}}_i^k = \big[\phi_i; t_i/\frac{1}{N_s}\sum_j \|t_j\|\big]$.
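The normalization step is compact enough to write out directly. The following sketch (illustrative, with names of our choosing) divides all camera translations of a segment by their mean norm, which gives the trajectories of different segments a common scale while leaving rotations untouched:

```python
import numpy as np

def normalize_segment_poses(rotations, translations):
    """Normalize per-segment camera poses to unit mean translation norm,
    i.e. t_i <- t_i / (1/N_s * sum_j ||t_j||). Rotations are unchanged."""
    translations = np.asarray(translations, dtype=float)  # shape (N_s, 3)
    scale = np.mean(np.linalg.norm(translations, axis=1))
    return rotations, translations / scale
```

After this step, the mean translation norm of every segment equals one, so two overlapping segments can be related by a rigid motion alone.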
We then retrieve the rigid motion $\mathcal{T}_{k_1\rightarrow k_2}$~(rotation and translation) that aligns two overlapping segments $k_1$ and $k_2$: \begin{equation} \mathcal{T}_{k_1\rightarrow k_2} = \argmin_\mathcal{T} \sum_i \|\mathcal{T}\cdot\hat{\mathcal{T}}_i^{k_1}-\hat{\mathcal{T}}_{{\cal N}(i)}^{k_2}\|_F \> , \end{equation} where $\|\cdot\|_F$ denotes the Frobenius norm, ${\cal N}(i)$ is the frame index in segment $k_2$ corresponding to frame $i$ in segment $k_1$, and the summation is over the set of all overlapping frames. In practice, we observed that as few as a single overlapping frame is sufficient for connecting the segments. We use the aligned poses of two neighboring segments as pose initialization and optimize the objective function from Eq.~\eqref{eq:complete_loss}. The network parameters $\theta$ are initialized to reconstruct a sphere. The neighboring segments are combined iteratively until we obtain a complete reconstruction from the full sequence. Fig.~\ref{fig:rand_init} shows the reconstruction with different pose initializations; even for a textured object, a coarse initialization is necessary for convergence. In Fig.~\ref{fig:seg_ablation}, we show a situation where the incremental reconstruction and pose tracking continue beyond the segment boundary: the solution degrades when new surface parts appear. \begin{figure}[h!]
\begin{center} \begin{minipage}{.33\linewidth} \subcaptionbox{From random poses} {\includegraphics[trim=180 230 640 175, clip,width=1\linewidth]{figures/ablation/init_ablation.pdf}} \end{minipage}~ \begin{minipage}{.33\linewidth} \subcaptionbox{From zero poses} {\includegraphics[trim=320 155 350 100, clip,width=1\linewidth]{figures/ablation/init_ablation.pdf}} \end{minipage}~ \begin{minipage}{.33\linewidth} \subcaptionbox{From our pose est.} {\includegraphics[trim=600 210 180 150, clip,width=1\linewidth]{figures/ablation/init_ablation.pdf}} \end{minipage} \vspace{0.5ex} \caption{\textbf{Reconstruction from different initial poses.} Only initialization from coarse pose estimates yields a meaningful solution.} \label{fig:rand_init} \end{center} \vspace{1ex} \begin{center} \includegraphics[trim=70 300 230 80, clip,width=1\linewidth]{figures/ablation/segments/segment_ablation.pdf}\\ \includegraphics[trim=70 130 230 250, clip,width=1\linewidth]{figures/ablation/segments/segment_ablation.pdf}\vspace{-1ex} \caption{\textbf{Incremental reconstruction and pose tracking are prone to fail beyond the segment boundary.} On this representative example, the incremental reconstruction and pose tracking procedure works well as long as the front face is visible. When the front face starts to disappear and new parts start to appear, the reconstruction degrades and the pose tracking drifts.} \label{fig:seg_ablation} \end{center} \end{figure} \section{Implementation Details} The occupancy and color field networks are implemented as 8-layer MLPs with ReLU activations and a hidden dimension of $F$. Fourier features~\cite{nerf} at $k_x$ octaves are used to encode the 3D coordinates, and at $k_d$ octaves to encode the view direction. During the per-segment optimization, similar to~\cite{lasr}, instead of directly optimizing the 6D pose parameters, the pose is parameterized with a CNN that takes the RGB image as input and outputs the 6DoF pose.
Weights of the CNN are initialized with weights pre-trained on ImageNet~\cite{imagenet}. The CNN provides a neural basis for the pose parameters and acts as a regularizer. Without the CNN parameterization, the per-segment optimization procedure described in Sec.~\ref{sec:segment_tracking} typically fails. During the per-segment optimization (Sec.~\ref{sec:segment_tracking}), we set $F$\,$=$\,$128$, $k_x$\,$=$\,$4$, $k_d$\,$=$\,$2$ and run 6k gradient descent iterations at each tracking step. Further, at each step, we add 5 frames at once to speed up the optimization. For the global optimization (Sec.~\ref{sec:stitch}), we use $F$\,$=$\,$256$, $k_x$\,$=$\,$8$, $k_d$\,$=$\,$4$ and run 25k gradient descent iterations for a pair of segments. We use smooth masking of frequency bands as described in BARF~\cite{barf} for better convergence, and in this stage we optimize the 6D pose variables directly instead of using the CNN parameterization. The input frames are subsampled so that at most $150$ frames are used. We compute the local maxima and minima of the object area curve, as explained in Sec.~\ref{sec:splitting}, after first applying a Gaussian filter to the per-frame object areas.
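The smooth masking of frequency bands can be sketched as follows. This is our paraphrase of the coarse-to-fine schedule proposed in BARF, not necessarily the exact implementation used here: band $k$ of the positional encoding is disabled while a progress-dependent parameter $\alpha$ is below $k$, ramps in smoothly over one unit of $\alpha$, and is fully enabled afterwards.

```python
import math

def barf_band_weight(k, alpha):
    """Smooth weight for frequency band k: off for alpha < k, a cosine
    ramp for alpha in [k, k+1), fully on for alpha >= k+1."""
    x = alpha - k
    if x < 0:
        return 0.0
    if x < 1:
        return (1.0 - math.cos(x * math.pi)) / 2.0
    return 1.0

def masked_encoding_weights(num_bands, progress):
    """Weights for all bands at a given optimization progress in [0, 1];
    alpha grows linearly with progress from 0 to num_bands."""
    alpha = progress * num_bands
    return [barf_band_weight(k, alpha) for k in range(num_bands)]
```

Each weight multiplies the corresponding sine/cosine features of the positional encoding, so the optimization first sees only low frequencies and gradually gains access to the high-frequency bands.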
\begin{figure*}[t!] \begin{center} \includegraphics[width=0.95\linewidth]{figures/fig8.png} \caption{\textbf{Reconstructed models and pose trajectories on HO-3D} for COLMAP~\cite{colmap}~(top row), our method~(middle row), and the UNISURF~\cite{unisurf} baseline that uses ground-truth poses~(bottom row). We restrict the keypoint matches in COLMAP to only object pixels using the segmentation masks obtained from a pre-trained network (Sec.~\ref{sec:reconstruction_setup}). COLMAP recovers only incomplete pose trajectories in absence of texture, which leads to incomplete or failed reconstructions. Our method relies on both geometric and texture features and produces reliable pose estimates, which results in similar reconstructions as produced by the strong baseline relying on ground-truth poses. } \label{fig:all_models} \end{center} \end{figure*} \section{Experiments} \label{sec:experiments} We evaluate our method quantitatively and qualitatively on the HO-3D dataset~\cite{hampali-cvpr20-honnotate}, which contains sequences of objects from the YCB dataset~\cite{ycb} being manipulated by one hand. We also show qualitative results on the RGB images from the RGBD-Obj dataset~\cite{wang_dataset} and on sequences that we captured for this project and that show two challenging texture-less YCB objects: the clamp and the cube. The latter two datasets feature two hands but do not provide object pose annotations. We evaluate the accuracy of the reconstructed shape and color, and of the estimated poses. \customparagraph{3D Reconstruction Metric.} As in \cite{rgbd_tim}, we first align the estimated object mesh with the ground-truth mesh by ICP and then calculate the RMSE Hausdorff distance from the estimated mesh to the ground truth mesh.
As our meshes are only estimated up to a scaling factor, we allow the mesh to scale during the ICP alignment. \customparagraph{Object Texture Metrics.} The recovered object texture is compared with the ground truth using the PSNR, SSIM and LPIPS metrics. Specifically, we render the recovered object appearance from the ground-truth poses for images that were not used in the optimization and compare the renderings with these images. Since the poses have to be accurate to obtain reliable metrics, we first refine them by photometric optimization of the trained model, as in BARF~\cite{barf}, and then render the images. \customparagraph{Pose Trajectory Metric.} As the 3D model and poses are recovered up to a 3D similarity transformation, we first align the estimated poses with the ground truth by estimating the transformation between the two. We then calculate the absolute trajectory error (ATE)~\cite{david,imap,scnerf} and plot the percentage of frames for which the error is less than a threshold. We use the area under the curve of the ATE plot as the metric. \subsection{Evaluation on HO-3D} HO-3D contains 27 multi-view sequences~(68 single-view sequences) of hand-object interactions with 10 YCB objects~\cite{ycb}, annotated with 3D poses. We consider the same multi-view sequences as in \cite{rgbd_tim} for the 3D reconstruction. As the ground-truth 3D poses are provided in this dataset, we also evaluate the accuracy of our estimated poses along with the reconstruction and texture accuracy. \customparagraph{Baselines.} We compare with COLMAP~\cite{schonberger2016pixelwise}, the single-image object reconstruction method by Ye \etal \cite{cmu}, the RGB-D reconstruction method by Patten \etal \cite{rgbd_tim}, and UNISURF~\cite{unisurf}. The last two methods rely on the ground-truth camera poses, whereas the other methods (including ours) do not.
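The area-under-the-curve pose metric described above can be sketched as follows (an illustrative implementation; the threshold grid and all names are our choices): for thresholds from zero to the maximum ATE threshold, compute the fraction of frames whose ATE is within the threshold, then average these fractions.

```python
import numpy as np

def ate_auc(ate_errors, max_threshold=0.10, num_steps=100):
    """Area under the ATE success curve: the fraction of frames with
    ATE <= threshold, averaged over thresholds in [0, max_threshold].
    Returns a value in [0, 1]."""
    errors = np.asarray(ate_errors, dtype=float)
    thresholds = np.linspace(0.0, max_threshold, num_steps)
    fractions = [(errors <= th).mean() for th in thresholds]
    return float(np.mean(fractions))
```

A perfect trajectory yields 1, and a trajectory whose errors all exceed the maximum threshold yields 0.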
In the case of \cite{rgbd_tim}, we compare only the 3D reconstruction accuracy, as the pose and object texture evaluations are not reported for this method. We obtain results from COLMAP using the sequential keypoint matching setting, as it results in better reconstructions than the other keypoint matching procedures. We observed that COLMAP fails to obtain complete reconstructions for most objects due to insufficient keypoint matches and produces multiple non-overlapping partial reconstructions, which cannot be combined. The method by Ye \etal \cite{cmu} uses a single input image but is pre-trained on sequences of the same object. Note that our method is not pre-trained and thus the reconstructed object is completely unknown to the method. \customparagraph{Results.} Table~\ref{tab:ho3d_model} compares the one-way RMSE Hausdorff distance of our method with COLMAP, the single-frame method of \cite{cmu}, and the RGBD method of \cite{rgbd_tim}. We calculate the average metric over the sequence for \cite{cmu}. Our method achieves higher accuracy than COLMAP on all objects except the mustard bottle, where the two are on par, and higher accuracy than \cite{cmu} on average. Our method is competitive with the RGBD-based method and the strong baseline for most objects. COLMAP fails to obtain keypoint matches on the banana and scissors. The lower accuracy of our method on the pitcher base and banana is due to the lack of both geometric and texture features. COLMAP achieves accurate pose trajectories only for the cracker box and sugar box, as they contain rich image features on all surfaces. The lower accuracy of COLMAP on the other objects is due to poor texture (Figure~\ref{fig:all_models}). Table~\ref{tab:ate_area} shows the area under the curve of the ATE plot with a maximum ATE threshold of 10\,cm. Our method outperforms COLMAP, which cannot recover the complete trajectory for many objects.
Both our method and COLMAP fail to obtain a meaningful reconstruction for the scissors due to their thin structure (we do not show the metrics for this object). In Table~\ref{tab:psnr}, we provide the PSNR, SSIM and LPIPS metrics for our proposed method and the strong baseline that uses the ground-truth poses. Our method achieves accuracy similar to that of the baseline on all objects, despite the fact that it estimates the poses instead of using the ground-truth ones. This can also be seen in Figure~\ref{fig:all_models}, which shows the reconstructed models and estimated pose trajectories from our method, the UNISURF baseline with ground-truth poses, and COLMAP. \begin{table}[t!] \footnotesize \begin{center} \begin{tabularx}{\columnwidth}{l Y Y Y | Y Y} \toprule Object & Ye~\emph{et~al.} \cite{cmu} & COLMAP \cite{colmap} & Ours & UNISU. \cite{unisurf} & RGBD \cite{rgbd_tim}\\ \midrule 3: cracker box &10.21& 4.08 & \textbf{2.91} & 3.40 & 3.54\\ 4: sugar box &6.19 & 6.66 & \textbf{3.01} & 3.49 & 3.34\\ 6: mustard &2.61 & \textbf{4.43} & 4.44 & 4.34 & 3.28\\ 10: potted meat &3.43 & 10.21 & \textbf{1.95} & 1.54 & 3.26\\ 21: bleach &\textbf{4.18} & 14.11 & 5.63 & 3.41 & 2.43\\ 35: power drill &15.15& 11.06 & \textbf{5.48} & 5.33 & 3.77\\ 19: pitcher base &\textbf{8.87} & 43.38 & 9.21 & 4.63 & 4.73\\ 11: banana &\textbf{3.47} & - & 4.60 & 3.98 & 2.44\\ \midrule Average &6.76 & 13.41 & \textbf{4.65} & 3.76 & 3.34\\ \bottomrule \end{tabularx} \caption{\textbf{RMSE Hausdorff distance (mm) from the estimated to the ground-truth 3D model.} UNISURF~\cite{unisurf} and RGBD~\cite{rgbd_tim} are strong baselines as they use ground-truth poses, and the latter also depth images.
Our object reconstructions are close to the baselines, even though we do not use the ground-truth poses, and systematically better than COLMAP, which fails completely on the banana.} \label{tab:ho3d_model} \end{center} \end{table} \begin{table} \footnotesize \begin{center} \begin{tabularx}{\columnwidth}{l Y Y Y Y Y Y Y Y Y Y} \toprule Object & 3 & 4 & 6 & 10 & 21 & 35 & 19 & 25 & 11 & Avg \\ \midrule COLMAP & 7.4 & \textbf{7.4} & 3.5 & 0.1 & 1.5 & 2.8 & 4.1 & \textbf{2.4} & 0.0 & 2.9\\ Ours & \textbf{7.6}& 6.8 & \textbf{5.2} & \textbf{6.8} & \textbf{4.7} & \textbf{6.4} & \textbf{4.6} & 2.2 & \textbf{0.6} & \textbf{4.5}\\ \bottomrule \end{tabularx} \caption{\textbf{Area under the curve of the absolute trajectory error.} COLMAP succeeds on textured objects like the first two but struggles to recover the complete trajectory for less textured objects. } \label{tab:ate_area} \end{center} \end{table} \begin{table} \footnotesize \begin{center} \begin{tabularx}{\columnwidth}{l Y Y} \toprule \multirow{2}{*}{\parbox{0.75cm}{\centering Object}} & \multicolumn{2}{c }{\centering PSNR$\uparrow$ ~/~SSIM$\uparrow$~/~LPIPS$\downarrow$}\\ & Ours & UNISURF~\cite{unisurf}\\ \midrule 3: cracker box & 29.77~/~0.73~/~0.31 & 29.79~/~0.74~/~0.33\\ 4: sugar box & 30.77~/~0.82~/~0.31 & 30.73~/~0.76~/~0.33\\ 6: mustard & 30.73~/~0.74~/~0.39 & 30.72~/~0.74~/~0.37\\ 10: potted meat & 31.07~/~0.77~/~0.35 & 31.28~/~0.78~/~0.35\\ 21: bleach & 30.82~/~0.74~/~0.36 & 29.87~/~0.67~/~0.42\\ 35: power drill & 31.82~/~0.78~/~0.26 & 31.81~/~0.76~/~0.28\\ 19: pitcher base & 32.13~/~0.83~/~0.26 & 32.28~/~0.83~/~0.25\\ 25: mug & 31.18~/~0.74~/~0.39 & 31.69~/~0.76~/~0.37\\ \midrule Average & 31.01~/~0.77~/~0.32 & 31.02~/~0.75~/~0.34\\ \bottomrule \end{tabularx} \caption{\textbf{Evaluation of the estimated object texture.} The proposed method achieves quality of the recovered object texture comparable to that of UNISURF, which uses ground-truth poses.
} \vspace{-2ex} \label{tab:psnr} \end{center} \end{table} \subsection{Evaluation on RGBD-Obj and New Sequences} Qualitative results from an RGBD-Obj~\cite{wang_dataset} sequence showing the mustard bottle and from two new sequences with the extra large clamp and the Rubik's cube from YCB, which we captured for this project, are shown in Figure~\ref{fig:new_dataset_results}. Our method is able to produce 3D models also for the latter two objects, which are poorly textured and which classical feature-based methods such as~\cite{schonberger2016pixelwise} fail to reconstruct. \begin{figure} \begin{center} \begin{minipage}{0.95\linewidth} \centering \subcaptionbox{Results on RGBD-Obj~\cite{wang_dataset}\label{fig:wang_result}} {\includegraphics[trim=220 325 380 70, clip,width=1\linewidth]{figures/main_results/wang_ours_results.pdf}} \end{minipage}\vspace{3.0ex} \begin{minipage}{0.95\linewidth} \centering \subcaptionbox{Results on our new sequences\label{fig:our_dataset_results}} {\includegraphics[trim=220 70 380 225, clip,width=1\linewidth]{figures/main_results/wang_ours_results.pdf}} \end{minipage} \vspace{1.0ex} \caption{\textbf{Results on RGBD-Obj~\cite{wang_dataset} and two new sequences.} The left column shows a sample image from the input sequence showing an unknown object manipulated by hands. The right column shows two views of the reconstructed color 3D model.} \label{fig:new_dataset_results} \end{center} \end{figure} \subsection{Ablation Study} The benefit of the individual loss terms proposed in Sec.~\ref{sec:opt_obj} is demonstrated in Figures~\ref{fig:optical_flow}--\ref{fig:rand_init}. The optical flow loss enforces the consistency between the predicted camera poses and the observed images, the shape regularization loss stabilizes the optimization especially in its early stage, and the synthetic depth loss preserves previously reconstructed surface parts.
Without the synthetic depth loss, the object can be significantly deformed, especially when the camera performs larger motions in newly considered frames. Figure~\ref{fig:seg_ablation} shows the importance of splitting the input sequence into segments. Figure~\ref{fig:rand_init} demonstrates the benefit of initializing the optimization of Eq.~\eqref{eq:complete_loss} with poses estimated from segments over initializing with random or zero poses. \section{Conclusion} We introduced a method that is able to reconstruct an unknown object manipulated by hands from color images. The main challenge resides in preventing drift during the simultaneous tracking and reconstruction. We believe our strategy of splitting the sequence based on the apparent area of the object and our regularization terms to be general and useful ideas, and hope they will inspire other researchers. \section{Problem Formulation} \label{sec:problem} Our setup for simultaneous object reconstruction and tracking consists of a static RGB camera and an unknown rigid object being scanned by one or two hands in the field of view of the camera under static lighting conditions. We denote the $N$ images in the captured sequence by $\{{\cal I}_i\}_{i=1}^N$, and the pose of the camera in the object coordinate system at frame $i$ by $\mathcal{T}_i$. We assume a static background whose image is either obtained separately or estimated from the sequence using the segmentation masks of the hand and the object. The resulting images can be seen as images of the rigid object captured from multiple views with a fixed relative transformation between the camera and the light sources.
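Because the relative transformation between the camera and the light sources is fixed, where a surface point lies along a viewing ray determines the direction from which it is lit. A minimal numerical sketch of this effect (pure Python; the light position and ray lengths are made-up values, not taken from the paper):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def angle_between(u, v):
    # Angle (in radians) between two vectors.
    dot = sum(a * b for a, b in zip(normalize(u), normalize(v)))
    return math.acos(max(-1.0, min(1.0, dot)))

# Camera at the origin, point light fixed at M in the camera frame.
M = (0.2, 0.3, 0.0)   # hypothetical light position
d = (0.0, 0.0, 1.0)   # viewing direction of one ray

# Two surface points on the same ray, at different ray lengths.
x1 = tuple(0.5 * c for c in d)
x2 = tuple(1.0 * c for c in d)

# The incident light directions at the two points differ, so the reflected
# color can differ even though the viewing direction d is identical.
a1 = angle_between(d, tuple(a - b for a, b in zip(x1, M)))
a2 = angle_between(d, tuple(a - b for a, b in zip(x2, M)))
```

This is the geometric reason why, in the following, the color field is conditioned on the ray length in addition to the viewing direction.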
Similarly to UNISURF~\cite{unisurf}, we represent the object to reconstruct by an occupancy field network and a color field network defined at every 3D point, and aim to find the optimal parameters of these two networks. There is however an important difference: standard NeRF methods~\cite{nerf, unisurf, other_nerf_papers} work under the assumption of a static scene and light environment and varying camera locations. This allows these methods to condition the color field only on the viewing ray in the scene coordinate system to capture non-Lambertian surfaces. In our case, the camera and the light sources, with their relative poses fixed, capture the static object from different views, and we therefore also need to condition the color field on the ray \emph{length}. When a 3D point on the object surface is viewed by the camera under two different object poses, its color may change even if the direction of the line of sight remains the same. This is because the angle between the line of sight and the light direction changes, and the amount of light reflected toward the camera is a function of this angle. A simple way to capture this phenomenon is to condition the color field on the distance between the camera and the 3D point. Figure~\ref{fig:raydist} shows an example: a point on the object surface is viewed by the camera at two different poses of the object relative to the camera such that the viewing direction is the same.
A point-light source located at a fixed position ${\cal M}$ w.r.t.\ the camera results in two different incident light directions, so the observed color is a function of the viewing direction, the ray length and the camera position ${\cal M}$. Our color field $c_\theta({\bf x},{\bf d},p)$ is thus conditioned on the view direction ${\bf d}$ and the ray length $p$, as ${\cal M}$ is constant in all the images. \begin{figure} \centering \includegraphics[width=1\linewidth]{figures/ray_length.png} \caption{Top: In the standard NeRF setting, the camera moves with respect to the scene, and the color of a 3D point depends only on the direction of the line of sight. Bottom: In the case of in-hand scanning, the object moves with respect to the camera and the lights, and the color of a 3D point on the surface of the object depends not only on the direction of the line of sight but also on the object pose. To capture this, we simply condition the color of the point on the distance between the camera and the point, in addition to the direction of the line of sight.} \label{fig:raydist} \end{figure} A second major difference with other NeRF approaches is that we need to estimate the camera pose with respect to the object, while previous approaches often assume the camera poses to be given. We thus estimate the object's shape and appearance by solving: \begin{align} \label{eq:complete_loss} \theta^*, \{\mathcal{T}_i^*\}_{i=1}^N &= \arg\min_{\theta, \{\mathcal{T}_i\}_{i=1}^N} \sum_j \sum_{\textbf{r}\in {\cal R}_j} \Big(\mathcal{L}_\text{col}(\textbf{r})+\mathcal{L}_\text{seg}(\textbf{r})\Big) \> . \end{align} $\theta$ denotes the parameters of the neural surface, and the $\mathcal{T}_i$s are the camera poses in the object coordinate system for the sequence. ${\cal R}_j$ is the set of rays for frame $j$. The term $\mathcal{L}_\text{col}(\textbf{r})$ is a loss on the predicted color for ray $\textbf{r}$: \begin{align} \label{eq:col_loss} \mathcal{L}_\text{col}(\textbf{r}) &= \|\hat{C}(\textbf{r}) - C(\textbf{r})\|_2 \> , \end{align} where $\hat{C}(\textbf{r})$ is the predicted color for ray $\textbf{r}$ and $C(\textbf{r})$ is its observed color. For $\hat{C}(\textbf{r})$, we follow the rendering equation of UNISURF: \begin{align} \label{eqn:rend_eqn} \hat{C}(\textbf{r}) &= \sum_{k=1}^M \gamma({\bf x}_k) c_\theta({\bf x}_k,{\bf d},p_k) \> . \end{align} $c_\theta({\bf x}_k,{\bf d},p_k)$ is the neural color field for point ${\bf x}_k$ sampled along ray $\textbf{r}$, seen under the direction ${\bf d}$ of ray $\textbf{r}$, and with distance $p_k$ from the camera center. $\gamma({\bf x}_k)$ is the discrete visibility weight of sample ${\bf x}_k$ and is computed as \begin{align} \label{eqn:gamma} \gamma({\bf x}_k) &= o_\theta({\bf x}_k) \prod_{l<k} \big(1-o_\theta({\bf x}_l)\big) \> , \end{align} where $o_\theta({\bf x}_k)$ is the neural occupancy field. The second term $\mathcal{L}_\text{seg}(\textbf{r})$ in Eq.~\eqref{eq:complete_loss} is a segmentation loss for ray $\textbf{r}$: \begin{align} \label{eq:seg_loss} \mathcal{L}_\text{seg}(\textbf{r}) &= \text{BCE}\Big(\max_k~\{o_\theta({\bf x}_k)\}, S_o(\textbf{r}) \Big) \> , \end{align} where $\text{BCE}(\cdot)$ is the binary cross-entropy loss and $S_o(\textbf{r})$ is the observed object segmentation value for ray $\textbf{r}$: remember that we segment hand and background to keep only the object in the images. $S_o(\textbf{r})$ is equal to 1 if the ray projects on the object as estimated by the segmentation, and 0 otherwise. $\max_k~\{o_\theta({\bf x}_k)\}$ is the maximum occupancy along ray $\textbf{r}$, which should also be equal to 1 if the ray intersects the object and to 0 otherwise. We detail in the next section how we solve Eq.~\eqref{eq:complete_loss}. \section{Method} As the object undergoes a large motion in 3D during scanning, solving Eq.~\eqref{eq:complete_loss} requires a coarse initialization of the camera poses: as we show later in our experiments, random or fixed initialization of the pose parameters followed by optimization similar to BARF~\cite{barf} leads to degenerate solutions. Though COLMAP~\cite{colmap} is the standard method for obtaining coarse pose estimates in NeRF-based methods~\cite{nerf,nerf_w,nerv,nerd}, it fails to obtain sufficient keypoint matches for pose estimation in regions of the object with little texture. As a result, COLMAP can provide reliable pose initialization only for highly-textured objects. Our approach to the reconstruction of generic objects (both textured and non-textured) and to their pose estimation is based on tracking, which incrementally reconstructs the object surface while simultaneously optimizing the poses at each tracking step. As tracking algorithms suffer from drift~\cite{}, we divide the video sequence into multiple overlapping segments and reconstruct parts of the object surface and obtain pose estimates within each segment independently. Finally, using the camera poses obtained in each segment, the segments are \textit{stitched} together and a global optimization similar to Eq.~\eqref{eq:complete_loss} is performed to achieve the complete object reconstruction. We provide details about the intra-segment tracking, the inter-segment optimization and the algorithm for dividing the video sequence into multiple segments below. \subsection{Intra-Segment Tracking} \label{sec:tracking_method} Let ${\cal S}_t$ denote the set of new frames from the sequence that are added at tracking step $t$.
$\{{\cal S}_i\}_{i=0}^{t-1}$ denotes the sets of frames used for tracking in the previous steps. The poses of the cameras corresponding to frames in ${\cal S}_t$ are initialized using a second-order motion model on the trajectory of the poses of $\{{\cal S}_i\}_{i=0}^{t-1}$. The occupancy and color network parameters $\theta$ at step $t$ are initialized from the previous tracking step $t-1$. Further, after each tracking step $t$, we render and store the set of depth maps ${\cal D}_t$ for the views in ${\cal S}_t$ using the optimized poses. The optimization at each tracking step is intended to update the network parameters $\theta$ such that \begin{itemize} \item the updated parameters also represent the incremental new surface visible in the views ${\cal S}_t$; \item the object surface reconstructed in the previous views $\{{\cal S}_i\}_{i=0}^{t-1}$ is left intact and not \textit{forgotten}. \end{itemize} The incremental surface visible in ${\cal S}_t$ is reconstructed by minimizing the objective function \begin{equation} \label{eq:inc_loss} \mathcal{L}_\text{inc} = \sum_{\textbf{r}\in\{{\cal S}_i\}_{i=0}^t} \mathcal{L}_\text{col}(\textbf{r}) + \mathcal{L}_\text{seg}(\textbf{r}) + \mathcal{L}_\text{opf}(\textbf{r}) + \mathcal{L}_\text{reg}(\textbf{r}) \> , \end{equation} where the former two terms $\mathcal{L}_\text{col}$ and $\mathcal{L}_\text{seg}$ are the color and segmentation losses of Eq.~\eqref{eq:complete_loss}, and the latter two terms $\mathcal{L}_\text{opf}$ and $\mathcal{L}_\text{reg}$ are the optical flow loss and the regularization term. We provide more details about the latter two terms below.
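The second-order motion model used to initialize the poses of the new frames can be sketched as follows (an illustrative constant-acceleration extrapolation on the translation component only; the exact model is an assumption, and rotations would be handled analogously):

```python
def init_next_translation(last_three):
    """Second-order (constant-acceleration) extrapolation:
    with v = t0 - t1 and a = v - (t1 - t2), the prediction is
    t_next = t0 + v + a = 3*t0 - 3*t1 + t2."""
    t2, t1, t0 = last_three  # ordered oldest -> newest
    return tuple(3 * c0 - 3 * c1 + c2 for c0, c1, c2 in zip(t0, t1, t2))

# Camera translations accelerating along x: 0, 1, 3 -> predicted next is 6.
traj = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
predicted = init_next_translation(traj)  # (6.0, 0.0, 0.0)
```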
\paragraph{Optical Flow Loss, $\mathcal{L}_\text{opf}$} This term provides an additional constraint on the camera poses and is defined as \begin{align} \mathcal{L}_\text{opf}(\textbf{r}_k) = \sum_i &\Big(\pi\big( \mathcal{T}_{{\cal N}(k)} {\bf x}_{k,i}\big) - \pi\big( \mathcal{T}_k {\bf x}_{k,i}\big) - OF_{\textbf{r}_k}\Big) \cdot \nonumber \\ &\gamma({\bf x}_{k,i};\mathcal{T}_k), \nonumber \end{align} where $\textbf{r}_k$ is a ray from the $k^\text{th}$ frame, $\{{\bf x}_{k,i}\}_i$ are the samples along the ray $\textbf{r}_k$, $\mathcal{T}_{{\cal N}(k)}$ is the pose of the frame neighboring the frame $k$ with pose $\mathcal{T}_k$, $\pi(\cdot)$ is the camera projection function, and $OF_{\textbf{r}_k}$ is the optical flow precomputed at pixel/ray $\textbf{r}_k$. \paragraph{Regularization Term, $\mathcal{L}_\text{reg}$} Minimizing Eq.~(\ref{eq:inc_loss}) using rays from one or more frames without the regularization term results in degenerate solutions, as shown by the occupancy fields in Figure~\ref{fig:1frame_wo_reg} and Figure~\ref{fig:5frames_wo_reg}. The distortion mainly occurs in low-texture regions, such as the bottom of the $\textit{power\_drill}$ object in Figure~\ref{fig:reg_term}, where $\mathcal{L}_\text{col}$ cannot provide meaningful gradients to optimize the poses while the rendered images still look reasonable. We introduce a regularizer on the occupancy field to avoid degenerate solutions, defined as \begin{equation} \label{eq:reg_loss} \mathcal{L}_\text{reg}(\textbf{r}_k) = \sum_i o_\theta({\bf x}_{k,i}) \exp{(\alpha \cdot \|{\bf x}_{k,i}\|_2)}, \end{equation} where $\alpha$ is a hyper-parameter. $\mathcal{L}_\text{reg}$ ensures that the object is reconstructed near the origin of the coordinate system and discourages artifacts in the occupancy field.
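The regularizer of Eq.~\eqref{eq:reg_loss} can be sketched directly (pure Python; the sample points, occupancies and $\alpha$ are made-up values):

```python
import math

def l_reg(points, occupancies, alpha=1.0):
    """L_reg(r) = sum_i o(x_i) * exp(alpha * ||x_i||_2):
    occupied samples far from the origin are penalized exponentially."""
    return sum(o * math.exp(alpha * math.sqrt(sum(c * c for c in x)))
               for x, o in zip(points, occupancies))

near, far = [(0.1, 0.0, 0.0)], [(2.0, 0.0, 0.0)]
# An occupied sample far from the origin costs much more than a near one,
# while empty space (occupancy 0) is never penalized.
cost_far, cost_near = l_reg(far, [1.0]), l_reg(near, [1.0])
```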
When considering rays from a single camera located at a certain distance from the origin and pointing towards the origin, $\mathcal{L}_\text{reg}$ forces the reconstruction of the object surface onto a plane that is nearly parallel to the image plane and passes through the origin, as shown in Figure~\ref{fig:1frame_w_reg}. This can be proved by considering an orthographic projection of the rays and noting that for each ray $\textbf{r}_k$, $o_\theta({\bf x}_{k,i}) \in \{0,1\}$~\cite{unisurf} and $\mathcal{L}_\text{reg}(\textbf{r}_k)$ is minimized when $\|{\bf x}_{k,i}\|$ is minimized, which happens on the plane perpendicular to the ray direction and passing through the origin. When minimizing Eq.~(\ref{eq:inc_loss}) using rays from multiple frames, we obtain a more accurate reconstruction of the surface, as shown in Figure~\ref{fig:5frames_w_reg}, where the reconstruction of the low-textured surface is driven by $\mathcal{L}_\text{seg}$ and $\mathcal{L}_\text{reg}$. \begin{figure*}[h!tb] \begin{minipage}{.99\linewidth} \centering \subcaptionbox{Single Frame Real Depth} {\includegraphics[page=1,trim=140 105 170 40, clip, width=0.19\linewidth]{figures/reg_term/reg_term2.pdf}}~\vrule~% \subcaptionbox{Single Frame w/o Reg.\label{fig:1frame_wo_reg}} {\includegraphics[page=2,trim=140 105 170 40, clip, width=0.19\linewidth]{figures/reg_term/reg_term2.pdf}}~% \subcaptionbox{Five Frames w/o Reg.\label{fig:5frames_wo_reg}} {\includegraphics[page=3,trim=140 105 170 40, clip, width=0.19\linewidth]{figures/reg_term/reg_term2.pdf}}~% \subcaptionbox{Single Frame w/ Reg.\label{fig:1frame_w_reg}} {\includegraphics[page=4,trim=140 105 170 40, clip, width=0.19\linewidth]{figures/reg_term/reg_term2.pdf}}~% \subcaptionbox{Five Frames w/ Reg.\label{fig:5frames_w_reg}} {\includegraphics[page=5,trim=140 105 170 40, clip, width=0.19\linewidth]{figures/reg_term/reg_term2.pdf}} \end{minipage} \caption{Visualization of the top view of the camera and the point cloud of the object using real depth (a), and the occupancy
field obtained from our method under different settings (b-e). The occupancy field obtained without our regularization term with one or five frames is distorted (b,c), more so in the low-texture regions of the object (bottom part of $\textit{power\_drill}$). When using the regularization term with a single frame (d), the occupancy field is mostly constrained to a plane parallel to the image plane (shown in green). With multiple frames and the regularization term (e), we obtain a more accurate reconstruction of the object surface. The images rendered onto the cameras shown in black in each setting are displayed in the insets.} \label{fig:reg_term} \vspace{-0.3cm} \end{figure*} For the tracking step $t=0$, the camera poses corresponding to the frames ${\cal S}_0$ are all initialized to the same pose, positioning the cameras at a certain distance from the origin and pointing towards the origin. As multiple frames are required to reconstruct the object surface, as shown in Figure~\ref{fig:5frames_w_reg}, we add more than one frame in each step of the tracking process such that $\lvert{\cal S}_i\rvert = N_\text{trac}$, where $\lvert \cdot \rvert$ denotes the cardinality of a set and $N_\text{trac}>1$. As every set of new frames added during tracking refines the object surface, we optimize the object shape and color together with the poses of all the frames up to the current tracking step, as indicated in Eq.~(\ref{eq:inc_loss}). The poses corresponding to frames in the previous tracking steps $\{{\cal S}_i\}_{i=0}^{t-1}$ are also optimized, as they depend on the object shape. During optimization, we draw a fixed percentage of rays in a batch from the latest frames ${\cal S}_t$ to ensure that the budget of rays for reconstructing the incremental new surface remains unchanged.
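The fixed ray budget for the latest frames can be sketched as a simple batch sampler (the batch size and split are hypothetical values, not the paper's settings):

```python
import random

def sample_ray_batch(old_rays, new_rays, batch_size=8, new_fraction=0.5):
    """Reserve a fixed share of every batch for rays from the latest
    frames S_t, so the budget for the incremental new surface stays
    constant as the set of past frames grows."""
    n_new = int(batch_size * new_fraction)
    batch = random.sample(new_rays, min(n_new, len(new_rays)))
    batch += random.sample(old_rays, min(batch_size - n_new, len(old_rays)))
    return batch

old = [f"old_{i}" for i in range(100)]   # rays from previous steps
new = [f"new_{i}" for i in range(20)]    # rays from the latest frames S_t
batch = sample_ray_batch(old, new)       # always contains 4 new, 4 old rays
```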
The ray budget allocation and the non-uniform distribution of the object poses may result in uneven sampling of the rays around the object, causing drift in the poses and the object shape during tracking, as shown in Figure~\ref{fig:depth_term} (a-c). We introduce a depth loss term based on the rendered depths to constrain the drift in object shape, defined as \begin{equation} \mathcal{L}_\text{dep} = \sum_{\textbf{r}\in \{{\cal S}_i\}_{i=0}^{t-1}} \big(\hat{d}(\textbf{r})-\bar{d}(\textbf{r})\big)^2, \end{equation} where $\bar{d}(\textbf{r})$ is the pre-computed depth of the ray and $\hat{d}(\textbf{r})$ is the rendered depth of the ray, obtained as $\hat{d}(\textbf{r}) = \sum_i \gamma({\bf x}_i) p_i$, where $\gamma(\cdot)$ and $p_i$ are as defined in Eq.~(\ref{eqn:rend_eqn}). Note that at each tracking step $t$, the depth loss term only applies to rays from the frames of the previous tracking steps $\{{\cal S}_i\}_{i=0}^{t-1}$, for which $\bar{d}$ can be pre-computed. After tracking step $t$, we only compute the depth for frames in ${\cal S}_t$ to avoid re-rendering all the previous frames. We observe that this approximation has little impact on the tracking accuracy. The depth loss term provides a way of `remembering' the object shape reconstructed in the previous tracking steps, and the overall tracking loss is defined as \begin{equation} \mathcal{L}_\text{track} = \mathcal{L}_\text{inc} + \mathcal{L}_\text{dep} \> . \end{equation} Figure~\ref{fig:depth_term} (d-f) shows the result when using the depth term during tracking. The tracking is more stable, with a better reconstruction of the object, when using $\mathcal{L}_\text{dep}$.
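The rendered depth $\hat d$ and the depth loss reuse the same occupancy-based weights $\gamma$ as the rendering equation; a self-contained pure-Python sketch with made-up occupancies and sample depths:

```python
def rendered_depth(occupancies, depths):
    """d_hat(r) = sum_k gamma_k * p_k, with
    gamma_k = o_k * prod_{l<k} (1 - o_l)."""
    transmittance, d_hat = 1.0, 0.0
    for o_k, p_k in zip(occupancies, depths):
        d_hat += o_k * transmittance * p_k
        transmittance *= (1.0 - o_k)
    return d_hat

def l_dep(rays):
    """Squared difference between the currently rendered depth and the
    depth stored after the frame's own tracking step."""
    return sum((rendered_depth(o, p) - d_bar) ** 2 for o, p, d_bar in rays)

# One ray: the surface sample sits at depth 2.0 and the stored depth
# agrees, so the loss is zero and the old surface is 'remembered'.
ray = ([0.0, 1.0], [1.0, 2.0], 2.0)
loss = l_dep([ray])  # 0.0
```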
\begin{figure}[t] \begin{minipage}{.32\linewidth} \subcaptionbox{$t=0$, w/o $\mathcal{L}_\text{dep}$} {\includegraphics[page=1,trim=170 50 140 35,clip,width=1\linewidth]{figures/method/depth_term2.pdf}}~\subcaptionbox{$t=5$, w/o $\mathcal{L}_\text{dep}$} {\includegraphics[page=2,trim=170 50 140 35,clip,width=1\linewidth]{figures/method/depth_term2.pdf}}~\subcaptionbox{$t=9$, w/o $\mathcal{L}_\text{dep}$} {\includegraphics[page=3,trim=170 50 140 35,clip,width=1\linewidth]{figures/method/depth_term2.pdf}} \end{minipage} \begin{minipage}{.32\linewidth} \centering \subcaptionbox{$t=0$, w/ $\mathcal{L}_\text{dep}$} {\includegraphics[page=4,trim=170 50 140 35,clip,width=1\linewidth]{figures/method/depth_term2.pdf}}~\subcaptionbox{$t=5$, w/ $\mathcal{L}_\text{dep}$} {\includegraphics[page=5,trim=170 50 140 35,clip,width=1\linewidth]{figures/method/depth_term2.pdf}}~\subcaptionbox{$t=9$, w/ $\mathcal{L}_\text{dep}$} {\includegraphics[page=6,trim=170 50 140 35,clip,width=1\linewidth]{figures/method/depth_term2.pdf}} \end{minipage} \caption{Effect of $\mathcal{L}_\text{dep}$ on the object reconstruction, visualized at three different tracking steps, $t=\{0,5,9\}$. The estimated and the ground-truth camera trajectories are shown in red and blue, respectively. The image rendered on the first camera of the estimated trajectory, shown in yellow, is displayed in the inset. $\mathcal{L}_\text{dep}$ provides more constraints on the occupancy field, resulting in more stable tracking.} \label{fig:depth_term} \vspace{-0.3cm} \end{figure} \subsection{Inter-Segment Optimization} \label{sec:stitch} Neighboring segments of the sequence contain overlapping frames whose poses are used to combine the segments. As parts of the object may be reconstructed at a different scale in each segment, the poses in each segment are scale-normalized before two segments are combined.
Let $\mathcal{T}_i^k =[\phi_i; t_i]$ denote the rotation and translation of the camera for the $i^\text{th}$ frame in a segment $k$ with $N_s$ frames. The unit-scale normalized pose of the camera is obtained as $\hat{\mathcal{T}}_i^k = \big[\phi_i; t_i/\frac{1}{N_s}\sum_i \|t_i\|\big]$, i.e., the translations are divided by their mean norm over the segment. The camera poses of the overlapping frames in segment $k$ are transformed by an SE(3) transformation to align with the frames in the neighboring segment ${\cal N}(k)$, using the matrix $\mathcal{T}_{k\rightarrow{\cal N}(k)}$ obtained as \begin{equation} \mathcal{T}_{k\rightarrow{\cal N}(k)} = \arg\min_\mathcal{T} \sum_i \|\mathcal{T}\cdot\hat{\mathcal{T}}_i^k-\hat{\mathcal{T}}_i^{{\cal N}(k)}\|_F, \end{equation} where $\|\cdot\|_F$ denotes the Frobenius norm and the summation is over the set of overlapping frames. In practice, we observed that as few as a single overlapping frame is sufficient for combining the segments. The cameras are initialized with the unit-scale normalized poses, and the loss function defined in Eq.~(\ref{eq:complete_loss}) is optimized to combine the segments. The network parameters $\theta$ are initialized to reconstruct a sphere. Pairs of neighboring segments are iteratively combined until the object is reconstructed from the full sequence. \subsection{Segmenting the Video Sequence} Our choice of the algorithm for dividing the video sequence into multiple segments is driven by our tracking procedure explained in Section~\ref{sec:tracking_method} and by the goal of accurate tracking within each segment. Recall that the regularization term (Eq.~(\ref{eq:reg_loss})) enforces a near-planar surface reconstruction in the tracking step $t=0$. We divide the video sequence into multiple segments and determine the tracking direction in each segment such that the observed object surface in the first frame of the segment is nearly parallel to the image plane.
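The unit-scale normalization above can be sketched as follows (pure Python, operating on the translation components only; rotations are left unchanged):

```python
import math

def unit_scale_normalize(translations):
    """Divide every camera translation of a segment by the mean
    translation norm, so overlapping frames of two independently
    reconstructed segments become comparable in scale."""
    norms = [math.sqrt(sum(c * c for c in t)) for t in translations]
    mean_norm = sum(norms) / len(norms)
    return [tuple(c / mean_norm for c in t) for t in translations]

# A toy segment whose cameras sit at distances 2 and 4 from the origin:
# after normalization the mean camera distance is exactly 1.
segment = [(0.0, 0.0, 2.0), (0.0, 0.0, 4.0)]
normalized = unit_scale_normalize(segment)
```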
We rely on the object reprojection area in the image to determine the segment boundaries in the sequence and the tracking direction within each segment. More specifically, frames with a large object reprojection area contain object surfaces that are nearly parallel to the image plane (see the supplementary material for a formal proof) and are thus good starting points for tracking. Figure~\ref{fig:seg_area} shows the plot of the object reprojection area through the sequence together with sampled frames. The local maxima of the reprojection area correspond to frames with a large parallel object surface and are thus chosen as segment-start boundaries. The local minima are chosen as segment-end boundaries, as they correspond to frames where the object surface tracked in the segment begins to self-occlude (see Figure~\ref{fig:seg_area}). We also set a threshold on the segment length to discard noisy segment boundaries, and enforce a 1--2 frame overlap between the segments. \begin{figure}[t] \centering \includegraphics[trim=90 130 90 2, clip, width=1\linewidth]{figures/method/seg_area.pdf} \caption{Plot of the object reprojection area for different frames in the sequence. The local maxima and minima of the reprojection area curve are used for determining the segment boundaries. Images of the object at different parts of the sequence are shown in the inset.} \label{fig:seg_area} \end{figure}
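The boundary selection can be sketched as local-extrema detection on the reprojection-area curve, with a minimum segment length to reject noise (the threshold value is hypothetical):

```python
def segment_boundaries(areas, min_len=2):
    """Local maxima of the object reprojection area start segments,
    local minima end them; segments shorter than min_len frames are
    discarded as noisy boundaries."""
    starts, ends = [], []
    for i in range(1, len(areas) - 1):
        if areas[i] >= areas[i - 1] and areas[i] > areas[i + 1]:
            starts.append(i)
        elif areas[i] <= areas[i - 1] and areas[i] < areas[i + 1]:
            ends.append(i)
    return [(s, e) for s in starts for e in ends if e - s >= min_len]

# Toy curve: the area peaks at frame 2 (segment start) and reaches a
# minimum at frame 5 (segment end, where self-occlusion begins).
areas = [1.0, 2.0, 3.0, 2.5, 2.0, 1.0, 1.5]
segments = segment_boundaries(areas)  # [(2, 5)]
```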
\section{Introduction} There is a general consensus that quasars belong to two different radio populations, radio-quiet quasars (RQQSOs) and radio-loud quasars. $R$ is usually defined as the ratio of the radio (6~\mbox{cm}) to the optical (440~\mbox{nm}) flux densities, and the radio-quiet quasars have a value of $R<10$, while the radio-loud quasars have $R>10$ (Kellermann et al. 1989). It is found that $\sim10~\mbox{\%}$ of quasars are in the radio-loud category. An additional distinction between active galactic nuclei (AGN) with strong and weak radio sources comes from the observation that radio-loud objects essentially all occur in elliptical galaxies, while RQQSOs appear to reside in galaxies that are dominated by exponential disks. However, the RQQSOs that occur in elliptical host galaxies are in general more luminous than those that reside in disks (Taylor et al. 1996). Little is known about the short-term variability of radio-quiet quasars, because few studies have been carried out (Gopal-Krishna et al. 1993 and 1995; Jang \& Miller 1995; Sagar et al. 1996). In contrast, blazars display rapid variability in the wavelength range from radio to gamma rays. The blazar class encompasses both optically-violent\-ly-variable (OVV) quasars and BL Lac objects, and about one quarter of all radio-loud quasars are also in the blazar category (Webb et al. 1988; Pica et al. 1988). There are many theoretical models which endeavour to explain the large and rapid variability exhibited by blazars, and these are usually divided into extrinsic and intrinsic categories. One extrinsic mechanism is microlensing of emission knots in a relativistic jet when they pass behind planets in an intervening galaxy (McBreen \& Metcalfe 1987; Gopal-Krishna \& Subramanian 1991). The rapid variability from superluminal-microlensing may be responsible for the variability observed in \object{AO 0235$+$164} (Rabbette et al. 1996) and \object{PKS 0537$-$441} (Romero et al. 1995).
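The radio-loudness criterion defined above reduces to a simple flux-density ratio (the flux values below are made-up, for illustration only):

```python
def radio_loudness(f_radio_6cm, f_optical_440nm):
    """R = radio (6 cm) over optical (440 nm) flux density
    (Kellermann et al. 1989); R > 10 marks a radio-loud quasar."""
    return f_radio_6cm / f_optical_440nm

def classify(R):
    return "radio-loud" if R > 10 else "radio-quiet"

quiet = classify(radio_loudness(2.0, 1.0))    # R = 2  -> radio-quiet
loud = classify(radio_loudness(50.0, 1.0))    # R = 50 -> radio-loud
```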
One family of intrinsic models is based on a rotating supermassive black hole which accretes matter from a surrounding accretion disc and ejects two oppositely directed jets. The shocked-jet model involves shocks which move with relativistic speeds along the jet (Qian et al. 1991; Marscher 1980). It is believed that the shock propagates along the line of sight, through inhomogeneous, small-scale structures distributed along the jet. These inhomogeneous structures are illuminated, or excited, by the moving relativistic shock, through the amplification of the magnetic field and the acceleration of electrons which causes the variability in polarization and in flux density that are observed over a wide range of frequencies (Hughes et al. 1986). Another family of intrinsic models invokes numerous flares or hotspots in the accretion disk and the corona that is believed to surround the central engine (Wiita et al. 1992; Mangalam \& Wiita 1993) and indeed a similar model has been proposed to explain X-ray variations in blazars (Abramowicz et al. 1991). The fact that RQQSOs generally lie on the far-infrared versus radio correlation (Sopp \& Alexander 1991) suggests that star formation plays an important role in their radio emission. It has been suggested by Terlevich et al. (1992) that the low values of $R$ in RQQSOs can be explained without jets or accretion discs, by postulating a circumnuclear starburst within a dense, high-metallicity nuclear environment. In this model the optical/UV and bolometric luminosity arises from young stars; the variability comes from cooling instabilities in the shell of compact supernova remnants and supernova flashes. Variability on intranight timescales is, however, difficult to explain with this model because of the short timescales involved. 
Furthermore, radio-quiet and radio-loud quasars have very different radio power outputs but have similar spectral shapes in the radio region, which suggests that a significant fraction of the RQQSOs may be capable of producing powerful radio emission (Barvainis et al. 1996). Kellermann et al. (1994) found possible radio extensions up to about 300~\mbox{kpc} in a few RQQSOs and asserted that for at least these few cases, the emission is too large to be starburst related (Stein 1996). Recently, some evidence suggesting rapid optical variability in the RQQSOs \object{PG 0946$+$301} and \object{PG 1444$+$407} was reported by Sagar et al. (1996). They also reported long-term variability for four RQQSOs. Jang \& Miller (1995) reported intranight variability for one RQQSO out of a sample of nine sources. Brinkmann et al. (1996) obtained ASCA observations of the radio-quiet, infrared \linebreak quasar \object{IRAS 13349$+$2438} and detected substantial X-ray variability on a timescale of only a few hours. The results of the photometric observations of a sample of mainly high luminosity and high redshift RQQSOs are presented. The observations and data reduction are given in Sect.~2. The results, including tables listing the differential photometry and some light curves, are presented in Sect.~3. The discussion and conclusions are given in Sects.~4 and 5. Sect.~4 also includes a discussion on two remarkable RQQSOs, \object{PG 1416$-$129} and \object{IRAS 13349$+$2438}. CCD images of the fields containing the radio-quiet quas\-ars and reference stars used in the differential photometry are also included. Values of $\mathrm{H}_\mathrm{0}=50~\mbox{km s}^{-1}~\mbox{Mpc}^{-1}$ and $\mathrm{q}_\mathrm{0}=0.5$ have been adopted. \section{Observations and Data Reduction} The radio-quiet quasars were selected from the catalogues of V\'{e}ron-Cetty \& V\'{e}ron (1985), Hewitt \& Burbidge (1987) and Irwin et al. (1991). 
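The adopted cosmology above ($\mathrm{H}_0=50~\mbox{km s}^{-1}~\mbox{Mpc}^{-1}$, $\mathrm{q}_0=0.5$) fixes how the quoted absolute magnitudes follow from apparent magnitude and redshift. A rough sketch (the Mattig relation for $q_0=0.5$; the K-correction is neglected and the input magnitude is illustrative, not from the paper):

```python
import math

# Luminosity distance in the adopted cosmology (H0 = 50 km/s/Mpc, q0 = 0.5),
# using the Mattig relation valid for q0 = 0.5:
#   D_L = (2c/H0) * (1 + z - sqrt(1 + z))
# The K-correction is neglected, so this is only a rough illustration of
# how the quoted absolute magnitudes arise; the input values are hypothetical.

C_KM_S = 299792.458   # speed of light in km/s
H0 = 50.0             # km/s/Mpc, as adopted in the text

def luminosity_distance_mpc(z):
    return (2.0 * C_KM_S / H0) * (1.0 + z - math.sqrt(1.0 + z))

def absolute_magnitude(m_apparent, z):
    d_pc = luminosity_distance_mpc(z) * 1.0e6  # Mpc -> pc
    return m_apparent - 5.0 * math.log10(d_pc / 10.0)

# Illustrative values for a z ~ 1.5 quasar of apparent magnitude 16:
print(round(absolute_magnitude(16.0, 1.493), 1))  # -29.2
```

A bright quasar at these redshifts thus lands naturally in the quoted $-27<M_\mathrm{V}<-30$ range.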
All the sources are at high redshifts ($z>1$) with the sole exception of \object{PG 2112$+$059} ($z=0.466$) and the majority of the sources also have high absolute magnitudes ($-27<M_\mathrm{V}<-30$). The photometric data presented here were obtained during a number of observing runs spanning a period of six years. The observations were carried out during September 1990, October/November 1991, August/September 1992, February 1993, March 1994, December 1995 and May/June 1996. All the RQQSOs listed in Tables~\ref{three-quasars} and \ref{twenty-quasars} were monitored during at least one of these observing periods and a few were observed over a number of years. The observations were made with the one metre JK Telescope at the Observatorio del Roque de los Muchachos. A GEC CCD with $380\times580$ pixels was used for the September 1990 and October/November 1991 runs. An EEV CCD with $1246\times1152$ pixels and an image scale of 0.30 arcseconds pixel$^{-1}$ was used for all the subsequent observing runs. The latter CCD was preferred because the larger field of view offered a greater choice of reference stars for the differential photometry. The seeing typically varied between 1.0 and 1.8 arcseconds. The CCD fields containing the sources were observed through B, V or R-band filters and integration times varied between 3 and 14 minutes. The simultaneous observations of the source and several comparison stars allowed variations arising from fluctuations in atmospheric transmission and extinction to be removed. The radio-quiet quasars and the reference stars used in the analysis are identified in the CCD images (Fig.~\ref{charts}). Where possible, reference stars of comparable brightness and colour to the source were selected for the differential photometry. In all cases two or more reference stars were used. The CCD frames were processed using the standard software package IRAF. The DAOPHOT routine was used to carry out aperture-photometry on each star in the CCD frame. 
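The differential-photometry scheme described above, in which the quasar and comparison stars on the same frame share the same atmospheric transmission and extinction, can be sketched as follows; the instrumental magnitudes are hypothetical values, not data from the paper.

```python
# Differential photometry: instrumental magnitudes of the quasar (Q) and of
# the comparison stars (R1, R2, ...) measured on the same CCD frame share the
# same atmospheric transmission and extinction, so their differences remove
# those common-mode variations.  All magnitudes below are hypothetical.

def differential_magnitudes(m_refs, m_qso):
    """Return the differential magnitude R_i - Q for each reference star i."""
    return [m_ref - m_qso for m_ref in m_refs]

def nightly_average(per_frame_diffs):
    """Average the per-frame differential magnitudes over one night."""
    return sum(per_frame_diffs) / len(per_frame_diffs)

# Two frames taken in one night, three reference stars on each frame:
frame1 = differential_magnitudes([17.10, 17.55, 16.80], m_qso=17.42)
frame2 = differential_magnitudes([17.12, 17.56, 16.81], m_qso=17.44)

# Average R1 - Q over the night, as tabulated in the results:
print(round(nightly_average([frame1[0], frame2[0]]), 2))  # -0.32
```

When a night had more than one frame, the tables report exactly such a nightly average.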
The differential magnitudes were then calculated for any pair of objects in the frame. \section{Results} The dates of observations and the differential photometry results for each source are presented in Tables~\ref{three-quasars} and \ref{twenty-quasars}. The source name and redshift are given in columns 1 and 2. It should be noted that the right ascension (RA) and declination (DEC) of these sources are included with the finding charts (Fig.~\ref{charts}). The dates when each source was observed and the number of observations in each night per filter are presented in columns 3 and 4 respectively. The differential magnitudes ($\Delta$B, $\Delta$V, $\Delta$R) between the reference stars (R1, R2, R3) and the radio-quiet quasar (Q) are given in column 5. In cases where there was more than one observation of a source in a night, the average value of the differential magnitudes is given in column 5. The photometric error ($1\sigma$) is given in column 6. \begin{figure*} \centering \vspace{22cm} \caption{The finding charts for the radio quiet quasars (RQQ) and the reference stars (R1, R2, R3) used in the differential photometry. Each frame is orientated so north is up and east to the left. The length of the bar is one arc minute. 
B1950 coordinates have been used.} \label{charts} \end{figure*} \begin{table*} \caption{B and V-band differential photometry results for the three medium redshift radio-quiet quasars.} \centering \begin{tabular}{l|l|l|ll|lll|lll|l} \hline & & & \multicolumn{2}{l|}{\bf number of} & \multicolumn{6}{c|} {\bf differential magnitude} &\\ {\bf object Name} & {\bf $z$} & {\bf dates of} & \multicolumn{2}{l|}{\bf obs.} & \multicolumn{3}{l|}{\bf $\Delta$B} & \multicolumn{3}{l|}{\bf $\Delta$V} & {\bf error} \\ & & {\bf obs.} & \multicolumn{2}{l|}{\bf per filter} & {\bf R1$-$Q} & {\bf R2$-$Q} & {\bf R3$-$Q} & {\bf R1$-$Q} & {\bf R2$-$Q} & {\bf R3$-$Q} & {\bf $1\sigma$} \\ \hline \ {\bf \object{PG 0117$+$213}} & 1.493 & 15-09-90 & 13B & 16V & $-0.59$ & $-0.13$ & --- & $-1.23$ & $-0.52$ & --- & 0.05 \\ & & 19-09-90 & 14B & 15V & $-0.59$ & $-0.12$ & --- & $-1.22$ & $-0.51$ & --- & 0.05 \\ & & 31-10-91& 4B & 5V & $-0.56$ & $-0.11$ & --- & $-1.23$ & $-0.53$ & --- & 0.02 \\ & & 01-11-91& 5B & 6V & $-0.57$ & $-0.13$ & --- & $-1.23$ & $-0.53$ & --- & 0.02 \\ & & 21-09-92& 1B & 1V & $-0.57$ & --- & 3.10 & $-1.19$ & --- & 2.28 & 0.04 \\ & & 22-09-92& 1B & 1V & $-0.58$ & --- & 3.21 & $-1.18$ & --- & --- & 0.04 \\ & & 23-09-92& 1B & 1V & $-0.54$ & --- & 3.21 & $-1.20$ & --- & --- & 0.04 \\ & & 24-09-92& 1B & 1V & $-0.56$ & --- & 3.17 & $-1.24$ & --- & 2.30 & 0.04 \\ & & 25-09-92 & --- & 1V & --- & --- & --- & $-1.23$ & --- & 2.39 & 0.04 \\ & & 26-09-92& 1B & 1V & $-0.55$ & --- & --- & $-1.21$ & --- & 2.33 & 0.04 \\ & & 27-09-92& 1B & 1V & $-0.57$ & --- & 3.35 & $-1.24$ & --- & 2.32 & 0.04 \\ {\bf \object{PG 2112$+$059}} & 0.466 & 15-09-90 & --- & 8V & --- & --- & --- & 0.35 & 0.66 & --- & 0.03 \\ & & 16-09-90 & 6B & 6V & 0.76 & 1.34 & --- & 0.35 & 0.65 & --- & 0.03 \\ & & 21-09-92 & 2B & 2V & 0.72 & 1.34 & 0.82 & 0.31 & 0.61 & 0.23 & 0.03 \\ & & 22-09-92 & 1B & 1V & 0.71 & 1.33 & 0.82 & 0.33 & 0.62 & 0.21 & 0.03 \\ & & 25-09-92 & 2B & 2V & 0.73 & 1.34 & 0.84 & 0.32 & 0.60 & 0.22 & 0.03 \\ 
& & 26-09-92 & 1B & 1V & 0.72 & 1.32 & 0.82 & 0.33 & 0.60 & 0.23 & 0.03 \\ & & 28-05-96 & --- & 2V & --- & --- & --- & 0.51 & 0.81 & 0.41 & 0.03 \\ & & 31-05-96 & --- & 3V & --- & --- & --- & 0.50 & 0.80 & 0.39 & 0.03 \\ & & 01-06-96 & --- & 2V & --- & --- & --- & 0.49 & 0.79 & 0.40 & 0.03 \\ & & 03-06-96 & --- & 4V & --- & --- & --- & 0.50 & 0.77 & 0.42 & 0.03 \\ {\bf \object{PG 2302$+$029}} & 1.044 & 26-10-91 & 1B & 1V & 0.22 & 0.59 & --- & $-0.44$ & 0.19 & --- & 0.03 \\ & & 31-11-91 & 2B & 2V & 0.26 & 0.54 & --- & $-0.40$ & 0.24 & --- & 0.03 \\ & & 01-11-91 & 1B & 1V & 0.25 & 0.56 & ---& --- & 0.22 & ---& 0.03 \\ & & 02-11-91 & 1B & 1V & 0.25 & 0.60 & ---& $-0.40$ & 0.24 & --- & 0.03 \\ & & 20-08-92 & 1B & 1V & 0.27 & 0.54 & --- & $-0.38$ & 0.24 & --- & 0.03 \\ \hline \end{tabular} \label{three-quasars} \end{table*} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{pg2112.eps}} \caption{Differential V band photometry between the radio quiet quasar \object{PG 2112$+$059} (QSO) and three reference stars R1, R2 and R3 and differential photometry between the reference stars R1-R2 and R1-R3. The star R3 was unobservable during September 1990, due to the small CCD field of view. 
The average relative magnitude per night is plotted.} \label{PG2112+059} \end{figure*} \begin{table*} \caption{V {\it or} R-band differential photometry results for twenty high redshift radio-quiet quasars.} \centering \begin{tabular}{l|l|l|l|lll|l} \hline & & & {\bf number of} & & && \\ {\bf object Name}& {\bf $z$}&{\bf dates of}& {\bf observations} & \multicolumn{3}{l|}{\bf differential magnitude} & {\bf error} \\ & & {\bf observations} & {\bf per filter}& {\bf R1$-$Q} & {\bf R2$-$Q} & {\bf R3$-$Q} & {\bf $1\sigma$} \\ \hline {\bf \object{US 1420}} & 1.473 & 18-02-93 & 2V & $-0.90$ & $-1.78$ & 0.30 & 0.04 \\ & & 20-02-93 & 1V & $-0.94$ & $-1.80$ & 0.28 & 0.04 \\ & & 21-02-93 & 2V & $-0.94$ & $-1.82$ & 0.34 & 0.04 \\ & & 23-02-93 & 1V & $-0.93$ & $-1.82$ & 0.32 & 0.04 \\ {\bf \object{US 1443}} & 1.564 & 18-02-93 & 2V & $-0.97$ & $-0.74$ & $-0.52$ & 0.03 \\ & & 21-02-93 & 2V & $-0.95$ & $-0.75$ & $-0.50$ & 0.03 \\ & & 23-02-93 & 2V & $-0.98$ & $-0.72$ & $-0.52$ & 0.03 \\ {\bf \object{US 1498}} & 1.406 & 18-02-93 & 2V & 0.10 & $-0.44$ & 0.37 & 0.03 \\ & & 21-02-93 & 2V & 0.10 & $-0.44$ & 0.35 & 0.03 \\ & & 23-02-93 & 2V & 0.09 & $-0.45$ & 0.34 & 0.03 \\ {\bf \object{BR 0945$-$04}} & 4.118 & 22-02-93 & 2R & $-3.59$ & $-2.02$ & --- & 0.07 \\ & & 24-02-93 & 2R & $-3.52$ & $-1.95$ & --- & 0.03 \\ & & 19-12-95 & 3R & $-3.54$ & $-1.97$ & --- & 0.03 \\ {\bf \object{H 1011$+$091}} & 2.27 & 18-02-93&2R & --- & $-1.15$ & $-1.61$ & 0.03 \\ & & 21-02-93 & 2R & 0.12 & 1.12 & $-1.58$ & 0.03 \\ & & 23-02-93&2R & 0.11 & $-1.10$ & $-1.60$ & 0.03 \\ {\bf \object{BRI 1013$+$00}} & 4.38& 22-02-93& 2R & $-2.76$ & $-2.89$ & --- & 0.03 \\ & &24-02-93& 2R & $-2.78$ & $-2.89$ & --- & 0.03 \\ {\bf \object{BR 1033$-$03}} & 4.50& 22-02-93 & 2R & $-1.13$ & $-2.59$ & --- & 0.03 \\ {\bf \object{BRI 1050$-$00}} & 4.29 & 22-02-93 & 2R & $-0.37$ & 0.20 & --- & 0.04 \\ & & 19-12-95& 2R & $-0.32$ & 0.22 & --- & 0.04 \\ {\bf \object{BRI 1108$-$07}} & 3.94 & 22-02-93& 2R & $-0.38$ & $-1.83$ & --- & 0.04 
\\ {\bf \object{BRI 1110$+$01}} & 3.93 & 22-02-93& 2R & $-1.67$ & $-1.87$ & --- & 0.04 \\ {\bf \object{BR 1144$-$08}} & 4.16 & 22-02-93 & 2R & $-2.60$ & $-1.83$ & --- & 0.04 \\ {\bf \object{1146$+$111D}} & 2.12 & 18-02-93& 2R & $-0.20$ & 0.15 & $-2.03$ & 0.03 \\ & & 20-02-93 & 2R & $-0.20$ & 0.13 & $-2.04$ & 0.03 \\ & & 21-02-93& 2R & $-0.18$ & 0.17 & $-2.00$ & 0.03 \\ & & 23-02-93& 2R & $-0.19$ & 0.18 & $-1.99$ & 0.03 \\ {\bf \object{1159$+$123}} & 3.51 & 18-02-93 & 2R & 0.23 & $-0.29$ & 0.44 & 0.04 \\ & & 20-02-93 & 2R & 0.21 & $-0.28$ & 0.47 & 0.04 \\ & & 21-02-93 & 2R & 0.20 & $-0.29$ & 0.44 & 0.04 \\ {\bf \object{1201$-$015}} &2.26& 19-02-93& 2R & $-3.39$ & $-2.38$ & --- & 0.04 \\ & & 21-02-93& 1R & $-3.44$ & $-2.34$ & $-0.68$ & 0.04 \\ & & 23-02-93& 2R & $-3.45$ & $-2.33$ & $-0.67$ & 0.04 \\ {\bf \object{BR 1202$-$07}} & 4.70 & 22-02-93 & 2R & $-0.03$ & $-0.61$ & --- & 0.07 \\ {\bf \object{1222$+$023}} & 2.05 & 19-02-93 & 2R & $-3.00$ & 0.92 & $-1.81$ & 0.05 \\ & & 21-02-93& 2R & $-3.02$ & 0.92 & $-1.81$ & 0.05 \\ & & 23-02-93& 2R & $-2.98$ & 0.93 & $-1.76$ & 0.05 \\ {\bf \object{PG 1247$+$268}} & 2.041 & 18-02-93& 1R & 1.37 & 2.03 & 1.73 & 0.03 \\ & & 21-02-93 & 2R & 1.35 & 2.06 & 1.77 & 0.03 \\ {\bf \object{W 61972}} & 1.92 & 18-02-93 & 2R & 0.51 & $-1.20$ & $-1.83$ & 0.02 \\ & & 21-02-93 & 2R & 0.51 & $-1.20$ & $-1.82$ & 0.02 \\ & & 23-02-93& 1R & --- & $-1.20$ & $-1.85$ & 0.02 \\ {\bf \object{PG 1329$+$412}} & 1.93 & 18-02-93 & 2R & $-0.67$ & 0.79 & $-1.55$ & 0.03 \\ & & 23-02-93 & 2R & $-0.67$ & 0.77 & $-1.54$ & 0.03 \\ {\bf \object{BRI 1500$+$08}} & 3.96 & 23-02-93 & 2R & $-0.82$ & $-0.92$ & --- & 0.07 \\ & & 17-03-94 & 2R & $-0.86$ & --- & --- & 0.07 \\ \hline \end{tabular} \label{twenty-quasars} \end{table*} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{pg0117.eps}} \caption{Differential V band photometry between the radio quiet quasar \object{PG 0117$+$213} (QSO) and the two reference stars R1 and R2 and differential photometry between the 
reference stars R1-R2. The origins of the time axes of the two upper plots are 00:50 UT on September 15 and 00:32 UT on September 19, 1990. The origins of the time axes of the lower plots are 22:12 UT on October 31 and 22:43 UT on November 1, 1991.} \label{PG0117+213} \end{figure*} The major result is that no statistically significant rapid variability was observed for any of the sources listed in Tables~\ref{three-quasars} and \ref{twenty-quasars}. The data also reveal no night to night or longer term variability for any of the sources, with the sole exception of \object{PG 2112$+$059}. This source brightened by 0.18 magnitudes in the V-band between September 1992 and June 1996 (Table~\ref{three-quasars} and Fig.~\ref{PG2112+059}). The source with the largest number of observations is \object{PG 0117$+$213} (Table~\ref{three-quasars}) and some of the differential V-band results are presented in Fig.~\ref{PG0117+213}. No significant short term variability was observed. Furthermore, careful analysis of the remainder of the V-band data and all of the B-band data revealed no significant short or long term variability. The sources in this sample can be divided into three main redshift groups: A($z<2$), B($2<z<3$), C($z>3$), and the following section includes comments on a number of the sources in these three categories. \begin{description} \item[\bf A($z<2$)] \object{PG 0117$+$213} was the most frequently observed source in the sample and no significant variability was detected. The spectra of the source reveal the emission lines \ion{C}{iii} and \ion{C}{iv} at $z_\mathrm{em}=1.493$ (Hewitt \& Burbidge 1987). Infrared observations over a period of years show maximum variability of 0.05 and 0.16 magnitudes at 2.2~\mbox{$\mu$m} and 10.1~\mbox{$\mu$m} respectively (Neugebauer et al. 1989). Sagar et al. (1996) observed this RQQSO during November 1996 and detected a hint of optical microvariability and suggested that careful monitoring of this source should continue. 
The only source in this sample of RQQSOs to display variability is \object{PG 2112$+$059}. It was observed during 1990, 1992 and 1996 and was found to have brightened by 0.18 magnitudes between September 1992 and June 1996 (Table~\ref{three-quasars}). \object{PG 2302$+$029} reveals emission peaks in its spectrum which have been identified with \ion{Fe}{iii} multiplets (Wampler 1986). This source has a redshift $z=1.044$ and $m_\mathrm{B}=16.03$. No variability was detected during the 1991 or 1992 observing runs. \object{US 1420}, \object{US 1443} and \object{US 1498} are all ultraviolet excess sources. Spectroscopy of blue and ultraviolet excess sources was reported by Mitchell et al. (1984) and the redshifts are given in Table~\ref{twenty-quasars}. These sources and the remaining sources in this category displayed no rapid variability. \item[\bf B($2<z<3$)] \object{H 1011$+$091} is a broad absorption line \linebreak (BAL) RQQSO with $m_\mathrm{V}=17.8$ (Hartig \& Baldwin 1986). The \ion{Mg}{ii} emission line yields $z=2.27$ (Drew \& Boksenberg 1984). The source \object{1146$+$111D} has a redshift $z=2.12$ and is part of a compact group of five quasars with similar apparent magnitude and redshift and all within a diameter of 4 arcminutes (Hazard et al. 1979). The sources \object{1201$-$015} and \object{1222$+$023} were detected optically and listed by MacAlpine \& Williams (1981). \object{1201$-$015} has an estimated $m_\mathrm{V}=18.0$, and emission lines $\lambda$3970 and $\lambda$4560 believed to be \ion{Ly}{$\alpha$} and \ion{O}{iv} yielding $z=2.26$; \object{1222$+$023} has $z=2.05$ and an estimated $m_\mathrm{V}=17.0$. \object{PG 1247$+$268} was observed by Green et al. (1980). The optical spectrum shows four strong emission features \ion{Ly}{$\alpha$}$+$\ion{N}{v}, \ion{Si}{iv}$+$\ion{O}{iv}, \ion{C}{iv} and \ion{C}{iii} with $z=2.041$ and $m_\mathrm{V}=15.8$. \item[\bf C($z>3)$] \object{1159$+$123} discovered by Hazard et al. 
(1984) using objective prism plates from the UK Schmidt telescope is a strong emission-line RQQSO with $z_\mathrm{em}=3.51$ and $m_\mathrm{V}=17.5$. Irwin et al. (1991) carried out a very successful multicolour survey in B, R and I (selected B-R) to search for high redshift quasars. Using this method they found 27 quasars at $z>4$. R-band data for nine of these high redshift quasars are given in Table~\ref{twenty-quasars}: \object{BR 0945$-$0411}, $m_\mathrm{R}=18.80$; \object{BRI 1013$+$0035}, $m_\mathrm{R}=18.80$; \object{BR 1033$-$0327}, $m_\mathrm{R}=18.50$; \object{BRI 1050$-$0000}, $m_\mathrm{R}=18.59$; \object{BRI 1108$-$0747}, $m_\mathrm{R}=18.13$; \object{BRI 1110$+$0106}, $m_\mathrm{R}=18.30$; \object{BR 1144$-$0723}, $m_\mathrm{R}=18.60$; \object{BR 1202$-$0725}, $m_\mathrm{R}=18.70$ and \object{BRI 1500$+$0824}, $m_\mathrm{R}=19.25$. No variability was found; however, the search for optical variability in these high redshift RQQSOs will continue. \end{description} \section{Discussion} The mechanisms that make some quasars radio-loud and others radio-quiet are not well understood but the major differences may be attributable to the spin of the black hole (Wilson \& Colbert 1995). There are a number of well established distinctions between the two classes. (i) Radio-loud quasars are associated with elliptical host galaxies and radio-quiet quasars tend to reside in spiral galaxies. The mean absolute magnitude of the underlying galaxies of radio-loud quasars is similar to that of radio galaxies; however, the host galaxies of radio-quiet quasars are 0.6--1.0 magnitudes less luminous (Smith et al. 1986; V\'{e}ron-Cetty \& Woltjer 1990). Recent deep near infrared (K-band) imaging of host galaxies of quasars revealed that more than half of RQQSOs appear to lie in galaxies that are dominated by an exponential disk (Taylor et al. 1996). 
Those RQQSOs that have elliptical host galaxies show signs of interaction and are in general more luminous than those that reside in disk galaxies. There is also some evidence to suggest that a large majority of low-luminosity radio-quiet AGN lie in disk galaxies but a significant fraction of RQQSOs more luminous than $M_\mathrm{V}\approx-23.5$ have elliptical host galaxies. (ii) Unlike RQQSOs, radio-loud quasars produce large scale jets and lobes and can be defined by their radio luminosity -- those with $\mathrm{L}(5~\mbox{GHz})<10^{25}~\mbox{W~Hz}^{-1}~\mbox{sr}^{-1}$ are classified as RQQSOs, and those with $\mathrm{L}(5~\mbox{GHz})>10^{25}~\mbox{W~Hz}^{-1}~\mbox{sr}^{-1}$ as radio loud quasars (Miller et al. 1990). (iii) Gamma-ray results recently obtained by the Compton Gamma Ray Observatory (CGRO) have produced evidence for two classes of AGN (Dermer \& Gehrels 1995). These classes are defined by their redshift, luminosity distributions and high energy spectral properties. The first class of objects has redshifts $\leq0.06$ and 50--150~\mbox{keV} luminosities in the range $10^{41}$--$10^{44}~\mbox{erg s}^{-1}$. Associated with this group are Seyfert galaxies, RQQSOs and radio-galaxies viewed at large angles with respect to the radio jet axis. The gamma-ray spectra for these sources soften between $\sim100~\mbox{keV}$ and several MeV. The second class of source consists of blazars with redshifts as large as 2.3 which have detectable fluxes of gamma-rays above 100~\mbox{MeV} with emission extending into the GeV band. This class of source probably consists of AGN that are observed close to the axis of a radio jet. So far no RQQSO has been detected at gamma ray energies above 10~\mbox{MeV}. Unlike that of radio-loud quasars, intensive optical monitoring of RQQSOs only started in the early 1990s. Gopal-Krishna et al. 
(1995) monitored a sample of six optically bright and luminous RQQSOs ($m_\mathrm{V}\approx16$ and $M_\mathrm{V}<-23$) and found strong hints for variability in three RQQSOs; \object{PG 0946$+$301} displayed an R-band increase of $\sim0.05$ magnitudes in a time of $\sim0.5~\mbox{hr}$; \object{PG 1049$-$006} varied in V-band by $\sim0.05$ magnitudes in $\sim0.6~\mbox{hr}$ and \object{PG 1206$+$459} varied by 0.04 magnitudes in $\sim2~\mbox{hr}$ in the R-band. Sagar et al. (1996) reported that the flux density from \object{PG 1444$+$407} dropped by 0.04 magnitudes in $\sim0.5~\mbox{hr}$ in the R-band. Sagar et al. (1996) also reported long-term variability (over $\sim1~\mbox{yr}$) for four RQQSOs in their sample, with the largest variability of about 0.15 magnitudes in 11 months, recorded for the source \object{PG 1049$-$005}. Jang \& Miller (1995) reported intranight variability in the RQQSO \object{II Zw 175} which varied by $\sim0.05$ magnitudes over a period of 4~days. The reported detection of variability in RQQSOs is not in conflict with the results presented here and in Tables~\ref{three-quasars} and \ref{twenty-quasars}, because the small levels of variability would not have been detected above the $3\sigma$ level in many of the sources monitored in this survey. One source, \object{PG 2112$+$059}, displayed long-term variability, brightening by 0.18 magnitudes in the V-band over a period of almost four years. The long term variability of large samples of optically-selected quasars has been studied over decades (Hook et al. 1994; Cristiani et al. 1996). In these samples, a strong negative correlation between variability and quasar luminosity was found, with the more luminous quasars displaying less variability. This result is interesting considering that RQQSOs in this sample all have high luminosities ($-27<M_\mathrm{V}<-30$), and no rapid optical or night to night variability was detected in any of the sources. 
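A minimal sketch of the $3\sigma$ significance criterion discussed above: a change in differential magnitude counts as variability only if it exceeds three times the combined photometric error. The numerical values are illustrative, chosen to resemble entries in Table~\ref{three-quasars}.

```python
import math

# Minimal sketch of the 3-sigma criterion discussed in the text: a change in
# differential magnitude between two epochs counts as variability only if it
# exceeds n_sigma times the photometric errors combined in quadrature.
# Input values below are illustrative, resembling the tabulated data.

def is_variable(dm1, dm2, sigma1, sigma2, n_sigma=3.0):
    change = abs(dm1 - dm2)
    combined_error = math.sqrt(sigma1**2 + sigma2**2)
    return change > n_sigma * combined_error

# A 0.05 mag change with 0.04 mag errors is well below 3 sigma:
print(is_variable(-0.59, -0.54, 0.04, 0.04))  # False
# A 0.18 mag change with 0.03 mag errors (as for PG 2112+059) is not:
print(is_variable(0.32, 0.50, 0.03, 0.03))    # True
```

This is why small reported variations of $\sim0.05$ magnitudes elsewhere do not conflict with the null results in Tables~\ref{three-quasars} and \ref{twenty-quasars}.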
\subsection{Radio results on RQQSOs} The radio spectra of RQQSOs probably have contributions from three components: (i) optically thin synchro\-tron from star forming regions in the disk of the host galaxy and in a circumnuclear starburst, (ii) optically thin synchrotron from an extended or possibly jet-like component powered by an active nucleus and (iii) partially opaque synchrotron from a compact VLBI-scale core. In an extensive survey of RQQSOs, Kellermann et al. (1994) obtained VLA maps which show extended and double lobe radio structures in some sources that are similar to those observed in radio-loud quasars. The RQQSOs mostly have a radio luminosity well in excess of the $10^{22}~\mbox{W Hz}^{-1}$ found for most normal spiral and elliptical galaxies and hence the radio emission is not simply that from the underlying galaxy. It is quite possible that for sensitivity reasons quasars with additional low-surface brightness features may have been missed in the VLA mapping of the radio-quiet sources. Nevertheless, a number of the radio-quiet quasars in the Kellermann et al. (1994) survey (e.g. \object{0953$+$41}; \object{1116$+$21}; \object{1634$+$70}) are well resolved and show extended structure ranging between 49 and $\sim300~\mbox{kpc}$. Recent VLA results (Barvainis et al. 1996) revealed heterogeneous spectral shapes in the radio spectra of a sample of RQQSOs that could be classified into general categories similar to radio loud quasars. Furthermore, variability was discovered for seven sources, most of which had flat or inverted radio spectra. In one source, VLBI revealed that essentially all the flux emanated from one compact sub-parsec core. The radio results on these types of sources appear to be inconsistent with starburst models and imply that the cores of many RQQSOs may be scaled down versions of those found in radio loud quasars (Stein 1996). 
\subsection{The remarkable RQQSOs, \object{IRAS 13349$+$2438} and \object{PG 1416$-$129}} Recent results have highlighted the unusual properties of two RQQSOs, both of which lie near the top of the upper band of radio emission for RQQSOs ($R\sim1$). The first source, \object{IRAS 13349$+$2438}, was initially detected through its strong infrared emission (Beichman et al. 1986) and has been classified as a radio-quiet, infrared bright quasar with a value of $R=1.9$. It has a redshift of $z=0.107$ and a high polarisation which rises from 1.4~\mbox{\%} at 2.2~\mbox{$\mu$m} (K-band) to 8~\mbox{\%} at 0.36~\mbox{$\mu$m} (U-band). Wills et al. (1992) found no variability of the polarisation or flux density on timescales from days to months and discussed a bipolar geometry to account for its polarisation and other properties. VLA observations revealed an unresolved source with an unusual radio spectrum with a maximum flux density of 7~\mbox{mJy} at a frequency of 5~\mbox{GHz}. The origin of the peaked radio emission is not understood but absorption of the radio emission may occur in the dusty dense interstellar medium and also the contributions of different spectra from several source components may be involved. \object{IRAS 13349$+$2438} has high polarisation and strong \ion{Fe}{ii} emission and is radio-quiet, but no broad absorption lines (BAL) have been observed. \object{IRAS 13349$+$2438} has been observed on several occasions by ROSAT, where the source was found to vary by a factor of 4.1 in about one year and about 25~\mbox{\%} in one week (Brandt et al. 1996), but showed no evidence for the large intrinsic absorption of soft x-rays by cold neutral matter. The soft x-ray variability excluded electron scattering for most of the soft x-rays and suggests absorption by a warm ionized gas with internal dust. Recent ASCA observations of \object{IRAS 13349$+$2438} discovered for the first time rapid x-ray variability or blazar-like behaviour in a RQQSO. 
The source displayed intensity variations on two separate occasions by factors of two on timescales of only a few hours without any significant spectral changes (Brinkmann et al. 1996). There is also some evidence in the x-ray data for even more rapid variability. The 0.6 to 8~\mbox{keV} spectrum was fitted with a power law with $\Gamma=2.40$, which is steeper than the average value of $\Gamma=1.9$ found for RQQSOs in the ASCA band. Brinkmann et al. (1996) point out that the line of sight to the quasar may graze the edge of the torus and suggest that small changes in viewing conditions could produce marked changes in intensity and spectral shape. However, it is difficult to understand how changes in viewing conditions could produce such rapid variability with no significant spectral changes. It is plausible that \object{IRAS 13349$+$2438} is an example of a radio loud quasar that is viewed through a dusty ionizing outflow, possibly associated with a merger, that severely attenuates the radio emission so that the source is classified as a RQQSO. In this context it should be noted that the quasar \object{III Zw 2} could be classified as a RQQSO at 5~\mbox{GHz} but the radio spectrum rises steeply toward higher frequencies and this source is radio-loud when the 90~\mbox{GHz} flux density is adopted (Schnopper et al. 1978). Further observations of \object{IRAS 13349$+$2438} across the full spectrum including VLBI searches for a compact self-absorbed component or indeed multiple source components may help elucidate the nature of this unusual hybrid type source. The second unusual RQQSO is \object{PG 1416$-$129} at $z=0.129$. It has been classified (Turnshek \& Grillmair 1986; Ulrich 1988) as a broad absorption line quasar \linebreak (BALQSO). The value of $R$ is 1.1 and, similar to \object{IRAS 13349$+$2438}, it lies near the top of the upper band for RQQSOs which is also heavily populated with BALQSOs (Francis et al. 1995). 
This source has a soft x-ray excess (de Kool \& Meurs 1994) and the hardest spectral index of any source in the energy range 2--20~\mbox{keV} when observed with GINGA (Williams et al. 1992). Staubert \& Maisack (1996) detected this bright RQQSO at energies above 50~\mbox{keV} with the OSSE telescope on CGRO and the flux was found to be variable on a timescale of days during the 14 day observation. The BAL classification of this source has been questioned by Green \& Mathur (1996), who proposed that large values of the optical to X-ray slope ($\alpha_\mathrm{ox}>1.8$) be the defining characteristic of BALQSOs. They report a low value ($\alpha_\mathrm{ox}\sim1.4$) for \object{PG 1416$-$129} and suggest further observations with HST to check the BAL classification. VLA observations of \object{PG 1416$-$129} reveal unusual results for a RQQSO: (i) the radio source consists of an unresolved core coincident with an extended component that is assumed not to be an unrelated background source (Kellermann et al. 1994), (ii) the source varied by a factor of 4.5 over a period of ten years and could have been more variable given the very limited monitoring at radio frequencies and (iii) the radio spectrum is consistent with a flat or inverted spectrum (Barvainis et al. 1996). No VLBI observations have been reported for \object{PG 1416$-$129} but one of the other variable RQQSOs, \object{PG 1216$+$069}, was detected with essentially all of its flux in the VLBI core and the high brightness temperature limit confirmed the self-absorbed synchrotron hypothesis for the flat spectrum component (Barvainis et al. 1996). This component may well be a scaled-down version of the radio sources observed in radio loud quasars and future VLBI observations of \object{PG 1416$-$129} may also reveal a similar component. 
\section{Conclusions} A long-term survey of a sample of high luminosity ($-27<M_\mathrm{V}<-30$) and medium to high redshift ($0.466<z<4.7$) radio-quiet quasars was undertaken, in order to search for short and long term optical variability on timescales of hours to years. A large sample of 23 RQQSOs was observed, with a total observation time of about 60 hours spread over a period of several years. Long-term variability was detected in the RQQSO \object{PG 2112$+$059} when it varied by 0.18 magnitudes in the V-band between 1992 and 1996. No rapid variability was observed in any of the sources in this sample of RQQSOs. The finding charts are included because they identify the RQQSO and the reference stars used in the photometry and hence are available to other observers. There have been a few reports of rapid optical variability in a number of RQQSOs but these reports are not in conflict with the results presented here because such small variability would not have been detected in many of the sources monitored in this survey. The unusual properties of two sources are highlighted. These sources were not monitored in this survey but have recently been added to the list of sources for study. The remarkable source \object{IRAS 13349$+$2438} combines some of the properties of blazars and radio quiet quasars and hence further observations may elucidate the nature of this hybrid source. The two unusual sources have $R$ values near the top of the range for RQQSOs and also have unusual radio spectra that may signify the presence of several source components. Further observations with VLA and VLBI should reveal new and enlightening views on the radio properties of these sources. \begin{acknowledgements} The Jacobus Kapteyn Telescope on the island of La Palma is operated by the Royal Greenwich Observatory at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'{\i}sica de Canarias. 
We are grateful to Catherine Handley and Matt Delaney for their help in the preparation of this paper. \end{acknowledgements}
\section{Notations and Assumptions} We use notation and conventions as in \cite[Section 1.2]{CHR-hmin}: \begin{itemize} \item Given a valued field $K$, we write $|x|$ for the valuation of an element $x \in K$, we use multiplicative notation for the value group, which is denoted by $\Gamma^\times_K$, and we set $\Gamma_K := \Gamma^\times_K \cup \{0\}$, where $0 := |0|$. The valuation ring of $K$ is denoted by $\mathscr{O}_K$. \item Given $a \in K$ and $\lambda \in \Gamma_K$, we write $B_{<\lambda}(a) = \{x \in K : |x-a| < \lambda\}$ for the open ball of radius $\lambda$ around $a$. \item Given $\lambda \le |1|$ in $\Gamma^\times_K$, we write $\RV_{\lambda,K} := (K^\times/B_{<\lambda}(1)) \cup \{0\}$ for the corresponding leading term structure (recall that $B_{<\lambda}(1)$ is a subgroup of the multiplicative group $K^\times$) and $\rv_\lambda\colon K \to \RV_{\lambda,K}$ for the canonical map. In the case $\lambda = |1|$, we will also omit the index $\lambda$, writing $\rv\colon K \to \RV_{K}$. \item We will freely consider $\Gamma_K$ as an imaginary sort, and also $\RV_{\lambda,K}$, when $\lambda$ is $\emptyset$-definable. (Actually, the only $\lambda$ relevant in this note are of the form $|p^\nu|$, where $p$ is the residue characteristic.) \end{itemize} The notions of $0$-h-minimality (from \cite{CHR-hmin}) and $0$-h$^{\mathrm{mix}}$-minimality (from \cite{CHRV-hmin2}) are defined by imposing conditions on definable subsets of $K$. Instead of recalling those definitions, we cite some direct consequences as Lemmas~\ref{lem.hmin.equi} and \ref{lem.hmin.mix}. Those are simply family versions of the definitions of $0$-h-minimality and $0$-h$^{\mathrm{mix}}$-minimality. (The actual definitions are obtained by restricting those lemmas to the case $k = 0$.) 
\begin{lem}[{\cite[Proposition~2.6.2]{CHR-hmin}}]\label{lem.hmin.equi} Suppose that $K$ is a valued field of equi-characteristic $0$ with $0$-h-minimal theory, that $A \subseteq K$ is an arbitrary (parameter) set and that $W \subseteq K \times \RV_K^k$ is $(A \cup \RV_K)$-definable. Then there exists a finite (without loss non-empty) $A$-definable set $C \subset K$ such that, for $x \in K$, the fiber $W_x \subseteq \RV_K^k$ only depends on the tuple $(\rv(x - c))_{c \in C}$; i.e., if $\rv(x - c) = \rv(x' - c)$ for all $c \in C$, then $W_x = W_{x'}$. \end{lem} \begin{lem}[{\cite[Corollary~2.3.2]{CHRV-hmin2}}]\label{lem.hmin.mix} Suppose that $K$ is a valued field of mixed characteristic with $0$-h$^{\mathrm{mix}}$-minimal theory, that $A \subseteq K$ is an arbitrary (parameter) set, and that $W \subseteq K \times \RV_{|m'|,K}^k$ is $(A \cup \RV_{|m'|,K})$-definable, for some integer $m' \ge 1$. Then there exists a finite $A$-definable set $C \subset K$ and an integer $m \ge 1$ such that, for all $x \in K$, the fiber $W_x \subseteq \RV_{|m'|,K}^k$ only depends on the tuple $(\rv_{|m|}(x - c))_{c \in C}$. \end{lem} \begin{rmk} In \cite[Corollary~2.3.2]{CHRV-hmin2}, $W$ is required to be $A$-definable. However, if $\phi(x, a', a)$ defines $W \subseteq K \times \RV_{|m'|,K}^k$ for some $a' \in \RV_{|m'|}^\ell$ and $a \subset A$, then one can apply the original version of \cite[Corollary~2.3.2]{CHRV-hmin2} to the subset of $K \times \RV_{|m'|,K}^{k+\ell}$ defined by $\phi(x, y, a)$. \end{rmk} Note that if in Lemma~\ref{lem.hmin.mix}, one takes $K$ of equi-characteristic $0$, then $|m'| = |m| = 1$, so one just gets back Lemma~\ref{lem.hmin.equi}. In a similar way, equi-characteristic zero and mixed characteristic could be treated together in this entire note. However, since the latter is more technical, we will often explain the equi-characteristic zero case first. Any field $K$ as in Theorem~\ref{t.sphc.ex} is definably spherically complete. 
In equi-characteristic $0$, this is just \cite[Lemma~2.7.1]{CHR-hmin}, and in general, it follows \emph{a posteriori} from Theorem~\ref{t.sphc.ex}. We nevertheless give a quick separate proof in the mixed characteristic case: \begin{prop}\label{prop:dsc-mix} If $K$ is a finitely ramified valued field of mixed characteristic with $0$-h$^{\mathrm{mix}}$-minimal theory, then it is definably spherically complete, i.e., every definable family of nested balls $B_q$ has non-empty intersection, where $q$ runs over an arbitrary (maybe imaginary) definable set $Q$. \end{prop} \begin{proof} For each $\gamma \in \Gamma_K$, consider the unique open ball $B'_{<\gamma}$ of radius $\gamma$ that contains some ball $B_q$, for some $q \in Q$, if it exists; let $Q' \subseteq \Gamma_K$ be the set of those $\gamma$ for which $B^\prime_{<\gamma}$ exists. Since the chains $(B_q)_{q \in Q}$ and $(B'_{<\gamma})_{\gamma \in Q'}$ have the same intersection, by working with the latter, we can, and do, assume without loss that $Q \subseteq \Gamma_K$ and that every $B_q$ is an open ball of radius $q$. Pick a finite set $C \subset K$ (using Lemma~\ref{lem.hmin.mix}) and an integer $m \ge 1$ such that for $x \in K$, the set $W_x := \{\xi \in \RV_K \mid x \in B_{|\xi|}\}$ only depends on the tuple $(\rv_{|m|}(x - c))_{c \in C}$. In particular, for every $q \in Q$, if we have some $x' \in B_q$ and some $x \in K$ satisfying $\rv_{|m|}(x - c) = \rv_{|m|}(x' - c)$ for every $c \in C$, then $x$ is an element of $B_q$ too. Suppose now that there exists a $q \in Q$ such that $B_q \cap C = \emptyset$. (Otherwise, $C$ contains an element of the intersection of all $B_q$ and we are done.) We claim that every $q' \in Q$ satisfies $q' \ge q\cdot |m|$. Since $K$ is finitely ramified, this claim implies that below $B_q$, the chain is finite and hence has a minimum (which is then equal to the intersection of the entire chain). To prove the claim, suppose for a contradiction that $q' < q\cdot |m|$. 
Pick any $x' \in B_{q'}$ and any $x \in K$ with $|x - x'| = q'$ (and hence $x \in B_q \setminus B_{q'}$). Then, for any $c \in C$ we have $|c - x| \ge q$ (since $C \cap B_q = \emptyset$) and hence $|c-x|\cdot|m| \ge q\cdot |m| > q' = |x - x'| = |(x -c) - (x'-c)|$. Thus $\rv_{|m|}(x - c) = \rv_{|m|}(x' - c)$ for all $c \in C$, contradicting our choice of $C$ (and the fact that $x' \in B_{q'}$ and $x \notin B_{q'}$). \end{proof} Note that in mixed characteristic, the assumption of finite ramification is really necessary to obtain definable spherical completeness, as the following example shows. \begin{exa}\label{exa:not-dsc} Let $K$ be the algebraic closure of $\mathbb{Q}$, considered as a valued field with the $p$-adic valuation. Fix any elements $a_n \in K$ ($n \in \mathbb{N}_{\ge1}$) with $|a_n| = \lambda_n := |p|^{1-1/n}$ and set $a_I := \sum_{i \in I} a_i$ for $I \subset \mathbb{N}_{\ge 1}$ finite. The balls $B_{\le\lambda_n}(a_I)$ ($n \in \mathbb{N}_{\ge1}$, $I \subset \{1, \dots, n\}$) form an infinite binary tree (with $B_{\le\lambda_n}(a_I)$ containing $B_{\le\lambda_{n+1}}(a_I)$ and $B_{\le\lambda_{n+1}}(a_{I\cup\{n\}})$), so since $K$ is countable, there exists a chain $B_n = B_{\le\lambda_n}(b_n)$ (where $b_n = a_{I_n}$ for suitable $I_n$) which has empty intersection. We fix such a chain. Since $K$ is henselian, it is $1$-h-minimal. (By \cite{CHR-hmin}, it is even $\omega$-h$^{\mathrm{ecc}}$-minimal.) Since each $B_n$ has a radius between $1$ and $|p|$, it is the preimage of a subset of the residue ring $\mathscr{O}_K / B_{<|p|}(0) \subset \RV_{|p|}$, so we can turn the chain $(B_n)_{n \in \mathbb{N}_{\ge 1}}$ into a definable family by expanding the language by a suitable predicate on $\RV_{|p|}^2$. By \cite[Proposition~2.6.4]{CHRV-hmin2}, $K$ stays $1$-h-minimal when expanding the language by this predicate. In particular, it is $0$-h$^{\mathrm{mix}}$-minimal. 
However, it is not definably spherically complete, as witnessed by the (now definable) chain $(B_n)_{n \in \mathbb{N}_{\ge 1}}$. \end{exa} \section{Lemmas} First we recall briefly a basic well-known lemma concerning the relationship between $\rv_{\lambda}$ and $\res_{\lambda}$, where $\res_{\lambda}\colon \mathscr{O}_K \rightarrow \mathscr{O}_K / B_{<\lambda}(0)$ is the canonical ring homomorphism onto the residue ring $\mathscr{O}_K / B_{<\lambda}(0)$. \begin{lem} \label{lem.rv.res} For all $a,a' \in \mathscr{O}_K$ and all $\lambda \leq 1$ in $\Gamma^{\times}_K$, \[ \rv_{\lambda}(a)=\rv_{\lambda}(a') \implies \res_{\lambda}(a)=\res_{\lambda}(a'). \] \end{lem} \begin{proof} Recall that $\rv_{\lambda}(a)=\rv_{\lambda}(a')$ if and only if $|a-a'| < \lambda |a|$. Suppose that $a,a' \in \mathscr{O}_K$ and $\rv_{\lambda}(a)=\rv_{\lambda}(a')$. As $|a| \leq 1$, we have that $|a-a'| < \lambda$. Therefore $a-a' \in B_{<\lambda}(0)$ so $\res_{\lambda}(a-a') = 0$. Hence, as $\res_{\lambda}$ is a ring homomorphism, we have $\res_{\lambda}(a) = \res_{\lambda}(a')$. \end{proof} \begin{lem} \label{lem.finite} Suppose $K$ is a valued field of characteristic 0. Let $A \subset K$ be a finite subset of $K$. \begin{itemize} \item In case $K$ has finite residue characteristic $p$, let $\ell$ be any natural number such that $\#A < p^{\ell}$ and set $\lambda := |p^\ell|$; \item Otherwise, when $K$ has residue characteristic $0$, set $\lambda :=1$. \end{itemize} In either case, for any $a,a' \in A$, \[a=a' \iff \{\rv_{\lambda}(a-v) : v \in A\} = \{\rv_{\lambda}(a'-v) : v \in A\}.\] \end{lem} \begin{proof}\renewcommand{\qedsymbol}{$\square$ (Lemma \ref{lem.finite})} As in the statement of the lemma, suppose $K$ is of characteristic $0$ and let $A \subset K$ be the given finite subset of $K$. In case $A$ is empty or a singleton, the lemma is trivial. So we continue under the assumption that $\#A \geq 2$. 
To each $a \in A$, associate the finite subset $D_a \subseteq \RV_{\lambda}$ defined by \[ D_a := \{\rv_{\lambda}(a-v) : v \in A\}, \] for $\lambda$ as in the statement of the lemma. Suppose, towards a contradiction to the lemma, that there are $a_1 \neq a_2$ in $A$ such that $D_{a_1} = D_{a_2}$. Without loss of generality, we assume that $a_1-a_2 = 1$: since $\rv_{\lambda}$ is multiplicative, dividing every element of $A$ by $a_1-a_2$ reduces us to this case. With this assumption we have that \[ \rv_{\lambda}(a_1 - a_2) = \res_{\lambda}(a_1 - a_2) = \res_{\lambda}(1) = 1. \] \begin{claim*} Assuming the above, in particular that $a_1 \neq a_2$ but $D_{a_1}=D_{a_2}$, we find, for every natural number $i \ge 3$, an element $a_i$ in $A$ satisfying \begin{enumerate} \item $|a_i-a_1| \leq 1$ and \item $\res_{\lambda}(a_1-a_i)=\res_{\lambda}(i-1)$. \end{enumerate} \end{claim*} (Note that the condition already holds for $i = 1, 2$.) \begin{proof}[Proof of the claim]\renewcommand{\qedsymbol}{$\square$ (Claim)} Assume inductively that for some $i \in \mathbb{N}$ we have found such an $a_i \in A$. By the definition of $D_{a_1}$ and the assumption that $D_{a_1} = D_{a_2}$, we have that \[ \rv_{\lambda}(a_1 - a_i) \in D_{a_1} = D_{a_2}. \] Therefore there exists some $a_{i+1} \in A$ such that $\rv_{\lambda}(a_2-a_{i+1}) = \rv_{\lambda}(a_1-a_{i})$. First note that this implies $|a_2-a_{i+1}|=|a_1-a_{i}| \leq 1$; the last inequality by property (1) of the claim for $a_i$. Then, applying the ultrametric inequality, we calculate \[ |a_1-a_{i+1}| = |a_1 - a_2 + a_2 - a_{i+1}| \leq \max\{|a_1 - a_2|,|a_2 - a_{i+1}|\} \leq 1. \] This establishes that $a_{i+1}$ satisfies part (1) of the claim. Now by Lemma \ref{lem.rv.res}, the fact that $\rv_{\lambda}(a_2-a_{i+1}) = \rv_{\lambda}(a_1-a_{i})$ implies that $\res_{\lambda}(a_2-a_{i+1}) = \res_{\lambda}(a_1-a_{i})$. 
By the inductive assumption on $a_i$ we have $\res_{\lambda}(a_1-a_{i}) = i-1$, so $\res_{\lambda}(a_2-a_{i+1}) = i-1$. Now as $\res_{\lambda}(a_1-a_2) = 1$, we have \[ \res_{\lambda}(a_1-a_{i+1})=\res_{\lambda}(a_1-a_2+a_2-a_{i+1})=\res_{\lambda}(a_1-a_2)+\res_{\lambda}(a_2-a_{i+1}) = i. \] This establishes that $a_{i+1}$ also satisfies part (2) of the claim. Hence, by induction, we have proved the claim. \end{proof} To finish the proof, it remains to verify that, for some $N > \# A$, the elements $\res_\lambda(0)$,~\dots,~$\res_\lambda(N-1)$ are all distinct. Since then, part (2) of the claim implies that $a_1, \dots, a_N$ are distinct, too, contradicting $N > \# A$; the contradiction going back to the assumption that $D_{a_1}=D_{a_2}$ while $a_1 \neq a_2$. If $K$ is of residue characteristic 0 then we have set $\lambda = 1$, and clearly the elements of $\mathbb{N} \subseteq K$ have different residues in the residue field. In case $K$ has finite residue characteristic $p$, recall that we chose $\ell$ so that $\#A < p^{\ell} =: N$ and that we have set $\lambda := |p^\ell|$. In particular, we have $B_{<\lambda}(0) \subseteq p^{\ell} \mathscr{O}_K$, so $0,1,\dots,p^{\ell}-1$ have different residues in the residue ring $\mathscr{O}_K / B_{<\lambda}(0)$. This finishes the proof of the lemma. \end{proof} \begin{defn}\label{defn.pcl} Suppose that $K \subseteq M$ are valued fields. An element $\alpha \in M$ is called a \emph{pcl} over $K$ (which stands for ``pseudo Cauchy limit'') if for all $a \in K$ there exists an $a' \in K$ such that $|\alpha - a'| < |\alpha - a|$. \end{defn} Note that if $\alpha$ is a pcl over $K$, then we can find an $a' \in K$ with $|\alpha - a'| < |\alpha - 0|$ and hence $\rv(\alpha) = \rv(a')$. In particular, we have $\rv(\alpha) \in \RV_K$. 
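As a concrete sanity check of Lemma~\ref{lem.rv.res} (not part of the formal development), one can specialise to $K = \mathbb{Q}_p$ with the multiplicative valuation $|x| = p^{-v_p(x)}$ and $\lambda = |p^\ell|$. Here the value group is discrete, so on the integers $B_{<\lambda}(0) \cap \mathbb{Z} = p^{\ell+1}\mathbb{Z}$ and $\res_\lambda$ restricts to reduction modulo $p^{\ell+1}$; this identification is special to the discretely valued case. The following script verifies the implication exhaustively on small integers:

```python
def vp(n, p):
    """p-adic valuation v_p(n) of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def rv_equal(a, b, p, l):
    """rv_lambda(a) == rv_lambda(b) for lambda = |p^l|, i.e. |a - b| < lambda*|a|."""
    if a == b:
        return True
    if a == 0 or b == 0:
        return False  # rv(0) = 0 equals no nonzero leading term
    return vp(a - b, p) > l + vp(a, p)

def res_equal(a, b, p, l):
    """Equality in the residue ring O/B_{<lambda}(0); for Z_p this is Z/p^(l+1)."""
    return (a - b) % p ** (l + 1) == 0

p, l = 3, 2
# Lemma: rv_lambda(a) = rv_lambda(b) implies res_lambda(a) = res_lambda(b).
for a in range(1, 200):
    for b in range(1, 200):
        if rv_equal(a, b, p, l):
            assert res_equal(a, b, p, l)
# The converse fails: 3 and 30 agree modulo 27 but have distinct leading terms.
assert res_equal(3, 30, p, l) and not rv_equal(3, 30, p, l)
```

The helpers assume nonzero integer inputs; for genuine elements of $\mathbb{Q}_p$ one would instead work with truncated $p$-adic expansions.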
\begin{lem}\label{lem.emptydef} Assume $K$ is an expansion of a valued field of characteristic 0 and either: \begin{itemize} \item $K$ has residue characteristic 0 and $\Th(K)$ is $0$-h-minimal; or \item $K$ is finitely ramified with finite residue characteristic and $\Th(K)$ is $0$-h$^{\mathrm{mix}}$-minimal. \end{itemize} For $K$ of residue characteristic 0, set $\lambda := 1$; in case $K$ has finite residue characteristic $p$, let $\ell$ be any non-negative integer and set $\lambda := |p^\ell|$. Suppose that \begin{itemize} \item $M$ is an elementary extension of $K$, \item $\alpha \in M$ is a pcl over $K$, and \item $(W_x)_{x\in M}$ is a $\emptyset$-definable family of subsets of $\RV_{\lambda,M}^n$ (for some $n$). \end{itemize} Then there exists some $a \in K$ such that $W_\alpha = W_a$. In particular, if the language contains constants for all elements of $K$, then $W_\alpha$ is $\emptyset$-definable. \end{lem} \begin{proof}\renewcommand{\qedsymbol}{$\square$ (Lemma \ref{lem.emptydef})} First assume that $K$ is of equi-characteristic 0 and $\Th(K)=\Th(M)$ is $0$-h-minimal. By applying Lemma~\ref{lem.hmin.equi} in the valued field $M$ with the parameter set $A = \emptyset$, we obtain a finite $\emptyset$-definable subset $C$ of $M$ such that the fiber $W_x$ (for $x \in M$) only depends on the tuple $(\rv(x -c))_{c \in C}$. As $C$ is finite and $\emptyset$-definable, and as $K$ is an elementary submodel of $M$, we have $C \subset K$. As $C \subset K$ and $\alpha$ is a pcl over $K$, for every $c \in C$ there exists $a' \in K$ such that $|\alpha - a'| < |\alpha - c|$. Hence, as $C$ is finite, there exists some $a \in K$ with $|\alpha - a| < \min_{c \in C}|\alpha - c|$, and for such an $a$ we have $\rv(a - c) = \rv(\alpha - c)$ for all $c \in C$. Hence, by the defining property of $C$ given in Lemma \ref{lem.hmin.equi}, we have $W_a = W_\alpha$, as required. Suppose now that $K$ has finite residue characteristic $p$ and $\Th(K)=\Th(M)$ is 0-h$^{\mathrm{mix}}$-minimal. 
Let $\lambda := |p^\ell|$ for some integer $\ell \geq 0$ as in the statement of the lemma. Now apply Lemma~\ref{lem.hmin.mix} in the valued field $M$ to the $\emptyset$-definable family $W_x$. This provides a finite $\emptyset$-definable set $C \subset M$ and an integer $m \geq 1$ such that the fiber $W_x$ (where $x \in M$) only depends on the tuple $(\rv_{|m|}(x -c))_{c \in C}$; fix such a $C$ and $m$. Again, as $C$ is finite and $\emptyset$-definable, and as $K$ is an elementary submodel of $M$, we have $C \subset K$. Take any $c \in C$. As $\alpha$ is a pcl over $K$, for any positive integer $s$, there is a finite sequence $u_1,\dots,u_s$ of elements of $K$ such that $|\alpha - u_s| < \dots < |\alpha - u_1| < |\alpha - c|$. Now since $M$ is finitely ramified, by taking $s$ to be a large enough positive integer and $\nu$ such that $|p^\nu|\le|m|$, we obtain that $|\alpha - u_s| < |p^\nu|\cdot|\alpha - c| \le |m|\cdot|\alpha - c|$. This implies that $\rv_{|m|}(\alpha - c) = \rv_{|m|}(u_s - c)$. Since this works for each of the finitely many $c$ in $C$, we can find an $a \in K$ such that $\rv_{|m|}(\alpha - c) = \rv_{|m|}(a - c)$ for all $c \in C$. It then follows from the defining property of $C$ given by Lemma \ref{lem.hmin.mix} that $W_a = W_\alpha$, as required. \end{proof} \section{Spherically complete models} In this section, we prove the main result, Theorem~\ref{t.sphc.ex}. Throughout this section, we fix the valued field $K$ and work in an elementary extension $M \succ K$ (which we will at some point assume to be sufficiently saturated). Note that a field extension $L$ of $K$ is an immediate extension if and only if $\RV_{L} = \RV_K$. We will deduce Theorem~\ref{t.sphc.ex} from the following proposition. \begin{prop}\label{p.sphc.ex} Suppose that $K \prec M$ are as in Theorem~\ref{t.sphc.ex} (of equi-characteristic $0$ and $0$-h-minimal, or of mixed characteristic, finitely ramified, and $0$-h$^{\mathrm{mix}}$-minimal). 
Let $\alpha \in M$ be a pcl over $K$ (see Definition~\ref{defn.pcl}) and set $L := \acl_{\VF}(K, \alpha)$. Then we have $L \succ K$ and $\RV_{L} = \RV_K$. \end{prop} Here and in the following, $\acl_{\VF}$ means the field-sort part of the (relative model theoretic) algebraic closure inside the fixed extension $M$; we also use $\dcl_{\VF}$ in a similar way. \begin{proof}[Proof of Proposition~\ref{p.sphc.ex}]\renewcommand{\qedsymbol}{$\square$ (Proposition \ref{p.sphc.ex})} In the following, we work in the language with all elements from $K$ added as constants, so $\dcl_{\VF}(\emptyset) = K$ and $L = \acl_{\VF}(\alpha)$. Note that adding constants from the valued field to the language preserves $0$-h-minimality and $0$-h$^{\mathrm{mix}}$-minimality, by \cite[Lemma~4.1.19]{CHR-hmin} and \cite[Lemma~2.3.1]{CHRV-hmin2}. (Alternatively, since we claimed that we only use Lemmas~\ref{lem.hmin.equi} and \ref{lem.hmin.mix}, note that the statements of these lemmas permit adding constants.) We will prove the following (in this order): \begin{claim} $L = \dcl_{\VF}(\alpha)$. \end{claim} \begin{claim} Suppose that $\mu \in \Gamma_K$ is either equal to $1$, or, if the residue characteristic of $K$ is $p$, $\mu=|p^\nu|$ for some $\nu \in \mathbb{N}$. Then every subset of $(\RV_{\mu,M})^n$ (for any $n$) that is definable with parameters from $L$ is $\emptyset$-definable. \end{claim} \begin{claim} $L \prec M$. \end{claim} Then Claim 2 implies that $\RV_{L} = \RV_K$, since any element $\xi \in \RV_{L} \setminus \RV_K$ would provide a subset $\{\xi\}$ of $\RV_{1,M}$ that is definable with parameters from $L$ but is not $\emptyset$-definable. Claim 3 implies that $K \prec L$. Thus upon proving the claims above, we will be done. \begin{proof}[Proof of Claim 1]\renewcommand{\qedsymbol}{$\square$ (Claim 1)} Suppose that $\beta \in L$ and that the algebraicity of $\beta$ over $\{\alpha\}$ is witnessed by the formula $\phi(\alpha, y)$. 
Let $A$ be the finite set $\phi(\alpha,M)$, which is a subset of $L$ by the choice of $L$. Depending on the residue characteristic of $K$, fix $\lambda$ as in the statement of Lemma \ref{lem.finite} (applied to this set $A$). Given any $b \in A$, we consider the (finite) set of ``$\RV_{\lambda}$ differences'' \[ D_b := \{\rv_\lambda(b - y) : y \in \phi(\alpha, M)\}. \] As $\phi(\alpha,M) \subseteq L$, each $D_b$ is a (finite) subset of $\RV_{\lambda,L}$. By Lemma \ref{lem.finite}, different $b$ satisfying $\phi(\alpha, b)$ yield different finite sets $D_b \subseteq \RV_{\lambda,L}$. So $\beta$ can be defined by a formula (a priori over $L$) stating that \[ \phi(\alpha, y) \,\,\,\wedge\,\,\, D_y = D_\beta, \] which uses the (finitely many) elements of $D_\beta \subseteq \RV_{\lambda,L}$ as additional parameters. To prove Claim 1, it therefore suffices to check that $D_\beta \subseteq \RV_{\lambda,K}$ after all; from which it follows that $\beta$ can be defined over $K \cup \{\alpha\}$. \begin{claim*} $D_\beta \subseteq \RV_{\lambda,K}$. \end{claim*} Now, with $x$ running over $M$, consider the $\emptyset$-definable family of sets \[ W_x := \{\rv_{\lambda}(y - y') : y \in \phi(x, M) \wedge y' \in \phi(x, M) \}. \] Each such $W_x$ is a subset of $\RV_{\lambda,M}$. Note that $D_\beta \subseteq W_\alpha$, which is clear from their respective definitions. Now we apply Lemma \ref{lem.emptydef} to the $\emptyset$-definable family $W_x$ to obtain that $W_\alpha$ is $\emptyset$-definable. As $\acl_{\VF}(\emptyset) = K$ (and $W_\alpha$ is finite), we therefore have that $W_\alpha \subseteq \RV_{\lambda,K}$. So in particular $D_\beta \subseteq \RV_{\lambda,K}$, which establishes the claim. \end{proof} \begin{proof}[Proof of Claim 2]\renewcommand{\qedsymbol}{$\square$ (Claim 2)} By Claim 1, any set definable over $L$ is in fact $\{\alpha\}$-definable. 
So take any subset $W_\alpha$ of $(\RV_{\mu,M})^n$ that is definable over $L$ and let $\psi(\alpha,y)$ be a definition for it, where $\psi(x,y)$ is a formula over $\emptyset$. Now consider the $\emptyset$-definable family of sets $W_x := \psi(x,M)$ in $M$. Applying Lemma~\ref{lem.emptydef} to this family yields that, for some $a \in K$, we have $W_\alpha=W_a$; hence $W_\alpha$ is $\emptyset$-definable, as required. \end{proof} \begin{proof}[Proof of Claim 3]\renewcommand{\qedsymbol}{$\square$ (Claim 3)} We need to verify that every non-empty subset $Y \subseteq M$ that is definable with parameters from $L$ already contains a point of $L$. From the $0$-h-minimality assumption, we obtain that there exists a finite $L$-definable set $C = \{c_1, \dots, c_r\} \subset M$ such that $Y$ is a union of fibers of the map \[ \begin{array}{rcl} \rho\colon M &\to& (\RV_{\mu,M})^r,\\ y &\mapsto& (\rv_{\mu}(y - c_i))_{i\leq r} \end{array} \] for a suitable $\mu$. Indeed, if $K$ is of equi-characteristic 0 and $\Th(K)=\Th(M)$ is 0-h-minimal, then Lemma~\ref{lem.hmin.equi} (applied to $Y$ considered as a subset of $M \times \RV^0_M$) provides such a $C$ with $\mu = 1$. If $K$ is of mixed characteristic and $\Th(K) = \Th(M)$ is $0$-h$^{\mathrm{mix}}$-minimal, then we use Lemma~\ref{lem.hmin.mix} instead, which yields the analogous statement with $\mu = |p^\nu|$ for some integer $\nu \ge 1$. Fix $C$ and $\mu$ for the rest of the proof; also fix an enumeration of $C$ and the corresponding map $\rho$ as above. Since $L= \acl_{\VF}(L)$, we have $C \subseteq L$, so $\rho$ is $L$-definable, and so is the image $\rho(Y) \subseteq (\RV_{\mu,M})^r$ of $Y$. Hence by Claim 2, that image is $\emptyset$-definable. By assumption $Y \ne \emptyset$, so we also have $\rho(Y) \ne \emptyset$; as $K$ is an elementary substructure of $M$, $\rho(Y) \cap (\RV_{\mu,K})^r$ is non-empty, too. Choose any $\xi = (\xi_1, \dots, \xi_r) \in \rho(Y) \cap (\RV_{\mu,K})^r$ in this intersection. 
Since $Y$ is a union of fibers of $\rho$, to show that $L \cap Y$ is non-empty, it suffices to prove that the preimage of $\xi$ in $M$, \[ \rho^{-1}(\xi) = \bigcap_{i \le r} (c_i + \rv_{\mu}^{-1}(\xi_i)), \] has non-empty intersection with $L$. Each of the finitely many sets $B_i := c_i + \rv_{\mu}^{-1}(\xi_i)$ is a ball and the intersection of all of them is non-empty, since $\xi \in \rho(Y) \subseteq \rho(M)$. By the ultrametric inequality, this intersection is equal to one of those balls, say $B_j$. Since $\xi_j \in \RV_{\mu,K}$, there exists $a \in \rv_{\mu}^{-1}(\xi_j) \cap K$. Thus we obtain the desired element $c_j + a \in L \cap B_j = L \cap \rho^{-1}(\xi) \subseteq L \cap Y$. \end{proof} As explained after the statements of the claims, the proposition follows. \end{proof} We now conclude with the proof of Theorem~\ref{t.sphc.ex}. \begin{proof}[Proof of Theorem~\ref{t.sphc.ex}]\renewcommand{\qedsymbol}{$\square$ (Theorem~\ref{t.sphc.ex})} Fix some $(\#K)^+$-saturated elementary extension $M \succ K$. By Zorn's Lemma, there exist maximal elementary extensions $N \succ K$ satisfying $N \prec M$ and such that $\RV_{N} = \RV_K$. Fix $N$ to be one of them. The rest of the proof consists in showing that such an $N$ is spherically complete (using Proposition~\ref{p.sphc.ex}). Suppose, towards a contradiction, that $(B_i)_{i \in I}$ is a nested family of closed balls in $N$ such that $\bigcap_{i \in I} B_i = \emptyset$. Being nested, this family of balls defines a partial type over $N$. Without loss, we assume that all the $B_i$ have different radii, so there are at most as many balls as the cardinality of the value group of $N$. As $N$ is an immediate extension of $K$, the value group of $N$ is the same as the value group of $K$. Therefore $M$ is sufficiently saturated to contain a realisation $\alpha \in M$ of this partial type. 
This $\alpha$ is a pcl over $N$, since for every $a \in N$ there exists an $i \in I$ with $a \notin B_i$; hence for any $a' \in B_i$, we have $|\alpha - a'| < |\alpha - a|$. Now Proposition~\ref{p.sphc.ex} (applied to $N \prec M$) implies that $L:=\acl_{\VF}(N,\alpha)$ is a proper elementary extension of $N$ in $M$ satisfying $\RV_{L} =\RV_{N} = \RV_K$, contradicting the maximality of $N$. \end{proof} \section{Open questions} We have no clue whether our result holds much more generally. In particular: \begin{qu} Note that Theorem~\ref{t.sphc.ex} applies to power-bounded $T$-convex valued fields $K$ (since those are $0$-h-minimal), but does the result also hold for non-power-bounded $T$-convex valued fields $K$ (which are not $0$-h-minimal)? Is such a $K$ even definably spherically complete? \end{qu} \begin{qu} We answer Question~\ref{Q} affirmatively assuming $0$-h-minimality, but is this assumption necessary at all? \begin{enumerate} \item Indeed, could it be that any definably spherically complete valued field $K$ (in any language) has a spherically complete elementary extension $L \succ K$? \item Perhaps even an immediate one? \end{enumerate} \end{qu} We would rather guess that the answer to the latter question is no, but we do not have a counter-example.
\section{Introduction} The recent discovery of ferromagnetic order in two-dimensional (2D) CrI$_3$ \cite{Huang2017} has initiated a vast interest in 2D magnetism \cite{Gibertini2019, Gong2019, Jiang2021}. Several other materials have subsequently been demonstrated to preserve magnetic order in the monolayer limit when exfoliated from magnetic van der Waals bonded compounds, and the family of 2D magnets is steadily growing. A crucial requirement for magnetic order to persist in the 2D limit is the presence of magnetic anisotropy that breaks the spin rotational symmetry that would otherwise render magnetic order at finite temperatures impossible by the Mermin-Wagner theorem \cite{Mermin1966}. This is exemplified by the cases of 2D CrBr$_3$ \cite{Zhang2019,Sun2021} and CrCl$_3$ \cite{Cai2019, Bedoya-Pinto2021}, which are isostructural to CrI$_3$: while the former remains ferromagnetic in the atomic limit due to easy-axis anisotropy (like CrI$_3$), the latter has a weak easy-plane anisotropy that forbids proper long-range order. Other materials with persisting ferromagnetic order in the 2D limit include the metallic compounds Fe$_{3/4/5}$GeTe$_2$ \cite{Fei2018,Seo2020,Li2020} and the anisotropic insulator CrSBr \cite{Lee2021}, which has an easy-axis aligned with the atomic plane. Finally, FePS$_3$ \cite{Lee2016} and MnPS$_3$ \cite{Long2020} constitute examples of in-plane anti-ferromagnets that preserve magnetic order in the monolayer limit due to easy-axis anisotropy, whereas the magnetic order is lost in monolayers of the isostructural easy-plane compound NiPS$_3$ \cite{Kim2019}. The 2D materials mentioned above all constitute examples of rather simple collinear magnets. However, the ground state of three-dimensional magnetic materials often exhibits complicated non-collinear order that gives rise to a range of interesting properties \cite{Qin2020}. 
Such materials are so far largely lacking from the field of 2D magnetism, and the discovery of new non-collinear 2D magnets would greatly enhance the possibilities of constructing versatile magnetic materials using 2D magnets as building blocks \cite{Sierra2021}. The ground state of the classical isotropic Heisenberg model can be shown to be a planar spin spiral characterised by a propagation vector $\mathbf{Q}$ \cite{Kaplan1959}, and such spin configurations thus comprise a broad class of states that generalise the concept of ferromagnetism and anti-ferromagnetism. In fact, spin spiral order is rather common in layered van der Waals bonded materials \cite{mcguire2017crystal}, and it is thus natural to investigate the corresponding monolayers for spin spiral ground state order. Moreover, for non-bipartite magnetic lattices the concept of anti-ferromagnetism is not unique. This is exemplified by the ubiquitous triangular lattice, where one may consider the cases of anti-aligned ferromagnetic stripes or 120$^\circ$ non-collinear order, which can be represented as spin spirals of $\mathbf{Q}=(1/2,0)$ and $\mathbf{Q}=(1/3,1/3)$ respectively \cite{Chernyshev2009,Maksimov2019}. The concept of spin spirals thus constitutes a general framework for specifying the magnetic order, which may or may not be commensurate with the crystal lattice. Finite spin spiral vectors typically break symmetries inherent to the crystal lattice and may thus induce physical properties that are predicted to be absent if one only considers the crystal symmetries. In particular, the spin spiral may yield a polar axis that leads to ferroelectric order \cite{Kimura2003}. 
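The triangular-lattice examples above can be made concrete numerically: for a planar spiral with wave vector $\mathbf{q}$, the classical energy per spin of the nearest-neighbour model $H=-\frac{1}{2}\sum_{ij}J\,\mathbf{S}_i\cdot\mathbf{S}_j$ is $E(\mathbf{q}) = -\tfrac{S^2}{2}\sum_{\boldsymbol\delta} J\cos(\mathbf{q}\cdot\boldsymbol\delta)$, with the sum running over the six nearest-neighbour vectors. The sketch below is purely illustrative and is not the computational workflow of this work; the choice of primitive vectors at $120^\circ$ (under which the K point has reduced coordinates $\mathbf{Q}=(1/3,1/3)$) and the values $J=-1$, $S=1$ are assumptions. For antiferromagnetic $J$ the minimum lands at the K point, i.e., the 120$^\circ$ state:

```python
import numpy as np

# Assumed convention: primitive vectors at 120 degrees, so that the
# K point of the Brillouin zone has reduced coordinates Q = (1/3, 1/3).
a1 = np.array([1.0, 0.0])
a2 = np.array([-0.5, np.sqrt(3.0) / 2.0])
neighbors = [a1, -a1, a2, -a2, a1 + a2, -(a1 + a2)]  # six nearest neighbours

J, S = -1.0, 1.0  # antiferromagnetic exchange, classical spins of unit length

A = np.array([a1, a2])
B = 2.0 * np.pi * np.linalg.inv(A).T  # rows b1, b2 with b_i . a_j = 2 pi delta_ij

def energy_per_spin(q_reduced):
    """Classical spiral energy E(q) = -(S^2/2) sum_d J cos(q . d)."""
    q = np.asarray(q_reduced) @ B
    return -0.5 * S**2 * J * sum(np.cos(q @ d) for d in neighbors)

# Brute-force scan of the Brillouin zone in reduced coordinates.
grid = np.linspace(0.0, 1.0, 151)  # includes 1/3 = 50/150 exactly
qs = [(x, y) for x in grid for y in grid]
energies = [energy_per_spin(q) for q in qs]
q_min = qs[int(np.argmin(energies))]
# The minimum, E = -3/2 per spin, is attained at the K point (1/3, 1/3)
# (up to symmetry-equivalent copies such as (2/3, 2/3)).
```

Setting $J>0$ instead moves the minimum to $\Gamma$, i.e., the ferromagnetic state, which is the sense in which spin spirals interpolate between ferro- and anti-ferromagnetism.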
Such materials are referred to as type II multiferroics and examples include MnWO$_4$ \cite{PhysRevLett.97.097203}, CoCr$_2$O$_4$ \cite{PhysRevLett.96.207204}, LiCu$_2$O$_2$ \cite{PhysRevLett.98.057601} and LiCuVO$_4$ \cite{doi:10.1143/JPSJ.76.023708} as well as the triangular magnets CuFeO$_2$ \cite{PhysRevB.73.220401}, CuCrO$_2$ \cite{PhysRevB.73.220401}, AgCrO$_2$ \cite{PhysRevLett.101.067204} and MnI$_2$ \cite{PhysRevLett.106.167206}. In addition to these materials, 2D NiI$_2$ has recently been shown to host a spin spiral ground state that induces a spontaneous polarization \cite{Song2022}; 2D NiI$_2$ thus comprises the first example of a 2D type II multiferroic. The prediction of new materials with certain desired properties can be vastly accelerated by first principles simulations. In general, the search for materials with spin spiral ground states is complicated by the fact that the magnetic order requires large super cells in the simulations. However, if one neglects spin-orbit coupling, spin spirals of arbitrary wavevectors can be represented in the chemical unit cell by utilising the generalized Bloch theorem that encodes the spiral in the boundary conditions \cite{Sandratskii_1986, Sandratskii1991}. This method has been applied in conjunction with density functional theory (DFT) to a wide range of materials and typically produces results that are in good agreement with experiments \cite{Bylander1998,Kurz2001,Sandratskii2007,Zimmermann2019,Gutzeit2022}. In the present work we use DFT simulations in the framework of the generalized Bloch theorem to investigate the magnetic ground state of monolayers derived from layered van der Waals magnets. We then calculate the preferred orientation of the spiral plane by adding a single component of the spin-orbit coupling in the normal direction of various trial spiral planes. 
This yields a complete classification of the magnetic ground state for these materials under the assumption that higher order spin interactions can be neglected. On the other hand, the effect of higher order spin interactions can be quantified by deviations between spin spiral energies in the primitive unit cell and a minimal super cell. The results for all compounds are discussed and compared with existing knowledge from experiments on the parent bulk materials. Finally, we analyse the spontaneous polarization in all cases where an incommensurate ordering vector is predicted. The paper is organised as follows. In Sec. \ref{sec:theory} we summarise the theory used to obtain spin spiral ground states based on the generalized Bloch theorem and briefly outline the implementation. In Sec. \ref{sec:results} we present the results and summarise the magnetic ground states of all the investigated materials. Sec. \ref{sec:conclusion} provides a conclusion and outlook. \begin{figure*}[tb] \centering \subfloat[]{\parbox[b][3cm][t]{.8\textwidth}{\includegraphics[page=37, width=0.8\textwidth]{Tikz_figures.pdf}}}\hfill \subfloat[]{\parbox[b][3cm][t]{.15\textwidth}{\includegraphics[width=0.15\textwidth]{bz_plot.pdf}}} \caption{(a) Examples of magnetic structures in the triangular lattice. The $\mathbf{Q}=(1/3,1/3)$ (corresponding to the high symmetry point K) is the classical ground state in the isotropic Heisenberg model with nearest neighbour antiferromagnetic exchange and is degenerate with $\mathbf{Q}=(-1/3,-1/3)$. The stripy antiferromagnetic $\mathbf{Q}=(1/2,0)$ (corresponding to the high symmetry point M) is only found for CoI$_2$ in the present study and is degenerate with $\mathbf{Q}=(0,1/2)$ and $\mathbf{Q}=(1/2,1/2)$. The incommensurate spiral with $\mathbf{Q}=(0.14, 0.14)$ corresponds to the prediction of NiI$_2$ in the present work. 
The rectangular cell with $\mathbf{Q}=(0,1/2)$ is a bicollinear antiferromagnet that corresponds to superpositions of (0, $\pm$1/4) states in the primitive cell. (b) Brillouin zone of the hexagonal (blue) and rectangular (orange) unit cell. The high symmetry band paths used to sample the spiral ordering vectors are shown in black.} \label{fig:spirals} \end{figure*} \section{Theory}\label{sec:theory} \subsection{Generalized Bloch Theorem} The Heisenberg model plays a prominent role in the theory of magnetism and typically gives an accurate account of the fundamental magnetic excitations as well as the thermodynamic properties of a given material. In the isotropic case it can be written as \begin{align}\label{eq:heisenberg} H=-\frac{1}{2}\sum_{ij}J_{ij}\mathbf{S}_i\cdot\mathbf{S}_j, \end{align} where $\mathbf{S}_i$ is the spin operator for site $i$ and $J_{ij}$ is the exchange coupling between sites $i$ and $j$. In a classical treatment, the spin operators are replaced by vectors of fixed magnitude and it can be shown that the classical energy is minimised by a planar spin spiral \cite{Kaplan1959}. Such a spin configuration is characterised by a wave vector $\mathbf{Q}$, which is determined by the set of exchange parameters $J_{ij}$. The spin at site $i$ is rotated by an angle $\mathbf{Q}\cdot\mathbf{R}_i$ with respect to the origin and the wave vector may or may not be commensurate with the lattice. In a first principles framework it is thus natural to search for planar spin spiral ground states that give rise to periodically modulated magnetisation densities satisfying \begin{align}\label{eq:mag} \mathbf{m}_\mathbf{q}(\mathbf{r}+\mathbf{R}_i)=\mathrm{U}_{\mathbf{q},\mathbf{R}_i}\mathbf{m}_\mathbf{q}(\mathbf{r}). \end{align} Here $\mathbf{R}_i$ is a lattice vector (of the chemical unit cell) and $\mathrm{U}_{\mathbf{q},\mathbf{R}_i}$ is a rotation matrix that rotates the magnetisation by an angle $\mathbf{q}\cdot\mathbf{R}_i$ around the normal of the spiral plane.
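This rotational boundary condition can be checked numerically in a few lines: acting on a two-component spinor with the spin-$1/2$ rotation $e^{-i\phi\sigma_z/2}$ rotates the magnetisation $\mathbf{m}=\langle\psi|\boldsymbol{\sigma}|\psi\rangle$ by $\phi=\mathbf{q}\cdot\mathbf{R}_i$ about the spiral normal (a standalone NumPy sketch, independent of any electronic structure code):

```python
import numpy as np

# Pauli matrices.
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], complex)]

def magnetisation(psi):
    """m = <psi|sigma|psi> for a two-component spinor."""
    return np.array([(psi.conj() @ s @ psi).real for s in sigma])

phi = 0.7                            # the rotation angle q . R_i
U = np.diag([np.exp(-1j * phi / 2),  # exp(-i phi sigma_z / 2)
             np.exp(1j * phi / 2)])
psi = np.array([0.8, 0.6], complex)  # spinor with in-plane and z components

Rz = np.array([[np.cos(phi), -np.sin(phi), 0],
               [np.sin(phi),  np.cos(phi), 0],
               [0,            0,           1]])
# Rotating the spinor rotates m by phi around the z-axis (the spiral normal).
assert np.allclose(magnetisation(U @ psi), Rz @ magnetisation(psi))
```

Combining such a spin rotation with a lattice translation therefore leaves a spiral magnetisation density invariant, which is the symmetry exploited below.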
In the absence of spin-orbit coupling we are free to perform a global rotation of the magnetisation density and we will fix the spiral plane to the $xy$-plane from hereon. In the framework of DFT, the magnetisation density \eqref{eq:mag} gives rise to an exchange-correlation magnetic field satisfying the same symmetry under translation. If spin-orbit coupling is neglected, the Kohn-Sham Hamiltonian thus commutes with the combined action of translation (by a lattice vector) and a rotation of spinors by the angle $\mathbf{q}\cdot\mathbf{R}_i$. This implies that the Kohn-Sham eigenstates can be written as \begin{equation}\label{eq:GBT} \psi_{\mathbf{q},\mathbf{k}}(\mathbf{r})=e^{i\mathbf{k}\cdot\mathbf{r}} U_\mathbf{q}^\dag(\mathbf{r}) \begin{pmatrix} u^{\uparrow}_{\mathbf{q},\mathbf{k}}(\mathbf{r})\\ u^{\downarrow}_{\mathbf{q},\mathbf{k}}(\mathbf{r}) \end{pmatrix} \end{equation} where $u^{\uparrow}_{\mathbf{q},\mathbf{k}}(\mathbf{r})$ and $u^{\downarrow}_{\mathbf{q},\mathbf{k}}(\mathbf{r})$ are periodic in the chemical unit cell and the spin rotation matrix is given by \begin{align}\label{eq:U} U_\mathbf{q}(\mathbf{r})= \begin{pmatrix} e^{i\mathbf{q}\cdot\mathbf{r}/2} & 0\\ 0 & e^{-i\mathbf{q}\cdot\mathbf{r}/2} \end{pmatrix}. \end{align} This is known as the generalized Bloch theorem (GBT) and the Kohn-Sham equations can then be written as \begin{align} H_\mathbf{q,k}^\mathrm{KS}u_\mathbf{q,k}=\varepsilon_\mathbf{q,k}u_\mathbf{q,k}, \end{align} where the generalized Bloch Hamiltonian \begin{align} H_\mathbf{q,k}^\mathrm{KS}&=e^{-i\mathbf{k}\cdot\mathbf{r}} U_\mathbf{q}(\mathbf{r})H^\mathrm{KS}U_\mathbf{q}^\dag(\mathbf{r})e^{i\mathbf{k}\cdot\mathbf{r}} \end{align} is periodic in the unit cell.
Here $\mathbf{k}$ is the crystal momentum, $\mathbf{q}$ is the spiral wave vector and $H^\mathrm{KS}$ is the Kohn-Sham Hamiltonian, which couples to the spin degrees of freedom through the exchange-correlation magnetic field. In the present work, we will not consider constraints besides the boundary conditions defined by Eq. \eqref{eq:mag}. For a given $\mathbf{q}$ we can thus obtain a unique total energy $E_\mathbf{q}$ and the magnetic ordering vector is determined as the point where $E_\mathbf{q}$ has a minimum (denoted by $\mathbf{Q}$) when evaluated over the entire Brillouin zone. However, if the chemical unit cell contains more than one magnetic atom there may be different local extrema corresponding to different intracell alignments of magnetic moments. In order to ensure that the correct ground state is obtained, it is thus pertinent to perform a comparison between calculations that are initialised with different relative magnetic moments. As a simple example of this, one may consider a honeycomb lattice of magnetic atoms where the ferromagnetic and anti-ferromagnetic configurations both correspond to $\mathbf{q}=0$, but are distinguished by different intracell orderings of the local magnetic moments. We will discuss this in the context of CrI$_3$ in section \ref{sec:AB3}. We also note that the true magnetic ground state is not necessarily representable by the ansatz \eqref{eq:mag} and one is therefore not guaranteed to find the ground state by searching for spin spirals based on the minimal unit cell. In figure \ref{fig:spirals} we show four examples of possible magnetic ground states of the triangular lattice. Three of these correspond to spin spirals of the minimal unit cell while the fourth - a bicollinear antiferromagnet - requires a larger unit cell.
The bicollinear state may arise as a consequence of higher order exchange interactions, which tend to stabilize linear combinations of degenerate single-$q$ states. \subsection{Spin-orbit coupling}\label{sec:soc} In the presence of spin-orbit coupling, the spin spiral plane will have a preferred orientation and the magnetic ground state is thus characterised by a normal vector $\mathbf{\hat n}_0$ of the spiral plane as well as the spiral vector $\mathbf{Q}$. Spin-orbit coupling is, however, incompatible with the application of the GBT and has to be approximated in a post processing step when working with the spin spiral representation in the chemical unit cell. It can be shown that first order perturbation theory only involves contributions from the spin-orbit components orthogonal to the spiral plane \cite{Heide2009}, \begin{align} \langle\psi_{\mathbf{q},\mathbf{\hat n}}|\mathbf{L}\cdot\mathbf{S}|\psi_{\mathbf{q},\mathbf{\hat n}}\rangle=\langle\psi_{\mathbf{q},\mathbf{\hat n}}|(\mathbf{L}\cdot \mathbf{\hat n})(\mathbf{S}\cdot\mathbf{\hat n})|\psi_{\mathbf{q},\mathbf{\hat n}}\rangle, \end{align} and this term is thus expected to yield the most important contribution to the spin-orbit coupling. Since $(\mathbf{L}\cdot \mathbf{\hat n})(\mathbf{S}\cdot\mathbf{\hat n})$ commutes with a spin rotation around the axis $\mathbf{\hat n}$, the spin spiral wavefunctions remain eigenstates when such a term is included in $H^\mathrm{KS}$. This approach was proposed by Sandratskii \cite{sandratskii2017insight} and we will refer to it as the projected spin-orbit coupling (PSO). For the spin spiral calculations in the present work we include spin-orbit coupling non-selfconsistently by performing a full diagonalization of $H^\mathrm{KS}_{\mathbf{q},\mathbf{k}}$ including the PSO.
The magnetic ground state is then found by evaluating the total energy at all normal vectors $\mathbf{\hat n}$, which will yield $\mathbf{\hat n}_0$ as the normal vector that minimizes the energy. \subsection{Computational Details} The GBT has been implemented in the electronic structure software package GPAW \cite{Enkovaara2010}, which is based on the projector augmented wave method (PAW) and plane waves. The implementation uses a fully non-collinear treatment within the local spin density approximation where both the interstitial and atom-centered PAW regions are handled non-collinearly. Spin-orbit coupling is included non-selfconsistently \cite{Olsen2016a} as described in Section \ref{sec:soc}. The implementation is described in detail in Appendix \ref{sec:implementation} and benchmarked for fcc Fe in Appendix \ref{sec:benchmark}. We find good agreement with previous results from the literature and we also verify that results from spin spiral calculations within the GBT agree exactly with supercell calculations without spin-orbit coupling in the case of bilayer CoPt. Finally, we compare the results of the PSO approximation with full inclusion of spin-orbit coupling for both supercells and GBT spin spirals of the CoPt bilayer. We find exact agreement between the PSO in the supercell and the GBT spin spiral, and the approximation only deviates slightly from full spin-orbit coupling in the supercell calculations. All calculations have been carried out with a plane wave cutoff of 800 eV, a $k$-point density of 14 {\AA} and a Fermi smearing of 0.1 eV. The structures and initial magnetic moments are taken from the Computational 2D Materials Database (C2DB) \cite{Haastrup2018,Gjerding2021}. In order to find the value of $\mathbf{Q}$, which describes the ground state magnetic order, we calculate $E_\mathbf{q}$ along a representative path connecting high symmetry points in the Brillouin zone.
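As a minimal illustration of such a path scan, consider the classical nearest-neighbour Heisenberg energy on the triangular lattice (a toy model, not the DFT total energy, with reduced coordinates chosen such that K $= (1/3, 1/3)$): for antiferromagnetic $J_1$ the minimum along $\Gamma\rightarrow$K falls exactly at K, the 120{\degree} state of figure \ref{fig:spirals}:

```python
import numpy as np

# Classical spin spiral energy per site, e(q) = -S^2 J1 sum_d cos(2 pi q.d),
# for a triangular lattice with nearest-neighbour displacements (1,0), (0,1)
# and (1,1) in reduced coordinates (lattice vectors at 120 degrees).
NN = np.array([(1, 0), (0, 1), (1, 1)])

def spiral_energy(q, J1=-1.0, S=1.0):
    phases = 2 * np.pi * NN @ np.asarray(q, float)
    return -S**2 * J1 * np.cos(phases).sum()

# Sample the Gamma -> K line and locate the minimum.
path = [(x, x) for x in np.linspace(0.0, 1.0 / 3.0, 301)]
Q = min(path, key=spiral_energy)  # minimum at K = (1/3, 1/3) for J1 < 0
```

With ferromagnetic $J_1 > 0$ the same scan returns $\mathbf{Q}=\Gamma$; competing further-neighbour couplings shift the minimum to incommensurate values, which is the situation encountered for several of the halides below.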
While the true value of $\mathbf{Q}$ could be situated away from such high symmetry lines we deem this approach sufficient for the present study. \begin{table*}[tb] \begin{tabular}{l|l|l|l|l|l|l|l|l} & $\mathbf{Q}$ & $\mathrm{E}_{min}$ [meV] & $(\theta, \varphi)$ & Exp. IP order & BW [meV] & PSO BW [meV] & $m_\Gamma$ $[\mu_\mathrm{B}]$ & $\Delta\varepsilon_\mathbf{Q}$ [eV] \\ \hline TiBr$_2$ & (1/3, 1/3) & -78.12 & (90,90) & - & 78.1 & 0.6 & 1.5 & 0.0 \\ TiI$_2$ & (1/3, 1/3) & -44.33 & (90,90) & - & 44.3 & 1.0 & 1.9 & 0.0 \\ NiCl$_2$ & (0.06, 0.06) & -0.81 & (90,31) & FM $\parallel$ & 45.2 & 0.0 & 2.0 & 0.81 \\ NiBr$_2$ & (0.11, 0.11) & -8.62 & (44,0) & FM $\parallel$, HM & 50.7 & 0.3 & 2.0 & 0.62 \\ NiI$_2$ & (0.14, 0.14) & -28.48 & (64,0) & HM & 68.3 & 4.1 & 1.8 & 0.28 \\ VCl$_2$ & (1/3, 1/3) & -60.07 & (90,0) & $120\degree$ & 60.1 & 0.1 & 3.0 & 0.96 \\ VBr$_2$ & (1/3, 1/3) & -36.21 & (90,18) & $120\degree$ & 36.2 & 0.1 & 3.0 & 0.9 \\ VI$_2$ & (0.14, 0.14) & -4.43 & (6,0) & stripe & 9.8 & 0.7 & 3.0 & 0.96 \\ MnCl$_2$ & (1/3, 1/3) & -20.48 & (90,15) & stripe or HM & 20.5 & 0.0 & 5.0 & 1.92 \\ MnBr$_2$ & (1/3, 1/3) & -20.13 & (90,15) & stripe $\parallel$ & 20.1 & 0.1 & 5.0 & 1.76 \\ MnI$_2$ & (1/3, 1/3) & -21.32 & (0,0) & HM & 21.3 & 1.1 & 5.0 & 1.41 \\ FeCl$_2$ & (0, 0) & 0.0 & (0, 0)$^*$ & FM $\perp$ & 115.2 & 0.5$^*$ & 4.0 & 0.0 \\ FeBr$_2$ & (0, 0) & 0.0 & (0, 0)$^*$ & FM $\perp$ & 81.3 & 0.8$^*$ & 4.0 & 0.0 \\ FeI$_2$ & (0, 0) & 0.0 & (0, 0)$^*$ & stripe $\perp$ & 36.5 & 1.9$^*$ & 4.0 & 0.0 \\ CoCl$_2$ & (0, 0) & 0.0 & (90,90)$^*$ & FM $\parallel$ & 46.0 & 1.2$^*$ & 3.0 & 0.0 \\ CoBr$_2$ & (0.03, 0.03) & -0.04 & (0,0) & FM $\parallel$ & 21.2 & 0.1 & 3.0 & 0.0 \\ CoI$_2$ & (1/2, 0) & -20.95 & (90,90) & HM & 41.7 & 5.6 & 1.2 & 0.0 \end{tabular} \caption{Summary of magnetic properties of the AB$_2$ compounds. The ground state ordering vector is denoted by $\mathbf{Q}$ and $E_\mathrm{min}$ is the ground state energy relative to the ferromagnetic state. 
The normal vector of the spiral plane is defined by the angles $\theta$ and $\varphi$ (see text). We also display the experimental in-plane order of the parent layered compound (Exp. IP order). In addition we state the spin spiral band width BW, the band width of the projected spin-orbit coupling PSO BW, the magnetic moment per unit cell in the ferromagnetic state $m_\Gamma$ and the band gap at the ordering vector $\Delta\varepsilon_\mathbf{Q}$. For the case of NiI$_2$, $m_\Gamma$ deviates from an integer value because the ferromagnetic state is metallic in LDA (whereas the spin spiral ground state has a gap). The cases of FeX$_2$, CoCl$_2$ and CoBr$_2$ are half metals, which enforces an integer magnetic moment despite the metallic ground state. The asterisks indicate ferromagnets where full spin-orbit coupling was included and the angles then refer to the direction of the spins rather than the spiral plane normal vector.} \label{tab:AB2Result} \end{table*} \section{Results}\label{sec:results} A comprehensive review of the magnetic properties of layered transition metal halides was provided in Ref. \cite{mcguire2017crystal}. Here we present spin spiral calculations and extract the magnetic properties of the corresponding monolayers. In addition to the magnetic moments, the properties are mainly characterised by a spiral ordering vector $\mathbf{Q}$ and the normal vector to the spin spiral plane $\mathbf{\hat n}_0$. The materials have either AB$_2$ or AB$_3$ stoichiometry and we will discuss these cases separately below. We have performed LDA and LDA+U calculations for all materials. In most cases, the Hubbard corrections do not make any qualitative difference, although the spiral ordering vector does change slightly, and we will not discuss these calculations further here. The Mn halides comprise an exception where the LDA+U calculations differ significantly from those of bare LDA, and the LDA+U calculations will be discussed separately for these materials below.
For the AB$_2$ materials, we find 12 compounds that exhibit a spiral order that breaks the crystal symmetry and yields a ferroelectric ground state. For six of these compounds we have calculated the spontaneous polarization by performing full relaxation (including self-consistent spin-orbit coupling) in supercells hosting the spiral order. \subsection{Magnetic ground state of AB$_2$ materials} The AB$_2$ materials all have space group $P\bar3m1$ corresponding to monolayers of the CdI$_2$ (or CdCl$_2$) prototype. The magnetic lattice is triangular and a few representative possibilities for the magnetic order are illustrated in figure \ref{fig:spirals}. The magnetic properties of all the considered compounds are summarized in table \ref{tab:AB2Result}. In addition to the ordering vector $\mathbf{Q}$ we provide the angles $\theta$ and $\varphi$, which are the polar and azimuthal angles of $\mathbf{\hat n}_0$ with respect to the out-of-plane direction and the ordering vector, respectively. It will be convenient to consider three limiting cases of the orientation of spin spiral planes: the proper screw ($\theta=90\degree$, $\varphi=0\degree$), the out-of-plane cycloid ($\theta=90\degree$, $\varphi=90\degree$) and the in-plane cycloid ($\theta=0\degree$, $\varphi=0\degree$). We also provide the ground state energy relative to the ferromagnetic configuration ($\mathbf{Q}=(0,0)$), the band gap, the spin spiral band width, which reflects the strength of the magnetic interactions, and the PSO band width, which is the energy difference between the easy and hard orientations of the spiral plane. The magnetic moments are calculated as the total moment in the unit cell using the ferromagnetic configurations without spin-orbit interaction and thus yield an integer number of Bohr magnetons for insulators.
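The $(\theta, \varphi)$ entries of table \ref{tab:AB2Result} can be converted to a Cartesian normal vector as follows (a small helper using the conventions stated above; the function name is ours and the inputs are illustrative):

```python
import numpy as np

def spiral_normal(theta, phi, q_dir):
    """Normal vector of the spiral plane from the polar angle theta (measured
    from the out-of-plane axis) and the azimuthal angle phi (measured from the
    ordering vector), both in degrees; q_dir is the in-plane direction of Q."""
    t, p = np.radians([theta, phi])
    qhat = np.asarray(q_dir, float) / np.linalg.norm(q_dir)
    perp = np.array([-qhat[1], qhat[0], 0.0])  # in-plane, 90 deg from Q
    return (np.sin(t) * (np.cos(p) * qhat + np.sin(p) * perp)
            + np.cos(t) * np.array([0.0, 0.0, 1.0]))

q = np.array([1.0, 0.0, 0.0])  # illustrative in-plane direction of Q
assert np.allclose(spiral_normal(90, 0, q), [1, 0, 0])   # proper screw
assert np.allclose(spiral_normal(90, 90, q), [0, 1, 0])  # out-of-plane cycloid
assert np.allclose(spiral_normal(0, 0, q), [0, 0, 1])    # in-plane cycloid
```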
The magnitudes of the local magnetic moments (obtained by integrating the magnetization density over the PAW spheres) in the ground state are generally found to be very close to the moments in the ferromagnetic configuration, unless explicitly mentioned. The spin spiral energy dispersions are provided for all AB$_2$ materials in the supporting information. The different classes of materials are described in detail below. \begin{figure*}[tb] \subfloat[]{\includegraphics[page=39, width=0.35\textwidth]{Tikz_figures.pdf}} \subfloat[]{\includegraphics[width=0.333\textwidth]{NiI2_fullbz_nosoc.pdf}} \subfloat[]{\includegraphics[page=38, width=0.35\textwidth]{Tikz_figures.pdf}} \caption{Spin spiral energy of NiI$_2$. Left: the spin spiral energy as a function of $\mathbf{q}$ without spin-orbit coupling. Center: the spin spiral energy evaluated in the entire Brillouin zone. Right: the spiral energy as a function of the spiral plane orientation, evaluated at the minimum $\mathbf{Q}=(0.14, 0.14)$. The spiral plane orientation is parameterized in terms of the polar angle $\theta$ and azimuthal angle $\varphi$ (measured from $\mathbf{Q}$) of the spiral plane normal vector.} \label{fig:NiI2} \end{figure*} \subsubsection*{NiX$_2$} The nickel halides all have ground states with incommensurate spiral vectors between $\Gamma$ and K. Experimentally, both NiI$_2$ and NiBr$_2$ in bulk form have been determined to have incommensurate spiral vectors \cite{ADAM19801, day1980incommensurate, KUINDERSMA1981231} in qualitative agreement with the LDA results. The case of NiCl$_2$, however, has been found to have ferromagnetic intra-layer order, whereas we find a rather small spiral vector of $\mathbf{Q}=(0.06, 0.06)$.
In bulk NiI$_2$ the experimental ordering vector $\mathbf{Q}_\mathrm{exp} = (0.1384, 0, 1.457)$ has an in-plane component in the $\Gamma\mathrm{M}$-direction with a magnitude of roughly $1/7$ of a reciprocal lattice vector, while for the monolayer we find $\mathbf{Q} = (0.14, 0.14, 0)$, which is in the $\Gamma\mathrm{K}$-direction. Evaluating the spin spiral energy in the entire Brillouin zone, however, reveals a nearly degenerate ring encircling the $\Gamma$-point with a radius of roughly 1/5 of a reciprocal lattice vector. The point $\mathbf{q}_\mathrm{M} = (0.21, 0)$ thus comprises a very shallow saddle point with an energy that exceeds the minimum by merely 2 meV. This is illustrated in figure \ref{fig:NiI2}. We also show a scan of the spin spiral energy (within the PSO approximation) as a function of the orientation of the spin spiral plane on a path that connects the limiting cases of in-plane cycloid, out-of-plane cycloid and proper screw. An unconstrained spin spiral calculation using the rectangular unit cell of figure \ref{fig:spirals} does not reveal any new minima in the energy, which implies that the ground state is well represented by a single-$q$ spiral and that higher order exchange interactions are negligible in NiI$_2$. The normal vector of the spiral makes an angle of 64{\degree} with the out-of-plane direction. This orientation is in good agreement with the experimental assignment of a proper screw (along $\mathbf{Q}_\mathrm{exp} = (0.1384, 0, 1.457)$), which corresponds to a tilt of 55\degree$\pm$10{\degree} with respect to the $c$-axis \cite{KUINDERSMA1981231}, but disagrees with the model proposed in Ref. \cite{Song2022}, where the spiral was found to be a proper screw. At low temperatures NiBr$_2$ has been reported to exhibit $\mathbf{Q}_\mathrm{exp}=(x, x, 3/2)$, where $x$ changes continuously from 0.027 at 4.2 K to 0.09 at 22.8 K; at 24 K the system then undergoes a first order transition to intra-layer ferromagnetic order \cite{adam1980neutron}.
The structure predicted here is close to the one observed in bulk at 22.8 K. The discrepancy could be due to the magnetoelastic deformation \cite{tokunaga2011multiferroicity} that has been associated with the modulation of the spiral vector. This effect could in principle be captured by relaxing the structure in supercell calculations, but the small wavevector spirals require prohibitively large supercells and are not easily captured by first principles methods. It is also highly likely that LDA is simply not accurate enough to describe the intricate exchange interactions that define the true ground state in this material. Bulk NiCl$_2$ is known to be an inter-layer antiferromagnet with ferromagnetically ordered layers \cite{pollard1982electronic}. We find the ground state to be a long wavelength incommensurate spin spiral with $\mathbf{Q}=(0.06, 0.06)$, which is in rather close proximity to ferromagnetic order. The ground state energy is less than 1 meV lower than that of the ferromagnetic state, but we cannot say at present whether this is due to inaccuracies of LDA or if the true ground state indeed exhibits spiral magnetic order in the monolayer limit. \subsubsection*{VX$_2$} The three vanadium halides are insulators, and whereas VCl$_2$ and VBr$_2$ are found to form $\mathbf{Q} = (1/3,1/3)$ spiral structures, VI$_2$ has an incommensurate ground state with $\mathbf{Q} = (0.14,0.14)$. The magnetic ground states of VCl$_2$ and VBr$_2$ are in good agreement with experiments on the bulk materials, where both have been found to exhibit out-of-plane 120{\degree} order \cite{kadowaki1985neutron}. This structure is expected to arise from strong nearest neighbour anti-ferromagnetic interactions between the V atoms. The case of VI$_2$ has a significantly smaller spiral band width, signalling weaker exchange interactions compared to VCl$_2$ and VBr$_2$.
A collinear energy mapping based on the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional \cite{Gjerding2021} yields a weakly ferromagnetic nearest neighbour interaction for VI$_2$ and strong anti-ferromagnetic interactions for VCl$_2$ and VBr$_2$. This is in agreement with the present results, which indicate that the magnetic order of VI$_2$ is not dominated by nearest neighbour interactions. Experimentally \cite{kuindersma1979magnetic}, the bulk VI$_2$ magnetic order has been found to undergo a phase transition at 14.4 K from a 120{\degree} state to a bicollinear state with $\mathbf{Q} = (1/2, 0)$, where the spins are perpendicular to $\mathbf{Q}$ and tilted by $29\degree$ from the $z$-axis. Such a bicollinear state implies that the true ground state is a double-$q$ state stabilized by higher order spin interactions and cannot be represented as a spin spiral in the primitive unit cell. To check whether LDA predicts the experimental ground state we have therefore performed spiral calculations in the rectangular cell shown in figure \ref{fig:spirals}. The result is shown in figure \ref{fig:VI2} along with the spiral calculation in the primitive cell, and we do not find any new minima in the super cell calculation. We have initialized the angles in the super cell calculation such that they correspond to bicollinear order, and the angles are observed to relax to the single-$q$ spin spiral of the primitive cell. It is likely that LDA is insufficient to capture the subtle higher order exchange interactions in this material, but it is possible that the monolayer simply has a magnetic order that differs from the individual layers in the bulk material. \begin{figure}[tb] \centering \includegraphics[width=0.5\textwidth]{VI2_superposed.pdf} \caption{Spin spiral energies of VI$_2$ obtained from the primitive cell (black) and the rectangular super cell (blue).
The dashed lines repeat the primitive cell results on the corresponding super cell path.} \label{fig:VI2} \end{figure} In the PSO approximation we find that VCl$_2$ and VBr$_2$ prefer out-of-plane spiral planes. The energy is rather insensitive to $\varphi$, forming a nearly degenerate subspace of ground states with a slight preference for the proper screw. The ground state of VI$_2$ is found to be close to the in-plane cycloid, with the normal vector to the spiral plane forming a 6{\degree} angle with $\mathbf{Q}$. The spin-orbit corrections in VI$_2$ are also found to be the smallest among the iodine based transition metal halides studied here, and the ground state energy only deviates by $0.7$ meV per unit cell from the out-of-plane cycloid, which constitutes the spiral plane orientation with the highest energy. \subsubsection*{MnX$_2$} The manganese halides are all found to form 120{\degree} ground states, which is in agreement with previous theoretical studies \cite{li2020high} using PBE. In contrast to the other insulators studied in the present work, however, we find that the results are qualitatively sensitive to the inclusion of Hubbard corrections. This was also found in Ref. \cite{torelli2019high}, where the sign of the nearest neighbour exchange coupling was shown to change when a Hubbard U parameter was included in the calculations. With U = 3.8 eV we find that all three compounds have spiral ground states with the incommensurate spiral vector $\mathbf{Q} = (0.11, 0.11, 0)$. Moreover, the spin spiral band width in the LDA+U calculations decreases by more than an order of magnitude compared to the bare LDA calculations. The experimental magnetic structures of the manganese halides are rather complicated, exhibiting several magnetic phase transitions within a range of 0.1 K below the initial ordering temperature.
In particular MnI$_2$ (MnBr$_2$) has been found to have three (two) complex non-collinear phases \cite{SATO1995224}, and MnCl$_2$ has two complex phases that are possibly collinear \cite{wilkinson1958neutron}. The experimental ground state of bulk MnCl$_2$ is not unambiguously known, but under the assumption of collinearity a possible ground state contains 15 Mn atoms in an extended stripy pattern \cite{wilkinson1958neutron}. Due to the weak and subtle nature of the magnetic interactions in the manganese compounds, however, it is not unlikely that the ground state of the monolayers differs from that of the bulk. This is corroborated by an experimental study of MnCl$_2$ intercalated in graphite, where a helimagnetic ground state with $\mathbf{Q}_\mathrm{exp} = (0.153, 0.153)$ was found \cite{wiesler1995determination}. This is rather close to our predicted ordering vector obtained from LDA+U. Experimentally, bulk MnBr$_2$ is found to exhibit stripy bicollinear \textit{uudd} order at low temperatures \cite{wollan1958neutron}. The order cannot be represented by a spiral in the minimal cell, but requires calculations in rectangular unit cells with spiral order $\mathbf{Q} = (0, 1/2)$, similar to VI$_2$ discussed above. We have calculated the spin spiral energies along the high symmetry band path required to capture this order and do not find any new minima. It is likely that the situation resembles MnCl$_2$, where a single-$q$ spiral has been observed for decoupled monolayers in agreement with our calculations. \subsubsection*{FeX$_2$} We find all the iron halides to have ferromagnetic ground states. For FeCl$_2$ and FeBr$_2$ this is in agreement with the experimentally determined magnetic order for the bulk compounds \cite{wilkinson1959neutron}. In contrast, FeI$_2$ has been reported to exhibit a bicollinear antiferromagnetic ground state \cite{gelard1974magnetic} similar to the case of MnBr$_2$ discussed above.
It is again possible that the ground state of the monolayer (calculated here) could differ from the magnetic ground state of the bulk compound, as has been found for MnCl$_2$. LDA predicts the three compounds to be half metals, meaning that the majority spin bands are fully occupied and only the minority bands have states at the Fermi level. This enforces an integer number of Bohr magnetons (four) per unit cell at any $\mathbf{q}$-vector in the spin spiral calculations. Thus longitudinal fluctuations are expected to be strongly suppressed in the iron halides and it is likely that these materials can be accurately modelled by Heisenberg Hamiltonians despite the itinerant nature of the electronic structure. The projected spin-orbit coupling is not applicable to collinear structures and we therefore include full spin-orbit coupling, which is compatible with the $\mathbf{Q}=(0,0)$ ground state. We find that all the iron compounds have an out-of-plane easy axis, which is in agreement with experiments. The band width provided in table \ref{tab:AB2Result} then simply corresponds to the magnetic anisotropy energy, which is smallest for FeCl$_2$ and increases for the heavier Br and I compounds as expected. \subsubsection*{CoX$_2$} We predict CoCl$_2$ to have an in-plane ferromagnetic ground state in agreement with the experimentally determined magnetic order of the bulk compound \cite{wilkinson1959neutron}. CoBr$_2$ is found to have a long wavelength spin spiral with $\mathbf{Q}=(0.03, 0.03)$. The spiral energy in the vicinity of the $\Gamma$-point is, however, extremely flat with almost vanishing curvature and the ground state energy is merely 0.04 meV lower than the ferromagnetic state. We regard this as being in agreement with the experimental report of intra-layer ferromagnetic order in the bulk compound \cite{wilkinson1959neutron}. The case of CoI$_2$ deviates substantially from the other two halides.
CoCl$_2$ and CoBr$_2$ are half-metals with $m=3\;\mu_\mathrm{B}$ per unit cell, whereas CoI$_2$ is an ordinary metal with $m\approx1.2\;\mu_\mathrm{B}$ per unit cell. We find the magnetic ground state of CoI$_2$ to be stripy anti-ferromagnetic with $\mathbf{Q}=(1/2,0)$, whereas experiments on the bulk compound have reported helimagnetic in-plane order with $\mathbf{Q}_\mathrm{exp}=(1/6, 1/8, 1/2)$ in the rectangular cell \cite{MEKATA1992859}. We note, however, that the calculated local magnetic moments vary strongly with $\mathbf{q}$ (by up to 0.5 $\mu_\mathrm{B}$) in the spin spiral calculations, which signals strong longitudinal fluctuations. This could imply that the material comprises a rather challenging case for DFT, and LDA may be insufficient to treat this material properly. \subsection{Spontaneous polarization of AB$_2$ materials} The materials in table \ref{tab:AB2Result} that exhibit spin spiral ground states are expected to introduce a polar axis due to spin-orbit coupling and thus allow for a spontaneous electric polarization. The stripy antiferromagnet with $\mathbf{Q}=(1/2, 0)$ preserves a site-centered inversion center and remains non-polar. In addition, the case of $\mathbf{Q}=(1/3, 1/3)$ with in-plane orientation of the spiral plane breaks inversion symmetry, but retains the three-fold rotational symmetry (up to translation by a lattice vector) and therefore cannot acquire components of in-plane polarization. To investigate the effect of symmetry breaking we have constructed $7\times1$ supercells of VI$_2$ and the Ni halides and performed a full relaxation of the $\mathbf{q}=(1/7,0)$ spin spiral commensurate with the supercell. These are not exactly the spin spirals found as ground states from LDA, but we will use them to get a rough estimate of the spontaneous polarization.
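The symmetry argument for the three-fold symmetric case can be verified directly: an in-plane polarization compatible with the retained rotation would have to obey $R(120\degree)\mathbf{P}_\parallel=\mathbf{P}_\parallel$, and a 120{\degree} in-plane rotation has no eigenvalue equal to one, so only $\mathbf{P}_\parallel=0$ survives (the out-of-plane component is unaffected by the rotation and remains allowed). A generic two-line check, not specific to any compound:

```python
import numpy as np

# In-plane block of a 120 degree rotation about the out-of-plane axis.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[c, -s], [s, c]])

# R has eigenvalues exp(+-2i pi/3); since neither equals 1, the fixed-point
# condition R P = P forces the in-plane polarization P to vanish.
assert not np.any(np.isclose(np.linalg.eigvals(R), 1.0))
```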
We note that this ordering vector is very close to the in-plane component of $\mathbf{Q}_\mathrm{exp}$ for bulk NiI$_2$, which is found to be nearly degenerate with the predicted ground state (see figure \ref{fig:NiI2}). The other materials exhibit similar near-degeneracies, but the calculated polarization could be sensitive to which spiral ordering vector is used. We have chosen to focus on the incommensurate spirals, but note that all the $\mathbf{Q}=(1/3,1/3)$ materials of table \ref{tab:pol} are expected to introduce a spontaneous polarization as well. Besides the incommensurate spirals, we thus only include the cases of MnBr$_2$ and MnI$_2$, where the $\mathbf{Q}=(1/3, 1/3)$ spirals may be represented in $\sqrt{3}\times\sqrt{3}$ supercells. The former case represents an example of a proper screw while the latter is an in-plane cycloid. The experimental order in the Mn halide materials is complicated, and our LDA+U calculations yield an ordering vector that differs from that of LDA. However, here we mostly consider these examples for comparison and to check the symmetry constraints on the polarization of the $\mathbf{Q}=(1/3, 1/3)$ spirals. In order to calculate the spontaneous polarization, we relax the atomic positions in the super cells both with and without spin-orbit coupling (included self-consistently) and calculate the 2D polarization from the Berry phase formula \cite{Gjerding2021}. The results are summarized in Tab. \ref{tab:pol}. We can separate the effect of relaxation from the pure electronic contribution by calculating the polarization (including spin-orbit coupling) of the structures that were relaxed without spin-orbit coupling. These numbers are stated in brackets in table \ref{tab:pol}, together with the total polarization (including relaxation) and the angles that define the orientation of the spiral plane with respect to $\mathbf{Q}$.
The self-consistent calculations yield the optimal orientations of the spiral planes without the PSO approximation, and it is reassuring that these orientations roughly coincide with the results of the GBT and the PSO approximation. The magnitude of the polarization largely scales with the atomic number of the ligands (as expected from the strength of spin-orbit coupling) and the iodide compounds thus produce the largest polarization. The in-plane cycloid in MnI$_2$ only gives rise to out-of-plane polarization, as expected from symmetry, and the $\mathbf{Q}=(1/3,1/3)$ proper screw in MnBr$_2$ has a polarization that is strictly aligned with $\mathbf{Q}$. The latter result is expected for any proper screw in the $\Gamma$K-direction because $\mathbf{Q}$ then coincides with a two-fold rotational axis and the ground state remains invariant under the combined action of this rotation and time-reversal symmetry. Since the polarization is not affected by time reversal, it must be aligned with the two-fold axis. The polarization vectors of the remaining materials (except for NiCl$_2$) are roughly aligned with the intersection between the spiral plane and the atomic plane. It is interesting to note that the calculated magnitudes of the total polarization are 5-10 times larger than the prediction from the pure electronic contribution, where the atoms were not relaxed with spin-orbit coupling. We also tried to calculate the polarization by using the Born effective charge tensors (without spin-orbit coupling) and the atomic deviations from the centrosymmetric positions. However, this approximation severely underestimates the polarization and even produces the wrong sign in the case of NiBr$_2$ and NiI$_2$. To obtain reliable values for the polarization it is thus crucial to include the relaxation effects and take the electronic contribution properly into account (going beyond the Born effective charge approximation). In Ref.
\cite{Song2022} a value of 141 fC/m was predicted for 2D NiI$_2$ from the gKNB model \cite{Xiang2011}, and this is comparable to the values found in table \ref{tab:pol} without relaxation effects. When relaxation is included we find a magnitude of 1.9 pC/m for NiI$_2$, which is an order of magnitude larger than the previous prediction. The results are, however, not directly comparable since Ref. \cite{Song2022} considered a spiral along the $\Gamma$K direction whereas the present result is for a spiral along $\Gamma$M. We note that Ref. \cite{Song2022} finds the polarization to be aligned with $\mathbf{Q}$, in agreement with the symmetry considerations above. Finally, the values for the spontaneous polarization in table \ref{tab:pol} may be compared with those of ordinary 2D ferroelectrics, which are typically on the order of a few hundred pC/m for in-plane ferroelectrics and a few pC/m for out-of-plane ferroelectrics \cite{Kruse2022}. In all of these type II multiferroics, the orientation of the induced polarization depends on the direction of the ordering vector, which may thus be switched by application of an external electric field. We have checked explicitly that the sign of the polarization is changed if we relax a right-handed instead of a left-handed spiral (corresponding to a reversed ordering vector). The small values of the spontaneous polarization in these materials imply that rather modest electric fields are required for switching the ordering vector; these materials thus comprise an interesting alternative to standard multiferroics such as BiFeO$_3$ and YMnO$_3$, where the coercive electric fields are orders of magnitude larger. \subsection{Magnetic ground state of AB$_3$ materials}\label{sec:AB3} The AB$_3$ materials all have space group $P\bar3m1$ corresponding to monolayers of the BiI$_3$ (or AlCl$_3$) prototype. The magnetic lattice is the honeycomb motif, thus hosting two magnetic ions in the primitive cell.
Several materials of this prototype have been characterized experimentally, but here we only present results for the Cr compounds. This is due to the fact that experimental data on the in-plane order is missing for all but CrX$_3$, FeCl$_3$ and RuCl$_3$. Moreover, all magnetic compounds were found to have a simple ferromagnetic ground state. RuCl$_3$ is a well known insulator with stripy antiferromagnetic in-plane order. However, bare LDA finds a metallic state, and both Hubbard corrections and self-consistent spin-orbit coupling are required to obtain the correct insulating state \cite{kim2015kitaev}. The latter is incompatible with the GBT approach and we have not pursued this further here. Bulk FeCl$_3$ is known to be an insulating helimagnet with $\mathbf{Q} = (\frac{4}{15}, \frac{1}{15}, \frac{3}{2})$ \cite{cable1962neutron}, while we find the monolayer to be a metallic ferromagnet. For CrI$_3$ we compare the spin spiral dispersion to the spiral energy determined by a third nearest neighbour energy mapping procedure. The prototype thus serves as a testing ground for applying the unconstrained GBT to materials with multiple magnetic atoms in the unit cell. We analyse the intracell angle between the Cr atoms of CrI$_3$ and provide an expression for generating good initial magnetic moments for GBT calculations. We finally discuss the observed deviations from the classical Heisenberg model and to what extent the flat spiral spectrum can be used to obtain the magnon excitation spectrum.
\begin{table}[tb] \begin{tabular}{c|c|c|c|c} & $(\theta,\varphi)$ & $P_\parallel$ & $P_\perp$ & $P_z$ \\ \hline VI$_2$ & (11, 0) & -0.6 (-31) & 290 (96) & 0.05 (0.11) \\ NiCl$_2$ & (90, -30) & -37 (-1.4) & 76 (15) & 3.5 (-5.1) \\ NiBr$_2$ & (69, -10) & 12 (-6) & 340 (32) & 26 (37) \\ NiI$_2$ & (70, 0) & -8 (-48) & 1890 (400) & -0.18 (12) \\ MnBr$_2$ & (90, 0) & 430 (38) & 0 (0.02) & 0 (0) \\ MnI$_2$ & (0, 0) & 0 (0.6) & 0.3 (-7) & -260 (-105) \end{tabular} \caption{Orientation of spin planes and 2D polarization (in fC/m) of selected transition metal halides. $P_\parallel$ denotes the polarization along $\mathbf{Q}$, while $P_\perp$ denotes the polarization in the atomic plane orthogonal to $\mathbf{Q}$ and $P_z$ is the polarization orthogonal to the atomic plane. The numbers in brackets are the polarization values obtained prior to relaxation of the atomic positions. We have used $7\times1$ supercells for the V and Ni halides and $\sqrt{3}\times\sqrt{3}$ supercells for the Mn halides. All calculations are set up with left-handed spirals.} \label{tab:pol} \end{table} \subsubsection*{CrX$_3$} The chromium trihalides are of considerable interest due to the versatile properties that arise across the three different halides. Monolayer CrI$_3$ was the first 2D material demonstrated to host ferromagnetic order, below 45 K \cite{Huang2017}, and has spurred intense interest in the physics of 2D magnetism. The magnetic order is governed by strong magnetic easy-axis anisotropy, which is accurately reproduced by first principles simulations \cite{Lado2017,Torelli2018}. In contrast, monolayers of CrCl$_3$ exhibit ferromagnetic interactions as well, but no proper long range order due to easy-plane anisotropy. Instead, these monolayers exhibit Kosterlitz-Thouless physics, which gives rise to quasi long range order below 13 K \cite{Bedoya-Pinto2021}.
The GBT is not really necessary to find the ground state of the monolayer chromium halides. They are all ferromagnetic and insulating and only involve short range exchange interactions that are readily obtained from collinear energy mapping methods \cite{Lado2017, Torelli2018, Olsen2019}. Nevertheless, the gap between the acoustic and optical magnons in bulk CrI$_3$ has been proposed to arise from either (second neighbour) Dzyaloshinskii-Moriya interactions \cite{Chen2018a} or Kitaev interactions \cite{Xu2018a, Lee2020}. The former could in principle be extracted directly from planar spin spiral calculations \cite{sandratskii2017insight}, while the latter requires conical spin spirals. The origin of this gap is, however, still subject to debate \cite{Do2022} and here we will mainly focus on the magnetic interactions that do not rely on spin-orbit coupling. In the following we will focus on CrI$_3$ as a representative member of the family. \begin{figure*}[tb] \centering \includegraphics[width=1\textwidth]{CrI3.pdf} \caption{Left: spin spiral energies of CrI$_3$ compared to third nearest neighbour energy mapping. Right: angles between the two magnetic moments. The spin spirals are initialised with angles determined by Eq. \eqref{eq:nnminimize}, which are shown in black. The moments are collinear on the $\Gamma K$ path and so the AFM solution is also quasi-stable in DFT. Center: the magnitude of the local magnetic moments along the spiral path.} \label{fig:CrI3} \end{figure*} The honeycomb lattice contains two magnetic atoms per unit cell and the magnetic moments at the two sites will in general differ by an angle $\xi$. Since we do not impose any constraints except for the boundary conditions specified by $\mathbf{q}$, the angle will be relaxed to its optimal value when the Kohn-Sham equations are solved self-consistently. The convergence of $\xi$ may be a tedious process since the total energy has a rather weak dependence on $\xi$.
For a given $\mathbf{q}$ the classical energy of the model \eqref{eq:heisenberg} is minimized by the angle $\xi^0$ given by \begin{align} \tan{\xi^0}=-\frac{\mathrm{Im}J^{12}(\mathbf{q})}{\mathrm{Re}J^{12}(\mathbf{q})}, \label{eq:nnminimize} \end{align} where \begin{align} J^{12}(\mathbf{q})=\sum_iJ^{12}_{0i}e^{-i\mathbf{q}\cdot\mathbf{R}_i} \end{align} is the Fourier transform of the inter-sublattice exchange coupling. If one assumes nearest neighbour interactions only, $\xi^0$ becomes independent of the exchange parameters and the resulting expression thus comprises a suitable initial guess for the inter-sublattice angle. We note that the classical spiral energy is independent of $\xi$ (in the absence of spin-orbit coupling) when $J^{12}(\mathbf{q})=0$ and the angle may be discontinuous at such $\mathbf{q}$-points. This occurs for example in the magnetic honeycomb lattice at the K-point ($\mathbf{q}=(1/3,1/3)$). In general, Eq. \eqref{eq:nnminimize} has two solutions that differ by $\pi$, and only one of these minimizes the energy while the other maximizes it. The maximum energy constitutes an ``optical'' spin spiral branch, which is of interest if one wishes to extract the exchange coupling constants. The spiral energies of CrI$_3$ (with optimized intracell angles) are shown in figure \ref{fig:CrI3}, where we show both the ferromagnetic ($\xi=0$) and the antiferromagnetic ($\xi=\pi$) results on the $\Gamma\mathrm{K}$ path. We also show the spiral energy obtained from the model \eqref{eq:heisenberg} with exchange parameters calculated from a collinear energy mapping using four different spin configurations. We get $J_1=2.47$ meV, $J_2=0.682$ meV and $J_3=-0.247$ meV for the first, second and third nearest neighbour interactions, respectively, which is in good agreement with previous LDA calculations \cite{Olsen2021}.
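The vanishing of $J^{12}$ at the K-point and the initial-guess angle of Eq. \eqref{eq:nnminimize} can be checked with a minimal numerical sketch for a nearest-neighbour honeycomb model. This is purely illustrative and not part of our computational workflow; the cell offsets $(0,0)$, $(1,0)$, $(0,-1)$ are an assumed convention, chosen such that the K-point lies at $\mathbf{q}=(1/3,1/3)$ in reduced coordinates.

```python
import numpy as np

def J12(q, J1=1.0, offsets=((0, 0), (1, 0), (0, -1))):
    """Inter-sublattice exchange J^{12}(q) for nearest-neighbour honeycomb
    coupling; q in reduced coordinates, offsets are assumed cell indices."""
    return J1 * sum(np.exp(-2j * np.pi * np.dot(q, n)) for n in offsets)

def xi0(q):
    """Minimizing branch of Eq. (nnminimize): tan(xi0) = -Im J12 / Re J12,
    i.e. xi0 = -arg J12(q) up to the pi-ambiguity discussed in the text."""
    return -np.angle(J12(q))

# At the K-point q = (1/3, 1/3) the structure factor vanishes, so the
# classical energy is independent of xi and the angle is undefined:
print(abs(J12((1/3, 1/3))))   # vanishes up to rounding
# At Gamma the moments align ferromagnetically:
print(xi0((0.0, 0.0)))
```

This also makes the discontinuity of $\xi^0$ near K apparent: the phase of $J^{12}(\mathbf{q})$ winds around the zero, so the optimal angle jumps as the K-point is crossed.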
The model spiral energy is seen to agree very well with that obtained from the GBT, which largely validates such a three-parameter model (when spin-orbit coupling is neglected). We do, however, find a small deviation in the regions between high-symmetry points. This is likely due to higher order exchange interactions, which differ between the two approaches. For example, a biquadratic exchange term \cite{Gutzeit2022} will cancel out in any collinear mapping, but will influence the energies obtained from the GBT. Biquadratic exchange parameters could thus be extracted from the deviation between the two calculations. In figure \ref{fig:CrI3} we also show the calculated values of $\xi$ and the magnitude of the local magnetic moment at the Cr sites along the path. The self-consistent intracell angles are found to match very well with the initial guess, except for a slight deviation at the Brillouin zone boundary. This corroborates the fact that exchange couplings beyond second neighbours are insignificant (the second nearest neighbour coupling is an intra-sublattice interaction and does not influence the angle). It is also rather instructive to analyze the variation in the magnitude of the local magnetic moments. In general, the mapping of electronic structure problems to Heisenberg-type models like \eqref{eq:heisenberg} rests on an adiabatic assumption, namely that the magnitudes of the moments are fixed. However, the present variation in the magnitude of the moments does not imply a breakdown of the adiabatic assumption, but reflects that DFT should be mapped to a quantum mechanical Heisenberg model rather than a classical model. In particular, the ratio of spin expectation values between the ferromagnetic ground state and the (anti-ferromagnetic) state of highest energy is approximately $\langle S_i\rangle_\mathrm{AFM}/\langle S_i\rangle_\mathrm{FM}=0.83$ in the quantized model \cite{Torelli2020}.
While this ratio is somewhat smaller than the difference between the ferromagnetic and anti-ferromagnetic moments found here, the result does imply that the magnitude of the moments should depend on $\mathbf{q}$. The fact that the $\mathbf{q}=0$ anti-ferromagnetic moments are smaller than the ferromagnetic ones in a self-consistent treatment reflects that DFT captures part of the quantum fluctuations inherent to the model \eqref{eq:heisenberg}. We note that the spin spiral energy $E_\mathbf{q}$ calculated from the isotropic Heisenberg model using the optimal angle given by Eq. \eqref{eq:nnminimize} is related to the dynamical excitations (magnon energies) by $\omega_\mathbf{q}^\pm=E_\mathbf{q}^\pm/S$, and the spiral energies thus comprise a simple method to obtain the magnetic excitation spectrum. However, even if a model like \eqref{eq:heisenberg} fully describes a magnetic material (no anisotropy or higher order terms), there will be a systematic error in the extracted exchange parameters (and the resulting magnon spectrum) if the parameters are extracted by mapping to the classical model. The reason is that the classical energies correspond to expectation values of spin configurations with a fixed magnitude of the spin, which is not accommodated in a self-consistent approach. This error is directly reflected by the variation of the magnitude of the moments in figure \ref{fig:CrI3}. The true exchange parameters can only be obtained either by mapping to eigenstates of the model \cite{Torelli2020} or by considering infinitesimal rotations of the spin, which may be handled non-selfconsistently using the magnetic force theorem \cite{Liechtenstein1987,Bruno2003,Halilov1998,Zimmermann2019,Durhuus2022}.
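For reference, the two branches entering $\omega_\mathbf{q}^\pm=E_\mathbf{q}^\pm/S$ can be written explicitly for the honeycomb lattice. The following is a sketch assuming the convention $H=-\frac{1}{2}\sum_{ij}J_{ij}\mathbf{S}_i\cdot\mathbf{S}_j$ for the model \eqref{eq:heisenberg} and measuring energies relative to the ferromagnetic ground state (prefactors depend on the chosen normalization):
\begin{align}
E_\mathbf{q}^{\pm}=S^2\left[J^{11}(\mathbf{0})+|J^{12}(\mathbf{0})|-J^{11}(\mathbf{q})\mp|J^{12}(\mathbf{q})|\right],
\end{align}
where $J^{11}$ denotes the intra-sublattice analogue of $J^{12}$. The acoustic branch $E_\mathbf{q}^{+}$ then vanishes at $\Gamma$ (the Goldstone mode), the optical branch starts at $2S^2|J^{12}(\mathbf{0})|$, and the two branches touch wherever $J^{12}(\mathbf{q})=0$, consistent with the K-point behaviour discussed above.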
Nevertheless, exchange parameters obtained from classical and quantum mechanical energy mapping typically deviate by less than 5\% \cite{Torelli2020}, and for insulators it is a good approximation to extract the magnon energies from planar spiral calculations, although the mapping is only strictly valid in the limit of small $\mathbf{q}$. \section{Conclusion and outlook}\label{sec:conclusion} In conclusion, we have demonstrated the abundance of spiral magnetic order in 2D transition metal dihalides from first principles calculations. The calculations imply that type II multiferroic order is rather common in these materials and we have calculated the spontaneous polarization in a selected subset of these using fully relaxed structures in supercells. While the supercell calculations do not correspond to the exact spirals found from the GBT, the calculations show that relaxation effects play a crucial role for the induced polarization and should be taken into account in any quantitative analysis. The spontaneous polarization in type II multiferroics is in general rather small compared to what is found in ordinary 2D ferroelectrics, which could imply that the chirality of the spirals is switchable by small electric fields. It would be highly interesting to calculate the coercive field for switching in these materials, but due to the importance of relaxation effects and spin-orbit coupling this is a non-trivial computation that cannot simply be obtained from the Born effective charges and the force constant matrix. The GBT comprises a powerful framework for extracting the magnetic properties of materials from first principles. In addition to the single-$q$ states considered here, one may use supercells to extract the importance of higher order exchange interactions and unravel the possibility of having multi-$q$ ground states.
In addition, for non-centrosymmetric materials, the PSO approach may be readily applied to obtain the Dzyaloshinskii-Moriya interactions, which may lead to Skyrmion lattice ground states or stabilize other multi-$q$ states. \onecolumngrid \section{Appendix} \subsection{Implementation}\label{sec:implementation} In the PAW formalism we expand the spiral spinors using the standard PAW transformation \cite{blochl} \begin{align} \psi_{\mathbf{q},\mathbf{k}}(\mathbf{r}) &= \hat{\mathcal{T}} \tilde{\psi}_{\mathbf{q},\mathbf{k}}(\mathbf{r})= \tilde{\psi}_{\mathbf{q},\mathbf{k}}(\mathbf{r})+\sum_a \sum_i (\phi_i^a(\mathbf{r}) - \tilde{\phi}_i^a(\mathbf{r}))\int d\mathbf{r} [\tilde{p}_i^{a}(\mathbf{r})]^*\tilde{\psi}_{\mathbf{q},\mathbf{k}}(\mathbf{r}), \end{align} where $\tilde{\psi}_{\mathbf{q},\mathbf{k}}(\mathbf{r})$ is a smooth (spinor) pseudo-wavefunction that coincides with $\psi_{\mathbf{q},\mathbf{k}}(\mathbf{r})$ outside the augmentation spheres and deviates from $\psi_{\mathbf{q},\mathbf{k}}(\mathbf{r})$ by the second term inside the augmentation spheres. The all-electron wavefunction $\psi_{\mathbf{q},\mathbf{k}}(\mathbf{r})$ is thus expanded in terms of (spinor) atomic orbitals $\phi_i^a$ inside the PAW spheres and the expansion coefficients are given by the overlap between the pseudowavefunction and atom-centered spinor projector functions $\tilde p_i^a$. Using Eq. 
\eqref{eq:GBT} we may write this as \begin{align} \psi_{\mathbf{q},\mathbf{k}}(\mathbf{r}) &= e^{i\mathbf{k}\cdot\mathbf{r}}U^\dag_\mathbf{q}(\mathbf{r})\tilde{u}_\mathbf{q,k}(\mathbf{r})+\sum_a \sum_i (\phi_i^a(\mathbf{r}) - \tilde{\phi}_i^a(\mathbf{r}))\int d\mathbf{r} [\tilde{p}_i^{a}(\mathbf{r})]^*e^{i\mathbf{k}\cdot\mathbf{r}}U^\dag_\mathbf{q}(\mathbf{r})\tilde{u}_\mathbf{q,k}(\mathbf{r})\notag\\ &=e^{i\mathbf{k}\cdot\mathbf{r}}U^\dag_\mathbf{q}(\mathbf{r})\tilde{u}_\mathbf{q,k}(\mathbf{r})+\sum_a \sum_i (\phi_i^a(\mathbf{r}) - \tilde{\phi}_i^a(\mathbf{r}))\int d\mathbf{r} [e^{-i\mathbf{k}\cdot\mathbf{r}}U_\mathbf{q}(\mathbf{r})\tilde{p}_i^{a}(\mathbf{r})]^*\tilde{u}_\mathbf{q,k}(\mathbf{r})\notag\\ &=e^{i\mathbf{k}\cdot\mathbf{r}}U^\dag_\mathbf{q}(\mathbf{r})\tilde{u}_\mathbf{q,k}(\mathbf{r})+\sum_a \sum_i (\phi_i^a(\mathbf{r}) - \tilde{\phi}_i^a(\mathbf{r}))\int d\mathbf{r} [\tilde{p}_{i,\mathbf{q,k}}^a(\mathbf{r})]^*\tilde{u}_\mathbf{q,k}(\mathbf{r})\notag\\ &\equiv\mathcal{T}_\mathbf{q,k}\tilde{u}_\mathbf{q,k}(\mathbf{r}), \end{align} where $U_\mathbf{q}(\mathbf{r})$ was given in Eq. \eqref{eq:U} and we defined \begin{align}\label{eq:projector} \tilde{p}_{i,\mathbf{q,k}}^a(\mathbf{r}) &= e^{-i\mathbf{k}\cdot\mathbf{r}}U_{\mathbf{q}}(\mathbf{r})\tilde{p}_i^{a}(\mathbf{r}). \end{align} The PAW transformed Kohn-Sham equations then read \begin{align} \tilde{H}_{\mathbf{q},\mathbf{k}}\tilde{u}_{\mathbf{q},\mathbf{k}}(\mathbf{r}) = \epsilon_\mathbf{q,k}S_\mathbf{q,k}\tilde{u}_{\mathbf{q},\mathbf{k}}(\mathbf{r}), \end{align} with \begin{align} \tilde{H}_\mathbf{q,k}=\mathcal{T}_\mathbf{q,k}^\dag H\mathcal{T}_\mathbf{q,k},\qquad S_\mathbf{q,k}=\mathcal{T}_\mathbf{q,k}^\dag \mathcal{T}_\mathbf{q,k}. \end{align} Calculations in the framework of the GBT thus require two modifications compared to the approach for solving the ordinary Kohn-Sham equations in the PAW formalism.
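The role of the generalized Bloch phase can be illustrated with a minimal one-dimensional numerical sketch. The convention $U_\mathbf{q}=\mathrm{diag}(e^{iqx/2},e^{-iqx/2})$ used below is an assumption standing in for Eq. \eqref{eq:U}; with it, multiplying the common Bloch factor $e^{ikx}$ by $U_\mathbf{q}^\dag$ shifts the effective wavevectors of the spin-up and spin-down components to $k\mp q/2$, which is why the Bloch Hamiltonian is evaluated at spin-dependent wavevectors.

```python
import numpy as np

# 1D illustration of the generalized Bloch phase (assumed convention
# U_q = diag(e^{iqx/2}, e^{-iqx/2})): psi = e^{ikx} U_q^dag u carries
# effective wavevectors k - q/2 (spin-up) and k + q/2 (spin-down).
N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k_idx, q_idx = 3, 2                  # harmonics of 2*pi; q/2 stays commensurate
bloch = np.exp(1j * k_idx * x)       # common Bloch factor e^{ikx}

up = bloch * np.exp(-1j * (q_idx / 2) * x)   # U_q^dag on the spin-up component
down = bloch * np.exp(1j * (q_idx / 2) * x)  # U_q^dag on the spin-down component

def dominant_harmonic(f):
    """Index of the strongest Fourier component on the periodic grid."""
    return int(np.argmax(np.abs(np.fft.fft(f))))

print(dominant_harmonic(up), dominant_harmonic(down))   # 2 4, i.e. k -+ q/2
```

In the full PAW implementation the same phase must also be attached to the projector functions, cf. Eq. \eqref{eq:projector}, which is the second of the two modifications discussed below.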
1) The $k$-dependence of the standard Bloch Hamiltonian is replaced by $\mathbf{k}\rightarrow\mathbf{k}\mp\mathbf{q}/2$ for the spin-up and spin-down components, respectively. 2) Different spin-dependent projector functions have to be applied when calculating the projector overlaps with the spin-up and spin-down components of the pseudowavefunctions (see Eq. \eqref{eq:projector}). \vspace{5mm} \twocolumngrid \begin{figure}[tb] \centering \includegraphics[width=0.45\textwidth]{CoPt_nosoc_en.pdf} \caption{Comparison between GBT spin spiral calculations and supercell calculations without spin-orbit coupling in monolayer CoPt.} \label{fig:CoPtsupercell} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.45\textwidth]{fcc_Fe.pdf} \caption{Spin spiral energies of fcc Fe for the experimental lattice constant (red) and a strained lattice constant, which is known to reproduce the experimental spin spiral order (blue). The dashed vertical lines indicate the minima found in Ref. \cite{marsman2002broken}.} \label{fig:fcciron} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.45\textwidth]{CoPt_soc_en.pdf} \caption{Comparison between GBT spin spiral calculations and supercell calculations with projected and full spin-orbit coupling in monolayer CoPt.} \label{fig:CoPt_energy} \end{figure} \subsection{Benchmark}\label{sec:benchmark} The LDA implementation of the GBT has been tested by checking that our results agree with similar calculations from the literature and by verifying internal consistency through comparison with supercell calculations. fcc Fe has been found to have a spin spiral ground state \cite{Y_Tsunoda_1987} and the calculation of the ordering vector $\mathbf{Q}$ has become a standard benchmark for spin spiral implementations \cite{kurz2004ab}.
In previous simulations the ordering vector was found to be rather sensitive to the lattice constant, and in figure \ref{fig:fcciron} we show the spin spiral energies along the $\Gamma$XW path using the experimental lattice constant as well as the lattice constant that has been found to reproduce the experimental ordering vector \cite{marsman2002broken}. The calculated value of $\mathbf{Q}$ is in good agreement with previous reports in both cases \cite{garcia2004first}. We also confirm a similar low energy barrier between the two local minima, as is expected from LDA \cite{knopfle2000spin}. In order to check internal consistency we have investigated the case of monolayer CoPt \cite{sandratskii2017insight}, where we compare spin spiral energies calculated using the GBT with energies calculated from supercells. We thus construct a $16\times1$ supercell of the CoPt monolayer and consider spirals with $\mathbf{q}_\mathrm{c}=(\frac{n}{16})$ in units of reciprocal lattice vectors. This allows us to extract 16 different spiral energies in the supercell using standard non-collinear DFT. In order to compare the two methods we have used a $k$-point grid of $16\times16\times1$ for the GBT and $1\times16\times1$ for the supercell and a plane wave cutoff of 700 eV for both calculations. In Fig. \ref{fig:CoPtsupercell} we compare the results without spin-orbit coupling and find excellent agreement between the supercell and GBT calculations. We note that when spin-orbit coupling is neglected one has $E_{\mathbf{q}}=E_{-\mathbf{q}}$. Since spin-orbit coupling is incompatible with the GBT one has to resort to approximate schemes to include it in the calculations. In the present work we have used the PSO method proposed by Sandratskii \cite{sandratskii2017insight}. In Fig. \ref{fig:CoPt_energy} we compare spin spiral calculations with supercell calculations where the spin-orbit coupling has been included either fully or by the PSO method.
The PSO method is fully compatible with the GBT and we find excellent agreement between the spin spiral energies calculated with the GBT and with supercells. The PSO approach is, however, an approximation, and the correct result can only be obtained from the supercell using the full spin-orbit coupling. We see that the PSO calculations are in good agreement with those obtained from full spin-orbit coupling but overestimate the energies at the Brillouin zone boundary by a few percent. In contrast, if one tries to include the full spin-orbit operator in the GBT calculations (by diagonalizing $H^\mathrm{KS}$ including spin-orbit coupling on a basis of GBT eigenstates without spin-orbit coupling) the energies are severely underestimated with respect to the exact result (from the supercell calculation). We note that the spiral energies including spin-orbit coupling show a slight asymmetry between points at $q$ and $-q$, which can be related to the Dzyaloshinskii-Moriya interactions in the system \cite{sandratskii2017insight}. \newpage
\section{Introduction} The interference channel (IFC) models a wireless network where every transmitter (user) communicates with its unique intended receiver while causing interference to the remaining receivers. For the two-user IFC, the topic of study in this paper and henceforth simply referred to as an IFC, the capacity region is not known in general even when the channel is time-invariant, i.e., non-fading. Capacity results are known only for specific classes of non-fading two-user IFCs, where the classes are identified by the relative strength of the channel gains of the interfering cross-links and the intended direct links. Thus, strong and weak IFCs refer to the cases where the channel gains of the cross-links are at least as large as those of the direct links and vice-versa. The capacity region for the class of strong Gaussian IFCs is developed independently in \cite{cap_theorems:Sato_IC,cap_theorems:Carleial_VSIFC,cap_theorems:KobayashiHan_IC} and can be achieved when both receivers decode both the intended and interfering messages. In contrast, for the weak channels, the sum-capacity can be achieved by ignoring interference when the channel gain of one of the cross-links is zero, i.e., for a one-sided IFC \cite{cap_theorems:Costa_IC}. More recently, the sum-capacity of a class of noisy or very weak Gaussian IFCs has been determined independently in \cite{cap_theorems:ShangKramerChen}, \cite{cap_theorems:MotaKhan}, and \cite{cap_theorems:AR_VVV}. Outer bounds for the IFC are developed in \cite{cap_theorems:Kramer_IFCOB} and \cite{cap_theorems:ETW}, while several achievable rate regions for the Gaussian IFC are studied in \cite{cap_theorems:I_Sason_IFC}. The best known inner bound is due to Han and Kobayashi (HK) \cite{cap_theorems:KobayashiHan_IC}. Recently, in \cite{cap_theorems:ETW} a simple HK-type scheme is shown to achieve every rate pair within 1 bit/s/Hz of the capacity region.
In \cite{cap_theorems:Weng_Tuninetti}, the authors reformulate the HK region as a sum of two sets to characterize the maximum sum-rate achieved by Gaussian inputs without time-sharing. More recently, the approximate capacity of two-user Gaussian IFCs is characterized using a deterministic channel model in \cite{cap_theorems:Bresler_Tse}. The sum-capacity of the class of non-fading MIMO IFCs is studied in \cite{cap_theorems:Shang_MIMOIFC}. Relatively few results are known for parallel or fading IFCs. In \cite{cap_theorems:ChuCioffi_IC}, the authors develop an achievable scheme for a class of two-user parallel Gaussian IFCs where each parallel channel is strong, using independent encoding and decoding in each parallel channel. In \cite{cap_theorems:SumCap_ParZIFC}, Sung \textit{et al.} present an achievable scheme for a class of one-sided two-user parallel Gaussian IFCs. The achievable scheme involves encoding and decoding signals over each parallel channel independently such that, depending on whether a parallel channel is a weak or strong (including very strong) one-sided IFC, the interference in that channel is either viewed as noise or completely decoded, respectively. In this paper, we show that independent coding across sub-channels is in general not sum-capacity optimal. Recently, for parallel Gaussian IFCs, \cite{cap_theorems:Shang_03} determines the conditions on the channel coefficients and power constraints for which independent transmission across sub-channels and treating interference as noise is optimal. Techniques for MIMO IFCs \cite{cap_theorems:Shang_MIMOIFC} are applied to study separability in parallel Gaussian IFCs (PGICs) in \cite{cap_theorems:KaistParIFC}. It is worth noting that PGICs are a special case of ergodic fading IFCs in which each sub-channel is assigned the same weight, i.e., occurs with the same probability; furthermore, they can also be viewed as a special case of MIMO IFCs and thus results from MIMO IFCs can be directly applied.
For fading interference networks with three or more users, in \cite{cap_theorems:CadamJafar_IFCAlign}, the authors develop an \textit{interference alignment} coding scheme to show that the sum-capacity of a $K$-user IFC scales linearly with $K$ in the high signal-to-noise ratio (SNR) regime when all links in the network have similar channel statistics. In this paper, we study ergodic fading two-user Gaussian IFCs and determine the sum-capacity and the corresponding optimal power policies for specific sub-classes, where we define each sub-class by the fading statistics. Noting that ergodic fading IFCs are a weighted collection of parallel IFCs (sub-channels), we identify four sub-classes that jointly contain the set of all ergodic fading IFCs. We develop the sum-capacity for two of them. For the third sub-class, we develop the sum-capacity when only one of the two receivers is affected by interference, i.e., for a one-sided ergodic fading IFC. While the four sub-classes are formally defined in the sequel, we refer the reader to Fig. \ref{FigIFCVenn} for a pictorial representation. An overview of the capacity results is illustrated in the sequel in Fig. \ref{Fig_IFCVenn}. A natural question that arises in studying ergodic fading and parallel channels is the optimality of \textit{separable coding}, i.e., whether encoding and decoding independently on each sub-channel is optimal in achieving one or more points on the boundary of the capacity region. For each sub-class of IFCs we consider, we address the optimality of separable coding, often referred to as \textit{separability}, and demonstrate that in contrast to point-to-point, multiple-access, and broadcast channels without common messages \cite{cap_theorems:GoldsmithVaraiya,cap_theorems:TH01,cap_theorems:Tse_BC}, separable coding is not necessarily sum-capacity optimal for ergodic fading IFCs. 
The first of the four sub-classes is the set of \textit{ergodic very strong} (EVS) IFCs in which each sub-channel can be either weak or strong but, averaged over all fading states (sub-channels), the interference at each receiver is sufficiently strong that the two direct links from each transmitter to its intended receiver are the bottlenecks limiting the sum-rate. For this sub-class, we show that requiring both receivers to decode the signals from both transmitters is optimal, i.e., the ergodic very strong IFC reduces to a two-user ergodic fading compound multiple-access channel (C-MAC) in which the transmitted signal from each user is intended for both receivers \cite{cap_theorems:SEP}. To this end, as an achievable rate region for IFCs and as a problem of independent interest, we develop the capacity region of ergodic fading C-MACs and the optimal power policies that achieve it (see also \cite{cap_theorems:SEP}). For EVS IFCs we also show that achieving the sum-capacity (and the capacity region) requires transmitting information (encoding and decoding) jointly across all sub-channels, i.e., separable coding in each sub-channel is strictly sub-optimal. Intuitively, the reason for joint coding across channels lies in the fact that, analogous to parallel broadcast channels with common messages \cite{cap_theorems:JindalGold}, both transmitters in the EVS IFCs transmit only common messages intended for both receivers, for which independent coding across sub-channels becomes strictly sub-optimal. To the best of our knowledge this is the first capacity result for fading two-user IFCs with a mix of weak and strong sub-channels. For such mixed ergodic IFCs, a strategy of \textit{ergodic interference alignment} is recently proposed in \cite{cap_theorems:Nazer01}, and is shown to achieve the sum-capacity in \cite{cap_theorems:Jafar_ErgIFC} for a class of $K$-user fading IFCs with uniformly distributed phase and at least $K/2$ disjoint equal strength interference links.
The second sub-class is the set of \textit{uniformly strong} (\textit{US}) IFCs in which every sub-channel is strong, i.e., the cross-links have larger fading gains than the direct links for each fading realization. For this sub-class, we show that the capacity region is the same as that of an ergodic fading C-MAC with the same fading statistics and that achieving this region requires joint coding across all sub-channels. The third sub-class is the set of \textit{uniformly weak} (\textit{UW}) IFCs for which every sub-channel is weak. As a first step, we study the one-sided uniformly weak IFC and develop genie-aided outer bounds. We show that the bounds are tight when the interfering receiver ignores the weak interference in every sub-channel. Furthermore, we show that separable coding is optimal for this sub-class. The sum-capacity results for the one-sided channel are used to develop outer bounds for the two-sided case; however, sum-capacity results for the two-sided case will require techniques such as those developed in \cite{cap_theorems:Shang_03} that also determine the channel statistics and power policies for which ignoring interference and separable coding is optimal. The final sub-class is the set of \textit{hybrid} IFCs for which the sub-channels are a mix of strong and weak such that there is at least one weak and one strong sub-channel, but which are not EVS IFCs (and by definition also not US and UW IFCs). The capacity-achieving strategies for EVS and US IFCs suggest that a joint coding strategy across the sub-channels can potentially take advantage of the strong states to partially eliminate interference. To this end, for ergodic fading \textit{one-sided IFCs}, we propose a general joint coding strategy that uses rate-splitting and Gaussian codebooks without time-sharing for all sub-classes of IFCs.
For two-sided IFCs, the coding strategy we present generalizes to a two-sided HK-based scheme with Gaussian codebooks and no time-sharing that is presented and studied in \cite{cap_theorems:Tuninetti}. In the non-fading case, a one-sided non-fading IFC is either weak or strong and the sum-capacity is known in both cases. In fact, for the weak case the sum-capacity is achieved by ignoring the interference and for the strong case it is achieved by decoding the interference at the receiver subject to the interference. However, for ergodic fading one-sided IFCs, in addition to the UW\ and US sub-classes, we also have to contend with the hybrid and EVS sub-classes each of which has a unique mix of weak and strong sub-channels. The HK-based achievable strategy we propose applies to all sub-classes of one-sided IFCs and includes the capacity-achieving strategies for the EVS, US, and UW as special cases. The sub-class of \textit{uniformly mixed }(\textit{UM})\textit{ }IFCs obtained by overlapping two complementary one-sided IFCs, one of which is uniformly strong and the other uniformly weak, belongs to the sub-class of hybrid (two-sided) IFCs. For UM\ IFCs, we show that to achieve sum-capacity the transmitter that interferes strongly transmits a common message across all sub-channels while the weakly interfering transmitter transmits a private message across all sub-channels. The two different interfering links however require joint encoding and decoding across all sub-channels to ensure optimal coding at the receiver with strong interference. Finally, a note on separability. In \cite{cap_theorems:CadamJafar_Insep}, Cadambe and Jafar demonstrate the inseparability of parallel interference channels using an example of a three-user frequency selective fading IFC. The authors use interference alignment schemes to show that separability is not optimal for fading IFCs with three or more users while leaving open the question for the two-user fading IFC. 
We addressed this question in \cite{cap_theorems:SXEP} for the ergodic fading one-sided IFC and developed the conditions for the optimality of separability for EVS\ and US one-sided IFCs. In this paper, we readdress this question for all sub-classes of fading IFCs. Our results suggest that in general both one-sided and two-sided IFCs benefit from transmitting the same information across all sub-channels, i.e., not independently encoding and decoding in each sub-channel, thereby exploiting the fading diversity to mitigate interference. The paper is organized as follows. In\ Section \ref{Section 2}, we present the channel models studied. In Section \ref{section 3}, we summarize our main results. The capacity region of an ergodic fading C-MAC is developed in Section \ref{Sec_CM}. The proofs are collected in Section \ref{Sec_4}. We discuss our results with numerical examples in Section \ref{Sec_Dis} and conclude in Section \ref{Sec_Con}.% \begin{figure}[tbp] \centering {\includegraphics[ height=2.6705in, width=5.5867in ]% {Two_One_sided_IFCs_Venn_noresults.eps}% }% \caption{A Venn diagram representation of the four sub-classes of ergodic fading one- and two-sided IFCs.}\label{FigIFCVenn}% \end{figure}% \section{\label{Section 2}Channel Model and Preliminaries}% \begin{figure} [ptb] \begin{center} \includegraphics[ height=2.1655in, width=5.9931in ]% {IC_and_CMAC.eps}% \caption{The two-user Gaussian\ two-sided IFC and C-MAC and the two-user Gaussian one-sided IFC.}% \label{Fig_IC}% \end{center} \end{figure} A two-sender two-receiver (also referred to as the two-user) ergodic fading Gaussian IFC consists of two source nodes $S_{1}$ and $S_{2}$, and two destination nodes $D_{1}$ and $D_{2}$ as shown in Fig. \ref{Fig_IC}. 
Source $S_{k}$, $k=1,2$, uses the channel $n$ times to transmit its message $W_{k}$, which is distributed uniformly in the set $\,\{1,2,\ldots,2^{B_{k}}\}$ and is independent of the message from the other source, to its intended receiver, $D_{k}$, at a rate $R_{k}=B_{k}/n$ bits per channel use. In each use of the channel, $S_{k}$ transmits the signal $X_{k}$ while the destination $D_{k}$ receives $Y_{k}$, $k=1,2.$ For $\mathbf{X}=\left[ X_{1}\text{ }X_{2}\right] ^{T}$, the channel output vector $\mathbf{Y}=\left[ Y_{1}\text{ }% Y_{2}\right] ^{T}$ is given by% \begin{equation} \mathbf{Y}=\mathbf{HX}+\mathbf{Z} \label{IC_Y}% \end{equation} where $\mathbf{Z}=\left[ Z_{1}\text{ }Z_{2}\right] ^{T}$ is a noise vector with entries that are zero-mean, unit variance, circularly symmetric complex Gaussian noise variables and $\mathbf{H}$ is a random matrix of fading gains with entries $H_{m,k}$, for all $m,k=1,2$, such that $H_{m,k}$ denotes the fading gain between receiver $m$ and transmitter $k$. We use $\mathbf{h}$ to denote a realization of $\mathbf{H}$. We assume the fading process $\left\{ \mathbf{H}\right\} $ is stationary and ergodic but not necessarily Gaussian. Note that the channel gains $H_{m,k}$, for all $m$ and $k$, are not assumed to be independent; however, $\mathbf{H}$ is known instantaneously at all the transmitters and receivers. Over $n$ uses of the channel, the transmit sequences $\left\{ X_{k,i}% \right\} $ are constrained in power according to% \begin{equation} \left. \sum\limits_{i=1}^{n}\left\vert X_{k,i}\right\vert ^{2}\leq n\overline{P}_{k}\right. ,\text{ for all }k=1,2\text{.} \label{IFC_Pwr}% \end{equation} Since the transmitters know the fading states of the links on which they transmit, they can allocate their transmitted signal power according to the channel state information. 
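The input-output relation in (\ref{IC_Y}) and the power constraint in (\ref{IFC_Pwr}) can be illustrated with a short Monte Carlo sketch; the i.i.d. Rayleigh fading, the block length, and the power values below are illustrative assumptions rather than part of the model, which only requires $\left\{ \mathbf{H}\right\} $ to be stationary and ergodic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000                      # number of channel uses (illustrative)
P_bar = np.array([1.0, 2.0])   # average power constraints (illustrative)

# i.i.d. Rayleigh fading: each entry H[i, m, k] is CN(0, 1) (an assumption).
H = (rng.standard_normal((n, 2, 2)) + 1j * rng.standard_normal((n, 2, 2))) / np.sqrt(2)

# Gaussian codewords scaled to meet the per-user power constraint.
X = (rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))) / np.sqrt(2)
X *= np.sqrt(P_bar)

# Unit-variance circularly symmetric complex Gaussian noise.
Z = (rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))) / np.sqrt(2)

# Y = H X + Z, applied per channel use.
Y = np.einsum('imk,ik->im', H, X) + Z

# Empirical per-user powers meet the constraint up to Monte Carlo error.
print(np.mean(np.abs(X) ** 2, axis=0))
```

The same sketch specializes to the one-sided model by zeroing the appropriate off-diagonal entry of each fading matrix.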
A power policy \underline{$P$}$(\mathbf{h})$ is a mapping from the fading state space consisting of the set of all fading states (instantiations) $\mathbf{h}$ to the set of non-negative real values in $\mathcal{R}_{+}^{2}$. The entries of \underline{$P$}$(\mathbf{h})$ are $P_{k}(\mathbf{h})$, the power policy at user $k$, $k=1,2$. While \underline{$P$}$(\mathbf{h})$ denotes the map for a particular fading state, we write \underline{$P$}$(\mathbf{H})$ to explicitly describe the policy for the entire set of random fading states. Thus, we use the notation \underline{$P$}$(\mathbf{H})$ when averaging over all fading states or describing a collection of policies, one for every $\mathbf{h}$. The entries of \underline{$P$}$(\mathbf{H})$ are $P_{k}(\mathbf{H}),$ for all $k$. For an ergodic fading channel, (\ref{IFC_Pwr}) then simplifies to \begin{equation} \left. \mathbb{E}\left[ P_{k}(\mathbf{H})\right] \leq\overline{P}% _{k}\right. \text{ for all }k=1,2, \label{ErgPwr}% \end{equation} where the expectation in (\ref{ErgPwr}) is over the distribution of $\mathbf{H}$. We denote the set of all feasible policies $\underline{P}\left( \mathbf{h}\right) $, i.e., the power policies whose entries satisfy (\ref{ErgPwr}), by $\mathcal{P}$. Finally, we write \underline{$\overline{P}$} to denote the vector of average power constraints with entries $\overline {P}_{k}$, $k=1,2$. For the special case where both receivers decode the messages from both transmitters, we obtain a compound MAC (see Fig. \ref{Fig_IC}(a)). A one-sided fading Gaussian IFC results when either $H_{1,2}=0$ or $H_{2,1}=0$ (see Fig. \ref{Fig_IC}(b)). Without loss of generality, we develop sum-capacity results for a one-sided IFC (Z-IFC) with $H_{2,1}=0$. The results extend naturally to the complementary one-sided model with $H_{1,2}=0$. A two-sided IFC can be viewed as a collection of two complementary one-sided IFCs, one with $H_{1,2}=0$ and the other with $H_{2,1}=0$. 
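Membership in the feasible set $\mathcal{P}$, i.e., the averaged constraint (\ref{ErgPwr}), is easily checked by Monte Carlo; the truncated channel-inversion policy below is a hypothetical example used only to illustrate the feasibility check, not a policy proposed in this paper:

```python
import numpy as np

rng = np.random.default_rng(1)
P_bar = np.array([1.0, 1.0])   # average power constraints (illustrative)

# Sample fading states; the gains are Rayleigh (an illustrative assumption).
N = 100000
H = (rng.standard_normal((N, 2, 2)) + 1j * rng.standard_normal((N, 2, 2))) / np.sqrt(2)
g = np.abs(H[:, [0, 1], [0, 1]]) ** 2   # direct-link gains |H_kk|^2

# A hypothetical truncated channel-inversion policy P_k(h) = c/g for g above
# a threshold, scaled so that E[P_k(H)] meets the constraint with equality.
P = np.where(g > 0.1, 1.0 / g, 0.0)
P *= P_bar / P.mean(axis=0)

# Feasibility per (ErgPwr): the fading-averaged power cannot exceed P_bar.
assert np.all(P.mean(axis=0) <= P_bar + 1e-9)
print(P.mean(axis=0))
```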
We write $\mathcal{C}_{\text{IFC}}\left( \overline{P}_{1},\overline{P}_{2}\right) $ and $\mathcal{C}_{\text{C-MAC}}\left( \overline{P}_{1},\overline{P}_{2}\right) $ to denote the capacity region of an ergodic fading IFC and C-MAC, respectively. Our definitions of average error probabilities, capacity regions, and achievable rate pairs $\left( R_{1},R_{2}\right) $ for both the IFC and C-MAC mirror the standard information-theoretic definitions \cite[Chap. 14]{cap_theorems:CTbook}. Non-fading IFCs can be classified by the relative strengths of the interfering and intended signals at each of the receivers. A (two-sided non-fading) \textit{strong} IFC is one in which the cross-link channel gains are larger than the direct link channel gains to the intended receivers \cite{cap_theorems:Sato_IC}, i.e.,
\begin{equation}
\begin{array}[c]{cc}
\left\vert H_{j,k}\right\vert >\left\vert H_{k,k}\right\vert & \text{for all }j,k=1,2,\text{ }j\not =k.
\end{array}
\label{IFC_Str}
\end{equation}
A strong IFC is \textit{very strong} if the cross-link channel gains dominate the transmit powers such that (see, e.g., \cite{cap_theorems:Sato_IC,cap_theorems:Carleial_VSIFC})
\begin{equation}
\begin{array}[c]{cc}
\sum\limits_{k=1}^{2}C\left( \left\vert H_{k,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) <C\left( \sum\limits_{k=1}^{2}\left\vert H_{j,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) & \text{for all }j=1,2,
\end{array}
\label{IFC_VStr}
\end{equation}
where for the non-fading IFC, $P_{k}\left( \mathbf{H}\right) =\overline{P}_{k}$ in (\ref{IFC_Pwr}). One can verify that (\ref{IFC_VStr}) implies (\ref{IFC_Str}), i.e., a very strong IFC is also strong. A non-fading IFC is \textit{weak} when (\ref{IFC_Str}) is not satisfied for any $j,k$, i.e., neither of the two complementary one-sided IFCs into which a two-sided IFC can be decomposed is strong.
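The per-state classification implied by (\ref{IFC_Str}) and (\ref{IFC_VStr}) can be sketched in a few lines; the helper \texttt{classify\_state} is our illustrative name, and the per-state powers are supplied by the caller:

```python
import numpy as np

def C(x):
    """C(x) = log2(1 + x), as in the notation of this paper."""
    return np.log2(1.0 + x)

def classify_state(h, P):
    """Classify one 2x2 fading state h (complex gains) with powers P.

    Returns 'very strong', 'strong', 'mixed', or 'weak' per the
    non-fading conditions (IFC_Str) and (IFC_VStr).
    """
    g = np.abs(h) ** 2
    cross1 = g[0, 1] > g[1, 1]   # |H_12| > |H_22|
    cross2 = g[1, 0] > g[0, 0]   # |H_21| > |H_11|
    if cross1 and cross2:
        # Very strong: sum of direct rates < joint-decoding rate at each Rx.
        direct_sum = C(g[0, 0] * P[0]) + C(g[1, 1] * P[1])
        very = all(direct_sum < C(g[j, 0] * P[0] + g[j, 1] * P[1])
                   for j in (0, 1))
        return 'very strong' if very else 'strong'
    if cross1 or cross2:
        return 'mixed'
    return 'weak'

print(classify_state(np.array([[1, 10], [10, 1]], dtype=complex), [1.0, 1.0]))
```

Applying this test to every sampled fading state is what distinguishes the uniformly strong, uniformly weak, and hybrid sub-classes defined below.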
A non-fading IFC is \textit{mixed} when one of the complementary one-sided IFCs is weak while the other is strong, i.e.,
\begin{equation}
\begin{array}[c]{ccc}
\left\vert H_{1,2}\right\vert >\left\vert H_{2,2}\right\vert & \text{and} & \left\vert H_{2,1}\right\vert <\left\vert H_{1,1}\right\vert
\end{array}
\end{equation}
or
\begin{equation}
\begin{array}[c]{ccc}
\left\vert H_{1,2}\right\vert <\left\vert H_{2,2}\right\vert & \text{and} & \left\vert H_{2,1}\right\vert >\left\vert H_{1,1}\right\vert .
\end{array}
\end{equation}
An ergodic fading IFC is a collection of parallel sub-channels (fading states), and thus, each sub-channel can be either very strong, strong, or weak. Since a fading IFC can contain a mixture of different types of sub-channels, we introduce the following definitions to classify the set of all ergodic fading two-user Gaussian IFCs (see also Fig. \ref{FigIFCVenn}). Unless otherwise stated, we henceforth simply write IFC to denote a two-user ergodic fading Gaussian IFC.
\begin{definition}
A \textit{uniformly strong} IFC is a collection of strong sub-channels, i.e., both cross-links in each sub-channel satisfy (\ref{IFC_Str}).
\end{definition}
\begin{definition}
An \textit{ergodic very strong} IFC is a collection of weak and strong (including very strong) sub-channels for which (\ref{IFC_VStr}) is satisfied when averaged over all fading states and for $P_{k}\left( \mathbf{H}\right) =P_{k}^{\left( wf\right) }\left( H_{kk}\right) $, where $P_{k}^{\left( wf\right) }\left( H_{kk}\right) $ is the optimal waterfilling policy that achieves the point-to-point capacity for user $k$ in the absence of interference.
\end{definition}
\begin{definition}
A \textit{uniformly weak} IFC is a collection of weak sub-channels, i.e., in each sub-channel both cross-links do not satisfy (\ref{IFC_Str}).
\end{definition} \begin{definition} A \textit{uniformly mixed }IFC is a pair of two complementary one-sided IFCs in which one of them is uniformly weak and the other is uniformly strong. \end{definition} \begin{definition} A \textit{hybrid }IFC is a collection of weak and strong sub-channels with at least one weak and one strong sub-channel that do not satisfy the conditions in (\ref{IFC_VStr}) when averaged over all fading states and for $P_{k}\left( \mathbf{H}\right) =P_{k}^{\left( wf\right) }\left( H_{kk}\right) $. \end{definition} Since an ergodic fading channel is a collection of parallel sub-channels (fading states) with different weights, throughout the sequel, we use the terms fading states and sub-channels interchangeably. In contrast to the one-sided IFC, we simply write IFC to denote the two-sided model. Before proceeding, we summarize the notation used in the sequel. \begin{itemize} \item Random variables (e.g. $H_{k,j}$) are denoted with uppercase letters and their realizations (e.g. $h_{k,j}$) with the corresponding lowercase letters. \item Bold font $\mathbf{X}$ denotes a random matrix while bold font $\mathbf{x}$ denotes an instantiation of $\mathbf{X}$. \item $I$ denotes the identity matrix. \item $\left\vert \mathbf{X}\right\vert $ and $\mathbf{X}^{-1}$ denotes the determinant and inverse of the matrix $\mathbf{X.}$ \item $\mathcal{CN}\left( 0,\mathbf{\Sigma}\right) $ denotes a circularly symmetric complex Gaussian distribution with zero mean and covariance $\mathbf{\Sigma}$. \item $\mathcal{K}=\left\{ 1,2\right\} $ denotes the set of transmitters. \item $\mathbb{E}\left( \cdot\right) $ denotes expectation; $C(x)$ denotes $\log(1+x)$ where the logarithm is to the base 2, $\left( x\right) ^{+}$ denotes $\max(x,0)$, $I(\cdot;\cdot)$ denotes mutual information, $h\left( \cdot\right) $ denotes differential entropy, and $R_{\mathcal{S}}$ denotes $% {\textstyle\sum\nolimits_{k\in\mathcal{S}}} R_{k}$ for any ${\mathcal{S}}$ $\subseteq\mathcal{K}$. 
\end{itemize}
\section{\label{section 3}Main Results}
The following theorems summarize the main contributions of this paper. The proof for the capacity region of the C-MAC is presented in Section \ref{Sec_CM} as are the details of determining the capacity achieving power policies. The proofs for the remaining theorems, related to IFCs, are collected in Section \ref{Sec_4}. Throughout the sequel we write waterfilling solution to denote the capacity achieving power policy for ergodic fading point-to-point channels \cite{cap_theorems:GoldsmithVaraiya}.
\subsection{\label{Sec_3CM}Ergodic fading C-MAC}
An achievable rate region for ergodic fading IFCs results from allowing both receivers to decode the messages from both transmitters, i.e., by converting an IFC to a C-MAC. The following theorem summarizes the capacity region $\mathcal{C}_{\text{C-MAC}}$ of an ergodic fading C-MAC.
\begin{theorem}
\label{Th_CMAC}The capacity region, $\mathcal{C}_{\text{C-MAC}}\left( \overline{P}_{1},\overline{P}_{2}\right) $, of an ergodic fading two-user Gaussian C-MAC with average power constraint $\overline{P}_{k}$ at transmitter $k$, $k=1,2,$ is
\begin{equation}
\mathcal{C}_{\text{C-MAC}}\left( \overline{P}_{1},\overline{P}_{2}\right) =\bigcup_{\underline{P}\in\mathcal{P}}\left\{ \mathcal{C}_{1}\left( \underline{P}\left( \mathbf{H}\right) \right) \cap\mathcal{C}_{2}\left( \underline{P}\left( \mathbf{H}\right) \right) \right\} \label{CapR_CMAC}
\end{equation}
where for $j=1,2,$ we have
\begin{equation}
\mathcal{C}_{j}\left( \underline{P}\left( \mathbf{H}\right) \right) =\left\{ \left( R_{1},R_{2}\right) :R_{\mathcal{S}}\leq\mathbb{E}\left[ C\left( \sum_{k\in\mathcal{S}}\left\vert H_{j,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] ,\text{ for all }\mathcal{S}\subseteq\mathcal{K}\right\} . \label{CMAC_Cj}
\end{equation}
The optimal coding scheme requires encoding and decoding jointly across all sub-channels.
\end{theorem}
\begin{remark}
The capacity region $\mathcal{C}_{\text{C-MAC}}$ is convex. This follows from the convexity of the set $\mathcal{P}$ and the concavity of the $\log$ function.
\end{remark}
\begin{remark}
$\mathcal{C}_{\text{C-MAC}}$ is a function of $\left( \overline{P}_{1},\overline{P}_{2}\right) $ due to the fact that the union in (\ref{CapR_CMAC}) is over all feasible power policies, i.e., over all $\underline{P}\left( \mathbf{H}\right) $ whose entries satisfy (\ref{ErgPwr}).
\end{remark}
\begin{remark}
In contrast to the ergodic fading point-to-point and multiple access channels, the ergodic fading C-MAC is not merely a collection of independent parallel channels; in fact, encoding and decoding independently in each parallel channel is in general sub-optimal, as demonstrated later in the sequel.
\end{remark}
\begin{corollary}
\label{Cor_1}The capacity region $\mathcal{C}_{\text{IFC}}$ of an ergodic fading IFC is bounded as $\mathcal{C}_{\text{C-MAC}}\subseteq\mathcal{C}_{\text{IFC}}$.
\end{corollary}
\subsection{Ergodic Very Strong IFCs}
\begin{theorem}
\label{Th_VS}The capacity region of an ergodic very strong IFC is
\begin{equation}
\mathcal{C}_{\text{IFC}}^{EVS}=\left\{ \left( R_{1},R_{2}\right) :R_{k}\leq\mathbb{E}\left[ C\left( \left\vert H_{k,k}\right\vert ^{2}P_{k}^{wf}\left( H_{k,k}\right) \right) \right] ,k=1,2\right\} .
\label{EVS_CapR}
\end{equation}
The sum-capacity is
\begin{equation}
\sum_{k=1}^{2}\mathbb{E}\left[ C\left( \left\vert H_{k,k}\right\vert ^{2}P_{k}^{wf}\left( H_{k,k}\right) \right) \right] \label{EVS_SC}
\end{equation}
where, for all $k,$ $P_{k}^{wf}\left( H_{k,k}\right) $ is the optimal waterfilling solution for an (interference-free) ergodic fading link between transmitter $k$ and receiver $k$ such that $\underline{P}^{wf}\left( H_{k,k}\right) $ satisfies
\begin{equation}
\sum_{k=1}^{2}\mathbb{E}\left[ C\left( \left\vert H_{k,k}\right\vert ^{2}P_{k}^{wf}\left( H_{k,k}\right) \right) \right] <\min_{j=1,2}\mathbb{E}\left[ C\left( \sum_{k=1}^{2}\left\vert H_{j,k}\right\vert ^{2}P_{k}^{wf}\left( H_{k,k}\right) \right) \right] . \label{EVS_Cond}
\end{equation}
The capacity achieving scheme requires encoding and decoding jointly across all sub-channels at the transmitters and receivers, respectively. The optimal strategy also requires both receivers to decode messages from both transmitters.
\end{theorem}
\begin{remark}
In the sequel we show that the condition in (\ref{EVS_Cond}) is a result of the achievable strategy, and therefore is a sufficient condition. For the special case of fixed (non-fading) channel gains $\mathbf{H}$ and $P_{k}^{wf}\left( H_{k,k}\right) =\overline{P}_{k}$, (\ref{EVS_Cond}) reduces to the general conditions for a very strong IFC (see, e.g., \cite{cap_theorems:Sato_IC}) given by
\begin{subequations}
\label{VS_NF_IFC}
\begin{align}
\left\vert H_{1,2}\right\vert ^{2} & >\left\vert H_{2,2}\right\vert ^{2}\left( 1+\left\vert H_{1,1}\right\vert ^{2}\overline{P}_{1}\right) \\
\left\vert H_{2,1}\right\vert ^{2} & >\left\vert H_{1,1}\right\vert ^{2}\left( 1+\left\vert H_{2,2}\right\vert ^{2}\overline{P}_{2}\right) .
\end{align} In contrast, the fading averaged conditions in (\ref{EVS_Cond}) imply that not every sub-channel needs to satisfy (\ref{VS_NF_IFC}) and in fact, the ergodic very strong channel can be a mix of weak and strong channels provided $\underline{P}^{\left( wf\right) }$ satisfies (\ref{EVS_Cond}). This in turn implies that not every parallel sub-channel needs to be a strong (non-fading)\ Gaussian IFC. \end{subequations} \end{remark} \begin{remark} The set of strong fading IFCs for which every sub-channel is strong and the optimal waterfilling policies for the two interference-free links satisfy (\ref{EVS_Cond}) is strictly a subset of the set of ergodic very strong IFCs. \end{remark} \begin{remark} As stated in Theorem \ref{Th_VS}, the capacity achieving scheme for EVS\ IFCs requires coding jointly across all sub-channels. Coding independent messages (separable coding) across the sub-channels is optimal only when every sub-channel is very strong at the optimal policy $\underline{P}^{\left( wf\right) }$. \end{remark} \subsection{Uniformly Strong IFC} In the following theorem, we present the capacity region and the sum-capacity of a uniformly strong IFC. \begin{theorem} \label{Th_Str}The capacity region of a uniformly strong fading IFC for which the entries of every fading state $\mathbf{h}$ satisfy% \begin{equation}% \begin{array} [c]{ccc}% \left\vert h_{1,1}\right\vert \leq\left\vert h_{2,1}\right\vert & \text{and} & \left\vert h_{2,2}\right\vert \leq\left\vert h_{1,2}\right\vert \end{array} \label{US_HCond}% \end{equation} is given by \begin{equation} \mathcal{C}_{\text{IFC}}^{US}\left( \overline{P}_{1},\overline{P}_{2}\right) =\mathcal{C}_{\text{C-MAC}}\left( \overline{P}_{1},\overline{P}_{2}\right) \end{equation} where $\mathcal{C}_{\text{C-MAC}}\left( \overline{P}_{1},\overline{P}% _{2}\right) $ is the capacity of an ergodic fading C-MAC with the same channel statistics as the IFC. 
The sum-capacity is \begin{equation} \max_{\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}}\min\left\{ \min_{j=1,2}\left\{ \mathbb{E}\left[ C\left( {\textstyle\sum\nolimits_{k=1}^{2}} \left\vert H_{j,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] \right\} ,\sum_{k=1}^{2}\mathbb{E}\left[ C\left( \left\vert H_{k,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] \right\} . \label{US_SC}% \end{equation} The capacity achieving scheme requires encoding and decoding jointly across all sub-channels at the transmitters and receivers, respectively, and also requires both receivers to decode messages from both transmitters. \end{theorem} \begin{remark} In contrast to the very strong case, every sub-channel in a uniformly strong fading IFC is strong. \end{remark} \begin{remark} \label{Rem_USSep}The uniformly strong condition may suggest that separability is optimal. However, the capacity achieving C-MAC approach requires joint encoding and decoding across all sub-channels. A strategy where each sub-channel is viewed as an independent IFC, as in \cite{cap_theorems:ChuCioffi_IC}, will in general be strictly sub-optimal. This is seen directly from comparing (\ref{US_SC}) with the sum-rate achieved by coding independently over the sub-channels which is given by% \begin{equation} \max_{\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}}\mathbb{E}% \left\{ \min\left\{ \min_{j=1,2}\left\{ C\left( {\textstyle\sum\nolimits_{k=1}^{2}} \left\vert H_{j,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right\} \right. \right. ,\left. \left. 
\sum_{k=1}^{2}C\left( \left\vert H_{k,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right\} \right\} . \label{US_Ach}
\end{equation}
\end{remark}
The sub-optimality of independent encoding follows directly from the fact that for two random variables $A\left( \mathbf{H}\right) $ and $B\left( \mathbf{H}\right) $, $\mathbb{E}[\min\left( A\left( \mathbf{H}\right) ,B\left( \mathbf{H}\right) \right) ]\leq\min\left( \mathbb{E}[A\left( \mathbf{H}\right) ],\mathbb{E}[B\left( \mathbf{H}\right) ]\right) $ with equality \textit{if and only if} for every fading instantiation $\mathbf{h}$, $A\left( \mathbf{H}\right) $ (resp. $B\left( \mathbf{H}\right) $) dominates $B\left( \mathbf{H}\right) $ (resp. $A\left( \mathbf{H}\right) $). Thus, independent (separable) encoding across sub-channels is optimal only when, at $\underline{P}^{\ast}\left( \mathbf{H}\right) $, the sum-rate in every sub-channel in (\ref{US_Ach}) is maximized by the same sum-rate function.
\subsection{Uniformly Weak One-Sided IFC}
The following theorem summarizes the sum-capacity of a one-sided uniformly weak IFC in which every sub-channel is weak.
\begin{theorem}
\label{Th_UW1}The sum-capacity of a uniformly weak ergodic fading Gaussian one-sided IFC for which the entries of every fading state $\mathbf{h}$ satisfy
\begin{equation}
\begin{array}[c]{c}
\left\vert h_{2,2}\right\vert >\left\vert h_{1,2}\right\vert
\end{array}
\label{UW_Cond}
\end{equation}
is given by
\begin{equation}
\max_{\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}}\left\{ S^{\left( w,1\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) \right\} \label{SC_Weak}
\end{equation}
where
\begin{equation}
S^{\left( w,1\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) =\mathbb{E}\left[ C\left( \frac{\left\vert H_{1,1}\right\vert ^{2}P_{1}\left( \mathbf{H}\right) }{1+\left\vert H_{1,2}\right\vert ^{2}P_{2}\left( \mathbf{H}\right) }\right) +C\left( \left\vert H_{2,2}\right\vert ^{2}P_{2}\left( \mathbf{H}\right) \right) \right] . \label{SCW_S}
\end{equation}
\end{theorem}
\begin{remark}
One could alternately consider the fading one-sided IFC in which $\left\vert h_{1,1}\right\vert >\left\vert h_{2,1}\right\vert $ and $h_{1,2}=0$, for which the sum-capacity is given by (\ref{SC_Weak}) with the superscript $1$ replaced by $2$. The expression $S^{\left( w,2\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) $ is given by (\ref{SCW_S}) after swapping the indexes $1$ and $2$.
\end{remark}
\subsection{Uniformly Mixed IFC}
The following theorem summarizes the sum-capacity of a class of uniformly mixed two-sided IFCs.
\begin{theorem}
\label{Th_Mix}For a class of uniformly mixed ergodic fading two-sided Gaussian IFCs for which the entries of every fading state $\mathbf{h}$ satisfy
\begin{equation}
\begin{array}[c]{ccc}
\left\vert h_{1,1}\right\vert >\left\vert h_{2,1}\right\vert & \text{and} & \left\vert h_{2,2}\right\vert \leq\left\vert h_{1,2}\right\vert
\end{array}
\end{equation}
the sum-capacity is
\begin{equation}
\max_{\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}}\left\{ \min\left( \mathbb{E}\left[ C\left( {\textstyle\sum\nolimits_{k=1}^{2}}\left\vert H_{1,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] ,S^{\left( w,2\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) \right) \right\} \label{SC_Mix}
\end{equation}
where $S^{\left( w,2\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) $ is given by (\ref{SCW_S}) after swapping the indexes $1$ and $2$.
\end{theorem}
\begin{remark}
One could alternately consider the fading IFC in which $\left\vert h_{1,1}\right\vert \leq\left\vert h_{2,1}\right\vert $ and $\left\vert h_{2,2}\right\vert >\left\vert h_{1,2}\right\vert $. The sum-capacity is given by (\ref{SC_Mix}) after swapping the indexes $1$ and $2$.
\end{remark}
\begin{remark}
For the special case of $H_{k,k}=\sqrt{SNR}e^{j\phi_{k,k}}$ and $H_{j,k}=\sqrt{INR}e^{j\phi_{j,k}}$, $j\not =k$, where the $\phi_{j,k}$ for all $j$ and $k$ are independent and distributed uniformly in $\left[ -\pi,\pi\right] $, the sum-capacity in Theorems \ref{Th_Str} and \ref{Th_Mix} can also be achieved by ergodic interference alignment as shown in \cite{cap_theorems:Jafar_ErgIFC}.
\end{remark}
\subsection{Uniformly Weak IFC}
The sum-capacity of a one-sided uniformly weak IFC in Theorem \ref{Th_UW1} is an upper bound on that of a two-sided IFC for which at least one of the two one-sided IFCs that result from eliminating a cross-link is uniformly weak. Similarly, a bound can be obtained from the sum-capacity of the complementary one-sided IFC.
The following theorem summarizes this result.
\begin{theorem}
\label{Th_UW2}For a class of uniformly weak ergodic fading two-sided Gaussian IFCs for which the entries of every fading state $\mathbf{h}$ satisfy
\begin{equation}
\begin{array}[c]{ccc}
\left\vert h_{1,1}\right\vert >\left\vert h_{2,1}\right\vert & \text{and} & \left\vert h_{2,2}\right\vert >\left\vert h_{1,2}\right\vert
\end{array}
\end{equation}
the sum-capacity is upper bounded as
\begin{equation}
R_{1}+R_{2}\leq\max_{\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}}\min\left( S^{\left( w,1\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) ,S^{\left( w,2\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) \right) . \label{SC_UW2}
\end{equation}
\end{theorem}
\begin{remark}
For the non-fading case, the sum-rate bounds in (\ref{SC_UW2}) simplify to those obtained in \cite[Theorem 3]{cap_theorems:ETW}.
\end{remark}
\subsection{One-sided IFC: General Achievable Scheme}
For EVS and US IFCs, Theorems \ref{Th_VS} and \ref{Th_Str} suggest that joint coding across all sub-channels is optimal. Particularly for EVS IFCs, such joint coding allows one to exploit the strong states in decoding messages. Relying on this observation, we present an achievable strategy based on joint coding for all sub-classes of one-sided IFCs with $H_{2,1}=0$. The encoding scheme involves rate-splitting at user $2$, i.e., user $2$ transmits $w_{2}=\left( w_{2p},w_{2c}\right) $ where $w_{2p}$ and $w_{2c}$ are private and common messages, respectively, and can be viewed as a Han-Kobayashi scheme with Gaussian codebooks and without time-sharing.
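The fading-averaged decoding constraints of this rate-split scheme (denoted $S_{1}$ and $S_{2}$ in Theorem \ref{Th_Hyb} below) can be evaluated numerically; the Rayleigh gain statistics and the constant unit power policy in this sketch are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
C = lambda x: np.log2(1.0 + x)

# Illustrative one-sided IFC (H_21 = 0): Rayleigh direct links and a weaker
# Rayleigh cross link; constant power policy P_k(h) = 1 (an assumption).
N = 200000
g11 = rng.exponential(1.0, N)    # |H_11|^2 samples
g22 = rng.exponential(1.0, N)    # |H_22|^2 samples
g12 = rng.exponential(0.5, N)    # |H_12|^2 samples (weak on average)
P1 = np.ones(N)
P2 = np.ones(N)

def sum_rates(alpha):
    """Fading-averaged decoding constraints of the rate-split scheme;
    alpha is the fraction of user 2's power carrying the private message."""
    S1 = np.mean(C(g11 * P1 / (1.0 + g12 * alpha * P2)) + C(g22 * P2))
    S2 = np.mean(C(g22 * alpha * P2)
                 + C((g11 * P1 + g12 * (1.0 - alpha) * P2)
                     / (1.0 + g12 * alpha * P2)))
    return S1, S2

# The achievable sum-rate for this policy is min(S1, S2); alpha = 1 sends
# only the private message and alpha = 0 only the common message.
for a in (0.0, 0.5, 1.0):
    print(a, min(sum_rates(a)))
```

Note that the two constraints coincide at $\alpha=1$, consistent with the uniformly weak case discussed in the theorem.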
\begin{theorem}
\label{Th_Hyb}The sum-capacity of a one-sided IFC is lower bounded by
\begin{equation}
\max_{\underline{P}\left( \mathbf{H}\right) \in\mathcal{P},\alpha_{\mathbf{H}}\in\lbrack0,1]}\min\left( S_{1}\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) ,S_{2}\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) \right) \label{HK1_SR}
\end{equation}
where
\begin{align}
S_{1}\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) & =\mathbb{E}\left[ C\left( \frac{\left\vert H_{1,1}\right\vert ^{2}P_{1}\left( \mathbf{H}\right) }{1+\left\vert H_{1,2}\right\vert ^{2}\alpha_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }\right) \right] +\mathbb{E}\left[ C\left( \left\vert H_{2,2}\right\vert ^{2}P_{2}\left( \mathbf{H}\right) \right) \right] ,\\
S_{2}\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) & =\mathbb{E}\left[ C\left( \left\vert H_{2,2}\right\vert ^{2}\alpha_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) \right) \right] +\mathbb{E}\left[ C\left( \frac{\left\vert H_{1,1}\right\vert ^{2}P_{1}\left( \mathbf{H}\right) +\left\vert H_{1,2}\right\vert ^{2}\overline{\alpha}_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }{1+\left\vert H_{1,2}\right\vert ^{2}\alpha_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }\right) \right] ,
\end{align}
such that $\alpha_{\mathbf{H}}\in\left[ 0,1\right] $ is the fraction of the power allocated by user $2$ in fading state $\mathbf{H}$ to transmitting $w_{2p}$ and $\overline{\alpha}_{\mathbf{H}}=1-\alpha_{\mathbf{H}}$. For EVS one-sided IFCs, the sum-capacity is achieved by choosing $\alpha_{\mathbf{H}}=0$ for all $\mathbf{H}$ provided $S_{1}\left( 0,\underline{P}^{(wf)}\left( \mathbf{H}\right) \right) <S_{2}\left( 0,\underline{P}^{(wf)}\left( \mathbf{H}\right) \right) $. For US one-sided IFCs, the sum-capacity is given by (\ref{HK1_SR}) for $\alpha_{\mathbf{H}}=0$ for all $\mathbf{H}$.
For UW one-sided IFCs, the sum-capacity is achieved by choosing $\alpha_{\mathbf{H}}=1$ and maximizing $S_{2}\left( 1,\underline{P}\left( \mathbf{H}\right) \right) =S_{1}\left( 1,\underline{P}\left( \mathbf{H}\right) \right) $ over all feasible $\underline{P}\left( \mathbf{H}\right) .$ For a hybrid one-sided IFC, the achievable sum-rate is maximized by
\begin{equation}
\alpha_{\mathbf{H}}^{\ast}=\left\{
\begin{array}[c]{cc}
\alpha_{\mathbf{H}}\in(0,1] & \text{sub-channel }\mathbf{H}\text{ is weak}\\
0 & \text{sub-channel }\mathbf{H}\text{ is strong}
\end{array}
\right. \label{alpstar_hyb}
\end{equation}
and is given by (\ref{HK1_SR}) for this choice of $\alpha_{\mathbf{H}}^{\ast}$.
\end{theorem}
\begin{remark}
The optimal $\alpha_{\mathbf{H}}^{\ast}$ in (\ref{alpstar_hyb}) implies that, in general, for hybrid one-sided IFCs jointly coding the transmitted messages across all sub-channels is optimal. Specifically, the common message is transmitted jointly in all sub-channels while the private message is transmitted only in the weak sub-channels.
\end{remark}
\begin{remark}
The separation-based coding scheme of \cite{cap_theorems:SumCap_Par_ZIFC} is a special case of the above HK-based coding scheme and is obtained by choosing $\alpha_{\mathbf{H}}=1$ and $\alpha_{\mathbf{H}}=0$ for the weak and strong states, respectively. The resulting sum-rate is at most as large as the bound in (\ref{HK1_SR}) obtained for $\alpha_{\mathbf{H}}^{\ast}\in(0,1]$ and $\alpha_{\mathbf{H}}^{\ast}=0$ for the weak and strong states, respectively.
\end{remark}
\begin{remark}
In \cite{cap_theorems:Tuninetti}, a Han-Kobayashi based scheme using Gaussian codebooks and no time-sharing is used to develop an inner bound on the capacity region of a two-sided IFC.
\end{remark}
\section{\label{Sec_CM}Compound MAC: Capacity Region and Optimal Policies}
As stated in Corollary \ref{Cor_1}, an inner bound on the sum-capacity of an IFC can be obtained by allowing both receivers to decode both messages, i.e., by determining the sum-capacity of a C-MAC with the same inter-node links. In this section, we prove Theorem \ref{Th_CMAC}, which establishes the capacity region of ergodic fading C-MACs, and discuss the optimal power policies that achieve every point on the boundary of the capacity region.
\subsection{Capacity Region}
The capacity region of a discrete memoryless compound MAC is developed in \cite{cap_theorems:Ahlswede_CMAC}. For each choice of input distribution at the two independent sources, this capacity region is an intersection of the MAC capacity regions achieved at the two receivers. The techniques in \cite{cap_theorems:Ahlswede_CMAC} can be easily extended to develop the capacity region for a Gaussian C-MAC with fixed channel gains. For the Gaussian C-MAC, one can show that Gaussian signaling achieves the capacity region using the fact that Gaussian signaling maximizes the MAC region at each receiver. Thus, the Gaussian C-MAC capacity region is an intersection of the Gaussian MAC capacity regions achieved at $D_{1}$ and $D_{2}$. For a stationary and ergodic process $\left\{ \mathbf{H}\right\} $, the channel in (\ref{IC_Y}) can be modeled as a set of parallel Gaussian C-MACs, i.e., a collection of independent Gaussian C-MACs, one for each fading state $\mathbf{h}$, with an average transmit power constraint over all parallel channels. We now prove Theorem \ref{Th_CMAC} stated in Section \ref{Sec_3CM} which gives the capacity region of ergodic fading C-MACs.

\textit{Proof of Theorem \ref{Th_CMAC}:} We first present an achievable scheme. Consider a policy $\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}$.
The achievable scheme requires each transmitter to encode the same message across all sub-channels and each receiver to jointly decode over all sub-channels. Independent codebooks are used for every sub-channel. An error occurs at receiver $j$ if one or both of the messages decoded jointly across all sub-channels differ from the transmitted messages. Given this encoding and decoding, the analysis at each receiver mirrors that for a MAC receiver \cite[14.3]{cap_theorems:CTbook}, and one can easily verify that for reliable reception of the transmitted messages at receiver $j$, the rate pair $\left( R_{1},R_{2}\right) $ needs to satisfy the rate constraints in (\ref{CMAC_Cj}), where in decoding $w_{\mathcal{S}}=\left\{ w_{k}:k\in\mathcal{S}\right\} $ the information collected in each sub-channel is given by $C\left( \sum_{k\in\mathcal{S}}\left\vert H_{j,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) $, for all $\mathcal{S}\subseteq\mathcal{K}.$ Thus, for any feasible $\underline{P}\left( \mathbf{H}\right) $, the achievable rate region is given by $\mathcal{C}_{1}\left( \underline{P}\left( \mathbf{H}\right) \right) \cap\mathcal{C}_{2}\left( \underline{P}\left( \mathbf{H}\right) \right) $. From the concavity of the $\log$ function, the achievable region over all $\underline{P}\left( \mathbf{H}\right) $ is given by (\ref{CapR_CMAC}).\newline\qquad For the converse, the proof technique mirrors the proof for the capacity of an ergodic fading MAC developed in \cite[Appendix A]{cap_theorems:TH01}. For any $\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}$, one can use similar limiting arguments to show that for asymptotically error-free performance at receiver $j$, for all $j$, the achievable region has to be bounded as \begin{equation} \begin{array} [c]{cc} R_{\mathcal{S}}\leq\mathbb{E}\left[ C\left( \sum_{k\in\mathcal{S}}\left\vert H_{j,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] , & j=1,2.
\end{array} \label{CMAC_OB} \end{equation} The proof is completed by noting that, due to the concavity of the $\log$ function, it suffices to take the union of the region over all $\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}$. \begin{remark} An achievable scheme in which independent messages are encoded in each sub-channel, i.e., separable coding, will in general not achieve the capacity region. This is due to the fact that for this separable coding scheme the achievable rate in each sub-channel is a minimum of the rates at each receiver. The average of such minima can at most be the minimum of the average rates at each receiver, where the latter is achieved by encoding the same message jointly across all sub-channels. \end{remark} Corollary \ref{Cor_1} follows from the argument that a rate pair in $\mathcal{C}_{\text{C-MAC}}$ is achievable for the IFC since $\mathcal{C}_{\text{C-MAC}}$ is the capacity region when both messages are decoded at both receivers.% \begin{figure*}[tbp] \centering {\includegraphics[ height=2.0911in, width=5.3056in ]% {Case1_Case2_IFC.eps}% }% \caption{Rate regions $\mathcal{C}_{1}(\underline{P}(\underline{H}))$ and $\mathcal{C}_{2}(\underline{P}(\underline{H}))$ and sum-rate for case 1 and case 2.}% \label{Fig_Case12}% \end{figure*}% \begin{figure*}[tbp] \centering {\includegraphics[ height=1.9268in, width=5.3333in ]% {Case3abc_IFC.eps}% }% \caption{Rate regions $R_{r}(\underline{P}(\underline{H}))$ and $R_{d}(\underline{P}(\underline{H}))$ and sum-rate for cases $3a$, $3b$, and $3c$.}\label{Fig_Case3abc}% \end{figure*}% \subsection{Sum-Capacity Optimal Policies} The capacity region $\mathcal{C}_{\text{C-MAC}}$ is a union of the intersections of the pentagons $\mathcal{C}_{1}\left( \underline{P}\left( \mathbf{H}\right) \right) $ and $\mathcal{C}_{2}\left( \underline{P}\left( \mathbf{H}\right) \right) $ achieved at $D_{1}$ and $D_{2},$ respectively, where the union is over all $\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}$.
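Since each receiver's region is a pentagon defined by two individual-rate constraints and one sum-rate constraint, the maximum sum-rate of the intersection for a fixed policy can be computed directly: the intersection keeps the tighter of each constraint at the two receivers. A minimal numerical sketch under this three-constraint model (the function name and tuple convention are illustrative, not from the paper):

```python
def intersection_sum_rate(pentagon_1, pentagon_2):
    """Maximum R1 + R2 over the intersection of two MAC pentagons.

    Each pentagon is a tuple (r1_max, r2_max, r_sum): the fading-averaged
    individual-rate and sum-rate constraints at that receiver.  The
    intersection keeps the tighter of each constraint, so the maximum
    sum-rate is min(a1 + a2, s); whether the individual constraints or
    the sum constraint binds is what distinguishes the inactive cases
    from the active cases discussed below.
    """
    a1 = min(pentagon_1[0], pentagon_2[0])   # binding R1 constraint
    a2 = min(pentagon_1[1], pentagon_2[1])   # binding R2 constraint
    s = min(pentagon_1[2], pentagon_2[2])    # binding sum-rate constraint
    return min(a1 + a2, s)
```

When $a_{1}+a_{2}$ is the smaller quantity, neither sum-rate constraint is active (the inactive cases); when $s$ binds, the intersection corresponds to one of the active cases.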
The region $\mathcal{C}_{\text{C-MAC}}$ is convex, and thus, each point on the boundary of $\mathcal{C}_{\text{C-MAC}}$ is obtained by maximizing the weighted sum $\mu_{1}R_{1}+\mu_{2}R_{2}$ over all $\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}$, and for all $\mu_{1}>0$, $\mu_{2}>0$, subject to (\ref{CMAC_OB}). In this section, we determine the optimal policy $\underline{P}^{\ast}\left( \mathbf{H}\right) $ that maximizes the sum-rate $R_{1}+R_{2}$, i.e., the case $\mu_{1}=\mu_{2}=1$. Using the fact that the rate regions $\mathcal{C}_{1}\left( \underline{P}\left( \mathbf{H}\right) \right) $ and $\mathcal{C}_{2}\left( \underline{P}\left( \mathbf{H}\right) \right) $, for any feasible $\underline{P}\left( \mathbf{H}\right) $, are pentagons, in Figs. \ref{Fig_Case12} and \ref{Fig_Case3abc} we illustrate the five possible choices for the sum-rate resulting from an intersection of $\mathcal{C}_{1}\left( \underline{P}\left( \mathbf{H}\right) \right) $ and $\mathcal{C}_{2}\left( \underline{P}\left( \mathbf{H}\right) \right) $ (see also \cite{cap_theorems:SankarLiang_Conf}). Cases $1$ and $2$, shown in Fig. \ref{Fig_Case12} and henceforth referred to as \textit{inactive cases}, are such that the constraints on the two sum-rates are not active in $\mathcal{C}_{1}\left( \underline{P}\left( \mathbf{H}\right) \right) \cap\mathcal{C}_{2}\left( \underline{P}\left( \mathbf{H}\right) \right) $, i.e., no rate tuple on the sum-rate plane achieved at one of the receivers lies within or on the boundary of the rate region achieved at the other receiver. In contrast, the cases in which at least one such rate tuple exists, so that the two sum-rate constraints are active in $\mathcal{C}_{1}\left( \underline{P}\left( \mathbf{H}\right) \right) \cap\mathcal{C}_{2}\left( \underline{P}\left( \mathbf{H}\right) \right) $, are the \textit{active cases}. These include Cases $3a$, $3b$, and $3c$ shown in Fig.
\ref{Fig_Case3abc} where the sum-rate at $D_{1}$ is smaller than, larger than, or equal to, respectively, that achieved at $D_{2}$. By definition, the active set also includes the \textit{boundary cases} in which there is exactly one rate pair that lies within or on the boundary of the rate region achieved at the other receiver. There are six possible boundary cases, each lying at the intersection of an inactive case $l$, $l=1,2,$ and an active case $n$, $n=3a,3b,3c$; we denote them as cases $\left( l,n\right) $. In general, it is not possible to know \textit{a priori} the type of intersection that will maximize the sum-capacity. Thus, the sum-rate for each case has to be maximized over all $\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}$. To simplify the optimization and obtain a unique solution, we explicitly consider the six boundary cases as distinct from the active cases, thereby ensuring that the subsets of power policies resulting in the different cases are disjoint, i.e., no power policy results in more than one case. This in turn implies that the power policies resulting in each case satisfy specific conditions that distinguish that case from all others. For example, from Fig. \ref{Fig_Case12}, Case 1 results only when $\sum\nolimits_{k=1}^{2}C\left( \left\vert H_{k,k}\right\vert ^{2}P_{k}^{\left( wf\right) }\left( \mathbf{H}\right) \right) <C\left( \sum\nolimits_{k=1}^{2}\left\vert H_{j,k}\right\vert ^{2}P_{k}^{\left( wf\right) }\left( \mathbf{H}\right) \right) $, for all $j=1,2.$ Using these disjoint cases and the fact that the rate expressions in (\ref{CMAC_OB}) are concave functions of $\underline{P}\left( \mathbf{H}\right) $ allows us to develop closed-form sum-capacity results and optimal policies for all cases. Observe that cases $1$ and $2$ do not share a boundary since such a transition (see Fig. \ref{Fig_Case12}) requires passing through case $3a$, $3b$, or $3c$. Finally, note that Fig.
\ref{Fig_Case3abc} illustrates two specific $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ regions for $3a$, $3b$, and $3c$. The conditions for each case are shown in Figs. \ref{Fig_Case12}-\ref{Fig_BC23}.% \begin{figure*}[tbp] \centering {\includegraphics[ height=2.3462in, width=5.8055in ]% {CasesBC_1and3_IFC.eps}% }% \caption{Rate regions $R_{r}(\underline{P}(\underline{H}))$ and $R_{d}(\underline{P}(\underline{H}))$ for cases (1,3a), (1,3b), and (1,3c).}\label{Fig_BC13}% \end{figure*}% \begin{figure*}[tbp] \centering {\includegraphics[ height=2.3333in, width=5.6247in ]% {CasesBC_2and3_IFC.eps}% }% \caption{Rate regions $R_{r}(\underline{P}(\underline{H}))$ and $R_{d}(\underline{P}(\underline{H}))$ for cases (2,3a), (2,3b), and (2,3c).}\label{Fig_BC23}% \end{figure*}% Let $\underline{P}^{(i)}(\mathbf{H})$ and $\underline{P}^{(l,n)}(\mathbf{H})$ denote the optimal policies for cases $i$ and $(l,n)$, respectively. Let $S^{\left( i\right) }(\underline{P}(\mathbf{H}))$ and $S^{\left( l,n\right) }(\underline{P}(\mathbf{H}))$ denote the sum-rate achieved for cases $i$ and $\left( l,n\right) $, respectively, for some $\underline {P}(\mathbf{H})\in\mathcal{P}$. The optimization problem for case $i$ or case $\left( l,n\right) $ is given by \begin{equation}% \begin{array} [c]{l}% \max\limits_{\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}}S^{\left( i\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) \text{ or }\max\limits_{\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}% }S^{\left( l,n\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) \\% \begin{array} [c]{ccc}% s.t. 
& \mathbb{E}\left[ P_{k}(\mathbf{H})\right] \leq\overline{P}% _{k}\text{,} & \text{ }k=1,2, \end{array} \\% \begin{array} [c]{ccc}% \text{ \ \ \ \ \ } & P_{k}(\mathbf{H})\geq0\text{,} & \text{ }k=1,2,\text{ for all }\mathbf{H}% \end{array} \end{array} \label{DF_OptProb}% \end{equation} where \begin{equation}% \begin{array} [c]{l}% \begin{array} [c]{cc}% S^{\left( 1\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) =\sum_{k=1}^{2}\mathbb{E}\left[ C\left( \left\vert H_{k,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] & \end{array} \\% \begin{array} [c]{cc}% S^{\left( 2\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) =\sum_{k=1}^{2}\mathbb{E}\left[ C\left( \left\vert H_{j,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] , & j,k=1,2\text{, }j\not =k \end{array} \\% \begin{array} [c]{cc}% S^{\left( i\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) =\mathbb{E}\left[ C\left( \sum_{k=1}^{2}\left\vert H_{j,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] , & \text{ for }\left( i,j\right) =\left( 3a,2\right) ,\left( 3b,1\right) \end{array} \\% \begin{array} [c]{cc}% S^{\left( 3c\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) =S^{\left( 3a\right) }\left( \underline{P}\left( \mathbf{H}% \right) \right) , & \text{ }s.t.\text{ }S^{\left( 3a\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) =S^{\left( 3b\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) \end{array} \text{ }\\% \begin{array} [c]{ccc}% S^{\left( l,n\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) =S^{\left( l\right) }\left( \underline{P}\left( \mathbf{H}% \right) \right) , & \text{ }s.t.\text{ }S^{\left( l\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) =S^{\left( n\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) . & \text{for all }\left( l,n\right) . 
\end{array} \end{array} \label{DF_Jdef} \end{equation} The conditions for each case are given below (see Figs. \ref{Fig_Case12}-\ref{Fig_BC23}); for each case, the condition holds when evaluated at the optimal policies $\underline{P}^{\left( i\right) }(\mathbf{H})$ and $\underline{P}^{\left( l,n\right) }(\mathbf{H})$ for cases $i$ and $\left( l,n\right) $, respectively. For ease of notation, we do not explicitly denote the dependence of $S^{\left( i\right) }$ and $S^{\left( l,n\right) }$ on the appropriate $\underline{P}^{\left( i\right) }(\mathbf{H})$ and $\underline{P}^{\left( l,n\right) }(\mathbf{H})$, respectively. \begin{align} & \begin{array} [c]{cl} \underline{\text{Case}\ 1}: & S^{\left( 1\right) }<\min\left( S^{\left( 3a\right) },S^{\left( 3b\right) }\right) \end{array} \label{C1Cd}\\ & \begin{array} [c]{cl} \underline{\text{Case}\ 2}: & S^{\left( 2\right) }<\min\left( S^{\left( 3a\right) },S^{\left( 3b\right) }\right) \end{array} \label{C2Cd}\\ & \begin{array} [c]{cc} \underline{\text{Case}\ 3a}: & S^{\left( 3a\right) }<\min\left( S^{\left( 3b\right) },S^{\left( 1\right) },S^{\left( 2\right) }\right) \end{array} \label{C3aCd}\\ & \begin{array} [c]{cc} \underline{\text{Case}\ 3b}: & S^{\left( 3b\right) }<\min\left( S^{\left( 3a\right) },S^{\left( 1\right) },S^{\left( 2\right) }\right) \end{array} \label{C3bCd}\\ & \begin{array} [c]{cc} \underline{\text{Case}\ 3c}: & S^{\left( 3a\right) }=S^{\left( 3b\right) }<\min\left( S^{\left( 1\right) },S^{\left( 2\right) }\right) \end{array} \label{C3cCd} \end{align} \begin{align} & \begin{array} [c]{cccc} \underline{\text{Case}\ \left( 1,3a\right) }: & S^{\left( 3a\right) }<S^{\left( 3b\right) } & \text{and} & S^{\left( 1\right) }<S^{\left( 3b\right) } \end{array} \label{C13aCd}\\ & \begin{array} [c]{cccc} \underline{\text{Case}\ \left( 2,3a\right) }: & S^{\left( 3a\right) }<S^{\left( 3b\right) } & \text{and} & S^{\left( 2\right) }<S^{\left( 3b\right) } \end{array} \label{C23aCd}\\ & \begin{array}
[c]{cccc} \underline{\text{Case}\ \left( 1,3b\right) }: & S^{\left( 3b\right) }<S^{\left( 3a\right) } & \text{and} & S^{\left( 1\right) }<S^{\left( 3a\right) } \end{array} \label{C13bCd}\\ & \begin{array} [c]{cccc} \underline{\text{Case}\ \left( 2,3b\right) }: & S^{\left( 3b\right) }<S^{\left( 3a\right) } & \text{and} & S^{\left( 2\right) }<S^{\left( 3a\right) } \end{array} \label{C23bCd}\\ & \begin{array} [c]{cc} \underline{\text{Case}\ \left( 1,3c\right) }: & S^{\left( 3a\right) }=S^{\left( 3b\right) }=S^{\left( 1\right) }<S^{\left( 2\right) } \end{array} \label{C13cCd}\\ & \begin{array} [c]{cc} \underline{\text{Case}\ \left( 2,3c\right) }: & S^{\left( 3a\right) }=S^{\left( 3b\right) }=S^{\left( 2\right) }<S^{\left( 1\right) }. \end{array} \label{C23cCd} \end{align} \newline The optimal policy for each case is determined using Lagrange multipliers and the \textit{Karush}-\textit{Kuhn}-\textit{Tucker} (KKT) conditions. The sum-capacity optimal \underline{$P$}$^{\ast}\left( \mathbf{H}\right) $ is given by that $\underline{P}^{(i)}\left( \mathbf{H}\right) $ or $\underline{P}^{(l,n)}\left( \mathbf{H}\right) $ which satisfies the conditions of its case in (\ref{C1Cd})-(\ref{C23cCd}). \begin{remark} For cases $1$ and $2$, one can expand the capacity expressions to verify that the conditions $S^{\left( l\right) }<\min\left( S^{\left( 3a\right) },S^{\left( 3b\right) }\right) $, $l=1,2,$ imply that $S^{\left( 1\right) }<S^{\left( 2\right) }$ and vice-versa. Therefore, if the optimal policy is determined in the order of the cases in (\ref{C1Cd})-(\ref{C23cCd}), the conditions for cases $\left( 1,3c\right) $ and $\left( 2,3c\right) $ are tested only after all other cases have been excluded. Furthermore, the two cases are mutually exclusive, and thus, (\ref{C13cCd}) and (\ref{C23cCd}) are simply redundant conditions written for completeness.
\end{remark} \begin{remark} For the two-user case, the conditions can be written directly from the geometry of the intersecting rate regions for each case. However, for a more general $K$-user C-MAC, the conditions can be written using the fact that the rate regions for any $\underline{P}\left( \mathbf{H}\right) $ are polymatroids and that the sum-rate of two intersecting polymatroids is given by the polymatroid intersection lemma. A detailed analysis of the rate region and the optimal policies using the polymatroid intersection lemma for a $K$-user two-receiver network is developed in \cite{cap_theorems:LSYLNMHVP}. \end{remark} The following theorem summarizes the form of \underline{$P$}$^{\ast}\left( \mathbf{H}\right) $ and presents an algorithm to compute it. The optimal policy for each case can be obtained in a straightforward manner using standard constrained convex maximization techniques. The algorithm exploits the fact that the occurrence of one case excludes all other cases, and that the case that occurs is the one for which the optimal policy satisfies the case conditions. We refer the reader to \cite[Appendix]{cap_theorems:LSYLNMHVP} for a detailed analysis. \begin{theorem} \label{Th_CMAC_P}The optimal policy $\underline{P}^{\ast}\left( \mathbf{H}\right) $ achieving the sum-capacity of a two-user ergodic fading C-MAC is obtained by computing $\underline{P}^{(i)}\left( \mathbf{H}\right) $ and $\underline{P}^{(l,n)}\left( \mathbf{H}\right) $ starting with cases~$1$ and $2$, followed by cases $3a,$ $3b,$ and $3c$, in that order, and finally the boundary cases $(l,n),$ with cases $\left( l,3c\right) $ optimized last, until for some case the corresponding $\underline{P}^{(i)}\left( \mathbf{H}\right) $ or $\underline{P}^{(l,n)}\left( \mathbf{H}\right) $ satisfies the case conditions.
The optimal \underline{$P$}$^{\ast}\left( \mathbf{H}\right) $ is given by the optimal $\underline{P}^{(i)}\left( \mathbf{H}\right) $ or $\underline{P}^{(l,n)}\left( \mathbf{H}\right) $ that satisfies its case conditions and falls into one of the following three categories: \textit{Cases }$1$ \textit{and} $2$: The optimal policies for the two users are such that each user water-fills over its bottle-neck link, i.e., over the direct link to that receiver with the smaller (interference-free) ergodic fading capacity. Thus, for cases $1$ and $2$, each transmitter water-fills on the (interference-free) point-to-point links to its intended and unintended receivers, respectively, i.e., for case $1$, $P_{k}^{\left( \ast\right) }\left( \mathbf{H}\right) =P_{k}^{\left( 1\right) }\left( \mathbf{H}\right) =P_{k}^{wf}\left( H_{k,k}\right) $, and for case $2,$ $P_{k}^{\left( \ast\right) }\left( \mathbf{H}\right) =P_{k}^{\left( 2\right) }\left( \mathbf{H}\right) =P_{k}^{wf}\left( H_{\left\{ 1,2\right\} \backslash k,k}\right) $, $k=1,2$, where $P_{k}^{wf}\left( H_{j,k}\right) $ for $j,k=1,2,$ is defined in Theorem \ref{Th_VS}. \textit{Cases }$\left( 3a,3b,3c\right) $: For cases $3a$ and $3b$, the optimal user policies $P_{k}^{\ast}\left( \mathbf{H}\right) $, for all $k$, are opportunistic multiuser waterfilling solutions over the multiaccess links to receivers 1 and 2, respectively. For case $3c$, $P_{k}^{\ast}\left( \mathbf{H}\right) $, for all $k$, takes an opportunistic non-waterfilling form and depends on the channel gains for each user at both receivers. \textit{Boundary Cases}: The optimal user policies $P_{k}^{\ast}\left( \mathbf{H}\right) $, for all $k$, are opportunistic non-waterfilling solutions. \end{theorem} \begin{remark} The sum-rate optimal policies for a two-transmitter two-receiver ergodic fading channel where one of the receivers also acts as a relay are developed in \cite{cap_theorems:LSYLNMHVP}.
The analysis here is very similar to that in \cite{cap_theorems:LSYLNMHVP}, and thus, we briefly outline the intuition behind the results in the proof below. \end{remark} \begin{proof} The optimal policy for each case can be determined in a straightforward manner using Lagrange multipliers and the \textit{Karush}-\textit{Kuhn}-\textit{Tucker} (KKT) conditions. Furthermore, not including all or some of the constraints for each case in the maximization problem simplifies the determination of the solution. For cases $1$ and $2$, $S^{\left( 1\right) }$ and $S^{\left( 2\right) }$, respectively, are each a sum of rates over two bottle-neck point-to-point links, and thus, are maximized by the single-user waterfilling power policies, one for each bottle-neck link. For cases $3a$ and $3b$, the optimization is equivalent to maximizing the sum-capacity at one of the receivers. Thus, applying the results in \cite[Lemma 3.10]{cap_theorems:TH01} (see also \cite{cap_theorems:Knopp_Humblet}), for these two cases, one can show that the sum-capacity achieving policies are opportunistic waterfilling solutions that exploit multiuser diversity. For case $3c$, the sum-rate $S^{\left( 3a\right) }$ is maximized subject to the constraint $S^{\left( 3a\right) }=S^{\left( 3b\right) }$. Thus, for this case, the KKT conditions can be used to show that while opportunistic scheduling of the users based on a function of their fading states to both receivers is optimal, the optimal policies are no longer waterfilling solutions. The same argument also holds for the boundary cases $\left( l,n\right) $ where $S^{\left( l\right) }$ is maximized subject to $S^{\left( l\right) }=S^{\left( n\right) }$. In all cases, the optimal policies can be determined using an iterative procedure in a manner akin to the iterative waterfilling approach for fading MACs \cite{cap_theorems:Yu_Rhee02}. See \cite[Appendix]{cap_theorems:LSYLNMHVP} for a detailed proof.
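To make the opportunistic waterfilling structure for cases $3a$ and $3b$ concrete, the sketch below computes the per-state allocation for a discretized fading MAC with fixed Lagrange multipliers: in each state, only the user with the largest $\left\vert H_{k}\right\vert ^{2}/\lambda_{k}$ transmits, with single-user waterfilling power. The function name, the discrete-state model, and the assumption that the multipliers have already been tuned to meet the average power constraints are illustrative, not from the paper.

```python
import numpy as np

def opportunistic_alloc(h_sq, lams):
    """Per-state allocation of the opportunistic multiuser waterfilling
    form for the fading-MAC sum-rate E[C(sum_k |H_k|^2 P_k)].

    h_sq: (n_states, n_users) array of squared gains |H_{j,k}|^2;
    lams: per-user Lagrange multipliers (assumed already tuned so that
    E[P_k] meets each average power constraint; that update is omitted).
    In each state, only the user maximizing |H_k|^2 / lambda_k transmits,
    with single-user waterfilling power max(0, 1/lambda_k - 1/|H_k|^2)."""
    h_sq, lams = np.asarray(h_sq, float), np.asarray(lams, float)
    best = np.argmax(h_sq / lams, axis=1)        # scheduled user per state
    powers = np.zeros_like(h_sq)
    rows = np.arange(h_sq.shape[0])
    g = h_sq[rows, best]                         # gain of the scheduled user
    powers[rows, best] = np.maximum(0.0, 1.0 / lams[best] - 1.0 / g)
    return powers
```

At most one user is scheduled per fading state, which is the multiuser-diversity behavior the KKT conditions yield for these cases.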
\end{proof} \subsection{Capacity Region:\ Optimal Policies} As mentioned earlier, each point on the boundary of $\mathcal{C}% _{\text{C-MAC}}\left( \overline{P}_{1},\overline{P}_{2}\right) $ is obtained by maximizing the weighted sum $\mu_{1}R_{1}$ $+$ $\mu_{2}R_{2}$ over all $\underline{P}\left( \mathbf{H}\right) \in\mathcal{P}$, and for all $\mu _{1}>0$, $\mu_{2}>0$, subject to (\ref{CMAC_OB}). Without loss of generality, we assume that $\mu_{1}<\mu_{2}$. Let \underline{$\mu$} denote the pair $\left( \mu_{1},\mu_{2}\right) $. The optimal policy $\underline{P}^{\ast }\left( \mathbf{H,}\underline{\mu}\right) $ is given by% \begin{equation} \underline{P}^{\ast}\left( \mathbf{H,}\underline{\mu}\right) =\arg \max_{\underline{P}\in\mathcal{P}}\left( \mu_{1}R_{1}+\mu_{2}R_{2}\right) \text{ s.t. }\left( R_{1},R_{2}\right) \in\mathcal{C}_{\text{C-MAC}}\left( \overline{P}_{1},\overline{P}_{2}\right) \label{CMAC_CapR}% \end{equation} where $\mu_{1}R_{1}+\mu_{2}R_{2}$, denoted by $S^{\left( x\right) }\left( \underline{\mu},\underline{P}\left( \mathbf{H}\right) \right) $ for case $x=i,\left( l,n\right) $, for all $i$ and $\left( l,n\right) $, for the different cases are given by% \begin{equation}% \begin{array} [c]{l}% S^{\left( 1\right) }\left( \underline{\mu},\underline{P}\left( \mathbf{H}\right) \right) =\sum_{k=1}^{2}\mu_{k}\mathbb{E}\left[ C\left( \left\vert H_{k,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] \\% \begin{array} [c]{cc}% S^{\left( 2\right) }\left( \underline{\mu},\underline{P}\left( \mathbf{H}\right) \right) =\sum_{k=1}^{2}\mu_{k}\mathbb{E}\left[ C\left( \left\vert H_{j,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] , & j,k=1,2\text{, }j\not =k \end{array} \\% \begin{array} [c]{cc}% S^{\left( i\right) }\left( \underline{\mu},\underline{P}\left( \mathbf{H}\right) \right) =\mu_{1}S^{\left( i\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) +\left( \mu_{2}-\mu _{1}\right) \min\limits_{j=1,2}\left( \mathbb{E}\left[ 
C\left( \left\vert H_{j,2}\right\vert ^{2}P_{2}\left( \mathbf{H}\right) \right) \right] \right) & i=3a,3b \end{array} \\% \begin{array} [c]{cc}% S^{\left( 3c\right) }\left( \underline{\mu},\underline{P}\left( \mathbf{H}\right) \right) =S^{\left( 3a\right) }\left( \underline {P}\left( \mathbf{H}\right) \right) , & \text{ }s.t.\text{ }S^{\left( 3a\right) }\left( \underline{\mu},\underline{P}\left( \mathbf{H}\right) \right) =S^{\left( 3b\right) }\left( \underline{\mu},\underline{P}\left( \mathbf{H}\right) \right) \end{array} \text{ }\\% \begin{array} [c]{ccc}% S^{\left( l,n\right) }\left( \underline{\mu},\underline{P}\left( \mathbf{H}\right) \right) =S^{\left( l\right) }\left( \underline {P}\left( \mathbf{H}\right) \right) , & \text{ }s.t.\text{ }S^{\left( l\right) }\left( \underline{\mu},\underline{P}\left( \mathbf{H}\right) \right) =S^{\left( n\right) }\left( \underline{\mu},\underline{P}\left( \mathbf{H}\right) \right) . & \text{for all }\left( l,n\right) . \end{array} \end{array} \label{CMAC_SR}% \end{equation} The expressions for $\mu_{2}<\mu_{1}$ can be obtained from (\ref{CMAC_SR}) by interchanging the indexes $1$ and $2$ in the second term in the expression for $S^{\left( i\right) }\left( \underline{\mu},\underline{P}\left( \mathbf{H}\right) \right) $, $i=3a,$ $3b$. From the convexity of $\mathcal{C}_{\text{C-MAC}}$, every point on the boundary is obtained from the intersection of two MAC rate regions. From Figs. \ref{Fig_Case12}% -\ref{Fig_BC23}, we see that for cases $1$, $2$, and the boundary cases, the region of intersection has a unique vertex at which both user rates are non-zero and thus, $\mu_{1}R_{1}+\mu_{2}R_{2}$ will be tangential to that vertex. On the other hand, for cases $3a$, $3b$, and $3c$, the intersecting region is also a pentagon and thus, $\mu_{1}R_{1}+\mu_{2}R_{2}$, for $\mu _{1}<\mu_{2}$, is maximized by that vertex at which user $2$ is decoded after user $1$. 
The conditions for the different cases are given by (\ref{C1Cd})-(\ref{C23cCd}). Note that for case $1$, since the sum-capacity achieving policies also achieve the point-to-point link capacities for each user to its intended destination, the capacity region is simply given by the single-user capacity bounds on $R_{1}$ and $R_{2}$. The following theorem summarizes the capacity region of an ergodic fading C-MAC and the optimal policies that achieve it for $\mu_{1}<\mu_{2}$. The policies for $\mu_{1}>\mu_{2}$ can be obtained in a straightforward manner. \begin{theorem} \label{Th_CMAC_CR}The optimal policy $\underline{P}^{\ast}\left( \mathbf{H}\right) $ achieving a boundary point of the capacity region of a two-user ergodic fading C-MAC for a given $\underline{\mu}$ is obtained by computing $\underline{P}^{(i)}\left( \mathbf{H}\right) $ and $\underline{P}^{(l,n)}\left( \mathbf{H}\right) $ starting with the inactive cases~$1$ and $2$, followed by the active cases $3a,$ $3b,$ and $3c$, in that order, and finally the boundary cases $(l,n),$ with cases $\left( l,3c\right) $ optimized last, until for some case the corresponding $\underline{P}^{(i)}\left( \mathbf{H}\right) $ or $\underline{P}^{(l,n)}\left( \mathbf{H}\right) $ satisfies the case conditions. The optimal \underline{$P$}$^{\ast}\left( \mathbf{H}\right) $ is given by the optimal $\underline{P}^{(i)}\left( \mathbf{H}\right) $ or $\underline{P}^{(l,n)}\left( \mathbf{H}\right) $ that satisfies its case conditions and falls into one of the following three categories: \textit{Inactive Cases}: The optimal policies for the two users are such that each user water-fills over its bottle-neck link. Thus, for cases $1$ and $2$, each transmitter water-fills on the (interference-free) point-to-point links to its intended and unintended receivers, respectively.
Thus, for case $1$, $P_{k}^{\left( \ast\right) }\left( \mathbf{H}\right) =P_{k}^{wf}\left( H_{k,k}\right) $, and for case $2,$ $P_{k}^{\left( \ast\right) }\left( \mathbf{H}\right) =P_{k}^{\left( 2\right) }\left( \mathbf{H}\right) =\mu_{k}P_{k}^{wf}\left( H_{\left\{ 1,2\right\} \backslash k,k}\right) $, $k=1,2$, where $P_{k}^{wf}\left( H_{j,k}\right) $ for $j,k=1,2,$ is defined in Theorem \ref{Th_VS}. \textit{Cases }$\left( 3a,3b,3c\right) $: For cases $3a$ and $3b$, the optimal policies are opportunistic multiuser waterfilling solutions for the special case where the minimum sum-rate and the single-user rate for user $2$ are achieved at the same receiver. Otherwise, the solutions for all three cases are opportunistic non-waterfilling solutions. \textit{Boundary Cases}: The optimal policies obtained from the constrained optimization of $S_{\mu_{1},\mu_{2}}^{\left( l,n\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) $ are also opportunistic non-waterfilling solutions. \end{theorem} \section{\label{Sec_4}Proofs} \subsection{Ergodic VS IFCs: Proof of Theorem \ref{Th_VS}} We now prove Theorem \ref{Th_VS} on the sum-capacity of a sub-class of ergodic fading IFCs with a mix of weak and strong sub-channels. The capacity achieving scheme requires both receivers to decode both messages. \subsubsection{Converse} An outer bound on the sum-capacity of an interference channel is given by the sum-capacity of an IFC in which interference has been eliminated at one or both receivers. One can view it alternately as providing each receiver with the codeword of the interfering transmitter.
Thus, from Fano's and the data processing inequalities we have that the achievable rate must satisfy \begin{subequations} \begin{align} n\left( R_{1}+R_{2}\right) -n\epsilon & \leq I(X_{1}^{n};Y_{1}^{n}|X_{2}^{n},\mathbf{H}^{n})+I(X_{2}^{n};Y_{2}^{n}|X_{1}^{n},\mathbf{H}^{n})\\ & =I(X_{1}^{n};\tilde{Y}_{1}^{n}|\mathbf{H}^{n})+I(X_{2}^{n};\tilde{Y}_{2}^{n}|\mathbf{H}^{n}) \label{VS_OB2} \end{align} where \end{subequations} \begin{equation} \begin{array} [c]{cc} \tilde{Y}_{k}=H_{k,k}X_{k}+Z_{k}, & k=1,2. \end{array} \label{YX_P2P} \end{equation} The converse proof techniques developed in \cite[Appendix]{cap_theorems:GoldsmithVaraiya} for a point-to-point ergodic fading link, in which the transmit and received signals are related by (\ref{YX_P2P}), can be applied directly following (\ref{VS_OB2}), and thus, we have that any achievable rate pair must satisfy \begin{equation} R_{1}+R_{2}\leq\sum_{k=1}^{2}\mathbb{E}\left[ C\left( \left\vert H_{k,k}\right\vert ^{2}P_{k}^{wf}\left( H_{k,k}\right) \right) \right] . \label{VS_OB} \end{equation} \subsubsection{Achievable Scheme} Corollary \ref{Cor_1} states that the capacity region of an equivalent C-MAC is an inner bound on the capacity region of an IFC. Thus, from Theorem \ref{Th_CMAC_P} a sum-rate of \begin{equation} \sum_{k=1}^{2}\mathbb{E}\left[ C\left( \left\vert H_{k,k}\right\vert ^{2}P_{k}^{wf}\left( H_{k,k}\right) \right) \right] \label{EVS_SR} \end{equation} is achievable when $\underline{P}^{\ast}\left( \mathbf{H}\right) =\underline{P}^{wf}\left( H_{k,k}\right) $ satisfies the condition for case 1 in (\ref{C1Cd}), which is equivalent to the requirement that $\underline{P}^{wf}\left( H_{k,k}\right) $ satisfies (\ref{EVS_Cond}).
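For a finite set of fading states, the single-user waterfilling policy $P_{k}^{wf}$ appearing in the sum-rate above can be computed by a one-dimensional bisection on the water level. The sketch below is a minimal numerical illustration under an assumed discrete-state fading model; the function names and $C(x)=\log_{2}(1+x)$ are illustrative choices, not taken from the paper.

```python
import numpy as np

def waterfill(gains, probs, p_avg, iters=60):
    """Single-user waterfilling over discrete fading states.

    gains: realizations of |H|^2; probs: their probabilities;
    p_avg: average power constraint.  Solves
    P(h) = max(0, nu - 1/|h|^2) with E[P(H)] = p_avg by bisection
    on the water level nu (a sketch of the standard construction)."""
    gains, probs = np.asarray(gains, float), np.asarray(probs, float)
    lo, hi = 0.0, p_avg + 1.0 / gains.min()      # nu* lies in this bracket
    for _ in range(iters):
        nu = 0.5 * (lo + hi)
        used = probs @ np.maximum(0.0, nu - 1.0 / gains)
        lo, hi = (nu, hi) if used < p_avg else (lo, nu)
    return np.maximum(0.0, nu - 1.0 / gains)

def ergodic_rate(gains, probs, powers):
    """Fading-averaged rate E[C(|H|^2 P(H))] with C(x) = log2(1 + x)."""
    return float(np.asarray(probs) @ np.log2(1.0 + np.asarray(gains) * powers))
```

For two equiprobable states with gains $1$ and $4$ and $\bar{P}=1$, the water level is $\nu=1.625$ and the stronger state receives the larger power, as expected.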
The conditions in (\ref{EVS_Cond}) imply that waterfilling over the two point-to-point links from each user to its receiver is optimal when the fading-averaged rate achieved by each transmitter at its intended receiver is strictly smaller than the rate it achieves in the presence of interference at the unintended receiver, i.e., the channel is very strong on average. Finally, since the achievable bound on the sum-rate in (\ref{EVS_SR}) also achieves the single-user capacities, the capacity region of an EVS IFC is given by (\ref{EVS_CapR}). \subsubsection{Separability} Achieving the sum-capacity and the capacity region of the C-MAC requires joint encoding and decoding across all sub-channels. This observation also carries over to the sub-class of ergodic very strong IFCs, which are in general a mix of weak and strong sub-channels. In fact, any strategy in which each sub-channel is viewed as an independent IFC will be strictly sub-optimal except for those cases where every sub-channel is very strong at the optimal policy. \subsection{Uniformly Strong IFC: Proof of Theorem \ref{Th_Str}} We now show that the strategy of allowing both receivers to decode both messages achieves the sum-capacity for the sub-class of fading IFCs in which every fading state (sub-channel) is strong, i.e., the entries of $\mathbf{h}$ satisfy $\left\vert h_{1,1}\right\vert <\left\vert h_{2,1}\right\vert $ and $\left\vert h_{2,2}\right\vert <\left\vert h_{1,2}\right\vert $. \subsubsection{Converse} In the proof of Theorem \ref{Th_VS}, we developed a genie-aided outer bound on the sum-capacity of ergodic fading IFCs.
One can use similar arguments to write the bounds on the rates $R_{1}$ and $R_{2}$, for every choice of feasible power policy $\underline{P}\left( \mathbf{H}\right) $, as \begin{align} R_{k} & \leq\mathbb{E}\left[ \log\left( 1+\left\vert H_{k,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] ,\text{ }% k=1,2.\label{US_SU_Rate}\\ \text{ \ \ \ } & \leq\mathbb{E}\left[ \log\left( 1+\left\vert H_{j,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] ,\text{ \ }j=1,2,j\not =k, \label{US_SU_Rate2}% \end{align} where (\ref{US_SU_Rate2}) follows from the uniformly strong condition in (\ref{US_HCond}). We now present two additional bounds where the genie reveals the interfering signal to only one of the receivers. Consider first the case where the genie reveals the interfering signal at receiver $2.$ One can then reduce the two-sided IFC to a one-sided IFC, i.e., set $H_{2,1}=0$. For this genie-aided one-sided channel, from Fano's inequality, we have that the achievable rate must satisfy \begin{subequations} \begin{equation} n\left( R_{1}+R_{2}\right) -n\epsilon\leq I(X_{1}^{n};Y_{1}^{n}% |\mathbf{H}^{n})+I(X_{2}^{n};Y_{2}^{n}|\mathbf{H}^{n}). \label{USFano}% \end{equation} We first consider the expression on the right-side of (\ref{USFano}) for some instantiation $\mathbf{h}^{n}$. We thus have \end{subequations} \begin{equation} I(X_{1}^{n};Y_{1}^{n}|\mathbf{H}^{n}=\mathbf{h}^{n})+I(X_{2}^{n};Y_{2}% ^{n}|\mathbf{H}^{n}=\mathbf{h}^{n})=I(X_{1}^{n};\mathbf{h}_{1,1}^{n}X_{1}% ^{n}+\mathbf{h}_{1,2}^{n}X_{2}^{n}+Z_{1}^{n})+I(X_{2}^{n};\mathbf{h}_{2,2}% ^{n}X_{2}^{n}+Z_{2}^{n}) \label{StrCon3}% \end{equation} where $\mathbf{h}_{j,k}^{n}$ is a diagonal matrix with diagonal entries $h_{j,k,i}$, for all $i=1,2,\ldots,n$. Consider the mutual information terms on the right-side of the equality in (\ref{StrCon3}). 
We can expand these terms as \begin{subequations} \begin{align} & h\left( \mathbf{h}_{1,1}^{n}X_{1}^{n}+\mathbf{h}_{1,2}^{n}X_{2}^{n}% +Z_{1}^{n}\right) -h\left( \mathbf{h}_{1,2}^{n}X_{2}^{n}+Z_{1}^{n}\right) \\ & +h\left( \mathbf{h}_{2,2}^{n}X_{2}^{n}+Z_{2}^{n}\right) -h\left( Z_{2}^{n}\right) \nonumber\\ & \overset{\left( a\right) }{\leq}\sum_{i=1}^{n}(h\left( h_{1,1,i}% X_{1,i}+h_{1,2,i}X_{2,i}+Z_{1,i}\right) -h\left( Z_{2,i}\right) )\label{US_Con2}\\ & -h\left( \mathbf{h}_{1,2}^{n}X_{2}^{n}+Z_{1}^{n}\right) +h\left( \mathbf{h}_{2,2}^{n}% X_{2}^{n}+Z_{2}^{n}\right) , \end{align} where $\left( a\right) $ is from the fact that conditioning does not increase entropy. For the uniformly strong ergodic IFC satisfying (\ref{US_HCond}), i.e., $\left\vert h_{2,2,i}\right\vert \leq\left\vert h_{1,2,i}\right\vert ,$ for all $i=1,2,\ldots,n,$ the third and fourth terms in (\ref{US_Con2}) can be simplified as \end{subequations} \begin{subequations} \label{USConExp}% \begin{align} & -h\left( X_{2}^{n}+\left( \mathbf{h}_{1,2}^{n}\right) ^{-1}Z_{1}% ^{n}\right) +h\left( X_{2}^{n}+\left( \mathbf{h}_{2,2}^{n}\right) ^{-1}Z_{2}^{n}\right) \\ & -\log\left( \left\vert \mathbf{h}_{1,2}^{n}\right\vert \right) +\log\left( \left\vert \mathbf{h}_{2,2}^{n}\right\vert \right) \nonumber\\ & =-h\left( X_{2}^{n}+\left( \mathbf{h}_{1,2}^{n}\right) ^{-1}Z_{1}% ^{n}\right) +h\left( X_{2}^{n}+\left( \mathbf{h}_{1,2}^{n}\right) ^{-1}Z_{1}^{n}+\tilde{Z}^{n}\right) \\ & -\log\left( \left\vert \mathbf{h}_{1,2}^{n}\right\vert \right) +\log\left( \left\vert \mathbf{h}_{2,2}^{n}\right\vert \right) \nonumber\\ & =I(\tilde{Z}^{n};X_{2}^{n}+\left( \mathbf{h}_{1,2}^{n}\right) ^{-1}% Z_{1}^{n}+\tilde{Z}^{n})-\log\left( \left\vert \mathbf{h}_{1,2}% ^{n}\right\vert \right) +\log\left( \left\vert \mathbf{h}_{2,2}% ^{n}\right\vert \right) \\ & \leq I(\tilde{Z}^{n};\left( \mathbf{h}_{1,2}^{n}\right) ^{-1}Z_{1}% ^{n}+\tilde{Z}^{n})-\log\left( \left\vert \mathbf{h}_{1,2}^{n}\right\vert \right) +\log\left( \left\vert
\mathbf{h}_{2,2}^{n}\right\vert \right) \\ & =h(Z_{2}^{n})-h(Z_{1}^{n})\label{US_Hsimp}\\ & =\sum_{i=1}^{n}\left( h(Z_{2,i})-h(Z_{1,i})\right) \end{align} where $\tilde{Z}_{i}\sim\mathcal{CN}\left( 0,\left\vert h_{2,2,i}% ^{-1}\right\vert ^{2}-\left\vert h_{1,2,i}^{-1}\right\vert ^{2}\right) $, for all $i$, and the inequality in (\ref{USConExp}) results from the fact that mixing increases entropy. Substituting (\ref{US_Hsimp}) in (\ref{US_Con2}), we thus have that for every instantiation, the $n$-letter expressions reduce to a sum of single-letter expressions. Over all fading instantiations, one can thus write% \end{subequations} \begin{equation} \left( R_{1}+R_{2}\right) -\epsilon\leq I(X_{1}\left( Q\left( n\right) \right) X_{2}\left( Q\left( n\right) \right) ;Y_{1}\left( Q\left( n\right) \right) |\mathbf{H}\left( Q\left( n\right) \right) Q\left( n\right) ) \end{equation} where $Q\left( n\right) $ is a random variable distributed uniformly on $\left\{ 1,2,\ldots,n\right\} $. Our analysis from here on closely parallels that for a fading MAC in \cite[Appendix A]{cap_theorems:TH01}, and thus, we omit it in the interest of space. Effectively, the analysis involves considering an increasing sequence of partitions (quantized ranges) $I_{k},$ $k\in\mathcal{I}^{+}$, of the alphabet of $\mathbf{H}$, while ensuring that for each $k$, the transmitted signals are constrained in power. Taking limits appropriately over $n$ and $k$, as in \cite[Appendix A]{cap_theorems:TH01}, we obtain% \begin{equation} R_{1}+R_{2}-\epsilon\leq\mathbb{E}\left[ C\left( {\textstyle\sum\nolimits_{k=1}^{2}} \left\vert H_{1,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] \label{US_ConFin4}% \end{equation} where $\underline{P}\left( \mathbf{H}\right) $ satisfies (\ref{ErgPwr}).
One can similarly let $H_{1,2}=0$ and show that% \begin{equation} R_{1}+R_{2}-\epsilon\leq\mathbb{E}\left[ C\left( {\textstyle\sum\nolimits_{k=1}^{2}} \left\vert H_{2,k}\right\vert ^{2}P_{k}\left( \mathbf{H}\right) \right) \right] . \label{US_ConFin5}% \end{equation} Combining (\ref{US_SU_Rate}), (\ref{US_SU_Rate2}), (\ref{US_ConFin4}), and (\ref{US_ConFin5}), we see that, for every choice of $\underline{P}\left( \mathbf{H}\right) $, the capacity region of a uniformly strong ergodic fading IFC lies within the capacity region of a C-MAC for which the fading states satisfy (\ref{US_HCond}). Thus, over all power policies, we have \begin{equation} \mathcal{C}_{\text{IFC}}\left( \overline{P}_{1},\overline{P}_{2}\right) \subseteq\mathcal{C}_{\text{C-MAC}}\left( \overline{P}_{1},\overline{P}% _{2}\right) . \end{equation} \subsubsection{Achievable Strategy} Allowing both receivers to decode both messages as stated in Corollary \ref{Cor_1} achieves the outer bound. For the resulting C-MAC, the uniformly strong condition in (\ref{US_HCond}) limits the intersection of the rate regions $\mathcal{C}_{1}\left( \underline{P}\left( \mathbf{H}\right) \right) $ and $\mathcal{C}_{2}\left( \underline{P}\left( \mathbf{H}\right) \right) $, for any choice of $\underline{P}\left( \mathbf{H}\right) $, to one of cases $1,$ $3a$, $3b$, $3c$, or the boundary cases $\left( 1,n\right) $ for $n=3a,3b,3c,$ such that (\ref{US_SU_Rate}) defines the single-user rate bounds. The sum-capacity optimal policy for each of the above cases is given by Theorem \ref{Th_CMAC_P}. Thus, the optimal user policies are single-user waterfilling solutions when the uniformly strong fading IFC also satisfies (\ref{EVS_Cond}), i.e., the optimal policies satisfy the conditions for case $1$. For all other cases, the optimal policies are opportunistic multiuser allocations. Specifically, for cases $3a$ and $3b$, the solutions are the classical multiuser waterfilling solutions \cite{cap_theorems:TH01}.
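For case $1$, each user's optimal policy is the classical single-user waterfilling allocation over its own direct-link fading states. A minimal numerical sketch of fading-averaged waterfilling (the bisection on the water level, the Rayleigh statistics, and all variable names are our own illustrative choices, not taken from the paper):

```python
import numpy as np

def waterfill(gains, p_avg, tol=1e-9):
    """Single-user waterfilling over fading states: maximize E[log(1 + g*P(g))]
    subject to E[P(g)] <= p_avg, via P(g) = max(0, v - 1/g) with water level v."""
    gains = np.asarray(gains, dtype=float)

    def avg_power(level):
        return np.mean(np.maximum(0.0, level - 1.0 / gains))

    lo, hi = 1e-12, 1e12  # bracket for the water level; avg_power is monotone
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        if avg_power(mid) > p_avg:
            hi = mid
        else:
            lo = mid
    return np.maximum(0.0, lo - 1.0 / gains)

rng = np.random.default_rng(0)
g = rng.exponential(1.0, size=10_000)  # |H_{k,k}|^2 samples for Rayleigh fading
P = waterfill(g, p_avg=1.0)
print(np.mean(P))  # fading-averaged power meets the constraint
```

Stronger direct-link states receive more power, and states with gain below the inverse water level are allocated none, which is the opportunistic behavior the multiuser (cases $3a$, $3b$) solutions generalize.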
One can similarly develop the optimal policies that achieve the capacity region. Here too, for every boundary point of the capacity region maximizing $\mu_{1}R_{1}+\mu_{2}R_{2}$, $\mu_{1},\mu_{2}\geq0$, the optimal policy $\underline{P}^{\ast}\left( \mathbf{H}\right) $ is either $\underline {P}^{\left( 1\right) }\left( \mathbf{H}\right) $ or $\underline {P}^{\left( n\right) }\left( \mathbf{H}\right) $ or $\underline {P}^{\left( 1,n\right) }\left( \mathbf{H}\right) $ for $n=3a,$ $3b,$ $3c$. \subsubsection{Separability} See Remark \ref{Rem_USSep}. \subsection{Uniformly Weak One-Sided IFC: Proof of Theorem \ref{Th_UW1}} We now prove Theorem \ref{Th_UW1} on the sum-capacity of a sub-class of one-sided ergodic fading IFCs where every sub-channel is weak, i.e., the channel is uniformly weak. We show that it is optimal to ignore the interference at the unintended receiver. \subsubsection{Converse} From\ Fano's inequality, any achievable rate pair $\left( R_{1},R_{2}\right) $ must satisfy \begin{subequations} \begin{equation} n\left( R_{1}+R_{2}\right) -n\epsilon\leq I(X_{1}^{n};Y_{1}^{n}% |\mathbf{H}^{n})+I(X_{2}^{n};Y_{2}^{n}|\mathbf{H}^{n}). \label{UW_Fano}% \end{equation} We first consider the expression on the right-side of (\ref{UW_Fano}) for some instantiation $\mathbf{h}^{n}$, i.e., consider \end{subequations} \begin{equation} I(X_{1}^{n};Y_{1}^{n}|\mathbf{H}^{n}=\mathbf{h}^{n})+I(X_{2}^{n};Y_{2}% ^{n}|\mathbf{H}^{n}=\mathbf{h}^{n})=I(X_{1}^{n};\mathbf{h}_{1,1}^{n}X_{1}% ^{n}+\mathbf{h}_{1,2}^{n}X_{2}^{n}+Z_{1}^{n})+I(X_{2}^{n};\mathbf{h}_{2,2}% ^{n}X_{2}^{n}+Z_{2}^{n}) \label{Con_W1}% \end{equation} where $\mathbf{h}_{j,k}^{n}$ is a diagonal matrix with diagonal entries $h_{j,k,i}$, for all $i=1,2,\ldots,n$.
Let $N^{n}$ be a sequence of independent Gaussian random variables, such that \begin{equation} \left[ \begin{array} [c]{c}% Z_{1,i}\\ N_{i}% \end{array} \right] \sim\mathcal{CN}\left( 0,\left[ \begin{array} [c]{cc}% 1 & \rho_{i}\sigma_{i}\\ \rho_{i}\sigma_{i} & \sigma_{i}^{2}% \end{array} \right] \right) , \label{ConW_ZN}% \end{equation} and \begin{align} \rho_{i}^{2} & =1-\left( \left\vert h_{1,2,i}\right\vert ^{2}\left/ \left\vert h_{2,2,i}\right\vert ^{2}\right. \right) \label{ConW_rho}\\ \rho_{i}\sigma_{i} & =1+\left\vert h_{2,2,i}\right\vert ^{2}P_{2,i}. \label{ConW_rs}% \end{align} We bound (\ref{Con_W1}) as follows:% \begin{subequations} \begin{align} & I(X_{1}^{n};Y_{1}^{n}|\mathbf{h}^{n})+I(X_{2}^{n};Y_{2}^{n}|\mathbf{h}% ^{n})\nonumber\\ & \leq I(X_{1}^{n};Y_{1}^{n},h_{1,1}^{n}X_{1}^{n}+N^{n}|\mathbf{h}% ^{n})+I(X_{2}^{n};Y_{2}^{n}|\mathbf{h}^{n})\\ & =h\left( h_{2,2}^{n}X_{2}^{n}+Z_{2}^{n}\right) -h\left( Z_{2}% ^{n}\right) +h\left( h_{1,1}^{n}X_{1}^{n}+N^{n}\right) -h\left( N^{n}\right) \\ & +h\left( h_{1,1}^{n}X_{1}^{n}+h_{1,2}^{n}X_{2}^{n}+Z_{1}^{n}|h_{1,1}% ^{n}X_{1}^{n}+N^{n}\right) -h\left( h_{1,2}^{n}X_{2}^{n}+Z_{1}^{n}% |N^{n}\right) \nonumber\\ & \leq\sum\limits_{i=1}^{n}h\left( h_{1,1,i}X_{1,i}^{\ast}+N_{i}\right) -\sum\limits_{i=1}^{n}h\left( Z_{2,i}\right) -\sum\limits_{i=1}^{n}h\left( N_{i}\right) +h\left( h_{2,2}^{n}X_{2}^{n}+Z_{2}^{n}\right) \label{Con_WG}% \\ & -h\left( h_{1,2}^{n}X_{2}^{n}+Z_{1}^{n}|N^{n}\right) +\sum\limits_{i=1}% ^{n}h\left( h_{1,1,i}X_{1,i}^{\ast}+h_{1,2,i}X_{2,i}^{\ast}+Z_{1,i}% |h_{1,1,i}X_{1,i}^{\ast}+N_{i}\right) \nonumber\\ & =\sum\limits_{i=1}^{n}\left\{ h\left( h_{1,1,i}X_{1,i}^{\ast}% +N_{i}\right) -h\left( Z_{2,i}\right) -h\left( N_{i}\right) \right. +h\left( h_{2,2,i}X_{2,i}^{\ast}+Z_{2,i}\right) \label{Con_WG2}\\ & -h\left( h_{1,2,i}X_{2,i}^{\ast}+Z_{1,i}|N_{i}\right) \left. 
+h\left( h_{1,1,i}X_{1,i}^{\ast}+h_{1,2,i}X_{2,i}^{\ast}+Z_{1,i}|h_{1,1,i}X_{1,i}% ^{\ast}+N_{i}\right) \right\} \nonumber\\ & =\sum\limits_{i=1}^{n}\left\{ \log\left( \left\vert h_{1,1,i}\right\vert ^{2}P_{1,i}+\sigma_{i}^{2}\right) -\log\left( \sigma_{i}^{2}\right) \right. +\log\left( \left\vert h_{2,2,i}\right\vert ^{2}P_{2,i}+1\right) \label{Con_WG3}\\ & -\log\left( \left\vert h_{1,2,i}\right\vert ^{2}P_{2,i}+\left( 1-\rho _{i}^{2}\right) \right) +\log\left( \left\vert h_{1,1,i}\right\vert ^{2}P_{1,i}+\left\vert h_{1,2,i}\right\vert ^{2}P_{2,i}+1\right. \nonumber\\ & \left. \left. -\left( \left\vert h_{1,1,i}\right\vert ^{2}P_{1,i}% +\sigma_{i}^{2}\right) ^{-1}\left( \left\vert h_{1,1,i}\right\vert ^{2}% P_{1,i}+\rho_{i}\sigma_{i}\right) ^{2}\right) \right\} \nonumber\\ & =\sum\limits_{i=1}^{n}\left\{ \log\left( \left\vert h_{2,2,i}\right\vert ^{2}P_{2,i}+1\right) +\log\left( 1+\frac{\left\vert h_{1,1,i}\right\vert ^{2}P_{1,i}}{1+\left\vert h_{1,2,i}\right\vert ^{2}P_{2,i}}\right) \right\} \label{Con_WG4}% \end{align} where (\ref{Con_WG}) follows from the fact that conditioning does not increase entropy and that the conditional entropy is maximized by Gaussian signaling, i.e., $X_{k,i}^{\ast}\sim\mathcal{CN}(0,P_{k,i})$; (\ref{Con_WG2}) follows from (\ref{ConW_ZN}) and (\ref{ConW_rho}) which imply% \end{subequations} \begin{equation} var\left( h_{1,2,i}^{-1}Z_{1,i}|N_{i}\right) =\frac{1-\rho_{i}^{2}% }{\left\vert h_{1,2,i}\right\vert ^{2}}=\left\vert h_{2,2,i}\right\vert ^{-2}% \end{equation} and therefore, \begin{subequations} \begin{align} & h\left( h_{2,2}^{n}X_{2}^{n}+Z_{2}^{n}\right) -h\left( h_{1,2}^{n}% X_{2}^{n}+Z_{1}^{n}|N^{n}\right) \\ & =\log\left( \left\vert h_{2,2}^{n}\right\vert \right) -\log\left( \left\vert h_{1,2}^{n}\right\vert \right) \\ & =\sum\limits_{i=1}^{n}h\left( h_{2,2,i}X_{2,i}^{\ast}+Z_{2,i}\right) -h\left( h_{1,2,i}X_{2,i}^{\ast}+Z_{1,i}|N_{i}\right) ; \label{ConW_2terms}% \end{align} and (\ref{Con_WG4}) follows from substituting
(\ref{ConW_rs}) in (\ref{Con_WG3}) and simplifying the resulting expressions. Our analysis from here on is similar to that for the US IFC (see also \cite[Appendix A]{cap_theorems:TH01}). Effectively, the analysis involves considering an increasing sequence of partitions (quantized ranges) $I_{k},$ $k\in\mathcal{I}^{+}$, of the alphabet of $\mathbf{H}$, while ensuring that for each $k$, the transmitted signals are constrained in power. Taking limits appropriately over $n$ and $k$, and using the fact that the $\log$ expressions in (\ref{Con_WG4}) are concave functions of $P_{k,i}$, for all $k$, and that every feasible power policy satisfies (\ref{ErgPwr}), we obtain% \end{subequations} \begin{subequations} \begin{equation} R_{1}+R_{2}-\epsilon\leq\mathbb{E}\left[ C\left( \left\vert H_{2,2}% \right\vert ^{2}P_{2}\left( \mathbf{H}\right) \right) +C\left( \frac{\left\vert H_{1,1}\right\vert ^{2}P_{1}\left( \mathbf{H}\right) }{1+\left\vert H_{1,2}\right\vert ^{2}P_{2}\left( \mathbf{H}\right) }\right) \right] . \label{ConW_fin}% \end{equation} An outer bound on the sum-rate is obtained by maximizing over all feasible policies and is given by (\ref{SC_Weak}) and (\ref{SCW_S}). \subsubsection{Achievable Strategy} The outer bounds can be achieved by letting receiver 1 ignore (not decode) the interference it sees from transmitter $2$. Averaged over all sub-channels, the sum of the rates achieved at the two receivers for every choice of $\underline{P}\left( \mathbf{H}\right) $ is given by (\ref{ConW_fin}). The sum-capacity in (\ref{SC_Weak}) is then obtained by maximizing (\ref{ConW_fin}% ) over all feasible $\underline{P}\left( \mathbf{H}\right) $. \subsubsection{Separability} The optimality of separate encoding and decoding across the sub-channels follows directly from the fact that the sub-channels are all of the same type, and thus, independent messages can be multiplexed across the sub-channels.
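For a given power policy, the fading-averaged sum-rate in (\ref{ConW_fin}), achieved by treating interference as noise at receiver 1, is straightforward to estimate numerically. A hedged sketch (the Rayleigh cross-link statistics, the constant power policy, and the base-2 logarithm are our own illustrative choices, not from the paper):

```python
import numpy as np

def C(x):
    # Capacity function C(x) = log2(1 + x); illustrative choice of log base.
    return np.log2(1.0 + x)

rng = np.random.default_rng(1)
n = 200_000
# Power gains |H_{j,k}|^2 of Rayleigh-faded links are exponentially distributed.
g11 = rng.exponential(1.0, n)   # direct link of user 1
g22 = rng.exponential(1.0, n)   # direct link of user 2
g12 = rng.exponential(0.25, n)  # weak-on-average cross link into receiver 1
P1 = P2 = 1.0                   # constant (non-adaptive) power policy

# One-sided UW IFC (H_{2,1} = 0): receiver 2 is interference-free while
# receiver 1 treats the interference from transmitter 2 as noise.
tin_sum_rate = np.mean(C(g22 * P2) + C(g11 * P1 / (1.0 + g12 * P2)))
no_interference = np.mean(C(g22 * P2) + C(g11 * P1))
print(tin_sum_rate, no_interference)
```

The Monte Carlo estimate of the bound is necessarily below the interference-free single-user sum, with the gap shrinking as the cross-link variance decreases.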
This is in contrast to the uniformly strong and the ergodic very strong IFCs where, in both cases, the mixture of different channel types is exploited to achieve the sum-capacity by encoding and decoding jointly across all sub-channels. \end{subequations} \begin{remark} A natural question is whether one can extend the techniques developed here to the two-sided UW IFC. In this case, one would have four parameters per channel state, namely $\rho_{k}\left( \mathbf{H}\right) $ and $\sigma_{k}^{2}\left( \mathbf{H}\right) $, $k=1,2$. Thus, for example, one can generalize the techniques in \cite[Proof of Th. 2]{cap_theorems:ShangKramerChen} for a fading IFC with non-negative real $H_{j,k}$ for all $j,k$, such that $H_{1,1}% >H_{2,1}$ and $H_{2,2}>H_{1,2}$, to outer bound the sum-rate by \begin{equation} \mathbb{E}\left[ C\left( \frac{\left\vert H_{1,1}\right\vert ^{2}% P_{1}\left( \mathbf{H}\right) }{1+\left\vert H_{1,2}\right\vert ^{2}% P_{2}\left( \mathbf{H}\right) }\right) +C\left( \frac{\left\vert H_{2,2}\right\vert ^{2}P_{2}\left( \mathbf{H}\right) }{1+\left\vert H_{2,1}\right\vert ^{2}P_{1}\left( \mathbf{H}\right) }\right) \right] , \end{equation} for which we require that $\rho_{k}\left( \mathbf{H}\right) $ and $\sigma_{k}% ^{2}\left( \mathbf{H}\right) $, for all $\mathbf{H}$, satisfy% \begin{equation} H_{1,1}H_{1,2}\left( 1+H_{2,1}^{2}P_{1}\left( \mathbf{H}\right) \right) +H_{2,2}H_{2,1}\left( 1+H_{1,2}^{2}P_{2}\left( \mathbf{H}\right) \right) \leq H_{1,1}H_{2,2}. \label{UW2_Cond}% \end{equation} This implies that for a given fading statistics, every choice of feasible power policies $\underline{P}\left( \mathbf{H}\right) $ must satisfy the condition in (\ref{UW2_Cond}). With the exception of a few trivial channel models, the condition in (\ref{UW2_Cond}) cannot in general be satisfied by all power policies. One approach is to extend the results on sum-capacity and the related noisy interference condition for PGICs in \cite[Proof of Th.
3]{cap_theorems:Shang_03} to ergodic fading IFCs. Despite the fact that ergodic fading channels are simply a weighted combination of parallel sub-channels, extending the results in \cite[Proof of Th. 3]% {cap_theorems:Shang_03} is not in general straightforward. \end{remark} \subsection{Uniformly Mixed IFC: Proof of Theorem \ref{Th_Mix}} The proof of Theorem \ref{Th_Mix} follows directly from bounding the sum-capacity of a UM\ IFC by the sum-capacities of a UW one-sided IFC and a US one-sided IFC that result from eliminating one of the two interfering links. Achievability follows from using the US coding scheme for the strong user and the UW coding scheme for the weak user. \subsection{Uniformly Weak IFC: Proof of Theorem \ref{Th_UW2}} The proof of Theorem \ref{Th_UW2} follows directly from bounding the sum-capacity of a UW\ IFC by that of a UW one-sided IFC that results from eliminating one of the interfering links (eliminating an interfering link can only improve the capacity of the network). Since two complementary one-sided IFCs can be obtained this way, we have two outer bounds on the sum-capacity of a UW IFC denoted by $S^{\left( w,1\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) \ $and $S^{\left( w,2\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) $ in (\ref{SC_UW2}), where $S^{\left( w,1\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) $ and $S^{\left( w,2\right) }\left( \underline{P}\left( \mathbf{H}\right) \right) $ are the bounds for one-sided UW IFCs with $H_{2,1}=0$ and $H_{1,2}=0$, respectively. \subsection{Hybrid One-Sided IFC\textit{: }Proof of Theorem \ref{Th_Hyb}} The bound in (\ref{HK1_SR}) can be obtained from the following code construction: user $1$ encodes its message $w_{1}$ across all sub-channels by constructing independent Gaussian codebooks for each sub-channel to transmit the same message.
On the other hand, user 2 transmits two messages $\left( w_{2p},w_{2c}\right) $ jointly across all sub-channels by constructing independent Gaussian codebooks for each sub-channel to transmit the same message pair. The messages $w_{2p}$ and $w_{2c}$ are transmitted at (fading averaged) rates $R_{2p}$ and $R_{2c}$, respectively, such that $R_{2p}% +R_{2c}=R_{2}$. Thus, across all sub-channels, one may view the encoding as a Han-Kobayashi coding scheme for a one-sided non-fading IFC in which the two transmitted signals in each use of sub-channel $\mathbf{H}$ are \begin{align} X_{1}\left( \mathbf{H}\right) & =\sqrt{P_{1}\left( \mathbf{H}\right) }V_{1}\left( \mathbf{H}\right) \label{X1_hyb}\\ X_{2}\left( \mathbf{H}\right) & =\sqrt{\alpha_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }V_{2}\left( \mathbf{H}\right) +\sqrt{\overline{\alpha }_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }U_{2}\left( \mathbf{H}\right) \label{X2_hyb}% \end{align} where $V_{1}\left( \mathbf{H}\right) $, $V_{2}\left( \mathbf{H}\right) $, and $U_{2}\left( \mathbf{H}\right) $ are independent zero-mean unit variance Gaussian random variables, for all $\mathbf{H}$, $\alpha_{\mathbf{H}}% \in\left[ 0,1\right] $ and $\overline{\alpha}_{\mathbf{H}}=1-\alpha _{\mathbf{H}}$ are the power fractions allocated for $w_{2p}$ and $w_{2c}$, respectively. Thus, over $n$ uses of the channel, $w_{2p}$ and $w_{2c}$ are encoded via $V_{2}^{n}$ and $U_{2}^{n}$, respectively.
Receiver $1$ decodes $w_{1}$ and $w_{2c}$ jointly and receiver $2$ decodes $w_{2p}$ and $w_{2c}$ jointly across all channel states provided \begin{subequations} \label{R2bounds}% \begin{align} R_{2p} & \leq\mathbb{E}\left[ C\left( \left\vert H_{2,2}\right\vert ^{2}\alpha_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) \right) \right] \\ R_{2p}+R_{2c} & \leq\mathbb{E}\left[ C\left( \left\vert H_{2,2}\right\vert ^{2}P_{2}\left( \mathbf{H}\right) \right) \right] \end{align} \end{subequations} \begin{subequations} \label{Hyb_HK}% \begin{align} R_{1} & \leq\mathbb{E}\left[ C\left( \frac{\left\vert H_{1,1}\right\vert ^{2}P_{1}\left( \mathbf{H}\right) }{1+\left\vert H_{1,2}\right\vert ^{2}\alpha_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }\right) \right] \\ R_{2c} & \leq\mathbb{E}\left[ C\left( \frac{\left\vert H_{1,2}\right\vert ^{2}\overline{\alpha}_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }{1+\left\vert H_{1,2}\right\vert ^{2}\alpha_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }\right) \right] \\ R_{1}+R_{2c} & \leq\mathbb{E}\left[ C\left( \frac{\left\vert H_{1,1}% \right\vert ^{2}P_{1}\left( \mathbf{H}\right) +\left\vert H_{1,2}\right\vert ^{2}\overline{\alpha}_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }{1+\left\vert H_{1,2}\right\vert ^{2}\alpha_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }\right) \right] . 
\end{align} Using Fourier-Motzkin elimination, we can simplify the bounds in (\ref{R2bounds}) and (\ref{Hyb_HK}) to obtain \end{subequations} \begin{subequations} \label{HybHKFin}% \begin{align} R_{1} & \leq\mathbb{E}\left[ C\left( \frac{\left\vert H_{1,1}\right\vert ^{2}P_{1}\left( \mathbf{H}\right) }{1+\left\vert H_{1,2}\right\vert ^{2}\alpha_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }\right) \right] \label{R1h}\\ R_{2} & \leq\mathbb{E}\left[ C\left( \left\vert H_{2,2}\right\vert ^{2}P_{2}\left( \mathbf{H}\right) \right) \right] \label{R2h}\\ R_{2} & \leq\mathbb{E}\left[ C\left( \alpha_{\mathbf{H}}\left\vert H_{2,2}\right\vert ^{2}P_{2}\left( \mathbf{H}\right) \right) \right] +\mathbb{E}\left[ C\left( \frac{\left\vert H_{1,2}\right\vert ^{2}\overline{\alpha }_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }{1+\left\vert H_{1,2}% \right\vert ^{2}\alpha_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }\right) \right] \label{R2h2}\\ R_{1}+R_{2} & \leq\mathbb{E}\left[ C\left( \left\vert H_{2,2}\right\vert ^{2}\alpha_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) \right) \right] +\mathbb{E}\left[ C\left( \frac{\left\vert H_{1,1}\right\vert ^{2}% P_{1}\left( \mathbf{H}\right) +\left\vert H_{1,2}\right\vert ^{2}% \overline{\alpha}_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }{1+\left\vert H_{1,2}\right\vert ^{2}\alpha_{\mathbf{H}}P_{2}\left( \mathbf{H}\right) }\right) \right] .\label{R12h}% \end{align} Combining the bounds in (\ref{HybHKFin}), for every choice of $\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) $, the sum-rate is given by the minimum of two functions $S_{1}\left( \alpha _{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) $ and $S_{2}\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) $, where $S_{1}\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) $ is the sum of the bounds on $R_{1}$ and $R_{2}$ in (\ref{R1h}) and (\ref{R2h}), respectively, and $S_{2}\left( \alpha_{\mathbf{H}},\underline {P}\left( \mathbf{H}\right) \right) $ is
the bound on $R_{1}+R_{2}$ in (\ref{R12h}). The bound on $R_{1}+R_{2}$ from combining (\ref{R1h}) and (\ref{R2h2}) is at least as much as (\ref{R12h}), and hence, is ignored. The maximization of the minimum of $S_{1}\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) $ and $S_{2}\left( \alpha_{\mathbf{H}}% ,\underline{P}\left( \mathbf{H}\right) \right) $ can be shown to be equivalent to a \textit{minimax }optimization problem (see, e.g., \cite[II.C]{cap_theorems:HVPoor01}) for which the maximum sum-rate $S^{\ast}$ is given by one of three cases. Note that in each case, the optimal $\underline{P}^{\ast}\left( \mathbf{H}\right) $ and $\alpha_{\mathbf{H}}^{\ast}$ maximize the smaller of the two functions and therefore maximize both when the two functions are equal. The three cases are% \end{subequations} \begin{subequations} \label{Hyb_MM}% \begin{align} & \begin{array} [c]{cc}% \text{Case }1: & S^{\ast}=S_{1}\left( \alpha_{\mathbf{H}}^{\ast}% ,\underline{P}^{\ast}\left( \mathbf{H}\right) \right) <S_{2}\left( \alpha_{\mathbf{H}}^{\ast},\underline{P}^{\ast}\left( \mathbf{H}\right) \right) \end{array} \label{MMC1}\\ & \begin{array} [c]{cc}% \text{Case }2: & S^{\ast}=S_{2}\left( \alpha_{\mathbf{H}}^{\ast}% ,\underline{P}^{\ast}\left( \mathbf{H}\right) \right) <S_{1}\left( \alpha_{\mathbf{H}}^{\ast},\underline{P}^{\ast}\left( \mathbf{H}\right) \right) \end{array} \label{MMC2}\\ & \begin{array} [c]{cc}% \text{Case }3: & S^{\ast}=S_{1}\left( \alpha_{\mathbf{H}}^{\ast}% ,\underline{P}^{\ast}\left( \mathbf{H}\right) \right) =S_{2}\left( \alpha_{\mathbf{H}}^{\ast},\underline{P}^{\ast}\left( \mathbf{H}\right) \right) \end{array} \label{MMC3}% \end{align} Thus, for Cases 1 and 2$,$ the minimax policy is the policy maximizing $S_{1}\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) $ and $S_{2}\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) $ subject to the conditions in (\ref{MMC1}) and (\ref{MMC2}),
respectively, while for Case $3$, it is the policy maximizing $S_{1}\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) $ subject to the equality constraint in (\ref{MMC3}). We now consider this maximization problem for each sub-class. Before proceeding, we observe that $S_{1}\left( \cdot\right) $ is maximized for $\alpha_{\mathbf{H}}^{\ast}=0$ and $P_{k}^{\ast}\left( \mathbf{H}\right) =P_{k}^{(wf)}\left( H_{kk}\right) $, $k=1,2.$ On the other hand, the $\alpha_{\mathbf{H}}^{\ast}$ maximizing $S_{2}\left( \cdot\right) $ depends on the sub-class. \textit{Uniformly Strong}: The bound $S_{2}\left( \alpha_{\mathbf{H}% },\underline{P}\left( \mathbf{H}\right) \right) $ in (\ref{R12h}) can be rewritten as \end{subequations} \begin{equation} \mathbb{E}\left[ C\left( \left\vert H_{2,2}\right\vert ^{2}\alpha _{\mathbf{H}}P_{2}\left( \mathbf{H}\right) \right) \right] -\mathbb{E}% \left[ C\left( \left\vert H_{1,2}\right\vert ^{2}\alpha_{\mathbf{H}}% P_{2}\left( \mathbf{H}\right) \right) \right] +\mathbb{E}\left[ C\left( \left\vert H_{1,1}\right\vert ^{2}P_{1}\left( \mathbf{H}\right) +\left\vert H_{1,2}\right\vert ^{2}P_{2}\left( \mathbf{H}\right) \right) \right] , \label{US1}% \end{equation} and thus, when $\Pr[\left\vert H_{1,2}\right\vert >\left\vert H_{2,2}% \right\vert ]=1$, for every choice of $\underline{P}\left( \mathbf{H}\right) $, $S_{2}\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) $ is maximized by $\alpha_{\mathbf{H}}=0$, i.e., $w_{2}=w_{2c}$. The sum-capacity is given by (\ref{US_SC}) with $H_{2,1}=\infty$ (this is equivalent to a genie aiding one of the receivers, thereby simplifying the sum-capacity expression in (\ref{US_SC}) for a two-sided IFC to that for a one-sided IFC).
Furthermore, $\alpha_{\mathbf{H}}=0$ also maximizes $S_{1}\left( \alpha_{\mathbf{H}},\underline{P}\left( \mathbf{H}\right) \right) .$ In conjunction with the outer bounds for US IFCs developed earlier, the US sum-capacity and the optimal policy achieving it are obtained via the minimax optimization problem with $\alpha_{\mathbf{H}}^{\ast}=0$ such that every sub-channel carries the same common information. \textit{Uniformly Weak}: For this sub-class of channels, it is straightforward to verify that for $\alpha_{\mathbf{H}}^{\ast}=0$, (\ref{MMC1}) will not be satisfied. Thus, one is left with Cases 2 and $3$. From Theorem \ref{Th_UW1}, we have that $\alpha_{\mathbf{H}}^{\ast}=1$ achieves the sum-capacity of one-sided UW\ IFCs, i.e., $w_{2}=w_{2p}$. Furthermore, $S_{2}\left( 1,\underline{P}\left( \mathbf{H}\right) \right) =S_{1}\left( 1,\underline{P}\left( \mathbf{H}\right) \right) $, and thus, the condition for Case $2$ is not satisfied, i.e., this sub-class corresponds to Case 3 in the minimax optimization. The constrained optimization in (\ref{MMC3}) for Case $3$ can be solved using Lagrange multipliers, though the solution is more easily developed using the techniques in Theorem \ref{Th_UW1}. \textit{Ergodic Very Strong}: As mentioned before, $S_{1}\left( \cdot\right) $ is maximized for $\alpha_{\mathbf{H}}^{\ast}=0$ and $P_{k}^{\ast}\left( \mathbf{H}\right) =P_{k}^{(wf)}\left( H_{kk}\right) $, $k=1,2$, i.e., when $w_{2}=w_{2c}$ and each user waterfills on its intended link. From (\ref{Hyb_MM})$,$ we see that the sum-capacity of EVS IFCs is achieved provided the condition for Case 1 in (\ref{Hyb_MM}) is satisfied. Note that this maximization does not require the sub-channels to be UW or US. \textit{Hybrid}: When the condition for Case $1$ in (\ref{Hyb_MM}) with $\alpha_{\mathbf{H}}^{\ast}=0$ is satisfied, we obtain an EVS\ IFC.
On the other hand, when this condition is not satisfied, the optimization simplifies to considering Cases $2$ and 3, i.e., $\alpha_{\mathbf{H}}^{\ast}\not =0$ for all $\mathbf{H}$. Using the linearity of expectation, we can write the expressions for $S_{1}\left( \cdot\right) $ and $S_{2}\left( \cdot\right) $ as sums of expectations of the appropriate bounds over the collection of weak and strong sub-channels. Let $S_{k}^{\left( w\right) }\left( \cdot\right) $ and $S_{k}^{\left( s\right) }\left( \cdot\right) $ denote the expectation over the weak and strong sub-channels, respectively, for $k=1,2$, such that $S_{k}\left( \cdot\right) =S_{k}^{\left( w\right) }\left( \cdot\right) +S_{k}^{\left( s\right) }\left( \cdot\right) $, $k=1,2.$ Consider Case $2$ first. For those sub-channels which are strong, one can use (\ref{US1}) to show that $\alpha_{\mathbf{H}}^{\ast}=0$ maximizes $S_{2}^{\left( s\right) }\left( \cdot\right) $. Suppose we choose $\alpha_{\mathbf{H}}^{\ast}=1$ to maximize $S_{2}^{\left( w\right) }\left( \cdot\right) $. From the UW analysis earlier, $S_{2}^{\left( w\right) }\left( 1,P\left( \mathbf{H}\right) \right) =S_{1}^{\left( w\right) }\left( 1,P\left( \mathbf{H}\right) \right) $, and therefore, (\ref{MMC2}) is satisfied only when $S_{2}^{\left( s\right) }\left( 0,P\left( \mathbf{H}\right) \right) <S_{1}^{\left( s\right) }\left( 0,P\left( \mathbf{H}\right) \right) $. This requirement may not hold in general, and thus, to satisfy (\ref{MMC2}), we require that $\alpha_{\mathbf{H}}^{\ast}% \in(0,1]$ for those $\mathbf{H}$ that represent weak sub-channels. Similar arguments hold for Case $3$ too thereby justifying (\ref{alpstar_hyb}) in Theorem \ref{Th_Hyb}. \begin{remark} The bounds in (\ref{R2bounds}) are written assuming superposition coding of the common and private messages at transmitter $2$. 
The resulting bounds following Fourier-Motzkin elimination remain unchanged even if we included an additional bound on $R_{2c}$ at receiver $2$ in (\ref{R2bounds}). \end{remark} \section{\label{Sec_Dis}Discussion} As in the non-fading case (see \cite{cap_theorems:ETW} for a detailed development of outer bounds), the outer bounds and capacity results we have obtained are in general tailored to specific regimes of fading statistics. Our results can be summarized by two Venn diagrams, one for the two-sided and one for the one-sided IFCs, as shown in Fig. \ref{Fig_IFCVenn}. Taking a Han-Kobayashi view-point, the diagrams show that transmitting common messages is optimal for the EVS and US IFCs, i.e., $w_{k}=w_{kc}$, $k=1,2$. Similarly, choosing only a private message at the interfering transmitter, i.e., $w_{2}=w_{2p}$ for $H_{2,1}=0$ and $w_{1}=w_{1p}$ for $H_{1,2}=0$, is optimal for the one-sided UW\ IFC. For the mixed IFCs, it is optimal for the strongly and the weakly interfering users to transmit only common and only private messages, respectively. For the remaining hybrid IFCs and two-sided UW IFCs, the most general achievable strategy results from generalizing the HK scheme to the fading model, i.e., each transmitter in the two-sided IFC transmits private and common messages while only the interfering transmitter does so in the one-sided model. These results are summarized in Fig. \ref{Fig_IFCVenn}. The sub-classes for which either the sum-capacity or the entire capacity region is known are also indicated in the Figure.% \begin{figure}[tbp] \centering {\includegraphics[ height=2.8271in, width=5.6213in ]% {Two_One_sided_IFCs_Venn.eps}% }% \caption{Overview of capacity results for two-sided and one-sided ergodic fading IFCs.}\label{Fig_IFCVenn}% \end{figure}% We now present examples of continuous and discrete fading processes for which the channel states satisfy the EVS condition. Without loss of generality, in both examples we assume that the direct links are non-fading.
Thus, for the case where the fading statistics and average power constraints $\overline {P}_{k}$ satisfy the EVS\ conditions in (\ref{EVS_Cond}), it is optimal for transmitter $k$ to transmit at $\overline{P}_{k}$. For the continuous model, we assume that the cross-links are independent and identically distributed Rayleigh faded links, i.e., $H_{j,k}\sim\mathcal{CN}\left( 0,\sigma ^{2}/2\right) $ for all $j\not =k,j,k=1,2$. For the discrete model, we assume that the cross-link fading states take values in a binary set $\left\{ h_{1},h_{2}\right\} $. Finally, we set $\overline{P}_{1}=\overline{P}% _{2}=\overline{P}$. For every choice of the Rayleigh fading variance $\sigma^{2}$, we determine the maximum $\overline{P}$ for which the EVS conditions in (\ref{EVS_Cond}) hold. The resulting feasible $\overline{P}$ vs. $\sigma^{2}$ region is plotted in\ Fig. \ref{Fig_RayLIFC}(a). Our numerical results indicate that for very small values of $\sigma^{2}$, i.e., $\sigma^{2}<1.5$, where the cumulative distribution of fading states with $\left\vert H_{j,k}\right\vert \,<1$ is close to $1$, the EVS condition cannot be satisfied by any finite value of $\overline{P}$, however small. As $\sigma^{2}$ increases thereby increasing the likelihood of $\left\vert H_{j,k}\right\vert \,>1$, $\overline{P}$ increases too. Also plotted in Fig. \ref{Fig_RayLIFC}(b) is the EVS sum-capacity achieved at $\overline{P}_{\max}$, the maximum $\overline{P}$ for every choice of $\sigma^{2}$. Furthermore, since the Rayleigh fading channel allows ergodic interference alignment \cite{cap_theorems:Nazer01}, we compare the EVS\ sum-capacity with the sum-rate achieved by ergodic interference alignment for every choice of $\sigma^{2}$ and the corresponding $\overline {P}_{\max}$. This achievable scheme, whose sum-rate is the same as that achieved when the users are time-duplexed, is closer to the sum-capacity only for small values of $\sigma^{2}$. 
This is to be expected as EVS\ IFCs achieve the largest possible degrees of freedom, which is $2$ for a two-user IFC, while this scheme achieves at most one degree of freedom. From (\ref{EVS_Cond}), one can verify that for a non-fading very strong IFC, the very strong condition sets an upper bound on the average transmit power $\overline{P}_{k}$ at user $k$ as% \begin{equation}% \begin{array} [c]{cc}% \overline{P}_{k}<\left\vert H_{k,j}\right\vert ^{2}/\left( \left\vert H_{1,1}\right\vert ^{2}\left\vert H_{2,2}\right\vert ^{2}\right) -1 & j\not =k,j,k\in\left\{ 1,2\right\} . \end{array} \end{equation} One can view the upper bound on $\overline{P}$ for the EVS\ IFCs in\ Fig. \ref{Fig_RayLIFC} as an equivalent fading-averaged bound.% \begin{figure*}[tbp] \centering {\includegraphics[ trim=0.167262in 0.052738in 0.216967in 0.064759in, height=3.154in, width=5.9931in ]% {Rayleigh_Fading_figure_2plots.eps}% }% \caption{Feasible Power-variance region for EVS, EVS sum-capacity, and Ergodic Interference Alignment Sum-Rate.}\label{Fig_RayLIFC}% \end{figure*}% We next compare the effect of joint and separate coding for one-sided EVS and US IFCs. For computational simplicity, we consider a discrete fading model where the non-zero cross-link fading states take values in a binary set $\left\{ h_{1},h_{2}\right\} $ while the direct links are non-fading unit gains. For a one-sided EVS\ IFC, we choose $\left( h_{1},h_{2}\right) =\left( 0.5,3.5\right) $ and $\overline{P}_{1}=\overline{P}_{2}=\overline {P}_{\max}$ where $\overline{P}_{\max}$ is the maximum power for which the EVS conditions in (\ref{EVS_Cond}) are satisfied (note that only one of the conditions is relevant since it is a one-sided IFC). In Fig. \ref{Fig_IFCSep}% , the EVS sum-capacity is plotted along with the sum-rate achieved by independent coding in each sub-channel as a function of the probability $p_{1}$ of the fading state $h_{1}$.
Here independent coding means that each sub-channel is viewed as a non-fading one-sided IFC and the sum-capacity achieving strategy for each sub-channel is applied. As expected, as $p_{1}\rightarrow0$ or $p_{1}\rightarrow1$, the sum-rate achieved by separable coding approaches that of the joint coding scheme. Thus, the difference between the optimal joint coding and the sub-optimal independent coding schemes is the largest when both fading states are equally likely. In contrast to this example where the gains from joint coding are not negligible, we also plot in Fig. \ref{Fig_IFCSep} the sum-capacity and sum-rate achieved by independent coding for an EVS\ IFC with $\left( h_{1},h_{2}\right) =\left( 0.5,2.0\right) $ for which the rate difference is very small. Thus, as expected, joint coding is advantageous when the variance of the cross-link fading is large and the transmit powers are small enough to result in an EVS IFC. In the same plot, we also compare the sum-capacity with the sum-rate achieved by a separable scheme for two US IFCs, one given by $\left( h_{1},h_{2}\right) =\left( 1.25,1.75\right) $ and the other by $\left( h_{1},h_{2}\right) =\left( 1.25,3.75\right) $. As with the EVS\ examples, here too, the rate difference between the optimal joint strategy and the, in general, sub-optimal independent strategy increases with increasing variance of the fading distribution. One can similarly compare the performance of independent and joint coding for two-sided EVS and US\ IFCs. In this case, the more general HK\ scheme needs to be considered in each sub-channel for the independent coding case.
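The separable scheme just described can be evaluated with a short computation. The sketch below uses the standard per-state sum-capacities of a non-fading one-sided Gaussian IFC (treating interference as noise in a weak state, a compound MAC bound in a strong state) and the interference-free sum-rate as the joint-coding benchmark; these closed forms, unit direct gains and a fixed $\overline{P}$ (rather than the paper's $\overline{P}_{\max}$) are assumptions made here for illustration.

```python
import numpy as np

def sum_cap_one_sided(h, P):
    """Sum-capacity of a non-fading one-sided Gaussian IFC with unit direct
    gains, equal powers P and cross gain h (standard closed forms, assumed
    here as building blocks for the separable scheme)."""
    if h <= 1.0:  # weak: treating interference as noise is sum-rate optimal
        return np.log2(1 + P) + np.log2(1 + P / (1 + h**2 * P))
    # strong: the interfered receiver decodes both messages (compound MAC)
    return min(2 * np.log2(1 + P), np.log2(1 + P + h**2 * P))

h1, h2, P = 0.5, 3.5, 2.0        # weak/strong states as in the example
joint_ub = 2 * np.log2(1 + P)    # interference-free bound (EVS-achievable)

for p1 in (0.0, 0.25, 0.5, 0.75, 1.0):
    separable = p1 * sum_cap_one_sided(h1, P) + (1 - p1) * sum_cap_one_sided(h2, P)
    print(f"p1={p1:4.2f}  separable={separable:.3f}  joint bound={joint_ub:.3f}")
```

With these expressions the separable sum-rate never exceeds the joint benchmark, and it meets it when the strong state is dominant, mirroring the trend in the figure.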
In general, the observations for the one-sided IFC also extend to the two-sided IFC.% \begin{figure} [ptb] \begin{center} \includegraphics[ trim=0.258254in 0.038821in 0.399347in 0.150780in, height=3.3217in, width=4.3785in ]% {SumRate_Par_Opt_4cases_51209.eps}% \caption{Plot comparing the sum-capacities and the sum-rates achieved by separable coding for different values of $\left( h_{1},h_{2}\right) $ that result in either an EVS\ or a US IFC.}% \label{Fig_IFCSep}% \end{center} \end{figure} Finally, we demonstrate sum-rates achievable by Theorem \ref{Th_Hyb} for a hybrid one-sided IFC. As before, for computational simplicity, we consider a discrete fading model where the cross-link fading states take values in a binary set $\left\{ h_{1},h_{2}\right\} $ while the direct links are non-fading unit gains. Without loss of generality, we choose $\left( h_{1},h_{2}\right) =\left( 0.5,2.0\right) $ and assume $\overline{P}% _{1}=\overline{P}_{2}=\overline{P}$. The sum-rate achieved by the proposed HK-like scheme, denoted $R_{sum}^{\left( HK\right) }$, is determined as a function of the probability $p_{1}$ of the weak state $h_{1}$. For each $p_{1}$, using the fact that a hybrid IFC is by definition one for which the EVS condition is not satisfied, we choose $\overline{P}\left( p_{1}\right) =\overline{P}_{\max}^{EVS}\left( p_{1}\right) +1.5$ where $\overline {P}_{\max}^{EVS}\left( p_{1}\right) $ is the maximum $\overline{P}$ for which the EVS conditions hold for the chosen $p_{1}$ and $\left( h_{1}% ,h_{2}\right) $. In Fig. \ref{Fig_HKParOB}(a), we plot $R_{sum}^{\left( HK\right) }$ as a function of $p_{1}$. We also plot the largest sum-rate outer bounds $R_{sum}^{\left( OB\right) }$ obtained by assuming interference-free links from the users to the receivers. Finally, for comparison, we plot the sum-rate $R_{sum}^{\left( Ind\right) }$ achieved by a separable coding scheme in each sub-channel.
This separable coding scheme is simply a special case of the HK-based joint coding scheme presented for hybrid one-sided IFCs in Theorem \ref{Th_Hyb} obtained by choosing $\alpha_{H}^{\ast}=0$ and $\alpha_{H}^{\ast}=1$ in the strong and weak sub-channels, respectively. Thus, $R_{sum}% ^{\left( Ind\right) }\leq R_{sum}^{\left( HK\right) }$ as demonstrated in the plot. In Fig. \ref{Fig_HKParOB}(b), the fractions $\alpha_{h_{1}}^{\ast}$ and $\alpha_{h_{2}}^{\ast}$ in the $h_{1}$ (weak) and the $h_{2}$ (strong) states, respectively, are plotted. As expected, $\alpha_{h_{2}}^{\ast}=0$; on the other hand, $\alpha_{h_{1}}^{\ast}$ varies between $0$ and $1$ such that for $p_{1}\rightarrow1$, $\alpha_{h_{1}}^{\ast}\rightarrow1$ and for $p_{1}\rightarrow0$, $\alpha_{h_{1}}^{\ast}\rightarrow0.$ Thus, when either the weak or the strong state is dominant, the performance of the HK-based coding\ scheme approaches that of the separable scheme in \cite{cap_theorems:SumCap_Par_ZIFC}.% \begin{figure*}[tbp] \centering {\includegraphics[ trim=0.180674in 0.000000in 0.241425in 0.000000in, height=3.1938in, width=6.1289in ]% {SumRate_HK_Par_OB_chpt5_2.eps}% }% \caption{Sum-Rate vs. $p_1$ for HK-based scheme and Separable coding scheme and plots of optimal power fractions for HK-based scheme.}\label{Fig_HKParOB}% \end{figure*}% \section{\label{Sec_Con}Conclusions} We have developed the sum-capacity of specific sub-classes of ergodic fading IFCs. These sub-classes include the ergodic very strong (mixture of weak and strong sub-channels satisfying the EVS condition), the uniformly strong (collection of strong sub-channels), the uniformly weak one-sided (collection of weak one-sided sub-channels) IFCs, and the uniformly mixed (mix of UW and US one-sided IFCs) two-sided IFCs.
Specifically, we have shown that requiring both receivers to decode both messages, i.e., simplifying the IFC to a compound MAC, achieves the sum-capacity and the capacity region of the EVS and US (one- and two-sided) IFCs. For both sub-classes, achieving the sum-capacity requires encoding and decoding jointly across all sub-channels. In contrast, for the UW one-sided IFCs, we have used genie-aided methods to show that the sum-capacity is achieved by ignoring interference at the interfered receiver and with independent coding across sub-channels. This approach also allowed us to develop outer bounds on the two-sided UW\ IFCs. We combined the UW and US one-sided IFC results to develop the sum-capacity for the uniformly mixed two-sided IFCs and showed that joint coding is optimal. For the final sub-class of hybrid one-sided IFCs with a mix of weak and strong sub-channels that do not satisfy the EVS conditions, using the fact that the strong sub-channels can be exploited, we have proposed a Han-Kobayashi based achievable scheme that allows partial interference cancellation using a joint coding scheme. Assuming no time-sharing, we have shown that the sum-rate is maximized by transmitting only a common message on the strong sub-channels and transmitting a private message in addition to this common message in the weak sub-channels. Proving the optimality of this scheme for the hybrid sub-class remains open. However, we have also shown that the proposed joint coding scheme applies to all sub-classes of one-sided IFCs, and therefore, encompasses the sum-capacity achieving schemes for the EVS, US, and UW sub-classes. Analogously to the non-fading case, the ergodic capacity of a two-sided IFC remains unknown in general. However, additional complexity arises from the fact that the sub-channels can in general be a mix of weak and strong IFCs.
A direct result of this complexity is that, in contrast to the non-fading case, the sum-capacity of a one-sided fading IFC remains open for the hybrid sub-class. The problem similarly remains open for the two-sided fading IFC. An additional challenge for the two-sided IFC is that of developing tighter bounds for the uniformly weak channel. \bibliographystyle{IEEEtran}
\section{Introduction} Tidal locking between two astronomical objects is a well known phenomenon that has fascinated physicists, philosophers and astronomers throughout the centuries~\cite{arons,gron,white,withers,koenders,butikov,razmi,massi,urbassek,pujol,ng,cregg,norsen}. In a tidally locked two-body system, the orbital angular velocity of the objects around the common center of mass of {the} system is equal to the angular speed of one or both objects spinning around their own axes. The most noticeable example is the case of the Moon orbiting around planet Earth. In this case, the Moon revolves in an approximately circular orbit around the center of mass of the Earth-Moon system in exactly the same time as it takes to {rotate around its axis}. Consequently, the near side of the Moon is always facing the Earth, while the far side is always hidden from an Earthling's view. If the Earth and the Moon are considered as an isolated dynamical system, the tidal friction resulting from the bulges produced by the gravitational force of the Moon on Earth's crust would eventually dissipate energy and slow down Earth's rotation until the system is completely tidally locked, \emph{i.e.} both the Moon and the Earth would spin around their axes in the same time that it would take them to orbit around the center of mass of the system. While the conceptual and mathematical understanding of tidal interactions between astronomical objects dates back to Johannes Kepler~\cite{kepler}, Sir Isaac Newton~\cite{newton} and Immanuel Kant~\cite{kant}, it has been shown during the past fifty years that the effect of tidal locking can be mathematically derived using an effective potential approach~\cite{kopal72,counselman73,vanhamme79,hut80,mcdonald,ferroglia}.
In this framework, tidal locking is obtained by minimizing the effective potential energy of the astronomical objects orbiting each other, taking into account their rotational kinetic energies around their own axes, and assuming the total angular momentum of the system remains constant. The local minimum of the effective potential of the two-body system corresponds to a circular orbit configuration in which the two objects are completely tidally locked to each other. Moreover, this approach is also useful to study the stability of the system. The existence of a local minimum in the effective potential, \emph{i.e.} a stable tidally-locked configuration, depends on a single dimensionless parameter, corresponding to the case of a fold catastrophe in catastrophe theory~\cite{guemez,fiolhais1,fiolhais2}. This control parameter is {a function of} the objects' masses and moments of inertia and the total angular momentum of the system. Despite the fact that tidal locking had already been studied in some detail, in 1964 the recently decommissioned Arecibo Telescope revealed a surprising new manifestation of a closely related phenomenon. The rotation period of Mercury around its own axis {is} only 59 days, as opposed to its 88-day orbital period~\cite{dyce}, an approximate 3:2 spin-orbit resonance. The reason for this anomalous behavior was soon identified as stemming from the ellipsoidal shape of Mercury and its high-eccentricity orbit around the Sun. This spin-orbit resonance due to Mercury's ellipsoidal shape is stabilized by the tidal torque applied by the Sun on Mercury. In this paper, this result is obtained in a pedagogical manner by considering the potential energy that regulates the rotational motion of the planet around its axis, where the metastable equilibrium configurations appear as local minima if the ellipsoidal satellite orbits around a central spherical object in an elliptic orbit.
In order to derive spin-orbit resonances in the aforementioned case, the total energy of the system is calculated in Section~\ref{sec:energy}, taking into account the quadrupole correction to the gravitational potential energy of an ellipsoidal planet orbiting around a spherical central star. In Section~{\ref{sec:eqforgamma}} it is shown how for integer and half-integer values of the ratio between the rotational and orbital periods of the planet, {a certain angle $\gamma$, that depends on the planet's rotational angle and on the mean anomaly of the orbital motion}, satisfies an equation of the same type as the equation of motion for a simple pendulum. It is then shown that in these conditions the rotational and orbital periods of the planet can remain in a fixed (half-)integer ratio. In Section~\ref{sec:rotpoten} the situation is reanalyzed by considering the shape of the rotational potential energy averaged over the orbital period. This analysis shows that for integer and half-integer ratios of the orbital to the rotational period, the averaged rotational potential energy shows metastable minima for $\gamma = 0$ or $\gamma = \pi/2$. Conclusions are drawn in Section~\ref{sec:conclusions}. \section{Energy of the system} \label{sec:energy} \begin{figure} \centering \includegraphics[width=0.70\textwidth]{Fig01.pdf} \caption{Schematic representation (not to scale) of the planet orbiting the star along an elliptic orbit. The planet also spins {around an axis perpendicular to the plane of the orbit and going through its center}. The figure shows the relation between the angle $\phi$ and the angle $\delta$. } \label{fig:Spin-Orbit} \end{figure} Consider a planet of mass $m$ orbiting a star of mass $M_s$. The star is modeled as a perfectly spherical object, while the planet is an ellipsoid of semi-axes of length $a > b > c$. The rotational velocity of the star around its axis does not play a role in the argument presented below, so it is taken equal to zero for simplicity.
On the contrary, the planet is assumed to be spinning about an axis perpendicular to the orbital plane. This axis {of rotation} is supposed to coincide with {the shortest axis} of symmetry of the ellipsoid. The gravitational potential energy of this system can be studied through a multipole expansion, {as discussed for example in Sussman and Wisdom's book~\cite{sussman}}. By truncating the expansion after the quadrupole contribution, the gravitational potential can be written as \begin{equation} \label{eq:grpotential} U_{\text{gr}} = - \frac{G M_s m}{r} - \frac{3 G M_s }{2 r^3} \left[ \left(B -A\right) \cos^2 \left( \phi - \delta \right) - \frac{1}{3} \left(2 B -A - C \right) \right] + {\mathcal{O}} \left(\frac{1}{r^4} \right)\, , \end{equation} where $r$ is the distance between the star and the center of the planet. The angle $\phi$ is the angle that the line joining the star to the planet makes with the $x$-axis of the frame of reference, which is taken as centered on the star, since the star is assumed to be much more massive than the planet, $m \ll M_s$. The semi-axes $a$ and $b$ of the planet lie on the orbital plane, which is assumed to be the $x-y$ plane. The angle that the longest axis $a$ makes with the horizontal direction is indicated with $\delta$ in Eq.~(\ref{eq:grpotential}). In the case of an elliptic orbit, {in which the $x$-axis can be conveniently chosen along the line joining the aphelion and the perihelion}, the situation is sketched in Figure~\ref{fig:Spin-Orbit}. The quantities $A,B$ and $C$ are the moments of inertia of the ellipsoid with respect to its principal axes \begin{equation} A = \frac{m}{5} \left(b^2+c^2 \right) \, , \qquad B = \frac{m}{5} \left(a^2+c^2 \right) \, , \qquad C = \frac{m}{5} \left(a^2+b^2 \right) \, . 
\end{equation} {Indeed, Eq.~(\ref{eq:grpotential}) coincides with Eq.~(2.74) in~\cite{sussman}, if one considers that in the latter equation one should set $\alpha = \cos(\phi-\delta)$, $\beta =\sin(\phi-\delta)$, and $\gamma = 0$ to describe the configuration considered in Figure~\ref{fig:Spin-Orbit}.} For the purposes of this work it is possible to set $c = b$, which implies $C = B$. With this assumption $2 B- A-C = B-A $ so that the gravitational potential in Eq.~(\ref{eq:grpotential}) has the simpler form \begin{equation} \label{eq:grpotentialBC} U_{\text{gr}} = - \frac{G M_s m}{r} -\frac{3 G M_s }{2 r^3} \left(B -A\right) \left[ \cos^2 \left( \phi - \delta \right) - \frac{1}{3} \right] + {\mathcal{O}} \left(\frac{1}{r^4} \right)\, . \end{equation} The total mechanical energy of a system of two masses $M_s$ and $m$ orbiting each other under the potential in Eq.~(\ref{eq:grpotentialBC}) {is the sum of the orbital kinetic energy, the rotational kinetic energy due to the spinning of the ellipsoid, and the gravitational potential energy;} it can be expressed {(in the c.o.m. frame)} as \begin{equation} \label{eq:4} E = \frac{1}{2} m \left( \dot{r}^2 +r^2 \dot{\phi}^2 \right) + \frac{1}{2} B \dot{\delta}^2 - \frac{G M_s m}{r} -\frac{3 G M_s }{2 r^3} \left(B -A\right) \left[ \cos^2 \left( \phi - \delta \right) - \frac{1}{3} \right] \, . \end{equation} {The planet's angular velocity around its axis is $\dot{\delta}$.} The angular velocity of the planet in its orbit around the star is indicated by $\dot{\phi}$. {The angular momentum of the system at a given instant in time} is \begin{equation} \label{eq:angmom} L \equiv m r^2 \dot{\phi} + B \dot{\delta} \, , \end{equation} {where the first term in the r.h.s. of Eq.~(\ref{eq:angmom}) is the orbital angular momentum, while the second term is the angular momentum associated with the rotation of the ellipsoidal planet around an axis of length $2 b$.
If the planet follows an elliptic orbit, the orbital angular momentum is conserved, so that it is convenient to set} \begin{equation} \label{eq:l} l \equiv m r^2 \dot{\phi} \, . \end{equation} Consequently, {by solving Eq.~(\ref{eq:l}) w.r.t. $\dot{\phi}$ and by inserting the result in Eq.~(\ref{eq:4})}, the total mechanical energy of the system can then be expressed as \begin{equation} \label{eq:energyrpd} E = \frac{1}{2} m \dot{r}^2 - G \frac{M_s m}{r} -\frac{3 G M_s }{2 r^3} \left(B -A\right) \left[ \cos^2 \left( \phi - \delta \right) - \frac{1}{3} \right] +\frac{l^2}{2 m r^2} + {\frac{1}{2} B \dot{\delta}^2} \, . \end{equation} \boldmath \section{Spin-orbit gravitational locking} \label{sec:eqforgamma} \unboldmath \begin{figure} \centering \includegraphics[width=0.55\textwidth]{Fig02.pdf} \caption{Relation between the elliptic anomaly ${\mathcal E}$ and the {true anomaly} $\phi$. } \label{fig:elliptic-anomaly} \end{figure} The orbits of the planets of the solar system are described to an excellent accuracy by ellipses. For elliptic orbits, there is a fixed relation between the distance between the Sun and the planet, $r$, and the {true anomaly} $\phi$: \begin{equation} \label{eq:ellipse} r = \frac{s \left(1- \epsilon^2 \right)}{1 + \epsilon \cos \phi} \, , \end{equation} where $\epsilon$ is the orbit eccentricity and $s$ is the length of the semi-major axis of the orbit. In addition, the part of the mechanical energy of the system associated with the orbital motion is fixed and depends on the eccentricity and semi-major axis of the orbit: \begin{equation}\label{eq:orbE} E_{\text{orb}} = \frac{1}{2} m \dot{r}^2 - G \frac{M_s m}{r} + \frac{l^2}{2 m r^2} = \frac{G^2 M_s^2 m^3}{2 l^2} \left(\epsilon^2-1 \right)\, , \end{equation} where, since $0< \epsilon < 1$ in an elliptic orbit, $E_{\text{orb}} < 0$.
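The last equality in Eq.~(\ref{eq:orbE}) can also be checked symbolically, using the standard relation $l^{2}=G M_{s} m^{2} s\left(1-\epsilon^{2}\right)$ between the angular momentum, semi-major axis and eccentricity of the orbit:

```python
import sympy as sp

G, Ms, m, s, eps = sp.symbols('G M_s m s epsilon', positive=True)

# orbital angular momentum for an ellipse with semi-major axis s, eccentricity eps
l2 = G * Ms * m**2 * s * (1 - eps**2)

# effective orbital energy at perihelion (the radial velocity vanishes there)
r_p = s * (1 - eps)
E_perihelion = -G * Ms * m / r_p + l2 / (2 * m * r_p**2)

# right-hand side of Eq. (orbE)
E_orb = G**2 * Ms**2 * m**3 * (eps**2 - 1) / (2 * l2)

print(sp.simplify(E_perihelion - E_orb))  # expected: 0
```

Both sides reduce to the familiar $-G M_{s} m/(2s)$, the orbital energy written in terms of the semi-major axis alone.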
{The last equality in Eq.~(\ref{eq:orbE}) can be obtained by observing that at the planet perihelion, $\phi = 0$, so that $r = s (1-\epsilon)$; in addition at the perihelion the orbital energy of the planet coincides with its effective orbital potential energy, since its radial velocity is zero. Therefore, the last equality in Eq.~(\ref{eq:orbE}) can be verified by calculating the effective potential energy at the perihelion and by using the relation between angular momentum, semi-major axis and eccentricity of the orbit~\cite{taylor}.} {An expression for the orbital energy as a function of the semi-major axis rather than eccentricity can be found in~\cite{barger}; that expression is equivalent to the r.h.s. of Eq.~(\ref{eq:orbE}).} By using Eq.~(\ref{eq:energyrpd}), one finds \begin{equation} \label{eq:rotE} E_{\text{rot}} = E - E_{\text{orb}} = \frac{1}{2} B \dot{\delta}^2 -\frac{3 G M_s }{2 r^3} \left(B -A\right) \left[ \cos^2 \left( \phi - \delta \right) - \frac{1}{3} \right] \, , \end{equation} where, since the orbit is elliptic, the rotational energy $E_{\text{rot}}$ can be rewritten {by using Eq.~(\ref{eq:ellipse}) to replace $r$ in Eq.~(\ref{eq:rotE})} \begin{equation} E_{\text{rot}} =\frac{1}{2} B \dot{\delta}^2 - \frac{3 G M_s }{2 s^3} \left(B -A\right) \left(\frac{1 +\epsilon \cos \phi}{1 - \epsilon^2} \right)^3 \left[ \cos^2 \left( \phi - \delta \right) - \frac{1}{3} \right] \, . \end{equation} In addition, to simplify several of the equations that appear later on in this work, it is useful to introduce the quantity \begin{equation} Q \equiv \frac{3 G M_s }{2 s^3} \left(B -A\right) \, . \end{equation} Furthermore, following \cite{goldreich_peale}, it is now convenient to introduce the angle \begin{equation} \label{eq:gamma} \gamma \equiv \delta - p M \, , \end{equation} where $M$ is the mean anomaly of the orbital motion~\cite{Montenbruck,Meeus} and $p$ is a generic dimensionless parameter.
By taking the time derivative of Eq.~(\ref{eq:gamma}) one finds $\dot{\gamma} = \dot{\delta} - p n$, where $n = 2 \pi/ T$ is the mean motion and $T$ the orbital period. If $\dot{\delta}$ remains close to $p n$ throughout every orbit, $\gamma$ is almost constant. In other words, in the case {where} the planet is spinning with a constant angular speed {that matches the product $p n$,} there is a constant phase shift between the angles $\delta$ and $pM$. {This behavior can be studied in detail by replacing} \begin{equation} \delta = \gamma + p M \, , \end{equation} in Eq.~(\ref{eq:rotE}), and further replacing $\phi$ with its expression in terms of $\epsilon$ and $M$. In order to find the relation between $\phi$, $\epsilon$, and $M$, it is useful to start by observing that simple geometric considerations (see Figure~\ref{fig:elliptic-anomaly}) allow one to write a relation between the {true anomaly} $\phi$, the eccentricity $\epsilon$ and the elliptic anomaly ${\mathcal E}$: \begin{equation} \label{eq:phiintermsofE} \cos \phi = \frac{\cos {\mathcal E} -\epsilon}{1 - \epsilon \cos {\mathcal E}} \, . \end{equation} The relation in Eq.~(\ref{eq:phiintermsofE}) can be solved w.r.t. the elliptic anomaly to find \begin{equation}\label{eq:Eintermsofphi} {\mathcal E} = \arccos{\left(\frac{\cos \phi + \epsilon}{1+ \epsilon \cos \phi} \right)} \, . \end{equation} Subsequently, it is possible to relate the elliptic anomaly to the mean anomaly through \emph{Kepler's equation}~\cite{danby} {\begin{equation} \label{eq:kepler} M = {\mathcal E} - \epsilon \sin {\mathcal E} \, . \end{equation}} By combining Eq.~(\ref{eq:kepler}) with Eq.~(\ref{eq:Eintermsofphi}) it is possible to write $M$ as a function of $\phi$.
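Kepler's equation has no closed-form solution, but it is easily solved numerically. The following sketch (an illustration; function names are ours) applies a Newton iteration and then recovers the true anomaly, checking consistency with the half-angle equivalent of Eq.~(\ref{eq:phiintermsofE}):

```python
import numpy as np

def eccentric_anomaly(M, eps, tol=1e-14):
    """Solve Kepler's equation  M = E - eps*sin(E)  by Newton iteration."""
    E = M  # adequate starting guess for moderate eccentricities
    for _ in range(100):
        dE = (E - eps * np.sin(E) - M) / (1.0 - eps * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def true_anomaly(M, eps):
    """True anomaly via the half-angle form equivalent to Eq. (phiintermsofE)."""
    E = eccentric_anomaly(M, eps)
    return 2.0 * np.arctan2(np.sqrt(1.0 + eps) * np.sin(E / 2.0),
                            np.sqrt(1.0 - eps) * np.cos(E / 2.0))

eps = 0.206  # Mercury's orbital eccentricity
for M in (0.5, 1.5, 2.5):
    E = eccentric_anomaly(M, eps)
    phi = true_anomaly(M, eps)
    # residual of Kepler's equation and consistency with Eq. (phiintermsofE)
    print(M, E - eps * np.sin(E) - M,
          np.cos(phi) - (np.cos(E) - eps) / (1.0 - eps * np.cos(E)))
```

The half-angle form is used instead of the $\arccos$ in Eq.~(\ref{eq:Eintermsofphi}) because it selects the correct quadrant automatically.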
{This function can be written as} a power series in the eccentricity: \begin{align} M &= \phi - 2 \epsilon \sin \phi + \left(\frac{3}{4}\epsilon^2 + \frac{1}{8} \epsilon^4 + \frac{3}{64} \epsilon^6 \right) \sin \left( 2\phi \right) - \left(\frac{1}{3}\epsilon^3 + \frac{1}{8} \epsilon^5 \right) \sin \left( 3 \phi \right) \nonumber \\ & +\left(\frac{5}{32}\epsilon^4 + \frac{3}{32} \epsilon^6 \right) \sin \left( 4 \phi \right) - \frac{3}{40}\epsilon^5 \sin\left(5 \phi \right) + \frac{7}{192}\epsilon^6 \sin\left(6 \phi \right) +{\mathcal O}\left(\epsilon^7 \right) \, . \end{align} Consequently, the inverse relation {between the true anomaly and mean anomaly} can be written as a power series in $\epsilon$ as well: \begin{align} \label{eq:phivsM} \phi &= M + \left( 2 \epsilon - \frac{1}{4} \epsilon^3 + \frac{5}{96} \epsilon^5 \right) \sin{M} +\left(\frac{5}{4} \epsilon^2 - \frac{11}{24} \epsilon^4 + \frac{17}{192} \epsilon^6\right) \sin{\left(2M\right)} + \left( \frac{13}{12} \epsilon^3 -\frac{43}{64} \epsilon^5 \right)\sin\left(3 M\right) \nonumber \\ &+ \left(\frac{103}{96} \epsilon^4 - \frac{451}{480} \epsilon^6 \right)\sin\left(4 M\right) +\frac{1097}{960} \epsilon^5 \sin\left(5 M\right)+\frac{1223}{960} \epsilon^6 \sin\left(6 M\right)+ \mathcal{O} \left(\epsilon^7 \right) \, . \end{align} {As mentioned previously, for the purposes of this work it is sufficient to consider the case in which $\dot{\gamma}$ is small.} {In this approximation,} after inserting Eq.~(\ref{eq:phivsM}) in Eq.~(\ref{eq:rotE}), it is then possible to integrate over the mean anomaly $M$ in order to average the rotational energy over a complete orbit while keeping $\gamma$ fixed. {The integration with respect to the mean anomaly is equivalent to an integration with respect to time, since the mean anomaly is a linear function of time. 
In addition, averaging over the mean anomaly, an approach already followed by Goldreich and Peale~\cite{goldreich_peale}, is straightforward once the relation {between the true anomaly and mean anomaly}, Eq.~(\ref{eq:phivsM}), is known.} One then finds \begin{equation} \label{eq:averageU} \frac{1}{2 \pi} \int_{0}^{2 \pi} d M E_{\text{rot}} = \frac{1}{2} B \left(\dot{\gamma} + p n\right)^2 + Q \left[S(\epsilon) + R(p,\epsilon) \cos\left( 2(p \pi +\gamma ) \right) \sin{\left(2 p \pi\right)} \right] \, . \end{equation} The functions $S$ and $R$ have the expansions \begin{align} \label{eq:expSandR} S(\epsilon) =& -\frac{1}{6} -\frac{1}{4} \epsilon^2 - \frac{5}{8} \epsilon^4 +\mathcal{O} \left( \epsilon^6 \right)\nonumber \\ R(p,\epsilon) =& \frac{1}{2 \pi} \Biggl[ \frac{1}{p-1} \left(-\frac{1}{2} +\frac{5}{4} \epsilon^2 - \frac{13}{32} \epsilon^4 \right) + \frac{1}{p - \frac{3}{2}} \left(- \frac{7}{4} \epsilon + \frac{123}{32} \epsilon^3 \right) +\frac{1}{p -\frac{1}{2}} \left(\frac{1}{4} \epsilon - \frac{1}{32} \epsilon^3 \right)+ \frac{1}{p -\frac{5}{2}} \left(-\frac{845}{96} \epsilon^3 \right) \nonumber \\ & + \frac{1}{p +\frac{1}{2}} \left(-\frac{1}{96} \epsilon^3 \right) +\frac{1}{p -3} \left(-\frac{533}{32} \epsilon^4 \right) +\frac{1}{p -2} \left({-\frac{17}{4} \epsilon^2} + \frac{115}{12} \epsilon^4 \right) +\frac{1}{p +1} \left(-\frac{1}{48} \epsilon^4 \right) \Biggr]+\mathcal{O} \left( \epsilon^5 \right)\,. \end{align} The first term in Eq.~(\ref{eq:averageU}) can be interpreted as {the} kinetic energy for the rotation of the planet around its axis, while the second term can be read as the potential energy for the same variable. One can then build the Lagrangian for $\gamma$ \begin{equation} \mathcal{L} = \frac{1}{2} B \left(\dot{\gamma} + pn \right)^2 - Q \left[S(\epsilon) + R(p,\epsilon) \cos\left( 2(p \pi +\gamma ) \right) \sin{\left(2 p \pi\right)} \right] \, . 
\end{equation} The equation of motion for $\gamma$ is therefore \begin{equation} \label{eq:eom} \frac{d}{dt} \left[B \left(\dot{\gamma} + p n\right) \right] - 2 Q R(p,\epsilon) \sin{\left(2 p \pi\right)} \sin\left( 2(p \pi +\gamma )\right) = B \ddot{\gamma} - 2 Q R(p,\epsilon) \sin{\left(2 p \pi\right)} \sin\left( 2(p \pi +\gamma )\right) = 0 \,. \end{equation} {By using a standard trigonometric identity, Eq.~(\ref{eq:eom})} can be rewritten as \begin{equation} \label{eq:eom2} B \ddot{\gamma} - 2 Q R(p,\epsilon) \sin{\left(2 p \pi\right)} \left[ \sin\left( 2p \pi\right)\cos( 2\gamma ) + \cos\left( 2p \pi\right)\sin( 2\gamma ) \right] = 0 \,. \end{equation} Since the function $R(p,\epsilon)$ has at most single poles for $p=k/2$ with $k \in {\mathbb{N}}$, {the product $R(p,\epsilon) \sin(2 p \pi)$ has a finite limit for $p \to k/2$. One can then observe that in the square bracket in Eq.~(\ref{eq:eom2}) the factor $\sin(2 p \pi)$ vanishes for $p = k/2$, while $\cos(2 p \pi) = \pm 1$ for $p =k/2$.} Consequently, for integer and half-integer values of the parameter $p$ the term proportional to $\cos (2 \gamma)$ in Eq.~(\ref{eq:eom2}) vanishes, while the term proportional to $\sin (2 \gamma)$ survives. Therefore, for $p = k/2$ the equation of motion for $\gamma$ becomes \begin{equation} \label{eq:pendulumgamma} B \ddot{\gamma} - 2 Q R \left(p,\epsilon \right) \sin \left( 2 p \pi \right) \cos \left( 2 p \pi \right)\sin \left(2 \gamma \right) = 0 \, . \end{equation} Eq.~(\ref{eq:pendulumgamma}) is the same type of equation of motion that is satisfied by the angle between the vertical direction and the thread supporting a simple pendulum. 
This is indeed the result outlined at the beginning of \cite{goldreich_peale}, and the function $H(p,\epsilon)$ defined in \cite{goldreich_peale} is related to the function $R(p,\epsilon)$ defined above through the equation \begin{equation} H\left(p, \epsilon \right) = -2 R \left(p,\epsilon \right) \sin \left( 2 p \pi\right) \cos \left( 2 p \pi\right)\, . \end{equation} {The expansion of $H$ with respect to $\epsilon$ can be easily obtained starting from the expansion in Eq.~(\ref{eq:expSandR}).} Consequently, the equation of motion for $\gamma$ can be further rewritten as \begin{equation} \label{eq:pendulumgamma2} B \ddot{\gamma} + Q H \left(p,\epsilon \right) \sin \left(2 \gamma \right) = 0 \, . \end{equation} {It is important to stress once more that Eq.~(\ref{eq:pendulumgamma2}) was obtained under the assumption that $\dot{\gamma}$ is small and that $p=k/2$.} If $H$ in Eq.~(\ref{eq:pendulumgamma2}) is positive, the equation describes the oscillations of $\gamma$ about $\gamma = 0$ {and} the amplitude of the oscillation depends on the initial value of $\gamma$, {since Eq.~(\ref{eq:pendulumgamma2}) is of the same type as the pendulum equation for an arbitrary swinging angle}. {In addition,} if both the initial value of $\gamma$ and $\dot{\gamma}$ are small {and $H > 0$}, {one can replace $\sin(2 \gamma) \approx 2 \gamma$ in } Eq.~(\ref{eq:pendulumgamma2}){, which then reduces to the equation of motion of a harmonic oscillator.} {In that case, Eq.~(\ref{eq:pendulumgamma2})} implies that $\gamma$ {will remain} $\approx 0$ at all times and, consequently, ${\delta} \approx p M$ with $p = k/2$, so that the periods of the spin and orbital motions are locked in an integer or half-integer ratio. Therefore, if the longest semiaxis of the planet (denoted by $a$) points toward the sun at perihelion, it is again pointing toward the sun after two orbital periods. In these two orbital periods the planet will have completed $2 p = k$ revolutions around its axis.
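Both the expansion of $H$ and the pendulum behavior can be verified numerically. At $p=k/2$, averaging the $\cos^{2}$ term of $E_{\text{rot}}$ directly shows that $H(p,\epsilon)$ equals the orbit average of $(s/r)^{3}\cos 2\left(\phi-pM\right)$. The sketch below (an illustration, with dimensionless $B=Q=1$) evaluates this average by quadrature, compares it with the series following from Eq.~(\ref{eq:expSandR}), and integrates Eq.~(\ref{eq:pendulumgamma2}) to recover the small-oscillation period $2\pi\sqrt{B/(2QH)}$:

```python
import numpy as np

def avg_H(p, eps, n=20000):
    """Numerical H(p, eps): at p = k/2 the pendulum coefficient equals the
    orbit average <(s/r)^3 cos 2(phi - p*M)>, evaluated here by solving
    Kepler's equation on a grid of mean anomalies."""
    M = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    E = M.copy()
    for _ in range(50):  # vectorized Newton iterations for M = E - eps sin E
        E -= (E - eps * np.sin(E) - M) / (1.0 - eps * np.cos(E))
    phi = 2.0 * np.arctan2(np.sqrt(1.0 + eps) * np.sin(E / 2.0),
                           np.sqrt(1.0 - eps) * np.cos(E / 2.0))
    s_over_r = 1.0 / (1.0 - eps * np.cos(E))  # since r = s (1 - eps cos E)
    return np.mean(s_over_r**3 * np.cos(2.0 * (phi - p * M)))

eps = 0.206  # Mercury
# compare with the epsilon-expansions obtained from Eq. (expSandR)
print(avg_H(1.0, eps), 1 - 5 * eps**2 / 2 + 13 * eps**4 / 16)  # p = 1
print(avg_H(1.5, eps), 7 * eps / 2 - 123 * eps**3 / 16)        # p = 3/2

# libration of gamma: integrate B*gamma'' + Q*H*sin(2 gamma) = 0 with B = Q = 1
H = avg_H(1.5, eps)
omega = np.sqrt(2.0 * H)        # predicted small-oscillation frequency
dt, gamma, v = 1e-3, 0.01, 0.0  # small initial libration angle
a = -H * np.sin(2.0 * gamma)
crossings = []
for i in range(200_000):        # velocity-Verlet (leapfrog) integration
    v += 0.5 * dt * a
    gamma_new = gamma + dt * v
    a = -H * np.sin(2.0 * gamma_new)
    v += 0.5 * dt * a
    if gamma > 0.0 >= gamma_new:  # downward zero crossing once per period
        crossings.append(i * dt)
    gamma = gamma_new

period = np.mean(np.diff(crossings))
print(period, 2.0 * np.pi / omega)  # should agree for small amplitudes
```

For Mercury's eccentricity the quadrature reproduces the leading terms $H(1,\epsilon)\approx 1-\tfrac{5}{2}\epsilon^{2}$ and $H(3/2,\epsilon)\approx \tfrac{7}{2}\epsilon$, and the simulated libration period matches the harmonic prediction.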
The frequency of small oscillations {of $\gamma$ around the value $\gamma = 0$} is \begin{equation} \omega = \sqrt{\frac{2Q H\left(p,\epsilon\right)}{B}} \, . \end{equation} If $H < 0$ instead, it is possible to see that $\gamma$ oscillates around the value $\gamma = \pi/2$. In this case, if the shortest semiaxis of the planet (denoted by $b$) is pointing toward the sun at perihelion, this axis will return to point toward the sun after two orbital periods. Also in this case, in these two orbital periods, the planet will have completed $2 p = k$ revolutions around its axis. \section{Resonant orbits \label{sec:rotpoten}} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.49\textwidth]{Fig03a.pdf} & \includegraphics[width=0.49\textwidth]{Fig03b.pdf} \end{tabular} \caption{The function $F$ versus $\gamma$ for fixed values of $p = k/2$, for $\epsilon = 0.206$. } \label{fig:minimaofFgamma0} \end{figure} It is interesting to analyze the dependence of the potential energy term in Eq.~(\ref{eq:averageU}) on the variable $\gamma$ and the parameter $p$. This dependence is encompassed in the function \begin{equation} \label{eq:F} F(p,\epsilon,\gamma) \equiv R\left(p,\epsilon\right) \sin \left(2 p \pi \right) \cos\left(2 \left( p \pi + \gamma \right) \right) \, . \end{equation} By plotting the function $F$ with respect to $\gamma$, while keeping $p$ and $\epsilon$ fixed, it is possible to observe that for $p = k/2$ with $k \in {\mathbb N}$ (with the exception of $p=1/2$), there is a minimum of the function located at $\gamma = 0$. The location of the minimum of $F$ as a function of $\gamma$ is shown in the left panel of Figure~\ref{fig:minimaofFgamma0}. The depth of the minima of $F$ at $p = k/2$ depends on the order in $\epsilon$ at which the corresponding pole enters the function $R\left(p,\epsilon \right)$, {as well as on the residue of the pole}.
{In particular, the function $F$ has a minimum at $\gamma = 0$ if the sign of the residue at a given $p = k/2$ in the function $R$ is negative.} As expected, the deepest minimum corresponds to $p = 1$, since the function $R$ has a simple pole at $p = 1$ already at zeroth order in $\epsilon$ (see Eq.~(\ref{eq:expSandR})). The second deepest minimum {in the function $F$} is the one that corresponds to the Mercury spin-orbit resonance, $p = 3/2$. The simple pole at $p = 3/2$ appears at order $\epsilon$ in the expansion of the function $R$ in Eq.~(\ref{eq:expSandR}). {As the function $R$ has a simple pole at $p = 2$ whose residue is proportional to $\epsilon^3$, the third deepest minimum appears for $p=2$. The poles at other integer and half-integer values of $p$ in the function $R(p,\epsilon)$, which are proportional to higher orders of $\epsilon$, lead to shallower minima in $F(p, \epsilon,\gamma)$ at $\gamma = 0$.} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.50\textwidth]{Fig04a.pdf} & \includegraphics[width=0.50\textwidth]{Fig04b.pdf} \end{tabular} \caption{Schematic representation of the orbit of Mercury in a 3:2 spin-orbit resonance. The left panel refers to the first orbit of the planet around the star, the right panel shows the second orbit of the planet around the star. A semi-major axis of the planet is drawn in red in order to show the angle of rotation of the planet around its axis.} \label{fig:so3to2} \end{figure} {If $\dot{\gamma} = 0$, the parameter $p$ is simply the ratio between the rotational angular velocity $\dot{\delta}$ and the mean motion $n = 2 \pi/T$. Since the average rotational velocity is inversely proportional to the time that it takes the planet to complete a rotation around its axis, $p$ is the ratio of the orbital period over the rotational period.
Therefore, {for $p = 3/2$,} the orbital period is longer than the rotation period of the planet around its axis; in particular, the planet completes a revolution around its axis in a time that corresponds to $2/3$ of its year. This is equivalent to saying that the planet completes three revolutions around its axis every two {of its} years. Figure~\ref{fig:so3to2} shows a stroboscopic view of two orbits of the planet around the star at the configuration that corresponds to the minimum of the potential for the $p = 3/2$ spin-orbit resonance. In the view shown in the figure, the planet orbits the star and rotates around its axis in a counterclockwise direction. Since the minimum of the potential occurs at $\gamma = 0$, if the planet is at perihelion, where $\phi = M =0$, also the angle $\delta$ should be zero; this particular instant in time is labeled by \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}} in the left panel of Figure~\ref{fig:so3to2}. The red line drawn on the ellipse representing the planet shows the semi-major axis that is used to measure the angle of rotation of the planet around its axis: The angle of rotation $\delta$ is the angle between the red line and a line parallel to the major axis of the orbit. In the first year, the planet must complete one and a half revolutions around its axis, \emph{i.e.} {it} must rotate by an angle $\delta = 3/2 \pi$ every half a year. For this reason, at aphelion the red line drawn on the planet in Figure~\ref{fig:so3to2} is perpendicular to the line (not shown in the figure) that joins the planet to the star, as shown at the point labeled \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {4}}} in the left panel. 
After one year, when the planet returns to perihelion, the red line lies along the line joining the planet to the star, but it is pointing toward the star, as shown by the point labeled by \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {7}}} in the right panel of the figure, rather than away from the star as at point \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}. At the end of the second year instead, at perihelion the planet returns to the same configuration that it had at the beginning of the period shown in the left figure, with the red line parallel to the line joining the planet to the star but pointing away from the star.} The other simple pole that appears at order $\epsilon$ in Eq.~(\ref{eq:expSandR}), $p = 1/2$, does not correspond to a minimum, but to a maximum of the function $F$ at $\gamma = 0$. {This is due to the fact that the pole at $p = 1/2$ is the only pole among the ones explicitly written down in Eq.~(\ref{eq:expSandR}) whose coefficient at the lowest order in $\epsilon$ is positive rather than negative. Indeed, by expanding the trigonometric functions in Eq.~(\ref{eq:F}) for $p \to k/2$ with $k \in {\mathbb N}$, one finds \begin{equation} \sin \left(2 p \pi \right) \cos\left(2 \left( p \pi + \gamma \right) \right) = 2 \pi \left( p - \frac{k}{2} \right) \cos \left(2 \gamma \right) + {\mathcal O} \left(\left( p - \frac{k}{2} \right)^2 \right) \, . \end{equation} By multiplying this expansion by $R$ and then setting $p=k/2$, one finds that the function $F$ is proportional to \begin{displaymath} 2 \pi \operatorname{Res}_{p = \frac{k}{2}} R \left(p,\epsilon \right) \cos \left(2 \gamma \right) \, . \end{displaymath} Consequently, for $p = k/2$, the function $F$ shows a minimum at $\gamma=0$ if the residue of the simple pole at $p = k/2$ is negative, and a maximum if it is positive.
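The leading term of this expansion is easily verified numerically; the sketch below compares the exact angular factor appearing in Eq.~(\ref{eq:F}) with its linearization around $p = k/2$ (the values of $k$, $\gamma$ and the offsets in $p$ are arbitrary test inputs).

```python
import math

def trig_factor(p, gamma):
    """Exact angular factor sin(2*p*pi) * cos(2*(p*pi + gamma))."""
    return math.sin(2 * p * math.pi) * math.cos(2 * (p * math.pi + gamma))

def trig_factor_expanded(p, k, gamma):
    """Leading term of the expansion around p = k/2."""
    return 2 * math.pi * (p - k / 2) * math.cos(2 * gamma)

k, gamma = 3, 0.3          # check around the 3:2 resonance, p = 3/2
for dp in (1e-2, 1e-3, 1e-4):
    p = k / 2 + dp
    print(dp, trig_factor(p, gamma), trig_factor_expanded(p, k, gamma))
```

The agreement improves linearly as $p - k/2$ shrinks, as expected from the quadratic remainder term.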
} For the value $p =1/2$ the function $F$ has a minimum at $\gamma = \pi/2$ {as shown in the right panel of Figure~\ref{fig:minimaofFgamma0}.} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.50\textwidth]{Fig05a.pdf} & \includegraphics[width=0.50\textwidth]{Fig05b.pdf} \end{tabular} \caption{Schematic representation of the orbit of a planet around the star in a 1:2 spin-orbit resonance. The left panel refers to the first orbit of the planet around the star, the right panel shows the second orbit of the planet around the star. A semi-major axis of the planet is drawn in red in order to show the angle of rotation of the planet around its axis.} \label{fig:so1to2} \end{figure} As discussed above, this situation corresponds to the case in which the orbital mean anomaly is out of phase with respect to the rotation angle {$\delta$} by $\pi/2$, \emph{i.e.} the shortest axis of the planet $b$ points toward the Sun at perihelion. {In this configuration, the planet completes a full revolution around its axis every two years, as shown in Figure~\ref{fig:so1to2}. Since in this configuration $\gamma = \pi/2$, the value of the rotation angle $\delta$ at perihelion, \emph{i.e.} $\phi = M = 0$, should also be equal to $\pi/2$, as shown at the position labeled by \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}} in the left panel of Figure~\ref{fig:so1to2}. The red line is therefore perpendicular to the line joining the star to the planet at that point. The planet rotates by an angle of $\pi/2$ every half a year. For this reason the red line is parallel to the line joining the star to the planet at aphelion (points \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {3}}} and \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {7}}} in Figure~\ref{fig:so1to2}).
After completing the first orbit (point \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {5}}} in the right panel) the red line is again perpendicular to the line joining the planet to the star, but pointing in the opposite direction with respect to the initial position. Also in this case, the planet returns to the initial configuration after two orbits.} \section{Conclusions} \label{sec:conclusions} This work revisits the spin-orbit resonances of a planet orbiting a star in an elliptic orbit. A pedagogical approach is employed to show that for an ellipsoidal planet, the quadrupole correction to the two-body gravitational potential implies that several stable configurations in which the planet rotates around its axis an integer number of times for every two revolutions around the star are possible. Among these situations, the most energetically favored is the one in which the planet spins around its axis exactly once for every revolution around the star. This situation corresponds to the well-known tidal-locking phenomenon and applies not only to a star-planet system but to any two-body gravitationally bound system. Indeed this tidal locking occurs in the Moon-Earth system and it is the reason why the Moon has a far side always hidden from Earth. A planet can be tidally locked to the star in a 1:1 spin-orbit resonance even if the planet is a perfect sphere. Spin-orbit resonances characterized by other ratios can manifest themselves only if the orbit of the planet has a non-negligible eccentricity and the planet has a non-perfectly spherical shape. An analysis of an appropriately defined effective potential for the rotation of the ellipsoidal planet around its axis reveals that the second energetically most favored spin-orbit resonance is the one in which the planet spins three times around its axis for every two orbits around the star.
In this configuration, the longest semi-axis of the ellipsoidal planet is always aligned with the major axis of the elliptic orbit at perihelion and perpendicular to it at aphelion. This situation is also observed in nature: indeed, it describes the orbit of Mercury around the Sun, which is in fact locked in a 3:2 spin-orbit resonance. The large eccentricity of Mercury's orbit in comparison to the other planets of the solar system and Mercury's ellipsoidal shape make this resonance more stable for Mercury than for other planets. Similar, but energetically less favored, resonances are possible for other integer or half-integer ratios between the rotational and orbital period of the planet, such as 2:1, 5:2, etc. A spin-orbit resonance characterized by a 1:2 ratio is instead possible when the longest semi-axis of the ellipsoidal planet is perpendicular to the line joining the planet to the star at perihelion, and parallel to it at aphelion. In this paper, the spin-orbit resonances are investigated both by means of the equation of motion for a suitably defined angle $\gamma$ and through the study of the shape of a potential energy term for the same angle $\gamma$. The presence of the spin-orbit resonances emerges in a straightforward way from the study of the potential. This study could be easily incorporated in a Classical Mechanics class for physics-major undergraduate students, and it would allow the instructor to provide the students with an application of the multipole expansion of the gravitational potential.
{The present study is complementary to the Exercise 2.19 found in Sussman and Wisdom's book, where, for the case of Mercury, the reader is asked to solve numerically the equations of motion satisfied by $\phi$ and $\delta$, and to verify a posteriori that, with an appropriate choice of the initial conditions, the quantity $\delta - 3/2 \phi$ oscillates.} A separate and more complicated question is how likely it is for a planet like Mercury to be captured in such a resonance. No attempt is made to study {or} answer this question in this work. For the Mercury-Sun system, this aspect was studied in detail in~\cite{correia1,correia2}: These studies require one to take into account the fact that Mercury is not a perfectly rigid body, an approach that goes beyond the scope of this work. \acknowledgements The authors would like to thank Joel Weisberg for bringing the phenomenon of tidally locked spin-orbit resonances to their attention and Giovanni Ossola for discussions and suggestions, as well as a careful reading of the manuscript. The work of Christopher Clouse was sponsored by the CUNY Research Scholars Program (CRSP).
\section{Introduction} \setcounter{equation}{0} The weak nuclear process p + p $\to$ D + e$^+$ + $\nu_{\rm e}$, the solar proton burning or proton--proton (pp) fusion, plays an important role in Astrophysics [1,2]. It initiates the p--p chain of nucleosynthesis in the Sun and the main--sequence stars [1,2]. In the Standard Solar Model (SSM) [3] the total (or bolometric) luminosity of the Sun $L_{\odot} = (3.846\pm 0.008)\times 10^{26}\,{\rm W}$ is normalized to the astrophysical factor $S_{\rm pp}(0)$ for pp fusion. The recommended value $S_{\rm pp}(0) = 4.00\times 10^{-25}\,{\rm MeV b}$ [4] has been found by averaging over the results obtained in the Potential model approach (PMA) [5,6] and the Effective Field Theory (EFT) approach [7,8]. However, as has been shown recently in Ref.[9], {\it the inverse and forward helioseismic approach indicate the higher values of $S_{\rm pp}(0)$ seem more favoured}, for example, $S_{\rm pp}(0) = 4.20\times 10^{-25}\,{\rm MeV b}$ and higher [9]. Of course, accounting for the experimental errors, the recommended value does not contradict the result obtained in Ref.[9]. In Refs.[10-13] we have developed a relativistic field theory model of the deuteron (RFMD). In turn, in Ref.[14] we have suggested a modified version of the RFMD which is not well defined due to a violation of Lorentz invariance of the effective four--nucleon interaction describing N + N $\to$ N + N transitions. This violation has turned out to be incompatible with a dominance of one--nucleon loop anomalies which are Lorentz covariant. Thereby, the astrophysical factor $S_{\rm pp}(0)$ calculated in the modified version of the RFMD [14] and enhanced by a factor of 1.4 with respect to the recommended value [4] is not well established. This result demands confirmation within the original RFMD [10--13] by using the technique expounded in Ref.[13]. As has been shown in Ref.\,[12], the RFMD is motivated by QCD.
The deuteron appears in the nuclear phase of QCD as a neutron--proton collective excitation -- a Cooper np--pair induced by a phenomenological local four--nucleon interaction. Strong low--energy interactions of the deuteron coupled to itself and other particles are described in terms of one--nucleon loop exchanges. The one--nucleon loop exchanges allow one to transfer nuclear flavours from an initial to a final nuclear state in a minimal way and to take into account contributions of nucleon--loop anomalies determined completely by one--nucleon loop diagrams. The dominance of contributions of nucleon--loop anomalies has been justified in the large $N_C$ expansion, where $N_C$ is the number of quark colours [13]. Unlike the PMA and the EFT approach, the RFMD takes into account non--perturbative contributions of high--energy (short--distance) fluctuations of virtual nucleon ($N$) and anti--nucleon ($\bar{N}$) fields, $N\bar{N}$ fluctuations, in the form of one--nucleon loop anomalies. In accord with the analysis carried out in Refs.[15], nucleon--loop anomalies can be interpreted as non--perturbative contributions of the nucleon Dirac sea. The description of one--nucleon loop anomalies goes beyond the scope of both the PMA and the EFT approach due to the absence in these approaches of anti--nucleon degrees of freedom related to the nucleon Dirac sea. However, one should notice that in low--energy nuclear physics the nucleon Dirac sea cannot be fully ignored [16]. For example, high--energy $N\bar{N}$ fluctuations of the nucleon Dirac sea polarized by the nuclear medium decrease the scalar nuclear density in the nuclear interior of finite nuclei by 15$\%$ [16]. This effect has been obtained within quantum field theoretic approaches in terms of one--nucleon loop exchanges. In this paper we revise the value of $S_{\rm pp}(0)$ obtained in Ref.\,[14].
For this aim we apply the technique developed in the RFMD [13] for the description of contributions of low--energy elastic nucleon--nucleon scattering in the ${^1}{\rm S}_0$--state to amplitudes of electromagnetic and weak nuclear processes. This technique implies the summation of an infinite series of one--nucleon loop diagrams and the evaluation of the result of the summation in leading order in the large $N_C$ expansion [13]. The application of this method to the evaluation of the cross sections for the anti--neutrino disintegration of the deuteron induced by charged $\bar{\nu}_{\rm e}$ + D $\to$ e$^+$ + n + n and neutral $\bar{\nu}_{\rm e}$ + D $\to$ $\bar{\nu}_{\rm e}$ + n + p weak currents gave results in good agreement with the experimental data. The reaction $\bar{\nu}_{\rm e}$ + D $\to$ e$^+$ + n + n is, in the sense of charge independence of weak interaction strength, equivalent to the reaction p + p $\to$ D + e$^+$ + $\nu_{\rm e}$. Therefore, the application of the same technique to the description of the reaction p + p $\to$ D + e$^+$ + $\nu_{\rm e}$ should give a reliable result. The paper is organized as follows. In Sect.\,2 we evaluate the amplitude of the solar proton burning. We show that the contribution of low--energy elastic pp scattering in the ${^1}{\rm S}_0$--state with the Coulomb repulsion is described in agreement with low--energy nuclear phenomenology in terms of the S--wave scattering length and the effective range. This takes away the problem pointed out by Bahcall and Kamionkowski [17] that in the RFMD one cannot describe low--energy elastic pp scattering with the Coulomb repulsion in agreement with low--energy nuclear phenomenology. In Sect.\,3 we evaluate the astrophysical factor for the solar proton burning and obtain the value $S_{\rm pp}(0) = 4.08\times 10^{-25}\,{\rm MeV\, b}$, in good agreement with the recommended one $S_{\rm pp}(0) = 4.00\times 10^{-25}\,{\rm MeV\, b}$.
In Sect.\,4 we evaluate the cross section for the neutrino disintegration of the deuteron $\nu_{\rm e}$ + D $\to$ e$^-$ + p + p caused by the charged weak current with respect to $S_{\rm pp}(0)$. In Sect.\,5 we present the evaluation of the astrophysical factor $S_{\rm pep}(0)$ of the reaction p + e$^-$ + p $\to$ D + $\nu_{\rm e}$ or pep--process relative to $S_{\rm pp}(0)$. In the Conclusion we discuss the obtained results. \section{Amplitude of solar proton burning and low--energy elastic proton--proton scattering} \setcounter{equation}{0} For the description of low--energy transitions N + N $\to$ N + N in the reactions n + p $\to$ D + $\gamma$, $\gamma$ + D $\to$ n + p, $\bar{\nu}_{\rm e}$ + D $\to$ e$^+$ + n + n and p + p $\to$ D + e$^+$ + $\nu_{\rm e}$, where nucleons are in the ${^1}{\rm S}_0$--state, we apply the effective local four--nucleon interactions [11--13]: \begin{eqnarray}\label{label2.1} &&{\cal L}^{\rm NN \to NN}_{\rm eff}(x)=G_{\rm \pi NN}\,\{[\bar{n}(x)\gamma_{\mu} \gamma^5 p^c(x)][\bar{p^c}(x)\gamma^{\mu}\gamma^5 n(x)]\nonumber\\ &&+\frac{1}{2}\, [\bar{n}(x)\gamma_{\mu} \gamma^5 n^c(x)][\bar{n^c}(x)\gamma^{\mu}\gamma^5 n(x)] + \frac{1}{2}\,[\bar{p}(x)\gamma_{\mu} \gamma^5 p^c(x)] [\bar{p^c}(x)\gamma^{\mu}\gamma^5 p(x)]\nonumber\\ &&+ (\gamma_{\mu}\gamma^5 \otimes \gamma^{\mu}\gamma^5 \to \gamma^5 \otimes \gamma^5)\}, \end{eqnarray} where $n(x)$ and $p(x)$ are the operators of the neutron and the proton interpolating fields, $n^c(x) = C \bar{n}^T(x)$ and so on, where $C$ is the charge conjugation matrix and $T$ denotes transposition.
The effective coupling constant $G_{\rm \pi NN}$ is defined by [11--13] \begin{eqnarray}\label{label2.2} G_{\rm \pi NN} = \frac{g^2_{\rm \pi NN}}{4M^2_{\pi}} - \frac{2\pi a_{\rm np}}{M_{\rm N}} = 3.27\times 10^{-3}\,{\rm MeV}^{-2}, \end{eqnarray} where $g_{\rm \pi NN}= 13.4$ is the coupling constant of the ${\rm \pi NN}$ interaction, $M_{\pi}=135\,{\rm MeV}$ is the pion mass, $M_{\rm p} = M_{\rm n} = M_{\rm N} = 940\,{\rm MeV}$ is the mass of the proton and the neutron neglecting the electromagnetic mass difference, which is taken into account only for the calculation of the phase volumes of the final states of the reactions p + p $\to$ D + e$^+$ + $\nu_{\rm e}$, $\nu_{\rm e}$ + D $\to$ e$^-$ + p + p and p + e$^-$ + p $\to$ D + $\nu_{\rm e}$, and $a_{\rm np} = (-23.75\pm 0.01)\,{\rm fm}$ is the S--wave scattering length of np scattering in the ${^1}{\rm S}_0$--state. The effective Lagrangian for the low--energy nuclear transition p + p $\to$ D + e$^+$ + $\nu_{\rm e}$ has been calculated in Ref.\,[12] and reads \begin{eqnarray}\label{label2.3} {\cal L}_{\rm pp\to D e^+ \nu_{\rm e}}(x) = - i g_{\rm A}G_{\rm \pi NN}M_{\rm N}\frac{G_{\rm V}}{\sqrt{2}}\frac{3g_{\rm V}}{4\pi^2}\,D^{\dagger}_{\mu}(x)\,[\bar{p^c}(x)\gamma^5 p(x)]\,[\bar{\psi}_{\nu_{\rm e}}(x)\gamma^{\mu}(1 - \gamma^5) \psi_{\rm e}(x)], \end{eqnarray} where $G_{\rm V} = G_{\rm F}\,\cos \vartheta_C$ with $G_{\rm F} = 1.166\,\times\,10^{-11}\,{\rm MeV}^{-2}$ and $\vartheta_C$ are the Fermi weak coupling constant and the Cabibbo angle $\cos \vartheta_C = 0.975$, $g_{\rm A} = 1.2670 \pm 0.0035$ [18] and $g_{\rm V}$ is a phenomenological coupling constant of the RFMD related to the electric quadrupole moment of the deuteron $Q_{\rm D} = 0.286\,{\rm fm}^2$ [11]: $g^2_{\rm V} = 2\pi^2 Q_{\rm D}M^2_{\rm N}$. Here, $D_{\mu}(x)$, $\psi_{\nu_{\rm e}}(x)$ and $\psi_{\rm e}(x)$ are the interpolating fields of the deuteron and of the leptonic pair, respectively.
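As a consistency check, the numerical value quoted in Eq.~(\ref{label2.2}) can be reproduced directly from the listed inputs. The sketch below assumes $\hbar c = 197.327\,{\rm MeV\,fm}$ (a standard conversion constant, not stated in the text) to convert $a_{\rm np}$ from fm to ${\rm MeV}^{-1}$.

```python
# Check of Eq. (2.2): G_piNN = g_piNN^2/(4 M_pi^2) - 2*pi*a_np/M_N,
# with all quantities in MeV units (hbar*c = 197.327 MeV*fm assumed).
import math

hbar_c = 197.327          # MeV*fm (assumed conversion constant)
g_piNN = 13.4             # pi-N-N coupling constant
M_pi = 135.0              # pion mass, MeV
M_N = 940.0               # nucleon mass, MeV
a_np = -23.75 / hbar_c    # S-wave np scattering length, fm -> MeV^-1

G_piNN = g_piNN**2 / (4 * M_pi**2) - 2 * math.pi * a_np / M_N
print(G_piNN)             # ~3.27e-3 MeV^-2, as quoted in Eq. (2.2)
```

The pion-exchange term contributes about $2.46\times 10^{-3}\,{\rm MeV}^{-2}$ and the scattering-length term about $0.80\times 10^{-3}\,{\rm MeV}^{-2}$, reproducing the quoted total.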
The effective Lagrangian Eq.(\ref{label2.3}) defines the effective vertex of the low--energy nuclear transition p + p $\to$ D + e$^+$ + $\nu_{\rm e}$ \begin{eqnarray}\label{label2.4} i{\cal M}({\rm p} + {\rm p} \to {\rm D} + {\rm e}^+ + \nu_{e})&=& \,G_{\rm V}\,g_{\rm A} M_{\rm N}\,G_{\rm \pi NN}\,\frac{3g_{\rm V}}{4\pi^2}\, e^*_{\mu}(k_{\rm D})\,[\bar{u}(k_{\nu_{\rm e}})\gamma^{\mu} (1-\gamma^5) v(k_{\rm e^+})]\nonumber\\ &&\times\,[\bar{u^c}(p_2) \gamma^5 u(p_1)], \end{eqnarray} where $e^*_{\mu}(k_{\rm D})$ is the polarization 4--vector of the deuteron, and $u(k_{\nu_{\rm e}})$, $v(k_{\rm e^+})$, $u(p_2)$ and $u(p_1)$ are the Dirac bispinors of the neutrino, the positron and the two protons, respectively. In order to evaluate the contribution of low--energy elastic pp scattering we have to determine the effective vertex of the p + p $\to$ p + p transition accounting for the Coulomb repulsion between the protons. For this aim we suggest using the effective local four--nucleon interaction Eq.(\ref{label2.1}) and taking into account the Coulomb repulsion in terms of the explicit Coulomb wave function of the protons. This yields \begin{eqnarray}\label{label2.5} V_{\rm pp \to pp}(k',k) = G_{\rm \pi NN}\,\psi^*_{\rm pp}(k'\,)\, [\bar{u}(p'_2) \gamma^5 u^c(p'_1)]\,[\bar{u^c}(p_2) \gamma^5 u(p_1)]\,\psi_{\rm pp}(k), \end{eqnarray} where $\psi_{\rm pp}(k)$ and $\psi^*_{\rm pp}(k'\,)$ are the explicit Coulomb wave functions of the relative motion of the protons taken at zero relative radius, and $k$ and $k'$ are relative 3--momenta of the protons $\vec{k} = (\vec{p}_1 - \vec{p}_2)/2$ and $\vec{k}^{\,\prime} = (\vec{p}^{\;\prime}_1 - \vec{p}^{\;\prime}_2 )/2$ in the initial and final states.
We take the explicit form of $\psi_{\rm pp}(k)$ following Kong and Ravndal [8] (see also [19]) \begin{eqnarray}\label{label2.6} \psi_{\rm pp}(k) = e^{\textstyle - \pi/4k r_C}\,\Gamma\Bigg(1 + \frac{i}{2k r_C}\Bigg), \end{eqnarray} where $r_C = 1/M_{\rm N}\alpha = 28.82\,{\rm fm}$ and $\alpha = 1/137$ are the Bohr radius of a proton and the fine structure constant. The squared modulus of $\psi_{\rm pp}(k)$ is given by \begin{eqnarray}\label{label2.7} |\psi_{\rm pp}(k)|^2 = C^2_0(k) = \frac{\pi}{k r_C}\, \frac{1}{\displaystyle e^{\textstyle \pi/k r_C} - 1}, \end{eqnarray} where $C_0(k)$ is the Gamow penetration factor [1,2,19]. We would like to emphasize that the wave function Eq.(\ref{label2.6}) is defined by the regular solution of the Schr\"odinger equation for the pure Coulomb potential [19]. By taking into account the contribution of the Coulomb wave function and summing up an infinite series of one--proton loop diagrams, the amplitude of the solar proton burning can be written in the form \begin{eqnarray}\label{label2.8} \hspace{-0.5in}&&i{\cal M}({\rm p} + {\rm p} \to {\rm D} + {\rm e}^+ + \nu_{e}) = G_{\rm V}\,g_{\rm A} M_{\rm N}\,G_{\rm \pi NN}\,\frac{3g_{\rm V}}{4\pi^2}\, e^*_{\mu}(k_{\rm D})\,[\bar{u}(k_{\nu_{\rm e}})\gamma^{\mu} (1-\gamma^5) v(k_{\rm e^+})]\,{\cal F}^{\rm e}_{\rm pp}\nonumber\\ \hspace{-0.5in}&&\times\,\frac{[\bar{u^c}(p_2) \gamma^5 u(p_1)]\,\psi_{\rm pp}(k)}{\displaystyle 1 + \frac{G_{\rm \pi NN}}{16\pi^2}\int \frac{d^4p}{\pi^2i}\,|\psi_{\rm pp}(|\vec{p} + \vec{Q}\,|)|^2 {\rm tr}\Bigg\{\gamma^5 \frac{1}{M_{\rm N} - \hat{p} - \hat{P} - \hat{Q}}\gamma^5 \frac{1}{M_{\rm N} - \hat{p} - \hat{Q}}\Bigg\}}.
\end{eqnarray} where $P = p_1 + p_2 = (2\sqrt{k^2 + M^2_{\rm N}}, \vec{0}\,)$ is the 4--momentum of the pp--pair in the center of mass frame; $Q =a\,P + b\,K = a\,(p_1 + p_2) + b\,(p_1 - p_2)$ is an arbitrary shift of virtual momentum with arbitrary parameters $a$ and $b$, and in the center of mass frame $K = p_1 - p_2 = (0,2\,\vec{k}\,)$ [14]. The parameters $a$ and $b$ can be functions of $k$. The factor ${\cal F}^{\rm e}_{\rm pp}$ describes the overlap of the Coulomb and strong interactions [10]. It is analogous to the overlap integral in the PMA [5]. We calculate this factor below. The evaluation of the momentum integral proceeds as expounded in [14]. Keeping only the leading contributions in the large $N_C$ expansion [13,14], we obtain \begin{eqnarray}\label{label2.9} &&\int \frac{d^4p}{\pi^2i}\,|\psi_{\rm pp}(|\vec{p} + \vec{Q}\,|)|^2 {\rm tr}\Bigg\{\gamma^5 \frac{1}{M_{\rm N} - \hat{p} - \hat{P} - \hat{Q}}\gamma^5 \frac{1}{M_{\rm N} - \hat{p} - \hat{Q}}\Bigg\} =\nonumber\\ &&= - 8\, a\,(a + 1)\,M^2_{\rm N} + 8\,(b^2 - a\,(a + 1))\,k^2 - i\,8\pi\,M_{\rm N}\,k\,|\psi_{\rm pp}(k)|^2 = \nonumber\\ &&= - 8\, a\,(a + 1)\,M^2_{\rm N} + 8\,(b^2 - a\,(a + 1))\,k^2 - i\,8\pi\,M_{\rm N}\,k\,C^2_0(k). \end{eqnarray} Substituting Eq.(\ref{label2.9}) in Eq.(\ref{label2.8}) we get \begin{eqnarray}\label{label2.10} \hspace{-0.7in}&& i{\cal M}({\rm p} + {\rm p} \to {\rm D} + {\rm e}^+ + \nu_{e}) = G_{\rm V}\,g_{\rm A} M_{\rm N}\,G_{\rm \pi NN}\,\frac{3g_{\rm V}}{4\pi^2}\,{\cal F}^{\rm e}_{\rm pp}\nonumber\\ \hspace{-0.7in}&&\times \, e^*_{\mu}(k_{\rm D})\,[\bar{u}(k_{\nu_{\rm e}})\gamma^{\mu} (1-\gamma^5) v(k_{\rm e^+})]\,[\bar{u^c}(p_2) \gamma^5 u(p_1)]\,e^{\textstyle - \pi/4k r_C}\,\Gamma\Bigg(1 + \frac{i}{2k r_C}\Bigg)\nonumber\\ \hspace{-0.7in}&&\Bigg[ 1 - a(a+1) \frac{G_{\rm \pi NN}}{2\pi^2}\,M^2_{\rm N} + \frac{G_{\rm \pi NN}}{2\pi^2}\,(b^2 - a\,(a + 1))\,k^2 - i\,\frac{G_{\rm \pi NN}M_{\rm N}}{2\pi}\,k\,C^2_0(k)\Bigg]^{-1}\!\!\!.
\end{eqnarray} In order to reconcile the contribution of low--energy elastic pp scattering with low--energy nuclear phenomenology [19] we should make a few changes. For this aim we should rewrite Eq.(\ref{label2.10}) in more convenient form \begin{eqnarray}\label{label2.11} \hspace{-0.7in}&& i{\cal M}({\rm p} + {\rm p} \to {\rm D} + {\rm e}^+ + \nu_{e}) = G_{\rm V}\,g_{\rm A} M_{\rm N}\,G_{\rm \pi NN}\,\frac{3g_{\rm V}}{4\pi^2}\,{\cal F}^{\rm e}_{\rm pp}\nonumber\\ \hspace{-0.7in}&&\times \, e^*_{\mu}(k_{\rm D})\,[\bar{u}(k_{\nu_{\rm e}})\gamma^{\mu} (1-\gamma^5) v(k_{\rm e^+})]\,[\bar{u^c}(p_2) \gamma^5 u(p_1)]\,e^{\textstyle i\sigma_0(k)}\,C_0(k)\nonumber\\ \hspace{-0.7in}&&\Bigg[ 1 - a(a+1) \frac{G_{\rm \pi NN}}{2\pi^2}\,M^2_{\rm N} + \frac{G_{\rm \pi NN}}{2\pi^2}\,(b^2 - a\,(a + 1))\,k^2 - i\,\frac{G_{\rm \pi NN}M_{\rm N}}{2\pi}\,k\,C^2_0(k)\Bigg]^{-1}\!\!\!. \end{eqnarray} We have denoted \begin{eqnarray}\label{label2.12} e^{\textstyle - \pi/4k r_C}\,\Gamma\Bigg(1 + \frac{i}{2k r_C}\Bigg) = e^{\textstyle i\sigma_0(k)}\,C_0(k)\;,\; \sigma_0(k)&=&{\rm arg}\,\Gamma\Bigg(1 + \frac{i}{2k r_C}\Bigg), \end{eqnarray} where $\sigma_0(k)$ is a pure Coulomb phase shift. 
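The Gamow penetration factor $C^2_0(k)$ of Eq.~(\ref{label2.7}) that appears in these amplitudes is straightforward to tabulate. A minimal sketch, assuming $\hbar c = 197.327\,{\rm MeV\,fm}$ (not stated in the text) to convert the quoted $r_C = 28.82\,{\rm fm}$ into ${\rm MeV}^{-1}$:

```python
import math

hbar_c = 197.327               # MeV*fm (assumed conversion constant)
r_C = 28.82 / hbar_c           # proton Bohr radius, fm -> MeV^-1

def gamow_factor_sq(k):
    """C_0^2(k) of Eq. (2.7), k in MeV: Coulomb penetration probability."""
    x = math.pi / (k * r_C)
    return x / (math.exp(x) - 1.0)

# The factor is exponentially suppressed for k -> 0 (Coulomb repulsion)
# and tends to 1 for k large compared with 1/r_C.
for k in (0.5, 2.0, 10.0, 100.0):
    print(k, gamow_factor_sq(k))
```

The strong suppression at small $k$ is what makes the pp fusion rate so sensitive to the Coulomb treatment at solar energies.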
Now, let us rewrite the denominator of the amplitude Eq.(\ref{label2.11}) in the equivalent form \begin{eqnarray}\label{label2.13} && \Bigg\{\cos\sigma_0(k)\Bigg[1 - a(a+1) \frac{G_{\rm \pi NN}}{2\pi^2}\,M^2_{\rm N} + \frac{G_{\rm \pi NN}}{2\pi^2}\,(b^2 - a\,(a + 1))\,k^2\Bigg]\nonumber\\ && - \sin\sigma_0(k)\,\frac{G_{\rm \pi NN}M_{\rm N}}{2\pi}\,k\,C^2_0(k)\Bigg\}- i\,\Bigg\{\cos\sigma_0(k)\,\frac{G_{\rm \pi NN}M_{\rm N}}{2\pi}\,k\,C^2_0(k)\nonumber\\ && + \sin\sigma_0(k)\Bigg[1 - a(a+1) \frac{G_{\rm \pi NN}}{2\pi^2}\,M^2_{\rm N} + \frac{G_{\rm \pi NN}}{2\pi^2}\,(b^2 - a\,(a + 1))\,k^2\Bigg]\Bigg\}=\nonumber\\ &&= \frac{1}{Z}\Bigg[1 - \frac{1}{2}\,a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}k^2 + \frac{a^{\rm e}_{\rm pp}}{r_C}\,h(2 k r_C) + i\,a^{\rm e}_{\rm pp}\,k\,C^2_0(k)\Bigg], \end{eqnarray} where we have denoted \begin{eqnarray}\label{label2.14} \hspace{-0.3in}&& \frac{1}{Z}\Bigg[1 - \frac{1}{2}\,a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}k^2 + \frac{a^{\rm e}_{\rm pp}}{r_C}\,h(2 k r_C) \Bigg]= - \sin\sigma_0(k)\,\frac{G_{\rm \pi NN}M_{\rm N}}{2\pi}\,k\,C^2_0(k)\nonumber\\ \hspace{-0.3in}&& + \cos\sigma_0(k)\Bigg[1 - a(a+1) \frac{G_{\rm \pi NN}}{2\pi^2}\,M^2_{\rm N} + \frac{G_{\rm \pi NN}}{2\pi^2}\,(b^2 - a\,(a + 1))\,k^2\Bigg],\nonumber\\ \hspace{-0.3in}&& - \frac{1}{Z}\,a^{\rm e}_{\rm pp}\,k\,C^2_0(k) =\cos\sigma_0(k)\frac{G_{\rm \pi NN}M_{\rm N}}{2\pi}\,k\,C^2_0(k)\nonumber\\ \hspace{-0.3in}&& + \sin\sigma_0(k)\Bigg[1 - a(a+1) \frac{G_{\rm \pi NN}}{2\pi^2}\,M^2_{\rm N} + \frac{G_{\rm \pi NN}}{2\pi^2}\,(b^2 - a\,(a + 1))\,k^2\Bigg]. 
\end{eqnarray} Here $Z$ is a constant which will be removed by the renormalization of the wave functions of the protons, $a^{\rm e}_{\rm pp} = ( - 7.8196\pm 0.0026)\,{\rm fm}$ and $r^{\rm e}_{\rm pp} = 2.790\pm 0.014\,{\rm fm}$ [20] are the S--wave scattering length and the effective range of pp scattering in the ${^1}{\rm S}_0$--state with the Coulomb repulsion, and $h(2 k r_C)$ is defined by [19] \begin{eqnarray}\label{label2.15} h(2 k r_C) = - \gamma + \ln(2 k r_C) + \sum^{\infty}_{n=1}\frac{1}{n(1 + 4n^2k^2r^2_C)}. \end{eqnarray} The validity of the relations Eq.(\ref{label2.14}) assumes that the parameters $a$ and $b$ depend on the relative momentum $k$. After the changes Eq.(\ref{label2.11})--Eq.(\ref{label2.14}) the amplitude Eq.(\ref{label2.10}) takes the form \begin{eqnarray}\label{label2.16} \hspace{-0.2in}&& i{\cal M}({\rm p} + {\rm p} \to {\rm D} + {\rm e}^+ + \nu_{e}) = G_{\rm V}\,g_{\rm A} M_{\rm N}\,G_{\rm \pi NN}\,\frac{3g_{\rm V}}{4\pi^2}\,e^*_{\mu}(k_{\rm D})\,[\bar{u}(k_{\nu_{\rm e}})\gamma^{\mu} (1-\gamma^5) v(k_{\rm e^+})]\,{\cal F}^{\rm e}_{\rm pp}\nonumber\\ \hspace{-0.2in}&&\times \,\frac{C_0(k)}{\displaystyle1 - \frac{1}{2}\,a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}k^2 + \frac{a^{\rm e}_{\rm pp}}{r_C}\,h(2 k r_C) + i\,a^{\rm e}_{\rm pp}\,k\,C^2_0(k) }\,Z\,[\bar{u^c}(p_2) \gamma^5 u(p_1)].
\end{eqnarray} Following [14] and renormalizing the wave functions of the protons $\sqrt{Z}u(p_2) \to u(p_2)$ and $\sqrt{Z}u(p_1) \to u(p_1)$ we obtain the amplitude of the solar proton burning \begin{eqnarray}\label{label2.17} \hspace{-0.2in}&& i{\cal M}({\rm p} + {\rm p} \to {\rm D} + {\rm e}^+ + \nu_{e}) = G_{\rm V}\,g_{\rm A} M_{\rm N}\,G_{\rm \pi NN}\,\frac{3g_{\rm V}}{4\pi^2}\,e^*_{\mu}(k_{\rm D})\,[\bar{u}(k_{\nu_{\rm e}})\gamma^{\mu} (1-\gamma^5) v(k_{\rm e^+})]\,{\cal F}^{\rm e}_{\rm pp}\nonumber\\ \hspace{-0.2in}&&\times \, \frac{\displaystyle C_0(k)}{\displaystyle1 - \frac{1}{2}\,a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}k^2 + \frac{a^{\rm e}_{\rm pp}}{r_C}\,h(2 k r_C) + i\,a^{\rm e}_{\rm pp}\,k\,C^2_0(k) }\,[\bar{u^c}(p_2) \gamma^5 u(p_1)]\,F_{\rm D}(k^2), \end{eqnarray} where we have also introduced a universal form factor [14] \begin{eqnarray}\label{label2.18} F_{\rm D}(k^2) = \frac{1}{1 + r^2_{\rm D}k^2} \end{eqnarray} describing a spatial smearing of the deuteron coupled to the NN system in the ${^1}{\rm S}_0$--state at low energies; $r_{\rm D} = 1/\sqrt{\varepsilon_{\rm D}M_{\rm N}} = 4.315\,{\rm fm}$ is the radius of the deuteron and $\varepsilon_{\rm D} = 2.225\,{\rm MeV}$ is the binding energy of the deuteron. The real part of the denominator of the amplitude Eq.(\ref{label2.17}) is in complete agreement with the phenomenological relation [19] \begin{eqnarray}\label{label2.19} {\rm ctg}\delta^{\rm e}_{\rm pp}(k) = \frac{1}{\displaystyle C^2_0(k)\,k}\,\Bigg[ - \frac{1}{a^{\rm e}_{\rm pp}} + \frac{1}{2}\,r^{\rm e}_{\rm pp}k^2 - \frac{1}{r_{\rm C}}\,h(2 k r_{\rm C})\Bigg], \end{eqnarray} describing the phase shift $\delta^{\rm e}_{\rm pp}(k)$ of low--energy elastic pp scattering in terms of the S--wave scattering length $a^{\rm e}_{\rm pp}$ and the effective range $r^{\rm e}_{\rm pp}$.
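Eqs.~(\ref{label2.15}) and (\ref{label2.19}) combine into a small numerical routine for the pp phase shift. The sketch below truncates the series for $h(2kr_C)$, uses the quoted values of $a^{\rm e}_{\rm pp}$, $r^{\rm e}_{\rm pp}$ and $r_C$, and assumes $\hbar c = 197.327\,{\rm MeV\,fm}$ (not stated in the text) for the unit conversions.

```python
import math

hbar_c = 197.327                     # MeV*fm (assumed conversion constant)
r_C  = 28.82 / hbar_c                # proton Bohr radius, MeV^-1
a_pp = -7.8196 / hbar_c              # S-wave pp scattering length, MeV^-1
r_pp = 2.790 / hbar_c                # pp effective range, MeV^-1
EULER_GAMMA = 0.5772156649015329

def h(x, n_terms=100_000):
    """h(2 k r_C) of Eq. (2.15), with the series truncated at n_terms.
    Here x = 2*k*r_C, so 4 n^2 k^2 r_C^2 = n^2 x^2."""
    s = sum(1.0 / (n * (1.0 + n * n * x * x)) for n in range(1, n_terms + 1))
    return -EULER_GAMMA + math.log(x) + s

def cot_delta_pp(k):
    """Effective-range expansion Eq. (2.19) for the pp phase shift, k in MeV."""
    x = math.pi / (k * r_C)
    C0_sq = x / (math.exp(x) - 1.0)  # Gamow factor of Eq. (2.7)
    return (-1.0 / a_pp + 0.5 * r_pp * k**2 - h(2 * k * r_C) / r_C) / (C0_sq * k)

# k = 50 MeV corresponds to T_pp = k^2/M_N ~ 2.7 MeV, well inside the
# validity range T_pp <= 10 MeV quoted below Eq. (2.19).
print(cot_delta_pp(50.0))
```

As sanity checks, $h(x)\to 0$ for $x\to 0$ and $h(x)\to \ln x - \gamma$ for large $x$, both reproduced by the truncated series.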
As has been pointed out [19] the expansion Eq.(\ref{label2.19}) is valid up to $T_{\rm pp}\le 10\,{\rm MeV}$, where $T_{\rm pp} = k^2/M_{\rm N}$ is the kinetic energy of the relative movement of the protons. Thus, we argue that the contribution of low--energy elastic pp scattering to the amplitude of the solar proton burning is described in agreement with low--energy nuclear phenomenology in terms of the S--wave scattering length $a^{\rm e}_{\rm pp}$ and the effective range $r^{\rm e}_{\rm pp}$ taken from the experimental data [20]. This removes the problem pointed out by Bahcall and Kamionkowski [17] that in the RFMD with the local four--nucleon interaction given by Eq.(\ref{label2.1}) one cannot describe low--energy elastic pp scattering with the Coulomb repulsion in agreement with low--energy nuclear phenomenology. Now let us proceed to the evaluation of ${\cal F}^{\rm e}_{\rm pp}$. To this end we should write down the matrix element of the transition p + p $\to$ D + e$^+$ + $\nu_{\rm e}$ with the Coulomb repulsion. The required matrix element has been derived in Refs.[11,14] and reads \begin{eqnarray}\label{label2.20} \hspace{-0.3in}&&i{\cal M}_C({\rm p} + {\rm p} \to {\rm D} + {\rm e}^+ + \nu_{e}) =\nonumber\\ \hspace{-0.3in}&&= G_{\rm V}\,g_{\rm A} M_{\rm N}\,G_{\rm \pi NN}\,\frac{3g_{\rm V}}{4\pi^2}\,C_0(k)\,[\bar{u}(k_{\nu_{\rm e}})\gamma^{\mu} (1-\gamma^5) v(k_{\rm e^+})]\,e^*_{\mu}(k_{\rm D})\,\nonumber\\ \hspace{-0.3in}&&\times\,\{- [\bar{u^c}(p_2)\gamma_{\alpha} \gamma^5 u(p_1)]{\cal J}^{\alpha\mu\nu}_C(k_{\rm D}, k_{\ell}) - [\bar{u^c}(p_2)\gamma^5 u(p_1)]\,{\cal J}^{\mu\nu}_C(k_{\rm D}, k_{\ell})\}, \end{eqnarray} where $k_{\rm D}$ and $k_{\ell}$ are 4--momenta of the deuteron and the leptonic pair, respectively.
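For numerical work the Coulomb function $h(2kr_C)$ of Eq.(\ref{label2.15}) is conveniently evaluated directly from its series; since the tail of the sum falls off like $1/(n^3x^2)$, a large fixed truncation suffices. A minimal Python sketch (the cutoff is ours, chosen for illustration):

```python
import math

EULER_GAMMA = 0.5772156649015329

def h_coulomb(x, terms=200_000):
    """h(x) of Eq. (2.15) with x = 2*k*r_C:
    h(x) = -gamma + ln(x) + sum_{n>=1} 1/(n(1 + n^2 x^2))."""
    s = sum(1.0 / (n * (1.0 + (n * x) ** 2)) for n in range(1, terms + 1))
    return -EULER_GAMMA + math.log(x) + s
```

One finds $h(x)\to 0$ as $x\to 0$ (numerically $h(0.01)$ is of order $10^{-5}$), so the Coulomb term $a^{\rm e}_{\rm pp}\,h(2kr_C)/r_C$ in the denominator of Eq.(\ref{label2.16}) disappears at zero relative momentum.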
The structure functions ${\cal J}^{\alpha\mu\nu}_C(k_{\rm D}, k_{\ell})$ and ${\cal J}^{\mu\nu}_C(k_{\rm D}, k_{\ell})$ are determined by [11,14] \begin{eqnarray}\label{label2.21} &&{\cal J}^{\alpha\mu\nu}_C(k_{\rm D}, k_{\ell}) = \int\frac{d^4p}{\pi^2i}\,e^{\textstyle - \pi/4|\vec{q}\,| r_C}\,\Gamma\Bigg(1 - \frac{i}{2 |\vec{q}\,| r_C}\Bigg)\nonumber\\ &&\times\,{\rm tr} \Bigg\{\gamma^{\alpha}\gamma^5\frac{1}{M_{\rm N} - \hat{p} + \hat{k}_{\rm D}}\gamma^{\mu}\frac{1}{M_{\rm N} - \hat{p}}\gamma^{\nu}\gamma^5 \frac{1}{M_{\rm N} - \hat{p} - \hat{k}_{\ell}}\Bigg\},\nonumber\\ &&{\cal J}^{\mu\nu}_C(k_{\rm D},k_{\ell}) = \int\frac{d^4p}{\pi^2i}\,e^{\textstyle - \pi/4|\vec{q}\,| r_C}\,\Gamma\Bigg(1 - \frac{i}{2 |\vec{q}\,| r_C}\Bigg)\nonumber\\ &&\times\,{\rm tr} \Bigg\{\gamma^5\frac{1}{M_{\rm N} - \hat{p} + \hat{k}_{\rm D}}\gamma^{\mu}\frac{1}{M_{\rm N} - \hat{p}}\gamma^{\nu}\gamma^5 \frac{1}{M_{\rm N} - \hat{p} - \hat{k}_{\ell}}\Bigg\}, \end{eqnarray} where $\vec{q} = \vec{p} + (\vec{k}_{\ell} - \vec{k}_{\rm D})/2$. For the subsequent analysis it is convenient to represent the structure functions in the form of two terms \begin{eqnarray}\label{label2.22} {\cal J}^{\alpha\mu\nu}_C(k_{\rm D}, k_{\ell})&=&{\cal J}^{\alpha\mu\nu}_{SS}(k_{\rm D}, k_{\ell}) + {\cal J}^{\alpha\mu\nu}_{SC}(k_{\rm D}, k_{\ell}),\nonumber\\ {\cal J}^{\mu\nu}_C(k_{\rm D}, k_{\ell})&=&{\cal J}^{\mu\nu}_{SS}(k_{\rm D}, k_{\ell}) + {\cal J}^{\mu\nu}_{SC}(k_{\rm D}, k_{\ell}).
\end{eqnarray} The decomposition is induced by the identity \begin{eqnarray}\label{label2.23} e^{\textstyle - \pi/4|\vec{q}\,| r_C}\,\Gamma\Bigg(1 - \frac{i}{2 |\vec{q}\,| r_C}\Bigg) = 1 + \Bigg[e^{\textstyle - \pi/4|\vec{q}\,| r_C}\,\Gamma\Bigg(1 - \frac{i}{2 |\vec{q}\,| r_C}\Bigg) - 1\Bigg], \end{eqnarray} where the first term gives the contribution to the $SS$ part of the structure functions defined by strong interactions only, while the second one vanishes at $r_C \to \infty$ (or $\alpha \to 0$) and describes the contribution to the $SC$ part of the structure functions caused by both strong and Coulomb interactions. The procedure of the evaluation of the structure functions Eq.(\ref{label2.21}) and Eq.(\ref{label2.22}) has been described in detail in Refs.[11,14]. Following this procedure we obtain ${\cal F}^{\rm e}_{\rm pp}$ in the form \begin{eqnarray}\label{label2.24} \hspace{-0.3in}&&{\cal F}^{\rm e}_{\rm pp} =\nonumber\\ \hspace{-0.3in}&&= 1 + \frac{32}{9}\int\limits^{\infty}_0 dp\,p^2\,\Bigg[e^{\textstyle - \pi/4p r_C}\,\Gamma\Bigg(1 - \frac{i}{2pr_C}\Bigg)- 1\Bigg]\Bigg[\frac{M^2_{\rm N}}{(M^2_{\rm N} + p^2)^{5/2}} - \frac{7}{16}\,\frac{1}{(M^2_{\rm N} + p^2)^{3/2}}\Bigg]\nonumber\\ \hspace{-0.3in}&&= 1 + \frac{32}{9}\int\limits^{\infty}_0 dv\,v^2\,\Bigg[e^{\textstyle - \alpha\pi/4v}\,\Gamma\Bigg(1 - \frac{i\alpha}{2v}\Bigg)- 1\Bigg]\Bigg[\frac{1}{(1 + v^2)^{5/2}} - \frac{7}{16}\,\frac{1}{(1 + v^2)^{3/2}}\Bigg]. \end{eqnarray} The integral can be estimated perturbatively. The result reads \begin{eqnarray}\label{label2.25} {\cal F}^{\rm e}_{\rm pp} = 1 + \alpha\,\Bigg(\frac{5\pi}{54} - i\,\frac{5\gamma}{27}\Bigg) + O(\alpha^2). \end{eqnarray} The numerical value of $|{\cal F}^{\rm e}_{\rm pp}|^2$ is \begin{eqnarray}\label{label2.26} |{\cal F}^{\rm e}_{\rm pp}|^2 = 1 + \alpha\,\frac{5\pi}{27} + O(\alpha^2) = 1 +(4.25 \times 10^{-3}) \simeq 1.
\end{eqnarray} The contribution of the Coulomb field inside the one--nucleon loop diagrams, Eq.(\ref{label2.26}), is found to be small. This is because the integrals are concentrated around virtual momenta of order $M_{\rm N}$, which scales as $M_{\rm N} \sim N_C$ in the large $N_C$ expansion [12]. For the calculation of the astrophysical factor $S_{\rm pp}(0)$ we can set ${\cal F}^{\rm e}_{\rm pp} = 1$. \section{Astrophysical factor for solar proton burning} \setcounter{equation}{0} The amplitude Eq.(\ref{label2.17}) squared, averaged over polarizations of the protons and summed over polarizations of final particles reads \begin{eqnarray}\label{label3.1} \hspace{-0.5in}&&\overline{|{\cal M}({\rm p} + {\rm p} \to {\rm D} + {\rm e}^+ + \nu_{e})|^2}= G^2_{\rm V} g^2_{\rm A} M^4_{\rm N}\,G^2_{\rm \pi NN}\,\frac{9Q_{\rm D}}{8\pi^2}\,F^2_{\rm D}(k^2)\nonumber\\ \hspace{-0.5in}&&\times\,\frac{\displaystyle C^2_0(k)} {\displaystyle \Big[1 - \frac{1}{2}\,a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}k^2 + \frac{a^{\rm e}_{\rm pp}}{r_C}\,h(2 k r_C)\Big]^2 + (a^{\rm e}_{\rm pp})^2 k^2 C^4_0(k)}\,\Bigg(- g^{\alpha\beta}+\frac{k^{\alpha}_{\rm D}k^{\beta}_{\rm D}}{M^2_{\rm D}}\Bigg)\nonumber\\ \hspace{-0.5in}&&\times {\rm tr}\{(- m_{\rm e} + \hat{k}_{\rm e^+})\gamma_{\alpha}(1-\gamma^5) \hat{k}_{\nu_{\rm e}}\gamma_{\beta}(1-\gamma^5)\}\times \frac{1}{4}\times {\rm tr}\{(M_{\rm N} - \hat{p}_2) \gamma^5 (M_{\rm N} + \hat{p}_1) \gamma^5\}, \end{eqnarray} where $m_{\rm e}=0.511\,{\rm MeV}$ is the positron mass, and we have used the relation $g^2_{\rm V} = 2\,\pi^2\,Q_{\rm D}\,M^2_{\rm N}$.
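The perturbative estimate Eq.(\ref{label2.25}) of the overlap integral Eq.(\ref{label2.24}) can be checked by direct numerical integration. The sketch below is our own cross-check: the Lanczos coefficients for the complex $\Gamma$--function, the integration cutoffs, and the grid sizes are illustrative choices, not quantities from the paper:

```python
import cmath
import math

ALPHA = 1.0 / 137.036   # fine-structure constant

# Lanczos approximation to Gamma(z) for Re z > 0.5 (g = 7, 9 coefficients)
_LANCZOS = (0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7)

def cgamma(z):
    z = z - 1.0
    acc = _LANCZOS[0]
    for i in range(1, 9):
        acc += _LANCZOS[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2.0 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * acc

def integrand(v):
    """Real part of the v-integrand of Eq. (2.24), without the 32/9 factor."""
    bracket = cmath.exp(-ALPHA * math.pi / (4.0 * v)) \
        * cgamma(1.0 - 0.5j * ALPHA / v) - 1.0
    weight = (1.0 + v * v) ** -2.5 - (7.0 / 16.0) * (1.0 + v * v) ** -1.5
    return (v * v * bracket).real * weight

def simpson_log(f, a, b, n=4000):
    """Simpson's rule after the substitution v = exp(u)."""
    ua, ub = math.log(a), math.log(b)
    step = (ub - ua) / n
    total = 0.0
    for i in range(n + 1):
        v = math.exp(ua + i * step)
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * f(v) * v          # dv = v du
    return total * step / 3.0

# Re F_pp - 1, to be compared with the leading term 5*pi*alpha/54 of Eq. (2.25)
delta_re = (32.0 / 9.0) * simpson_log(integrand, 1e-6, 2.0e3)
```

To leading order in $\alpha$ the bracket behaves as $-\alpha\pi/4v + i\gamma\alpha/2v$, and the elementary integrals $\int_0^\infty v(1+v^2)^{-5/2}dv = 1/3$ and $\int_0^\infty v(1+v^2)^{-3/2}dv = 1$ reproduce the coefficients $5\pi/54$ and $5\gamma/27$ of Eq.(\ref{label2.25}).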
In the low--energy limit the computation of the traces yields \begin{eqnarray}\label{label3.2} \hspace{-0.5in}&&\Bigg(- g^{\alpha\beta}+\frac{k^{\alpha}_{\rm D}k^{\beta}_{\rm D}}{M^2_{\rm D}}\Bigg)\,\times\,{\rm tr}\{( - m_{\rm e} + \hat{k}_{\rm e^+})\gamma_{\alpha}(1-\gamma^5) \hat{k}_{\nu_{\rm e}}\gamma_{\beta}(1-\gamma^5)\}= \nonumber\\ \hspace{-0.5in}&& = 24\,\Bigg( E_{\rm e^+} E_{\nu_{\rm e}} - \frac{1}{3}\vec{k}_{\rm e^+}\cdot \vec{k}_{\nu_{\rm e}}\,\Bigg) ,\nonumber\\ \hspace{-0.5in}&&\frac{1}{4}\,\times\,{\rm tr}\{(M_{\rm N} - \hat{p}_2) \gamma^5 (M_{\rm N} + \hat{p}_1) \gamma^5\} = 2\,M^2_{\rm N}, \end{eqnarray} where we have neglected the relative kinetic energy of the protons with respect to the mass of the proton. Substituting Eq.~(\ref{label3.2}) in Eq.~(\ref{label3.1}) we get \begin{eqnarray}\label{label3.3} \hspace{-0.2in}&&\overline{|{\cal M}({\rm p} + {\rm p} \to {\rm D} + {\rm e}^+ + \nu_{e})|^2} = \,G^2_{\rm V}\, g^2_{\rm A} M^6_{\rm N}\,G^2_{\rm \pi NN}\,\frac{54 Q_{\rm D}}{\pi^2}\,F^2_{\rm D}(k^2)\nonumber\\ \hspace{-0.2in}&&\times\,\frac{\displaystyle C^2_0(k)} {\displaystyle \Big[1 - \frac{1}{2}\,a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}k^2 + \frac{a^{\rm e}_{\rm pp}}{r_C}\,h(2 k r_C)\Big]^2 + (a^{\rm e}_{\rm pp})^2 k^2 C^4_0(k)}\,\Bigg( E_{\rm e^+} E_{\nu_{\rm e}} - \frac{1}{3}\vec{k}_{\rm e^+}\cdot \vec{k}_{\nu_{\rm e}}\Bigg). 
\end{eqnarray} We perform the integration over the phase volume of the final ${\rm D}{\rm e}^+ \nu_{\rm e}$--state in the non--relativistic limit \begin{eqnarray}\label{label3.4} \hspace{-0.5in}&&\int\frac{d^3k_{\rm D}}{(2\pi)^3 2E_{\rm D}}\frac{d^3k_{\rm e^+}}{(2\pi)^3 2E_{\rm e^+}}\frac{d^3k_{\nu_{\rm e}}}{(2\pi)^3 2 E_{\nu_{\rm e}}}\,(2\pi)^4\,\delta^{(4)}(k_{\rm D} + k_{\ell} - p_1 - p_2)\,\Bigg( E_{\rm e^+} E_{\nu_{\rm e}} - \frac{1}{3}\vec{k}_{\rm e^+}\cdot \vec{k}_{\nu_{\rm e}}\,\Bigg)\nonumber\\ \hspace{-0.5in}&&= \frac{1}{32\pi^3 M_{\rm N}}\,\int^{W + T_{\rm pp}}_{m_{\rm e}}\sqrt{E^2_{\rm e^+}-m^2_{\rm e}}\,E_{\rm e^+}(W + T_{\rm pp} - E_{\rm e^+})^2\,d E_{\rm e^+} = \frac{(W + T_{\rm pp})^5}{960\pi^3 M_{\rm N}}\,f(\xi), \end{eqnarray} where $W = \varepsilon_{\rm D} - (M_{\rm n} - M_{\rm p}) = (2.225 -1.293)\,{\rm MeV} = 0.932\,{\rm MeV}$ and $\xi = m_{\rm e}/(W + T_{\rm pp})$. The function $f(\xi)$ is defined by the integral \begin{eqnarray}\label{label3.5} \hspace{-0.5in}f(\xi)&=&30\,\int^1_{\xi}\sqrt{x^2 -\xi^2}\,x\,(1-x)^2 dx=(1 - \frac{9}{2}\,\xi^2 - 4\,\xi^4)\,\sqrt{1-\xi^2}\nonumber\\ \hspace{-0.5in}&&+ \frac{15}{2}\,\xi^4\,{\ell n}\Bigg(\frac{1+\sqrt{1-\xi^2}}{\xi}\Bigg)\Bigg|_{T_{\rm pp} = 0} = 0.222 \end{eqnarray} and is normalized to unity at $\xi = 0$.
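Both the closed form and the value $f(m_{\rm e}/W) = 0.222$ quoted in Eq.(\ref{label3.5}) are easy to verify numerically; the following sketch compares the closed form against a direct Simpson evaluation of the defining integral (grid size is our choice):

```python
import math

def f_closed(xi):
    """Closed form of the phase-space function f(xi), Eq. (3.5)."""
    if xi == 0.0:
        return 1.0
    s = math.sqrt(1.0 - xi * xi)
    return (1.0 - 4.5 * xi**2 - 4.0 * xi**4) * s \
        + 7.5 * xi**4 * math.log((1.0 + s) / xi)

def f_numeric(xi, n=20_000):
    """Simpson evaluation of 30 * int_xi^1 sqrt(x^2 - xi^2) x (1-x)^2 dx."""
    h = (1.0 - xi) / n
    total = 0.0
    for i in range(n + 1):
        x = xi + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * math.sqrt(max(x * x - xi * xi, 0.0)) * x * (1.0 - x) ** 2
    return 30.0 * total * h / 3.0
```

With $\xi = m_{\rm e}/W = 0.511/0.932$ both evaluations give $f \simeq 0.222$, and $f(0) = 1$ confirms the normalization.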
Thus, the cross section for the solar proton burning is given by \begin{eqnarray}\label{label3.6} &&\sigma_{\rm pp}(T_{\rm pp}) = \frac{e^{\displaystyle - \pi/r_C\sqrt{M_{\rm N}T_{\rm pp}}}}{v^2}\, \alpha\,\frac{9g^2_{\rm A} G^2_{\rm V} Q_{\rm D} M^3_{\rm N}}{320\,\pi^4}\,G^2_{\rm \pi NN}\,(W + T_{\rm pp})^5\,f\Bigg(\frac{m_{\rm e}}{W + T_{\rm pp}}\Bigg)\nonumber\\ &&\times\,\frac{\displaystyle F^2_{\rm D}(M_{\rm N}T_{\rm pp})}{ \displaystyle \Big[1 - \frac{1}{2}\,a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}M_{\rm N}T_{\rm pp} + \frac{a^{\rm e}_{\rm pp}}{r_C}\,h(2 r_C\sqrt{M_{\rm N}T_{\rm pp}})\Big]^2 + (a^{\rm e}_{\rm pp})^2M_{\rm N}T_{\rm pp} C^4_0(\sqrt{M_{\rm N}T_{\rm pp}})} =\nonumber\\ &&\hspace{1.2in} = \frac{S_{\rm pp}(T_{\rm pp})}{T_{\rm pp}}\,e^{\displaystyle - \pi/r_C\sqrt{M_{\rm N}T_{\rm pp}}}. \end{eqnarray} The astrophysical factor $S_{\rm pp}(T_{\rm pp})$ reads \begin{eqnarray}\label{label3.7} \hspace{-0.5in}&&S_{\rm pp}(T_{\rm pp}) = \alpha\,\frac{9g^2_{\rm A}G^2_{\rm V}Q_{\rm D}M^4_{\rm N}} {1280\pi^4}\,G^2_{\rm \pi NN}\,(W + T_{\rm pp})^5\,f\Bigg(\frac{m_{\rm e}}{W + T_{\rm pp}}\Bigg)\nonumber\\ \hspace{-0.5in}&&\times\, \frac{\displaystyle F^2_{\rm D}(M_{\rm N}T_{\rm pp})} {\displaystyle \Big[1 - \frac{1}{2}\,a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}M_{\rm N}T_{\rm pp} + \frac{a^{\rm e}_{\rm pp}}{r_C}\,h(2 r_C\sqrt{M_{\rm N}T_{\rm pp}})\Big]^2 + (a^{\rm e}_{\rm pp})^2M_{\rm N}T_{\rm pp} C^4_0(\sqrt{M_{\rm N}T_{\rm pp}})}. \end{eqnarray} At zero kinetic energy of the relative movement of the protons $T_{\rm pp} = 0$ the astrophysical factor $S_{\rm pp}(0)$ is given by \begin{eqnarray}\label{label3.8} \hspace{-0.5in}S_{\rm pp}(0) =\alpha\,\frac{9g^2_{\rm A}G^2_{\rm V}Q_{\rm D}M^4_{\rm N}}{1280\pi^4}\,G^2_{\rm \pi NN}\,W^5\,f\Bigg(\frac{m_{\rm e}}{W}\Bigg) = 4.08\,\times 10^{-25}\,{\rm MeV\,\rm b}. 
\end{eqnarray} The value $S_{\rm pp}(0) = 4.08 \times 10^{-25}\,{\rm MeV\,\rm b}$ agrees well with the recommended value $S_{\rm pp}(0) = 4.00 \times 10^{-25}\,{\rm MeV\,\rm b}$ [4]. The insignificant disagreement with the result obtained in Ref.[11], where we have found $S_{\rm pp}(0) = 4.02 \times 10^{-25}\,{\rm MeV\,\rm b}$, is due to the new value of the constant $g_{\rm A} = 1.260 \to 1.267$ [18] (see Ref.[13]). Unlike the astrophysical factor obtained by Kamionkowski and Bahcall [5] the astrophysical factor given by Eq.(\ref{label3.8}) does not depend explicitly on the S--wave scattering length of pp scattering. This is due to the normalization of the wave function of the relative movement of two protons. After the summation of an infinite series and by using the relation Eq.(\ref{label2.19}) we obtain the wave function of two protons in the form \begin{eqnarray}\label{label3.9} \psi_{\rm pp}(k)= e^{\textstyle i\,\delta^{\,\rm e}_{\rm pp}(k)}\,\frac{\sin\,\delta^{\,\rm e}_{\rm pp}(k)}{-a^{\rm e}_{\rm pp}kC_0(k)}, \end{eqnarray} which corresponds to the normalization of the wave function of the relative movement of two protons used by Schiavilla {\it et al.} [6]. For a more detailed discussion of this problem we refer readers to the paper by Schiavilla {\it et al.} [6]\footnote{See the last paragraph of Sect.\,3 and the first paragraph of Sect.\,5 of Ref.[6].}. Unfortunately, the value of the astrophysical factor $S_{\rm pp}(0) = 4.08 \times 10^{-25}\,{\rm MeV\,\rm b}$ does not confirm the enhancement by a factor of 1.4 obtained in the modified version of the RFMD in Ref.[14]. \section{Neutrino disintegration of the deuteron induced by charged weak current} \setcounter{equation}{0} The evaluation of the amplitude of the process $\nu_{\rm e}$ + D $\to$ ${\rm e}^-$ + p + p has been given in detail in Ref.\,[10].
The result can be written in the following form \begin{eqnarray}\label{label4.1} \hspace{-0.3in}&& i{\cal M}(\nu_{\rm e} + {\rm D} \to {\rm e}^- + {\rm p} + {\rm p}) = g_{\rm A} M_{\rm N} \frac{G_{\rm V}}{\sqrt{2}}\,\frac{3g_{\rm V}}{2\pi^2}\, G_{\rm \pi NN} \,e^*_{\mu}(k_{\rm D})\,[\bar{u}(k_{\rm e^-})\gamma^{\mu}(1-\gamma^5) u(k_{\nu_{\rm e}})]\,{\cal F}^{\rm e}_{\rm ppe^-}\nonumber\\\hspace{-0.3in} &&\times \,\frac{C_0(k)}{\displaystyle1 - \frac{1}{2}\,a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}k^2 + \frac{a^{\rm e}_{\rm pp}}{r_C}\,h(2 k r_C) + i\,a^{\rm e}_{\rm pp}\,k\,C^2_0(k) }\,[\bar{u^c}(p_2) \gamma^5 u(p_1)]\,F_{\rm D}(k^2), \end{eqnarray} where ${\cal F}^{\rm e}_{\rm ppe^-}$ is the overlap factor which we evaluate below, and $F_{\rm D}(k^2)$ is the universal form factor Eq.(\ref{label2.18}) describing a spatial smearing of the deuteron [14]. The amplitude Eq.(\ref{label4.1}) squared, averaged over polarizations of the deuteron and summed over polarizations of the final particles reads \begin{eqnarray}\label{label4.2} &&\overline{|{\cal M}(\nu_{\rm e} + {\rm D} \to {\rm e}^- + {\rm p} + {\rm p})|^2} = g^2_{\rm A}M^6_{\rm N}\frac{144 G^2_{\rm V}Q_{\rm D}}{\pi^2}\,G^2_{\rm \pi NN}\,|{\cal F}^{\rm e}_{\rm ppe^-}|^2\,\,F^2_{\rm D}(k^2)\,F(Z, E_{\rm e^-})\nonumber\\ &&\times {\displaystyle \frac{\displaystyle C^2_0(k)}{\displaystyle \Big[1 - \frac{1}{2}a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp} k^2 + \frac{a^{\rm e}_{\rm pp}}{r_{\rm C}}\,h(2kr_{\rm C})\Big]^2 + (a^{\rm e}_{\rm pp})^2k^2 C^4_0(k)}} \Bigg( E_{{\rm e^-}}E_{{\nu}_{\rm e}} - \frac{1}{3}\vec{k}_{{\rm e^-}}\cdot \vec{k}_{{\nu}_{\rm e}}\Bigg), \end{eqnarray} where $F(Z,E_{\rm e^-})$ is the Fermi function [21] describing the Coulomb interaction of the electron with the nuclear system having a charge $Z$. In the case of the reaction $\nu_{\rm e}$ + D $\to$ e$^-$ + p + p we have $Z = 2$.
At $\alpha^2 Z^2 \ll 1$ the Fermi function $F(Z,E_{\rm e^-})$ reads [21] \begin{eqnarray}\label{label4.3} F(Z,E_{\rm e^-}) = \frac{2\pi \eta_{\rm e^-}}{\displaystyle 1 - e^{\textstyle -2\pi \eta_{\rm e^-}}}, \end{eqnarray} where $\eta_{\rm e^-} = Z \alpha/v_{\rm e^-} = Z \alpha E_{\rm e^-}/\sqrt{E^2_{\rm e^-} -m^2_{\rm e^-} }$ and $v_{\rm e^-}$ is the velocity of the electron. The r.h.s. of Eq.(\ref{label4.2}) can be expressed in terms of the astrophysical factor $S_{\rm pp}(0)$ for the solar proton burning and brought to the form \begin{eqnarray}\label{label4.4} \hspace{-0.5in}&&\overline{|{\cal M}(\nu_{\rm e} + {\rm D} \to {\rm e}^- + {\rm p} + {\rm p})|^2} = S_{\rm pp}(0)\,\frac{2^{12}5\pi^2}{\Omega_{\rm D e^+ \nu_{\rm e}}}\,\frac{r_{\rm C}M^3_{\rm N}}{m^5_{\rm e}}\,\frac{|{\cal F}^{\rm e}_{\rm ppe^-}|^2}{|{\cal F}^{\rm e}_{\rm pp}|^2}\,F^2_{\rm D}(k^2)\,F(Z, E_{\rm e^-})\nonumber\\ \hspace{-0.5in}&&\times \frac{\displaystyle C^2_0(k)}{\displaystyle \Big[1 - \frac{1}{2}a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp} k^2 + \frac{a^{\rm e}_{\rm pp}}{r_{\rm C}}\,h(2kr_{\rm C})\Big]^2 + (a^{\rm e}_{\rm pp})^2k^2 C^4_0(k)}\, \Bigg( E_{{\rm e^-}}E_{{\nu}_{\rm e}} - \frac{1}{3}\vec{k}_{{\rm e^-}} \cdot \vec{k}_{{\nu}_{\rm e}}\Bigg). \end{eqnarray} We have used here the expression for the astrophysical factor \begin{eqnarray}\label{label4.5} S_{\rm pp}(0) = \frac{9g^2_{\rm A}G^2_{\rm V}Q_{\rm D}M^3_{\rm N}}{1280\pi^4r_{\rm C}}\,G^2_{\rm \pi NN}\,|{\cal F}^{\rm e}_{\rm pp}|^2\,m^5_{\rm e}\,\Omega_{\rm D e^+ \nu_{\rm e}}, \end{eqnarray} where $m_{\rm e} = 0.511\,{\rm MeV}$ is the electron mass, and $\Omega_{\rm D e^+ \nu_{\rm e}} = (W/m_{\rm e})^5 f(m_{\rm e}/W) = 4.481$ at $W = 0.932\,{\rm MeV}$. The function $f(m_{\rm e}/W)$ is defined by Eq.(\ref{label3.5}).
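The constant $\Omega_{\rm D e^+\nu_{\rm e}} = 4.481$ quoted below Eq.(\ref{label4.5}) and the low--energy behaviour of the Fermi function Eq.(\ref{label4.3}) can be reproduced with a few lines of Python (a cross-check of ours, using the closed form of $f(\xi)$ from Eq.(\ref{label3.5})):

```python
import math

ALPHA, M_E, W = 1.0 / 137.036, 0.511, 0.932   # MeV units

def f_phase(xi):
    """Closed form of the phase-space function f(xi), Eq. (3.5)."""
    s = math.sqrt(1.0 - xi * xi)
    return (1.0 - 4.5 * xi**2 - 4.0 * xi**4) * s \
        + 7.5 * xi**4 * math.log((1.0 + s) / xi)

# Omega_{D e+ nu_e} = (W/m_e)^5 f(m_e/W), quoted as 4.481 below Eq. (4.5)
omega = (W / M_E) ** 5 * f_phase(M_E / W)

def fermi(Z, E):
    """Fermi function of Eq. (4.3), valid for alpha^2 Z^2 << 1; E in MeV."""
    eta = Z * ALPHA * E / math.sqrt(E * E - M_E * M_E)
    return 2.0 * math.pi * eta / (1.0 - math.exp(-2.0 * math.pi * eta))
```

The Fermi function exceeds unity for all electron energies (Coulomb attraction by the charge $Z = 2$ system) and grows as the electron slows down.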
In the rest frame of the deuteron the cross section for the process $\nu_{\rm e}$ + D $\to$ ${\rm e}^-$ + p + p is defined as \begin{eqnarray}\label{label4.6} &&\sigma^{\nu_{\rm e} D}_{\rm cc}(E_{\nu_{\rm e}}) = \frac{1}{4M_{\rm D}E_{\nu_{\rm e}}}\int\,\overline{|{\cal M}(\nu_{\rm e} + {\rm D} \to {\rm e}^- + {\rm p} + {\rm p})|^2}\nonumber\\ &&\frac{1}{2}\,(2\pi)^4\,\delta^{(4)}(k_{\rm D} + k_{\nu_{\rm e}} - p_1 - p_2 - k_{\rm e^-})\, \frac{d^3p_1}{(2\pi)^3 2E_1}\frac{d^3 p_2}{(2\pi)^3 2E_2}\frac{d^3k_{{\rm e^-}}}{(2\pi)^3 2E_{{\rm e^-}}}, \end{eqnarray} where $E_{\nu_{\rm e}}$, $E_1$, $E_2$ and $E_{{\rm e^-}}$ are the energies of the neutrino, the protons and the electron. The subscript (cc) stands for the charged current. We perform the integration over the phase volume of the (${\rm p p e^-}$)--state in the non--relativistic limit and in the rest frame of the deuteron, \begin{eqnarray}\label{label4.7} &&\frac{1}{2}\,\int\frac{d^3p_1}{(2\pi)^3 2E_1}\frac{d^3p_2}{(2\pi)^3 2E_2} \frac{d^3k_{\rm e}}{(2\pi)^3 2E_{\rm e^-}}(2\pi)^4\,\delta^{(4)}(k_{\rm D} + k_{\nu_{\rm e}} - p_1 - p_2 - k_{\rm e^-})\,\nonumber\\ &&{\displaystyle \frac{\displaystyle C^2_0(\sqrt{M_{\rm N}T_{\rm pp}})\,F^2_{\rm D}(M_{\rm N}T_{\rm pp})\,F(Z, E_{\rm e^-})}{\displaystyle \Big[1 - \frac{1}{2}a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}M_{\rm N}T_{\rm pp} + \frac{a^{\rm e}_{\rm pp}}{r_{\rm C}}\, h(2 r_{\rm C}\sqrt{M_{\rm N}T_{\rm pp}})\Big]^2 + (a^{\rm e}_{\rm pp})^2 M_{\rm N}T_{\rm pp} C^4_0(\sqrt{M_{\rm N}T_{\rm pp}})}}\nonumber\\ &&\Bigg( E_{\rm e^-} E_{\nu_{\rm e}} - \frac{1}{3} \vec{k}_{\rm e^-} \cdot \vec{k}_{\nu_{\rm e}}\Bigg)\, =\frac{E_{\bar{\nu}_{\rm e}}M^3_{\rm N}}{128\pi^3}\,\Bigg(\frac{E_{\rm th}}{M_{\rm N}} \Bigg)^{\!\!7/2}\Bigg(\frac{2 m_{\rm e}}{E_{\rm th}}\Bigg)^{\!\!3/2}\frac{1}{E^2_{\rm th}}\nonumber\\ &&\int\!\!\!\int dT_{\rm e^-} dT_{\rm pp}\delta(E_{\nu_{\rm e}}- E_{\rm th} - T_{\rm e^-} - T_{\rm pp}) \sqrt{T_{\rm e^-}T_{\rm pp}}\Bigg(1 + \frac{T_{\rm e^-}}{m_{\rm
e}}\Bigg)\,{\displaystyle \sqrt{1 + \frac{T_{\rm e^-}}{2 m_{\rm e}}}}\nonumber\\ &&{\displaystyle \frac{\displaystyle C^2_0(\sqrt{M_{\rm N}T_{\rm pp}}) \,F^2_{\rm D}(M_{\rm N}T_{\rm pp})\,F(Z, E_{\rm e^-})}{\displaystyle \Big[1 - \frac{1}{2}a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}M_{\rm N}T_{\rm pp} + \frac{a^{\rm e}_{\rm pp}}{r_{\rm C}}\,h(2 r_{\rm C}\sqrt{M_{\rm N}T_{\rm pp}})\Big]^2 + (a^{\rm e}_{\rm pp})^2 M_{\rm N}T_{\rm pp} C^4_0(\sqrt{M_{\rm N}T_{\rm pp}})}} \nonumber\\ &&= \frac{E_{\nu_{\rm e}}M^3_{\rm N}}{128\pi^3} \,\Bigg(\frac{E_{\rm th}}{M_{\rm N}} \Bigg)^{\!\!7/2}\Bigg(\frac{2 m_{\rm e}}{E_{\rm th}}\Bigg)^{\!\!3/2}\,(y-1)^2\,\Omega_{\rm p p e^-}(y), \end{eqnarray} where $T_{\rm e^-}$ is the kinetic energy of the electron, $E_{\rm th}$ is the neutrino energy threshold of the reaction $\nu_{\rm e}$ + D $\to$ ${\rm e}^-$ + p + p, and is given by $E_{\rm th}= \varepsilon_{\rm D} + m_{\rm e} - (M_{\rm n} - M_{\rm p}) = (2.225 + 0.511 - 1.293) \, {\rm MeV} = 1.443\,{\rm MeV}$. The function $\Omega_{\rm p p e^-}(y)$, where $y=E_{\nu_{\rm e}}/E_{\rm th}$, is defined as \begin{eqnarray}\label{label4.8} \hspace{-0.5in}&&\Omega_{\rm p p e^-}(y) = \int\limits^{1}_{0} dx \sqrt{x (1 - x)} \Bigg(1 + \frac{E_{\rm th}}{m_{\rm e}}(y-1)(1-x)\Bigg) \sqrt{1 + \frac{E_{\rm th}}{2 m_{\rm e}}(y-1)(1-x)}\nonumber\\ \hspace{-0.5in}&&C^2_0(\sqrt{M_{\rm N}E_{\rm th}\,(y - 1)\,x})\,F^2_{\rm D}(M_{\rm N}E_{\rm th}\,(y - 1)\,x)\,F(Z,m_{\rm e} + E_{\rm th}(y - 1)\,(1-x))\nonumber\\ \hspace{-0.5in}&&\Bigg\{\Bigg[1 - \frac{1}{2}a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}M_{\rm N}E_{\rm th}\,(y - 1)\,x + \frac{a^{\rm e}_{\rm pp}}{r_{\rm C}}\,h(2 r_{\rm C}\sqrt{M_{\rm N}E_{\rm th}\,(y - 1)\,x})\Bigg]^2 \nonumber\\ \hspace{-0.5in}&&\hspace{0.2in} + (a^{\rm e}_{\rm pp})^2\,M_{\rm N}E_{\rm th}\,(y - 1)\,x C^4_0(\sqrt{M_{\rm N}E_{\rm th}\,(y - 1)\,x}) \Bigg\}^{-1}\!\!\!\!\!, \end{eqnarray} where we have changed the variable $T_{\rm pp} = (E_{\nu_{\rm e}} - E_{\rm th})\,x$.
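The one-dimensional integral Eq.(\ref{label4.8}) is straightforward to evaluate numerically. The sketch below is our own, with several loudly flagged assumptions: the Gamow factor is taken in the standard form $C^2_0(k)=x/(e^x-1)$ with $x=\pi/(kr_C)$, the Coulomb length is assumed to be $r_C = 1/(\alpha M_{\rm N})$, and $M_{\rm N}\simeq 940\,{\rm MeV}$; none of these numbers are specified in this chunk of the paper:

```python
import math

# All momenta in MeV (hbar = c = 1); lengths converted from fm via hbar*c
HBARC = 197.327
A_PP, R_PP = -7.8196 / HBARC, 2.790 / HBARC      # MeV^-1, from Eq. (2.14) data
M_N, M_E = 940.0, 0.511                          # MeV (M_N assumed)
ALPHA = 1.0 / 137.036
R_C = 1.0 / (ALPHA * M_N)                        # MeV^-1, assumed Coulomb length
R_D = 4.315 / HBARC                              # MeV^-1, deuteron radius
E_TH = 2.225 + 0.511 - 1.293                     # MeV, threshold energy

def h_fun(x):
    """Series of Eq. (2.15); truncation adapted to the argument."""
    n_max = int(50.0 / x) + 200
    gamma_e = 0.5772156649015329
    return -gamma_e + math.log(x) + sum(
        1.0 / (n * (1.0 + (n * x) ** 2)) for n in range(1, n_max + 1))

def C0_sq(k):
    """Assumed Gamow factor C0^2(k) = x/(e^x - 1), x = pi/(k r_C)."""
    x = math.pi / (k * R_C)
    return x / math.expm1(x)

def fermi(Z, E):
    eta = Z * ALPHA * E / math.sqrt(E * E - M_E * M_E)
    return 2.0 * math.pi * eta / (1.0 - math.exp(-2.0 * math.pi * eta))

def omega_ppe(y, n=400):
    """Simpson evaluation of Omega_ppe-(y), Eq. (4.8), for y > 1."""
    h = 1.0 / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        sq = math.sqrt(max(x * (1.0 - x), 0.0))
        if sq == 0.0:
            continue                       # integrand vanishes at both endpoints
        k2 = M_N * E_TH * (y - 1.0) * x    # MeV^2
        k = math.sqrt(k2)
        c2 = C0_sq(k)
        E_e = M_E + E_TH * (y - 1.0) * (1.0 - x)
        real = 1.0 - 0.5 * A_PP * R_PP * k2 + A_PP / R_C * h_fun(2.0 * k * R_C)
        denom = real * real + A_PP * A_PP * k2 * c2 * c2
        f_d = 1.0 / (1.0 + R_D * R_D * k2)
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += (w * sq * (E_e / M_E)
                  * math.sqrt(1.0 + (E_e - M_E) / (2.0 * M_E))
                  * c2 * f_d * f_d * fermi(2, E_e) / denom)
    return total * h / 3.0
```

All factors of the integrand are positive, so $\Omega_{\rm ppe^-}(y) > 0$ for every $y > 1$, and the $\sqrt{x(1-x)}$ endpoints make the Simpson sum converge quickly.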
The cross section for $\nu_{\rm e}$ + D $\to$ ${\rm e}^-$ + p + p is then given by \begin{eqnarray}\label{label4.9} \hspace{-0.5in}\sigma^{\nu_{\rm e} D}_{\rm cc}(E_{\nu_{\rm e}}) &=& S_{\rm pp}(0)\, \frac{640 r_{\rm C}}{\pi \Omega_{\rm D e^+\nu_{\rm e}}}\Bigg(\frac{M_{\rm N}}{E_{\rm th}}\Bigg)^{3/2}\Bigg(\frac{E_{\rm th}}{2m_{\rm e}}\Bigg)^{7/2}\frac{|{\cal F}^{\rm e}_{\rm ppe^-}|^2}{|{\cal F}^{\rm e}_{\rm pp}|^2}\,(y-1)^2\,\Omega_{\rm p p e^-}(y)=\nonumber\\ &=&3.69\times 10^5\,S_{\rm pp}(0)\,\frac{|{\cal F}^{\rm e}_{\rm ppe^-}|^2}{|{\cal F}^{\rm e}_{\rm pp}|^2}\,(y-1)^2\,\Omega_{\rm p p e^-}(y), \end{eqnarray} where $S_{\rm pp}(0)$ is measured in ${\rm MeV}\,{\rm cm}^2$. For $S_{\rm pp}(0) = 4.08\times 10^{-49}\,{\rm MeV}\,{\rm cm}^2$, Eq.(\ref{label3.8}), the cross section $\sigma^{\nu_{\rm e} D}_{\rm cc}(E_{\nu_{\rm e}})$ reads \begin{eqnarray}\label{label4.10} \sigma^{\nu_{\rm e} D}_{\rm cc}(E_{\nu_{\rm e}}) = 1.50\,\frac{|{\cal F}^{\rm e}_{\rm ppe^-}|^2}{|{\cal F}^{\rm e}_{\rm pp}|^2}\,(y-1)^2\,\Omega_{\rm p p e^-}(y)\,10^{-43}\,{\rm cm}^2. \end{eqnarray} In order to make numerical predictions for the cross section Eq.(\ref{label4.10}) we should evaluate the overlap factor ${\cal F}^{\rm e}_{\rm ppe^-}$. This evaluation can be carried out in analogy with the evaluation of ${\cal F}^{\rm e}_{\rm pp}$. By using the results obtained in Ref.[10] we get \begin{eqnarray}\label{label4.11} \hspace{-0.2in}{\cal F}^{\rm e}_{\rm ppe^-} = 1 + \frac{32}{9}\int\limits^{\infty}_0 dv\,v^2\,\Bigg[e^{\textstyle - \alpha\pi/4v}\,\Gamma\Bigg(1 - \frac{i\alpha}{2v}\Bigg)- 1\Bigg]\Bigg[\frac{1}{(1 + v^2)^{5/2}} - \frac{1}{16}\,\frac{1}{(1 + v^2)^{3/2}}\Bigg]. \end{eqnarray} The perturbative evaluation of the integral gives \begin{eqnarray}\label{label4.12} {\cal F}^{\rm e}_{\rm ppe^-} = 1 - \alpha\,\Bigg(\frac{13\pi}{54} - i\,\frac{13\gamma}{27}\Bigg) + O(\alpha^2).
\end{eqnarray} Thus, the overlap factor ${\cal F}^{\rm e}_{\rm ppe^-}$ differs only slightly from unity, as does the overlap factor ${\cal F}^{\rm e}_{\rm pp}$ of the solar proton burning. The ratio of the overlap factors is equal to \begin{eqnarray}\label{label4.13} \frac{|{\cal F}^{\rm e}_{\rm ppe^-}|^2}{|{\cal F}^{\rm e}_{\rm pp}|^2} = 1 - \alpha\,\frac{2\pi}{3} + O(\alpha^2) = 1 + (-1.53 \times 10^{-2}) \simeq 1. \end{eqnarray} Setting $|{\cal F}^{\rm e}_{\rm ppe^-}|^2/|{\cal F}^{\rm e}_{\rm pp}|^2 = 1$ we can make numerical predictions for the cross section Eq.(\ref{label4.10}) and compare them with the PMA ones. The most recent PMA calculations of the cross section for the reaction $\nu_{\rm e}$ + D $\to$ e$^-$ + p + p have been obtained in Refs.\,[22,23] and tabulated for the neutrino energies ranging over the region from threshold up to 160$\,{\rm MeV}$. Since our result is restricted to neutrino energies from threshold up to 10$\,{\rm MeV}$, we compute the cross section only for this energy region \begin{eqnarray}\label{label4.14} \sigma^{\nu_{\rm e} D}_{\rm cc}(E_{\nu_{\rm e}}= 4\,{\rm MeV}) &=& 2.46\,(1.86/1.54)\times 10^{-43}\,{\rm cm}^2,\nonumber\\ \sigma^{\nu_{\rm e} D}_{\rm cc}(E_{\nu_{\rm e}}= 6\,{\rm MeV}) &=& 9.60\,(5.89/6.13)\times 10^{-43}\,{\rm cm}^2,\nonumber\\ \sigma^{\nu_{\rm e} D}_{\rm cc}(E_{\nu_{\rm e}}= 8\,{\rm MeV}) &=& 2.38\,(1.38/1.44)\times 10^{-42}\,{\rm cm}^2,\nonumber\\ \sigma^{\nu_{\rm e} D}_{\rm cc}(E_{\nu_{\rm e}}= 10\,{\rm MeV}) &=& 4.07\,(2.55/2.66)\times 10^{-42}\,{\rm cm}^2, \end{eqnarray} where the data in parentheses are taken from Refs.\,[22] and [23], respectively. Thus, our numerical values for the cross section $\sigma^{\nu_{\rm e} D}_{\rm cc}(E_{\nu_{\rm e}})$ are on average larger than the PMA ones by a factor of about 1.5. Our predictions for the cross section Eq.(\ref{label4.14}) differ from the predictions of Ref.[14].
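Two of the numbers above admit quick numerical cross-checks: the prefactor $3.69\times 10^5$ of Eq.(\ref{label4.9}) and the ratio Eq.(\ref{label4.13}), which follows from the expansions Eq.(\ref{label2.25}) and Eq.(\ref{label4.12}) once the $\gamma$--dependent imaginary parts cancel to $O(\alpha)$. In the sketch below the values $r_C = 1/(\alpha M_{\rm N})$ and $M_{\rm N}\simeq 940\,{\rm MeV}$ are our assumptions (in units $\hbar = c = 1$, with $S_{\rm pp}(0)$ in ${\rm MeV\,cm^2}$):

```python
import math

ALPHA = 1.0 / 137.036
GAMMA_E = 0.5772156649015329
M_N, M_E = 940.0, 0.511                 # MeV (M_N assumed)
E_TH = 2.225 + 0.511 - 1.293            # MeV, threshold energy
OMEGA = 4.481                           # (W/m_e)^5 f(m_e/W), below Eq. (4.5)
R_C = 1.0 / (ALPHA * M_N)               # MeV^-1, assumed Coulomb length

# Numerical prefactor of Eq. (4.9)
prefactor = (640.0 * R_C / (math.pi * OMEGA)
             * (M_N / E_TH) ** 1.5 * (E_TH / (2.0 * M_E)) ** 3.5)

# Overlap factors to O(alpha), Eqs. (2.25) and (4.12)
f_pp = complex(1.0 + ALPHA * 5.0 * math.pi / 54.0, -ALPHA * 5.0 * GAMMA_E / 27.0)
f_ppe = complex(1.0 - ALPHA * 13.0 * math.pi / 54.0, ALPHA * 13.0 * GAMMA_E / 27.0)
ratio = abs(f_ppe) ** 2 / abs(f_pp) ** 2   # 1 - 2*pi*alpha/3 + O(alpha^2)
```

With these assumptions the prefactor reproduces $3.69\times 10^5$ to better than a percent, and $3.69\times 10^5 \times 4.08\times 10^{-49}$ recovers the coefficient $1.50\times 10^{-43}\,{\rm cm^2}$ of Eq.(\ref{label4.10}).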
This is related to (i) the value of the astrophysical factor, which is larger by a factor of 1.4 in Ref.[14], and (ii) the form factor describing a spatial smearing of the deuteron, which is $F^2_{\rm D}(k^2)$ in this paper (see Ref.[13]) and $F_{\rm D}(k^2)$ in Ref.[14]. \section{Astrophysical factor for pep process} \setcounter{equation}{0} In the RFMD the amplitude of the reaction p + e$^-$ + p $\to$ D + $\nu_{\rm e}$ or the pep--process is related to the effective Lagrangian Eq.(\ref{label2.3}) and reads \begin{eqnarray}\label{label5.1} && i{\cal M}({\rm p} + {\rm e}^- + {\rm p} \to {\rm D} + \nu_{e}) = G_{\rm V}\,g_{\rm A} M_{\rm N}\,G_{\rm \pi NN}\,\frac{3g_{\rm V}}{4\pi^2}\,e^*_{\mu}(k_{\rm D})\,[\bar{u}(k_{\nu_{\rm e}})\gamma^{\mu} (1-\gamma^5) u(k_{\rm e^-})]\,{\cal F}^{\rm e}_{\rm pp}\nonumber\\ &&\times \,\frac{\displaystyle C_0(k)}{\displaystyle1 - \frac{1}{2}\,a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}k^2 + \frac{a^{\rm e}_{\rm pp}}{r_C}\,h(2 k r_C) + i\,a^{\rm e}_{\rm pp}\,k\,C^2_0(k) }\,[\bar{u^c}(p_2) \gamma^5 u(p_1)]\,F_{\rm D}(k^2), \end{eqnarray} where we have described low--energy elastic pp scattering in analogy with the solar proton burning and the neutrino disintegration of the deuteron.
The amplitude Eq.(\ref{label5.1}) squared, averaged and summed over polarizations of the interacting particles is given by \begin{eqnarray}\label{label5.2} &&\overline{|{\cal M}({\rm p} + {\rm e}^- + {\rm p} \to {\rm D} + \nu_{e})|^2} = G^2_{\rm V}\, g^2_{\rm A} M^6_{\rm N}\,G^2_{\rm \pi NN}\,\frac{27 Q_{\rm D}}{\pi^2}\,|{\cal F}^{\rm e}_{\rm pp}|^2\,\,F^2_{\rm D}(k^2)\,F(Z, E_{\rm e^-})\nonumber\\ &&\times\,\frac{\displaystyle C^2_0(k)} {\displaystyle \Big[1 - \frac{1}{2}\,a^{\rm e}_{\rm pp} r^{\rm e}_{\rm pp}k^2 + \frac{a^{\rm e}_{\rm pp}}{r_C}\,h(2 k r_C)\Big]^2 + (a^{\rm e}_{\rm pp})^2 k^2 C^4_0(k)}\,\Bigg( E_{\rm e^-} E_{\nu_{\rm e}} - \frac{1}{3}\vec{k}_{\rm e^-}\cdot \vec{k}_{\nu_{\rm e}}\Bigg), \end{eqnarray} where $F(Z, E_{\rm e^-})$ is the Fermi function given by Eq.(\ref{label4.3}). At low energies the cross section $\sigma_{\rm pep}(T_{\rm pp})$ for the pep--process can be determined as follows [24] \begin{eqnarray}\label{label5.3} &&\sigma_{\rm pep}(T_{\rm pp}) = \frac{1}{v}\frac{1}{4M^2_{\rm N}}\int \frac{d^3k_{\rm e^-}}{(2\pi)^3 2 E_{\rm e^-}}\,g\, n(\vec{k}_{\rm e^-})\int \overline{|{\cal M}({\rm p} + {\rm e}^- + {\rm p} \to {\rm D} + \nu_{\rm e})|^2}\nonumber\\ &&(2\pi)^4 \delta^{(4)}(k_{\rm D} + k_{\nu_{\rm e}} - p_1 - p_2 - k_{\rm e^-}) \frac{d^3k_{\rm D}}{(2\pi)^3 2M_{\rm D}}\frac{d^3k_{\nu_{\rm e}}}{(2\pi)^3 2E_{\nu_{\rm e}}}, \end{eqnarray} where $g = 2$ is the number of the electron spin states and $v$ is the relative velocity of the protons. The electron distribution function $n(\vec{k}_{\rm e^-})$ can be taken in the form [21] \begin{eqnarray}\label{label5.4} n(\vec{k}_{\rm e^-}) = e^{\displaystyle \bar{\nu} - T_{\rm e^-}/kT_c}, \end{eqnarray} where $k = 8.617\times 10^{-11}\,{\rm MeV\,K^{-1}}$ is the Boltzmann constant and $T_c$ is the temperature of the core of the Sun.
The distribution function $n(\vec{k}_{\rm e^-})$ is normalized by the condition \begin{eqnarray}\label{label5.5} g\int \frac{d^3k_{\rm e^-}}{(2\pi)^3}\,n(\vec{k}_{\rm e^-}) = n_{\rm e^-}, \end{eqnarray} where $n_{\rm e^-}$ is the electron number density. From the normalization condition Eq.(\ref{label5.5}) we derive \begin{eqnarray}\label{label5.6} e^{\displaystyle \bar{\nu}} = \frac{\displaystyle 4\,\pi^3\, n_{\rm e^-}}{\displaystyle (2\pi\,m_{\rm e}\,kT_c)^{3/2}}. \end{eqnarray} The astrophysical factor $S_{\rm pep}(0)$ is then defined by \begin{eqnarray}\label{label5.7} S_{\rm pep}(0) = S_{\rm pp}(0)\,\frac{15}{2\pi}\,\frac{1}{\Omega_{\rm D e^+ \nu_{\rm e}}}\,\frac{1}{m^3_{\rm e}}\,\Bigg(\frac{E_{\rm th}}{m_{\rm e}}\Bigg)^2\,e^{\displaystyle \bar{\nu}}\,\int d^3k_{\rm e^-} \,e^{\displaystyle - T_{\rm e^-}/kT_c}\,F(Z, E_{\rm e^-}). \end{eqnarray} For the ratio $S_{\rm pep}(0)/S_{\rm pp}(0)$ we obtain \begin{eqnarray}\label{label5.8} \frac{S_{\rm pep}(0)}{S_{\rm pp}(0)} = \frac{2^{3/2}\pi^{5/2}}{f_{\rm pp}(0)}\,\Bigg(\frac{\alpha Z n_{\rm e^-}}{m^3_{\rm e}}\Bigg)\,\Bigg(\frac{E_{\rm th}}{m_{\rm e}}\Bigg)^2\,\sqrt{\frac{m_{\rm e}}{k T_c}}\,I\Bigg(Z\sqrt{\frac{2 m_{\rm e}}{k T_c}}\Bigg). \end{eqnarray} We have set $f_{\rm pp}(0) = \Omega_{\rm D e^+ \nu_{\rm e}}/30 = 0.149$ [21], and the function $I(x)$, introduced by Bahcall and May [21], reads \begin{eqnarray}\label{label5.9} I(x) = \int\limits^{\infty}_0 {\displaystyle \frac{\displaystyle du\,e^{\displaystyle -u}}{\displaystyle 1 - e^{\displaystyle -\pi\alpha\,x/\sqrt{u}}}}. \end{eqnarray} The relation between the astrophysical factors $S_{\rm pep}(0)$ and $S_{\rm pp}(0)$ given by Eq.(\ref{label5.8}) is in complete agreement with that obtained by Bahcall and May [21]. The ratio Eq.(\ref{label5.8}) does not depend on whether the astrophysical factor $S_{\rm pp}(0)$ is enhanced with respect to the recommended value or not.
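The Bahcall--May function $I(x)$ of Eq.(\ref{label5.9}) is straightforward to evaluate numerically; for small $\pi\alpha x$ it behaves as $I(x)\simeq \sqrt{\pi}/(2\pi\alpha x)$, since $1-e^{-s}\simeq s$ in the denominator. A sketch (the integration cutoff and grid size are our choices):

```python
import math

ALPHA = 1.0 / 137.036

def I_bahcall_may(x, u_max=50.0, n=20_000):
    """Simpson evaluation of I(x) = int_0^inf du e^{-u}/(1 - e^{-pi*alpha*x/sqrt(u)}),
    Eq. (5.9); the e^{-u} factor makes the tail beyond u_max negligible."""
    h = u_max / n
    total = 0.0
    for i in range(n + 1):
        u = max(i * h, 1e-12)        # avoid u = 0; the integrand tends to 1 there
        s = math.pi * ALPHA * x / math.sqrt(u)
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * math.exp(-u) / -math.expm1(-s)
    return total * h / 3.0
```

Since $1/(1-e^{-s})$ decreases with $s$, $I(x)$ is a decreasing function of its argument, i.e. the Coulomb suppression of the pep rate grows with $Z\sqrt{2m_{\rm e}/kT_c}$.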
\section{Conclusion} \setcounter{equation}{0} We have shown that the contributions of low--energy elastic pp scattering in the ${^1}{\rm S}_0$--state with the Coulomb repulsion to the amplitudes of the reactions p + p $\to$ D + e$^+$ + $\nu_{\rm e}$, $\nu_{\rm e}$ + D $\to$ e$^-$ + p + p and p + e$^-$ + p $\to$ D + $\nu_{\rm e}$ can be described in the RFMD in full agreement with low--energy nuclear phenomenology in terms of the S--wave scattering length and the effective range. The amplitude of low--energy elastic pp scattering has been obtained by summing up an infinite series of one--proton loop diagrams and evaluating the result of the summation in leading order in the large $N_C$ expansion. This fully removes the problem pointed out by Bahcall and Kamionkowski [17] that in the RFMD with the effective local four--nucleon interaction Eq.(\ref{label2.1}) one cannot describe low--energy elastic pp scattering in the ${^1}{\rm S}_0$--state with the Coulomb repulsion in agreement with low--energy nuclear phenomenology. The obtained numerical value of the astrophysical factor $S_{\rm pp}(0) = 4.08\times 10^{-25}\,{\rm MeV\, b}$ agrees with the recommended value $S_{\rm pp}(0) = 4.00\times 10^{-25}\,{\rm MeV\, b}$ and the recent estimate $S_{\rm pp}(0) = 4.20\times 10^{-25}\,{\rm MeV\, b}$ [9] obtained from the helioseismic data. Unfortunately, the value of the astrophysical factor $S_{\rm pp}(0) = 4.08 \times 10^{-25}\,{\rm MeV\,\rm b}$ does not confirm the enhancement by a factor of 1.4 obtained in the modified version of the RFMD in Ref.[14] which is not well defined due to a violation of Lorentz invariance of the effective four--nucleon interaction describing N + N $\to$ N + N transitions. This violation has turned out to be incompatible with a dominance of one--nucleon loop anomalies which are Lorentz covariant. The cross section for the neutrino disintegration of the deuteron has been evaluated in terms of $S_{\rm pp}(0)$.
We have obtained an enhancement of the cross section by a factor of order 1.5 on the average for neutrino energies $E_{\nu_{\rm e}}$ from threshold up to $10\,{\rm MeV}$. It would be important to verify our results for the reaction $\nu_{\rm e}$ + D $\to$ e$^-$ + p + p in solar neutrino experiments planned by SNO. In fact, first, this should provide an experimental study of $S_{\rm pp}(0)$ and, second, the cross sections for the anti--neutrino disintegration of the deuteron caused by charged $\bar{\nu}_{\rm e}$ + D $\to$ e$^+$ + n + n and neutral $\bar{\nu}_{\rm e}$ + D $\to$ $\bar{\nu}_{\rm e}$ + n + p weak currents have been found in good agreement with recent experimental data obtained by Reines's experimental group [26]. The evaluation of the astrophysical factor $S_{\rm pep}(0)$ for the reaction p + e$^-$ + p $\to$ D + $\nu_{\rm e}$ or pep--process in the RFMD has shown that the ratio $S_{\rm pep}(0)/S_{\rm pp}(0)$, first, agrees fully with the result obtained by Bahcall and May [21] and, second, does not depend on whether $S_{\rm pp}(0)$ is enhanced with respect to the recommended value or not. Concluding the paper we would like to emphasize that our model, the RFMD, conveys the idea of a dominant role of one--fermion loop (one--nucleon loop) anomalies from elementary particle physics to nuclear physics. This is a new approach to the description of low--energy nuclear forces in physics of finite nuclei. In spite of the almost 30 years' history since the discovery of one--fermion loop anomalies and the application of these anomalies to the evaluation of effective Lagrangians of low--energy interactions of hadrons, in nuclear physics fermion--loop anomalies have not been applied to the analysis of low--energy nuclear interactions and properties of nuclei. However, an important role of $N\bar{N}$ fluctuations for the correct description of low--energy properties of finite nuclei has been understood in Ref.[16].
Moreover, $N\bar{N}$ fluctuations have been described in terms of one--nucleon loop diagrams within quantum field theoretic approaches, but the contributions of one--nucleon loop anomalies have not been considered in the papers of Ref.[16]. The RFMD strives to fill this gap. Within the framework of the RFMD we aim to understand whether strong low--energy nuclear forces can, in principle, be described in terms of one--nucleon loop anomalies. Of course, our results should be quantitatively compared with the experimental data and other theoretical approaches. Nevertheless, at the present level of the development of our model one cannot yet demand a description of, for example, the astrophysical factor $S_{\rm pp}(0)$ with an accuracy better than that achieved by Schiavilla {\it et al.} [6], where only corrections not greater than 1$\%$ are allowed. This is not essential for our approach at present. What is much more important is the possibility to describe, without free parameters and in quantitative agreement with both the experimental data and other theoretical approaches, the whole multitude of low--energy nuclear reactions of the deuteron coupled to nucleons and other particles. In Ref.[13] we have outlined the procedure for the evaluation of chiral meson--loop corrections in the RFMD. The absence of free parameters in the RFMD makes it possible to assess not only the role of these corrections but also corrections of other kinds mentioned recently by Vogel and Beacom [25]. The justification of the RFMD within QCD and the large $N_C$ expansion [12] implies that one--nucleon loop anomalies might be natural objects for the understanding of low--energy nuclear forces. The real accuracy of the approach should become clear in the course of its development. \section{Acknowledgement} We would like to thank Prof. Kamionkowski and Prof. Beacom for reading the manuscript and for useful comments. \newpage
\section{Introduction} \label{sec:intro} Developments in numerical modelling provide the possibility to describe complex phenomena in material or structural behaviour. The resulting models are, however, often highly nonlinear and defined by many parameters, which have to be estimated so as to properly describe the investigated system and its behaviour. The aim of model calibration is thus to rediscover the unknown parameters knowing the experimentally obtained response of a system to the given excitations. The principal difficulty of model calibration is related to the fact that while the numerical model of an experiment represents a well-defined mapping from input (model, material, structural, or other parameters) to output (structural response), there is no guarantee that the inverse relation even exists. The most broadly used approach to parameter identification is an error minimisation technique, where the distance between parameterised model predictions and observed data is minimised \cite{Stavroulakis:2003}. Since the inverse relation (mapping of model outputs to its inputs) is often ill-posed, the error minimisation technique leads to a difficult optimisation problem, which is highly nonlinear and multi-modal. Therefore, the choice of an appropriate identification strategy is not trivial. Another approach, intensively developed during the last decade, is based on Bayesian updating of the uncertainty in the parameters' description \cite{Marzouk:2007:JCP,Kucerova:2012:JCAM}. The uncertainty in observations is expressed by a corresponding probability distribution and employed for the estimation of the so-called posterior probabilistic description of the identified parameters together with the prior expert knowledge about the parameter values \cite{Jaynes:2003,Tarantola:2005}.
The unknown parameters are thus modelled as random variables originally endowed with prior expert-based probability density functions, which are then updated using the observations to the posterior density functions. While the error minimisation techniques lead to a single point estimate of the parameters' values, the result of Bayesian inference is a probability distribution that summarizes all available information about the parameters. Another very important advantage of Bayesian inference consists in treating the inverse problem as a well-posed problem in an expanded stochastic space. Despite the progress in uncertainty quantification methods \cite{Matthies:2007:IB,Rosic:2013:ES}, the richer information provided by Bayesian inference is generally related to more time-consuming computations. In many situations, the single point estimate approach remains the only feasible one, and the development of efficient tools suitable for this strategy is still a topical issue. Within the last several decades, a lot of attention was paid to the so-called intelligent methods of information processing and, among them, especially to soft computing methods such as artificial neural networks (ANNs), evolutionary algorithms or fuzzy systems \cite{Jang:1996:NSC}. A review of soft computing methods for parameter identification can be found e.g. in \cite{Kucerova:2007:PHD}. In this paper, we focus on applications of ANNs in the single point approach to parameter identification. \new{In particular, we elaborate on our previous work presented in \cite{Mares:2012:IALCCE,Mares:2012:Topping} with the goal to present a detailed and comprehensive comparison of three different strategies of ANNs' usage in parameter identification problems.} \new{The next section briefly recalls the basics of ANNs.
Classification of ANNs' different applications in calibration problems is introduced in Section \ref{sec:strategies} and a description of the illustrative example -- the affinity hydration model for concrete -- follows in Section \ref{sec:affinity}. In the context of this particular example, the calibration strategies are then described in detail in five sections, starting with training data preparation and sensitivity analysis in Section \ref{sec:sensitivity}. Neural network inputs and outputs in the particular strategies are discussed in Section \ref{sec:implementation} and training together with topology selection is described in Section \ref{sec:training}. Verification and validation on simulated and experimental data are summarized in Sections \ref{sec:verification} and \ref{sec:valid}, respectively. Finally, the results are concluded in Section \ref{sec:concl}.} \section{Artificial neural network} Artificial neural networks (ANNs) \cite{Gurney:2002,Haykin:2009} are powerful computational systems consisting of many simple processing elements - so-called neurons - connected together to perform tasks in an analogy to biological brains. Their main feature is the ability to change their behaviour based on external information that flows through the ANN during the learning (training) phase. A particular type of ANN is the so-called feedforward neural network, which consists of neurons organized into layers where outputs from one layer are used as inputs into the following layer, see Figure \ref{fig:mlp}. There are no cycles or loops in the network and no feed-back connections. The most frequently used example is the multi-layer perceptron (MLP) with a sigmoid transfer function and a gradient descent method of training called the back-propagation learning algorithm. In practical usage, the MLPs are known for their ability to approximate nonlinear relations and therefore, when speaking about an ANN, the MLP is considered in the following text. \begin{figure}[h!]
\centerline{ \includegraphics[width=13cm]{mlp.eps} } \caption{Architecture of multi-layer perceptron} \label{fig:mlp} \end{figure} The input layer represents a vector of input parameters, which are directly the outputs of the input layer. The outputs $o_{i-1,k}$ of the $(i-1)$-th layer are multiplied by a vector of constants $w_{i,j,k}$, the so-called synaptic weights, summed and used as inputs $u_{i,j}$ into \new{the $j$-th neuron in} the following $i$-th layer. Elements in the hidden and output layers - neurons - are defined by an activation function $f_a(u_{i,j})$, which is applied on the input and produces the output value of the $j$-th neuron in the $i$-th layer, i.e. \begin{equation} o_{i,j} = f_a \left( u_{i,j} \right) \qquad \mbox{where} \qquad u_{i,j} = \new{\sum_{k=0}^K} \left(o_{i-1,k} w_{i,j,k} \right) \, . \end{equation} The synaptic weights $w_{i,j,k}$ are the parameters of an ANN to be determined during the training process. \new{$K$ is the number of neurons in the $(i-1)$-th layer.} The type of the activation function is usually chosen in accordance with the type of the function to be approximated. In the case of continuous problems, the sigmoid activation function given as \begin{equation} o_{i,j} = f_a \left( u_{i,j} \right) = \frac{1}{1+e^{-u_{i,j}}} \end{equation} is the most common choice. One bias neuron is usually added into the input and hidden layers. It does not contain an activation function, but only a constant value. Its role is to allow shifting the sum of the outputs of its neighbouring neurons before this sum enters as an input into the neurons in the following layer. The values of the biases are determined by the training process together with the synaptic weights. Despite the ANN's popularity, there are only a few recommendations for the choice of the ANN's architecture. The authors, e.g.
in \cite{Hornik:1989:NN,Hornik:1993:NN}, show that an ANN with any of a wide variety of continuous nonlinear hidden-layer activation functions and one hidden layer with an arbitrarily large number of units suffices for the "universal approximation" property. Therefore, we limit our numerical experiments to such a case. The number of units in the input and the output layer is usually given by the studied problem itself, but there is no theory yet specifying the number of units in the hidden layer. \new{On one hand, too small a number of hidden units leads to large prediction errors. On the other hand, a large number of hidden units may cause the so-called overfitting, where the ANN provides precise outputs for the training samples, but fails in case of unseen samples. In such a situation, the ANN tries to fit the training data despite increasing oscillations in the intermediate space.} To overcome this problem, some model selection technique \cite{Anders:1999:NN} has to be applied in order to perform a guided choice of the ANN's topology. \new{Recent approaches encompass e.g. growing-pruning methods (see e.g. \cite{Narasimha:2008:N}) or more complex techniques designed for optimisation of the ANN's topology such as meta-learning \cite{Kordik:2009,Kordik:2010:NN}. Here we employ a simple and general strategy to evaluate a particular ANN's topology: cross-validation}, because it does not involve any probabilistic assumptions or dependencies on an identification problem. The idea of cross-validation is based on a repeated evaluation of the ANN's prediction error for a chosen subset of training data and the selection of the ANN with the smallest averaged prediction errors. Compared to the well-known model validation on some independent set of data, the advantage of cross-validation consists \new{in better behaviour on smaller data sets, where an independent data set cannot be afforded} \cite{Moody:1994:FSNN}.
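The cross-validation loop just described can be sketched in a few lines. In this minimal illustration a polynomial fit stands in for ANN training, so that the polynomial degree plays the role of the hidden-unit count as the complexity measure; the data, names and fold count are our own choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy 1D relation standing in for (parameter, response) samples.
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

def kfold_indices(n, k):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    perm = rng.permutation(n)
    folds = np.array_split(perm, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

def cv_error(degree, k=5):
    """Averaged validation error of a polynomial stand-in model of the given
    complexity (each call reshuffles the folds, which is acceptable here)."""
    errors = []
    for train, val in kfold_indices(x.size, k):
        coeffs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coeffs, x[val])
        errors.append(np.mean((pred - y[val]) ** 2))
    return np.mean(errors)

# Select the model complexity with the smallest averaged validation error.
degrees = range(1, 10)
best = min(degrees, key=cv_error)
```

The same skeleton applies with an ANN in place of the polynomial: train on each fold's training part, measure the prediction error on the held-out part, and keep the topology with the smallest averaged error.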
\new{Before applying the ANN to any engineering problem, one also has to resolve several questions regarding the training data preparation. These go beyond the transformation of input and output data into the range of the activation functions. In simulation problems, where the ANN is applied to mimic some unknown relationship between observed quantities, the training data coincide with the measured data. In inverse problems, we already have some theoretical model relating those quantities and we train the ANN on simulated data, see a recent review of ANN's application in structural mechanics \cite{Freitag:2015:CTR}. Preparation of a suitable training set becomes another nontrivial task, where sensitivity analysis plays an important role. For the sake of clarity, we address these topics in more detail in Section \ref{sec:sensitivity} in the context of a particular model for cement hydration.} \section{Strategies for application of ANN in model calibration}\label{sec:strategies} In model calibration, the goal is to find a set of model parameters minimising the difference between the model response and experimental measurements, see Figure \ref{fig:scheme}. \begin{figure}[h!] \centerline{ \includegraphics[width=13cm]{schema.eps} } \caption{Scheme of model calibration procedure.} \label{fig:scheme} \end{figure} An intuitive way of solving the calibration problem is to formulate an error function quantifying this difference and to minimise the error function using some optimisation algorithm. The most common error functions are given as \begin{eqnarray} F_1 & = & \sum^{N_\mathrm{R}} _{i=1}(r_i-d_i)^2 \, , \label{eq:optim1} \\ F_2 & = & \sum^{N_\mathrm{R}} _{i=1}|r_i-d_i| \, , \label{eq:optim2} \end{eqnarray} where $r_i$ is the $i$-th component of the model response corresponding to the $i$-th measured quantity $d_i$ and $N_\mathrm{R}$ is the number of measured quantities.
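The two error functions $F_1$ and $F_2$ above amount to a few lines of code (a sketch; the function name and argument order are our own):

```python
import numpy as np

def error_functions(response, data):
    """Discrepancy between model response r_i and measurements d_i:
    F1 = sum of squared residuals, F2 = sum of absolute residuals."""
    r = np.asarray(response, dtype=float)
    d = np.asarray(data, dtype=float)
    residuals = r - d
    f1 = float(np.sum(residuals ** 2))
    f2 = float(np.sum(np.abs(residuals)))
    return f1, f2

# error_functions([1.0, 2.0], [0.5, 2.5]) -> (0.5, 1.0)
```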
The difficulty arises from the nonlinear relation between the model response and the model parameters, often causing complexity of the error function such as multi-modality or non-differentiability. Therefore, the computationally efficient methods based on an analytically or numerically obtained gradient can be applied only in specific cases. A more general approach is to apply an optimisation algorithm which can handle the multi-modality once furnished with a sufficient number of function evaluations. However, one evaluation of an error function always involves a simulation of the model. Even for a relatively fast model simulation, the optimisation can easily become infeasible because of the huge number of function evaluations commonly needed by evolutionary algorithms, even though they usually need fewer simulations than the uncertainty-based methods mentioned in the introductory part of the paper. One way of reducing the number of model simulations is to construct a {\bf forward model approximation} based e.g. on an ANN. The error function minimisation then becomes a minimisation of the distance between the ANN's predictions and the experimental data. The efficiency of this strategy relies on the evaluation of the trained ANN being significantly faster than the full model simulation. The advantage of this strategy is that the ANN is used to approximate a known mapping which certainly exists and is well-posed. Computational costs of this strategy are split into two parts of a similar size: (i) the ANN training - optimisation of synaptic weights, and (ii) the minimisation of an error in the ANN prediction for experimental data - optimisation of ANN inputs (i.e. determination of the investigated model parameters). An important shortcoming of this method is that this ill-posed optimisation problem needs to be solved repeatedly for any new experimental measurement. This way of ANN application to parameter identification was presented e.g.
in \cite{Abendroth:2006:EFM}, where an ANN is used for predicting load-deflection curves and the conjugate directions algorithm is then applied for optimisation of ductile damage and fracture parameters. Authors in \cite{Pichler:2003:IJNME} train an ANN to approximate the results of FE simulations of jet-grouted columns and optimise the column radius and the cement content of the columns by a genetic algorithm. Principally the same methods are used for identification of elasto-plastic parameters in \cite{Aguir:2011:MD}. One more difficulty of the forward model approximation concerns the number of parameters and response components. It is very common that the experimental observations are represented by discretised curves or surfaces in time or space dimensions, defined as vectors with a large number of components. A forward model then represents a mapping from a usually low-dimensional parameter space to a high-dimensional response space. Although this mapping is well-posed, the surrogate model must have a large number of outputs, or the time and/or space dimensions have to be included among the model inputs. Another way of avoiding the mapping to a large number of outputs is to construct the {\bf error function approximation}, where the model parameters are mapped onto only one scalar value. One important inconvenience of such a strategy is of course the complexity of the error function, which can be, as mentioned above, highly nonlinear, multi-modal and/or non-smooth. Higher complexity of the approximated relation leads to a higher number of simulations needed for the construction of the approximation. This raises another problem: the choice of an appropriate design of experiments, i.e. the sets of parameters for which the simulations are performed, which will enable building up the surrogate with a relatively small error. This problem can be reduced by adaptive addition of design points, i.e. new simulations, close to the minimum of the error function approximation.
The result of the new simulation is then used for an improvement of the surrogate and a new optimisation process is run again. Such an approach is usually well suited for surrogates based on kriging or radial basis function networks \cite{Queipo:2005,Kucerova:2009}. In this paper, we limit our attention to the application of feedforward layered neural networks and thus we investigate their ability to approximate the error function with a limited number of simulations in a non-adaptive fashion. While the strategy of the forward model approximation involves a new optimisation process for any new data, the strategy of the error function approximation involves not only the optimisation process, but also the surrogate model construction. Regarding this aspect, the most convenient strategy is the {\bf inverse relation approximation}, which needs only one evaluation to furnish the parameters corresponding to new observations. Of course, by the new observations we mean observations of a system with different properties but performed under the same external conditions (e.g. a different material, but the same geometry of the specimen and loading conditions). The strategy of the inverse relation approximation assumes the existence of an inverse relationship between the outputs and the inputs of the calibrated model. If such a relationship exists at least on a specified domain of parameters' values, it can be approximated by an ANN. Here the ANN's training process \new{is responsible for all} computational costs arising from a~solution of the ill-posed problem. This way of the ANN's application to parameter identification was presented e.g.
in \cite{Novak:2006:EAAN} or recently in~\cite{Kucerova:2014:AES} for identification of mechanical material parameters, in \cite{Zaw:2009:JB} for estimation of the elastic modulus of the interface tissue on dental implant surfaces, in \cite{Zhang:2010:ECM} for identification of the interfacial heat transfer coefficient or in \cite{Klos:2011:CS} for determination of geometrical parameters of circular arches. In order to illustrate the advantages and disadvantages of the outlined strategies of the ANN's application to model calibration, we have chosen a computationally simple but nonlinear affinity hydration model briefly described in the following section. The model was successfully validated on Portland cements in \cite{Silva:2015:JIFS} and thus allows us to also validate the described identification strategies on experimental data as summarized in Section \ref{sec:valid}. \section{Affinity hydration model}\label{sec:affinity} Affinity hydration models provide a framework for accommodating all stages of cement hydration. We consider cement hydrating at an isothermal temperature of 25$^{\circ}\mathrm{C}$\xspace. At this temperature, the rate of hydration can be expressed by the {\it chemical affinity} $\tilde{A}_{25}(\alpha)$ at 25$^{\circ}\mathrm{C}$\xspace \begin{equation} \frac{\mathrm{d} \alpha}{\mathrm{d} t}=\tilde{A}_{25}(\alpha),\label{eq:158} \end{equation} where the chemical affinity has a dimension of $\textrm{time}^{-1}$ and $\alpha$ stands for the degree of hydration. The affinity for an isothermal temperature can be obtained experimentally; isothermal calorimetry measures the heat flow $q(t)$, which gives the hydration heat $Q(t)$ after integration.
The approximation is given by \begin{eqnarray} \frac{Q(t)}{Q_{pot}} &\approx& \alpha\label{eq:146},\\ \frac{1}{Q_{pot}}\frac{\mathrm{d} Q(t)}{\mathrm{d} t} &=& \frac{q(t)}{Q_{pot}} \approx \frac{\mathrm{d} \alpha}{\mathrm{d} t}= \tilde{A}_{25}(\alpha)\label{eq:45}, \end{eqnarray} where $Q_{pot}$ is expressed in J/g of cement paste. Hence the normalized heat flow $\frac{q(t)}{Q_{pot}}$ under isothermal 25$^{\circ}\mathrm{C}$\xspace equals the chemical affinity $\tilde{A}_{25}(\alpha)$. Cervera et al. \cite{Cervera:1999:JEM} proposed an analytical form of the normalized affinity which was refined in \cite{Gawin:2006:IJNME}. Here we use a slightly modified formulation~\cite{Smilauer:2010}: \begin{eqnarray} \tilde{A}_{25}(\alpha) = B_1 \left( \frac{B_2}{\alpha_\infty} + \alpha \right ) \left( \alpha_\infty - \alpha \right) \exp\left(-\bar\eta\frac{\alpha}{\alpha_\infty}\right),\label{eq:46} \end{eqnarray} where $B_1, B_2$ are coefficients related to chemical composition, $\alpha_\infty$ is the ultimate hydration degree and $\bar\eta$ represents microdiffusion of free water through formed hydrates. When hydration proceeds under varying temperature, the maturity principle, expressed via the Arrhenius equation, scales the affinity to an arbitrary temperature~$T$ \begin{eqnarray} \tilde{A}_{T} = \tilde{A}_{25} \exp\left[\frac{E_a}{R}\left(\frac{1}{273.15+25}-\frac{1}{T}\right)\right],\label{eq:145} \end{eqnarray} where $R$ is the universal gas constant (8.314 Jmol$^{-1}$K$^{-1}$) \new{and $E_a$ [Jmol$^{-1}$] is the activation energy}. For example, simulating isothermal hydration at 35$^{\circ}\mathrm{C}$\xspace means scaling $\tilde{A}_{25}$ with a factor of 1.651 at a given time. This means that hydrating concrete for 10 hours at 35$^{\circ}\mathrm{C}$\xspace releases the same amount of heat as concrete hydrating for 16.51 hours under 25$^{\circ}\mathrm{C}$\xspace. Note that setting $E_a=0$ ignores the effect of temperature and the hydration proceeds at 25$^{\circ}\mathrm{C}$\xspace.
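As an illustration of how the degree of hydration follows from the affinity and its Arrhenius scaling, the rate equation can be integrated with a simple explicit Euler scheme (a sketch only; the scheme, step size and parameter values below are our own choices, not prescribed by the paper):

```python
import math

def affinity_25(alpha, B1, B2, eta_bar, alpha_inf):
    """Normalized chemical affinity at 25 C (B1 in 1/h, so time is in hours)."""
    return B1 * (B2 / alpha_inf + alpha) * (alpha_inf - alpha) \
        * math.exp(-eta_bar * alpha / alpha_inf)

def hydration_degree(t_end, dt, B1, B2, eta_bar, alpha_inf,
                     T_celsius=25.0, Ea=0.0, R=8.314):
    """Explicit Euler integration of d(alpha)/dt = A_T(alpha), where A_T is
    A_25 scaled by the Arrhenius factor; Ea=0 recovers isothermal 25 C."""
    T_kelvin = 273.15 + T_celsius
    scale = math.exp(Ea / R * (1.0 / (273.15 + 25.0) - 1.0 / T_kelvin))
    alpha, t = 0.0, 0.0
    while t < t_end:
        alpha += dt * scale * affinity_25(alpha, B1, B2, eta_bar, alpha_inf)
        t += dt
    return alpha
```

With a sufficiently small step, the computed $\alpha$ grows monotonically from zero and approaches the ultimate hydration degree $\alpha_\infty$ from below.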
\new{The evolution of $\alpha$ is obtained through numerical integration since there is no exact analytical solution.} \section{Sensitivity analysis and training data preparation}\label{sec:sensitivity} Since the ANN's training process requires the preparation of a training data set, it is also worthwhile to use these data for a sampling-based sensitivity analysis \cite{Helton:2006:RESS,Saltelli:2000} and to obtain some information about the importance of particular observations or the significance of each parameter for the system behaviour. To achieve some reliable information from sensitivity analysis as well as a good approximation by an ANN, one has to choose the training data carefully according to a suitable design of experiments, see e.g. \cite{Janouchova:2013:CS} for a competitive comparison of several experimental designs. \new{As the model parameters are defined on various intervals, they need to be transformed into standardised parameters, e.g. $p_i \in [ 0; 1]$, defined on the intervals suitable for the chosen activation functions. When the bounds for a parameter vary over orders of magnitude, it can typically suggest a highly nonlinear relationship with the model response. At this moment, any expert knowledge about the parameter meaning can be employed to decrease that nonlinearity by introducing a nonlinear transformation to the standardised parameter. This is demonstrated on parameter $B_2$ in Table~\ref{tab:params}, where the bounds for the affinity model parameters together with their relations to the standardised parameters $p_i$ are listed.} \begin{table}[t!]
\centering \begin{tabular}{lllc}\hline Parameter & Minimum & Maximum & Relation \\\hline $B_1 \: [h^{-1}]$ & $0.1 $ & $1$ & $p_1 = (B_1 - 0.1)/0.9$\\ $B_2 \: [-]$ & $10^{-6}$ & $10^{-3}$ & $p_2 = (\log B_2 + 6)/3$\\ $\bar{\eta} \: [-]$ & 2 & 12 & $p_3 = (\bar{\eta} -2)/10$\\ $\alpha_\infty \: [-]$ & 0.7 & 1.0 & $p_4 = (\alpha_\infty -0.7)/0.3$\\ \hline \end{tabular} \caption{Bounds for affinity model parameters.} \label{tab:params} \end{table} \begin{figure}[h!] \centering \begin{tabular}{cc} $B_1 = \{0.1, 0.2, \ldots, 1\}$ & $B_2 = \{10^{-6}, 11.2^{-5}, \ldots, 10^{-3}\}$\\ \includegraphics[width=6cm]{vliv_parametru_1.eps} & \includegraphics[width=6cm]{vliv_parametru_2.eps}\\ $\bar{\eta} = \{2, 3.11, \ldots, 12\}$ & $\alpha_\infty = \{0.7, 0.73, \ldots, 1\}$\\ \includegraphics[width=6cm]{vliv_parametru_3.eps} & \includegraphics[width=6cm]{vliv_parametru_4.eps} \end{tabular} \caption{Influence of model parameters on model response $\alpha$.} \label{fig:par_alpha} \end{figure} The affinity hydration model was chosen not only for its nonlinearity, but especially for its relatively simple interpretation and computationally fast simulation. Hence, we assume that the model is suitable to illustrate typical features of the particular identification strategies. In order to understand the influence of the model parameters on its response more deeply, Figure \ref{fig:par_alpha} demonstrates the changes of the response induced by changes in a chosen parameter while the other parameters are fixed. On the other hand, to illustrate the spread of the model response corresponding to the parameters varying within the given domain, we prepare a design of experiments (DoE) having $N_\mathrm{DoE} = 100$ samples in the space of standardised parameters. The DoE is generated as Latin Hypercube Sampling optimised with respect to the modified $L_2$ discrepancy. Such an experimental design has a~good space-filling property and is nearly orthogonal~\cite{Janouchova:2013:CS}.
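The standardisation of Table~\ref{tab:params} and a basic LHS design can be sketched as follows; this is a minimal illustration in which the modified $L_2$ discrepancy optimisation is omitted and all function names are our own:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n_samples, n_dims):
    """Plain Latin Hypercube Sampling on [0, 1]^d: one sample per stratum in
    every dimension. The paper additionally optimises the design with respect
    to the modified L2 discrepancy, which is left out here."""
    samples = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        strata = rng.permutation(n_samples)
        samples[:, d] = (strata + rng.random(n_samples)) / n_samples
    return samples

def to_model_parameters(p):
    """Invert the standardisation of Table 1: p in [0,1]^4 -> (B1, B2, eta, a_inf)."""
    B1 = 0.1 + 0.9 * p[:, 0]                # p1 = (B1 - 0.1)/0.9
    B2 = 10.0 ** (3.0 * p[:, 1] - 6.0)      # p2 = (log10 B2 + 6)/3
    eta_bar = 2.0 + 10.0 * p[:, 2]          # p3 = (eta - 2)/10
    alpha_inf = 0.7 + 0.3 * p[:, 3]         # p4 = (a_inf - 0.7)/0.3
    return B1, B2, eta_bar, alpha_inf

P = latin_hypercube(100, 4)                 # N_DoE = 100 standardised samples
B1, B2, eta_bar, alpha_inf = to_model_parameters(P)
```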
For each design point we perform a~model simulation to obtain a bundle of $N_\mathrm{DoE}$ curves for the degree of hydration $\alpha(t)$, see Figure~\ref{fig:bundle}a. \begin{figure}[h!] \centering \begin{tabular}{cc} \includegraphics[width=6cm]{1_DoH_cely.eps} & \includegraphics[width=6cm]{citlivost.eps} \\ (a) & (b) \end{tabular} \caption{Bundle of degree of hydration curves obtained for design points (a) and sensitivity analysis for input-output pairs (b).} \label{fig:bundle} \end{figure} Since the model response is represented by the degree of hydration being a function of the time, the time domain is discretised into $1161$ steps uniformly distributed in the logarithm of time. Hence, the model input vector $\vek{p} = (p_1, p_2, p_3, p_4)$ consists of $4$ parameters and the output vector $\vek{\alpha} = (\alpha_1, \dots, \alpha_{N_\mathrm{time}})$ consists of $N_\mathrm{time} = 1161$ components. In order to quantify the influence of the model parameters on particular response components, we evaluate Spearman's rank correlation coefficient $\rho$ between each parameter and each response component using all the $N_\mathrm{DoE}$ simulations $(\vek{p}_i,\vek{\alpha}_i)$, $i \in \{1, \dots, N_\mathrm{DoE}\}$. The results of such a~sampling-based sensitivity analysis \cite{Helton:2006:RESS} are plotted in Figure \ref{fig:bundle}b. In the inverse mode of identification, the model output vector $\vek{\alpha}$ consisting of $N_\mathrm{time} = 1161$ components is too large to be used as an input vector for the ANN. Hence, we performed the principal component analysis (PCA) in order to reduce this number to $N_\mathrm{PCA} = 100$ components $\bar{\vek{\alpha}} = (\bar{\alpha}_1, \dots, \bar{\alpha}_{N_\mathrm{PCA}})$ with non-zero variance (this number is related to the number of simulations involved in PCA, i.e. $N_\mathrm{PCA} = N_\mathrm{DoE}$). The components are ordered according to their relative variance, see Figure \ref{fig:pca}a for the nine most important ones. \begin{figure}[h!]
\centering \begin{tabular}{cc} \includegraphics[width=6cm]{variance_explained.eps} & \includegraphics[width=6cm]{pca_citlivost.eps} \\ (a) & (b) \end{tabular} \caption{Variance explained by the first nine principal components (a) and sensitivity analysis for model inputs $\vek{p}_i$ - principal components $\bar{\vek{\alpha}}_i$ (b).} \label{fig:pca} \end{figure} The resulting principal components are technically new quantities obtained by a linear combination of the original model outputs $\bar{\vek{\alpha}} = \bar{A}(\vek{\alpha})$. This transformation has of course an influence on the sensitivity analysis and thus we computed correlations between the model inputs $\vek{p}_i$ and the principal components $\bar{\vek{\alpha}}_i$, see Figure \ref{fig:pca}b. \section{Implementation of approximation strategies}\label{sec:implementation} Results of the described simulations are also used as training simulations for ANNs, i.e. $\mathcal{D}_\mathrm{train} = \{(\vek{p}_i,\vek{\alpha}_i) \, | \, i \in \{1, 2, \dots, N_\mathrm{train}\}, N_\mathrm{train} = N_\mathrm{DoE} = 100 \}$. Particular approximation strategies, however, process the training simulations in a different way. The strategy of the forward model approximation can be formulated in two ways, which differ in handling the high dimensionality of the model output $\vek{\alpha}$. In the first formulation, we can consider the time step $t_k$ as the fifth model parameter (i.e. the fifth model input) and thus the model output reduces to only one scalar value of the hydration degree $\alpha_k$ corresponding to the given time $t_k$. As the objective of the ANN is thus to span the parameter as well as the time space, we call this strategy {\bf Forward Complex} (ForwComp). In such a configuration, the results of $N_\mathrm{train}$ training simulations turn into $N_\mathrm{train} \times N_\mathrm{time} = 116,100$ training samples.
Evaluation of so many samples at every iteration of the ANN's training process is, however, very time-consuming. Therefore, only every $m$-th time step is included for ANN training and thus the training set is given as $\mathcal{D}_\mathrm{train}^\mathrm{ForwComp} = \{((\vek{p}_i,t_k),\alpha_{i,k}) \, | \, i \in \{1, 2, \dots, N_\mathrm{train}\}, k \in \{1, 1+m, 1+2m, \dots, N_\mathrm{time}\} \}$. In our particular implementation, we selected $m=10$ leading to $|\mathcal{D}_\mathrm{train}^\mathrm{ForwComp}| = 11,700$ samples. Note that in all other strategies, the number of training samples equals the number of training simulations, see Table \ref{tab:stratpar}, where the significant parameters of particular approximation strategies are briefly summarised. \begin{table}[h!] \centering \begin{tabular*}{\textwidth}{@{\extracolsep{\fill} }lrllr} \hline Strategy & $N_\mathrm{ANN}$ & Inputs & Outputs & $|\mathcal{D}_\mathrm{train}|$ \\ \hline Forward Complex & $1$ & $p_1$, $p_2$, $p_3$, $p_4$, $t_k$ & $\alpha_k \, | \, k \in \{1, 11, \dots, 1161\}$ & $11700$ \\ Forward Split & $9$ & $p_1$, $p_2$, $p_3$, $p_4$ & $\alpha_{300}; \alpha_{400}; \dots; \alpha_{1100}$ & $100$ \\ Forward Split II & $22$ & $p_1$, $p_2$, $p_3$, $p_4$ & $\alpha_{100}; \alpha_{150}; \dots; \alpha_{1150}$ & $100$ \\ Forward Split III & $43$ & $p_1$, $p_2$, $p_3$, $p_4$ & $\alpha_{100}; \alpha_{130}; \alpha_{150}; \alpha_{170}; \dots; \alpha_{1150}$ & $100$ \\ \hline Error $F_1$ & $1$ & $p_1$, $p_2$, $p_3$, $p_4$ & $F_1$ & $100$ \\ Error $F_2$ & $1$ & $p_1$, $p_2$, $p_3$, $p_4$ & $F_2$ & $100$ \\ \hline Inverse Expert & $4$ & $\alpha_{300}, \alpha_{400}, \dots, \alpha_{1100}$ & $p_1$; $p_2$; $p_3$; $p_4$ & $100$ \\ Inverse Expert II & $4$ & $\alpha_{200}, \alpha_{300}, \dots, \alpha_{1100}$ & $p_1$; $p_2$; $p_3$; $p_4$ & $100$ \\ Inverse PCA & $4$ & $\bar{\alpha}_1, \bar{\alpha}_2, \dots, \bar{\alpha}_9$ & $p_1$; $p_2$; $p_3$; $p_4$ & $100$ \\ \hline \end{tabular*} \caption{Parameters of 
approximation strategies} \label{tab:stratpar} \end{table} The second way of the model output approximation is based on training an independent ANN for every time step $t_k$. Here, each particular ANN approximates a simpler relation and spans only the parameter space. A training data set for the ANN approximating the response component $\alpha_k$ is thus given as $\mathcal{D}_\mathrm{train}^{\mathrm{ForwSpli,}\alpha_k} = \{(\vek{p}_i,\alpha_{i,k}) \, | \, i \in \{1, 2, \dots, 100\} \}$ having only $|\mathcal{D}_\mathrm{train}^{\mathrm{ForwSpli,}\alpha_k}| = 100$ samples. A disadvantage of such an approach consists in training a large number $N_\mathrm{ANN}$ of smaller ANNs. As training of $N_\mathrm{ANN} = N_\mathrm{time} = 1161$ different ANNs can be almost infeasible, we select only a few of the time steps where the approximation is constructed and thus the model output approximation is coarser. The choice of the important time steps and their number can be driven by expert knowledge or by the results of the sensitivity analysis. Hence, we present three different choices so as to illustrate their influence, see Table \ref{tab:stratpar}. We further call these strategies {\bf Forward Split} (ForwSpli), {\bf Forward Split II} (ForwSpliII) and {\bf Forward Split III} (ForwSpliIII). The error function approximation is the only strategy where the high dimensionality of the model output does not impose any complications. The model output is used for evaluation of the error function and the ANN is trained to approximate the mapping from the parameter space to a single scalar value of the error function, i.e. $\mathcal{D}_\mathrm{train}^{\mathrm{Error,}F_a} = \{(\vek{p}_i,F_a) \, | \, i \in \{1, 2, \dots, N_\mathrm{train}\}\}$ and $|\mathcal{D}_\mathrm{train}^{\mathrm{Error,}F_a}| = 100$, where $F_a$ stands for a chosen error function. As we already mentioned in Section \ref{sec:strategies}, there are two very common error functions given by Eqs.
\eqref{eq:optim1} and \eqref{eq:optim2}, and thus we investigate both by considering the two strategies further called {\bf Error $F_1$} and {\bf Error $F_2$}, respectively. In the case of the inverse relation approximation, the high dimensionality of the model output again needs some special treatment so as to keep the number of ANN inputs, and thus the ANN complexity, reasonable. An intuitive approach is a simple selection of a limited number of output values $\vek{a} = A(\vek{\alpha})$. Here, one ANN is trained to predict one model parameter $p_j$ and thus $\mathcal{D}_\mathrm{train}^{\mathrm{InvExp},p_j} = \{(\vek{a}_i,p_{i,j}) \, | \, i \in \{1, 2, \dots, N_\mathrm{train}\}\}$ and $|\mathcal{D}_\mathrm{train}^{\mathrm{InvExp,}p_j}| = 100$. A particular choice of components in the vector $\vek{a}_i$ defined by the operator $A$ should take into account not only the results of the sensitivity analysis, but also possible measurement errors in the experimental data as well as any other expert knowledge. Hence, we again present two different choices in order to illustrate their influence, see Table \ref{tab:stratpar}; we further call these configurations {\bf Inverse Expert} (InvExp) and {\bf Inverse Expert II} (InvExpII). In order to reduce the influence of the expert choice, the principal components $\bar{\vek{\alpha}}$ computed as described in the previous section can be used as the ANN's inputs, and one only has to choose their number. To compare the information contained in the same number of inputs selected by an expert, we have chosen the same number of principal components as the number of inputs in the Inverse Expert configuration and thus $\mathcal{D}_\mathrm{train}^{\mathrm{InvPCA},p_j} = \{((\bar{\alpha}_{i,1}, \dots, \bar{\alpha}_{i,9}),p_{i,j}) \, | \, i \in \{1, 2, \dots, N_\mathrm{train}\}\}$ and $|\mathcal{D}_\mathrm{train}^{\mathrm{InvPCA,}p_j}| = 100$. The strategy based on principal components is further called {\bf Inverse PCA} (InvPCA).
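The assembly of the training sets for the inverse strategies can be sketched as follows. This is an illustrative outline in Python with toy data; the function name, the selected steps and the numbers are ours, not taken from the paper's implementation.

```python
# Illustrative sketch (hypothetical names, toy data): building training
# pairs for an inverse strategy, where selected response components
# a = A(alpha) serve as ANN inputs and the parameters as targets.

def inverse_expert_dataset(params, responses, picked_steps):
    """One training pair per simulation: selected response components
    alpha_k as inputs, the parameter vector as the target."""
    data = []
    for p, alpha in zip(params, responses):
        a = [alpha[k] for k in picked_steps]  # operator A(alpha)
        data.append((a, p))
    return data

# toy example: 3 simulations, responses sampled at 5 "time steps"
params = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
responses = [[0.0, 0.2, 0.5, 0.8, 1.0],
             [0.0, 0.1, 0.4, 0.7, 0.9],
             [0.0, 0.3, 0.6, 0.9, 1.0]]
train = inverse_expert_dataset(params, responses, picked_steps=[1, 3])
```

For the Inverse PCA variant, the inputs $\vek{a}_i$ would instead be the first several principal components of the response matrix rather than hand-picked components.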
\new{In our preliminary study presented in \cite{Mares:2012:IALCCE}, we also tested the possibility of choosing a smaller number of PCA-based inputs selected separately for each parameter to be identified according to the sensitivity analysis. Nevertheless, such a sensitivity-driven reduction of PCA-based inputs was shown to deteriorate the quality of the trained ANNs.} The last preparatory step concerns the generation of testing data for a final assessment of the resulting ANNs, consisting of $N_\mathrm{test} = 50$ simulations for randomly generated sets of input parameters. The obtained data are then processed by the particular approximation strategies in the same way as the training data described above. \section{Neural network training algorithm and topology choice}\label{sec:training} The quality of the ANN-based approximation estimated on a given data set $\mathcal{D}$ can be expressed as the mean relative prediction error $\varepsilon^\mathrm{MRP}(\mathcal{D})$ given as \begin{equation} \varepsilon^\mathrm{MRP}(\mathcal{D}) = \frac{\sum^{|\mathcal{D}|}_{i=1}|O_{i}-T_{i,\mathcal{D}}|}{|\mathcal{D}|( T_\mathrm{max,\mathcal{D}_{train}}-T_\mathrm{min,\mathcal{D}_{train}})} \, , \label{eq:error} \end{equation} where $O_{i}$ is the ANN's output corresponding to the target value $T_{i,\mathcal{D}}$ contained in the data set $\mathcal{D}$, which consists of $|\mathcal{D}|$ samples. $T_\mathrm{max,\mathcal{D}_{train}}$ and $T_\mathrm{min,\mathcal{D}_{train}}$ are the maximal and minimal target values in the training data set $\mathcal{D}_\mathrm{train}$, so the error $\varepsilon^\mathrm{MRP}(\mathcal{D})$ is always scaled by the same factor for any chosen data set $\mathcal{D}$, and this factor corresponds to the range of the training data. The conjugate gradient-based method \cite{Shewchuk:1994} was applied as the training algorithm for computing the synaptic weights, and the cross-validation method was employed to determine the number of hidden neurons.
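The error measure above amounts to the following computation; a minimal Python sketch (the function name is ours), where the scaling range is deliberately taken from the training targets, as in the text:

```python
def mrp_error(outputs, targets, t_train_min, t_train_max):
    """Mean relative prediction error: mean absolute deviation of the
    ANN outputs from the targets, scaled by the range of the *training*
    targets so that errors on any data set share the same scale."""
    n = len(outputs)
    total = sum(abs(o - t) for o, t in zip(outputs, targets))
    return total / (n * (t_train_max - t_train_min))

# toy example: two predictions, training targets spanning [0, 2]
err = mrp_error([1.0, 1.5], [1.1, 1.3], t_train_min=0.0, t_train_max=2.0)
# (0.1 + 0.2) / (2 * 2) = 0.075
```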
In $V$-fold cross-validation we split the training data set $\mathcal{D}_\mathrm{train}$ into $V$ subsets of approximately equal size, $\mathcal{D}_\mathrm{train} = \mathcal{D}_{\mathrm{train},1} \cup \mathcal{D}_{\mathrm{train},2} \cup \dots \cup \mathcal{D}_{\mathrm{train},V}$, and then perform $V$ training processes, each time leaving out one of the subsets $\mathcal{D}_{\mathrm{train},i}$ and using the rest of the training data set $\mathcal{D}_\mathrm{train} \setminus \mathcal{D}_{\mathrm{train},i}$. The criterion for stopping the training process is governed by the prediction errors ratio $r^\mathrm{PE}_k$ computed at the $k$-th iteration of the training algorithm and given as \begin{equation} r^\mathrm{PE}_k (\mathcal{D}_\mathrm{train} \setminus \mathcal{D}_{\mathrm{train},i}) = \frac{ \sum_{j = k-J}^{k} \varepsilon^\mathrm{MRP}_j (\mathcal{D}_\mathrm{train} \setminus \mathcal{D}_{\mathrm{train},i}) }{ \sum_{j = k-2J}^{k-J-1} \varepsilon^\mathrm{MRP}_j (\mathcal{D}_\mathrm{train} \setminus \mathcal{D}_{\mathrm{train},i})} \, , \end{equation} where $\varepsilon^\mathrm{MRP}_j (\mathcal{D}_\mathrm{train} \setminus \mathcal{D}_{\mathrm{train},i})$ is the mean relative prediction error obtained at the $j$-th iteration of the training algorithm on the training data set without its $i$-th partition, and $J$ is a chosen number of iterations over which the error sums are taken; summing over $J$ iterations has a smoothing effect on the ratio $r^\mathrm{PE}_k$. The training process is stopped either when the number of iterations reaches its chosen maximal value $K$ or when the prediction errors ratio $r^\mathrm{PE}_k$ exceeds a chosen critical value $r^\mathrm{PE}_\mathrm{max}$. Once the training process is completed, the ANN is evaluated on the remaining part of the training data $\mathcal{D}_{\mathrm{train},i}$, which was not used in the training process.
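The stopping criterion can be sketched as follows; an illustrative Python outline with names of our own choosing (the actual training code may differ):

```python
def prediction_error_ratio(errors, k, J):
    """r^PE_k: sum of the errors over iterations k-J..k divided by the
    sum over iterations k-2J..k-J-1; values close to 1 indicate that
    the error has stopped decreasing."""
    recent = sum(errors[k - J : k + 1])
    older = sum(errors[k - 2 * J : k - J])
    return recent / older

# toy run with steadily decreasing errors: the ratio stays well below
# the critical value r^PE_max = 0.999, so training would continue
errs = [1.0 / (i + 1) for i in range(20)]
r = prediction_error_ratio(errs, k=19, J=5)
stop = (r > 0.999)
```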
The quality of the ANN with a particular number of hidden neurons $h$ is assessed by the cross-validation error $\varepsilon^\mathrm{CV}_h$, computed as the mean of the errors obtained for the ANNs trained on the subsets $\mathcal{D}_\mathrm{train} \setminus \mathcal{D}_{\mathrm{train},i}$ and then evaluated on the remaining subset $\mathcal{D}_{\mathrm{train},i}$, i.e. \begin{equation} \varepsilon^\mathrm{CV}_h = \frac{1}{V} \sum_{i=1}^{V} \varepsilon^\mathrm{MRP} (\mathcal{D}_{\mathrm{train},i}) \, . \end{equation} We start with an ANN having $h_\mathrm{min}$ hidden neurons and compute the corresponding cross-validation error. Then one hidden neuron is added and, after all the training processes on the training data subsets, the new cross-validation error is evaluated. We compute the cross-validation error ratio $r^\mathrm{CVE}_h$ as \begin{equation} r^\mathrm{CVE}_h = \varepsilon^\mathrm{CV}_h / \varepsilon^\mathrm{CV}_{h-1} \, . \end{equation} We count the situations when the ratio $r^\mathrm{CVE}_h$ exceeds a chosen critical value $r^\mathrm{CVE}_\mathrm{max}$. Once this has happened $W$ times, the addition of hidden neurons is stopped. We then choose the architecture having the smallest cross-validation error $\varepsilon^\mathrm{CV}_h$ and, within it, the particular ANN whose synaptic weights give the smallest training error $\varepsilon^\mathrm{MRP}$. \begin{table}[h!]
\centering \begin{tabular*}{\textwidth}{@{\extracolsep{\fill} }lcc} \hline Number of subsets in cross-validation & $V$ & $10$ \\ Number of iterations considered in $r^\mathrm{PE}_k$ & $J$ & 100 \\ Maximal number of training iterations & $K$ & $5000$ \\ Maximal value of prediction errors ratio & $r^\mathrm{PE}_\mathrm{max}$ & $0.999$ \\ Starting number of hidden neurons & $h_\mathrm{min}$ & $1$ \\ Maximal value of cross-validation error ratio & $r^\mathrm{CVE}_\mathrm{max}$ & $0.99$ \\ Maximal number of exceedances of $r^\mathrm{CVE}_\mathrm{max}$ & $W$ & $3$ \\ \hline \end{tabular*} \caption{Parameters of the ANN training algorithm and the cross-validation method} \end{table} The resulting ANNs are tested on an independent testing data set $\mathcal{D}_\mathrm{test}$. Since some of the approximation strategies consist of a high number of ANNs, the resulting numbers of hidden neurons and the errors achieved on the training and testing data for all the trained ANNs are listed in \ref{app:ann_config}. A brief summary of these results is presented in Table \ref{tab:training}\footnote{The error function approximation strategies are intrinsically related to a particular experimental curve. The results here are obtained for the experimental ``Mokra'' data described in Section \ref{sec:valid} in more detail.}. \begin{table}[h!]
\tabcolsep=2pt \centering \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lrrr}\hline Strategy & $h$ & $\varepsilon^\mathrm{MRP}(\mathcal{D}_\mathrm{train}) [\%]$ & $\varepsilon^\mathrm{MRP}(\mathcal{D}_\mathrm{test}) [\%]$\\ \hline Forward Complex & $7$ & $2.03$ & $2.67$ \\ Forward Split & $3$ to $10$ & $0.06$ to \hspace{0.9mm} $1.06$ & $0.06$ to \hspace{0.9mm} $1.27$\\ Forward Split II & $4$ to $13$ & $0.06$ to \hspace{0.9mm} $1.42$ & $0.07$ to \hspace{0.9mm} $2.04$\\ Forward Split III & $3$ to $13$ & $0.03$ to \hspace{0.9mm} $1.50$ & $0.03$ to \hspace{0.9mm} $1.98$\\ \hline Error $F_1$ & $10$ & $0.40$ to \hspace{0.9mm} $0.54$ & $0.57$ to \hspace{0.9mm} $0.74$\\ Error $F_2$ & $9$ to $11$ & $0.78$ to \hspace{0.9mm} $1.36$ & $0.96$ to \hspace{0.9mm} $1.56$\\ \hline Inverse Expert & $5$ to \hspace{0.9mm} $8$ & $1.14$ to \hspace{0.9mm} $5.74$ & $1.31$ to \hspace{0.9mm} $6.43$\\ Inverse Expert II & $4$ to \hspace{0.9mm} $6$ & $1.38$ to \hspace{0.9mm} $5.79$ & $1.36$ to \hspace{0.9mm} $6.52$\\ Inverse PCA & $4$ to \hspace{0.9mm} $8$ & $0.28$ to $10.50$ & $0.33$ to $16.73$\\ \hline \end{tabular*} \caption{Architectures of the ANNs in the individual strategies and their errors on the training and testing data.} \label{tab:training} \end{table} Regarding the number of hidden neurons, the results point to a higher complexity of the error function relationships. Nevertheless, the differences in the numbers of hidden neurons among the particular strategies are relatively small. The quality of the resulting ANNs in approximating the given relationships is measured by the errors obtained on all the training ($\varepsilon^\mathrm{MRP}(\mathcal{D}_\mathrm{train})$) and testing ($\varepsilon^\mathrm{MRP}(\mathcal{D}_\mathrm{test})$) data. Small differences between the training and testing errors indicate well-trained ANNs and a good quality of both the training method and the method for topology estimation. Note that overtrained ANNs usually lead to significantly higher errors on testing data.
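The topology-selection loop described above can be summarised by the following sketch (illustrative Python with our own function names; `cv_error(h)` stands for a full cross-validation run with $h$ hidden neurons):

```python
def select_topology(cv_error, h_min=1, r_cve_max=0.99, w_max=3):
    """Grow the hidden layer one neuron at a time; count how often the
    cross-validation error ratio exceeds r_cve_max and stop once this
    has happened w_max times. Return h with the smallest CV error."""
    errors = {h_min: cv_error(h_min)}
    h, hits = h_min, 0
    while hits < w_max:
        h += 1
        errors[h] = cv_error(h)
        if errors[h] / errors[h - 1] > r_cve_max:
            hits += 1
    return min(errors, key=errors.get)

# toy cross-validation error profile with a minimum at h = 3
cv_error_toy = lambda h: 0.2 + abs(h - 3) * 0.05
best_h = select_topology(cv_error_toy)
```

With the toy profile, the ratio exceeds $r^\mathrm{CVE}_\mathrm{max}$ for the first time at $h=4$ and the loop stops after three exceedances, returning the architecture with the minimal cross-validation error.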
Comparing the approximation quality of the particular strategies, we can point out the good results of the forward model approximation and the error function approximation, where the errors did not exceed $3\,\%$. The good approximation of the forward model is not surprising, since the relationship is well-defined, smooth and relatively simple. The good results of the error function approximation are more unexpected, because this relationship is probably more nonlinear and complex. One possible explanation is the large spread of the error function values on the training data, which is used to scale the errors (see Eq. \eqref{eq:error}). While the error functions converge to zero near the optimal parameter values, they quickly rise to extremely high values for parameter values more distant from the optimum. Hence, we presume that the small errors obtained in the error function approximation do not promise comparably good results in the final parameter identification. The results of the inverse relation approximation are not very good, but this was expected due to the unknown and probably ill-posed relationship. Nevertheless, the obtained errors are actually the final errors of the whole identification process for the training and testing data, since, unlike the other identification strategies, no subsequent optimisation step follows. Hence, further comments on these results are presented in the following section concerning verification of the overall identification strategies on the testing data. \section{Verification of model calibration}\label{sec:verification} Since the errors in Table \ref{tab:training} represent only the quality of the constructed ANNs, we also have to investigate the quality of the identification procedures. This section is devoted to verification of the model calibration, where the goal is to predict the model parameters' values corresponding to the simulated data, which are not perturbed by any noise.
The advantage of verification is that we know the true values of the parameters and thus can easily evaluate the quality of their estimation by each strategy. In particular, the calibration strategies were applied to estimate the parameters' values for all the training and testing simulations. As mentioned, in the case of the inverse relation approximation, the outputs of the ANNs are directly the predicted values of the identified parameters $\widehat{\vek{p}}$. In the case of the forward model approximation, we have to run a subsequent optimisation process. Here, the evolutionary algorithm GRADE, see \cite{Kucerova:2007:PHD} for details about this method\footnote{The parameters of the GRADE algorithm were set to pool\_rate = 4, radioactivity = 0.33 and cross\_limit = 0.1. The algorithm was stopped after $10000$ cost function evaluations.}, is applied to find a set of parameters' values $\widehat{\vek{p}}$ minimising the squared distance $\delta$ between the components of the model response $\alpha_k$ and their corresponding ANN-based approximated counterparts $\widetilde{\alpha}_k$, i.e. \begin{equation} \delta = \sum_k (\alpha_k - \widetilde{\alpha}_k)^2 \, , \label{eq:dist} \end{equation} where $k$ runs over the approximated components defined for the particular identification strategies in Table \ref{tab:stratpar}. In this way, the parameters $\widehat{\vek{p}}$ are predicted for all the training as well as testing data. As the true values of the parameters $\vek{p}$ are known in the verification process, the mean prediction errors $\widehat{\varepsilon}$ are computed relative to the spread of the training data, i.e.
\begin{equation} \widehat{\varepsilon}(\widehat{p}_j) = \frac{\sum_{i=1}^{|\mathcal{D}|}|p_{i,j} - \widehat{p}_{i,j}|}{|\mathcal{D}|(p_{\mathrm{max}(\mathcal{D}_{\mathrm{train}}),j} - p_{\mathrm{min}(\mathcal{D}_{\mathrm{train}}),j})} \, , \label{eq:prederr} \end{equation} and the obtained errors for the particular identification strategies are listed in Table \ref{tab:ident}. \begin{table}[h!] \begin{tabular}{l|rr|rr|rr|rr|rr} \hline & \multicolumn{2}{l|}{$\widehat{\varepsilon}(\widehat{p}_1)$} & \multicolumn{2}{l|}{$\widehat{\varepsilon}(\widehat{p}_2)$} & \multicolumn{2}{l|}{$\widehat{\varepsilon}(\widehat{p}_3)$} & \multicolumn{2}{l|}{$\widehat{\varepsilon}(\widehat{p}_4)$} & \multicolumn{2}{l}{$\widehat{\varepsilon}(\widehat{\alpha})$}\\ & train & test & train & test & train & test & train & test & train & test\\\hline Forward Complex & 16.78 & 17.09 & 52.20 & 47.91 & 6.06 & 5.45 & 3.67 & 2.69 & 1.079 & 1.088 \\ Forward Split & 9.48 & 11.62 & 30.18 & 38.45 & 3.14 & 4.65 & 1.17 & 3.10 & 0.310 & 0.370 \\ Forward Split II & 5.09 & 6.47 & 13.34 & 15.03 & 1.69 & 2.60 & 0.67 & 1.02 & 0.144 & 0.205 \\ Forward Split III & 4.12 & {\bf 4.84} & 10.73 & 10.65 & 1.49 & {\bf 1.63} & 0.57 & 0.64 & {\bf 0.124} & {\bf 0.160} \\ Inverse Expert & 5.74 & 6.43 & {\bf 5.15} & {\bf 6.21} & 1.99 & 2.16 & 1.14 & 1.31 & 0.490 & 0.493 \\ Inverse Expert II & 5.79 & 6.23 & 5.60 & 6.52 & 2.60 & 3.18 & 1.38 & 1.36 & 0.444 & 0.533 \\ Inverse PCA & {\bf 3.86} & 5.10 & 10.50 & 16.73 & {\bf 1.25} & 1.89 & {\bf 0.28} & {\bf 0.33} & 0.377 & 1.209 \\ \hline \end{tabular} \caption{Results of verification of particular identification strategies in terms of mean relative prediction errors $\widehat{\varepsilon}$ [\%].
\new{Best results are highlighted in bold font.}} \label{tab:ident} \end{table} When applying an identification strategy to real experimental data, the parameter values are not known, and the success of the identification process is instead quantified by how well the model response obtained for the identified parameters fits the data. Hence, the model simulations were performed for all the identified parameter sets, and the prediction errors $\widetilde{\varepsilon}$ in terms of the predicted responses $\widetilde{\vek{\alpha}}$ are computed analogously to Eq.~\eqref{eq:prederr}. Their values, averaged also over all the response components, are then listed in Table~\ref{tab:ident}. The results for the strategies based on an approximation of the error function are missing here, because they would require building a separate ANN for every curve of the hydration degree and running an additional minimisation procedure for each. This would be prohibitively expensive, and these strategies are therefore only validated on the experimental data as described in the following section. One can see that among the forward strategies, the complex variant provided the worst results in the training process as well as in the final identification. The complex relationship covering the time domain apparently causes certain difficulties for the training process. We can conclude that training a set of neural networks means more work, but offers a significantly better quality of the model approximation. We can also point out the large differences in the errors of particular parameters, which correspond to the influence of the particular parameters on the model response. As demonstrated in Figure~\ref{fig:par_alpha}, the largest spread of the model response is related mainly to changes in the parameters $p_4$ and $p_3$, while the parameters $p_1$ and $p_2$ seem to be almost negligible.
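The optimisation step of the forward strategies, i.e. the minimisation of $\delta$ from Eq.~\eqref{eq:dist} over the parameter domain, can be sketched as follows. Since GRADE is not reproduced here, we substitute a crude random search over the parameter box; all names and the toy surrogate are illustrative, not the paper's implementation.

```python
import random

def calibrate(surrogate, target, bounds, n_iter=2000, seed=0):
    """Minimise the squared distance delta between the surrogate
    response and the observed components by random search -- a crude
    stand-in for the GRADE evolutionary algorithm."""
    rng = random.Random(seed)
    best_p, best_d = None, float("inf")
    for _ in range(n_iter):
        p = [rng.uniform(lo, hi) for lo, hi in bounds]
        d = sum((a - t) ** 2 for a, t in zip(surrogate(p), target))
        if d < best_d:
            best_p, best_d = p, d
    return best_p, best_d

# toy surrogate: two "parameters" mapped to three response components
surrogate = lambda p: [p[0] + p[1], 2.0 * p[0], p[1]]
target = surrogate([0.4, 0.7])          # true parameters: 0.4 and 0.7
p_hat, d_min = calibrate(surrogate, target, bounds=[(0, 1), (0, 1)])
```

In the actual strategies, `surrogate` would be the trained ANN(s) evaluated at the selected time steps, and the evolutionary algorithm replaces the random sampling.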
The sensitivity analysis illustrated in Figure~\ref{fig:bundle}b shows a very high sensitivity of the model response to the parameter $p_2$ at an early stage of hydration; nevertheless, at this stage the spread of the model response is almost negligible, and even a very small error in the response approximation can be fatal for the identification of $p_2$. On the other hand, it is not surprising that the identification accuracy improves significantly with an increasing number of approximated response components, i.e. an increasing number of trained ANNs. Despite the worse results in ANN training, the inverse strategies achieved results comparable with the forward strategies in parameter identification and also in the fitted measurements. More precisely, their results in measurement fitting are slightly worse, but their errors in parameter prediction are smaller. In particular, the Inverse Expert strategies provided surprisingly small errors in the prediction of $p_2$, and their errors in the parameters are generally more balanced. This phenomenon can possibly be explained by the fact that each ANN is trained to predict one parameter separately, thus automatically selecting and emphasising the combinations of the model response components critical for that parameter. In the Inverse Expert II strategy, the usage of one additional input from the early stage of hydration brought no improvement of the resulting prediction, probably again because the responses at this stage have a negligible spread and almost no predictive value. The last interesting result concerns the application of principal component analysis. The Inverse PCA strategy again provided significantly different errors in the prediction of particular parameters, similarly to the forward strategies. The reason possibly resides in the fact that PCA emphasises the most important components, while it can mix the effects of the less significant parameters.
Nevertheless, when compared with the Forward Split and Inverse Expert strategies using the same number of response components, Inverse PCA provided the best results in the prediction of all the parameters except $p_2$. Its quality of measurement fitting is, however, the worst among those strategies. From this thorough comparison we may conclude that all the inverse strategies provide very good results, which makes them highly promising considering their very simple implementation, which does not include any optimisation process beyond the training of the ANNs itself. Moreover, the Inverse Expert strategies can be especially recommended for the identification of less significant parameters. \section{Validation of model calibration}\label{sec:valid} The previous section was focused on a mutual comparison of the presented identification strategies on simulated data. However, a complete comparison has to include their validation on experimental data. To that purpose we used four experimental data sets obtained by isothermal calorimetry: one for the cement ``{\bf Mokra}'' CEM I 42.5 R taken directly from the Heidelberg cement group's kiln in Mokr\'a, Czech Republic \cite{Smilauer:2010}, and three others from the literature: ``{\bf Boumiz}'' \cite{Boumiz}, ``{\bf Hua}'' \cite{Hua} and ``{\bf Princigallo}'' \cite{Princigallo}. In parameter identification from experimental data, one often faces difficulties related to (i) experimental errors and (ii) model imperfections. Especially in the case of models with parameters having a specific physical meaning -- like the affinity hydration model -- it may happen that the experimental data seem to lie beyond the physically meaningful values of the model parameters. This is exactly what we face in the case of the four experimental curves depicted in Figure~\ref{fig:shifts}. \begin{figure}[h!]
\begin{tabular}{cc} \includegraphics[width=7cm]{mokra_posun.eps} & \includegraphics[width=7cm]{Boumitz_posun.eps} \\ correction: $0.5$ h & correction: $1$ h \\ \includegraphics[width=7cm]{Hua_posun.eps} & \includegraphics[width=7cm]{Princigallo_posun.eps}\\ correction: $4.5$ h & correction: $3.5$ h \end{tabular} \caption{Corrections of experimental curves.} \label{fig:shifts} \end{figure} The grey curves represent the training samples generated in an optimised fashion so as to maximally cover the parameter space. Nevertheless, it is visible that all the experimental curves lie outside the bundle of the training samples. Applying the identification strategies to these data would require the ANNs to extrapolate and would probably lead to unphysical and incorrect predictions of the model parameters. Such results were presented for ``Mokra'' in \cite{Mares:2012:Topping}. Looking at the experimental curves in more detail, one can see that the difference between the experimental data and the simulations can be explained by an incorrect estimate of the origin of hydration. Correcting the starting time moves the curves into the bundle of simulated responses. As a matter of fact, a correction on the order of hours is negligible compared to the duration of the whole hydration process, which often lasts days or weeks. Moreover, the goal of this paper is not to argue against the correctness of the model or the data, but to demonstrate the properties of the particular identification strategies, which can be better illustrated in a situation where the observed data are not outliers w.r.t. the sampled parameter domain. For the identification of outliers, we refer the interested reader to \cite{Mares:2012:Topping}. In general, validation does not allow for a comparison in terms of parameters' values, because these are not known a priori.
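The correction of the hydration origin amounts to re-evaluating the measured curve on a shifted time axis. The following is a minimal sketch assuming linear interpolation and constant extrapolation at the ends; the paper's exact preprocessing may differ, and the function name and toy curve are ours.

```python
def shift_curve(times, values, dt):
    """Correct the origin of hydration by dt hours: re-evaluate the
    measured curve at t + dt by linear interpolation, clamping the
    values at both ends of the measured range."""
    def interp(t):
        if t <= times[0]:
            return values[0]
        if t >= times[-1]:
            return values[-1]
        for i in range(len(times) - 1):
            if times[i] <= t <= times[i + 1]:
                w = (t - times[i]) / (times[i + 1] - times[i])
                return values[i] + w * (values[i + 1] - values[i])
    return [interp(t + dt) for t in times]

# toy curve measured at 0..3 h, origin corrected by 0.5 h
shifted = shift_curve([0.0, 1.0, 2.0, 3.0], [0.0, 0.2, 0.6, 1.0], dt=0.5)
```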
Nevertheless, the simplicity and fast simulation of the affinity hydration model permit a direct optimisation of the model parameters so as to fit the measured data without any approximation involved. The resulting optimal solutions can then be compared with the results obtained using the ANN approximations. To that purpose, we again employ the error functions given in Eqs.~\eqref{eq:optim1} and \eqref{eq:optim2} and the GRADE algorithm with the same setting as in the previous section to minimise both error functions. The obtained results are referred to as {\bf Direct1} and {\bf Direct2}, respectively, \new{and they represent the best results that can be achieved with the current model on the given data.} Subsequently, the identification strategies were applied to the experimental data using the prepared ANNs. Since the ANNs are constructed for specific time steps of the hydration degree, the experimental curves are interpolated to the time steps required by the particular ANNs. If necessary, the data are extrapolated beyond the last measured time step assuming the further progress of hydration to be constant at the last measured value. The identified parameters together with the parameters' values obtained by the direct optimisation are written in Tables~\ref{tab:valid1} and~\ref{tab:valid2}. \new{Note that the parameter values highlighted in bold font refer to situations where the measured data lie beyond the domain of the training data and the ANN is forced to extrapolate.} \begin{table}[h!]
\begin{tabular}{l|rrrr|r|rrrr|r} \hline & \multicolumn{5}{l|}{``Mokra''} & \multicolumn{5}{l}{$\quad$ ``Boumiz''} \\ Method & $p_1$ & $p_2$ & $p_3$ & $p_4$ & $\widehat{\varepsilon}(\widehat{\alpha})$ & $p_1$ & $p_2$ & $p_3$ & $p_4$ & $\widehat{\varepsilon}(\widehat{\alpha})$ \\ \hline Direct1 & 0.84 & 0.99 & 0.18 & 0.05 & 0.70 & $\quad$ 0.93 & 1.00 & 0.02 & 0.36 & 2.37 \\ Direct2 & 0.82 & 0.98 & 0.18 & 0.05 & 0.65 & 0.93 & 1.00 & 0.02 & 0.35 & 2.70 \\ \hline Forward Complex & 0.81 & 1.00 & 0.18 & 0.03 & 1.35 & 1.00 & 0.61 & 0.08 & 0.36 & 12.67 \\ Forward Split & 0.82 & 1.00 & 0.19 & 0.05 & 1.15 & 0.96 & 1.00 & 0.08 & 0.35 & 5.44 \\ Forward Split II & 0.78 & {\bf 1.01} & 0.18 & 0.05 & 0.83 & 1.00 & 1.00 & 0.08 & 0.35 & 4.11 \\ Forward Split III & 0.80 & 1.00 & 0.19 & 0.05 & 0.91 & 0.98 & 1.00 & 0.05 & 0.35 & 3.03 \\ \hline Error $F_1$ & 0.78 & 0.73 & 0.09 & 0.07 & 3.89 & - & - & - & - & - \\ Error $F_2$ & 1.00 & {\bf 1.19} & 0.15 & {\bf -0.06} & 2.73 & - & - & - & - & - \\ \hline Inverse Expert & {\bf 1.16} & {\bf -0.18} & 0.29 & 0.03 & 6.83 & 0.78 & {\bf -0.24} & 0.22 & 0.30 & 35.11 \\ Inverse Expert II & {\bf 1.21} & {\bf -0.06} & 0.19 & 0.16 & 4.68 & {\bf 1.27} & {\bf -0.14} & 0.20 & 0.13 & 25.94 \\ Inverse PCA & 0.75 & 0.83 & 0.18 & 0.06 & 1.82 & 0.78 & 0.87 & 0.02 & 0.35 & 10.82 \\ \hline \end{tabular} \caption{Results of identification strategies obtained for ``Mokra'' and ``Boumiz'': identified values of model parameters and mean relative error in degree of hydration $\widehat{\varepsilon}(\widehat{\alpha})$ [\%].} \label{tab:valid1} \end{table} \begin{table}[h!] 
\begin{tabular}{l|rrrr|r|rrrr|r} \hline & \multicolumn{5}{l|}{``Hua''} & \multicolumn{5}{l}{$\quad$ ``Princigallo''} \\ Method & $p_1$ & $p_2$ & $p_3$ & $p_4$ & $\widehat{\varepsilon}(\widehat{\alpha})$ & $p_1$ & $p_2$ & $p_3$ & $p_4$ & $\widehat{\varepsilon}(\widehat{\alpha})$ \\\hline Direct1 & 1.00 & 0.94 & 0.20 & 0.11 & 2.24 & $\quad$ 1.00 & 0.85 & 0.19 & 0.14 & 3.46 \\ Direct2 & 0.99 & 0.96 & 0.21 & 0.11 & 2.46 & 1.00 & 0.88 & 0.21 & 0.15 & 3.27 \\ \hline Forward Complex & 1.00 & 0.64 & 0.22 & 0.08 & 4.10 & 1.00 & 0.58 & 0.23 & 0.14 & 6.21 \\ Forward Split & 0.87 & 1.00 & 0.19 & 0.11 & 2.84 & 0.78 & 0.98 & 0.18 & 0.15 & 4.39 \\ Forward Split II & 0.93 & 0.96 & 0.21 & 0.11 & 2.92 & 0.92 & 0.82 & 0.20 & 0.14 & 4.44 \\ Forward Split III & 0.87 & {\bf 1.01} & 0.18 & 0.10 & 2.71 & 0.89 & 0.92 & 0.18 & 0.14 & 3.75 \\ \hline Inverse Expert & 0.94 & {\bf -0.29} & 0.26 & 0.12 & 10.64 & {\bf 1.07} & {\bf -0.16} & 0.22 & 0.15 & 9.02\\ Inverse Expert II & {\bf 1.26} & {\bf -0.27} & 0.19 & 0.02 & 6.23 & {\bf 1.52} & {\bf -1.38} & 0.13 & {\bf -0.24} & 15.05\\ Inverse PCA & 1.00 & 0.89 & 0.15 & 0.12 & 2.41 & {\bf 1.13} & 0.74 & 0.19 & 0.15 & 3.62 \\ \hline \end{tabular} \caption{Results of identification strategies obtained for ``Hua'' and ``Princigallo'': identified values of model parameters and mean relative error in degree of hydration $\widehat{\varepsilon}(\widehat{\alpha})$ [\%].} \label{tab:valid2} \end{table} The identified parameters were used as inputs for simulations, whose results are compared with the experimental data in Figures~\ref{fig:valid1} and \ref{fig:valid2}. To quantify the quality of obtained fits, Tables~\ref{tab:valid1} and \ref{tab:valid2} contain also the mean relative error $\widehat{\varepsilon}(\widehat{\alpha})$ [\%] computed in the same manner as in Table~\ref{tab:ident} for an easy comparison of the verification and validation results. \begin{figure}[h!] 
\begin{tabular}{cc} \includegraphics[width=7cm]{optim_mokra_posun1_ErrF.eps} & \includegraphics[width=7cm]{optim_mokra_posun2.eps}\\ \includegraphics[width=7cm]{optim_boumitz_posun1.eps} & \includegraphics[width=7cm]{optim_boumitz_posun2.eps}\\ \end{tabular} \caption{Comparison of corrected experimental data ``Mokra'' and ``Boumiz'' and corresponding results of calibration strategies.} \label{fig:valid1} \end{figure} \begin{figure}[h!] \begin{tabular}{cc} \includegraphics[width=7cm]{optim_hua_posun1.eps} & \includegraphics[width=7cm]{optim_hua_posun2.eps}\\ \includegraphics[width=7cm]{optim_princigallo_posun1.eps} & \includegraphics[width=7cm]{optim_princigallo_posun2.eps}\\ \end{tabular} \caption{Comparison of corrected experimental data ``Hua'' and ``Princigallo'' and corresponding results of calibration strategies.} \label{fig:valid2} \end{figure} The strategies based on the error function approximation are illustrated on parameter identification from the ``Mokra'' data, which define the error functions approximated by the ANNs. The trained approximations are then minimised by the GRADE algorithm to provide the optimal set of identified parameters. As we presumed, the identification results are not satisfactory despite the very good results of the ANNs' training processes, see Table~\ref{tab:training}. The training and testing errors are small relative to the spread of the error functions' values, which increase quickly with the distance from the optimal solution. The strategy, however, requires a high precision of the ANN's approximation near the optimal solution, which can hardly be achieved due to the overall complex shape of the error functions. The worst results on all the experimental curves were obtained by the inverse strategies based on selected components of the model response used as the ANNs' inputs. The results pointed out the high sensitivity of this strategy to the measurement noise and to the specific choice of the inputs.
Both drawbacks are overcome by employing principal component analysis, which makes it possible to use a high number of response components and to filter the measurement noise out by retaining only the first several principal components. The Inverse PCA strategy thus achieved significantly better results. The forward strategies generally provided the best results, consistent with the results of verification on the simulated data. These strategies thus proved to be rather immune to the noise in the experimental data. \section{Conclusions}\label{sec:concl} \label{concl} The presented paper reviews and compares several possible applications of artificial neural networks in the calibration of numerical models. In particular, the feedforward layered neural network is employed in three basic schemes to surrogate: (i) the response of a model, (ii) the inverse relationship between the model response and the model parameters and (iii) an error function quantifying how well the model response fits the experimental data. Their advantages and drawbacks are illustrated on the calibration of four parameters of the affinity hydration model. The model is chosen for its nonlinearity and different sensitivities to particular parameters on the one hand, and its simplicity and very fast numerical evaluation on the other. The latter allows for model calibration based on a stochastic evolutionary algorithm without any approximation involved, and thus for a better quantification of the calibration results provided by the particular strategies. The investigated calibration strategies are verified on $50$ simulated curves of hydration degree and validated on four experimental ones. \begin{table}[h!]
\centering \begin{tabular}{lcccc} \hline Strategy & $N_{\mathrm{ANN}}$ & optimisation & new data & errors \\ \hline Forward Complex & $1$ & yes & optimisation & middle \\ Forward Split & $N_{\mathrm{\alpha}}$ & yes & optimisation & low \\ Error $F$ & $1$ & yes & training + optimisation & high \\ Inverse Expert & $N_{\mathrm{p}}$ & no & - & high \\ Inverse PCA & $N_{\mathrm{p}}$ & no & - & middle \\ \hline \end{tabular} \caption{Simplified summary of calibration strategies. $N_{\mathrm{\alpha}}$ stands for the number of approximated components of the model response, $N_{\mathrm{p}}$ for the number of model parameters.} \label{tab:summary} \end{table} A simplified summary of the obtained results is given in Table~\ref{tab:summary}. One of the simplest strategies from the implementation point of view is based on an approximation of the error function (Error~$F$), where only one neural network needs to be trained for the prediction of the error function values. This simplicity, however, does not hold in the case of multiple experimental measurements, where the whole identification process, including the neural network training as well as its optimisation, needs to be done all over again for any new experiment. Moreover, the presented examples revealed that the complexity of the error function may cause difficulties for neural network training, resulting in high errors in the identified parameters. The potential of the neural network is wasted on approximating the whole domain, while accurate predictions are required only in the vicinity of the optimal values of the parameters. Hence, this strategy is better suited to surrogate models based on radial basis function networks or kriging, which can be trained along with the optimisation of the error function, thus allowing the precision to be improved in the promising area, see e.g. \cite{Kucerova:2009}.
An equally simple strategy is based on the approximation of the model response, where time or space variables are included among the neural network inputs (Forward Complex). This strategy is better suited to layered neural networks, which are trained only once and can then be used repeatedly for any new observations. The effort invested into the approximation of the whole domain is thus not wasted. The application to new data requires only one new optimisation process. The results obtained by this strategy were not excellent, but can be considered a satisfactory solution at a low price. The best results were achieved by separate approximations of particular response components, where a higher number of neural networks is trained to approximate the rather simple relationships defined by the calibrated model (Forward Split). This procedure requires more work on network preparation, which is compensated by the high accuracy of the obtained results. The accuracy increases with the number of approximated response components and can thus be controlled by the work invested in the surrogate construction. Moreover, the constructed approximations can then be reused for any new data, where only the optimisation of the model parameters needs to be repeated. The worst results were obtained by the strategy approximating the inverse mapping from the response components to the model parameters (Inverse Expert). Such a relationship need not even exist and can be hard to approximate. Moreover, if the inputs for a neural network are not properly selected and are thus highly sensitive to the measurement error, the procedure provides unsatisfactory results. Nevertheless, using expert knowledge for a proper selection of inputs as presented in~\cite{Kucerova:2014:AES}, this strategy gives good results at a very low price, since neither a training nor an optimisation process is needed for parameter identification from new data, only a simple evaluation of the trained networks.
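A minimal sketch may also clarify the inverse strategies. The snippet below mimics the Inverse~PCA idea on a hypothetical one-parameter toy model: the responses are compressed by principal component analysis and a linear least-squares map (standing in for the trained neural network, an assumption made purely for brevity) sends the leading PC scores back to the parameter.

```python
import numpy as np

# Toy stand-in for the calibrated model: one parameter p, a 50-point curve.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 50)

def model(p):
    return 1.0 - np.exp(-p * t)               # hypothetical "hydration" curve

# Training set: sampled parameters and the corresponding responses.
p_train = np.linspace(0.1, 2.0, 40)
Y = np.array([model(p) for p in p_train])     # shape (40, 50)

# PCA of the responses: keeping only the first few principal components
# also filters the measurement noise, as discussed in the text.
mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
n_pc = 3
Z = (Y - mean) @ Vt[:n_pc].T                  # PC scores of the training responses

# Inverse mapping PC scores -> parameter (here: linear least squares).
A = np.hstack([Z, np.ones((len(Z), 1))])
coef, *_ = np.linalg.lstsq(A, p_train, rcond=None)

# Identification from a new noisy "measurement": project and evaluate.
p_true = 0.9
y_obs = model(p_true) + rng.normal(0.0, 0.01, size=t.size)
z_obs = (y_obs - mean) @ Vt[:n_pc].T
p_hat = np.append(z_obs, 1.0) @ coef
print(p_hat)   # close to p_true = 0.9
```

Note that, exactly as the text describes, identification from new data costs only one projection and one evaluation of the trained map; no optimisation is repeated.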
The necessity of the expert knowledge and the sensitivity to the measurement error can be easily circumvented by employing principal component analysis on the model response components (Inverse~PCA). Then only the number of components entering the neural network as inputs needs to be selected. The strategy thus represents a compromise solution providing satisfactory results at a~low price, especially in repeated applications to new observed data. \section*{Acknowledgment} The financial support of this work by the Czech Science Foundation (project No. 16-11473Y) is gratefully acknowledged. We would also like to thank V\'it \v{S}milauer (CTU in Prague) for providing us with the code of the affinity hydration model, experimental data and helpful advice. \bibliographystyle{elsarticle-num}
\section{Introduction} \label{intro} The analysis of peculiar velocity fields of galaxies and clusters is one of the most effective ways of probing mass fluctuations on $\sim$ 100 h$^{-1}$ Mpc scales (h being the Hubble constant in units of 100 km s$^{-1}$ Mpc$^{-1}$). Studies of peculiar velocities can be used to constrain the amplitude of the mass power spectrum on scales other than those probed by redshift surveys and those sampled by anisotropies in the CMB (e.g., Zaroubi et al. 2001; Freudling et al. 1999). Originally motivated by the invention of distance indicators based on intrinsic relations between galaxy observables \cite{FJ,TF} that are {\it independent of redshift}, the field has, in recent years, experienced great progress toward constructing large and homogeneous redshift-distance samples of galaxies and clusters. Although the analyses of early redshift-distance surveys of spiral galaxies \cite{aaronson} and of elliptical galaxies (e.g., Lynden-Bell et al. 1988) led to the development of several statistical methods for analyzing peculiar velocity data \cite{7s,kaiser1988,fw1994,sw1995,wf1995}, these studies were hampered by the fact that they were based on relatively small and shallow data sets. The present generation of redshift-distance surveys consists of larger and higher--quality data sets of both spiral (da Costa et al. 1996; Giovanelli et al. 1997a, 1997b; Haynes et al. 1999a, 1999b; Karachentsev et al. 2000) and early-type galaxies (da Costa et al. 2000a, 2000b). These new samples pave the way toward a possible resolution of many discrepancies found in earlier samples; however, some quantitative disagreements persist. Earlier statistical comparisons of the peculiar velocity fields derived from D$_n$-$\sigma$ and Tully-Fisher (TF) distances found significant differences between them (e.g., Gorski et al. 1989; Tormen et al. 1993).
Based on the work of Kaiser (1988), Feldman \& Watkins (1994) formulated a linear analysis to calculate the theoretical expectation for the bulk flow in large-scale surveys as a function of the geometry of the survey, its clustering properties, and the assumed power spectrum; they applied it to a volume-limited complete sample of 119 Abell clusters (Lauer \& Postman 1994, hereafter LP) to show that the power spectra considered were inconsistent with the LP measurement of the bulk flow at the 95$\%$-97$\%$ confidence level. The formalism was later applied \cite{wf1995} to calculate a measure of correlation between the results obtained from the Riess, Press, \& Kirshner (1995a, 1995b, hereafter RPK) and the LP samples. They found that the apparent lack of agreement between the two measurements could be explained by the fact that both the LP and RPK samples were dominated by noise and incomplete cancellation of small scale motions. More recently, in an analysis of the ENEAR sample (da Costa et al. 2000a, 2000b), the results obtained by Borgani et al. (2000) pointed toward a statistical concordance of the velocity fields traced by spiral and elliptical galaxies, with galaxy distances estimated using the TF and D$_n$-$\sigma$ distance indicators, respectively. Following the method described in Feldman \& Watkins (1994) and Watkins \& Feldman (1995), Hudson et al. (2000) also showed that the bulk flows measured from four different surveys (SMAC, SC, LP10k and SNIa) were consistent with each other. Reconstruction techniques which compare the measured velocities to those predicted from galaxy redshift distributions (\cite{zaroubi2002,pike2005,pp2006}) show that consistent results can be obtained for $\beta=\Omega_m^{0.6}/b$, where $\Omega_m$ and $b$ are the mass density and linear bias parameters, respectively. These papers do not directly compare the surveys, but rather calculate the best fit for $\beta$ given a velocity field.
Pairwise velocity comparisons between various samples (\cite{pairwise2}) also show that different velocity samples produce consistent statistical results for $\Omega_m$ and for $\sigma_8$, the standard deviation of density fluctuations on a scale of $8h^{-1}$Mpc. The fact that they give consistent results does not necessarily indicate consistency between the surveys, since the agreement is indirect and there is no attempt to quantify the consistency between the surveys and the field. Agreement of a specific statistical characteristic between data sets need not mandate consistent data sets. In the present {\it paper}, we calculate the theoretically expected correlation between the estimates of the bulk flows of samples of galaxies in five recent surveys, namely, ENEAR (da Costa et al. 2000a, 2000b), SFI \cite{giovanelli1994,dacosta1996}, RFGC \cite{karachentsev2000}, SBF \cite{SBF} and the Mark III catalogs \cite{willick1997}. We also introduce an analytical method to calculate the likelihood that two surveys both sample the same large scale flows; that is, we study whether measurement errors and differences in the distribution and morphology of galaxies in the surveys can statistically account for the differences in the directions of the bulk--flow vectors. Further, we construct the 3-dimensional bulk flow vectors for all the surveys mentioned above and calculate the actual dot products of the estimates of the bulk flows obtained from these surveys in order to discuss their consistency. In $\S$ 2, we describe the theoretical background of velocity fields; we explain in detail the formulation of our analysis in $\S$ 3. A description of the surveys considered in our analysis is given in $\S$ 4; we then discuss our results in $\S$ 5 and conclude in $\S$ 6. We argue, in the context of the EBW \cite{EBW} standard cold--dark--matter (CDM) power spectrum, that our results also show a consistent statistical concordance.
\section{Physics of Velocity Fields} In the context of the gravitational instability model of structure formation, the motions of galaxies are directly related to mass-density fluctuations. On the scales of the surveys, the measured velocities of galaxies deviate from the Hubble expansion due to the local mass distribution. Thus peculiar velocity surveys provide a unique method to probe the distribution of mass in the local universe. On scales that are small compared to the Hubble radius, galaxy motions are manifest in deviations from the idealized isotropic cosmological expansion \begin{equation} cz = H_0r + \hat{{\bf r}} \cdot \left[{\bf v}({\bf r})-{\bf v}(0) \right] \label{eq-cz} \end{equation} where $c$ is the speed of light, $z$ is the redshift, $H_0$ is the Hubble constant, $r$ is the distance of a galaxy at redshift $z$, $\hat{{\bf r}}$ is the unit vector toward the galaxy, and ${\bf v}({\bf r})$ is the proper motion of the galaxy (at position ${\bf r}$) with respect to the comoving frame. This component of the overall motion of the galaxy is known as its {\it peculiar velocity}, arising from the gravitational attractions of surrounding overdensities. In Eq. (\ref{eq-cz}), ${\bf v}(0)$ is the peculiar velocity of the observer; it is standard practice to omit this term from the equation and to assume that the redshift has been corrected to account for the motion of the observer. The redshift--distance samples, obtained from peculiar velocity surveys, allow us to determine the radial (i.e., line--of--sight) component of the peculiar velocity of each galaxy: \begin{equation} v(r) = \hat{{\bf r}} \cdot {\bf v}({\bf r})= cz - H_0r \end{equation} We assume that galaxies trace the large--scale linear velocity field ${\bf v}({\bf r})$, which is described by a Gaussian random field that is completely defined, in Fourier space, by its velocity power spectrum $P_v(k)$.
In the statistical model for peculiar velocities we define the Fourier transform of the line-of-sight velocity $\hat{{\bf r}} \cdot {\bf v}({\bf r})$ such that: \begin{equation} \hat{\bf r} \cdot {\bf v}({\bf r}) = \frac{1}{(2\pi)^3} \int d^3 {\bf k} \: \hat{\bf r} \cdot \hat{\bf k} \: v(\bf{k}) \: e^{i {\bf k} \cdot {\bf r}} \label{FT-vls} \end{equation} Due to the isotropy assumed in the Cosmological Principle, the statistical properties of $v({\bf k})$ are independent of the direction of $\hat{{\bf k}}$, and so we may define the {\it velocity power spectrum} $P_v(k)$: \begin{equation} \left< v({\bf k})v^*({\bf k}^{\prime})\right>=(2\pi)^3P_v(k)\delta_D({\bf k}-{\bf k}^{\prime}), \label{eq-Pv} \end{equation} where $\delta_D$ is a Dirac delta function, and the averaging on the left--hand--side is over directions of ${\bf k}$. In linear theory, the velocity power spectrum is related to the density power spectrum, $P(k)$, by \begin{equation} P_v(k)=\frac{H^2}{k^2} \: f^2(\Omega_{m,0},\Omega_{\Lambda}) \: P(k)\ . \label{eq-Pv-P} \end{equation} Here $f(\Omega_{m,0},\Omega_{\Lambda})$ is the rate of growth of the perturbations at the present epoch and can be approximated as (e.g., Lahav et al. 1991): \begin{equation} f(\Omega_{m,0},\Omega_{\Lambda}) \approx \Omega_{m,0}^{0.6} \end{equation} where $\Omega_{m,0}$ is the cosmological density parameter for matter at the present epoch. The power spectrum provides a complete statistical description of the linear peculiar velocity field. It should be noted that the above expressions are valid only on scales sufficiently large that non--linearity can be neglected.
In the present analysis, we consider the EBW parameterization of the linear CDM power spectrum \cite{EBW} \begin{equation} P(k) = \sigma_8^2 C k\Big(1+\big[6.4(k/\Gamma)+3(k/\Gamma)^{1.5} +(1.7k/\Gamma)^2\big]^{1.13}\Big)^{-2/1.13} \end{equation} where $\Gamma$ parameterizes the ``shape'' of the power spectrum and the overall normalization is determined by $\sigma_8$, the standard deviation of density fluctuations on a scale of $8h^{-1}$Mpc. The constant $C$ is determined by the direct relation between $\sigma_8$ and the power spectrum. For models where the total density parameter $\Omega=1$, the shape parameter is related to the density of matter, $\Gamma = \Omega_{m,0} h$. In the present analysis we use $\sigma_8 = 0.9$, $\Gamma = 0.21$ and $h=0.7$. \section{Modeling the Observational Data} A catalog of peculiar velocities consists of a set of galaxies, labeled by an index $n$, for which we are given positions ${\bf r}_{n}$ and estimates of the line-of-sight peculiar velocities $S_n$ with uncertainties $\sigma_n$. For simplicity, we will make the assumption that the observational errors are Gaussian distributed. Since linear theory only applies on scales comparable to the survey size, we focus our attention on the lowest order moments of a Taylor expansion of the velocity field ${\bf v}({\bf r})$. Following Kaiser (1988), we model the velocity field as a uniform streaming motion, or bulk flow, denoted by ${\bf U}$, about which are random motions drawn from a Gaussian distribution with a 1--D velocity dispersion $\sigma_*$. Although this model ignores the fact that small-scale motions, including those that are nonlinear, are correlated, it is reasonable to assume that they effectively average out on the scales we are considering. Since the value of $\sigma_*$ is not well determined by linear theory, we treat it as a parameter with a fixed value of 300 km s$^{-1}$. We have checked that our results are fairly insensitive to the exact value chosen for this parameter.
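As a numerical aside (not part of the original analysis), the EBW spectrum adopted in $\S$2 is straightforward to evaluate. The sketch below sets the normalization constant $C$ to unity, an assumption made because only the shape matters for locating the turnover; fixing $C$ through $\sigma_8$ would require an additional integral constraint.

```python
import numpy as np

# EBW linear CDM power spectrum with the paper's parameters.
# C = 1 here (assumption): the paper fixes C implicitly through sigma_8.
sigma8, Gamma = 0.9, 0.21

def P_EBW(k, C=1.0):
    x = k / Gamma
    shape = (1.0 + (6.4 * x + 3.0 * x ** 1.5 + (1.7 * x) ** 2) ** 1.13) ** (-2.0 / 1.13)
    return sigma8 ** 2 * C * k * shape

k = np.logspace(-3, 1, 500)        # wavenumber in h/Mpc
Pk = P_EBW(k)
# P(k) rises ~k on large scales and turns over at k of a few times 1e-2 h/Mpc.
k_peak = k[np.argmax(Pk)]
print(k_peak)
```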
Given these assumptions, the likelihood function for the bulk flow components is \begin{equation} L(U_i) = \prod_n\frac{1}{\sqrt{\sigma_n^2+\sigma_*^2}} \: \exp \left( -\frac{1}{2} \: \frac{(S_n-\hat{r}_{n,i}U_i)^2}{\sigma_n^2+\sigma_*^2} \right) \end{equation} where here and in subsequent equations repeated indices are summed over. The maximum likelihood solution for the $i$th component of the bulk flow is given by \begin{equation} U_i = A_{ij}^{-1} \: \sum_n \frac{\hat{r}_{n,j}S_n}{\sigma_n^2+\sigma_*^2}, \label{eq-Ui} \end{equation} where \begin{equation} A_{ij} = \sum_n \frac{\hat{r}_{n,i}\hat{r}_{n,j}}{\sigma_n^2+\sigma_*^2} \label{eq-Aij} \end{equation} Thus $U_i$ is the cross-correlation between the estimated line-of-sight velocity of the $n$-th galaxy and its position vector. For the catalogs considered, $A_{ij}$ is nearly diagonal, the off-diagonal terms being of order 10$\%$ of the diagonal ones. In the model we are considering, the measured peculiar velocity of galaxy $n$ is related to the velocity field at the position of galaxy $n$ by \begin{equation} S_n = \hat{r}_{n,i}v_i({\bf r}_n)+\epsilon_n \end{equation} where $\epsilon_n$ is drawn from a Gaussian with zero mean and variance $\sigma_n^2+\sigma_*^2$.
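The maximum-likelihood estimator of Eqs. (\ref{eq-Ui})-(\ref{eq-Aij}) translates directly into code. The sketch below applies it to a hypothetical isotropic mock catalogue with an injected bulk flow; it is an illustration only, not one of the surveys analysed here.

```python
import numpy as np

# Maximum-likelihood bulk flow: U = A^{-1} b with
# A_ij = sum_n r_i r_j / (sig_n^2 + sig_*^2),  b_j = sum_n r_j S_n / (same).
rng = np.random.default_rng(0)
N, sigma_star = 500, 300.0                    # toy all-sky mock catalogue
rhat = rng.normal(size=(N, 3))
rhat /= np.linalg.norm(rhat, axis=1, keepdims=True)
sigma_n = np.full(N, 200.0)                   # distance-measurement errors, km/s

U_true = np.array([200.0, -250.0, 50.0])      # injected bulk flow, km/s
S = rhat @ U_true + rng.normal(0.0, np.sqrt(sigma_n ** 2 + sigma_star ** 2))

w = 1.0 / (sigma_n ** 2 + sigma_star ** 2)
A = (rhat[:, :, None] * rhat[:, None, :] * w[:, None, None]).sum(axis=0)
b = (rhat * (w * S)[:, None]).sum(axis=0)
U_ml = np.linalg.solve(A, b)
print(U_ml)   # recovers U_true to within the per-component noise
```

For this isotropic mock, $A_{ij}$ is nearly diagonal with $A_{ii}\approx N w/3$, so the per-component uncertainty $\sqrt{(A^{-1})_{ii}}$ is a few tens of km s$^{-1}$, consistent with the error bars quoted for the real surveys in Table 1.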
The fact that $\epsilon_n$ is statistically independent of the velocity allows the theoretical covariance matrix for the bulk flow components to be written as \cite{kaiser1988} \begin{equation} R_{ij} = \left< U_i U_j \right> = R_{ij}^{(v)} + R_{ij}^{(\epsilon)}, \label{eq-Rij} \end{equation} where the ``noise'' term can be shown to be \cite{kaiser1988} \begin{equation} R_{ij}^{(\epsilon)} = A_{ij}^{-1} \label{eq-Rije} \end{equation} and the ``theoretical" term can be written as the convolution of an angle-averaged tensor window function with the power spectrum \begin{equation} R_{ij}^{(v)} = 4\pi \int d k \: k^2 P_v(k) \: {\cal W}^2_{ij}(k) \label{eq-Rijv} \end{equation} where \begin{equation} {\cal W}^2_{ij} (k)= A^{-1}_{il}A^{-1}_{js} \sum_{n,m} {\hat r_{n,l} \hat r_{m,s}\over (\sigma_n^2+\sigma_*^2)(\sigma_m^2+\sigma_*^2)}\int {d^2{\hat k}\over 4\pi}\ \left({\bf \hat r}_n\cdot {\bf \hat k}\ \ {\bf \hat r}_m\cdot {\bf \hat k}\right) \exp\left(i{\bf k}\cdot ({\bf r}_n- {\bf r}_m)\right) \label{eq-WF} \end{equation} Our main goal in this {\it paper} is to figure out whether the surveys we consider are consistent with one another. However, even if two surveys are measuring the same underlying velocity field, they will not necessarily give the same bulk flow. This is both due to measurement errors in the peculiar velocities and the fact that each survey probes the velocity field in a different way. This is most clearly seen by observing that each survey has different window functions (see below). In order to get an idea of how much correlation is expected between the estimates of the components of the bulk flows ${\bf U}^A$ and ${\bf U}^B$ of any pair of surveys (A,B) for a given power spectrum, we can calculate the correlation matrix $\left< {\bf U}^A{\bf U}^B\right>$ for the two surveys. 
This is calculated in a similar manner to the covariance matrix, except that the two sums in the window function are now over two different surveys \begin{eqnarray} {\cal W}^2_{ij} (k)= &(A^A)^{-1}_{il}(A^B)^{-1}_{js} & \sum_{n,m} {\hat r^A_{n,l} \hat r^B_{m,s}\over ((\sigma^A)_n^2+\sigma_*^2)((\sigma^B)_m^2+\sigma_*^2)} \nonumber\\ & & \times\int {d^2{\hat k}\over 4\pi}\ \left({\bf \hat r}^A_n\cdot {\bf \hat k}\ \ {\bf \hat r}^B_m\cdot {\bf \hat k}\right) \exp\left(i{\bf k}\cdot ({\bf r}^A_n- {\bf r}^B_m)\right) \label{eq-WFcross} \end{eqnarray} The correlation matrix can then be used to calculate the normalized expectation value for the dot product of ${\bf U}^A$ and ${\bf U}^B$ \cite{wf1995}: \begin{equation} {\mathcal C} = \frac{\left< U_i^A U_i^B \right> }{\left( \left< U_l^A U_l^A \right> \left< U_m^B U_m^B \right> \right)^{1/2}}= \langle\cos\theta\rangle\ , \label{eq-C} \end{equation} where $\theta$ is the angle between ${\bf U}^A$ and ${\bf U}^B$. $\mathcal C$ should be close to unity for highly correlated vectors, zero for vectors that are completely uncorrelated, and --1 if there is a high degree of anti-correlation. It is important to realize that $\mathcal C$ carries information only about the correlation of the {\it directions} of the bulk flow vectors of the two surveys; however, it provides a convenient measure of how well the large scale velocity information contained in two surveys agree. Given a value of ${\mathcal C}$ for two surveys (A,B) calculated using a given power spectrum, we can estimate the probability that the bulk flow vectors ${\bf U}^A$ and ${\bf U}^B$ will be separated by an angle greater than some $\theta_c$. Our strategy is to think of the direction of ${\bf U}^A$ as scattering about the direction of ${\bf U}^B$ in a two-dimensional space where $\theta$ is the radial distance. 
Thus we can take $\theta$ to have a $\chi^2$ distribution with two degrees of freedom \begin{equation} P(\theta)d\theta = {\theta\over a^2} e^{-\theta^2/2a^2}\ d\theta. \label{P-theta} \end{equation} The probability of measuring a value for $\theta$ greater than $\theta_c$ is then \begin{equation} P(\theta>\theta_c) = \int_{\theta_c}^\infty P(\theta)\ \ d\theta. \label{Pgt-theta} \end{equation} We can estimate the value of $a$ by using the fact that ${\mathcal C} = \langle \cos\theta\rangle \approx 1 - {1\over 2}\langle \theta^2\rangle$ for small $\theta$. Since our $P(\theta)$ distribution has the property that $\langle \theta^2\rangle = 2a^2$, we can estimate \begin{equation} a = \sqrt{1-{\mathcal C}}\ . \label{width} \end{equation} This analysis ignores the small anisotropy in the covariance matrices for ${\bf U}^A$ and ${\bf U}^B$, but should be sufficient for our purposes. \section{The Surveys} The formalism described above can be employed to test the consistency of all velocity field surveys. In this study, we have considered the following proper distance catalogs:\\ \newcounter{junk1} \begin{list}{{\arabic{junk1})}}{\usecounter{junk1}\setlength{\rightmargin}{\leftmargin}} \vspace{-0.1in} \item {\it Spiral Field I-Band (SFI)}: This is an all-sky survey (Giovanelli et al. 1994; da Costa et al. 1996; Giovanelli et al. 1998; Haynes et al. 1999a, 1999b), containing 1104 late-type spiral galaxies with I-Band TF distance estimates. It is an angular-diameter limited survey and covers a volume out to $\sim$ 70 h$^{-1}$ Mpc. \item {\it Nearby Early-type Galaxy Survey (ENEAR)}: This is an all-sky survey probing a volume out to $\sim$ 70 h$^{-1}$ Mpc. Although the survey contains data from different sources, it has been conducted by a single group, the data were analyzed by a single procedure, and the same completeness level was reached across the sky (da Costa et al. 2000a, 2000b).
The sample contains 702 independent objects, early--type elliptical galaxies and groups of galaxies brighter than $m_B = 14.5$, with D$_n$-$\sigma$ measured distances, probing a volume similar to that of the SFI survey. \item {\it Revised Flat Galaxy Catalog (RFGC)}: This catalog \cite{karachentsev2000} provides a list of radial velocities, HI line widths, TF distances, and peculiar velocities of 1327 spiral galaxies. It was compiled from observations of flat galaxies from the FGC (Karachentsev, Karachentseva, \& Pernovsky 1993) performed with the 305 m telescope at Arecibo \cite{giovanelli1997}, confined to the zone $0^{{\rm o}}<\delta<+38^{{\rm o}}$ accessible to the telescope. \item {\it Surface Brightness Fluctuation (SBF)}: This catalog \cite{SBF} employs the I-band surface brightness fluctuation method and consists of 269 galaxies (both spiral and elliptical) reaching out to $\sim$ 4000 km s$^{-1}$, with a characteristic depth of $\sim$ 12 h$^{-1}$ Mpc. \item {\it Mark III catalog of singles:}\\ \noindent and \item {\it Mark III catalog of groups:} These catalogs \cite{willick1997} are a compilation of various disparate surveys that were recalibrated and compiled to provide some of the first reasonably dense and deep peculiar velocity surveys. The Mark III catalogs provide the observables for each object (i.e., redshift, magnitude, velocity width) and inferred distances derived from both the forward and inverse TF or $D_n-\sigma$ relations. Distances for both individual objects and groups are provided. The singles catalog has 2538 galaxies, while the group catalog has 1124 groups. The total survey depth is over 100 h$^{-1}$ Mpc with homogeneous sky coverage to $\sim 30$ h$^{-1}$ Mpc. \end{list} The most important point to note here is that the above surveys are by and large independent of each other. They use different distance indicators, selection functions and survey geometries, and target galaxies of different morphologies.
Further, since our formalism weights the galaxy contributions by the distance errors, we do not expect distant galaxies to contribute much to our results; thus, the homogeneous Malmquist bias correction will not change our outcome (see, e.g., \cite{pairwise1,pp2006}). As for the inhomogeneous Malmquist bias (IMB), since redshift distances are not affected, it will result in scattering the distances away from overdensities, leading to the appearance of an infall. This effect should not affect the bulk flow, though it may contribute to higher order moments. If we had found disagreement between the surveys, it would be very important to estimate the magnitude of the IMB, since it might be the cause of the disagreement. Since we found agreement, it is likely that the IMB is smaller than our errors, since it was not sufficient to cause disagreement even though one would expect each survey to have a different bias. That said, we have conducted the following experiment: we cut all galaxies above a certain radius (for radii $r=40,50,60,70\ h^{-1}$ Mpc) and performed the analysis described above. Removing far away galaxies did not change the results significantly, and the direction of the flow for these radii was within a standard deviation of the results quoted in this {\it paper}. These results are expected, since the galaxies are distance-error-weighted and thus galaxies close to the edge of the surveys contribute relatively little to the overall flow. We thus confirm our expectation that prominent features close to the edge of the surveys (e.g., the Pisces-Perseus region) do not produce a spurious IMB bulk motion and do not systematically bias our results. \section{Results} We now present the estimates of the actual bulk flow vectors for all the different surveys. We use Eq. (\ref{eq-Ui}) to construct the Cartesian components of the full three-dimensional bulk flow vectors. Our results are tabulated in Table 1.
The uncertainties given for the bulk flow values in Table 1 are obtained from the noise part of the covariance matrix, $R^{(\epsilon)}$ (Eq. \ref{eq-Rije}). Since $R^{(\epsilon)}$ is nearly diagonal for all of the surveys, we took the uncertainties in the $U_i$ to be approximately independent, so that the uncertainties are taken to be $\sigma_{U_i} = \sqrt{R^{(\epsilon)}_{ii}}$. These uncertainties are dominated by measurement errors in the individual velocities, although they also have small contributions from $\sigma_*$ and cosmic scatter. \begin{center} \begin{tabular}{l|l|r|c|r|r|r} \multicolumn{7}{c}{\bf Table 1} \\ \hline \hline Survey & Method & N$\;\;$ & Effective Depth & U$_x\; \; \; $ & U$_y\; \; \;\; \; $ & U$_z\;\; \;\;$ \\ & & & (km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$) \\ \hline \hline SFI & TF & 1104 & $\sim$ 4000 & 43$\pm$ 37 & -145$\pm$ 35 & 57$\pm$ 26 \\ \hline ENEAR &$D_n-\sigma$& 702 & $\sim$ 4000 & 154$\pm$ 50 & -246$\pm$ 44 & 17$\pm$ 39 \\ \hline RFGC & TF & 1280 & $\sim$ 6000 & 235$\pm$ 38 & -232$\pm$ 39 & 31$\pm$ 29 \\ \hline SBF & SBF & 280 & $\sim$ 2000 & 249$\pm$ 58 & -262$\pm$ 46 & 163$\pm$ 29 \\ \hline Mark III singles& Various & 2538 & $\sim$ 4500 & 202$\pm$ 26 & -230$\pm$ 24 & 31$\pm$ 22 \\ \hline Mark III groups & Various & 1124 & $\sim$ 4500 & 247$\pm$ 34 & -386$\pm$ 31 & 79$\pm$ 26 \\ \hline \hline \end{tabular} \parbox{6in}{\small The Cartesian components of the full three--dimensional bulk flow vectors as well as the distance estimator method, the number of data points and the effective depth for each survey.} \end{center} \vspace{3mm} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{W3D-all.ps} \caption{The logarithm of the trace of the normalized square tensor window functions in the $k_x-k_y$ plane from the six surveys we used in this study. As we can see, the central peak is similar in all surveys, suggesting that the large scale (small $k$) power is probed in a similar fashion. 
However, as we move away from the center, each survey samples the underlying power differently.} \label{fig-WF} \end{center} \end{figure} Each catalog surveys a different volume of space, samples a small subset of the underlying population of galaxies, and uses independent spatial sampling techniques. Since the universe is homogeneous and isotropic on large scales and the sample volumes of the various data sets strongly overlap, these surveys respond to the same underlying large-scale mass distribution. However, the contribution from small scale nonlinearities differs from one catalog to another depending on the particulars of the survey. The window functions of these surveys, therefore, differ from one another, particularly on small scales. As a consequence, we do not expect the measured bulk flows from these surveys to be identical, even in the absence of peculiar velocity errors. Thus the bulk flow components listed in Table 1 are not strictly comparable. However, if the samples are true representations of the flows of the underlying populations, they should be correlated. For illustration purposes (Fig. \ref{fig-WF}) we show the tensor window functions (Eq. \ref{eq-WF}) for each of the surveys. We chose to show ${\cal W}^2_{xx}$ in the $k_x,k_y$ plane (we have not performed the angle-averaging that is given in the equation). The figure illustrates the fact that although all of the surveys have a similar central peak around $k=0$, which samples the large-scale power in a similar way, each survey samples the region of larger $k$ differently. These differences in the window functions tend to decrease the correlation between the bulk flow vectors of the surveys. In Table 2, we show the value of $\mathcal C=\langle \cos\theta\rangle$ (Eq. \ref{eq-C}) for each pair of surveys (A,B), together with the inferred $a$ (Eq. \ref{width}) for the probability distribution of $\theta$.
We can see from these values that the directions of the bulk flow vectors for all of the surveys are highly correlated. We also show the angle $\theta_c$ between the measured ${\bf U}^A$ and ${\bf U}^B$, and the probability of measuring an angle this large or larger, $P(\theta > \theta_c)$ (Eqs. \ref{P-theta}-\ref{Pgt-theta}). These show that in general the results are consistent with one another for all pairs. \begin{center} \begin{tabular}{l|c|c|c|c|c} \multicolumn{6}{c}{\bf Table 2} \\ \hline \hline Survey & $\cos(\theta)$ & $\langle \cos\theta\rangle$ & $a$ & $\theta_c$ & $P(\theta>\theta_c)$\\ \hline\hline SFI--ENEAR & 0.92 & 0.91 & 0.30 & 0.40 & 0.42 \\ \hline SFI--RFGC & 0.85 & 0.90 & 0.31 & 0.55 & 0.21 \\ \hline SFI--SBF & 0.91 & 0.78 & 0.47 & 0.44 & 0.64 \\ \hline SFI--Mark III s & 0.88 & 0.89 & 0.33 & 0.49 & 0.33 \\ \hline SFI--Mark III g & 0.95 & 0.92 & 0.27 & 0.33 & 0.49 \\ \hline ENEAR--RFGC & 0.97 & 0.86 & 0.37 & 0.23 & 0.82 \\ \hline ENEAR--SBF & 0.92 & 0.80 & 0.45 & 0.41 & 0.66 \\ \hline ENEAR--Mark III s & 0.99 & 0.88 & 0.34 & 0.17 & 0.89 \\ \hline ENEAR--Mark III g & 0.99 & 0.92 & 0.29 & 0.11 & 0.93 \\ \hline RFGC--SBF & 0.95 & 0.77 & 0.48 & 0.33 & 0.79 \\ \hline RFGC--Mark III s & 1.00 & 0.82 & 0.42 & 0.07 & 0.99 \\ \hline RFGC--Mark III g & 0.97 & 0.86 & 0.38 & 0.23 & 0.83 \\ \hline SBF--Mark III s & 0.95 & 0.68 & 0.56 & 0.32 & 0.85 \\ \hline SBF--Mark III g & 0.95 & 0.82 & 0.42 & 0.31 & 0.76 \\ \hline Mark III s--Mark III g & 0.99 & 0.92 & 0.28 & 0.17 & 0.83 \\ \hline \hline \end{tabular} \parbox{5in}{\small For each pair of surveys we show the value of the cosine of the angle ($\theta$) between their bulk flows, the expectation value of their dot product $\mathcal C$, the inferred width $a$ for the probability distribution of $\theta$, the critical angle $\theta_c$ and the probability of measuring an angle greater than $\theta_c$.} \end{center} To test our theoretical results and see in more detail the exact distribution of the bulk flow
vectors, we conducted numerical experiments with the data. In one experiment we perturbed the galaxies' positions, and hence also their peculiar velocities, using the reported errors in the distance measurements. Essentially, we took each catalog and performed 1,000 Monte-Carlo realizations of the data, using the measurement errors as the width of a Gaussian about the mean distance -- the reported proper distance. In another experiment we used the diagonal elements of the ``noise'' part of the covariance matrix as the variance of the individual components of the bulk flow vectors and did another 1,000 Monte-Carlo realizations, drawing values from a Gaussian distribution, N($\mu$,$\Sigma^2$), with the individual components taken as the mean $\mu$ and the variance $\Sigma^2$ obtained from $R^{(\epsilon)}_{ij}$ [refer to Eq. (\ref{eq-Rije})]. Both of our methods for calculating the spread in the components of the streaming solution yield similar results and agree with the errors reported in Table 1. We thus compared the bulk--flow vectors in two distinct ways: one was to measure the vectors directly from the data, Monte-Carlo the results and compare them; the other was to use the power spectrum to estimate the probability that two surveys measure different directions for the bulk flow. Since we used the power spectrum to calculate the likelihood that two surveys both sample the same large scale flows, one has to discuss the effect of cosmic scatter, since it is part of the variance. However, surveys that have good all-sky coverage, as our surveys do, will have nearly identical contributions to their bulk flows from large scales; the dominant part of the variance comes from small scale effects such as galaxy noise and distance measurement errors. Thus only a very small part of the differences in direction of the bulk flows will come from large scale effects such as cosmic scatter.
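The analytic side of this comparison is easy to reproduce: Eq. (\ref{P-theta}) integrates in closed form to $P(\theta>\theta_c)=\exp(-\theta_c^2/2a^2)$, with $a=\sqrt{1-{\mathcal C}}$ from Eq. (\ref{width}). The short sketch below recovers the SFI--ENEAR row of Table 2 up to rounding.

```python
import numpy as np

# Given the expected correlation C = <cos(theta)> of two bulk-flow estimates
# and the measured cosine of the angle between them, return the angle theta_c
# and the chance of an angle that large or larger:
#   P(theta > theta_c) = exp(-theta_c^2 / (2 a^2)),  a = sqrt(1 - C).
def p_greater(C_expected, cos_theta_measured):
    a = np.sqrt(1.0 - C_expected)
    theta_c = np.arccos(cos_theta_measured)
    return theta_c, np.exp(-theta_c ** 2 / (2.0 * a ** 2))

# SFI-ENEAR row of Table 2: expected C = 0.91, measured cos(theta) = 0.92.
theta_c, P = p_greater(0.91, 0.92)
print(round(theta_c, 2), round(P, 2))   # 0.4 0.41; Table 2 quotes 0.40 and 0.42
```

The small difference in the last digit of $P$ is a rounding effect of the two-decimal inputs taken from Table 2.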
In order to remove the cosmic scatter part of the variance, we would need to make a prior assumption that we know the local velocity field from other sources, specifically reconstructions of density surveys. We chose not to use this prior, and we show, in a model independent way, that velocity field surveys are consistent with each other and that they do sample the underlying large--scale velocity field in an unbiased, robust way. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{Bulk-Flow-0.ps} \caption{The top panel depicts the spread in the individual components of the bulk flow vector for all six surveys, shown in an Aitoff--Hammer Galactic projection; the cross indicates the weighted mean bulk flow direction. The bottom left and right panels show the galactic longitude and latitude, respectively, as a function of the estimated effective depth ($D_E$) of the survey. The solid and dashed lines mark the mean and standard deviation, respectively, of the weighted mean results from the six catalogs of $l= 234^{{\rm o}}\pm 11^{{\rm o}}$ and $b=12^{{\rm o}}\pm 9^{{\rm o}}$.} \label{fig-Ulb} \end{center} \end{figure} The bulk flow vector direction for each survey is given in Fig. \ref{fig-Ulb}, where we plot the results from perturbing the distances, as described above, in the Aitoff--Hammer projection. It is clear from the figure that the bulk flow vectors for all surveys cluster about the same direction in the sky. Although the bulk flow components are not strictly comparable, it was hard to resist combining the results for all six catalogs to get an estimate of the mean bulk flow of a sphere with an effective depth of $\sim4000$ km s$^{-1}$: approximately 330 km s$^{-1}$ $\pm$ 101 km s$^{-1}$ toward $l= 234^{{\rm o}}\pm 11^{{\rm o}}$ and $b=12^{{\rm o}}\pm 9^{{\rm o}}$, where $l$ and $b$ are the galactic longitude and latitude respectively.
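The direction comparison underlying Table 2 and Fig. \ref{fig-Ulb} reduces to the angle between two three-dimensional vectors; a minimal sketch (the Cartesian components below are made up for illustration, not survey values) is:

```python
import math

def cos_angle(u, v):
    """Cosine of the angle between two 3-D bulk-flow vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Two hypothetical bulk-flow vectors (km/s, Galactic Cartesian components)
u_a = (-200.0, -270.0, 70.0)
u_b = (-180.0, -300.0, 50.0)
c = cos_angle(u_a, u_b)
theta_c = math.acos(c)  # critical angle between the two measurements, radians
```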
The value of the combined bulk flow vector was calculated by weighting the results by their errors, i.e., finding the weighted mean of the values for each survey. We would like to emphasize that this result should be taken with a grain of salt, since the bulk flow is volume dependent and these surveys strongly overlap but do not strictly occupy the same volume. We would also like to point out that the overall agreement between the bulk flow vectors of the different surveys may suggest that the internal shear of the flows should be small, a conclusion we are testing in an upcoming paper (Watkins \& Feldman, 2006). \section {Conclusion} We have presented statistical analyses of the bulk flow measurement for six proper distance surveys. We have shown that the estimates of bulk flows obtained from these surveys are expected to have a high degree of correlation. Further, we have constructed the actual three dimensional estimates of the bulk flow vectors and shown that consistent results are obtainable from independent distance indicators, once they are applied to uniformly selected samples of galaxies. We find no statistically significant differences between the velocity fields mapped by different morphologies, galaxy types or distance indicators. We would like to stress that one should not put too much emphasis on comparing the components of the bulk flow vectors directly, since they are not really comparable. This is especially true since the error bars reflect only the statistical and measurement errors and do not capture the differences in the bulk flows due to the fact that they probe the power spectrum differently. Basically, we don't expect the bulk flow components to agree within the error bars shown in Table 1. However, when we look at the {\it direction} of the bulk flow vectors, they do agree with each other remarkably well.
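The error weighting used for the combined flow is a standard inverse-variance weighted mean; in the sketch below the six amplitudes and errors are placeholders, not the actual survey values:

```python
def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its formal error."""
    weights = [1.0 / e ** 2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, (1.0 / wsum) ** 0.5

# Hypothetical bulk-flow amplitudes (km/s) from six surveys
vals = [300.0, 350.0, 320.0, 310.0, 360.0, 340.0]
errs = [80.0, 100.0, 90.0, 70.0, 120.0, 110.0]
mean, err = weighted_mean(vals, errs)
```

By construction the formal error of the combination is smaller than the smallest individual error, which is why the caveat about overlapping volumes matters.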
Thus we conclude that all bulk flow measurements are consistent with each other given the errors, as long as we allow for small scale aliasing and incomplete cancellations. A rough estimate of the (weighted mean) bulk flow from all surveys gives a flow of 330 km s$^{-1}$ $\pm$ 101 km s$^{-1}$ toward $l= 234^{{\rm o}}\pm 15^{{\rm o}}$ and $b=12^{{\rm o}}\pm 9^{{\rm o}}$. This study clearly supports the notion that we have reached an era where velocity field data are consistent and robust across morphological types, selection criteria, survey geometry etc. Results from independent catalogs probe the same underlying large--scale power, though they are subject to different small--scale fluctuations. Unlike earlier, sparser surveys, the newer proper--distance surveys provide us with a dynamical probe of the large--scale structure which we can add to our growing arsenal of data with confidence that the results reflect the cosmology we probe. \noindent{\bf Acknowledgment:} This research was supported by the University of Kansas General Research Fund (KUGRF). HAF has been supported in part by a grant from the Research Corporation.
\section*{Acknowledgement} We acknowledge the financial support from the National Science Foundation under grant number DMR-0210717. One of us (V. Z.) acknowledges the Mercator-Guestprofessorship award from the German Research Foundation. We are grateful to B. Coqblin for many stimulating discussions.
\section{Introduction} A satisfactory description of the transverse dynamics of a charged particle beam is a fundamental requirement for the new generation of accelerating machines working at very high luminosity. To this aim, the recently proposed {\it Thermal Wave Model\/} (TWM) \cite{fm1} could represent an interesting framework for better describing the dynamics of charged particle beams. According to this model, the beam transport is described in terms of a complex function, the {\it beam wave function} (BWF), whose squared modulus gives the transverse density profile. This function satisfies a Schr\"{o}dinger-like equation in which Planck's constant is replaced by the transverse emittance. In this equation, the potential term accounts for the total interaction between the beam and the surroundings. In particular, in a particle accelerator the potential has to take into account both the multipole-like contributions, depending only on the machine parameters, and the collective terms, which on the contrary depend on the particle distribution (self-interaction). In transverse dynamics, TWM has been successfully applied to a number of linear and non-linear problems. In particular, it seems to be capable of reproducing the main results of Gaussian particle beam optics (dynamics in a quadrupole-like device \cite{fm1}), as well as of estimating the luminosity in final focusing stages of linear colliders in the presence of small sextupole- and octupole-like deviations \cite{fm2}. In addition, for the case of transverse dynamics in quadrupole-like devices with small sextupole and octupole deviations, TWM predictions have recently been compared with tracking code simulations, showing very satisfactory agreement \cite{fgm}. More recently, the Schr\"{o}dinger-like equation of TWM has been derived in the framework of Nelson's stochastic mechanics \cite{tzenov}.
This paper concerns the transverse phase-space behaviour of a charged particle beam passing through a thin quadrupole with small sextupole and octupole deviations, with the aim of finding the most suitable function to describe the {\it phase-space distribution} associated with the beam in the framework of TWM. By using the BWF found in Ref.~\cite{fgm}, we compute the corresponding Wigner function (WF) \cite{wig} which, according to the quantum mechanics formalism, should represent the most natural definition for the beam distribution function. Unfortunately, due to the uncertainty principle, the WF is not positive definite; hence for the above BWF we also compute the corresponding Husimi function (HF) \cite{hus}, the so-called $Q$-function, which is an alternative to the WF and is in fact positive definite. A comparison of both these predictions with the results of a particle tracking simulation is then performed in order to select the most appropriate definition of the beam phase-space function in TWM. The paper is organized as follows. A brief presentation of the BWF determination is given in section 2, whilst in section 3 we give the definitions of both the WF and the HF. In section 4, the comparison of the theoretical predictions with the simulation results is presented. Finally, section 5 contains conclusions and remarks. \section{Beam Wave Function in the presence of small sextupole and octupole aberrations} Let us consider a charged particle beam travelling along the $z$-axis with velocity $\beta c$ ($\beta \approx 1$), and having transverse emittance $\epsilon$.\\ We suppose that, at $z=0$, the beam enters a focusing quadrupole-like lens of length $l$ with small sextupole and octupole deviations, and then propagates in vacuo.
In this region, if we denote by $x$ the transverse coordinate (1-D case), the beam particles feel the following potential \begin{equation} U(x,z) = \left\{ \begin{array}{cc} \frac{1}{2!} k_{1} x^2 + \frac{1}{3!} k_{2} x^3 +\frac{1}{4!} k_{3} x^{4} & 0\leq z \leq l\\ 0 & z > l \end{array} \right.~~~, \label{1} \end{equation} where $k_{1}$, $k_{2}$ and $k_{3}$ are the quadrupole, sextupole and octupole strengths, respectively. Note that $U(x,z)$ is a dimensionless energy potential, obtained by dividing the potential energy associated with the transverse particle motion by the factor $m_{0}\gamma\beta^2c^2$, where $m_{0}$ and $\gamma$ are the particle rest mass and the relativistic factor $[1-\beta^2]^{-1/2}$, respectively. As already stated, in the TWM the transverse beam dynamics is ruled by the Schr\"{o}dinger-like equation \cite{fm1} \begin{equation} i\epsilon~\frac{\partial \Psi}{\partial z} = - {}~\frac{\epsilon^2}{2} \frac{{\partial}^{2}}{\partial x^2} \Psi + U(x,z) \Psi~~~. \label{eq:schr} \end{equation} The $z$-constancy of the integral $\int_{-\infty}^{+\infty} |\Psi(x,z)|^2~dx$, which is a consequence of the reality of $U(x,z)$ in (\ref{eq:schr}), suggests interpreting $|\Psi(x,z)|^2$ as the transverse density profile of the beam. Hence, if $N$ is the total number of the beam particles, $\lambda (x,z)\equiv N~|\Psi (x,z)|^2$ is the transverse number density. We fix the initial profile as a Gaussian density distribution of r.m.s. $\sigma_{0}$, which corresponds to the initial BWF \begin{equation} \Psi_{0}(x)\equiv \Psi(x,0)= {1 \over \left[2\pi\sigma_{0}^{2}\right]^{1/4}} \exp\left(-\frac{x^2}{4\sigma^2_{0}}\right)~~~.
\label{eq:psiinit} \end{equation} Provided that $\sigma_0 k_2/(3k_1) \ll 1$ and $\sigma_0^2 k_3/(12k_1) \ll 1$, the quantum mechanics formalism for {\it time}-dependent perturbation theory applied to (\ref{eq:schr}) for the case of a thin lens ($\sqrt{k_{1}}l \ll 1$) allows us to give an approximate first-order normalized BWF in the configuration space. At the exit of the lens ($z=l$) it reads \cite{fgm} \begin{eqnarray} \Psi (x,l) & = & \Psi_{0}(x) ~\exp\left(-i { K_1 x^2 \over 2 \epsilon} \right) \left[ (1-i3\omega)~H_{0}\left(\frac{x} {\sqrt{2}\sigma_{0}}\right) - i \frac{3\tau}{\sqrt{2}}~H_{1}\left(\frac{x} {\sqrt{2}\sigma_{0}}\right)\right. \nonumber\\ & - & \left. i 3 \omega ~H_{2}\left( \frac{x}{\sqrt{2}\sigma_{0}}\right) - i \frac{\tau}{2\sqrt{2}}~H_{3}\left(\frac{x}{\sqrt{2}\sigma_{0}}\right) -i \frac{\omega}{4}~H_{4}\left(\frac{x}{\sqrt{2}\sigma_{0}}\right)\right]~~~, \label{eq:psil} \end{eqnarray} where $\tau \equiv \sigma_0^3 K_2 /6\epsilon$, and $\omega \equiv \sigma_0^4 K_3 / 24\epsilon$, with $K_{i} \equiv k_{i} l$ ($i=1,2,3$), the integrated {\it aberration strengths}. Remarkably, Eq. (\ref{eq:psil}) shows that, due to the aberrations, the BWF after passing the {\it kick stage} is a superposition of only five modes, according to the simple selection rules due to (\ref{1}). Thus, after a drift of length $L$ ($L\gg l$) in the free space we get (see also \cite{fgm}) \begin{eqnarray} \Psi(x,L) & = & \frac{\exp{\left[- \frac{x^2}{4 \sigma^2(L)}\right]} \exp{\left[i\frac{x^2}{2 \epsilon R(L)} + i \phi(L)\right]}} {\left[2 \pi \sigma^2(L) (1 + 15 \tau^2 + 105 \omega^2)^2 \right]^{1/4}} \left[ (1-i3\omega)~H_{0}\left(\frac{x} {\sqrt{2}\sigma(L)}\right) \right. \nonumber\\ & - & \left. i \frac{3\tau}{\sqrt{2}}~H_{1}\left(\frac{x} {\sqrt{2}\sigma(L)}\right) e^{i 2 \phi(L)} - i 3 \omega ~H_{2}\left( \frac{x}{\sqrt{2}\sigma(L)}\right) e^{i 4 \phi(L)} \right. \nonumber\\ & - & \left. 
i \frac{\tau}{2\sqrt{2}}~H_{3}\left(\frac{x}{\sqrt{2} \sigma(L)}\right) e^{i 6 \phi(L)} - i \frac{\omega}{4}~H_{4}\left(\frac{x}{\sqrt{2}\sigma(L)}\right) e^{i 8 \phi(L)}\right]~~~, \label{13p} \end{eqnarray} where \begin{eqnarray} \sigma(L) & = & \left[\left( \frac{\epsilon^2}{4 \sigma_{0}^2} + K_{1}^2 \sigma_0^2\right) (L-l)^2 - 2 K_1 \sigma^2_0 (L-l) + \sigma_0^2\right]^{1/2}~~~, \nonumber\\ \frac{1}{R(L)} & = & \left.\frac{1}{\sigma} \frac{d \sigma}{d z}\right|_{z=L}~~~, \nonumber\\ \phi(L)& = & - \frac{1}{2} \left\{ \arctan{\left[ \left( \frac{\epsilon}{2 \sigma_{0}^{2}} + \frac{ 2 K_{1}^{2} \sigma^2_{0}} {\epsilon}\right)\left( L-l \right)- \frac{ 2 K_{1} \sigma^2_{0}} {\epsilon} \right]}\right. \nonumber\\ & + & \left.\arctan{\left[ \frac{ 2 K_{1} \sigma^2_{0}} {\epsilon}\right]} \right\}~~~. \label{13q} \end{eqnarray} Correspondingly, the Fourier transform of $\Psi (x,z)$ \begin{equation} \Phi(p,z)\equiv \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\Psi (x,z)~\exp(-ipx/\epsilon)~dx~~~, \label{3} \end{equation} is the BWF in the momentum space.\\ Consequently, $|\Psi (x,l)|^2$ and $|\Psi (x,L)|^2$ give the transverse particle density profiles at $z=l$ (after lens) and at $z=L$ (after drift), respectively, whilst $|\Phi (p,l)|^2$ and $|\Phi (p,L)|^2$ give the momentum distributions of the particles, at $z=l$ and at $z=L$, respectively. In Ref.~\cite{fgm} the configuration-space and momentum-space distributions are reported for some significant values of the parameters $\sigma_{0}$, $\epsilon$, $K_{1}$, $K_{2}$, $K_{3}$, and $L$. They are compared with the corresponding results obtained from particle tracking simulations, showing very satisfactory agreement. In the next section we extend this analysis, made separately both in configuration space and in momentum space, by introducing the appropriate distribution function in the context of the TWM approach.
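The envelope formula for $\sigma(L)$ in Eq. (\ref{13q}) can be evaluated directly; in the sketch below $\epsilon$ and $\sigma_0$ are the values later used in the numerical study, while the integrated strength $K_1$ and the lengths $l$, $L$ are purely illustrative assumptions:

```python
import math

def sigma_L(L, l, eps, sigma0, K1):
    """R.m.s. beam width after the thin lens plus a drift, Eq. (13q):
    sigma(L)^2 = (eps^2/(4 sigma0^2) + K1^2 sigma0^2)(L-l)^2
                 - 2 K1 sigma0^2 (L-l) + sigma0^2."""
    d = L - l
    return math.sqrt((eps ** 2 / (4 * sigma0 ** 2) + K1 ** 2 * sigma0 ** 2) * d ** 2
                     - 2 * K1 * sigma0 ** 2 * d + sigma0 ** 2)

eps, sigma0 = 120e-6, 0.05      # emittance (m rad) and initial width (m)
K1, l, L = 0.5, 0.1, 2.0        # hypothetical lens strength and geometry
width_after_drift = sigma_L(L, l, eps, sigma0, K1)
```

With $K_1>0$ the width first shrinks toward a waist and then grows again, as expected for a focusing lens followed by a drift.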
Since the approximate solutions (\ref{eq:psil}) and (\ref{13p}) for the BWF are given for small sextupole and octupole deviations from a quadrupole potential (harmonic oscillator), the present analysis falls within the {\it semiclassical} description of wave packet evolution (WKB theory) extensively treated in Ref. \cite{littlejohn}. \section{Phase-space distributions} According to Quantum Mechanics (QM), for a given BWF $\Psi (x,z)$ we can introduce the density matrix $\rho$ as \begin{equation} \rho (x,y,z) \equiv \Psi (x,z) \Psi^{*}(y,z)~~~, \label{eq:rhopsi} \end{equation} which, in Dirac's $\langle bra|$ and $|ket\rangle$ notation, is associated with the following {\it density operator} \begin{equation} \hat{\rho} = | \Psi \rangle \langle \Psi |~~~. \label{eq:matrdens} \end{equation} Note that $\hat{\rho}$ has the following two properties\\ i) probability conservation \begin{equation} \mbox{Tr}(\hat{\rho}) = 1~~~; \label{eq:trace} \end{equation} ii) hermiticity \begin{equation} \hat{\rho}^{\dag} = \hat{\rho}~~~. \label{eq:herm} \end{equation} On the basis of this density matrix definition, we can define the relevant phase-space distributions associated with the transverse beam motion within the framework of TWM. \subsection{Wigner function} \label{sec:wigf} One of the most widely used phase-space representations in QM is the one introduced by Weyl and Wigner.
In this representation, by simply replacing Planck's constant with $\epsilon$, the phase-space beam dynamics can be described in terms of the following function, called Wigner function (WF) \begin{equation} W(x,p,z) \equiv { 1 \over 2 \pi \epsilon} \int_{-\infty}^{+\infty} \rho\left(x-{y \over 2}, x+{y \over 2},z \right)~\exp\left(i {p y \over\epsilon} \right)~dy~~~, \label{eq:wig} \end{equation} namely, by virtue of (\ref{eq:rhopsi}) \begin{equation} W(x,p,z) = {1 \over 2 \pi \epsilon} \int_{-\infty}^{+\infty} \Psi^{*}\left(x + { y \over 2},z\right) \Psi\left(x - { y \over 2},z\right) {}~\exp\left(i{p y\over \epsilon} \right)~dy~~~. \label{eq:wigexpr} \end{equation} It is easy to prove from (\ref{eq:wigexpr}) that \begin{equation} \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}W(x,p,z)~dx~dp=1~~~, \label{w-norm} \end{equation} \begin{equation} \lambda (x, z)~=~N\int_{-\infty}^{+\infty} W(x,p,z)~dp~~~, \label{eq:wigrho} \end{equation} and \begin{equation} \eta (p,z)~=~N\int_{-\infty}^{+\infty} W(x,p,z)~dx~~~, \label{eq:wigintq} \end{equation} where $\eta (p,z)$ represents the transverse momentum space number density (note that $\eta (p,z)=N ~ |\Phi (p,z)|^2$). Eqs. (\ref{eq:wigrho}) and (\ref{eq:wigintq}) show that $N$ times $W(x,p,z)$ is the phase-space distribution function associated with the transverse beam motion. Consequently, $\lambda$ and $\eta$ are its configuration-space and momentum-space projections, respectively. 
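As a numerical cross-check of definition (\ref{eq:wigexpr}), the integral can be evaluated by a simple trapezoidal rule for the initial Gaussian BWF (\ref{eq:psiinit}), for which the integrand is real, and compared with the closed bi-Gaussian form (\ref{eq:wig0}); a minimal sketch, with the beam parameters of the later numerical study:

```python
import math

def psi0(x, sigma0):
    """Initial Gaussian BWF, Eq. (psiinit)."""
    return (2 * math.pi * sigma0 ** 2) ** -0.25 * math.exp(-x ** 2 / (4 * sigma0 ** 2))

def wigner_gaussian(x, p, eps, sigma0, ny=2001, ymax_sig=12.0):
    """Trapezoidal evaluation of Eq. (wigexpr); for a real wave function
    the sine part of exp(i p y / eps) cancels by symmetry."""
    ymax = ymax_sig * sigma0
    h = 2 * ymax / (ny - 1)
    s = 0.0
    for k in range(ny):
        y = -ymax + k * h
        w = 0.5 if k in (0, ny - 1) else 1.0
        s += w * psi0(x + y / 2, sigma0) * psi0(x - y / 2, sigma0) \
               * math.cos(p * y / eps)
    return s * h / (2 * math.pi * eps)

def w0_exact(x, p, eps, sigma0):
    """Closed form, Eq. (wig0)."""
    return math.exp(-x ** 2 / (2 * sigma0 ** 2)
                    - 2 * sigma0 ** 2 * p ** 2 / eps ** 2) / (math.pi * eps)

eps, sigma0 = 120e-6, 0.05
w_num = wigner_gaussian(0.0, 0.0, eps, sigma0)
w_ref = w0_exact(0.0, 0.0, eps, sigma0)
```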
The averaged-quantity description can be done by introducing the following second-order moments of $W$ \begin{equation} \sigma_{x}^{2}(z) \equiv \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}x^2~W(x,p,z)~dx~dp \equiv \langle x^2 \rangle~~~, \label{eq:sigmax2} \end{equation} \begin{equation} \sigma_{xp}(z) \equiv \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}xp~W(x,p,z)~dx~dp \equiv {1 \over 2}\langle xp+px \rangle~~~, \label{eq:sigmaxp} \end{equation} \begin{equation} \sigma_{p}^{2}(z) \equiv \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}p^2~W(x,p,z)~dx~dp \equiv \langle p^2 \rangle~~~. \label{eq:sigmap2} \end{equation} They are connected with the geometrical properties of the phase-space ensemble associated with the beam; here we have considered the case $\langle p \rangle=\langle x \rangle=0$. According to QM, and simply replacing $\hbar$ with $\epsilon$, we can easily say that $W(x,p,z)$, defined by (\ref{eq:wigexpr}) when the BWF $\Psi (x,z)$ is a solution of (\ref{eq:schr}) (with $U$ an arbitrary Hermitian potential), satisfies the following Von Neumann equation \begin{equation} \left[{\partial \over \partial z}~+~p{\partial \over \partial x}~+~{i \over \epsilon}\left(U\left(x+{i\epsilon \over 2}{\partial \over \partial p}\right)~-~U\left(x-{i\epsilon \over 2}{\partial \over \partial p}\right)\right)\right]W~=~0~~~. \label{von-neumann} \end{equation} Thus, in order to perform a phase-space analysis, one has two possibilities: either to solve (\ref{eq:schr}) for $\Psi (x,z)$ and then, by means of the Wigner transform (\ref{eq:wigexpr}), obtain $W(x,p,z)$, or to solve (\ref{von-neumann}) directly for $W(x,p,z)$.
Although for the potential given by (\ref{1}) the exact solution of the Schr\"{o}dinger-like equation (\ref{eq:schr}) is unknown, time-dependent perturbation theory can be applied to give, at any order of the perturbative expansion, an approximate solution \cite{landau}: for example, (\ref{eq:psil}) is the first-order approximate solution of (\ref{eq:schr}) for the potential (\ref{1}).\\ Also for the Von Neumann equation (\ref{von-neumann}) the exact solution is not available in the case of the potential (\ref{1}), with the exception of the case $k_2=k_3=0$ (pure quadrupole/harmonic oscillator). Nevertheless, it has been shown \cite{narcowich} that, for a more general Hamiltonian which includes our case as a special case, a perturbative Dyson-like expansion for the Wigner function can be constructed which converges to the solution of the corresponding quantum Liouville equation. In general this approach, providing $W(x,p,z)$ directly from the phase-space dynamics, should yield a smaller error. For a thin lens the Wigner transform (\ref{eq:wigexpr}) of the approximate BWF, as given by (\ref{eq:psil}) and (\ref{13p}), coincides with the approximate first-order solution of the quantum Liouville equation \cite{narcowich}: the two approaches are therefore equivalent. As TWM has been mainly developed in the configuration space \cite{fm1}, and the approximate BWF for the typical potentials used in particle accelerators has already been calculated (see, for example, \cite{fm1}-\cite{fgm}), in this paper we have chosen to proceed with this latter approach. For what concerns the calculation of higher-order solutions, solving (\ref{von-neumann}) directly for $W(x,p,z)$ should, in principle, be preferable to the procedure in which we first solve for the BWF and then use (\ref{eq:wigexpr}), because fewer approximations are required, and therefore smaller errors are involved.
Unfortunately it is very difficult to treat the Von Neumann equation (\ref{von-neumann}) numerically, especially if the classical potential contains powers of $x$ higher than the quadratic one. In fact, in this case the operators $U\left[x \pm i(\epsilon/2){\partial \over \partial p}\right]$ make (\ref{von-neumann}) a partial differential equation of order higher than the second in the $p$-derivative, which is rather difficult to handle numerically. In order to treat the BWF and its Wigner transform numerically, instead, one can take advantage of the very powerful methods developed for the Schr\"odinger equation, as has been done, for instance, in Ref. \cite{fmve}. Although $W$ is the distribution function of the system in the framework of TWM, due to well-known quantum mechanical properties it is not positive definite. However, in QM it turns out to be positive for some special harmonic oscillator wave functions, called {\it coherent states} \cite{coherstates,coherstates1,coherstates2,coherstates3}, which give purely Gaussian density profiles. Coherent states for charged particle beams have recently been introduced in TWM in order to describe the coherent structures of charged particle distributions produced in an accelerating machine \cite{dnfmm}. The fact that $W$ can assume negative values in some particular cases reflects the quantum-like properties of both the wave function and the density operator. Remarkably, in the case of a pure quadrupole-like lens ($U=k_1 x^{2}/2$) an interesting quantity, which estimates the r.m.s. area of the phase-space ensemble, can be expressed in terms of these moments, namely \begin{equation} \pi\left[ \langle x^2 \rangle ~ \langle p^2 \rangle - {1 \over 4}\langle xp+px \rangle^2\right]^{1/2} = \pi\left[\sigma^2_{x}(z) \sigma^2_{p}(z) - \sigma^2_{xp}(z) \right]^{1/2}~~~.
\label{eq:area} \end{equation} Since in this case the ground-like state is \begin{equation} \Psi_{0}(x,z)= {1 \over \left[2\pi\sigma_{0}^{2}\right]^{1/4}} \exp\left(-\frac{x^2}{4\sigma^2_{0}}\right)\exp\left(-\frac{i}{2}\sqrt{k_{1}}z \right)~~~, \label{eq:psigauss} \end{equation} the formula (\ref{eq:wigexpr}) produces the following bi-Gaussian Wigner distribution \begin{equation} W_{0}(x,p) = { 1\over \pi \epsilon} \exp\left( - { x^2 \over 2 \sigma_{0}^2} - { 2 \sigma_{0}^2 \over \epsilon^2}p^2 \right)~~~. \label{eq:wig0} \end{equation} In addition, for the following non-coherent Gaussian-like state associated with the charged-particle beam \cite{dnfmm} \begin{equation} \Psi(x,z) = { 1 \over \left[ 2 \pi \sigma^2(z)\right]^{1/4}} \exp\left[ - { x^2 \over 4 \sigma^2(z) } + i { x^2 \over 2 \epsilon R(z) } + i \phi(z) \right]~~~, \label{eq:non-coherent} \end{equation} where \begin{equation} {d^2\sigma \over dz^2}+k_{1}\sigma-{ \epsilon^2\over 4\sigma^3}=0~~~~,~~~~ {1\over R}={1\over \sigma}{d\sigma\over dz}~~~~,~~~~ {d\phi\over dz}=-{\epsilon\over 4\sigma^2}~~~, \label{eq:envelope} \end{equation} definition (\ref{eq:wigexpr}) of $W$ easily gives \begin{equation} W(x,p,z) = { 1 \over \pi \epsilon} \exp\left[ - { x^2 \over 2 \sigma^2(z) } - { 2 \sigma^2(z) \over \epsilon^2} \left( { x \over R(z)} - p \right)^2 \right]~~~. \label{eq:wigz} \end{equation} Note that in this simple case of pure quadrupole the exact analytical solution (\ref{eq:wig0}) and (\ref{eq:wigz}) can be easily obtained directly by integrating (\ref{von-neumann}). 
Note also that the argument of the exponential in (\ref{eq:wigz}) is a quadratic form in the variables $x$ and $p$, which can be written as \begin{equation} F(x,p,z) \equiv - {2 \over \epsilon} \left[ \gamma(z) x^2 + 2 \alpha(z) x p + \beta(z) p^2 \right]~~~, \label{eq:quadraticform} \end{equation} where \begin{equation} \alpha(z)=-{\sigma^2(z) \over \epsilon R(z)}~~~~,~~~~ \beta(z)={\sigma^2(z) \over \epsilon}~~~~,~~~~ \gamma(z) = {\epsilon \over 4 \sigma^2(z)} + {\sigma^2(z)\over \epsilon R^2(z)}~~~. \label{eq:twissdef} \end{equation} These quantities are usually called Twiss parameters \cite{lawson}. By substituting (\ref{eq:wigz}) into (\ref{eq:sigmax2})-(\ref{eq:sigmap2}), we obtain $\alpha(z)= - (1/2) d\beta(z)/dz$, $\gamma(z) = \sigma_p^2(z)/\epsilon$ and \begin{equation} \sigma_x(z) =\sigma(z)~~~,~~~~~~~~\sigma_p^2(z) = {\epsilon^2 \over 4 \sigma^2(z)} + {\sigma^2(z) \over R^2(z)}= {\epsilon^2 \over 4 \sigma^2(z)} ( 1 + 4 \alpha^2(z))~~~. \label{eq:sigmaxsigmap} \end{equation} Consequently, we immediately get the following quantum-like version of the Lapostolle definition of emittance \cite{lawson} \begin{equation} \langle x^2 \rangle \langle p^2\rangle - {1 \over 4} \langle xp+px \rangle^2 = {\epsilon^2 \over 4} = \mbox{const.}~~~, \label{eq:courant} \end{equation} from which the uncertainty relation of TWM can be easily derived \begin{equation} \sigma^2(z) \sigma_p^2(z) \geq { \epsilon^2 \over 4}~~~. \label{eq:TWM-uncertainty} \end{equation} Furthermore, the equation \begin{equation} \gamma(z) x^2 + 2 \alpha(z) xp + \beta(z) p^2 ={\epsilon \over 2}~~~, \label{eq:ellipse} \end{equation} represents, for each $z$, an ellipse in the phase-space associated with the particle beam motion, of area $\pi\epsilon$ (since here $\beta(z)\gamma(z)-\alpha^2(z)=1/4$).
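The invariance of the Lapostolle emittance (\ref{eq:courant}) can be checked numerically from the second-order moments of the Gaussian state (\ref{eq:wigz}); the sketch below uses $\sigma_{xp}=\sigma^2/R$, which follows from (\ref{eq:wigz}), with arbitrary illustrative values of $\sigma$ and $R$:

```python
def emittance_invariant(sigma, R, eps):
    """Evaluate sigma_x^2 sigma_p^2 - sigma_xp^2 for the Gaussian state
    (eq:wigz), using sigma_x = sigma,
    sigma_p^2 = eps^2/(4 sigma^2) + sigma^2/R^2 and sigma_xp = sigma^2/R.
    By Eq. (eq:courant) the result should equal eps^2/4 for any sigma, R."""
    sx2 = sigma ** 2
    sp2 = eps ** 2 / (4 * sigma ** 2) + sigma ** 2 / R ** 2
    sxp = sigma ** 2 / R
    return sx2 * sp2 - sxp ** 2

inv = emittance_invariant(0.05, 3.0, 120e-6)   # expected eps^2/4
```

The $\sigma^4/R^2$ terms cancel exactly, so the result is independent of where along the beam line the moments are taken, as the invariant requires.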
Consequently, the operator \begin{equation} \hat{{\cal J}} (x,p,z)~\equiv ~ \gamma (z) x^2~+~2\alpha (z) {xp+px \over 2}~+~\beta (z) p^2 \label{courant-snyder-invariant} \end{equation} is the quantum-like version of the well-known Courant-Snyder invariant \cite{courant-snyder}. In fact, it is easy to prove that \begin{equation} i\epsilon{\partial\hat{{\cal J}}\over\partial z}~+~ \left[\hat{{\cal J}}, \hat{H}\right] ~=~0~~~, \label{quantum-like-total-derivative} \end{equation} where $\hat{H}$ is the Hamiltonian operator for the case of a quadrupole. \subsection{Husimi function} The phase-space description of a quantum system can also be given in terms of another function, introduced by Husimi \cite{hus}: the so-called {\it $Q$-function} or {\it Husimi function} (HF). In order to give an analogous definition of the HF for charged particle beams in the context of TWM, we again replace Planck's constant with $\epsilon$, obtaining \begin{equation} Q(x_{0},p_{0},z)\equiv {1 \over 2\pi\epsilon}~\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}~\Theta^{*} (u,z;x_{0},p_{0}) \rho (u,v,z)\Theta (v,z;x_{0},p_{0})~du~dv~~~, \label{eq:q-fun} \end{equation} where $\Theta (x,z;x_{0},p_{0})$ is a coherent state associated with the charged particle beam, defined as \cite{dnfmm} \begin{equation} \Theta (u,z;x_{0},p_{0})= {1 \over \left[2\pi\sigma_{0}^{2}\right]^{1/4}} \exp\left[-\frac{(u-x_{0}(z))^2}{4\sigma^2_{0}}+\frac{i}{\epsilon}p_{0}(z)u -i\delta_{0}(z)\right]~~~, \label{eq:thetacoher} \end{equation} with \begin{equation} x_{0}(z)\equiv \langle u \rangle = \int_{-\infty}^{\infty}u~|\Theta|^2~du~~~, \label{xaverage} \end{equation} \begin{equation} p_{0}(z)\equiv \langle \hat{p} \rangle = \int_{-\infty}^{\infty}\Theta^{*}~ \left(-i\epsilon{\partial \over \partial x} \right)\Theta~dx~~~, \label{paverage} \end{equation} and \begin{equation} {d\delta_{0} \over dz}= {p_{0}^2 \over 2\epsilon}-\sqrt{k_{1}}{x_{0}^2 \over 4\sigma_{0}^2}+{\sqrt{k_{1}}\over 2}~~~.
\label{deltazero} \end{equation} Note that here $x_{0}$ and $p_{0}$ play the role of classical phase-space variables.\\ By substituting (\ref{eq:rhopsi}) and (\ref{eq:thetacoher}) in (\ref{eq:q-fun}) we obtain \begin{eqnarray} Q(x,p,z)& = & {1 \over 2\pi\epsilon \sqrt{2\pi\sigma_{0}^{2}} } \exp\left(-{x^2 \over 2 \sigma_{0}^{2}}\right) \int_{-\infty}^{\infty}dv\int_{-\infty}^{\infty}du \nonumber\\ & \times &\exp\left[-{u^2+v^2 \over 4\sigma_{0}^{2}}+{x(u+v)\over 2\sigma_{0}^{2}}+{i\over\epsilon}p(u-v)\right] \Psi^{*}(u,z) \Psi(v,z)~~~, \label{eq:q-xpz} \end{eqnarray} where for simplicity we have replaced the classical variables $x_{0}$ and $p_{0}$ with $x$ and $p$, respectively. However, integrating over $p$ or over $x$ does not give the configuration-space or the momentum-space distribution, respectively. This function does, however, remove the {\it pathology} of negativity exhibited by $W$, and thus gives a more accurate description in the phase-space regions where $W$ is negative.\\ Some properties of $Q$ are in order. \vspace{.5cm} \noindent i) It is easy to prove the following normalization relation \begin{equation} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}Q(x,p,z)~dx~dp=1~~~. \label{eq:Q-norm} \end{equation} \vspace{.5cm} \noindent ii) Definition (\ref{eq:q-xpz}) can be cast in the following more convenient form \begin{equation} Q(x,p,z) = { 1 \over 8 \pi \epsilon \sqrt{2 \pi \sigma_0^2}} \left| \int_{-\infty}^{\infty}dy~\exp\left( - { y^2 \over 16 \sigma_0^2} - i {p y \over 2 \epsilon} \right) {}~\Psi\left(x + { y \over 2},z \right) \right|^2~~~. \label{eq:q-convform} \end{equation} Note that (\ref{eq:q-convform}) clearly shows that $Q$ is positive definite. \vspace{.5cm} \noindent iii) Definition (\ref{eq:q-fun}) of the Husimi function, or equivalently (\ref{eq:q-convform}), does not in general give a phase-space distribution coinciding with the WF.
In the case of a Gaussian BWF, however, they must coincide, because in this case both $Q$ and $W$ reduce to the {\it classical} distribution. In particular, we observe that since the constant $\sigma_0$ is involved in (\ref{eq:q-fun}) and in (\ref{eq:q-convform}), the present definitions of $Q$ are suitable only to describe the phase-space distribution of eigenstates: in this way $Q$ does not explicitly depend on $z$. The natural generalization of $Q$ for a $z$-dependent BWF will be introduced later. Now we point out that, in connection with the Gaussian BWF given by (\ref{eq:psigauss}), Eq. (\ref{eq:q-convform}) gives a bi-Gaussian $Q$-function that does not coincide with (\ref{eq:wig0}) because of a scaling disagreement. This problem can be easily removed by introducing the following new definition for $Q$ \begin{equation} Q(x,p,z) = { 1 +\lambda^2 \over 8 \pi \epsilon \lambda^2 \sqrt{2 \pi \sigma_0^2}} \left| \int_{-\infty}^{\infty}dy~\exp\left( - { y^2 \over 16 \sigma_0^2 \lambda^2} - i { \sqrt{1 +\lambda^2} p y \over 2 \epsilon \lambda } \right) {}~\Psi\left(\sqrt{1 + \lambda^2} x + { y \over 2},z \right) \right|^2, \label{eq:q-lambdaform} \end{equation} for any real number $\lambda$. This is equivalent to introducing the following substitutions in definition (\ref{eq:q-fun}) \begin{equation} x \rightarrow \sqrt{1+\lambda^2}~x~~~, ~~~p \rightarrow {\sqrt{1+\lambda^2} \over \lambda}~p~~~, ~~~Q\rightarrow {1+\lambda^2 \over \lambda}~Q~~~. \label{substitution} \end{equation} Under these substitutions, the normalization condition (\ref{eq:Q-norm}) is preserved. In order to symmetrize (\ref{substitution}), we choose $\lambda=1$.
Consequently, \begin{equation} x \rightarrow \sqrt{2}~x~~~, ~~~p \rightarrow \sqrt{2}~p~~~, {}~~~Q\rightarrow 2~Q~~~, \label{substitution1} \end{equation} and (\ref{eq:q-lambdaform}) becomes \begin{equation} Q(x,p,z) = { 1 \over 4 \pi \epsilon \sqrt{2 \pi \sigma_0^2}} \left| \int_{-\infty}^{\infty}dy~\exp\left( - { y^2 \over 16 \sigma_0^2 } - {i \over \sqrt{2} \epsilon }p y \right) {}~\Psi\left(\sqrt{2} x + { y \over 2},z \right) \right|^2~. \label{eq:q-fun1} \end{equation} Thus, by using (\ref{eq:psigauss}) in (\ref{eq:q-fun1}) we obtain \begin{equation} Q(x,p) = { 1\over \pi \epsilon} \exp\left( - { x^2 \over 2 \sigma_{0}^2} - { 2 \sigma_{0}^2 \over \epsilon^2}p^2 \right)~~~, \label{eq:q0} \end{equation} which now coincides with the corresponding Wigner function (\ref{eq:wig0}). \vspace{.5cm} \noindent iv) For the non-coherent Gaussian-like BWF (\ref{eq:non-coherent}), Eq. (\ref{eq:q-fun1}) is no longer valid, because it does not coincide with (\ref{eq:wigz}). In particular, at any $z$, the phase-space ellipses associated with $W$ do not coincide with the contours of (\ref{eq:q-fun1}) for BWF given by (\ref{eq:non-coherent}). This problem can be overcome by generalizing definition (\ref{eq:q-fun1}) with the following \begin{equation} Q(x,p,z) = { 1 \over 4 \pi \epsilon \sqrt{2 \pi \sigma^2(z)}} \left| \int_{-\infty}^{\infty}dy~\exp\left( - { y^2 \over 16 \sigma^2(z) } - { i y^2 \over 8 \epsilon R(z)} - {i \over \sqrt{2} \epsilon }p y \right) {}~\Psi\left(\sqrt{2} x + { y \over 2},z \right) \right|^2~~~. \label{eq:q-fun2} \end{equation} By substituting (\ref{eq:non-coherent}) in (\ref{eq:q-fun2}) we easily obtain \begin{equation} Q(x,p,z) = { 1 \over \pi \epsilon} \exp\left[ - { x^2 \over 2 \sigma^2(z) } - { 2 \sigma^2(z) \over \epsilon^2} \left( { x \over R(z)} - p \right)^2 \right]~~~, \label{eq:qz} \end{equation} which now coincides with the corresponding WF given by (\ref{eq:wigz}). 
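As a numerical check that the rescaled definition (\ref{eq:q-fun1}) indeed reproduces (\ref{eq:q0}) for the Gaussian BWF (\ref{eq:psiinit}), the complex integral can be evaluated by a trapezoidal rule; a minimal sketch, again with the beam parameters of the numerical study:

```python
import cmath
import math

def psi0(x, sigma0):
    """Initial Gaussian BWF, Eq. (psiinit)."""
    return (2 * math.pi * sigma0 ** 2) ** -0.25 * math.exp(-x ** 2 / (4 * sigma0 ** 2))

def husimi_gaussian(x, p, eps, sigma0, ny=2001, ymax_sig=12.0):
    """Trapezoidal evaluation of the symmetrized Q-function, Eq. (q-fun1)."""
    ymax = ymax_sig * sigma0
    h = 2 * ymax / (ny - 1)
    s = 0.0 + 0.0j
    for k in range(ny):
        y = -ymax + k * h
        w = 0.5 if k in (0, ny - 1) else 1.0
        s += w * cmath.exp(-y ** 2 / (16 * sigma0 ** 2)
                           - 1j * p * y / (math.sqrt(2) * eps)) \
               * psi0(math.sqrt(2) * x + y / 2, sigma0)
    s *= h
    return abs(s) ** 2 / (4 * math.pi * eps * math.sqrt(2 * math.pi * sigma0 ** 2))

def q0_exact(x, p, eps, sigma0):
    """Closed form, Eq. (q0), identical to the Wigner function (wig0)."""
    return math.exp(-x ** 2 / (2 * sigma0 ** 2)
                    - 2 * sigma0 ** 2 * p ** 2 / eps ** 2) / (math.pi * eps)

eps, sigma0 = 120e-6, 0.05
q_num = husimi_gaussian(0.0, 0.0, eps, sigma0)
q_ref = q0_exact(0.0, 0.0, eps, sigma0)
```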
\vspace{.5cm} \noindent v) Note that the generalized definition (\ref{eq:q-fun2}) suggests starting from (\ref{eq:q-fun}) where the coherent states $\Theta (u,z;x_0,p_0)$ are replaced by the following non-coherent Gaussian-like states \begin{equation} \Theta_0 (u,z;x_{0},p_{0})= {1 \over \left[2\pi\sigma^{2}(z)\right]^{1/4}} \exp\left[-\frac{(u-x_{0}(z))^2}{4\sigma^2(z)}+\frac{i}{\epsilon}p_{0}(z)u -i\delta_{0}(z) + \frac{i(u-x_{0}(z))^2}{2 \epsilon R(z)}\right]~~~. \label{eq:thetaz} \end{equation} This way, if we start from (\ref{eq:q-fun}) with (\ref{eq:thetaz}), it is easy to prove that the result is equivalent to (\ref{eq:q-fun2}) for any $\Psi$. Consequently, the generalized definition (\ref{eq:q-fun}) with (\ref{eq:thetaz}) produces the same results as the Wigner function definition (\ref{eq:wigexpr}) for any Gaussian-like state. \vspace{.5cm} \noindent vi) Note that (\ref{eq:thetaz}) are the non-coherent fundamental modes of the following set of solutions of (\ref{eq:schr}) in the case of $U = (1/2) K_1 x^2$ \begin{equation} \Theta_n(x,z;x_0,p_0) = {\Theta_0(x,z;x_0,p_0) \over \sqrt{2^n n!}} H_n\left( { x - x_0(z)\over \sqrt{2} \sigma(z) }\right)~\exp\left( i 2 n \phi(z)\right)~~~. \label{eq:theta-n} \end{equation} These modes are analogous to the squeezed and correlated Fock states used in quantum optics \cite{Dodonov}. In the next section we check whether the equivalence between (\ref{eq:q-fun}) with (\ref{eq:thetaz}) and (\ref{eq:wigexpr}) is still valid for non-Gaussian BWF. In particular, we make a comparison between definition (\ref{eq:wigexpr}) and (\ref{eq:q-fun}) with (\ref{eq:thetaz}) for the case of BWF given by (\ref{eq:psil}) and (\ref{13p}), and in addition we compare them with the corresponding results of a multi-particle tracking simulation.
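A short numerical sketch of ours, confirming that the states of the form (\ref{eq:thetaz}) are normalized: with the chirp term taken as a pure phase (as required for normalizability, and consistent with the phase $-iy^2/8\epsilon R$ appearing in (\ref{eq:q-fun2})), $|\Theta_0|^2$ is a plain Gaussian and $\int |\Theta_0|^2\,du = 1$ for any $R(z)$. The parameter values are illustrative assumptions; the constant global phase $\delta_0$ is omitted.

```python
import numpy as np

# Our sketch: the p_0- and chirp-dependent factors of Theta_0 are pure phases,
# so |Theta_0|^2 is a normalized Gaussian for any R(z).  Parameter values are
# illustrative assumptions; the global phase delta_0 is omitted.
sigma, x0, p0, R, eps = 0.05, 0.01, 2e-3, 3.0, 1.2e-4

u = np.linspace(x0 - 8 * sigma, x0 + 8 * sigma, 4001)
theta0 = (2 * np.pi * sigma**2) ** (-0.25) * np.exp(
    -(u - x0) ** 2 / (4 * sigma**2)
    + 1j * p0 * u / eps
    + 1j * (u - x0) ** 2 / (2 * eps * R)
)
norm = np.sum(np.abs(theta0) ** 2) * (u[1] - u[0])
print(norm)   # ~ 1.0
```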
\section{Numerical analysis} A numerical study has been pursued in order to compare the description of the phase-space as given by TWM, both by means of WF and by means of HF, with the one resulting from standard particle tracking. A Gaussian flat (1-D) particle beam has been used as starting beam, with emittance $\epsilon = 120 \times 10^{-6} \mbox{m}~ \mbox{rad}$, and $\sigma_0 = 0.05$ m. From Eq.~(\ref{eq:sigmaxsigmap}) it follows that, if $\alpha(0) = 0$ at the start, $\sigma_{p0}\equiv \sigma_{p}(0) = 1.2 \times 10^{-3}~\mbox{ rad}$. (Note that the definition of emittance given in (\ref{eq:courant}) differs from the definition used in classical accelerator physics by a factor 1/2.) A simple device made of a quadrupole magnet plus a drift space has been considered as beam transport line; in addition, sextupole and/or octupole aberrations have been included in the quadrupole. The Wigner function (\ref{eq:wigexpr}) and the $Q$-function (\ref{eq:q-fun2}) have been computed by numerical integration for different combinations of aberration strengths, and have been compared with the results of the tracking of $7 \times 10^{5}$ particles. Isodensity contours at $1$, $2$ and $3~\sigma$ have been used to describe the particle distribution in phase-space, both before and after the passage through the simple device specified above. It is worth noting that with this choice only $2\%$ of the particles are found beyond the contour at $2~\sigma$, and only $0.01\%$ of them are beyond the contour at $3~\sigma$. In Figure~1 the starting distribution is shown together with the distribution emerging from the linear transport line. As already shown, in the linear case both WF and HF can be computed analytically (Eqs.~(\ref{eq:wigz}) and (\ref{eq:qz})), yielding exactly the same result as conventional accelerator physics. Therefore, the output from tracking and the TWM predictions are in full agreement, and thus their superposition is not shown here.
In the cases shown in Figure~2, non-linear perturbations have been added to the quadrupole lens: a sextupole perturbation in the first column, an octupole perturbation in the second column, and perturbations of both kinds in the third column. The results from the numerical computation of WF and HF are shown also superposed to the tracking results for a first set of perturbation values, corresponding to the distortions\footnote{ The phase-space distortion can be defined as the ratio between the deflection $\Delta p(x)\equiv p(x) - p(x_0)$, at $x=\sigma_0$, and $\sigma_{p0}$, namely $D\equiv \Delta p(\sigma_0)/\sigma_{p0}$. It is easy to prove that $D=6 \tau= \sigma_0^3 K_2 /\epsilon$ for pure sextupole, and $D= 8 \omega = \sigma_0^4 K_3 / 3 \epsilon$ for pure octupole.} $D=0.125$ in the case of $K_2 = 0.06~m^{-2}$ and $D = 0.042$ in the case of $K_3 = 1.2~m^{-3}$. It should be noted that these values of distortion, chosen with the aim of enhancing the effects desired, are huge compared with what could be considered acceptable in any realistic single-pass device. A quite good agreement with tracking can be observed for the contours at $1$ and $2~\sigma$ for both WF and HF. The contours at $3~\sigma$ show some discrepancies, more pronounced in the case of the Wigner function, which, as discussed in Section~\ref{sec:wigf}, in the periphery of the distribution produces regions with negative phase-space density which yield an unrealistic distortion of the phase-space. By the use of the $Q$-function, instead, this effect is largely smoothed out, as shown in the last two rows of Figure~2. In the plots displayed in the different columns of Figure~3, the same kinds of perturbations are included as in Figure~2, but with twice the strength, therefore yielding twice the distortion.
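The quoted distortion values can be reproduced from the footnote formulas; in our sketch below the quoted $D=0.125$ and $D=0.042$ are recovered when the formulas are evaluated with the classical emittance $\epsilon/2$ (cf. the factor-1/2 remark on the emittance definition) — that choice is our assumption, not an explicit statement of the text.

```python
# Our sketch: evaluate the distortion formulas D = 6*tau = s0^3 K2 / eps
# (sextupole) and D = 8*omega = s0^4 K3 / (3 eps) (octupole).  Using the
# *classical* emittance eps/2 here is our assumption; it reproduces the
# values D = 0.125 and D = 0.042 quoted in the text.
eps_cl = 120e-6 / 2.0    # classical emittance [m rad]
s0 = 0.05                # sigma_0 [m]
K2 = 0.06                # sextupole strength [m^-2]
K3 = 1.2                 # octupole strength [m^-3]

D_sext = s0**3 * K2 / eps_cl          # D = 6 tau
D_oct = s0**4 * K3 / (3.0 * eps_cl)   # D = 8 omega
print(D_sext, round(D_oct, 3))        # 0.125 0.042
```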
These large values of distortion approach the limits of applicability of perturbation theory which, for the starting parameters and the quadrupole strength selected, are given by $D \ll 3{\sigma_0}^2 K_1 / \epsilon = 2.25$ for the sextupole perturbation and $D \ll 4{\sigma_0}^2 K_1 / \epsilon = 3.00$ for the octupole perturbation. Indeed larger discrepancies between TWM and tracking can be observed now, in particular where the sextupole perturbation is present ($D~=~0.25$). Nevertheless in these cases (columns 1 and 3 of Figure~3.) HF still describes fairly well the phase-space distribution for particles with amplitudes up to $1-1.5~\sigma$, whilst the WF distortion due to its negativity starts to show up already at amplitudes of the order of 1$\sigma$. In spite of the discrepancies observed at large amplitudes due to the particularly strong perturbations used in this study, these results can be considered very satisfactory: the TWM phase-space description of beam dynamics in the presence of a non-linear lens, and in particular the one given by the HF, will be in more than reasonable agreement with the one given by classical accelerator physics for all realistic values of perturbation. \section{Remarks and conclusions} In this paper we have studied the phase-space distribution associated with the transverse motion of a charged particle beam travelling through a quadrupole-like device with sextupole and octupole deviations. In particular, a quantum-like phase-space analysis within the {\it Thermal Wave Model} has been developed and its predictions have been compared with the results of particle tracking simulations. To this end, we have first introduced the density matrix for the beam wave function (\ref{eq:rhopsi}). Then, following the usual definitions, we have constructed the Wigner transform (Eq.~(\ref{eq:wigexpr})), and the Husimi transform (Eq.~(\ref{eq:q-fun})) with $\Theta$ given by (\ref{eq:thetaz}). 
These functions represent the best candidates for describing the full phase-space evolution of the beam. Our aim was to compare the results of tracking simulations with the predictions of TWM, given in terms of $W$ and $Q$, in order to enquire whether the equivalence between (\ref{eq:q-fun}) with (\ref{eq:thetaz}) and (\ref{eq:wigexpr}) is valid also for non-Gaussian BWF, and thus to determine the appropriate function to describe the phase-space dynamics. The Wigner transform of the BWF, which represents the {\it natural} quantum analogue of the classical phase-space distribution associated with the beam, has been numerically computed for the present problem. Its $x$- and $p$-projections reproduce the configuration and momentum space distributions well \cite{fgm}, up to first order in perturbation theory. According to the results presented in section 4, this function proves to be in good agreement with the tracking results for small values of the integrated multipole-like strengths, for which the conditions $\sigma_0 K_2/(3K_1) \ll 1$ and $\sigma_0^2 K_3/(12K_1) \ll 1$ are fully satisfied. For larger values of these parameters small discrepancies appear, because in that case the first-order perturbative expansion is not reliable enough. Nevertheless, we stress the fact that, beyond the contours at $2~\sigma$ and $3~\sigma$, only $2\%$ and $0.01\%$ of the beam particles are present, respectively. Summarizing, the comparison between the tracking results and the TWM predictions shows that:\\ i) for small aberrations (conditions $\sigma_0 K_2/(3K_1) \ll 1$ and $\sigma_0^2 K_3/(12K_1) \ll 1$ well satisfied), both $Q$ and $W$ are in good agreement with the tracking results;\\ ii) for larger aberrations, WF and HF exhibit the same order of discrepancy with respect to the tracking contours, but the distortions in the Wigner contours are much more evident due to the negativity of this function, which is responsible for a change in the contour concavity.
In conclusion, up to first order in perturbation theory, and in the thin-lens approximation, the Wigner function can be adopted as an appropriate phase-space distribution in TWM. In fact, it gives the correct $x$- and $p$-projections, which are in good agreement with the tracking configuration and momentum distributions, respectively. Finally, although the Husimi function does not give the right $x$- and $p$-projections, according to our analysis it provides a better description as far as the phase-space dynamics alone is concerned. \newpage
\section{Introduction} In the medical field, the problem of deformable image registration has been heavily studied for many years. The problem consists in establishing the dense voxel-wise transformation ($\Phi$) that best warps one volume (source or moving, $M$) to match another volume (reference or fixed, $F$). Traditionally, different types of formulations and approaches have been proposed over the years~\cite{sotiras2013deformable} to address the problem. However, with the recent advances in deep learning, learning-based methods have become very popular, providing efficient, state-of-the-art performance~\cite{haskins2020deep}. Despite the large body of work in the field of image registration, many challenges remain. In order to address these challenges and provide common datasets for the benchmarking of learning-based~\cite{voxelmorph,de2019deep} and traditional methods~\cite{heinrich2013mrf,avants2008symmetric}, the Learn2Reg challenge is organised~\cite{adrian_dalca_2020_3715652}. Four tasks were proposed {\color{black} by the organisers} with different organs and modalities. In this work, we focused on two tasks: the CT abdominal (task 3) and the MRI hippocampus registration (task 4). We propose a learning-based method that learns how to obtain spatial gradients in a similar way to~\cite{stergios2018linear,estienne2020deep}.
The main contributions of this work rely on \textit{(i)} enforcing the same network to predict both the $\Phi_{M\rightarrow F}$ and $\Phi_{F\rightarrow M}$ deformations using the same encoding, implicitly enforcing it to be symmetric, and \textit{(ii)} {\color{black}integrating noisy labels from different organs during the training, to fully exploit publicly available datasets.} In the following sections, we briefly summarise these two contributions and present the results that earned our method the third place in the Learn2Reg challenge 2020 (second for task 3 and third for task 4). \section{Methodology} {\color{black} An overview of our proposed framework is presented in Figure~\ref{fig:network}.} Our method uses as backbone a 3D UNet{\color{black}~\cite{cciccek20163d}} based architecture, which consists of 4 blocks with 64, 128, 256 and 512 channels for the encoder part ($\textbf{E}$). Each block consists of a normalisation layer, a Leaky ReLU activation, $3$D convolutions with a kernel size of $3\times3\times3$ and a convolution with kernel size and stride 2 to reduce the spatial resolution. Each of the $F,M$ volumes passes independently through the encoder part of the network. Their encodings are then merged using the subtraction operation before passing through the decoder ($\textbf{D}$) part for the prediction of the optimal spatial gradients of the deformation field $\nabla \Phi$. {\color{black}We obtain the deformation field $\Phi$ from its gradient using integration, which we approximate with the cumulative summation operation.} $\Phi$ is then used to obtain the deformed volume together with its segmentation mask using warping $M^{warp} = \mathcal{W}(M, \Phi_{M\rightarrow F})$. Finally, we apply deep supervision to train our network in a way similar to~\cite{krebs2019learning}. \begin{figure}[t!]
\centering \includegraphics[trim=4.6cm 10.3cm 1.65cm 9.6cm, clip, height=6cm]{Images/Graphe_L2R.pdf} \caption{\color{black} Schematic representation of the proposed methodology.} \label{fig:network} \end{figure} \vspace{-0.2cm} \paragraph{\textbf{Symmetric training}} Even if our grid formulation constrains the spatial gradients to avoid self-crossings along the vertical and horizontal directions for each of the x-, y- and z-axes, it is not diffeomorphic. This indicates that we cannot calculate the inverse transformation of $\Phi_{M\rightarrow F}$. To deal with this problem, we predict both $\Phi_{M\rightarrow F}$ and $\Phi_{F\rightarrow M}$ and we use both for the optimization of our network. Different methods such as~\cite{kim2019unsupervised,guo2020end} explore similar concepts using, however, different networks for each deformation. Due to our fusion strategy on the encoding part, our approach is able to learn both transformations with fewer parameters. In particular, our spatial gradients are obtained by: $\nabla\Phi_{M\rightarrow F} = \mathbf{D}( \mathbf{E}(M) - \mathbf{E}(F))$ and $\nabla\Phi_{F\rightarrow M} = \mathbf{D}( \mathbf{E}(F) - \mathbf{E}(M))$. \vspace{-0.2cm} \paragraph{\textbf{Pretraining and Noisy Labels}} Supervision has been shown to boost the performance of learning-based registration methods by integrating implicit anatomical knowledge during the training procedure. For this reason, in this study, we investigate ways to use publicly available datasets to boost performance. We exploit information from the publicly available datasets KITS 19~\cite{kits_dataset}, Medical Segmentation Decathlon (sub-cohorts Liver, Spleen, Pancreas, Hepatic Lesion and Colon)~\cite{task4_dataset} and TCIA Pancreas~\cite{roth2016data,gibson2018automatic}.
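The integration step described in the Methodology section — recovering $\Phi$ from its predicted spatial gradients via cumulative summation — can be sketched as follows. This is a minimal illustration of ours: the $(3, D, H, W)$ array layout and the unit voxel spacing are assumptions, not details taken from the paper.

```python
import numpy as np

# Our sketch of the integration step: the network predicts spatial gradients
# grad[k] = d(Phi_k)/d(x_k); the field is recovered with a cumulative sum
# (a discrete approximation of integration) along the corresponding axis.
def gradients_to_field(grad):
    phi = np.empty_like(grad)
    for k in range(grad.shape[0]):
        phi[k] = np.cumsum(grad[k], axis=k)
    return phi

# A constant unit gradient integrates to a linear (identity-like) ramp.
g = np.ones((3, 4, 4, 4))
phi = gradients_to_field(g)
print(phi[0, :, 0, 0])   # [1. 2. 3. 4.]
```

Positivity constraints on the predicted gradients (as mentioned in the text) then guarantee that each component of $\Phi$ is monotone along its axis, avoiding self-crossings.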
In particular, we trained a 3D UNet segmentation network on $11$ different organs (spleen, right and left kidney, liver, stomach, pancreas, gallbladder, aorta, inferior vena cava, portal vein and oesophagus). To harmonise the information that we had at our disposal for each dataset, we optimised the dice loss only on the organs that were available per dataset. The network was then used to provide labels for the $11$ organs for approximately $600$ abdominal scans. These segmentation masks were further used for the pretraining of our registration network for task 3. After training, the performance of our segmentation network on the validation set in terms of Dice is summarised as follows: 0.92 (Spl), 0.90 (RKid), 0.91 (LKid), 0.94 (Liv), 0.83 (Sto), 0.74 (Pan), 0.72 (GBla), 0.89 (Aor), 0.76 (InfV), 0.62 (PorV) and 0.61 (Oes). The validation set was composed of 21 patients of the Learn2Reg and TCIA Pancreas datasets. Furthermore, we explored the use of pretraining of registration networks on domain-specific large datasets. In particular, for task 3 the ensemble of the publicly available datasets together with their noisy segmentation masks was used to pretrain our registration network, after a light preprocessing including an affine registration step using Advanced Normalization Tools (ANTs)~\cite{avants2009advanced} and isotropic resampling to $2$mm voxel spacing. {\color{black}Moreover, for task 4, we performed an unsupervised pretraining using approximately $750$ T1 MRI from the OASIS 3 dataset \cite{oasis_dataset} without segmentations.} For both tasks, the pretraining was performed for 300 epochs. \subsection{Training Strategy and Implementation Details} To train our network, we used a combination of multiple loss functions. The first one was the reconstruction loss optimising a similarity function over the intensity values of the medical volume $\mathcal{L}_{sim}$.
For our experiments, we used the mean square error function or normalized cross correlation, depending on the experiment, between the warped image $M^{warp}$ and the fixed image $F$. The second loss integrated anatomical knowledge by optimising the dice coefficient between the warped segmentation and the segmentation of the fixed volume: $\mathcal{L}_{sup} = Dice(M^{warp}_{seg},F_{seg})$. Finally, a regularisation loss was also integrated to enforce smoothness of the displacement field by keeping it close to zero deformation: $\mathcal{L}_{smo} = || \nabla \Phi_{M\rightarrow F} ||$. These losses compose our final objective, calculated for both $\nabla\Phi_{M\rightarrow F}$ and $\nabla \Phi_{F\rightarrow M}$ \begin{equation*} \mathcal{L} = (\alpha \mathcal{L}_{sim} + \beta \mathcal{L}_{sup} + \gamma \mathcal{L}_{smo})_{{M\rightarrow F}} + (\alpha \mathcal{L}_{sim} + \beta \mathcal{L}_{sup} + \gamma \mathcal{L}_{smo})_{{F\rightarrow M}} \end{equation*} where $\alpha$, $\beta$ and $\gamma$ are weights that were manually defined. The network was optimized using the Adam optimiser with a learning rate set to $1e^{-4}$. Regarding the implementation details, for task 3, we used batch size 2 with patch size equal to $144\times144\times144$ due to memory limitations. Our normalisation strategy included the extraction of three CT windows, all of which are used as additional channels, and min-max normalisation to be in the range $(0,1)$. For our experiments we did not use any data augmentation and we set $\alpha=1$, $\beta=1$ and $\gamma=0.01$. The network was trained on 2 Nvidia Tesla V100 with 16 GB memory, for 300 epochs for $\approx$ 12 hours. For task 4, the batch size was set to $6$ with patches of size $64\times64\times64$ while data augmentation was performed by random flip, random rotation and translation.
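The combined objective above can be sketched as follows for one direction ($M\rightarrow F$); the full loss adds the analogous $F\rightarrow M$ term. This is a minimal illustration of ours: the exact Dice and smoothness implementations used in the paper may differ.

```python
import numpy as np

# Our sketch of one direction of the combined objective; the Dice and
# smoothness terms are simplified stand-ins for the actual implementation.
def mse(a, b):
    return np.mean((a - b) ** 2)

def dice_loss(sa, sb, eps=1e-6):
    inter = np.sum(sa * sb)
    return 1.0 - 2.0 * inter / (np.sum(sa) + np.sum(sb) + eps)

def smoothness(grad_phi):
    # keep the predicted gradients close to zero deformation
    return np.mean(np.abs(grad_phi))

def one_direction_loss(M_warp, F, Mseg_warp, Fseg, grad_phi,
                       a=1.0, b=1.0, g=0.01):
    return (a * mse(M_warp, F)
            + b * dice_loss(Mseg_warp, Fseg)
            + g * smoothness(grad_phi))

# Perfect alignment and zero deformation gradients give (almost) zero loss.
F = np.ones((2, 2)); seg = np.ones((2, 2)); zg = np.zeros((3, 2, 2))
print(one_direction_loss(F, F, seg, seg, zg))   # ~ 0.0
```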
Our normalisation strategy in this case included: $\mathcal{N}(0,1)$ normalisation, clipping values outside of the range $[-5, 5]$ and min-max normalisation to stay in the range $(0,1)$. The weights were set to $\alpha=1$, $\beta=1$ and $\gamma=0.1$ and the network was trained on 2 Nvidia GeForce GTX 1080 GPUs with 12 GB memory for 600 epochs for $\approx$ 20 hours. {\color{black}The segmentation network, used to produce the noisy segmentations, was a 3D UNet trained with batch size $6$, learning rate $1e^{-4}$, leaky ReLU activation functions, instance normalisation layers and random crops of patches of size $144\times144\times144$. During inference, we kept the ground truth segmentations of the organs available, we applied a post-processing based on connected components and we checked each segmentation manually to remove outlier results.} \section{Experimental Results} For each task, we performed an ablation study to evaluate the contribution of each component and, for task 3, we performed a supplementary experiment integrating the noisy labels during the pretraining. The evaluation was performed in terms of Dice score, 30\% lowest Dice score, Hausdorff distance and standard deviation of the log Jacobian. These metrics evaluate the accuracy and robustness of the method as well as the smoothness of the deformation. Our results are summarised in Table~\ref{tab:task3_val}, {\color{black}while some qualitative results are represented in Figure~\ref{fig:results}. For the inference on the test set, we used our model trained on both the training and validation datasets}. Concerning the computational time, our approach needs $6.21$ and $1.43$ seconds for inference for tasks 3 and 4 respectively. {\color{black} This is slower than other participants in the challenge, probably due to the size of our deep network, which has around 20 million parameters}. Concerning task 3, one can observe a significant boost in performance when the pretraining with the noisy labels is integrated.
Due to the challenging nature of this registration problem, the impact of the symmetric training was not so high in any of the metrics. On the other hand, for task 4, the symmetric component with the pretraining boosted the robustness of the method while the pretraining had a lower impact than on task 3. One possible explanation is that for this task, the number of provided volumes in combination with the nature of the problem was enough for training a learning based registration method. \begin{tableth} \scalebox{0.83}{ \begin{tabular}{p{0.1\linewidth}|p{0.50\linewidth}|P{0.05\linewidth}P{0.07\linewidth}P{0.07\linewidth}P{0.06\linewidth}||P{0.05\linewidth} P{0.07\linewidth}P{0.07\linewidth}P{0.06\linewidth}} & & \multicolumn{4}{c||}{Task 3} & \multicolumn{4}{c}{Task 4}\\ Dataset & & Dice & Dice30 & Hd95 & StdJ & Dice & Dice30 & Hd95 & StdJ \\ \hline Val & Unregistered & 0.23 & 0.01 & 46.1 & & 0.55 & 0.36 & 3.91 & \\ \hline Val & Baseline & 0.38 & 0.35 & 45.2 & 1.70 & 0.80 & 0.78 & 2.12 & \textbf{0.067}\\ Val & Baseline + sym. & 0.40 & 0.36 & 45.7 & 1.80 & 0.83 & 0.82 & 1.68 & 0.071 \\ Val & Baseline + sym. + pretrain & 0.52 & 0.50 & 42.3 & \textbf{0.32} & \textbf{0.84} & \textbf{0.83} & \textbf{1.63} & 0.093 \\ Val & Baseline + sym. + pretrain + noisy labels & \textbf{0.62} & \textbf{0.58} & \textbf{39.3} & 1.77 & & &\\ \hline Test & Baseline + sym. + pretrain + noisy labels & 0.64 & 0.40 & 37.1 & 1.53 & 0.85 & 0.84 & 1.51 & 0.09 \end{tabular}} \caption{Evaluation of our method for the Tasks 3 \& 4 of Learn2Reg Challenge {\color{black} on the validation set (val) and on the test set (test)}.} \label{tab:task3_val} \end{tableth} \begin{figure}[t!] 
\centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[height=2.7cm]{Images/Task3_plot.png} \caption{Example for task 3} \label{fig:task3} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[height=2.7cm]{Images/Task4_plot.png} \caption{Example for task 4} \label{fig:task4} \end{subfigure} \caption{\color{black}Results obtained on the validation set. From left to right: moving, fixed, deformed images and the deformation grid. For task 3, we display an axial view with the different organs (second row). For task 4, we display a sagittal view with the head and tail masks (second row).} \label{fig:results} \end{figure} \section{Conclusions} In this work, we summarised our method that took the 3rd place in the Learn2Reg challenge, participating in tasks 3 \& 4. Our formulation is based on spatial gradients and explores the impact of symmetry, pretraining and the integration of publicly available datasets. In the future, we aim to further explore symmetry in our method and investigate how our formulation could be endowed with diffeomorphic properties. Finally, we also want to explore adversarial training in order to deal with multimodal registration. \clearpage \bibliographystyle{plain}
\section{Noncommutative tori} \subsection{The algebra of functions} The basic notions of noncommutative differential geometry were introduced and illustrated on the example of a two-dimensional noncommutative torus by A.~Connes in \cite{Connes1}. To define an algebra of functions on a $d$-dimensional noncommutative torus consider a set of linear generators $U_{\bf n}$ labelled by ${\bf n} \in {\mathbb Z}^{d}$ - a $d$-dimensional vector with integral entries. The multiplication is defined by the formula \begin{equation} \label{un} U_{\bf n}U_{\bf m} = e^{\pi i n_{j}\theta^{jk}m_{k}} U_{\bf n + m} \end{equation} where $\theta^{jk}$ is an antisymmetric $d\times d$ matrix, and summation over repeated indices is assumed. We further extend the multiplication from finite linear combinations to formal infinite series $\sum _{\bf n} C({\bf n}) U_{\bf n}$ where the coefficients $C({\bf n})$ tend to zero faster than any power of $\| {\bf n} \|$. The resulting algebra constitutes an algebra of smooth functions on a noncommutative torus and will be denoted as $T_{\theta}^{d}$. Sometimes for brevity we will omit the dimension label $d$ in the notation of the algebra. We introduce an involution $*$ in $T_{\theta}^{d}$ by the rule: $U_{\bf n}^{*} = U_{-\bf n}$. The elements $U_{\bf n}$ are assumed to be unitary with respect to this involution, i.e. $U^{*}_{\bf n}U_{\bf n} = U_{-\bf n}U_{\bf n} = 1\equiv U_{\bf 0}$. One can further introduce a norm and take an appropriate completion of the involutive algebra $T_{\theta}^{d}$ to obtain a $C^{*}$-algebra of functions on a noncommutative torus. For our purposes the norm structure will not be important. A canonically normalized trace on $T_{\theta}^{d}$ is introduced by specifying \begin{equation}\label{trace} {\rm Tr} \, U_{\bf n} = \delta_{{\bf n}, {\bf 0}} \, . 
\end{equation} \subsection{Projective modules} According to the general approach to noncommutative geometry, finitely generated projective modules over the algebra of functions are natural analogs of vector bundles. Throughout this article when speaking of a projective module we will assume a finitely generated left projective module. A free module $(T_{\theta}^{d})^{N}$ is equipped with a $T_{\theta}^{d}$-valued Hermitian inner product $\langle . , .\rangle_{T_{\theta}}$ defined by the formula \begin{equation} \langle (a_{1}, \dots , a_{N}), (b_{1},\dots , b_{N})\rangle_{T_{\theta}} = \sum\limits_{i=1}^{N}a_{i}^{*}b_{i} \, . \end{equation} A projective module $E$ is by definition a direct summand in a free module. Thus it inherits the inner product $\langle . , .\rangle_{T_{\theta}}$. Consider the endomorphisms of the module $E$, i.e. linear mappings $E\to E$ commuting with the action of $T_{\theta}^{d}$. These endomorphisms form an associative unital algebra denoted ${\rm End}_{T_{\theta}}E$. A decomposition $(T_{\theta}^{d})^{N}=E\oplus E'$ determines an endomorphism $P:(T_{\theta}^{d})^{N}\to (T_{\theta}^{d})^{N}$ that projects $(T_{\theta}^{d})^{N}$ onto $E$. The algebra ${\rm End}_{T_{\theta}}E$ can then be identified with a subalgebra in ${\rm Mat}_{N}(T_{-\theta}^{d})$ - the endomorphisms of the free module $(T_{\theta}^{d})^{N}$. The latter one has a canonical trace that is a composition of the matrix trace with the trace specified in (\ref{trace}). By restriction it gives rise to a canonical trace ${\rm Tr}$ on ${\rm End}_{T_{\theta}}E$. The same embedding also provides a canonical involution on ${\rm End}_{T_{\theta}}E$ by a composition of the matrix transposition and the involution $\ast$ on $T_{\theta}^{d}$. A large class of examples of projective modules over noncommutative tori is constituted by the so-called Heisenberg modules. They are constructed as follows. Let $G$ be a direct sum of ${\mathbb R}^{p}$ and an abelian finitely generated group, and let $G^{*}$ be its dual group.
In the most general situation $G={\mathbb R}^{p}\times {\mathbb Z}^{q} \times F$ where $F$ is a finite group. Then $G^{*}\cong {\mathbb R}^{p}\times T^{q} \times F^{*}$. Consider a linear space ${\cal S}(G)$ of functions on $G$ decreasing at infinity faster than any power. We define operators $U_{(\gamma, \tilde \gamma)}: {\cal S}(G)\to {\cal S}(G)$ labelled by a pair $(\gamma, \tilde \gamma)\in G\times G^{*}$ acting as follows \begin{equation}\label{U} (U_{(\gamma, \tilde \gamma)}f)(x)=\tilde \gamma (x) f(x+ \gamma ) \, . \end{equation} One can check that the operators $U_{(\gamma, \tilde \gamma)}$ satisfy the commutation relations \begin{equation} \label{nt} U_{(\gamma, \tilde \gamma)}U_{(\mu, \tilde \mu)}= \tilde \mu (\gamma )\tilde \gamma^{-1} (\mu ) U_{(\mu, \tilde \mu)}U_{(\gamma, \tilde \gamma)} \, . \end{equation} If $(\gamma, \tilde \gamma)$ run over a $d$-dimensional discrete subgroup $\Gamma \subset G\times G^{*}$, $\Gamma \cong {\mathbb Z}^{d}$ then formula (\ref{U}) defines a module over a $d$-dimensional noncommutative torus $T_{\theta}^{d}$ with \begin{equation}\label{cocycle} exp(2\pi i \theta_{ij}) = \tilde \gamma_{i} (\gamma_{j} )\tilde \gamma_{j}^{-1} (\gamma_{i} ) \end{equation} for a given basis $(\gamma_{i} , \tilde \gamma_{i})$ of the lattice $\Gamma$. This module is projective if $\Gamma$ is such that $G\times G^{*}/\Gamma$ is compact. If that is the case then the projective $T_{\theta}^{d}$-module at hand is called a Heisenberg module and denoted by $E_{\Gamma}$. Heisenberg modules play a special role. If the matrix $\theta_{ij}$ is irrational in the sense that at least one of its entries is irrational then any projective module over $T_{\theta}^{d}$ can be represented as a direct sum of Heisenberg modules. In that sense Heisenberg modules can be used as building blocks to construct an arbitrary module. \subsection{Connections} Next we would like to define connections on a projective module over $T_{\theta}^{d}$. 
To this end let us first define a Lie algebra of shifts $L_{\theta}$ acting on $T_{\theta}^{d}$ by specifying a basis consisting of derivations $\delta_{j}:T_{\theta}^{d}\to T_{\theta}^{d}$, $j=1,\dots , d$ satisfying \begin{equation} \label{deltaj} \delta_{j} (U_{\bf n}) = 2\pi i n_{j}U_{\bf n} \, . \end{equation} These derivations span a $d$-dimensional abelian Lie algebra that we denote by $L_{\theta}$. A connection on a module $E$ over $T_{\theta}^{d}$ is a set of operators $\nabla_{X}:E\to E$, $X\in L_{\theta}$ depending linearly on $X$ and satisfying \begin{equation} [\nabla_{X}, U_{\bf n} ] = \delta_{X}(U_{\bf n}) \end{equation} where $U_{\bf n}$ are operators $E\to E$ representing the corresponding generators of $T_{\theta}^{d}$. In the standard basis (\ref{deltaj}) this relation reads as \begin{equation} [\nabla_{j}, U_{\bf n} ] = 2\pi i n_{j}U_{\bf n} \, . \end{equation} The curvature of connection $\nabla_{X}$ defined as a commutator $F_{XY}=[\nabla_{X},\nabla_{Y}]$ is an exterior two-form on the adjoint vector space $L_{\theta}^{*}$ with values in $End_{T_{\theta}^{d}}E$. \subsection{K-theory. Chern character} The $K$-groups of a noncommutative torus coincide with those for commutative tori: $$ K_{0}(T_{\theta}^{d}) \cong {\mathbb Z}^{2^{d-1}}\cong K_{1}(T_{\theta}^{d}) \, . $$ A Chern character of a projective module $E$ over a noncommutative torus $T_{\theta}^{d}$ can be defined as \begin{equation} \label{ncChern} {\rm ch}(E) = {\rm Tr} \, exp\left(\frac{F}{2\pi i }\right) \in \Lambda^{even}(L^{*}_{\theta}) \end{equation} where $F$ is the curvature form of a connection on $E$, $\Lambda^{even}(L^{*}_{\theta})$ is the even part of the exterior algebra of $L_{\theta}^{*}$ and Tr is the canonical trace on $End_{T_{\theta}^{d}}E$. This mapping gives rise to a noncommutative Chern character \begin{equation} \label{ncch} {\rm ch} : K_{0}(T_{\theta}^{d}) \to \Lambda^{even}(L^{*}_{\theta}) \, . 
\end{equation} The component ${\rm ch}_{0}(E)={\rm Tr}{\bf 1} \equiv {\rm dim}(E)$ is called the dimension of module $E$. A distinctive feature of the noncommutative Chern character (\ref{ncch}) is that its image does not consist of integral elements, i.e. there is no lattice in $L_{\theta}^{*}$ that generates the image of the Chern character. However there is a different integrality statement that replaces the commutative one. Consider a basis in $L_{\theta}^{*}$ in which the derivations corresponding to basis elements satisfy (\ref{deltaj}). Denote the exterior forms corresponding to the basis elements by $\alpha^{1}, \dots , \alpha^{d}$. Then an arbitrary element of $\Lambda(L_{\theta}^{*})$ can be represented as a polynomial in anticommuting variables $\alpha^{i}$. Next let us consider a subset $\Lambda^{even} ({\mathbb Z}^{d})\subset \Lambda^{even}(L^{*}_{\theta})$ that consists of polynomials in $\alpha^{j}$ having integer coefficients. It was proved by Elliott that the Chern character is injective and its range on $K_{0}(T_{\theta}^{d})$ is given by the image of $\Lambda^{even} ({\mathbb Z}^{d})$ under the action of the operator $ exp\left( -\frac{1}{2}\frac{\partial}{\partial \alpha^{j}} \theta^{jk} \frac{\partial}{\partial \alpha^{k}} \right) $. This fact implies that the K-group $K_{0}(T_{\theta}^{d})$ can be identified with the additive group $\Lambda^{even} ({\mathbb Z}^{d})$. A K-theory class $\mu(E)\in \Lambda^{even} ({\mathbb Z}^{d})$ of a module $E$ can be computed from its Chern character by the formula \begin{equation} \label{Elliott} \mu(E) = exp\left( \frac{1}{2}\frac{\partial}{\partial \alpha^{j}} \theta^{jk} \frac{\partial}{\partial \alpha^{k}} \right) {\rm ch}(E) \, . \end{equation} Note that the anticommuting variables $\alpha^{i}$ and the derivatives $\frac{\partial}{\partial \alpha^{j}}$ satisfy the anticommutation relation $\{ \alpha^{i}, \frac{\partial}{\partial \alpha^{j}} \} = \delta^{i}_{j}$. 
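As a simple worked example (our illustration, using the left-derivative convention for the Grassmann variables), let $d=2$ with $\theta^{12}=-\theta^{21}=\theta$, and take a module $E$ with $\mu(E) = p + q\,\alpha^{1}\alpha^{2}$, $p,q\in {\mathbb Z}$. Since $\frac{\partial}{\partial \alpha^{j}} \theta^{jk} \frac{\partial}{\partial \alpha^{k}} = 2\theta \frac{\partial}{\partial \alpha^{1}}\frac{\partial}{\partial \alpha^{2}}$ and this operator squares to zero on $\Lambda (L_{\theta}^{*})$, inverting (\ref{Elliott}) gives
\begin{equation*}
{\rm ch}(E) = exp\left( -\frac{1}{2}\frac{\partial}{\partial \alpha^{j}} \theta^{jk} \frac{\partial}{\partial \alpha^{k}} \right)\mu(E) = p + q\theta + q\,\alpha^{1}\alpha^{2} \, ,
\end{equation*}
so that ${\rm ch}_{0}(E) = {\rm dim}(E) = p + q\theta$, which is indeed not an integer for irrational $\theta$.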
The coefficients of $\mu(E)$ standing at monomials in $\alpha^{i}$ are integers, to which we will refer as the topological numbers of module $E$. These numbers can also be interpreted as numbers of D-branes of a definite kind, although in noncommutative geometry it is difficult to talk about branes as geometrical objects wrapped on torus cycles. One can show that for noncommutative tori $T_{\theta}^{d}$ with irrational matrix $\theta_{ij}$ the set of elements of $K_{0}(T_{\theta}^{d})$ that represent a projective module (i.e. the positive cone) consists exactly of the elements with a positive dimension. Moreover, if $\theta_{ij}$ is irrational, any two projective modules which represent the same element of $K_{0}(T_{\theta}^{d})$ are isomorphic, that is, projective modules in this case are essentially specified by their topological numbers. Complex differential geometry of noncommutative tori and its relation with mirror symmetry is discussed in \cite{PS}. \section{Yang-Mills theory on noncommutative tori} Let $E$ be a projective module over $T_{\theta}^{d}$. We call a Yang-Mills field on $E$ a connection $\nabla_{X}$ compatible with the Hermitian structure, that is a connection satisfying \begin{equation}\label{herm} <\nabla_{X}\xi , \eta>_{T_{\theta}} + <\xi, \nabla_{X}\eta>_{T_{\theta}} = \delta_{X}(<\xi,\eta >_{T_{\theta}}) \end{equation} for any two elements $\xi, \eta \in E$. Given a positive-definite metric on the Lie algebra $L_{\theta}$ we can define a Yang-Mills functional \begin{equation}\label{YM} S_{YM}(\nabla_{i}) = \frac{V}{4g^{2}_{YM}} g^{ik}g^{jl}{\rm Tr}(F_{ij}F_{kl})\, . \end{equation} Here $g^{ij}$ stands for the metric tensor in the canonical basis (\ref{deltaj}), $V=\sqrt{|{\rm det}\, g|}$, $g_{YM}$ is the Yang-Mills coupling constant, ${\rm Tr}$ stands for the canonical trace on ${\rm End}_{T_{\theta}}E$ discussed above and summation over repeated indices is assumed.
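For orientation, on the free module $E=(T_{\theta}^{d})^{n}$ the formalism reduces to familiar gauge theory. The following is a minimal sketch; the matrices $A_{j}$ introduced here are illustrative, and their hermiticity is what compatibility with the obvious Hermitian structure amounts to.

```latex
% On the free module E = (T_{\theta}^{d})^{n} every compatible connection
% differs from the trivial one \nabla_{j} = \delta_{j} by a "vector potential":
\nabla_{j} = \delta_{j} + i A_{j}\, , \qquad
A_{j} = A_{j}^{\ast} \in {\rm Mat}_{n}(T_{\theta}^{d})\, .
% Its curvature is the familiar nonabelian field strength,
F_{jk} = [\nabla_{j},\nabla_{k}]
       = i\bigl(\delta_{j}(A_{k}) - \delta_{k}(A_{j}) + i[A_{j},A_{k}]\bigr)\, .
% Note that even for n=1 the commutator term survives, since the algebra
% T_{\theta}^{d} itself is noncommutative.
```

Substituting this $F_{jk}$ into (\ref{YM}) gives the usual Yang-Mills action with the integral over the torus replaced by the canonical trace.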
Compatibility with the Hermitian structure (\ref{herm}) can be shown to imply the positive definiteness of the functional $S_{YM}$. The extrema of this functional are given by the solutions to the Yang-Mills equations \begin{equation} \label{YMeq} g^{ki}[\nabla_{k},F_{ij}] = 0 \, . \end{equation} A gauge transformation in the noncommutative Yang-Mills theory is specified by a unitary endomorphism $Z\in {\rm End}_{T_{\theta}}E$, i.e. an endomorphism satisfying $ZZ^{\ast} = Z^{\ast}Z= 1$. The corresponding gauge transformation acts on a Yang-Mills field as \begin{equation} \nabla_{j}\mapsto Z\nabla_{j}Z^{\ast} \, . \end{equation} The Yang-Mills functional (\ref{YM}) and the Yang-Mills equations (\ref{YMeq}) are invariant under these transformations. It is easy to see that Yang-Mills fields whose curvature is a scalar operator, i.e. $[\nabla_{i}, \nabla_{j}] = \sigma_{ij}\cdot {\bf 1}$ with $\sigma_{ij}$ a real number valued tensor, solve the Yang-Mills equations (\ref{YMeq}). A characterization of modules admitting a constant curvature connection and a description of the moduli spaces of constant curvature connections (that is the space of such connections modulo gauge transformations) is reviewed in \cite{KonSch}. Another interesting class of solutions to the Yang-Mills equations is given by instantons (see below). As in the ordinary field theory one can construct various extensions of the noncommutative Yang-Mills theory (\ref{YM}) by adding other fields. To obtain a supersymmetric extension of (\ref{YM}) one needs to add a number of endomorphisms $X_{I}\in {\rm End}_{T_{\theta}}E$ that play the role of bosonic scalar fields in the adjoint representation of the gauge group and a number of odd Grassmann parity endomorphisms $\psi^{\alpha}_{i}\in \Pi{\rm End}_{T_{\theta}}E$ endowed with an $SO(d)$-spinor index $\alpha$. The latter are analogs of the usual fermionic fields.
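A concrete constant curvature solution lives on the standard Heisenberg-type module over $T_{\theta}^{2}$. The sketch below assumes the convention $U_{1}U_{2}=e^{2\pi i\theta}U_{2}U_{1}$ for the generators; overall signs depend on this choice.

```latex
% T_{\theta}^{2} acts on the Schwartz space E = {\cal S}({\mathbb R}) by
(U_{1}\xi)(x) = \xi(x+\theta)\, , \qquad
(U_{2}\xi)(x) = e^{2\pi i x}\,\xi(x)\, ,
% so that U_{1}U_{2} = e^{2\pi i\theta} U_{2}U_{1}. The operators
\nabla_{1} = -\frac{2\pi i x}{\theta}\, , \qquad
\nabla_{2} = \frac{d}{dx}
% satisfy [\nabla_{j}, U_{k}] = 2\pi i\,\delta_{jk} U_{k}, as required for a
% connection in the basis (\ref{deltaj}), and the curvature is scalar:
F_{12} = [\nabla_{1},\nabla_{2}] = \frac{2\pi i}{\theta}\cdot {\bf 1}\, .
% Being a scalar operator, this curvature solves the Yang-Mills
% equations (\ref{YMeq}).
```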
In string theory one considers a maximally supersymmetric extension of the Yang-Mills theory (\ref{YM}). In this case the supersymmetric action depends on $10-d$ bosonic scalars $X_{I}$, $I=d, \dots , 9$ and the fermionic fields can be collected into an $SO(9,1)$ Majorana-Weyl spinor multiplet $\psi^{\alpha}$, $\alpha=1, \dots , 16$. The maximally supersymmetric Yang-Mills action takes the form \begin{eqnarray} \label{sYM} S_{SYM} = &&\frac{V}{4g^{2}}{\rm Tr}\Bigl( F_{\mu\nu}F^{\mu\nu} + [\nabla_{\mu},X_{I}][\nabla^{\mu}, X^{I}] + [X_{I}, X_{J}][X^{I}, X^{J}] \nonumber \\ &&-2\psi^{\alpha}\sigma^{\mu}_{\alpha\beta} [\nabla_{\mu}, \psi^{\beta}] - 2\psi^{\alpha}\sigma^{I}_{\alpha\beta} [X_{I}, \psi^{\beta}]\Bigr)\, . \end{eqnarray} Here the indices of the curvature $F_{\mu\nu}$, $\mu,\nu=0, \dots , d-1$, are assumed to be contracted with a Minkowski signature metric, and $\sigma_{\alpha\beta}^{A}$ are blocks of the ten-dimensional $32\times 32$ Gamma-matrices $$ \Gamma_{A} = \left( \begin{array}{cc} 0 & \sigma^{\alpha \beta}_{A} \\ (\sigma_{A})_{\alpha \beta} & 0 \end{array} \right) \, , \qquad A=0,\dots , 9\, . $$ This action is invariant under two kinds of supersymmetry transformations, denoted by $\delta_{\epsilon}$, $\tilde \delta_{\epsilon}$ and defined as \begin{eqnarray} \label{ncSUSY} && \delta_{\epsilon} \psi = \frac{1}{2}(\sigma^{jk} F_{jk} \epsilon + \sigma^{jI}[\nabla_{j}, X_{I}]\epsilon + \sigma^{IJ}[X_{I}, X_{J}] \epsilon ) \, , \nonumber \\ && \delta_{\epsilon} \nabla_{j} = \epsilon \sigma_{j} \psi \, , \quad \delta_{\epsilon} X_{J} = \epsilon \sigma_{J} \psi \, , \nonumber \\ && \tilde \delta_{\epsilon} \psi = \epsilon \, , \quad \tilde \delta_{\epsilon} \nabla_{j} = 0 \, , \quad \tilde \delta_{\epsilon} X_{J} = 0 \, \end{eqnarray} where $\epsilon$ is a constant $16$-component Majorana-Weyl spinor.
Of particular interest for string theory applications are solutions to the equations of motion corresponding to (\ref{sYM}) that are invariant under some of the above supersymmetry transformations. Further discussion can be found in \cite{KonSch}. \section{Morita equivalence} The role of Morita equivalence as a duality transformation in noncommutative Yang-Mills theory was elucidated by A.~Schwarz in \cite{ASMorita}. We will adopt a definition of Morita equivalence for noncommutative tori which can be shown to be essentially equivalent to the standard definition of strong Morita equivalence. We will say that two noncommutative tori $T_{\theta}^{d}$ and $T_{\hat \theta}^{d}$ are Morita equivalent if there exists a $(T_{\theta}^{d}, T_{\hat \theta}^{d})$-bimodule $Q$ and a $(T_{\hat \theta}^{d}, T_{\theta}^{d})$-bimodule $P$ such that \begin{equation} Q\otimes_{T_{\hat \theta}} P \cong T_{\theta} \, , \quad P\otimes_{T_{\theta}} Q \cong T_{\hat \theta} \end{equation} where $T_{\theta}$ on the right hand side is considered as a $(T_{\theta}, T_{\theta})$-bimodule and analogously for $T_{\hat \theta}$. (It is assumed that the isomorphisms are canonical.) Given a $T_{\theta}$-module $E$ one obtains a $T_{\hat \theta}$-module $\hat E$ as \begin{equation} \label{mod_map} \hat E=P\otimes_{T_{\theta}}E \, . \end{equation} One can show that this mapping is functorial. Moreover, the bimodule $Q$ provides us with an inverse mapping $Q\otimes_{T_{\hat \theta}}\hat E\cong E$. We further introduce a notion of gauge Morita equivalence (originally called ``complete Morita equivalence'') that allows one to transport connections along with the mapping of modules (\ref{mod_map}). Let $L$ be a $d$-dimensional commutative Lie algebra.
We say that a $(T_{\hat \theta}^{d},T_{\theta}^{d})$ Morita equivalence bimodule $P$ establishes a gauge Morita equivalence if it is endowed with operators $\nabla^{P}_{X}$, $X\in L$ that determine a constant curvature connection simultaneously with respect to $T_{\theta}^{d}$ and $T_{\hat \theta}^{d}$, i.e. satisfy \begin{eqnarray}\label{biconnect} &&\nabla^{P}_{X}(ea)=(\nabla^{P}_{X}e)a + e(\delta_{X}a) \, , \nonumber \\ &&\nabla^{P}_{X}(\hat ae )=\hat a(\nabla^{P}_{X}e) + (\hat \delta_{X} \hat a)e \, , \nonumber \\ &&[\nabla^{P}_{X},\nabla^{P}_{Y}]=2\pi i \sigma_{XY}\cdot {\bf 1} \, . \end{eqnarray} Here $\delta_{X}$ and $\hat \delta_{X}$ are standard derivations on $T_{\theta}$ and $T_{\hat \theta}$ respectively. In other words we have two Lie algebra homomorphisms \begin{equation} \label{deltas} \delta : L \to L_{\theta} \, , \qquad \hat \delta : L \to L_{\hat \theta} \, . \end{equation} If a pair $(P, \nabla^{P}_{X})$ specifies a gauge $(T_{\theta},T_{\hat \theta})$ equivalence bimodule, then there exists a correspondence between connections on $E$ and connections on $\hat E$. A connection $\hat \nabla_{X}$ on $\hat E$ corresponding to a given connection $\nabla_{X}$ on $E$ is defined as \begin{equation}\label{conmap} \nabla_{X} \mapsto \hat \nabla_{X} = 1\otimes \nabla_{X} + \nabla_{X}^{P}\otimes 1 \, . \end{equation} More precisely, an operator $1\otimes \nabla_{X} + \nabla_{X}^{P}\otimes 1$ on $P\otimes_{\mathbb C}E$ descends to a connection $\hat \nabla_{X}$ on $\hat E = P\otimes_{T_{\theta}}E $. It is straightforward to check that under this mapping gauge equivalent connections go to gauge equivalent ones $$ \widehat{Z^{\dagger}\nabla_{X} Z} = \hat Z^{\dagger} \hat \nabla_{X} \hat Z $$ where $\hat Z = 1\otimes Z$ is the endomorphism of $\hat E= P\otimes_{T_{\theta}}E$ corresponding to $Z\in End_{T_{\theta}^{d}}E$.
The curvatures of $\hat \nabla_{X}$ and $\nabla_{X}$ are connected by the formula \begin{equation} \label{curv_shift} F_{XY}^{\hat \nabla}=\hat F_{XY}^{\nabla} + {\bf 1}\sigma_{XY} \end{equation} that in particular shows that constant curvature connections go to constant curvature ones. Since noncommutative tori are labelled by an antisymmetric $d\times d$ matrix $\theta$, gauge Morita equivalence establishes an equivalence relation on the set of such matrices. To describe this equivalence relation consider an action $\theta \mapsto h\theta = \hat \theta$ of $SO(d,d|{\mathbb Z})$ on the space of antisymmetric $d\times d$ matrices by the formula \begin{equation} \label{action} \hat \theta = (M\theta + N)(R\theta + S)^{-1} \end{equation} where $d\times d$ matrices $M$, $N$, $R$, $S$ are such that the matrix \begin{equation} \label{g1} h= \left( \begin{array}{cc} M&N\\ R&S\\ \end{array} \right) \end{equation} belongs to the group $SO(d,d|{\mathbb Z})$. The above action is defined whenever the matrix $A\equiv R\theta + S$ is invertible. One can prove that two noncommutative tori $T^{d}_{\theta}$ and $T^{d}_{\hat \theta}$ are gauge Morita equivalent if and only if the matrices $\theta$ and $\hat \theta$ belong to the same orbit of the $SO(d,d|{\mathbb Z})$-action (\ref{action}). The duality group $SO(d,d|{\mathbb Z})$ also acts on the topological numbers of modules $\mu \in \Lambda^{even}({\mathbb Z}^{d})$. This action can be shown to be given by a spinor representation constructed as follows. First note that the operators $a^{i}=\alpha^{i}$, $b_{i}=\partial/\partial\alpha^{i}$ act on $\Lambda({\mathbb R}^{d})$ and give a representation of the Clifford algebra specified by the metric with signature $(d,d)$. The group $O(d,d|{\mathbb C})$ thus can be regarded as a group of automorphisms acting on the Clifford algebra generated by $a^{i}$, $b_{j}$. Denote the latter action by $W_{h}$ for $h\in O(d,d|{\mathbb C})$.
One defines a projective action $V_{h}$ of $O(d,d|{\mathbb C})$ on $\Lambda({\mathbb R}^{d})$ according to $$ V_{h}a^{i}V_{h}^{-1}=W_{h^{-1}}(a^{i})\, , \qquad V_{h}b_{j}V_{h}^{-1}=W_{h^{-1}}(b_{j}) \, . $$ This projective action can be restricted to yield a double-valued spinor representation of $SO(d,d|{\mathbb C})$ on $\Lambda({\mathbb R}^{d})$ by choosing a suitable bilinear form on $\Lambda({\mathbb R}^{d})$. The restriction of this representation to the subgroup $SO(d,d|{\mathbb Z})$ acting on $\Lambda^{even}({\mathbb Z}^{d})$ gives the action of Morita equivalence on the topological numbers of modules. The mapping (\ref{conmap}) preserves the Yang-Mills equations of motion (\ref{YMeq}). Moreover, one can define a modification of the Yang-Mills action functional (\ref{YM}) in such a way that the values of the functionals on $\nabla_{X}$ and $\hat \nabla_{X}$ coincide up to an appropriate rescaling of coupling constants. The modified action functional has the form \begin{equation} \label{modifYM} S_{YM} = \frac{V}{4g^{2}}{\rm Tr} (F_{jk} + \Phi_{jk}\cdot {\bf 1}) (F^{jk} + \Phi^{jk}\cdot {\bf 1}) \end{equation} where $\Phi^{jk}$ is a number valued tensor that can be thought of as a background field. Adding this term allows one to compensate for the curvature shift by adopting the transformation rule $$ \Phi_{XY} \mapsto \Phi_{XY} - \sigma_{XY} \, . $$ Note that the new action functional (\ref{modifYM}) has the same equations of motion (\ref{YMeq}) as the original one. To show that the functional (\ref{modifYM}) is invariant under gauge Morita equivalence one has to take into account two more effects. First, the trace changes by a factor $c = {\rm dim}(\hat E) ({\rm dim}(E ))^{-1}$, i.e. $ \hat {\rm Tr} \hat X = c {\rm Tr} X $. Second, the identification of $L_{\theta}$ and $L_{\hat \theta}$ is established by means of some linear transformation $A_{j}^{k}$ whose determinant rescales the volume $V$.
Both effects can be absorbed into an appropriate rescaling of the coupling constant. One can show that the curvature tensor, the metric tensor, the background field $\Phi_{ij}$ and the volume element $V$ transform according to \begin{eqnarray} \label{tr_rules} && F_{ij}^{\hat \nabla} = A^{k}_{i}F_{kl}^{\nabla} A^{l}_{j} + \sigma_{ij} \, , \qquad \hat g_{ij} = A^{k}_{i}g_{kl}A^{l}_{j} \, , \nonumber \\ && \hat \Phi_{ij} = A^{k}_{i}\Phi_{kl} A^{l}_{j} - \sigma_{ij} \, , \qquad \hat V = V|{\rm det}\, A| \, \end{eqnarray} where $A=R\theta + S$ and $\sigma=-RA^{t}$. The action functional (\ref{modifYM}) is invariant under the gauge Morita equivalence if the coupling constant transforms according to \begin{equation} \label{ccMorita} \hat g^{2}_{YM} = g^{2}_{YM}|{\rm det}\, A|^{1/2} \, . \end{equation} Supersymmetric extensions of Yang-Mills theory on noncommutative tori were shown to arise within string theory essentially in two situations. In the first case one considers compactifications of the (BFSS or IKKT) Matrix model of M-theory \cite{CDS}. A discussion regarding the connection between T-duality and Morita equivalence in this case can be found in section 7 of \cite{SeibWitt}. Noncommutative gauge theories on tori can also be obtained by taking the so-called Seiberg-Witten zero slope limit in the presence of a Neveu-Schwarz $B$-field background \cite{SeibWitt}. The emergence of noncommutative geometry in this limit is discussed in the article {\it ``Noncommutative geometry from string theory''} in this volume. Below we give some details on the relation between T-duality and Morita equivalence in this approach. Consider a number of Dp-branes wrapped on $T^{p}$ parameterized by coordinates $x^{i} \sim x^{i} + 2\pi r$ with a closed string metric $G_{ij}$ and a $B$-field $B_{ij}$.
The $SO(p,p|{\mathbb Z})$ T-duality group is represented by the matrices \begin{equation} \label{g2} T= \left( \begin{array}{cc} a&b\\ c&d\\ \end{array} \right) \end{equation} that act on the matrix $$ E=\frac{r^{2}}{\alpha'}(G + 2\pi \alpha'B) $$ by a fractional transformation \begin{equation}\label{Td} T: E \mapsto E'=(aE + b)(cE + d)^{-1} \, . \end{equation} The transformed metric and $B$-field are obtained by taking respectively the symmetric and antisymmetric parts of $E'$. The string coupling constant is transformed as \begin{equation}\label{couplc} T: g_{s}\mapsto g_{s}'=\frac{g_{s}}{({\rm det}(cE + d))^{1/2}}\, . \end{equation} The zero slope limit of Seiberg and Witten is obtained by taking \begin{equation}\label{SWlimit} \alpha' \sim \sqrt{\epsilon} \to 0 \, , \qquad G_{ij} \sim \epsilon \to 0 \, . \end{equation} Sending the closed string metric to zero implies that the $B$-field dominates in the open string boundary conditions. In the limit (\ref{SWlimit}) the compactification is parameterized in terms of the open string moduli \begin{equation}\label{openm} g_{ij} = -(2\pi \alpha')^{2}(BG^{-1}B)_{ij} \, , \quad \theta^{ij} = \frac{1}{2\pi r^{2}}(B^{-1})^{ij} \, , \end{equation} which remain finite. One can demonstrate that $\theta^{ij}$ is a noncommutativity parameter for the torus and the low energy effective theory living on the $Dp$-brane is a noncommutative maximally supersymmetric gauge theory with a coupling constant \begin{equation} G_{s}=g_{s}\left(\frac{{\rm det}\, g}{{\rm det}G} \right)^{1/4} \, . \end{equation} From the transformation law (\ref{Td}) it is not hard to derive the transformation rules for the moduli (\ref{openm}) in the limit (\ref{SWlimit}) \begin{eqnarray}\label{Tdd} &&T:g\mapsto g'=(a + b\theta)g(a + b\theta)^{t} \, , \nonumber \\ &&T:\theta \mapsto \theta' = (c + d\theta)(a + b\theta)^{-1} \, .
\end{eqnarray} Furthermore the effective gauge theory becomes a noncommutative Yang-Mills theory (\ref{sYM}) with a coupling constant $$ (g_{YM})^{-2}=\frac{(\alpha')^{\frac{3-p}{2}}}{(2\pi)^{p-2}G_{s}} $$ which goes to a finite limit under (\ref{SWlimit}) provided one simultaneously scales $g_{s}$ with $\epsilon$ as $$ g_{s}\sim \epsilon^{(3 - p + k)/4} $$ where $k$ is the rank of $B_{ij}$. The limiting coupling constant $g_{YM}$ transforms under the T-duality (\ref{Td}), (\ref{couplc}) as \begin{equation} \label{couplec2} T:g_{YM} \mapsto g'_{YM}=g_{YM}({\rm det}(a + b\theta))^{1/4} \, . \end{equation} We see that the transformation laws (\ref{Td}) and (\ref{couplec2}) have the same form as the corresponding transformations in (\ref{action}), (\ref{tr_rules}), (\ref{ccMorita}) provided one identifies matrix (\ref{g1}) with matrix (\ref{g2}) conjugated by $T=\left( \begin{array}{cc} 0&1\\ 1&0\\ \end{array} \right)$. The need for conjugation reflects the fact that in the BFSS M(atrix) model in the framework of which the Morita equivalence was originally considered, the natural degrees of freedom are D0 branes versus Dp branes considered in the above discussion of T-duality. One can further check that the gauge field transformations following from gauge Morita equivalence match with those induced by the T-duality. It is worth stressing that in the absence of a $B$-field background the effective action based on the gauge field curvature squared is not invariant under T-duality. \section{Instantons on noncommutative $T^{4}_{\theta}$} Consider a Yang-Mills field $\nabla_{X}$ on a projective module $E$ over a noncommutative four-torus $T_{\theta}^{4}$. Assume that the Lie algebra of shifts $L_{\theta}$ is equipped with the standard Euclidean metric such that the metric tensor in the basis (\ref{deltaj}) is given by the identity matrix. 
The Yang-Mills field $\nabla_{i}$ is called an instanton if the self-dual part of the corresponding curvature tensor is proportional to the identity operator \begin{equation} F_{jk}^{+}\equiv \frac{1}{2}(F_{jk} + \frac{1}{2}\epsilon_{jkmn}F^{mn}) = i\omega_{jk}\cdot {\bf 1} \end{equation} where $\omega_{jk}$ is a constant matrix with real entries. An anti-instanton is defined in the same way by replacing the self-dual part with the antiself-dual one. One can define a noncommutative analog of the Nahm transform for instantons \cite{AstNekSchw} that has properties very similar to those of the ordinary (commutative) one. To that end consider a triple $({\cal P}, \nabla_{i}, \hat \nabla_{i})$ consisting of a (finite projective) $(T_{\theta}^{4}, T_{\hat \theta}^{4})$-bimodule ${\cal P}$, a $T_{\theta}^{4}$-connection $\nabla_{i}$ and a $T_{\hat \theta}^{4}$-connection $\hat \nabla_{i}$ that satisfy the following properties. The connection $\nabla_{i}$ commutes with the $T_{\hat \theta}$-action on ${\cal P}$ and the connection $\hat \nabla_{i}$ with that of $T_{\theta}$. The commutators $[\nabla_{i}, \nabla_{j}]$, $[\hat \nabla_{i}, \hat \nabla_{j}]$, $[\nabla_{i}, \hat \nabla_{j}]$ are proportional to the identity operator \begin{equation} [\nabla_{i}, \nabla_{j}]=\omega_{ij}\cdot {\bf 1}\, , \quad [\hat \nabla_{i}, \hat \nabla_{j}]=\hat \omega_{ij}\cdot {\bf 1}\, , \quad [\nabla_{i}, \hat \nabla_{j}]=\sigma_{ij}\cdot {\bf 1}\, . \end{equation} The above conditions mean that ${\cal P}$ is a $T_{\theta\oplus (-\hat \theta)}^{8}$ module and $\nabla_{i}\oplus \hat \nabla_{i}$ is a constant curvature connection on it. In addition we assume that the tensor $\sigma_{ij}$ is non-degenerate.
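For reference, the instanton condition introduced at the beginning of this section can be written out in components (taking the orientation $\epsilon_{1234}=1$):

```latex
% The self-dual part of the curvature has three independent components,
F^{+}_{12} = \tfrac{1}{2}(F_{12} + F_{34})\, , \quad
F^{+}_{13} = \tfrac{1}{2}(F_{13} - F_{24})\, , \quad
F^{+}_{14} = \tfrac{1}{2}(F_{14} + F_{23})\, ,
% so the instanton condition is the system of operator equations
F_{12} + F_{34} = 2i\omega_{12}\cdot{\bf 1}\, , \quad
F_{13} - F_{24} = 2i\omega_{13}\cdot{\bf 1}\, , \quad
F_{14} + F_{23} = 2i\omega_{14}\cdot{\bf 1}\, .
% Flipping the relative signs (i.e. replacing F^{+} by F^{-}) gives the
% anti-instanton equations.
```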
For a connection $\nabla^{E}$ on a right $T_{\theta}^{4}$-module $E$ we define a Dirac operator $D=\Gamma^{i}(\nabla^{E}_{i} + \nabla_{i})$ acting on the tensor product $$ (E\otimes_{T_{\theta}}{\cal P})\otimes S $$ where $S$ is the $SO(4)$ spinor representation space and $\Gamma^{i}$ are four-dimensional Dirac gamma-matrices. The space $S$ is ${\mathbb Z}_{2}$-graded: $S=S^{+}\oplus S^{-}$ and $D$ is an odd operator so that we can consider \begin{eqnarray*} &&D^{+}: (E\otimes_{T_{\theta}}{\cal P})\otimes S^{+} \to (E\otimes_{T_{\theta}}{\cal P})\otimes S^{-} \, , \\ && D^{-}: (E\otimes_{T_{\theta}}{\cal P})\otimes S^{-} \to (E\otimes_{T_{\theta}}{\cal P})\otimes S^{+} \, . \end{eqnarray*} A connection $\nabla^{E}_{i}$ on a $T_{\theta}^{4}$-module $E$ is called ${\cal P}$-irreducible if there exists a bounded inverse to the Laplacian $$ \Delta = \sum\limits_{i}(\nabla^{E}_{i} + \nabla_{i})(\nabla^{E}_{i} + \nabla_{i}) \, . $$ One can show that if $\nabla^{E}$ is a ${\cal P}$-irreducible instanton then ${\rm ker}D^{+}=0$ and $D^{-}D^{+}=\Delta$. Denote by $\hat E$ the closure of the kernel of $D^{-}$. Since $D^{-}$ commutes with the $T_{\hat \theta}^{4}$ action on $(E\otimes_{T_{\theta}}{\cal P})\otimes S^{-}$ the space $\hat E$ is a right $T_{\hat \theta}^{4}$-module. One can prove that this module is finite projective. Let $P:(E\otimes_{T_{\theta}}{\cal P})\otimes S^{-} \to \hat E$ be a Hermitian projector. Denote by $ \nabla^{\hat E}$ the composition $P\circ \hat \nabla$. One can show that $ \nabla^{\hat E}$ is a Yang-Mills field on $\hat E$. The noncommutative Nahm transform of a ${\cal P}$-irreducible instanton connection $\nabla^{E}$ on $E$ is defined to be the pair $(\hat E,\nabla^{\hat E})$. One can further show that $\nabla^{\hat E}$ is an instanton. \vskip .5in \begin{center} {\bf \large Acknowledgments} \end{center} I am grateful to Albert Schwarz for reading and commenting on the manuscript of this article.
\section*{Appendix} \subsection{A Sequence of 10 Mutations Corresponding to Move (2) in the \texorpdfstring{$G_2$}{} Case} \label{A} This case says that $(\alpha,\beta,\alpha,\beta,\alpha,\beta)\sim (\beta,\alpha,\beta, \alpha,\beta,\alpha)$. Without loss of generality let us assume that $C_{\alpha\beta}=-1$ and $C_{\beta\alpha}=-3$. The following pictures are the string diagrams and their corresponding quasi-quivers together with the sequence of seed mutations that transform one into the other. For more details and deeper reasons for why this is true, see Fock and Goncharov's paper on amalgamation (\cite{FGD} Section 3.7). \[ \tikz[baseline=3ex]{ \node (1) at (1,1) [] {$\alpha$}; \node (2) at (3,1) [] {$\alpha$}; \node (3) at (5,1) [] {$\alpha$}; \node (4) at (2,0) [] {$\beta$}; \node (5) at (4,0) [] {$\beta$}; \node (6) at (6,0) [] {$\beta$}; \draw (0,0) -- (4) -- (5) -- (6) -- (7,0); \draw (0,1) -- (1) -- (2) -- (3) --(7,1); }\quad\quad \sim \quad \quad \tikz[baseline=3ex]{ \node (1) at (1,0) [] {$\beta$}; \node (2) at (3,0) [] {$\beta$}; \node (3) at (5,0) [] {$\beta$}; \node (4) at (2,1) [] {$\alpha$}; \node (5) at (4,1) [] {$\alpha$}; \node (6) at (6,1) [] {$\alpha$}; \draw (0,1) -- (4) -- (5) -- (6) -- (7,1); \draw (0,0) -- (1) -- (2) -- (3) --(7,0); } \] \[ \tikz{ \node (1) at (0,0) [] {$\bullet$}; \node (2) at (2,0) [] {$\bullet$}; \node (3) at (4,0) [] {$\bullet$}; \node (4) at (6,0) [] {$\bullet$}; \node (5) at (0,2) [] {$\bullet$}; \node (6) at (2,2) [] {$\bullet$}; \node (7) at (4,2) [] {$\bullet$}; \node (8) at (6,2) [] {$\bullet$}; \draw [->] (2) -- (1); \draw [->] (3) -- (2); \draw [->] (4) -- (3); \draw [->] (6) -- (5); \draw [->] (7) -- (6); \draw [->] (8) -- (7); \draw [x-->] (5) -- (1); \draw [-x>] (1) -- (6); \draw [x->] (6) -- (2); \draw [-x>] (2) -- (7); \draw [x->] (7) -- (3); \draw [-x>] (3) -- (8); \draw [x-->] (8) -- (4); \node at (6) [above] {$a$}; \node at (7) [above] {$b$}; \node at (2) [below] {$c$}; \node at (3) [below] {$d$};
} \quad \quad \quad \quad \quad\quad \quad \tikz{ \node (1) at (0,0) [] {$\bullet$}; \node (2) at (2,0) [] {$\bullet$}; \node (3) at (4,0) [] {$\bullet$}; \node (4) at (6,0) [] {$\bullet$}; \node (5) at (0,2) [] {$\bullet$}; \node (6) at (2,2) [] {$\bullet$}; \node (7) at (4,2) [] {$\bullet$}; \node (8) at (6,2) [] {$\bullet$}; \draw [->] (2) -- (1); \draw [->] (3) -- (2); \draw [->] (4) -- (3); \draw [->] (6) -- (5); \draw [->] (7) -- (6); \draw [->] (8) -- (7); \draw [--x>] (1) -- (5); \draw [x->] (5) -- (2); \draw [-x>] (2) -- (6); \draw [x->] (6) -- (3); \draw [-x>] (3) -- (7); \draw [x->] (7) -- (4); \draw [--x>] (4) -- (8); \node at (6) [above] {$a$}; \node at (7) [above] {$b$}; \node at (2) [below] {$c$}; \node at (3) [below] {$d$}; } \] \[ \tikz{\draw [<->] (0,0) -- node[left]{$\mu_d$} (0,1);} \quad\quad \quad\quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad \quad\quad \quad\quad \quad \tikz{\draw [<->] (0,0) -- node[right]{$\mu_d$} (0,1);} \] \[ \tikz{ \node (1) at (0,0) [] {$\bullet$}; \node (2) at (2,0) [] {$\bullet$}; \node (3) at (4,0) [] {$\bullet$}; \node (4) at (6,0) [] {$\bullet$}; \node (5) at (0,2) [] {$\bullet$}; \node (6) at (2,2) [] {$\bullet$}; \node (7) at (4,2) [] {$\bullet$}; \node (8) at (6,2) [] {$\bullet$}; \draw [->] (2) -- (1); \draw [->] (2) -- (3); \draw [->] (3) -- (4); \draw [->] (6) -- (5); \draw [->] (7) -- (6); \draw [double distance=2pt, -implies] (7) -- (8); \draw [->] (4) edge[bend left] (2); \draw [x-->] (5) -- (1); \draw [-x>] (1) -- (6); \draw [x->] (6) -- (2); \draw [-x>] (3) -- (7); \draw [x->] (8) -- (3); \draw [--x>] (4) -- (8); \node at (6) [above] {$a$}; \node at (7) [above] {$b$}; \node at (2) [below] {$c$}; \node at (3) [below] {$d$}; } \quad \quad \quad \quad \quad\quad \quad \tikz{ \node (1) at (0,0) [] {$\bullet$}; \node (2) at (2,0) [] {$\bullet$}; \node (3) at (4,0) [] {$\bullet$}; \node (4) at (6,0) [] {$\bullet$}; \node (5) at (0,2) [] {$\bullet$}; \node (6) at 
(2,2) [] {$\bullet$}; \node (7) at (4,2) [] {$\bullet$}; \node (8) at (6,2) [] {$\bullet$}; \draw [->] (2) -- (1); \draw [->] (2) -- (3); \draw [->] (3) -- (4); \draw [->] (6) -- (5); \draw [double distance=2pt, -implies] (6) -- (7); \draw [->] (8) -- (7); \draw [->] (4) edge[bend left] (2); \draw [--x>] (1) -- (5); \draw [x->] (5) -- (2); \draw [-x>] (3) -- (6); \draw [x->] (7) -- (3); \draw [--x>] (4) -- (8); \node at (6) [above] {$a$}; \node at (7) [above] {$b$}; \node at (2) [below] {$c$}; \node at (3) [below] {$d$}; } \] \[ \tikz{\draw [<->] (0,0) -- node[left]{$\mu_c$} (0,1);} \quad\quad \quad\quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad \quad\quad \quad\quad \quad \tikz{\draw [<->] (0,0) -- node[right]{$\mu_a$} (0,1);} \] \[ \tikz{ \node (1) at (0,0) [] {$\bullet$}; \node (2) at (2,0) [] {$\bullet$}; \node (3) at (4,0) [] {$\bullet$}; \node (4) at (6,0) [] {$\bullet$}; \node (5) at (0,2) [] {$\bullet$}; \node (6) at (2,2) [] {$\bullet$}; \node (7) at (4,2) [] {$\bullet$}; \node (8) at (6,2) [] {$\bullet$}; \draw [->] (1) -- (2); \draw [->] (3) -- (2); \draw [->] (6) -- (5); \draw [->] (7) -- (6); \draw [double distance=2pt, -implies] (7) -- (8); \draw [->] (2) edge[bend right] (4); \draw [->] (4) edge[bend left] (1); \draw [x-->] (5) -- (1); \draw [x->] (6) -- (3); \draw [-x>] (2) -- (6); \draw [-x>] (3) -- (7); \draw [x->] (8) -- (3); \draw [--x>] (4) -- (8); \node at (6) [above] {$a$}; \node at (7) [above] {$b$}; \node at (2) [below] {$c$}; \node at (3) [below] {$d$}; } \quad \quad \quad \quad \quad\quad \quad \tikz{ \node (1) at (0,0) [] {$\bullet$}; \node (2) at (2,0) [] {$\bullet$}; \node (3) at (4,0) [] {$\bullet$}; \node (4) at (6,0) [] {$\bullet$}; \node (5) at (0,2) [] {$\bullet$}; \node (6) at (2,2) [] {$\bullet$}; \node (7) at (4,2) [] {$\bullet$}; \node (8) at (6,2) [] {$\bullet$}; \draw [->] (2) -- (1); \draw [->] (2) -- (3); \draw [->] (3) -- (4); \draw [->] (5) -- (6); \draw [double distance=2pt, 
-implies] (7) -- (6); \draw [->] (8) -- (7); \draw [->] (4) edge[bend left] (2); \draw [--x>] (1) -- (5); \draw [-x>] (3) -- (5); \draw [x->] (5) -- (2); \draw [x->] (6) -- (3); \draw [-x>] (3) -- (7); \draw [--x>] (4) -- (8); \node at (6) [above] {$a$}; \node at (7) [above] {$b$}; \node at (2) [below] {$c$}; \node at (3) [below] {$d$}; } \] \[ \tikz{\draw [<->] (0,0) -- node[left]{$\mu_b$} (0,1);} \quad\quad \quad\quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad \quad\quad \quad\quad \quad \tikz{\draw [<->] (0,0) -- node[right]{$\mu_c$} (0,1);} \] \[ \tikz{ \node (1) at (0,0) [] {$\bullet$}; \node (2) at (2,0) [] {$\bullet$}; \node (3) at (4,0) [] {$\bullet$}; \node (4) at (6,0) [] {$\bullet$}; \node (5) at (0,2) [] {$\bullet$}; \node (6) at (2,2) [] {$\bullet$}; \node (7) at (4,2) [] {$\bullet$}; \node (8) at (6,2) [] {$\bullet$}; \draw [->] (1) -- (2); \draw [->] (3) -- (2); \draw [->] (6) -- (5); \draw [->] (6) -- (7); \draw [double distance=2pt, -implies] (8) -- (7); \draw [->] (2) edge[bend right] (4); \draw [->] (4) edge[bend left] (1); \draw [x-->] (5) -- (1); \draw [-x>] (2) -- (6); \draw [x->] (7) -- (3); \draw [-x>] (3) -- (8); \draw [--x>] (4) -- (8); \node at (6) [above] {$a$}; \node at (7) [above] {$b$}; \node at (2) [below] {$c$}; \node at (3) [below] {$d$}; } \quad \quad \quad \quad \quad\quad \quad \tikz{ \node (1) at (0,0) [] {$\bullet$}; \node (2) at (2,0) [] {$\bullet$}; \node (3) at (4,0) [] {$\bullet$}; \node (4) at (6,0) [] {$\bullet$}; \node (5) at (0,2) [] {$\bullet$}; \node (6) at (2,2) [] {$\bullet$}; \node (7) at (4,2) [] {$\bullet$}; \node (8) at (6,2) [] {$\bullet$}; \draw [->] (1) -- (2); \draw [->] (3) -- (2); \draw [->] (5) -- (6); \draw [double distance=2pt, -implies] (7) -- (6); \draw [->] (8) -- (7); \draw [->] (2) edge[bend right] (4); \draw [->] (4) edge[bend left] (1); \draw [x-->] (5) -- (1); \draw [-x>] (2) -- (5); \draw [x->] (6) -- (3); \draw [-x>] (3) -- (7); \draw [--x>] (4) -- 
(8); \node at (6) [above] {$a$}; \node at (7) [above] {$b$}; \node at (2) [below] {$c$}; \node at (3) [below] {$d$}; } \] \[ \tikz{\draw [<->] (0,0) -- node[left]{$\mu_a$} (0,1);} \quad\quad \quad\quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad \quad\quad \quad\quad \quad \tikz{\draw [<->] (0,0) -- node[right]{$\mu_d$} (0,1);} \] \[ \tikz{ \node (1) at (0,0) [] {$\bullet$}; \node (2) at (2,0) [] {$\bullet$}; \node (3) at (4,0) [] {$\bullet$}; \node (4) at (6,0) [] {$\bullet$}; \node (5) at (0,2) [] {$\bullet$}; \node (6) at (2,2) [] {$\bullet$}; \node (7) at (4,2) [] {$\bullet$}; \node (8) at (6,2) [] {$\bullet$}; \draw [->] (1) -- (2); \draw [->] (3) -- (2); \draw [->] (5) -- (6); \draw [->] (7) -- (6); \draw [double distance=2pt, -implies] (8) -- (7); \draw [->] (2) edge[bend right] (4); \draw [->] (4) edge[bend left] (1); \draw [x-->] (5) -- (1); \draw [x->] (6) -- (2); \draw [-x>] (2) -- (5); \draw [-x>] (2) -- (7); \draw [x->] (7) -- (3); \draw [-x>] (3) -- (8); \draw [--x>] (4) -- (8); \node at (6) [above] {$a$}; \node at (7) [above] {$b$}; \node at (2) [below] {$c$}; \node at (3) [below] {$d$}; } \quad \quad \quad \quad \quad\quad \quad \tikz{ \node (1) at (0,0) [] {$\bullet$}; \node (2) at (2,0) [] {$\bullet$}; \node (3) at (4,0) [] {$\bullet$}; \node (4) at (6,0) [] {$\bullet$}; \node (5) at (0,2) [] {$\bullet$}; \node (6) at (2,2) [] {$\bullet$}; \node (7) at (4,2) [] {$\bullet$}; \node (8) at (6,2) [] {$\bullet$}; \draw [->] (1) -- (2); \draw [->] (2) -- (3); \draw [->] (5) -- (6); \draw [->] (6) -- (7); \draw [->] (8) -- (7); \draw [->] (2) edge[bend right] (4); \draw [->] (4) edge[bend left] (1); \draw [x-->] (5) -- (1); \draw [-x>] (2) -- (5); \draw [x->] (6) -- (2); \draw [-x>] (3) -- (6); \draw [x->] (7) -- (3); \draw [--x>] (4) -- (8); \node at (6) [above] {$a$}; \node at (7) [above] {$b$}; \node at (2) [below] {$c$}; \node at (3) [below] {$d$}; } \] \[ \tikz{\draw [<->] (1,0) -- node[below 
left]{$\mu_d$} (0,1);} \quad\quad\quad \quad\quad \quad\quad \quad\quad \quad \quad\quad \quad\quad \quad \tikz{\draw [<->] (-1,0) -- node[below right]{$\mu_b$} (0,1);} \] \[ \tikz{ \node (1) at (0,0) [] {$\bullet$}; \node (2) at (2,0) [] {$\bullet$}; \node (3) at (4,0) [] {$\bullet$}; \node (4) at (6,0) [] {$\bullet$}; \node (5) at (0,2) [] {$\bullet$}; \node (6) at (2,2) [] {$\bullet$}; \node (7) at (4,2) [] {$\bullet$}; \node (8) at (6,2) [] {$\bullet$}; \draw [->] (1) -- (2); \draw [->] (2) -- (3); \draw [->] (5) -- (6); \draw [->] (7) -- (6); \draw [->] (7) -- (8); \draw [->] (2) edge[bend right] (4); \draw [->] (4) edge[bend left] (1); \draw [x-->] (5) -- (1); \draw [x->] (6) -- (2); \draw [-x>] (2) -- (5); \draw [-x>] (3) -- (7); \draw [x->] (8) -- (3); \draw [--x>] (4) -- (8); \node at (6) [above] {$a$}; \node at (7) [above] {$b$}; \node at (2) [below] {$c$}; \node at (3) [below] {$d$}; } \] \begin{landscape} \subsection{Cluster Identities on Double Bruhat Cells } The following table contains identities corresponding to move (1) and move (2) in the cases of $C_{\alpha\beta}C_{\beta\alpha}=1$ and $C_{\alpha\beta}C_{\beta\alpha}=2$. 
\label{B} \begin{table}[h] \begin{center} \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|}\hline & $\mathcal{A}$ & $\mathcal{X}$ \\ \hline $\begin{array}{c} (\alpha,-\alpha) \\ \downarrow \\ (-\alpha,\alpha)\end{array}$ & $\Delta_\alpha\left(\overline{s}_\alpha^{-1}x\overline{s}_\alpha\right)=\displaystyle\frac{ \Delta_\alpha\left(\overline{s}_\alpha^{-1} x\right)\Delta_\alpha\left(x\overline{s}_\alpha\right) +\prod_{\beta\neq \alpha}\left( \Delta_\beta(x)\right)^{-C_{\beta\alpha}}}{\Delta_\alpha(x)}$ & $e_\alpha t^{H^\alpha} e_{-\alpha}=\displaystyle(1+t)^{H^\alpha}e_{-\alpha}\left(\frac{1}{t}\right)^{H^\alpha} e_\alpha (1+t)^{H^\alpha}\prod_{\beta\neq \alpha} \left(\frac{1}{1+\frac{1}{t}}\right)^{H^\beta}$ \\ \hline $\begin{array}{c} (\alpha,\beta,\alpha) \\ \downarrow \\ (\beta,\alpha,\beta)\end{array}$ & $\Delta_\beta\left(x\overline{s}_\beta\right)=\displaystyle\frac{\Delta_\alpha(x)\Delta_\beta\left(x\overline{s}_\alpha\overline{s}_\beta\right)+\Delta_\alpha\left(x\overline{s}_\alpha\overline{s}_\beta\overline{s}_\alpha\right)\Delta_\beta(x)}{\Delta_\alpha\left(x\overline{s}_\alpha\right)}$ & $e_\alpha t^{H^\alpha} e_\beta e_\alpha=\displaystyle(1+t)^{H^\alpha}\left(\frac{1}{1+\frac{1}{t}}\right)^{H^\beta}e_\beta \left(\frac{1}{t}\right)^{H^\beta} e_\alpha e_\beta (1+t)^{H^\beta}\left(\frac{1}{1+\frac{1}{t}}\right)^{H^\alpha}$ \\ \hline $\begin{array}{c} (\alpha,\beta,\alpha,\beta) \\ \downarrow \\ (\beta,\alpha,\beta,\alpha) \\ \quad \\ \text{with $C_{\alpha\beta}=-2$} \\ \text{and $C_{\beta\alpha}=-1$} \end{array}$ & {$\!\begin{aligned} \Delta_\alpha\left(x\overline{s}_\alpha\right)=&\displaystyle\frac{1}{\Delta_\alpha\left(x\overline{s}_\beta\overline{s}_\alpha\right)\Delta_\beta\left(x\overline{s}_\beta\right)}\left(\left(\Delta_\alpha(x)\right)^2\Delta_\beta\left(x\overline{s}_\beta \overline{s}_\alpha \overline{s}_\beta\right)+\left(\Delta_\alpha\left(x\overline{s}_\beta\overline{s}_\alpha\right)\right)^2\Delta_\beta(x)\right. \\ & \left. 
+ \Delta_\alpha(x)\Delta_\alpha\left(x\overline{s}_\beta\overline{s}_\alpha\overline{s}_\beta\overline{s}_\alpha\right)\Delta_\beta \left(x\overline{s}_\beta\right)\right)\\ \Delta_\beta\left(x\overline{s}_\alpha\overline{s}_\beta\right)=&\displaystyle\frac{1}{\left(\Delta_\alpha\left(x\overline{s}_\beta\overline{s}_\alpha\right)\right)^2\Delta_\beta\left(x\overline{s}_\beta\right)}\left(\left(\Delta_\alpha(x)\Delta_\beta\left(x\overline{s}_\beta\overline{s}_\alpha\overline{s}_\beta\right)+\Delta_\alpha\left(x\overline{s}_\beta\overline{s}_\alpha\overline{s}_\beta\overline{s}_\alpha\right)\Delta_\beta\left(x\overline{s}_\beta\right)\right)^2\right.\\ &\left.+\left(\Delta_\alpha\left(x\overline{s}_\beta\overline{s}_\alpha\right)\right)^2\Delta_\beta\left(x\overline{s}_\alpha\overline{s}_\alpha \overline{s}_\beta\right)\Delta_\beta(x)\right) \end{aligned}$} & $\begin{array}{l} e_\alpha e_\beta t_\alpha^{H^\alpha} t_\beta^{H^\beta} e_\alpha e_\beta=y_1^{H^\alpha}y_2^{H^\beta}e_\beta e_\alpha y_3^{H^\alpha}y_4^{H^\beta} e_\beta e_\alpha y_5^{H^\alpha}y_6^{H^\beta} \\ \quad \\ \text{where $y_1, y_2, \dots, y_6$ are functions of $t_\alpha$ and $t_\beta$ as described below.} \end{array}$ \\ \hline \end{tabular} } \end{center} \end{table} \begin{align*} y_1=& \frac{1+t_\beta+2t_\alpha t_\beta+t_\alpha^2t_\beta}{1+t_\beta+t_\alpha t_\beta} & y_2=&\frac{t_\alpha^2 t_\beta}{1+t_\beta +2t_\alpha t_\beta+t_\alpha^2t_\beta} \\ y_3=& \frac{t_\alpha}{1+t_\beta+2t_\alpha t_\beta+t_\alpha^2t_\beta} & y_4=& \frac{\left(1+t_\beta+t_\alpha t_\beta\right)^2}{t_\alpha^2 t_\beta} \\ y_5=& 1+t_\beta+t_\alpha t_\beta & y_6=&\frac{t_\beta+t_\beta^2+2t_\alpha t_\beta^2+t_\alpha^2t_\beta^2}{\left(1+t_\beta+t_\alpha t_\beta\right)^2} \end{align*} \end{landscape} \subsection{Configuration of Quadruples of Borel Subgroups} \label{2.2} In this subsection we will look at the double quotients of double Bruhat cells $H\backslash G^{u,v}/H$ from a different angle. 
Recall that a semisimple Lie group $G$ acts transitively on the space of its Borel subgroups $\mathcal{B}$ by conjugation, with the stabilizer of the point $B$ being the Borel subgroup $B$ itself. Therefore we can identify $\mathcal{B}$ with either of the quotients $B_-\backslash G$ or $G/B_+$ (the choice of left vs. right quotients will be clear later). For notational simplicity, we will not distinguish cosets in $B_-\backslash G$, cosets in $G/B_+$, and Borel subgroups of $G$ in this paper, and hence phrases like ``the Borel subgroup $xB_+$'' should make sense tautologically. \begin{defn} Let $B_1=g_1B_+$ and $B_2=g_2B_+$ be two Borel subgroups; we define a map \begin{align*} d_+: \mathcal{B}\times \mathcal{B}& \rightarrow W\\ (B_1,B_2)&\mapsto w \end{align*} if $g_1^{-1}g_2\in B_+wB_+$. Analogously, if $B_1=B_-g_1$ and $B_2=B_-g_2$ and $g_1g_2^{-1}\in B_-wB_-$, then we define \[ d_-(B_1,B_2):=w. \] \end{defn} Our first observation about these two maps is their anti-symmetry in the arguments: $d_\pm(B_1,B_2)=w$ if and only if $d_\pm (B_2,B_1)=w^{-1}$. But these two maps enjoy more symmetries, as we will see in the following propositions. \begin{prop} The maps $d_\pm$ are $G$-equivariant, i.e., $d_\pm(B_1,B_2)=d_\pm (xB_1x^{-1},xB_2x^{-1})$ for any $x\in G$. \end{prop} \begin{proof} Just note that the $x$ in the first argument becomes $x^{-1}$ after inverting and cancels with the $x$ in the second argument. \end{proof} Recall that we have an involutive automorphism $*$ defined on $G$, which can be extended naturally to the space of Borel subgroups $\mathcal{B}$. Note that $B_\pm^*=B_\pm$; further we have the following observation. \begin{prop} $d_\pm(B_1,B_2)=w$ if and only if $d_\pm(B_1^*,B_2^*)=w^*$. \end{prop} \begin{proof} It follows from the fact that $*$ is an automorphism and $B_\pm^*=B_\pm$. \end{proof} \begin{prop} Let $B_1$ and $B_2$ be two Borel subgroups. Then $d_+(B_1,B_2)=w$ if and only if $d_-(B_1,B_2)=w^*$.
\end{prop} \begin{proof} We show only one direction, as the other direction is completely analogous. Suppose $B_1=xB_+x^{-1}$ and $B_2=yB_+y^{-1}$. Then $d_+(B_1,B_2)=w$ means that $x^{-1}y\in B_+wB_+$. On the other hand we know that $B_1=xw_0B_-w_0x^{-1}$ and $B_2=yw_0B_-w_0y^{-1}$; therefore we know that \[ \overline{w}_0x^{-1}y\overline{w}_0^{-1}\in w_0B_+wB_+w_0=B_-w^*B_-, \] which implies $d_-(B_1,B_2)=w^*$. \end{proof} Since $d_+$ already contains the information of $d_-$, we introduce the following more concise notation: we write $\xymatrix{B_1 \ar[r]^u & B_2}$ to mean $d_+(B_1,B_2)=u$; then $d_-(B_1,B_2)=v$ can be denoted as $\xymatrix{B_1 \ar[r]^{v^*} & B_2}$; moreover, since $w_0^*=w_0^{-1}=w_0$, we further simplify $\xymatrix{B_1 \ar[r]^{w_0} & B_2}$ to $\xymatrix{B_1 \ar@{-}[r] & B_2}$ without the arrowhead or the label $w_0$. \begin{prop}\label{opposite flag} Let $B_1$ and $B_2$ be two Borel subgroups. Then the following are equivalent: \begin{enumerate} \item $B_1$ and $B_2$ are opposite Borel subgroups; \item $\xymatrix{B_1 \ar@{-}[r] & B_2}$; \item there exists an element $g\in G$ such that $B_1=gB_+g^{-1}$ and $B_2=gB_-g^{-1}$. \end{enumerate} Further the choice of $g$ in (3) is unique up to a right multiple of an element from $H$. \end{prop} \begin{proof} (1)$\implies$(2). Suppose $B_1=xB_+x^{-1}$ and $B_2=yB_+y^{-1}$. Then by assumption $xB_+x^{-1}$ is opposite to $yB_+y^{-1}$ as Borel subgroups. But then this implies that \[ x^{-1}yB_+y^{-1}x=B_-=w_0B_+w_0, \] and hence $x^{-1}yB_+=w_0B_+$ due to the identification $\mathcal{B}\cong G/B_+$. But then this implies that $x^{-1}y\in B_+w_0B_+$, which implies (2). (2)$\implies$(3). Suppose $B_1=xB_+x^{-1}$ and $B_2=yB_+y^{-1}$. Then by assumption $x^{-1}y\in B_+w_0B_+$. Thus we can find $b$ and $b'$ from $B_+$ such that $x^{-1}y=b\overline{w}_0b'$. Let $g:=xb$; then \[ gB_+g^{-1}=xB_+x^{-1}=B_1 \quad \quad \text{and} \quad \quad gB_-g^{-1}=xbw_0B_+w_0b^{-1}x^{-1}=yB_+y^{-1}=B_2.
\] (3)$\implies$(1) is trivial since $gB_+g^{-1}$ and $gB_-g^{-1}$ are obviously opposite Borel subgroups. For the remark on the uniqueness of $g$, note that if $gB_+=g'B_+$ and $B_-g^{-1}=B_-g'^{-1}$, then $g^{-1}g'$ is in both $B_+$ and $B_-$; since $B_+\cap B_-=H$, it follows that $g$ and $g'$ can only differ by a right multiple of an element from $H$. \end{proof} \begin{prop}\label{2.8} Suppose $l(uv)=l(u)+l(v)$. Then $\xymatrix{B_1\ar[r]^{uv} & B_2}$ if and only if there exists a Borel subgroup $B_3$ such that $\xymatrix{B_1 \ar[r]^u & B_3}$ and $\xymatrix{B_3 \ar[r]^v & B_2}$. Moreover, such a Borel subgroup $B_3$ is unique. \end{prop} \begin{proof} The existence part follows from the general fact about semisimple Lie groups that \[ (B_+uB_+)(B_+vB_+)=B_+uvB_+ \] whenever $l(uv)=l(u)+l(v)$ (see for example \cite{Hum} Section 29.3 Lemma A). The uniqueness part follows from the following lemma, which also holds for all semisimple Lie groups. \end{proof} \begin{lem} Let $B$ be a Borel subgroup of $G$. If $x\in BwB$ where $s_{\alpha(1)}s_{\alpha(2)}\dots s_{\alpha(l)}$ is a reduced word for $w$, then there exists $x_k\in Bs_{\alpha(k)}B$ such that $x=x_1x_2\dots x_l$; further if $x=x'_1x'_2\dots x'_l$ is another such factorization then $x_k^{-1}x'_k\in B$ for all $1\leq k\leq l-1$ and $x'_kx_k^{-1}\in B$ for all $2\leq k\leq l$. \end{lem} \begin{proof} The existence part is essentially the same as the existence part of the above proposition, so it suffices to show the uniqueness part. We will do an induction on $l$. There is nothing to show for the base case $l=1$. Suppose $l>1$. Let $x=yx_l=y'x'_l$ where both $y$ and $y'$ are in $Bs_{\alpha(1)}\dots s_{\alpha(l-1)}B$ and both $x_l$ and $x'_l$ are in $Bs_{\alpha(l)}B$. Then from the fact that $(Bs_{\alpha(l)}B)^2\subset B\cup Bs_{\alpha(l)}B$ we know that $x'_lx_l^{-1}$ is in either $B$ or $Bs_{\alpha(l)}B$.
To rule out the latter possibility, note that if $x'_lx_l^{-1}\in Bs_{\alpha(l)}B$ then $y'x'_lx_l^{-1}=xx_l^{-1}=y$ is in both Bruhat cells $Bs_{\alpha(1)}\dots s_{\alpha(l-1)}B$ and $BwB$, which is a contradiction. Thus $x'_lx_l^{-1}\in B$. The fact that $x'_lx_l^{-1}\in B$ implies that $y^{-1}y'=x_lx^{-1}x{x'_l}^{-1}=x_l{x'_l}^{-1}\in B$. Thus $y$ and $y'$ can only differ by a right multiple of an element from $B$. This difference can be absorbed into the right ambiguity of $x_{l-1}$, and hence without loss of generality one can assume that $y=y'$, and the proof is finished by induction. \end{proof} With all these basic facts and notations in place, we are ready to link $\mathcal{B}$ to double Bruhat cells $G^{u,v}$. \begin{defn} For a pair of Weyl group elements $(u,v)$ we define the \textit{configuration space of Borel subgroups} $\mathrm{Conf}^{u,v}(\mathcal{B})$ to be the quotient space of quadruples of Borel subgroups $(B_1,B_2,B_3,B_4)$ satisfying the following relation \[ \xymatrix{B_1 \ar[r]^u \ar@{-}[d] & B_4 \ar@{-}[d] \\ B_3 \ar[r]_{v^*} & B_2} \] modulo the diagonal action by $G$, i.e., $(gB_1g^{-1},gB_2g^{-1},gB_3g^{-1},gB_4g^{-1})\sim (B_1,B_2,B_3,B_4)$. We also call diagrams like the one above \textit{square diagrams}. \end{defn} As it turns out, the configuration space $\mathrm{Conf}^{u,v}(\mathcal{B})$ is just our old friend $H\backslash G^{u,v}/H$ in disguise. \begin{prop}\label{flag-bruhat} There is a natural isomorphism $i:\mathrm{Conf}^{u,v}(\mathcal{B})\overset{\cong}{\longrightarrow} H\backslash G^{u,v}/H$. \end{prop} \begin{proof} By Proposition \ref{opposite flag}, any element $[B_1,B_2,B_3,B_4]$ in $\mathrm{Conf}^{u,v}(\mathcal{B})$ can be represented by the square diagram \[ \xymatrix{B_+ \ar[r]^u \ar@{-}[d] & xB_+ \ar@{-}[d] \\ B_- \ar[r]_{v^*} & B_-x^{-1}} \] for some $x\in G$, and the choice of $x$ is unique up to a left multiple and a right multiple by elements in $H$.
Note that by definition of $\mathrm{Conf}^{u,v}(\mathcal{B})$, $x\in B_+uB_+\cap B_-vB_-$. Thus the map \[ i:[B_1,B_2,B_3,B_4]\mapsto H\backslash x/H \] is a well-defined map from $\mathrm{Conf}^{u,v}(\mathcal{B})$ to $H\backslash G^{u,v}/H$, and it is not hard to see that it is indeed an isomorphism. \end{proof} Let $w^c:=w_0w^{-1}$ for any Weyl group element $w$; by computation it is not hard to see that $w_0=w^cw=w^*w^c$ and $l(w_0)=l(w^c)+l(w)=l(w^*)+l(w^c)$. But then Proposition \ref{2.8} tells us that we can find two new Borel subgroups $B_5$ and $B_6$ to put into the middle of the two vertical edges in the above diagram, forming the following \textit{hexagon diagram}. \[ \xymatrix{ & B_6\ar[r]^{u^c} \ar@{-}[drr] & B_1 \ar[dr]^u \ar@{-}[dll] & \\ B_3 \ar[ur]^{u^*} \ar[dr]_{v^*} \ar@{-}[drr] & & & B_4 \ar@{-}[dll] \\ & B_2 \ar[r]_{v^c} & B_5 \ar[ur]_v &} \] Note that if we take out the ``square'' with vertices $B_3, B_4, B_5$, and $B_6$, and apply the involution $*$, we get another square diagram of a quadruple representing a point in $\mathrm{Conf}^{u,v}(\mathcal{B})$. \[ \xymatrix{B_3^* \ar[r]^u \ar@{-}[d] & B_6^* \ar@{-}[d]\\ B_5^* \ar[r]_{v^*}& B_4^*} \] This observation gives rise to a map \begin{align*} \eta:\mathrm{Conf}^{u,v}(\mathcal{B})&\rightarrow \mathrm{Conf}^{u,v}(\mathcal{B})\\ [B_1,B_2,B_3,B_4]&\mapsto [B_3^*, B_4^*,B_5^*,B_6^*]. \end{align*} Now we are ready to prove part (a).(i) of our main theorem \ref{main}. \begin{prop}\label{twistflag} Via the identification $\mathrm{Conf}^{u,v}(\mathcal{B})\cong H\backslash G^{u,v}/H$, the map $\eta$ can be expressed as the following map on $H\backslash G^{u,v}/H$ as well: \[ \eta:H\backslash x/H\mapsto H\backslash \left(\left[\overline{u}^{-1}x\right]_-^{-1}\overline{u}^{-1}x\overline{v^{-1}}\left[x\overline{v^{-1}}\right]_+^{-1}\right)^t/H.
\] \end{prop} \begin{proof} Recall that $H\backslash x/H$ corresponds to a configuration that can be represented as \[ \xymatrix{ B_+ \ar@{-}[d] \ar[r]^u & xB_+\ar@{-}[d] \\ B_- \ar[r]_{v^*} & B_-x^{-1}} \] It is not hard to see that \[ B_5:=x\overline{v^{-1}}B_+ \quad \quad \text{and}\quad \quad B_6:=B_-\overline{u}^{-1} \] will fit into the hexagon diagram \[ \xymatrix{ & B_-\overline{u}^{-1} \ar[r]^{u^c} \ar@{-}[drr] & B_+ \ar[dr]^u \ar@{-}[dll] & \\ B_- \ar[ur]^{u^*} \ar[dr]_{v^*} \ar@{-}[drr] & & & xB_+ \ar@{-}[dll] \\ & B_-x^{-1} \ar[r]_{v^c} & x\overline{v^{-1}}B_+ \ar[ur]_v &} \] Thus by definition the map $\eta$ sends the configuration $[B_+,B_-x^{-1},B_-,xB_+]$ to \[ \left[\vcenter{\xymatrix{B_- \ar@{-}[d] \ar[r]^{u^*} & B_-\overline{u}^{-1} \ar@{-}[d] \\ x\overline{v^{-1}}B_+\ar[r]_v & xB_+}}\right]^* \] To compute the corresponding image of $\eta$ in $H\backslash G^{u,v}/H$ we need to rewrite the quadruple of Borel subgroups $\left(B_-,xB_+, x\overline{v^{-1}}B_+,B_-\overline{u}^{-1}\right)$ as $\left(yB_+,B_-z^{-1},B_-y^{-1},zB_+\right)$ for some elements $y$ and $z$ in $G$. Following the procedure in the proof of Proposition \ref{opposite flag} we can easily compute \[ y=\left[x\overline{v^{-1}}\right]_-\overline{w}_0^{-1} \quad \quad \text{and} \quad \quad z=\overline{u}\left[\overline{u}^{-1}x\right]_-\overline{w}_0^{-1}. \] Thus the corresponding image of $\eta$ is \[ \eta\left(H\backslash x/H\right)=H\backslash\left(y^{-1}z\right)^*/H=H\backslash\left(\left[\overline{u}^{-1}x\right]_-^{-1}\overline{u}^{-1}x\overline{v^{-1}}\left[x\overline{v^{-1}}\right]_+^{-1}\right)^t/H.\qedhere \] \end{proof} \subsection{Structures in Semisimple Lie Groups}\label{bruhat} This subsection serves as a brief introduction to the structure theory of semisimple Lie groups, with a focus on structures that will play important roles in our paper. Please see \cite{Hum} or another standard textbook on Lie groups for more details.
Let $G$ be a semisimple Lie group and let $\mathfrak{g}$ be its Lie algebra. Fix a pair of opposite Borel subgroups $B_\pm$ in $G$. Then $H:=B_+\cap B_-$ is a maximal torus in $G$. The Weyl group $W$ of $G$ is defined to be the quotient $N_G(H)/H$. It is known that for any Borel subgroup $B$ of $G$, there is a \textit{Bruhat decomposition} $G=\bigsqcup_{w\in W} BwB$. Taking the two Bruhat decompositions corresponding to the pair of opposite Borel subgroups $B_\pm$ and intersecting them, we obtain our object of interest. \begin{defn} For a pair of Weyl group elements $(u,v)\in W\times W$, the \textit{double Bruhat cell} $G^{u,v}$ is defined to be \[ G^{u,v}:=\left(B_+uB_+\right)\cap\left( B_-vB_-\right). \] \end{defn} It is known that the commutator subgroup of a Borel subgroup is a maximal unipotent subgroup. We will denote the commutator subgroup of each of the pair of opposite Borel subgroups by $N_\pm:=[B_\pm,B_\pm]$. \begin{defn} The subset of \textit{Gaussian decomposable} elements in $G$ is defined to be the Zariski open subset $G_0:=N_-HN_+$. A \textit{Gaussian decomposition} of an element $x\in G_0$ is the factorization \[ x=[x]_-[x]_0[x]_+ \] where $[x]_\pm\in N_\pm$ and $[x]_0\in H$. \end{defn} \begin{prop} The Gaussian decomposition of a Gaussian decomposable element is unique. \end{prop} \begin{proof} It follows from the standard facts that $N_-H=B_-$, $HN_+=B_+$, and $N_\pm\cap B_\mp=\{e\}$. \end{proof} The adjoint action of $H$ on $\mathfrak{g}$ admits a root space decomposition, and the choice of the pair of opposite Borel subgroups $B_\pm$ defines a subset of \textit{simple roots} $\Pi$ in the root system. The root system spans a lattice called the \textit{root lattice} $Q$. There is a dual notion called \textit{simple coroots}, which can be identified with elements inside the Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{g}$. We will denote the simple coroot dual to $\alpha$ as $H_\alpha$.
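In concrete matrix groups the Gaussian decomposition is just the familiar LDU factorization. The following sympy sketch (our own illustration; the helper name is ours, not notation from the text) computes $[x]_-$, $[x]_0$, $[x]_+$ for an element of $\mathrm{SL}_2$ whose upper-left entry does not vanish, and checks the factorization:

```python
import sympy as sp

def gaussian_decomposition_sl2(x):
    """Factor x in SL_2 as [x]_- [x]_0 [x]_+ with [x]_- lower unitriangular,
    [x]_0 diagonal, and [x]_+ upper unitriangular (an LDU factorization).
    Assumes x is Gaussian decomposable, i.e. x[0, 0] != 0."""
    a, b, c = x[0, 0], x[0, 1], x[1, 0]
    lower = sp.Matrix([[1, 0], [c / a, 1]])   # [x]_-
    diag = sp.Matrix([[a, 0], [0, 1 / a]])    # [x]_0 (uses det x = 1)
    upper = sp.Matrix([[1, b / a], [0, 1]])   # [x]_+
    return lower, diag, upper

p, q = sp.symbols('p q', positive=True)
# A generic Gaussian decomposable element of SL_2, built as n_- h n_+:
x = (sp.Matrix([[1, 0], [q, 1]])
     * sp.Matrix([[p, 0], [0, 1 / p]])
     * sp.Matrix([[1, q], [0, 1]]))
L, D, U = gaussian_decomposition_sl2(x)

assert sp.simplify(L * D * U - x) == sp.zeros(2, 2)   # x = [x]_- [x]_0 [x]_+
assert D == sp.diag(p, 1 / p)                          # recovers [x]_0
```

Since the factorization is unique (as shown above), the recovered $[x]_0$ necessarily agrees with the diagonal factor used to build $x$.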
The \textit{Cartan matrix} of $G$ can then be defined as \[ C_{\alpha\beta}:= \inprod{H_\alpha}{\beta}. \] The Cartan matrix $C_{\alpha\beta}$ of a semisimple Lie group $G$ is known to have 2 along the diagonal and non-positive integer entries elsewhere. Moreover, the Cartan matrix $C_{\alpha\beta}$ is invertible and symmetrizable, i.e., there exists a diagonal matrix $D:=\mathrm{diag}(D_\alpha)$ with integer entries such that \begin{equation}\label{symmetrizable} \hat{C}_{\alpha\beta}:=D_\alpha C_{\alpha\beta} \end{equation} is a symmetric matrix. The Lie algebra $\mathfrak{g}$ can be generated by the Chevalley generators $E_{\pm \alpha}$ and $H_\alpha$; the relations among the Chevalley generators are \begin{align*} [H_\alpha,H_\beta]=& 0,\\ [H_\alpha,E_{\pm \beta}]=& \pm C_{\alpha\beta}E_{\pm\beta},\\ [E_{\pm \alpha}, E_{\mp \beta}]=& \pm \delta_{\alpha\beta} H_\alpha,\\ \left(\mathrm{ad}_{E_{\pm\alpha}}\right)^{1-C_{\alpha\beta}}E_{\pm \beta}=&0 \quad \quad \text{for $\alpha\neq \beta$}. \end{align*} The simple coroots $H_\alpha$ are also cocharacters of $H$, and hence they define group homomorphisms $\mathbb{C}^*\rightarrow H$; we will denote such homomorphisms by \[ a\mapsto a^{H_\alpha} \] for any $a\in \mathbb{C}^*$. Using the exponential map $\exp:\mathfrak{g}\rightarrow G$ we also define group homomorphisms $e_{\pm \alpha}:\mathbb{C}\rightarrow G$ by \[ e_{\pm \alpha}(t):=\exp\left(tE_{\pm \alpha}\right). \] In particular, when $t=1$ we will omit the argument and simply write $e_{\pm\alpha}$. The arguments $t$ in $e_{\pm \alpha}(t)$ are known as \textit{Lusztig coordinates}, which can be used to define coordinate systems on double Bruhat cells as well (see \cite{FZ} for details). It is known that a semisimple Lie group $G$ is generated by elements of the form $e_{\pm \alpha}(t)$ and $a^{H_\alpha}$.
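These relations can be checked directly in a concrete matrix realization. The following sympy sketch (our own illustration; the variable names and the particular $B_2$ sign convention are ours) verifies some of the Chevalley relations for $\mathfrak{sl}_3$, whose Cartan matrix is $C=\left(\begin{smallmatrix}2&-1\\-1&2\end{smallmatrix}\right)$, and exhibits a symmetrizer $D$ for a non-symmetric rank-two Cartan matrix:

```python
import sympy as sp

def E(i, j):
    """Matrix unit e_{ij} in gl_3 (1-indexed)."""
    m = sp.zeros(3, 3)
    m[i - 1, j - 1] = 1
    return m

def comm(a, b):
    return a * b - b * a

# Chevalley generators of sl_3 (type A_2):
E1, E2 = E(1, 2), E(2, 3)            # raising generators E_{+alpha}
F1, F2 = E(2, 1), E(3, 2)            # lowering generators E_{-alpha}
H1, H2 = E(1, 1) - E(2, 2), E(2, 2) - E(3, 3)
C = sp.Matrix([[2, -1], [-1, 2]])    # Cartan matrix of A_2

assert comm(H1, E1) == C[0, 0] * E1              # [H_a, E_b] = C_ab E_b
assert comm(H1, E2) == C[0, 1] * E2
assert comm(E1, F1) == H1                        # [E_a, E_{-a}] = H_a
assert comm(E1, F2) == sp.zeros(3, 3)            # [E_a, E_{-b}] = 0, a != b
assert comm(E1, comm(E1, E2)) == sp.zeros(3, 3)  # Serre: (ad E_a)^{1-C_ab} E_b = 0

# Symmetrizability for a non-symmetric Cartan matrix (one B_2 convention):
C_B2 = sp.Matrix([[2, -2], [-1, 2]])
D = sp.diag(1, 2)
assert (D * C_B2).is_symmetric()                 # D_a C_ab is symmetric
```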
We can then define an anti-involution $t$ on $G$ called \textit{transposition}, which acts on the generators by \[ \left(e_{\pm \alpha}(p)\right)^t=e_{\mp \alpha}(p) \quad \quad \text{and} \quad \quad \left(a^{H_\alpha}\right)^t=a^{H_\alpha}. \] It is not hard to verify on generators that transposition commutes with taking inverse, which is another commonly seen anti-involution on a semisimple Lie group $G$. Note that if $G$ is the semisimple Lie group $\mathrm{SL}_n$, the transposition anti-involution we defined above is indeed the transposition of matrices. The set of simple roots also defines a Coxeter generating set $S=\{s_\alpha\}$ for the Weyl group $W$. The braid relations among these Coxeter generators are the following: \begin{equation} \label{braid} \left\{\begin{array}{ll} s_\alpha s_\beta s_\alpha = s_\beta s_\alpha s_\beta & \text{if $C_{\alpha\beta}C_{\beta\alpha}=1$;} \\ s_\alpha s_\beta s_\alpha s_\beta = s_\beta s_\alpha s_\beta s_\alpha & \text{if $C_{\alpha\beta}C_{\beta\alpha}=2$;} \\ s_\alpha s_\beta s_\alpha s_\beta s_\alpha s_\beta = s_\beta s_\alpha s_\beta s_\alpha s_\beta s_\alpha & \text{if $C_{\alpha\beta}C_{\beta\alpha}=3$.} \end{array}\right. \end{equation} \begin{defn} For an element $w$ in the Weyl group $W$, a \textit{reduced word} of $w$ (with respect to the choice of Coxeter generating set $S$) is a sequence of simple roots $\vec{i}=(\alpha(1),\alpha(2),\dots, \alpha(l))$ which is the shortest among all sequences satisfying $s_{\alpha(1)}s_{\alpha(2)}\dots s_{\alpha(l)}=w$. Entries of a reduced word are called \textit{letters}, and the number $l$ is called the \textit{length} of $w$. \end{defn} One important fact about reduced words is that any two reduced words of the same Weyl group element can be obtained from each other via a finite sequence of braid relations. It is also known that there exists a unique longest element inside the Weyl group $W$ of any semisimple Lie group $G$, and we will denote this longest element by $w_0$. 
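As a small concrete check (our own illustration, not part of the text's development): in type $A_2$ the Weyl group is the symmetric group $S_3$ with Coxeter generators the adjacent transpositions $s_1,s_2$ and $C_{\alpha\beta}C_{\beta\alpha}=1$, so the first braid relation in \eqref{braid} and the two reduced words of the longest element can be verified by composing permutations:

```python
from functools import reduce

def transposition(i, n=3):
    """Coxeter generator s_i of S_n, as a permutation tuple of {0,...,n-1}."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(p, q):
    """(p o q)(k) = p(q(k))."""
    return tuple(p[q[k]] for k in range(len(p)))

def word(*gens):
    return reduce(compose, gens)

s1, s2 = transposition(1), transposition(2)

# Braid relation for C_12 C_21 = 1: s1 s2 s1 = s2 s1 s2.
assert word(s1, s2, s1) == word(s2, s1, s2)

# Both sides equal the longest element w_0 of S_3 (the order-reversing
# permutation), so w_0 has the two reduced words (1, 2, 1) and (2, 1, 2).
w0 = word(s1, s2, s1)
assert w0 == (2, 1, 0)
assert compose(w0, w0) == (0, 1, 2)   # w_0 is an involution
```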
It follows easily from the uniqueness that $w_0^{-1}=w_0$. Conjugation by any lift of $w_0$ swaps the pair of opposite Borel subgroups, i.e., $w_0B_\pm w_0=B_\mp$. \begin{defn} For a pair of Weyl group elements $(u,v)\in W\times W$, a \textit{reduced word} of $(u,v)$ is a sequence $\vec{i}=(\alpha(1),\alpha(2),\dots, \alpha(l))$ in which every letter is either a simple root or the opposite of a simple root, satisfying the conditions that if we pick out the subsequence consisting of simple roots we get back a reduced word of $v$, and if we pick out the subsequence consisting of the opposite simple roots and then drop the minus sign in front of every letter we get back a reduced word of $u$. The number $l$ is again called the \textit{length} of the pair $(u,v)$. \end{defn} Generally speaking, the Weyl group $W$ does not live inside the Lie group $G$. However, one can define a lift of each Coxeter generator $s_\alpha$ by \begin{equation}\label{wbar} \overline{s}_\alpha:=e_\alpha^{-1}e_{-\alpha}e_\alpha^{-1}=e_{-\alpha}e_\alpha^{-1}e_{-\alpha}. \end{equation} It is not hard to verify that these lifts of the Coxeter generators satisfy the braid relations \eqref{braid}. Thus by using a reduced word one can define a lift $\overline{w}$ for any Weyl group element $w$, which is independent of the choice of the reduced word (see also \cite{FZ} and \cite{GS}). Conjugation by the longest element $w_0$ defines an involution $w^*:=w_0ww_0$ on the Weyl group $W$. We can lift this involution to the semisimple Lie group $G$ by defining \begin{equation}\label{starinv} x^*:= \overline{w}_0 \left(x^{-1}\right)^t\overline{w}_0^{-1}. \end{equation} Since $w_0B_\pm w_0=B_\mp$ and $B_\pm^t=B_\mp$, we know that the pair of opposite Borel subgroups $B_\pm$ is invariant under the involution $*$, i.e., $B_\pm^*=B_\pm$. We call the involution $*$ on $G$ a lift of the involution $*$ on $W$ because of the following proposition.
\begin{prop} For a Weyl group element $w$, $\overline{w^*}=\overline{w}^*$. \end{prop} \begin{proof} We only need to show it for the Coxeter generators. Let $s_\alpha$ be a Coxeter generator. From the definition of the lift $\overline{s}_\alpha$ we see that $\overline{s}_\alpha^t=\overline{s}_\alpha^{-1}$. By a length argument we know that $s_\alpha^*$ is also a Coxeter generator, say $s_\beta$. Then it follows that $w_0s_\alpha=s_\beta w_0$. Since $l(w_0s_\alpha)=l(s_\beta w_0)=l(w_0)-1$, it follows that \[ \overline{w}_0\overline{s}_\alpha^{-1}=\overline{w_0s_\alpha}=\overline{s_\beta w_0}=\overline{s}_\beta^{-1}\overline{w}_0. \] Therefore, since $\left(\overline{s}_\alpha^{-1}\right)^t=\overline{s}_\alpha$, we know that \[ \overline{s}_\alpha^*=\overline{w}_0\overline{s}_\alpha\overline{w}_0^{-1}=\overline{s}_\beta. \qedhere \] \end{proof} Each simple root $\alpha$ also defines a group homomorphism $\varphi_\alpha:\mathrm{SL}_2\rightarrow G$, which maps the generators as follows: \[ \begin{pmatrix} 1 & t \\ 0 & 1\end{pmatrix} \mapsto e_\alpha(t), \quad \quad \begin{pmatrix} 1 & 0 \\ t & 1 \end{pmatrix} \mapsto e_{-\alpha}(t), \quad \quad \begin{pmatrix} a & 0 \\ 0 & a^{-1}\end{pmatrix} \mapsto a^{H_\alpha}. \] Using this group homomorphism, the lift $\overline{s}_\alpha$ can be alternatively defined as \[ \overline{s}_\alpha:=\varphi_\alpha\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. \] The following identities are due to Fomin and Zelevinsky \cite{FZ}; they can be easily verified within $\mathrm{SL}_2$ and then mapped over to $G$ under $\varphi_\alpha$: \begin{equation}\label{e_+e_-} e_\alpha(p)e_{-\alpha}(q)=e_{-\alpha}\left(\frac{q}{1+pq}\right)(1+pq)^{H_\alpha}e_\alpha\left(\frac{p}{1+pq}\right); \end{equation} \begin{equation}\label{e_+s} e_\alpha(t)\overline{s}_\alpha=e_{-\alpha}\left(t^{-1}\right)t^{H_\alpha} e_\alpha\left(-t^{-1}\right); \end{equation} \begin{equation}\label{se_-} \overline{s}_\alpha^{-1}e_{-\alpha}(t)=e_{-\alpha}\left(-t^{-1}\right)t^{H_\alpha} e_\alpha\left(t^{-1}\right).
\end{equation} In general there is more than one Lie group associated with the same semisimple Lie algebra $\mathfrak{g}$. Among such a family of Lie groups, there is a simply connected one $G_{sc}$ and a centerless one $G_{ad}$ (which is also known as the \textit{adjoint form}, hence the subscript), and they are unique up to isomorphism. Any Lie group $G$ associated to the Lie algebra $\mathfrak{g}$ is a quotient of $G_{sc}$ by some subgroup of the center $C(G_{sc})$, and $G_{ad}\cong G/C(G)$. For the rest of this subsection we will discuss some structures special to each of these two objects. Let's start with the simply connected case $G_{sc}$. One important fact from representation theory is that the finite dimensional irreducible representations of a simply connected semisimple Lie group $G_{sc}$ are classified by dominant \textit{weights}, which form a cone inside the \textit{weight lattice} $P$. The weight lattice $P$ can be identified with the lattice of characters of the maximal torus $H$, and contains the root lattice $Q$ as a sublattice. The dual lattice of $P$ is spanned by the simple coroots $H_\alpha$. Hence by dualizing $\{H_\alpha\}$ we obtain a basis $\{\omega_\alpha\}$ of $P$, and we call the weights $\omega_\alpha$ \textit{fundamental weights}. The fundamental weights are dominant, and their non-negative span is precisely the cone of dominant weights. Since fundamental weights are elements in the lattice of characters of the maximal torus $H$, they define group homomorphisms $H\rightarrow \mathbb{C}^*$ which we will denote as \[ h\mapsto h^{\omega_\alpha}. \] Further, it is known that for each fundamental weight $\omega_\alpha$, there is a regular function $\Delta_\alpha$ on $G_{sc}$ uniquely determined by the equation \[ \Delta_\alpha(x)=[x]_0^{\omega_\alpha} \] when restricted to the subset of Gaussian decomposable elements of $G_{sc}$ (see also \cite{Hum} Section 31.4).
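For example, for $G_{sc}=\mathrm{SL}_3$ the fundamental weights act on $\mathrm{diag}(h_1,h_2,h_3)$ by $\omega_1:h\mapsto h_1$ and $\omega_2:h\mapsto h_1h_2$, so on Gaussian decomposable elements the functions $\Delta_1,\Delta_2$ are the leading principal minors. A sympy sketch (our own illustration; the symbols and setup are ours) confirming this on a generic $x=[x]_-[x]_0[x]_+$:

```python
import sympy as sp

h1, h2, a, b, c, p, q, r = sp.symbols('h1 h2 a b c p q r')

# A generic Gaussian decomposable x = [x]_- [x]_0 [x]_+ in SL_3:
n_minus = sp.Matrix([[1, 0, 0], [a, 1, 0], [b, c, 1]])
h = sp.diag(h1, h2, 1 / (h1 * h2))   # [x]_0, chosen so that det = 1
n_plus = sp.Matrix([[1, p, q], [0, 1, r], [0, 0, 1]])
x = n_minus * h * n_plus

# Delta_1(x) = [x]_0^{omega_1} = h1 is the 1x1 leading principal minor,
# Delta_2(x) = [x]_0^{omega_2} = h1*h2 is the 2x2 leading principal minor:
assert sp.simplify(x[:1, :1].det() - h1) == 0
assert sp.simplify(x[:2, :2].det() - h1 * h2) == 0
assert sp.simplify(x.det()) == 1     # x indeed lies in SL_3
```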
In particular, if we consider $\mathbb{C}[G_{sc}]$ as a representation of $G_{sc}$ under the natural action $(x.f)(y):=f(x^{-1}y)$, then $\Delta_\alpha$ is a highest weight vector of weight $\omega_\alpha$ (see \cite{FZ} Proposition 2.2). Such regular functions $\Delta_\alpha$ are called \textit{generalized minors}, and they are indeed leading principal minors in the case of $\mathrm{SL}_n$. \begin{prop}\label{minor} $\Delta_\alpha(x)=\Delta_\alpha\left(x^t\right)$. If $\beta\neq \alpha$, then $\Delta_\alpha\left(\overline{s}_\beta^{-1}x\right)=\Delta_\alpha(x)=\Delta_\alpha\left(x\overline{s}_\beta\right)$. \end{prop} \begin{proof} The first part follows from the fact that transposition swaps $N_\pm$ while leaving $H$ invariant. To show the second part, recall that $\overline{s}_\beta=e_\beta^{-1}e_{-\beta}e_\beta^{-1}$; therefore \[ \Delta_\alpha\left(\overline{s}_\beta^{-1}x\right)=\Delta_\alpha\left(e_\beta e_{-\beta}^{-1}e_\beta x\right). \] But then since $\Delta_\alpha$ is a highest weight vector of weight $\omega_\alpha$, it is invariant under the action of $e_\beta^{-1}$; moreover, $\Delta_\alpha$ is invariant under left multiplication by elements of $N_-$, since such multiplication does not change the factor $[\,\cdot\,]_0$. Therefore we can conclude that \[ \Delta_\alpha\left(\overline{s}_\beta^{-1}x\right)=\Delta_\alpha\left(e_\beta e_{-\beta}^{-1}e_\beta x\right)=\Delta_\alpha\left(e_{-\beta}^{-1}e_\beta x\right)=\Delta_\alpha\left(e_\beta x\right)=\Delta_\alpha(x). \] The other equality can be obtained from this one by taking transposition. \end{proof} Now let's turn to the adjoint form $G_{ad}$. Since the Cartan matrix $C_{\alpha\beta}$ is invertible, we can use it to construct another basis $\left\{H^\alpha\right\}$ of the Cartan subalgebra $\mathfrak{h}$, which is defined by \[ H_\alpha=\sum_\beta C_{\alpha\beta}H^\beta.
\] Replacing $H_\alpha$ with $H^\alpha$ we can rewrite the relations among the Chevalley generators as \begin{align*} [H^\alpha,H^\beta]=& 0, \\ [H^\alpha, E_{\pm \beta}]=&\pm \delta_{\alpha\beta}E_{\pm \beta},\\ [E_{\pm \alpha}, E_{\mp \beta}]=& \pm \delta_{\alpha\beta}\sum_\gamma C_{\alpha\gamma}H^\gamma,\\ \left(\mathrm{ad}_{E_{\pm\alpha}}\right)^{1-C_{\alpha\beta}}E_{\pm \beta}=&0 \quad \quad \text{for $\alpha\neq \beta$.} \end{align*} It turns out that $H^\alpha$ are cocharacters of the maximal torus $H$ of $G_{ad}$, and hence we can extend our earlier notation $a\mapsto a^{H_\alpha}$ to $a\mapsto a^{H^\alpha}$. In particular the following statement is true. \begin{prop}\label{commute} If $\beta\neq \alpha$, then $e_{\pm \beta} a^{H^\alpha}=a^{H^\alpha}e_{\pm \beta}$. \end{prop} \begin{proof} This follows from the fact that $[H^\alpha,E_{\pm \beta}]=0$ whenever $\beta\neq \alpha$. \end{proof} Note that $H^\alpha$ are generally not cocharacters of the maximal torus $H$ of a non-adjoint-form semisimple Lie group $G$. Thus for a non-adjoint-form semisimple Lie group $G$, $a^{H^\alpha}$ is only well-defined up to a central element; however, since we will be considering the double quotient $H\backslash G^{u,v}/H$, this ambiguity disappears because the center $C(G)$ is contained in the maximal torus $H$. The basis $\left\{H^\alpha\right\}$ of $\mathfrak{h}$ was first introduced by Fock and Goncharov \cite{FGD} to give the cluster Poisson structure on the double Bruhat cell $G_{ad}^{u,v}$, which will play an important role in our paper. \subsection{Double Bruhat Cells as Cluster Varieties}\label{cells} Recall that there is a pair of semisimple Lie groups $G_{sc}$ and $G_{ad}$ associated to each semisimple Lie algebra $\mathfrak{g}$, with $G_{sc}$ being simply connected and $G_{ad}\cong G_{sc}/C(G_{sc})$ being centerless.
In this subsection we will describe a reduced cluster ensemble $\left(\mathcal{A}^{u,v},\underline{\mathcal{X}}^{u,v},p\right)$ (Remark \ref{reduced}) together with two rational maps \[ \psi:G_{sc}^{u,v}\dashrightarrow \mathcal{A}^{u,v} \quad \quad \text{and} \quad \quad\chi:\underline{\mathcal{X}}^{u,v}\dashrightarrow H\backslash G^{u,v}/H \] for any given pair of Weyl group elements $(u,v)$. The first map $\psi$ can be obtained from a cluster algebra result of Berenstein, Fomin, and Zelevinsky \cite{BFZ}, whereas the second map $\chi$ is the amalgamation map introduced by Fock and Goncharov in \cite{FGD}. Unfortunately the seed data used in the two references differ by a sign; we choose to follow Fock and Goncharov's treatment and later will comment on its relation to Berenstein, Fomin, and Zelevinsky's result. The amalgamation that produces the cluster varieties $\mathcal{A}^{u,v}$ and $\underline{\mathcal{X}}^{u,v}$ comes from the spelling of a (equivalently any) reduced word of the pair of Weyl group elements $(u,v)$. To describe it more precisely, we start with the building block pieces, namely seeds that correspond to letters. Recall that the spelling alphabet is $-\Pi\sqcup \Pi$ where $\Pi$ is the set of simple roots, so the letters naturally come in two types: the ones that are simple roots and the ones that are opposite to simple roots. We will describe the seed data associated to each of these two types. Let $\alpha$ be a simple root. We define a new set $\Pi^\alpha:=\left(\Pi\setminus\{\alpha\}\right)\cup \{\alpha_-,\alpha_+\}$. One should think of $\Pi^\alpha$ as almost the same as the set of simple roots $\Pi$ except the simple root $\alpha$ splits into two copies $\alpha_-$ and $\alpha_+$. 
Now we define the seed associated to the simple root $\alpha$ to be $\vec{i}^\alpha:=\left(\Pi^\alpha,\Pi^\alpha, \epsilon^\alpha, d^\alpha\right)$ where \begin{align*} \epsilon^\alpha_{ab}:=&\left\{\begin{array}{ll} \pm 1 & \text{if $a=\alpha_\pm$ and $b=\alpha_\mp$;} \\ \pm C_{\beta\alpha}/2 & \text{if $a=\alpha_\pm$ and $b=\beta$ that is neither of the two split copies of $\alpha$;}\\ \pm C_{\alpha\beta}/2 & \text{if $a=\beta$ that is neither of the two split copies of $\alpha$ and $b=\alpha_\mp$;}\\ 0 & \text{otherwise}, \end{array}\right. \\ d^\alpha_a:=&\left\{\begin{array}{ll} D_\alpha & \text{if $a=\alpha_\pm$;} \\ D_\beta & \text{if $a=\beta$ that is neither of the two split copies of $\alpha$.} \end{array}\right. \end{align*} Here $D$ is the diagonal matrix that symmetrizes the Cartan matrix $C$ \eqref{symmetrizable}. In contrast, we define the seed associated to $-\alpha$ to be $\vec{i}^{-\alpha}:=\left(\Pi^{-\alpha},\Pi^{-\alpha}, \epsilon^{-\alpha}, d^{-\alpha}\right)$ where \[ \Pi^{-\alpha}:=\Pi^\alpha, \quad \quad d^{-\alpha}:=d^\alpha,\quad \quad\text{ and}\quad \quad \epsilon^{-\alpha}:=-\epsilon^\alpha. \] It is straightforward to verify that $\vec{i}^\alpha$ and $\vec{i}^{-\alpha}$ are seeds. Note that all vertices of these two seeds are frozen, so there is no mutation available. Before we start amalgamation, we would like to introduce two families of rational maps that will go along with amalgamation: \[ \psi^{\pm \alpha}: G_{sc}\dashrightarrow \mathcal{A}_{\vec{i}^{\pm\alpha}} \quad \quad \text{and} \quad \quad \chi^{\pm \alpha}: \mathcal{X}_{\vec{i}^{\pm\alpha}}\rightarrow G_{ad}.
\] The rational maps $\psi^{\pm \alpha}$ are defined by the pull-backs \[ \psi^{-\alpha*}\left(A_a\right)=\left\{\begin{array}{ll} \Delta_\alpha & \text{if $a=\alpha_-$;} \\ \Delta_\alpha\left(\overline{s}_\alpha^{-1} \quad \cdot \quad \right) & \text{if $a=\alpha_+$;}\\ \Delta_\beta & \text{if $a=\beta$ that is neither of the two split copies of $\alpha$;} \end{array}\right. \] \[ \psi^{\alpha*}\left(A_a\right)=\left\{\begin{array}{ll} \Delta_\alpha\left(\quad \cdot \quad \overline{s}_\alpha\right) & \text{if $a=\alpha_-$;} \\ \Delta_\alpha & \text{if $a=\alpha_+$;}\\ \Delta_\beta & \text{if $a=\beta$ that is neither of the two split copies of $\alpha$.} \end{array}\right. \] The maps $\chi^{\pm \alpha}$ are defined by \[ \chi^{\pm \alpha}:\left(X_a\right)\mapsto X_{\alpha_-}^{H^\alpha}e_{\pm \alpha} X_{\alpha_+}^{H^\alpha} \prod_{\beta\neq \alpha} X_\beta^{H^\beta}, \] where the product on the right hand side takes place inside the Lie group $G_{ad}$. Now we are ready for amalgamation. Fix a pair of Weyl group elements $(u,v)$ and a reduced word $\vec{i}:=\left(\alpha(1),\dots, \alpha(l)\right)$ of the pair $(u,v)$. The family of seeds that we are amalgamating is \[ \left\{\vec{i}^{\alpha(k)}\right\}_{k=1}^l:=\left\{\left(\Pi^{\alpha(k)}, \Pi^{\alpha(k)}, \epsilon^{\alpha(k)}, d^{\alpha(k)}\right)\right\}_{k=1}^l, \] one for each letter $\alpha(k)$ in the reduced word. To define the amalgamation, we also need a finite set $K$ and a collection of injective maps $i^k:\Pi^{\alpha(k)}\rightarrow K$. Let $n_\alpha$ be the total number of times that letters $\pm \alpha$ appear in the reduced word $\vec{i}$; then we define \[ K:=\left\{\textstyle\binom{\alpha}{i} \ \middle| \ \alpha\in \Pi, 0\leq i\leq n_\alpha\right\}. \] Further we define \[ |\alpha(k)|=\left\{\begin{array}{ll} \alpha(k) & \text{if $\alpha(k)$ is a simple root}, \\ -\alpha(k) & \text{if $\alpha(k)$ is opposite to a simple root}, \end{array}\right.
\] which we then use to define the injective maps $i^k:\Pi^{\alpha(k)}\rightarrow K$ as follows: \[ i^k(a)=\left\{\begin{array}{ll} \binom{|\alpha(k)|}{i-1} & \text{if $a=|\alpha(k)|_-$ and $\alpha(k)$ is the $i$th appearance of letters $\pm |\alpha(k)|$;} \\ \binom{|\alpha(k)|}{i} & \text{if $a=|\alpha(k)|_+$ and $\alpha(k)$ is the $i$th appearance of letters $\pm |\alpha(k)|$;}\\ \binom{\beta}{j} & \begin{tabular}{l} if $a=\beta$ that is neither of the two split copies of $|\alpha(k)|$ and there \\ have been $j$ appearances of $\pm\beta$ before $\alpha(k)$.\end{tabular} \end{array}\right. \] \begin{exmp} Although the amalgamation data above is heavy on notation, the idea is intuitive. One should think of $\binom{\alpha}{i}$ as the space before the first appearance of $\pm \alpha$, the gap between every two appearances of $\pm \alpha$, or the space after the last appearance of $\pm \alpha$. Then the injective map $i^k$ basically replaces the letter $\alpha(k)$ with the seed $\vec{i}^{\alpha(k)}$, and the gluing connects to the left via $\left(\Pi\setminus\{|\alpha(k)|\}\right)\cup\{|\alpha(k)|_-\}$ and connects to the right via $\left(\Pi\setminus\{|\alpha(k)|\}\right)\cup\{|\alpha(k)|_+\}$. To better convey the idea, consider a rank 3 semisimple group $G$ whose simple roots are $\{\alpha,\beta, \gamma\}$. Let $\vec{i}:=(\alpha, -\beta, -\alpha,\gamma, \beta, -\beta)$ be a reduced word of a pair of Weyl group elements. By writing different letters (disregarding the signs) on different horizontal lines, the elements of $K$ become very clear, as illustrated below.
\[ \tikz{ \node (1) at (1,2) [] {$\alpha$}; \node (2) at (2,1) [] {$-\beta$}; \node (3) at (3,2) [] {$-\alpha$}; \node (4) at (4,0) [] {$\gamma$}; \node (5) at (5,1) [] {$\beta$}; \node (6) at (6,1) [] {$-\beta$}; \draw (0,2) -- node[above]{$\binom{\alpha}{0}$} (1) -- node[above]{$\binom{\alpha}{1}$} (3) -- node[above]{$\binom{\alpha}{2}$} (7,2); \draw (0,1) -- node[above]{$\binom{\beta}{0}$} (2) -- node[above]{$\binom{\beta}{1}$} (5) -- node[above]{$\binom{\beta}{2}$} (6) -- node[above]{$\binom{\beta}{3}$} (7,1); \draw (0,0) -- node[above]{$\binom{\gamma}{0}$} (4) -- node[above]{$\binom{\gamma}{1}$} (7,0); \node at (3,-1) [] {$\vec{i}$}; } \] On the other hand, if we also draw the seed associated to each letter the same way, i.e., \[ \tikz[baseline=0ex]{ \node (0) at (1,2) [] {$\pm \alpha$}; \draw (0,2) -- node[above]{$\alpha_-$} (0) -- node[above]{$\alpha_+$} (2,2); \draw (0,1) -- node[above]{$\beta$} (2,1); \draw (0,0) -- node[above]{$\gamma$} (2,0); \node at (1,-1) [] {$\vec{i}^{\pm \alpha}$}; } \quad \quad \quad \tikz[baseline=0ex]{ \node (0) at (1,1) [] {$\pm \beta$}; \draw (0,1) -- node[above]{$\beta_-$} (0) -- node[above]{$\beta_+$} (2,1); \draw (0,2) -- node[above]{$\alpha$} (2,2); \draw (0,0) -- node[above]{$\gamma$} (2,0); \node at (1,-1) [] {$\vec{i}^{\pm \beta}$}; } \quad \quad \quad \tikz[baseline=0ex]{ \node (0) at (1,0) [] {$\pm \gamma$}; \draw (0,0) -- node[above]{$\gamma_-$} (0) -- node[above]{$\gamma_+$} (2,0); \draw (0,1) -- node[above]{$\beta$} (2,1); \draw (0,2) -- node[above]{$\alpha$} (2,2); \node at (1,-1) [] {$\vec{i}^{\pm \gamma}$}; } \] then the injective maps $i^k$ are just placing pieces of building blocks of $\vec{i}$ into the right positions. We will call the diagram where we put letters associated to different simple roots (disregarding the sign) on different horizontal lines a \textit{string diagram}. In a string diagram we call the letters \textit{nodes} and horizontal lines cut out by nodes \textit{strings}. 
We say a string is on \textit{level} $\alpha$ if it is on the horizontal line corresponding to the simple root $\alpha$. We say that a string is \textit{closed} if it is cut out by letters on both ends, and otherwise we say that it is \textit{open}. Obviously a string is open if and only if it is at either end of the string diagram. \end{exmp} \begin{prop} The set $K$ and injective maps $i^k$ satisfy the conditions for amalgamation (Definition \ref{amalgamation}). \end{prop} \begin{proof} (1) and (2) are obvious, and (3) holds since the data of $d^{\alpha(k)}$ for every seed $\vec{i}^{\alpha(k)}$ is identical to the diagonal matrix $D$ which symmetrizes the Cartan matrix $C$. \end{proof} The amalgamated seed obtained this way is not very interesting unless we defrost some of its vertices. As it turns out, the right choice of the set of defrosted vertices is \[ L:=\left\{\textstyle\binom{\alpha}{i}\ \middle| \ 0<i<n_\alpha\right\}. \] Of course, we need to show the following statement, which is required by Definition \ref{amalgamation}. \begin{prop} In the exchange matrix $\epsilon$ of the amalgamated seed, any entry involving elements of $L$ is an integer. \end{prop} \begin{proof} The only possible non-integer entries of $\epsilon$ come from the entries $\pm C_{\alpha\beta}/2$ of the exchange matrices of the seeds for individual letters. Notice that if $0<i<n_\alpha$, then $\binom{\alpha}{i}$ is a gap between two appearances of $\pm \alpha$. Thus any entry of $\epsilon$ involving $\binom{\alpha}{i}$ will have contributions from both ends of this gap, which by construction always have the same absolute value with denominator $2$. Therefore any entry of $\epsilon$ involving $\binom{\alpha}{i}$ has to be an integer.
\end{proof} \begin{rmk} By now it should be clear that vertices of the amalgamated seed are in bijection with strings in the corresponding string diagram, under which the frozen vertices are in bijection with the open strings. \end{rmk} \begin{rmk} The seed data obtained from our amalgamation construction differs from the seed constructed by Berenstein, Fomin, and Zelevinsky in \cite{BFZ} by a minus sign. This is okay since the cluster $\mathcal{A}$-mutation formula is invariant under $\epsilon_{ab}\rightarrow -\epsilon_{ab}$, and hence Berenstein, Fomin, and Zelevinsky's upper cluster algebra structure on $\mathbb{C}[G_{sc}^{u,v}]$ is still applicable to our story. \end{rmk} Now we have the complete set of data for the amalgamated seed $(K, K_0, \epsilon,d)$, which by an abuse of notation we will also denote as $\vec{i}$. Recall that amalgamation can also be lifted to the seed torus level, and hence we have maps \[ \Delta:\mathcal{A}_\vec{i}\rightarrow \prod_{k=1}^l \mathcal{A}_{\vec{i}^{\alpha(k)}} \quad \quad \text{and} \quad \quad m:\prod_{k=1}^l \mathcal{X}_{\vec{i}^{\alpha(k)}}\rightarrow \mathcal{X}_\vec{i}. \] Our next goal is to find maps to complete the following commutative diagrams. \[ \xymatrix{ G_{sc} \ar@{-->}[d]_{\psi_\vec{i}} \ar[r]^(0.4){\Delta} & \prod_{k=1}^l G_{sc} \ar@{-->}[d]^{\prod_k \psi^{\alpha(k)}}\\ \mathcal{A}_\vec{i} \ar[r]_(0.3){\Delta} & \prod_{k=1}^l \mathcal{A}_{\vec{i}^{\alpha(k)}} }\quad \quad \quad \quad \xymatrix{ \prod_{k=1}^l \mathcal{X}_{\vec{i}^{\alpha(k)}} \ar[r]^(0.7){m} \ar[d]_{\prod_k \chi^{\alpha(k)}} & \mathcal{X}_\vec{i} \ar[d]^{\chi_\vec{i}} \\ \prod_{k=1}^l G_{ad} \ar[r]_(0.6){m} & G_{ad}} \] Among all the new maps we need to define, the easiest one is $m:\prod_{k=1}^l G_{ad}\rightarrow G_{ad}$; as the notation suggests, it is just multiplication in $G_{ad}$ following the order $1\leq k\leq l$. Defining $\Delta:G_{sc}\rightarrow \prod_{k=1}^l G_{sc}$, however, requires a bit more work.
Recall the choice of reduced word $\vec{i}=\left(\alpha(1),\dots, \alpha(l)\right)$ of our pair of Weyl group elements $(u,v)$. For every $1\leq k\leq l$ we define two new Weyl group elements $u_{<k}$ and $v_{>k}$ as follows: pick out all the entries of $\vec{i}$ before $\alpha(k)$ (not including $\alpha(k)$) that are opposite to simple roots, and then multiply the corresponding simple reflections to get $u_{<k}$; similarly, pick out all the entries of $\vec{i}$ after $\alpha(k)$ (not including $\alpha(k)$) that are simple roots, and then multiply the corresponding simple reflections to get $v_{>k}$. The map $\Delta$ is then defined to be \[ \Delta: x\mapsto \left(\overline{u_{<k}}^{-1} x \overline{\left(v_{>k}\right)^{-1}}\right)_{k=1}^l. \] Now comes the most difficult part, which is completing the squares with the remaining yet-to-be-defined maps $\psi_\vec{i}$ and $\chi_\vec{i}$. \begin{prop} There exist unique maps $\psi_\vec{i}$ and $\chi_\vec{i}$ that fit into the diagrams above and make them commute. \end{prop} \begin{proof} Let's first consider the commutative diagram on the left. On the one hand, by following the top arrow and then the right arrow of the square we get a rational map $G_{sc}\dashrightarrow \prod_k \mathcal{A}_{\vec{i}^{\alpha(k)}}$. On the other hand, the bottom arrow is essentially a diagonal embedding and in particular is injective. Therefore all we need to show is that the image of $G_{sc}\dashrightarrow \prod_k \mathcal{A}_{\vec{i}^{\alpha(k)}}$ lies inside the image of the bottom arrow. In other words, we need to show that if the vertex $a$ of $\vec{i}^{\alpha(k)}$ is glued to vertex $b$ of $\vec{i}^{\alpha(k+1)}$ during amalgamation, then \[ A_a^{\alpha(k)}\left(\psi^{\alpha(k)}\left(\overline{u_{<k}}^{-1} x \overline{\left(v_{>k}\right)^{-1}}\right)\right)=A_b^{\alpha(k+1)}\left(\psi^{\alpha(k+1)}\left(\overline{u_{<k+1}}^{-1} x \overline{\left(v_{>k+1}\right)^{-1}}\right)\right).
\] There are a few possible cases to analyze, but the arguments are all analogous. Thus without loss of generality let's assume that $\alpha(k)=\alpha$ and $\alpha(k+1)=\beta$ are both simple roots and $\alpha\neq \beta$. If $a=\beta$ and $b=\beta_-$, then we see right away from the definition of $\psi^{\alpha(k+1)}$ that both sides of the above equation are exactly the same. If $a=b=\gamma$ for some $\gamma$ other than $\beta$, we then know that the left hand side of the desired equality above is $\Delta_\gamma\left(\overline{u_{<k}}^{-1} x \overline{\left(v_{>k}\right)^{-1}}\right)$ whereas the right hand side is $\Delta_\gamma\left(\overline{u_{<k+1}}^{-1} x \overline{\left(v_{>k+1}\right)^{-1}}\right)$. Since we have assumed that both $\alpha$ and $\beta$ are simple roots, it follows that $u_{<k}=u_{<k+1}$ and $v_{>k}=s_\beta v_{>k+1}$. Therefore we have \[ \Delta_\gamma\left(\overline{u_{<k+1}}^{-1} x \overline{\left(v_{>k+1}\right)^{-1}}\right)=\Delta_\gamma\left(\overline{u_{<k}}^{-1}x\overline{\left(v_{>k}\right)^{-1}}\overline{s}_\beta\right)=\Delta_\gamma\left(\overline{u_{<k}}^{-1} x \overline{\left(v_{>k}\right)^{-1}}\right), \] where the last equality is due to Proposition \ref{minor}. Now let's turn to the commutative diagram on the right. Observe that the top arrow is surjective. So to define $\chi_\vec{i}:\mathcal{X}_\vec{i}\rightarrow G_{ad}$, we can first lift the input to $\prod_k \mathcal{X}_{\vec{i}^{\alpha(k)}}$ and then follow the left arrow and the bottom arrow to arrive at $G_{ad}$. Then all we need to do is verify that such a map is well-defined, i.e., the final output does not depend on the lift. But this is obvious since \[ X_a^{H^\alpha}X_b^{H^\alpha}=\left(X_aX_b\right)^{H^\alpha} \] and Proposition \ref{commute} tells us that whenever $\beta\neq \alpha$, \[ e_{\pm \beta} X^{H^\alpha}=X^{H^\alpha}e_{\pm \beta}.
\qedhere \] \end{proof} Note that the only frozen vertices of the amalgamated seed are of the form $\binom{\alpha}{0}$ and $\binom{\alpha}{n_\alpha}$. According to the map $\chi_\vec{i}$ we constructed above, the coordinates corresponding to these frozen vertices are equivalent to multiplication by elements of the maximal torus $H$ on the left or on the right. Therefore the map $\chi_\vec{i}$ descends to a map \[ \chi_\vec{i}:\underline{\mathcal{X}}_\vec{i}\rightarrow H\backslash G/H. \] Note that we drop the subscript on $G$ because for any Lie group with the same Lie algebra, the double quotient by its maximal torus on both sides is the same variety. At this point it is natural to ask: what if we start with a different reduced word? Is there any relation between the resulting seed $\mathcal{A}$-tori and the resulting seed $\mathcal{X}$-tori? For the rest of this subsection we will explore the relations among these seed tori, which ultimately will give us the reduced cluster ensemble $\left(\mathcal{A}^{u,v}, \underline{\mathcal{X}}^{u,v},p\right)$ and the maps $\psi:G_{sc}\dashrightarrow\mathcal{A}^{u,v}$ and $\chi:\underline{\mathcal{X}}^{u,v}\rightarrow H\backslash G/H$. We start with the following elementary observation.
\begin{prop}\label{move} If $\vec{i}$ and $\vec{i}'$ are two reduced words for the same pair of Weyl group elements $(u,v)$, then one can transform $\vec{i}$ to $\vec{i}'$ or vice versa via a finite sequence of the following moves: \begin{itemize} \item move (1): exchange two neighboring letters of opposite signs, i.e., \[ (\dots, \alpha, -\beta,\dots)\sim (\dots, - \beta, \alpha,\dots) \] (we assume that both $\alpha$ and $\beta$ are simple roots); \item move (2): replace a consecutive segment of letters with the same sign by their equivalent using the braid relations \eqref{braid}, for example, if $C_{\alpha\beta}C_{\beta\alpha}=1$, then \[ (\dots, \pm \alpha,\pm \beta,\pm \alpha,\dots)\sim (\dots, \pm \beta,\pm \alpha,\pm\beta,\dots). \] \end{itemize} \end{prop} \begin{proof} Recall the general fact that any two reduced words of the same Weyl group element $w$ can be obtained from one another via a finite sequence of moves described by the braid relations. Therefore we can first use move (1) to separate the letters from $\Pi$ and the letters from $-\Pi$ and get reduced words of $u$ and $v$ respectively, and then use move (2) to rearrange the letters in each of them, and lastly use move (1) to mix them up again. \end{proof} Since these two moves are responsible for the transformation between reduced words, we may also want to investigate their induced transformations on seeds. \begin{prop} Move (1) does not induce any change in the seed data unless $\alpha=\beta$, in which case it induces a mutation at the vertex corresponding to the gap between these two letters. Each case of move (2) is a composition of seed mutations; in particular, it is a single mutation if $C_{\alpha\beta}C_{\beta\alpha}=1$, a composition of 3 mutations if $C_{\alpha\beta}C_{\beta\alpha}=2$, and a composition of 10 mutations if $C_{\alpha\beta}C_{\beta\alpha}=3$.
\end{prop} \begin{proof} Since both the amalgamation and the two moves as described above are local operations, without loss of generality we may assume that there are no other letters present in the reduced words other than the ones involved in the moves. Let's consider move (1) first. If $\alpha\neq \beta$, then the string diagram representing move (1) will look like the following. \[ \tikz[baseline=4ex]{ \node (a) at (1,1) [] {$ \alpha$}; \node (b) at (2,0) [] {$-\beta$}; \draw (0,1) -- node[above]{$\binom{\alpha}{0}$} (a) -- node[above]{$\binom{\alpha}{1}$} (3,1); \draw (0,0) -- node[above]{$\binom{\beta}{0}$} (b) -- node[above]{$\binom{\beta}{1}$} (3,0); }\quad \quad \sim \quad \quad \tikz[baseline=4ex]{ \node (a) at (2,1) [] {$ \alpha$}; \node (b) at (1,0) [] {$- \beta$}; \draw (0,1) -- node[above]{$\binom{\alpha}{0}$} (a) -- node[above]{$\binom{\alpha}{1}$} (3,1); \draw (0,0) -- node[above]{$\binom{\beta}{0}$} (b) -- node[above]{$\binom{\beta}{1}$} (3,0); } \] Thus the only pieces of seed data that may be affected by such a move are the entries $\epsilon_{\binom{\alpha}{0}\binom{\beta}{1}}$, $\epsilon_{\binom{\alpha}{1}\binom{\beta}{0}}$, $\epsilon_{\binom{\beta}{0}\binom{\alpha}{1}}$, and $\epsilon_{\binom{\beta}{1}\binom{\alpha}{0}}$. But the amalgamation construction tells us that they all vanish before and after the move. Thus move (1) does not induce any change on the seed data if $\alpha\neq \beta$. If we are applying move (1) in the special case $(\alpha,-\alpha)\sim (-\alpha,\alpha)$, then the string diagram will look like the following ($\beta$ can be any simple root other than $\alpha$). 
\[ \tikz[baseline=4ex]{ \node (a) at (1,1) [] {$\alpha$}; \node (b) at (3,1) [] {$-\alpha$}; \draw (0,1) -- node[above]{$\binom{\alpha}{0}$} (a) -- node[above]{$\binom{\alpha}{1}$} (b) -- node[above]{$\binom{\alpha}{2}$} (4,1); \draw (0,0) -- node[above]{$\binom{\beta}{0}$} (4,0); }\quad \quad \sim \quad \quad \tikz[baseline=4ex]{ \node (a) at (1,1) [] {$-\alpha$}; \node (b) at (3,1) [] {$\alpha$}; \draw (0,1) -- node[above]{$\binom{\alpha}{0}$} (a) -- node[above]{$\binom{\alpha}{1}$} (b) -- node[above]{$\binom{\alpha}{2}$} (4,1); \draw (0,0) -- node[above]{$\binom{\beta}{0}$} (4,0); } \] If we use the unprime notation to denote the exchange matrix of the left and the prime notation to denote that of the right, then we see that the entries of the exchange matrix that got changed under such a move are \[ \epsilon_{\binom{\alpha}{1}\binom{\beta}{0}}=C_{\beta\alpha}, \quad \quad \epsilon_{\binom{\beta}{0}\binom{\alpha}{1}}=-C_{\alpha\beta}, \] \[ \epsilon_{\binom{\alpha}{1}\binom{\alpha}{0}}=-\epsilon_{\binom{\alpha}{0}\binom{\alpha}{1}}=\epsilon_{\binom{\alpha}{1}\binom{\alpha}{2}}=-\epsilon_{\binom{\alpha}{2}\binom{\alpha}{1}}=1, \] \[ \epsilon_{\binom{\beta}{0}\binom{\alpha}{0}}=\epsilon_{\binom{\beta}{0}\binom{\alpha}{2}}=\frac{C_{\alpha\beta}}{2}, \quad \quad \epsilon_{\binom{\alpha}{0}\binom{\beta}{0}}=\epsilon_{\binom{\alpha}{2}\binom{\beta}{0}}=-\frac{C_{\beta\alpha}}{2}, \] which change to \[ \epsilon'_{\binom{\alpha}{1}\binom{\beta}{0}}=-C_{\beta\alpha}, \quad \quad \epsilon'_{\binom{\beta}{0}\binom{\alpha}{1}}=C_{\alpha\beta}, \] \[ \epsilon'_{\binom{\alpha}{1}\binom{\alpha}{0}}=-\epsilon'_{\binom{\alpha}{0}\binom{\alpha}{1}}=\epsilon'_{\binom{\alpha}{1}\binom{\alpha}{2}}=-\epsilon'_{\binom{\alpha}{2}\binom{\alpha}{1}}=-1, \] \[ \epsilon'_{\binom{\beta}{0}\binom{\alpha}{0}}=\epsilon'_{\binom{\beta}{0}\binom{\alpha}{2}}=-\frac{C_{\alpha\beta}}{2}, \quad \quad 
\epsilon'_{\binom{\alpha}{0}\binom{\beta}{0}}=\epsilon'_{\binom{\alpha}{2}\binom{\beta}{0}}=\frac{C_{\beta\alpha}}{2}. \] One can easily verify that this change is exactly a mutation at the vertex $\binom{\alpha}{1}$. Now let's consider move (2). Due to symmetry, we will only prove the case where all the letters are simple roots; the case where the letters are opposite to simple roots is completely analogous. Let's start with the simplest case where $C_{\alpha\beta}C_{\beta\alpha}=1$. Then move (2) says that $(\alpha, \beta, \alpha)\sim (\beta, \alpha,\beta)$. The following is the corresponding string diagram. \[ \tikz[baseline=4ex]{ \node (a) at (1,1) [] {$\alpha$}; \node (b) at (3,1) [] {$\alpha$}; \node (c) at (2,0) [] {$\beta$}; \draw (0,1) -- node[above]{$1$} (a) -- node[above]{$0$} (b) -- node[above]{$2$} (4,1); \draw (0,0) -- node[above]{$3$} (c) -- node[above]{$4$} (4,0); }\quad \quad \sim \quad \quad \tikz[baseline=4ex]{ \node (a) at (1,0) [] {$\beta$}; \node (b) at (3,0) [] {$\beta$}; \node (c) at (2,1) [] {$\alpha$}; \draw (0,0) -- node[above]{$3$} (a) -- node[above]{$0$} (b) -- node[above]{$4$} (4,0); \draw (0,1) -- node[above]{$1$} (c) -- node[above]{$2$} (4,1); } \] To avoid possible confusion we rename the vertices (gaps) of the seeds with numbers. Note that the vertex 0 in either picture only has non-vanishing exchange matrix entries with the other four vertices present, and with nothing else. We claim that move (2) in this case induces a single seed mutation at the vertex 0. In fact, since $C_{\alpha\beta}C_{\beta\alpha}=1$ we can present each seed by a quiver, and it is obvious from the quiver presentation that such a move is indeed a seed (quiver) mutation. (We use the convention that dashed arrows represent exchange matrix entries of half weight in either direction.)
\[ \tikz[baseline=3ex]{ \node (1) at (0,1) [] {$1$}; \node (0) at (2,1) [] {$0$}; \node (2) at (4,1) [] {$2$}; \node (3) at (1,0) [] {$3$}; \node (4) at (3,0) [] {$4$}; \draw [<-] (1) -- (0); \draw [<-] (0) -- (2); \draw [<-] (3) -- (4); \draw [<-] (0) -- (3); \draw [<-] (4) -- (0); \draw [dashed, ->] (1) -- (3); \draw [dashed, ->] (4) -- (2); }\quad \quad \sim \quad \quad \tikz[baseline=3ex]{ \node (1) at (1,1) [] {$1$}; \node (0) at (2,0) [] {$0$}; \node (2) at (3,1) [] {$2$}; \node (3) at (0,0) [] {$3$}; \node (4) at (4,0) [] {$4$}; \draw [<-] (1) -- (2); \draw [<-] (0) -- (1); \draw [<-] (2) -- (0); \draw [<-] (3) -- (0); \draw [<-] (0) -- (4); \draw [dashed, <-] (1) -- (3); \draw [dashed, <-] (4) -- (2); } \] Next let's consider the case $C_{\alpha\beta}C_{\beta\alpha}=2$, for which move (2) says $(\alpha,\beta,\alpha,\beta)\sim (\beta,\alpha,\beta,\alpha)$. Without loss of generality let's assume that $C_{\alpha\beta}=-2$ and $C_{\beta\alpha}=-1$. The corresponding string diagram is the following. \[ \tikz[baseline=3ex]{ \node (a) at (1,1) [] {$\alpha$}; \node (b) at (3,1) [] {$\alpha$}; \node (c) at (2,0) [] {$\beta$}; \node (d) at (4,0) [] {$\beta$}; \draw (0,1) -- (a) -- (b) -- (5,1); \draw (0,0) -- (c) -- (d) -- (5,0); }\quad \quad \sim \quad \quad \tikz[baseline=3ex]{ \node (a) at (2,1) [] {$\alpha$}; \node (b) at (4,1) [] {$\alpha$}; \node (c) at (1,0) [] {$\beta$}; \node (d) at (3,0) [] {$\beta$}; \draw (0,1) -- (a) -- (b) -- (5,1); \draw (0,0) -- (c) -- (d) -- (5,0); } \] Unfortunately in this case the exchange matrix is not antisymmetric, so we cannot use an ordinary quiver to present the seed data. 
One way around this is to introduce the following new notation to replace arrows in a quiver \[ \tikz[baseline=-0.5ex]{ \node (a) at (0,0) [] {$\bullet$}; \node (b) at (2,0) [] {$\bullet$}; \node at (0,0) [left] {$a$}; \node at (2,0) [right] {$b$}; \draw [z->] (a) -- (b); } \quad \quad \text{means} \quad \epsilon_{ab}=1 \quad \text{and} \quad \epsilon_{ba}=-2, \] \[ \tikz[baseline=-0.5ex]{ \node (a) at (0,0) [] {$\bullet$}; \node (b) at (2,0) [] {$\bullet$}; \node at (0,0) [left] {$a$}; \node at (2,0) [right] {$b$}; \draw [-z>] (a) -- (b); } \quad \quad \text{means} \quad \epsilon_{ab}=2 \quad \text{and} \quad \epsilon_{ba}=-1; \] we call the resulting picture a \textit{quasi-quiver}, which is a useful tool that makes the mutation computations easier to follow. Translating the picture on the left and the picture on the right into the language of quasi-quivers (again we will use dashed lines to represent half weight), we get \[ \tikz[baseline=6ex]{ \node (a) at (0,2) [] {$\bullet$}; \node (b) at (2,2) [] {$\bullet$}; \node (c) at (6,2) [] {$\bullet$}; \node (d) at (0,0) [] {$\bullet$}; \node (e) at (4,0) [] {$\bullet$}; \node (f) at (6,0) [] {$\bullet$}; \draw [->] (b) -- (a); \draw [->] (c) -- (b); \draw [->] (f) -- (e); \draw [->] (e) -- (d); \draw [-z>] (d) -- (b); \draw [z->] (b) -- (e); \draw [-z>] (e) -- (c); \draw [z-->] (a) -- (d); \draw [z-->] (c) -- (f); \node at (b) [above] {$a$}; \node at (e) [below] {$b$}; } \quad \quad \text{and} \quad \quad \tikz[baseline=6ex]{ \node (a) at (0,2) [] {$\bullet$}; \node (b) at (4,2) [] {$\bullet$}; \node (c) at (6,2) [] {$\bullet$}; \node (d) at (0,0) [] {$\bullet$}; \node (e) at (2,0) [] {$\bullet$}; \node (f) at (6,0) [] {$\bullet$}; \draw [->] (b) -- (a); \draw [->] (c) -- (b); \draw [->] (f) -- (e); \draw [->] (e) -- (d); \draw [z->] (a) -- (e); \draw [-z>] (e) -- (b); \draw [z->] (b) -- (f); \draw [--z>] (d) -- (a); \draw [--z>] (f) -- (c); \node at (b) [above] {$a$}; \node at (e) [below] {$b$}; } \] We claim that
either seed (quasi-quiver) can be obtained from the other via a sequence of 3 mutations at the vertices labeled $a$ and $b$. The following is the quasi-quiver illustration of such mutation sequence. \[ \tikz[baseline=6ex]{ \node (a) at (0,2) [] {$\bullet$}; \node (b) at (2,2) [] {$\bullet$}; \node (c) at (6,2) [] {$\bullet$}; \node (d) at (0,0) [] {$\bullet$}; \node (e) at (4,0) [] {$\bullet$}; \node (f) at (6,0) [] {$\bullet$}; \draw [->] (b) -- (a); \draw [->] (c) -- (b); \draw [->] (f) -- (e); \draw [->] (e) -- (d); \draw [-z>] (d) -- (b); \draw [z->] (b) -- (e); \draw [-z>] (e) -- (c); \draw [z-->] (a) -- (d); \draw [z-->] (c) -- (f); \node at (b) [above] {$a$}; \node at (e) [below] {$b$}; } \quad \quad \quad \quad \quad \quad \quad \tikz[baseline=6ex]{ \node (a) at (0,2) [] {$\bullet$}; \node (b) at (4,2) [] {$\bullet$}; \node (c) at (6,2) [] {$\bullet$}; \node (d) at (0,0) [] {$\bullet$}; \node (e) at (2,0) [] {$\bullet$}; \node (f) at (6,0) [] {$\bullet$}; \draw [->] (b) -- (a); \draw [->] (c) -- (b); \draw [->] (f) -- (e); \draw [->] (e) -- (d); \draw [z->] (a) -- (e); \draw [-z>] (e) -- (b); \draw [z->] (b) -- (f); \draw [--z>] (d) -- (a); \draw [--z>] (f) -- (c); \node at (b) [above] {$a$}; \node at (e) [below] {$b$}; } \] \[ \tikz{\draw [<->] (0,0) -- node[left]{$\mu_b$} (0,1);} \quad\quad \quad\quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad \quad\quad \quad\quad \quad \tikz{\draw [<->] (0,0) -- node[right]{$\mu_b$} (0,1);} \] \[ \tikz[baseline=6ex]{ \node (a) at (0,2) [] {$\bullet$}; \node (b) at (2,2) [] {$\bullet$}; \node (c) at (6,2) [] {$\bullet$}; \node (d) at (0,0) [] {$\bullet$}; \node (e) at (4,1) [] {$\bullet$}; \node (f) at (6,0) [] {$\bullet$}; \draw [->] (b) -- (a); \draw [->] (b) -- (c); \draw [->] (e) -- (f); \draw [->] (d) -- (e); \draw [->] (f) -- (d); \draw [-z>] (e) -- (b); \draw [z->] (c) -- (e); \draw [z-->] (a) -- (d); \draw [--z>] (f) -- (c); \node at (b) [above] {$a$}; \node at (e) 
[below] {$b$}; } \quad\quad \tikz{\draw [<->] (0,0) -- node[above]{$\mu_a$} (1,0);}\quad \quad \tikz[baseline=6ex]{ \node (a) at (0,2) [] {$\bullet$}; \node (b) at (4,2) [] {$\bullet$}; \node (c) at (6,2) [] {$\bullet$}; \node (d) at (0,0) [] {$\bullet$}; \node (e) at (2,1) [] {$\bullet$}; \node (f) at (6,0) [] {$\bullet$}; \draw [->] (a) -- (b); \draw [->] (c) -- (b); \draw [->] (e) -- (f); \draw [->] (d) -- (e); \draw [->] (f) -- (d); \draw [z->] (b) -- (e); \draw [-z>] (e) -- (a); \draw [z-->] (a) -- (d); \draw [--z>] (f) -- (c); \node at (b) [above] {$a$}; \node at (e) [below] {$b$}; } \] The case $C_{\alpha\beta}C_{\beta\alpha}=3$ can be done similarly. To save space, we will put the quasi-quiver demonstration of one possible sequence of 10 mutations corresponding to move (2) in Appendix \ref{A}. \end{proof} The last two propositions together imply the following statement, which allows us to glue the seed tori $\mathcal{A}_\vec{i}$ and $\underline{\mathcal{X}}_\vec{i}$ into cluster varieties $\mathcal{A}^{u,v}$ and $\underline{\mathcal{X}}^{u,v}$. \begin{cor} Any two seeds associated to reduced words of the same pair of Weyl group elements $(u,v)$ are mutation equivalent. \end{cor} After obtaining the cluster varieties $\mathcal{A}^{u,v}$ and $\underline{\mathcal{X}}^{u,v}$, the next natural question to ask is whether the following two diagrams commute, where the map $\mu$ is the cluster transformation induced by a sequence of moves (1) and (2) that transforms the reduced word $\vec{i}$ into the reduced word $\vec{i}'$ of the pair of Weyl group elements $(u,v)$.
\[ \xymatrix{& \mathcal{A}_\vec{i} \ar@{-->}[dd]^\mu \\ G_{sc} \ar@{-->}[ur]^{\psi_\vec{i}} \ar@{-->}[dr]_{\psi_{\vec{i}'}} & \\ & \mathcal{A}_{\vec{i}'}}\quad \quad \quad \quad \quad \quad \xymatrix{\underline{\mathcal{X}}_\vec{i} \ar[dr]^{\chi_\vec{i}} \ar@{-->}[dd]_\mu & \\ & H\backslash G/H \\ \underline{\mathcal{X}}_{\vec{i}'} \ar[ur]_{\chi_{\vec{i}'}} & } \] \begin{prop} The two diagrams above commute, and therefore they descend to maps $\psi:G_{sc}\dashrightarrow\mathcal{A}^{u,v}$ and $\chi:\underline{\mathcal{X}}^{u,v}\rightarrow H\backslash G/H$. In particular $\psi$ restricts to a birational equivalence $\psi:G_{sc}^{u,v}\dashrightarrow \mathcal{A}^{u,v}$, and $\chi$ has image in $H\backslash G^{u,v}/H$. \end{prop} \begin{proof} The commutativity of these two diagrams follows from a collection of identities corresponding to move (1) and move (2), which have been laid out by Fomin and Zelevinsky on the $\mathcal{A}$ side \cite{FZ} and by Fock and Goncharov on the $\mathcal{X}$ side \cite{FGD}. We will simply quote some of these identities from their papers (in particular we omit the formulas for the $C_{\alpha\beta}C_{\beta\alpha}=3$ case) in Appendix \ref{B}; readers who are interested in these formulas should consult the respective papers. The fact that $\psi$ restricts to a birational equivalence $\psi:G_{sc}^{u,v}\dashrightarrow \mathcal{A}^{u,v}$ was proved by Berenstein, Fomin, and Zelevinsky in \cite{BFZ}: in their paper they showed that the coordinate ring $\mathbb{C}[G_{sc}^{u,v}]$ is isomorphic to the upper cluster algebra generated by the seed data associated to any reduced word of the pair $(u,v)$, which is exactly the coordinate ring of our cluster variety $\mathcal{A}^{u,v}$. The fact that the image of $\chi$ lies inside $H\backslash G^{u,v}/H$ follows from the fact that $e_{\pm \alpha}\in B_\pm \cap B_\mp s_\alpha B_\mp$ and $B_\pm uB_\pm vB_\pm=B_\pm uvB_\pm$ if $l(u)+l(v)=l(uv)$ (\cite{Hum} Section 29.3 Lemma A).
\end{proof} We have now finally obtained what we wanted at the beginning of this subsection, namely the reduced cluster ensemble $(\mathcal{A}^{u,v},\underline{\mathcal{X}}^{u,v},p)$ and maps \[ \psi:G_{sc}^{u,v}\dashrightarrow \mathcal{A}^{u,v} \quad \quad \text{and} \quad \quad \chi:\underline{\mathcal{X}}^{u,v}\rightarrow H\backslash G^{u,v}/H. \] These structures enable us to rewrite the twist map $\eta$ in a new way, as we will soon see in the next section. \begin{rmk} Since from now on we will only focus on the reduced cluster ensemble $(\mathcal{A}^{u,v},\underline{\mathcal{X}}^{u,v},p)$, we will drop the underline notation and just write the cluster ensemble as $(\mathcal{A}^{u,v}, \mathcal{X}^{u,v},p)$. \end{rmk} \subsection{Cluster Varieties and Amalgamation}\label{cluster} We will give a brief review of Fock and Goncharov's theory of cluster ensembles and amalgamation. We will mainly follow the coordinate description presented in \cite{FG} and \cite{FGD}. \begin{defn} A \textit{seed} $\vec{i}$ is a quadruple $(I,I_0, \epsilon, d)$ satisfying the following properties: \begin{enumerate} \item $I$ is a finite set; \item $I_0\subset I$; \item $\epsilon=\left(\epsilon_{ab}\right)_{a,b\in I}$ is a $\mathbb{Q}$-coefficient matrix in which $\epsilon_{ab}$ is an integer unless $a,b\in I_0$; \item $d=\left(d_a\right)_{a\in I}$ is a $|I|$-tuple of positive integers such that $\hat{\epsilon}_{ab}:=\epsilon_{ab}d_b$ is a skew-symmetric matrix. \end{enumerate} \end{defn} In the special case where $\epsilon_{ab}$ is itself skew-symmetric, the data of a seed defined as above is equivalent to the data of a quiver with vertex set $I$ and exchange matrix $\epsilon_{ab}$. Thus by extending the terminology from quivers to seeds, we call elements of $I$ \textit{vertices}, call elements of $I_0$ \textit{frozen vertices}, and call $\epsilon$ the \textit{exchange matrix}. \begin{defn} Let $\vec{i}=(I,I_0,\epsilon,d)$ be a seed and let $c$ be a non-frozen vertex. 
Then the \textit{mutation} of $\vec{i}$ at $c$, which we will denote as $\mu_c$, gives rise to a new seed $\vec{i}'=\left(I',I'_0, \epsilon', d'\right)$ where $I'=I$, $I'_0=I_0$, $d'=d$, and \begin{equation}\label{mutation} \epsilon'_{ab}=\left\{\begin{array}{ll} -\epsilon_{ab} & \text{if $c\in \{a,b\}$;} \\ \epsilon_{ab} & \text{if $\epsilon_{ac}\epsilon_{cb}\leq 0$ and $c\notin \{a,b\}$;}\\ \epsilon_{ab}+|\epsilon_{ac}|\epsilon_{cb} & \text{if $\epsilon_{ac}\epsilon_{cb}>0$, $c\notin \{a,b\}$.} \end{array}\right. \end{equation} \end{defn} It is not hard to check that mutating twice at the same vertex gives back the original seed. In fact, if we start with a skew-symmetric matrix $\epsilon$ so the data of a seed can be translated into a quiver, then seed mutation precisely corresponds to quiver mutation. Starting with an initial seed $\vec{i}_0$, we say that a seed $\vec{i}$ is \textit{mutation equivalent} to $\vec{i}_0$ if there is a sequence of seed mutations that turns $\vec{i}_0$ into $\vec{i}$; we denote the set of all seeds mutation equivalent to $\vec{i}_0$ by $|\vec{i}_0|$. To each seed $\vec{i}$ in $|\vec{i}_0|$ we associate two split algebraic tori $\mathcal{A}_\vec{i}=(\mathbb{C}^*)^{|I|}$ and $\mathcal{X}_\vec{i}=(\mathbb{C}^*)^{|I|}$, which are equipped with canonical coordinates $(A_a)$ and $(X_a)$ indexed by the set $I$. These two split algebraic tori are linked by a map $p_\vec{i}:\mathcal{A}_\vec{i}\rightarrow \mathcal{X}_\vec{i}$ given by \[ p_\vec{i}^*(X_a)=\prod_{b\in I} A_b^{\epsilon_{ab}}. \] The split algebraic tori $\mathcal{A}_\vec{i}$ and $\mathcal{X}_\vec{i}$ are called a \textit{seed $\mathcal{A}$-torus} and a \textit{seed $\mathcal{X}$-torus} respectively. 
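The mutation rule \eqref{mutation} for the exchange matrix is purely combinatorial and easy to test in code. The following is a minimal Python sketch on a hypothetical rank-3 skew-symmetric seed (not one of the seeds constructed in this paper); it also confirms the claim that mutating twice at the same vertex gives back the original seed.

```python
# Exchange-matrix mutation at a vertex c, following the rule (mutation):
# entries touching c flip sign; entries with eps[a][c] * eps[c][b] > 0
# pick up the correction term |eps[a][c]| * eps[c][b].
def mutate(eps, c):
    n = len(eps)
    new = [[0] * n for _ in range(n)]
    for a in range(n):
        for b in range(n):
            if c in (a, b):
                new[a][b] = -eps[a][b]
            elif eps[a][c] * eps[c][b] > 0:
                new[a][b] = eps[a][b] + abs(eps[a][c]) * eps[c][b]
            else:
                new[a][b] = eps[a][b]
    return new

# Hypothetical rank-3 example: the path quiver 1 -> 2 -> 3.
eps = [[0, 1, 0], [-1, 0, 1], [0, -1, 0]]
assert mutate(eps, 1) == [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
# Mutation at the same vertex is an involution on seeds.
assert mutate(mutate(eps, 1), 1) == eps
```

For a seed with a nontrivial symmetrizer $d$, the same rule applies to $\epsilon$ unchanged, with $d'=d$.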
A seed mutation $\mu_c:\vec{i}\rightarrow \vec{i}'$ gives rise to birational equivalences between the corresponding seed tori, which by an abuse of notation we denote both by $\mu_c$; in terms of the canonical coordinates $(A'_a)$ and $(X'_a)$ they can be expressed as \begin{align*} \mu_c^*(A'_a)=&\left\{\begin{array}{ll} \displaystyle A_c^{-1}\left(\prod_{\epsilon_{cb}>0} A_b^{\epsilon_{cb}}+\prod_{\epsilon_{cb}<0} A_b^{-\epsilon_{cb}}\right) & \text{if $a=c$,}\\ A_a & \text{if $a\neq c$,}\end{array}\right. \\ \text{and} \quad \quad \mu_c^*(X'_a)=&\left\{\begin{array}{l l} X_c^{-1} & \text{if $a=c$,} \\ X_a\left(1+X_c^{-\mathrm{sign} (\epsilon_{ac})}\right)^{-\epsilon_{ac}}& \text{if $a\neq c$.}\end{array}\right. \end{align*} These two birational equivalences are called \textit{cluster $\mathcal{A}$-mutation} and \textit{cluster $\mathcal{X}$-mutation} respectively. One important feature of cluster mutations is that they commute with the respective $p$ maps. \[ \xymatrix{\mathcal{A}_\vec{i} \ar[d]_{p_\vec{i}} \ar@{-->}[r]^{\mu_c} & \mathcal{A}_{\vec{i}'} \ar[d]^{p_{\vec{i}'}}\\ \mathcal{X}_\vec{i} \ar@{-->}[r]_{\mu_c} & \mathcal{X}_{\vec{i}'}} \] Besides cluster mutations between seed tori we also care about cluster isomorphisms induced by seed isomorphisms. A \textit{seed isomorphism} $\sigma:\vec{i}\rightarrow \vec{i}'$ is a bijection $\sigma:I\rightarrow I'$ that fixes the subset $I_0\subset I\cap I'$ such that $\epsilon'_{\sigma(a)\sigma(b)}=\epsilon_{ab}$. Given a seed isomorphism $\sigma:\vec{i}\rightarrow \vec{i}'$ between two seeds in $|\vec{i}_0|$, we obtain isomorphisms on the corresponding seed tori, which by an abuse of notation we also denote by $\sigma$: \[ \sigma^*(A'_{\sigma(a)})=A_a \quad \quad \text{and} \quad \quad \sigma^*(X'_{\sigma(a)})=X_a. \] We call these isomorphisms \textit{cluster isomorphisms}. It is not hard to see that cluster isomorphisms also commute with the $p$ maps. 
\[ \xymatrix{\mathcal{A}_\vec{i} \ar[d]_{p_\vec{i}} \ar[r]^\sigma & \mathcal{A}_{\vec{i}'} \ar[d]^{p_{\vec{i}'}}\\ \mathcal{X}_\vec{i} \ar[r]_\sigma & \mathcal{X}_{\vec{i}'}} \] Compositions of seed mutations and seed isomorphisms are called \textit{seed transformations}, and compositions of cluster mutations and cluster isomorphisms are called \textit{cluster transformations}. A seed transformation $\vec{i}\rightarrow \vec{i}$ is called \textit{trivial} if it induces identity maps on the corresponding seed $\mathcal{A}$-torus $\mathcal{A}_\vec{i}$ and seed $\mathcal{X}$-torus $\mathcal{X}_\vec{i}$. \begin{defn} By gluing the seed tori via cluster mutations we obtain the corresponding \textit{cluster varieties}, which will be denoted as $\mathcal{A}_{|\vec{i}_0|}$ and $\mathcal{X}_{|\vec{i}_0|}$ respectively. Cluster transformations can be seen as automorphisms on these cluster varieties. Since the maps $p_\vec{i}$ commute with cluster mutations, they naturally glue into a map $p:\mathcal{A}_{|\vec{i}_0|}\rightarrow \mathcal{X}_{|\vec{i}_0|}$ of cluster varieties. The triple $\left(\mathcal{A}_{|\vec{i}_0|},\mathcal{X}_{|\vec{i}_0|}, p\right)$ associated to a mutation equivalent family of seeds $|\vec{i}_0|$ is called a \textit{cluster ensemble}. \end{defn} Cluster ensembles connect the theory of cluster algebras with Poisson geometry: on the one hand, the coordinate rings on cluster $\mathcal{A}$-varieties are examples of Fomin and Zelevinsky's upper cluster algebras \cite{BFZ}, and on the other hand, cluster $\mathcal{X}$-varieties carry natural Poisson variety structures given by \[ \{X_a,X_b\}=\hat{\epsilon}_{ab}X_aX_b. \] Thus a cluster $\mathcal{X}$-variety is also known as a \textit{cluster Poisson variety}. More details are available in \cite{FG}. 
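Both mutation formulas, together with the commuting square relating them through $p$, can be sanity-checked numerically. The sketch below (Python with exact rational arithmetic; the quiver and the coordinate values are hypothetical choices, not data from the paper) verifies on one example that $p$ intertwines cluster $\mathcal{A}$-mutation with cluster $\mathcal{X}$-mutation, and that cluster $\mathcal{X}$-mutation is an involution.

```python
from fractions import Fraction
from math import prod

def mutate_eps(eps, c):
    """Exchange-matrix mutation at vertex c."""
    n = len(eps)
    new = [[0] * n for _ in range(n)]
    for a in range(n):
        for b in range(n):
            if c in (a, b):
                new[a][b] = -eps[a][b]
            elif eps[a][c] * eps[c][b] > 0:
                new[a][b] = eps[a][b] + abs(eps[a][c]) * eps[c][b]
            else:
                new[a][b] = eps[a][b]
    return new

def p_map(eps, A):
    """p^*(X_a) = prod_b A_b^{eps_ab}."""
    n = len(A)
    return [prod(A[b] ** eps[a][b] for b in range(n)) for a in range(n)]

def mutate_A(eps, A, c):
    """Cluster A-mutation at vertex c (exchange relation)."""
    n = len(A)
    plus = prod(A[b] ** eps[c][b] for b in range(n) if eps[c][b] > 0)
    minus = prod(A[b] ** -eps[c][b] for b in range(n) if eps[c][b] < 0)
    new = list(A)
    new[c] = (plus + minus) / A[c]
    return new

def mutate_X(eps, X, c):
    """Cluster X-mutation at vertex c."""
    new = []
    for a in range(len(X)):
        if a == c:
            new.append(1 / X[c])
        elif eps[a][c] == 0:
            new.append(X[a])
        else:
            s = 1 if eps[a][c] > 0 else -1
            new.append(X[a] * (1 + X[c] ** (-s)) ** (-eps[a][c]))
    return new

eps = [[0, 1, 0], [-1, 0, 1], [0, -1, 0]]   # hypothetical rank-3 path quiver
A = [Fraction(2), Fraction(3), Fraction(5)]  # arbitrary point of the A-torus
c = 1
# The square commutes: mutating X = p(A) at c agrees with applying the
# p map of the mutated seed to the mutated A-coordinates.
assert mutate_X(eps, p_map(eps, A), c) == p_map(mutate_eps(eps, c), mutate_A(eps, A, c))
# X-mutation at the same vertex (for the mutated seed) is an involution.
X = p_map(eps, A)
assert mutate_X(mutate_eps(eps, c), mutate_X(eps, X, c), c) == X
```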
Our next goal in this subsection is to describe a process of constructing cluster ensembles from smaller pieces known as \textit{amalgamation}; it was first introduced by Fock and Goncharov in \cite{FGD} when they studied the Poisson structure on double Bruhat cells. \begin{defn}\label{amalgamation} Let $\left\{\vec{i}^s\right\}=\left\{\left(I^s,I_0^s, \epsilon^s, d^s\right)\right\}$ be a finite collection of seeds, together with a collection of injective maps $i^s:I^s\rightarrow K$ for some finite set $K$ satisfying the following conditions: \begin{enumerate} \item the images of $i^s$ cover $K$; \item $\left(i^s\right)^{-1}\left(i^t\left(I^t\right)\right)\subset I_0^s$ for any $s\neq t$; \item if $i^s(a)=i^t(b)$ then $d^s_a=d^t_b$. \end{enumerate} Then the \textit{amalgamation} of such a collection of seeds is defined to be a new seed $(K,K_0,\epsilon, d)$ where \[ \epsilon_{ab}:=\sum_{\substack{i^s(a^s)=a\\ i^s(b^s)=b}} \epsilon^s_{a^sb^s}, \quad \quad \quad \quad d_a:=d^s_{a^s} \quad \text{for any $a^s$ with $i^s(a^s)=a$}, \] \[ \text{and} \quad \quad K_0:=\left(\bigcup_s i^s\left(I_0^s\right)\right)\setminus L. \] The set $L$ in the last line can be any subset of the set \[ \{a\in K\mid \text{both $\epsilon_{ab}$ and $\epsilon_{ba}$ are integers for all $b\in K$}\}. \] In particular, elements of the set $L$ are called \textit{defrosted vertices}. \end{defn} Observe that if $a=i^s(a^s)$ for some non-frozen vertex $a^s\in I^s\setminus I_0^s$, then $a$ cannot possibly lie inside the image of any other $i^t$. Therefore mutation at $a$ after amalgamation will give the same seed as amalgamation after mutation at $a^s$. This shows that amalgamation commutes with mutation at non-frozen vertices (\cite{FGD} Lemma 2.2). 
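The exchange-matrix part of Definition \ref{amalgamation} is just a sum over preimages. Here is a minimal Python sketch; the two seeds and the gluing maps $i^s$ are hypothetical examples, and the bookkeeping for $d$, $K_0$, and defrosted vertices is omitted.

```python
# Amalgamate the exchange matrices of a collection of seeds along
# injections i^s : I^s -> K, encoded as tuples maps[s][a_s] = i^s(a_s):
# eps_{ab} is the sum of eps^s_{a^s b^s} over all preimages of (a, b).
def amalgamate(seed_matrices, maps, k_size):
    eps = [[0] * k_size for _ in range(k_size)]
    for eps_s, i_s in zip(seed_matrices, maps):
        for a_s, a in enumerate(i_s):
            for b_s, b in enumerate(i_s):
                eps[a][b] += eps_s[a_s][b_s]
    return eps

# Two hypothetical rank-2 seeds glued along one vertex:
# i^1 = (0, 1) and i^2 = (1, 2), so vertex 1 of K = {0, 1, 2} is the
# common image of a frozen vertex of each seed.
eps1 = [[0, 1], [-1, 0]]
eps2 = [[0, 1], [-1, 0]]
eps = amalgamate([eps1, eps2], [(0, 1), (1, 2)], 3)
assert eps == [[0, 1, 0], [-1, 0, 1], [0, -1, 0]]
```

In this example all entries in the row and column of the glued vertex $1$ are integers, so $1$ may be defrosted and become mutable in the amalgamated seed.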
If the seed $\vec{k}$ is the amalgamation of seeds $\vec{i}^s$, then on the seed torus level we can induce two \textit{amalgamation maps}: \[ \Delta:\mathcal{A}_\vec{k}\rightarrow \prod_s\mathcal{A}_{\vec{i}^s} \quad \quad \text{and} \quad \quad m:\prod_s\mathcal{X}_{\vec{i}^s} \rightarrow \mathcal{X}_\vec{k}, \] whose pull-backs are \[ \Delta^*\left(A_{a^s}\right)=A_{i^s(a^s)} \quad \quad \text{and} \quad \quad m^*\left(X_a\right)=\prod_{i^s(a^s)=a} X_{a^s}. \] One should think of $\Delta$ as some sort of diagonal embedding and $m$ as some sort of multiplication. This point will become much clearer when we construct the cluster structures on double Bruhat cells of semisimple Lie groups. \begin{prop} The map $p_\vec{k}:\mathcal{A}_\vec{k} \rightarrow \mathcal{X}_\vec{k}$ can be factored as the composition \[ \xymatrix{ \mathcal{A}_\vec{k} \ar[r]^(0.4){\Delta} & \prod_s \mathcal{A}_{\vec{i}^s} \ar[d]^{\prod_sp_{\vec{i}^s}} & \\ & \prod_s\mathcal{X}_{\vec{i}^s} \ar[r]_(0.6){m}& \mathcal{X}_\vec{k} } \] \end{prop} \begin{proof} This can be verified by direct computation using the definitions of the $p$ maps and amalgamation maps $\Delta$ and $m$. \end{proof} \begin{rmk}\label{reduced} (\textbf{Important!}) There is a \textit{reduced} version of a seed $\mathcal{X}$-torus, which is obtained as the image of $\mathcal{X}_\vec{i}$ under the projection to the coordinates corresponding to non-frozen vertices (and hence is isomorphic to $\left(\mathbb{C}^*\right)^{|I\setminus I_0|}$). If we need to distinguish the two seed $\mathcal{X}$-tori, we will denote the reduced one as $\underline{\mathcal{X}}_\vec{i}$. One can view the reduced seed $\mathcal{X}$-torus $\underline{\mathcal{X}}_\vec{i}$ as a seed $\mathcal{X}$-torus for the seed $\left(I\setminus I_0,\emptyset, \underline{\epsilon}, \underline{d}\right)$ where $\underline{\epsilon}$ and $\underline{d}$ are obtained by deleting the data entries of $\epsilon$ and $d$ that involve $I_0$ respectively. 
By composing the $p$ map $p_\vec{i}:\mathcal{A}_\vec{i}\rightarrow \mathcal{X}_\vec{i}$ and the projection map $\mathcal{X}_\vec{i}\rightarrow \underline{\mathcal{X}}_\vec{i}$ we obtain another $p$ map, which by an abuse of notation we also denote as $p_\vec{i}:\mathcal{A}_\vec{i}\rightarrow \underline{\mathcal{X}}_\vec{i}$. In fact, the reduced $p$ map makes more sense since it is guaranteed to be algebraic, whereas the unreduced one may have fractional exponents. In addition, by looking at the formulas for cluster $\mathcal{X}$-mutation we see that the frozen coordinates never enter the formulas of the unfrozen ones; therefore we can conclude that the projection $\mathcal{X}_\vec{i}\rightarrow \underline{\mathcal{X}}_\vec{i}$ commutes with cluster transformations. In particular, the reduced seed $\mathcal{X}$-tori also glue together into a \textit{reduced cluster $\mathcal{X}$-variety}, which we may denote as $\underline{\mathcal{X}}_{|\vec{i}_0|}$, together with a $p$ map $p:\mathcal{A}_{|\vec{i}_0|}\rightarrow \underline{\mathcal{X}}_{|\vec{i}_0|}$. We call the triple $\left(\mathcal{A}_{|\vec{i}_0|}, \underline{\mathcal{X}}_{|\vec{i}_0|}, p\right)$ a \textit{reduced cluster ensemble}. The reduced version is actually more useful and more relevant to our story, but the unreduced one also makes our life easier as it allows us to use amalgamation to define the cluster structure of double Bruhat cells. We therefore decide to include both in this section and use both later when we define the cluster structures on double Bruhat cells; however, after we finish the construction of the cluster structures we will drop the underline notation and use $\mathcal{X}_{|\vec{i}_0|}$ to denote the reduced cluster $\mathcal{X}$-variety. 
\end{rmk} \subsection{Construction of the Donaldson-Thomas Cluster Transformation}\label{3.2} We now have two maps between the space $H\backslash G^{u,v}/H$ and the cluster Poisson variety $\mathcal{X}^{u,v}$, namely \[ \psi:H\backslash G^{u,v}/H\dashrightarrow \mathcal{X}^{u,v}\quad \quad \text{and} \quad \quad \chi:\mathcal{X}^{u,v}\rightarrow H\backslash G^{u,v}/H. \] In the last subsection we proved that the composition $\chi\circ \psi$ is rationally equivalent to the twist map $\eta$ on the double quotient $H\backslash G^{u,v}/H$, which has been interpreted in many different ways. In this subsection we will consider the composition in the opposite direction, i.e., \[ \mathrm{DT}:=\psi\circ \chi, \] and show that $\mathrm{DT}$ is in fact the Donaldson-Thomas cluster transformation of the cluster Poisson variety $\mathcal{X}^{u,v}$. Following our subscript convention, we define $\mathrm{DT}_\vec{i}:=\psi_\vec{i}\circ \chi_\vec{i}$ to be the restriction of $\mathrm{DT}$ to the seed torus $\mathcal{X}_\vec{i}$ for any reduced word $\vec{i}$ of the pair of Weyl group elements $(u,v)$. According to Theorem \ref{gs} and Proposition \ref{lem0}, in order to achieve our goal, it suffices to show the following two things for some (equivalently any) reduced word $\vec{i}$ of $(u,v)$: \begin{itemize} \item $\mathrm{DT}_\vec{i}$ satisfies the condition that $\deg_{X_b}\mathrm{DT}_\vec{i}^*\left(X_a\right)=-\delta_{ab}$; \item $\mathrm{DT}_\vec{i}:\mathcal{X}_\vec{i}\dashrightarrow \mathcal{X}_\vec{i}$ is a cluster transformation. \end{itemize} Again, proving these two statements comes down to choosing the right reduced word $\vec{i}$. 
In contrast to our choice in the last subsection, this time our choice of reduced word has the $u$ part come before the $v$ part: \[ \vec{i}:=(\underbrace{\alpha(1),\dots, \alpha(n)}_\text{$u$ part}, \underbrace{\alpha(n+1),\dots, \alpha(l)}_\text{$v$ part}), \] in other words, the first $n$ letters are opposite to simple roots, which will give a reduced word of $u$ if we switch their signs, and the last $l-n$ letters are simple roots, which give a reduced word of $v$. Fix one such reduced word $\vec{i}$. Let $(X_a)$ be a generic point in the seed torus $\mathcal{X}_\vec{i}$ and let $x$ be an element in $G_{sc}^{u,v}$ such that \[ H\backslash x/H=\chi_\vec{i}(X_a). \] To compute $\mathrm{DT}_\vec{i}(X_a)$, we need to first compute the image $\psi_\vec{i}(x)$, whose coordinates are generalized minors of the form \[ \Delta_\alpha\left(\overline{u_{<i}}^{-1}x\overline{v^{-1}}\right) \quad \quad \text{or} \quad \quad \Delta_\alpha\left(\overline{u}^{-1}x\overline{\left(v_{>i}\right)^{-1}}\right), \] depending on the corresponding vertex in the seed. The good news is that we sort of already know the Gaussian decomposition of $x$: from the definition $x:=\chi_\vec{i}(X_a)$ we know that we can write $x$ as a product of $e_{\alpha(i)}$ and $X_a^{H^\alpha}$, and since the $u$ part of $\vec{i}$ comes before the $v$ part, the first half of such a product is in $B_-$ while the second half is in $B_+$. But then since $N_\pm$ is normal in $B_\pm$, we know that for any $n_\pm \in N_\pm$ and $h\in H$, there always exist $n'_\pm$ such that \[ hn_-=n_-'h \quad \quad \text{and} \quad \quad n_+h=hn'_+. \] Using these two identities we can conclude that \[ \Delta_\alpha(x)=\Delta_\alpha\left(\prod_{\beta,i} X_{\binom{\beta}{i}}^{H^\beta}\right)=\prod_{\beta,i} X_{\binom{\beta}{i}}^{\inprod{H^\beta}{\omega_\alpha}}=\prod_{\beta,i} X_{\binom{\beta}{i}}^{\left(C^{-1}\right)_{\beta\alpha}}, \] where $C^{-1}$ denotes the inverse of the Cartan matrix $C$. A couple of bad things happen here. 
One is that although the Cartan matrix $C$ has integer entries, $C^{-1}$ generally does not, and we may run into the trouble of taking fractional powers of an algebraic variable. However, this may not be as bad as we imagine after all, because at the end of the day what we need is not individual generalized minors but some of their ratios, and hopefully these ratios are algebraic over the cluster variables. The other bad thing is that $\Delta_\alpha(x)$ is not even on the list of the generalized minors we are trying to compute! Was our effort in vain? Of course not. We can modify it slightly to compute the generalized minors we actually need. The key is again the identities \eqref{e_+s} and \eqref{se_-}, which we write down again: \[ e_\alpha(t)\overline{s}_\alpha=e_{-\alpha}\left(t^{-1}\right)t^{H_\alpha} e_\alpha\left(-t^{-1}\right); \quad \quad \quad \quad \overline{s}_\alpha^{-1}e_{-\alpha}(t)=e_{-\alpha}\left(-t^{-1}\right)t^{H_\alpha} e_\alpha\left(t^{-1}\right). \] From these two identities we see that multiplying $\overline{s}_{\alpha(l)}$ on the right of $x:=\chi_\vec{i}(X_a)$, assuming that $v$ is non-trivial so that $\alpha(l)$ is a simple root, changes the very last factor $e_{\alpha(l)}$ to $e_{-\alpha(l)} e_{\alpha(l)}^{-1}$; now if we were to compute $\Delta_\alpha \left(x\overline{s}_\alpha\right)$, we just need to move the factor $e_{-\alpha(l)}$ all the way through the $B_+$ part of the product $\chi_\vec{i}(X_a)$ into the $N_-$ part by using the identity \eqref{e_+e_-} \[ e_\alpha(p)e_{-\alpha}(q)=e_{-\alpha}\left(\frac{q}{1+pq}\right)(1+pq)^{H_\alpha}e_\alpha\left(\frac{p}{1+pq}\right), \] and then pick up whatever is left in the $H$ part and compute the generalized minors. Variations of this observation also hold when we multiply $\overline{s}_{\alpha(l-1)}, \overline{s}_{\alpha(l-2)}, \dots$ on the right and $\overline{s}_{\alpha(1)}^{-1}, \overline{s}_{\alpha(2)}^{-1}, \dots$ on the left of $x$. 
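All three identities can be checked in rank 1, where everything reduces to $2\times 2$ matrices. The sketch below assumes the standard $SL_2$ realizations $e_\alpha(t)=\left(\begin{smallmatrix}1&t\\0&1\end{smallmatrix}\right)$, $e_{-\alpha}(t)=\left(\begin{smallmatrix}1&0\\t&1\end{smallmatrix}\right)$, $t^{H_\alpha}=\left(\begin{smallmatrix}t&0\\0&t^{-1}\end{smallmatrix}\right)$, and the lift $\overline{s}_\alpha=\left(\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\right)$; this is one common convention, which may differ from the paper's lift by a sign.

```python
from fractions import Fraction

def mul(*ms):
    """Multiply 2x2 matrices over exact rationals, left to right."""
    out = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
    for m in ms:
        out = [[out[i][0] * m[0][j] + out[i][1] * m[1][j]
                for j in range(2)] for i in range(2)]
    return out

def e_plus(t):   return [[Fraction(1), Fraction(t)], [Fraction(0), Fraction(1)]]
def e_minus(t):  return [[Fraction(1), Fraction(0)], [Fraction(t), Fraction(1)]]
def h(t):        return [[Fraction(t), Fraction(0)], [Fraction(0), 1 / Fraction(t)]]

s_bar = [[Fraction(0), Fraction(-1)], [Fraction(1), Fraction(0)]]       # assumed lift of s_alpha
s_bar_inv = [[Fraction(0), Fraction(1)], [Fraction(-1), Fraction(0)]]

p, q, t = Fraction(3), Fraction(5), Fraction(7)
# (e_+e_-):  e(p) e_-(q) = e_-(q/(1+pq)) (1+pq)^{H_alpha} e(p/(1+pq))
assert mul(e_plus(p), e_minus(q)) == mul(
    e_minus(q / (1 + p * q)), h(1 + p * q), e_plus(p / (1 + p * q)))
# (e_+s):  e(t) s_bar = e_-(1/t) t^{H_alpha} e(-1/t)
assert mul(e_plus(t), s_bar) == mul(e_minus(1 / t), h(t), e_plus(-1 / t))
# (se_-):  s_bar^{-1} e_-(t) = e_-(-1/t) t^{H_alpha} e(1/t)
assert mul(s_bar_inv, e_minus(t)) == mul(e_minus(-1 / t), h(t), e_plus(1 / t))
```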
But things can still get very messy as we keep applying these identities. Fortunately, we do not actually need the explicit expression of $\mathrm{DT}_\vec{i}^*\left(X_a\right)$; we only need to verify the identity $\deg_{X_b}\mathrm{DT}_\vec{i}^*\left(X_a\right)=-\delta_{ab}$. This allows us to get around the difficulty by considering only the leading power of each variable. To better record the data of the leading power of each cluster $\mathcal{X}$-variable, we introduce the following notations replacing the strings and the nodes in the string diagram, each of which carries a monomial inside and represents an element in $G_{ad}$: \begin{align*} \text{$\tikz[baseline=0ex]{\draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node[scale=0.8] at (0,-0.25) [below] {$\textstyle\prod_aX_a^{p_a}$};}$ at level $\alpha$} \quad =& \quad e_\alpha\left(c\prod_a X_a^{p_a}+\text{terms of lower powers}\right),\\ \text{$\tikz[baseline=0ex]{\draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node[scale=0.8] at (0,0.25) [above] {$\textstyle\prod_aX_a^{p_a}$};}$ at level $\alpha$} \quad =& \quad e_{-\alpha}\left(c\prod_a X_a^{p_a}+\text{terms of lower powers}\right),\\ \text{$\tikz[baseline=0ex]{\draw (0,0) circle [radius=0.75]; \node[scale=0.8] at (0,0) [] {$\textstyle\prod_aX_a^{p_a}$};}$ at level $\alpha$} \quad =& \quad \left(c\prod_a X_a^{p_a}+\text{terms of lower powers}\right)^{H^\alpha}. \end{align*} We impose the convention that an empty figure means the monomial inside is 1. In particular, we do not draw empty circles, since an empty circle is just the identity element. \begin{exmp} Consider a rank-2 root system with simple roots $\alpha$ and $\beta$. 
Then the image of the amalgamation map $\chi_\vec{i}:\mathcal{X}_\vec{i}\rightarrow H\backslash G^{u,v}/H$ corresponding to the reduced word $\vec{i}:=(-\alpha,-\beta,\alpha,\beta)$ can be represented by our notation above as \[ \tikz[scale=0.8]{ \node at (0,2) [] {$\alpha$}; \node at (0,0) [] {$\beta$}; \draw (1.25,2.75) -- (2.75,2.75) -- (2,1.25) -- cycle; \draw (4,2) circle [radius=0.75]; \node at (4,2) [] {$X_{\binom{\alpha}{1}}$}; \draw (5.25,1.25) -- (6.75,1.25) -- (6,2.75) -- cycle; \draw (3.25,0.75) -- (4.75,0.75) -- (4,-0.75) -- cycle; \draw (6,0) circle [radius=0.75]; \node at (6,0) [] {$X_{\binom{\beta}{1}}$}; \draw (7.25,-0.75) -- (8.75,-0.75) -- (8,0.75) -- cycle; } \] \end{exmp} Using such notation, we see that to compute the leading powers of cluster $\mathcal{X}$-variables in certain generalized minors, all we need to do is to multiply the corresponding lifts of Weyl group elements on the two sides, and then move figures around so that all triangles of the form $\tikz[baseline=0ex,scale=0.5]{\draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle;}$ are on the left of circles and all triangles of the form $\tikz[baseline=0ex, scale=0.5]{\draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle;}$ are on the right of circles, and at the end whatever monomials are left in the circles in the middle will give us the leading powers of the cluster $\mathcal{X}$-variables. This may sound more difficult than it really is; in fact, there are many identities we can use to help us achieve the goal. \begin{prop}\label{figure} The following identities hold (for notation simplicity we only include one cluster $\mathcal{X}$-variable $X$ here). \begin{enumerate} \item Neighboring circles on the same level can be merged into one single circle with the respective monomials multiplied. Different figures on different levels commute with each other. Circles on different levels also commute with each other. 
\vspace{0.5cm} \item $\tikz[baseline=0ex, scale=0.7]{ \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node[scale=0.8] at (0,0) [below] {$X^p$}; \draw (2,0) circle [radius=0.75]; \node[scale=0.8] at (2,0) [] {$X^d$}; } = \tikz[baseline=0ex, scale=0.7]{ \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node[scale=0.8] at (0,0) [below] {$X^{p-d}$}; \draw (-2,0) circle [radius=0.75]; \node[scale=0.8] at (-2,0) [] {$X^d$}; }$. \vspace{0.5cm} \item $\tikz[baseline=0ex, scale=0.7]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node[scale=0.8] at (0,0) [above] {$X^p$}; \draw (-2,0) circle [radius=0.75]; \node[scale=0.8] at (-2,0) [] {$X^d$}; } = \tikz[baseline=0ex, scale=0.7]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node[scale=0.8] at (0,0) [above] {$X^{p-d}$}; \draw (2,0) circle [radius=0.75]; \node[scale=0.8] at (2,0) [] {$X^d$}; }$. \vspace{0.5cm} \item Define $\underline{d}:=\max\{0,d\}$; then $\tikz[baseline=0ex]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node[scale=0.8] at (0,0) [above] {$X^q$}; \draw (-2,-0.75) -- (-0.5,-0.75) -- (-1.25,0.75) -- cycle; \node[scale=0.8] at (-1.25,0) [below] {$X^p$}; } = \tikz[baseline=0ex]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node[scale=0.7] at (0,0.25) [above] {$X^{q-\left(\underline{p+q}\right)}$}; \draw (2,0) circle [radius=0.75]; \node[scale=0.8] at (2,0) [] {$X^{2\left(\underline{p+q}\right)}$}; \draw (3.25,-0.75) -- (4.75,-0.75) -- (4,0.75) -- cycle; \node [scale=0.7] at (4,-0.25) [below] {$X^{p-\left(\underline{p+q}\right)}$}; \draw (2,-2) circle [radius=0.75]; \node [scale=0.8] at (2,-2) [] {$X^{\left(\underline{p+q}\right)C_{\alpha\beta}}$}; \node at (5.5,0) [] {$\alpha$}; \node at (5.5,-2) [] {$\beta$}; }$. 
\vspace{0.5cm} \item $\tikz[baseline=0ex, scale=0.7]{ \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node[scale=0.8] at (0,0) [below] {$X^p$};}\overline{s}_\alpha = \tikz[baseline=0ex, scale=0.7]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node[scale=0.8] at (0,0) [above] {$X^{-p}$}; \draw (2,0) circle [radius=0.75]; \node[scale=0.8] at (2,0) [] {$X^{2p}$}; \draw (3.25,-0.75) -- (4.75,-0.75) -- (4,0.75) -- cycle; \node[scale=0.8] at (4,0) [below] {$X^{-p}$}; \draw (2,-2) circle [radius=0.75]; \node[scale=0.8] at (2,-2) [] {$X^{pC_{\alpha\beta}}$}; \node at (6,0) [] {$\alpha$}; \node at (6,-2) [] {$\beta$}; }$ \vspace{0.5cm} \item $\overline{s}_\alpha^{-1}\tikz[baseline=0ex, scale=0.7]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node[scale=0.8] at (0,0) [above] {$X^p$};}= \tikz[baseline=0ex, scale=0.7]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node[scale=0.8] at (0,0) [above] {$X^{-p}$}; \draw (2,0) circle [radius=0.75]; \node[scale=0.8] at (2,0) [] {$X^{2p}$}; \draw (3.25,-0.75) -- (4.75,-0.75) -- (4,0.75) -- cycle; \node[scale=0.8] at (4,0) [below] {$X^{-p}$}; \draw (2,-2) circle [radius=0.75]; \node[scale=0.8] at (2,-2) [] {$X^{pC_{\alpha\beta}}$}; \node at (6,0) [] {$\alpha$}; \node at (6,-2) [] {$\beta$}; }$ \end{enumerate} \end{prop} \begin{proof} (1) follows from the fact that $\left[E_{\pm \alpha},E_{\mp \beta}\right]=\left[H^\alpha,E_{\pm \beta}\right]=\left[H^\alpha,H^\beta\right]=0$ whenever $\alpha\neq \beta$; (2) and (3) follow from the commutator relation $\left[H^\alpha,E_{\pm\alpha}\right]=\pm E_{\pm\alpha}$; (4), (5), and (6) follow from identities \eqref{e_+e_-}, \eqref{e_+s}, and \eqref{se_-} respectively. \end{proof} This may still sound difficult, but there are further simplifications. 
For example, since we require our reduced word $\vec{i}$ to have its $u$ part before its $v$ part, we know that starting from the very beginning we already have the figures more or less in the order that we want, namely triangles of the form $\tikz[baseline=0ex,scale=0.5]{\draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle;}$ are on the left of triangles of the form $\tikz[baseline=0ex, scale=0.5]{\draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle;}$; there are some circles scattered in the mix, but we can use identities (2) and (3) from Proposition \ref{figure} to move the circles to the middle. Notice that all triangles in the image of $\chi_\vec{i}$ are empty by definition, and after using identities (2) and (3) from Proposition \ref{figure}, all triangles are filled with monomials with non-positive exponents. Suppose our reduced word $\vec{i}:=(\underbrace{\alpha(1),\dots, \alpha(n)}_\text{$u$ part}, \underbrace{\alpha(n+1),\dots, \alpha(l)}_\text{$v$ part})$ has a non-trivial $v$ part. According to (5) of Proposition \ref{figure}, multiplying $\overline{s}_{\alpha(l)}$ on the right of $x$ changes the rightmost triangle $\tikz[baseline=0ex, scale=0.5]{\draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle;}$ to $\tikz[baseline=0ex,scale=0.5]{\draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle;}$ $\tikz[baseline=0ex, scale=0.5]{\draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle;}$. So if we want to compute a generalized minor of $x\overline{s}_{\alpha(l)}$, all we need to do then is just move the triangle $ \tikz[baseline=0ex, scale=0.5]{\draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle;}$ to the left until it passes the collection of circles in the middle, and then evaluate the generalized minor based on the circles in the middle. 
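The single-level part of this normalization, using identities (2) and (3) only and ignoring the cross-level circles produced by (4)-(6), amounts to simple exponent bookkeeping. The encoding below is a hypothetical sketch: a figure at one fixed level is a pair (kind, exponent) with kind '-' for an apex-down triangle (an $e_{-\alpha}$ factor), '+' for an apex-up triangle (an $e_\alpha$ factor), and 'h' for a circle, and the word is assumed to have every '-' already to the left of every '+'.

```python
def normalize_level(word):
    """Move all circles to the middle of a single-level word of figures.

    Identity (3): a circle X^d moving right past an apex-down triangle X^p
    lowers its exponent to p - d; identity (2): a circle X^d moving left
    past an apex-up triangle X^p lowers its exponent to p - d.
    Returns (minus_exponents, middle_circle_exponent, plus_exponents).
    """
    total = sum(e for kind, e in word if kind == 'h')
    seen = 0                                 # circle exponents to the left so far
    minus, plus = [], []
    for kind, e in word:
        if kind == 'h':
            seen += e
        elif kind == '-':
            minus.append(e - seen)           # circles on its left move past it
        else:
            plus.append(e - (total - seen))  # circles on its right move past it
    return minus, total, plus

# e_-(X^0) X^1 e_-(X^2) e_+(X^3) X^2 e_+(X^0), all at one level:
word = [('-', 0), ('h', 1), ('-', 2), ('+', 3), ('h', 2), ('+', 0)]
assert normalize_level(word) == ([0, 1], 3, [1, 0])
```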
\begin{rmk} We claim that we may even drop the last triangle of the form $\tikz[baseline=0ex, scale=0.5]{\draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle;}$ when applying (5) of Proposition \ref{figure}: since $(\alpha(n+1),\dots, \alpha(l))$ is a reduced word of $v$, for any $m>n$ we know that $s_{\alpha(m)}\dots s_{\alpha(l-1)}$ is guaranteed to map $e_{\alpha(l)}$ to some unipotent element $n_+\in N_+$, and hence dropping the last triangle of the form $\tikz[baseline=0ex, scale=0.5]{\draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle;}$ will not affect our computation of generalized minors whatsoever. A similar argument can be applied to (6) of Proposition \ref{figure} as well; in conclusion, we may effectively replace (5) and (6) of Proposition \ref{figure} by \vspace{0.5cm} \indent (5') $\tikz[baseline=0ex, scale=0.7]{ \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node[scale=0.8] at (0,0) [below] {$X^p$};}\overline{s}_\alpha = \tikz[baseline=0ex, scale=0.7]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node[scale=0.8] at (0,0) [above] {$X^{-p}$}; \draw (2,0) circle [radius=0.75]; \node[scale=0.8] at (2,0) [] {$X^{2p}$}; \draw (2,-2) circle [radius=0.75]; \node[scale=0.8] at (2,-2) [] {$X^{pC_{\alpha\beta}}$}; \node at (4,0) [] {$\alpha$}; \node at (4,-2) [] {$\beta$}; }$ \vspace{0.5cm} \indent (6') $\overline{s}_\alpha^{-1}\tikz[baseline=0ex, scale=0.7]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node[scale=0.8] at (0,0) [above] {$X^p$};}= \tikz[baseline=0ex, scale=0.7]{ \draw (2,0) circle [radius=0.75]; \node[scale=0.8] at (2,0) [] {$X^{2p}$}; \draw (3.25,-0.75) -- (4.75,-0.75) -- (4,0.75) -- cycle; \node[scale=0.8] at (4,0) [below] {$X^{-p}$}; \draw (2,-2) circle [radius=0.75]; \node[scale=0.8] at (2,-2) [] {$X^{pC_{\alpha\beta}}$}; \node at (6,0) [] {$\alpha$}; \node at (6,-2) [] {$\beta$}; }$ \end{rmk} Let us use (5') and (1)--(4) of Proposition \ref{figure} to compute $x\overline{v^{-1}}$. 
As a convention, we will use a dashed vertical line to keep track of the separation between the $u$ part and the $v$ part of the reduced word. If $X$ is a cluster $\mathcal{X}$-variable not from the $v$ part of the reduced word, it is not hard to see that \begin{align*} \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \tikz[scale=0.8, baseline=0ex]{ \draw (0,0) circle [radius=0.75]; \node at (0,0) [] {$X$}; } \dots \tikz[baseline=0ex, scale=0.8]{ \draw[dashed] (0,-1.5) -- (0,1.5); \draw [fill=white] (0,0) circle [radius=0.75]; }\dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; }\dots \overline{v^{-1}} =& \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots \tikz[baseline=0ex, scale=0.8]{ \draw[dashed] (0,-1.5) -- (0,1.5); \draw [fill=white] (0,0) circle [radius=0.75]; \node at (0,0) [] {$X$}; }\dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; }\dots \overline{v^{-1}}\\ =&\dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; }\dots \tikz[baseline=0ex, scale=0.8]{ \draw[dashed] (0,-1.5) -- (0,1.5); \draw [fill=white] (0,0) circle [radius=0.75]; \node at (0,0) [] {$X$}; }. \end{align*} Things get a bit trickier when $X$ is a cluster $\mathcal{X}$-variable from the $v$ part of the reduced word. 
Consider the following identities (we will always assume that the top level is level $\alpha$ and the bottom level is level $\beta$ unless otherwise specified): \begin{align*} & \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-2.75) -- (0.75,-2.75) -- (0,-1.25) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-2.75) -- (0.75,-2.75) -- (0,-1.25) -- cycle; \draw (0,0) circle [radius=0.75]; \node at (0,0) [] {$X$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; }\dots \overline{v^{-1}}\\ =& \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X$}; } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X^{-1}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-2.75) -- (0.75,-2.75) -- (0,-1.25) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X^{-1}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-2.75) -- (0.75,-2.75) -- (0,-1.25) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; }\dots \overline{v^{-1}}\\ =&\dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; } \tikz[scale=0.8,baseline=0ex]{ \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] 
{$\dots$}; }\dots\tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X$}; } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X^{-1}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-2.75) -- (0.75,-2.75) -- (0,-1.25) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X$}; \draw (1.5,0) circle [radius=0.75]; \node at (1.5,0) [] {$X^{-2}$}; \draw (1.5,-2) circle [radius=0.75]; \node at (1.5,-2) [] {$X^{-C_{\alpha\beta}}$}; }\quad \overline{v'^{-1}}\\ =& \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; } \tikz[scale=0.8,baseline=0ex]{ \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots\tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X^{-1}$}; \node at (0,-2) [] {$X^{-C_{\alpha\beta}}$}; } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-2.75) -- (0.75,-2.75) -- (0,-1.25) -- cycle; \node at (0,0) [] {$\dots$}; \node at (0,-2) [below] {$X^{C_{\alpha\beta}}$}; } \overline{v'^{-1}} \end{align*} We may want to pause here and make a few observations before proceeding any further. 
\begin{rmk}\label{1stobs} First, we claim that from the last line onward, we always have the pattern \[ \dots \tikz[baseline=0ex, scale=0.8]{ \draw[dashed] (0,-1.5) -- (0,1.5); \draw [fill=white] (0,0) circle [radius=0.75]; \node at (0,0) [] {$X^d$}; }\dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X^{-d}$}; }\dots\tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X^{-d}$}; }\dots \] on every level (say $\alpha$) in the $v$ part of the reduced word for some integer $d$ (this integer may differ from level to level); this is because when another $\overline{s}_\alpha$ is multiplied on the right of the last $\tikz[scale=0.7, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node[scale=0.8] at (0,0) [below] {$X^{-d}$}; }$, it turns into $\tikz[baseline=0ex, scale=0.7]{\draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node[scale=0.8] at (0,0) [above] {$X^d$}; \draw (1.5,0) circle [radius=0.75]; \node[scale=0.8] at (1.5,0) [] {$X^{-2d}$};}$ with some additional $\tikz[baseline=0ex,scale=0.7]{\draw (0,0) circle [radius=0.75]; \node[scale=0.8] at (0,0) [] {$X^{-dC_{\alpha\beta}}$};}$ on the other levels $\beta$.
But then because (4) of Proposition \ref{figure} says that \[ \tikz[baseline=0ex,scale=0.8]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-d}$}; \draw (-0.5,-0.75) -- (-2,-0.75) -- (-1.25,0.75) -- cycle; \node at (-1.25,0) [below] {$X^d$}; } = \tikz[baseline=0ex,scale=0.8]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^d$}; \draw (0.5,-0.75) -- (2,-0.75) -- (1.25,0.75) -- cycle; \node at (1.25,0) [below] {$X^{-d}$}; } \] we can move $\tikz[baseline=0ex, scale=0.7]{\draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node[scale=0.8] at (0,0) [above] {$X^d$};}$ through all $\tikz[scale=0.7, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node[scale=0.8] at (0,0) [below] {$X^{-d}$}; }$ on level $\alpha$ without changing anything; lastly the circles on the far right, while moving towards the middle, change the remaining triangles in the $v$ part of the reduced word uniformly through each level; hence at the end of the day we have \[ \dots \tikz[baseline=0ex, scale=0.8]{ \draw[dashed] (0,-1.5) -- (0,1.5); \draw [fill=white] (0,0) circle [radius=0.75]; \node at (0,0) [] {$X^d$}; }\dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X^{-d}$}; }\dots\tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X^{-d}$}; }\overline{s}_\alpha= \dots \tikz[baseline=0ex,scale=0.8]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \tikz[baseline=0ex, scale=0.8]{ \draw[dashed] (0,-1.5) -- (0,1.5); \draw [fill=white] (0,0) circle [radius=0.75]; \node at (0,0) [] {$X^{-d}$}; }\dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X^d$}; }\dots \] \end{rmk} \begin{rmk}\label{2ndobs} Second, we also notice that if $X$ is a cluster $\mathcal{X}$-variable in the $v$ part on the 
level $\alpha$ to begin with, then every triangle from the $v$ part of the reduced word will no longer carry any factor of $X$ after it is moved into the $u$ part, except those that are on the right of $X$ on the same level to begin with (which we color red in the picture below): \[ \dots \tikz[baseline=0ex,scale=0.8]{ \draw [dashed] (0,-1.5) -- (0,1.5); \draw [fill=white] (0,0) circle [radius=0.75]; } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; } \tikz[scale=0.8, baseline=0ex] { \draw (0,0) circle [radius=0.75]; \node at (0,0) [] {$X$}; } \tikz[scale=0.8, baseline=0ex] { \draw [red] (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; } \dots \tikz[scale=0.8, baseline=0ex] { \draw [red] (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; } \dots \overline{v^{-1}}=\dots \tikz[scale=0.8, baseline=0ex] { \draw [red] (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node [red] at (0,0) [above] {$X^{-1}$}; } \dots \tikz[scale=0.8, baseline=0ex] { \draw [red] (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node [red] at (0,0) [above] {$X^{-1}$}; } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots \tikz[baseline=0ex,scale=0.8]{ \draw [dashed] (0,-1.5) -- (0,1.5); \draw [fill=white] (0,0) circle [radius=0.75]; \node at (0,0) [] {$X^p$}; }. \] This can be seen as a direct consequence of our first observation. \end{rmk} Unfortunately we do not have a closed formula for computing the exponents $p$; however, we can still formulate the following proposition, which will help us prove statements about such exponents. \begin{prop}\label{3.10} Let $X$ be a cluster $\mathcal{X}$-variable originally on level $\alpha$ strictly contained in the $v$ part. Let $p_\beta$ be the exponents of $X$ in the circle on level $\beta$ in the schematic picture of $x\overline{\left(v_{>k}\right)^{-1}}$ (where circles have all been moved to the middle in between the $u$ part and the $v$ part).
If in addition $\alpha(k)$ is still a simple root (a letter in the $v$ part) and $\alpha(k)\neq \alpha$, then \[ \sum_\beta q_\beta \left(C^{-1}\right)_{\beta\alpha}=\sum_\beta p_\beta \left(C^{-1}\right)_{\beta\alpha}, \] where $q_\beta$ are the exponents of $X$ in the circle on level $\beta$ in the schematic picture of $x\overline{\left(v_{>k-1}\right)^{-1}}$, and $C^{-1}$ is the inverse of the Cartan matrix $C$. \end{prop} \begin{proof} Suppose $\alpha(k)=\gamma\neq \alpha$ is a simple root. Then by applying (5') and Remark \ref{1stobs} we know that $q_\beta=p_\beta-C_{\gamma\beta}p_\gamma$. But then \[ \sum_\beta q_\beta \left(C^{-1}\right)_{\beta\alpha}=\sum_\beta \left(p_\beta-C_{\gamma\beta}p_\gamma\right)\left(C^{-1}\right)_{\beta\alpha}=\sum_\beta p_\beta\left(C^{-1}\right)_{\beta\alpha}, \] where in the last equality we have used the fact that $\sum_\beta C_{\gamma\beta}\left(C^{-1}\right)_{\beta\alpha}=\delta_{\gamma\alpha}=0$. \end{proof} Now we finally have accomplished something: the generalized minors of $x\overline{v^{-1}}$ do come up when computing $\mathrm{DT}$. To finish what we have started, we now need to multiply some part of the reduced word of $u$ on the left to get $\overline{u_{<k}}^{-1}x\overline{v^{-1}}$. This means we start with whatever we get at the end when computing $x\overline{v^{-1}}$, and then start to change the triangles from the left using (6') and move them all the way to the right. We will now draw two dashed lines in the schematic picture, with one representing the separation between triangles that are originally in the $u$ part of the reduced word and those that have come across from the $v$ part of the reduced word, and the other one representing the original separation between the $u$ part and the $v$ part of the reduced word, as drawn below.
\[ \underbrace{\dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots}_\text{triangles that are originally in $u$ part} \tikz[scale=0.8,baseline=0ex]{ \draw [dashed] (0,-3.5) -- (0,1.5); } \underbrace{\dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots}_\text{triangles that come from the $v$ part} \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; } \] Suppose $X$ is a cluster $\mathcal{X}$-variable on level $\alpha$ in the $v$ part of the reduced word. Then from our discussion on $x\overline{v^{-1}}$ we know that in the schematic diagram of $x\overline{v^{-1}}$, only the triangles that are originally on level $\alpha$ to the right of $X$ (and hence from the $v$ part) will carry a factor of $X^{-1}$ in them. 
\[ \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots \tikz[scale=0.8,baseline=0ex]{ \draw [dashed] (0,-3.5) -- (0,1.5); } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X^{p_\alpha}$}; \node at (0,-2) [] {$X^{p_\beta}$}; } \] Therefore when we apply (4) and (6') over and over again to compute $\overline{u_{<k}}^{-1}x\overline{v^{-1}}$, no leading power of $X$ is going to change except when the triangles originally from the $u$ part finally arrive in the $v$ part. 
\begin{align*} &\overline{u_{<k}}^{-1}\dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots \tikz[scale=0.8,baseline=0ex]{ \draw [dashed] (0,-3.5) -- (0,1.5); } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X^{p_\alpha}$}; \node at (0,-2) [] {$X^{p_\beta}$}; }\\ =& \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots \tikz[scale=0.8,baseline=0ex]{ \draw [dashed] (0,-3.5) -- (0,1.5); } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X^{p_\alpha}$}; \node at (0,-2) [] {$X^{p_\beta}$}; } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X^{-p_\alpha}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-2.75) -- (0.75,-2.75) -- 
(0,-1.25) -- cycle; \node at (0,0) [] {$\dots$}; \node at (0,-2) [below] {$X^{-p_\beta}$}; } \dots \end{align*} If $X$ is a cluster $\mathcal{X}$-variable on level $\alpha$ in between the $u$ part and the $v$ part of the reduced word, then we can also see easily that \begin{align*} &\overline{u_{<k}}^{-1}\dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots \tikz[scale=0.8,baseline=0ex]{ \draw [dashed] (0,-3.5) -- (0,1.5); } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; } \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X$}; }\\ =& \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \dots \tikz[scale=0.8,baseline=0ex]{ \draw [dashed] (0,-3.5) -- (0,1.5); } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; } \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; 
\node at (0,0) [] {$X$}; } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X^{-1}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-2.75) -- (0.75,-2.75) -- (0,-1.25) -- cycle; \node at (0,0) [] {$\dots$}; } \dots \end{align*} Lastly, if $X$ is a cluster $\mathcal{X}$-variable on level $\alpha$ in the $u$ part to begin with, then in the schematic diagram of $x\overline{v^{-1}}$ we will have the following (the dashed circle represents where $X$ was originally). \[ \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \draw [dashed] (0,0) circle [radius=0.75]; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; } \dots \tikz[scale=0.8,baseline=0ex]{ \draw [dashed] (0,-3.5) -- (0,1.5); } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-1}$}; } \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X$}; } \] By going over computations similar to those we did before Remarks \ref{1stobs} and \ref{2ndobs}, we see that we arrive at conclusions similar to those of Remarks \ref{1stobs} and \ref{2ndobs}.
Therefore we can conclude that \[ \overline{u_{<k}}^{-1}x\overline{v^{-1}}= \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-p_\alpha}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; \node at (0,-2) [above] {$X^{-p_\beta}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^{-p_\alpha}$}; } \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X^{p_\alpha}$}; \node at (0,-2) [] {$X^{p_\beta}$}; } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X^{-p_\alpha}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-2.75) -- (0.75,-2.75) -- (0,-1.25) -- cycle; \node at (0,0) [] {$\dots$}; \node at (0,-2) [below] {$X^{-p_\beta}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; \node at (0,0) [below] {$X^{-p_\alpha}$}; } \dots \] for some integers $p_\alpha$ and $p_\beta$. \begin{rmk} Through all the computations we have done, we see that when we apply (4) of Proposition \ref{figure}, no circle factors ever appear.
Therefore one can effectively replace (4) by \vspace{0.5cm} \indent (4') $\tikz[baseline=0ex,scale=0.7]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^q$}; \draw (-1.25,-0.75) -- (-2.75,-0.75) -- (-2,0.75) -- cycle; \node at (-2,0) [below] {$X^p$}; } = \tikz[baseline=0ex,scale=0.7]{ \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X^q$}; \draw (1.25,-0.75) -- (2.75,-0.75) -- (2,0.75) -- cycle; \node at (2,0) [below] {$X^p$}; }$ \vspace{0.5cm} \noindent when doing computations with reduced words whose $u$ part is strictly before the $v$ part. \end{rmk} By using the same technique and a symmetric argument one can also compute $\overline{u}^{-1}x\overline{\left(v_{>k}\right)^{-1}}$. We leave the details as an exercise for the reader. Now we are ready to prove the following proposition, which is the first half of what we need to prove that $\mathrm{DT}$ is the cluster Donaldson-Thomas transformation on $\mathcal{X}^{u,v}$. \begin{prop}\label{1sthalf} For our choice of the reduced word $\vec{i}$ (whose $u$ part comes before its $v$ part), $\deg_{X_b}\mathrm{DT}_\vec{i}^*\left(X_a\right)=-\delta_{ab}$ for any cluster $\mathcal{X}$-variables $X_a$ and $X_b$. \end{prop} \begin{proof} In the proof we will mainly focus on two cases depending on where $X_a$ is: (i) $X_a$ is strictly contained in the $u$ part, and (ii) $X_a$ is in between the $u$ part and the $v$ part; the case where $X_a$ is strictly contained in the $v$ part can be proved by an argument symmetric to that of case (i). Let's consider case (i) first. Suppose the vertex (string) $a$ is on level $\alpha$ cut out by nodes $\alpha(i)$ and $\alpha(j)$ as below.
\[ \tikz{ \node (1) at (1,1) [] {$\alpha(i)$}; \node (2) at (3,1) [] {$\alpha(j)$}; \draw (0,1) -- (1) -- node[above]{$a$} (2) -- (4,1); } \] From the earlier computation we learn that \[ \mathrm{DT}_\vec{i}^*\left(X_a\right)=\frac{\Delta_\alpha\left(\overline{u_{<j+1}}^{-1}x\overline{v^{-1}}\right)\prod_{\beta\neq \alpha} \left(\Delta_\beta\left(\overline{u_{<i}}^{-1}x\overline{v^{-1}}\right)\right)^{-C_{\beta\alpha}}}{\Delta_\alpha\left(\overline{u_{<i}}^{-1}x\overline{v^{-1}}\right)\prod_{\beta\neq \alpha} \left(\Delta_\beta\left(\overline{u_{<j+1}}^{-1}x\overline{v^{-1}}\right)\right)^{-C_{\beta\alpha}}}. \] If $b$ is a vertex strictly contained in the $v$ part of the reduced word or in between the $u$ part and the $v$ part, then it is not hard to see from the schematic picture that multiplying $\overline{u_{<i}}^{-1}$ and $\overline{u_{<j+1}}^{-1}$ on the left of $x\overline{v^{-1}}$ yields the same factors of $X_b$ in the middle, and hence $\deg_{X_b}\mathrm{DT}_\vec{i}^*\left(X_a\right)=0$. This reduces the case to the situation where $b$ is a vertex strictly contained in the $u$ part of the reduced word. Under such circumstances, it is helpful to rewrite $\mathrm{DT}_\vec{i}^*\left(X_a\right)$ as \begin{equation}\label{degDT} \mathrm{DT}_\vec{i}^*\left(X_a\right)=\frac{\Delta_\alpha\left(\overline{u_{<i}}^{-1}x\overline{v^{-1}}\right)\prod_\beta\left(\Delta_\beta\left(\overline{u_{<j+1}}^{-1}x\overline{v^{-1}}\right)\right)^{C_{\beta\alpha}}}{\Delta_\alpha\left(\overline{u_{<j+1}}^{-1}x\overline{v^{-1}}\right)\prod_\beta \left(\Delta_\beta\left(\overline{u_{<i}}^{-1}x\overline{v^{-1}}\right)\right)^{C_{\beta\alpha}}}.
\end{equation} The last expression may still look complicated, but it is actually easy to compute, especially if we only care about the leading power: note that if you try to compute $\prod_\beta\left( \Delta_\beta \left(n_-\left(\prod_\mu t_\mu^{H^\mu}\right)n_+\right)\right)^{C_{\beta\alpha}}$ for some $n_\pm \in N_\pm$ and $t_\mu\in \mathbb{C}^*$, all you need is just to realize \[ \sum_\beta C_{\beta\alpha}\inprod{H^\mu}{\omega_\beta}=\sum_{\beta,\nu} C_{\beta\alpha}\left(C^{-1}\right)_{\mu\nu}\inprod{H_\nu}{\omega_\beta}=\delta_{\alpha\mu}, \] which implies immediately that \[ \prod_\beta\left( \Delta_\beta \left(n_-\left(\prod_\mu t_\mu^{H^\mu}\right)n_+\right)\right)^{C_{\beta\alpha}}=t_\alpha. \] Let's first consider the subcase $b\neq a$ with $b$ being a vertex strictly contained in the $u$ part. By Remark \ref{1stobs}, we can assume that $\overline{u_{<i}}^{-1}x\overline{v^{-1}}$ and $\overline{u_{<j+1}}^{-1}x\overline{v^{-1}}$ are represented by the following schematic pictures respectively: \[ \overline{u_{<i}}^{-1}x\overline{v^{-1}}= \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X_b^{-p_\alpha}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; \node at (0,-2) [above] {$X_b^{-p_\beta}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X_b^{-p_\alpha}$}; } \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X_b^{p_\alpha}$}; \node at (0,-2) [] {$X_b^{p_\beta}$}; } \dots, \quad \quad \quad \quad \quad \quad \quad \quad \overline{u_{<j+1}}^{-1}x\overline{v^{-1}}=\dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle 
[radius=0.75]; \node at (0,0) [] {$X_b^{q_\alpha}$}; \node at (0,-2) [] {$X_b^{q_\beta}$}; } \dots. \] Then to prove $\deg_{X_b}\mathrm{DT}_\vec{i}^*\left(X_a\right)=0$, all we need to show is the following equality: \[ p_\alpha-\inprod{\sum_\beta p_\beta H^\beta}{\omega_\alpha}=q_\alpha-\inprod{\sum_\beta q_\beta H^\beta}{\omega_\alpha}, \] which is equivalent to showing \[ p_\alpha-\sum_\beta p_\beta\left(C^{-1}\right)_{\beta\alpha}=q_\alpha-\sum_\beta q_\beta\left(C^{-1}\right)_{\beta\alpha}, \] where $C^{-1}$ is the inverse of the Cartan matrix $C$. To see this, we need the schematic pictures of $\overline{u_{<i+1}}^{-1}x\overline{v^{-1}}$ and $\overline{u_{<j}}^{-1}x\overline{v^{-1}}$, which are the following. \[ \overline{u_{<i+1}}^{-1}x\overline{v^{-1}}= \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; \node[scale=0.5] at (0,-1.7) [above] {$X_b^{-p_\beta+C_{\alpha\beta}p_\alpha}$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X_b^{p_\alpha}$}; } \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X_b^{-p_\alpha}$}; \node[scale=0.6] at (0,-2) [] {$X_b^{p_\beta-C_{\alpha\beta}p_\alpha}$}; } \dots, \quad \quad \quad \quad \quad \quad \quad \quad \overline{u_{<j}}^{-1}x\overline{v^{-1}}= \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X_b^{q_\alpha}$}; } \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X_b^{-q_\alpha}$}; \node[scale=0.6] at (0,-2) [] {$X_b^{q_\beta-C_{\alpha\beta}q_\alpha}$}; } \dots. 
\] By renaming the exponents of $\overline{u_{<i+1}}^{-1}x\overline{v^{-1}}$ as $p'_\beta$ and the exponents of $\overline{u_{<j}}^{-1}x\overline{v^{-1}}$ as $q'_\beta$, we see that \[ \sum_\beta p'_\beta\left(C^{-1}\right)_{\beta\alpha}=\sum_\beta \left(p_\beta-C_{\alpha\beta}p_\alpha\right)\left(C^{-1}\right)_{\beta\alpha}=-p_\alpha+\sum_\beta p_\beta\left(C^{-1}\right)_{\beta\alpha}, \] \[ \sum_\beta q'_\beta\left(C^{-1}\right)_{\beta\alpha}=\sum_\beta \left(q_\beta-C_{\alpha\beta}q_\alpha\right)\left(C^{-1}\right)_{\beta\alpha}=-q_\alpha+\sum_\beta q_\beta\left(C^{-1}\right)_{\beta\alpha}. \] The problem now reduces to showing that \[ \sum_\beta p'_\beta\left(C^{-1}\right)_{\beta\alpha}=\sum_\beta q'_\beta\left(C^{-1}\right)_{\beta\alpha}, \] which is a direct consequence of (a symmetric version of) Proposition \ref{3.10}. This leaves us with one remaining vertex, namely $b=a$. In this subcase, the schematic pictures of $\overline{u_{<i}}^{-1}x\overline{v^{-1}}$ and $\overline{u_{<j+1}}^{-1}x\overline{v^{-1}}$ are the following: \[ \overline{u_{<i}}^{-1}x\overline{v^{-1}}= \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-1.25) -- (0.75,-1.25) -- (0,-2.75) -- cycle; \node at (0,0) [] {$\dots$}; } \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X_a^{-1}$}; } \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X_a$}; } \dots, \quad \quad \quad \quad \quad \quad \quad \quad \overline{u_{<j+1}}^{-1}x\overline{v^{-1}}=\dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X_a^{-1}$}; \node at (0,-2) [] {$X_a^{-C_{\alpha\beta}}$}; } \dots. 
\] Then according to Equation \eqref{degDT}, we can easily compute \begin{align*} \deg_{X_a}\mathrm{DT}_\vec{i}^*\left(X_a\right)=&\inprod{H^\alpha}{\omega_\alpha}-1-\inprod{-H^\alpha-\sum_{\beta\neq \alpha}C_{\alpha\beta}H^\beta}{\omega_\alpha}-1\\ =&\inprod{\sum_\beta C_{\alpha\beta}H^\beta}{\omega_\alpha}-2\\ =&-1, \end{align*} which concludes the proof of case (i). Let's now turn to case (ii). Again suppose the vertex (string) $a$ is on level $\alpha$ cut out by nodes $\alpha(i)$ and $\alpha(j)$, but this time with $-\alpha(i)=\alpha(j)=\alpha$. Then we know that \begin{align*} \mathrm{DT}_\vec{i}^*\left(X_a\right)=&\frac{\left(\prod_{\beta\neq \alpha}\left(\Delta_\beta\left(\overline{u_{<i}}^{-1}x\overline{v^{-1}}\right)\right)^{-C_{\beta\alpha}}\right)\left(\prod_{\beta\neq \alpha}\left(\Delta_\beta\left(\overline{u}^{-1}x\overline{\left(v_{>j}\right)^{-1}}\right)\right)^{-C_{\beta\alpha}}\right)}{\Delta_\alpha\left(\overline{u_{<i}}^{-1}x\overline{v^{-1}}\right)\Delta_\alpha\left(\overline{u}^{-1}x\overline{\left(v_{>j}\right)^{-1}}\right)\prod_{\beta\neq \alpha}\left(\Delta_\beta\left(\overline{u}^{-1}x\overline{v^{-1}}\right)\right)^{-C_{\beta\alpha}}}\\ =&\frac{\Delta_\alpha\left(\overline{u_{<i}}^{-1}x\overline{v^{-1}}\right)\Delta_\alpha\left(\overline{u}^{-1}x\overline{\left(v_{>j}\right)^{-1}}\right)\prod_\beta\left(\Delta_\beta\left(\overline{u}^{-1}x\overline{v^{-1}}\right)\right)^{C_{\beta\alpha}}}{\left(\Delta_\alpha\left(\overline{u}^{-1}x\overline{v^{-1}}\right)\right)^2\left(\prod_\beta\left(\Delta_\beta\left(\overline{u_{<i}}^{-1}x\overline{v^{-1}}\right)\right)^{C_{\beta\alpha}}\right)\left(\prod_\beta\left(\Delta_\beta\left(\overline{u}^{-1}x\overline{\left(v_{>j}\right)^{-1}}\right)\right)^{C_{\beta\alpha}}\right)}. \end{align*} This time the easy ones are the vertices $b$ that are originally in between the $u$ part and the $v$ part, for which there is at most one on each level. 
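As a quick numerical aside (not part of the argument), the Cartan-matrix identities driving these degree computations, namely $\sum_\beta C_{\gamma\beta}\left(C^{-1}\right)_{\beta\alpha}=\delta_{\gamma\alpha}$ and the invariance statement of Proposition \ref{3.10}, can be checked in a few lines of Python for a concrete Cartan matrix; the sketch below uses type $A_3$ and arbitrary illustrative exponents.

```python
# Sanity check (illustrative only) of the Cartan-matrix identities used in
# the degree computations, for the type A_3 Cartan matrix.
from fractions import Fraction

C = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
# Known inverse of the A_3 Cartan matrix: (1/4) * [[3,2,1],[2,4,2],[1,2,3]].
Cinv = [[Fraction(x, 4) for x in row]
        for row in [[3, 2, 1], [2, 4, 2], [1, 2, 3]]]

n = 3
# C * C^{-1} = Id, i.e. sum_b C_{gb} (C^{-1})_{ba} = delta_{ga}.
for g in range(n):
    for a in range(n):
        s = sum(C[g][b] * Cinv[b][a] for b in range(n))
        assert s == (1 if g == a else 0)

# Invariance from Proposition 3.10: if q_b = p_b - C_{gb} p_g with g != a,
# then sum_b q_b (C^{-1})_{ba} = sum_b p_b (C^{-1})_{ba}.
p = [5, -2, 7]  # arbitrary exponents, chosen for illustration
for a in range(n):
    for g in range(n):
        if g == a:
            continue
        q = [p[b] - C[g][b] * p[g] for b in range(n)]
        lhs = sum(q[b] * Cinv[b][a] for b in range(n))
        rhs = sum(p[b] * Cinv[b][a] for b in range(n))
        assert lhs == rhs
```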
From the analysis in schematic pictures we know that such variables do not change as we move triangles across. Suppose $b$ is on level $\beta$ to begin with. Then by a simple computation we get that \[ \deg_{X_b}\mathrm{DT}_\vec{i}^*\left(X_a\right)=2\inprod{H^\beta}{\omega_\alpha}+\delta_{\alpha\beta}-2\inprod{H^\beta}{\omega_\alpha}-2\delta_{\alpha\beta}=-\delta_{\alpha\beta}=-\delta_{ab}. \] Therefore to finish the proof, we only need to show that $\deg_{X_b}\mathrm{DT}_\vec{i}^*\left(X_a\right)=0$ for any vertex $b$ that is strictly contained in either the $u$ part or the $v$ part. Due to the symmetry of the arguments, we will only consider the subcase where $b$ is a vertex strictly contained in the $u$ part of the reduced word. Without further ado, we lay out the schematic pictures of $\overline{u_{<i}}^{-1}x\overline{v^{-1}}$, $\overline{u}^{-1}x\overline{\left(v_{>j}\right)^{-1}}$, and $\overline{u}^{-1}x\overline{v^{-1}}$ as below: \[ \overline{u_{<i}}^{-1}x\overline{v^{-1}}= \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle; \node at (0,0) [above] {$X_b^{-p_\alpha}$}; } \dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X_b^{p_\alpha}$}; \node at (0,-2) [] {$X_b^{p_\beta}$}; } \dots , \quad \quad\quad \quad \quad \quad \overline{u}^{-1}x\overline{\left(v_{>j}\right)^{-1}}=\dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle [radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X_b^{-p_\alpha}$}; \node[scale=0.6] at (0,-2) [] {$X_b^{p_\beta-C_{\alpha\beta}p_\alpha}$}; } \dots \tikz[scale=0.8, baseline=0ex] { \draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle; }, \] \[ \overline{u}^{-1}x\overline{v^{-1}}=\dots \tikz[scale=0.8, baseline=0ex]{ \draw[dashed] (0,-3.5) -- (0,1.5); \draw[fill=white] (0,0) circle
[radius=0.75]; \draw[fill=white] (0,-2) circle [radius=0.75]; \node at (0,0) [] {$X_b^{-p_\alpha}$}; \node[scale=0.6] at (0,-2) [] {$X_b^{p_\beta-C_{\alpha\beta}p_\alpha}$}; } \dots. \] Based on these schematic pictures, we can go straight into the computation of the degree of $X_b$ in $\mathrm{DT}_\vec{i}^*\left(X_a\right)$: \begin{align*} \deg_{X_b}\mathrm{DT}_\vec{i}^*\left(X_a\right)=&\inprod{\sum_\beta p_\beta H^\beta}{\omega_\alpha}+\inprod{\sum_\beta \left(p_\beta-C_{\alpha\beta}p_\alpha\right)H^\beta}{\omega_\alpha}-p_\alpha\\ &-2\inprod{\sum_\beta \left(p_\beta-C_{\alpha\beta}p_\alpha\right)H^\beta}{\omega_\alpha}-p_\alpha-\left(-p_\alpha\right)\\ =&\inprod{\sum_\beta C_{\alpha\beta}p_\alpha H^\beta}{\omega_\alpha}-p_\alpha\\ =&0. \end{align*} This completes the proof of Proposition \ref{1sthalf}. \end{proof} Now we are halfway through proving that $\mathrm{DT}$ is the cluster Donaldson-Thomas transformation on $\mathcal{X}^{u,v}$. The remaining part of this subsection will be devoted to proving the other claim we need, i.e., that $\mathrm{DT}_\vec{i}$ is a cluster transformation. Recall from Proposition \ref{twistflag} that we can identify the twist map $\eta$ on double Bruhat cells $H\backslash G^{u,v}/H$ with the map $\eta:[B_1,B_2,B_3,B_4]\mapsto [B_3^*,B_4^*,B_5^*,B_6^*]$ on $\mathrm{Conf}^{u,v}(\mathcal{B})$, where the six Borel subgroups can be fit into the following hexagon diagram. \[ \xymatrix{ & B_6\ar[r]^{u^c} \ar@{-}[drr] & B_1 \ar[dr]^u \ar@{-}[dll] & \\ B_3 \ar[ur]^{u^*} \ar[dr]_{v^*} \ar@{-}[drr] & & & B_4 \ar@{-}[dll] \\ & B_2 \ar[r]_{v^c} & B_5 \ar[ur]_v &} \] The key to showing that $\psi\circ \chi$ is a cluster transformation is to break $\eta$ down into a composition of a series of small ``clockwise tiltings'' of the square diagram.
To be more precise, let \[ \vec{i}:=(\underbrace{-\alpha(1),\dots, -\alpha(m)}_\text{$u$ part}, \underbrace{\beta(1),\dots, \beta(n)}_\text{$v$ part}) \] again be a reduced word for the pair of Weyl group elements $(u,v)$ whose $u$ part comes before its $v$ part (for later notational simplicity we have put minus signs in the $u$ part, so that all $\alpha(i)$ are simple roots, and we use $\beta(j)$ for the $v$ part to distinguish it from the $u$ part). From the way the reduced word $\vec{i}$ is structured, we see that if $x=\chi_\vec{i}\left(X_a\right)$, then $x$ can be written as a product $x_-x_+$ where $x_\pm\in B_\pm$ (note that this notation differs from the Gaussian decomposition $x=[x]_-[x]_0[x]_+$). Fix a choice of such a factorization $x=x_-x_+$. Then we know that the point $[x_-^{-1}B_+, B_-x_+^{-1},B_-,B_+]$ in $\mathrm{Conf}^{u,v}(\mathcal{B})$ corresponds to the equivalence class $H\backslash x/H$ in $H\backslash G^{u,v} /H$. We can further represent such a point by the square diagram below. \[ \xymatrix{x_-^{-1}B_+ \ar[r]^u \ar@{-}[d] & B_+ \ar@{-}[d] \\ B_- \ar[r]_{v^*} & B_-x_+^{-1}} \] We now initiate a sequence of tiltings of the edge $\xymatrix{x_-^{-1}B_+\ar@{-}[r] & B_-}$ with respect to the reduced word $(\alpha(1),\dots, \alpha(m))$ of $u$. First set $B_u^{(0)}:=x_-^{-1}B_+$ and $B_u^{(m)}:=B_+$; by using Proposition \ref{2.8} we can find a sequence of Borel subgroups $\left(B_u^{(k)}\right)$ such that for $1\leq k\leq m$, \[ \xymatrix{B_u^{(k-1)}\ar[r]^{s_{\alpha(k)}} & B_u^{(k)}}. \] Next set $B_{u^*}^{(0)}:=B_-$ and again use Proposition \ref{2.8} to find a sequence of Borel subgroups $\left(B_{u^*}^{(k)}\right)$ such that for $1\leq k\leq m$, \[ \xymatrix{B_{u^*}^{(k-1)}\ar[r]^{s_{\alpha(k)}^*} & B_{u^*}^{(k)}}, \] and \[ \xymatrix{B_{u^*}^{(m)}\ar[r]^{u^c} & x_-^{-1}B_+}.
\] Since $s_{\alpha(k+1)}^*\dots s_{\alpha(m)}^*u^c s_{\alpha(1)}\dots s_{\alpha(k)}=w_0$ and $m+l(u^c)=l(w_0)$, it follows from Proposition \ref{2.8} again that for all $0\leq k\leq m$, we have \[ \xymatrix{B_{u}^{(k)}\ar@{-}[r] & B_{u^*}^{(k)}}. \] We can view these two sequences as a sequence of tilting of the edge $\xymatrix{x_-^{-1}B_+\ar@{-}[r] & B_-}$. This can be seen more intuitively on the original square diagram. \[ \tikz{ \node (u0) at +(135:4) [] {$x_-^{-1}B_+$}; \node (u1) at +(115:4) [] {$B_u^{(1)}$}; \node (u2) at +(95:4) [] {$B_u^{(2)}$}; \node (um) at +(45:4) [] {$B_+$}; \node (u*0) at +(-135:4) [] {$B_-$}; \node (u*1) at +(-155:4) [] {$B_{u^*}^{(1)}$}; \node (u*2) at +(-175:4) [] {$B_{u^*}^{(2)}$}; \node (u*m) at +(165:4) [] {$B_{u^*}^{(m)}$}; \node (v) at +(-45:4) [] {$x_+B_-$}; \draw [->] (u0) -- (u1) node [midway, above left] {$s_{\alpha(1)}$}; \draw [->] (u1) -- (u2) node [midway, above] {$s_{\alpha(2)}$}; \draw [->, dashed] (u2) -- (um); \draw [->] (u*0) -- (u*1) node [midway, left] {$s_{\alpha(1)}^*$}; \draw [->] (u*1) -- (u*2) node [midway, left] {$s_{\alpha(2)}^*$}; \draw [->] (u*m) -- (u0) node [midway, left] {$u^c$}; \draw [->, dashed] (u*2) -- (u*m); \draw [->] (u*0) -- (v) node [midway, below] {$v^*$}; \draw (um) -- (v); \draw (u0) -- (u*0); \draw (u0) -- (um) node [midway, below] {$u$}; \draw (u*1) -- (u1); \draw (u*2) -- (u2); \draw (u*m) -- (um); } \] We can do the same construction for $v$, by finding two sequences $\left(B_{v^*}^{(k)}\right)_{k=1}^l$ and $\left(B_v^{(k)}\right)_{k=1}^l$ to fit into the square diagram in an analogous way. To save time, we will just draw the resulting tilting diagram. 
\[ \tikz{ \node (u0) at +(135:4) [] {$x_-^{-1}B_+$}; \node (u1) at +(115:4) [] {$B_u^{(1)}$}; \node (u2) at +(95:4) [] {$B_u^{(2)}$}; \node (um) at +(45:4) [] {$B_+$}; \node (u*0) at +(-135:4) [] {$B_-$}; \node (u*1) at +(-155:4) [] {$B_{u^*}^{(1)}$}; \node (u*2) at +(-175:4) [] {$B_{u^*}^{(2)}$}; \node (u*m) at +(165:4) [] {$B_{u^*}^{(m)}$}; \node (v*l) at +(-45:4) [] {$x_+B_-$}; \node (v*1) at (-115:4) [] {$B_{v^*}^{(1)}$}; \node (v*2) at (-95:4) [] {$B_{v^*}^{(2)}$}; \node (v0) at (-15:4) [] {$B_v^{(0)}$}; \node (v1) at (5:4) [] {$B_v^{(1)}$}; \node (v2) at (25:4) [] {$B_v^{(2)}$}; \draw [->] (u0) -- (u1) node [midway, above] {$s_{\alpha(1)}$}; \draw [->] (u1) -- (u2) node [midway, above] {$s_{\alpha(2)}$}; \draw [->, dashed] (u2) -- (um); \draw [->] (u*0) -- (u*1) node [midway, left] {$s_{\alpha(1)}^*$}; \draw [->] (u*1) -- (u*2) node [midway, left] {$s_{\alpha(2)}^*$}; \draw [->] (u*m) -- (u0) node [midway, left] {$u^c$}; \draw [->, dashed] (u*2) -- (u*m); \draw [->] (u*0) -- (v*l) node [midway, below] {$v^*$}; \draw (um) -- (v*l); \draw (u0) -- (u*0); \draw (u0) -- (um) node [midway, below] {$u$}; \draw (u*1) -- (u1); \draw (u*2) -- (u2); \draw (u*m) -- (um); \draw [->] (u*0) -- (v*1) node [midway, below] {$s_{\beta(1)}^*$}; \draw [->] (v*1) -- (v*2) node [midway, below] {$s_{\beta(2)}^*$}; \draw [dashed, ->] (v*2) -- (v*l); \draw [->] (v*l) -- (v0) node [midway, right] {$v^c$}; \draw [->] (v0) -- (v1) node [midway, right] {$s_{\beta(1)}$}; \draw [->] (v1) -- (v2) node [midway, right] {$s_{\beta(2)}$}; \draw [dashed, ->] (v2) -- (um); \draw (u*0) -- (v0); \draw (v*1) -- (v1); \draw (v*2) -- (v2); } \] One can see that we are tilting the right vertical edge $\xymatrix{B_+\ar@{-}[r] & B_-x_+^{-1}}$ clockwise a bit at a time, going from index $l$, then $l-1$, and so on, until we get to $\xymatrix{B_v^{(0)}\ar@{-}[r]& B_-}$.
Note that by the time we finish both tilting sequences, we can apply $*$ to the final square diagram and obtain $\eta[x_-^{-1}B_+,B_-x_+^{-1},B_-,B_+]=\left[B_-^*, B_+^*,\left(B_{v}^{(0)}\right)^*, \left(B_{u^*}^{(m)}\right)^*\right]$. \[ \xymatrix{B_-^* \ar[r]^u \ar@{-}[d] & \left(B_{u^*}^{(m)}\right)^* \ar@{-}[d]\\ \left(B_v^{(0)}\right)^* \ar[r]_{v^*} &B_+^*} \] Our next mission is to figure out how to realize such a tilting in terms of operations on the element $x$ in $G^{u,v}$. By viewing $x$ as a transformation on pairs of opposite Borel subgroups, we can break down the square diagram of the quadruple $[x_-^{-1}B_+,x_+B_-,B_-,B_+]$ into a two-step process: first taking the opposite pair $\xymatrix{x_-^{-1}B_+\ar@{-}[r] & B_-}$ to the opposite pair $\xymatrix{B_+ \ar@{-}[r]& B_-}$ and then to the opposite pair $\xymatrix{B_+ \ar@{-}[r]& B_-x_+^{-1}}$. We can describe such a transformation by the picture below. \[ \xymatrix{x_-^{-1}B_+ \ar[rr]^u \ar@{-}[dr] & & B_+ \ar@{-}[dl] \ar@{-}[dr] & \\ & B_- \ar[rr]_{v^*} & & B_-x_+^{-1}} \] In the diagram above, note that the middle opposite pair $\xymatrix{B_+\ar@{-}[r] & B_-}$ corresponds to the diagonal in the tilting diagram that separates the two tilting sequences, so we can expect that the tilting sequence for $u$ will take place inside the triangle on the left, whereas the tilting sequence for $v$ will take place inside the triangle on the right. This diagram also links to the string diagram we have used to produce the seed corresponding to the reduced word $\vec{i}$, which in turn can be used to prove the cluster nature of $\mathrm{DT}_\vec{i}$, as we will see near the end of this subsection.
\begin{rmk} A sidetrack: you may have wondered why we have used triangles of the form $\tikz[baseline=0ex, scale=0.5]{\draw (-0.75,0.75) -- (0.75,0.75) -- (0,-0.75) -- cycle;}$ in the $u$ part and triangles of the form $\tikz[baseline=0ex, scale=0.5]{\draw (-0.75,-0.75) -- (0.75,-0.75) -- (0,0.75) -- cycle;}$ in the $v$ part when computing the leading power of $\mathrm{DT}_\vec{i}^*\left(X_a\right)$; well, the above diagram is why. \end{rmk} \begin{prop}\label{u tilt} Based on the representative $(x_-^{-1}B_+,B_-x_+^{-1},B_-,B_+)$, the following identity holds for $0\leq k\leq m$: \[ \tikz{ \node (a) at (0,1) [] {$B_u^{(k)}$}; \node (b) at (0,-1) [] {$B_{u^*}^{(k)}$}; \node (c) at (5,1) [] {$\left(e_{\alpha(k)}e_{-\alpha(k)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)^{-1}B_+$}; \node (d) at (5,-1) [] {$B_-\left(e_{\alpha(k)}e_{-\alpha(k)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)$}; \draw (a) -- (b); \draw (c) -- (d); \node at (1,0) [] {$=$}; }. \] \end{prop} \begin{proof} We just need to verify the defining conditions for $B_u^{(k)}$ and $B_{u^*}^{(k)}$, and the key facts are \[ e_{\alpha(k)}\in B_+\cap B_-s_{\alpha(k)}B_- \quad \quad \text{and} \quad \quad e_{-\alpha(k)}\in B_+s_{\alpha(k)}B_+\cap B_-. \] Let's first look at $B_u^{(k)}$. Obviously $B_u^{(0)}=x_-^{-1}B_+$, and since \[ \left(e_{\alpha(k-1)}e_{-\alpha(k-1)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)\left(e_{\alpha(k)}e_{-\alpha(k)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)^{-1}=e_{-\alpha(k)}e_{\alpha(k)}^{-1}\in B_+s_{\alpha(k)}B_+, \] we know that $\xymatrix{\left(e_{\alpha(k-1)}e_{-\alpha(k-1)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)^{-1}B_+ \ar[r]^{s_{\alpha(k)}} & \left(e_{\alpha(k)}e_{-\alpha(k)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)^{-1}B_+}$. 
Thus we only need to show that $\left(e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)^{-1}B_+=B_+$, which is equivalent to showing that $e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-$ is an element of $B_+$. But if we look at the $u$ part of the string diagram associated to $\vec{i}$ (which is a string diagram associated to the reduced word $(-\alpha(1),\dots, -\alpha(m))$ and can be used to give a parametrization of $x_-$), we see that multiplying $e_{\alpha(1)}e_{-\alpha(1)}^{-1}$ on the left of $x_-$ turns the leftmost node from $-\alpha(1)$ to $\alpha(1)$; then we can move this node to the right of $x_-$ using move (1) of Proposition \ref{move}; similar arguments apply to $e_{\alpha(2)}e_{-\alpha(2)}^{-1}$ and so on. Thus at the end we will obtain a string diagram with only nodes that are simple roots, and this shows that $e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-$ is an element of $B_+$. As for $B_{u^*}^{(k)}$, we see that $B_{u^*}^{(0)}=B_-x_-=B_-$, and since \[ \left(e_{\alpha(k-1)}e_{-\alpha(k-1)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)\left(e_{\alpha(k)}e_{-\alpha(k)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)^{-1}=e_{-\alpha(k)}e_{\alpha(k)}^{-1}\in B_-s_{\alpha(k)}B_-, \] we know that $\xymatrix{\left(e_{\alpha(k-1)}e_{-\alpha(k-1)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)^{-1}B_- \ar[r]^{s_{\alpha(k)}^*} & \left(e_{\alpha(k)}e_{-\alpha(k)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)^{-1}B_-}$. Thus we only need to show that \[ \xymatrix{B_-\left(e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)\ar[r]^(0.75){u^c} & x_-^{-1}B_+}.
\] But this is equivalent to showing that \[ \overline{w}_0e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}=\overline{u^c}\overline{u}e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}\in B_+u^cB_+, \] for which it suffices to show that \[ \overline{u}e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}\in B_+. \] To show this, we recall that $\overline{s}_\alpha:=e_\alpha^{-1}e_{-\alpha}e_\alpha^{-1}$; thus \[ \overline{u}e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}=\overline{s_{\alpha(1)}\dots s_{\alpha(m-1)}} e_{\alpha(m)}^{-1}e_{\alpha(m-1)}e_{-\alpha(m-1)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}. \] But then since $(\alpha(1),\dots, \alpha(m))$ is a reduced word of $u$, $s_{\alpha(1)}\dots s_{\alpha(m-1)}$ maps the simple root $\alpha_{\alpha(m)}$ to a positive root, which implies that \[ \overline{s_{\alpha(1)}\dots s_{\alpha(m-1)}} e_{\alpha(m)}^{-1}=b\overline{s_{\alpha(1)}\dots s_{\alpha(m-1)}} \] for some $b\in B_+$. The proof is then finished by induction on $m$. \end{proof} By a completely analogous proof one can also show the following proposition. \begin{prop} Based on the representative $(x_-^{-1}B_+,B_-x_+^{-1},B_-,B_+)$, the following identity holds for $0\leq k\leq l$: \[ \tikz{ \node (a) at (0,1) [] {$B_v^{(k)}$}; \node (b) at (0,-1) [] {$B_{v^*}^{(k)}$}; \node (c) at (5,1) [] {$\left(x_+e_{\beta(l)}^{-1}e_{-\beta(l)}\dots e_{\beta(k+1)}^{-1}e_{-\beta(k+1)}\right)B_+$}; \node (d) at (5,-1) [] {$B_-\left(x_+e_{\beta(l)}^{-1}e_{-\beta(l)}\dots e_{\beta(k+1)}^{-1}e_{-\beta(k+1)}\right)^{-1}$}; \draw (a) -- (b); \draw (c) -- (d); \node at (1,0) [] {$=$}; }. 
\] \end{prop} Our last two propositions show that, in order to reflect the two tilting sequences in terms of $x$, all we need to do is multiply $e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}$ on the left and $e_{\beta(l)}^{-1}e_{-\beta(l)}\dots e_{\beta(1)}^{-1}e_{-\beta(1)}$ on the right, which is similar to what we did when computing the general minors in the earlier discussion (turning triangles upside down and moving them across). With these two results in hand, we are ready to prove our last proposition. \begin{prop}\label{3.18} $\mathrm{DT}:=\psi\circ \chi$ is a cluster transformation. \end{prop} \begin{proof} Let $H\backslash x/H:=\chi_\vec{i}(X_f)$ and consider the representative $[x_-^{-1}B_+,x_+B_-,B_-,B_+]$ corresponding to $H\backslash x/H$ in $\mathrm{Conf}^{u,v}(\mathcal{B})$. We learned from Propositions \ref{twistflag} and \ref{clustertwist} that the composition $\chi\circ \psi$ is the same as the map $\eta$ on $\mathrm{Conf}^{u,v}(\mathcal{B})$, and hence $\chi \circ \psi (H\backslash x/H)$ will correspond to the equivalence class of the configuration represented by the following square diagrams \begin{align*} & \eta\left(\tikz[baseline=-0.5ex]{ \node (a) at (0,1) [] {$x_-^{-1}B_+$}; \node (b) at (0,-1) [] {$B_-$}; \node (c) at (2,1) [] {$B_+$}; \node (d) at (2,-1) [] {$x_+B_-$}; \draw [->] (a) -- (c) node [midway, above] {$u$}; \draw [->] (b) -- (d) node [midway, below] {$v^*$}; \draw (a) -- (b); \draw (c) -- (d); }\right)\\ =& \tikz[baseline=-0.5ex]{ \node (a) at (0,1) [] {$B_-$}; \node (b) at (0,-1) [] {$\left(x_+e_{\beta(l)}^{-1}e_{-\beta(l)}\dots e_{\beta(1)}^{-1}e_{-\beta(1)}\right)^*B_+$}; \node (c) at (5,1) [] {$B_-\left(e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)^*$}; \node (d) at (5,-1) [] {$B_+$}; \draw [->] (a) -- (c) node [midway, above] {$u$}; \draw [->] (b) -- (d) node [midway, below] {$v^*$}; \draw (a) -- (b); \draw (c) -- (d); }\\ =&\tikz[baseline=-0.5ex]{
\node (a) at (0,1) [] {$\left(x_+e_{\beta(l)}^{-1}e_{-\beta(l)}\dots e_{\beta(1)}^{-1}e_{-\beta(1)}\right)^*\overline{w}^{-1}_0B_+$}; \node (b) at (0,-1) [] {$B_-\overline{w}_0\left(\left(x_+e_{\beta(l)}^{-1}e_{-\beta(l)}\dots e_{\beta(1)}^{-1}e_{-\beta(1)}\right)^{-1}\right)^*$}; \node (c) at (8,1) [] {$\left(\left(e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)^{-1}\right)^*\overline{w}^{-1}_0B_+$}; \node (d) at (8,-1) [] {$B_-\overline{w}_0\left(e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}x_-\right)^*$}; \draw [->] (a) -- (c) node [midway, above] {$u$}; \draw [->] (b) -- (d) node [midway, below] {$v^*$}; \draw (a) -- (b); \draw (c) -- (d); }. \end{align*} Note that the last result corresponds to the element \[ H\backslash\left(e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}xe_{\beta(l)}^{-1}e_{-\beta(l)}\dots e_{\beta(1)}^{-1}e_{-\beta(1)}\right)^t/H \] in $H\backslash G^{u,v}/H$. If we take a look at the string diagram associated to the reduced word $\vec{i}$, as we have argued in the proof of Proposition \ref{u tilt}, each time we multiply $e_{\alpha(k)}e_{-\alpha(k)}^{-1}$ on the left, all we are doing is changing the leftmost node from an opposite simple root to the corresponding simple root and then moving it to the middle (right after the last letter of $u$). Since $H\backslash G^{u,v}/H$ only corresponds to the non-frozen part of the seed $\vec{i}$, changing the leftmost node from an opposite simple root to a simple root does nothing to the quiver, and moving it to the middle is just a sequence of seed mutations, which gives rise to a corresponding sequence of cluster mutations on the cluster variety $\mathcal{X}^{u,v}$. A similar argument applies each time we multiply $e_{\beta(k)}^{-1}e_{-\beta(k)}$ on the right.
Then taking transposition flips the string diagram horizontally while simultaneously changing the nodes from simple roots to opposite simple roots and vice versa. Thus transposition gives rise to a seed isomorphism, which in turn produces a cluster isomorphism on $\mathcal{X}^{u,v}$. Lastly, we need to use move (1) of Proposition \ref{move} again to restore the original layout with the $u$ part on the left and the $v$ part on the right, which is again another sequence of cluster mutations. The following picture summarizes the idea: \begin{align*} \tikz{ \draw (0,1.5) -- (2,1.5) -- (3,0) -- (1,0) -- cycle; \draw (1,0) -- (2,1.5); \node at (1,0.75) [above] {$u$}; \node at (2,0.75) [below] {$v$}; }& \\ \bigg\downarrow \quad \quad \quad \quad&\begin{array}{l}\text{Multiplying $e_{\alpha(m)}e_{-\alpha(m)}^{-1}\dots e_{\alpha(1)}e_{-\alpha(1)}^{-1}$ on the left and $e_{\beta(l)}^{-1}e_{-\beta(l)}\dots e_{\beta(1)}^{-1}e_{-\beta(1)}$}\\ \text{on the right, which is a sequence of cluster mutations.} \end{array} \\ \tikz{ \draw (0,0) -- (2,0) -- (3,1.5) -- (1,1.5) -- cycle; \draw (1,1.5) -- (2,0); \node at (1,0.75) [below] {$u^{-1}$}; \node at (2,0.75) [above] {$v^{-1}$}; }& \\ \bigg\downarrow \quad \quad\quad \quad & \text{Transposition, which is a cluster isomorphism.} \\ \tikz{ \draw (0,0) -- (2,0) -- (3,1.5) -- (1,1.5) -- cycle; \draw (1,1.5) -- (2,0); \node at (1,0.75) [below] {$v$}; \node at (2,0.75) [above] {$u$}; }& \\ \bigg\downarrow \quad \quad \quad \quad&\text{Restoring the original layout, which is another sequence of cluster mutations.} \\ \tikz{ \draw (0,1.5) -- (2,1.5) -- (3,0) -- (1,0) -- cycle; \draw (1,0) -- (2,1.5); \node at (1,0.75) [above] {$u$}; \node at (2,0.75) [below] {$v$}; }& \end{align*} Combining these observations, we see that $\psi\circ \chi$ is indeed a composition of cluster mutations and cluster isomorphisms, which is by definition a cluster transformation.
\end{proof} \begin{rmk} Shen also pointed out that if we single out either the $u$ triangles or the $v$ triangles from the above diagram, the sequence of seed mutations also defines a reddening sequence in the sense of Keller \cite{KelDT}. \end{rmk} \section{Introduction} Cluster algebras were defined by Fomin and Zelevinsky in \cite{FZI}. Cluster varieties were introduced by Fock and Goncharov in \cite{FG}. They can be used to construct examples of 3d Calabi-Yau categories with stability conditions. One important object of study for such categories is their Donaldson-Thomas invariants, introduced by Kontsevich and Soibelman \cite{KS}, which generalize geometric invariants of Calabi-Yau manifolds. For a 3d Calabi-Yau category with stability condition constructed from a cluster variety, its Donaldson-Thomas invariants are encoded by a single formal automorphism on the corresponding cluster variety, which is also known as the Donaldson-Thomas transformation \cite{KS}. Keller \cite{KelDT} gave a combinatorial characterization of a certain class of Donaldson-Thomas transformations based on quiver mutation. Goncharov and Shen gave an equivalent definition of the Donaldson-Thomas transformations using tropical points of cluster varieties in \cite{GS}, which we use in this paper. Double Bruhat cells have been an important family of examples in the study of cluster algebras and cluster Poisson varieties since the very beginning of the subject. On the one hand, Berenstein, Fomin, and Zelevinsky \cite{BFZ} proved that the algebras of regular functions on double Bruhat cells in simply connected semisimple Lie groups are upper cluster algebras. On the other hand, Fock and Goncharov \cite{FGD} showed that double Bruhat cells in adjoint semisimple Lie groups are cluster Poisson varieties.
Furthermore, Fock and Goncharov proved in the same paper that the Poisson structure on the biggest double Bruhat cell, which is a Zariski open subset of the Lie group, coincides with the Poisson-Lie structure defined by Drinfeld in \cite{D}. These two constructions can be combined into a cluster ensemble in the sense of Fock and Goncharov \cite{FG}, which will play a central role in our construction of the Donaldson-Thomas transformation on the double quotient $H\backslash G^{u,v}/H$. Recall the flag variety $\mathcal{B}$ associated to a semisimple Lie group $G$. Generic configurations of flags were studied by Fock and Goncharov in \cite{FG06}. The cluster Donaldson-Thomas transformation of such configuration spaces was constructed by Goncharov and Shen in \cite{GS}. In this paper we make use of the configuration space of quadruples of flags with certain degeneracy conditions depending on a pair of Weyl group elements $(u,v)$, which we call $\mathrm{Conf}^{u,v}(\mathcal{B})$, and show that this configuration space is isomorphic to the quotient $H\backslash G^{u,v}/H$ of double Bruhat cells. We relate the Donaldson-Thomas transformation on $H\backslash G^{u,v}/H$ to an explicit automorphism of the configuration space $\mathrm{Conf}^{u,v}(\mathcal{B})$. \subsection{Main Result} Let $G$ be a semisimple Lie group. Fix a pair of opposite Borel subgroups $B_\pm$ of $G$ and let $H:=B_+\cap B_-$ be the corresponding maximal torus. Then with respect to these two Borel subgroups, $G$ admits two Bruhat decompositions \[ G=\bigsqcup_{w\in W}B_+wB_+=\bigsqcup_{w\in W}B_-wB_-, \] where $W$ denotes the Weyl group with respect to the maximal torus $H$. For a pair of Weyl group elements $(u,v)$, we define the \textit{double Bruhat cell} $G^{u,v}$ to be the intersection \[ G^{u,v}:=B_+uB_+\cap B_-vB_-. \] Let $\mathcal{B}$ be the flag variety associated to the semisimple Lie group $G$.
For any two Borel subgroups $B=xB_+x^{-1}$ and $B'=yB_+y^{-1}$ we write $\xymatrix{B \ar[r]^w & B'}$ if $x^{-1}y\in B_+wB_+$, and write $\xymatrix{B \ar@{-}[r] & B'}$ if $B$ and $B'$ are opposite Borel subgroups. Then the configuration space $\mathrm{Conf}^{u,v}(\mathcal{B})$ is defined to be the space of quadruples of Borel subgroups $(B_1,B_2,B_3,B_4)$ satisfying the relative condition \[ \xymatrix{B_1 \ar[r]^u \ar@{-}[d] & B_4 \ar@{-}[d] \\ B_3 \ar[r]_{v^*} & B_2} \] with $v^*:=w_0vw_0$, modulo the diagonal adjoint action by $G$. As it turns out, $\mathrm{Conf}^{u,v}(\mathcal{B})$ is naturally isomorphic to the double quotient of double Bruhat cells $H\backslash G^{u,v}/H$ (Proposition \ref{flag-bruhat}): \[ i:\mathrm{Conf}^{u,v}(\mathcal{B})\overset{\cong}{\longrightarrow} H\backslash G^{u,v}/H. \] We consider the following three maps involving the quotient $H\backslash G^{u,v}/H$ of double Bruhat cells and the configuration space $\mathrm{Conf}^{u,v}(\mathcal{B})$. \begin{enumerate} \item The \textit{twist map} \begin{align*} \eta:H\backslash G^{u,v}/H& \rightarrow H\backslash G^{u,v}/H \\ H\backslash x/H &\mapsto H\backslash\left(\left[\overline{u}^{-1}x\right]_-^{-1}\overline{u}^{-1} x \overline{v^{-1}}\left[x\overline{v^{-1}}\right]^{-1}_+\right)^t/H, \end{align*} where $x=[x]_-[x]_0[x]_+$ denotes the Gaussian decomposition and $\overline{w}$ denotes the lift of a Weyl group element $w$ defined by Equation \eqref{wbar} (see also \cite{FZ}, \cite{BFZ}, and \cite{GS}). \item The map \begin{align*} \eta:\mathrm{Conf}^{u,v}(\mathcal{B})&\rightarrow \mathrm{Conf}^{u,v}(\mathcal{B})\\ [B_1,B_2,B_3,B_4] &\mapsto [B_3^*,B_4^*,B_5^*,B_6^*] \end{align*} where $*$ denotes an involution on $G$ defined by \eqref{starinv} and the two new Borel subgroups $B_5$ and $B_6$ are uniquely determined by the following relative configurations with $u^c:=w_0u^{-1}$.
\[ \xymatrix{ & B_6\ar[r]^{u^c} \ar@{-}[drr] & B_1 \ar[dr]^u \ar@{-}[dll] & \\ B_3 \ar[ur]^{u^*} \ar[dr]_{v^*} \ar@{-}[drr] & & & B_4 \ar@{-}[dll] \\ & B_2 \ar[r]_{v^c} & B_5 \ar[ur]_v &} \] \item The composition \[ \chi\circ p\circ \psi\circ s:H\backslash G^{u,v}/H\dashrightarrow H\backslash G^{u,v}/H, \] where the maps are drawn from the following diagram where $(\mathcal{A}^{u,v}, \mathcal{X}^{u,v}, p)$ is the cluster ensemble associated to the pair of Weyl group elements $(u,v)$ (see Section \ref{cells}): \[ \xymatrix{ G_{sc}^{u,v} \ar@{-->}[r]^\psi \ar@<.5ex>[d] & \mathcal{A}^{u,v}\ar[d]^p & \\ H\backslash G^{u,v}/H \ar@<.5ex>[u]^s & \mathcal{X}^{u,v} \ar[r]^(0.4){\chi} & H\backslash G^{u,v}/H } \] In particular, \begin{itemize} \item $s$ is an arbitrary section of the natural projection $G_{sc}^{u,v}\rightarrow H\backslash G^{u,v}/H$; the resulting composition is independent of such choice; \item $\psi:G_{sc}^{u,v}\dasharrow \mathcal{A}^{u,v}$ comes from the result of Berenstein, Fomin, and Zelevinsky saying that $\mathcal{O}(G_{sc}^{u,v})$ is an upper cluster algebra \cite{BFZ}; \item $p:\mathcal{A}^{u,v}\rightarrow \mathcal{X}^{u,v}$ comes from Fock and Goncharov's theory of cluster ensemble\cite{FG}; \item $\chi:\mathcal{X}^{u,v}\rightarrow H\backslash G^{u,v}/H$ is given by Fock and Goncharov's amalgamation map \cite{FGD}. \end{itemize} \end{enumerate} Let $G_{ad}:=G/\text{Center}(G)$ be the adjoint group associated to $G$. Fock and Goncharov showed in \cite{FGD} that the double Bruhat cells $G_{ad}^{u,v}$ as well as their double quotients $H\backslash G^{u,v}/H$ are cluster Poisson varieties. We drop the subscript in the double quotient because forms of semisimple Lie groups with the same Lie algebra differ only by center elements, which are contained in the maximal torus $H$. Goncharov and Shen conjectured in \cite{GS} that the Donaldson-Thomas transformation on $H\backslash G^{u,v}/H$ is a slight modification of Fomin and Zelevinsky's twist map. 
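Since the twist map in (1) is phrased through the Gaussian decomposition $x=[x]_-[x]_0[x]_+$, it may help to keep a minimal example in mind. The following $SL_2$ computation is supplied purely for illustration and is not drawn from the constructions cited above:

```latex
% Illustrative example (ours, not from the cited sources):
% Gaussian decomposition of a generic x in SL_2 with a \neq 0.
\[
x=\begin{pmatrix} a & b \\ c & d \end{pmatrix}
 =\underbrace{\begin{pmatrix} 1 & 0 \\ c/a & 1 \end{pmatrix}}_{[x]_-}
  \underbrace{\begin{pmatrix} a & 0 \\ 0 & a^{-1} \end{pmatrix}}_{[x]_0}
  \underbrace{\begin{pmatrix} 1 & b/a \\ 0 & 1 \end{pmatrix}}_{[x]_+}.
\]
```

Multiplying the factors back, the $(2,2)$ entry is $\tfrac{c}{a}\cdot b+a^{-1}=\tfrac{bc+1}{a}=d$ by $ad-bc=1$, so the factorization is consistent. Note that the decomposition exists only on the dense open locus $a\neq 0$; Gaussian decomposition is in general defined only on a dense open subset, which is consistent with the dashed (rational) arrows in the diagrams.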
The precise statement is the following, which is the main result of this paper. \begin{thm}\label{main} Let $G$ be a semisimple Lie group and let $(u,v)$ be a pair of Weyl group elements. \begin{enumerate}[label=(\alph*)] \item The maps (1), (2), and (3) defined above are rationally equivalent. Precisely: \begin{enumerate}[label=(\roman*)] \item the isomorphism $i$ between $H\backslash G^{u,v}/H$ and $\mathrm{Conf}^{u,v}(\mathcal{B})$ intertwines the maps (1) and (2); \item the maps (1) and (3) are rationally equivalent. \end{enumerate} \item The Donaldson-Thomas transformation of the cluster Poisson variety $H\backslash G^{u,v}/H$ is a cluster transformation. It is given by any of the maps (1), (2), and (3), which agree by part (a). \end{enumerate} \end{thm} Notice that the expression of the twist map in (1) above, modulo the double quotients, differs from Fomin and Zelevinsky's original version of the twist map (Definition 1.5 in \cite{FZ}) only by an anti-automorphism $x\mapsto x^\iota$ on $G$, which is defined on the generators by \[ e_{\pm \alpha}^\iota=e_{\pm \alpha} \quad \quad \text{and} \quad \quad h^\iota=h^{-1} \quad \forall h\in H. \] We show in Subsection \ref{3.3} that the anti-automorphism $x\mapsto x^\iota$ coincides with the involution $i_\mathcal{X}$ introduced by Goncharov and Shen, and hence Fomin and Zelevinsky's original version of the twist map in turn coincides with Goncharov and Shen's involution $D_\mathcal{X}$, which is defined as $D_\mathcal{X}:=i_\mathcal{X}\circ \mathrm{DT}$ (see Section 1.6 in \cite{GS}). The special case of double Bruhat cells in $\mathrm{GL}_n$ was solved in another paper of the author \cite{WGL}; however, the bipartite graph method, which was also used in the computation of the Donaldson-Thomas transformation of the Grassmannian \cite{W}, does not apply in the general case of double Bruhat cells in semisimple Lie groups. The following is an important application of our main result.
We proved in our main theorem that the Donaldson-Thomas transformation of the cluster Poisson variety $H\backslash G^{u,v}/H$ is a cluster transformation. Combined with the work of Gross, Hacking, Keel, and Kontsevich (Theorem 0.10 of \cite{GHKK}), our result proves Fock and Goncharov's conjecture \cite{FG} in the case of $H\backslash G^{u,v}/H$. \subsection{Structure of the Paper} We divide the rest of the paper into two sections. Section 2 contains all the preliminaries necessary for our proof of the main theorem. Subsection \ref{bruhat} introduces double Bruhat cells $G^{u,v}$ and related structures, most of which are similar to the original work of Fomin and Zelevinsky \cite{FZ} and that of Fock and Goncharov \cite{FGD}. Subsection \ref{2.2} describes a link between double Bruhat cells and configurations of quadruples of Borel subgroups, which is the key to proving the cluster nature of our candidate map for the Donaldson-Thomas transformation; this link and the proof of the cluster nature are both credited to Shen. Subsections \ref{cluster} and \ref{1.4} review Fock and Goncharov's theory of cluster ensembles and tropicalization, the main source of reference for which is \cite{FG}. Subsection \ref{cells} focuses on the cluster structures related to the main object of our study, namely the double Bruhat cells $G^{u,v}$, the main sources of reference for which are \cite{FGD} and \cite{BFZ}. Section 3 contains the proof of our main theorem. Subsection \ref{3.1} uses the cluster ensemble to rewrite the twist map on $H\backslash G^{u,v}/H$. Subsection \ref{3.2} constructs the cluster Donaldson-Thomas transformation on $H\backslash G^{u,v}/H$ and proves that it satisfies the two defining properties of the cluster Donaldson-Thomas transformation, the latter of which is due to a private conversation with Shen.
Subsection \ref{3.3} summarizes the relations among all the maps we have discussed thus far in the paper, which include our version of the twist map, Fomin and Zelevinsky's version of the twist map, the cluster Donaldson-Thomas transformation on $H\backslash G^{u,v}/H$, and so on. \subsection{Acknowledgements} The author is deeply grateful to his advisor Alexander Goncharov for his enlightening guidance and strong encouragement during the process of solving this problem, as well as his detailed advice in the revision of the paper. The author would also like to thank both Alexander Goncharov and Linhui Shen for their help in understanding the relation between the configuration space of flags and double Bruhat cells, and for their inspiring idea of proving the cluster nature of our candidate map using configurations of quadruples of Borel subgroups. This paper uses many ideas from the work of Goncharov \cite{Gon}, the work of Fock and Goncharov \cite{FG}, \cite{FGD}, \cite{FGI}, the work of Goncharov and Shen \cite{GS}, and the work of Fomin and Zelevinsky \cite{FZ}; without the work of all these pioneers in the field this paper would not have been possible. \section{Preliminaries} \input{bruhat} \input{borel} \input{cluster} \input{tropical} \input{cells} \section{Proof of Our Main Results} \input{twist} \input{dt} \input{relation} \subsection{Relation to Fomin and Zelevinsky's Twist Map} \label{3.3} If we compare our version of the twist map to the original one defined by Fomin and Zelevinsky (Definition 1.5 of \cite{FZ}), we see that they differ by an anti-automorphism $x\mapsto x^\iota$ on $G$, which is uniquely defined by (Equation (2.2) of \cite{FZ}): \[ e_{\pm i}^\iota=e_{\pm i} \quad \quad \text{and} \quad \quad a^\iota=a^{-1} \quad \forall a\in H. \] As it turns out, this anti-involution also has a cluster counterpart, which is the involution $i_\mathcal{X}$ introduced by Fock and Goncharov in \cite{FG}. We shall explain their relation in more detail now.
Given any seed $\vec{i}=(I,I_0, \epsilon, d)$, we can define a new seed $\vec{i}^\circ=(I^\circ,I_0^\circ, \epsilon^\circ, d^\circ)$ by setting \[ I^\circ:=I, \quad I_0^\circ:=I_0, \quad \epsilon^\circ:=-\epsilon, \quad \text{and} \quad d^\circ=d. \] From the identification $I^\circ=I$ we get a natural correspondence between coordinates $(X_a)$ of $\mathcal{X}_\vec{i}$ and $(X^\circ_a)$ of $\mathcal{X}_{\vec{i}^\circ}$. Then the involution $i_\mathcal{X}$ is the map from the seed torus $\mathcal{X}_\vec{i}$ to the seed torus $\mathcal{X}_{\vec{i}^\circ}$ defined by \[ i_\mathcal{X}^*(X_a^\circ)=X_a^{-1}. \] It is not hard to see that the involutions $i_\mathcal{X}$ commute with cluster mutations as below. \[ \xymatrix{ \mathcal{X}_\vec{i} \ar@{-->}[r]^{\mu_k} \ar[d]_{i_\mathcal{X}} & \mathcal{X}_{\vec{i}'} \ar[d]^{i_\mathcal{X}} \\ \mathcal{X}_{\vec{i}^\circ}\ar@{-->}[r]_{\mu_k} & \mathcal{X}_{\vec{i}'^\circ}} \] Thus the involutions $i_\mathcal{X}$ can be glued into a single involution $i_\mathcal{X}:\mathcal{X}^{u,v} \rightarrow \mathcal{X}^{u,v}$. The proposition below shows that in the case of double Bruhat cells $G^{u,v}$, the involution $i_\mathcal{X}$ is essentially the same as the anti-automorphism $x\mapsto x^\iota$. \begin{prop} Let $\vec{i}$ be any reduced word for a pair of Weyl group elements $(u,v)$. Then $(u^{-1},v^{-1})$ is also a pair of Weyl group elements and $\vec{i}^\circ$ is a reduced word for it. Further the following diagram commutes. \[ \xymatrix{\mathcal{X}^{u,v} \ar[r]^(0.4)\chi \ar[d]_{i_\mathcal{X}} & H\backslash G^{u,v}/H \ar[d]^{\iota} \\ \mathcal{X}^{u,v}\ar[r]_(0.3)\chi & H\backslash G^{u^{-1},v^{-1}}/H} \] \end{prop} \begin{proof} We claim that if $\vec{i}$ is the seed associated to the reduced word $\vec{i}$, then $\vec{i}^\circ$ is the seed associated to the reduced word $\vec{i}^\circ$ which is obtained by reversing the order of the letters. 
Such a claim can be seen by a horizontal flip of the picture of the seed $\vec{i}$. Note that such a flip also reverses the order of multiplication for the map $\psi$, and the cluster involution $i_\mathcal{X}$ maps $X_a$ to $X_a^{-1}$, which is precisely what the definition of the anti-involution $\iota$ says. \end{proof} As stated in Conjecture 3.12 of \cite{GS}, Goncharov and Shen conjectured that the composition $D_\mathcal{X}:=i_\mathcal{X}\circ \mathrm{DT}$ is an involution. In the case of $H\backslash G^{u,v}/H$, we see that $D_\mathcal{X}$ is precisely Fomin and Zelevinsky's twist map $T$ (passed to the double quotients), which is indeed a biregular involution (Theorem 1.6 of \cite{FZ}). To summarize, we put everything into the following commutative diagram. \[ \xymatrix{& G_{sc}^{u,v} \ar[d] \ar@{-->}[r]^\psi & \mathcal{A}^{u,v} \ar[d]^(0.6){p} & \\ \mathcal{X}^{u,v} \ar@{-->}@/^6ex/[rr]^(0.3){\mathrm{DT}} \ar[r]^(0.4)\chi \ar[drr]_{D_\mathcal{X}}& H\backslash G^{u,v}/H \ar@{-->}[r]^(0.6){\psi} \ar[drr]_(0.3){T} \ar@{-->}@/^6ex/[rr]^(0.7){\eta} & \mathcal{X}^{u,v} \ar[r]^(0.4)\chi \ar[d]_(0.6){i_\mathcal{X}} & H \backslash G^{u,v}/H \ar[d]^\iota \\ & & \mathcal{X}^{u^{-1},v^{-1}}\ar[r]_(0.3){\chi} & H\backslash G^{u^{-1},v^{-1}}/H} \] \subsection{Tropicalization}\label{1.4} One important feature of the theory of cluster ensembles is that all the maps present in the construction (cluster transformations and the $p$ map) are positive, which enables us to tropicalize a cluster ensemble. For the rest of this subsection we will make this statement precise, and use the tropical language to define the cluster Donaldson-Thomas transformation. Let's start with the definition of tropicalization. Consider a split algebraic torus $\mathcal{X}$.
The semiring of \textit{positive rational functions} on $\mathcal{X}$, which we denote as $P(\mathcal{X})$, is the semiring consisting of elements of the form $f/g$ where $f$ and $g$ are linear combinations of characters on $\mathcal{X}$ with positive integral coefficients. A rational map $\phi:\mathcal{X}\dashrightarrow \mathcal{Y}$ between two split algebraic tori is said to be \textit{positive} if it induces a semiring homomorphism $\phi^*:P(\mathcal{Y})\rightarrow P(\mathcal{X})$. It then follows that the composition of positive rational maps is again a positive rational map. One typical example of a positive rational map is a cocharacter $\chi$ of a split algebraic torus $\mathcal{X}$: the induced map $\chi^*$ pulls back an element $f/g\in P(\mathcal{X})$ to $\frac{\langle f, \chi\rangle}{\langle g,\chi\rangle}$ in $P(\mathbb{C}^*)$, where $\langle f, \chi\rangle$ and $\langle g,\chi\rangle$ are understood as linear extensions of the canonical pairing between characters and cocharacters with values in powers of $z$. We will denote the lattice of cocharacters of a split algebraic torus $\mathcal{X}$ by $\mathcal{X}^t$ for reasons that will become clear in a moment. Note that $P(\mathbb{C}^*)$ is the semiring of rational functions in a single variable $z$ with positive integral coefficients. Thus if we let $\mathbb{Z}^t$ be the semiring $(\mathbb{Z}, \max, +)$, then there is a semiring homomorphism $\deg_z:P(\mathbb{C}^*)\rightarrow \mathbb{Z}^t$ defined by $f(z)/g(z)\mapsto \deg_zf-\deg_zg$. Therefore a cocharacter $\chi$ on $\mathcal{X}$ gives rise to a natural semiring homomorphism \[ \deg_z \langle \cdot, \chi\rangle:P(\mathcal{X})\rightarrow \mathbb{Z}^t. \] \begin{prop} The map $\chi\mapsto \deg_z\langle \cdot, \chi\rangle$ is a bijection between the lattice of cocharacters and the set of semiring homomorphisms from $P(\mathcal{X})$ to $\mathbb{Z}^t$.
\end{prop} \begin{proof} Note that $P(\mathcal{X})$ is a free commutative semiring generated by any basis of the lattice of characters, and in particular by any choice of coordinates $(X_i)_{i=1}^r$. Therefore to define a semiring homomorphism from $P(\mathcal{X})$ to $\mathbb{Z}^t$ we just need to assign to each $X_i$ some integer $a_i$. But for any such $r$-tuple $(a_i)$ there exists a unique cocharacter $\chi$ such that $\langle X_i,\chi\rangle=z^{a_i}$. Therefore $\chi\mapsto \deg_z\langle \cdot, \chi\rangle$ is indeed a bijection. \end{proof} \begin{cor} A positive rational map $\phi:\mathcal{X}\dashrightarrow \mathcal{Y}$ between split algebraic tori gives rise to a natural map $\phi^t:\mathcal{X}^t\rightarrow \mathcal{Y}^t$ between the respective lattices of cocharacters. \end{cor} \begin{proof} Note that $\phi$ induces a semiring homomorphism $\phi^*:P(\mathcal{Y})\rightarrow P(\mathcal{X})$. Therefore for any cocharacter $\chi$ of $\mathcal{X}$, the map $f\mapsto \deg_z\langle \phi^*f,\chi\rangle$ is a semiring homomorphism from $P(\mathcal{Y})$ to $\mathbb{Z}^t$. By the above proposition there is a unique cocharacter $\eta$ of $\mathcal{Y}$ representing this semiring homomorphism, and we set $\phi^t(\chi)=\eta$. \end{proof} We also want to give an explicit way to compute the induced map $\phi^t$. Fix two coordinate charts $(X_i)$ on $\mathcal{X}$ and $(Y_j)$ on $\mathcal{Y}$. Then $(X_i)$ gives rise to a basis $\{\chi_i\}$ of the lattice of cocharacters $\mathcal{X}^t$, which is defined by \[ \chi_i^*(X_k):=\left\{\begin{array}{ll} z & \text{if $k=i$;} \\ 1 & \text{if $k\neq i$.} \end{array}\right. \] This basis allows us to write each cocharacter $\chi$ of $\mathcal{X}$ as a linear combination $\sum x_i\chi_i$. It is not hard to see that \[ x_i=\deg_z\langle X_i, \chi\rangle.
\] Similarly the coordinate chart $(Y_j)$ also gives rise to a basis $\{\eta_j\}$ of the lattice of cocharacters $\mathcal{Y}^t$, and we can write each cocharacter of $\mathcal{Y}$ as a linear combination $\sum y_j\eta_j$. On the other hand, for any positive rational function $q$ in $r$ variables $X_1, \dots, X_r$ we have the so-called \textit{na\"{i}ve tropicalization}, which turns $q$ into a map from $\mathbb{Z}^r$ to $\mathbb{Z}$ via the following process: \begin{enumerate} \item replace addition in $q(X_1,\dots, X_r)$ by taking maximum; \item replace multiplication in $q(X_1,\dots, X_r)$ by addition; \item replace division in $q(X_1,\dots, X_r)$ by subtraction; \item replace every constant by zero; \item replace $X_i$ by $x_i$. \end{enumerate} It is not hard to see that, given a positive rational map $\phi:\mathcal{X}\dashrightarrow\mathcal{Y}$, the induced map $\phi^t$ maps $\sum x_i\chi_i$ to $\sum y_j\eta_j$ where \begin{equation}\label{cocharacter} y_j:=(\phi^*(Y_j))^t(x_i). \end{equation} Now we are ready to define tropicalization. \begin{defn} The \textit{tropicalization} of a split algebraic torus $\mathcal{X}$ is defined to be its lattice of cocharacters $\mathcal{X}^t$ (and hence the notation). For a positive rational map $\phi:\mathcal{X}\dashrightarrow \mathcal{Y}$ between split algebraic tori, the \textit{tropicalization} of $\phi$ is defined to be the map $\phi^t:\mathcal{X}^t\rightarrow \mathcal{Y}^t$. The basis elements $\{\chi_i\}$ of $\mathcal{X}^t$ corresponding to a coordinate system $(X_i)$ on $\mathcal{X}$ are called the \textit{basic laminations} associated to $(X_i)$. \end{defn} Now let's go back to the cluster varieties $\mathcal{A}_{|\vec{i}_0|}$ and $\mathcal{X}_{|\vec{i}_0|}$.
Since both cluster varieties are obtained by gluing seed tori via positive birational equivalences, we can tropicalize everything and obtain two new glued objects which we call \textit{tropicalized cluster varieties} and denote as $\mathcal{A}_{|\vec{i}_0|}^t$ and $\mathcal{X}_{|\vec{i}_0|}^t$. Since each seed $\mathcal{X}$-torus $\mathcal{X}_\vec{i}$ is a split algebraic torus, it has a set of basic laminations associated to the canonical coordinates $(X_i)$; we will call these the \textit{positive basic $\mathcal{X}$-laminations} and denote them as $l_i^+$. Note that $\{-l_i^+\}$ is also a set of basic laminations on $\mathcal{X}_\vec{i}$; these will be called the \textit{negative basic $\mathcal{X}$-laminations} and denoted as $l_i^-$. With all the terminology developed, we can now state the definition of Goncharov and Shen's cluster Donaldson-Thomas transformation as follows. \begin{defn}[Definition 2.15 in \cite{GS}] A \textit{cluster Donaldson-Thomas transformation} (of a seed $\mathcal{X}$-torus $\mathcal{X}_\vec{i}$) is a cluster transformation $\mathrm{DT}:\mathcal{X}_\vec{i}\dashrightarrow \mathcal{X}_\vec{i}$ whose tropicalization $\mathrm{DT}^t:\mathcal{X}_\vec{i}^t\rightarrow \mathcal{X}_\vec{i}^t$ maps each positive basic $\mathcal{X}$-lamination $l_i^+$ to its corresponding negative basic $\mathcal{X}$-lamination $l_i^-$. \end{defn} Goncharov and Shen proved that a cluster Donaldson-Thomas transformation enjoys the following properties. \begin{thm} [Goncharov-Shen, Theorem 2.16 in \cite{GS}] \label{gs} A cluster Donaldson-Thomas transformation $\mathrm{DT}:\mathcal{X}_\vec{i}\rightarrow \mathcal{X}_\vec{i}$ is unique if it exists.
If $\vec{i}'$ is another seed in $|\vec{i}|$ (the collection of seeds mutation equivalent to $\vec{i}$) and $\tau:\mathcal{X}_\vec{i}\rightarrow \mathcal{X}_{\vec{i}'}$ is a cluster transformation, then the conjugate $\tau \mathrm{DT} \tau^{-1}$ is the cluster Donaldson-Thomas transformation of $\mathcal{X}_{\vec{i}'}$. Therefore it makes sense to say that the cluster Donaldson-Thomas transformation $\mathrm{DT}$ exists on a cluster $\mathcal{X}$-variety without referring to any one specific seed $\mathcal{X}$-torus. \end{thm} From our discussion on tropicalization above, we can translate the definition of a cluster Donaldson-Thomas transformation into the following equivalent one, which we will use to prove our main theorem. \begin{prop}\label{lem0} A cluster transformation $\mathrm{DT}:\mathcal{X}_{|\vec{i}_0|}\rightarrow \mathcal{X}_{|\vec{i}_0|}$ is a cluster Donaldson-Thomas transformation if and only if on one (and hence any) seed $\mathcal{X}$-torus $\mathcal{X}_\vec{i}$ with cluster coordinates $(X_i)$, we have \[ \deg_{X_i}\mathrm{DT}^*(X_j)=-\delta_{ij} \] where $\delta_{ij}$ denotes the Kronecker delta. \end{prop} \begin{proof} From Equation \eqref{cocharacter} we see that $\mathrm{DT}^t(l_i^+)= l_i^-$ for all $i$ if and only if $\deg_{X_i}\mathrm{DT}^*(X_j)=-\delta_{ij}$.
\end{proof} \subsection{Rewriting the Twist Map in Cluster Language}\label{3.1} So far we have seen the twist map (a slight modification of Fomin and Zelevinsky's original one in \cite{FZ}) in two different forms: on the one hand, we can write it using Gaussian decomposition \begin{align*} \eta: H\backslash G^{u,v}/H&\rightarrow H\backslash G^{u,v}/H\\ H\backslash x/H & \mapsto H\backslash \left(\left[\overline{u}^{-1}x\right]_-^{-1}\overline{u}^{-1}x\overline{v^{-1}}\left[x\overline{v^{-1}}\right]_+^{-1}\right)^t/H; \end{align*} on the other hand, if we identify the double quotient $H\backslash G^{u,v}/H$ with the configuration space $\mathrm{Conf}^{u,v}(\mathcal{B})$, the twist map can be rewritten as \begin{align*} \eta:\mathrm{Conf}^{u,v}(\mathcal{B})&\rightarrow \mathrm{Conf}^{u,v}(\mathcal{B})\\ [B_1,B_2,B_3,B_4]&\mapsto [B_3,B_4,B_5,B_6]^* \end{align*} where the six Borel subgroups fit into the following hexagon diagram (Proposition \ref{twistflag}). \[ \xymatrix{ & B_6\ar[r]^{u^c} \ar@{-}[drr] & B_1 \ar[dr]^u \ar@{-}[dll] & \\ B_3 \ar[ur]^{u^*} \ar[dr]_{v^*} \ar@{-}[drr] & & & B_4 \ar@{-}[dll] \\ & B_2 \ar[r]_{v^c} & B_5 \ar[ur]_v &} \] In this subsection, we will rewrite the twist map $\eta$ once again using the cluster ensemble $(\mathcal{A}^{u,v},\mathcal{X}^{u,v},p)$ and the maps $\psi$ and $\chi$ we constructed in the last subsection, and prove part (a).(ii) of our main theorem; we copy down that part of the statement as follows.
\begin{prop}\label{clustertwist} The twist map $\eta:H\backslash G^{u,v}/H\dashrightarrow H\backslash G^{u,v}/H$ is rationally equivalent to the following composition of maps: \[ \xymatrix{G_{sc}^{u,v} \ar@{-->}[r]^\psi \ar[d] & \mathcal{A}^{u,v} \ar[d]^p & \\ H\backslash G^{u,v}/H & \mathcal{X}^{u,v} \ar[r]^(0.4){\chi} & H\backslash G^{u,v}/H} \] where $G_{sc}^{u,v}\rightarrow H\backslash G^{u,v}/H$ is the quotient map $G_{sc}^{u,v}\rightarrow H\backslash G_{sc}^{u,v}/H\cong H\backslash G^{u,v}/H$, and we go against this map by taking a lift at the very first step. In particular, the resulting map does not depend on the lift that we take. \end{prop} The key to proving this proposition is the following two lemmas. \begin{lem} If $x\in B_+\cap B_-vB_-$ and $(\alpha(1),\dots, \alpha(n))$ is a reduced word of $v$, then \[ \left[x\overline{v^{-1}}\right]_-^t=\prod_{i=1}^n e_{\alpha(i)}(t_i) \] where \[ t_i:=\frac{\prod_{\beta\neq \alpha(i)} \left(\Delta_\beta\left(x\overline{\left(v_{>i}\right)^{-1}}\right)\right)^{-C_{\beta\alpha(i)}}}{\Delta_{\alpha(i)}\left(x\overline{\left(v_{>i}\right)^{-1}}\right)\Delta_{\alpha(i)}\left(x\overline{\left(v_{>i-1}\right)^{-1}}\right)}. \] \end{lem} \begin{proof} We prove this by induction on $n$. There is nothing to show when $n=0$. For $n>0$, we may assume without loss of generality that \[ x=\left(\prod_\beta a_\beta^{H_\beta} \right)e_{\alpha(1)}(p_1)\dots e_{\alpha(n)}(p_n). \] Then it is obvious that for any simple root $\beta$, \[ \Delta_\beta(x)=a_\beta. \] Next let's consider $x\overline{\left(v_{>n-1}\right)^{-1}}=x\overline{s}_{\alpha(n)}$. By using the identity \[ e_\beta(p)\overline{s}_\beta=e_{-\beta}(p^{-1})p^{H_\beta}e_\beta(-p^{-1}) \] we get \[ x\overline{s}_{\alpha(n)}=\left(\prod_\beta a_\beta^{H_\beta} \right)e_{\alpha(1)}(p_1)\dots e_{\alpha(n-1)}(p_{n-1})e_{-\alpha(n)}(p_n^{-1})p_n^{H_{\alpha(n)}}e_{\alpha(n)}(-p_n^{-1}).
\] Now our mission is to move the factor $e_{-\alpha(n)}(p_n^{-1})p_n^{H_{\alpha(n)}}$ all the way to the front so that we can use induction. The way to do this is to use the fact that $e_\beta(p)$ commutes with $e_{-\alpha}(q)$ whenever $\beta\neq \alpha$, plus the following identities (the first one is identity \eqref{e_+e_-} and the second one comes from the Lie algebra identity $[H_\alpha, E_\beta]=C_{\alpha\beta}E_\beta$): \begin{align*} e_\beta(q)e_{-\beta}(p)=&e_{-\beta}\left(\frac{p}{1+pq}\right)(1+pq)^{H_\beta} e_\beta\left(\frac{q}{1+pq}\right);\\ e_\beta(q)a^{H_\alpha}=&a^{H_\alpha}e_\beta\left(a^{-C_{\alpha\beta}}q\right). \end{align*} Combining these facts we see that \[ e_\beta(q)e_{-\alpha(n)}(p^{-1})p^{H_{\alpha(n)}}=\left\{\begin{array}{ll} e_{-\alpha(n)}(p^{-1})p^{H_{\alpha(n)}}e_\beta(\cdots) & \text{if $\beta\neq \alpha(n)$;} \\ e_{-\alpha(n)}\left(\frac{1}{p+q}\right)(p+q)^{H_{\alpha(n)}}e_\beta(\cdots) & \text{if $\beta=\alpha(n)$.} \end{array}\right. \] Using the above identity recursively, we get \[ x\overline{s}_{\alpha(n)}=e_{-\alpha(n)}\left(\frac{\prod_\beta a_\beta^{-C_{\beta \alpha(n)}}}{\sum_{\alpha(i_k)=\alpha(n)} p_{i_k}}\right)\left(\sum_{\alpha(i_k)=\alpha(n)} p_{i_k}\right)^{H_{\alpha(n)}}\left(\prod_\beta a_\beta^{H_\beta}\right)e_{\alpha(1)}(\cdots)\dots e_{\alpha(n-1)}(\cdots)e_{\alpha(n)}(-p_n^{-1}). \] Thus it follows that \[ \Delta_{\alpha(n)}\left(x\overline{s}_{\alpha(n)}\right)=a_{\alpha(n)}\sum_{\alpha(i_k)=\alpha(n)} p_{i_k}. \] Note that if we define $t_n:=\frac{\prod_\beta a_\beta^{-C_{\beta \alpha(n)}}}{\sum_{\alpha(i_k)=\alpha(n)} p_{i_k}}$, then it follows that \[ t_n=\frac{\prod_{\beta\neq \alpha(n)} a_\beta^{-C_{\beta\alpha(n)}}}{a_{\alpha(n)}^2\sum_{\alpha(i_k)=\alpha(n)}p_{i_k}}=\frac{\prod_{\beta\neq \alpha(n)} \left(\Delta_\beta (x)\right)^{-C_{\beta\alpha(n)}}}{\Delta_{\alpha(n)}(x)\Delta_{\alpha(n)}\left(x\overline{s}_{\alpha(n)}\right)}.
\] On the other hand, if we define $v':=vs_{\alpha(n)}$ and \begin{align*} x':=&e_{-\alpha(n)}\left(-t_n\right)x\overline{s}_{\alpha(n)}e_{\alpha(n)}(p_n^{-1})\\ =&\left(\sum_{\alpha(i_k)=\alpha(n)} p_{i_k}\right)^{H_{\alpha(n)}}\left(\prod_\beta a_\beta^{H_\beta}\right)e_{\alpha(1)}(\cdots)\dots e_{\alpha(n-1)}(\cdots), \end{align*} then it follows that $x'\in B_+\cap B_-v'B_-$ and \[ \left[x\overline{v^{-1}}\right]_-=e_{-\alpha(n)}\left(t_n\right)\left[x'\overline{v'^{-1}}\right]_-. \] Note that $l(v')=l(v)-1$; hence we can use induction to finish the proof. The only remaining thing one needs to realize is that \[ \Delta_\beta\left(x'\overline{\left(v'_{>l}\right)^{-1}}\right)=\Delta_\beta\left(e_{-\alpha(n)}(-t_n)x\overline{s}_{\alpha(n)}e_{\alpha(n)}(p_n^{-1})\overline{\left(v'_{>l}\right)^{-1}}\right)=\Delta_\beta\left(x\overline{\left(v_{>l}\right)^{-1}}\right). \] The last equality holds because $v'_{>l}(\alpha(n))>0$ and hence \[ e_{\alpha(n)}(p_n^{-1})\overline{\left(v'_{>l}\right)^{-1}}=\overline{\left(v'_{>l}\right)^{-1}}n_+ \] for some unipotent element $n_+\in N_+$. \end{proof} Analogously, one can also prove the following lemma. \begin{lem} If $x\in B_+uB_+\cap B_-$ and $(\alpha(1),\dots, \alpha(m))$ is a reduced word of $u$, then \[ \left[\overline{u}^{-1}x\right]_+^t=\prod_{i=1}^m e_{-\alpha(i)}(t_i) \] where \[ t_i:=\frac{\prod_{\beta\neq \alpha(i)} \left(\Delta_\beta\left(\overline{u_{< i}}^{-1}x\right)\right)^{-C_{\beta\alpha(i)}}}{\Delta_{\alpha(i)}\left(\overline{u_{< i}}^{-1}x\right)\Delta_{\alpha(i)}\left(\overline{u_{< i+1}}^{-1}x\right)}. 
\] \end{lem} \noindent\textit{Proof of Proposition \ref{clustertwist}.} Since every seed torus is a Zariski open subset of the corresponding cluster variety and we only care about rational equivalence, we can reduce Proposition \ref{clustertwist} to proving that the twist map is rationally equivalent to the composition \[ \xymatrix{G_{sc}^{u,v} \ar@{-->}[r]^{\psi_\vec{i}} \ar[d] & \mathcal{A}_\vec{i} \ar[d]^{p_\vec{i}} & \\ H\backslash G^{u,v}/H & \mathcal{X}_\vec{i} \ar[r]^(0.3){\chi_\vec{i}} & H\backslash G^{u,v}/H} \] for some nice reduced word $\vec{i}$ of the pair of Weyl group elements $(u,v)$. Our choice of reduced word for this proof is any reduced word $\vec{i}:=(\alpha(1),\dots, \alpha(n), \alpha(n+1),\dots, \alpha(l))$ satisfying the condition that $(\alpha(1),\dots, \alpha(n))$ is a reduced word of $v$ (so all the letters $\alpha(1),\dots, \alpha(n)$ are simple roots) and $-(\alpha(n+1),\dots, \alpha(l))$ is a reduced word of $u$ (so all the letters $\alpha(n+1),\dots, \alpha(l)$ are opposite to simple roots). In other words, our reduced word $\vec{i}$ can be broken down into two parts: the $v$ part and the $u$ part, as depicted below \[ \vec{i}:=(\underbrace{\alpha(1),\dots, \alpha(n)}_\text{$v$ part}, \underbrace{\alpha(n+1),\dots, \alpha(l)}_\text{$u$ part}). \] Let's ignore the lifting at the very first step for now. Suppose we start with an element $x\in G_{sc}^{u,v}$. As the space of Gaussian decomposable elements $G_0$ is dense in $G_{sc}^{u,v}$ (see \cite{FZ} Proposition 2.14), we may further assume that $x$ is Gaussian decomposable, i.e., $x=[x]_-[x]_0[x]_+$. In particular, from the assumption that $x\in G_{sc}^{u,v}$ we know that $[x]_-[x]_0\in B_+u B_+\cap B_-$ and $[x]_0[x]_+\in B_+\cap B_-vB_-$.
Now the twist map $\eta$ maps $H\backslash x/H$ to \begin{align*} H\backslash \left(\left[\overline{u}^{-1}x\right]_-^{-1}\overline{u}^{-1}x\overline{v^{-1}}\left[x\overline{v^{-1}}\right]_+^{-1}\right)^t/H=& H\backslash\left(\left[\overline{u}^{-1}[x]_-[x]_0\right]_-^{-1}\overline{u}^{-1}[x]_-[x]_0[x]_+\overline{v^{-1}}\left[[x]_0[x]_+\overline{v^{-1}}\right]_+^{-1}\right)^t/H\\ =&H\backslash \left(\left[\overline{u}^{-1}[x]_-[x]_0\right]_+[x]_0^{-1}\left[[x]_0[x]_+\overline{v^{-1}}\right]_-\right)^t/H\\ =& H\backslash \left(\left[[x]_0[x]_+\overline{v^{-1}}\right]_-^t [x]_0^{-1} \left[\overline{u}^{-1}[x]_-[x]_0\right]_+^t\right)/H. \end{align*} By applying the two lemmas we had above, we see that the element in the middle can be rewritten as \[ \left(\prod_{i=1}^n e_{\alpha(i)}(t_i)\right)[x]_0^{-1}\left(\prod_{i=n+1}^l e_{\alpha(i)}(t_i)\right) \] where \[ t_i:=\left\{\begin{array}{ll} \displaystyle\frac{\prod_{\beta\neq \alpha(i)} \left(\Delta_\beta\left([x]_0[x]_+\overline{\left(v_{>i}\right)^{-1}}\right)\right)^{-C_{\beta\alpha(i)}}}{\Delta_{\alpha(i)}\left([x]_0[x]_+\overline{\left(v_{>i}\right)^{-1}}\right)\Delta_{\alpha(i)}\left([x]_0[x]_+\overline{\left(v_{>i-1}\right)^{-1}}\right)} & \text{if $1\leq i\leq n$}; \\ \displaystyle\frac{\prod_{\beta\neq \alpha(i)} \left(\Delta_\beta\left(\overline{u_{< i}}^{-1}[x]_-[x]_0\right)\right)^{-C_{\beta\alpha(i)}}}{\Delta_{\alpha(i)}\left(\overline{u_{< i}}^{-1}[x]_-[x]_0\right)\Delta_{\alpha(i)}\left(\overline{u_{< i+1}}^{-1}[x]_-[x]_0\right)} & \text{if $n+1\leq i\leq l$}. \end{array}\right. 
\] But then since $\Delta_\gamma \left([x]_0[x]_+\overline{\left(v_{>i}\right)^{-1}}\right)=\Delta_\gamma \left(x\overline{\left(v_{>i}\right)^{-1}}\right)$ and $\Delta_\gamma\left(\overline{u_{< i}}^{-1}[x]_-[x]_0\right)=\Delta_\gamma\left(\overline{u_{< i}}^{-1}x\right)$ for any $\gamma$, we can rewrite the $t_i$'s as \[ t_i:=\left\{\begin{array}{ll} \displaystyle\frac{\prod_{\beta\neq \alpha(i)} \left(\Delta_\beta\left(x\overline{\left(v_{>i}\right)^{-1}}\right)\right)^{-C_{\beta\alpha(i)}}}{\Delta_{\alpha(i)}\left(x\overline{\left(v_{>i}\right)^{-1}}\right)\Delta_{\alpha(i)}\left(x\overline{\left(v_{>i-1}\right)^{-1}}\right)} & \text{if $1\leq i\leq n$}; \\ \displaystyle\frac{\prod_{\beta\neq \alpha(i)} \left(\Delta_\beta\left(\overline{u_{< i}}^{-1}x\right)\right)^{-C_{\beta\alpha(i)}}}{\Delta_{\alpha(i)}\left(\overline{u_{< i}}^{-1}x\right)\Delta_{\alpha(i)}\left(\overline{u_{< i+1}}^{-1}x\right)} & \text{if $n+1\leq i\leq l$}. \end{array}\right. \] Note that every generalized minor factor present in the above expression is a cluster coordinate of the seed torus $\mathcal{A}_\vec{i}$! This should be the first hint as to why the twist map should be translatable into cluster language. To put things into the right places, we need the following additional identities: \[ e_\alpha(t)=t^{H^\alpha}e_\alpha t^{-H^\alpha}, \quad \quad \quad \quad e_{-\alpha}(t)=t^{-H^\alpha}e_{-\alpha}t^{H^\alpha}. \] Note that we have secretly moved from the $G_{sc}$ territory into the $G_{ad}$ territory: the right hand side of either identity above obviously lives in $G_{ad}$. This is okay because what we care about in the end is the image of $\eta$ in the double quotient $H\backslash G^{u,v}/H$, and it's completely fine to project $\left(\prod_{i=1}^n e_{\alpha(i)}(t_i)\right)[x]_0^{-1}\left(\prod_{i=n+1}^l e_{\alpha(i)}(t_i)\right)$ into $G_{ad}$ first before taking the double quotient.
We now make the bold claim that \[ H\backslash\left(\prod_{i=1}^n e_{\alpha(i)}(t_i)\right)[x]_0^{-1}\left(\prod_{i=n+1}^l e_{\alpha(i)}(t_i)\right)/H=H\backslash \left(\chi_\vec{i}\circ p_\vec{i} \circ \psi_\vec{i} (x)\right)/H. \] To show this, we break the argument down into three cases. \begin{enumerate} \item The first case is for vertices between two occurrences of the same simple root, i.e., vertices that are strictly contained in the $v$ part of the seed $\vec{i}$. The string diagram looks like the following, and we suppose $b$ is the vertex that we are interested in. \[ \tikz{ \node (1) at (1,2) [] {$\alpha(i)$}; \node (2) at (6,2) [] {$\alpha(j)$}; \node (3) at (2,0) [] {$\alpha(k)$}; \node (4) at (5,0) [] {$\alpha(m)$}; \node (0) at (3.5,0) [] {$\cdots$}; \node at (3.5,1) [] {$\cdots$}; \draw (0,0) -- node[above]{$d$} (3) -- node[above]{$e$} (0) -- node[above]{$f$} (4) -- node[above]{$g$} (7,0); \draw (0,2) -- node[above]{$a$} (1) -- node[above]{$b$} (2) -- node[above]{$c$} (7,2); } \] From the way we constructed the seed data, we know that $\epsilon_{be}=\epsilon_{bf}=0$, and the cluster $\mathcal{X}$-coordinate \begin{align*} X_b\left(p_\vec{i}\circ \psi_\vec{i}(x)\right)=&\frac{A_a\left(\psi_\vec{i}(x)\right)\left(A_g\left(\psi_\vec{i}(x)\right)\right)^{-C_{\alpha(m)\alpha(j)}}\dots}{A_c\left(\psi_\vec{i}(x)\right)\left(A_d\left(\psi_\vec{i}(x)\right)\right)^{-C_{\alpha(k)\alpha(i)}}\dots}\\ =&\frac{\Delta_{\alpha(i)}\left(x\overline{\left(v_{>i-1}\right)^{-1}}\right)\left(\Delta_{\alpha(m)}\left(x\overline{\left(v_{>m}\right)^{-1}}\right)\right)^{-C_{\alpha(m)\alpha(j)}}\dots}{\Delta_{\alpha(j)}\left(x\overline{\left(v_{>j}\right)^{-1}}\right)\left(\Delta_{\alpha(k)}\left(x\overline{\left(v_{>k-1}\right)^{-1}}\right)\right)^{-C_{\alpha(k)\alpha(i)}}\dots}\\ =&\frac{\Delta_{\alpha(i)}\left(x\overline{\left(v_{>i-1}\right)^{-1}}\right)\prod_{\beta\neq \alpha(j)}\left(\Delta_\beta\left(x\overline{\left(v_{>j}\right)^{-1}}\right)\right)^{-C_{\beta\alpha(j)}}}{\Delta_{\alpha(j)}\left(x\overline{\left(v_{>j}\right)^{-1}}\right)\prod_{\beta\neq \alpha(i)}\left(\Delta_\beta\left(x\overline{\left(v_{>i}\right)^{-1}}\right)\right)^{-C_{\beta\alpha(i)}}}\\ =&\frac{\Delta_{\alpha(i)}\left(x\overline{\left(v_{>i-1}\right)^{-1}}\right)\Delta_{\alpha(i)}\left(x\overline{\left(v_{>i}\right)^{-1}}\right)}{\prod_{\beta\neq \alpha(i)}\left(\Delta_\beta\left(x\overline{\left(v_{>i}\right)^{-1}}\right)\right)^{-C_{\beta\alpha(i)}}}\frac{\prod_{\beta\neq \alpha(j)}\left(\Delta_\beta\left(x\overline{\left(v_{>j}\right)^{-1}}\right)\right)^{-C_{\beta\alpha(j)}}}{\Delta_{\alpha(j)}\left(x\overline{\left(v_{>j-1}\right)^{-1}}\right)\Delta_{\alpha(j)}\left(x\overline{\left(v_{>j}\right)^{-1}}\right)}\\ =&t_i^{-1}t_j. \end{align*} On the other hand, we also know that inside the product $\left(\prod_{k=1}^n e_{\alpha(k)}(t_k)\right)[x]_0^{-1}\left(\prod_{k=n+1}^l e_{\alpha(k)}(t_k)\right)$, the two factors corresponding to the letters $\alpha(i)$ and $\alpha(j)$ are \[ e_{\alpha(i)}(t_i)=t_i^{H^{\alpha(i)}}e_{\alpha(i)} t_i^{-H^{\alpha(i)}} \quad \quad \text{and} \quad \quad e_{\alpha(j)}(t_j)=t_j^{H^{\alpha(j)}}e_{\alpha(j)} t_j^{-H^{\alpha(j)}}. \] Since there is no other letter equal to the simple root $\alpha(i)=\alpha(j)=\alpha$ between the $i$th and the $j$th places, we know that the $\mathcal{X}$-variable $X_b$ is exactly the $H^\alpha$ part of whatever lies between the two letters, i.e., \[ \dots e_{\alpha(i)}(t_i)\dots e_{\alpha(j)}(t_j)\dots =\dots e_\alpha X_b^{H^\alpha} \dots e_\alpha\dots \] \item By a completely symmetric argument, we can also take care of the case of vertices between two occurrences of the same letter that is opposite to a simple root, i.e., vertices that are strictly contained in the $u$ part of the seed $\vec{i}$.
\item So the remaining case is for the vertices that lie between the $v$ part and the $u$ part of the seed. Consider the vertex $a$ in the string diagram below, where $\alpha(i)=-\alpha(j)=\alpha$ is a simple root. \[ \tikz{ \node (1) at (1,1) [] {$\alpha(i)$}; \node (2) at (3,1) [] {$\alpha(j)$}; \draw (0,1) -- (1) -- node[above]{$a$} (2) -- (4,1); } \] By an analysis similar to what we have done in part (1), we know that the cluster $\mathcal{X}$-coordinate \[ X_a\left(p_\vec{i}\circ \psi_\vec{i}(x)\right)=\frac{\prod_{\beta\neq \alpha} \left(\Delta_\beta(x)\right)^{-C_{\beta\alpha}}}{t_it_j\left(\Delta_\alpha(x)\right)^2} =t_i^{-1}t_j^{-1}\prod_\beta \left(\Delta_\beta(x)\right)^{-C_{\beta\alpha}}. \] This is perfect because inside the product $\left(\prod_{k=1}^n e_{\alpha(k)}(t_k)\right)[x]_0^{-1}\left(\prod_{k=n+1}^l e_{\alpha(k)}(t_k)\right)$ we have factors of the form \[ \dots e_{\alpha(i)}(t_i)\dots [x]_0^{-1}\dots e_{\alpha(j)}(t_j)\dots = \dots t_i^{H^\alpha}e_\alpha t_i^{-H^\alpha} \dots \prod_\alpha\left(\prod_\beta \left(\Delta_\beta(x)\right)^{-C_{\beta\alpha}}\right)^{H^\alpha}\dots t_j^{-H^\alpha}e_{-\alpha}t_j^{H^\alpha}\dots, \] and we see that the $H^\alpha$ part lying between the letters $\alpha(i)$ and $\alpha(j)$ is exactly $\left(X_a\left(p_\vec{i}\circ \psi_\vec{i}(x)\right)\right)^{H^\alpha}$. In conclusion, by applying $\chi_\vec{i}$ to the image $p_\vec{i}\circ\psi_\vec{i}(x)$, we have successfully recovered our desired image $H\backslash \left(\left[\overline{u}^{-1}x\right]_-^{-1}\overline{u}^{-1}x\overline{v^{-1}}\left[x\overline{v^{-1}}\right]_+^{-1}\right)^t/H$.
\end{enumerate} So far we have shown that if we start with an element $x$ in $G_{sc}^{u,v}$, then \[ H\backslash \left(\left[\overline{u}^{-1}x\right]_-^{-1}\overline{u}^{-1}x\overline{v^{-1}}\left[x\overline{v^{-1}}\right]_+^{-1}\right)^t/H=\chi_\vec{i}\circ p_\vec{i}\circ \psi_\vec{i}(x); \] to finish the proof of the proposition, we also need to show that the image does not depend on the lift from $H\backslash G^{u,v}/H$ to $G_{sc}^{u,v}$ in the first place. Suppose instead of $x$, we start with $hx$ for some element $h\in H$. Then \[ H\backslash\left(\left[\overline{u}^{-1}hx\right]_-^{-1}\overline{u}^{-1}hx\overline{v^{-1}}\left[hx\overline{v^{-1}}\right]_+^{-1}\right)^t/H=H\backslash\left(\left[\overline{u}^{-1}hx\right]_+ \overline{v^{-1}}\left[hx\overline{v^{-1}}\right]_+^{-1}\right)^t/H. \] But by the definition of the Weyl group $W:=N_GH/H$, we know that $\overline{u}^{-1}$ normalizes $H$; therefore $\overline{u}^{-1}h=h'\overline{u}^{-1}$ for some $h'\in H$ and we can simplify the above equality to \begin{align*} H\backslash\left(\left[\overline{u}^{-1}hx\right]_-^{-1}\overline{u}^{-1}hx\overline{v^{-1}}\left[hx\overline{v^{-1}}\right]_+^{-1}\right)^t/H=&H\backslash\left(\left[\overline{u}^{-1}x\right]_+ \overline{v^{-1}}\left[x\overline{v^{-1}}\right]_+^{-1}\right)^t/H\\ =&H\backslash \left(\left[\overline{u}^{-1}x\right]_-^{-1}\overline{u}^{-1}x\overline{v^{-1}}\left[x\overline{v^{-1}}\right]_+^{-1}\right)^t/H. \end{align*} A similar argument can also be applied to the replacement of $x$ by $xh$ for some $h\in H$. This finishes our proof of Proposition \ref{clustertwist}.\qed \begin{cor} For any reduced word $\vec{i}$ of a pair of Weyl group elements $(u,v)$, there is a rational map $\psi_\vec{i}:H\backslash G^{u,v}/H\dashrightarrow \mathcal{X}_\vec{i}$ that makes the following diagram commute. 
\[ \xymatrix{G_{sc}^{u,v} \ar@{-->}[r]^{\psi_\vec{i}} \ar[d] & \mathcal{A}_\vec{i} \ar[d]^p \\ H\backslash G^{u,v}/H \ar@{-->}[r]_(0.6){\psi_\vec{i}} & \mathcal{X}_\vec{i}} \] In particular, these rational maps $\psi_\vec{i}$ can be glued into a rational map $\psi:H\backslash G^{u,v}/H\dashrightarrow \mathcal{X}^{u,v}$. \end{cor} \begin{proof} To show that such a rational map exists, it suffices to show its well-definedness. From Proposition \ref{clustertwist} we know that by adding an extra map $\chi_\vec{i}:\mathcal{X}_\vec{i}\rightarrow H\backslash G^{u,v}/H$ to the lower right hand corner we get the twist map $\eta$, which is a birational equivalence since it only differs from Fomin and Zelevinsky's twist map by an anti-involution, and Fomin and Zelevinsky proved that their twist map is a biregular involution (\cite{FZ} Theorem 1.6). In particular, since the twist map $\eta$ is dominant, so is the map $\chi_\vec{i}$. But then since $\mathcal{X}_\vec{i}$ and $H\backslash G^{u,v}/H$ are algebraic varieties of the same dimension, the fibers of $\chi_\vec{i}$ can only be finite sets of points. But the fibers of $G_{sc}^{u,v}\rightarrow H\backslash G^{u,v}/H$ are algebraic tori, which are irreducible; therefore the image of each such fiber must be a single point, and this proves the well-definedness of the rational map $\psi_\vec{i}:H\backslash G^{u,v}/H\dashrightarrow \mathcal{X}_\vec{i}$. The gluability of $\psi_\vec{i}$ follows directly from the gluabilities of $\psi_\vec{i}:G_{sc}^{u,v}\dashrightarrow \mathcal{A}_\vec{i}$ and $p_\vec{i}:\mathcal{A}_\vec{i}\rightarrow \mathcal{X}_\vec{i}$. \end{proof}
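As a quick sanity check of the lamination criterion in Proposition \ref{lem0}, here is a small self-contained Python sketch (our illustration, not from the paper; a standard sign convention for the tropicalized $\mathcal{X}$-mutation is assumed). It verifies that, for the $A_2$ seed with $\epsilon_{12}=1$, mutating at vertex $1$ and then at vertex $2$ returns to the original seed and sends each positive basic lamination $l_i^+$ to $l_i^-$, so this composite satisfies the defining property of a cluster Donaldson-Thomas transformation of this seed.

```python
# Tropicalized X-mutation at vertex k: x'_k = -x_k and, for i != k,
# x'_i = x_i + eps[i][k] * max(0, sgn(eps[i][k]) * x_k),
# together with the usual matrix mutation of eps.
def mutate(eps, x, k):
    n = len(x)
    sgn = lambda v: (v > 0) - (v < 0)
    x2 = list(x)
    x2[k] = -x[k]
    for i in range(n):
        if i != k and eps[i][k] != 0:
            x2[i] = x[i] + eps[i][k] * max(0, sgn(eps[i][k]) * x[k])
    eps2 = [row[:] for row in eps]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                eps2[i][j] = -eps[i][j]
            else:
                eps2[i][j] = eps[i][j] + sgn(eps[i][k]) * max(0, eps[i][k] * eps[k][j])
    return eps2, x2

# A_2 seed: eps_{12} = 1
eps0 = [[0, 1], [-1, 0]]
for basis in ([1, 0], [0, 1]):
    eps, x = eps0, basis
    for k in (0, 1):                       # candidate DT: mutate at vertex 1, then vertex 2
        eps, x = mutate(eps, x, k)
    assert eps == eps0                     # back at the original seed
    assert x == [-basis[0], -basis[1]]     # l_i^+ is sent to l_i^-
```

Equivalently, in coordinates one can check directly that $\deg_{X_i}$ of the pulled-back $X_j$ along this composite is $-\delta_{ij}$, as required by Proposition \ref{lem0}.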
\section{Introduction} \label{sec:intro} Speech separation aims to segregate individual speakers from a mixture signal, and it can be used in many applications, such as speaker diarization, speaker verification or multi-talker speech recognition. Deep learning has allowed an unprecedented separation accuracy compared with traditional signal processing based methods; however, there are still challenges to address. For instance, in blind source separation, the order of the output speakers is arbitrary and unknown in advance, which creates a speaker label permutation problem during training. Clustering based methods~\cite{hershey2016deep} or, more recently, the Permutation Invariant Training (PIT) technique~\cite{kolbaek2017multitalker} have been proposed to alleviate this issue. Although PIT forces the frames belonging to the same speaker to be aligned with the same output stream, frames inside one utterance can still flip between different sources, leading to poor separation performance. Alternatively, the initial PIT-based separation model can be further trained with a fixed label training strategy~\cite{yang2020interrupted}, or a long term dependency can be imposed on the output streams by adding an additional speaker identity loss~\cite{drude2018deep,nachmani2020voice}. Another issue in blind source separation is that the speaker order of the separated signals during inference is also unknown, and needs to be identified by a speaker recognition system. An alternative solution to the label permutation problem is to perform target speaker extraction~\cite{vzmolikova2019speakerbeam,Delcroix_2020,Ge2020SpExAC}. In this case, the separation model is biased with information about the identity of the target speaker to extract from the mixture. Typically, a speech extraction system consists of two networks, one to generate speaker embeddings, and another one to perform speech extraction.
The speaker embedding network outputs a speaker representation from an enrollment signal uttered by the target. The speaker embedding network can be either jointly trained with the speech extraction model to minimise the enhancement loss, or trained on a different task, i.e., a speaker recognition task, to access larger speaker variations~\cite{Wang_2019}. The target speaker embedding is usually inserted into the middle-stage features of the extraction network by using multiplication~\cite{Delcroix_2020} or concatenation operations~\cite{Ge2020SpExAC,ji2020speaker}; however, the shared middle-level features in the extraction model may not be optimal for both tasks of speaker conditioning and speech reconstruction. Most of the existing speech extraction models enhance only one target speaker at a time and ignore speech from other speakers. When multiple speakers are of interest, the extraction model has to be applied several times, which is inconvenient and requires more computational resources. Therefore, a system capable of simultaneously extracting multiple speakers from a mixture is of practical importance. Recently, a speaker-conditional chain model (SCCM) has been proposed that first infers speaker identities and then uses the corresponding speaker embeddings to extract all sources~\cite{shi2020speaker}. However, SCCM is still trained with the PIT criterion, and the output order of separated signals is arbitrary. Lastly, when multiple microphones are available, the spatial information has been shown to improve the performance of both separation and extraction~\cite{Zhang2020end,Delcroix_2020} systems in clean and reverberant environments. So far, the spatial information has not been tested with a multi-speaker extraction system, nor has it been evaluated in noisy and reverberant environments. In this paper, we reformulate our previous multi-channel speech separation design in~\cite{Zhang2020end} as a multi-talker speech extraction system.
The proposed system uses embeddings from all speakers in the mixture to simultaneously extract all sources, and does not require PIT to solve the label permutation problem. There are three main contributions in this work. Firstly, we improve our previous multi-channel system in~\cite{Zhang2020end} by replacing the Temporal fully-Convolutional Network (TCN) blocks with U-Convolutional blocks, which yielded promising results for a recent single-channel speech separation model~\cite{tzinis2020sudo}. Secondly, the modified system is reformulated to perform multi-speaker extraction, and, lastly, a novel speaker conditioning mechanism is proposed that exploits the speaker embeddings more effectively. The evaluation is performed with multi-channel noisy and reverberant 2-speaker mixtures. We show that combining the updated multi-channel structure and the proposed speaker conditioning mechanism leads to a significant improvement in terms of both the separation metric and speech recognition accuracy. The rest of the paper is organised as follows. In Section~\ref{sec:multi_extr}, we introduce the proposed multi-channel speech extraction approach. Section~\ref{sec:experiment} presents implementation details and the experiment setup. Results and analysis are presented in Section~\ref{sec:result}. Finally, the paper is concluded in Section~\ref{sec:conclusion}. \section{Multi-channel end-to-end extraction} \label{sec:multi_extr} Recently, neural network based multi-channel speech separation approaches have achieved state-of-the-art performance by directly processing time-domain speech signals~\cite{Zhang2020end,gu2020enhancing}. These systems incorporate a spectral encoder, a spatial encoder, a separator, and a decoder. In~\cite{Zhang2020end}, spatial features are input to the separator only. In this work, we simplify the previous framework by combining the spatial and spectral features as depicted in Figure~\ref{fig:update_multi}.
We found that the proposed approach is beneficial for the speech extraction task. The spectral encoder and spatial encoder independently generate $N$-dimensional single-channel representations and $S$-dimensional multi-channel representations, respectively. The spectral encoder is a 1-D convolutional layer, and the spatial encoder is a 2-D convolutional layer. The encoded single-channel spectral features and two-channel spatial features are concatenated together to form multi-channel representations with a dimension of $(N+S)$, which are accessed by both the separation module and the decoder. The separator estimates linear weights for combining the multi-channel representations to generate separated representations for each source. Finally, the decoder (1-D convolutional layer) reconstructs the estimated signals by inverting the separated representations back to time-domain signals. \begin{figure}[htp] \centering \includegraphics[width=0.48\textwidth]{update_multi_channel.png} \caption{Updated multi-channel model structure} \label{fig:update_multi} \end{figure} Compared with our previous work~\cite{Zhang2020end}, we also upgrade the separator by replacing the original TCN~\cite{lea2016temporal} blocks with U-Convolutional blocks (U-ConvBlock), which have proven to be more effective in modelling sequential signals in the single-channel speech separation task~\cite{tzinis2020sudo}. Furthermore, a system built on U-ConvBlocks requires fewer parameters and floating-point operations than systems built on TCN or recurrent neural network architectures~\cite{luo2020dual}. The U-ConvBlock (Figure~\ref{fig:u_conv}) extracts information at multiple resolutions using $Q$ successive temporal downsampling and $Q$ upsampling operations similar to a U-Net structure~\cite{ronneberger2015u}. The channel dimension of the input to each U-ConvBlock is expanded from $C$ to $C_U$ before downsampling, and is contracted to the original dimension after upsampling.
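The encoder stage described above can be sketched in terms of array shapes. The dimensions below and the mask-style combination are illustrative assumptions; the actual encoders are learned convolutional layers and the separator estimates the combination weights:

```python
import numpy as np

# Illustrative dimensions (assumed): N spectral channels, S spatial
# channels, T time frames, and two sources.
N, S, T, n_src = 256, 128, 100, 2

spectral = np.random.randn(N, T)  # stands in for the 1-D conv spectral encoder output
spatial = np.random.randn(S, T)   # stands in for the 2-D conv spatial encoder output

# Concatenate along the channel axis to form the (N+S)-dimensional
# multi-channel representation shared by the separator and the decoder.
features = np.concatenate([spectral, spatial], axis=0)

# The separator estimates per-source weights that are applied to the
# shared representation to produce one representation per source.
weights = np.random.rand(n_src, N + S, T)
separated = weights * features    # broadcasts over the source axis
```

The key design point is that the concatenated $(N+S)$-dimensional representation is shared: both the separator and the decoder see the spectral and spatial information jointly.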
The updated separation module is shown in Figure~\ref{fig:u_conv_sep} and consists of an instance normalisation layer, a bottleneck layer, $B$ stacked U-ConvBlocks and a 1-D convolutional layer with a non-linear activation function. We choose to use an instance normalisation layer~\cite{ulyanov2016instance} rather than global layer normalisation as the first normalisation layer, as the latter would normalise over the channel dimension, which is inappropriate given the heterogeneous nature of the concatenated features. \begin{figure}[htp] \centering \includegraphics[width=0.45\textwidth]{U-ConvBlock.png} \caption{U-Conv block structure} \label{fig:u_conv} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=0.45\textwidth]{uconv_separator.png} \caption{Improved separator with U-Conv blocks} \label{fig:u_conv_sep} \end{figure} \subsection{Proposed speech extraction structure} Building on the modified system described above, in this section we introduce a novel multi-channel speech extraction system which simultaneously tracks multiple sources in the mixture. In general, the system uses embeddings from multiple speakers as input, which are used to condition single-source outputs with a consistent speaker order. Common strategies for supplying speaker information to the extraction model are to modulate the speaker features onto middle-level features inside the separation model~\cite{vzmolikova2019speakerbeam,zeghidour2020wavesplit} or to concatenate the speaker features with the mixture speech representations~\cite{Ge2020SpExAC}. However, it is not trivial to find a single optimal layer at which to insert the speaker features. For instance, the shared middle-level features in the extraction model may not be optimal for both speaker conditioning and speech reconstruction.
\begin{figure}[b] \centering \includegraphics[width=0.48\textwidth]{extraction_split.png} \caption{Proposed multi-channel speech extractor with dedicated speaker stack} \label{fig:multi_channel_split_extr} \end{figure} \begin{figure}[hb!] \centering \includegraphics[width=0.48\textwidth]{split_extraction_speaker.png} \caption{Internal structure of proposed speaker stack} \label{fig:spkr_stack} \end{figure} To address this issue, we propose a new `speaker stack' for processing the input speaker representations to coordinate with the main separation stack, as shown in Figure~\ref{fig:multi_channel_split_extr}. The speaker stack takes the encoded multi-channel features and generates two high-level sequential features, which are suited to receiving speaker information from externally computed speaker embeddings. The output of the speaker branch, which carries the speaker information, is encouraged to learn characteristics similar to the original multi-channel features, so that the two can be concatenated as input to the separation stack. Note that the encoder is shared by both the speaker stack and the separation stack. The speaker stack, illustrated in Figure~\ref{fig:spkr_stack}, first employs an instance normalisation, a bottleneck 1-D CNN and a single TCN block to receive multi-channel features. Then, the output of the TCN block is factorised by an adaptation layer into multiple features for modulation with the speaker embeddings, which are transformed with a $1 \times 1$ convolutional layer to the same feature dimension. The modulated signals from each speaker embedding are concatenated together and processed with a 1-D convolutional layer and a ReLU non-linear activation function to form $E$-dimensional speaker information features, which have the same time length as the multi-channel features.
The speaker stack and the separation stack are jointly trained to directly optimise the scale-invariant signal-to-noise ratio (SI-SNR) metric~\cite{le2019sdr}, \begin{equation} \begin{split} & \text{SI-SNR} = 10\log_{10} \frac{||s_{target}||^2}{||e_{noise}||^2} \\ & s_{target} = \frac{\big \langle \hat{s}, s \big \rangle s}{||s||^2}, \quad e_{noise} = \hat{s} - s_{target} \end{split} \label{equ:si_snr} \end{equation} where $\hat{s}$ and $s$ denote the estimated and clean source, respectively, and $||s||^2=\big \langle s, s \big \rangle$ denotes the signal power. In contrast with PIT, we condition the decoded signals on the speaker representations and keep the output speaker order consistent with the order of the input speaker embeddings. \section{Experiment Setup} \label{sec:experiment} \subsection{Data simulation} The evaluation is performed on the WHAMR! dataset~\cite{Maciejewski_2020}, which consists of simulated noisy and reverberant 2-speaker mixtures. WHAMR! is based on Wall Street Journal (WSJ) data, mixed with noise recorded in various urban environments~\cite{Wichern_2019}, and artificial room impulse responses generated by using pyroomacoustics~\cite{scheibler2018pyroomacoustics} to approximate domestic and classroom environments. There are 20k sentences from 101 speakers for training, and 3k sentences from 18 speakers for testing. The speakers in the test set do not appear during training of the speaker recognition model, nor do they appear during training of the speaker extraction system. All data are binaural (2 channels) with an 8 kHz sampling rate. \subsection{Speech extraction network} The multi-channel separation network in~\cite{Zhang2020end} trained with PIT has been set as the baseline for comparison. The hyper-parameters of the baseline model are the same as those for the best model in the original paper, chosen as follows: $N=256$, $S=36$, $R=3$, $X=7$, $L=20$, and the batch size $M=3$.
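The SI-SNR objective of Eq.~(\ref{equ:si_snr}) can be computed directly from the waveforms. A sketch follows the definition above; the zero-mean normalisation and the small `eps` guard are common practical additions and are assumptions here, not part of the formula as stated:

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR (dB) between an estimated and a clean source.

    s_target is the orthogonal projection of the estimate onto the
    reference; e_noise is the residual. Mean removal and eps are
    common practical additions (assumptions in this sketch).
    """
    est = est - est.mean()
    ref = ref - ref.mean()
    s_target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
    e_noise = est - s_target
    return 10.0 * np.log10(np.dot(s_target, s_target) /
                           (np.dot(e_noise, e_noise) + eps))
```

Because $s_{target}$ is a projection of the estimate onto the reference, rescaling the estimate leaves the metric essentially unchanged, which is the scale invariance the name refers to.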
For the U-ConvBlock based separation module, the hyper-parameters are set as for SuDoRM-RF 1.0x in~\cite{tzinis2020sudo}, namely $L=21$, $B=16$, $Q=4$, $C=256$, $C_U=512$, and the training batch size $M=4$. Each utterance is split into multiple segments with a fixed length of 4 seconds. The dimension of speaker features, $E$, in the speaker stack is set to 128. The ADAM optimizer~\cite{kingma2014adam} is used for training with a learning rate of $1e-3$, which is halved if the validation loss does not decrease for 3 consecutive epochs. All models are trained for 100 epochs. The input for all the models is the reverberant mixture with noise and the targets are the clean individual sources. \subsection{Speaker recognition network} We retrained the time-domain speaker recognition model SincNet \cite{Ravanelli_2018} for speaker embedding generation. Employing the same configuration as in the original paper, SincNet is trained on the clean training set of WSJ0 (101 speakers), using speech segments of 200~ms with 10~ms overlap. The output of the last hidden layer of the final SincNet model represents one frame-level speaker embedding for each 200 ms segment, and an utterance-level embedding is derived by averaging all the frame predictions. Randomly selecting a single enrollment utterance for generating the speaker embedding leads to poor extraction performance. Therefore, to increase the robustness, we follow an averaging strategy to obtain one global embedding for each speaker~\cite{Li2019Target}. Specifically, each global speaker embedding is obtained by averaging several embeddings generated from multiple randomly selected utterances belonging to the same speaker. During training, one global speaker embedding is generated by averaging all the utterance-level embeddings from the training utterances belonging to the corresponding speaker.
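The averaging strategy described above amounts to a simple mean over utterance-level embeddings; a minimal sketch (the embedding dimension and inputs are illustrative):

```python
import numpy as np

def global_embedding(utterance_embeddings):
    """Average utterance-level embeddings from one speaker into a single
    global speaker embedding, as in the averaging strategy above."""
    return np.mean(np.stack(utterance_embeddings), axis=0)
```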
During evaluation, 3 utterances are randomly selected for each speaker, and the utterance-level embeddings from the selected utterances are averaged to form one global embedding. Experiments showed that increasing the number of utterances beyond 3 does not improve performance. \subsection{Acoustic model} To evaluate the speech recognition performance, two acoustic models have been trained using the WSJ corpus. One model (AM1) was trained on roughly 80~hrs of clean WSJ-SI284 data plus the WHAMR! single-speaker noisy reverberant speech, and the other model (AM2) was trained on the data used for AM1 plus the separated signals obtained by processing the WHAMR! training-set mixtures with the proposed model. The audio data is downsampled to 8 kHz to match the sampling rate of the data used for the separation experiments. The acoustic model topology is a 12-layered Factorised TDNN~\cite{povey2018semi}, where each layer has 1024 units. The input to the acoustic model consists of 40-dimensional MFCCs and a 100-dimensional i-vector. A 3-gram language model is used during recognition. The acoustic model is implemented with the Kaldi speech recognition toolkit~\cite{povey2011kaldi}. With our set-up, the ASR results obtained with AM1 on the standard clean WSJ Dev93 and Eval92 are 7.2\% and 5.0\% WER, respectively. \vspace{-5pt} \section{Results and Analysis} \label{sec:result} \vspace{-5pt} \subsection{Improved Multi-channel separation network} Table~\ref{tab:whamr_sep_perf} reports the separation performance for the improved multi-channel separation network with various configurations. The first observation is that the dimension of the spatial features does not have to be fixed to a small value (typically 36) as suggested in our previous work. The results show that when the dimension increases, more useful spatial information is extracted and the model benefits more from the multi-channel signals.
Replacing the TCN blocks with the stacked U-ConvBlocks provides a larger receptive field due to the successive downsampling operations, and the latter model yields a 0.5 dB SI-SNR improvement. The configuration depicted in the last row of Table~\ref{tab:whamr_sep_perf} is used for the rest of the experiments. \begin{table}[htp] \centering \caption{Speech separation performance of improved multi-channel structure on WHAMR! test set} \label{tab:whamr_sep_perf} \begin{tabular}{lcc} \hline \textbf{Model} & \textbf{S} & \textbf{SI-SNRi} \\ \hline Multi-TasNet (TCN) & 36 & 12.1 \\ Multi-TasNet (TCN) & 64 & 12.2 \\ Multi-TasNet (TCN) & 128 & 12.4 \\ \hline Multi-TasNet (U-Conv) & 128 & 12.9 \\ \hline \end{tabular} \end{table} \subsection{Results of speech extraction system} Three sets of experiments with different speaker information conditioning strategies are performed. The first experiment uses the multiplication strategy applied in SpeakerBeam~\cite{Delcroix_2020}, which modulates the speaker embedding onto the middle-stage representations in the separation module, denoted as Multiply. The second experiment repeats and concatenates the speaker embeddings with the spectral and spatial representations before being fed into the separation module, denoted as Concat. Lastly, the third experiment uses the proposed conditioning mechanism, denoted as Split. \vspace{-12pt} \begin{table}[htp] \centering \caption{Speech extraction performance with improved multi-channel structure on the WHAMR!
test set} \label{tab:whamr_extr_perf} \begin{tabular}{lcc} \hline \textbf{Model} & \textbf{PIT} & \textbf{SI-SNRi} \\ \hline Separation (Improved) & \checkmark & 12.9 \\ Extraction (Concat) & \xmark & 12.8 \\ Extraction (Multiply) & \xmark & 12.9 \\ \hline Extraction (Split) & \xmark & 13.3 \\ Extraction (Split) & \checkmark & 13.4 \\ \hline \end{tabular} \end{table} The results in Table~\ref{tab:whamr_extr_perf} show that the extraction model cannot directly benefit from the speaker information through the multiplication or concatenation strategies. The reason for the failure of direct multiplication is presumed to be that the shared middle-stage features are not optimal for both tasks of speaker conditioning and speech reconstruction. As for the concatenation, the multi-channel features and the speaker embedding are completely different signals and cannot be suitably processed by the convolutional layers, which assume time and frequency homogeneity. Conversely, the separation model with the proposed mechanism can benefit from the speaker information and outperforms the blind source separation system and the other conditioning strategies. The proposed method uses a separate speaker branch to generate high-level features for the speaker conditioning task, alleviating the shared-feature problem. Moreover, the sequential speaker features from the speaker branch have signal characteristics similar to the multi-channel features, making them a suitable input to the convolutional layers. It should be noted that the proposed speech extraction system can be evaluated without accessing reference clean speech to find the right permutation. When the system is evaluated with the PIT criterion to find the oracle permutation, there is only a small difference between the two results. This demonstrates that our system can successfully identify and track multiple speakers in noisy and reverberant acoustic conditions.
\vspace{-12pt} \begin{table}[htp] \centering \caption{Results on different and same gender mixtures} \label{tab:whamr_gender_analysis} \begin{tabular}{lcccc} \hline \multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{\#nchs}} & \multirow{2}{*}{\textbf{PIT}} & \multicolumn{2}{c}{\textbf{SI-SNRi}} \\ \cline{4-5} & & & Diff. & Same \\ \hline SuDo-RMRF~\cite{tzinis2020sudo} & 1 & \checkmark & 10.6 & 9.1 \\ Multi-TasNet (TCN) & 2 & \checkmark & 12.4 & 12.4 \\ Multi-TasNet (U-Conv) & 2 & \checkmark & 12.9 & 12.9 \\ Extraction (Split) & 2 & \xmark & 13.5 & 13.1 \\ Extraction (Split) & 2 & \checkmark & 13.5 & 13.3 \\ \hline \end{tabular} \vspace{-5pt} \end{table} Table~\ref{tab:whamr_gender_analysis} reports the performance of various systems on different- and same-gender WHAMR! mixtures. For blind source separation, a single-channel system achieves better separation performance with different-gender mixtures than with same-gender mixtures. With the spatial information, a multi-channel system improves performance in both conditions and reduces the gap between the two mixture conditions. With the additional speaker information, the performance in the different-gender condition is further boosted. It can also be noticed that the same-gender mixtures are more challenging, and further work is needed to find better speaker representations for this case. Table~\ref{tab:whamr_all_perf} compares the proposed approach with other competing systems evaluated on WHAMR!. The proposed speaker conditioning mechanism provides a consistent separation performance gain in both single and multi-channel scenarios. With the additional information from multiple microphones and speaker enrollment, our system achieves the best performance. \vspace{-12pt} \begin{table}[htp] \centering \caption{Comparative results of single and multi-channel speech separation/extraction on WHAMR!
data} \label{tab:whamr_all_perf} \begin{tabular}{lcccc} \hline \textbf{Model} & \textbf{\#nchs} & \textbf{Building Unit} & \textbf{PIT} & \textbf{SI-SNRi} \\ \hline Conv-TasNet~\cite{luo2019conv} & 1 & TCN & \checkmark & 9.3 \\ SuDo-RMRF~\cite{tzinis2020sudo} & 1 & U-Conv & \checkmark & 9.9 \\ Wavesplit~\cite{zeghidour2020wavesplit} & 1 & TCN & \checkmark & 12.0 \\ Nachmani's~\cite{nachmani2020voice} & 1 & RNN & \checkmark & 12.2 \\ Multi-TasNet~\cite{Zhang2020end} & 2 & TCN & \checkmark & 12.1 \\ \hline Extraction (Split) & 1 & U-Conv & \xmark & 11.1 \\ Extraction (Split) & 1 & U-Conv & \checkmark & 11.1 \\ Extraction (Split) & 2 & U-Conv & \xmark & 13.3 \\ Extraction (Split) & 2 & U-Conv & \checkmark & 13.4 \\ \hline \end{tabular} \vspace{-12pt} \end{table} \vspace{-12pt} \begin{table}[htp] \centering \caption{Speech recognition results} \label{tab:whamr_wer} \begin{tabu}{lccc} \hline \multirow{2}{*}{\textbf{System}} & \multirow{2}{*}{\textbf{\#nchs}} & \multicolumn{2}{c}{\textbf{WER(\%)}} \\ \cline{3-4} & & AM1 & AM2 \\ \hline Mixture & - & 79.1 & 77.0 \\ Multi-TasNet~\cite{Zhang2020end} & 2 & 37.7 & - \\ Extraction (Split) & 2 & \bf{31.6} & \bf{20.9} \\ \hline \rowfont{\color{gray}} Noisy Oracle & - & 19.8 & 20.0 \\ \hline \end{tabu} \end{table} Table~\ref{tab:whamr_wer} reports the ASR results. The proposed speech extraction model yields a significant WER reduction over the noisy reverberant mixture and outperforms the strong multi-channel separation baseline. The extraction system can introduce distortions to the separated signals (causing a mismatch between training and testing of the acoustic model); therefore, by decoding the data with AM2, the WER is further reduced by 34\% relative, which is close to the result obtained with oracle single-speaker noisy reverberant speech (last row in Table~\ref{tab:whamr_wer}).
In future work, we plan to exploit other speaker recognition models for embedding generation, and to train these models with larger and more challenging datasets, such as VoxCeleb~\cite{Chung2018}. Moreover, we will investigate joint training of the speaker embedding and the proposed speech extraction networks, which is expected to benefit both tasks~\cite{ji2020speaker}. \vspace{-5pt} \section{Conclusions} \label{sec:conclusion} \vspace{-5pt} In this paper, we have presented a multi-channel speech extraction system with a novel speaker conditioning mechanism. By introducing an additional speaker branch for receiving external speaker features, this mechanism alleviates the problems caused by sharing features between conflicting tasks and by the heterogeneity of the multiple inputs, providing a more effective way to use the speaker information to improve separation performance. Informed by multiple speaker embeddings, the proposed system is able to simultaneously output the corresponding sources from a noisy and reverberant mixture, without a label permutation ambiguity. Experiments on WHAMR! simulated 2-speaker mixtures have shown that the proposed multi-speaker extraction approach outperforms a strong blind speech separation baseline based on PIT. \vfill\pagebreak \bibliographystyle{IEEEtran}
\section{Introduction} Chiral anomalies in quantum field theory appear in several different forms. Historically they were first observed in perturbative calculations of certain 1-loop scattering amplitudes, as a breaking of the (classically valid) chiral symmetry, \cite{ABJ}. Later nonperturbative methods were developed for understanding the chiral symmetry breaking in the euclidean path integral formalism, \cite{NRS},\cite{AW}. It was also understood that even the symmetry under coordinate transformations could be broken when quantizing massless fermions. In the hamiltonian approach to chiral anomalies one considers the equal time commutation relations for the infinitesimal generators of the classical symmetry group. First one constructs the bundle of fermionic Fock spaces parametrized by various external (classical) fields: gauge potentials, metrics, scalar potentials etc. The quantization of the algebra of currents in the Fock spaces requires some renormalization procedure. In $1+1$ space-time dimensions usually a normal ordering is sufficient but in higher dimensions certain additional subtractions are needed. Typically the renormalization modifies the algebra of classical symmetries by the so-called Schwinger terms, \cite{S}. In $1+1$ dimensions the Schwinger term is normally just a c-number, leading to an affine Lie algebra (gauge transformations) or to the Virasoro algebra (diffeomorphisms). In higher dimensions the algebra is more complicated; instead of a central (c-number) extension the Schwinger terms lead to an extension by an abelian ideal \cite{FM}. Direct analytic computations of the Schwinger terms in higher dimensions, although they can be carried out \cite{M1} in the Yang-Mills case, are very complicated in the case of an external gravitational field. However, there are topological and geometrical methods which give directly the structure of the quantized current algebra. 
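As an illustration of the $1+1$-dimensional case mentioned above, the central extensions take the familiar forms, written here in Fourier modes (these are the standard textbook formulas, included for orientation only):
\begin{eqnarray}
[T^a_m, T^b_n] & = & i f^{ab}{}_{c}\, T^c_{m+n} + k\, m\, \delta^{ab}\, \delta_{m+n,0}, \nonumber \\
{[L_m, L_n]} & = & (m-n)\, L_{m+n} + \frac{c}{12}\,(m^3-m)\, \delta_{m+n,0}, \nonumber
\end{eqnarray}
the first being an affine Lie algebra at level $k$ (the Schwinger term for gauge currents) and the second the Virasoro algebra with central charge $c$ (the Schwinger term for diffeomorphisms of the circle).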
As in the case of euclidean path integral anomalies, a central ingredient in this discussion is the families index theorem. In a previous paper \cite{CMM} (see also \cite{CMM2} for a review) it was shown how the Schwinger terms in the gauge current algebra are related, via Atiyah-Patodi-Singer index theory, to the structure of a system of local determinant line bundles in odd dimensions. This system provides an example of a mathematical structure called a bundle gerbe, \cite{Mu}. In the present paper we want to extend the methods of \cite{CMM} for constructing the Schwinger terms which arise in fermionic Fock space quantization of the algebra of vector fields on an odd dimensional manifold. In section 2 we set up the notation and recall some basic results in the families index theory in the case of compact manifolds with boundaries. In section 3 we compute the curvature forms for a local system of complex line bundles over a parameter space $B$ which consists of gauge potentials and Riemannian metrics. In section 4 we explain how the Schwinger terms in the Fock space quantization of vector fields (and infinitesimal gauge transformations) are obtained from the local curvature formulas. Finally, in section 5 we give some results of explicit computations of the Schwinger terms. \section{The family index theorem} Let $\tilde{\pi }:\tilde{M}\rightarrow B$ be a smooth fibre bundle with fibres diffeomorphic to a compact oriented spin manifold $M$ of even dimension $2n$. Assume that each fibre $M_z=\tilde{\pi }^{-1}(z)$, $z\in B,$ is equipped with a Riemannian metric. Assume further that $\tilde{M}$ is equipped with a connection. This means that at each point $x\in \tilde{M}$, the tangent space splits into a horizontal and a vertical part: $T_x\tilde{M}= H_x \oplus V_x,$ where $V_x$ consists of vectors tangential to the fibers.
Let $\tilde{{\cal E} }$ be a vector bundle over $\tilde M$ which along each fiber $M_z$ is the tensor product ${\cal E}_z$ of the Dirac spinor bundle over $M_z$ (with Clifford action of the vertical vectors $V_x$) and a finite dimensional vector bundle $W_z$ (with trivial Clifford multiplication). It has a $\bf Z_2$ structure provided by the chirality operator $\Gamma$ according to: $c(\Gamma )=\pm 1$ on $\tilde{{\cal E} } ^{\pm }$, where $c$ denotes the Clifford action. $\tilde{{\cal E} }$ is assumed to be equipped with a hermitian fiber metric. This naturally induces an $L^2$-metric in the space of sections $\Gamma (M_z,{\cal E}_z)$. Finally, let $D$ be an operator on $\Gamma (\tilde{M},\tilde{\cal E})$ which is fiberwise defined as a family of Dirac operators $D_z : \Gamma (M_z,{\cal E}_z)\rightarrow\Gamma (M_z,{\cal E}_z)$ with $z\in B.$ By a Dirac operator we mean an operator that can be written as the sum of compositions of a covariant derivative and a Clifford multiplication. We would now like to apply the family index theorem to the case described above. Before doing this, some additional assumptions are needed. These are of a different nature depending on whether $M$ has a non-empty boundary. When the boundary is empty, it will only be assumed that the set $\{ D_z\} _{z\in B}$ consists of self-adjoint Dirac operators. The assumptions in the more difficult case of a non-empty boundary $\partial M$ will now be described. We make the common simplifying assumption that for all $z\in B$, there exists a collar neighbourhood of $\partial M_z$ such that all structures in ${\cal E }_z$ are of ``product type'' (see ref.~\cite{APS}).
This implies that $D_z^{+}=D|_{\Gamma (M_z,{\cal E} ^+_z)}$ can be written as $c_t(\frac{\partial}{\partial t}+D_z^{\partial })$ near the boundary, where $c_t$ is the Clifford multiplication by an element corresponding to the coordinate vector field $\frac{\partial}{\partial t}$ (which is along the unit inward normal vector field at the boundary) and $D_z^{\partial}$ is a self-adjoint Dirac operator on ${\tilde{\cal E } ^+}|_{\partial M_z}$. Our conventions are such that $c_t$ is unitary. Let $\mbox{Ind}{D}_{\lambda }$ be the family $\{\mbox{Ind}D_{z,\lambda }\} _{z\in U_{\lambda }}$, where $\mbox{Ind}{D}=\mbox{ker} D^+\ominus \mbox{ker} D^-$ is the index bundle in the sense of K-theory and $U_{\lambda }=\{ z\in B;\lambda \notin \mbox{spec} \left( D_z^{\partial }\right)\}$. The notation means that every operator $D_z^{+}$ will be restricted to the domain $\{ \psi \in \Gamma ( M_z,{\cal E } ^+_z); P_{z,\lambda }\psi |_{\partial M_z}=0\}$, while $D_z^{-}$ will be restricted to $\{ \psi \in \Gamma ( M_z,{\cal E } ^-_z) ; (1-P_{z,\lambda }) c_t \psi |_{\partial M_z}=0\}$, where $ P_{z,\lambda }$ is the spectral projection of $D_z^{\partial }$ corresponding to eigenvalues $\geq \lambda $. We will assume that the Dirac operators $D_{z,\lambda }$ are self-adjoint.
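The spectral projection $P_{z,\lambda}$ can be illustrated in a finite-dimensional toy model, where the self-adjoint operator is a Hermitian matrix and the projection retains the eigenspaces with eigenvalue $\geq \lambda$ (a sketch only; the actual setting is infinite-dimensional):

```python
import numpy as np

def spectral_projection(H, lam):
    """Orthogonal projector onto the eigenspaces of the Hermitian matrix H
    with eigenvalue >= lam (finite-dimensional analogue of P_{z,lambda})."""
    w, v = np.linalg.eigh(H)      # eigenvalues ascending, eigenvectors in columns
    keep = v[:, w >= lam]         # eigenvectors with eigenvalue >= lam
    return keep @ keep.conj().T   # sum of rank-one projectors
```

In this analogy, the boundary condition $P_{z,\lambda}\psi|_{\partial M_z}=0$ demands that the boundary value of $\psi$ has no component in the retained eigenspaces.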
It reads: \begin{eqnarray} \label{eq:FIT} \mbox{ch}\left( \mbox{Ind}{D}\right) (z) & = & \int _{M_z} \hat{A}(M_z)\mbox{ch}(W_z), \quad\partial M=\emptyset\nonumber \\ \nopagebreak \mbox{ch}\left( \mbox{Ind}D_{\lambda }\right) (z) & = & \int _{M_z} \hat{A}(M_z)\mbox{ch}(W_z) - \frac{1}{2}\tilde{\eta }_{\lambda }(z),\,\,\, z\in U_{\lambda },\partial M\neq\emptyset, \end{eqnarray} where \begin{eqnarray} \label{eq:AR} \hat{A}(M_z) & = & \mbox{det}^{1/2}\left( \frac{iR_z/4\pi } {\sinh (iR_z/4\pi)}\right) \nonumber \\ \nopagebreak \mbox{ch}(W_z) & = & \mbox{tr exp}\left( iF_z/2\pi\right) . \end{eqnarray} We choose not to write down the definitions of the form $\tilde{\eta }_{\lambda }$ or the curvature 2-forms $\tilde{R}$ and $\tilde{F}$ in $\tilde{M}$, where $R_z=\tilde{R}|_{M_z} $ and $F_z=\tilde{F}|_{M_z}$, since we will only need their explicit expression in a simple special case. The only thing about the $\tilde\eta$ form we need to know later is that it depends only on the boundary spectral data. The zero degree part of $\tilde \eta$ is just the $\eta$-invariant of the boundary Dirac operator. For $\partial M=\emptyset$, the determinant line bundle DET is an object closely related to the index bundle $\mbox{Ind}{D}$. It is a line bundle over $B$, fibre-wise defined as $(\mbox{det ker} D_z^{+})^{\ast }\otimes\mbox{det ker }D_z^{-}$. To define it globally over $B$ we must also account for the fact that the dimensions of $\mbox{ker}D_z^{+}$ and $\mbox{ker}D_z^{-}$ can jump as $z$ varies. For a detailed construction, see \cite{BF}. For $\partial M\neq\emptyset $ there exists a similar construction of a bundle $\mbox{DET}_{\lambda }$ over $U_{\lambda }$, closely related to $\mbox{Ind}D_{\lambda }$, see ref. \cite{PZ}.
In \cite{BF} and \cite{PZ} it has been shown that for $\partial M=\emptyset$ and $\partial M\neq\emptyset$ there exists a connection on DET and $\mbox{DET}_{\lambda }$, respectively, naturally associated with the Quillen metric, \cite{Q1}, with curvature given by \begin{eqnarray} \label{eq:FD} \frac{i}{2\pi} F^{\mbox{\footnotesize{DET}}}(z) & = & \left(\int _{M_z} \hat{A}(M_z)\mbox{ch}(W_z) \right) _{[2]}, \quad \partial M=\emptyset \nonumber \\ \nopagebreak \frac{i}{2\pi}F^{\mbox{\footnotesize{DET}}_{\lambda }}(z) & = & \left(\int _{M_z} \hat{A}(M_z)\mbox{ch}(W_z) - \frac{1}{2}\tilde{\eta }_{\lambda }(z)\right) _{[2]},\nonumber \\ \nopagebreak && z\in U_{\lambda },\partial M\neq\emptyset , \end{eqnarray} where $\left[ 2\right]$ denotes the part that is a 2-form. Notice that the 2-form part of the right hand side of the family index theorem, eq. \Ref{eq:FIT}, is equal to the right hand side of eq. \Ref{eq:FD}. In the case of an odd dimensional manifold $N$ one can produce in a similar way an element $\Omega\in H^3(B, \bf Z),$ by an integration over the fibers $N_z,$ \begin{eqnarray} \label{eq:FDD} \Omega (z) & = & \left(\int _{N_z} \hat{A}(N_z)\mbox{ch}(W_z) \right) _{[3]}, \quad \partial N=\emptyset \end{eqnarray} where this time we pick up the component of the form which is of degree $3$ in the tangential directions on the parameter space $B.$ This form plays an important role in the Hamiltonian quantization of external field problems, \cite{CMM}. It is the Dixmier-Douady class of a gerbe. A nonvanishing Dixmier-Douady class is an obstruction to quantizing chiral fermions in a gauge invariant manner. \section{Local line bundles over boundary \\ geometries} Let $P$ be a fixed principal $G$ bundle over $M$ with a projection $\pi:P \to M$ and $FM$ the oriented frame bundle of $M.$ The bundle $FM$ is also a principal bundle, with the structure group $GL_+(2n,\bf R),$ the group of real $2n\times 2n$ matrices with positive determinant. 
Let $Q$ denote the product bundle $P\times FM.$ Let $B=\cal A \times \cal M$ where $\cal A$ is the affine space of connections on $P$ and $\cal M$ is the space of Riemannian metrics on $M.$ Locally, an element of $\cal A$ is written as a Lie$(G)$ valued 1-form on $M.$ We may view $Q$ as a principal bundle $\tilde Q$ over $\tilde{M}=M\times B$ in a natural way, as the pull-back under the projection $M\times B\to M.$ With notations as in the previous section we define $M_z$ as the manifold $M$ with metric given by $z\in B$. Along the model fiber $M,$ let $\cal E$ be the tensor product of the Dirac spinor bundle and a vector bundle $W$ over $M,$ the latter being an associated bundle to $P(M,G).$ We view $\cal E$ in a natural way as a vector bundle $\tilde{\cal E}$ over $M\times B.$ Finally, we let $D_z:\Gamma (M_z,{\cal E }_z)\rightarrow \Gamma (M_z,{\cal E }_z)$ be the Dirac operator constructed from $z\in B$ in the usual way; in terms of local coordinates $A=A_{\mu} dx^{\mu}, \Gamma= \Gamma_{\mu} dx^{\mu},$ and with respect to a local orthonormal frame $\{e_{a}\}_{a =1}^{2n}$ of $TM_z$ we have $$ D_{(A,\Gamma)}=\sum _{a,\mu =1}^{2n}\gamma ^{a}{e_a}^{\mu} \left( \partial _{\mu }+ A_{\mu}+ \Gamma _{\mu }\right) , $$ where the ${e_a}^{\mu}$'s are the components of the basis vectors $e_a$ in the coordinate frame and $\gamma^a$ is the Clifford multiplication by $e_a,$ with $\gamma^a\gamma^b +\gamma^b\gamma^a = 2\delta^{ab}.$ Let $\cal D$ be the group of orientation preserving diffeomorphisms of $M$ and $\cal G$ the group of gauge transformations in $P,$ i.e., the group of automorphisms of $P(M,G)$ which project to the identity map on the base. 
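The Clifford relation $\gamma^a\gamma^b +\gamma^b\gamma^a = 2\delta^{ab}$ used above can be checked directly in a concrete low-dimensional representation. The following sketch is ours; the choice of the first two Pauli matrices as a two-dimensional representation is illustrative and not fixed by the text.

```python
# Verify gamma^a gamma^b + gamma^b gamma^a = 2 delta^{ab} I for the
# two-dimensional representation gamma^1 = sigma_1, gamma^2 = sigma_2
# (Pauli matrices; an illustrative choice, not fixed by the text).

def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def anticommutator(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] + ba[i][j] for j in range(2)] for i in range(2)]

sigma = {
    1: [[0, 1], [1, 0]],       # sigma_1
    2: [[0, -1j], [1j, 0]],    # sigma_2
}
identity = [[1, 0], [0, 1]]

for a in (1, 2):
    for b in (1, 2):
        lhs = anticommutator(sigma[a], sigma[b])
        delta = 1 if a == b else 0
        assert all(abs(lhs[i][j] - 2 * delta * identity[i][j]) < 1e-12
                   for i in range(2) for j in range(2))
print("Clifford relation verified for the Pauli representation")
```

The same check works for any other unitary representation of the Clifford algebra in place of the Pauli matrices.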
The groups $\cal G$ and $\cal D$ act through pull-backs on $\cal A$ and $\cal M.$ The group actions induce a fiber structure in $B=\cal A \times \cal M$ but in order to obtain smooth moduli spaces we restrict to the subgroups ${\cal G}_0\subset \cal G$ and ${\cal D}_0 \subset \cal D.$ The former is the group of based gauge transformations $\phi$ leaving invariant some fixed base point $p_0\in P,$ $\phi(p_0) =p_0.$ The group ${\cal D}_0$ is defined as $${\cal D}_0 =\{\phi\in{\cal D} | \phi(x_0) =x_0 \mbox{ and } T_{x_0}\phi= \mbox{id} \}$$ for some fixed $x_0\in M.$ With these choices, we obtain the smooth fiber bundles ${\cal M}\to {\cal M}/{\cal D}_0$ and ${\cal A} \to {\cal A}/ {\cal G}_0.$ This leads also to a fibering $B={\cal A}\times {\cal M} \to ({\cal A}\times {\cal M})/({\cal D}_0\ltimes {\cal G}_0).$ Note that the group of symmetries is the semidirect product of ${\cal D}_0$ and ${\cal G}_0$ since locally an element of ${\cal G}_0$ is a $G$-valued function on $M$ and the diffeomorphisms act on the argument of the function. 
Following \cite{AS3} we have a connection form $\omega$ on $\tilde Q=P\times FM\times B\to M\times B$ which can be pushed forward to a connection form on ${\tilde Q}/({\cal D}_0\ltimes {\cal G}_0).$ Along $P\times FM$ the form $\omega$ is given by a connection $A\in \cal A$ and by the Levi-Civita connection given by a metric $g\in \cal M.$ Restricted to the second factor $B=\cal A\times \cal M$ the form $\omega$ is called the BRST ghost and will be denoted by $v.$ Since the total form $\omega$ should vanish along gauge and diffeomorphism directions it follows that its value along these directions in $B$ is uniquely determined by the value of the corresponding vector field on $P\times FM.$ In the case of gauge potentials, $B=\cal A,$ an infinitesimal gauge transformation is given locally as a Lie$(G)$ valued function $Z$ on $M$ and then $v_{p,A}(Z_A)= Z(x)$ where $x=\pi(p)$ and $Z_A=\delta_Z A$ is the vector field on $\cal A$ defined by the infinitesimal gauge transformation $Z.$ In the case of diffeomorphisms, $Z$ is the ${\bf gl}(2n,\bf R)$ valued function defined as the Jacobian (in local coordinates) of a vector field on $M.$ Again, $v(Z_{\Gamma})$ is the 'tautological 1-form', $Z$ evaluated at a point $x\in M.$ The ghost $v$ in other directions is a nonlocal expression involving the Green's function of a gauge covariant Laplacian, \cite{AS3, Do}, but we shall make explicit use of $v$ only in the gauge and diffeomorphism directions. 
Next let $N$ be an odd dimensional manifold without boundary and let $M=[0,1] \times N,$ dim$M=2n.$ Given a principal bundle $P$ over $N$ we can extend it to a principal bundle over $M$ (to be denoted by the same symbol $P$) by a pull-back defined by the projection $[0,1]\times N \to N.$ We choose a fixed connection $A_0$ in the principal bundle $P$ over $N$ and a fixed metric $g_0$ in $N.$ If $A$ is an arbitrary connection in $P$ we form a connection $A(t)$ (with $t\in [0,1]$) in the principal bundle $P$ over $M$ by $$A(t) = (1-f(t)) A_0 + f(t) A,$$ where $f$ is a fixed smooth real valued function such that $f(0)=0, f(1)=1,$ and all the derivatives of $f$ vanish at the end points $t=0,1.$ Similarly, any metric $g$ in $N$ defines a metric in $M$ such that along $N$ directions it is given by $$g_{ij}(t) = (1-f(t)) (g_0)_{ij} + f(t) g_{ij}$$ and such that $\partial_t$ is a normalized vector field in $t$ direction, orthogonal to the $N$ directions. However, for computations below it is more convenient to use directly a homotopy connecting the Levi-Civita connection $\Gamma$ (constructed from the metric $g$) to the connection $\Gamma_0$ (constructed from $g_0$). The formula for this homotopy is the same as for the gauge potentials above. We use the 'Russian formula', which is just an expression of the fact that in a principal bundle with a connection the curvature form has \it no components along fiber directions. \rm The formula says that when the total curvature on $P\times FM\times{\cal A}\times {\cal M}$ is evaluated along vertical directions in ${\cal A}\times {\cal M} \to ({\cal A}\times {\cal M})/ ({\cal G}_0 \times {\cal D}_0)$ and along vector fields on $P\times FM$ the result is $$ F^{\omega} = F^{A,\Gamma} = dA +\frac12 [A,A] + d\Gamma + \frac12 [\Gamma, \Gamma],$$ where $\Gamma$ is the Levi-Civita connection. 
Next we replace $\omega = A+\Gamma +v$ by the 'time' dependent connection $$\omega(t) = (1-f(t)) (A_0 +\Gamma_0) +f(t) (A+\Gamma +v).$$ An evaluation of the curvature of a Dirac determinant bundle over $\cal A \times \cal M$ involves an integration of a characteristic class $p_{n+1} (F^{\omega(t)})$ over $M=[0,1]\times N$ and an evaluation of the $\tilde\eta$ form on the boundary. Here $p_i$ denotes a generic homogeneous symmetric invariant polynomial of degree $i$ in the curvature. Actually we have to restrict the construction of the determinant bundle to subsets $U_{\lambda}$ in the parameter space $B$ on the boundary. Here $U_{\lambda}$ is again the set of those points in $B$ such that the associated Dirac operator on $\partial M$ does not have the eigenvalue $\lambda.$ These sets form an open cover of $B.$ In each of the sets $U_{\lambda}$ one can define the $\tilde\eta$ form associated to Dirac operators $D_z -\lambda$ as a continuous function of the parameters $z\in B.$ We shall restrict to the problem of determining the curvature along gauge and diffeomorphism directions on the boundary. The $\tilde\eta$ form is a spectral invariant and therefore the only term which contributes is the appropriate characteristic class in the bulk $M.$ The integration of the index density in the bulk can be performed in two steps. First one integrates over the time variable $t$ and then the resulting expression is integrated over $N$ to produce a 2-form on ${\cal A}(N) \times {\cal M}(N).$ All the computations involving the ghost $v$ are restricted to vertical directions. Restricting to the case of gauge potentials (calculations involving the Levi-Civita connections are performed in the same way) we have \begin{eqnarray} F_{ij}^{\omega(t)} &= &\partial_i A_j(t) -\partial_j A_i(t) +[A_i(t), A_j(t)]\nonumber \\ \nopagebreak F_{0i}^{\omega(t)} &= &f'(t) (A -A_0)_i,\nonumber \end{eqnarray} where we use the index $0$ for the $t$ component. 
For intermediate times $0<t<1$ the curvature has components also to the vertical directions, \begin{eqnarray} (F^{\omega(t)})_{[0,2]}&= &\frac12 f(f-1)[v,v] \nonumber \\ \nopagebreak (F^{\omega(t)})_{[1,1]}&=& v f'(t) dt + f(f -1) [A-A_0, v].\nonumber\end{eqnarray} Here we have denoted by $(F)_{[i,j]}$ the component of a form $F$ which is of degree $i$ in the tangential directions in $P$ and of degree $j$ in the ghost $v.$ If $p_k$ is any homogeneous symmetric function of degree $k$ of the curvature we set $p_k(A,F) = p_k(A,F,F,\dots,F)$ and then \begin{eqnarray} \label{eq:POM} \int_M p_k(F^{\omega (t)})&=& k\int_N \int_{0}^{1} p_k(f'(t)(A+v-A_0), F^{\omega(t)}) dt \nonumber \\ \nopagebreak &\equiv &\int_N \omega_{2k-1}(A+v,A_0).\end{eqnarray} The form on the right, when expanded in powers of the ghost $v$, gives forms of various degrees on the parameter space $B.$ We are interested in the curvature form which is of degree 2 in $v.$ The degree zero part gives just the Chern-Simons form $\omega_{2k-1}(A,A_0)$ and if $N$ were an even dimensional manifold the degree 1 term would be the nonabelian gauge anomaly. In low dimensions one gets familiar explicit formulas; as an example, consider the case of a trivial bundle and $A_0 =0.$ When $n=1$ the relevant characteristic class is $p_2(F)=\frac{1}{2!} (\frac{i}{2\pi})^2 \mbox{tr}\, F^2$ and the curvature $c_1$ along vector fields given by a pair $X,Y$ of infinitesimal gauge transformations on the one-dimensional manifold $N$ is $$\frac{i}{2\pi}c_1(X,Y)= \frac{1}{8\pi^2} \int_N \mbox{tr} \, A[X,Y]$$ and in the case $n=2,$ dim$ N=3,$ $p_3(F)=\frac{1}{3!} (\frac{i}{2\pi})^3 \mbox{tr}\, F^3$, one gets \begin{eqnarray} \frac{i}{2\pi} c_3(X,Y) & = & \frac{1}{48\pi^3} \int_N \mbox{tr} \, \Big( (AdA +dA \,A + A^3)[X,Y] \nonumber \\ \nopagebreak &&+XdA\, YA -YdA\,XA\Big) .\nonumber\end{eqnarray} The case of Levi-Civita connection $\Gamma$ needs some extra remarks. 
The reason is that we have actually two types of Chern-Simons forms (and associated anomaly forms) depending on whether we write the connection with respect to a (local) orthonormal frame in the tangent bundle $TM$ or with respect to the holonomic frame given by coordinate vector fields. Formally, the two Chern-Simons forms (and associated polynomials in $v$) look the same; they are given exactly by the same differential polynomials in $\Gamma_{\mu\nu}^{\lambda}$ (coordinate basis) or in $\Gamma_{\mu a}^b$ (with respect to an anholonomic basis $e_a^{\mu}$). The difference is (locally) an exterior derivative of a form (in $N$) of lower degree. The difference $\Delta \omega$ of the Chern-Simons forms involves the matrix function $e_a^{\mu}(x)$ on $N.$ Since this function takes values in the group $GL(2n-1,\bf R),$ which topologically is equivalent to $SO(2n-1),$ there might be a topological obstruction to writing $\Delta \omega$ globally as $d\theta$ for some form $\theta.$ The potential obstruction is the winding number of the map $e: N\to GL(2n-1,\bf R),$ given by the (normalized) integral $$w(e) = \frac{1}{(2n-1)!} \left(\frac{i}{2\pi}\right) ^{n+1} \int_N \mbox{tr}\, (e^{-1} de)^{2n-1}.$$ The choice $\Gamma_{\mu b}^a$ in the anholonomic frame $e_a$ leads to a diffeomorphism invariant integral $\int_N \omega_{2n-1}(A,A_0)$ and there are no anomalies or 2-forms along Diff$(N)$ orbits in $\cal M.$ On the other hand, there is a frame bundle anomaly related to local frame rotations; this takes exactly the same form as the pure gauge anomaly discussed above, \cite{BZ}. The choice $\Gamma_{\mu\nu}^{\lambda}$ in the coordinate frame is insensitive to the frame rotations $e_a \mapsto e'_a$ but it responds to a local change of coordinates. Explicit formulas for the forms $c_{2n-1}$ along Diff$(N)$ orbits are given in the appendix. 
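In the lowest-dimensional case the winding number obstruction can be computed directly. The sketch below is ours and, for simplicity, uses the familiar $U(1)$ normalization $w(e)=\frac{1}{2\pi i}\oint e^{-1}\,de$ for a circle-valued map (normalization conventions differ from the general formula above); it recovers the winding of $e(x)=e^{ikx}$ on the circle.

```python
import cmath
import math

def winding_number(k, samples=2000):
    """Riemann-sum approximation of w(e) = (1/(2*pi*i)) * integral of
    e^{-1} de over the circle for e(x) = exp(i k x); here the derivative
    e'(x) = i k e(x) is known analytically."""
    dx = 2 * math.pi / samples
    total = 0.0 + 0.0j
    for n in range(samples):
        x = n * dx
        e = cmath.exp(1j * k * x)
        de = 1j * k * e            # derivative of exp(i k x)
        total += de / e * dx
    return (total / (2 * math.pi * 1j)).real

print(round(winding_number(3)))    # the map x -> exp(3ix) winds 3 times
```

Since $e^{-1}e' = ik$ is constant here, the Riemann sum is exact up to floating-point rounding.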
\section{Schwinger terms from the local system of line bundles} As we saw in the previous section, the APS index theorem gives us a system of local determinant bundles $\mbox{DET}_{\lambda}$ over certain open sets $U_{\lambda} \subset B.$ The infinite-dimensional group ${\cal K} = {\cal D}_0 \times {\cal G}_0$ acts in the parameter space $B$ mapping each of the subsets $U_{\lambda}$ onto itself. We denote $\bf k=$ Lie$(\cal K).$ In general, the determinant bundles are topologically nontrivial and one cannot lift directly the action of $\cal K$ to the total space of $\mbox{DET}_{\lambda}.$ Instead, there is an extension $\hat {\cal K}$ which acts in the determinant bundles. The Lie algebra $\hat{\bf k}$ of $\hat{\cal K}$ is given in a standard way. It consists of pairs $(X,\alpha),$ where $X\in {\bf k}$ and $\alpha$ is a function on $B,$ with commutation relations $$[(X,\alpha), (Y,\beta)] = ([X,Y], {\cal L}_X \beta -{\cal L}_Y\alpha + c(X,Y; \cdot) ),$$ where the Schwinger term $c$ is a purely imaginary function on $B$ and antisymmetric bilinear function on $\bf k.$ It is defined as the value of the curvature of $\mbox{DET}_{\lambda}$ at the given point in $B$ along the vector fields $X,Y$ on $B.$ Here ${\cal L}_X$ denotes the Lie derivative on $B$ along the vector field $X.$ The Jacobi identity in $\hat{\bf k}$ is an immediate consequence of the fact that the curvature is a closed 2-form on $U_{\lambda}\subset B.$ Let $U_{\lambda\lambda'}=U_{\lambda}\cap U_{\lambda'}.$ Over $U_{\lambda \lambda'}$ there is a natural complex line bundle $\mbox{DET}_{\lambda\lambda'}$ such that the fiber at a point $z$ is the top exterior power of the finite-dimensional vector space spanned by all eigenvectors of the Dirac operator $D_z$ on $N$ with eigenvalues in the range $\lambda < \mu < \lambda'.$ If $\lambda' < \lambda,$ we set $\mbox{DET}_{\lambda \lambda'} = \mbox{DET}_{\lambda'\lambda}^*.$ By construction, we have a natural isomorphism $$\mbox{DET}_{\lambda\lambda'}\otimes
\mbox{DET}_{\lambda'\lambda''}\simeq \mbox{DET}_{\lambda \lambda''},$$ for all triples $\lambda,\lambda',\lambda''.$ \begin{theorem} (\cite{CMM}) For any pairs $\lambda,\lambda'$ of real numbers one has $$\mbox{\rm DET}_{\lambda\lambda'} \simeq \mbox{\rm DET}_{\lambda} \otimes \mbox{\rm DET}_{\lambda'}^*$$ over the set $U_{\lambda\lambda'}.$ \end{theorem} Note that even though in \cite{CMM} the discussion mainly concerned the case of gauge potentials and gauge transformations, the proof of the theorem was abstract and very general, not depending on the particular type of parameter space for Dirac operators. In the gerbe terminology the content of this theorem is that the gerbe defined by the system of local line bundles $\mbox{DET}_{\lambda\lambda'}$ is trivial. The line bundles can be pushed forward to give a family of local line bundles on $B/\cal K$ since the spectral subspaces transform equivariantly under gauge transformations and changes of coordinates. However, over $B/\cal K$ the gerbe is no longer trivial, i.e., it cannot be given as tensor products of local line bundles over the sets $pr(U_{\lambda}),$ where $pr: B \to B/{\cal K}$ is the canonical projection. The obstruction to the trivialization is an element of $H^3(B/{\cal K}, \bf Z),$ the Dixmier-Douady class of the gerbe. In \cite{CMM} the DD class was computed from the index theory in the case of Yang-Mills theory; the generalization to the case involving metrics and diffeomorphisms is straightforward and the free part of the cohomology class is given by the integral formula \Ref{eq:FDD}, with $B$ replaced by $B/\cal K.$ The importance of the above theorem comes from the following simple observation. Let $H_z=H_+(z,\lambda) \oplus H_-(z,\lambda)$ be the spectral decomposition of the fermionic '1-particle' Hilbert space with respect to a spectral cut at $\lambda\in {\bf R}$, not in the spectrum of ${D_z}$. 
This determines a representation of the CAR algebra in a Fock space ${\cal F}(z,\lambda),$ with a normalized vacuum vector $|z,\lambda>.$ The defining property of this representation is that $$a(u)|z,\lambda>=0= a^*(u')|z,\lambda>,$$ for $u\in H_+(z,\lambda)$ and $u'\in H_-(z,\lambda)$. All creation operators $a^*(u)$ and annihilation operators $a(u)$ are anticommuting except $$a^*(u) a(u') + a(u') a^*(u) = <u',u>,$$ where $<\cdot,\cdot>$ is the inner product in $H_z.$ If we change the vacuum level from $\lambda$ to $\lambda'> \lambda,$ we have an isomorphism ${\cal F}(z, \lambda) \to {\cal F}(z,\lambda')$ which is natural up to a multiplicative phase. The phase is fixed by a choice of normalized eigenvectors $u_1,u_2,\dots u_p$ in the energy range $\lambda < D_z < \lambda'$ and setting $|z,\lambda'> = a^*(u_1) \dots a^*(u_p)|z,\lambda>.$ But this choice is exactly the same as choosing a (normalized) element in $\mbox{DET}_{\lambda\lambda'}$ over the point $z\in B.$ Thus, setting ${\cal F}_z ={\cal F}(z,\lambda) \otimes \mbox{DET}_{\lambda}(z)$ for any $\lambda$ not in the spectrum of $D_z$ we obtain, according to Theorem 1, a family of Fock spaces parametrized by points of $B$ which does not depend on the choice of $\lambda,$ \cite{M2}. This gives us a smooth Fock bundle $\cal F$ over $B.$ The $\cal K$ action on the base lifts to a $\hat{\cal K}$ action in $\cal F,$ the extension part in $\hat{\cal K}$ coming entirely from the action in the determinant bundles $\mbox{DET}_{\lambda}.$ The Schr\"odinger wave functions for quantized fermions in background fields (parametrized by points of $B$) are sections of the Fock bundle. It follows that the Schwinger terms for the infinitesimal generators of $\hat{\cal K},$ acting on Schr\"odinger wave functions, are given by the formula for $c$ which describes the curvature of the determinant bundle in the $\cal K$ directions. 
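The CAR relation $a^*(u) a(u') + a(u') a^*(u) = <u',u>$ quoted above can be made concrete for a finite number of modes. The following sketch is our toy construction (not from the source): it realizes the Fock space of $M$ fermionic modes on occupation-number basis states, with the usual fermionic sign factors, and checks the anticommutation relations for the mode operators.

```python
from itertools import product

M = 3  # number of fermionic modes; the Fock space has dimension 2**M

def sign(state, k):
    """(-1)**(number of occupied modes below k), the usual fermionic sign."""
    return -1 if sum(state[:k]) % 2 else 1

def create(k, vec):
    """Apply a_k^* to a vector given as {occupation tuple: amplitude}."""
    out = {}
    for state, amp in vec.items():
        if state[k] == 0:
            new = state[:k] + (1,) + state[k + 1:]
            out[new] = out.get(new, 0) + sign(state, k) * amp
    return out

def annihilate(k, vec):
    """Apply a_k to a vector."""
    out = {}
    for state, amp in vec.items():
        if state[k] == 1:
            new = state[:k] + (0,) + state[k + 1:]
            out[new] = out.get(new, 0) + sign(state, k) * amp
    return out

def add(u, v):
    """Sum of two vectors, dropping zero amplitudes."""
    out = dict(u)
    for s, a in v.items():
        out[s] = out.get(s, 0) + a
    return {s: a for s, a in out.items() if a != 0}

# Check {a_j, a_k^*} = delta_{jk} on every basis state.
for j in range(M):
    for k in range(M):
        for occ in product((0, 1), repeat=M):
            basis = {occ: 1}
            lhs = add(annihilate(j, create(k, basis)),
                      create(k, annihilate(j, basis)))
            assert lhs == (basis if j == k else {})
print("CAR relations hold on", 2 ** M, "basis states")
```

The general operators $a(u)=\sum_k \bar u_k a_k$ then inherit the relation by (anti)linearity.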
In the case of $B=\cal A$ and ${\cal K}={\cal G},$ the elements in the Lie algebra are the Gauss law generators. This case was discussed in detail in \cite{CMM}. More generally, we give explicit formulas for the Schwinger terms in section \ref{sec:EC}. \section{Explicit computations} \label{sec:EC} The Schwinger term in $(2n-1)$-dimensional space will now be computed. This will be done using Yang-Mills notation, but it works for diffeomorphisms as well if a different symmetric invariant polynomial is used. Eq. \Ref{eq:POM} gives that \begin{eqnarray} &&\omega _{2n+1}(A+v,A_0) = (n+1)\int _0^1f^{\prime }p_{n+1}\Big( A+v-A_0 , \nonumber \\ \nopagebreak && fdA+f^2A^2+(1-f)dA_0+ (1-f)^2A_0^2 -f(f-1)[A_0,A]\nonumber \\ \nopagebreak && +f( f-1) [A-A_0,v]+\frac{1}{2}f(f-1)[v,v]\Big) dt.\nonumber \end{eqnarray} The Schwinger term can be calculated from this expression. However, since we are only interested in the Schwinger term up to a coboundary, an alternative is to use the \lq triangle formula\rq $ $ as in \cite{MS}: \[ \omega _{2n+1}(A+v,A_0) \sim \omega _{2n+1}(A_0+v,A_0)+\omega _{2n+1}(A+v,A_0+v), \] where \lq $\sim$\rq $ $ means equality up to a coboundary with respect to $d+\delta$, where $\delta$ is the BRST operator. This gives a simpler expression for the non-integrated Schwinger term and also for all other ghost degrees of $\omega _{2n+1}(A+v,A_0)$. Straightforward computations give the result \begin{eqnarray} && \omega _{2n+1}(A+v,A_0)_{(2)} \sim \frac{(n+1)n}{2}p_{n+1}\left( v, dv+[A_0,v],dA_0+A_0^2\right) \nonumber \\ \nopagebreak && +\frac{(n+1)n(n-1)}{2}\int _0^1f^{\prime }(1-f)^2p_{n+1}\Big( A-A_0, dv+[A_0,v], \nonumber \\ \nopagebreak && dv+[A_0,v] , fdA+f^2A^2+(1-f)dA_0\nonumber \\ \nopagebreak && +(1-f)^2A_0^2- f(f-1)[A_0,A]\Big) dt\nonumber \end{eqnarray} where the index $(2)$ means the part of the form that is quadratic in the ghost. 
Inserting $n=1,2$ and $3$ gives: \begin{eqnarray} && \omega _3(A+v,A_0)_{(2)} \sim p_2(v, dv+[A_0,v])\nonumber \\ \nopagebreak && \omega _5(A+v,A_0)_{(2)} \sim 3p_3\left( v, dv+[A_0,v],dA_0+A_0^2\right) \nonumber \\ \nopagebreak && +p_3(A-A_0, dv+[A_0,v], dv+[A_0,v])\nonumber \\ \nopagebreak && \omega _7(A+v,A_0)_{(2)} \sim 6p_4\left( v, dv+[A_0,v],dA_0+A_0^2,dA_0+A_0^2\right) \nonumber \\ \nopagebreak && +p_4\big( A-A_0, dv+[A_0,v], dv+[A_0,v], \nonumber \\ \nopagebreak && dA +\frac{2}{5}A^2+3dA_0+\frac{12}{5}A_0^2+\frac{3}{5}[A_0,A]\big) .\nonumber \end{eqnarray} This gives expressions for the non-integrated Schwinger term in a pure Yang-Mills potential if $p_{n+1}$ is the symmetrized trace. The appropriate polynomial to use for the Levi-Civita connection is $p_{n+1}=\hat{A}(M)_{n+1}$, according to eq. \Ref{eq:AR}. Using \begin{eqnarray} \hat{A}(M) & = & 1+\left(\frac{1}{4\pi }\right) ^2\frac{1}{12}\mbox{tr}\left( R^2\right) \nonumber \\ \nopagebreak &&+ \left(\frac{1}{4\pi }\right) ^4\left( \frac{1}{288}\left( \mbox{tr}\left( R^2\right) \right) ^2 + \frac{1}{360}\mbox{tr}\left( R^4\right)\right) + ...\nonumber \end{eqnarray} gives for $n=1$ and 2: \begin{eqnarray} \omega _3(\Gamma +v,\Gamma _0)_{(2)} & \sim & \left(\frac{1}{4\pi }\right) ^2\frac{1}{12}\left(vdv+2\Gamma _0v^2\right)\nonumber \\ \nopagebreak \omega _5(\Gamma +v,\Gamma _0)_{(2)} & \sim & 0.\nonumber \end{eqnarray} Since the expression for $\omega _7$ is rather long we will omit to write it down. However, for the special case $\Gamma _0=0$ it becomes: \begin{eqnarray} && \omega _7(\Gamma +v,0)_{(2)} \sim \nonumber \\ \nopagebreak && \left(\frac{1}{4\pi }\right) ^4\left( \frac{1}{288}\cdot\frac{2}{3}\mbox{tr} \left( \Gamma dv\right)\mbox{tr}\left( dv\left( d\Gamma+\frac{2}{5}\Gamma ^2\right)\right) \right. 
\nonumber \\ \nopagebreak &&+ \frac{1}{288}\cdot\frac{1}{3}\mbox{tr}\left( (dv)^2\right) \mbox{tr}\left( \Gamma \left( d\Gamma +\frac{2}{5}\Gamma ^2\right)\right) \nonumber \\ \nopagebreak && \left. + \frac{1}{360}\cdot\frac{1}{3}\mbox{tr} \left(\left( R-\frac{3}{5}\Gamma ^2\right) \left( (dv)^2\Gamma +(dv)\Gamma dv +\Gamma (dv)^2\right)\right)\right) .\nonumber \end{eqnarray} This expression can be simplified by subtracting the coboundary \begin{eqnarray} &&\left(\frac{1}{4\pi }\right) ^4\frac{1}{288}\cdot\frac{2}{3}\left( \delta\left(\mbox{tr}\left( \Gamma dv\right)\mbox{tr}\left( \Gamma d\Gamma +\frac{4}{5}\Gamma ^3\right)\right)\right. \nonumber \\ \nopagebreak && +\left. d\left(\mbox{tr}\left( v dv\right)\mbox{tr}\left( \Gamma d\Gamma +\frac{2}{3}\Gamma ^3\right)\right)\right) .\nonumber \end{eqnarray} The result is \begin{eqnarray} && \omega _7 (\Gamma +v,0)_{(2)} \sim \left(\frac{1}{4\pi }\right) ^4\left( \frac{1}{288}\mbox{tr} \left( v dv\right)\mbox{tr}R^2\right. \nonumber \\ \nopagebreak && \left. + \frac{1}{360}\cdot\frac{1}{3}\mbox{tr} \left(\left( R-\frac{3}{5}\Gamma ^2\right)\left( (dv)^2\Gamma +(dv)\Gamma dv +\Gamma (dv)^2\right)\right)\right) .\nonumber \end{eqnarray} The gravitational Schwinger terms are obtained by multiplying by the normalization factor $(i/2\pi )^{-1}$, inserting the integration over $N$ and evaluating on vector fields $X$ and $Y$ on $M$ generating diffeomorphisms. The Levi-Civita connection and curvature have components $(\Gamma )^{i^{\prime }}_{\,\, j^{\prime }}=\Gamma ^{i^{\prime }}_{i j^{\prime }}dx^i$ and $(R )^{ i^{\prime }}_{\,\, j^{\prime }}=R ^{i^{\prime }}_{ij j^{\prime }}dx^i\wedge dx^j$. Recall that $(v(X))^{i^{\prime }}_{\,\, j^{\prime }}=\partial _{j^{ \prime }}X^{i^{\prime }}$, see, for instance, \cite{BZ}. 
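The numerical coefficients $\frac{1}{12}$, $\frac{1}{288}$ and $\frac{1}{360}$ in the expansion of $\hat{A}(M)$ above follow from the scalar series $\log \frac{x}{\sinh x}=-\frac{x^2}{6}+\frac{x^4}{180}-\dots$ together with $\mbox{det}^{1/2}=\exp (\frac{1}{2}\mbox{tr}\log )$. The following sketch (ours) rederives these series coefficients with exact rational arithmetic.

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation order: keep coefficients of x^0 ... x^7

def mul(a, b):
    """Product of truncated power series (lists of coefficients)."""
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# u(x) = sinh(x)/x - 1 = x^2/3! + x^4/5! + ...
u = [Fraction(0)] * N
for k in range(1, (N - 1) // 2 + 1):
    u[2 * k] = Fraction(1, factorial(2 * k + 1))

# log(x/sinh x) = -log(1 + u) = sum_{n>=1} (-1)^n u^n / n
log_series = [Fraction(0)] * N
power = [Fraction(1)] + [Fraction(0)] * (N - 1)  # u^0
for n in range(1, N):
    power = mul(power, u)                        # u^n
    s = Fraction((-1) ** n, n)
    log_series = [c + s * p for c, p in zip(log_series, power)]

# Coefficients entering the A-roof expansion via (1/2) tr log:
assert log_series[2] == Fraction(-1, 6)   # (1/2)*(1/6)  -> the 1/12 term
assert log_series[4] == Fraction(1, 180)  # (1/2)*(1/180) -> the 1/360 term
# Exponentiating, the (tr R^2)^2 term carries (1/2)*((1/2)*(1/6))^2 = 1/288:
assert Fraction(1, 2) * (Fraction(1, 2) * log_series[2]) ** 2 == Fraction(1, 288)
print("A-roof coefficients 1/12, 1/288, 1/360 confirmed")
```

Substituting $x=iR/4\pi$ then produces the $(1/4\pi)^2$ and $(1/4\pi)^4$ prefactors in the displayed expansion.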
To illustrate how the Schwinger terms can be computed, we give the result for 1 space dimension: \[ -2\pi i\int _N\omega _3(\Gamma +v,\Gamma _0)_{(2)}(X,Y) \sim -\frac{i}{48\pi }\int _N \left(\partial _xX\right) \partial _x^2Ydx, \] where \lq $\sim$\rq $ $ now means equality up to a coboundary with respect to the BRST operator. When both a Yang-Mills field and gravity are present, the relevant polynomial is a sum of polynomials of type \[ p_k\left( F^{\omega (t)}\right) \tilde{p}_l\left( R^{\omega (t)}\right) , \] where the curvatures $F^{\omega (t)}$ and $R^{\omega (t)}$ are with respect to pure Yang-Mills and pure gravity, respectively. This gives \begin{eqnarray} && \int _Mp_k\left( F^{\omega (t)}\right) \tilde{p}_l\left( R^{\omega (t)}\right) \nonumber \\ \nopagebreak &=& \int _N\int _0^1\Big[ kp_k\left( f^{\prime }(t)(A+v_A-A_0), F^{\omega (f(t))}\right) \tilde{p}_l\left( R^{\omega (h(t))}\right) \nonumber \\ \nopagebreak && +lp_k\left( F^{\omega (f(t))}\right) \tilde{p}_l\left( h^{\prime }(t)(\Gamma +v_{\Gamma }-\Gamma _0),R^{\omega (h(t))}\right) \Big] dt\nonumber . \end{eqnarray} The expression is independent of $f$ and $h$ (see below). With a choice such that $f^{\prime }(t)=0$, $t\in [1/2,1]$ and $h^{\prime }(t)=0$, $t\in [0,1/2]$, this implies that \begin{eqnarray} \label{eq:REFD} \int _Mp_k\left( F^{\omega (t)}\right) \tilde{p}_l\left( R^{\omega (t)}\right) & = &\int _N\left( \omega _{2k-1}(A+v_A,A_0)\tilde{p}_l(R_0)\right.\nonumber \\ \nopagebreak &&\left. +p_k(F)\tilde{\omega }_{2l-1}(\Gamma +v_{\Gamma },\Gamma _0)\right) . \end{eqnarray} Thus, the Schwinger term in combined Yang-Mills and gravity is up to a coboundary equal to the part of the expansion of \Ref{eq:REFD} that is of second ghost degree. In particular, this implies that Schwinger terms which have one Yang-Mills ghost and one diffeomorphism ghost are in cohomology equal to the Schwinger term obtained from the form in \Ref{eq:REFD}. Thus, truly mixed Schwinger terms do not exist. 
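The one-dimensional result above involves the integral $\int _N(\partial _xX)\partial _x^2Y\, dx$, a cocycle of Virasoro type. The sketch below (ours; the Fourier modes are an illustrative choice) evaluates this integral on the circle and checks its antisymmetry under the exchange of the two vector fields, which follows from integration by parts; the actual Schwinger term is this integral times the prefactor $-\frac{i}{48\pi }$.

```python
import math

def schwinger_integral(m, k, samples=512):
    """Riemann-sum approximation of I = integral over [0, 2*pi] of
    X'(x) Y''(x) dx for the modes X(x) = sin(m x), Y(x) = cos(k x),
    with the derivatives taken analytically."""
    dx = 2 * math.pi / samples
    total = 0.0
    for n in range(samples):
        x = n * dx
        Xp = m * math.cos(m * x)           # X'
        Ypp = -k * k * math.cos(k * x)     # Y''
        total += Xp * Ypp * dx
    return total

def schwinger_integral_swapped(m, k, samples=512):
    """The same integral with the roles of X and Y exchanged:
    integral of Y'(x) X''(x) dx."""
    dx = 2 * math.pi / samples
    total = 0.0
    for n in range(samples):
        x = n * dx
        Yp = -k * math.sin(k * x)          # Y'
        Xpp = -m * m * math.sin(m * x)     # X''
        total += Yp * Xpp * dx
    return total

I = schwinger_integral(3, 3)
assert abs(I + math.pi * 27) < 1e-9                        # equal modes: -pi m^3
assert abs(schwinger_integral(2, 5)) < 1e-9                # distinct modes vanish
assert abs(I + schwinger_integral_swapped(3, 3)) < 1e-9    # antisymmetry
print("1-dimensional Schwinger integral checks passed")
```

The cubic growth in the mode number $m$ is the familiar signature of a Virasoro-type central term.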
Notice that if the background fields are vanishing then the Schwinger term is gravitational (although some parts of the form degrees are taken up by the Yang-Mills polynomial). This can give anomalies of Virasoro type in higher dimensions. Observe that there is nothing special about gravity: a Yang-Mills Schwinger term is obtained by interchanging the roles of $f$ and $h$. This does not, however, mean that the gravitational Schwinger term differs from the Yang-Mills Schwinger term by a coboundary. The terms with $k=0$ and $l=0$, respectively, ruin this argument. It is easy to see that our method of computing the Schwinger term agrees with one of the most common approaches: The polynomial $p_k(F^n)-p_k(F_0^n)$ is written as $(d+\delta )$ acting on a form, the (non-integrated) Chern-Simons form. The Schwinger term is then given by the part of the Chern-Simons form that is quadratic in the ghost. For the case when both Yang-Mills and gravity are present, the relevant polynomial is a sum of polynomials \begin{equation} \label{eq:POL} p_k(F)\tilde{p}_l(R)-p_k(F_0)\tilde{p}_l(R_0).\nonumber \end{equation} There is an ambiguity in the definition of the Chern-Simons form; it is for instance possible to add forms of type $(d+\delta )\chi$ to it. However, an ambiguity of this type will only change the Schwinger term by a coboundary. It will now be shown that the ambiguity in the definition of the Chern-Simons form is only of this type. Thus, we must prove that closedness with respect to $(d+\delta )$ implies exactness. This can be done by introducing the degree 1 derivation $\triangle$ defined on the generators by: $\triangle (d+\delta )(A+v_A)=A+v_A, \triangle (d+\delta )(\Gamma +v_{\Gamma })=\Gamma +v_{\Gamma }, \triangle dA_0=A_0,\triangle d\Gamma _0=\Gamma _0$, and otherwise zero. Then $\triangle (d+\delta )+(d+\delta )\triangle $ is a degree 0 derivation which is equal to 1 on the generators. 
Therefore, if $\chi$ is closed with respect to $(d+\delta )$, then $\chi$ is proportional to $(d+\delta )\triangle \chi$. An example of a (non-integrated) Chern-Simons form for the polynomial in \Ref{eq:POL} is \[ \omega _{2k-1}(A+v_A,A_0)\tilde{p}_l(R_0)+p_k(F)\tilde{\omega }_{2l-1}(\Gamma +v_{\Gamma },\Gamma _0). \] This is in complete agreement with \Ref{eq:REFD}. \newpage
\section{Introduction} \label{sec:intro} We study turn-based graph games between two players---Player Min and Player Max---who take turns to move a token along the vertices of a coloured finite graph so as to optimise their adversarial objectives. Various classes of graph games are characterised by the objective of the players, for instance in \emph{parity games} the objective is to optimise the parity of the dominating colour occurring infinitely often, while in \emph{discounted and mean-payoff games} the objective is the discounted and limit-average sum of the colours. Solving graph games is the central and most expensive step in many model checking~\cite{Kozen/83/mu,Emerson+all/93/mu,Wilke/01/Alternating,deAlfaro+Henziger+Majumdar/01/Control,Alur+Henziger+Kupferman/02/ATL}, satisfiability checking~\cite{Wilke/01/Alternating,Kozen/83/mu,Vardi/98/2WayAutomata,Schewe+Finkbeiner/06/ATM}, and synthesis~\cite{Piterman/06/Parity,Schewe+Finkbeiner/06/Asynchronous} algorithms. More efficient algorithms for solving graph games will therefore foster the development of performant model checkers and contribute to bringing synthesis techniques to practice. Parity games enjoy a special status among graph games and the quest for performant algorithms \cite{Kozen/83/mu,Emerson+Lei/86/Parity,Emerson+Jutla/91/Memoryless,McNaughton/93/Games,Zwick+Paterson/96/payoff,Browne-all/97/fixedpoint,Zielonka/98/Parity,Jurdzinski/00/ParityGames,Ludwig/95/random,Puri/95/simprove,Voge+Jurdzinski/00/simprove,BjorklundVorobyov/07/subexp,Obdrzalek/03/TreeWidth,Lange/05/ParitySAT,Berwanger+all/06/ParityDAG,Jurdzinski/06/subex,Schewe/07/parity,Schewe/08/improvement,Fearnley/10/snare} for solving them has therefore been an active field of research during the last decades. 
Traditional forward techniques \mbox{($\approx O(n^{\frac{1}{2}c})$~\cite{Jurdzinski/00/ParityGames}} for parity games with $n$ positions and $c$ colours), backward techniques ($\approx\hspace*{-1.9pt} O(n^{c})$~\cite{McNaughton/93/Games,Emerson+Lei/86/Parity,Zielonka/98/Parity}), and their combination ($\approx \hspace*{-1.9pt} O(n^{\frac{1}{3}c})$~\cite{Schewe/07/parity}) provide good complexity bounds. However, these bounds are sharp, and techniques with good complexity bounds~\cite{Schewe/07/parity,Jurdzinski/00/ParityGames} frequently display their worst case complexity on practical examples. Strategy improvement algorithms~\cite{Ludwig/95/random,Puri/95/simprove,Voge+Jurdzinski/00/simprove,BjorklundVorobyov/07/subexp,Schewe/08/improvement,Fearnley/10/snare}, on the other hand, are closely related to the Simplex algorithm for solving linear programming problems, which performs well in practice. Classic strategy improvement algorithms are built around the existence of optimal positional strategies for both players. They start with an arbitrary positional strategy for a player and iteratively compute a better positional strategy in every step until the strategy cannot be further improved. Since there are only finitely many positional strategies in a finite graph, termination is guaranteed. The crucial step in a strategy improvement algorithm is to compute a better strategy from the current strategy. Given a current strategy $\sigma$ of a player (say, \Pmax), this step is performed by first computing the globally optimal counter strategy $\tau^c_\sigma$ of the opponent (\Pmin) and then computing the value of each vertex of the game restricted to the strategies $\sigma$ and $\tau^c_\sigma$. For the games under discussion (parity, discounted, and mean-payoff) both of these computations are simple and tractable. This value dictates potentially locally profitable changes or switches $\prof(\sigma)$ that \Pmax\ can make vis-{\`a}-vis his previous strategy $\sigma$. 
For the correctness of the strategy improvement algorithm it is required that such locally profitable changes imply a global improvement. The strategy of \Pmax\ can then be updated according to a switching rule (akin to the pivoting rules of the Simplex algorithm) in order to obtain an improved strategy. This has led to the following template for classic strategy improvement algorithms. \begin{algorithm} \caption{\label{alg:classic-sia} Classic strategy improvement algorithm} determine an optimal counter strategy $\tau^c_\sigma$ for $\sigma$\\ evaluate the game for $\sigma$ and $\tau^c_\sigma$ and determine the profitable changes $\prof(\sigma)$ for $\sigma$ \\ update $\sigma$ by applying changes from $\prof(\sigma)$ to $\sigma$\\ \end{algorithm} A number of switching rules, including ones inspired by Simplex pivoting rules, have been suggested for strategy improvement algorithms. The most widespread ones are to apply changes at all game states where this is possible, to choose a combination of changes with an optimal update guarantee, or to choose changes uniformly at random. For some classes of games, it is also possible to select an optimal combination of updates \cite{Schewe/08/improvement}. There have also been suggestions to use more advanced randomisation techniques with sub-exponential -- $2^{O(\sqrt{n})}$ -- bounds \cite{BjorklundVorobyov/07/subexp} and snare memory \cite{Fearnley/10/snare}. Unfortunately, all of these techniques have been shown to be exponential in the size of the game in the worst case \cite{Friedmann/11/lower,Friedmann/11/Zadeh,Friedmann/13/snare}. Classic strategy improvement algorithms treat the two players involved quite differently: at each iteration, one player computes a globally optimal counter strategy, while the other player performs local updates. In contrast, a \emph{symmetric strategy improvement} algorithm improves the strategies of both players at the same time, and uses the findings to guide the strategy improvement.
This suggests the following na\"ive symmetric approach. \begin{algorithm} \caption{\label{alg:classic-ssia} Na\"ive symmetric strategy improvement algorithm} determine $\tau' = \tau^c_\sigma$ \hspace{50mm} determine $\sigma' = \sigma^c_\tau$\\ update $\sigma$ to $\sigma'$ \hspace{56.6mm} update $\tau$ to $\tau'$\\ \end{algorithm} This algorithm has earlier been suggested by Condon~\cite{Condon93onalgorithms} where it was shown that a repeated application of this update can lead to cycles \cite{Condon93onalgorithms}. A problem with this na\"ive approach is that there is no guarantee that the primed strategies are generally better than the unprimed ones. With hindsight this is maybe not very surprising, as in particular no improvement in the evaluation of running the game with $\sigma',\tau'$ can be expected over running the game with $\sigma,\tau$, as an improvement for one player is on the expense of the other. This observation led to the approach being abandoned. In this paper we propose the following more careful symmetric strategy improvement algorithm that guarantees improvements in each iteration similar to classic strategy improvement. \begin{algorithm} \caption{\label{alg:novel-ssia} Symmetric strategy improvement algorithm} determine $\tau^c_\sigma$ \hspace{58mm} determine $\sigma^c_\tau$\\ determine $\prof(\sigma)$ for $\sigma$ \hspace{40.3mm} determine $\prof(\tau)$ for $\tau$\\ update $\sigma$ using $\prof(\sigma) \cap \sigma^c_\tau$ \hspace{33.4mm} update $\tau$ using $\prof(\tau) \cap \tau^c_\sigma$\\ \end{algorithm} The main difference to classic strategy improvement approaches is that we exploit the strategy of the other player to inform the search for a good improvement step. In this algorithm we select only such updates to the two strategies that agree with the optimal counter strategy to the respective other's strategy. We believe that this will provide a gradually improving advice function that will lead to few iterations. 
We support this assumption by showing that this algorithm suffices to escape the traps Friedmann has laid to establish lower bounds for different types of strategy improvement algorithms \cite{Friedmann/11/lower,Friedmann/11/Zadeh,Friedmann/13/snare}. \section{Preliminaries} \label{sec:prelims} We focus on turn-based zero-sum games played between two players---\PZero{} and \POne{}---over finite graphs. A game arena $\Aa$ is a tuple $(V_\mMAX, V_\mMIN, E, C, \phi)$, where $(V = V_\mMAX \cup V_\mMIN, E)$ is a finite directed graph with the set of vertices $V$ partitioned into a set $V_\mMAX$ of vertices controlled by \Pmax\ and a set $V_\mMIN$ of vertices controlled by \Pmin, $E \subseteq V \times V$ is the set of edges, $C$ is a set of colours, and $\phi: V \to C$ is the colour mapping. We require that every vertex has at least one outgoing edge. A turn-based game over $\Aa$ is played between the players by moving a token along the edges of the arena. A play of such a game starts by placing a token on some initial vertex $v_0 \in V$. The player controlling this vertex then chooses a successor vertex $v_1$ such that $(v_0, v_1) \in E$, and the token is moved to this successor vertex. In the next turn, the player controlling the vertex $v_1$ chooses the successor vertex $v_2$ with $(v_1, v_2) \in E$, and the token is moved accordingly. Both players move the token over the arena in this manner and thus form a play of the game. Formally, a play of a game over $\Aa$ is an infinite sequence of vertices $\seq{v_0, v_1, \ldots} \in V^\omega$ such that, for all $i \geq 0$, we have that $(v_i, v_{i+1}) \in E$. We write $\mathsf{Plays}_\Aa(v)$ for the set of plays over $\Aa$ starting from vertex $v \in V$ and $\mathsf{Plays}_\Aa$ for the set of plays of the game. We omit the subscript when the arena is clear from the context.
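For concreteness, the arena definition can be transcribed directly into code. The encoding below is our own illustration (not part of the formal development); vertex and colour types are left generic, and the requirement that every vertex has an outgoing edge is checked on access.

```python
from dataclasses import dataclass

# Illustrative encoding of a game arena (V_max, V_min, E, C, phi).
@dataclass
class Arena:
    max_vertices: frozenset  # vertices controlled by Player Max
    min_vertices: frozenset  # vertices controlled by Player Min
    edges: frozenset         # E as a set of (v, w) pairs
    colour: dict             # phi: V -> C

    def successors(self, v):
        succ = sorted(w for (u, w) in self.edges if u == v)
        assert succ, "every vertex must have at least one outgoing edge"
        return succ
```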
We extend the colour mapping $\phi: V \to C$ from vertices to plays by defining the mapping $\phi: \mathsf{Plays} \to C^\omega$ as $\seq{v_0, v_1, \ldots} \mapsto \seq{\phi(v_0), \phi(v_1), \ldots}$. \begin{definition}[Graph Games] A graph game $\Gg$ is a tuple $(\Aa, \eta, \prec)$ such that $\Aa$ is an \emph {arena}, $\eta: C^\omega \to \Dd$ is an evaluation function where $\Dd$ is the carrier set of a complete space, and $\prec$ is a preference ordering over $\Dd$. \end{definition} \begin{example} Parity, mean-payoff and discounted payoff games are graph games $(\Aa, \eta, \prec)$ played on game arenas $\Aa = (V_\mMAX, V_\mMIN, E, \mathbb{R}, \phi)$. For mean payoff games the evaluation function is $\eta : \seq{c_0, c_1, \ldots} \mapsto \liminf_{i \rightarrow \infty} \frac{1}{i}\sum_{j=0}^{i-1} c_j$, while for discounted payoff games with discount factor $\lambda \in [0,1)$ it is $\eta : \seq{c_0, c_1, \ldots} \mapsto \sum_{i=0}^{\infty} \lambda^i c_i$ with $\prec$ as the natural order over the reals. For (max) parity games the evaluation function is $\eta : \seq{c_0, c_1, \ldots} \mapsto \limsup_{i \rightarrow \infty} c_i$ often used with a preference order $\prec_{\mathrm{parity}}$ where higher even colours are preferred over smaller even colours, even colours are preferred over odd colours, and smaller odd colours are preferred over higher odd colours. In the remainder of this paper, we will use parity games where every colour is unique, i.e., where $\phi$ is injective. All parity games can be translated into such games as discussed in \cite{Voge+Jurdzinski/00/simprove}. For these games, we use a valuation function based on their progress measure. 
We define $\eta$ as $\seq{c_0, c_1, \ldots} \mapsto (c, C, d)$, where $c = \limsup_{i \rightarrow \infty} c_i$ is the dominant colour of the colour sequence, $d = \min\{ i \in \omega \mid c_i = c\}$ is the index of the first occurrence of $c$, and $C = \{c_i \mid i < d, c_i > c\}$ is the set of colours greater than $c$ that occur before the first occurrence of $c$. The preference order is defined as follows: we have $(c', C', d') \prec (c, C, d)$ if \begin{itemize} \item $c' \prec_{\mathrm{parity}} c$, \item $c {=} c'$ and the highest colour $h$ in the symmetric difference between $C$ and $C'$ is even~and~in~$C$, \item $c {=} c'$ and the highest colour $h$ in the symmetric difference between $C$ and $C'$ is odd and~in~$C'$, \item $c = c'$ is even, $C = C'$, and $d<d'$, or \item $c = c'$ is odd, $C = C'$, and $d>d'$. \end{itemize} \end{example} \begin{definition}[Strategies] A strategy of \Pmax\ is a function $\sigma: V^*V_\mMAX \rightarrow V$ such that $\big(v,\sigma(\pi v)\big) \in E$ for all $\pi \in V^*$ and $v \in V_\mMAX$. Similarly, a strategy of \Pmin\ is a function $\tau: V^*V_\mMIN \rightarrow V$ such that $\big(v,\tau(\pi v)\big) \in E$ for all $\pi \in V^*$ and $v \in V_\mMIN$. We write $\Sigma^\infty$ and $T^\infty$ for the sets of strategies of \Pmax\ and \Pmin, respectively. \end{definition} \begin{definition}[Valuation] For a strategy pair $(\sigma, \tau) \in \Sigma^\infty \times T^\infty$ and an initial vertex $v \in V$ we denote the unique play starting from the vertex $v$ by $\pi(v, \sigma, \tau)$, and we write $\val_\Gg(v, \sigma, \tau)$ for the value of the vertex $v$ under the strategy pair $(\sigma, \tau)$, defined as \[ \val_\Gg(v, \sigma, \tau) \rmdef \eta\big(\phi(\pi(v, \sigma, \tau))\big).
\] We also define the value of a strategy $\sigma \in \Sigma^\infty$ and of a strategy $\tau \in T^\infty$ as \[ \val_{\Gg}(v,\sigma) \rmdef \inf_{\tau \in T^\infty} \val_{\Gg}(v,\sigma,\tau) \text{ and } \val_{\Gg}(v,\tau) \rmdef \sup_{\sigma \in \Sigma^\infty} \val_\Gg(v,\sigma,\tau). \] We extend the valuation of vertices to a valuation of the whole game by defining $V$-dimensional vectors $\val_{\mathcal G}(\sigma) : v \mapsto \val_{\mathcal G}(v,\sigma)$ with the usual $V$-dimensional partial order $\sqsubseteq$, where $\val \sqsubseteq \val'$ if, and only if, $\val(v) \preceq \val'(v)$ holds for all $v \in V$. \end{definition} \begin{definition}[Positional Determinacy] We say that a strategy $\sigma \in \Sigma^\infty$ is memoryless or \emph{positional} if it only depends on the last vertex, i.e.\ for all $\pi, \pi' \in V^*$ and $v \in V_\mMAX$ we have that $\sigma(\pi v) = \sigma(\pi' v)$. Thus, a positional strategy can be viewed as a function $\sigma: V_\mMAX \to V$ such that for all $v\in V_\mMAX$ we have that $(v, \sigma(v)) \in E$. Positional strategies of \Pmin\ are defined in an analogous manner. We write $\Sigma$ and $T$ for the sets of positional strategies of Players Max and Min, respectively.
We say that a game is positionally determined if: \begin{itemize} \item $\val_{\mathcal G}(v,\sigma) = \min_{\tau \in T} \val_{\mathcal G}(v,\sigma,\tau)$ holds for all $\sigma \in \Sigma$, \item $\val_{\mathcal G}(v,\tau) = \max_{\sigma \in \Sigma} \val_{\mathcal G}(v,\sigma,\tau)$ holds for all $\tau \in T$, \item {\bf Existence of value}: for all $v \in V$, $\max_{\sigma \in \Sigma} \val_{\mathcal G}(v,\sigma) = \min_{\tau \in T} \val_{\mathcal G}(v,\tau)$ holds, and we use $\val_{\mathcal G}(v)$ to denote this value, and \item {\bf Existence of positional optimal strategies}: there is a pair $\tau_{\min},\sigma_{\max}$ of strategies such that, for all $v \in V$, $\val_{\mathcal G}(v) = \val_{\mathcal G}(v,\sigma_{\max}) = \val_{\mathcal G}(v,\tau_{\min})$ holds. Observe that for all $\sigma \in \Sigma$ and $\tau \in T$ we have that $\val_{\mathcal G}(\sigma_{\max}) \sqsupseteq \val_{\mathcal G}(\sigma)$ and $\val_{\mathcal G}(\tau_{\min}) \sqsubseteq \val_{\mathcal G}(\tau)$. \end{itemize} \end{definition} Observe (first and second items above) that positionally determined classes of games guarantee an optimal positional counter strategy for \Pmin\ to every strategy $\sigma \in \Sigma$ of \Pmax. We denote such a strategy by $\tau^c_\sigma$. Similarly, we denote by $\sigma^c_\tau$ an optimal positional counter strategy for \Pmax\ to a strategy $\tau \in T$ of \Pmin. While these counter strategies are not necessarily unique, we use the {\bf convention} in all proofs that $\tau^c_\sigma$ is always the same counter strategy for $\sigma \in \Sigma$, and $\sigma^c_\tau$ is always the same counter strategy for $\tau \in T$.
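Positional determinacy makes the optimal counter strategy computable, in principle, by brute force. The sketch below is our own exponential illustration, not the evaluation used in practice for parity, mean-payoff, or discounted games: it enumerates every positional \Pmin\ strategy, with the assumed helpers \texttt{value\_of} (evaluating a strategy pair) and \texttt{worse\_for\_max} (comparing values from Max's perspective) supplied externally.

```python
from itertools import product

# Exponential brute-force sketch of the optimal counter strategy tau^c_sigma:
# enumerate every positional Min strategy and keep one minimising the value.
def best_counter_strategy(min_vertices, edges, sigma, value_of, worse_for_max):
    options = {v: [w for (u, w) in edges if u == v] for v in min_vertices}
    best = None
    for picks in product(*(options[v] for v in min_vertices)):
        tau = dict(zip(min_vertices, picks))  # one positional Min strategy
        if best is None or worse_for_max(value_of(sigma, tau),
                                         value_of(sigma, best)):
            best = tau
    return best
```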
\begin{figure}[t] \begin{center} \scalebox{1}{ \begin{tikzpicture} \tikzstyle{eloc}=[fill=blue!10!white,draw,circle,minimum size=2em,inner sep=0em] \tikzstyle{oloc}=[fill=blue!10!white,draw,minimum size=2em,inner sep=0em] \tikzstyle{trans}=[-latex, rounded corners] \node[oloc] at (0,0) (1) {$1$}; \node[oloc] at (3,0) (4) {$4$}; \node[eloc] at (6,0) (3) {$3$}; \node[oloc] at (9,0) (0) {$0$}; \draw[trans] (4) -- (1); \draw[trans] (3) -- (0); \draw[trans,color=red,thick] (4) to [bend left=20] (3); \draw[trans,color=green!70!black,thick] (3) to [bend left=20] (4); \draw[trans,color=red,thick] (1) to [loop above] (1); \draw[trans,color=blue,thick] (3) to [loop above] (3); \draw[trans,color=red,thick] (0) to [loop above] (0); \end{tikzpicture} } \end{center} \caption{Parity game arena with four vertices and unique colours.} \vspace{-1em} \label{fig:example} \end{figure} \begin{example} Consider the parity game arena shown in Figure~\ref{fig:example}. We use circles for the vertices of Player Max and squares for those of Player Min, and we label each vertex with its colour. Notice that a positional strategy can be depicted simply by specifying an outgoing edge for each of a player's vertices. The positional strategy $\sigma$ of \PZero{} is depicted in {\color{blue} blue} and the positional strategy $\tau$ of \POne{} is depicted in {\color{red} red}. In the example, $\val(1,\sigma,\tau) = (1,\emptyset,0)$, $\val(4,\sigma,\tau) = (3,\{4\},1)$, $\val(3,\sigma,\tau) = (3,\emptyset,0)$, and $\val(0,\sigma,\tau) = (0,\emptyset,0)$. \end{example} \subsection{Classic Strategy Improvement Algorithm} As discussed in the introduction, classic strategy improvement algorithms work well for classes of games that are positionally determined. Moreover, the evaluation function should be such that one can easily identify the set $\prof(\sigma)$ of profitable updates and reach an optimum exactly where there are no profitable updates.
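The valuations in the example can be reproduced mechanically. The following sketch is our own illustration for parity games with unique colours: under positional strategies the play from any vertex is a ``lasso'' (a simple path into a cycle), from which the triple $(c, C, d)$ and the preference order defined above can be computed directly; the \texttt{reward} helper encodes $\prec_{\mathrm{parity}}$.

```python
# Follow the unique successor map until a vertex repeats: (prefix, cycle).
def lasso(start, successor):
    seen, path, v = {}, [], start
    while v not in seen:
        seen[v] = len(path)
        path.append(v)
        v = successor[v]
    i = seen[v]                      # first vertex of the cycle
    return path[:i], path[i:]

# The (c, C, d) valuation of a lasso play (vertices named by their colours):
# c dominant colour, d its first occurrence, C the colours above c before d.
def valuation(prefix, cycle):
    c = max(cycle)                   # unique colours: limsup = max on the cycle
    seq = prefix + cycle
    d = seq.index(c)
    C = frozenset(x for x in seq[:d] if x > c)
    return (c, C, d)

def prefer(v1, v2):
    """True iff valuation v1 is strictly better for Max than v2."""
    (c1, C1, d1), (c2, C2, d2) = v1, v2
    reward = lambda c: c if c % 2 == 0 else -c  # higher even good, higher odd bad
    if reward(c1) != reward(c2):
        return reward(c1) > reward(c2)
    if C1 != C2:
        h = max(C1 ^ C2)             # highest colour in the symmetric difference
        return (h % 2 == 0) == (h in C1)
    return d1 < d2 if c1 % 2 == 0 else d1 > d2
```

With the strategies of Figure~\ref{fig:example} (successor map $1 \mapsto 1$, $4 \mapsto 3$, $3 \mapsto 3$, $0 \mapsto 0$), \texttt{lasso} from vertex $4$ yields the play $4,3,3,\ldots$ and \texttt{valuation} returns $(3,\{4\},1)$, as in the example.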
We formalise these prerequisites for a class of games to be good for strategy improvement algorithms in this section. \begin{definition}[Profitable Updates] For a strategy $\sigma \in \Sigma$, an edge $(v, v') \in E$ with $v \in V_\mMAX$ is a profitable update if $\sigma' \in \Sigma$ with $\sigma': v \mapsto v'$ and $\sigma': v'' \mapsto \sigma(v'')$ for all $v'' \neq v$ has a strictly greater evaluation than $\sigma$, i.e., $\val_{\mathcal G}(\sigma') \sqsupset \val_{\mathcal G}(\sigma)$. We write $\prof(\sigma)$ for the set of profitable updates. \end{definition} \begin{example} In our example from Figure \ref{fig:example}, $\tau = \tau_\sigma^c$ is the optimal counter strategy to $\sigma$, such that $\val(\sigma) = \val(\sigma,\tau)$. We have $\prof(\sigma) = \{(3,4),(3,0)\}$, because both the successor to the left and the successor to the right have a better valuation, $(3,\{4\},1)$ and $(0,\emptyset,0)$, respectively, than the successor on the selected self-loop, $(3,\emptyset,0)$. \end{example} For a strategy $\sigma$ and a functional (right-unique) subset $P \subseteq \prof(\sigma)$ we define the strategy $\sigma^P$ with $\sigma^P: v \mapsto v'$ if $(v,v') \in P$ and $\sigma^P: v \mapsto \sigma(v)$ if there is no $v' \in V$ with $(v,v') \in P$. For a class of graph games, profitable updates are \emph{combinable} if, for all strategies $\sigma$ and all functional (right-unique) subsets $P \subseteq \prof(\sigma)$, we have that $\val_{\mathcal G}(\sigma^P) \sqsupset \val_{\mathcal G}(\sigma)$. Moreover, we say that a class of graph games is \emph{maximum identifying} if $\prof(\sigma) = \emptyset \Leftrightarrow \val_{\mathcal G}(\sigma) = \val_{\mathcal G}$. Algorithm~\ref{alg:classic-sia-conc} provides a generic template for strategy improvement algorithms. \begin{algorithm}[h] \caption{\label{alg:classic-sia-conc} Classic strategy improvement algorithm} Let $\sigma_0$ be an arbitrary positional strategy.
{\bf Set} $i :=0$.\\ If $\prof(\sigma_i) = \emptyset$ \Return $\sigma_i$\\ $\sigma_{i+1} := {\sigma_i}^P$ for some functional subset $P\subseteq \prof(\sigma_i)$. {\bf Set} $i := i + 1$. {\bf go to} 2.\\ \end{algorithm} We say that a class of games is \emph{good for $\max$ strategy improvement} if it is positionally determined and has combinable and maximum identifying improvements. \begin{theorem} \label{theorem:classic} If a class of games is good for $\max$ strategy improvement, then Algorithm~\ref{alg:classic-sia-conc} terminates with an optimal strategy $\sigma$ ($\val_{\mathcal G}(\sigma) = \val_{\mathcal G}$) for \Pmax. \end{theorem} As a remark, we can drop the combinability requirement while maintaining correctness when we restrict the updates to a single position, that is, when we require $P$ to be a singleton in every update. We call such strategy improvement algorithms \emph{slow}, and a class of games \emph{good for slow $\max$ strategy improvement} if it is maximum identifying and positionally determined. \begin{theorem} \label{theorem:slow} If a class of games is good for slow $\max$ strategy improvement, then all slow strategy improvement algorithms terminate with an optimal strategy $\sigma$ ($\val_{\mathcal G}(\sigma) = \val_{\mathcal G}$) for \Pmax. \end{theorem} The proof of both theorems is the same. \begin{proof} The strategy improvement algorithm produces a sequence $\sigma_0, \sigma_1, \sigma_2, \ldots$ of positional strategies with increasing quality $\val_{\mathcal G}(\sigma_0) \sqsubset \val_{\mathcal G}(\sigma_1) \sqsubset \val_{\mathcal G}(\sigma_2) \sqsubset \ldots$. As the set of positional strategies is finite, this chain must be finite. As the game class is maximum identifying, the stopping condition provides optimality. \end{proof} Various concepts and results extend naturally to analogous claims about \Pmin.
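As an illustration of Algorithm~\ref{alg:classic-sia-conc}, the sketch below implements the slow variant, computing $\prof(\sigma)$ naively by evaluating every single-edge deviation. It is our own sketch under stated assumptions: \texttt{value\_vector} (evaluating a positional strategy against the optimal counter strategy) and the strict vector order \texttt{above} ($\sqsupset$) are assumed helpers for the concrete game class.

```python
# Compute prof(sigma) naively: try each single-edge deviation at a Max vertex
# and keep those that strictly improve the value vector.
def profitable_updates(max_vertices, edges, sigma, value_vector, above):
    base = value_vector(sigma)
    return {(v, w) for v in max_vertices for (u, w) in edges
            if u == v and w != sigma[v]
            and above(value_vector({**sigma, v: w}), base)}

# Slow classic strategy improvement: apply one profitable switch per round;
# termination follows because each round strictly improves the value vector.
def slow_strategy_improvement(max_vertices, edges, sigma, value_vector, above):
    while True:
        prof = profitable_updates(max_vertices, edges, sigma, value_vector, above)
        if not prof:                 # maximum identifying: sigma is optimal
            return sigma
        v, w = min(prof)             # pick a single switch (any rule terminates)
        sigma = {**sigma, v: w}
```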
We call a class of games \emph{good for strategy improvement} if it is good for $\max$ strategy improvement and good for $\min$ strategy improvement. Parity games, mean payoff games, and discounted payoff games are all good for strategy improvement (for both players). Moreover, the calculation of $\prof(\sigma)$ is cheap in all of these instances, which makes them well suited for strategy improvement techniques. \section{Symmetric Strategy Improvement Algorithm} \label{sec:algo} We first extend the termination argument for classic strategy improvement techniques (Theorems \ref{theorem:classic} and \ref{theorem:slow}) to symmetric strategy improvement, given as Algorithm~\ref{alg:classic-ssia-conc}. \begin{algorithm}[h] \caption{\label{alg:classic-ssia-conc} Symmetric strategy improvement algorithm} Let $\sigma_0$ and $\tau_0$ be arbitrary positional strategies. {\bf set} $i :=0$.\\ Determine $\sigma_{\tau_i}^c$ and $\tau_{\sigma_i}^c$\\ $\sigma_{i+1} := {\sigma_i}^P$ for $P\subseteq \prof(\sigma_i) \cap \sigma^c_{\tau_i}$.\\ $\tau_{i+1} := {\tau_i}^P$ for $P\subseteq \prof(\tau_i) \cap \tau^c_{\sigma_i}$. \\ {\bf if} $\sigma_{i+1} = \sigma_i$ and $\tau_{i+1} = \tau_i$ {\bf return} $(\sigma_i, \tau_i)$.\\ {\bf set} $i := i + 1$. {\bf go to} 2.\\ \end{algorithm} \subsection{Correctness} \begin{lemma} The symmetric strategy improvement algorithm terminates for all classes of games that are good for strategy improvement. \end{lemma} \begin{proof} We first observe that the algorithm yields a sequence $\sigma_0, \sigma_1, \sigma_2, \ldots$ of \Pmax\ strategies for $\mathcal G$ with improving values $\val_{\mathcal G}(\sigma_0) \sqsubseteq \val_{\mathcal G}(\sigma_1) \sqsubseteq \val_{\mathcal G}(\sigma_2) \sqsubseteq \ldots$, where equality, $\val_{\mathcal G}(\sigma_i) \equiv \val_{\mathcal G}(\sigma_{i+1})$, implies $\sigma_i = \sigma_{i+1}$.
Similarly, for the sequence $\tau_0, \tau_1, \tau_2, \ldots$ of \Pmin\ strategies for $\mathcal G$, the values $\val_{\mathcal G}(\tau_0) \sqsupseteq \val_{\mathcal G}(\tau_1) \sqsupseteq \val_{\mathcal G}(\tau_2) \sqsupseteq \ldots$ improve (for \Pmin), such that equality, $\val_{\mathcal G}(\tau_i) \equiv \val_{\mathcal G}(\tau_{i+1})$, implies $\tau_i = \tau_{i+1}$. As the number of values that can be taken is finite, eventually both values stabilise and the algorithm terminates. \end{proof} What remains to be shown is that the symmetric strategy improvement algorithm cannot terminate with an incorrect result. In order to show this, we first prove the weaker claim that, when the algorithm terminates with the strategy pair $\sigma,\tau$, these strategies are optimal in the subgame $\mathcal G(\sigma,\tau,\sigma^c_\tau,\tau^c_\sigma) = (V_{\max},V_{\min},E',\val)$ of $\mathcal G$ whose edges $E' = \big\{ \big(v,\sigma(v)\big) \mid v \in V_{\max}\big\} \cup \big\{ \big(v,\tau(v)\big) \mid v \in V_{\min}\big\} \cup \big\{ \big(v,\sigma^c_\tau(v)\big) \mid v \in V_{\max}\big\} \cup \big\{ \big(v,\tau^c_\sigma(v)\big) \mid v \in V_{\min}\big\}$ are those defined by the four positional strategies. \begin{lemma} \label{lemma:local} When the symmetric strategy improvement algorithm terminates with the strategy pair $\sigma,\tau$ on games that are good for strategy improvement, then $\sigma$ and $\tau$ are the optimal strategies for Players Max and Min, respectively, in $\mathcal G(\sigma,\tau,\sigma^c_\tau,\tau^c_\sigma)$. \end{lemma} \begin{proof} For $\mathcal G(\sigma,\tau,\sigma^c_\tau,\tau^c_\sigma)$, neither update step is restricted: the changes \Pmax\ can potentially select his updates from are exactly the edges defined by $\sigma^c_\tau$ at the vertices $v\in V_{\max}$ where $\sigma$ and $\sigma^c_\tau$ differ ($\sigma(v) \neq \sigma^c_\tau(v)$). Consequently, $\prof(\sigma) = \prof(\sigma)\cap \sigma^c_\tau$.
Thus, upon termination, $\sigma$ is a fixed point of the update step of classic strategy improvement in $\mathcal G(\sigma,\tau,\sigma^c_\tau,\tau^c_\sigma)$ when starting in $\sigma$. As the game is maximum identifying, $\sigma$ is the optimal \Pmax\ strategy for $\mathcal G(\sigma,\tau,\sigma^c_\tau,\tau^c_\sigma)$. Likewise, \Pmin\ can potentially select his updates from the edges defined by $\tau^c_\sigma$ at the vertices $v\in V_{\min}$ where $\tau$ and $\tau^c_\sigma$ differ, and we get $\prof(\tau) = \prof(\tau)\cap \tau^c_\sigma$ with the same argument. As the game is minimum identifying, $\tau$ is the optimal \Pmin\ strategy for $\mathcal G(\sigma,\tau,\sigma^c_\tau,\tau^c_\sigma)$. \end{proof} We are now in a position to expand the optimality in the subgame $\mathcal G(\sigma,\tau,\sigma^c_\tau,\tau^c_\sigma)$ from Lemma \ref{lemma:local} to global optimality of these strategies in $\mathcal G$. \begin{lemma} \label{lemma:evaluation} When the symmetric strategy improvement algorithm terminates with the strategy pair $\sigma,\tau$ on a game $\mathcal G$ that is good for strategy improvement, then $\sigma$ is an optimal \Pmax\ strategy and $\tau$ an optimal \Pmin\ strategy. \end{lemma} \begin{proof} Let $\sigma,\tau$ be the strategies returned by the symmetric strategy improvement algorithm for a game $\mathcal G$, and let $\mathcal L = \mathcal G(\sigma,\tau,\sigma^c_\tau,\tau^c_\sigma)$ denote the local game from Lemma \ref{lemma:local} defined by them. Lemma \ref{lemma:local} has established optimality in $\mathcal L$. Observing that the optimal responses in $\mathcal G$ to $\sigma$ and $\tau$, namely $\tau^c_\sigma$ and $\sigma^c_\tau$, respectively, are available in $\mathcal L$, we see that they are also optimal in $\mathcal L$.
Thus, we have \begin{itemize} \item $\val_{\mathcal L}(\sigma) \equiv \val_{\mathcal L}(\sigma,\tau^c_\sigma) \equiv \val_{\mathcal G}(\sigma,\tau^c_\sigma)$ and \item $\val_{\mathcal L}(\tau) \equiv \val_{\mathcal L}(\sigma^c_\tau,\tau) \equiv \val_{\mathcal G}(\sigma^c_\tau,\tau)$. \end{itemize} Optimality in $\mathcal L$ then provides $\val_{\mathcal L}(\sigma) = \val_{\mathcal L}(\tau)$. Putting these three equations together, we get $\val_{\mathcal G}(\sigma,\tau^c_\sigma) \equiv \val_{\mathcal G}(\sigma^c_\tau,\tau)$. Taking into account that $\tau^c_\sigma$ and $\sigma^c_\tau$ are the optimal responses to $\sigma$ and $\tau$, respectively, in $\mathcal G$, we expand this to $\val_{\mathcal G} \sqsupseteq \val_{\mathcal G}(\sigma) \equiv \val_{\mathcal G}(\sigma,\tau^c_\sigma) \equiv \val_{\mathcal G}(\sigma^c_\tau,\tau) \equiv \val_{\mathcal G}(\tau) \sqsupseteq \val_{\mathcal G}$ and get $\val_{\mathcal G} \equiv \val_{\mathcal G}(\sigma) \equiv \val_{\mathcal G}(\tau) \equiv \val_{\mathcal G}(\sigma,\tau)$. \end{proof} The lemmas in this subsection yield the following results. \begin{theorem} \label{theorem:correct} The symmetric strategy improvement algorithm is correct for games that are good for strategy improvement. \end{theorem} \begin{theorem} \label{theorem:correctAndSlow} The slow symmetric strategy improvement algorithm is correct for positionally determined games that are maximum and minimum identifying. \end{theorem} We implemented our symmetric strategy improvement algorithm based on the progress measures introduced by V\"oge and Jurdzi\'nski \cite{Voge+Jurdzinski/00/simprove}. The first step is to determine the optimal counter strategies to $\sigma$ and $\tau$ and the corresponding valuations. \begin{example} In our running example from Figure \ref{fig:example}, we discussed in the previous section that $\tau$ is the optimal counter strategy $\tau^c_\sigma$ and that $\prof(\sigma) = \{(3,4),(3,0)\}$.
In the optimal counter strategy $\sigma^c_\tau$ to $\tau$, \PZero{} moves from $3$ to $4$, and we get $\val(1,\tau) = (1,\emptyset,0)$, $\val(4,\tau) = (4,\emptyset,0)$, $\val(3,\tau) = (4,\emptyset,1)$, and $\val(0,\tau) = (0,\emptyset,0)$. Consequently, $\prof(\tau) = \{(4,1)\}$. For the update of $\sigma$, we select the intersection of $\prof(\sigma)$ and $\sigma^c_\tau$. In our example, this is the edge from $3$ to $4$ (depicted in {\color{green!70!black} green}). To update $\tau$, we select the intersection of $\prof(\tau)$ and $\tau^c_\sigma$. In our example, this intersection is empty, as the current strategy $\tau$ agrees with $\tau^c_\sigma$. \end{example} \subsection{A minor improvement on stopping criteria} In this subsection, we look at a minor albeit natural improvement over Algorithm~\ref{alg:classic-ssia-conc}, shown in Algorithm~\ref{alg:classic-ssia-conc-alt}. There, the stabilisation of both strategies was used as the condition to terminate the algorithm. We could alternatively check whether \emph{either} player has reached an optimum. Once this is the case, we can return the optimal strategy and an optimal counter strategy to it. \begin{algorithm}[h] \caption{\label{alg:classic-ssia-conc-alt} Symmetric strategy improvement algorithm (improved stopping criteria)} Let $\sigma_0$ and $\tau_0$ be arbitrary positional strategies. {\bf set} $i :=0$.\\ Determine $\sigma_{\tau_i}^c$ and $\tau_{\sigma_i}^c$\\ {\bf if} $\prof(\sigma_i) = \emptyset$ {\bf return} $(\sigma_i, \tau_{\sigma_i}^c)$;\\ {\bf if} $\prof(\tau_i) = \emptyset$ {\bf return} $(\sigma_{\tau_i}^c, \tau_i)$;\\ $\sigma_{i+1} := {\sigma_i}^P$ for $P\subseteq \prof(\sigma_i) \cap \sigma^c_{\tau_i}$.\\ $\tau_{i+1} := {\tau_i}^P$ for $P\subseteq \prof(\tau_i) \cap \tau^c_{\sigma_i}$. \\ {\bf set} $i := i + 1$.
{\bf go to} 2.\\ \end{algorithm} The correctness of this stopping condition is provided by Theorems \ref{theorem:classic} and \ref{theorem:slow}, and checking it is usually cheap: it suffices to check whether $\prof(\sigma)$ or $\prof(\tau)$ is empty. This provides us with a small optimisation, as we can stop as soon as one of the strategies involved is optimal. However, this small optimisation can only provide a small advantage. \begin{theorem} \label{theorem:optlin} The difference in the number of iterations of Algorithm~\ref{alg:classic-ssia-conc} and Algorithm~\ref{alg:classic-ssia-conc-alt} is at most linear in the number of states of $\mathcal G$. \end{theorem} \begin{proof} Let $\sigma$ be an optimal strategy for $\mathcal G$. When starting with a strategy pair $\sigma,\tau_0$ for some strategy $\tau_0$ of \Pmin, we first construct the optimal counter strategies $\tau^c_\sigma$ and $\sigma^c_{\tau_0}$. As $\sigma$ is optimal and $\mathcal G$ is maximum identifying, $\prof(\sigma)=\emptyset$, and strategy improvement will not change it. In particular, our algorithm will always maintain $\sigma_{i+1} = \sigma_i = \sigma$, irrespective of the optimal counter strategy $\sigma_{\tau_i}^c$ to a strategy $\tau_i$ of \Pmin. This also implies that $\tau^c_\sigma$ will not change. It is now easy to see that, unless $\tau_{i+1} = \tau_i$, the strategy $\tau_{i+1}$ differs from $\tau_i$ in at least one decision, and that it adheres to $\tau^c_\sigma$ at every position where it differs ($\forall v \in V_{\min}.\ \tau_i(v) \neq \tau_{i+1}(v) \Rightarrow \tau_{i+1}(v) = \tau^c_\sigma(v)$). Such an update can happen at most once for each \Pmin\ position. The argument for starting with an optimal strategy $\tau$ of \Pmin\ is similar. \end{proof} \section{Friedmann's Traps} \label{sec:friedmann} In a seminal work on the complexity of strategy improvement \cite{Friedmann/11/lower}, Friedmann uses a class of parity games called \emph{1-sink parity games}.
These games contain a \emph{sink} node with the weakest odd parity in a max-parity game. This sink node is reachable from every other node in the game, and under optimal play the game eventually reaches it and is won by \POne{}. Figure \ref{fig:lbgame} shows a lower bound game from \cite{Friedmann/11/lower}. In order to obtain an exponential lower bound for the classic strategy improvement algorithm with the locally optimising policy, these sink games implement a binary counter, realised by a gadget called a \emph{cycle gate}, which consists of two components. With $n$ cycle gates, we have a representation of the $n$ bits of an $n$-bit counter. The first component of a cycle gate is called a \emph{simple cycle}. In Figure \ref{fig:lbgame}, the three smaller boxes shown in yellow are the simple cycles of the game. These simple cycles encode the bits of the counter. The second component of the cycle gate gadget is called a \emph{deceleration lane}. This structure serves to ensure that any profitable updates to strategies are postponed by cycling through seemingly more profitable improvements, in the order $r, s, a_1, a_2, \ldots$, before eventually turning to $e_i$. This structure is shown as a shaded blue rectangle in Figure \ref{fig:lbgame}.
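Our reading of the construction is that the $n$ cycle gates jointly simulate the increments of an $n$-bit binary counter, with bit $i$ set exactly when simple cycle $i$ is closed; the locally optimising rule is thereby driven through all $2^n$ counter states. The following sketch illustrates only the simulated increment (the counting behaviour, not the game dynamics themselves):

```python
# One increment of the simulated binary counter, least significant bit first:
# opening closed cycles propagates the carry, closing the first open cycle
# sets the corresponding bit.
def increment(closed):
    closed = list(closed)
    for i in range(len(closed)):
        if closed[i]:
            closed[i] = False        # carry: the bit is cleared (cycle opens)
        else:
            closed[i] = True         # the first open cycle closes (bit is set)
            break
    return closed
```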
\begin{figure*} \begin{center} \scalebox{0.636}{ \begin{tikzpicture} \tikzstyle{dummy}=[draw,dashed,circle,minimum size=2em,inner sep=0em] \tikzstyle{eloc}=[draw,circle,minimum size=3.5em,inner sep=0em] \tikzstyle{oloc}=[draw,minimum size=3em,inner sep=0em] \tikzstyle{trans}=[-latex, rounded corners] \node[eloc] at (0,0) (c) {$c:28$}; \node[eloc] at (0,2) (t1) {$t_1:15$}; \node[eloc] at (0,4) (t2) {$t_2:17$}; \node[eloc] at (0,6) (t3) {$t_3:19$}; \node[eloc] at (0,8) (t4) {$t_4:21$}; \node[eloc] at (0,10) (t5) {$t_5:23$}; \node[eloc] at (0,12) (t6) {$t_6:25$}; \node[oloc] at (2,2) (a1) {$a_1:16$}; \node[oloc] at (2,4) (a2) {$a_2:18$}; \node[oloc] at (2,6) (a3) {$a_3:20$}; \node[oloc] at (2,8) (a4) {$a_4:22$}; \node[oloc] at (2,10) (a5) {$a_5:24$}; \node[oloc] at (2,12) (a6) {$a_6:26$}; \fill[blue!20!white,draw, fill opacity=0.2] (-2, -1) rectangle (3, 14); \node[eloc] at (6,3) (d1) {$d_1:3$}; \node[eloc] at (6,7) (d2) {$d_2:7$}; \node[eloc] at (6,11) (d3) {$d_3:11$}; \node[oloc] at (8,3) (e1) {$e_1:4$}; \node[oloc] at (8,7) (e2) {$e_2:8$}; \node[oloc] at (8,11) (e3) {$e_3:12$}; \fill[yellow!40!white,draw, fill opacity=0.2] (5,2.2) rectangle (9, 3.8); \fill[yellow!40!white,draw, fill opacity=0.2] (5,6.2) rectangle (9, 7.8); \fill[yellow!40!white,draw, fill opacity=0.2] (5,10.2) rectangle (9, 11.8); \node[oloc] at (9,1) (f1) {$f_1:35$}; \node[oloc] at (9,5) (f2) {$f_2:39$}; \node[oloc] at (9,9) (f3) {$f_3:43$}; \node[oloc] at (13,3) (h1) {$h_1:36$}; \node[oloc] at (13,7) (h2) {$h_2:40$}; \node[oloc] at (13,11) (h3) {$h_3:44$}; \node[eloc] at (13,1) (g1) {$g_1:6$}; \node[eloc] at (13,5) (g2) {$g_2:10$}; \node[eloc] at (13,9) (g3) {$g_3:14$}; \node[eloc] at (16,3) (k1) {$k_1:33$}; \node[eloc] at (15.2,7) (k2) {$k_2:37$}; \node[eloc] at (16,11) (k3) {$k_3:41$}; \node[eloc, fill=black!20!white] at (9,-1) (r) {$r:32$}; \node[oloc] at (17,7) (x) {$x:1$}; \node[eloc,fill=black!20!white] at (11.5,13) (s) {$s:30$}; \draw[trans] (a1) -- (t1); \draw[trans] (a2) -- (t2); 
\draw[trans] (a3) -- (t3); \draw[trans] (a4) -- (t4); \draw[trans] (a5) -- (t5); \draw[trans] (a6) -- (t6); \draw[trans] (d1) -- (a1); \draw[trans] (d1) -- (a2); \draw[trans] (d2) -- (a1); \draw[trans] (d2) -- (a2); \draw[trans] (d2) -- (a3); \draw[trans] (d2) -- (a4); \draw[trans] (d3) -- (a1); \draw[trans] (d3) -- (a2); \draw[trans] (d3) -- (a3); \draw[trans] (d3) -- (a4); \draw[trans] (d3) -- (a5); \draw[trans] (d3) -- (a6); \draw[trans] (t6) -- (t5); \draw[trans] (t5) -- (t4); \draw[trans] (t4) -- (t3); \draw[trans] (t3) -- (t2); \draw[trans] (t2) -- (t1); \draw[trans] (t1) -- (c); \draw[trans] (t6) -- +(-1, +0.5) node[dummy,pos=1,left] {$r$}; \draw[trans] (t6) -- +(-1, -0.5) node[dummy,pos=1,left] {$s$}; \draw[trans] (t5) -- +(-1, +0.5) node[dummy,pos=1,left] {$r$}; \draw[trans] (t5) -- +(-1, -0.5) node[dummy,pos=1,left] {$s$}; \draw[trans] (t4) -- +(-1, +0.5) node[dummy,pos=1,left] {$r$}; \draw[trans] (t4) -- +(-1, -0.5) node[dummy,pos=1,left] {$s$}; \draw[trans] (t3) -- +(-1, +0.5) node[dummy,pos=1,left] {$r$}; \draw[trans] (t3) -- +(-1, -0.5) node[dummy,pos=1,left] {$s$}; \draw[trans] (t2) -- +(-1, +0.5) node[dummy,pos=1,left] {$r$}; \draw[trans] (t2) -- +(-1, -0.5) node[dummy,pos=1,left] {$s$}; \draw[trans] (t1) -- +(-1, +0.5) node[dummy,pos=1,left] {$r$}; \draw[trans] (t1) -- +(-1, -0.5) node[dummy,pos=1,left] {$s$}; \draw[trans] (c) -- +(-1, +0.5) node[dummy,pos=1,left] {$r$}; \draw[trans] (c) -- +(-1, -0.5) node[dummy,pos=1,left] {$s$}; \draw[trans] (d1) -- +(-0.5, +1) node[dummy,pos=1,above] {$r$}; \draw[trans] (d1) -- +(0.5, +1) node[dummy,pos=1,above] {$s$}; \draw[trans] (d2) -- +(-0.5, +1) node[dummy,pos=1,above] {$r$}; \draw[trans] (d2) -- +(0.5, +1) node[dummy,pos=1,above] {$s$}; \draw[trans] (d3) -- +(-0.5, +1) node[dummy,pos=1,above] {$r$}; \draw[trans] (d3) -- +(0.5, +1) node[dummy,pos=1,above] {$s$}; \draw[trans] (d3) to [bend left=20] (e3); \draw[trans] (e3) to [bend left=20] (d3); \draw[trans] (d2) to [bend left=20] (e2); \draw[trans] (e2) 
to [bend left=20] (d2); \draw[trans] (d1) to [bend left=20] (e1); \draw[trans] (e1) to [bend left=20] (d1); \draw[trans] (g3) -- (f3); \draw[trans] (f3) -- (e3); \draw[trans] (e3) -- (h3); \draw[trans] (h3) -- (k3); \draw[trans] (g3) -- (k3); \draw[trans] (k3) -- (x); \draw[trans] (g2) -- (f2); \draw[trans] (f2) -- (e2); \draw[trans] (e2) -- (h2); \draw[trans] (h2) -- (k2); \draw[trans] (g2) -- (k2); \draw[trans] (k2) -- (x); \draw[trans] (k2) -- (g3); \draw[trans] (g1) -- (f1); \draw[trans] (f1) -- (e1); \draw[trans] (e1) -- (h1); \draw[trans] (h1) -- (k1); \draw[trans] (g1) -- (k1); \draw[trans] (k1) -- (x); \draw[trans] (k1) -- (g2); \draw[trans] (k1) -- (g3); \draw[trans] (s) -- (f1); \draw[trans] (s) -- (f2); \draw[trans] (s) -- (f3); \draw[trans] (s) -- +(5, 0) -| (x); \draw[trans] (r) -- (g1); \draw[trans] (r) -- (g2); \draw[trans] (r) -- (g3); \draw[trans] (r) -- +(5, 0) -| (x); \draw[trans] (x) to [loop right] (x); \end{tikzpicture} } \end{center} \caption{Friedmann's lower bound game for the locally optimal strategy improvement algorithm} \label{fig:lbgame} \end{figure*} A simple cycle consists of exactly one \PZero{} controlled node $d$ with a weak odd colour $k$ and one \POne{} controlled node $e$ with the even colour $k+1$. The \PZero{} node is also connected to some set of external nodes in the game and the \POne{} node is connected to an output node with a high even colour on a path to the sink node. Given a strategy $\sigma$, we say that a simple cycle is closed if we have an edge $\sigma(d) =e$. Otherwise, we say that the simple cycle is open. Opening and closing cycles correspond to unsetting and setting bits. We then say a cycle gate is \emph{open} or \emph{closed} when its corresponding simple cycle is open or closed respectively. 
In these lower bound games, the simple cycles are connected to the deceleration lane in such a way that lower valued cycles have fewer edges entering the deceleration lane, ensuring that lower open cycles close before higher open cycles. This allows the less significant bits to be set and reset before the more significant bits. The deceleration lane hides sensible improvements, thus making the players take more iterations before taking the best improvement. It is then shown in \cite{Friedmann/11/lower} that incrementing the bit state always requires more than one strategy iteration, organised in four different phases. The gadget thus forces the strategy improvement algorithm to take an exponential number of improvement steps to count through all settings of the $n$ bits. For a detailed exposition of the gadget and the exponential lower bound construction, we refer the reader to \cite{Friedmann/11/lower}. \subsection{Escaping the traps with symmetric strategy improvement} We discuss the effect of symmetric strategy improvement on Friedmann's traps, with a focus on the simple cycles. Simple cycles are the central component of the cycle gates and the heart of the lower bound proof. As described above, an $n$-bit counter is represented by $n$ cycle gates, each cycle gate embedding a smaller simple cycle. These simple cycles are reused exponentially often to represent the $n$ bits. Both players have the choice to open or close the simple cycles. The optimal strategy of both players in the simple cycles of Figure \ref{fig:lbgame} is to turn right. (For \PZero{}, one could say that he wants to leave the cycle, and for \POne{}, one could say that she wants to stay in it.) When the players agree to stay in the cycle, \PZero{} wins the parity game. In fact, these are the only places where \PZero{} can win positionally in this parity game.
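The counting behaviour described above can be illustrated with a small sketch. This is a toy abstraction of ours, not Friedmann's actual construction: closing the lowest open simple cycle corresponds to setting the lowest unset bit, which in turn reopens (resets) all less significant cycles.

```python
def increment(bits):
    """Increment a little-endian bit list, mimicking the cycle-gate counter:
    the lowest open (0) simple cycle closes, and all lower cycles reopen."""
    i = bits.index(0)          # lowest open simple cycle
    bits[i] = 1                # close it (set the bit)
    for j in range(i):         # all less significant cycles reopen
        bits[j] = 0
    return bits

# An n-bit counter forces 2**n - 1 increments before all cycles are closed.
n = 4
bits = [0] * n
steps = 0
while not all(bits):
    increment(bits)
    steps += 1
print(steps)  # prints 15 (= 2**n - 1)
```

Each improvement phase of the trapped algorithm realises one such increment, which is why the number of iterations grows exponentially in the number of cycle gates.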
When running the symmetric strategy improvement algorithm for \PZero{}, the optimal counter strategy by \POne{} is to move to the right in simple cycles where \PZero{} is moving to the right, and to move left in all other simple cycles. As mentioned before, Friedmann \cite{Friedmann/11/lower} showed that, when looking at an abstraction of the \PZero{} strategy that only distinguishes the decisions of turning right or not turning right in the simple cycles, the simple cycles essentially behave like a binary counter that, with some delay (caused by the deceleration lane), will `count up'. More precisely, one step after the $i^{th}$ bit has been activated, all lower bits are reset. We now discuss how symmetric strategy improvement can beat this mechanism by taking the view of both players into account. For this, we consider a starting configuration where \POne{} moves to the right in the $j$ most significant simple cycle positions, where $j$ can be $0$. Note that, when \POne{} moves right in all of these positions, she has found her optimal strategy, and we can invoke Theorem \ref{theorem:optlin} to show that the algorithm terminates in a linear number of steps---or simply stop when using the alternative stopping condition. The first observation is that changing the decision to moving left will not lead to an improvement, as it produces a winning cycle of a quality (leading even colour) higher than the quality of any cycle available for \PZero{} under the current strategy of \POne{}. Let us now consider the less significant position $j+1$. First, we observe that moving to the right is a superior strategy. This can easily be seen: moving to the left produces a cycle with a dominating even colour and thus turns out to be winning for \PZero{}. Moving to the right in position $j+1$ and (by our assumption) all more significant positions removes this cycle and implies that the leading colour from this position is 1. This is clearly better for \POne{}.
If \POne{} uses a strategy where $j+1$ is the most significant position where she decides to move to the left, we have the following case distinction for \PZero{}'s strategy in this simple cycle: \begin{enumerate} \item \PZero{} moves to the right in this simple cycle. Then moving to the right is also the optimal counter strategy for \POne{}, and her strategy will be updated accordingly. \item \PZero{} does not move right in this simple cycle with his current strategy $\sigma$. Moving right in this simple cycle is among $\prof(\sigma)$, as one even colour is added to the set in the quality measure in the local comparison. It is also the choice for the optimal counter strategy $\sigma^c_\tau$ to the current strategy $\tau$ of \POne{}, as this is the only way for \PZero{} to produce a valuation with the dominating even colour of this simple cycle, while no valuation with a higher even colour is possible. \end{enumerate} Taking these two cases into consideration, \POne{} will move to the right in the $j$ most significant positions after $2j$ improvement steps. When \PZero{} has found his optimal strategy, we can invoke Theorem \ref{theorem:optlin} to show that the algorithm terminates in a linear number of steps. There are similar arguments for all kinds of traps that Friedmann has developed for strategy improvement algorithms. We have not formalised these arguments on other instances, but provide the number of iterations needed by our symmetric strategy improvement algorithm for all of them in the next section. Note that the way in which Friedmann traps asymmetric strategy improvement has proven to be quite resistant to changes in the improvement policy (snare \cite{Fearnley/10/snare}, random facet \cite{Ludwig/95/random,BjorklundVorobyov/07/subexp}, globally optimal \cite{Schewe/08/improvement}, etc.). From the perspective of the traps, the different policies merely aim at a minor point in the mechanism of the traps, and this minor point is adjusted.
The central mechanism, however, is not affected. All of these examples have some variant of simple cycles at the heart of the counter and a deceleration lane to orchestrate the timely counting. Symmetric strategy improvement aims at the mechanism of the traps themselves. It seems that examples that trap symmetric strategy improvement algorithms need to do more than just trapping both players (which could be done by copying the trap with inverse roles); they need to trap them simultaneously. One cannot expect to find a proof that such traps do not exist, as this would imply a proof that symmetric strategy improvement solves parity (or, depending on the proof, mean or discounted payoff) games in polynomial time. But it seems that such traps would need a different structure. A further difference to asymmetric strategy improvement is that the deceleration lane ceases to work. Taking into account that finding traps for asymmetric strategy improvement took decades and was very insightful, this looks like an interesting challenge for future research. \section{Experimental Results} \label{sec:expt} We have implemented the symmetric strategy improvement algorithm for parity games and compared it with the standard strategy improvement algorithm with the popular locally optimising and other switching rules. To generate various examples, we used the tools \texttt{steadygame} and \texttt{stratimprgen} that come as part of the parity game solver collection \textsc{PGSolver}~\cite{LF09}. We have compared the performance of our algorithm on parity games with 100 positions (see appendix) and found that the locally optimising policy outperforms other switching rules. We therefore compare our symmetric strategy improvement algorithm with the locally optimising strategy improvement below.
Since every iteration of both algorithms is rather similar---one iteration of our symmetric strategy improvement algorithm essentially runs two copies of an iteration of a classical strategy improvement algorithm---and can be performed in polynomial time, the key data to compare these algorithms is the number of iterations taken by both algorithms. \begin{figure}[h] \scalebox{0.8}{ \begin{tikzpicture} \begin{axis}[ title={A}, xlabel={example number}, ylabel={number of iterations}, legend pos=north west, legend style={fill=none} ] \addplot[ color=cyan, mark=*, ] coordinates { (0,6)(1,8)(2,8)(3,7)(4,6)(5,7)(6,8)(7,10)(8,8)(9,8) (10,9)(11,8)(12,9)(13,7)(14,10)(15,8)(16,8)(17,8)(18,9)(19,6) (20,7)(21,8)(22,6)(23,9)(24,6)(25,8)(26,9)(27,8)(28,9)(29,7) (30,7)(31,7)(32,7)(33,6)(34,7)(35,10)(36,9)(37,7)(38,7)(39,8) (40,9)(41,6)(42,8)(43,8)(44,6)(45,5)(46,11)(47,7)(48,7)(49,8) (50,7)(51,8)(52,7)(53,8)(54,6)(55,8)(56,6)(57,8)(58,7)(59,10) }; \addplot[ color=orange, mark=square*] coordinates { (0,9)(1,15)(2,13)(3,11)(4,9)(5,12)(6,13)(7,10)(8,9)(9,12) (10,10)(11,11)(12,12)(13,7)(14,12)(15,11)(16,13)(17,10)(18,12)(19,10) (20,10)(21,11)(22,13)(23,11)(24,10)(25,11)(26,13)(27,11)(28,11)(29,12) (30,10)(31,14)(32,15)(33,10)(34,9)(35,9)(36,8)(37,11)(38,13)(39,13) (40,8)(41,10)(42,11)(43,8)(44,14)(45,10)(46,11)(47,10)(48,10)(49,9) (50,9)(51,11)(52,10)(53,11)(54,14)(55,13)(56,15)(57,13)(58,14)(59,11) }; \end{axis} \end{tikzpicture} \hfill \begin{tikzpicture} \begin{axis}[ title={B}, xlabel={number of counter bits in Friedmann's trap}, ylabel={number of iterations}, yticklabel pos=right,ylabel near ticks, legend pos=north west, ] \addplot[ color=cyan, mark=*, ] coordinates { (1,3)(2,5)(3,7)(4,9)(5,10)(6,11)(7,12)(8,13)(9,14)(10,15)(11,16) }; \addplot[ color=orange, mark=square* ] coordinates { (1,11)(2,29)(3,65)(4,137)(5,281)(6,569)(7,1145)(8,2297)(9,4601)(10,9209) }; \end{axis} \end{tikzpicture} } \caption{These plots compare the performance of the symmetric strategy improvement
algorithm (data points in cyan circles) with standard strategy improvement using the locally optimising policy rule (data points in orange squares). The plot on the left side is for random examples generated using the \texttt{steadygame 1000 2 4 3 5 6} command, while the plot on the right is for Friedmann's trap from the previous section generated by the command \texttt{stratimprgen -pg switchallsubexp i}.} \label{fig:expfigs} \end{figure} Symmetric strategy improvement will often rule out improvements at individual positions: it disregards profitable changes of Player Max and Min if they do not comply with $\sigma^c_\tau$ and $\tau^c_\sigma$, respectively. It is well known that considering fewer updates per iteration can lead to a significant increase in the number of iterations on random examples and benchmarks. An algorithm based on the random-facet method \cite{Ludwig/95/random,BjorklundVorobyov/07/subexp}, e.g., needs around a hundred iterations on the random examples with 100 positions we have drawn, simply because it updates only a single position at a time. The same holds for a random-edge policy where only a single position is updated. The figures for these two methods are given in the appendix. It is therefore good news that symmetric strategy improvement does not display a similar weakness. It even uses fewer updates when compared to classic strategy improvement with the popular locally optimising and locally random policy rules. Note also that having fewer updates can lead to a faster evaluation of the update, because unchanged parts do not need to be re-evaluated~\cite{BjorklundVorobyov/07/subexp}. As shown in Figure~\ref{fig:expfigs}, the symmetric strategy improvement algorithm not only performs better (on average) in comparison with the traditional strategy improvement algorithm with the locally optimising policy rule, but also avoids Friedmann's traps for the strategy improvement algorithm.
The following table shows the number of iterations taken by the symmetric strategy improvement algorithm on Friedmann's traps for other common switching rules; the columns give the number of counter bits. It is clear that our algorithm is not exponential for these classes of examples. \begin{center} \begin{tabular}{l | c c c c c c c c c c} \hline Switch Rule & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ [0.5ex] \hline Cunningham & 2 &6&9&12&15&18&21&24&27&30\\ CunninghamSubexp &1&1&1&1&1&1&1&1&1&1\\ FearnleySubexp &4&7&11&13&17&21&25&29&33&37\\ FriedmannSubexp&4&9&13&15&19&23&27&31&35&39\\ RandomEdgeExpTest &1&2&2&2&2&2&2&2&2&2\\ RandomFacetSubexp &1&2&7&9&11&13&15&17&19&21\\ SwitchAllBestExp &4&5&8&11&12&13&15&17&18&19\\ SwitchAllBestSubExp &5&7&9&11&13&15&17&19&21&23\\ SwitchAllSubExp &3&5&7&9&10&11&12&13&14&15\\ SwitchAllExp &3&4&6&8&10&11&12&14&16&18\\ ZadehExp &-&6&10&14&18&21&25&28&32&35\\ ZadehSubexp &5&9&13&16&20&23&27&30&34&37\\ \hline \end{tabular} \end{center} \section{Discussion} \label{sec:discuss} We have introduced symmetric approaches to strategy improvement, where the players take inspiration from the respective other's strategy when improving theirs. This creates a rather moderate overhead, where each step is at most twice as expensive as a normal improvement step. For this moderate price, we have shown that we can break the traps Friedmann has introduced to establish exponential bounds for the different update policies in classic strategy improvement \cite{Friedmann/11/lower,Friedmann/11/Zadeh,Friedmann/13/snare}. In hindsight, attacking a symmetric problem with a symmetric approach seems so natural that it is quite surprising that it has not been attempted immediately.
There are, however, good reasons for this, but one should also concede that the claim is not entirely true: the concurrent update to the respective optimal counter strategy has been considered quite early \cite{Friedmann/11/lower,Friedmann/11/Zadeh,Friedmann/13/snare}, but was dismissed, because it can lead to cycles \cite{Condon93onalgorithms}. The first reason is therefore that it was folklore that symmetric strategy improvement does not work. The second reason is that the argument for the techniques that we have developed in this paper would have rested on elegance alone until some of the appeal of classic strategy improvement was caught in Friedmann's traps. Friedmann himself, however, remained optimistic: \begin{quote} We think that the strategy iteration still is a promising candidate for a polynomial time algorithm, however it may be necessary to alter more of it than just the improvement policy. \end{quote} \noindent This is precisely what the introduction of symmetry and co-improvement tries to do.
\section{Introduction}\label{intro} The concept of {\em Cognitive Radio} ($CR$)~\cite{b31} is based on dividing the available radio spectrum into several parts, with some part reserved for the licensed users and the rest freely available for all. A {\em Cognitive Radio Network} ($CRN$) provides the capability of sharing the spectrum in an opportunistic manner by both licensed and unlicensed users, leading to an increase in the effective utilization of the available spectrum. According to a survey conducted by {\em Federal Communications Commission} ($FCC$)~\cite{b116, b117, b118, b119}, the usage of the radio spectrum is non-uniform. While some portions of the spectrum are heavily used, other portions remain relatively under-utilized. Thus, when a licensed user is not currently using the spectrum, an unlicensed user can sense this fact and may temporarily use this channel for his/her purpose. However, as soon as the licensed owner starts using his channel, the unlicensed user must relinquish this channel immediately, and move to a different one by sensing the {\em spectrum holes} or {\em white spaces}. A cognitive radio should have the capability of being programmed to transmit and receive on a variety of frequencies and to use different transmission access technologies supported by its hardware design~\cite{b16,b23}. The transmission parameters, e.g., power level, modulation scheme, etc. of a cognitive radio should be reconfigurable not only at the beginning of a transmission but also during the transmission, when it is switched to a different spectrum band. Of late, there also has been an increasing trend of multimedia communication in the form of voice, text, still image and video in various applications involving $CRN$. Designing efficient algorithms for allocating channels to a large number of such users of $CRN$s and maintaining the {\em Quality of Service} ($QoS$) for multimedia communication constitute an important research problem. 
\subsection{Related Works} \label{related} Multimedia communication through $CRN$s has already been studied by several authors~\cite{b31, b32, b29}. Mitola first introduced the concept of {\em flexible mobile multimedia communications}~\cite{b31} in a $CRN$. Kushwaha et al.~\cite{b32} used fountain coding for packet generation and conversion to send data with high reliability and tolerable delay. Shing et al.~\cite{b29} proposed the idea of dynamic channel selection for video streaming over a $CRN$, based on some priority-based scheduling of video signals. On the other hand, Lei et al.~\cite{b111} addressed spectrum fragmentation with their method ``Jello'', which detects ``edges'' of the power spectrum, then uses the classical best-fit, worst-fit and first-fit algorithms for spectrum selection, and finally runs a distributed coordination procedure to synchronize the transceiver systems. However, in all of these communication schemes, a video signal can not be communicated over the $CRN$ unless a channel of sufficiently large bandwidth, needed for maintaining the $QoS$ of these video signals, is allocated from the white spaces of the spectrum. Thus, even if the sum of all the available white spaces in the spectrum is larger than the required bandwidth for transmitting a video signal, it may not be possible to transmit the video signal if there is no single white space in the spectrum which can provide the required large bandwidth for its communication. Basically, this is a situation of {\em fragmentation} of the spectrum into small holes, with no hole being large enough to accommodate a video signal transmission. It was mentioned by Akyildiz et al.~\cite{b14} that {\em ``CR users may not detect any single spectrum band to meet the user's requirements. Therefore, multiple noncontiguous spectrum bands can be simultaneously used for transmission in CR networks''}.
Some authors have addressed this implementation issue of the proposal by using {\em Orthogonal Frequency Division Multiplexing} ($OFDM$)-based $CRN$s~\cite{b36, b37}. However, designing a {\em Multi-Band $OFDM$} ($MBOFDM$) system for allowing more than one sender to send their messages in the $CRN$ is still a challenging problem~\cite{b35}. Techniques for detection of unused spectrum and sharing the spectrum without harmful interference with other users with the help of a {\em Common Control Channel} ($CCC$) have been presented by Krishnamurthy et al.~\cite{b27}, Masri et al.~\cite{b28} and Bayhan and Alag\"{o}z~\cite{b109}. The $CCC$ is used for supporting the transmission coordination and spectrum related information exchange between the $CR$ users. It facilitates neighbor discovery, helps in spectrum sensing coordination, control signaling and exchange of local measurements between the $CR$ users. Spectrum sensing without using a $CCC$ has been considered by Kondareddy et al.~\cite{b5} and Xin and Cao~\cite{b6}. Taxonomy, open issues, and challenges for channel assignment algorithms in a $CRN$ have been described in~\cite{b100}. Allocation schemes can be fixed~\cite{b30, b113}, dynamic~\cite{b1, b13, b29, b30, b113} or hybrid~\cite{b30, b113}, depending on the flexibility of assigning channels to the cells in the network. The dynamic channel allocation in the spectrum is similar to the classical computer memory management strategies like ``first-fit'', ``best-fit'', and ``worst-fit''~\cite{b108}. Very recently, Bayhan and Alag\"{o}z~\cite{b109, b110} have proposed best-fit channel selection techniques in cognitive radio networks.
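The analogy with memory management can be made concrete. The sketch below is our own illustration, using a hypothetical channel occupancy pattern with nine free channels but no free band of eight (mirroring the fragmented spectrum of Fig.~\ref{f3}); it shows why contiguous-band allocation fails under fragmentation.

```python
def free_runs(busy):
    """Return (start, length) of every maximal run of free channels."""
    runs, start = [], None
    for i, b in enumerate(busy + [True]):   # sentinel closes a trailing run
        if not b and start is None:
            start = i
        elif b and start is not None:
            runs.append((start, i - start))
            start = None
    return runs

def first_fit(busy, demand):
    """First free contiguous band of at least `demand` channels."""
    for start, length in free_runs(busy):
        if length >= demand:
            return start
    return None             # fragmentation: no single band is large enough

def best_fit(busy, demand):
    """Smallest free contiguous band that still fits `demand` channels."""
    fits = [(length, start) for start, length in free_runs(busy) if length >= demand]
    return min(fits)[1] if fits else None

# Hypothetical 16-channel occupancy (True = busy): 9 free, longest free run = 3.
busy = [c == 1 for c in [0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1]]
print(first_fit(busy, 8), best_fit(busy, 8))   # None None  (video blocked)
print(first_fit(busy, 3), best_fit(busy, 3))   # 2 2
```

Both strategies fail for a demand of eight channels even though nine channels are free, which is exactly the fragmentation problem the present paper sets out to avoid.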
\subsection{Problem Statement} \label{problem} \begin{figure}[] \centering \includegraphics[width=3.0in,height=0.25in]{a3.eps} \caption{Spectrum Divided into Channels (Unused Channels shown as White)}\label{f3} \end{figure} Consider a representative scenario depicted in Fig.~\ref{f3}, where we show a part of the spectrum divided into $16$ channels marked as $x_0, x_1, \cdots, x_{15}$, each of these channels being of the same bandwidth $B_{min}$, the minimum bandwidth for a multimedia signal. For example, if the bandwidth requirements for the voice, text and video signals are $64$ Kbps, $128$ Kbps and $512$ Kbps respectively, then $B_{min}$ is taken to be $64$ Kbps. Thus, to transmit an audio signal, we need only one channel, while for a video signal, we need eight consecutive channels ($x_i$'s). However, as we see from Fig.~\ref{f3}, there is no contiguous band consisting of eight channels, although a total of nine channels is still available. We need to devise an appropriate technique to allow the transmission of the given video signal through eight of these available nine channels, without compromising the video quality at the receiving end. \begin{figure}[] \centering \includegraphics[width=1.25in,height=.75in]{a6.eps} \caption{Nodes with their Respective Sensing Regions}\label{f6} \end{figure} The channel assignment process in a $CRN$ may broadly be divided into two subproblems: {\em channel sensing} and {\em channel allocation}. We assume that the transmission range of a node is equal to its sensing range. A node $U$ is called a 1-distance neighbor of a node $V$ if $U$ falls within the transmission or sensing range of the node $V$. While sensing, we assume that a node can always sense the channels which are being used by all of its 1-distance neighbors for transmitting their respective data. Referring to Fig.~\ref{f6}, the transmitting channels of all the neighbors at 1-distance from a node $A$ can be sensed by node $A$.
Consider the node $C$ in Fig.~\ref{f6}, which is a 1-distance neighbor of $A$. Node $B$ is another 1-distance neighbor of $C$, but node $B$ is at 2-distance from $A$. The channels used by $C$ in receiving some information from $B$ cannot be sensed by node $A$. Thus, node $B$ can give rise to the {\em hidden node problem}~\cite{b2} while allocating channels to node $A$. To be more specific, if $A$ and $B$ both want to communicate their messages to $C$ at the same time using the same channel (when both of them independently sense that channel as free), the node $C$ will experience a collision, and thus both the messages will be lost at $C$. The channel allocation algorithm must address this hidden node problem while allocating channels for the message communication from $A$ to $C$. Another problem arises when node $C$ has the capability of receiving a multimedia signal of bandwidth $512$ Kbps, as shown in Fig.~\ref{f6}. Node $A$ sends some data to node $C$ which requires only $128$ Kbps bandwidth. Now, node $B$ also wants to send some data to node $C$ at the same time which requires, say, $384$ Kbps bandwidth. With the existing $OFDM$ technique~\cite{b34} in $CRN$s, we can not transmit data simultaneously to the node $C$ from node $A$ and node $B$, though node $C$ might have the capacity to handle the data, unless there is a sufficient gap between the channels used for the two different pairs of communicating nodes to avoid channel interference. According to Mahmoud et al.~\cite{b35}, designing an $MBOFDM$ system to handle such a situation is a challenging problem due to the synchronization requirement between the transmitter and the receiver. We consider the situation for multimedia communication in which a typical user may require a varying number of channels. Thus, a particular node may sometimes need just one single channel and sometimes a number of channels to communicate its messages, depending on the types of the multimedia signals and their required $QoS$.
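The varying channel demands above reduce to simple arithmetic: a signal of a given bandwidth needs enough $B_{min}$ channels to cover it. A minimal sketch (the function name is ours):

```python
import math

B_MIN = 64  # Kbps, bandwidth of one cognitive channel

def channels_needed(bandwidth_kbps):
    """Number of B_min channels required to carry a signal (its demand DN)."""
    return math.ceil(bandwidth_kbps / B_MIN)

# Voice, text, node B's data and video from the examples above:
print([channels_needed(bw) for bw in (64, 128, 384, 512)])  # [1, 2, 6, 8]
```

In the scenario of Fig.~\ref{f6}, node $C$ must therefore be able to receive on up to eight channels at once, shared between the demands of $A$ and $B$.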
\subsection{Our Contribution} \label{contribution} In this paper, we propose an elegant way of overcoming the problem of fragmentation of the available spectrum as mentioned in Section~\ref{problem}, with regard to the communication of multimedia signals over the $CRN$, while maintaining the $QoS$ requirement. We propose a technique for establishing a communication between sender and receiver nodes for single hop communication of multimedia data, where we first decompose a multimedia signal in the time domain into a number of bit-sets, with each set containing sufficiently sparse bits so as to be transmitted over just a single channel of bandwidth $B_{min}$ while still maintaining the signal quality. Thus, the total information content of a signal during a particular time frame is basically divided into several packets, with each packet being transmitted through one available channel in the white space. The constituent packets generated for a given time frame may, however, be transmitted over non-contiguous channels. At the receiver end, all the packets received through these channels will be used for reconstructing the original signal without degrading the signal quality. Using the above strategy, we propose here a novel channel allocation technique which assigns non-contiguous channels for effecting multimedia communication between a sender and a receiver very fast through a random number generation process, so that the channel fragmentation problem as experienced in conventional first-fit or best-fit techniques is completely overcome, resulting in very high channel utilization with negligible overhead in time. A detailed description of the proposed scheme is given below.
\begin{figure}[] \centering \includegraphics[width=3.0in,height=1.25in]{OverNonOver.eps} \caption{Channels configuration}\label{ONOC} \end{figure} To allow multiple senders to send their data simultaneously through the $CRN$, we propose two possible channel allocation techniques, one based on {\em Frequency Division Multiplexing} ($FDM$) and {\em Frequency Division Multiple Access} ($FDMA$) ($FDM-FDMA$) and another based on $OFDM$ and $FDMA$ ($OFDM-FDMA$). For the $FDM-FDMA$ allocation, we use the non-overlapping channels, where the channel width is assumed to be large enough to include the guard band, as shown in Fig.~\ref{ONOC}(a). Here, the basic idea is to use $FDM$ for every pair of communicating nodes, but $FDMA$ for different pairs of communicating nodes. Referring to Fig.~\ref{f6}, while the channels for communication between the two nodes $A$ and $C$ are allocated using $FDM$ and the channels for communication between nodes $B$ and $C$ are also allocated using $FDM$, the channel allocation for the pairs $(A,C)$ and $(B,C)$ follows the $FDMA$ technique. For the $OFDM-FDMA$ allocation, we use the overlapping orthogonal channels, as shown in Fig.~\ref{ONOC}(b). Here, the basic idea is to use $OFDM$ for every pair of communicating nodes, but $FDMA$ for different pairs of communicating nodes. To avoid inter-channel interference, we have to maintain a certain minimum gap between every pair of channels allocated to different nodes. Referring to Fig.~\ref{f6}, while the channels for communication between the two nodes $A$ and $C$ are allocated using $OFDM$ and the channels for communication between nodes $B$ and $C$ are also allocated using $OFDM$, the channel allocation for the pairs $(A,C)$ and $(B,C)$ follows the $FDMA$ technique. Thus, we have to maintain a certain minimum gap between every pair of channels allocated to nodes $A$ and $B$ to avoid inter-channel interference.
We present algorithms for channel sensing, channel reservation and channel deallocation, avoiding the hidden node problem and also avoiding possible collision with the channel demands from other users of the $CRN$. Corresponding transmission and reception protocols are also proposed. We theoretically analyze our proposed algorithms to predict the average number of iterations or attempts made by our proposed algorithm for allocating the channels. In our later discussions, we use the terms {\em iterations} and {\em attempts} interchangeably throughout the text. The average number of such attempts is $O(1/f)$, where $f$ is the fraction of the free or available channels. In dynamic channel allocation, first-fit and best-fit techniques are the commonly used ones~\cite{b109, b110, b111, b112, b39, b40}, and thus in our simulation, we compare our proposed protocol with the first-fit and best-fit techniques for channel allocation. Simulation results show that the average number of attempts for acquiring the required number of channels agrees well with the theoretical values even for extremely heavy traffic with about $96\%$ blocked channels. From simulation, we also see that the proposed technique always outperforms the existing first-fit and best-fit~\cite{b109, b110} allocation techniques in terms of the average number of attempts needed for acquiring the necessary number of channels for all traffic situations, ranging from light to extremely heavy traffic. The proposed technique can allocate the required number of channels in less than a second with $FDM-FDMA$ even for a $96\%$ traffic load and in less than $4.5$ sec with $OFDM-FDMA$ for a $99\%$ traffic load, while the first-fit and best-fit techniques fail to allocate any channel in such situations. We can intuitively explain why our proposed technique performs better than the first-fit and best-fit techniques.
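Before turning to that, we note that the $O(1/f)$ estimate itself can be checked with a toy Monte Carlo simulation. This is our own simplification of the random probing idea with hypothetical parameters; the actual protocol additionally handles reservation and collisions.

```python
import random

def random_allocate(busy, demand, rng):
    """Draw channel indices uniformly at random until `demand` distinct
    free channels are collected; return the number of probes made."""
    acquired, attempts = set(), 0
    while len(acquired) < demand:
        attempts += 1
        c = rng.randrange(len(busy))
        if not busy[c]:
            acquired.add(c)
    return attempts

rng = random.Random(1)
channels, free_fraction, demand, trials = 1000, 0.10, 8, 2000
busy = [True] * channels
for c in rng.sample(range(channels), int(channels * free_fraction)):
    busy[c] = False                 # a fraction f of the channels is free
avg = sum(random_allocate(busy, demand, rng) for _ in range(trials)) / trials
print(avg / demand)                 # close to 1/f = 10 probes per free channel
```

Each free channel is found after roughly $1/f$ probes in expectation (a geometric trial), which matches the $O(1/f)$ bound regardless of how fragmented the free channels are.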
Both of the latter techniques suffer from the channel fragmentation problem: channels cannot be allocated unless a contiguous free band of the required number ($DN$) of channels is found. In contrast to this, our proposed technique removes the requirement of a single contiguous free band containing all these $DN$ channels and thus, the success rate is $100\%$ with our technique even with an extremely heavy traffic load when the existing approaches fail to allocate the channels. Moreover, because we explore the free channels through a random number generation process and include every free channel so found, our allocation algorithm quickly terminates with a success. \section{Basic Ideas of our Proposed Protocol} \label{prelims} \subsection{Creating Small Bandwidth Cognitive Sub-Data Channels}\label{CSBCUDC} \begin{figure}[] \centering \includegraphics[width=3.0in,height=1.5in]{a4.eps} \caption{Samples from the signal}\label{f4} \end{figure} Consider a band-limited signal having a bandwidth of, say, $W$. Let us assume that the signal is sampled with a sampling frequency of $2W$. Referring to Fig.~\ref{f4}, let $s_0, s_1, \cdots , s_{N-1}$ be $N$ samples taken over the time period $T$ of the band-limited signal at a sampling interval of $\tau = \frac{1}{2W}$. Thus, $T = N \tau$. Let us assume that from every sample, we get $b$ bits. Thus, the total number of bits is $N b$. So, the bits generated from all $N$ samples are $b_0, b_1, \cdots , b_{Nb-1}$. Let the bandwidth $W$ of this signal be less than or equal to $n B_{min}$. We then partition the $N b$ bits into $n$ subsets $BS_0, BS_1, BS_2, \cdots, BS_{n-1}$, where the bit-set $BS_i$ is defined as, \begin{equation} BS_i = \{ b_j \,|\, j \bmod n = i,\ 0 \le j \le Nb-1\}, \qquad 0 \le i \le n-1. \end{equation} Note that in each of these $BS_i$'s, the bits are separated by $n \tau$ time, and hence, these would require a transmission bandwidth of $\frac {W} {n} \le B_{min}$.
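The round-robin partition above, and its inverse at the receiver, can be sketched in a few lines (a minimal Python sketch; the function names are ours and not part of the protocol):

```python
# Round-robin partition of the N*b sample bits into n bit-sets BS_i,
# where BS_i collects every bit b_j with j mod n == i.
def partition_bits(bits, n):
    return [bits[i::n] for i in range(n)]

def reassemble(bit_sets):
    # Interleave the bit-sets back into the original bit order.
    n = len(bit_sets)
    total = sum(len(bs) for bs in bit_sets)
    out = [None] * total
    for i, bs in enumerate(bit_sets):
        out[i::n] = bs
    return out

bits = list(range(16))               # stand-in for b_0 ... b_{Nb-1}
subsets = partition_bits(bits, n=4)
assert subsets[1] == [1, 5, 9, 13]   # bits of BS_1 are n*tau apart in time
assert reassemble(subsets) == bits   # receiver recovers the original order
```

Because consecutive bits of each $BS_i$ are $n\tau$ apart, each subset can be carried on a channel of bandwidth $W/n$.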
Thus, to transmit the original signal as shown in Fig.~\ref{f4}, we search for the availability of $n$ channels each of bandwidth $B_{min}$ in the white space of the spectrum. \begin{figure}[] \centering \includegraphics[width=3.0in,height=1.0in]{a5.eps} \caption{Utilized Spectrum with small bandwidth Cognitive Channels}\label{f5} \end{figure} Let $COGCH_i, i = 0, 1, \cdots, n-1$ be these $n$ cognitive channels such that the bits in the bit-set $BS_i$ are transmitted through $COGCH_i$ (as shown in Fig.~\ref{f5} for $n = 8$). In practice, corresponding to each time frame of a suitable duration $T$, we take the bits in the bit-set $BS_i$ to form a data sub-packet $SP_i$. The header of each such sub-packet will contain the identity of the time frame (e.g., in the form of a packet number $PN$) as well as its {\em Sub-Packet Number} ($SPN$) equal to $i$. At the receiving end, all these received sub-packets having the same packet number will be used to reconstruct the original transmitted signal. \subsection{Physical Implementation}\label{PIm} \subsubsection{Physical Implementation of $FDM-FDMA$}\label{PIm1} \begin{figure*}[] \centering \includegraphics[width=6.0in,height=2.50in]{SDM_Block.eps} \caption{(a) $FDM-FDMA$ Transmitter Block Diagram, (b) $FDM-FDMA$ Receiver Block Diagram}\label{Block} \end{figure*} We assume that all $CR$ users are {\em Secondary User}s ($SU$s) and have the same priority. Similarly, all {\em Primary User}s ($PU$s) are assumed to have the same priority, which is greater than that of an $SU$. We also assume that any given node in the system has the maximum capability of providing some $DN$ ({\em Demand Number}) channels. Thus, a node $A$ may be allowed to be involved in simultaneously communicating more than one signal, so long as the sum of the numbers of channels used by it in communicating all these signals is less than or equal to $DN$.
For example, voice ($64$ Kbps), data ($128$ Kbps), still image ($256$ Kbps), video ($384$ Kbps) and online streaming ($512$ Kbps) need $DN$ as $1,~2,~4,~6$ and $8$, respectively, as we assume that $B_{min}$ is $64$ Kbps. We assume the presence of a dedicated $CCC$~\cite{b27,b28,b109} operating on a specific frequency ($f_{CCC}$) for coordination between the various $SU$s, with the communications through $CCC$ effected in discrete time slots. For a low traffic load, communication through $CCC$ can be done by following the $IEEE$ $802.11$ $CSMA/CA$~\cite{b114, b115} protocol. However, under moderate to heavy traffic, one may use any conventional {\em controlled access}~\cite{b114, b115} method like the {\em Bit-Map}~\cite{b114, b115} protocol to improve the performance. If we use the {\em Bit-Map} protocol, then each attempt made by our algorithm requires $O(\Delta)$ time, where $\Delta$ is the maximum node degree of the network. The block diagrams of the proposed $FDM-FDMA$ transmitter and receiver have been shown in Figs.~\ref{Block}(a) and~\ref{Block}(b), respectively. In this scheme, for every channel we need a separate modulator and demodulator system. The $CCC$ channel, through which the control messages are transmitted, is totally separated from the data channels. The block {\em Splitter} in Fig.~\ref{Block}(a) works as a demultiplexer by which the $BS_i$'s can be created, leading to the generation of sub-packets $SP_i$. On the receiver side, the {\em Collector} in Fig.~\ref{Block}(b) is used to gather bits from different channels to form the packet constituted from the bits corresponding to all the $BS_i$'s, which is required for regeneration of the multimedia data. Each $COGCH_i$ works on a different frequency $f_i$. For the $FDM-FDMA$ technique, we can select all the frequencies $f_i$ in non-overlapping channels (as shown in Fig.~\ref{ONOC}(a)).
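The demand numbers quoted above follow directly from $DN = \lceil \mathrm{rate}/B_{min} \rceil$ with $B_{min} = 64$ Kbps; a quick check (a sketch, with the service rates as stated in the text):

```python
import math

B_MIN = 64  # Kbps, the minimum channel bandwidth assumed in the text

def demand_number(rate_kbps):
    # Number of B_min-wide cognitive channels needed for a given bit rate.
    return math.ceil(rate_kbps / B_MIN)

rates = {"voice": 64, "data": 128, "still image": 256,
         "video": 384, "online streaming": 512}
assert [demand_number(r) for r in rates.values()] == [1, 2, 4, 6, 8]
```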
An alternative scheme using a commercially available $MIMO$ system can also be used depending on the relative cost and ease of implementation. \subsubsection{Physical Implementation of $OFDM-FDMA$}\label{PIm2} \begin{figure}[] \centering \includegraphics[width=3.25in,height=1.25in]{Chalo.eps} \caption{Channel allocation for node $A$ and node $B$ in overlapping channel} \label{chalo} \end{figure} For the $OFDM-FDMA$ technique, the frequencies $f_i$ can be selected in such a way that all the $f_i$'s are orthogonal~\cite{b34}, leading to $OFDM$ modulation for a node which requires more than one channel to transmit its data; the other nodes are to select free channels in the overlapping channels so as to maintain a certain minimum gap from the channels chosen by other nodes, to avoid inter-channel interference (as shown in Fig.~\ref{ONOC}(b)). As an example, referring to Fig.~\ref{f6}, nodes $A$ and $B$ need $2$ and $4$ channels, respectively, to transmit some data to node $C$. Thus, node $A$ selects frequencies $f_{x0}^A$ and $f_{x1}^A$ for transmitting its data, while node $B$ selects frequencies $f_{y0}^B$, $f_{y1}^B$, $f_{y2}^B$ and $f_{y3}^B$ for its communication purpose, as shown in Fig.~\ref{chalo}. Here, the carriers operating at $f_{x0}^A$ and $f_{x1}^A$ are orthogonal to each other, and similarly $f_{y0}^B$, $f_{y1}^B$, $f_{y2}^B$ and $f_{y3}^B$ are also orthogonal to each other, but $f_{x0}^A$ and $f_{x1}^A$ need to be separated by some minimum band gap of $S$ (as shown in Fig.~\ref{chalo}) from the frequencies $f_{y0}^B$, $f_{y1}^B$, $f_{y2}^B$ and $f_{y3}^B$ to facilitate the synchronization process in the two destined receivers.
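The guard-gap condition between the channel sets of different nodes can be expressed as a simple predicate (a hypothetical sketch; the integer channel indices and the gap $S$ measured in channel widths are our assumptions, not values from the text):

```python
def gap_ok(own_channels, other_channels, S):
    # Channels of different nodes must be separated by at least S channel
    # widths to avoid inter-channel interference in the overlapping band.
    return all(abs(a - b) >= S for a in own_channels for b in other_channels)

# Node A holds channels {10, 11}; node B asks for {13, 14, 15, 16}; S = 2.
assert gap_ok([10, 11], [13, 14, 15, 16], S=2)       # separation respected
assert not gap_ok([10, 11], [12, 13, 14, 15], S=2)   # 11 and 12 too close
```

Within one node's own set the channels are orthogonal $OFDM$ carriers, so no gap is needed; the predicate is only checked across nodes.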
Furthermore, to maintain the needed orthogonality condition between the $OFDM$ channels, all the $OFDM$ carriers need to be synchronously related to a unique pilot carrier, which can be transmitted (from some select nodes playing the role of collaborating leaders) in the same $OFDM$ band periodically over time. The frequency of the pilot carrier should be placed conveniently in the $OFDM$ frequency range (but not used by any node for data transmission). We, however, assume that this sacrifice of one single $OFDM$ carrier for the pilot in the entire $OFDM$ transmission bandwidth will not impact the spectral efficiency of the system at large. Every node will synthesize its own $OFDM$ carriers from the pilot carrier. In effect, this implies that the $OFDM$ carrier frequencies transmitted from all the nodes and the pilot carrier frequency should be integrally related to a lower carrier frequency, which would be the highest common factor of all of them. This will lead to some additional hardware for the nodes along with the $IFFT/FFT$-based $OFDM$ generation and demodulation schemes with an arbitrary but small number ($DN$) of $OFDM$ carriers. However, we consider this additional hardware complexity to be realizable with today's $VLSI$ design techniques, and worthwhile as well, keeping in mind the benefits that one would be able to derive in respect of the spectral efficiency achievable from this proposition. \subsection{State Diagram of the Overall System}\label{SDOS} \begin{figure}[] \centering \includegraphics[height=2.25in, width=3.0in]{a15.eps} \caption{State diagram}\label{f15} \end{figure} In Fig.~\ref{f15} we draw a state diagram that explains the basic functional units of the communication system as depicted through its various states and the state transition arcs. We start from the $Idle$ state. When the transmitter buffer of a node becomes full, a status bit $B_{Tx}$ of the node is set to $1$ (which is otherwise $0$).
When $B_{Tx} = 1$ and this node wants to transmit, it moves from the $Idle$ state to the $Sensing$ state. The node in this $Sensing$ state explores the availability of free channels of the required number $DN$ as demanded by the multimedia signal to be transmitted by the node. If it finds $DN$ number of channels not being used by any of its 1-distance neighbors for transmission, then it blocks these channels temporarily and moves to the $Allocate$ state. In the $Allocate$ state, it determines whether there is any hidden node problem. If not, then it goes on to the $Transmit$ state; otherwise it moves back to the $Sensing$ state. In the $Sensing$ state, the node maintains a clock to measure the time needed for sensing and allocation of the required number $DN$ of channels. When the node first enters the $Sensing$ state from the $Idle$ state, the clock and the timing register are both set to $0$. The timing register is updated by the clock whenever the node moves from the $Allocate$ to the $Sensing$ state. If allocation of channels is not done within a specified time-out period $T$, then the node moves to the $Timed~Out$ state, at which it releases all the blocked channels, if any, and goes back to the $Idle$ state. In the $Transmit$ state, the node transmits its message through the allocated $DN$ channels. When the transmission is complete, it goes to the $Deallocate$ state where it releases all these $DN$ number of blocked channels, resets $B_{Tx}$ to $0$ and goes back to the $Idle$ state. While the node is in the $Transmit$ state, if any primary user ($PU$) appears, asking for any of the channels used by this node, then those channels will be immediately released and the system will go back to the $Sensing$ state, setting again both the clock and the timing register to $0$. \section{Proposed Protocol} \subsection{Algorithm for Connection Establishment} \label{connsetup} Let $SA$ and $DA$ denote the addresses of the source node and the destination node, respectively.
To establish a communication link, the source node $SA$ needs to sense and allocate $DN$ number of channels for transmitting the multimedia signal using the $CCC$. We consider below the connection establishment process for multimedia signals to be executed by the source node $SA$ and the destination node $DA$. \begin{enumerate} \item Sense the channels not being used by $SA$'s 2-distance neighbors (to avoid the hidden node problem), as illustrated in Fig.~\ref{f6}. This would be effected with the help of some control and acknowledgement messages communicated through the $CCC$. \item Allocate the $DN$ free channels found above to the destination node $DA$ so that it becomes ready for receiving the desired multimedia signal from $SA$. \end{enumerate} The above steps of allocating channels to any source-destination pair would be done {\em dynamically} in a {\em distributed manner} with the help of the $CCC$. After this allocation process, the actual multimedia communication between a source-destination pair will continue unless some or all of these channels are deallocated due to the arrival of one or more primary users. \subsubsection{Reservation of Channel}\label{grabccc} \begin{figure}[] \centering \includegraphics[width=2in,height=0.25in]{a13.eps} \caption{(a) $CM$ message and (b) $ACK$ or $WAIT$ message}\label{f13} \end{figure} The node $SA$ transmits a {\em Control Message} ($CM$) with $SA$, $DA$ and $DN$ values as shown in Fig.~\ref{f13}(a). After sending this control message, it waits up to some maximum time-out period, say $\delta_T$, for getting either an {\em Acknowledgement} ($ACK$) message or a $WAIT$ message from $DA$, both of which would contain the $SA$ and $DA$ values, with one more $TAG$ bit, as shown in Fig.~\ref{f13}(b), which is set to '$0$' for an $ACK$ message and '$1$' for a $WAIT$ message.
The $ACK$ message is sent if node $DA$ is capable of providing $DN$ number of channels for receiving the multimedia signal from $SA$ (i.e., when the available number of channels $AN$ at $DA$ is greater than or equal to $DN$), while the $WAIT$ message is sent when $AN < DN$. Since the node $DA$ may simultaneously receive such channel reservation requests from other source nodes as well, for $AN \geq DN$, it temporarily reserves the requested number (i.e., $DN$) of channels for node $SA$ on a first-come-first-served basis (without bothering about which $DN$ channels). If the node $DA$ is not capable of allocating the requested $DN$ number of channels to $SA$, then along with sending the $WAIT$ message to the node $SA$, it puts this request from $SA$ (in the form of $CM$) in a waiting queue for later servicing. If neither the $ACK$ nor the $WAIT$ message is received by $SA$ within $\delta_T$ time (due to a possible collision caused by the simultaneous transmission of messages from some other node(s) within 1-distance from $SA$, or due to the hidden node problem, i.e., due to a collision at the node $DA$ caused by messages from some node, say $V$, which is at 1-distance from $DA$, but at 2-distance from $SA$), then $SA$ retransmits its control message $CM$. This process of retransmission is repeated by $SA$ until an $ACK$ or $WAIT$ message is received from $DA$. The algorithms {\em reserve\_channels\_transmitter} and {\em reserve\_channels\_receiver} to be executed by nodes $SA$ and $DA$ are given in Algorithms~\ref{algo1} and~\ref{algo2}, respectively.
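In outline, the receiver-side decision of Algorithm~\ref{algo2} — reserve when $AN \geq DN$, otherwise queue the request and reply $WAIT$ — can be sketched as follows (a minimal Python sketch with the message passing abstracted away; the class and field names are ours):

```python
from collections import deque

class Receiver:
    def __init__(self, total_channels):
        self.AN = total_channels      # currently available channel count
        self.wait_queue = deque()     # CMs queued for later servicing

    def on_control_message(self, cm):
        # cm is an (SA, DA, DN) control message as in Fig. 13(a).
        sa, da, dn = cm
        if self.AN >= dn:
            self.AN -= dn             # temporarily reserve DN channels,
            return ("ACK", sa, da)    # first-come-first-served; TAG = 0
        self.wait_queue.append(cm)    # not enough channels: queue the CM
        return ("WAIT", sa, da)       # TAG = 1

rx = Receiver(total_channels=8)
assert rx.on_control_message(("A", "C", 6))[0] == "ACK"
assert rx.on_control_message(("B", "C", 4))[0] == "WAIT"   # only 2 left
assert rx.AN == 2 and len(rx.wait_queue) == 1
```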
\begin{algorithm}[] \scriptsize \linesnumbered \KwIn{$SA$, $DA$ and $DN$} \KwOut{channels\_reserved} $channels\_reserved = false$ \; \While{$channels\_reserved = false$ AND $B_{Tx} = 1$} { Transmit $CM$ to the node $DA$\; Wait for $\delta_T$ time to receive $ACK$ or $Wait$ Signal\; \If{$ACK$ or $WAIT$ Signal received within $\delta_T$} { \eIf{$ACK$ is received within $\delta_T$} { channels\_reserved = true\; } { Wait for $ACK$ from $DA$ /* $WAIT$ received */\; channels\_reserved = true\; } } } \caption{{\em Reserve\_Channels\_Transmitter}} \label{algo1} \normalsize \end{algorithm} \begin{algorithm}[] \scriptsize \linesnumbered \KwIn{$CM$ (consisting of $SA$, $DA$ and $DN$)} \KwOut{$ACK$, $WAIT$ to the Transmitter} \eIf{$AN$ $\geq$ $DN$} { $AN=AN-DN$\; Update its database\; Transmit $ACK$\; } { Enter $CM$ in a waiting queue\; Send back $WAIT$ Signal to node $SA$\; } \caption{{\em Reserve\_Channels\_Receiver}} \label{algo2} \normalsize \end{algorithm} \subsubsection{Sensing and Allocation of Channels} \label{SAC} \paragraph{Sensing and Allocation of Channels for $FDM-FDMA$ Technique} \label{SAC11} After getting the $ACK$ message from the destination node $DA$ in reply to the $CM$ message as described in Section~\ref{grabccc}, the source node $SA$ will try to find the required $DN$ number of data channels from the currently available white spaces of the spectrum. This will be done by randomly choosing a set of $DN$ distinct channels which are not being used by any of the 2-distance neighbors of $SA$ for transmission as well as reception (to avoid the hidden node problem) of data. We assume that the width of the non-overlapping channel already includes the bandgap to avoid inter-channel interference and hence, a free channel for a node $U$ means one which is not being used by any other node within the 2-distance neighborhood of $U$. 
We randomly generate a number $i$ and then sense whether channel $i$ is free (with respect to all nodes in the 2-distance neighborhood of $U$). If channel $i$ is free, then it can be allocated to $U$. In fact, we generate $DN$ such random numbers and sense in parallel whether all the channels corresponding to these randomly generated $DN$ numbers are free. If $m$ free channels are found by this process, then the process terminates if $m = DN$; otherwise the whole process is repeated to find the required number $DN - m$ of free channels still to be allocated to $U$. Each iteration of this loop is termed as an {\em attempt}, as introduced earlier in Section~\ref{contribution}. \begin{figure}[] \centering \includegraphics[width=3.0in,height=0.5in]{a14.eps} \caption{(a) $TAM$ or $CCB$ message, (b) $TAM\_ACK$ or $NACK$ message, (c) $CHALLOC$ message and (d) $CRM$ message ($d=|channel\_set|$)}\label{f14} \end{figure} \begin{figure}[] \centering \includegraphics[width=2.5in,height=1.25in]{a17.eps} \caption{$AVL$ tree for storing the channel usage status of a node}\label{f17} \end{figure} The fact that none of the 1-distance neighbors of $SA$ is currently using a given channel for transmitting their data can easily be determined by $SA$ by listening to this channel (channel sensing). However, to avoid the hidden node problem, whether a given channel is being used by any node $U$ among the 1-distance neighbors of $SA$ for receiving some messages from some node, say $V$, which is at 2-distance from $SA$, cannot be determined by such channel sensing. To decipher that, node $SA$ has to send a {\em Trial Allocation Message} ($TAM$) to all of its 1-distance neighbors, which would contain the source and destination addresses (i.e., $SA$ and $DA$) along with the {\em Channel Number} ($CN$) in question. The structure of the $TAM$ message is as shown in Fig.~\ref{f14}(a), where the $TAG$ field is set to '$0$' for a $TAM$.
On getting this $TAM$, a node $U$ would send back an acknowledgement ($TAM\_ACK$) message to $SA$ if $U$ is currently not using the channel $CN$ for receiving any message, and a no-acknowledgement ($NACK$) message otherwise. The $TAM\_ACK$ and $NACK$ messages are of the form as shown in Fig.~\ref{f14}(b), where the $TAG$ field is set to '$00$' for a $NACK$ and '$01$' for a $TAM\_ACK$. Node $U$ can check this fact efficiently if it maintains a channel usage database in the form of an $AVL$ tree as shown in Fig.~\ref{f17}, where each node of the tree contains a tuple ($CN$, $SA$) and insertion or finding an element in the tree is done based on the $CN$ field only. The choice of an $AVL$ tree as the data structure for this purpose enables insertion, deletion and search, each in $O(\log m)$ time, where $m$ is the total number of nodes in this tree. In case $U$ is currently not using the channel $CN$ for receiving any message, it temporarily allocates the channel $CN$ to the node $SA$ and keeps this information by inserting a new node with this $CN$ and $SA$ information in the $AVL$ tree. This prevents other nodes from selecting this channel $CN$ for transmitting their data while the node $SA$ is still in the process of selecting all of its required channels and has not yet completed that process. If $SA$ does not receive any $NACK$ message within a maximum time-out period $\delta_T$ from any node in reply to this $TAM$ message, then $SA$ puts this channel number $CN$ in its chosen set of channels $channel\_set$; otherwise, $SA$ cannot use the channel $CN$ for transmitting its data and hence it broadcasts a {\em Clear Channel Blockage} ($CCB$) message to all of its 1-distance neighbors. The structure of the $CCB$ message is the same as that of a $TAM$ shown in Fig.~\ref{f14}(a), where the $TAG$ field is set to '$1$' for a $CCB$.
If a node $U$ receives this $CCB$ message, then $U$ will delete the corresponding node from its $AVL$ tree storing its channel usage status (thus, the channel $CN$ will now be treated as available by the node $U$). The above process of allocating channels for $SA$ will be repeated to get all $DN$ channels, after which the transmission will be started. When the required number of channels are found through the above process, a $Channel~Allocate~(CHALLOC)$ command is broadcast by $SA$ to its 1-distance neighbors with the information regarding the destination node $DA$, and the sub-packet number ($SPN$) of every packet along with the allocated channel number ($CN$) as shown in Fig.~\ref{f14}(c). On receiving this $CHALLOC$ command, node $DA$ will record the information regarding the $SPN$ and $CN$ for the sub-packets to be received from $SA$ in its channel reservation database, while any other node will release the temporary blockage of the corresponding channel numbers. If, however, the required number of channels are not found within a maximum time, say $T$ ($\delta_T \ll T$), then the node $SA$ cannot start its transmission at the moment and it broadcasts a $Channel~ Release~ Message$ ($CRM$) signal of the form as shown in Fig.~\ref{f14}(d) to all of its 1-distance neighbors to release the temporarily blocked channels. Node $SA$ has to try again for getting the required $DN$ number of channels until it succeeds or the transmitter buffer becomes empty ($B_{Tx} = 0$). The algorithms {\em Sense\_Allocate\_Transmitter\_$FDM-FDMA$} and {\em Sense\_Allocate\_Receiver} to be executed by the node $SA$ and any other receiving node are given in Algorithms~\ref{algo3} and~\ref{algo6}, respectively.
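The claim that the average number of attempts grows like $O(1/f)$ can be checked with a toy simulation of the random probing loop described in this subsection (a sketch under simplifying assumptions: the $TAM$/$CCB$ message exchange is ignored, the set of free channels is fixed, and all parameter values are hypothetical):

```python
import random

def attempts_to_allocate(MAX, free, DN, rng):
    # free: set of currently free channel numbers in 1..MAX.
    # Each attempt draws as many fresh random channels as are still
    # needed, senses them "in parallel", and keeps the free ones.
    chosen, attempts = set(), 0
    while len(chosen) < DN:
        attempts += 1
        j = DN - len(chosen)
        candidates = rng.sample([c for c in range(1, MAX + 1)
                                 if c not in chosen], j)
        chosen |= {c for c in candidates if c in free}
    return attempts

rng = random.Random(1)
MAX, f, DN = 500, 0.04, 4          # 4% free channels: heavy traffic
free = set(rng.sample(range(1, MAX + 1), int(MAX * f)))
mean = sum(attempts_to_allocate(MAX, free, DN, rng)
           for _ in range(200)) / 200
assert mean < 5 / f                # attempts stay on the order of 1/f
```

With $f = 0.04$ the last missing channel dominates and needs on the order of $1/f = 25$ attempts, so the observed mean stays well within a small multiple of $1/f$.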
\begin{algorithm}[] \scriptsize \linesnumbered \KwIn{$DA$, $DN$, $MAX$} \KwOut{Selected channel numbers $c_1, c_2, \cdots, c_{DN}$} $channel\_set = \emptyset$\; $cardinality = 0$ $/*cardinality = |channel\_set|*/$\; $j = DN$\; $time = 0$\; \While{$time$ $\le$ $T$} { Randomly generate a set of $j$ distinct channel numbers $c_1, c_2, \cdots, c_j$ in the range $1$ to $MAX$ such that for $1 \leq i \leq j$, $c_i \notin channel\_set$ \; Sense channel numbers $c_1, c_2, \cdots, c_j$ in parallel /*to check if channel $c_i$ is idle*/\; \For{$i = 1$ to $j$} { \If{$|channel\_set| < DN$} { \If{channel $c_i$ idle} { Form the $TAM$ message with $CN = c_i$\; $trial\_no\_TAM= 1$\; $flag = true$\; \While{($trial\_no\_TAM < Max\_trial\_TAM$ AND $flag$)} { Broadcast the $TAM$ message to all 1-distance neighbors using the $CCC$\; Wait up to a maximum time of $\delta_T$ to receive reply message(s)\; \eIf{reply received} { flag=false\; \eIf{no $NACK$ received} { $channel\_set = channel\_set \cup \{c_i\}$\; $cardinality=cardinality+1$\; } { Send $CCB$ with $CN$\; } } { $trial\_no\_TAM = trial\_no\_TAM + 1$\; } } } } } $time = time + 1$\; } \eIf{$DN = cardinality$} { Broadcast $CHALLOC$ command formed with the $channel\_set$ to all 1-distance neighbors\; } { Broadcast $CRM$ formed with the $channel\_set$ to all 1-distance neighbors to release all temporarily blocked channels\; } \caption{{\em Sense\_Allocate\_Transmitter\_$FDM-FDMA$}} \label{algo3} \normalsize \end{algorithm} \begin{algorithm}[] \scriptsize \linesnumbered \KwIn{$TAM$, $CHALLOC$} \KwOut{Select and locked data channels.} /* The following code will be executed by all nodes receiving the $TAM$ and $CHALLOC$ messages*/ \; \If{$TAM$ received with source node $SA$ and channel number $CN$} { \eIf{channel $CN$ is free for its 1-distance neighbor AND $CN$ is not temporarily blocked for any other node} { Update its channel usage database by temporarily marking channel number $CN$ as being used by node $SA$\; Transmit $TAM\_ACK$ to 
$SA$ through $CCC$\; } { Transmit $NACK$ to $SA$ through $CCC$\; } } \If{$CHALLOC$ received} { \eIf {$DA$ in $CHALLOC$ = its own $id$} { Update its channel reservation database with $SPN$ and $CN$ values assigned from $CHALLOC$ for each channel\; } { Update its channel usage database by releasing the temporarily blocked channel numbers indicated in $CHALLOC$\; } } \If{$CRM$ received} { Update its channel usage database by releasing the temporarily blocked channel numbers indicated in $CRM$\; } \caption{{\em Sense\_Allocate\_Receiver}} \label{algo6} \normalsize \end{algorithm} \paragraph{Sensing and Allocation of Channels for $OFDM-FDMA$ Technique} \label{SAC1} \begin{algorithm}[] \scriptsize \linesnumbered \KwIn{$DA$, $DN$, $MAX$} \KwOut{Selected channel numbers $c_1, c_2, \cdots, c_{DN}$} $channel\_set = \emptyset$\; $j = DN$\; $time = 0$\; $flag_1=true$\; $required\_channel\_numbers=DN$\; \While{$time$ $\le$ $T$ AND $flag_1$} { $temp\_set = \emptyset$\; Randomly generate a set of $j$ distinct channel numbers $c_1^1, c_2^1, \cdots, c_j^1$ in the range $1$ to $MAX$ such that for $1 \leq i \leq j$, $c_i^1 \notin channel\_set$ \; \For{$k=1$ to $j$} { $c_k^2 = c_k^1 + 1$ \; $c_k^3 = c_k^1 + 2$ \; } Create a list of $3 \times j$ numbers with $c_1^1, c_1^2, c_1^3, \cdots, c_j^1, c_j^2, c_j^3$\; Sense channel numbers $c_1^1, c_1^2, c_1^3, \cdots, c_j^1, c_j^2, c_j^3$ in parallel /*to check if channels $c_i^1, c_i^2, c_i^3$ are idle*/\; \For{$i = 1$ to $j$} { \For{$k=1$ to $3$} { \If{channel $c_i^k$ idle} { Form the $TAM$ message with $CN = c_i^k$\; $trial\_no\_TAM= 1$\; $flag_2 = true$\; \While{($trial\_no\_TAM < Max\_trial\_TAM$ AND $flag_2$)} { Broadcast the $TAM$ message to all 1-distance neighbors using the $CCC$\; Wait up to a maximum time of $\delta_T$ to receive reply message(s)\; \eIf{reply received} { $flag_2=false$\; \eIf{no $NACK$ received} { $temp\_set = temp\_set \cup \{c_i^k\}$\; } { Send $CCB$ with $CN$\; } } { $trial\_no\_TAM = trial\_no\_TAM + 1$\; } } } } 
} $find\_free\_bands(temp\_set,required\_channel\_numbers,$ $channel\_set)$\; \If{$required\_channel\_numbers=0$} { $flag_1=false$\; } $time = time + 1$\; } \eIf{$DN = |channel\_set|$} { Broadcast $CHALLOC$ command formed with the $channel\_set$ to all 1-distance neighbors\; } { Broadcast $CRM$ formed with the $channel\_set$ to all 1-distance neighbors to release all temporarily blocked channels\; } \caption{{\em Sense\_Allocate\_Transmitter\_$OFDM-FDMA$}} \label{algo4} \normalsize \end{algorithm} \begin{algorithm}[] \scriptsize \linesnumbered \KwIn{$temp\_set$, $required\_channel\_numbers$\;} \KwOut{$required\_channel\_numbers$, $channel\_set$} Sort $temp\_set$ in non-increasing order\; Scan $temp\_set$ to form $band\_set$ with the 2-tuple ($band\_length$, $start\_channel\_number$) as its element \; Set $t \leftarrow |band\_set|$ \; Sort $band\_set$ in non-increasing order of $band\_length$ component of its elements (2-tuples) \; \For{$i=1$ to $t$} { \If{$band\_set[i].band\_length \ge 3$ AND $required\_channel\_numbers > 0$} { $temp\_number = \min(required\_channel\_numbers,\ band\_set[i].band\_length-2)$ \; $channel\_set = channel\_set \cup \{band\_set[i].start\_channel\_number+1, \cdots, band\_set[i].start\_channel\_number+temp\_number \}$\; $required\_channel\_numbers=required\_channel\_numbers-temp\_number$\; } } return($required\_channel\_numbers$, $channel\_set$); \caption{{\em procedure $find\_free\_bands$}} \label{algo5} \normalsize \end{algorithm} Node $SA$ has to send a {\em Trial Allocation Message} ($TAM$) to all of its 1-distance neighbors, which would contain the source and destination addresses (i.e., $SA$ and $DA$) along with the {\em Channel Number} ($CN$) in question. The structure of the $TAM$ message is as shown in Fig.~\ref{f14}(a), where the $TAG$ field is set to '$0$' for a $TAM$.
On getting this $TAM$, the 1-distance neighbors reply with $TAM\_ACK$ or $NACK$ messages, temporarily block the channel $CN$ in their $AVL$-tree channel usage databases, and process any subsequent $CCB$ message, exactly as described for the $FDM-FDMA$ technique in Section~\ref{SAC11}.
If a node $U$ receives this $CCB$ message, then $U$ will delete the corresponding node from its $AVL$ tree storing its channel usage status (thus, the channel $CN$ may now be treated as available by the node $U$). We assume that a free overlapping channel to be used by a node $U$ refers to a channel which is i) not being used by any node within the 2-distance neighborhood of $U$ and ii) sufficiently separated from those channels which are being used by all nodes in the 2-distance neighborhood to avoid inter-channel interference. We assume a gap of one channel on either side of the channel to be used by $U$ to avoid this interference. Thus, if channel $i$ is to be allocated to $U$, then channels $i-1$, $i$ and $i+1$ must not be used by any node in the 2-distance neighborhood of $U$. This can be generalized: we assume that with $OFDM$ communication, a contiguous set of $m$ channels, $1 \leq m \leq DN$, may be allocated to a node $U$, provided we find a set of $m+2$ contiguous channels which are not being used by any of the nodes in the 2-distance neighborhood of $U$. Thus, if none of the channels $i-1, ~i, ~\cdots~, ~i + m$ are being used by any node in the 2-distance neighborhood of $U$, then the $m$ channels $i, ~i+1, ~\cdots~, ~i+m-1$ can be used by $U$ in the $OFDM$ mode. After randomly generating $i$, we sense whether channels $i$, $i+1$ and $i+2$ are free (with respect to all nodes in the 2-distance neighborhood of $U$). We mark all these free channels. Here also, we generate $DN$ such random numbers and sense in parallel whether all the channels corresponding to these randomly generated $DN$ numbers are free, and mark all the free channels found by this step. After this, we check the status of all channels to find a consecutive band of $m$ free channels, $3 \le m \le DN+2$, out of which $(m-2)$ consecutive channels may be allocated to $U$.
If $m - 2 = DN$, then the process is terminated; otherwise, the whole process is repeated for finding the $DN - (m - 2)$ channels still to be allocated to $U$. As with the $FDM-FDMA$ implementation, one iteration of this loop is termed as an {\em attempt}. Thus, the sensing time per attempt in the $OFDM-FDMA$ channel allocation technique is three times that in the $FDM-FDMA$ channel allocation technique. The detailed steps for finding free bands after getting the free channel numbers have been presented in the {\em procedure $find\_free\_bands$} (Algorithm~\ref{algo5}). First, the free channel numbers are included in a set $temp\_set$, which is then sorted in non-increasing order. This sorted $temp\_set$ is then scanned once from left to right to produce a set $band\_set$ containing the 2-tuples ($band\_length$, $start\_channel\_number$) as its elements. This set $band\_set$ is then sorted in non-increasing order based on the $band\_length$ field of each 2-tuple. Finally, this sorted $band\_set$ is scanned once from largest to smallest $band\_length$ to collect the free bands with the largest possible sizes to form the $channel\_set$. Since the number of elements in $temp\_set$ is small (less than $3 \times DN$), the total time for executing this procedure will be very small. The above process of allocating channels for $SA$ will be repeated to get all $DN$ channels, after which the transmission will be started. When the required number of channels are found through the above process, a $Channel~Allocate~(CHALLOC)$ command is broadcast by $SA$ to its 1-distance neighbors with the information regarding the destination node $DA$, and the sub-packet number ($SPN$) of every packet along with the allocated channel number ($CN$) as shown in Fig.~\ref{f14}(c).
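The band-collection logic of the procedure $find\_free\_bands$ can be sketched compactly as follows (a Python sketch of Algorithm~\ref{algo5}; we keep a one-channel guard at each edge of every run of free channels, and the variable names are ours):

```python
def find_free_bands(temp_set, required, channel_set):
    # temp_set: free channel numbers sensed in this attempt.
    # From each maximal run of >= 3 consecutive free channels, the
    # interior channels (run minus one guard channel at each edge)
    # can be allocated, up to the remaining demand `required`.
    chans = sorted(set(temp_set))
    runs, start = [], 0
    for k in range(1, len(chans) + 1):
        if k == len(chans) or chans[k] != chans[k - 1] + 1:
            runs.append(chans[start:k])
            start = k
    # Visit the largest bands first, as in Algorithm 5.
    for run in sorted(runs, key=len, reverse=True):
        usable = run[1:-1][:required]      # drop guards, cap by demand
        if len(run) >= 3 and usable:
            channel_set.extend(usable)
            required -= len(usable)
    return required, channel_set

req, chosen = find_free_bands([4, 5, 6, 7, 12, 13, 20, 21, 22],
                              required=4, channel_set=[])
assert chosen == [5, 6, 21] and req == 1   # run {12,13} is too short to use
```

Since at most $3 \times DN$ channel numbers are sensed per attempt, the sorting and scanning above run over a constant-sized input.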
On receiving this $CHALLOC$ command, node $DA$ records the information regarding the $SPN$ and $CN$ for the sub-packets to be received from $SA$ in its channel reservation database, while every other node releases the temporary blockage of the corresponding channel numbers. If, however, the required number of channels is not found within a maximum time, say $T$ ($\delta_T \ll T$), then the node $SA$ cannot start its transmission at that moment and it broadcasts a $Channel~ Release~ Message$ ($CRM$) signal of the form shown in Fig.~\ref{f14}(d) to all of its 1-distance neighbors to release the temporarily blocked channels. Node $SA$ then keeps trying to get the required $DN$ number of channels until it succeeds or its transmitter buffer becomes empty ($B_{Tx} = 0$). The algorithm {\em Sense\_Allocate\_Transmitter\_$OFDM-FDMA$} to be executed by the node $SA$ and any other receiving node is given in Algorithm~\ref{algo4}. \subsection{Algorithms for Transmission and Reception}\label{datacomm} \begin{figure}[] \centering \includegraphics[width=1.25in,height=0.15in]{a20.eps} \caption{$DATA\_ACK$ message}\label{f20} \end{figure} When all the required $DN$ channels are allocated to both the nodes $SA$ and $DA$, the node $SA$ starts transmission of its multimedia data following the algorithm {\em Transmit\_Data\_Packet} given below. The receiving node $DA$ executes the algorithm {\em Receive\_Data\_Packet} described below to receive the $DN$ sub-packets corresponding to each sub-packet number $SPN$ and reconstructs the original message from these sub-packets. If a sub-packet is received correctly by $DA$, then an acknowledgement message ($DATA\_ACK$) is sent by $DA$ back to $SA$. The structure of the $DATA\_ACK$ message is shown in Fig.~\ref{f20}. If the $DATA\_ACK$ is not received within the time-out period $\delta_T$, then node $SA$ senses whether a primary user has started using that channel; if so, it immediately relinquishes the channel.
$SA$ will then look for some other alternative channel which can be allocated for transmitting the corresponding data sub-packet. If this is not possible in an extreme situation even after a maximum number of trials, say $maxtrial$, then the node $SA$ has to abort the transmission. The algorithms {\em Transmit\_Data\_Packet} and {\em Receive\_Data\_Packet} to be executed by nodes $SA$ and $DA$ are given in Algorithms~\ref{algo7} and~\ref{algo8} respectively. \begin{algorithm}[] \scriptsize \linesnumbered \KwIn{$DN$, $channel\_set$, packet to be transmitted, $maxtrial$} \KwOut{Transmitted packets} $abort = false$\; $PN = 0$\; \While{$B_{Tx} = 1$ AND $abort = false$} { \For{$i=0$ to $DN - 1$} { Form the sub-packet $SPN_i$ with packet number = $PN$, $sub\_packet\_number = i$\; $sub\_packet\_received[i] = false$\; } \ForAll{$COGCH_i$, $0 \leq i \leq (DN - 1)$, in parallel} { $trial\_number = 1$\; \While{$trial\_number \le maxtrial$ AND $sub\_packet\_received[i] = false$} { Transmit the sub-packet $SPN_i$ through the channel $COGCH_i$\; \eIf{$DATA\_ACK$ received within the time out period $\delta_T$} { $sub\_packet\_received[i] = true$\; } { Sense if $PU$ uses this channel\; \If {$PU$ uses this channel} { Release this channel and look for another available channel using Algorithm~\ref{algo3} ($FDM-FDMA$) or~\ref{algo4} ($OFDM-FDMA$) \; \If{a new channel number $new\_channel$ is found} { $COGCH_i = new\_channel$\; $trial\_number = 1$ \; /*re-transmission of $sub\_packet[i]$ is started on this $new\_channel$ */\\ } } } $trial\_number = trial\_number + 1$\; } \If {$sub\_packet\_received[i] = false$} { $abort = true$\; } } $PN = PN + 1$\; } \caption{{\em Transmit\_Data\_Packet}} \label{algo7} \normalsize \end{algorithm} \begin{algorithm}[] \scriptsize \linesnumbered \KwIn{Received Packet from Transmitter} \KwOut{$DATA\_ACK$ messages to the transmitting node $SA$} /* to be executed by the receiving node $DA$ */ \; \ForAll{$COGCH_i$, $0 \leq i \leq (DN - 1)$, in parallel} { \If{packet
received correctly with packet number $PN$} { Send $DATA\_ACK$ message to the transmitting node $SA$ with packet number $PN$ and sub-packet number $i$\; } } \caption{{\em Receive\_Data\_Packet}} \label{algo8} \normalsize \end{algorithm} \subsection{Algorithm for Deallocation of Channels}\label{Deallo} \begin{figure}[] \centering \includegraphics[width=1.5in,height=0.15in]{a21.eps} \caption{$CLS$ message}\label{f21} \end{figure} After successful transmission of all of its data packets, the transmitting node $SA$ releases all the data channels used by it (by deleting the corresponding entries from its $AVL$ tree storing the channel usage status). It also issues a channel release clear signal ($CLS$) of the form shown in Fig.~\ref{f21} through the $CCC$. The receiving node $DA$ releases all data channels used by it for this communication (updating its $AVL$ tree) and updates its $AN$. All other 1-distance neighbors likewise delete the corresponding entries from their $AVL$ trees storing the channel usage status. In case the node $SA$ has to abort a transmission, it releases all the channels allocated to both $SA$ and $DA$ in the same way. When one or more channels used by the node $DA$ are released, the next channel reservation request from its waiting queue is considered, if it can be satisfied. The waiting queue can be implemented using a linked list with the $INFO$ field of each node containing the $2$-tuple ($SA$, $DN$). However, servicing these waiting requests in a {\em First-Come-First-Serve} ($FCFS$) order may result in a poor utilization of the channels. Instead, some other variant of this servicing policy may be chosen to increase the channel utilization. For example, the request from the node with the minimum number of required channels amongst those waiting for service may be chosen.
This would increase the channel utilization but, in turn, may lead to starvation (similar to {\em Shortest-Job-First} ($SJF$) CPU scheduling in operating systems~\cite{b33}) of the requests with a large value of $DN$. This problem of starvation may, however, be avoided by taking into account the ageing factor of the accumulated requests, resulting in increased channel utilization with no starvation. The algorithms {\em Deallocate\_Data Channels\_Transmitter} and {\em Deallocate\_Data Channels\_Receiver} to be executed by the nodes $SA$ and $DA$ are given in Algorithms~\ref{algo9} and~\ref{algo10} respectively. \begin{algorithm}[] \scriptsize \linesnumbered \KwIn{Transmission completion signal} \KwOut{Deallocation of all channels} \If{ data transmission completed} { Set $B_{Tx}=0$\; } \For{$COGCH_i \mid_{(0\le i\le n-1)}$} { Transmit $CLS$ to all 1-distance neighbors through $CCC$\; Release all data channels\; } \caption{Deallocate\_Data Channels\_Transmitter.} \label{algo9} \normalsize \end{algorithm} \begin{algorithm}[] \scriptsize \linesnumbered \KwIn{$CLS$ from transmitter} \KwOut{Deallocation of all channels} \For{$COGCH_i \mid_{(0\le i\le n-1)}$} { \If{$CLS$ received} { Release all data channels\; } \If{$DA$ in $CLS$ = its own id} { Update $AN$\; Process the waiting queue\; } } \caption{Deallocate\_Data Channels\_Receiver.} \label{algo10} \normalsize \end{algorithm} \section{Performance Analysis}\label{PA} \subsection{Performance of Channel Allocation Algorithm using $FDM-FDMA$ Technique}\label{PCAA} Let $C$ be the total number of channels, out of which we assume that $\pi$ channels are in the primary band and the rest are in the secondary band.
At any time instant $t$, let $B_{p,t}$ and $B_{s,t}$ be the numbers of blocked channels (i.e., channels already allocated to 2-distance neighbors, together with the guard channels maintained between consecutively chosen channels to avoid inter-channel interference between different nodes) in the primary band and the secondary band, respectively. Thus, the total number of blocked channels at time $t$ is given by $B_t = B_{p,t} + B_{s,t}$. Let $F_{p,t}$ be the number of free channels in the primary band at time $t$, which is given by $\pi-B_{p,t}$. Similarly, let $F_{s,t}$ be the number of free channels in the secondary band at time $t$, which is given by $C-\pi-B_{s,t}$. Let $F_t = F_{p,t} + F_{s,t}$. Let there be a request at time $t$ for allocating $n$ channels to communicate a given multimedia signal. Referring to Algorithm~\ref{algo3}, we try to reserve the required $n$ channels in successive attempts, where each attempt corresponds to a single execution of steps $5$ to $26$. Assuming that the $F_t$ free channels are uniformly distributed over the total spectrum, the probability of getting $i$, $0 \leq i \leq n$, free channels out of $n$ channels chosen at random follows a hypergeometric distribution and is given by $\frac{{F_t \choose i}{C-F_t \choose n-i}}{{C \choose n}}$. The expected number of free channels over all possible situations is then given by $\sum\limits_{i=0}^{n}{\frac{i{F_t \choose i}{C-F_t \choose n-i}}{{C \choose n}}} = \sum\limits_{i=1}^{n}{\frac{F_t{F_t -1 \choose i-1}{C-F_t \choose n-i}}{{C \choose n}}} = \frac{F_t {C-1 \choose n-1}}{{C \choose n}} = nf$, where $f = \frac{F_t}{C}$. Thus, on an average, the number of channels reserved by the first attempt is equal to $nf$. When all channels are free, $f=1$ and all the required $n$ channels are reserved in the first attempt. If $f<1$, then the remaining number of channels to be allocated after the first attempt is $n-nf = n(1-f)$, on an average.
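As a quick numerical check of the first-attempt analysis, the hypergeometric expectation above can be evaluated exactly and compared with the closed form $nf$ (a minimal sketch; the particular values of $C$, $F_t$ and $n$ below are illustrative, taken in the range used later in the simulations):

```python
from math import comb

def expected_free(C, F_t, n):
    """Exact expectation of the number of free channels among n channels
    chosen at random, under the hypergeometric distribution in the text."""
    # comb(C - F_t, n - i) is 0 whenever n - i exceeds the blocked count,
    # so the sum over 0 <= i <= n covers exactly the feasible outcomes.
    return sum(i * comb(F_t, i) * comb(C - F_t, n - i)
               for i in range(n + 1)) / comb(C, n)

# The closed form derived above is n * f with f = F_t / C.
C, F_t, n = 1000, 613, 8
assert abs(expected_free(C, F_t, n) - n * F_t / C) < 1e-9
```

The identity holds exactly for any admissible $C$, $F_t$ and $n$, not just the sampled values, since it is the mean of the hypergeometric distribution.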
For the second attempt, since $F_t - nf$ is the number of free channels, the number of free channels obtained again follows a hypergeometric distribution, leading to $nf(1-\frac{n}{C})$ channels reserved by the second attempt on an average. Thus, on an average, after the second attempt, the total number of reserved channels is $\min\{n,nf+nf(1-\frac{n}{C})\}$ and the number of channels yet to be allocated is $n-\{nf+nf(1-\frac{n}{C})\}$. Generalizing this observation, we have the following result. \begin{lemma}\label{lemma1} The expected number of channels reserved during the $(k+1)^{th}$ attempt, $k \geq 0$, is $nf(1-\frac{kn}{C})$. Also, on an average, the total number of channels reserved after the $k^{th}$ attempt is $\min(n, nkf)$. \end{lemma} {\bf Proof :} We prove the result by induction. From the discussion above, the proposition that {\it at the $k^{th}$ attempt, the number of channels reserved is $nf\{1-\frac{(k-1)n}{C}\}$ on an average} is true for $k = 1$ and $2$. Assume that the proposition holds for all attempts up to the $k^{th}$ one. Then the expected number of channels reserved at the $(k+1)^{th}$ attempt is equal to \\ $\frac{n\left(F_t- \left[nf+nf(1-\frac{n}{C}) + \cdots + nf\{1-\frac{(k-1)n}{C}\} \right]\right)}{C} \approx n f(1-\frac{kn}{C})$. Hence, on an average, the total number of channels reserved after the $k^{th}$ attempt is $\min\left[n, nf\left\{1+(1-\frac{n}{C}) + (1-\frac{2n}{C}) + \cdots + (1-\frac{(k-1)n}{C})\right\}\right] \approx \min(n,nkf)$. \hfill $~\qed$ \begin{theorem}\label{theorem1} To reserve $n$ channels, the required number of attempts, on an average, is equal to $\lceil\frac{1}{f}\rceil$. \end{theorem} {\bf Proof:} By Lemma~\ref{lemma1}, the total number of channels reserved after the $k^{th}$ attempt is $\min(n,nkf)$, on an average. Hence, if $\alpha$ is the minimum number of attempts required for reserving all the $n$ channels, then $n f \alpha \geq n$, i.e., $\alpha \geq \frac{1}{f}$, on an average. Since $\alpha$ is an integer, $\alpha = \lceil\frac{1}{f}\rceil$. Hence the theorem.
\hfill $\qed$ \begin{remark}\label{remark1} Theorem~\ref{theorem1} basically establishes that the larger the number of free channels, the smaller the average number of attempts for acquiring the required number of channels. \end{remark} \begin{corollary}\label{corollary1} On an average, the reservation of all $n$ channels can be done in $\psi = \lceil\frac{1}{f}\rceil \zeta + O(1)$ time, where $\zeta$ is the time for a single execution of the loop in the channel allocation algorithm. \end{corollary} It may be noted that, if we use the {\em Bit-Map}~\cite{b114, b115} protocol for communication through the $CCC$, then $\zeta$ is $O(\Delta)$ time, where $\Delta$ is the maximum node degree of the network, as already mentioned in Section~\ref{PIm}. \subsection{Performance of Channel Allocation Algorithm using $OFDM-FDMA$ Technique} In order to evaluate the theoretical performance of Algorithm~\ref{algo4}, let us assume that we try to reserve $n$ channels in successive attempts, where each attempt corresponds to a single execution of steps $6$ to $34$ of the algorithm. We first generate $n$ random numbers $c_k^1$, $1 \leq k \leq n$. Note that each $c_k^1$ is one of $n$ channels chosen at random, for which the probability that $i$ of the $n$ channels are free is given by the hypergeometric distribution, as in the case of $FDM-FDMA$ allocation. However, in Algorithm~\ref{algo4}, we also check whether the two adjacent channels $c_k^2 = c_k^1 + 1$ and $c_k^3 = c_k^1 + 2$ are free. There can be eight different possibilities regarding the status of these three channels, as depicted in Table \ref{tc1}, where an entry is '$0$' if the corresponding channel is free, and '$1$' if it is blocked. Note that the probability that the channel $c_k^1 + 1$ or $c_k^1 + 2$ will be free is given by $\frac{F_t}{C} = f$. Table \ref{tc1} shows the probability of each of the eight possible status combinations of $c_k^2$ and $c_k^3$, together with the corresponding number of free channels found.
\begin{table}[] \caption{{\small All possible cases about the status of three consecutive channels}}\label{tc1} \centering \tiny \begin{tabular}{|c|c|c|c|c|} \hline Status of $c_k^1$ & Status of $c_k^2$ & Status of $c_k^3$ & Probability of the status of $c_k^2$ and $c_k^3$ & Number of free channels \\ \hline $0$ & $0$ & $0$ & $f^2$ & $3$ \\ $0$ & $0$ & $1$ & $f(1-f)$ & $2$ \\ $0$ & $1$ & $0$ & $f(1-f)$ & $1$ \\ $0$ & $1$ & $1$ & $(1-f)^2$ & $1$ \\ $1$ & $0$ & $0$ & $f^2$ & $2$ \\ $1$ & $0$ & $1$ & $f(1-f)$ & $1$ \\ $1$ & $1$ & $0$ & $f(1-f)$ & $1$ \\ $1$ & $1$ & $1$ & $(1-f)^2$ & $0$ \\ \hline \end{tabular} \end{table} From Table \ref{tc1}, given that the channel $c_k^1$ is free, the expected number of free channels selected out of these three consecutive channels is given by $3f^2 + 2 f (1-f) + f(1-f) + (1-f)^2 = f^2 + f + 1$. Similarly, given that the channel $c_k^1$ is blocked, the expected number of free channels selected out of these three consecutive channels is given by $2f^2 + 2f(1-f) = 2f$. Thus, the expected number of free channels over all possible situations is given by $\sum\limits_{i=0}^{n}{i(f^2+f+1)\frac{{F_t \choose i}{C-F_t \choose n-i}}{{C \choose n}}} + \sum\limits_{i=0}^{n}{(n-i) \cdot 2f \frac{{F_t \choose i}{C-F_t \choose n-i}}{{C \choose n}}} = n (3f - f^2 + f^3) \approx 3nf$ when $f \ll 1$. Thus, in a heavy traffic condition, when $f$ is very small, the average number of attempts made by Algorithm~\ref{algo4} to reserve the required number of channels is approximately equal to $\lceil\frac{1}{3f}\rceil$. \section{Simulation of Channel Allocation Algorithm}\label{simulation} In this section, we show the results of simulating our proposed protocol and evaluate its performance in terms of the average number of attempts made by the proposed algorithm for acquiring the required number of channels to communicate a given multimedia signal, and also in terms of the success rate.
We also compare our proposed protocol with the first-fit and best-fit channel allocation techniques. Simulations are performed $10,000$ times on random network topologies with each of $100$ to $1100$ nodes, in which the nodes are distributed randomly within an area of $(100 \times 100)m^2$. The number of channels required by a node for communication is also varied from $1$ to $8$. We assume that the signal to be sent has a mix of different multimedia signal types with the proportions of $50\%$, $20\%$, $15\%$, $10\%$ and $5\%$ for voice, data, still image, video and online streaming data, respectively, with demand numbers of channels ($DN$) as $1,~2,~4,~6$ and $8$, respectively. For the $FDM-FDMA$ technique, the simulation is performed with the numbers of primary and secondary non-overlapping channels both equal to $500$, as shown in Fig.~\ref{ONOC}(a). Thus, $C$, the total number of channels, is $1000$. The primary channels are assumed to be uniformly distributed over the whole spectrum. We assume that, on an average, $30\%$ of the primary channels are used by primary users for different broadcasting purposes. That is, $150$ primary channels are used for different broadcasting purposes and the remaining $70\%$ are idle~\cite{b2}, and hence available for cognitive radio users. For the $OFDM-FDMA$ technique, the simulation is performed with the numbers of primary and secondary overlapping channels both equal to $1000$, because the width of one non-overlapping channel is equal to that of two overlapping channels, as shown in Fig.~\ref{ONOC}(b). Thus, $C$, the total number of overlapping channels, is taken as $2000$ for the given total communication bandwidth of $1000$ non-overlapping channels.
\begin{table*}[] \caption{{\small Average number of blocked channels by 2-distance neighbors with different values of range}}\label{t1} \centering \tiny \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Range in meter} & \multicolumn{11}{|c|}{Number of Blocked Channels} \\ \cline{2-12} & 100 Nodes & 200 Nodes & 300 Nodes & 400 Nodes & 500 Nodes & 600 Nodes & 700 Nodes & 800 Nodes & 900 Nodes & 1000 Nodes & 1100 Nodes \\ \hline \hline $10$ & $9$ & $32$ & $55$ & $79$ & $104$ & $129$ & $153$ & $178$ & $203$ & $229$ & $254$ \\ \hline $15$ & $33$ & $82$ & $133$ & $184$ & $237$ & $288$ & $340$ & $392$ & $444$ & $497$ & $549$ \\ \hline $20$ & $62$ & $143$ & $227$ & $312$ & $397$ & $481$ & $565$ & $652$ & $736$ & $819$ & $905$ \\ \hline $25$ & $96$ & $212$ & $334$ & $453$ & $571$ & $689$ & $811$ & $927$ & $1048$ & $1166$ & $1283$ \\ \hline \end{tabular} \end{table*} When a cognitive radio user wants to transmit a multimedia signal, the average number of channels blocked by all of its neighbors up to distance two is shown in Table~\ref{t1} for different values of the range. Note that these values exclude the $150$ primary channels used for various broadcasting purposes. As expected, Table~\ref{t1} shows that the number of channels blocked by all 2-distance neighbors increases with the sensing and transmission range.
\begin{table}[] \caption{{\small Average number of free channels for $500$ and $700$ nodes in Non-Overlapping Channel}}\label{t2} \centering \scriptsize \begin{tabular}{|c|c|c|c|} \hline \parbox{1.25in}{Average number of blocked channels} & \parbox{1.25in}{Average number of free primary channels ($F_P$)} & \parbox{1.25in}{Average number of free secondary channels ($F_S$)}& \parbox{1.25in}{Average number of free channels ($F$)} \\ \hline \hline \multicolumn{4}{|c|}{{\bf For $500$ nodes}} \\ \hline \hline $254$ & $307$ & $439$ & $746$ \\ \hline $387$ & $252$ & $361$ & $613$ \\ \hline $547$ & $187$ & $266$ & $453$ \\ \hline $721$ & $115$ & $164$ & $279$ \\ \hline \hline \multicolumn{4}{|c|}{{\bf For $700$ nodes}}\\ \hline \hline $303$ & $287$ & $410$ & $697$ \\ \hline $490$ & $210$ & $300$ & $510$ \\ \hline $715$ & $117$ & $168$ & $285$ \\ \hline $961$ & $16$ & $23$ & $39$ \\ \hline \end{tabular} \end{table} \begin{table}[] \caption{{\small Average number of free channels for $700$ and $1000$ nodes in Overlapping Channel}}\label{t2_1} \centering \scriptsize \begin{tabular}{ |c|c|c|c| } \hline \parbox{1.25in}{Average number of blocked channels} & \parbox{1.25in}{Average number of free primary channels ($F_P$)} & \parbox{1.25in}{Average number of free secondary channels ($F_S$)}& \parbox{1.25in}{Average number of free channels ($F$)} \\ \hline \hline \multicolumn{4}{|c|}{{\bf For $700$ nodes}} \\ \hline \hline $303$ & $545$ & $640$ & $1185$ \\ \hline $490$ & $367$ & $431$ & $798$ \\ \hline $715$ & $210$ & $246$ & $456$ \\ \hline $961$ & $97$ & $114$ & $211$ \\ \hline \hline \multicolumn{4}{|c|}{{\bf For $1000$ nodes}}\\ \hline \hline $379$ & $466$ & $549$ & $1015$ \\ \hline $647$ & $251$ & $295$ & $546$ \\ \hline $969$ & $94$ & $111$ & $205$ \\ \hline $1316$ & $10$ & $11$ & $21$ \\ \hline \end{tabular} \end{table} For $FDM-FDMA$ channel allocation technique, Table~\ref{t2} shows the values of $F_s$, $F_p$ and $F = F_p + F_s$ for $500$ and $700$ nodes, respectively for 
different values of the range used in the simulation experiment. Column $1$ of Table~\ref{t2} also shows the total number of blocked channels, i.e., the channels allocated to all nodes up to the 2-distance neighbors of a transmitting node for avoiding the hidden node problem (taken from Table~\ref{t1} corresponding to $500$ and $700$ nodes, respectively), plus the $150$ broadcasting channels blocked by primary users. For the $OFDM-FDMA$ channel allocation technique, Table~\ref{t2_1} shows the values of $F_p$, $F_s$ and $F = F_p + F_s$ for $700$ and $1000$ nodes, respectively, for different values of the range used in the simulation experiment. Column $1$ of Table~\ref{t2_1} also shows the total number of blocked channels, i.e., the channels allocated to all nodes up to the 2-distance neighbors of a transmitting node for avoiding the hidden node problem (taken from Table~\ref{t1} corresponding to $700$ and $1000$ nodes, respectively). We notice from Table~\ref{t2_1} that the total number of free channels $F$ is much less than the difference between the total number of overlapping channels (which is $2000$ in our case) and the number of blocked channels given in column $1$ of Table~\ref{t2_1}. This is because, for avoiding channel interference between two distinct pairs of communicating nodes, we maintain a gap of one channel on either side of a channel allocated to a communicating pair of nodes while computing the number of free channels that can be allocated to other users. From Table~\ref{t2_1}, we see that with $700$ nodes, the number of free channels can go down to $211$, i.e., $10.55\%$ of the total number of channels, for a range of $25$ meters. This situation corresponds to a moderate traffic in the network. On the other hand, with $1000$ nodes and a range of $25$ meters, the number of free channels can go down to as low as only $21$, which corresponds to a very heavy traffic in the network.
The average number of attempts for different values of $DN$ (corresponding to different multimedia signal types) with different percentages of blocked channels are shown in Table~\ref{t3} for $500$ and $700$ nodes with the $FDM-FDMA$ channel allocation technique, and in Table~\ref{t3_1} for $700$ and $1000$ nodes with the $OFDM-FDMA$ channel allocation technique. The values in these tables capture the behavior of our proposed algorithm under different traffic loads ({\em by load we mean the percentage of blocked channels}) ranging from $30\%$ (light load) to more than $95\%$ (extremely heavy load). Both Tables~\ref{t3} and~\ref{t3_1} show that when the number of free channels decreases, the average number of attempts increases, as expected. For the $FDM-FDMA$ technique, when the percentage of blocked channels lies between $30\%$ and $70\%$, we require only $2$ to $4$ attempts. However, in the most unlikely situation of an extremely heavy traffic load with about $96\%$ blocked channels, the average number of attempts will be $29$ for $DN=8$, as shown in Table~\ref{t3}. For the $OFDM-FDMA$ technique, when the percentage of blocked channels lies between $50\%$ and $70\%$, the average number of attempts is equal to $2$, while with about $90\%$ blocked channels, the algorithm needs at most $4$ attempts on an average. For an extremely heavy traffic load with about $99\%$ blocked channels, the average number of attempts is equal to $45$ for $DN = 8$, as shown in Table~\ref{t3_1}. We, however, note that the sensing time per attempt in the $OFDM-FDMA$ channel allocation technique is three times that in the $FDM-FDMA$ technique, as already mentioned in Section~\ref{SAC1}. We see from Tables~\ref{t3} and~\ref{t3_1} that the number of attempts found through simulation agrees well with the theoretical values except when the traffic is extremely heavy, e.g., $96\%$ blocked channels for the $FDM-FDMA$ technique and $99\%$ blocked channels for the $OFDM-FDMA$ technique.
This small deviation may arise due to randomness in the simulation process. \begin{figure}[] \centering \includegraphics[width=3.0in,height=2.25in]{pbvsna.eps} \caption{Number of attempts vs percentage of blocked channel for $700$ nodes with $FDM-FDMA$}\label{fpbvsna} \end{figure} \begin{figure}[] \centering \includegraphics[width=3.0in,height=2.25in]{pbvsna1.eps} \caption{Number of attempts vs percentage of blocked channel for $1000$ nodes with $OFDM-FDMA$}\label{fpbvsna1} \end{figure} Tables~\ref{t3} and~\ref{t3_1} also show the performance comparison of the proposed protocol with the first-fit and best-fit allocation techniques. We see that our proposed protocol is superior to either of them under all traffic situations in respect of both the average number of attempts and the success rate, {\em where the success rate is defined as the percentage of cases in which the protocol in question can successfully allocate the channels}. Thus, with $DN = 8$ and a traffic load of $70\%$, our protocol with the $FDM-FDMA$ technique requires only $4$ attempts as opposed to $500$ attempts using first-fit and $638$ attempts using best-fit allocation. Similarly, with $DN = 8$ and a traffic load of $70\%$, our protocol with the $OFDM-FDMA$ technique requires only $2$ attempts as opposed to $250$ attempts using first-fit and $639$ attempts using best-fit allocation. Further, with $DN = 8$, using the $FDM-FDMA$ technique, the success rate of our protocol is always $100\%$ even up to a traffic load of $96\%$, while both the first-fit and best-fit techniques fail to allocate any channel under this condition. Similarly, with $DN = 8$ and a traffic load of $99\%$, using the $OFDM-FDMA$ technique, neither the first-fit nor the best-fit technique can allocate the channels, although our proposed protocol has a success rate of $100\%$.
The nature of variation of the simulated values of the required number of attempts under different traffic conditions with $700$ (for $FDM-FDMA$) and $1000$ (for $OFDM-FDMA$) nodes is also shown in Fig.~\ref{fpbvsna} and~\ref{fpbvsna1}, respectively for different values of $DN$. \begin{table*}[] \caption{{\small Comparison of $FDM-FDMA$ and $OFDM-FDMA$ for different types of multimedia signal with different nodes and range $25$ meter}}\label{ttx} \centering \scriptsize \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{\parbox{1cm}{$~$ \\Number of nodes\\}} & \multicolumn{5}{|c|}{Success rate in $FDM-FDMA$} & \multicolumn{5}{|c|}{Success rate in $OFDM-FDMA$} \\ \cline{2-11} & \multicolumn{5}{|c|}{Demand Number of channels} & \multicolumn{5}{|c|}{Demand Number of channels} \\ \cline{2-11} & $8$ & $6$ & $4$ & $2$ & $1$ & $8$ & $6$ & $4$ & $2$ & $1$ \\ \hline $100$ nodes & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ \\ \hline $200$ nodes & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ \\ \hline $300$ nodes & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ \\ \hline $400$ nodes & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ \\ \hline $500$ nodes & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ \\ \hline $600$ nodes & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ \\ \hline $700$ nodes & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ & $100$ \\ \hline $800$ nodes & $0$ & $0$ & $0$ & $0$ & $0$ & $100$ & $100$ & $100$ & $100$ & $100$ \\ \hline $900$ nodes & $0$ & $0$ & $0$ & $0$ & $0$ & $100$ & $100$ & $100$ & $100$ & $100$ \\ \hline $1000$ nodes & $0$ & $0$ & $0$ & $0$ & $0$ & $100$ & $100$ & $100$ & $100$ & $100$ \\ \hline $1100$ nodes & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ \hline \end{tabular} \end{table*} 
\begin{figure}[] \centering \includegraphics[width=3.0in,height=2.25in]{FDMvsOFDM.eps} \caption{Success rate comparison for $FDM-FDMA$ vs $OFDM-FDMA$ with $8$ demand number of channels}\label{FDMvsOFDM} \end{figure} Table~\ref{ttx} and Fig.~\ref{FDMvsOFDM} show the performance comparison between the $FDM-FDMA$ and $OFDM-FDMA$ implementations of our protocol. From Table~\ref{ttx} and Fig.~\ref{FDMvsOFDM}, we notice that with a range of $25$ meters, $FDM-FDMA$ cannot allocate the channels beyond $700$ nodes, while $OFDM-FDMA$ can work satisfactorily even with $1000$ nodes. \subsection{Execution Time of the Proposed Protocol}\label{overhead} Assuming that the address fields follow the $IPv4$ (or $IPv6$) format, the maximum length of a message for channel sensing (e.g., $TAM$) will be $10$ (or $34$) bytes, which needs $1.25$ (or $4$) msec with a bandwidth of $64$ Kbps for the $CCC$. From the data given in Table~\ref{t3}, we see that for video transmission with about $70\%$ traffic load through eight $64$ Kbps channels, four attempts are needed by Algorithm~\ref{algo3} with the $FDM-FDMA$ technique, which corresponds to an overhead of $4 \times 8 \times 1.25$ msec $= 40$ msec with the $IPv4$ format ($4 \times 8 \times 4 = 128$ msec with the $IPv6$ format). On the other hand, with $OFDM-FDMA$ and about $70\%$ blocked channels, Algorithm~\ref{algo4} makes $2$ attempts, leading to an overhead of $2 \times 3 \times 8 \times 1.25 = 60$ msec with the $IPv4$ format ($2 \times 3 \times 8 \times 4 = 192$ msec with the $IPv6$ format) for $DN = 8$. \subsection{Delay and Delay Jitter in Transmission}\label{addj} In this section, we show our experimental results obtained through simulation by applying our proposed scheme to different types of multimedia data. The file categories and the test files have been chosen so as to reflect as closely as possible the different types of wireless data communication that can be seen in today's world.
For this purpose, we choose files categorized into four different file types, namely, video, music, image and text. Below we provide a description of the files chosen from each of these categories: \begin{enumerate} \item {\bf Video files} : The video files are in $MPEG$ and $DAT$ formats \cite{video}. The average size of these files is about $48$ $MB$. \item {\bf Music files} : The music files are in $MP3$ encoded format \cite{music}. The average size of these files is about $5.87$ $MB$. \item {\bf Image files} : The image files are in $JPEG$ format \cite{image}. The average size of these files is about $1.5$ $MB$. \item {\bf Text files} : The text files are in $TXT$ file format \cite{data}. The average size of these files is about $80.89$ $KB$. \end{enumerate} For our experiment with the $FDM-FDMA$ technique, we assume a data frame size of $1024$ bytes using the $IPv4$ packet format. We also assume that the channel request and release rates of a user are $0.7$ and $0.3$, respectively. Each file was tested $1000$ times during the simulation to obtain the average delay and delay jitter as reported in Tables~\ref{different_video_file}, \ref{different_music_file}, \ref{different_image_file} and \ref{different_data_file}. In each of these tables, the first column represents the file size. For video transmission we use $8$ channels, and for music, images and text data we use $6$, $4$ and $2$ channels, respectively. The next column represents the average number of channels deallocated by $PU$s when a $PU$ asks for its channel currently used by the $SU$. The third column represents the ideal transmission ($IT$) time for transmitting the files (with zero overhead). The next column represents the initial channel allocation ($ICA$) time for grabbing all the required $DN$ channels before starting the communication.
The fifth column represents the channel reallocation ($CR$) time when an $SU$ has to deallocate a channel and a new channel is then re-allocated to the $SU$. The actual transmission ($AT$) time is equal to $IT + ICA + CR$, which is given in the next column. The seventh column represents the maximum possible jitter between two consecutive packets due to channel reallocation. The next column represents the mean jitter and the last column represents the standard deviation of the jitter. The appropriate units for the values in the different columns have been mentioned in the tables. In each of the above tables, the bold-faced entries in a row represent the average behavior for the corresponding traffic load. These simulation results show that the time overhead due to the proposed allocation algorithm constitutes a very small fraction of the total transmission time. For example, from Table~\ref{different_video_file}, we see that with a traffic load of about $70\%$, for transmitting video files of size about $48$ MB, the total transmission time through eight $64$ Kbps channels is around $786.5$ sec, while the overhead due to channel allocation and reallocation required by our proposed algorithm is approximately $49$ msec ($\approx 0.0062 \%$). During the transmission of the packets, a delay may appear between the transmission of two consecutive data packets whenever a $PU$ channel is allocated and that channel has to be released due to the demand from the corresponding $PU$. This delay is due to the execution of our proposed algorithm for reallocating the channels, which varies from zero to a finite amount of time, causing delay jitter. We have estimated this delay jitter for all the above cases of our simulation experiment with different types of real-life multimedia data and found that this jitter is very small.
For example, from Table~\ref{different_video_file}, we see that with a traffic load of about $70\%$, for transmitting video files, the mean jitter is around $0.0015$ msec with a standard deviation of about $0.064$ msec, whereas the transmission time of a sub-packet of size $1024$ bytes through a $64$ Kbps channel is $125$ msec. \section{Analysis of Protocol by Markov Model} To analyze the performance of the proposed protocol for channel allocation and deallocation, we model the system by a Markov birth-and-death process, where channel allocation corresponds to the birth process and deallocation corresponds to the death process. At any time instant, the allocation status of the system can be designated by a state $S_{k}^{k'}$ of the system, where $k$ represents the number of reserved secondary channels and $k'$ represents that of the reserved primary channels. The system will start transmitting the message when all the $n$ required channels are reserved, i.e., $k+k'=n$, and then allocated by the Algorithm~\ref{algo3}. At any state $S_{k}^{k'}$, when a single new (free) channel is reserved by the Algorithm~\ref{algo3}, the system can move either to the state $S_{k}^{k'+1}$ (if the new channel is reserved from the primary band) or to the state $S_{k+1}^{k'}$ (if the new channel is reserved from the secondary band). Similarly, if $i$ channels are reserved in one attempt (Algorithm~\ref{algo3}), out of which $i_1$ channels ($0 \le i_1 \le i$) are in the primary band and the remaining $i-i_1$ are in the secondary band, then the system will move from the state $S_{k}^{k'}$ to the state $S_{k+i-i_1}^{k'+i_1}$. Let $\mu_1 $ be the probability per unit time for reserving one new channel in one attempt (either from the primary band or from the secondary band).
Then the probability per unit time that the system moves from the state $S_{k}^{k'}$ to $S_{k}^{k'+1}$ is given by $\frac{{F_{p,t} \choose 1}}{{F_t \choose 1}}\mu_1$ and the probability per unit time that the system moves from the state $S_{k}^{k'}$ to $S_{k+1}^{k'}$ is given by $\frac{{F_{s,t} \choose 1}}{{F_t \choose 1}}\mu_1$. In a similar manner, let $\mu_2 $ be the probability per unit time for reserving two new channels in one single attempt. Then the system can move from the state $S_{k}^{k'}$ to $3$ possible states $-$ $i)$ to the state $S_{k}^{k'+2}$ with a probability of $\frac{{F_{p,t} \choose 2}}{{F_t \choose 2}}\mu_2$, $ii)$ to the state $S_{k+2}^{k'}$ with a probability of $\frac{{F_{s,t} \choose 2}}{{F_t \choose 2}}\mu_2$, and $iii)$ to the state $S_{k+1}^{k'+1}$ with a probability of $\frac{{F_{p,t} \choose 1}{F_{s,t} \choose 1}}{{F_t \choose 2}}\mu_2$. Similarly, when $i$ channels are reserved in one single attempt, the transition probabilities from the state $S_{k}^{k'}$ to $i+1$ different possible states can be expressed in terms of $\mu_i$, the total probability per unit time for reserving $i$ channels, $F_{p,t}, F_{s,t}$ and $F_t$. We assume that the time required for allocating the required number $n$ of channels is small enough so that the values of $F_{p,t}, F_{s,t}$ and $F_t$ do not change during the allocation process. Thus, in all the expressions above for transition probabilities, we replace $F_{p,t}, F_{s,t}$ and $F_t$ by time-invariant values $F_p, F_s$ and $F$, respectively. Let $\lambda$ be the probability per unit time that a grabbed primary channel is released by the system due to the arrival of a channel allocation request from a primary user. For the time being, we assume that only one primary channel may be released at any instant of time. Also, let $T$ be the total time required for completing the communication process of a multimedia message. 
That means, $\frac{1}{T}$ is the probability per unit time that the system moves from the state $S_{k}^{n-k}$, $0 \leq k \leq n$, to the state $S_{0}^{0}$. Further, let $\sigma$ be the probability per unit time that the Algorithm~\ref{algo3} terminates unsuccessfully (when the requested number of channels could not all be allocated) after the time-out period. \subsection{1-Channel System} \begin{figure}[] \centering \includegraphics[width=1.75in,height=1.0in]{a23.eps} \caption{State transition diagram for 1-channel system}\label{f23} \end{figure} Suppose we want to transmit a voice signal, for which $n = 1$. In this case, the system has only three possible states $-$ $i)$ $S_0^0$, when no channel has been reserved, $ii)$ $S_0^1$, at which one channel is reserved from the primary band and $iii)$ $S_1^0$, at which one channel is reserved from the secondary band. Let $p_i^j$ ($0 \le i,j \le 1$) be the probability that the system is in the state $S_i^j$. The state transition diagram for this system is shown in Fig.~\ref{f23}. The different transition arcs are labeled with the corresponding probabilities of transition during a small time interval $dt$.
Using the principle of detailed balance for transitions between the states $S_0^0$ and $S_0^1$, we can write, \begin{equation}\label{e1} \left(\lambda + \frac{1}{T}\right) p_0^1 = \frac{{F_{p} \choose 1}}{{F \choose 1}} \mu_1 p_0^0 \end{equation} Similarly, for the transitions between the states $S_0^0$ and $S_1^0$, we have, \begin{equation}\label{e2} \frac{1}{T} p_1^0 = \frac{{F_{s} \choose 1}}{{F \choose 1}} \mu_1 p_0^0 \end{equation} Since the system must be in one of the three states, we have, \begin{equation}\label{e3} p_0^0 + p_1^0 + p_0^1 = 1 \end{equation} Substituting the values of $p_0^1$ and $p_1^0$ from eqns.~(\ref{e1}) and (\ref{e2}) in eqn.~(\ref{e3}), we get, \begin{equation} p_0^0 + \left( \frac{\frac{{F_{p} \choose 1}}{{F \choose 1}}}{\lambda + \frac{1}{T}} \right) \mu_1 p_0^0 + \left( \frac{\frac{{F_{s} \choose 1}}{{F\choose 1}}}{\frac{1}{T}} \right) \mu_1 p_0^0 = 1 \end{equation} That is, \begin{equation}\label{e4} p_0^0 = \frac{1}{1 + \left( \frac{\frac{{F_{p} \choose 1}}{{F \choose 1}}}{\lambda + \frac{1}{T}} \right) \mu_1 + \left( \frac{\frac{{F_{s} \choose 1}}{{F \choose 1}}}{\frac{1}{T}} \right) \mu_1 } \end{equation} We say that the system is in the {\em active} condition when the required number of channels is reserved, and the probability for the system being in that condition is given by, \begin{equation}\label{e5} P_1 = p_1^0 + p_0^1 = \frac{ \left( \frac{\frac{{F_{p} \choose 1}}{{F \choose 1}}}{\lambda + \frac{1}{T}} \right) \mu_1 + \left( \frac{\frac{{F_{s} \choose 1}}{{F \choose 1}}}{\frac{1}{T}} \right) \mu_1 }{1 + \left( \frac{\frac{{F_{p} \choose 1}}{{F \choose 1}}}{\lambda + \frac{1}{T}} \right) \mu_1 + \left( \frac{\frac{{F_{s} \choose 1}}{{F \choose 1}}}{\frac{1}{T}} \right) \mu_1 } \end{equation} Recall that $T$ is the total time required for completing the communication process of a multimedia message, and that $P_1$ is the probability for the system to be in the active condition.
So, the average time for communicating the multimedia data becomes $\frac{T}{P_1}$. Thus, the average waiting time can be expressed as $\Gamma_1 = T(\frac{1-P_1}{P_1})$. \subsection{2-Channel System} We now consider the case of the 2-channel system, for which $n = 2$. As in the case of the 1-channel system, we draw the state transition diagram for this system as shown in Fig.~\ref{f22}, consisting of the six states $S_0^0,S_0^1,S_1^0, S_1^1, S_0^2$ and $S_2^0$. \begin{figure}[] \centering \includegraphics[width=3.5in,height=3.0in]{a22.eps} \caption{State transition diagram for 2-channel system}\label{f22} \end{figure} Considering all possible transitions to and from the state $S_2^0$, we have, \begin{equation}\label{e21} \frac{1}{T} p_2^0 = \frac{{F_{s} \choose 2}}{{F \choose 2}} \mu_2 p_0^0 + \frac{{F_{s} \choose 1}}{{F \choose 1}} \mu_1 p_1^0 \end{equation} Similarly, corresponding to all transitions to and from the state $S_0^2$, we have, \begin{equation}\label{e22} \left(\lambda + \frac{1}{T}\right) p_0^2 = \frac{{F_{p} \choose 2}}{{F \choose 2}} \mu_2 p_0^0 + \frac{{F_{p} \choose 1}}{{F \choose 1}} \mu_1 p_0^1 \end{equation} Corresponding to all possible transitions to and from the state $S_1^1$, we have, \begin{equation}\label{e23} \left(\lambda + \frac{1}{T}\right) p_1^1 = \frac{{F_{p} \choose 1}}{{F \choose 1}} \mu_1 p_1^0 + \frac{{F_{s} \choose 1}}{{F \choose 1}} \mu_1 p_0^1 +\frac{{F_{p} \choose 1} {F_{s} \choose 1}}{{F \choose 2}} \mu_2 p_0^0 \end{equation} Considering all possible transitions to and from the state $S_0^1$, we get, \begin{equation}\label{e24} \left\{\lambda + \sigma + \frac{{F_{p}\choose 1}}{{F \choose 1}}\mu_1 + \frac{{F_{s}\choose 1}}{{F \choose 1}}\mu_1 \right\} p_0^1 = \lambda p_0^2 \end{equation} Similarly, considering all possible transitions to and from the state $S_1^0$, we get, \begin{equation}\label{e25} \left\{\sigma + \frac{{F_{p}\choose 1}}{{F \choose 1}}\mu_1 + \frac{{F_{s}\choose 1}}{{F \choose 1}}\mu_1 \right\} p_1^0 =
\lambda p_1^1 \end{equation} We also have the following condition to be satisfied: \begin{equation}\label{e27} p_0^0 + p_1^0 + p_0^1 + p_2^0 + p_0^2 + p_1^1 = 1 \end{equation} The total probability for the system to be in the {\em active} condition is given by \begin{equation}\label{e28} P_2 = p_2^0 + p_0^2 + p_1^1 \end{equation} As for the {\em 1-channel system}, we can easily obtain the values of $P_2$ and $\Gamma_2$ in terms of $\lambda$, $\sigma$, $\mu$, $T$, $F_s$, $F_p$ and $F$. \subsection{3-Channel System} We now consider the case of the 3-channel system, for which $n = 3$. As in the case of the 1-channel and 2-channel systems, we draw the state transition diagram for this system as shown in Fig.~\ref{f24}, consisting of the ten states $S_0^0$, $S_0^1$, $S_1^0$, $S_1^1$, $S_0^2$, $S_2^0$, $S_2^1$, $S_1^2$, $S_3^0$ and $S_0^3$. The values of $r_1$, $r'_1$, $r_2$, $r'_2$, $r_3$, $r'_3$, $r_{1,1}$, $r_{1,2}$ and $r_{2,1}$ used in the discussion below refer to those given in Fig.~\ref{f24}. \begin{figure*}[] \centering \includegraphics[width=5.5in,height=4.5in]{a24.eps} \caption{State transition diagram for 3-channel system}\label{f24} \end{figure*} Considering all possible transitions to and from the state $S_2^0$, we have, \begin{equation} \left\{\sigma + r'_1 + r_1 \right\} p_2^0 = \lambda p_2^1 \end{equation} Similarly, considering all possible transitions to and from the state $S_0^2$, we have, \begin{equation} \left\{\sigma + \lambda + r'_1 + r_1 \right\} p_0^2 = \lambda p_0^3 \end{equation} Considering all possible transitions to and from the state $S_1^1$, we get, \begin{equation} \left\{\sigma + \lambda + r'_1 + r_1 \right\} p_1^1 = \lambda p_1^2 \end{equation} Considering all possible transitions to and from the state $S_0^1$, we have, \begin{equation} \left\{\sigma + \lambda + r'_2 + r_2 + r_{1,1} \right\} p_0^1 = \lambda p_0^2 \end{equation} Considering all possible transitions to and from the state $S_1^0$, we get, \begin{equation} \left\{\sigma + r'_2 + r_2 +
r_{1,1} \right\} p_1^0 = \lambda p_1^1 \end{equation} Considering all possible transitions to and from the state $S_3^0$, we have, \begin{equation} \frac{1}{T}p_3^0 = ( r_3 p_0^0 + r_1 p_2^0 + r_2 p_1^0 ) \end{equation} Considering all possible transitions to and from the state $S_0^3$, we get, \begin{equation} \left(\lambda + \frac{1}{T} \right) p_0^3 = ( r'_3 p_0^0 + r'_2 p_0^1 + r'_1 p_0^2 ) \end{equation} Considering all possible transitions to and from the state $S_2^1$, we have, \begin{equation} \left(\lambda + \frac{1}{T} \right) p_2^1 = ( r_{1,2} p_0^0 + r'_1 p_2^0 + r_2 p_0^1 + r_1 p_1^1 + r_{1,1} p_1^0 ) \end{equation} Considering all possible transitions to and from the state $S_1^2$, we get, \begin{equation} \left(\lambda + \frac{1}{T}\right) p_1^2 = ( r'_1 p_1^1 + r'_2 p_1^0 + r_{2,1} p_0^0 + r_{1,1} p_0^1 + r_1 p_0^2 ) \end{equation} We also have the following condition to be satisfied: \begin{equation} p_0^0 + p_1^0 + p_0^1 + p_2^0 + p_0^2 + p_1^1 + p_3^0 + p_0^3 + p_2^1 + p_1^2 = 1 \end{equation} The total probability for the system to be in the {\em active} condition is given by \begin{equation} P_3 = p_3^0 + p_0^3 + p_2^1 + p_1^2 \end{equation} As for the {\em 1-channel system}, the value of $P_3$ can easily be obtained in terms of $\lambda$, $\sigma$, $\mu$, $T$, $F_s$, $F_p$ and $F$.
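The combinatorial splitting of the reservation rates used in all the state diagrams above admits a simple sanity check: for every $i$, the $i+1$ transition rates out of a state $S_k^{k'}$ for reserving $i$ channels must sum back to $\mu_i$, which follows from Vandermonde's identity $\sum_{i_1} {F_p \choose i_1}{F_s \choose i-i_1} = {F \choose i}$. A minimal Python sketch of this check (with the illustrative values $F_p=16$, $F_s=23$ and $\mu_i=0.7$ used in the numerical examples below):

```python
from math import comb

# Illustrative values: F_p and F_s as in Table t4, mu_i = 0.7 for every i.
F_p, F_s = 16, 23
F = F_p + F_s

for i, mu_i in [(1, 0.7), (2, 0.7), (3, 0.7)]:
    # Rate of moving from S_k^{k'} to S_{k+i-i1}^{k'+i1}: i1 channels reserved
    # from the primary band and i - i1 from the secondary band in one attempt.
    rates = [comb(F_p, i1) * comb(F_s, i - i1) / comb(F, i) * mu_i
             for i1 in range(i + 1)]
    # Vandermonde's identity: sum_{i1} C(F_p, i1) C(F_s, i - i1) = C(F, i),
    # so the i + 1 outgoing reservation rates sum back to mu_i.
    assert abs(sum(rates) - mu_i) < 1e-12
```

The same check applies for any $F_p$, $F_s$ and $\mu_i$, so no probability mass is lost in the transition diagrams regardless of how the reserved channels split between the two bands.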
\subsection{Examples Showing the Results from Markov Model} \begin{table}[] \caption{{\small $P_n$ and $\Gamma_n$ with different message lengths, $F_s=23$ and $F_p=16$ for non-overlapped channels with $700$ nodes}}\label{t4} \centering \scriptsize \begin{tabular}{|c|c|l|l|l|l|l|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{$P_n / \Gamma_n$}} & \multicolumn{5}{|c|}{{$1 / T$}} \\ \cline{3-7} \multicolumn{2}{|c|}{} & $0.01$ & $0.25$ & $0.5$ & $0.75$ & $0.99$ \\ \hline \hline \multirow{3}{*}{\parbox{1.5cm}{Active State}} & $P_1$ & $0.9769$ & $0.6849$ & $0.5423$ & $0.4517$ & $0.3901$ \\ \cline{2-7} & $P_2$ & $0.969$ & $0.6512$ & $0.5172$ & $0.4332$ & $0.3758$ \\ \cline{2-7} & $P_3$ & $0.962$ & $0.6282$ & $0.5012$ & $0.4217$ & $0.367$ \\ \hline \hline \multirow{3}{*}{\parbox{1.5cm}{Average Waiting Time (in units of $T$)}} & $\Gamma_1$ & $2.3692$ & $1.8404$ & $1.6883$ & $1.6183$ & $1.5792$ \\ \cline{2-7} & $\Gamma_2$ & $3.1979$ & $2.1426$ & $1.867$ & $1.7446$ & $1.6778$ \\ \cline{2-7} & $\Gamma_3$ & $3.9501$ & $2.3671$ & $1.9903$ & $1.8287$ & $1.7419$ \\ \hline \end{tabular} \end{table} \begin{figure}[] \centering \includegraphics[width=3.0in,height=2.25in]{Pn1.eps} \caption{$P_n$ with different lengths of messages for $F_s=23$ and $F_p=16$}\label{Pn1} \end{figure} \begin{figure}[] \centering \includegraphics[width=3.0in,height=2.25in]{Gn1.eps} \caption{$\Gamma_n$ with different lengths of messages for $F_s=23$ and $F_p=16$}\label{Gn1} \end{figure} In our simulation, we assumed the time-out period to be equal to that corresponding to twice the number of attempts for allocating the required number of channels as predicted by our theoretical estimate. With this time-out period, our algorithm always terminated successfully. Accordingly, we set the value of $\sigma$, which is the probability per unit time that the Algorithm~\ref{algo3} terminates unsuccessfully after the time-out period, as equal to zero. 
For an extremely heavy traffic load of above $96\%$ with $700$ nodes using the $FDM-FDMA$ technique, we use the data taken from Table~\ref{t2} with $F_s=23$ and $F_p=16$. We assume $\sigma = 0$, $\lambda = 0.3$ and $\mu_1 = \mu_2 = \mu_3 = 0.7$. The values of both $P_n$ and $\Gamma_n$ are shown in Table~\ref{t4} for $n= 1, ~2, ~3$ and for different values of the message length. The values of $P_n$ and $\Gamma_n$ are also shown graphically in Figs.~\ref{Pn1} and~\ref{Gn1}, respectively. From Table~\ref{t4}, we observe that the probability for the system to be in the active condition is higher for longer messages. The value of $P_n$ actually depends on two main factors: $i)$ the length of the message, and $ii)$ the channel mobility (which depends on how often a channel must be deallocated when asked for by a primary user, and how quickly another channel can then be acquired for communication). Thus, under an extremely heavy traffic load of above $96\%$, the probability that the system is in the active condition effectively depends only on the length of the message. When the message length is very long, our proposed protocol enables the system to be in the active condition for more than $96\%$ of the time, while for very short messages, the system is in the active condition for more than $36\%$ of the time. This may be explained by the fact that a longer time is needed for transmitting longer messages, and thus the system remains in the active condition, holding the grabbed channels, for a larger fraction of time.
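The 1-channel entries of Table~\ref{t4} follow directly from the closed-form expressions (\ref{e4}) and (\ref{e5}). As an illustrative cross-check, the following minimal Python sketch evaluates these expressions with the parameter values stated above ($F_s=23$, $F_p=16$, $\lambda=0.3$, $\mu_1=0.7$) at $1/T = 0.25$:

```python
from math import comb

# Parameters as used for Table t4: F_s = 23, F_p = 16, lambda = 0.3, mu_1 = 0.7.
F_p, F_s = 16, 23
F = F_p + F_s
lam, mu1 = 0.3, 0.7
inv_T = 0.25                      # 1/T, the message-completion rate
T = 1 / inv_T

rp = comb(F_p, 1) / comb(F, 1)    # C(F_p,1)/C(F,1)
rs = comb(F_s, 1) / comb(F, 1)    # C(F_s,1)/C(F,1)

a = rp * mu1 / (lam + inv_T)      # p_0^1 / p_0^0, from eqn (e1)
b = rs * mu1 / inv_T              # p_1^0 / p_0^0, from eqn (e2)
p00 = 1 / (1 + a + b)             # eqn (e4)
P1 = (a + b) * p00                # eqn (e5): active-condition probability
Gamma1 = T * (1 - P1) / P1        # average waiting time

print(round(P1, 4), round(Gamma1, 4))   # 0.6849 1.8404
```

The printed values agree with the $P_1$ and $\Gamma_1$ entries in the $1/T = 0.25$ column of Table~\ref{t4}; sweeping `inv_T` over the remaining column values reproduces the rest of the 1-channel row in the same way.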
\begin{table}[] \caption{{\small $P_n$ and $\Gamma_n$ with different message lengths, $F_s=11$ and $F_p=10$ for overlapped channels with $1000$ nodes}}\label{t41} \centering \scriptsize \begin{tabular}{|c|c|l|l|l|l|l|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{$P_n / \Gamma_n$}} & \multicolumn{5}{|c|}{{$1 / T$}} \\ \cline{3-7} \multicolumn{2}{|c|}{} & $0.01$ & $0.25$ & $0.5$ & $0.75$ & $0.99$ \\ \hline \hline \multirow{3}{*}{\parbox{1.25cm}{Active State}} & $P_1$ & $0.9742$ & $0.6746$ & $0.5349$ & $0.4464$ & $0.386$ \\ \cline{2-7} & $P_2$ & $0.9643$ & $0.6374$ & $0.508$ & $0.4266$ & $0.3709$ \\ \cline{2-7} & $P_3$ & $0.9555$ & $0.6139$ & $0.4922$ & $0.4155$ & $0.3625$ \\ \hline \hline \multirow{3}{*}{\parbox{1.5cm}{Average Waiting Time (in units of $T$)}} & $\Gamma_1$ & $2.6496$ & $1.9298$ & $1.7391$ & $1.6535$ & $1.6065$ \\ \cline{2-7} & $\Gamma_2$ & $3.7059$ & $2.2755$ & $1.9374$ & $1.7918$ & $1.7135$ \\ \cline{2-7} & $\Gamma_3$ & $4.6541$ & $2.5161$ & $2.0633$ & $1.8757$ & $1.7766$ \\ \hline \end{tabular} \end{table} \begin{figure}[] \centering \includegraphics[width=3.0in,height=2.25in]{Pn2.eps} \caption{$P_n$ with different lengths of messages for $F_s=11$ and $F_p=10$}\label{Pn2} \end{figure} \begin{figure}[] \centering \includegraphics[width=3.0in,height=2.25in]{Gn2.eps} \caption{$\Gamma_n$ with different lengths of messages for $F_s=11$ and $F_p=10$}\label{Gn2} \end{figure} With $1000$ nodes using the $OFDM-FDMA$ technique for $F_s=11$ and $F_p=10$, we have similarly computed the values of $P_n$ and $\Gamma_n$ for $n = 1, ~2, ~3$, which are shown in Table~\ref{t41}. These values of $P_n$ and $\Gamma_n$ are also plotted in Figs.~\ref{Pn2} and~\ref{Gn2}, respectively. \subsection{Generalization to $n$-Channel System} To get the value of $P_n$ for any value of $n$, we need the five basic types of states for which the balance equations are given below.
\begin{enumerate} \item When all reserved channels are secondary: \begin{equation} \frac{1}{T} p_n^0 = \sum_{k=1}^{n} \frac{{F_{s} \choose k}}{{F \choose k}} \mu_k p_{n-k}^0 \end{equation} \item When all reserved channels are primary: \begin{equation} \left(\lambda + \frac{1}{T} \right) p_0^n = \sum_{k=1}^{n} \frac{{F_{p} \choose k}}{{F \choose k}} \mu_k p_0^{n-k} \end{equation} \item When $k+k'=n$ and both primary and secondary channels are reserved, i.e., for the states $S_i^{n-i}$ with $0 < i < n$, \begin{align} \left(\lambda + \frac{1}{T} \right) p_{i}^{n-i} &= {\sum_{k=0}^{i} \sum_{k'=0 \atop {~k+k' < n}}^{n-i}} \nonumber \\ &\qquad {} \left \{ \frac{{F_{p} \choose n-i-k'}{F_{s} \choose i-k}}{{F \choose n-k-k'}} \mu_{n-k-k'} p_{k}^{k'} \right \} \end{align} \item When $k+k'<n$ and $k'=0$ (no primary channel is reserved), \begin{equation} \left\{ \sigma + \sum\limits_{i=0}^{n-k} \frac{{F_{p} \choose i} {F_{s} \choose n-k-i}}{{F \choose n-k}} \mu_{n-k} \right\}p_k^0 = \lambda p_k^1 \end{equation} and \item When $k+k'<n$ and $k'>0$ (at least one primary channel is reserved), \begin{align} \left\{ \lambda + \sigma + \sum\limits_{i=0}^{n-k-k'} \frac{{F_{p} \choose i}{F_{s} \choose n-k-k'-i}}{{F \choose n-k-k'}} \mu_{n-k-k'} \right\} p_k^{k'} &= \nonumber\\ \lambda p_k^{k'+1} \end{align} \end{enumerate} \noindent Since the system must always be in one of the above types of states, we have, \begin{equation} \sum_{k=0}^{n} \sum_{k'=0}^{n-k} p_{k}^{k'} = 1 \end{equation} When $k+k'=n$, the system will be in the {\em active} state, and the probability for the system being in such a state is given by, $P_n = \sum_{k=0}^{n} p_{k}^{n-k}$. The average waiting time for an $n$-channel system can be expressed as $\Gamma_n = T(\frac{1-P_n}{P_n})$. \section{Conclusion} A novel channel allocation technique for multimedia communication in a $CRN$ has been presented in this paper.
The proposed technique works even when the white spaces in the spectrum do not provide a contiguous bandwidth large enough for maintaining the $QoS$ of the multimedia signals. Our technique is based on first finding a set of non-contiguous white spaces whose total width equals the required bandwidth of the multimedia signal. We then sub-divide the bits of the original signal in the time domain, form sub-packets with these subsets of bits, and transmit these sub-packets through the set of channels so found. The algorithms for sensing, allocating and deallocating the required channels from the available white spaces, taking into account the presence of high-priority primary users, have been presented along with the algorithms for transmitting and receiving the data packets with two different implementations using the $FDM-FDMA$ and $OFDM-FDMA$ techniques. The performance of this system has been analyzed by means of a Markov model. We also find that the average number of attempts for acquiring the required number of channels as obtained from simulation agrees well with the theoretical values for all types of traffic situations ranging from light to extremely heavy (about $96\%$ blocked channels). Simulation results show that the proposed technique always outperforms the existing first-fit and best-fit allocation techniques in terms of the average number of attempts needed for acquiring the necessary number of channels, for all traffic situations ranging from light to extremely heavy traffic.
\begin{landscape} \begin{table}[] \caption{{\small Average number of attempts required for allocating channels for different types of multimedia signal with $500$ and $700$ nodes in $FDM-FDMA$}}\label{t3} \centering \scriptsize \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline \multirow{2}{*}{\parbox{1.0cm}{$~$ \\Average number of free channels ($F$)\\}} & \multirow{2}{*}{\parbox{1.0cm}{$~$ \\Demand number of channels ($DN$) \\}} & \multicolumn{5}{c|}{{\bf Our proposed protocol}} & \multicolumn{4}{c|}{{\bf first-fit protocol}} & \multicolumn{4}{c|}{{\bf best-fit protocol}}\\ \cline{3-15} & & \parbox{1.0cm}{$~$ \\Number of attempts (theoretical value) \\} & \parbox{1.0cm}{$~$ \\Number of attempts (simulation results) \\} & \parbox{1.0cm}{$~$ \\Number of selected primary channels\\} & \parbox{1.0cm}{$~$ \\Number of selected secondary channels\\} & \parbox{1.0cm}{$~$ \\Average success rate} & \parbox{1.0cm}{$~$ \\Number of attempts (simulation results) \\} & \parbox{1.0cm}{$~$ \\Number of selected primary channels\\} & \parbox{1.0cm}{$~$ \\Number of selected secondary channels\\} & \parbox{1.0cm}{$~$ \\Average success rate} & \parbox{1.0cm}{$~$ \\Number of attempts (simulation results) \\} & \parbox{1.0cm}{$~$ \\Number of selected primary channels\\} & \parbox{1.0cm}{$~$ \\Number of selected secondary channels\\} & \parbox{1.0cm}{$~$ \\Average success rate}\\ \hline \hline \multicolumn{15}{|c|}{{\bf For $500$ nodes}}\\ \hline \hline \multirow{5}{*}{$746$} & $8$ & $2$ & $2$ & $3$ & $5$ & $100$ & $36$ & $4$ & $4$ & $100$ & $160$ & $4$ & $4$ & $100$ \\ & $6$ & $2$ & $2$ & $2$ & $4$ & $100$ & $18$ & $3$ & $3$ & $100$ & $88$ & $3$ & $3$ & $100$ \\ & $4$ & $2$ & $2$ & $2$ & $2$ & $100$ & $8$ & $2$ & $2$ & $100$ & $48$ & $2$ & $2$ & $100$ \\ & $2$ & $2$ & $2$ & $1$ & $1$ & $100$ & $3$ & $1$ & $1$ & $100$ & $26$ & $1$ & $1$ & $100$ \\ & $1$ & $2$ & $2$ & $0$ & $1$ & $100$ & $1$ & $0$ & $1$ & $100$ & $19$ & $0$ & $1$ & $100$ \\ \hline \multirow{5}{*}{$613$} & $8$ & $2$ 
& $3$ & $3$ & $5$ & $100$ & $124$ & $4$ & $4$ & $100$ & $315$ & $4$ & $4$ & $100$ \\ & $6$ & $2$ & $3$ & $2$ & $4$ & $100$ & $45$ & $3$ & $3$ & $100$ & $124$ & $3$ & $3$ & $100$ \\ & $4$ & $2$ & $3$ & $2$ & $2$ & $100$ & $15$ & $2$ & $2$ & $100$ & $45$ & $2$ & $2$ & $100$ \\ & $2$ & $2$ & $2$ & $1$ & $1$ & $100$ & $4$ & $1$ & $1$ & $100$ & $16$ & $1$ & $1$ & $100$ \\ & $1$ & $2$ & $2$ & $0$ & $1$ & $100$ & $1$ & $0$ & $1$ & $100$ & $9$ & $0$ & $1$ & $100$ \\ \hline \multirow{5}{*}{$453$} & $8$ & $3$ & $3$ & $3$ & $5$ & $100$ & $430$ & $4$ & $4$ & $62.266$ & $648$ & $4$ & $4$ & $62.266$ \\ & $6$ & $3$ & $3$ & $2$ & $4$ & $100$ & $199$ & $3$ & $3$ & $99.498$ & $350$ & $3$ & $3$ & $99.498$ \\ & $4$ & $3$ & $3$ & $2$ & $2$ & $100$ & $40$ & $2$ & $2$ & $100$ & $77$ & $2$ & $2$ & $100$ \\ & $2$ & $3$ & $3$ & $1$ & $1$ & $100$ & $7$ & $1$ & $1$ & $100$ & $15$ & $1$ & $1$ & $100$ \\ & $1$ & $3$ & $3$ & $0$ & $1$ & $100$ & $2$ & $0$ & $1$ & $100$ & $6$ & $0$ & $1$ & $100$ \\ \hline \multirow{5}{*}{$279$} & $8$ & $4$ & $5$ & $3$ & $5$ & $100$ & $508$ & $4$ & $4$ & $2.441$ & $630$ & $4$ & $4$ & $2.441$ \\ & $6$ & $4$ & $5$ & $2$ & $4$ & $100$ & $476$ & $3$ & $3$ & $28.429$ & $610$ & $3$ & $3$ & $28.429$ \\ & $4$ & $4$ & $4$ & $1$ & $3$ & $100$ & $213$ & $2$ & $2$ & $99.136$ & $292$ & $2$ & $2$ & $99.136$ \\ & $2$ & $4$ & $4$ & $1$ & $1$ & $100$ & $16$ & $1$ & $1$ & $100$ & $23$ & $1$ & $1$ & $100$ \\ & $1$ & $4$ & $4$ & $0$ & $1$ & $100$ & $3$ & $0$ & $1$ & $100$ & $5$ & $0$ & $1$ & $100$ \\ \hline \hline \multicolumn{15}{|c|}{{\bf For $700$ nodes}}\\ \hline \hline \multirow{5}{*}{$697$} & $8$ & $2$ & $2$ & $3$ & $5$ & $100$ & $55$ & $4$ & $4$ & $100$ & $192$ & $3$ & $5$ & $100$ \\ & $6$ & $2$ & $2$ & $2$ & $4$ & $100$ & $25$ & $3$ & $3$ & $100$ & $93$ & $2$ & $4$ & $100$ \\ & $4$ & $2$ & $2$ & $2$ & $2$ & $100$ & $10$ & $2$ & $2$ & $100$ & $44$ & $2$ & $2$ & $100$ \\ & $2$ & $2$ & $2$ & $1$ & $1$ & $100$ & $3$ & $1$ & $1$ & $100$ & $21$ & $1$ & $1$ & $100$ \\ & $1$ & $2$ & 
$2$ & $0$ & $1$ & $100$ & $1$ & $0$ & $1$ & $100$ & $14$ & $0$ & $1$ & $100$ \\ \hline \multirow{5}{*}{$510$} & $8$ & $2$ & $3$ & $3$ & $5$ & $100$ & $332$ & $4$ & $4$ & $90.594$ & $570$ & $3$ & $5$ & $90.594$ \\ & $6$ & $2$ & $3$ & $2$ & $4$ & $100$ & $112$ & $3$ & $3$ & $99.996$ & $230$ & $2$ & $4$ & $99.996$ \\ & $4$ & $2$ & $3$ & $2$ & $2$ & $100$ & $27$ & $2$ & $2$ & $100$ & $60$ & $2$ & $2$ & $100$ \\ & $2$ & $2$ & $3$ & $1$ & $1$ & $100$ & $5$ & $1$ & $1$ & $100$ & $14$ & $1$ & $1$ & $100$ \\ & $1$ & $2$ & $2$ & $0$ & $1$ & $100$ & $1$ & $0$ & $1$ & $100$ & $7$ & $0$ & $1$ & $100$ \\ \hline \multirow{5}{*}{$285$} & $8$ & $4$ & $4$ & $3$ & $5$ & $100$ & $500$ & $4$ & $4$ & $2.844$ & $638$ & $3$ & $5$ & $2.844$ \\ & $6$ & $4$ & $4$ & $2$ & $4$ & $100$ & $474$ & $3$ & $3$ & $31.158$ & $610$ & $3$ & $3$ & $31.158$ \\ & $4$ & $4$ & $4$ & $2$ & $2$ & $100$ & $200$ & $2$ & $2$ & $99.468$ & $277$ & $1$ & $3$ & $99.468$ \\ & $2$ & $4$ & $4$ & $1$ & $1$ & $100$ & $15$ & $1$ & $1$ & $100$ & $22$ & $1$ & $1$ & $100$ \\ & $1$ & $4$ & $4$ & $0$ & $1$ & $100$ & $3$ & $0$ & $1$ & $100$ & $5$ & $0$ & $1$ & $100$ \\ \hline \multirow{5}{*}{$39$} & $8$ & $26$ & $29$ & $3$ & $5$ & $100$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ & $6$ & $26$ & $28$ & $3$ & $3$ & $100$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ & $4$ & $26$ & $28$ & $2$ & $2$ & $100$ & $494$ & $2$ & $2$ & $0.204$ & $502$ & $2$ & $2$ & $0.204$ \\ & $2$ & $26$ & $27$ & $1$ & $1$ & $100$ & $391$ & $1$ & $1$ & $78.472$ & $406$ & $1$ & $2$ & $78.472$ \\ & $1$ & $26$ & $26$ & $0$ & $1$ & $100$ & $25$ & $0$ & $1$ & $100$ & $26$ & $0$ & $1$ & $100$ \\ \hline \end{tabular} \end{table} \end{landscape} \begin{landscape} \begin{table}[] \caption{{\small Average number of attempts required for allocating channels for different types of multimedia signal with $700$ and $1000$ nodes in $OFDM-FDMA$}}\label{t3_1} \centering \scriptsize \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c|c|c| } \hline 
\multirow{2}{*}{\parbox{1.0cm}{$~$ \\Average number of free channels ($F$)\\}} & \multirow{2}{*}{\parbox{1.0cm}{$~$ \\Demand number of channels ($DN$) \\}} & \multicolumn{5}{c|}{{\bf Our proposed protocol}} & \multicolumn{4}{c|}{{\bf first-fit protocol}} & \multicolumn{4}{c|}{{\bf best-fit protocol}}\\ \cline{3-15} & & \parbox{1.0cm}{$~$ \\Number of attempts (theoretical value) \\} & \parbox{1.0cm}{$~$ \\Number of attempts$^*$ (simulation results) \\} & \parbox{1.0cm}{$~$ \\Number of selected primary channels\\} & \parbox{1.0cm}{$~$ \\Number of selected secondary channels\\} & \parbox{1.0cm}{$~$ \\Average success rate} & \parbox{1.0cm}{$~$ \\Number of attempts (simulation results) \\} & \parbox{1.0cm}{$~$ \\Number of selected primary channels\\} & \parbox{1.0cm}{$~$ \\Number of selected secondary channels\\} & \parbox{1.0cm}{$~$ \\Average success rate} & \parbox{1.0cm}{$~$ \\Number of attempts (simulation results) \\} & \parbox{1.0cm}{$~$ \\Number of selected primary channels\\} & \parbox{1.0cm}{$~$ \\Number of selected secondary channels\\} & \parbox{1.0cm}{$~$ \\Average success rate}\\ \hline \hline \multicolumn{15}{|c|}{{\bf For $700$ nodes}}\\ \hline \hline \multirow{5}{*}{$1185$} & $8$ & $1$ & $2$ & $4$ & $4$ & $100$ & $42$ & $4$ & $4$ & $100$ & $229$ & $4$ & $4$ & $100$ \\ & $6$ & $1$ & $2$ & $3$ & $3$ & $100$ & $30$ & $3$ & $3$ & $100$ & $161$ & $3$ & $3$ & $100$ \\ & $4$ & $1$ & $2$ & $2$ & $2$ & $100$ & $16$ & $2$ & $2$ & $100$ & $109$ & $2$ & $2$ & $100$ \\ & $2$ & $1$ & $2$ & $1$ & $1$ & $100$ & $9$ & $1$ & $1$ & $100$ & $74$ & $1$ & $1$ & $100$ \\ & $1$ & $1$ & $2$ & $0$ & $1$ & $100$ & $4$ & $0$ & $1$ & $100$ & $63$ & $0$ & $1$ & $100$ \\ \hline \multirow{5}{*}{$798$} & $8$ & $1$ & $2$ & $4$ & $4$ & $100$ & $94$ & $4$ & $4$ & $100$ & $323$ & $4$ & $4$ & $100$ \\ & $6$ & $1$ & $2$ & $3$ & $3$ & $100$ & $52$ & $3$ & $3$ & $100$ & $174$ & $3$ & $3$ & $100$ \\ & $4$ & $1$ & $2$ & $2$ & $2$ & $100$ & $25$ & $2$ & $2$ & $100$ & $90$ & $2$ & $2$ & $100$ \\ & 
$2$ & $1$ & $2$ & $1$ & $1$ & $100$ & $12$ & $1$ & $1$ & $100$ & $47$ & $1$ & $1$ & $100$ \\ & $1$ & $1$ & $2$ & $0$ & $1$ & $100$ & $5$ & $0$ & $1$ & $100$ & $31$ & $0$ & $1$ & $100$ \\ \hline \multirow{5}{*}{$456$} & $8$ & $2$ & $2$ & $4$ & $4$ & $100$ & $405$ & $4$ & $4$ & $99.473$ & $874$ & $4$ & $4$ & $99.473$ \\ & $6$ & $2$ & $2$ & $3$ & $3$ & $100$ & $152$ & $3$ & $3$ & $100$ & $364$ & $3$ & $3$ & $100$ \\ & $4$ & $2$ & $2$ & $2$ & $2$ & $100$ & $53$ & $2$ & $2$ & $100$ & $129$ & $2$ & $2$ & $100$ \\ & $2$ & $2$ & $2$ & $1$ & $1$ & $100$ & $18$ & $1$ & $1$ & $100$ & $45$ & $1$ & $1$ & $100$ \\ & $1$ & $2$ & $2$ & $0$ & $1$ & $100$ & $8$ & $0$ & $1$ & $100$ & $24$ & $0$ & $1$ & $100$ \\ \hline \multirow{5}{*}{$211$} & $8$ & $4$ & $4$ & $4$ & $4$ & $100$ & $946$ & $4$ & $4$ & $34.442$ & $1386$ & $4$ & $4$ & $34.442$ \\ & $6$ & $4$ & $4$ & $3$ & $3$ & $100$ & $686$ & $3$ & $3$ & $88.625$ & $1084$ & $3$ & $3$ & $88.625$ \\ & $4$ & $4$ & $4$ & $2$ & $2$ & $100$ & $191$ & $2$ & $2$ & $100$ & $343$ & $2$ & $2$ & $100$ \\ & $2$ & $4$ & $4$ & $1$ & $1$ & $100$ & $39$ & $1$ & $1$ & $100$ & $70$ & $1$ & $1$ & $100$ \\ & $1$ & $4$ & $4$ & $0$ & $0$ & $100$ & $15$ & $0$ & $1$ & $100$ & $29$ & $0$ & $1$ & $100$ \\ \hline \hline \multicolumn{15}{|c|}{{\bf For $1000$ nodes}}\\ \hline \hline \multirow{5}{*}{$1015$} & $8$ & $1$ & $2$ & $4$ & $4$ & $100$ & $56$ & $4$ & $4$ & $100$ & $242$ & $4$ & $4$ & $100$ \\ & $6$ & $1$ & $2$ & $3$ & $3$ & $100$ & $36$ & $3$ & $3$ & $100$ & $154$ & $3$ & $3$ & $100$ \\ & $4$ & $1$ & $2$ & $2$ & $2$ & $100$ & $19$ & $2$ & $2$ & $100$ & $93$ & $2$ & $2$ & $100$ \\ & $2$ & $1$ & $2$ & $1$ & $1$ & $100$ & $10$ & $1$ & $1$ & $100$ & $57$ & $1$ & $1$ & $100$ \\ & $1$ & $1$ & $2$ & $0$ & $0$ & $100$ & $4$ & $0$ & $1$ & $100$ & $43$ & $0$ & $1$ & $100$ \\ \hline \multirow{5}{*}{$546$} & $8$ & $2$ & $2$ & $4$ & $4$ & $100$ & $250$ & $4$ & $4$ & $99.996$ & $639$ & $4$ & $4$ & $99.996$ \\ & $6$ & $2$ & $2$ & $3$ & $3$ & $100$ & $104$ & $3$ & $3$ & 
$100$ & $274$ & $3$ & $3$ & $100$ \\ & $4$ & $2$ & $2$ & $2$ & $2$ & $100$ & $41$ & $2$ & $2$ & $100$ & $109$ & $2$ & $2$ & $100$ \\ & $2$ & $2$ & $2$ & $1$ & $1$ & $100$ & $16$ & $1$ & $1$ & $100$ & $44$ & $1$ & $1$ & $100$ \\ & $1$ & $2$ & $2$ & $0$ & $1$ & $100$ & $7$ & $0$ & $1$ & $100$ & $25$ & $0$ & $1$ & $100$ \\ \hline \multirow{5}{*}{$205$} & $8$ & $4$ & $4$ & $4$ & $4$ & $100$ & $958$ & $4$ & $4$ & $31.828$ & $1388$ & $4$ & $4$ & $31.828$\\ & $6$ & $4$ & $4$ & $3$ & $3$ & $100$ & $707$ & $3$ & $3$ & $86.829$ & $1103$ & $3$ & $3$ & $86.829$\\ & $4$ & $4$ & $4$ & $2$ & $2$ & $100$ & $201$ & $2$ & $2$ & $99.998$ & $358$ & $2$ & $2$ & $99.998$\\ & $2$ & $4$ & $4$ & $1$ & $1$ & $100$ & $40$ & $1$ & $1$ & $100$ & $71$ & $1$ & $1$ & $100$ \\ & $1$ & $4$ & $4$ & $0$ & $1$ & $100$ & $15$ & $0$ & $1$ & $100$ & $29$ & $0$ & $1$ & $100$ \\ \hline \multirow{5}{*}{$21$} & $8$ & $32$ & $45$ & $4$ & $4$ & $100$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ & $6$ & $32$ & $41$ & $3$ & $3$ & $100$ & $413$ & $3$ & $3$ & $0.1$ & $413$ & $3$ & $3$ & $0.1$ \\ & $4$ & $32$ & $39$ & $2$ & $2$ & $100$ & $965$ & $2$ & $2$ & $5.7$ & $1218$ & $2$ & $2$ & $5.7$ \\ & $2$ & $32$ & $37$ & $1$ & $1$ & $100$ & $582$ & $1$ & $1$ & $90.3$ & $658$ & $1$ & $1$ & $90.3$ \\ & $1$ & $32$ & $36$ & $0$ & $1$ & $100$ & $115$ & $0$ & $1$ & $100$ & $135$ & $0$ & $1$ & $100$ \\ \hline \multicolumn{15}{l}{$^*$The sensing time per attempts in $OFDM-FDMA$ channel allocation technique is three times more than that in $FDM-FDMA$ channel allocation technique.} \end{tabular} \end{table} \end{landscape} \begin{landscape} \begin{table}[] \caption{Average Delay and Delay Jitter for different real-life Video files} \label{different_video_file} \scriptsize \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \parbox{1.75cm}{Video File Size (in bits)} & \parbox{1.75cm}{Average Number of Channel Deallocation by PUs} & \parbox{1.75cm}{Ideal Transmission Time (IT) (in sec)} & \parbox{1.75cm}{Initial Channel 
Allocation Time (ICA) (in msec)} & \parbox{1.75cm}{Channel Reallocation Time (CR) (in msec)} & \parbox{2cm}{Actual Transmission Time (AT) (in sec) (AT=IT+ICA+CR)} & \parbox{1.75cm}{Maximum Jitter (in msec)} & \parbox{1.75cm}{Mean Jitter (in msec)} & \parbox{1.75cm}{Standard Deviation In Jitter (in msec)} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $697$ } \\ \hline $387685216$ & $3.901$ & $757.1976875$ & $19.8$ & $5.85375$ & $757.22334125$ & $1.96625$ & $0.000989475$ & $0.038375$ \\ $389171680$ & $4.053$ & $760.1009375$ & $19.68$ & $6.0525$ & $760.12667$ & $1.94$ & $0.0010191125$ & $0.038625$ \\ $390823936$ & $4.059$ & $763.328$ & $19.79$ & $6.09375$ & $763.35388375$ & $1.99875$ & $0.00102175$ & $0.039125$ \\ $393725152$ & $3.891$ & $768.9944375$ & $19.92$ & $5.88375$ & $769.02024125$ & $1.97625$ & $0.000979325$ & $0.03825$ \\ $399652192$ & $3.973$ & $780.5706875$ & $19.76$ & $5.9975$ & $780.596445$ & $2.01$ & $0.0009833625$ & $0.0385$ \\ $403520768$ & $4.072$ & $788.1265$ & $19.61$ & $6.22125$ & $788.15233125$ & $2.07625$ & $0.001010275$ & $0.039625$ \\ $411901408$ & $4.083$ & $804.4949375$ & $19.61$ & $6.155$ & $804.5207025$ & $2.01125$ & $0.0009791625$ & $0.038375$ \\ $413877088$ & $4.049$ & $808.3536875$ & $19.76$ & $6.07375$ & $808.37952125$ & $1.97875$ & $0.00096165$ & $0.03775$ \\ $415495232$ & $4.168$ & $811.514125$ & $19.79$ & $6.27875$ & $811.54019375$ & $2.03125$ & $0.0009903375$ & $0.038625$ \\ $420989536$ & $4.112$ & $822.2451875$ & $19.71$ & $6.1375$ & $822.271035$ & $1.99375$ & $0.0009554$ & $0.03775$ \\ \boldmath{$402684220.8$} & \boldmath{$4.0361$} & \boldmath{$786.49261875$} & \boldmath{$19.743$} & \boldmath{$6.07475$} & \boldmath{$786.5184365$} & \boldmath{$1.99825$} & \boldmath{$0.000988985$} & \boldmath{$0.0385$} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $510$ } \\ \hline $387685216$ & $4.026$ & $757.1976875$ & $23.68$ & $6.48625$ & $757.22785375$ & $2.31625$ & $0.0010963875$ & 
$0.043125$ \\ $389171680$ & $3.923$ & $760.1009375$ & $24.26$ & $6.3325$ & $760.13153$ & $2.285$ & $0.0010662625$ & $0.042375$ \\ $390823936$ & $4.011$ & $763.328$ & $24.02$ & $6.4825$ & $763.3585025$ & $2.305$ & $0.0010869375$ & $0.042875$ \\ $393725152$ & $3.986$ & $768.9944375$ & $24.14$ & $6.51125$ & $769.02508875$ & $2.35625$ & $0.0010837625$ & $0.043125$ \\ $399652192$ & $4.191$ & $780.5706875$ & $23.93$ & $6.7525$ & $780.60137$ & $2.34625$ & $0.0011071$ & $0.0435$ \\ $403520768$ & $4.13$ & $788.1265$ & $24.24$ & $6.64625$ & $788.15738625$ & $2.3125$ & $0.0010792875$ & $0.0425$ \\ $411901408$ & $4.147$ & $804.4949375$ & $23.96$ & $6.675$ & $804.5255725$ & $2.32$ & $0.00106205$ & $0.042375$ \\ $413877088$ & $4.071$ & $808.3536875$ & $24.18$ & $6.5425$ & $808.38441$ & $2.2825$ & $0.0010358625$ & $0.0415$ \\ $415495232$ & $4.278$ & $811.514125$ & $24$ & $6.80875$ & $811.54493375$ & $2.3025$ & $0.0010739375$ & $0.042375$ \\ $420989536$ & $4.139$ & $822.2451875$ & $23.8$ & $6.80375$ & $822.27579125$ & $2.4$ & $0.0010591125$ & $0.042875$ \\ \boldmath{$402684220.8$} & \boldmath{$4.0902$} & \boldmath{$786.49261875$} & \boldmath{$24.021$} & \boldmath{$6.604125$} & \boldmath{$786.523243875$} & \boldmath{$2.322625$} & \boldmath{$0.001075075$} & \boldmath{$0.0426625$} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $285$ } \\ \hline $387685216$ & $4.293$ & $757.1976875$ & $39.77$ & $9.4225$ & $757.24688$ & $3.9$ & $0.001625$ & $0.06575$ \\ $389171680$ & $4.202$ & $760.1009375$ & $39.83$ & $9.1$ & $760.1498675$ & $3.8725$ & $0.0015$ & $0.064125$ \\ $390823936$ & $4.305$ & $763.328$ & $40.04$ & $9.185$ & $763.377225$ & $3.8$ & $0.0015$ & $0.0635$ \\ $393725152$ & $4.306$ & $768.9944375$ & $39.41$ & $9.32625$ & $769.04317375$ & $3.92875$ & $0.0015$ & $0.064875$ \\ $399652192$ & $4.393$ & $780.5706875$ & $40.35$ & $9.3875$ & $780.620425$ & $3.94875$ & $0.0015$ & $0.064375$ \\ $403520768$ & $4.212$ & $788.1265$ & $39.82$ & $9.16875$ & $788.17548875$ & 
$3.9225$ & $0.0015$ & $0.063875$ \\ $411901408$ & $4.409$ & $804.4949375$ & $39.88$ & $9.3675$ & $804.544185$ & $3.81375$ & $0.0015$ & $0.0625$ \\ $413877088$ & $4.25$ & $808.3536875$ & $39.76$ & $9.2375$ & $808.402685$ & $3.8975$ & $0.0015$ & $0.06275$ \\ $415495232$ & $4.323$ & $811.514125$ & $40.55$ & $9.25375$ & $811.56392875$ & $3.88$ & $0.0015$ & $0.0625$ \\ $420989536$ & $4.394$ & $822.2451875$ & $40.11$ & $9.5325$ & $822.29483$ & $4.01125$ & $0.0015$ & $0.06375$ \\ \boldmath{$402684220.8$} & \boldmath{$4.3087$} & \boldmath{$786.49261875$} & \boldmath{$39.952$} & \boldmath{$9.298125$} & \boldmath{$786.541868875$} & \boldmath{$3.8975$} & \boldmath{$0.0015125$} & \boldmath{$0.0638$} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $39$ } \\ \hline $387685216$ & $5.169$ & $757.1976875$ & $279.59$ & $70.92625$ & $757.54820375$ & $38.32625$ & $0.012$ & $0.578125$ \\ $389171680$ & $4.919$ & $760.1009375$ & $281.79$ & $70.46875$ & $760.45319625$ & $38.9775$ & $0.011875$ & $0.581$ \\ $390823936$ & $4.971$ & $763.328$ & $286.82$ & $69.2$ & $763.68402$ & $37.66625$ & $0.011625$ & $0.564125$ \\ $393725152$ & $5.074$ & $768.9944375$ & $283.47$ & $70.0275$ & $769.347935$ & $36.8475$ & $0.011625$ & $0.55675$ \\ $399652192$ & $5.119$ & $780.5706875$ & $288.57$ & $68.0825$ & $780.92734$ & $36.27125$ & $0.011125$ & $0.5405$ \\ $403520768$ & $5.016$ & $788.1265$ & $282.11$ & $66.82875$ & $788.47543875$ & $36.2325$ & $0.010875$ & $0.534625$ \\ $411901408$ & $5.083$ & $804.4949375$ & $285.96$ & $70.90625$ & $804.85180375$ & $37.86625$ & $0.01125$ & $0.55925$ \\ $413877088$ & $5.077$ & $808.3536875$ & $281.94$ & $68.72875$ & $808.70435625$ & $37.84625$ & $0.010875$ & $0.548125$ \\ $415495232$ & $5.093$ & $811.514125$ & $285.01$ & $67.725$ & $811.86686$ & $36.2175$ & $0.010625$ & $0.528875$ \\ $420989536$ & $5.16$ & $822.2451875$ & $281.46$ & $68.43625$ & $822.59508375$ & $36.64125$ & $0.010625$ & $0.531625$ \\ \boldmath{$402684220.8$} & \boldmath{$5.0681$} 
& \boldmath{$786.49261875$} & \boldmath{$283.672$} & \boldmath{$69.133$} & \boldmath{$786.84542375$} & \boldmath{$37.28925$} & \boldmath{$0.01125$} & \boldmath{$0.5523$} \\ \hline \end{tabular} \end{table} \end{landscape} \begin{landscape} \begin{table}[] \caption{Average Delay and Delay Jitter for different real-life Music files} \label{different_music_file} \scriptsize \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \parbox{1.75cm}{Music File Size (in bits)} & \parbox{1.75cm}{Average Number of Channel Deallocation by PUs} & \parbox{1.75cm}{Ideal Transmission Time (IT) (in sec)} & \parbox{1.75cm}{Initial Channel Allocation Time (ICA) (in msec)} & \parbox{1.75cm}{Channel Reallocation Time (CR) (in msec)} & \parbox{2cm}{Actual Transmission Time (AT) (in sec) (AT=IT+ICA+CR)} & \parbox{1.75cm}{Maximum Jitter (in msec)} & \parbox{1.75cm}{Mean Jitter (in msec)} & \parbox{1.75cm}{Standard Deviation In Jitter (in msec)} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $697$ } \\ \hline $48070656$ & $0.745$ & $125.184$ & $14.2575$ & $1.14625$ & $125.19940375$ & $0.8475$ & $0.0011720375$ & $0.03025$ \\ $48381952$ & $0.785$ & $125.9946666667$ & $14.475$ & $1.22125$ & $126.0103629167$ & $0.9125$ & $0.00123985$ & $0.03225$ \\ $48382976$ & $0.822$ & $125.9973333333$ & $14.55$ & $1.265$ & $126.0131483333$ & $0.92375$ & $0.00125$ & $0.033$ \\ $48451864$ & $0.8$ & $126.1767291667$ & $14.3175$ & $1.27125$ & $126.1923179167$ & $0.9475$ & $0.00125$ & $0.0335$ \\ $48594944$ & $0.787$ & $126.5493333333$ & $14.4825$ & $1.19125$ & $126.5650070833$ & $0.8625$ & $0.0012045$ & $0.030875$ \\ $48647680$ & $0.824$ & $126.6866666667$ & $14.4375$ & $1.285$ & $126.7023891667$ & $0.93$ & $0.00125$ & $0.03325$ \\ $49990032$ & $0.785$ & $130.182375$ & $14.4525$ & $1.2275$ & $130.198055$ & $0.885$ & $0.0012058$ & $0.031125$ \\ $50430976$ & $0.831$ & $131.3306666667$ & $14.3925$ & $1.32$ & $131.3463791667$ & $0.95875$ & $0.00125$ & $0.0335$ \\ $50432936$ & $0.842$ & 
$131.3357708333$ & $14.3925$ & $1.33$ & $131.3514933333$ & $0.98375$ & $0.00125$ & $0.034125$ \\ $51168256$ & $0.899$ & $133.2506666667$ & $14.505$ & $1.375$ & $133.2665466667$ & $0.97125$ & $0.001375$ & $0.034125$ \\ \boldmath{$49255227.2$} & \boldmath{$0.812$} & \boldmath{$128.2688208333$} & \boldmath{$14.42625$} & \boldmath{$1.26325$} & \boldmath{$128.2845103333$} & \boldmath{$0.92225$} & \boldmath{$0.0012447188$} & \boldmath{$0.0326$} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $510$ } \\ \hline $48070656$ & $0.921$ & $125.184$ & $17.7375$ & $1.76125$ & $125.20349875$ & $1.32375$ & $0.00175$ & $0.0465$ \\ $48381952$ & $0.946$ & $125.9946666667$ & $18.12$ & $1.78875$ & $126.0145754167$ & $1.3025$ & $0.001875$ & $0.04575$ \\ $48382976$ & $0.947$ & $125.9973333333$ & $17.9175$ & $1.87375$ & $126.0171245833$ & $1.3825$ & $0.001875$ & $0.04825$ \\ $48451864$ & $0.997$ & $126.1767291667$ & $17.9625$ & $1.93875$ & $126.1966304167$ & $1.36875$ & $0.002$ & $0.04875$ \\ $48594944$ & $0.919$ & $126.5493333333$ & $17.9925$ & $1.79625$ & $126.5691220833$ & $1.30875$ & $0.001875$ & $0.04575$ \\ $48647680$ & $0.939$ & $126.6866666667$ & $17.7075$ & $1.775$ & $126.7061491667$ & $1.2975$ & $0.00175$ & $0.0455$ \\ $49990032$ & $0.993$ & $130.182375$ & $17.7$ & $1.87375$ & $130.20194875$ & $1.33625$ & $0.001875$ & $0.04675$ \\ $50430976$ & $1.006$ & $131.3306666667$ & $17.67$ & $1.865$ & $131.3502016667$ & $1.37$ & $0.001875$ & $0.04725$ \\ $50432936$ & $0.983$ & $131.3357708333$ & $17.8425$ & $1.8775$ & $131.3554908333$ & $1.31875$ & $0.001875$ & $0.046$ \\ $51168256$ & $0.99$ & $133.2506666667$ & $17.8125$ & $1.83$ & $133.2703091667$ & $1.3225$ & $0.00175$ & $0.0455$ \\ \boldmath{$49255227.2$} & \boldmath{$0.9641$} & \boldmath{$128.2688208333$} & \boldmath{$17.84625$} & \boldmath{$1.838$} & \boldmath{$128.2885050833$} & \boldmath{$1.333125$} & \boldmath{$0.00185$} & \boldmath{$0.0466$} \\ \hline \multicolumn{9}{|c|}{Average number of free channels 
($F$) is $285$ } \\ \hline $48070656$ & $1.338$ & $125.184$ & $29.5875$ & $4.1925$ & $125.21778$ & $2.92375$ & $0.00425$ & $0.103375$ \\ $48381952$ & $1.344$ & $125.9946666667$ & $29.9175$ & $4.015$ & $126.0285991667$ & $2.7925$ & $0.004125$ & $0.0985$ \\ $48382976$ & $1.365$ & $125.9973333333$ & $29.37$ & $4.1725$ & $126.0308758333$ & $2.92625$ & $0.00425$ & $0.102875$ \\ $48451864$ & $1.353$ & $126.1767291667$ & $29.2425$ & $4.2225$ & $126.2101941667$ & $2.99375$ & $0.00425$ & $0.104625$ \\ $48594944$ & $1.345$ & $126.5493333333$ & $29.16$ & $4.1525$ & $126.5826458333$ & $2.9225$ & $0.00425$ & $0.10225$ \\ $48647680$ & $1.395$ & $126.6866666667$ & $29.7525$ & $4.205$ & $126.7206241667$ & $2.905$ & $0.00425$ & $0.1025$ \\ $49990032$ & $1.429$ & $130.182375$ & $29.1$ & $4.49$ & $130.215965$ & $3.12875$ & $0.004375$ & $0.108125$ \\ $50430976$ & $1.432$ & $131.3306666667$ & $29.4225$ & $4.41375$ & $131.3645029167$ & $3.1$ & $0.00425$ & $0.10625$ \\ $50432936$ & $1.334$ & $131.3357708333$ & $29.79$ & $4.11375$ & $131.3696745833$ & $2.91125$ & $0.004$ & $0.100125$ \\ $51168256$ & $1.373$ & $133.2506666667$ & $29.1$ & $4.27875$ & $133.2840454167$ & $3.0225$ & $0.004125$ & $0.10275$ \\ \boldmath{$49255227.2$} & \boldmath{$1.3708$} & \boldmath{$128.2688208333$} & \boldmath{$29.44425$} & \boldmath{$4.225625$} & \boldmath{$128.3024907083$} & \boldmath{$2.962625$} & \boldmath{$0.0042125$} & \boldmath{$0.1031375$} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $39$ } \\ \hline $48070656$ & $2.811$ & $125.184$ & $208.0425$ & $46.91$ & $125.4389525$ & $29.3175$ & $0.048$ & $1.05175$ \\ $48381952$ & $2.821$ & $125.9946666667$ & $205.545$ & $48.30125$ & $126.2485129167$ & $30.1775$ & $0.049$ & $1.07725$ \\ $48382976$ & $2.798$ & $125.9973333333$ & $209.4075$ & $46.585$ & $126.2533258333$ & $29.2625$ & $0.04725$ & $1.042625$ \\ $48451864$ & $2.695$ & $126.1767291667$ & $209.3625$ & $43.43375$ & $126.4295254167$ & $27.34125$ & $0.044$ & $0.974$ \\ 
$48594944$ & $2.794$ & $126.5493333333$ & $210.39$ & $47.80125$ & $126.8075245833$ & $29.93875$ & $0.048375$ & $1.067125$ \\ $48647680$ & $2.899$ & $126.6866666667$ & $207.0825$ & $46.5275$ & $126.9402766667$ & $29.435$ & $0.047$ & $1.04275$ \\ $49990032$ & $2.821$ & $130.182375$ & $205.575$ & $48.3425$ & $130.4362925$ & $30.51$ & $0.0475$ & $1.068125$ \\ $50430976$ & $2.839$ & $131.3306666667$ & $211.38$ & $46.33125$ & $131.5883779167$ & $29.09875$ & $0.045125$ & $1.01725$ \\ $50432936$ & $2.93$ & $131.3357708333$ & $206.445$ & $47.4175$ & $131.5896333333$ & $29.33375$ & $0.046125$ & $1.029$ \\ $51168256$ & $2.75$ & $133.2506666667$ & $206.805$ & $45.8925$ & $133.5033641667$ & $27.82375$ & $0.044$ & $0.9795$ \\ \boldmath{$49255227.2$} & \boldmath{$2.8158$} & \boldmath{$128.2688208333$} & \boldmath{$208.0035$} & \boldmath{$46.75425$} & \boldmath{$128.5235785833$} & \boldmath{$29.223875$} & \boldmath{$0.0466375$} & \boldmath{$1.0349375$} \\ \hline \end{tabular} \end{table} \end{landscape} \begin{landscape} \begin{table}[] \caption{Average Delay and Delay Jitter for different real-life Image files} \label{different_image_file} \scriptsize \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \parbox{1.75cm}{Image File Size (in bits)} & \parbox{1.75cm}{Average Number of Channel Deallocation by PUs} & \parbox{1.75cm}{Ideal Transmission Time (IT) (in sec)} & \parbox{1.75cm}{Initial Channel Allocation Time (ICA) (in msec)} & \parbox{1.75cm}{Channel Reallocation Time (CR) (in msec)} & \parbox{2cm}{Actual Transmission Time (AT) (in sec) (AT=IT+ICA+CR)} & \parbox{1.75cm}{Maximum Jitter (in msec)} & \parbox{1.75cm}{Mean Jitter (in msec)} & \parbox{1.75cm}{Standard Deviation In Jitter (in msec)} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $697$ } \\ \hline $12448128$ & $0.23$ & $48.6255$ & $9.235$ & $0.34625$ & $48.63508125$ & $0.325$ & $0.0009111875$ & $0.017$ \\ $12459888$ & $0.232$ & $48.6714375$ & $9.125$ & $0.385$ & $48.6809475$ & $0.34625$ & 
$0.0010105$ & $0.018375$ \\ $12486256$ & $0.199$ & $48.7744375$ & $9.165$ & $0.33125$ & $48.78393375$ & $0.31$ & $0.00086715$ & $0.01625$ \\ $12596912$ & $0.248$ & $49.2066875$ & $9.275$ & $0.385$ & $49.2163475$ & $0.3475$ & $0.001$ & $0.018375$ \\ $12650768$ & $0.234$ & $49.4170625$ & $9.215$ & $0.38125$ & $49.42665875$ & $0.3475$ & $0.0009851375$ & $0.01825$ \\ $12710512$ & $0.221$ & $49.6504375$ & $9.11$ & $0.35375$ & $49.65990125$ & $0.32875$ & $0.000911725$ & $0.017125$ \\ $12828824$ & $0.24$ & $50.11259375$ & $9.095$ & $0.385$ & $50.12207375$ & $0.345$ & $0.0009821375$ & $0.018$ \\ $12903144$ & $0.249$ & $50.40290625$ & $9.08$ & $0.4$ & $50.41238625$ & $0.36$ & $0.001015225$ & $0.01875$ \\ $12927384$ & $0.25$ & $50.49759375$ & $9.22$ & $0.38375$ & $50.5071975$ & $0.35625$ & $0.000971525$ & $0.018375$ \\ $12996424$ & $0.263$ & $50.76728125$ & $9.19$ & $0.43375$ & $50.776905$ & $0.395$ & $0.001092575$ & $0.0205$ \\ \boldmath{$12700824$} & \boldmath{$0.2366$} & \boldmath{$49.61259375$} & \boldmath{$9.171$} & \boldmath{$0.3785$} & \boldmath{$49.62214325$} & \boldmath{$0.346125$} & \boldmath{$0.0009747163$} & \boldmath{$0.0181$} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $510$ } \\ \hline $12448128$ & $0.331$ & $48.6255$ & $11.57$ & $0.7575$ & $48.6378275$ & $0.68875$ & $0.002$ & $0.036375$ \\ $12459888$ & $0.311$ & $48.6714375$ & $11.755$ & $0.63125$ & $48.68382375$ & $0.57125$ & $0.001625$ & $0.03025$ \\ $12486256$ & $0.32$ & $48.7744375$ & $11.69$ & $0.68$ & $48.7868075$ & $0.60375$ & $0.00175$ & $0.032$ \\ $12596912$ & $0.342$ & $49.2066875$ & $11.775$ & $0.72625$ & $49.21918875$ & $0.655$ & $0.001875$ & $0.034375$ \\ $12650768$ & $0.302$ & $49.4170625$ & $11.905$ & $0.66125$ & $49.42962875$ & $0.59375$ & $0.00175$ & $0.031$ \\ $12710512$ & $0.309$ & $49.6504375$ & $11.675$ & $0.67125$ & $49.66278375$ & $0.6125$ & $0.00175$ & $0.031875$ \\ $12828824$ & $0.31$ & $50.11259375$ & $11.93$ & $0.66$ & $50.12518375$ & $0.595$ & $0.001625$ 
& $0.030875$ \\ $12903144$ & $0.327$ & $50.40290625$ & $11.655$ & $0.69875$ & $50.41526$ & $0.62125$ & $0.00175$ & $0.032375$ \\ $12927384$ & $0.306$ & $50.49759375$ & $11.58$ & $0.68375$ & $50.5098575$ & $0.6125$ & $0.00175$ & $0.031875$ \\ $12996424$ & $0.313$ & $50.76728125$ & $11.75$ & $0.71375$ & $50.779745$ & $0.64625$ & $0.00175$ & $0.033375$ \\ \boldmath{$12700824$} & \boldmath{$0.3171$} & \boldmath{$49.61259375$} & \boldmath{$11.7285$} & \boldmath{$0.688375$} & \boldmath{$49.625010625$} & \boldmath{$0.62$} & \boldmath{$0.0017625$} & \boldmath{$0.0324375$} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $285$ } \\ \hline $12448128$ & $0.501$ & $48.6255$ & $19.315$ & $1.72875$ & $48.64654375$ & $1.51375$ & $0.0045$ & $0.0805$ \\ $12459888$ & $0.464$ & $48.6714375$ & $19.395$ & $1.565$ & $48.6923975$ & $1.335$ & $0.004125$ & $0.071625$ \\ $12486256$ & $0.44$ & $48.7744375$ & $19.39$ & $1.6675$ & $48.795495$ & $1.48$ & $0.004375$ & $0.078375$ \\ $12596912$ & $0.497$ & $49.2066875$ & $18.76$ & $1.705$ & $49.2271525$ & $1.4875$ & $0.004375$ & $0.078625$ \\ $12650768$ & $0.444$ & $49.4170625$ & $19.865$ & $1.6275$ & $49.438555$ & $1.3925$ & $0.00425$ & $0.073875$ \\ $12710512$ & $0.469$ & $49.6504375$ & $19.665$ & $1.7825$ & $49.671885$ & $1.55375$ & $0.004625$ & $0.0815$ \\ $12828824$ & $0.511$ & $50.11259375$ & $19.61$ & $1.91625$ & $50.13412$ & $1.695$ & $0.004875$ & $0.08825$ \\ $12903144$ & $0.524$ & $50.40290625$ & $19.725$ & $1.96875$ & $50.4246$ & $1.69125$ & $0.005$ & $0.0885$ \\ $12927384$ & $0.471$ & $50.49759375$ & $19.645$ & $1.7075$ & $50.51894625$ & $1.46125$ & $0.004375$ & $0.0765$ \\ $12996424$ & $0.524$ & $50.76728125$ & $19.48$ & $1.83875$ & $50.7886$ & $1.55125$ & $0.004625$ & $0.081375$ \\ \boldmath{$12700824$} & \boldmath{$0.4845$} & \boldmath{$49.61259375$} & \boldmath{$19.485$} & \boldmath{$1.75075$} & \boldmath{$49.6338295$} & \boldmath{$1.516125$} & \boldmath{$0.0045125$} & \boldmath{$0.0799125$} \\ \hline 
\multicolumn{9}{|c|}{Average number of free channels ($F$) is $39$ } \\ \hline $12448128$ & $1.455$ & $48.6255$ & $137.89$ & $27.355$ & $48.790745$ & $19.87375$ & $0.072$ & $1.10475$ \\ $12459888$ & $1.429$ & $48.6714375$ & $131.225$ & $27.9175$ & $48.83058$ & $21.2225$ & $0.07325$ & $1.158875$ \\ $12486256$ & $1.428$ & $48.7744375$ & $135.71$ & $27.91375$ & $48.93806125$ & $21.02$ & $0.073125$ & $1.1525$ \\ $12596912$ & $1.463$ & $49.2066875$ & $138.17$ & $28.785$ & $49.3736425$ & $20.625$ & $0.07475$ & $1.146125$ \\ $12650768$ & $1.465$ & $49.4170625$ & $133.615$ & $27.2375$ & $49.577915$ & $20.34$ & $0.070375$ & $1.112375$ \\ $12710512$ & $1.4$ & $49.6504375$ & $138.69$ & $26.57125$ & $49.81569875$ & $20.06125$ & $0.0685$ & $1.091125$ \\ $12828824$ & $1.396$ & $50.11259375$ & $136.1$ & $25.3825$ & $50.27407625$ & $18.91625$ & $0.06475$ & $1.02425$ \\ $12903144$ & $1.419$ & $50.40290625$ & $134.345$ & $26.0275$ & $50.56327875$ & $18.91625$ & $0.066$ & $1.033$ \\ $12927384$ & $1.501$ & $50.49759375$ & $136.59$ & $27.545$ & $50.66172875$ & $19.88125$ & $0.06975$ & $1.085375$ \\ $12996424$ & $1.425$ & $50.76728125$ & $136.87$ & $27.82375$ & $50.931975$ & $20.9325$ & $0.070125$ & $1.123125$ \\ \boldmath{$12700824$} & \boldmath{$1.4381$} & \boldmath{$49.61259375$} & \boldmath{$135.9205$} & \boldmath{$27.255875$} & \boldmath{$49.775770125$} & \boldmath{$20.178875$} & \boldmath{$0.0702625$} & \boldmath{$1.10315$} \\ \hline \end{tabular} \end{table} \end{landscape} \begin{landscape} \begin{table}[] \caption{Average Delay and Delay Jitter for different real-life Text files} \label{different_data_file} \scriptsize \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \parbox{1.75cm}{Data File Size (in bits)} & \parbox{1.75cm}{Average Number of Channel Deallocation by PUs} & \parbox{1.75cm}{Ideal Transmission Time (IT) (in sec)} & \parbox{1.75cm}{Initial Channel Allocation Time (ICA) (in msec)} & \parbox{1.75cm}{Channel Reallocation Time (CR) (in msec)} & 
\parbox{2cm}{Actual Transmission Time (AT) (in sec) (AT=IT+ICA+CR)} & \parbox{1.75cm}{Maximum Jitter (in msec)} & \parbox{1.75cm}{Mean Jitter (in msec)} & \parbox{1.75cm}{Standard Deviation In Jitter (in msec)} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $697$ } \\ \hline $25848$ & $0$ & $0.2019375$ & $3.9925$ & $0$ & $0.20593$ & $0$ & $0$ & $0$ \\ $29008$ & $0$ & $0.226625$ & $4.1225$ & $0$ & $0.2307475$ & $0$ & $0$ & $0$ \\ $30752$ & $0$ & $0.24025$ & $4.035$ & $0$ & $0.244285$ & $0$ & $0$ & $0$ \\ $112920$ & $0.004$ & $0.8821875$ & $4.11$ & $0.00625$ & $0.88630375$ & $0.00625$ & $0.0008928625$ & $0.002375$ \\ $322448$ & $0.009$ & $2.519125$ & $4.0375$ & $0.015$ & $2.5231775$ & $0.015$ & $0.00075$ & $0.003375$ \\ $453664$ & $0.014$ & $3.54425$ & $3.915$ & $0.0275$ & $3.5481925$ & $0.0275$ & $0.0009821375$ & $0.00525$ \\ $564880$ & $0.013$ & $4.413125$ & $3.9925$ & $0.02625$ & $4.41714375$ & $0.02625$ & $0.00075$ & $0.004375$ \\ $703728$ & $0.018$ & $5.497875$ & $4.0775$ & $0.0275$ & $5.50198$ & $0.0275$ & $0.0006395375$ & $0.00425$ \\ $1284416$ & $0.029$ & $10.0345$ & $3.9675$ & $0.05125$ & $10.03851875$ & $0.05125$ & $0.0006487375$ & $0.00575$ \\ $3098904$ & $0.06$ & $24.2101875$ & $3.9575$ & $0.10625$ & $24.21425125$ & $0.105$ & $0.0005592125$ & $0.007625$ \\ \boldmath{$662656.8$} & \boldmath{$0.0147$} & \boldmath{$5.17700625$} & \boldmath{$4.02075$} & \boldmath{$0.026$} & \boldmath{$5.181053$} & \boldmath{$0.025875$} & \boldmath{$0.0005222488$} & \boldmath{$0.0033$} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $510$ } \\ \hline $25848$ & $0.003$ & $0.2019375$ & $5.415$ & $0.0075$ & $0.20736$ & $0.0075$ & $0.00375$ & $0.00525$ \\ $29008$ & $0$ & $0.226625$ & $5.565$ & $0$ & $0.23219$ & $0$ & $0$ & $0$ \\ $30752$ & $0$ & $0.24025$ & $5.495$ & $0$ & $0.245745$ & $0$ & $0$ & $0$ \\ $112920$ & $0.003$ & $0.8821875$ & $5.405$ & $0.00625$ & $0.88759875$ & $0.00625$ & $0.0008928625$ & $0.002375$ \\ $322448$ & 
$0.008$ & $2.519125$ & $5.505$ & $0.02$ & $2.52465$ & $0.02$ & $0.001$ & $0.0045$ \\ $453664$ & $0.016$ & $3.54425$ & $5.4275$ & $0.03625$ & $3.54971375$ & $0.03625$ & $0.00125$ & $0.006875$ \\ $564880$ & $0.012$ & $4.413125$ & $5.6$ & $0.02375$ & $4.41874875$ & $0.02375$ & $0.000678575$ & $0.004$ \\ $703728$ & $0.03$ & $5.497875$ & $5.4475$ & $0.06375$ & $5.50338625$ & $0.06$ & $0.0015$ & $0.009375$ \\ $1284416$ & $0.04$ & $10.0345$ & $5.5$ & $0.0975$ & $10.0400975$ & $0.09375$ & $0.001234175$ & $0.01075$ \\ $3098904$ & $0.076$ & $24.2101875$ & $5.3725$ & $0.1825$ & $24.2157425$ & $0.175$ & $0.000960525$ & $0.012875$ \\ \boldmath{$662656.8$} & \boldmath{$0.0188$} & \boldmath{$5.17700625$} & \boldmath{$5.47325$} & \boldmath{$0.04375$} & \boldmath{$5.18252325$} & \boldmath{$0.04225$} & \boldmath{$0.0011266138$} & \boldmath{$0.0056$} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $285$ } \\ \hline $25848$ & $0.002$ & $0.2019375$ & $9.1725$ & $0.00375$ & $0.21111375$ & $0.00375$ & $0.001875$ & $0.002625$ \\ $29008$ & $0.001$ & $0.226625$ & $9.54$ & $0.00375$ & $0.23616875$ & $0.00375$ & $0.001875$ & $0.002625$ \\ $30752$ & $0.001$ & $0.24025$ & $9.6875$ & $0.0025$ & $0.24994$ & $0.0025$ & $0.00125$ & $0.00175$ \\ $112920$ & $0.006$ & $0.8821875$ & $9.2875$ & $0.02125$ & $0.89149625$ & $0.02125$ & $0.003$ & $0.008$ \\ $322448$ & $0.018$ & $2.519125$ & $9.3875$ & $0.0875$ & $2.5286$ & $0.0875$ & $0.004375$ & $0.019625$ \\ $453664$ & $0.028$ & $3.54425$ & $9.2975$ & $0.08875$ & $3.55363625$ & $0.0875$ & $0.003125$ & $0.0165$ \\ $564880$ & $0.036$ & $4.413125$ & $9.7025$ & $0.17125$ & $4.42299875$ & $0.17125$ & $0.004875$ & $0.029$ \\ $703728$ & $0.038$ & $5.497875$ & $9.495$ & $0.11125$ & $5.50748125$ & $0.11125$ & $0.002625$ & $0.017$ \\ $1284416$ & $0.057$ & $10.0345$ & $9.5725$ & $0.28125$ & $10.04435375$ & $0.2775$ & $0.003625$ & $0.0315$ \\ $3098904$ & $0.135$ & $24.2101875$ & $9.2675$ & $0.54625$ & $24.22000125$ & $0.5175$ & $0.002875$ & 
$0.038125$ \\ \boldmath{$662656.8$} & \boldmath{$0.0322$} & \boldmath{$5.17700625$} & \boldmath{$9.441$} & \boldmath{$0.13175$} & \boldmath{$5.186579$} & \boldmath{$0.128375$} & \boldmath{$0.00295$} & \boldmath{$0.016675$} \\ \hline \multicolumn{9}{|c|}{Average number of free channels ($F$) is $39$ } \\ \hline $25848$ & $0.008$ & $0.2019375$ & $65.48$ & $0.22875$ & $0.26764625$ & $0.22875$ & $0.114375$ & $0.16175$ \\ $29008$ & $0.011$ & $0.226625$ & $66.48$ & $0.3425$ & $0.2934475$ & $0.3425$ & $0.17125$ & $0.242125$ \\ $30752$ & $0.01$ & $0.24025$ & $63.5325$ & $0.37625$ & $0.30415875$ & $0.37625$ & $0.188125$ & $0.266$ \\ $112920$ & $0.044$ & $0.8821875$ & $64.85$ & $1.05375$ & $0.94809125$ & $1.05375$ & $0.1505$ & $0.39825$ \\ $322448$ & $0.13$ & $2.519125$ & $66.66$ & $4.40625$ & $2.59019125$ & $4.33375$ & $0.220375$ & $0.973$ \\ $453664$ & $0.162$ & $3.54425$ & $65.68$ & $5.44375$ & $3.61537375$ & $5.19125$ & $0.194375$ & $0.994125$ \\ $564880$ & $0.204$ & $4.413125$ & $66.3075$ & $6.0925$ & $4.485525$ & $5.8725$ & $0.174125$ & $1.001125$ \\ $703728$ & $0.215$ & $5.497875$ & $66.3075$ & $6.04$ & $5.5702225$ & $5.74125$ & $0.1405$ & $0.884625$ \\ $1284416$ & $0.358$ & $10.0345$ & $66.82$ & $9.1375$ & $10.1104575$ & $8.3625$ & $0.115625$ & $0.962125$ \\ $3098904$ & $0.508$ & $24.2101875$ & $66.67$ & $10.94$ & $24.2877975$ & $9.53625$ & $0.057625$ & $0.716625$ \\ \boldmath{$662656.8$} & \boldmath{$0.165$} & \boldmath{$5.17700625$} & \boldmath{$65.87875$} & \boldmath{$4.406125$} & \boldmath{$5.247291125$} & \boldmath{$4.103875$} & \boldmath{$0.1526875$} & \boldmath{$0.659975$} \\ \hline \end{tabular} \end{table} \end{landscape} \bibliographystyle{plain}
\part{Prologue}\label{part:intro} \section{Introduction}\label{s:intro} We are concerned in this paper with a generalisation of the affine adjunction of classical algebraic geometry to algebraic and categorical settings, and with the resulting relationship to the theory of dualities. We establish a general framework that encompasses several important categorical dualities in mathematics, as well as several attempts to unify them. The paper thus spans a number of different topics, often at varying levels of generality. We offer an introduction that is perhaps longer than customary, in the hope of providing motivation and guidance for the interested reader. \subsection{The classical affine adjunction} In classical affine algebraic geometry, one studies solutions to systems of polynomial equations with coefficients in an algebraically closed field $k$. For any subset $R$ of the polynomial ring in finitely many variables $k[X]:=k[X_{1},\ldots,X_{n}]$, one obtains the (possibly infinite) system of equations: \begin{equation}\label{eq:sist} p(X_1,\ldots, X_{n})=0, \ p \in R. \end{equation} Let us write $\VV{(R)}\subseteq k^{n}$ for the set of solutions of \eqref{eq:sist} over $k^{n}$, where $k^{n}$ is the affine $n$-space over $k$. Then $\VV{(R)}$ is the \emph{affine set defined by $R$}. Since $k[X]$ is Noetherian by Hilbert's Basis Theorem, it is no loss of generality to assume that $R$ be finite. Conversely, for any subset $S\subseteq k^{n}$ we can consider the set $\CC{(S)}\subseteq k[X]$ of polynomials that vanish over $S$, which is automatically an ideal. Then $\CC{(S)}$ is the \emph{ideal defined by $S$}. Again by Hilbert's Basis Theorem, the quotient $k$-algebra $k[X]/\CC(S)$ ---the \emph{co-ordinate ring} of the affine set $S$--- is \emph{finitely presentable}.
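In symbols, the two assignments just described read as follows:
\[
\VV{(R)}=\left\{x\in k^{n} : p(x)=0 \text{ for every } p\in R\right\},
\qquad
\CC{(S)}=\left\{p\in k[X] : p(x)=0 \text{ for every } x\in S\right\}.
\]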
Writing $2^{E}$ for the power set of the set $E$, we obtain functions (implicitly indexed by $n$) \begin{align} \CC&\colon 2^{k^{n}}\longrightarrow 2^{k[X]},\label{intro:c}\\ \VV&\colon 2^{k[X]}\longrightarrow 2^{k^{n}}\label{intro:v} \end{align} that yield a (contravariant) Galois connection. The fixed points of the closure operator $\VV\circ\CC$ are then precisely the affine sets in $k^{n}$. Since $\VV\circ\CC$ is a \emph{topological} closure operator ---i.e.\ it commutes with finite unions--- affine algebraic sets are the closed sets of a topology on $k^{n}$, namely, the \emph{Zariski topology}. The fixed points of the dual closure operator $\CC\circ\VV$, on the other hand, may be identified thanks to Hilbert's {\it Nullstellensatz}: they are precisely the \emph{radical ideals} of $k[X]$, that is, those ideals that coincide with the intersection of all prime ideals containing them. The {\it Nullstellensatz} thus characterises co-ordinate rings, for $k[X]/I$ is one such if, and only if, $I$ is radical. Since radical ideals may in turn be elementarily characterised as those ideals $I$ such that $k[X]/I$ has no non-zero nilpotents, co-ordinate rings are precisely the finitely presented nilpotent-free (or \emph{reduced}) $k$-algebras. The Galois connection given by the pair $(\CC,\VV)$ in (\ref{intro:c}--\ref{intro:v}) can be made functorial. On the algebraic side we consider the category of finitely presented $k$-algebras with their homomorphisms. On the geometric side we take as objects subsets of $k^{n}$, for each finite $n$, by which we mean sets $S$ equipped with a specific embedding $S\hookrightarrow k^{n}$. It is important not to blur the distinction between $S$ itself ---a mere set--- and $S\hookrightarrow k^{n}$ ---an object of our category. Indeed, arrows in the geometric category are to be defined affinely, i.e.\ by restriction from $k^{n}$.
An arrow from $S\hookrightarrow k^{n}$ to $T\hookrightarrow k^{m}$ is a \emph{regular map} $S\to T$, that is, the equivalence class of a \emph{polynomial function} $f\colon k^{n}\to k^{m}$ such that $f$ maps $S$ into $T$; two such functions are equivalent if, and only if, they agree on $S$. There is a functor that associates to each regular map $S\to T$ a contravariant homomorphism of the (automatically presented) co-ordinate rings of $\VV\circ\CC{(T)}$ and $\VV\circ\CC{(S)}$. And there is a companion functor that associates to each homomorphism of presented $k$-algebras $k[X]/I\to k[Y]/J$, with $Y=\{Y_1,\ldots,Y_{m}\}$ and $J$ an ideal of $k[Y]$, a contravariant regular map $\VV{(J)}\to\VV{(I)}$. The two functors yield a contravariant adjunction; upon restricting each functor to the fixed points in each domain, one obtains the classical duality (=contravariant equivalence) between affine algebraic varieties and their co-ordinate rings. Compare\footnote{Terminology: Hartshorne's corollary is stated for irreducible varieties, which he calls varieties {\it tout court}.} e.g.\ \cite[Corollary 3.8]{hartshorne1977algebraic}. \subsection{The universal algebra framework} A first aim of this paper is to generalise the classical affine adjunction above to any \emph{variety of algebras}, whether finitary or infinitary. We assume some familiarity with Birkhoff's theory of general, or ``universal'', algebra; for background see e.g.\ \cite{birkhoff79, cohn81, jacobson80, Burris:81}. Henceforth, variety (of algebras) means ``possibly infinitary variety (of algebras)'' in the sense of S{\l}ominsky \cite{Slominsky:1959} and Linton \cite{Linton:1965} (after Lawvere \cite{lawverereprint}). The main observation is that in any variety, the \emph{free algebras} play the same r\^{o}le as the ring of polynomials in the above correspondence.
Ideals of the ring of polynomials then become, in full generality, congruences of some free algebra, while the ground field $k$ can be replaced by any algebra $A$ in the variety. We refer the reader to Table \ref{tab:transl} below for a schematic translation of the main concepts in the adjunctions. In Part \ref{part:alg} we show that the classical affine adjunction extends \textit{verbatim} to this general algebraic setting. It is important to note that in the geometric adjunction, co-ordinate rings are \emph{presented}, that is, they are not merely isomorphic to a ring of the form $k[X]/I$: they come with a specific defining ideal $I$. By an easy general argument relying on the Axiom of Choice ---cf.\ Remark \ref{rem:Vpequiv} below--- the category of finitely presented $k$-algebras is equivalent to that of finitely presentable $k$-algebras (morphisms being the ring homomorphisms in each case), whether actually presented or not. Nonetheless, for our purposes here the presented and the \emph{presentable} objects are to be kept distinct. We shall indicate by $\Vap$ the category of presented algebras in the variety $\Va$. \begin{table}[h!]
\adjustbox{max width=\columnwidth}{ \begin{tabular}{|p{4cm}|p{4cm}|p{4cm}|} \hline \textbf{Algebraic geometry} & \textbf{Universal algebra} & \textbf{Categories} \\ \hline Ground field $k$ & Any algebra $A$ in $\Va$ & Functor $\mathscr{I}:\T \to \S $ \\ \hline Class of $k$-algebras & Any variety $\Va$ & Category $\R$ \\ \hline $k[X_{1}, \ldots, X_{n}]$ & Free algebras & Objects in $\T$ \\ \hline Ideals & Congruences & Subsets of $\hom^{2}_{\T}(t,\a)$ with $t$ in $\T$ \\ \hline Assignment $k[X_{1}, \ldots, X_{n}] \to k$ & Assignment $\mathcal{F}(\mu) \to A$ & Object $\a$ in $\T$ \\ \hline Regular map & Definable map & Restriction of $\mathscr{I}(f)$ \\ \hline Co-ordinate algebra of $S$ & Algebra presented by $\CC(S)$ & Pair $(t, \CC{(S)})$ in $\R$ \\ \hline Affine variety & $\VV\circ\CC$-closed set & Pair $(t,\VV{(R)})$ in $\S$ \\ \hline \end{tabular}} \vspace{0.3cm} \caption{Corresponding concepts in the geometric, algebraic, and categorical setting.}\label{tab:transl} \end{table} In Corollary \ref{cor:algadj} we obtain the adjunction between $\Vap^{\rm op}$, the opposite of the category of presented $\Va$-algebras, and the category of subsets of (the underlying set of) $A^\mu$, as $\mu$ ranges over all cardinals, with definable maps as morphisms. The functors that implement the adjunction act on objects by taking a subset $R\subseteq \F{(\mu)}\times\F{(\mu)}$ ---that is, a ``system of equations in the language of $\Va$''--- to its solution set $\VV{(R)}\subseteq A^\mu$, where $\VV{(R)}$ is the set of elements of $A^\mu$ at which each pair of terms in $R$ evaluates identically; and a subset $S\hookrightarrow A^\mu$ to its ``co-ordinate $\Va$-algebra'', namely, $\F{(\mu)}/\CC{(S)}$, where $\CC{(S)}$ is the congruence on $\F{(\mu)}$ consisting of all pairs of terms that evaluate identically at each element of $S$. Please see section \ref{sec:algadj} for details.
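To make the two operators concrete, consider a toy instance (our own illustration, not drawn from the cited sections): take $\Va$ to be the variety of abelian groups and $A=\mathbb{Z}$, with $\mu=1$, so that $\F{(1)}$ is the free abelian group on one generator $x$ and its elements are the terms $mx$ for $m\in\mathbb{Z}$. For the single ``equation'' $2x=0$ one computes:

```latex
% Toy example: \Va = abelian groups, A = \mathbb{Z}, \mu = 1.
\begin{align*}
R &= \{(2x,\,0)\} \subseteq \F{(1)}\times\F{(1)},\\
\VV{(R)} &= \{a\in\mathbb{Z} \mid 2a=0\} = \{0\} \subseteq \mathbb{Z},\\
\CC{(\VV{(R)})} &= \{(mx,\,nx) \mid m,n\in\mathbb{Z}\} = \F{(1)}\times\F{(1)},
\end{align*}
```

since every term $mx$ evaluates to $0$ at the single solution $0$. Thus $R\subsetneq\CC{(\VV{(R)})}$, so this $R$ is not a fixed point of the adjunction: being fixed is a genuine restriction, which the generalised {\it Nullstellensatz} below characterises.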
To identify the fixed points of this general affine adjunction on the algebraic side, we prove an appropriate generalisation of the {\it Nullstellensatz}. The final result is stated as Theorem \ref{thm:algnull}. The identification of an appropriate notion of radical congruence (in part (ii) of the theorem) leads to a result formally analogous to the ring-theoretic {\it Nullstellensatz}. Additionally, the identification of an appropriate type of representation for those $\Va$-algebras that are fixed under the adjunction (in part (iii) of the theorem) leads to a result reminiscent of Birkhoff's Subdirect Representation Theorem. Failure of the latter for infinitary varieties is irrelevant for our purposes here. In fact, while our Theorem \ref{thm:algnull} may be conceived of as a version of the Subdirect Representation theorem that is ``relative to the ground algebra $A$'', it is formally incomparable to Birkhoff's result: neither statement entails the other, in general. See section \ref{sec:algebraic} for details and further comments. To characterise the fixed points on the \emph{affine} side, we use the fact that in several cases the composition $\VV\circ\CC$ gives a topological closure operator. The topology induced by $\VV\circ\CC$ is readily seen to be a generalisation of the Zariski topology (see e.g., \cite[Chapter 1]{hartshorne1977algebraic}). We therefore provide sufficient conditions to characterise the fixed points on this side as topological $A$-compact sets (\cite{weir1975hewitt}). \subsection{The general affine adjunction} The general affine adjunction of Corollary \ref{cor:algadj} can be lifted from the algebraic setting to a more general categorical context. This we do in Part \ref{part:general} of the paper, thus achieving our second aim. Conceptually, the key ingredient in the algebraic construction sketched above is the functor $\I_{A}\colon \T \to \Set$.
In the categorical abstraction, the basic {\it datum} is any functor $\I \colon \T \to \S$, which can be conceived as the \emph{interpretation} of the ``syntax'' $\T$ into the ``semantics'' $\S$, along with a distinguished object $\a$ of $\T$. (In the algebraic specialisation, $\a$ is $\F{(1)}$, the free singly generated $\Va$-algebra.) Here $\T$ and $\S$ are simply arbitrary (locally small\footnote{All categories in this paper are assumed to be locally small.}) categories. Out of these data, we construct two categories $\D$ and $\R$ of subobjects and relations, respectively. The category $\D$ abstracts that of sets affinely embedded into $A^{\mu}$; here, sets are replaced by objects of $\S$, the powers $A^{\mu}$ are replaced by objects $\I(t)$ as $t$ ranges over objects of $\T$, and the morphisms of $\S$ that are ``definable'' are declared to be those in the range of $\I$. The category $\R$ abstracts the category of relations (not necessarily congruences) on the free $\Va$-algebras $\F{(\mu)}$; that is, its objects are relations on the hom-set $\hom_{\T}(t,\a)$, as $t$ ranges over objects of $\T$. Arrows are $\T$-arrows that preserve the given relations. It is possible, in this setting, to define the operator $\CC$ in full generality. In order to define an appropriate abstraction of the operator $\VV$, we need to require that $\S$ has enough limits (Assumption \ref{ass:limits} below), because ``solutions'' to ``systems of equations'' are computed by intersecting solutions to ``single equations''. The pair $(\CC,\VV)$ yields a Galois connection (Lemma \ref{lem:galois}) that satisfies an appropriate abstraction of the {\it Nullstellensatz}, as we show in Theorem \ref{thm:null}. Moreover, the Galois connection functorially lifts to an adjunction between $\D$ and $\R$; see Theorem \ref{thm:weakadj}. 
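In the classical specialisation sketched in the opening, the Galois connection given by the pair $(\CC,\VV)$ is the familiar ideal/variety correspondence. Writing $\mathcal{I}(S)$ for the vanishing ideal of a subset $S\subseteq k^{n}$ and $\mathbb{V}(I)$ for the zero locus of an ideal $I$ of $k[X_{1},\ldots,X_{n}]$ (standard algebraic-geometry notation, used here only for illustration), it reads:

```latex
% Classical instance of the Galois connection:
I \subseteq \mathcal{I}(S)
\quad\text{if, and only if,}\quad
S \subseteq \mathbb{V}(I).
```

The abstract construction below recovers this upon interpreting ideals as relations on hom-sets and zero loci as intersections of equalisers.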
This is to be considered a weak form of the algebraic adjunction, because in the algebraic setting one can additionally take quotients of the categories $\D$ and $\R$ that have semantic import. One would like to identify pairs of definable morphisms in $\D$ if they agree on the given ``affine subobject''. Similarly, one would like to identify morphisms that agree on the same ``presented object'', in the appropriate abstract sense. See section \ref{sec:weak} for details. This can be done via appropriate equivalence relations that lead us to quotient categories $\D^{q}$ and $\R^{q}$. However, in order for the adjunction between $\D$ and $\R$ to descend to the quotients, it is necessary to impose a condition on the object $\a$. More precisely, we find that it suffices to assume that $\a$ be an \emph{$\I$-coseparator}; please see Definition \ref{def:coseparator} and Lemma \ref{lem:cosep}. (In the algebraic specialisation, we prove that this assumption on $\a=\F{(1)}$ is automatically satisfied; see Lemma \ref{lem:algcosep}.) Under this additional assumption (Assumption \ref{ass:cosep} below) we obtain our general affine adjunction between $\D^{q}$ and $\R^{q}$, Theorem \ref{thm:genadj}. In section \ref{s:further} of the paper we develop some further theory with an eye towards comparing our results to the existing literature. \subsection{Applications to duality theory} Our third and final aim in this paper is to illustrate the connection between the theory of dualities and the general affine adjunctions that we have hitherto summarised. This we do in Part \ref{part:alg}, where we select three landmark duality theorems in order to illustrate different aspects of our construction. Some familiarity with duality theory is assumed here. By way of a preliminary, in section \ref{s:classical} we show in detail how the classical affine adjunction can be recovered as a rather direct special case of the algebraic affine adjunction.
In section \ref{s:stone} we frame Stone duality for Boolean algebras in our setting. This provides the most classical example of a duality for a finitary variety of algebras. In section \ref{s:stone-gelfand} we do the same for Gelfand duality between commutative unital $C^{*}$-algebras and compact Hausdorff spaces, an important example of a duality for an infinitary variety of algebras. Our treatment of Gelfand duality stresses its analogy with Stone duality for Boolean algebras. \subsection{Related literature} Before turning to the proofs, we comment on related literature. The idea of generalising the classical affine adjunction to an algebraic setting is far from new. In \cite{daniyarova2012algebraic} and subsequent papers, various elements of abstract algebraic geometry for finitary varieties of algebras are developed. The authors use this to apply geometric methods in universal algebra. However, they do not relate their theory to duality theory, so the overlap with our results is modest. In \cite{diers1999affine} Diers develops a framework generalising the classical affine adjunction for rings of polynomials to the context of (possibly infinitary) algebraic theories. He establishes, for any algebra $L$ in a given variety, an adjunction between a category of ``affine subsets'' over $L$ and a category of ``algebraic systems'', as well as an adjunction between a category of ``affine algebraic sets'' and the category of algebras of the given sort, specialising to a duality between the former category and a category of ``functional algebras'' over $L$. The notion of (algebraic) affine set in the sense of Diers is significantly different from ours: indeed, it amounts to a pair $(X, A(X))$ consisting of a set $X$ and of a subalgebra of the algebra $L^{X}$. Nonetheless, in section \ref{ssolution} we show that Diers' ``system-solution'' adjunction can be obtained from our general categorical framework with an appropriate choice of the parameters. 
Another important difference between Diers' approach and ours consists in the fact that the categories of algebras involved in his other adjunction are not categories of \emph{presented algebras} (i.e., algebras equipped with a presentation), as is the case in our setting, nor are the objects of his category of affine algebraic sets presented as \emph{subsets} of affine spaces. There is also a strong connection between our approach and the theory of dualities generated by a dualising object (see e.g.\ \cite{barr2008isbell}, \cite{clark1998natural}, and \cite{MR1147921}). In section \ref{sec:representable} we show that, whenever $\S$ is the category of sets and maps, and $\I$ is representable, our adjunction can be seen as one induced by a dualising object. Moreover, in section \ref{universality} we show that every duality between categories satisfying mild requirements can be obtained from the general categorical framework developed in Part \ref{part:general}. Finally, we mention that the connection between a general {\sl Nullstellensatz} theorem for varieties of algebras and Birkhoff's subdirect representation theorem is addressed in \cite{tholen2013nullstellen}, although in a context different from ours. \part{The general adjunction}\label{part:general} \section{The weak Nullstellensatz, and the weak affine adjunction}\label{sec:weak} If $x$ and $y$ are objects in a category $\Cc$, and $f\colon x \to y$ is an arrow in $\Cc$, we write $\hom_{\Cc}{(x,y)}$ to denote the collection of arrows in $\Cc$ from $x$ to $y$, and $\dom{f}$ to denote the domain $x$ of $f$. We consider the following. \begin{itemize} \item Two categories $\T$ and $\S$. \item A functor $\I \colon \T \to \S$. \item An object $\a$ of $\T$. \end{itemize} \subsection{The category $\D$ of subobjects and definable morphisms}\label{ss:D} Objects are all pairs $(t,s)$ where $t$ is a $\T$-object and $s \colon \dom{s} \to\I(t)$ is an $\S$-subobject.
Arrows $(t,s)\to(t',s')$ are the $\T$-arrows $f \colon t \to t'$ such that $\I(f)\circ s$ factors through $s'$; that is, there exists an $\S$-arrow $g \colon \dom{s} \to \dom{s'}$ such that the diagram \begin{small} \[ \begin{tikzpicture}[scale=0.4] \node (S) at (0,0) {$\dom{s}$}; \node (VTS) at (0,5) {$\I(t)$}; \node (T) at (5,0) {$\dom{s'}$}; \node (VTT) at (5,5) {$\I(t')$}; \draw [->] (S) -- (T) node [below, midway] {$g$}; \draw [->] (VTS) -- (VTT) node [above, midway] {$\I(f)$}; \draw [->] (S) -- (VTS) node [left, midway] {$s$}; \draw [->] (T) -- (VTT) node [right, midway] {$s'$}; \end{tikzpicture} \] \end{small} commutes. \subsection{The category $\R$ of relations and relation-preserving morphisms}\label{ss:R} Objects are all pairs $(t,R)$ where $t$ is a $\T$-object and $R$ is a relation on $\hom_\T{(t,\a)}$. Arrows $(t,R)\to(t',R')$ are the $\T$-arrows $f\colon t\to t'$ such that the function \begin{align}\label{eq:Rarrow} -\circ f\colon \hom_{\T}{(t', \a)} \to \hom_{\T}{(t, \a)} \end{align} satisfies the property \begin{align*} (p', q')\in R' \quad \Longrightarrow \quad (p' \circ f, q' \circ f)\in R. \end{align*} We say in this case that $f$ \emph{preserves $R'$} (\emph{with respect to $R$}). \begin{remark}\label{rem:equivfactor} Observe that if (\ref{eq:Rarrow}) satisfies the property above, then it must factor through the equivalence relations $\overline{R'}$ and $\overline{R}$ generated by $R'$ and $R$, respectively. In other words, if the $\T$-arrow $f\colon t\to t'$ preserves $R'$ with respect to $R$, then it also preserves $\overline {R'}$ with respect to $\overline{R}$. Hence, if $f$ defines an $\R$-arrow $(t,R)\to (t',R')$, then it also defines an $\R$-arrow $(t,\overline{R})\to (t',\overline{R'})$. \end{remark} We emphasise that $\D$ will depend on $\I$ (and hence on $\T$ and $\S$) but not on $\a$, and $\R$ will depend on $\a$ (and hence on $\T$) but not on $\I$ (nor on $\S$).
Hence a more informative notation would be $\D_{\I}$ and $\R_{\a}$, which however we do not adopt for the sake of legibility. \begin{terminology}\label{term:identification}Throughout, when we say that $f \colon (t,s)\to(t',s')$ is a $\D$-arrow, we mean that $f\colon t \to t'$ is the unique $\T$-arrow that defines it. Similarly, when we say that $f\colon (t,R)\to(t',R')$ is an $\R$-arrow, we mean that $f\colon t \to t'$ is the unique $\T$-arrow that defines it. \end{terminology} \subsection{The Galois connection $(\CC,\VV)$}\label{ss:Galois} \begin{definition}\label{def:C} For any $(t, s)\in \D$, we define the following equivalence relation on $\hom_{\T}{(t, \a)}$: \begin{align}\label{eq:RS} \CC{(s)}:=\left\{(p,q)\in \hom_{\T}^{2}{(t, \a)} \mid \I(p)\circ s=\I(q)\circ s\right\}. \end{align} \end{definition} In order to define $\VV$ it is necessary to assume that $\S$ has enough limits. It is sufficient to make the following \begin{assumption}\label{ass:limits} Henceforth, we always assume that $\S$ has equalisers of pairs of parallel arrows, and intersections of arbitrary families of subobjects. We denote the intersection of a family $\{E_i\}_{i\in I}$ of $\S$-subobjects by $\bigwedge_{i \in I} E_i$. \end{assumption} \begin{definition}\label{def:V} For any $(t, R)$ in $\R$, we set \begin{align}\label{eq:SR} \VV{(R)}:=\bigwedge_{(p,q)\in R}\Eq{(\I(p),\I(q))}, \end{align} where, for $(p,q)\in R$, $\Eq{(\I(p),\I(q))}$ denotes the $\S$-subobject of $\I(t)$ given by the equaliser in $\S$ of the $\S$-arrows $\I(p), \I(q) \colon \I(t) \rightrightarrows \I(\a)$. \end{definition} We now show that the operators $\VV$ and $\CC$ yield a contravariant Galois connection between relations and subobjects. Let us write $\leq$ to denote the partial order on subobjects in a category. Thus, as usual, if $x$ and $y$ are subobjects of $z$, $x\leq y$ if there is an arrow $m\colon \dom{x} \to \dom{y}$ such that $x=y\circ m$.
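When $\S=\Set$, equalisers and intersections of subobjects are computed as subsets, and the two operators admit the following pointwise reading (a sketch of this special case, in the notation just introduced):

```latex
\begin{align*}
\CC{(s)} &= \bigl\{(p,q)\in \hom^{2}_{\T}{(t,\a)} \,\big|\,
  \I(p)(s(a))=\I(q)(s(a)) \text{ for every } a\in\dom{s}\bigr\},\\
\VV{(R)} &= \bigl\{a\in\I(t) \,\big|\,
  \I(p)(a)=\I(q)(a) \text{ for every } (p,q)\in R\bigr\}.
\end{align*}
```

In particular, $\CC{(s)}$ identifies two arrows $t\to\a$ exactly when their interpretations agree on the subset carved out by $s$, while $\VV{(R)}$ is the set of simultaneous solutions of the ``equations'' in $R$.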
\begin{lemma}[Galois connection]\label{lem:galois} For any $\T$-object $t$, any relation $R$ on $\hom_\T{(t,\a)}$, and any $\S$-subobject $s \colon \dom{s}\to \I(t)$, we have \begin{align}\label{eq:galois} R \subseteq \CC{(s)} \quad \quad \text{if, and only if,} \quad \quad s\leq \VV{(R)}. \end{align} \end{lemma} \begin{proof}We have $R \subseteq \CC(s)$ if, and only if, for any $(p,q) \in R$ it is the case that $\I{(p)}\circ s = \I{(q)}\circ s$. On the other hand, $s\leq \VV{(R)}$ if, and only if, there is an $\S$-arrow $m\colon \dom{s}\to \dom{\VV{(R)}}$ with $s=\VV{(R)}\circ m$. Now, if the former holds then $s$ must factor through $\VV{(R)}$ because the latter is defined in (\ref{eq:SR}) as the intersection of all $\S$-subobjects of $\I(t)$ that equalise some pair in $R$. Conversely, if the latter holds then for each $(p,q)\in R$ we obtain, composing both sides of $\I{(p)}\circ \VV{(R)} =\I{(q)}\circ\VV{(R)}$ with $m$, that $\I{(p)}\circ s =\I{(q)}\circ s$. \end{proof} \subsection{The weak {\sl Nullstellensatz}}\label{subs:weaknull} Recall that a collection of arrows $E\subseteq \hom_\A{(x,y)}$ in a category $\A$ is \emph{jointly epic} if whenever $f_1,f_2\colon y\rightrightarrows z$ are $\A$-arrows with $f_1\circ g = f_2\circ g$ for all $g\in E$, then $f_1=f_2$. \begin{theorem}[Weak {\sl Nullstellensatz}]\label{thm:null} Fix an $\R$-object $(t,R)$. For any family $\Sigma=\{\sigma_{i}\}_{i\in I}$ of subobjects of $\I{(t)}$ such that for each $\sigma_{i}$ there exists $m_{i}$ with $\sigma_{i}=\VV{(R)}\circ m_{i}$ (i.e.\ $\sigma_{i}\leq \VV{(R)}$) and the family of $\S$-arrows $\{m_{i}\}_{i\in I}$ is jointly epic in $\S$, the following are equivalent. \begin{enumerate}[\textup{(}i\textup{)}] \item $R=\CC{(\VV{(R)})}$, i.e.\ $R$ is fixed by the Galois connection \textup{(\ref{eq:galois})}. \item $R=\bigcap_{i\in I}\CC(\sigma_{i})$.
\end{enumerate} \end{theorem} \begin{proof} First observe that the Galois connection (\ref{eq:galois}) in Lemma \ref{lem:galois} implies the \emph{expansiveness} of $\CC\circ\VV$, i.e. \begin{align}\label{eq:contained1} R\subseteq \CC{(\VV{(R)})}\ . \end{align} Further, since each $\sigma_{i}\leq \VV{(R)}$, again by general properties of Galois connections, it follows that \begin{align}\label{eq:contained2} R\subseteq \bigcap_{i\in I}\CC(\sigma_{i})\ . \end{align} \smallskip \noindent (i)$\Rightarrow$(ii)\, As by hypothesis $R=\CC{(\VV{(R)})}$, by (\ref{eq:contained2}) above, it is enough to prove \[\bigcap_{i\in I}\CC(\sigma_{i})\subseteq \CC{(\VV{(R)})}\ .\] If $(p,q)\in \bigcap_{i\in I}\CC(\sigma_{i})$, then for every $\sigma_{i}\in \Sigma$, $\I(p)\circ \sigma_{i}=\I(q)\circ \sigma_{i}$. By hypothesis, the latter can be rewritten as $\I(p)\circ \VV{(R)}\circ m_{i}=\I(q)\circ \VV{(R)}\circ m_{i}$. Now, the family of factorisations $\{m_{i}\}_{i\in I}$ is jointly epic in $\S$, hence we obtain $\I(p)\circ \VV{(R)}=\I(q)\circ \VV{(R)}$, which proves $(p,q)\in \CC{(\VV{(R)})}$. \smallskip \noindent (ii)$\Rightarrow$(i)\, By (\ref{eq:contained1}) above and the hypothesis (ii), it is enough to prove \[\CC{(\VV{(R)})}\subseteq \bigcap_{i\in I}\CC(\sigma_{i}) .\] Suppose that $(p,q)\in\CC{(\VV{(R)})}$, i.e.\ $\I(p)\circ \VV{(R)}=\I(q)\circ \VV{(R)}$. By composing on the right with $m_i$ we obtain, for all $\sigma_{i}\in \Sigma$, $\I(p)\circ \VV{(R)}\circ m_{i}=\I(q)\circ \VV{(R)}\circ m_{i}$. Applying the factorisation $\sigma_{i}=\VV{(R)}\circ m_{i}$ one obtains $\I(p)\circ \sigma_{i}=\I(q)\circ\sigma_{i}$. The latter entails that, for all $i\in I$, $(p,q)\in \CC{(\sigma_{i})}$, whence $(p,q)\in \bigcap_{i\in I}\CC(\sigma_{i})$. \end{proof} \begin{remark} Notice that one such family $\Sigma$ always exists, namely the singleton family consisting of $\VV{(R)}$ itself, with $m$ the identity arrow of $\dom{\VV{(R)}}$. However, in this case the theorem becomes tautological.
When the category $\S$ is $\Set$, the category of sets and functions, one can choose as $\Sigma$ the family of maps with domain the singleton $*$ (i.e.\ the terminal object of \Set). The corresponding family of factorisations is obviously jointly surjective, hence jointly epic, and Theorem \ref{thm:null} can be restated in a more concrete form as follows. \end{remark} \begin{theorem}\label{thm:null-in-set} Suppose $\S=\Set$. For any $\R$-object $(t,R)$ the following are equivalent. \begin{enumerate}[\textup{(}i\textup{)}] \item $R=\CC{(\VV{(R)})}$, \item $R=\bigcap_{\sigma\leq \VV{(R)}}\CC(\sigma)$, where $\sigma$ ranges over all $\S$-subobjects $*\to \I(t)$. \end{enumerate} \end{theorem} The operators $\CC$ and $\VV$ naturally give rise to functors $\C$ and $\V$, as we spell out in the following. \subsection{The functor $\C\colon \D\to\R$} For any $\D$-object $(t, s)$, we set \begin{align}\label{eq:Cobj} \C(t, s):=(t, \CC(s)). \end{align} For a $\D$-arrow $f\colon(t, s)\to (t', s')$ we let $\C(f)$ be the $\R$-arrow $f\colon (t, \CC(s))\to (t', \CC(s'))$. To check that this is well-defined, we need to show that the function \begin{align*} -\circ f\colon \hom_{\T}{(t', \a )} \to \hom_{\T}{(t, \a )} \end{align*} satisfies $(p' \circ f, q' \circ f)\in \CC(s)$ for any $(p', q')\in \CC(s')$. Indeed, note that for some $\S$-arrow $g\colon \dom{s} \to \dom{s'}$ \begin{align}\label{eq:welldef1} \I(f)\circ s= s'\circ g \ , \end{align} because $f\colon(t, s)\to (t', s')$ is a $\D$-arrow. Now, given $p', q'\in \hom_{\T}{(t', \a )}$, assume $(p',q')\in \CC(s')$, that is \begin{align}\label{eq:welldef2} \I(p')\circ s'=\I(q')\circ s'. \end{align} Composing both sides of (\ref{eq:welldef2}) with $g$, and applying (\ref{eq:welldef1}), we obtain $\I(p'\circ f)\circ s = \I(q'\circ f)\circ s$, which shows $(p'\circ f,q'\circ f)\in \CC(s)$. \subsection{The functor $\V\colon \R \to \D$} For any $\R$-object $(t, R)$, we set \begin{align}\label{eq:Vobj} \V(t,R):=(t,\VV{(R)}).
\end{align} For an $\R$-arrow $f\colon (t, R)\to (t', R')$ we define $\V(f)$ to be the $\D$-arrow $f\colon (t, \VV{(R)})$ $\to (t', \VV(R'))$. To check that this is well-defined, we need to show that $\I(f)\circ \VV{(R)}$ factors through $\VV{(R')}$. Indeed, let $p',q' \in \hom_{\T}{(t',\a)}$, and assume $(p',q') \in R'$. Then $(p' \circ f, q'\circ f)\in R$ because $f$ is an $\R$-arrow, and therefore $\I(p')\circ (\I(f) \circ \VV{(R)})=\I(q')\circ (\I(f)\circ \VV{(R)})$ for all $(p',q')\in R'$. By the universal property of the intersection of equalisers $\VV(R')=\bigwedge_{(p',q')\in R'}\Eq{(\I(p'),\I(q'))}$ it follows that $\I(f)\circ \VV{(R)}$ factors through $\VV(R')$. \subsection{The weak adjunction}\label{subs:weak} The Galois connection (\ref{eq:galois}) lifts to an adjunction. \begin{theorem}[Weak affine adjunction]\label{thm:weakadj} The functor $\C\colon \D\to \R$ is left adjoint to the functor $\V\colon \R\to\D$. In symbols, $\C\dashv \V$. \end{theorem} \begin{proof} Let us show that for any $\D$-object $(t, s)$ and any $\R$-object $(t', R')$ we have a natural bijective correspondence between the $\R$-arrows $\C(t, s)=(t, \CC(s))\to (t', R')$ and the $\D$-arrows $(t, s)\to (t', \VV(R'))=\V(t', R')$. Let $f\colon t\to t'$ be a $\T$-arrow. Then $f$ defines an $\R$-arrow $(t, \CC(s))\to (t', R')$ if, and only if, for any $p', q' \in \hom_{\T}{(t', \a)}$, if $(p', q')\in R'$ then $\I(p') \circ \I(f)\circ s=\I(q') \circ \I(f) \circ s$. On the other hand, $f$ defines a $\D$-arrow $(t, s)\to (t', \VV(R'))$ in $\D$ if, and only if, $\I(f)\circ s$ factors through $\VV(R')$, i.e.\ for any $p', q' \in \hom_{\T}{(t', \a )}$, if $(p', q')\in R'$ then $\I(p') \circ \I(f)\circ s=\I(q') \circ \I(f) \circ s$. It is thereby clear that $\C\dashv\V$. \end{proof} \section{The general adjunction}\label{sec:general} We now consider appropriate quotients of the categories $\D$ and $\R$. We shall need a lemma about factorisations of adjoint pairs through quotient categories \cite[II.8]{MMT87}.
A \emph{congruence relation} on a category $\A$ is a family $R$ of equivalence relations $R_{x,x'}$ on $\hom_{\A}{(x,x')}$ indexed by the pairs $(x,x')$ of $\A$-objects, such that for all $\A$-arrows $f_1,g_1 \colon x \rightrightarrows x'$, $f_2,g_2\colon x' \rightrightarrows x''$, if $(f_1,g_1)\in R_{x,x'}$ and $(f_2,g_2)\in R_{x',x''}$ then $(f_2\circ f_1,g_2\circ g_1)\in R_{x,x''}$. The \emph{quotient category} $\A/R$ of $\A$ modulo the congruence $R$ has then as objects the $\A$-objects, and as hom-sets the quotients $\hom_{\A/R}{(x,x')}:=(\hom_{\A}{(x,x')})/R_{(x,x')}$ for each pair of $(\A/R)$-objects $(x,x')$; composition is defined in the obvious manner. There is a canonical projection functor \begin{align*}\label{eq:quotientfunctori} \F_R\colon \A \to \A/R \end{align*} that acts identically on $\A$-objects, and carries the $\A$-arrow $x\to x'$ to the $\A/R$ arrow given by its $R_{x,x'}$-equivalence class. The functor $\F_R$ is universal amongst functors $\G\colon \A \to \Cc$ with the property that $(f,g)\in R_{x,x'}$ implies $\G(f)=\G(g)$; see \cite[Proposition II.8.1]{MMT87}. \begin{lemma}\label{lem:adjointquot} Let $\F\colon \A\to \B$ and $\G\colon\B\to \A$ be two adjoint functors with $\F\dashv\G$. Let $R$ and $S$ be two congruence relations on $\A$ and $\B$, respectively. Suppose that $\F$ preserves $R$, in the sense that $(f,g)\in R_{x,x'}$ implies $(\F(f),\F(g))\in S_{\F(x),\F(x')}$ for all pairs $f,g \colon x \rightrightarrows x'$ of $\A$-arrows. Similarly, suppose that $\G$ preserves $S$. Then the factorisations $\F^{q}\colon\A/R \to \B/S$ and $\G^{q}\colon \B/S\to \A/R$ of $\F$ and $\G$, respectively, through the canonical projection functors $\F_R\colon \A\to \A/R$ and $\G_S\colon \B\to \B/S$ are adjoint with $\F^{q}\dashv \G^{q}$. \end{lemma} \begin{proof} Consider an $\A$-object $x$ and a $\B$-object $y$. Because $\F\dashv\G$, there is a natural bijection $\hom_\B{(\F(x),y})\cong\hom_\A{(x ,\G(y))}$. 
Since $\F$ and $\G$ preserve $R$ and $S$, respectively, it is elementary to verify that there is an induced natural bijection between the quotient sets $\hom_{\B/S}{(\F^q(x),y)}$ and $\hom_{\A/R}{(x ,\G^q(y))}$. Indeed, the arrow $\alpha_{f}:x \to \G(y)$ corresponding to an arrow $f:\F(x) \to y$ under the bijection above is given by the composite of $\G(f)$ with the unit $\eta_{x}:x\to \G(\F(x))$ of the adjunction between $\F$ and $\G$, and hence if $(f, f')\in S$ then $(\alpha_{f}, \alpha_{f'})\in R$ since $\alpha_{f}=\G(f)\circ \eta_{x}$ and $\alpha_{f'}=\G(f')\circ \eta_{x}$ (here we use the fact that $R$ is a congruence and that $\G$ preserves $S$); the proof of the other direction is entirely analogous (it uses the counit of the adjunction between $\F$ and $\G$, the fact that $S$ is a congruence and the fact that $\F$ preserves $R$). \end{proof} \begin{definition}\label{d:Dq} We define $\D^q$ to be the quotient of $\D$ modulo the congruence $\delta$ defined by declaring the $\D$-arrows $f, g\colon (t, s)\to (t', s')$ equivalent precisely when $\I(f)\circ s=\I(g)\circ s$. \end{definition} It is an exercise to check that the relation above is indeed a congruence. Symmetrically, \begin{definition}\label{d:Rq} We define $\R^{q}$ to be the quotient of $\R$ modulo the congruence $\rho$ defined by declaring the $\R$-arrows $f, g\colon(t, R)\to (t', R')$ equivalent precisely when the factorisations of $-\circ f, -\circ g\colon \hom_{\T}{(t', \a )} \rightrightarrows \hom_{\T}{(t, \a )}$ through the quotient sets $\hom_{\T}{(t', \a )}/\overline{R'}$ and $\hom_{\T}{(t, \a )}/\overline{R}$ are equal, where $\overline{R'}$ and $\overline{R}$ denote the equivalence relations generated by $R'$ and $R$, respectively. (Recall Remark \ref{rem:equivfactor}.) \end{definition} Once more, it is elementary to verify that the relation defined above is indeed a congruence. We therefore have canonical projection functors $\C_\delta\colon\D \to\D^q$ and $\V_\rho\colon \R\to\R^q$.
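A toy instance of the congruence $\delta$ in the classical setting (our own example, under the identifications of the opening section): take $k=\mathbb{R}$ and the affine subset $S=\{0,1\}\subseteq\mathbb{R}$. The polynomial maps

```latex
f(x)=x \qquad\text{and}\qquad g(x)=x^{2}
```

agree at $0$ and at $1$, hence are identified by $\delta$ and define a single $\D^{q}$-arrow, although $f\neq g$ as polynomials. The quotient $\D^{q}$ thus implements, in the abstract setting, the classical convention that a regular map is an equivalence class of polynomial functions agreeing on $S$.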
\begin{remark}\label{rem:structured} The arrows in the categories $\D^{q}$ and $\R^{q}$ admit the following more concrete descriptions (say for $\S=\Set$): the arrows $(t, s)\to (t', s')$ in $\D^{q}$ are precisely the functions $\dom(s)\to \dom(s')$ which are restrictions of some arrow of the form $\I{(f)}$ where $f:t\to t'$, while the arrows $(t, R)\to (t', R')$ in $\R^{q}$ are precisely the functions $\hom_{\T}{(t', \a )}/\overline{R'} \to \hom_{\T}{(t, \a )}/\overline{R}$ which are induced by an arrow $t\to t'$ in the sense specified above. In fact, in Remark \ref{r:RandD-structured-on-set} we shall define functors that, under the hypotheses of Theorem \ref{thm:schizo}, realise ${\R^{q}}^{\textrm{op}}$ and $\D^{q}$ (in case $\S=\Set$) as concrete categories structured over $\Set$. \end{remark} \begin{definition}\label{def:coseparator} We say that the $\T$-object $\a$ is an \emph{$\I$-coseparator} if for any $\T$-object $t$ the family of arrows $\I(\f)\colon \I(t)\to \I(\a)$, as $\f$ ranges over all $\T$-arrows $\f\colon t \to \a$, is jointly monic in $\S$. In other words, given any two $\S$-arrows $h_1,h_2\colon S \to \I(t)$, if $\I(\f)\circ h_1=\I(\f)\circ h_2$ for all $\f$, then $h_1=h_2$. \end{definition} \begin{lemma}\label{lem:cosep}\textup{(i)}\, The functor $\C\colon \D \to \R$ preserves $\delta$. \textup{(ii)}\, If the $\T$-object $\a$ is an $\I$-coseparator, the functor $\V\colon \R \to \D$ preserves $\rho$. \end{lemma} \begin{proof}(i)\, Let $x=(t,s),x'=(t',s')$ be $\D$-objects, and let $f,g\in\hom{(x,x')}$ be such that $(f,g)\in\delta_{x,x'}$. Explicitly, \begin{equation} \label{eq:1}\I(f)\circ s=\I(g)\circ s\ . \end{equation} We need to show that $(\C(f),\C(g))\in\rho_{\C(x),\C(x')}$.
Since $\C(x)=(t,\CC(s))$, $\C(x')=(t',\CC(s'))$, $\C(f)=f$, and $\C(g)=g$, we equivalently need to show that the factorisations of $-\circ f, -\circ g\colon \hom_{\T}{(t', \a )} \rightrightarrows \hom_{\T}{(t, \a )}$ through the quotient sets $\hom_{\T}{(t', \a )}/\overline{\CC(s')}$ and $\hom_{\T}{(t, \a )}/\overline{\CC(s)}$ are equal. That is, for all $\f\in\hom{(t',\a)}$ the equality $(\f\circ f)/\overline{\CC(s)}=(\f\circ g)/\overline{\CC(s)}$ holds, or equivalently, $((\f\circ f),(\f\circ g))\in \CC(s)$. The latter means by definition, cf.\ (\ref{eq:RS}), that $\I(\f\circ f)\circ s=\I(\f\circ g)\circ s$, which can be obtained from (\ref{eq:1}) above by composing both sides with $\I(\f)$. \smallskip \noindent (ii)\, Let $(t,R), (t',R')$ be $\R$-objects, and suppose that $(f,g)\in\rho_{(t,R),(t',R')}$. This holds if, and only if, for all $\f\colon t'\to \a $, $(\f\circ f,\f\circ g)\in\overline{R}$, which in turn entails $\I(\f\circ f)\circ \VV{(R)}=\I(\f\circ g)\circ \VV{(R)}$, by the definition (\ref{eq:SR}) of $\VV{(R)}$ as an intersection of equalisers. Since $\a$ is an $\I$-coseparator we conclude that $\I(f)\circ \VV{(R)}=\I(g)\circ \VV{(R)}$. Recalling that $\V(f)=f$ and $\V(g)=g$, the last equality holds precisely when $(\V(f),\V(g))\in\delta_{\V(t,R),\V(t',R')}$. \end{proof} \begin{assumption}\label{ass:cosep}In light of Lemma \ref{lem:cosep}.ii, we henceforth assume that the $\T$-object $\a$ is an $\I$-coseparator. \end{assumption} \begin{definition}\label{def:quotients} We let $\C^{q}\colon\D^q \to \R^{q}$ and $\V^{q}\colon\R^q \to \D^{q}$ be the functors given by Lemma \ref{lem:cosep} as the canonical factorisations of the functors $\V_{\rho}\circ \C$ and $\C_{\delta}\circ \V$ through the projection functors $\C_{\delta}$ and $\V_{\rho}$, respectively.
\end{definition} \begin{center} \begin{tikzpicture}[scale=0.4] \node (S) at (0,0) {$\D^{q}$}; \node (VTS) at (0,5) {$\D$}; \node (T) at (5,0) {$\R^{q}$}; \node (VTT) at (5,5) {$\R$}; \draw[->] (S) to [bend right] node[below, midway] {$\C^{q}$} (T); \draw [<-] (S) to [bend left] node[above, midway] {$\V^{q}$} node [below, midway,yshift=-0.15cm]{$\top$} (T) ; \draw [->] (VTS) to [bend right] node[below, midway] {$\C$} (VTT); \draw [<-] (VTS) to [bend left] node [above, midway] {$\V$} node [below, midway,yshift=-0.1cm]{$\top$} (VTT) ; \draw [<-] (S) -- (VTS) node [left, midway] {$\C_{\delta}$}; \draw [<-] (T) -- (VTT) node [right, midway] {$\V_{\rho}$}; \end{tikzpicture} \end{center} \begin{theorem}[General affine adjunction]\label{thm:genadj} Under our standing Assumption \ref{ass:cosep}, the functors $\C^{q}\colon\D^{q} \to \R^{q}$ and $\V^{q}\colon \R^{q} \to \D^{q}$ satisfy $\C^q\dashv\V^q$.\end{theorem} \begin{proof} Combine Lemma \ref{lem:cosep} and Lemma \ref{lem:adjointquot}. \end{proof} \begin{remark}\label{rem:nullquotient}The weak {\sl Nullstellensatz} (Theorem \ref{thm:null}) applies {\it verbatim} to the quotient categories $\D^{q}$ and $\R^{q}$, too. Indeed, the theorem does not mention morphisms in $\D$ or $\R$ at all. \end{remark} \section{Further general theory}\label{s:further} \subsection{Comprehensiveness of the framework}\label{universality} We prove in this section that any duality meeting Assumption \ref{ass:limits} is amenable to our framework. \begin{theorem} Suppose a duality between two categories $\A$ and $\B$ is given by functors $\F\colon\A^{\rm op}\to \B$ and $\G\colon\B\to \A^{\rm op}$, and suppose further that $\B$ satisfies the requirements in Assumption \ref{ass:limits}.
Then there exist categories $\T,\S$, a functor $\I\colon \T\to\S$, and an object $\a$ in $\T$ such that: \begin{enumerate} \item $\A^{\rm op}$ is equivalent to a full subcategory $\R_{eq}$ of $\R$, \item $\B$ is equivalent to a full subcategory $\D_{eq}$ of $\D$, \item the suitable compositions of the above equivalences with $\V$ and $\C$ yield $\F$ and $\G$. \end{enumerate} \end{theorem} \begin{proof} Let us set \begin{itemize} \item $\T:=\A^{\textrm{op}}$, \item $\S:=\B$, \item $\I\colon\T \to \S$ equal to $\F\colon\A^{\textrm{op}} \to \B$, and \item $\Delta$ a fixed but arbitrary object of $\A$. \end{itemize} The categories $\R$ and $\D$ are defined as in sections \ref{ss:D} and \ref{ss:R}. Define the full subcategories $\R_{eq}$ and $\D_{eq}$ as follows. The former is given by the pairs $(t,R)$ in $\R$ such that $R$ is the identity relation on $\hom(t,\a)$, which we call $\id_{t}$. The latter is given by the pairs $(t,s)$ in $\D$ such that $s$ is the identity (subobject) of $\I(t)$, which we call $1_{t}$. Notice that if $f:t\to t'$ is \emph{any} arrow in $\T$, then $f$ is also an arrow in $\R_{eq}$. Indeed, $(p,q)\in \id_{t'}$ if and only if $p=q$, and this implies $p\circ f=q\circ f$, i.e.\ $(p\circ f,q\circ f)\in \id_{t}$. Further, $f$ is also an arrow in $\D_{eq}$ for similarly trivial reasons. It is useful to calculate how $\V$ and $\C$ operate on $\R_{eq}$ and $\D_{eq}$. Given an object $(t,\id_{t})$ in $\R_{eq}$, we have $\V(t,\id_{t})=(t,\VV(\id_{t}))$, where \begin{align*} \VV(\id_{t})=\bigwedge_{(p,q)\in \id_{t}}Eq(\I(p),\I(q))=\I(t)=1_{t}\ . \end{align*} Given an object $(t,1_{t})$ in $\D_{eq}$, we have $\C(t,1_{t})=(t,\CC(1_{t}))$, where \begin{align} \CC(1_{t})&=\{(p,q)\in\hom_{\T}(t,\a)\mid \I(p)\circ 1_{t}=\I(q)\circ 1_{t}\}\nonumber\\ &=\{(p,q)\in\hom_{\T}(t,\a)\mid \I(p)=\I(q)\}\label{eq:2}\\ &=\{(p,q)\in\hom_{\T}(t,\a)\mid p=q\}\label{eq:3}\\ &=\id_{t}\ . \nonumber \end{align} Here \eqref{eq:2} holds because $1_{t}$ is an isomorphism and \eqref{eq:3} because $\I=\F$ is faithful.
In fact, $\V$ and $\C$ induce an equivalence between $\R_{eq}$ and $\D_{eq}$. We now define four functors as follows: \begin{enumerate} \item $\U_{\R_{eq}}\colon \R_{eq}\to \A^{\textrm{op}}$ is defined on objects as $\U_{\R_{eq}}(t,\id_{t}):=t$ and as the identity on arrows. \item $\U_{\D_{eq}}\colon \D_{eq} \to \B$ is defined on objects as $\U_{\D_{eq}}(t,1_{t}):=1_{t}$ and on arrows as $\F$. \item $\mathscr{L}\colon \A^{\textrm{op}} \to \R_{eq}$ is defined on objects as $\mathscr{L}(x):=(x,\id_{x})$ and as the identity on arrows. \item $\mathscr{R}\colon \B\to \D_{eq}$ is defined on objects as $\mathscr{R}(y):=(\G(y),1_{\G(y)})$ and on arrows as $\G$. \end{enumerate} So we have the following diagram: \[ \begin{tikzpicture}[every node/.style={on grid}, scale=0.4] \node (S) at (0,0) {$\R_{eq}$}; \node (VTS) at (0,5) {$\A^{\textrm{op}}$}; \node (T) at (5,0) {$\D_{eq}$}; \node (VTT) at (5,5) {$\B$}; \draw [->] (S.10) -- (T.170) node [above, midway] {$\V$}; \draw [<-] (S.-10) -- (T.-170) node [below, midway] {$\C$}; \draw [->] (VTS.10) -- (VTT.165) node [above, midway] {$\F$}; \draw [<-] (VTS.-10) -- (VTT.-165) node [below, midway] {$\G$}; \draw [->] (S.100) -- (VTS.260) node [left, midway] {$\U_{\R_{eq}}$}; \draw [<-] (S.80) -- (VTS.280) node [right, midway] {$\mathscr{L}$}; \draw [<-] (T.100) -- (VTT.260) node [left, midway] {$\mathscr{R}$}; \draw [->] (T.80) -- (VTT.280) node [right, midway] {$\U_{\D_{eq}}$}; \end{tikzpicture} \] Notice that the pairs $\mathscr{L},\U_{\R_{eq}}$ and $\mathscr{R},\U_{\D_{eq}}$ are equivalences; indeed: \begin{itemize} \item for any $(t,\id_{t})$ in $\R_{eq}$, $\mathscr{L}\circ\U_{\R_{eq}}(t,\id_{t})=(t,\id_{t})$ and for any $x$ in $\A$, $\U_{\R_{eq}}\circ \mathscr{L}(x)=x$; \item for any $(t,1_{t})$ in $\D_{eq}$, $\mathscr{R}\circ\U_{\D_{eq}}(t,1_{t})=\mathscr{R}(1_{t})=\mathscr{R}(\I(t))=\mathscr{R}(\F(t))= (\G(\F(t)),1_{\G(\F(t))})$ and for any $y$ in $\B$, $\U_{\D_{eq}}\circ
\mathscr{R}(y)=\U_{\D_{eq}}(\G(y),1_{\G(y)})=1_{\G(y)}=\F(\G(y))$. \end{itemize} Finally we calculate the compositions: \begin{itemize} \item For any $x\in \A$, $\U_{\D_{eq}}\circ\V\circ\mathscr{L}(x)=\U_{\D_{eq}}\circ\V(x,\id_{x})=\U_{\D_{eq}}(x,1_{x})=1_{x}=\F(x)$; \item For any $y\in \B$, $\U_{\R_{eq}}\circ\C\circ\mathscr{R}(y)=\U_{\R_{eq}}\circ\C(\G(y),1_{\G(y)})=\U_{\R_{eq}}(\G(y),\id_{\G(y)})=\G(y)$. \end{itemize} \end{proof} \subsection{The case of a representable $\I\colon\T\to \S$}\label{sec:representable} In this section we show that, under the hypothesis that the functor $\I\colon\T\to \S$ is representable, a suitable restriction (spelled out in Remark \ref{rem:restradj}) of the adjunction of Theorem \ref{thm:genadj} is induced by a dualising object, in the sense of the definition below. Recall that a functor $\F$ from a category $\Cat$ into $\Set$ is called \emph{representable} if there exists an object $c\in\Cat$ such that $\F$ is naturally isomorphic to the functor $\hom(c,-)$. A representation of $\F$ is a pair $(c, \psi)$ where $\psi\colon \hom(c,-) \to \F$ is a natural isomorphism. \begin{definition}\label{d:schizo} Let $\sf{A}$ and $\sf{B}$ be two categories equipped with functors $\U_{\sf{A}}:\sf{A} \to \Set$ and $\U_{\sf{B}}:\sf{B}^{\textrm{op}} \to \Set$. We say that an adjunction given by functors $\F:\sf{A}\to \sf{B}^{\rm op}$ and $\G:\sf{B}^{\rm op}\to \sf{A}$ is \emph{induced by a dualising object} if there exist objects $a\in \sf{A}$ and $b\in \sf{B}$ such that \begin{enumerate} \item\label{d:schizo:item1} $\U_{\sf{A}}$ is representable by $a$, \item\label{d:schizo:item2} $\U_{\sf{B}}$ is representable by $b$, \item\label{d:schizo:item3} the composite functor ${\U_{\sf{B}}} \circ \F$ is represented by the object $\G(b)$, \item\label{d:schizo:item4} the composite functor ${\U_{\sf{A}}} \circ \G$ is represented by the object $\F(a)$, and \item\label{d:schizo:item5} $\U_{\sf{B}}(\F(a))=\U_{\sf{A}}(\G(b))$.
\end{enumerate} \end{definition} \begin{remark} If the functors $\F$ and $\G$ define a categorical equivalence between $\sf{A}$ and $\sf{B}$, then such an equivalence is induced by a dualising object if and only if conditions \eqref{d:schizo:item1},\eqref{d:schizo:item2}, and \eqref{d:schizo:item5} hold, for in this case conditions \eqref{d:schizo:item3} and \eqref{d:schizo:item4} follow from the other ones. \end{remark} \begin{remark}\label{r:RandD-structured-on-set} Notice that we can always define a forgetful functor $\U_{\R^{q}}\colon{\R^{q}}^{\textrm{op}}\to \Set$ as follows. For any object $(t, R)$ of ${\R^{q}}$, we set \begin{align*} \U_{\R^{q}}((t, R)):=\hom_{\T}(t, \a)\slash \overline{R} \end{align*} and for any arrow $f\colon(t, R)\to (t', R')$ in ${\R^{q}}$ we set \begin{align*} \U_{\R^{q}}(f):=f'\ , \end{align*} where $f'\colon \hom_{\T}(t', \a)\slash \overline{R'} \to \hom_{\T}(t, \a)\slash \overline{R}$ is the factorisation of \begin{align*} -\circ f\colon \hom_{\T}{(t', \a)} \to \hom_{\T}{(t, \a)} \end{align*} across the quotients $\hom_{\T}(t', \a)\slash \overline{R'}$ and $\hom_{\T}(t, \a)\slash \overline{R}$ (recall Remark \ref{rem:equivfactor}). The fact that this functor is faithful comes immediately from the fact that if two arrows yield the same factorisation, then they are equal in $\R^{q}$. If, in addition, $\S=\Set$, we can define another forgetful functor $\U_{\D^{q}}: \D^{q} \to \Set$ as follows. For any $(t, s)\in \D^{q}$, we set \begin{align*} \U_{\D^{q}}((t, s)):=\dom(s)\ , \end{align*} and for any arrow $f\colon(t, s)\to (t', s')$ in $\D^{q}$ we set \begin{align*} \U_{\D^{q}}(f):=g\ , \end{align*} where $g\colon\dom(s)\to \dom(s')$ is the factorisation of $\I(f)\circ s$ through $s'$, as required in section \ref{ss:D}. The functor $\U_{\D^{q}}$ is faithful, for if $f$ and $f'$ yield the same factorisation, then they are the same arrow in $\D^{q}$. \end{remark} For the sequel, it is important to introduce the following notion.
\begin{definition} Let $(t, R)$ be an object of the category $\R^{q}$. We say that the relation $R$ is $\a$-stable if for any $(h,k) \in R$ and any arrow $f\colon \a\to \a$ in $\T$, $(f\circ h, f\circ k)\in R$. \end{definition} \begin{remark}\label{rem:restradj} Notice that the functor $\C^{q}\colon\D^{q} \to \R^{q}$ takes values in the full subcategory $\R^{q}_{s}$ of $\R^{q}$ on the objects $(t, R)$ such that $R$ is $\a$-stable. Therefore, if we denote by $\C^{q}_{s}\colon\D^{q} \to \R_{s}^{q}$ this restriction of the functor $\C^{q}$, and by $\V_{s}^{q}\colon \R_{s}^{q} \to \D^{q}$ the restriction of the functor $\V^{q}\colon \R^{q} \to \D^{q}$ to the subcategory $\R_{s}^{q}$, we have that the adjunction of Theorem \ref{thm:genadj} restricts to an adjunction $\C_{s}^q\dashv\V_{s}^q$. We shall denote by $\D_{i}^{q}$ the full subcategory of $\D^{q}$ whose objects are (up to isomorphism) of the form $\V_{s}^{q}(t, R)$ for some object $(t, R)$ of $\R_{s}^{q}$, and by $\C_{i}^q:\D^{q}_{i}\to \R_{s}^{q} $ and $\V_{i}^q:\R_{s}^{q} \to \D^{q}_{i}$ the restricted functors; then we clearly have an adjunction $\C_{i}^q\dashv\V_{i}^q$. \end{remark} In the following let us denote by $\id_{\a}$ the identical relation on the set $\hom_{\T}(\a, \a)$. \begin{lemma}\label{l:representable} The functor $\U_{\R^{q}}\colon{\R^{q}}^{\textrm{op}}\to \Set$ is represented by the object $(\a, \id_{\a})$ of ${\R^{q}}$. \end{lemma} \begin{proof} We need to provide a set-theoretical bijection $\hom_{\R^{q}}((t, R), (\a, \id_{\a}))\cong \hom_{\T}(t, \a)\slash \overline{R}$ naturally in $(t, R)\in \R^{q}$. By Definition \ref{d:Rq}, the arrows $(t, R)\to (\a, \id_{\a})$ in $\R^{q}$ are the arrows $f\colon t \to \a$ in $\T$ (as all of them preserve $\id_{\a}$), modulo the equivalence relation $\rho$. 
Recall from Definition \ref{d:Rq} that $f\rho f'$ if, and only if, the factorisations of $-\circ f, -\circ f'$ through $\hom_{\T}(\a, \a)\slash \id_{\a}$ and $\hom_{\T}(t, \a)\slash \overline{R}$ (which by an abuse of notation we still indicate by $-\circ f$ and $-\circ f'$) are equal. This latter condition is equivalent to saying that $(f, f')\in \overline{R}$. Indeed, if $-\circ f= -\circ f'$ then $(-\circ f)(1_{\a})=(-\circ f')(1_{\a})$ in $\hom_{\T}(t, \a)\slash \overline{R}$, i.e.\ $(f, f')\in \overline{R}$. For the other implication, notice that, by assumption, $R$ is $\a$-stable. This amounts to saying that, for any arrow $g\in\hom_{\T}(\a,\a)$, $(g\circ f,g\circ f')\in \overline{R}$, hence $(-\circ f)(g)$ is equal to $(-\circ f')(g)$ in $\hom_{\T}(t, \a)\slash \overline{R}$. This proves that sending an $\R^{q}$-arrow from $(t,R)$ to $(\a,\id_{\a})$ to its $\overline{R}$-equivalence class in $\hom_{\T}(t,\a)$ gives a bijection. \end{proof} \begin{theorem}\label{thm:schizo} Under our standing Assumption \ref{ass:cosep} that $\a$ is an $\I$-coseparator, if the category $\S$ coincides with $\Set$ and the functor $\I \colon \T \to \S$ is represented by an object $\b$, then the adjunction $\C_{i}^q\dashv\V_{i}^q$ of Remark \ref{rem:restradj} is induced by a dualising object. \end{theorem} \begin{proof} Notice that the functor $\U_{\R^{q}}\colon{\R^{q}}^{\textrm{op}}\to \Set$ is represented by the object $(\a, \id_{\a})$ of ${\R^{q}}$ by Lemma \ref{l:representable}. Let us denote by $1_{\b}$ the identical subobject of $\I(\b)$. \noindent\textbf{Claim (1)} The functor $\U_{\D^{q}_{i}}: \D_{i}^{q} \to \Set$ is represented by the object $(\b, 1_{\b})$ of $\D^{q}$.\\ A natural isomorphism from $\hom((\b, 1_{\b}), -)$ to $\U_{\D^{q}}$ amounts to a set-theoretical bijection between $\hom_{\D_{i}^{q}}((\b, 1_{\b}), (t, s))$ and $\dom(s)$, holding naturally in $(t, s)\in \D_{i}^{q}$.
Notice that, by definition of the category $\D_{i}^{q}$, any $(t, s)$ in $\D_{i}^{q}$ has the form $(t, \VV{(R)})$, for some object $(t, R)$ of the category $\R^{q}_{s}$. The arrows $(\b, 1_{\b})\to (t, s)$ in $\D^{q}$ are, by Definition \ref{d:Dq}, the arrows $f\colon\b \to t$ such that $\I(f)\circ 1_{\b}=\I(f)$ factors through $s=\VV{(R)}$, modulo the equivalence relation $\simeq$ given by: \begin{align*} f\simeq f'\text{ if and only if }\I(f)=\I(f')\ . \end{align*} Now, since the functor $\I$ is represented by the object $\b$, we have a natural isomorphism \begin{align} \label{eq:I-rep} \xi\colon\hom_{\T}(\b, -)\cong \I, \end{align} from which it follows at once that for any $k\in \hom_{\T}(\b, t)$ the following diagram commutes: \begin{figure}[h!] \begin{small} \smallskip \begin{center} \begin{tikzpicture}[scale=0.4] \node (F) at (0,5) {$\hom_{\T}(\b, \b)$}; \node (A) at (0,0) {$\I(\b)$}; \node (C) at (8,0) {$\I(t)$}; \node (D) at (8, 5) {$\hom_{\T}(\b, t)$}; \draw [->] (F) -- (D) node [above, midway] {$k\circ -$}; \draw [->] (F) -- (A) node [left, midway] {$\xi_{\b}$} node [right, midway] {$\cong$}; \draw [->] (D) -- (C) node [right, midway] {$\xi_{t}$} node [left, midway] {$\cong$}; \draw [->] (A) -- (C) node [above, midway] {$\I(k)$}; \end{tikzpicture} \end{center} \end{small} \caption{The naturality diagram for the isomorphism $\I\cong \hom_{\T}(\b, -)$ with respect to $k$.} \label{fig:nat4} \end{figure} It follows that for any $k\in \hom_{\T}(\b, t)$, $\xi_{t}(k)=\I(k)(\xi_{\b}(1_{\b}))$. Therefore for any $f, f'\in \hom_{\T}(\b, t)$, \begin{align*} \I(f)=\I(f')\text{ if and only if }\xi_{t}(f)=\xi_{t}(f'). \end{align*} Indeed, $\xi_{t}(f)=\xi_{t}(f')$ implies $f=f'$ (since $\xi_{t}$ is an isomorphism) and hence $\I(f)=\I(f')$, while $\I(f)=\I(f')$ implies $\xi_{t}(f)=\xi_{t}(f')$, since $\xi_{t}(f)=\I(f)(\xi_{\b}(1_{\b}))$ and $\xi_{t}(f')=\I(f')(\xi_{\b}(1_{\b}))$.
Therefore, for any arrow $f\colon\b \to t$ in $\T$, we have that $\I(f)$ factors through $s$ if and only if $\xi_{t}(f)$ lies in $\dom(s)$. This can be proved as follows. We have that $\I(f)$ factors through $s$ if and only if for any $(h, k)\in R$, $\I(h)\circ \I(f)=\I(k)\circ \I(f)$ (i.e. $\I(h\circ f)=\I(k\circ f)$); but, by the remarks above, this holds if and only if $\xi_{\Delta}(k\circ f)=\xi_{\Delta}(h\circ f)$ which, by the naturality of $\xi$ with respect to the arrows $k$ and $h$, is equivalent to the requirement $\I(k)(\xi_{t}(f))=\I(h)(\xi_{t}(f))$, which holds if $\xi_{t}(f)$ lies in the image of $s=\VV{(R)}$ (by definition of $\VV{(R)}$). Conversely, if $\I(f)$ factors through $s$ then \emph{a fortiori} $\xi_{t}(f)=\I(f)(\xi_{\b}(1_{\b}))\in \dom(s)$. The arrows $(\b, 1_{\b})\to (t, \VV{(R)})$ in $\D^{q}$ can thus be naturally identified with the elements of $\I(t)$ which are in the image of the subobject $s$, i.e. with the elements of $\dom(s)$, as required. \noindent\textbf{Claim (2)} The composite functor $\U_{\R^{q}} \circ {\C_{s}^{q}}^{\textrm{op}}:{\D^{q}}^{\textrm{op}}\to \Set$ is represented by the object $\V_{s}^{q}((\a, \id_{\a}))=(\a, 1_{\a})$.\\ To verify this, we have to exhibit a bijection $\hom_{\D^{q}}((t, s), (\a, 1_{\a}))\cong \hom_{\T}(t, \a)\slash \CC{(s)}$ natural in $(t, s)\in \D^{q}$. Applying Definition \ref{d:Dq}, we obtain that the arrows $(t, s)\to (\a, 1_{\a})$ in $\D^{q}$ are the arrows $f\colon t\to \a$ modulo the equivalence relation $\simeq$ defined by $f\simeq f'$ if and only if $\I(f)\circ s=\I(f')\circ s$. By Definition \ref{def:C}, this latter condition is satisfied precisely when $(f, f')\in \CC{(s)}$, as required.
\noindent\textbf{Claim (3)} The composite functor $\U_{\D^{q}}\circ \V_{s}^{q}\colon \R_{s}^{q}\to \Set$ is represented by the object $\C_{s}^{q}((\b, 1_{\b}))=(\b, \id_{\b})$.\\ We have to check that there exists a bijection \[\hom_{\R^{q}}((\b, \id_{\b}), (t, R))\cong \dom(\VV{(R)}),\] natural in $(t, R)\in \R^{q}$. By Definition \ref{d:Rq}, the arrows $(\b, \id_{\b}) \to (t, R)$ in $\R^{q}$ are the arrows $f\colon \b \to t$ such that for any $(h, k)\in R$, $h\circ f=k\circ f$, modulo the equivalence relation $\simeq$ given by: $f\simeq f'$ if and only if the factorisations of the arrows $-\circ f, -\circ f'$ through $\hom_{\T}(t, \a)\slash \overline{R}$ and $\hom_{\T}(\b, \a)$ are equal. We claim that \begin{align} f\simeq f'\text{ if and only if }f=f'.\label{eq:claim} \end{align} Recall that the functor $\I$ is represented by the object $\b$. So, for any arrow $g\colon t \to \a$, \eqref{eq:I-rep} gives the following commutative naturality square. \begin{figure}[h!] \begin{small} \smallskip \begin{center} \begin{tikzpicture}[scale=0.4] \node (F) at (0,5) {$\hom_{\T}(\b, t)$}; \node (A) at (0,0) {$\I(t)$}; \node (C) at (8,0) {$\I(\a)$}; \node (D) at (8, 5) {$\hom_{\T}(\b, \a)$}; \draw [->] (F) -- (D) node [above, midway] {$g\circ -$}; \draw [->] (F) -- (A) node [left, midway] {$\xi_{t}$} node [right, midway] {$\cong$}; \draw [->] (D) -- (C) node [right, midway] {$\xi_{\a}$} node [left, midway] {$\cong$}; \draw [->] (A) -- (C) node [above, midway] {$\I(g)$}; \end{tikzpicture} \end{center} \end{small} \caption{The naturality diagram for the isomorphism $\I\cong \hom_{\T}(\b, -)$ with respect to $g$.} \label{fig:nat2} \end{figure} Notice that the factorisations of $-\circ f$ and $-\circ f'$ are equal if and only if for every $g\in \hom_{\T}(t, \a)$, $g\circ f=g\circ f'$, if, and only if, $(g\circ -)(f)=(g\circ -)(f')$. Hence, the commutativity of the above diagram allows us to rewrite the latter condition as $\I(g)(\xi_{t}(f))=\I(g)(\xi_{t}(f'))$.
Now, since $\a$ is an $\I$-coseparator, $\I(g)(\xi_{t}(f))=\I(g)(\xi_{t}(f'))$ holds for all $g\in \hom_{\T}(t, \a)$ if and only if $\xi_{t}(f)=\xi_{t}(f')$. In turn, $\xi_{t}$ being an isomorphism, the latter is equivalent to $f=f'$. This completes the proof of \eqref{eq:claim}. In order to complete the proof of the claim, it remains to observe that for any $f\in \hom_{\T}(\b, t)$, $\xi_{t}(f)\in \dom(\VV(R))$ if and only if for any $(h, k)\in R$, $h\circ f=k\circ f$. Indeed, $\xi_{t}(f)\in \dom(\VV(R))$ if and only if for any $(h, k)\in R$, $\I(k)(\xi_{t}(f))=\I(h)(\xi_{t}(f))$; but, by the naturality of $\xi$, $\I(k)(\xi_{t}(f))=\xi_{\Delta}(k\circ f)$ and $\I(h)(\xi_{t}(f))=\xi_{\Delta}(h\circ f)$, whence $\I(k)(\xi_{t}(f))=\I(h)(\xi_{t}(f))$ if and only if $\xi_{\Delta}(h\circ f)=\xi_{\Delta}(k\circ f)$, i.e.\ if and only if $h\circ f=k\circ f$. To conclude the whole proof, it remains to observe that $\U_{\D^{q}}((\a, 1_{\a}))=\I(\a)$, $\U_{\R^{q}}((\b, \id_{\b}))=\hom_{\T}(\b, \a)$, and that, by the representability of $\I$, $\I(\a)\cong\hom_{\T}(\b, \a)$. \end{proof} \begin{remark} The functors $\U_{\D^{q}}: \D^{q} \to \Set$ and $\U_{\R^{q}}:{\R^{q}}^{\textrm{op}}\to \Set$ defined above are faithful and representable, that is, they realise $\D^{q}$ and ${\R^{q}}^{\textrm{op}}$ as concrete categories structured over $\Set$ (cf. Remark \ref{rem:structured}). \end{remark} \subsection{The setting of syntactic categories}\label{syntacticcategories} The notion of \emph{syntactic category} of a first-order theory is particularly useful in connection with the abstract categorical framework for generating affine adjunctions developed in Part \ref{part:general}. In fact, it allows us to apply this framework in contexts which go well beyond the standard setting of universal algebra investigated in Part \ref{part:alg}.
In the following paragraphs we recall just the basic notions useful for our analysis; for more details we refer the reader to an introduction to categorical logic and topos theory (see e.g., \cite{CaramelloBackground}). \begin{definition} Let $\mathbb T$ be a theory over a signature $\mathcal{L}$ in a given fragment of first-order logic. The \emph{syntactic category} $\mathcal{C}_{\mathbb T}$ of $\mathbb T$ has as objects the formulas-in-context $\{\vec{x}. \phi\}$ over the signature (considered up to `renaming' of variables), where the context $\vec{x}$ contains all the free variables appearing in the formula $\phi$. The arrows $\{\vec{x}. \phi\} \to \{\vec{y}. \psi\}$\footnote{We always suppose, without loss of generality, that the contexts $\vec{x}$ and $\vec{y}$ are disjoint.} are the $\mathbb T$-provable equivalence classes $[\theta]$ of formulas $\theta(\vec{x}, \vec{y})$ which are $\mathbb T$-provably functional from $\phi(\vec{x})$ to $\psi(\vec{y})$, in the sense that the sequents \[ (\phi \vdash_{\vec{x}} (\exists \vec{y})\theta(\vec{x}, \vec{y})), \] \[ (\theta \vdash_{\vec{x}, \vec{y}} \phi \wedge \psi) \] and \[ (\theta(\vec{x}, \vec{y}) \wedge \theta(\vec{x}, \vec{z}\slash \vec{y}) \vdash_{\vec{x}, \vec{y}, \vec{z}} \vec{y}=\vec{z}) \] are provable in $\mathbb T$\footnote{Recall that a sequent $(\phi \vdash_{\vec{x}} \psi)$ has the same meaning as the first-order sentence $(\forall \vec{x})(\phi \!\Rightarrow\! \psi)$.}. \end{definition} The notion of $\mathbb T$-provably functional formula naturally generalises the notion of (morphism defined by) a term; indeed, for any term $t(\vec{x})$, the formula $\vec{y}=t(\vec{x})$ is provably functional from $\{\vec{x}. \top\}$ to $\{\vec{y}. \top\}$.
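For instance, in the algebraic theory of groups the multiplication term yields an arrow $\{x_{1}, x_{2}. \top\}\to \{y. \top\}$ of the syntactic category: the formula $\theta(x_{1}, x_{2}, y)$ given by $y=x_{1}\cdot x_{2}$ is provably functional, because the sequents \[ (\top \vdash_{x_{1}, x_{2}} (\exists y)\, y=x_{1}\cdot x_{2}) \quad\text{ and }\quad (y=x_{1}\cdot x_{2} \wedge z=x_{1}\cdot x_{2} \vdash_{x_{1}, x_{2}, y, z} y=z) \] are provable already in pure equational logic, the remaining sequent $(\theta \vdash_{x_{1}, x_{2}, y} \top \wedge \top)$ being trivial.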
We shall be concerned in particular with syntactic categories of \emph{geometric theories}, i.e.\ (many-sorted) first-order theories whose axioms can be presented in the form $(\forall \vec{x})(\phi(\vec{x})\imp \psi(\vec{x}))$, where $\phi(\vec{x})$ and $\psi(\vec{x})$ are \emph{geometric formulas}, that is, first-order formulas built up from atomic formulas by only using finitary conjunctions, possibly infinitary disjunctions and existential quantifications. One can consider models of (many-sorted) first-order theories not only in the classical set-theoretic context, but in arbitrary Grothendieck toposes. The sorts of the language over which the theory is written are interpreted as objects of the topos, the function symbols as arrows, and the relation symbols as subobjects; the interpretation of the formulas is given by subobjects defined inductively from these data by using the categorical structure present in the topos, mimicking the classical Tarskian definition of first-order structure. Recall that the category $\Set$ of sets and functions between them is a Grothendieck topos. For any geometric theory $\mathbb{T}$, the models of $\mathbb T$ in any Grothendieck topos $\mathcal{E}$ can be identified with functors ${\mathcal{C}}_{\mathbb T}\to \mathcal{E}$ preserving the geometric structure on the category ${\mathcal{C}}_{\mathbb T}$. The functor $F_{M}:{\mathcal{C}}_{\mathbb T}\to \mathcal{E}$ corresponding to a $\mathbb T$-model $M$ in $\mathcal{E}$ sends $\{\vec{x}. \phi\}$ to (the domain of) its interpretation $[[\vec{x}. \phi]]_{M}$ in $M$ and any arrow $[\theta]:\{\vec{x}. \phi\} \to \{\vec{y}. \psi\}$ in $\mathcal{C}_{\mathbb T}$ to the arrow $[[\vec{x}. \phi]]_{M}\to [[\vec{y}. \psi]]_{M}$ in $\mathcal{E}$, denoted by $[[\theta]]_{M}$ by an abuse of notation, whose graph is the interpretation of the formula $\theta$ in $M$.
To apply the categorical framework of section \ref{sec:weak}, we stipulate, for a given geometric theory $\mathbb T$: \begin{itemize} \item $\T$ a suitable full subcategory of $\mathcal{C}_{\mathbb T}$ containing the object $\{x. \top\}$; \item $\a$ the object $\{x. \top\}$; \item $\S$ a Grothendieck topos (for instance, the category $\Set$ of sets); \item $\mathcal{I}$ the functor $F_{M}:\T\to \S$ corresponding to an arbitrarily fixed $\mathbb T$-model $M$ in $\S$ as specified above. \end{itemize} Assumption \ref{ass:limits} of Part \ref{part:general} is satisfied as any Grothendieck topos $\S$ has small limits. The next lemma takes care of verifying the remaining requirement that $\a$ is an $\I$-coseparator. \begin{lemma} In the setting defined above, the object $\a$ is always an $\I$-coseparator. \end{lemma} \begin{proof} We have to verify that for every object $\{\vec{x}. \phi\}$ of $\T$, the family of arrows $[[\theta]]_{M}:[[\vec{x}. \phi]]_{M}\to M$, where $\theta$ varies among the $\mathbb T$-provably functional formulas from $\{\vec{x}. \phi\}$ to $\{y. \top\}$, is jointly monic in $\S$. Now, if $\vec{x}=(x_{1}, \ldots, x_{n})$, for any $i\in \{1, \ldots, n\}$ the formula $y=x_{i}\wedge \phi(\vec{x})$ is $\mathbb T$-provably functional from $\{\vec{x}. \phi\}$ to $\{y. \top\}$. But the interpretations in $M$ of such formulas are nothing but the restrictions to $[[\vec{x}. \phi]]_{M}\subseteq M^{n}$ of the canonical projections $M^{n}\to M$, which are obviously jointly monic. \end{proof} \begin{remarks} \begin{enumerate}[(a)] \item If $\mathbb T$ is an algebraic theory, it is natural to take $\T$ equal to the subcategory of ${\mathcal{C}_{\mathbb T}}$ whose objects are the powers of $\{x. \top\}$.
One can prove that the $\mathbb T$-provably functional formulas between such objects are all induced by terms, up to $\mathbb T$-provable equivalence (see e.g.\ \cite[p.\ 120]{blasce}, where it is proved that any $\mathbb T$-provably functional geometric formula $\theta(\vec{x}, \vec{y})$ between Horn formulas is $\mathbb T$-provably equivalent to a formula of the form $\vec{y}=\vec{t}(\vec{x})$, where $\vec{t}$ is a sequence of terms of the appropriate sorts in the context $\vec{x}$). As we shall see below, the algebraic framework of Part \ref{part:alg} can be precisely obtained by specialising the framework defined above to such syntactic categories. \item Let $M$ be a \emph{conservative model} for $\mathbb T$, i.e.\ a model of $\mathbb T$ such that any assertion (in the relevant fragment of logic) over the signature of $\mathbb T$ which is valid in it is provable in $\mathbb T$. Then the arrows $\{x. \top\}^{k} \to \{x. \top\}$ in $\T$ can be identified with the definable maps $M^{k}\to M$, for each $k$. Indeed, for any two $\mathbb T$-provably functional formulas $\theta_{1}, \theta_{2}$ from $\{x. \top\}^{k}$ to $\{x. \top\}$, we have $[\theta_{1}]=[\theta_{2}]$ if and only if $\theta_{1}$ and $\theta_{2}$ are $\mathbb T$-provably equivalent; but this is equivalent, $M$ being conservative, to the condition that $[[\theta_{1}]]_{M}=[[\theta_{2}]]_{M}$. As an example, let $\mathbb T$ be the algebraic theory of Boolean algebras. The algebra $\{0,1\}$ is a conservative model for $\mathbb T$, and in fact the free Boolean algebra on $k$ generators can be identified with the set of definable maps $\{0,1\}^{k}\to \{0,1\}$. \item This framework generalises that of Part \ref{part:alg}, which relies on the existence of a free object in the variety. By working at the syntactic level, we can carry out our constructions by replacing the underlying set of each free object on $k$ generators by the set of arrows $\{\vec{x}^{k}. \top\}\to \{x.
\top\}$ in the syntactic category of the theory. \end{enumerate} \end{remarks} A particularly natural class of theories to which one can apply the setting defined above is that of theories of presheaf type. A (geometric) theory is said to be of \emph{presheaf type} if it is classified by a presheaf topos (for a self-contained introduction to the theory of classifying toposes we refer the reader to \cite{CaramelloBackground}). This class of theories is interesting for several reasons: \begin{enumerate} \item Every finitary algebraic theory (or, more generally, any cartesian theory) is of presheaf type; \item There are many other interesting mathematical theories which are of presheaf type without being cartesian, such as the theory of total orders, the theory of algebraic extensions of a base field, the theory of lattice-ordered abelian groups with strong unit, the theory of perfect MV-algebras, the cyclic theory classified by Connes' topos (cf.\ \cite{CaramelloWentzlaff}) etc. \item Every small category can be regarded, up to idempotent-splitting completion, as the category of finitely presentable models of a theory of presheaf type (cf.\ \cite{CaramelloPresheaf}). \end{enumerate} The class of theories of presheaf type thus represents a natural generalisation of the class of algebraic theories. For a comprehensive study of this class of theories, we refer the reader to \cite{CaramelloPresheaf}. Interestingly, free objects in the category of set-based models of a theory of presheaf type $\mathbb T$ do not always exist, but this category is always generated by the finitely presentable (equivalently, finitely presented) models of the theory. The full subcategory spanned by such models is dual to the full subcategory of the syntactic category of the theory $\mathbb T$ on the $\mathbb T$-irreducible formulas (cf. 
\cite{CaramelloSyntactic}), and for each such formula $\phi(\vec{x})$ presenting a model $M_{\phi}$, we have $M_{\phi}\cong \hom_{{\mathcal C}_{\mathbb T}}(\{\vec{x}. \phi\}, \{x. \top\})$ (cf. \cite{CaramelloPresheaf}). \begin{definition} Let $\mathbb T$ be a geometric theory and $M$ a $\mathbb T$-model. \begin{enumerate}[(a)] \item A \emph{definable map} $M^{k}\to M$ is a map of the form $[[\theta]]_{M}$ where $\theta$ is a $\mathbb T$-provably functional formula from $\{x. \top\}^{k}$ to $\{x. \top\}$. \item A \emph{congruence} on $M$ is an equivalence relation $R$ on $M$ such that for any definable map $d:M^{k}\to M$, $(x_{i}, y_{i})\in R$ for all $i=1, \ldots, k$ implies that $(d(x_{1}, \ldots, x_{k}), d(y_{1}, \ldots, y_{k}))\in R$. \end{enumerate} \end{definition} \begin{remark} As we recalled above, if $\mathbb T$ is a finitary algebraic theory then the $\mathbb T$-provably functional formulas from $\{x. \top\}^{k}$ to $\{x. \top\}$ are all induced by terms, up to $\mathbb T$-provable equivalence, so the above notions specialize to the classical universal algebraic ones. \end{remark} \begin{proposition}\label{stability} If $\mathbb T$ is a theory of presheaf type and $M$ is a finitely presentable $\mathbb T$-model then any congruence on $M$ is $\a$-stable (regarding $M$ as $\hom_{\T}(\{\vec{x}. \phi\}, \a)$, where $\{\vec{x}.\phi\}$ is a formula presenting $M$). \end{proposition} \begin{proof} We have to show that if $R$ is a congruence on $M=\hom_{\T}(\{\vec{x}. \phi\}, \a)$, then for any $([\theta_{1}], [\theta_{2}])\in R$ and any arrow $[\theta]:\a \to \a$ in $\T$, $([\theta \circ \theta_{1}], [\theta \circ \theta_{2}])\in R$. Now, $[[\theta]]_{M}$ is precisely the function $[\theta]\circ -:\hom_{\T}(\{\vec{x}. \phi\}, \a) \to \hom_{\T}(\{\vec{x}. \phi\}, \a)$, whence our thesis follows immediately.
\end{proof} Hence, taking $\T$ to be the full subcategory of the geometric syntactic category $\mathcal{C}_{\mathbb T}$ of a theory of presheaf type $\mathbb T$ on the formulas which are either $\{y. \top\}$ or $\mathbb T$-irreducible and $\S$ to be $\Set$ yields in particular an adjunction between a category of congruences on finitely presentable $\mathbb T$-models and a certain category of definable sets and $\mathbb T$-definable homomorphisms between them. We note that the equivalence between the first two items in the algebraic \emph{Nullstellensatz} (Theorem \ref{thm:algnull}) holds more generally for any theory of presheaf type $\mathbb T$ (replacing $\F(\mu)$ with any finitely presentable $\mathbb T$-model), with essentially the same proof. \subsection{Recovering Diers' ``system-solution'' adjunction}\label{ssolution} In this section we show that Diers' system-solution adjunction (\cite[Proposition 3.6]{Diers}) can be recovered as an instance of Theorem \ref{thm:weakadj}. The context in which Diers works is that of an (essentially) algebraic theory $\mathbb T$, and of a fixed model ${\mathcal{L}}$ in $\Set$. Diers defines a category $\textbf{A}f\textbf{S}ub\textbf{S}et(\mathcal{L})$ of affine subsets over $\mathcal{L}$ whose objects are the triples $(X, A(X), Y)$, where $X$ is a set, $A(X)$ is a $\mathbb T$-subalgebra of the $\mathbb T$-algebra ${\mathcal{L}}^{X}$ and $Y$ is a subset of $X$, and whose arrows $(X, A(X), Y)\to (X', A(X'), Y')$ are the functions $f\colon X\to X'$ such that $f(Y)\subseteq Y'$ and ${\mathcal{L}}^{f}\colon{\mathcal{L}}^{X'}\to {\mathcal{L}}^{X}$ restricts to a function (in fact, a $\mathbb T$-model homomorphism) $A(X')\to A(X)$.
On the other side, he considers a category $\textbf{A}lg\textbf{S}yst(\mathcal{L})$ of algebraic systems over $\mathcal{L}$, whose objects are triples $(X, A(X), E)$, where $X$ is a set, $A(X)$ is a $\mathbb T$-subalgebra of the $\mathbb T$-algebra ${\mathcal{L}}^{X}$ and $E$ is a set of pairs $(u, v)$, where $u, v\in A(X)$. He then defines two functors forming a ``system-solution'' adjunction: $\mathcal{Z}:\textbf{A}lg\textbf{S}yst(\mathcal{L}) \to \textbf{A}f\textbf{S}ub\textbf{S}et(\mathcal{L})$ and $\mathcal{S}:\textbf{A}f\textbf{S}ub\textbf{S}et(\mathcal{L}) \to \textbf{A}lg\textbf{S}yst(\mathcal{L})$ by setting $\mathcal{Z}(X, A(X), E)$ equal to $(X, A(X), S)$, where $S$ is the locus of solutions in $X$ of the equations $u=v$ for $(u, v)\in E$ and $\mathcal{S}(X, A(X), Y)=(X, A(X), E)$, where $E$ is the set of pairs $(u, v)$ of elements of $A(X)$ such that $u(x)=v(x)$ for all $x\in Y$. This adjunction can be recovered as a particular case of our Theorem \ref{thm:genadj} by setting: \begin{itemize} \item $\T$ equal to the category $\textbf{A}f\textbf{S}et(\mathcal{L})$ whose objects are the pairs $(X, A(X))$, where $X$ is a set, $A(X)$ is a $\mathbb T$-subalgebra of the $\mathbb T$-algebra ${\mathcal{L}}^{X}$ and whose arrows $(X, A(X))\to (X', A(X')) $ are the functions $f\colon X\to X'$ such that ${\mathcal{L}}^{f}\colon{\mathcal{L}}^{X'}\to {\mathcal{L}}^{X}$ restricts to a function $A(X')\to A(X)$; \item $\S$ equal to the category $\Set$ of sets; \item $\I \colon \T \to \S$ equal to the forgetful functor sending any object $(X, A(X))$ in $\T$ to the set $X$ and any arrow $f\colon(X, A(X))\to (Y, A(Y))$ in $\T$ to the function $f\colon X\to Y$; \item $\Delta$ equal to the object $({\mathcal{L}}, {\mathcal{A}}_{\mathcal{L}})$, where ${\mathcal A}_{\mathcal L}$ is the $\mathbb T$-subalgebra of ${\mathcal{L}}^{\mathcal{L}}$ generated by the set $\{1_{\mathcal{L}}\}$, where $1_{\mathcal{L}}$ is the identity on $\mathcal L$.
\end{itemize} Indeed, for any object $(X, A(X))$ of $\T$, the set $\hom_{\T}((X, A(X)), ({\mathcal{L}}, {\mathcal{A}}_{\mathcal{L}}))$ is canonically isomorphic to $A(X)$, since a function $f\colon X\to{\mathcal L}$ belongs to $A(X)$ if and only if ${\mathcal L}^{f}=-\circ f\colon {\mathcal{L}}^{\mathcal{L}}\to {\mathcal{L}}^{X}$ restricts to a function ${\mathcal A}_{\mathcal L}\to A(X)$ (note that the arrow ${\mathcal L}^{f}$ is a $\mathbb T$-algebra homomorphism and hence its restriction to ${\mathcal A}_{\mathcal L}$ factors through $A(X)\hookrightarrow {\mathcal L}^{X}$ if and only if ${\mathcal{L}}^{f}(1_{\mathcal L})=f\in A(X)$, as any element of ${\mathcal A}_{\mathcal{L}}$ can be obtained from $1_{\mathcal{L }}$ by applying the $\mathbb T$-operations a finite number of times). The objects of the category $\R$ can thus be identified with the pairs $((X, A(X)), R)$, where $R$ is a relation on the set $A(X)$. The arrows $((X, A(X)), R) \to ((Y, A(Y)), R')$ are the functions $f\colon X\to Y$ such that ${\mathcal L}^{f}$ restricts to a function $A(Y)\to A(X)$ which sends $R'$-pairs to $R$-pairs. In other words, $\R$ coincides with the category $\textbf{A}lg\textbf{S}yst({\mathcal L})$. On the other hand, it is immediate to see that the category $\D$ coincides with the category $\textbf{A}f\textbf{S}ub\textbf{S}et({\mathcal L})$ of affine subsets over $\mathcal L$ of \cite{Diers}. It is also clear that the adjunction of Theorem \ref{thm:weakadj} specializes precisely to the adjunction between $\mathcal{Z}$ and $\mathcal{S}$ of \cite[Proposition 3.6]{Diers}. \part{The specialisation to varieties of algebras}\label{part:alg} \section{The general setting}\label{sec:algebraic} We now specialise the setting of part \ref{part:general} to algebraic categories. In particular, we shall work with \emph{varieties of algebras} (i.e., equationally definable classes of algebras) in the sense of S{\l}ominsky \cite{Slominsky:1959} and Linton \cite{Linton:1965}. 
In this section we fix the following notation. \begin{enumerate}[$-$] \item $\Va$ is a (possibly infinitary) variety of algebras, regarded as a category whose objects are the $\Va$-algebras and whose morphisms are the $\Va$-homomorphisms. \item $\U\colon \Va \to \Set$ is the canonical underlying set functor. \item $\F$ is the canonical free functor, i.e.\ the left adjoint to $\U$. \item $A$ is an arbitrary but fixed $\Va$-algebra. \end{enumerate} We henceforth often speak of `algebras' and `homomorphisms' (and also `isomorphisms' etc.) rather than `$\Va$-algebras' and `$\Va$-homomorphisms', the variety $\Va$ being understood. If $I$ is any set, the algebra $\F(I)$ in $\Va$ is, as usual, \emph{a free algebra generated by $I$}. We fix canonical representatives for the isomorphism class of each free algebra in $\Va$. To this end, let \begin{align}\label{eq:vars} X_\mu:=\{X_\alpha\}_{\alpha<\mu} \end{align} be a specific set (of \emph{variables}, or \emph{free generators}) of cardinality $\mu$, where $\alpha$ ranges over the ordinals less than $\mu$. We often write $\mu$ as a shorthand for $X_{\mu}$, and therefore $\F(\mu)$ as a shorthand for $\F(X_\mu)$. To stress that we are selecting a specific representative for the isomorphism class of a free algebra $\F(I)$, we refer to $\F(\mu)$ as \emph{the} free algebra on $\mu$ generators. The adjunction relation \begin{align}\label{eq:freeforget} \frac{\F(\mu)\to A}{\mu\to \U(A)} \end{align} shows that $\U(A)$ may be naturally identified (in $\Set$) with the set of homomorphisms $\F(1)\to A$, i.e.\ \begin{align}\label{eq:identification} \U{(A)}\cong \hom_\Va{(\F(1),A)}. \end{align} In particular, because $\F$ is a left adjoint and therefore preserves all existing colimits, \begin{align}\label{eq:coprod} \F(\mu)=\coprod_\mu\F(1) \end{align} i.e.\ $\F(\mu)$ is the coproduct in $\Va$ of $\mu$ copies of $\F(1)$.
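For a concrete feel for the adjunction (\ref{eq:freeforget}), consider the variety of monoids: the free monoid $\F(X)$ on a set $X$ of generators consists of the finite words over $X$, and a homomorphism $\F(X)\to A$ corresponds exactly to an arbitrary function $X\to\U(A)$. A minimal Python sketch, where the choice of the toy monoid $A=(\mathbb{Z}/6\mathbb{Z},\cdot,1)$ is ours and not the paper's:

```python
# Toy illustration of F -| U for the variety of monoids: a homomorphism
# out of the free monoid is freely determined by the images of the
# generators, mirroring the bijection in (eq:freeforget).

MOD = 6                              # the fixed algebra A: (Z/6Z, *, 1)
mult = lambda x, y: (x * y) % MOD
unit = 1

def induced_hom(assignment):
    """The unique monoid homomorphism F(X) -> A extending the function
    assignment : X -> U(A); a word (a tuple of generators) is sent to
    the product of the images of its letters."""
    def h(word):
        out = unit
        for g in word:
            out = mult(out, assignment[g])
        return out
    return h

# The homomorphism induced by x0 |-> 2, x1 |-> 3:
h = induced_hom({0: 2, 1: 3})
w1, w2 = (0, 1), (1, 0, 0)
# h turns concatenation of words into multiplication in A:
assert h(w1 + w2) == mult(h(w1), h(w2))
assert h(()) == unit  # the empty word goes to the unit
```

Distinct assignments of generators yield homomorphisms differing on some one-letter word, so $\hom(\F(2),A)$ is in bijection with $\U(A)^2$, as the adjunction predicts.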
To specialise the general framework of section \ref{sec:weak} to varieties of algebras, we stipulate that\footnote{Notice that this framework is a particular case of that developed in section \ref{syntacticcategories}, obtained by taking as $\mathbb{T}$ the algebraic theory axiomatising the variety $\Va$ and taking $\T$ equal to the full subcategory of the syntactic category of $\mathbb{T}$, whose objects are powers of the object $\{x. \top\}$.}: \begin{itemize} \item $\T$ is the opposite of the full subcategory of $\Va$ whose objects are the free $\Va$-algebras $\F(\mu)$, as $\mu$ ranges over all cardinals. \item $\S$ is the category $\Set$. \item $\a$ is the $\T$-object $\F{(1)}$. \end{itemize} It remains to provide an instantiation for the functor $\I\colon \T\to \S$. To this end notice that any algebra $A$ yields a functor \begin{align*} \I_{A}\colon \T \to \Set \end{align*} that preserves arbitrary products, in the spirit of the Lawvere-Linton functorial semantics of algebraic theories \cite{lawvere63,Linton:1965,pareigis70, gabrielulmer71, manes76, adameketal94, adameketal11}; henceforth we shall write just $\I$ for $\I_{A}$. To define $\I$ on objects, we set \begin{align}\label{eq:defI} \I\left(\F(\mu)\right):= \U(A)^\mu \end{align} for any $\mu$. Given a homomorphism $\F(\mu)\to\F(\nu)$, we construct a function $\U(A)^\nu\to\U(A)^\mu$ as follows. First, by (\ref{eq:coprod}), it suffices to consider the case that $\mu=1$, i.e.\ that the free algebra is singly generated. Thus, let \begin{align}\label{eq:element} p\colon\F(1)\to\F(\nu) \end{align} be given. Given an element of $\U(A)^{\nu}$, i.e.\ a function \begin{align}\label{eq:pointofa} a_\nu\colon \nu \to \U(A), \end{align} by the adjunction (\ref{eq:freeforget}) there is a unique $\Va$-arrow \begin{align}\label{eq:tuple} \widehat{a_\nu}\colon\F(\nu)\to A.
\end{align} We then have the composition \begin{align}\label{eq:comp} \F(1)\overset{p}{\longrightarrow}\F(\nu)\overset{\widehat{a_\nu}}{\longrightarrow} A \end{align} of (\ref{eq:element}) and (\ref{eq:tuple}). Applying again the adjunction (\ref{eq:freeforget}) to (\ref{eq:comp}) we obtain an arrow in $\Set$ \begin{align}\label{eq:eval} \ev(p,a_\nu)\colon 1 \to \U{(A)}, \end{align} i.e.\ an element of $\U({A})$, called the \emph{evaluation of $p$ at $a_\nu$}. Keeping $p$ fixed and letting $a_{\nu}$ range over all elements (\ref{eq:pointofa}) of $\U{(A)}^{\nu}$, we thus obtain the \emph{evaluation map} \begin{align}\label{eq:dualeval} \ev(p,-)\colon \U(A)^\nu\to\U(A). \end{align} We set \begin{align}\label{eq:Ionarrows} \I(p):=\ev(p,-), \end{align} and this completes the definition of the functor $\I\colon \T\to\Set$. \begin{definition}\label{def:definablemap}A function $\U{(A)}^{\nu}\to \U{(A)}^{\mu}$ is called \emph{definable} \textup{(}\emph{in the language of $\Va$}\textup{)} if it is in the range of $\I$, as defined above. In other words, the definable functions $\U{(A)}^{\nu}\to \U{(A)}^{\mu}$ are precisely those that can be obtained by evaluating a $\mu$-tuple of elements of $\U{(\F{(\nu)})}$ at the $\nu$-tuples of elements of $\U{(A)}$. \end{definition} \begin{remark}\label{rem:products}Observe that, in the above, $\I$ preserves all products in $\T$ by construction. Moreover, recall that the forgetful functor $\U\colon\Va\to\Set$ commutes with products in $\Set$, because it is a right adjoint and hence preserves all existing limits. Stated otherwise, products in varieties are direct products. Hence we have an isomorphism of sets $\U{(A^{\mu})}\cong \U{(A)}^{\mu}$. Therefore, the replacement of (\ref{eq:defI}) in the definition of $\I$ by $\F{(\mu)}\longmapsto\U{(A^{\mu})}$ would be immaterial. \end{remark} Let us now consider the categories $\D$ and $\R$ in the present algebraic setting.
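Before specialising $\D$ and $\R$, here is a toy instance of Definition \ref{def:definablemap}, sketched in Python for the variety of monoids with the toy algebra $A=(\mathbb{Z}/6\mathbb{Z},\cdot,1)$ (a choice of ours): an element $p\in\F(\nu)$ is a word in the generators, and $\I(p)=\ev(p,-)\colon\U(A)^{\nu}\to\U(A)$ is its evaluation map.

```python
# Sketch of the evaluation map (eq:dualeval) in the variety of monoids,
# with A = (Z/6Z, *, 1). A word in nu generators plays the role of an
# element p of F(nu).

MOD = 6
mult = lambda x, y: (x * y) % MOD

def ev(p, a):
    """Evaluate the word p (a tuple of generator indices) at the tuple
    a in U(A)^nu, i.e. ev(p, -) applied at a."""
    out = 1  # the empty word is interpreted by the monoid unit
    for g in p:
        out = mult(out, a[g])
    return out

# The word (0, 1, 0) defines the map (x, y) |-> x*y*x on A^2:
assert ev((0, 1, 0), (2, 5)) == (2 * 5 * 2) % MOD
# The single-letter word (1,) defines the projection (x, y) |-> y, so
# product projections are definable functions:
assert all(ev((1,), (x, y)) == y for x in range(MOD) for y in range(MOD))
```

A definable function $A^{\nu}\to A^{\mu}$ is then simply a $\mu$-tuple of such evaluation maps, one per coordinate.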
Specialising the definitions in Subsection \ref{ss:D}, we see that the $\D$-objects are all subsets $S\subseteq \U{(A)}^{\mu}$, as $\mu$ ranges over all cardinals. The $\D$-arrows from $S'\subseteq \U{(A)}^{\nu}$ to $S\subseteq \U{(A)}^{\mu}$ are the definable functions $\U{(A)}^{\nu}\to \U{(A)}^{\mu}$, in the sense of Definition \ref{def:definablemap}, that restrict to functions $S' \to S$. We stress that distinct definable functions $\U{(A)}^{\nu}\to \U{(A)}^{\mu}$ are regarded as distinct $\D$-arrows even when they yield the same restriction $S' \to S$. Concerning the category $\R$, let us specialise the definitions in Subsection \ref{ss:R}. The $\R$-objects can be naturally identified with the relations $R$ on $\U{(\F{(\mu)})}$, as $\mu$ ranges over all cardinals. To see this, observe that an $\R$-object is, by definition, a relation $R$ on $\hom_\T{(\F{(\mu)},\F{(1)})}$. That is, by our choice of $\T$, $R$ is a relation on $\hom_\Va{(\F{(1)},\F{(\mu)})}$. By the adjunction (\ref{eq:freeforget}), homomorphisms $\F{(1)}\to\F{(\mu)}$ are in natural bijection with the elements of $\U{(\F{(\mu)})}$, so that we can regard $R$ as a relation on $\U{(\F{(\mu)})}$. Let us henceforth write \begin{align} (\F{(\mu)},R) \end{align} to denote an $\R$-object. We show next that an $\R$-arrow \begin{align*} (\F{(\nu)}, R') \to (\F{(\mu)}, R) \end{align*} can be naturally identified with a homomorphism \begin{align*} h\colon \F{(\mu)}\to\F{(\nu)} \end{align*} that preserves $R$ with respect to $R'$, i.e.\ satisfies \begin{align}\label{eq:rarrowalg} \forall p,q\in \U{(\F{(\mu)})}\colon \ \ (p,q)\in R \ \Longrightarrow \ (h(p),h(q))\in R'.
\end{align} Indeed, the $\R$-arrow $(\F{(\nu)}, R') \to (\F{(\mu)}, R)$ is by definition a $\Va$-arrow $h\colon \F{(\mu)}\to \F{(\nu)}$ such that the function \begin{align*} h\circ -\colon \hom_{\Va}{(\F{(1)},\F{(\mu)})} \longrightarrow \hom_{\Va}{(\F{(1)}, \F{(\nu)})} \end{align*} satisfies the property \begin{align}\label{eq:rarrowalg1} (p, q)\in R \quad \Longrightarrow \quad (h \circ p, h\circ q)\in R' \end{align} for each pair of homomorphisms $p,q\colon \F{(1)}\to \F{(\mu)}$. Identifying $p$ and $q$ with elements of $\U{(\F{(\mu)})}$ as usual via the adjunction (\ref{eq:freeforget}), we obtain (\ref{eq:rarrowalg}) from (\ref{eq:rarrowalg1}). \begin{notation}\label{not:underlying}For the rest of this paper we follow the standard practice in algebra of omitting the underlying set functor. Thus when we write, for instance, $a \in A$, it is understood that we mean $a \in \U{(A)}$. \end{notation} \section{The algebraic affine adjunction}\label{sec:algadj} Let us specialise the functors $\C\colon \D \to \R$ and $\V\colon \R\to \D$ to the algebraic setting of section \ref{sec:algebraic}. Recall that $\Va$ is a variety of algebras, $A$ is an arbitrary $\Va$-algebra, $\T$ is the opposite of the full subcategory of $\Va$ whose objects are the free $\Va$-algebras $\F{(\mu)}$, as $\mu$ ranges over all cardinals, $\a:=\F{(1)}$, and $\I\colon \T \to \Set$ is the functor defined in (\ref{eq:defI}--\ref{eq:Ionarrows}) above. It is appropriate to recall at this stage the notions of operation and congruence. \begin{definition}\label{def:operation} For $\nu$ a cardinal, a \emph{$\Va$-operation \textup{(or, more simply, an} operation\textup{)} of arity $\nu$} is a $\Va$-homomorphism $t\colon \F{(1)}\to\F{(\nu)}$. The operation $t$ is \emph{finitary} if $\nu$ is finite, and \emph{infinitary} otherwise.
An \emph{operation on the $\Va$-algebra} $A$ is a function $f\colon A^{\nu}\to A$ that is definable in the sense of Definition \ref{def:definablemap}, that is, such that $f=\I{(t)}:=\ev{(t,-)}$ for some $t\colon \F{(1)}\to\F{(\nu)}$. \end{definition} \begin{remark}\label{rem:operations}Since homomorphisms $t\colon \F{(1)}\to\F{(\nu)}$ are naturally identified with elements $t \in \F{(\nu)}$ via the adjunction (\ref{eq:freeforget}), the preceding definition agrees with the usual notion of operations as \emph{term-definable functions}; one calls $t$ a \emph{defining term} for the operation in question. By a classical theorem of G.\ Birkhoff (see e.g.\ \cite[Theorem 10.10]{Burris:81}) the free algebra $\F{(\nu)}$ can indeed be represented as the algebra of \emph{terms} ---elements of absolutely free algebras--- over the set of variables $X_{\nu}$, modulo the equivalence relation that identifies two such terms if, and only if, they evaluate to the same element in any $\Va$-algebra. For the infinitary case see \cite[Ch.\ III]{Slominsky:1959}. \end{remark} \begin{remark}\label{rem:homscommute}When, in the sequel, we say that \emph{homomorphisms commute with operations}, we mean that given any $\Va$-homomorphism $h\colon A\to B$, any $\nu$-ary operation $t \in \F{(\nu)}$, and any element $a:=(a_{\beta})_{\beta<\nu}\in A^{\nu}$, we have \begin{align}\label{eq:operation} h(\ev_{A}{(t,a)})=\ev_{B}{(\,t, (h(a_{\beta}))_{\beta < \nu}\,)}, \end{align} where $\ev_{A}(t,-)\colon A^{\nu} \to A$ and $\ev_{B}(t,-)\colon B^{\nu} \to B$ are the evaluation maps with respect to $A$ and $B$. That (\ref{eq:operation}) holds follows by direct inspection of the definitions. It is common to write (\ref{eq:operation}) as \begin{align}\label{eq:operationterm} h(t(\,a_{\nu}\,))=t(\,h(a_{\nu})\,), \end{align} where the algebras $A$ and $B$ over which $t$ is evaluated are tacitly understood.
\end{remark} \begin{definition}\label{def:cong} A \emph{congruence} $\theta$ on a $\Va$-algebra $A$ is an equivalence relation on $A$ that is \emph{compatible with \textup{(or} preserved by\textup{)} all operations}, i.e.\ with all definable maps $f\colon A^{\nu}\to A$, where $\nu$ is a cardinal. This means that whenever $x_{\nu}:=(x_{\beta})_{\beta<\nu}$, $y_{\nu}:=(y_{\beta})_{\beta<\nu}$ are $\nu$-tuples of elements of $A$, \begin{align}\label{eq:cong} (x_{\beta},y_{\beta})\in \theta \text{ for each $\beta<\nu$} \ \ \ \Longrightarrow \ \ \ (f(x_{\nu}), f(y_{\nu})) \in \theta. \end{align} \end{definition} \begin{remark}\label{rem:congaed}With the notation of the preceding definition, upon writing $f=\ev{(t,-)}$ for some defining term $t \in \F{(\nu)}$ condition (\ref{eq:cong}) reads \begin{align}\label{eq:congterm} (x_{\beta},y_{\beta})\in \theta \text{ for each $\beta<\nu$} \ \ \ \Longrightarrow \ \ \ (\ev{(t,x_{\nu})}, \ev{(t,y_{\nu})}) \in \theta. \end{align} Equivalently, with the convention adopted in (\ref{eq:operationterm}), \begin{align}\label{eq:congterm1} (x_{\beta},y_{\beta})\in \theta \text{ for each $\beta<\nu$} \ \ \ \Longrightarrow \ \ \ (t(\,(x_{\beta})_{\beta<\nu}\,),\, t(\,(y_{\beta})_{\beta<\nu}\,))\in \theta. \end{align} It is a standard fact, even in the infinitary case, that congruences in the sense of Definition \ref{def:cong} coincide with congruences defined in terms of kernel pairs; see \cite[p.\ 33]{Linton:1965} and \cite[Ch.\ II.5]{Slominsky:1959}. \end{remark} The Galois connections $(\CC,\VV)$ of Subsection \ref{ss:Galois} now specialise as follows. Given a subset $S\subseteq A^{\mu}$, we have \begin{align}\label{eq:Calg} \CC{(S)}=\left\{(p,q)\in\F{(\mu)}^{2} \mid \forall a\in S: \ \ \ev{(p,a)}=\ev{(q,a)} \right\}, \end{align} where $\ev(p,-)\colon A^{\mu}\to A$ is, once more, the evaluation map (\ref{eq:dualeval}).
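The operator $\CC$, together with the operator $\VV$ introduced below, yields a Galois connection (cf.\ Lemma \ref{lem:galois}). A finite Python sketch of this, for monoid words in two variables over the toy algebra $A=(\mathbb{Z}/6\mathbb{Z},\cdot,1)$, where words of bounded length stand in for $\F(2)$ (a simplifying truncation of ours):

```python
# Finite sketch of the operators CC (eq:Calg) and VV (eq:Valg) for the
# variety of monoids with A = (Z/6Z, *, 1). Words of length <= 3 in two
# generators approximate F(2); the truncation is only to keep things finite.

from itertools import product

MOD = 6
mult = lambda x, y: (x * y) % MOD

def ev(p, a):
    out = 1
    for g in p:
        out = mult(out, a[g])
    return out

WORDS = [w for n in range(4) for w in product((0, 1), repeat=n)]

def CC(S):
    """All pairs of words that agree at every point of S, cf. (eq:Calg)."""
    return {(p, q) for p in WORDS for q in WORDS
            if all(ev(p, a) == ev(q, a) for a in S)}

def VV(R):
    """All points of A^2 at which every pair in R agrees, cf. (eq:Valg)."""
    return {a for a in product(range(MOD), repeat=2)
            if all(ev(p, a) == ev(q, a) for (p, q) in R)}

S = {(2, 3), (3, 2)}
R = {((0,), (1, 1))}  # the single "equation" x = y*y
# Round-trip inflation, as for any contravariant Galois connection:
assert R <= CC(VV(R)) and S <= VV(CC(S))
# Triple application collapses to a single one:
assert VV(CC(VV(R))) == VV(R) and CC(VV(CC(S))) == CC(S)
```

The two asserted facts are exactly the generic properties of a Galois connection; the next lemma shows that, in addition, $\CC(S)$ is always a congruence.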
\begin{lemma}\label{lem:iscong}For any cardinal $\mu$, and any subset $S\subseteq A^{\mu}$, the set $\CC{(S)}\subseteq \F{(\mu)}^{2}$ is a congruence relation. \end{lemma} \begin{proof} It is clear that $\CC{(S)}$ is an equivalence relation. To show it is a congruence, let $\nu$ be a cardinal and consider two $\nu$-tuples $(x_{\beta})_{\beta<\nu}$, $(y_{\beta})_{\beta<\nu}$ of elements of $\F{(\mu)}$ such that the pairs $(x_{\beta},y_{\beta})$ are in $\CC{(S)}$ for each $\beta<\nu$. Then we have \begin{align}\label{eq:cond1} \forall \beta<\nu: \ \ \ \ev{(x_{\beta},a)}=\ev{(y_{\beta},a)}, \end{align} for all $a \in S$. If $t\in\F{(\nu)}$ is any $\nu$-ary operation on $\F{(\mu)}$, applying $t$ to (\ref{eq:cond1}) we obtain \begin{align}\label{eq:cond2.1} t(\,(\ev{(x_{\beta},a)})_{\beta<\nu}\,)=t(\,(\ev{(y_{\beta},a)})_{\beta<\nu}\,), \end{align} that is, more explicitly, \begin{align}\label{eq:cond2.2} \ev{(t,(\ev{(x_{\beta},a)})_{\beta<\nu}\,)}=\ev{(t,(\ev{(y_{\beta},a)})_{\beta<\nu}\,)}. \end{align} Directly from the definitions one verifies \begin{align}\label{eq:verifies} \ev{(t,(\ev{(x_{\beta},a)})_{\beta<\nu}\,)}=\ev{(t(\,(x_\beta)_{\beta<\nu}\,),a)}, \end{align} so that from (\ref{eq:cond2.2}--\ref{eq:verifies}) we obtain \begin{align*} \ev{(t(\,(x_{\beta})_{\beta<\nu}\,),\, a\,)}=\ev{(t(\,(y_{\beta})_{\beta<\nu}\,),\, a)}, \end{align*} for all $a \in S$, and the proof is complete. \end{proof} \begin{remark} Every congruence on the free algebra $\F(\mu)$ is $\a$-stable (cf.\ the proof of Proposition \ref{stability}). \end{remark} Concerning the operator $\VV$, note first that $\Set$ obviously satisfies Assumption \ref{ass:limits}. Given a relation $R$ on $\F{(\mu)}$, we have \begin{align}\label{eq:Valg} \VV{(R)}=\bigcap_{(p,q)\in R}\left\{a\in\I{(\F{(\mu)})} \mid \ev{(p,a)}=\ev{(q,a)} \right\}.
\end{align} Lemma \ref{lem:galois} asserts that, for any cardinal $\mu$, any relation $R$ on $\F{(\mu)}$, and any subset $S\subseteq A^{\mu}$, we have \begin{align*} R \subseteq \CC{(S)} \quad \quad \text{if, and only if,} \quad \quad S\subseteq \VV{(R)}. \end{align*} In other words, the functions $\VV\colon 2^{\F{(\mu)}^{2}}\to 2^{A^{\mu}}$ and $\CC\colon 2^{A^{\mu}}\to2^{\F{(\mu)}^{2}}$ yield a contravariant Galois connection between the indicated power sets. Consider subsets $S'\subseteq A^{\nu}$, $S\subseteq A^{\mu}$, with $\mu$ and $\nu$ cardinals, and a $\D$-arrow $f\colon S'\subseteq A^{\nu}\to S\subseteq A^{\mu}$, i.e.\ a definable function $f\colon A^{\nu}\to A^{\mu}$ that restricts to a function $S'\to S$. Recall from (\ref{eq:defI}--\ref{eq:Ionarrows}) that $f$ is induced by a (uniquely determined) homomorphism $h\colon\F{(\mu)}\to \F{(\nu)}$ via evaluation. We have \begin{align}\label{eq:Calgfunct} \C{(S)}=(\F{(\mu)}, \CC{(S)}) \end{align} with $\CC{(S)}$ as in (\ref{eq:Calg}), and similarly for $S'$. Recall from section \ref{sec:algebraic} that an $\R$-arrow $(\F{(\nu)}, \CC{(S')})\to(\F{(\mu)}, \CC{(S)})$ is naturally identified with a homomorphism $\F{(\mu)}\to\F{(\nu)}$ that preserves $\CC{(S)}$ with respect to $\CC{(S')}$ in the sense of (\ref{eq:rarrowalg}). Now $\C$ carries the $\D$-arrow $f$ to the unique $\R$-arrow corresponding to the homomorphism $h\colon\F{(\mu)}\to \F{(\nu)}$, i.e.\ we have \begin{align} \C{(f)}=h. \end{align} % Consider, conversely, $\R$-objects $(\F{(\nu)},R')$ and $(\F{(\mu)}, R)$, together with an $\R$-arrow $(\F{(\nu)},R')\to (\F{(\mu)}, R)$. The latter, by our choice of $\T$, can be identified with a homomorphism $h\colon \F{(\mu)}\to\F{(\nu)}$ that preserves $R$ with respect to $R'$. We have \begin{align}\label{eq:Valgfunct} \V{(\F{(\mu)}, R)}=\VV{(R)}\subseteq \I{(\F{(\mu)})} \end{align} with $\VV{(R)}$ as in (\ref{eq:Valg}), and similarly for $(\F{(\nu)},R')$. 
Via evaluation, $h$ induces a definable function $f\colon A^{\nu}\to A^{\mu}$ that restricts to a function $S'\to S$, and thus yields a $\D$-arrow $f\colon S'\subseteq A^{\nu}\to S\subseteq A^{\mu}$; i.e., we have \begin{align*} \V{(h)}=f. \end{align*} The weak affine adjunction (Theorem \ref{thm:weakadj}) applies to show $\C\dashv \V$. \smallskip We shall carry the adjunction $\C\dashv \V$ through to the quotient categories $\D^{q}$ and $\R^{q}$ in the algebraic setting. \subsection{The quotient $\D^{q}$: Affine subsets}\label{ss:affinesub} Specialising Subsection \ref{sec:general}, the quotient category $\D^{q}$ has the same objects as $\D$, namely, all subsets $S\subseteq A^\mu$, as $\mu$ ranges over all cardinals. The $\D^{q}$-arrows from $S'\subseteq A^{\nu}$ to $S\subseteq A^{\mu}$ are the definable functions $A^{\nu}\to A^{\mu}$, in the sense of Definition \ref{def:definablemap}, that restrict to functions $S' \to S$, up to the equivalence relation that identifies two such definable functions if, and only if, they restrict to the same function $S' \to S$. It is reasonable to call an object $S\subseteq A^{\mu}$ of $\D^{q}$ an \emph{affine subset} (\emph{relative to $A$ and $\Va$}). \subsection{The quotient $\R^{q}$: Presented algebras}\label{ss:presalg} Continuing our specialisation of Subsection \ref{sec:general}, the quotient category $\R^{q}$ has as objects the pairs $(\F{(\mu)},R)$, where $\mu$ ranges over all cardinals, and $R$ is a relation on (the underlying set of) $\F{(\mu)}$. The $\R^{q}$-morphisms $(\F{(\nu)},R')\to (\F{(\mu)},R)$ are the homomorphisms $h\colon \F{(\mu)}\to\F{(\nu)}$ that preserve $R$ with respect to $R'$ in the sense of (\ref{eq:rarrowalg1}), up to the equivalence relation that identifies two of them if, and only if, their factorisations through the natural quotient maps $\F{(\mu)}\twoheadrightarrow \F{(\mu)}/\overline{R}$ and $\F{(\nu)}\twoheadrightarrow \F{(\nu)}/\overline{R'}$ are equal.
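The effect of passing to the quotient $\D^{q}$ can be seen in a small example: two distinct definable maps become the same $\D^{q}$-arrow as soon as they agree on the affine subset in question. A Python sketch, where our choice of $A$ is the (noncommutative) monoid of all self-maps of $\{0,1\}$ under composition, picked so that the words $x\cdot y$ and $y\cdot x$ differ as functions on $A^{2}$:

```python
# Sketch of the quotient D^q: the words x*y and y*x of F(2) induce distinct
# definable maps A^2 -> A (distinct D-arrows), yet they restrict to the same
# function on the affine subset S = {(f, id)}, hence become one D^q-arrow.

from itertools import product

A = [(0, 0), (0, 1), (1, 0), (1, 1)]    # f encoded as the pair (f(0), f(1))
unit = (0, 1)                            # the identity self-map of {0, 1}
comp = lambda f, g: (f[g[0]], f[g[1]])   # (f . g)(x) = f(g(x))

def ev(p, a):
    out = unit
    for g in p:
        out = comp(out, a[g])
    return out

p, q = (0, 1), (1, 0)        # the words x*y and y*x
S = {(f, unit) for f in A}   # the affine subset {(f, id) : f in A} of A^2

# Distinct as definable functions A^2 -> A, hence distinct D-arrows:
assert any(ev(p, a) != ev(q, a) for a in product(A, repeat=2))
# ...but equal on S, hence identified as arrows S -> A in D^q:
assert all(ev(p, a) == ev(q, a) for a in S)
```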
As already noted, when $R$ and $R'$ are congruences, the factorisations in question are in fact homomorphisms from the algebra $\F{(\mu)}/R$ to the algebra $\F{(\nu)}/R'$. We therefore recall a standard \begin{definition}\label{def:presentation}We call a pair $(\F{(\mu)},\theta)$, for $\mu$ a cardinal and $\theta$ a congruence on $\F{(\mu)}$, a \emph{presentation} (\emph{in the variety $\Va$}). We call the algebra $\F{(\mu)}/\theta$ the \emph{algebra presented by} $(\F{(\mu)},\theta)$. We write $\Vap$ for the category of \emph{presented $\Va$-algebras}, having as objects all presentations in $\Va$, and as morphisms the $\Va$-homomorphisms between the $\Va$-algebras presented by them. \end{definition} \begin{theorem}\label{thm:quotienteq} Let $\Va$ be any \textup{(}finitary or infinitary\textup{)} variety of algebras, $\Vap$ the associated category of presented $\Va$-algebras. Set $\T$ to be the opposite of the full subcategory of $\Va$ whose objects are the free $\Va$-algebras $\F{(\mu)}$, for $\mu$ an arbitrary cardinal, $\a:=\F{(1)}$, $\I\colon \T \to \Set$ to be the functor defined in section \ref{sec:algebraic} and $\R^{q}$ be the category defined as in section \ref{sec:general}. Then, the category $\Vap$ fully embeds into $(\R^{q})^{\rm op}$. \end{theorem} \begin{proof} Consider the functor that sends an object $(\F{(\mu)},\theta)$ in $\Vap$ to the object $(\F{(\mu)},\theta)$ of $(\R^{q})^{\rm op}$. The functor associates with any map $h\colon (\F{(\mu)},\theta)\to (\F{(\nu)},\theta')$ a map $\bar{h}\colon\F{(\nu)}\to \F{(\mu)}$ which is the dual of the unique homomorphic extension of the assignment $X_{\alpha}\mapsto Y_{\alpha}$, where $\{X_{\alpha}\mid \alpha<\mu\}$ are the free generators of $\F{(\mu)}$ and $Y_{\alpha}$ is an arbitrary representative of the $\theta'$-equivalence class $h(X_{\alpha}/\theta)$. The verification that this is indeed a well-defined functor is straightforward. It remains to prove that it is full and faithful.
For the first claim, consider a (representative of the) $\R^{q}$-arrow $f\colon (\F{(\nu)},\theta')\to (\F{(\mu)},\theta)$. Since $f$ preserves $\theta$ with respect to $\theta'$, the map $h\colon (\F{(\mu)},\theta)\to (\F{(\nu)},\theta')$ defined by $h(t/\theta):=f(t)/\theta'$ is well defined and is a homomorphism of presented algebras. Now $\bar{h}$, as defined above, sends a free generator $X_{\alpha}$ to an arbitrary representative of the $\theta'$-equivalence class $h(X_{\alpha}/\theta)=f(X_{\alpha})/\theta'$, so $\bar{h}$ and $f$ have the same factorisation through the algebras $(\F{(\mu)},\theta)$ and $(\F{(\nu)},\theta')$, hence they are the same arrow in $\R^{q}$. To prove that the functor is faithful, notice that if two arrows $h_{1},h_{2}\colon (\F{(\mu)},\theta)\to (\F{(\nu)},\theta')$ in $\Vap$ are different, then $\bar{h}_{1}$ and $\bar{h}_{2}$ are different in $\R$ and they belong to different equivalence classes in $\R^{q}$, as they factor differently through the quotients. \end{proof} \begin{remark}\label{rem:Vpequiv}While $\Vap$ is clearly not a variety of algebras ---it is not closed, for example, under isomorphisms--- it is equivalent to $\Va$. Indeed, we have a functor that sends each presented algebra $(\F{(\mu)}, \theta)$ to the quotient $\F{(\mu)}/ \theta$ in $\Va$ and acts identically on maps. It is an exercise to see that such a functor is full, faithful, and dense, hence provides an equivalence of categories. \end{remark} \subsection{Algebraic affine adjunction}\label{ss:algaffadj} Recall the notion of $\I$-coseparator from Definition \ref{def:coseparator}. In the algebraic setting, Assumption \ref{ass:cosep} always holds: \begin{lemma}\label{lem:algcosep}The object $\a=\F{(1)}$ is an $\I$-coseparator for the functor $\I$ defined in \textup{(\ref{eq:defI}--\ref{eq:Ionarrows})} above.
\end{lemma} \begin{proof}We need to show that, for any cardinal $\mu$, the family of definable functions $f\colon A^{\mu} \to A$ is jointly monic in $\Set$. That is, given any two functions $h_1,h_2\colon S \to A^{\mu}$, if $f\circ h_1=f\circ h_2$ for all definable $f$, then $h_1=h_2$. Note that the canonical projection functions $\pi_{\alpha}\colon A^{\mu}\to A$ of the product $A^{\mu}$, for $\alpha<\mu$ an ordinal, are definable. Indeed, inspection of the definition of $\I$ shows that the unique homomorphism $\iota_{\alpha}\colon \F{(1)}\to\F{(\mu)}$ induced by $X_{1}\mapsto X_{\alpha}$ is such that $\I{(\iota_{\alpha})}=\pi_{\alpha}$. If now $h_{1}\neq h_{2}$, by the universal property of products there exists $\alpha<\mu$ with $\pi_{\alpha}\circ h_{1}\neq \pi_{\alpha}\circ h_{2}$, as was to be shown. \end{proof} \begin{remark}\label{rem:range-of-C} In the light of Lemma \ref{lem:iscong} and Theorem \ref{thm:quotienteq}, the image of $\C^{q}$ ranges within the full subcategory $\Vap$ of $\R^{q}$. Thus, without loss of generality, we restrict our attention to this subcategory rather than the whole $\R^{q}$. \end{remark} Specialising Definition \ref{def:quotients}, we obtain functors $\C^{q}\colon \D^{q} \to \Vap^{\rm op}$ and $\V^{q}\colon \Vap^{\rm op}\to \D^{q}$. As an immediate consequence of Theorem \ref{thm:genadj}, Lemma \ref{lem:algcosep}, Remark \ref{rem:range-of-C}, and Theorem \ref{thm:quotienteq}, we have: \begin{corollary}[Algebraic affine adjunction]\label{cor:algadj} Consider any \textup{(}finitary or infinitary\textup{)} variety $\Va$ of algebras, and fix any $\Va$-algebra $A$. Let $\Vap$ be the associated category of presented algebras as in Definition \ref{def:presentation}. Let $\D^{q}$ be the category of affine subsets relative to $A$ and $\Va$, as in Subsection \ref{ss:affinesub}.
The functors $\C^{q}\colon \D^{q} \to \Vap^{\rm op}$ and $\V^{q}\colon \Vap^{\rm op}\to \D^{q}$ are adjoint with $\C^{q}\dashv \V^{q}$.\qed \end{corollary} \section{The algebraic {\sl Nullstellensatz}}\label{s:algnull} \begin{remark}\label{rem:ontointo}It is well known that in any (finitary or infinitary) variety $\Va$ of algebras we have: \begin{enumerate} \item\label{it:into} The monomorphisms are exactly the injective $\Va$-homomorphisms, which we also call embeddings. \item\label{it:onto} The regular epimorphisms (=the coequalisers of some pair of parallel arrows) are exactly the surjective $\Va$-homomorphisms, which we also call quotient maps. \end{enumerate} (See \cite[pp.\ 87--88]{Linton:1965}.) We shall use these basic facts often in this section. \end{remark} \subsection{A Stone-Gelfand-Kolmogorov Lemma}\label{ss:SKG} Recall from section \ref{sec:algebraic} that, for a cardinal $\nu$ and a given element $a\in A^{\nu}$, we have the homomorphism \begin{align*}\tag{\ref{eq:tuple}} \widehat{a}\colon \F{(\nu)}\to A. \end{align*} Note that the action of (the underlying function $\U{(\widehat{a})}$ of ) (\ref{eq:tuple}) is given by \begin{align}\label{eq:action} p \in \F{(\nu)} \ \overset{\widehat{a}}{\longmapsto} \ \ev{(p,a)} \in A. 
\end{align} For, applying the adjunction $\F\dashv\U$ to% \begin{align*}\tag{\ref{eq:comp}} \F(1)\overset{p}{\longrightarrow}\F(\nu)\overset{\widehat{a}}{\longrightarrow} A \end{align*} we obtain the commutative diagram \begin{small} \begin{align*} \begin{tikzpicture}[scale=0.4] \node (1) at (0,0) {$1$}; \node (UF) at (7,0) {$\U{(\F{(\nu)})}$}; \node (U) at (14,0) {$\U{(A)}$}; \draw [->] (1) to node [above, midway] {$\widecheck{p}$} (UF); \draw [->] (UF) to node [above, midway] {$\U{(\widehat{a})}$} (U); \draw [->] (1) to [bend right] node [below, midway] {$\ev{(p,a)}$} (U); \end{tikzpicture} \end{align*} \end{small} where we write $\widecheck{p}\colon 1 \to \U{(\F{(\nu)})}$ for the unique function corresponding to $p\colon \F{(1)}\to \F{(\nu)}$ under the adjunction. We also have the natural quotient homomorphism \begin{align}\label{eq:pointquotient} q_{a}\colon \F{(\nu)}\twoheadrightarrow \F{(\nu)}/\CC{(\{a\})}. \end{align} By construction, $q_{a}$ preserves the relation $\CC{(\{a\})}$ on $\F{(\nu)}$ with respect to the identity relation on $\F{(\nu)}/\CC{(\{a\})}$. Now, $\widehat{a}$ preserves the relation $\CC{(\{a\})}$ on $\F{(\nu)}$ with respect to the identity relation on $A$. Indeed, if $(p,q) \in \CC{(\{a\})}$ then, by definition, $\ev{(p,a)}=\ev{(q, a)}$, whence $\widehat{a}(p)=\widehat{a}(q)$ by (\ref{eq:action}). Therefore, by the universal property of the quotient homomorphism there exists a unique homomorphism \begin{align}\label{eq:gelfeval} \gamma_{a}\colon \F{(\nu)}/\CC{(\{a\})} \longrightarrow A \end{align} that makes the diagram in Fig.\ \ref{fig:gelfand} commute. \begin{figure}[h!] 
\begin{small} \smallskip \begin{center} \begin{tikzpicture}[scale=0.4] \node (F) at (0,5) {$\F{(\nu)}$}; \node (A) at (0,0) {$A$}; \node (C) at (5,0) {$\frac{\F{(\nu)}}{\CC{(\{a\})}}$}; \draw [->] (F) -- (A) node [left, midway] {$\widehat{a}$}; \draw [->] (F) -- (C) node [right, midway] {$q_{a}$}; \draw [<-] (A) -- (C) node [below, midway] {$\gamma_{a}$} node [above, midway] {$!$}; \end{tikzpicture} \end{center} \end{small} \caption{The Gelfand evaluation $\gamma_a$.} \label{fig:gelfand} \end{figure} \begin{definition}[Gelfand evaluation]\label{def:gelfeval}Given a cardinal $\nu$ and an element $a\in A^{\nu}$, the homomorphism \textup{(\ref{eq:gelfeval})} above is called the \emph{Gelfand evaluation} (\emph{of $\F{(\nu)}$ at $a$}). \end{definition} \begin{lemma}[Stone-Gelfand-Kolmogorov Lemma]\label{l:SGK}Fix a cardinal $\nu$. \begin{enumerate}[\textup{(}i\textup{)}] \item For each $a\in A^{\nu}$, the Gelfand evaluation $\gamma_{a}$ is a monomorphism, and hence its underlying function $\U{(\gamma_{a})}$ is injective. \item Conversely, for each congruence relation $\theta$ on $\F{(\nu)}$, and each homomorphism $e\colon \F{(\nu)}/\theta\to A$, consider the commutative diagram \begin{small} \begin{center} \begin{tikzpicture}[scale=0.4] \node (F) at (0,5) {$\F{(\nu)}$}; \node (A) at (0,0) {$A$}; \node (C) at (5,0) {$\frac{\F{(\nu)}}{\theta}$}; \draw [->] (F) -- (A) node [left, midway] {$e\circ q_{\theta}$}; \draw [->] (F) -- (C) node [right, midway] {$q_\theta$}; \draw [<-] (A) -- (C) node [below, midway] {$e$}; \end{tikzpicture} \end{center} \end{small} where $q_{\theta}$ is the natural quotient homomorphism. Set $a:=(e\circ q_\theta(X_\beta))_{\beta<\nu}\in A^\nu$. If $e$ is a monomorphism, then $\theta = \CC{(\{a\})}$, and the commutative diagram above coincides with the one in Fig.\ \ref{fig:gelfand}. 
\textup{(}That is, $q_\theta=q_a$, $e=\gamma_{a}$, and $e\circ q_\theta=\widehat{a}$.\textup{)} \end{enumerate} \end{lemma} \begin{proof} \noindent$(i)$\ It suffices to check that the underlying function of $\gamma_{a}$ is injective, cf.\ Remark \ref{rem:ontointo}. Pick $p,q \in \F{(\nu)}$ such that $(p,q)\not \in\CC{(\{a\})}$. Then, by definition, $\ev{(p,a)}\neq\ev{(q, a)}$, and therefore $\widehat{a}(p)\neq \widehat{a}(q)$ by (\ref{eq:action}). But then, by the definition of Gelfand evaluation, it follows that $\gamma_{a}(p)\neq \gamma_{a}(q)$. \noindent$(ii)$\ Since $e$ is monic, we have $\ker{(e\circ q_\theta)}=\ker{q_\theta}=\theta$. Explicitly, \begin{align}\label{eq:kernel} \forall s,t \in \F{(\nu)}: \ \ (s,t)\in\theta \ \ \Longleftrightarrow \ \ e(q_\theta(s))=e(q_\theta(t)). \end{align} Since homomorphisms commute with operations, cf.\ Remark \ref{rem:homscommute}, and recalling the definition of $a$, (\ref{eq:kernel}) yields \begin{align}\label{eq:kerneltuple} \forall s,t \in \F{(\nu)}: \ \ (s,t)\in\theta \ \ \Longleftrightarrow \ \ \ev{(s,a)}=\ev{(t,a)}. \end{align} Therefore, by (\ref{eq:kerneltuple}), we have $a \in \VV{(\theta)}$. By the Galois connection (\ref{eq:galois}) this is equivalent to \begin{align}\label{eq:vnotempty} \theta\subseteq \CC{(\{a\})}. \end{align} For the converse inclusion, if $(u,v)\in \CC{(\{a\})}$, then $\ev{(u,a)}=\ev{(v,a)}$, and therefore $(u,v)\in\theta$ by (\ref{eq:kerneltuple}). This proves $\theta=\CC{(\{a\})}$, and therefore $q_\theta=q_a$. To show $\widehat{a}=e\circ q_a$, note that, by the definition of $\widehat{a}$ and the universal property of $\F{(\nu)}$, both are the (unique) homomorphic extension of the assignment $X_\beta \mapsto e\circ q_\theta(X_\beta)$, for $\beta < \nu$.
\end{proof} \subsection{Transforms}\label{ss:trans} For a congruence relation $\theta$ on $\F{(\nu)}$, we now consider the natural quotient homomorphism \begin{align}\label{eq:quotth} q_{\theta}\colon \F{(\nu)} \to \F{(\nu)}/\theta, \end{align} together with the product $\prod_{a\in \VV{(\theta)}}\F{(\nu)}/\CC{(\{a\})}$ and its projections \begin{align}\label{eq:prodsheaf} \pi_{a}\colon \prod_{a\in \VV{(\theta)}}\frac{\F{(\nu)}}{\CC{(\{a\})}}\longrightarrow\frac{\F{(\nu)}}{\CC{(\{a\})}}. \end{align} We also consider the power $A^{\VV{(\theta)}}$ and its projections \begin{align}\label{eq:prodfunc} p_{a}\colon A^{\VV{(\theta)}}\longrightarrow A. \end{align} The morphisms (\ref{eq:gelfeval}--\ref{eq:prodfunc}) yield the commutative diagrams ---one for each $a\in\VV{(\theta)}$--- in Fig.\ \ref{fig:birkgelf}, \begin{figure}[h!] \centering \smallskip \begin{tikzpicture}[scale=0.75] \node (Fth) at (0,0) {$\F{(\nu)}/\theta$}; \node (P1) at (5,0) {$\prod_{a\in \VV{(\theta)}}\frac{\F{(\nu)}}{\CC{(\{a\})}}$}; \node (P2) at (10,0) {$A^{\VV{(\theta)}}$}; \node (FC) at (0,-3) {$\F{(\nu)}/\CC{(\{a\})}$}; \node (A) at (5,-3) {$A$}; \draw [->] (Fth) to node [left, midway] {$q$} (FC); \draw [->] (P1) to node [right, midway] {$\gamma_{a}\circ\pi_{a}$} (A); \draw [->] (FC) to node [below, midway] {$\gamma_{a}$} (A); \draw [->] (P2) to node [below, midway,yshift=-0.15cm] {$p_{a}$} (A); \draw [->] (P1) to node [below, midway,yshift=-0.15cm] {$\pi_{a}$} (FC); \draw [->] (Fth) to node [above, midway] {$\sigma_{\theta}$} node [below, midway] {$!$} (P1) ; \draw [->] (P1) to node [above, midway] {$\iota_{\theta}$} node [below, midway] {$!$} (P2) ; \draw [->] (Fth) to [bend left] node [above, midway] {$\gamma_{\theta}:=\iota_{\theta}\circ\sigma_{\theta}$} node [below, midway] {$!$} (P2); \end{tikzpicture} \caption{The Gelfand and Birkhoff transforms $\gamma_\theta$ and $\sigma_\theta$.} \label{fig:birkgelf} \end{figure} where $\sigma_{\theta}$ and $\iota_{\theta}$ are the unique 
homomorphisms whose existence is granted by the universal property of the products $\prod_{a\in \VV{(\theta)}}\frac{\F{(\nu)}}{\CC{(\{a\})}}$ and $A^{\VV{(\theta)}}$, respectively. \begin{definition}[Gelfand and Birkhoff transforms]\label{eq:gelftrans}Given a cardinal $\nu$ and a congruence $\theta$ on $\F{(\nu)}$, the homomorphisms $\gamma_{\theta}:=\iota_{\theta}\circ\sigma_{\theta}$ and $\sigma_{\theta}$ given by the commutative diagram above are called the \emph{Gelfand} and the \emph{Birkhoff transforms} (\emph{of $\F{(\nu)}/\theta$ with respect to $A$}), respectively.\end{definition} \begin{lemma}\label{lem:easygelf}With the notation above, and for each $a \in \VV{(\theta)}$, the homomorphisms $\pi_{a}\circ \sigma_{\theta}$ and $\iota_{\theta}$ are surjective and injective, respectively. \end{lemma} \begin{proof}It is clear that $\pi_{a}\circ \sigma_{\theta}$ is onto, because $q\colon \F{(\nu)}/\theta \to \F{(\nu)}/\CC{(\{a\})}$ is onto (cf.\ Remark \ref{rem:ontointo}). Concerning $\iota_{\theta}$, let $x,y \in \prod_{a \in \VV{(\theta)}}\F{(\nu)}/\CC{(\{a\})}$, and suppose $\iota_{\theta}(x)=\iota_{\theta}(y)$. With reference to the commutative diagram in Fig.\ {\ref{fig:birkgelf}}, for each $a \in \VV{(\theta)}$ we have $p_a(\iota_\theta(x))=p_a(\iota_\theta(y))$, and therefore $\gamma_a(\pi_a(x))= \gamma_a(\pi_a(y))$. Since $\gamma_{a}$ is a monomorphism for each $a$ by Lemma \ref{l:SGK}, we infer $\pi_a(x)=\pi_a(y)$ for each $a$, and hence $x=y$ by the universal property of the product $\prod_{a \in \VV{(\theta)}}\F{(\nu)}/\CC{(\{a\})}$. \end{proof} \subsection{The algebraic {\sl Nullstellensatz}}\label{ss:algnull} \begin{definition}[Radical]\label{def:radical}For a cardinal $\nu$ and a relation $R$ on $\F{(\nu)}$, we call the congruence \[ \bigcap_{a \in \VV{(R)}} \CC{(\{a\})} \] the \emph{radical of $R$ \textup{(}with respect to the $\Va$-algebra $A$\textup{)}}.
A congruence $\theta$ on $\F{(\nu)}$ is \emph{radical \textup{(}with respect to $A$\textup{)}} if $\theta=\bigcap_{a \in \VV{(\theta)}} \CC{(\{a\})}$. \end{definition} Note that the inclusion \begin{align}\label{eq:galoisincl} \theta \subseteq \bigcap_{a \in \VV{(\theta)}} \CC{(\{a\})}, \end{align} always holds, cf.\ \eqref{eq:contained2}. \begin{theorem}[Algebraic {\sl Nullstellensatz}]\label{thm:algnull} For any $\Va$-algebra $A$, any cardinal $\nu$, and any congruence $\theta$ on $\F{(\nu)}$, the following are equivalent. \begin{enumerate}[\textup{(}i\textup{)}] \item\label{l:null-subdirect-item1} $\CC{(\VV{(\theta)})}=\theta$. \item\label{l:null-subdirect-item2} $\theta=\bigcap_{a \in \VV{(\theta)}} \CC{(\{a\})}$, i.e.\ $\theta$ is a radical congruence with respect to $A$. \item\label{l:null-subdirect-item3} The Birkhoff transform $\sigma_{\theta}\colon\F{(\nu)}/\theta\to\prod_{a\in \VV{(\theta)}}\frac{\F{(\nu)}}{\CC{(\{a\})}}$ is a subdirect embedding. \end{enumerate} \end{theorem} \begin{remark}In the proof that follows we apply three standard results in universal algebra, namely, \cite[Theorems 7.15, 6.15, and 6.20]{Burris:81}. Although in \cite{Burris:81} these results are stated and proved for finitary varieties, the same proofs work for infinitary ones. \end{remark} \begin{proof} \noindent The hypotheses of Theorem \ref{thm:null} are satisfied: the terminal object in $\Set$ is a singleton $\{a\}$, and the family of functions $\{a\}\to \VV{(\theta)}$ ---i.e.\ the elements of $\VV{(\theta)}$--- is obviously jointly epic. This proves the equivalence of (\ref{l:null-subdirect-item1}) and (\ref{l:null-subdirect-item2}). \noindent $(\ref{l:null-subdirect-item2})\Leftrightarrow (\ref{l:null-subdirect-item3})$.
By \cite[Theorem 7.15]{Burris:81}, given any algebra $B$ and a family $\{\theta_{i}\}_{i\in I}$ of congruences on $B$, the natural homomorphism $h\colon B\to\prod_{i\in I}B/\theta_{i}$ induced by the quotient homomorphisms $q_{\theta_{i}}\colon B\to B/{\theta_{i}}$ is an embedding if, and only if, $\bigcap_{i\in I}\theta_{i}$ is the identity congruence $\Delta$ on $B$. Taking \[B:=\F{(\nu)}/\theta\ \text{ and } \ \{\theta_{i}\}:=\{\CC{(\{a\})}\}_{a \in \VV{(\theta)}},\] we obtain the natural homomorphism \begin{align}\label{eq:nath} h\colon \F{(\nu)}/\theta\longrightarrow \prod_{a\in \VV{(\theta)}}\frac{\F{(\nu)}/\theta}{\CC{(\{a\})}/\theta}, \end{align} where $\CC{(\{a\})}/\theta$ denotes the set $\{(p/\theta,q/\theta)\in (\F{(\nu)}/\theta)^{2}\mid (p,q) \in \CC{(\{a\})}\}$, which is easily seen to be a congruence relation on $\F{(\nu)}/\theta$. It is clear by construction that if $h$ is an embedding, then it is subdirect. Hence we have: \begin{align}\label{eq:subdirect} h\text{ is a subdirect embedding} \ \Longleftrightarrow\ \bigcap_{a\in \VV{(\theta)}}\CC{(\{a\})}/\theta=\Delta/\theta. \end{align} For each $a\in \VV{(\theta)}$, by the Galois connection Lemma \ref{lem:galois} we have $\theta\subseteq \CC{(\{a\})}$. Therefore, by the Second Isomorphism Theorem \cite[Theorem 6.15]{Burris:81}, \begin{align}\label{eq:secondiso} \forall a \in \VV{(\theta)}: \ \ \frac{\F{(\nu)}/\theta}{\CC{(\{a\})}/\theta}\cong\F{(\nu)}/\CC{(\{a\})}.
\end{align} From (\ref{eq:subdirect}--\ref{eq:secondiso}) we see: \begin{align}\label{eq:birkhoff} h \text{ is a subdirect embedding} \ \ \Longleftrightarrow \ \ \sigma_{\theta} \text{ is a subdirect embedding.} \end{align} Finally, upon recalling that, by \cite[Theorem 6.20]{Burris:81}, the mapping $\theta'\mapsto \theta'/\theta$ is an isomorphism of lattices between the lattice of congruences of $\F{(\nu)}$ extending $\theta$ and the lattice of congruences of $\F{(\nu)}/\theta$, we have \begin{align}\label{eq:lattcong} \bigcap_{a\in \VV{(\theta)}}\CC{(\{a\})}/\theta=\Delta/\theta \ \ \Longleftrightarrow \ \ \bigcap_{a\in\VV{(\theta)}}\CC{(\{a\})}=\theta. \end{align} In conclusion, (\ref{eq:nath}--\ref{eq:lattcong}) amount to the equivalence between $(\ref{l:null-subdirect-item2})$ and $(\ref{l:null-subdirect-item3})$. \end{proof} \begin{remark}\label{r:finitary-vs-inifinitary} Since Birkhoff's influential paper \cite{birkhoff1944subdirect} the theory of algebras definable by operations of finite arity only has been developed intensively. In \cite[Theorem 1]{birkhoff1944subdirect} Birkhoff pointed out, by way of motivation for his main result, that the Lasker-Noether theorem \cite{noether21} generalises easily to algebras whose congruences satisfy the ascending chain condition, even in the presence of operations of infinite arity. His main result \cite[Theorem 2]{birkhoff1944subdirect} then showed how to extend the Lasker-Noether theorem to any variety of algebras, without any chain condition, provided however that all operations be finitary. In short, Birkhoff's Subdirect Representation theorem (see e.g.\ \cite[Theorem 2.6]{jacobson80} for a textbook treatment) fails for infinitary varieties of algebras. Much of the remaining general theory, however, carries over to the infinitary case. The two classical references on infinitary varieties are \cite{Slominsky:1959, Linton:1965}. 
Linton's paper \cite{Linton:1965}, in particular, extended Lawvere's categorical treatment of universal algebra \cite{lawvere63, lawverereprint}. \end{remark} As a consequence of Theorem \ref{thm:algnull}, the adjunction $\C^{q}\dashv \V^{q}$ need not be a duality for the whole variety $\Va$. The case of MV-algebras, treated in \cite{MarSpa12}, is a non-trivial example of the adjunction $\C^{q}\dashv \V^{q}$ that only fixes a subclass of the variety $\Va$. The following corollary provides some sufficient and some necessary conditions for the whole variety $\Va$ to be fixed under the composition $\C^{q}\circ \V^{q}$; it will also be useful to prove the dualities for Boolean algebras and $C^{*}$-algebras in sections \ref{s:stone} and \ref{s:stone-gelfand} of Part \ref{p:dualities}. In the light of Remark \ref{rem:Vpequiv}, instead of $\Vap$ we work with the equivalent category $\Va$. \newpage \begin{corollary}\label{c:semisimple-fixed} \begin{enumerate} \item\label{c:semisimple-fixed-item1} Let $\Va$ be a semisimple variety and suppose there is a cardinal $\kappa$ such that the number of pairwise non-isomorphic simple algebras in $\Va$ is less than $\kappa$. If $A$ is the coproduct of all pairwise non-isomorphic simple algebras in $\Va$, then the composition $\C^{q}\circ \V^{q}$ fixes all algebras in $\Va$. \item\label{c:semisimple-fixed-item2} Let $\Va$ be a finitary variety and suppose there is a cardinal $\kappa$ such that the number of pairwise non-isomorphic subdirectly irreducible algebras in $\Va$ is less than $\kappa$. If $A$ is the coproduct of all pairwise non-isomorphic subdirectly irreducible algebras in $\Va$, then the composition $\C^{q}\circ \V^{q}$ fixes all algebras in $\Va$. \item\label{c:semisimple-fixed-item3} Let $\Va$ be a finitary variety. Suppose that the functors $\C^{q}$ and $\V^{q}$ have been defined relative to an arbitrary algebra $A$ in $\Va$.
The algebra $A$ contains \textup{(}up to isomorphism\textup{)} all subdirectly irreducible algebras in the variety if, and only if, the composition $\C^{q}\circ \V^{q}$ fixes the whole variety $\Va$. \item\label{c:semisimple-fixed-item4} Let $\Va$ be a finitary variety. If the composition $\C^{q}\circ \V^{q}$ fixes all algebras in $\Va$, then $A$ generates the variety $\Va$. \end{enumerate} \end{corollary} \begin{proof} We prove item \eqref{c:semisimple-fixed-item1}. We set the algebra $A$ to be the aforementioned coproduct. Notice that, if an algebra $\F(\mu)/\psi$ is a simple algebra, then it canonically embeds into $A$, for $A$ is the coproduct of all pairwise non-isomorphic simple algebras. So, by Lemma \ref{l:SGK}, $\psi=\CC(\{a\})$ for some $a\in\VV(\psi)$. Let now $\F(\mu)/\theta$ be any algebra in $\Va$. Since the variety $\Va$ is semisimple, $\F(\mu)/\theta$ is semisimple, so by definition there is a subdirect embedding of $\F(\mu)/\theta$ into a product of simple algebras. Each of the simple factors is isomorphic to $\F(\mu)/\CC(\{a\})$ for some $a\in A^{\mu}$. Since the decomposition is subdirect, $\theta\subseteq \CC(\{a\})$, so by (\ref{eq:galois}) we have $a\in \VV(\theta)$. But as seen in Figure \ref{fig:birkgelf}, there is only one arrow from $\F(\mu)/\theta$ into $\prod_{a\in\VV(\theta)}\F(\mu)/\CC(\{a\})$ and this is the Birkhoff transform. Thus, Theorem \ref{thm:algnull} can be applied, yielding that the algebra $\F(\mu)/\theta$ is fixed by $\C^{q}\circ \V^{q}$, and a straightforward verification of the definitions of $\C^{q}$ and $\V^{q}$ gives $\C^{q}{(\V^{q}{(\,(\F{(\mu)},\theta)\,)})}\cong (\F{(\mu)},\theta)$. For the proof of item \eqref{c:semisimple-fixed-item2}, replace ``simple'' with ``subdirectly irreducible'' in the proof of item \eqref{c:semisimple-fixed-item1}.
The proof then goes through, upon noticing that, by Birkhoff's theorem, in a finitary variety any algebra is a subdirect product of subdirectly irreducible algebras. For the proof of item \eqref{c:semisimple-fixed-item3}, the sufficiency is again obtained as in \eqref{c:semisimple-fixed-item2}. To see that also the other direction holds, notice that a congruence of $\F{(\mu)}$ presents a subdirectly irreducible algebra if, and only if, it is \emph{completely meet irreducible} in the lattice of congruences of $\F{(\mu)}$ (see e.g.\ \cite[Lemma 4.43]{MMT87}). Recall that an element $x$ of a lattice $L$ is completely meet irreducible if whenever $x=\bigwedge K$ for some subset $K$ of $L$, then $x$ must belong to $K$. Now, suppose that the composition $\C^{q}\circ\V^{q}$ fixes all algebras in $\Va$, and let $\F{(\mu)}/\theta$ be a subdirectly irreducible algebra. In particular we have $\CC\circ\VV(\theta)=\theta$, so by Theorem \ref{thm:algnull} $\theta=\bigcap_{a\in\VV(\theta)}\CC{(\{a\})}$. But $\theta$ is completely meet irreducible, so there exists $a\in \VV{(\theta)}$ such that $\theta=\CC{(\{a\})}$. By Lemma \ref{l:SGK} this entails that $\F{(\mu)}/\theta$ embeds into $A$, and the claim is proved. Finally, to prove item \eqref{c:semisimple-fixed-item4}, notice that, by \eqref{c:semisimple-fixed-item3}, $A$ must contain all subdirectly irreducible algebras in $\Va$, hence it generates the variety. \end{proof} \section{The topological {\sl Nullstellensatz}} Having settled the characterisation of fixed points on the algebraic side, we turn to the study of the fixed points on the geometric side of the adjunction. Unfortunately, we are not able at this stage to give a characterisation as satisfactory as the one for the algebraic side. Nonetheless, in this section we collect some general facts that will be useful to obtain Stone and Gelfand dualities in the next sections.
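Before turning to the topological side, a minimal instance of Theorem \ref{thm:algnull} may be helpful. The choice of variety and of $A$ below anticipates the classical setting of Part \ref{p:dualities} and is made purely for illustration.

```latex
% Illustrative (non-)example of radicality, assuming the classical choice
% \Va = variety of \mathbb{C}-algebras, A := \mathbb{C}, so \F{(1)} = \mathbb{C}[x].
Let $\theta$ be the congruence on $\mathbb{C}[x]$ corresponding to the ideal
$(x^{2})$. Then
\[
\VV{(\theta)}=\{a\in\mathbb{C}\mid a^{2}=0\}=\{0\},
\qquad
\CC{(\{0\})}=\{(p,q)\mid p(0)=q(0)\},
\]
and $\CC{(\{0\})}$ corresponds to the ideal $(x)\supsetneq(x^{2})$. Hence
\[
\theta \subsetneq \bigcap_{a\in\VV{(\theta)}}\CC{(\{a\})},
\]
so $\theta$ is not radical, and by Theorem \ref{thm:algnull} the Birkhoff
transform $\sigma_{\theta}$ is not a subdirect embedding: it identifies the
distinct elements $x/\theta$ and $0/\theta$.
```

By contrast, the congruence corresponding to the radical ideal $(x)$ is fixed by the composition $\CC\circ\VV$.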
We shall assume henceforth that the operator $\VC$ is topological\footnote{This is an actual restriction, as the condition may fail in an arbitrary variety. However, it holds for all classical dualities mentioned in this paper.} i.e., $\VC{(X\cup Y)}=\VC{(X)}\cup\VC{(Y)}$. We shall topologise the set $A$ by declaring closed the sets of the form $\VC{(S)}$ for some $S\subseteq A$. We call this the \emph{Zariski} topology or the $\VC$ topology. For any cardinal $\mu$ the power $A^{\mu}$ can be endowed with at least two natural topologies: \begin{enumerate} \item The product topology w.r.t.\ the $\VC$ topology on $A$. \item The $\VC$ topology given by definable functions from $A^{\mu}$ into $A$, i.e., where the closed subsets are of the form \[\VV{(\theta)}:=\{s\in A^{\mu}\mid p(s)=q(s) \quad \forall (p,q)\in \theta \} \] for $\theta\subseteq\F^{2}(\mu)$. \end{enumerate} We are interested in cases in which the two topologies above coincide on $A^{\mu}$ for any cardinal $\mu$. \begin{definition}\label{d:spectral} We say that a function $f\colon A^{\mu}\to A$ is \emph{strongly continuous} if the pre-images of sets of the form $\VC{(S)}$ with $S\subseteq A$ can be written as $\VC{(T)}$ with $T\subseteq A^{\mu}$. \end{definition} Strong continuity generalises spectral maps in the setting of Stone duality for distributive lattices. Notice that strong continuity implies continuity. \begin{lemma}\label{l:continuous} Definable functions are strongly continuous \textup{(}hence in particular continuous\textup{)} with respect to the $\VC$ topology. \end{lemma} \begin{proof} Let $f$ be a definable function from $A^{\mu}$ into $A$, with definable term $\lambda((X_{\alpha})_{\alpha<\mu})$, and let $C=\VC{(S)}$ for some $S\subseteq A$. Consider the set $\theta=\{\left( s(\lambda), t(\lambda)\right )\mid (s,t)\in \CC(S)\}$; we claim that $f^{-1}[C]=\VV(\theta)$.
Indeed, $d\in f^{-1}[C]$ if, and only if, $f(d)\in C$, if, and only if, $s(f(d))=t(f(d))$ for all $(s,t)\in \CC(S)$, if, and only if, $d\in \VV(\theta)$. \end{proof} As an immediate consequence of Lemma \ref{l:continuous} and the fact that projections are definable functions, we observe that the product topology is coarser than the $\VC$ topology. \begin{lemma}[Co-Nullstellensatz]\label{l:co-null} Assume that the $\VC$ topology on $A$ is Hausdorff and that all definable functions are continuous with respect to the product topology. Then a set $S\subseteq A^{\mu}$ is closed in the product topology if, and only if, $\VV(\CC{(S)})=S$. \end{lemma} \begin{proof} Let us write $\overline{S}$ for the smallest closed set in the product topology that contains $S$. As noticed above, the product topology is coarser than the $\VC$ topology, so we have $\VC{(S)}\subseteq \overline{S}$. To prove the other direction, notice that if $X$ is any topological space, and $Y$ is Hausdorff, then for any two continuous functions $f,g\colon X\to Y$ the solution set of the equation $f=g$ is a closed subset of $X$; see \cite[1.5.4]{engelking}. Now, by assumption $A$ is Hausdorff and definable functions are continuous with respect to the product topology, so for any pair of terms $(s,t)$, the set $\VV{(s,t)}$ is closed in the product topology. On the other hand, $\VV(R)=\VV{\big(\bigcup_{(s,t)\in R}\{(s,t)\}\big)}=\bigcap_{(s,t)\in R}\VV{(s,t)}$ holds by Lemma \ref{lem:galois}. We conclude that $\VV(R)$ is closed in the product topology for any subset $R$ of $\F{(\mu)}\times\F{(\mu)}$. Thus we obtain the inclusion $\overline{S}\subseteq\VV{(\CC{(S)})}$. \end{proof} \begin{corollary}\label{c:discrete-topology} Suppose $\Va$ is finitary. If the $\VC$ topology on $A$ is discrete, then the $\VC$ topology and the product topology coincide. \end{corollary} \begin{proof} If the $\VC$ topology on $A$ is discrete then it obviously is Hausdorff.
In addition, all finite powers of $A$ are also discrete; since the variety is finitary, every definable function depends on finitely many coordinates and thus factors through a finite power, whence definable functions are continuous with respect to the product topology on $A^{\mu}$ for any cardinal $\mu$. Thus the assumptions of Lemma \ref{l:co-null} are met and the corollary follows. \end{proof} \part{Three classical examples and one epilogue}\label{p:dualities} \section{The classical affine adjunction}\label{s:classical} Continuing the notation in the Introduction, we consider an algebraically closed field $k$, and finitely many variables $X:=\{X_{1},\ldots,X_{n}\}$, $n\geq 0$ an integer. Then $k$-algebras and their homomorphisms form a finitary variety in the sense of Birkhoff. The $k$-algebra freely generated by $X$ is the polynomial ring $k[X]$. Congruences on any $k$-algebra are in one-one inclusion-preserving correspondence with ideals. We shall now apply the results of Part \ref{part:alg} to derive a form of the {\it Nullstellensatz}, with the {\it proviso} that congruences are conveniently represented by ideals. We let $\Va$ be the variety of $k$-algebras, and we let $A:=k$. The details then depend on what definition one takes for the notion of radical ideal. We shall use: \begin{definition}\label{def:radideal}An ideal of a $k$-algebra is \emph{radical} if, and only if, it is an intersection of maximal ideals. \end{definition} We shall need a classical result from commutative algebra; see e.g.\ \cite{atyiahmacdonald}. \begin{lemma}[Zariski's Lemma]\label{lem:zariski} Let $F$ be any field, and suppose $E$ is a finitely generated $F$-algebra that is itself a field. Then $E$ is a finite field extension of $F$.\qed \end{lemma} Specialising the Stone-Gelfand-Kolmogorov Lemma \ref{l:SGK} to the ring-theoretic setting now yields: \begin{lemma}[Ring-theoretic Stone-Gelfand-Kolmogorov]\label{l:SGK-for-rings}An ideal $I$ of $k[X]$ is maximal if, and only if, there exists $a \in k^{n}$ such that $I=\CC{(\{a\})}$.
\end{lemma} \begin{proof} Assume $I=\CC{(\{a\})}$, and consider the Gelfand evaluation $\gamma_{a}\colon k[X]/\CC{(\{a\})}\to k$ of Definition \ref{def:gelfeval}. By Lemma \ref{l:SGK}, $\gamma_{a}$ is an embedding. From the fact that $\gamma_{a}$ is a homomorphism of $k$-algebras it follows at once that it is onto $k$, and hence an isomorphism. Moreover $k$, being a field, is evidently simple in the universal-algebraic sense, i.e.\ it has no non-trivial ideals. Hence $\CC{(\{a\})}$, the kernel of the homomorphism $q_{a}\colon k[X]\to k[X]/\CC{(\{a\})}$ as in (\ref{eq:pointquotient}), is maximal (by direct inspection, or using the more general \cite[Theorem 6.20]{Burris:81}). Conversely, assume that $I$ is maximal, and consider the natural quotient map $q_{I}\colon k[X]\to k[X]/I$. Then $k[X]/I$ is a simple finitely generated $k$-algebra, and hence a field. By Zariski's Lemma \ref{lem:zariski}, $k[X]/I$ is a finite field extension of $k$; since $k$ is algebraically closed, $k$ and $k[X]/I$ are isomorphic. Applying Lemma \ref{l:SGK} with $e\colon k[X]/I\to k$ the preceding isomorphism completes the proof. \end{proof} \begin{corollary}[Ring-theoretic {\it Nullstellensatz}]\label{c:ringnull}For any ideal $I$ of $k[X]$, the following are equivalent. \begin{itemize} \item[\textup{(}i\textup{)}] $\CC{(\VV{(I)})}=I$. \item[\textup{(}ii\textup{)}] $I$ is radical. \end{itemize} \end{corollary} \begin{proof}Immediate consequence of Lemma \ref{l:SGK-for-rings} together with Theorem \ref{thm:algnull}. \end{proof} It is now possible to functorialise the above along the lines of the first part of this paper, thereby obtaining the usual classical algebraic adjunction. We do not spell out the details. \section{Stone duality for Boolean algebras}\label{s:stone} In this section we derive Stone duality for Boolean algebras from the general adjunction. Let $\Va$ be the variety of Boolean algebras and their homomorphisms, and set $A$ to be the two-element Boolean algebra $\{0,1\}$.
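To fix intuition for the operators $\VV$ and $\CC$ in this Boolean setting, a small worked computation may be useful; the particular congruence below is our own choice, for illustration only.

```latex
% Worked Boolean example (illustrative only): \nu = 2, generators x, y.
Let $\theta$ be the congruence on the free Boolean algebra $\F{(2)}$ generated
by the single pair $(x\wedge y,\, x)$. Then
\[
\VV{(\theta)}=\{(a,b)\in\{0,1\}^{2}\mid a\wedge b=a\}
            =\{(0,0),(0,1),(1,1)\},
\]
while, for instance, $\CC{(\{(0,1)\})}$ consists of all pairs of terms $(p,q)$
with $p(0,1)=q(0,1)$. A direct check on the $16$ elements of $\F{(2)}$ shows
\[
\theta=\bigcap_{a\in\VV{(\theta)}}\CC{(\{a\})},
\]
i.e.\ $\theta$ is radical: two terms are identified by $\theta$ exactly when
they agree at every assignment satisfying $x\wedge y=x$ (that is, $x\leq y$).
```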
By Corollary \ref{cor:algadj} we have a dual adjunction between $\Vap$ and $\D^{q}$ given by the functors $\C^{q}$ and $\V^{q}$. We are interested in characterising the fixed points of this adjunction. We begin with the algebraic side. Recall the following: \begin{lemma}[\mbox{\cite[Lemma 1]{birkhoff1944subdirect}}]\label{l:sub-boole} To within an isomorphism, the only subdirectly irreducible Boolean algebra is $\{0,1\}$. \end{lemma} \begin{corollary}\label{c:fix-alg-boole} With $\Va$ and $A$ as above, and with reference to the functors of Corollary \ref{cor:algadj}, one has \[\C^{q}{(\V^{q}{(\,(\F{(\mu)},R)\,)})}\cong (\F{(\mu)},R) \] for any $(\F{(\mu)},R)\in \Vap$. \end{corollary} \begin{proof} Apply Corollary \ref{c:semisimple-fixed} item \ref{c:semisimple-fixed-item1} in view of Lemma \ref{l:sub-boole}. \end{proof} We now turn to the side of affine subsets. The category $\D^{q}$ is given by subsets of $\{0,1\}^{\mu}$ for $\mu$ ranging among all cardinals, and definable maps among them. The Zariski (=$\VC$) topology on $\{0,1\}$ is discrete, as $\{0\}=\VV{(0,x)}$ and $\{1\}=\VV{(1,x)}$. \begin{lemma}\label{l:fix-top-boole} Fix a cardinal $\mu$. A set $S\subseteq\{0,1\}^{\mu}$ is closed in the product topology if, and only if, $\VV{(\CC{(S)})}=S$. \end{lemma} \begin{proof} The topology on $A$ is discrete and Boolean algebras form a finitary variety, so the claim follows from Corollary \ref{c:discrete-topology}. \end{proof} So the space $\{0,1\}^{\mu}$ is a \emph{Cantor cube}: its $\VC$ topology coincides with the product topology obtained by endowing $\{0,1\}$ with the discrete topology. \begin{corollary}\label{c:fix-top-boole} Let $\Va$ be the variety of Boolean algebras and their homomorphisms, and let $A$ be the Boolean algebra $\{0,1\}$.
With reference to the functors of Corollary \ref{cor:algadj}, one has that for any closed set $S\in \D^{q}$, \[\V^{q}{(\C^{q}{(S)})}\cong S \ .\] \end{corollary} \begin{proof} By Lemma \ref{l:fix-top-boole} and direct inspection of the definitions. \end{proof} \begin{corollary}\label{c:presentedBoole-closedsubsets} The category of Boolean algebras with their homomorphisms is dually equivalent to the category of closed subspaces of the Cantor cubes $\{0,1\}^{\mu}$ with continuous maps among them. \end{corollary} \begin{proof} By Corollary \ref{c:fix-alg-boole} the whole category $\Va$ is fixed by the composition $\C^{q}\circ\V^{q}$. By Corollary \ref{c:fix-top-boole} the full subcategory of closed subsets in $\D^{q}$ is fixed by the composition $\V^{q}\circ\C^{q}$. \end{proof} The last result needed to obtain Stone duality in classical form is an intrinsic characterisation of the closed subspaces of $\{0,1\}^{\kappa}$ for $\kappa$ any cardinal. This is a specific instance of a general problem in abstract topology: given a topological space $E$, characterise the topological spaces which are homeomorphic to closed subspaces of $E^{\kappa}$. Such spaces are known as $E$-compact spaces \cite[Section 1.4]{weir1975hewitt}. \begin{lemma}\label{stonespace-cantorset} The category of compact, Hausdorff, totally disconnected spaces with continuous maps among them is equivalent to the full subcategory of $\D^{q}$ whose objects are the closed subspaces of the Cantor cubes. \end{lemma} \begin{proof} It is enough to prove that for any compact, Hausdorff, totally disconnected space $X$, there exists a cardinal $\mu$ and a closed subset $S$ of $\{0,1\}^{\mu}$ such that $X$ is homeomorphic to $S$. The rest is routine. To prove the claim notice that by \cite[Lemma 4.5, pag.
116]{kelley1955general} given a family $F$ of continuous functions from a Hausdorff space $X$ into spaces $Y_{f}$, the \emph{evaluation} map $e\colon X\to \prod_{f\in F}Y_{f}$ defined as $e(x)_{f}:=f(x)$ is a homeomorphism between $X$ and $e[X]$, provided that for any $p\in X$ and any closed subset $C$ such that $p\notin C$ there exists $f\in F$ such that $f(p)\notin f[C]$. Given a compact, Hausdorff, totally disconnected space $X$, we therefore consider the family $F$ of all continuous functions from $X$ to $\{0,1\}$. If $C$ is a closed subset of $X$ and $p\in X\setminus C$, then there exists a clopen $K$ which contains $C$ and does not contain $p$. Consider the function \[f(x):=\begin{cases} 0 & \text{if } x\in K\\ 1 & \text{otherwise.} \end{cases}\] It is straightforward to see that the function $f$ belongs to $F$ and that $f(p)=1\notin f[C]\subseteq\{0\}$. \end{proof} \begin{corollary}[Stone 1936]\label{c:stone-duality} The category of Boolean algebras with their homomorphisms is dually equivalent to the category of compact, Hausdorff, totally disconnected spaces with continuous maps among them. \end{corollary} \begin{proof} By composing the equivalences of Corollary \ref{c:presentedBoole-closedsubsets} and the one of Lemma \ref{stonespace-cantorset}. \end{proof} \section{Gelfand duality for $C^{*}$-algebras}\label{s:stone-gelfand} A \emph{\textup{(}complex, commutative, unital\textup{)} $C^*$-algebra} is a complex commutative Banach algebra $A$ (always unital, with identity element written $1$) equipped with an involution ${\cdot}^*\colon A\rightarrow A$ satisfying $\|xx^*\|=\|x\|^2$ for each $x\in A$. Henceforth, `$C^*$-algebra' means `complex commutative unital $C^*$-algebra'. The category $\Cst$ has as objects $C^*$-algebras, and as morphisms their $^{*}$-homomorphisms, i.e.\ the complex-algebra homomorphisms preserving the involution and $1$.
If $X$ is any compact Hausdorff space, let $\Cont{(X,\Cx)}$ denote the complex algebra of all continuous complex-valued functions on $X$, with operations defined pointwise. Equipped with the involution $^{*}$ given by pointwise conjugation, and with the supremum norm, this is a $C^*$-algebra. The landmark Gelfand-Naimark Theorem (commutative version) states that, in fact, any $C^*$-algebra is naturally representable in this manner. A functorial version of the theorem leads to \emph{Gelfand duality}: the category $\Cst$ is dually equivalent to the category $\KH$ of compact Hausdorff spaces and continuous maps. In this section we show how Gelfand duality fits in the framework of affine adjunctions developed above. The first important fact is that we can work at the level of the algebraic adjunction. For this, we first recall that $x\in A$ is \emph{self-adjoint} if it is fixed by $*$, i.e.\ if $x^*=x$. Further, we recall that self-adjoint elements carry a partial order which may be defined in several equivalent ways; see e.g.\ \cite[Section 8.3]{conway}. For our purposes here it suffices to define a self-adjoint element $x\in A$ to be \emph{non-negative}, written $x\geq 0$, if there exists a self-adjoint $y\in A$ such that $x=y^2$. There is a functor $U\colon \Cst\to\Set$ that takes a $C^*$-algebra $A$ to the collection of its non-negative self-adjoint elements whose norm does not exceed unity: \[ U(A):=\{x\in A\mid x^*=x, 0\leq x, \|x\|\leq 1 \}. \] In particular, $U(\Cx)=[0,1]$, the real unit interval. In the following we always topologise $[0,1]$ with its Euclidean topology, and powers $[0,1]^S$ with the product topology. It is elementary that the restriction of a $^{*}$-homomorphism $A\to B$ to $U(A)$ induces a function $U(A)\to U(B)$, for all $C^*$-algebras $A$ and $B$, so that $U$ is indeed a functor.
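For orientation, the effect of $U$ on the function algebras $\Cont{(X,\Cx)}$ can be computed directly; the following routine verification is recorded only for the reader's convenience.

```latex
% Routine computation of U on C(X, \Cx), with X compact Hausdorff.
For $f\in\Cont{(X,\Cx)}$: $f$ is self-adjoint if, and only if,
$f=\overline{f}$, i.e.\ $f$ is real-valued; a real-valued $f$ satisfies
$f\geq 0$ if, and only if, $f(x)\geq 0$ for all $x\in X$ (take
$y:=\sqrt{f}$, which is again continuous); and $\|f\|\leq 1$ if, and only
if, $\sup_{x\in X}|f(x)|\leq 1$. Hence
\[
U(\Cont{(X,\Cx)})=\Cont{(X,[0,1])},
\]
which, for $X$ a one-point space, recovers $U(\Cx)=[0,1]$.
```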
\begin{theorem}\label{thm:negrepontis}The category $\Cst$ is equivalent to the category $\Va^*$ of models of a \textup{(}necessarily infinitary\textup{)} algebraic variety. Moreover, under this equivalence the underlying set functor of the variety naturally corresponds to the functor $U\colon \Cst\to\Set$ above. The left adjoint $F$ to the functor $U$ associates to a set $S$ the $C^*$-algebra of all complex-valued, continuous functions on the compact Hausdorff space $[0,1]^{S}$.\end{theorem} \begin{proof}It is well known that the unit-ball functor on $C^*$-algebras is monadic over $\Set$, see \cite[Theorem 1.7]{Negrep}. The functor $U$ that we are considering here is a variant of the unit-ball functor. See \cite{pr} for further background and results. The fact that no finitary variety can dualise $\KH$ was proved, as a consequence of a considerably stronger result, in \cite{Bana}. Together with Gelfand duality this shows that $\Va^*$ cannot be finitary. \end{proof} \begin{remark}\label{rem:isbell} In \cite{Isbell}, Isbell proved that there is a finite set of finitary operations, along with a single infinitary operation of countably infinite arity, that generate all operations in $\Va^*$. It has been a long-standing open problem to provide a manageable equational axiomatisation of $\Va^*$. A solution to this problem appears in \cite{Marra-Reggio}, where a \emph{finite} axiomatisation is provided. The interested reader is referred to \cite{Marra-Reggio} for details. For our purposes here, we do not need an explicit presentation of $\Va^*$. Indeed, we shall identify $C^*$-algebras with objects of $\Va^*$ whenever convenient, it being understood that this identification is via Theorem \ref{thm:negrepontis}. \end{remark} We start by setting: \begin{enumerate} \item $\Va:=\Va^{*}$, and \item $A:=U(\Cx)=[0,1]$.
\end{enumerate} Corollary \ref{cor:algadj} ensures that there exists a dual adjunction $\C^{q}\dashv \V^{q}$ between $\Va^{*}$ and the category of subsets of $[0,1]^{\mu}$ ---with $\mu$ ranging among all cardinals--- and definable maps. The characterisation of the fixed points of the compositions $\C^{q}\circ \V^{q}$ and $\V^{q}\circ \C^{q}$ is now very similar to the one in Stone duality. \begin{lemma}\label{l:c*semisimple} \mbox{} \begin{enumerate} \item\label{l:c*semisimple-item2} The $C^{*}$-algebra $\Cx$ is the only simple algebra in $\Va^{*}$. \item\label{l:c*semisimple-item1} The variety $\Va^{*}$ is semisimple. \end{enumerate} \end{lemma} \begin{proof} The first item amounts to the standard fact that a quotient of a $C^*$-algebra modulo an ideal $I$ is isomorphic to $\Cx$ if, and only if, $I$ is maximal. The second item amounts to the equally well-known fact that each $C^*$-algebra has enough maximal ideals, hence it is a subdirect product of copies of $\Cx$. \end{proof} \begin{corollary} Every commutative $C^{*}$-algebra is fixed by the composition $\C^{q}\circ \V^{q}$. \end{corollary} \begin{proof} By combining Corollary \ref{c:semisimple-fixed} and Lemma \ref{l:c*semisimple}. \end{proof} We now turn to the characterisation of the fixed points in the geometric side. \begin{lemma}\label{l:continuos=definable} A function $f\colon S\to T$, with $S\subseteq [0,1]^{\mu}$ and $T\subseteq [0,1]^{\nu}$, is definable if, and only if, $f$ is continuous with respect to the product topologies, where $[0,1]$ has the Euclidean topology. \end{lemma} \begin{proof} For any fixed cardinal $\mu$, by Theorem \ref{thm:negrepontis} the underlying set of the algebra in ${\Va^{*}}$ freely generated by a set of cardinality $\mu$ is $U(F(\mu))$, that is, the collection of all continuous functions from $[0,1]^{\mu}$ to $[0,1]$.
By definition, a function $f\colon S\subseteq A^{\mu}\to T\subseteq A^{\nu}$ is definable if, and only if, there exists a family $(t_{\beta})_{\beta<\nu}$ of elements of $U(F(\mu))$ such that for any $x\in S$, $f(x)=(t_{\beta}(x))_{\beta<\nu}$. This proves the lemma. \end{proof} \begin{lemma} A subset $S$ of $[0,1]^{\mu}$ is closed in the Zariski \textup{(}=$\VC$\textup{)} topology if, and only if, it is closed in the Euclidean topology. \end{lemma} \begin{proof} We start by proving the claim for subsets of $[0,1]$. If $S$ is closed in the Zariski topology, there exists a set of pairs of definable functions $\theta$ such that \[S=\VV(\theta)=\bigcap_{(f,g)\in\theta}\{s\in [0,1]\mid f(s)=g(s)\}.\] It is then enough to prove that for any $(f,g)\in\theta$ the set $\{s\in [0,1]\mid f(s)=g(s)\}$ is closed in the Euclidean topology. By Lemma \ref{l:continuos=definable} both $f$ and $g$ are continuous, so by \cite[1.5.4]{engelking} $S$ is closed. Conversely, if $S$ is closed in the Euclidean topology then there is a function $f\colon [0,1]\to [0,1]$ that vanishes exactly on $S$, because closed sets are zero-sets in metrisable spaces. Hence $S=\VV(f,0)$ is closed in the Zariski topology. Thus the Zariski and the Euclidean topologies coincide on $[0,1]$. Since $[0,1]$ is Hausdorff, and since by Lemma \ref{l:continuos=definable} all definable functions are continuous, by Lemma \ref{l:co-null} the product and the Zariski topologies coincide, and the proof is complete. \end{proof} \begin{lemma} A topological space is compact and Hausdorff if, and only if, it is homeomorphic to a closed subset of $[0,1]^{\mu}$ for some $\mu$. \end{lemma} \begin{proof} This is a standard fact; see e.g.\ Kelley's embedding lemma \cite[Lemma 4.5, p.~116]{kelley1955general}. \end{proof} \begin{corollary}The variety $\Va^*$ is dually equivalent to $\KH$.
\end{corollary} \section{Conclusions}\label{s:conc} The categorical and the algebraic frameworks presented above are general enough to encompass several dualities in mathematics. The algebraic framework of Part \ref{part:alg}, for example, accommodates such standard theories as Priestley duality for distributive lattices \cite{priestley1984ordered}, Baker--Beynon duality for Riesz spaces and lattice-ordered Abelian groups~\cite{MR0376480}, or Pontryagin duality for compact Abelian groups~\cite{pontrjagin1934theory}. Also, we remark that the dualities for semisimple and finitely presented MV-algebras developed in \cite{MarSpa12, MarSpa13} arose by applying the constructions of the present paper to that specific setting, and thus motivated the present general development. We conclude the paper with a few remarks on further research. \begin{remark}[Galois theory of field extensions] Let $K$ be a field, $L$ a fixed extension of $K$, and let $\Gal_{K}(L)$ be the group of automorphisms of $L$ that fix $K$ (i.e.\ if $h\in \Gal_{K}(L)$ and $k\in K$ then $h(k)=k$). The classical Galois connection between the intermediate field extensions $K\subseteq F \subseteq L$ and the subgroups of $\Gal_{K}(L)$ can be recovered as a restriction of the adjunction of Theorem \ref{thm:genadj}. To this end we set: \begin{itemize} \item $\T=\Gal_{K}(L)$ (i.e., the category with a single object $\a$ and with arrows the elements of the group $\Gal_{K}(L)$, composition between them being given by the group operation), \item $\a$ is the unique object of $\T$, \item $\S$ has field extensions of $K$ as objects and elements of $\Gal_{K}(L)$ as arrows, \item $\I$ is the functor picking the object $L$ of $\S$ and acting identically on arrows. \end{itemize} In this setup the objects of the category $\R$ are pairs $(\a, R)$, where $R\subseteq \hom_{\T}(\a, \a)^{2}$. As the first component of the pairs can only be $\a$, we only write $R$ for an object of $\R$.
Further, as automorphisms always have an inverse, the condition that $p$ and $q$ act equally on some field $F$ is equivalent to the condition that the automorphism $pq^{-1}$ acts identically on $F$. We can therefore conceive of relations on $\hom{(\a,\a)}$ as subsets of $\hom{(\a,\a)}$. The objects of the category $\D$ are pairs $(\a, F)$ where $F$ is a field such that $K\subseteq F\subseteq L$. For the same reason as above, we only write $F$ for an object of $\D$. For any object $R$ in $\R$, the operator $\VV$ specialises to the following: \begin{align} \VV{(R)}=\bigcup \{F\mid \forall h\in R\quad \restr{h}{F}=\id_{F} \}\label{eq:Gal1} \end{align} For any object $F$ in $\S$, the operator $\CC$ specialises to the following: \begin{align} \CC{(F)}= \{h\in \Gal_{K}(L)\mid \forall f\in F\quad h(f)=f \}\label{eq:Gal2} \end{align} The right-hand set of \eqref{eq:Gal1} is the fixed field, often denoted by $L^{R}$ in classical Galois theory \cite[Chapter VI]{Lang}. The right-hand side of \eqref{eq:Gal2} is the subgroup $\Gal_{F}(L)$ of automorphisms of $L$ that fix $F$ pointwise. In a similar way one can also give an account of the Galois connection between fundamental groups and covering spaces of a sufficiently nice topological space; cf.\ Grothendieck's paper \cite{grothendieck1971revetements}. \end{remark} We have characterised the fixed points of the affine adjunction in the algebraic framework through the {\sl Nullstellensatz} and the Stone--Gelfand--Kolmogorov Lemma. The topological side, however, awaits further investigation. In particular, one would like to know when the operator $\VV\circ \CC$ is topological, and one would like to be able to compare abstractly the $\VV\circ \CC$ topology and the product topology on $A^{\mu}$.
\subsection*{Acknowledgements.} \noindent The first author thankfully acknowledges partial support from a Research Fellowship at Jesus College, Cambridge, a CARMIN Fellowship at IH\'ES-IHP, a Marie Curie INdAM-COFUND-2012 Fellowship and two travel grants of the Dipartimento di Matematica {\sl Federico Enriques} of the Universit\`a degli Studi di Milano. The second author gratefully acknowledges partial support by the Italian FIRB ``Futuro in Ricerca'' grant RBFR10DGUA, which also made possible two visits of the first author to his Department. The second author is also grateful to the Department of Pure Mathematics and Mathematical Statistics of Cambridge University, to the Category Theory Seminar there, and to the first author, for inviting him to give a talk related to the present paper. The second and third authors further acknowledge partial support from the Italian National Research Project (PRIN2010--11) entitled \emph{Metodi logici per il trattamento dell'informazione}. Parts of this article were written while the second and third authors were kindly hosted by the CONICET in Argentina within the European FP7-IRSES project \emph{MaToMUVI} (GA-2009-247584). The third author gratefully acknowledges partial support by the Marie Curie Intra-European Fellowship for the project ``ADAMS'' (PIEF-GA-2011-299071).
\section{Introduction} We are interested in the possible necessary and sufficient conditions for a continuous mapping to be open. Recall that a mapping is {\em open} if it maps open sets onto open sets. By {\em Remmert's open mapping theorem} (see, for example, \cite[Theorem~2, p.~297]{Lojasiewicz1991} or \cite[Proposition~4, p.~132]{Narasimhan1966}), it is well known that a holomorphic mapping $f\colon\mathbb C^n\to\mathbb C^n$ is open if and only if its fibers are discrete. From the work of Church~\cite{Church1963} (see also \cite{Church1978}) we know that if a mapping $f \colon\mathbb R^n \to \mathbb R^n$ is $C^n$ and light (i.e., the fibers of $f$ are totally disconnected), then $f$ is open if and only if the Jacobian (i.e., the determinant of the Jacobian matrix) of $f$ does not change sign or, equivalently, the set of points at which $f$ is not a local homeomorphism has dimension at most $n - 2.$ For a real polynomial mapping $f\colon\mathbb R^n\to\mathbb R^n$, Gamboa and Ronga~\cite{Gamboa1996} proved that $f$ is open if and only if the fibers of $f$ are finite and the Jacobian of $f$ does not change sign. After that, Hirsch~\cite{Hirsch2002} showed that the Jacobian of a real analytic open mapping $f\colon\mathbb R^n\to\mathbb R^n$ does not change sign. Recently, in \cite[Theorem 3.14]{Denkowski2017}, Denkowski and Loeb showed that a subanalytic (or definable in some o-minimal structure) mapping $f$ of class $C^1$ is open if and only if the fibers of $f$ are discrete and the Jacobian of $f$ does not change sign. In the non-smooth setting, a result of Scholtes~\cite{Scholtes2012} (see Corollary~\ref{Corollary31} below) states that a piecewise affine mapping $f \colon \mathbb{R}^n \to \mathbb{R}^n$ is open if and only if it is coherently oriented (meaning that the Jacobians of the affine mappings associated with $f$ all have the same nonzero sign). See also \cite{Borwein1988, Gowda1996, Lee2021, Mordukhovich2018, Penot1989, Yen2008} for related works.
We present here a {\em definable non-smooth} version of the above results. Namely, let ${f} \colon \Omega \to \mathbb{R}^n$ be a definable continuous mapping, where $\Omega$ is a definable connected open set in $\mathbb R^n.$ Denote by $D_f$ the set of points in $\Omega$ at which $f$ is differentiable; this set is dense in $\Omega$ (see, e.g., \cite{Coste2000, Dries1998}). We also denote by $B_f$ the set of points in $\Omega$ at which $f$ fails to be a local homeomorphism. Then the following statements are equivalent: \begin{enumerate}[{\rm (i)}] \item The mapping $f$ is open. \item The fibers of $f$ are finite and the Jacobian of $f$ does not change sign on $D_f.$ \item The fibers of ${f}$ are finite and the set $B_f$ has dimension at most $n - 2.$ \end{enumerate} The idea of the proof of the equivalence (i)~$\Leftrightarrow$~(ii) is similar to those in \cite{Denkowski2017, Gamboa1996, Hirsch2002}. However, more arguments need to be taken into account in the non-smooth case. \medskip In \cite{Whyburn1951} Whyburn stated the following conjecture and verified it for the case $n = 2$: {\em Suppose that $f$ is a light open continuous mapping of one closed ball of dimension $n$ onto another. If $f$ maps the boundary homeomorphically, then $f$ is a homeomorphism.} Whyburn's conjecture has been proved for several classes of continuous mappings by McAuley~\cite{McAuley1965}. The conjecture has been verified for differentiable mappings with an additional hypothesis by Cronin and McAuley~\cite{Cronin1966}. It is also valid for $C^2$ mappings due to Marx~\cite{Marx1968}. On the other hand, it has been shown by Wilson~\cite{Wilson1973} that the conjecture is false for continuous mappings on higher dimensional spaces. However, the second result of this paper shows that the conjecture is still true for definable continuous mappings. \medskip The rest of this paper is organized as follows.
Section~\ref{SectionPreliminary} contains some properties of definable sets and mappings in o-minimal structures. For the convenience of the reader, the classical invariance of domain theorem and some properties of the Brouwer degree are also recalled here. Finally, results are given in Section~\ref{Section3}. \section{Preliminaries} \label{SectionPreliminary} \subsection{Notation and definitions} We suppose $1 \le n \in \mathbb{N}$ and abbreviate $(x_1, \ldots, x_n)$ by $x.$ The space $\mathbb{R}^n$ is equipped with the usual scalar product $\langle \cdot, \cdot \rangle$ and the corresponding Euclidean norm $\| \cdot\|.$ The open ball and sphere of radius $r$ centered at the origin in $\mathbb{R}^n$ will be denoted by $\mathbb{B}^n_r$ and $\mathbb{S}^{n - 1}_r,$ respectively. If $x \in \mathbb{R}^n$ and $\Omega \subset \mathbb{R}^n$ is a non-empty set then the distance from $x$ to $\Omega$ is defined by $\mathrm{dist}(x, \Omega):=\inf_{y \in \Omega}\|x - y\|.$ The closure and boundary of a set $\Omega \subset \mathbb{R}^n$ will be written as $\overline{\Omega}$ and $\partial \Omega,$ respectively. Let ${f}$ be a mapping from an open set $\Omega$ in $\mathbb{R}^n$ into $\mathbb{R}^n.$ Then $f$ is an {\em open mapping} if $f(U)$ is an open subset of $\mathbb{R}^n$ whenever $U$ is an open subset of $\Omega;$ $f$ has {\em finite fibers} if for each $y \in \mathbb{R}^n,$ the fiber $f^{-1}(y)$ is a finite (possibly empty) set. Let $D_f$ denote the set of points in $\Omega$ at which $f$ is differentiable. If $x \in D_f$ then we denote the Jacobian matrix of ${f}$ at $x$ by $d{f}(x) =\left[\frac{\partial {f}_i}{\partial x_j}(x)\right],$ and the determinant of $d{f}(x)$ is the {\em Jacobian} of ${f}$ at $x,$ denoted by $J{f}(x).$ Let $R_f$ denote the set of points $x \in \Omega$ such that ${f}$ is of class $C^1$ in a neighborhood of $x$ and the Jacobian $Jf(x)$ is nonzero. Observe that $R_f$ is an open set but it is not necessarily connected. 
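For instance (a standard one-dimensional example of ours, not taken from the references), for the absolute-value mapping the sets just introduced are:
```latex
f(x) = |x| \ \text{on } \Omega = \mathbb{R}:\qquad
D_f \ =\ R_f \ =\ (-\infty, 0) \cup (0, +\infty),\qquad
Jf(x) \ =\ f'(x) \ =\ \operatorname{sign}(x) \ \text{on } D_f.
```
Here $R_f$ is open but disconnected, and $Jf$ changes sign on $D_f$; accordingly $f$ is not open, since $f((-1,1)) = [0,1)$ is not open.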
As in \cite{Church1963}, the {\em branch} set of $f,$ denoted by $B_f,$ is the set of points in $\Omega$ at which $f$ fails to be a local homeomorphism. Note that $B_f \subset \Omega \setminus R_f$ by the inverse mapping theorem. \subsection{The invariance of domain theorem and the Brouwer degree} For the convenience of the reader, we recall here the classical invariance of domain theorem and some properties of the Brouwer degree. \begin{lemma} [Invariance of domain] \label{Brouwer1} Let $\Omega$ be an open subset of $\mathbb{R}^n.$ Then every injective continuous mapping from $\Omega$ into $\mathbb{R}^n$ is open. \end{lemma} \begin{proof} See, for example, \cite[Chapter~4, Proposition~7.4]{Dold1972}. \end{proof} Suppose $\Omega$ is an open and bounded set in $\mathbb{R}^n,$ $f \colon \overline{\Omega} \rightarrow \mathbb{R}^n$ is a continuous mapping, and $y \not \in f(\partial \Omega).$ The {\em Brouwer degree} of $f$ on $\Omega$ with respect to $y,$ denoted by $\deg(f, \Omega, y),$ is an integer-valued function which enjoys several important properties (normalization, domain decomposition, local constancy, homotopy invariance, etc.). The Brouwer degree is a powerful tool used in analysis and topology; in particular, it gives information on the number and the nature of the solutions of the equation $f(x) = y$ in $\Omega.$ For more details, the reader may refer to \cite{Deimling1985, LLoyd1978} and the references therein. The two lemmas below provide some useful properties of the Brouwer degree.
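Before stating them, a one-dimensional example of ours may help fix ideas. For $f(x) = x^2$ on $\Omega = (-2, 2)$ and $y = 1 \not\in f(\partial \Omega) = \{4\},$ both preimages of $y$ are regular points, and the classical sign-sum formula (Lemma~\ref{Brouwer3}(ii) below) gives:
```latex
\deg(f, \Omega, 1)
\ =\ \sum_{x \in f^{-1}(1)} \operatorname{sign} Jf(x)
\ =\ \operatorname{sign} f'(-1) + \operatorname{sign} f'(1)
\ =\ -1 + 1 \ =\ 0.
```
The vanishing degree reflects the fact that the two solutions of $x^2 = 1$ count with opposite orientations; a nonzero degree would instead force the solvability of $f(x) = y$ in $\Omega$ (Lemma~\ref{Brouwer2}(i)).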
\begin{lemma}\label{Brouwer2} Let $\Omega$ be an open and bounded set in $\mathbb{R}^n,$ $f \colon \overline{\Omega} \rightarrow \mathbb{R}^n$ be a continuous mapping, and $y \not \in f(\partial \Omega).$ Then the following statements hold: \begin{enumerate}[{\rm (i)}] \item If $\deg(f, \Omega, y) \ne 0,$ then there is $x \in \Omega$ such that $f(x) = y.$ \item For all $y' \in \mathbb{R}^n$ with $\|y' - y\| < \mathrm{dist} (y, f(\partial \Omega))$ we have \begin{eqnarray*} \deg(f, \Omega, y') & = & \deg(f, \Omega, y). \end{eqnarray*} \item If $H \colon \overline{\Omega} \times [0,1] \to \mathbb{R}^n$ is continuous, $\gamma \colon [0,1] \to \mathbb{R}^n$ is continuous, and $\gamma(t) \not \in H(\partial \Omega, t)$ for every $t\in [0,1]$, then $\deg(H(\cdot, t), \Omega, \gamma(t))$ is independent of $t\in [0,1]$. \end{enumerate} \end{lemma} \begin{proof} See, for example, \cite{Deimling1985, LLoyd1978}. \end{proof} \begin{lemma}\label{Brouwer3} Let $\Omega$ be an open and bounded set in $\mathbb{R}^n$ and $f \colon \overline{\Omega} \rightarrow \mathbb{R}^n$ be a continuous mapping. Then the following statements hold: \begin{enumerate}[{\rm (i)}] \item Let $x \in \Omega$ be such that the Jacobian matrix $df(x)$ exists and is nonsingular. Then there exists a neighborhood $W$ of $x$ such that $\overline{W} \cap f^{-1}(f(x)) = \{x\}$ and \begin{eqnarray*} \deg (f, W, f(x)) &=& \mathrm{sign} Jf(x). \end{eqnarray*} \item Let $y \not \in f(\partial \Omega)$ be such that for every $x \in f^{-1}(y),$ the Jacobian matrix $df(x)$ exists and is nonsingular. Then \begin{eqnarray*} \deg (f, \Omega, y) &=& \sum_{x \in f^{-1}(y)} \mathrm{sign} Jf(x). \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} See \cite{Deimling1985, LLoyd1978} for the smooth case and see \cite{HaTDX2021, Pourciau1983-2, Shannon1994} for the non-smooth case. 
\end{proof} \subsection{O-minimal structures and definable mappings} The notion of o-minimality was developed in the late 1980s after it was noticed that many proofs of analytic and geometric properties of semi-algebraic sets and mappings can be carried over verbatim for subanalytic sets and mappings. We refer the reader to \cite{Coste2000, Dries1998, Dries1996} for the basic properties of o-minimal structures used in this paper. \begin{definition}{\rm An {\em o-minimal structure} on $(\mathbb{R}, +, \cdot)$ is a sequence $\mathcal{D} := (\mathcal{D}_n)_{n \in \mathbb{N}}$ such that for each $n \in \mathbb{N}$: \begin{itemize} \item [(a)] $\mathcal{D}_n$ is a Boolean algebra of subsets of $\mathbb{R}^n$. \item [(b)] If $X \in \mathcal{D}_m$ and $Y \in \mathcal{D}_n$, then $X \times Y \in \mathcal {D}_{m+n}.$ \item [(c)] If $X \in \mathcal {D}_{n + 1},$ then $\pi(X) \in \mathcal {D}_n,$ where $\pi \colon \mathbb{R}^{n+1} \to \mathbb{R}^n$ is the projection on the first $n$ coordinates. \item [(d)] $\mathcal{D}_n$ contains all algebraic subsets of $\mathbb{R}^n.$ \item [(e)] Each set belonging to $\mathcal{D}_1$ is a finite union of points and intervals. \end{itemize} }\end{definition} A set belonging to $\mathcal{D}$ is said to be {\em definable} (in that structure). {\em Definable mappings} in structure $\mathcal{D}$ are mappings whose graphs are definable sets in $\mathcal{D}.$ Examples of o-minimal structures are \begin{itemize} \item the semi-algebraic sets (by the Tarski--Seidenberg theorem), \item the globally subanalytic sets, i.e., the subanalytic sets of $\mathbb{R}^n$ whose (compact) closures in the real projective space $\mathbb{R}\mathbb{P}^n$ are subanalytic (using Gabrielov's complement theorem). \end{itemize} In this note, we fix an arbitrary o-minimal structure on $(\mathbb{R}, +, \cdot).$ The term ``definable'' means definable in this structure. We recall some useful facts which we shall need later. 
\begin{lemma} [{Monotonicity}] \label{MonotonicityLemma} Let $f \colon (a, b) \rightarrow \mathbb{R}$ be a definable function and $p$ be a positive integer. Then there are finitely many points $a = t_0 < t_1 < \cdots < t_k = b$ such that the restriction of $f$ to each interval $(t_i, t_{i + 1})$ is of class $C^p$, and either constant or strictly monotone. \end{lemma} \begin{lemma}[Path connectedness]\label{PathConnectedness} The following statements hold: \begin{enumerate}[{\rm (i)}] \item Every definable set has a finite number of connected components and each such component is definable. \item Every definable connected set $X$ is path connected, i.e., for all points $x, y$ in $X,$ there exists a definable continuous curve $\gamma \colon [0, 1] \to X$ such that $\gamma(0) = x$ and $\gamma(1) = y.$ \end{enumerate} \end{lemma} By the cell decomposition theorem (see, for example~\cite[4.2]{Dries1996}), for any $p \in \mathbb{N}$ and any nonempty definable subset $X$ of $\mathbb{R}^n,$ we can write $X$ as a disjoint union of finitely many definable $C^p$-manifolds of different dimensions. The {\em dimension} $\dim X$ of a nonempty definable set $X$ can thus be defined as the dimension of the manifold of highest dimension of such a decomposition. This dimension is well defined and independent of the decomposition of $X.$ By convention, the dimension of the empty set is taken to be negative infinity. A point $x\in X$ is {\em generic} if there exists a neighborhood $U$ of $x$ in $\mathbb R^n$ such that $X\cap U$ is a definable $C^1$-manifold of dimension $\dim X.$ \begin{lemma}\label{DimensionLemma} Let $X \subset \mathbb R^n$ be a nonempty definable set.
Then the following statements hold: \begin{enumerate}[{\rm (i)}] \item The set $X$ has measure zero if and only if $\dim X <n.$ \item The interior of $X$ is nonempty if and only if $\dim X = n.$ \item $\dim (\overline{X}\setminus X) < \dim X.$ In particular, $\dim \overline{X} = \dim X.$ \item Let $Y \subset \mathbb{R}^n$ be a definable set containing $X.$ If $X$ is dense in $Y$ then $\dim (Y \setminus X) < \dim Y.$ \item If $Y \subset \mathbb{R}^n$ is definable, then $\dim (X \cup Y) = \max \{\dim X, \dim Y\}.$ \item Let $f \colon X \rightarrow \mathbb R^m$ be a definable mapping. Then $\dim f(X) \leq \dim X.$ \item The complement in $X$ of the set of generic points in $X$ is a definable set of dimension less than $\dim X.$ \end{enumerate} \end{lemma} \begin{lemma}\label{DiffrentiableLemma} Let $X \subset \mathbb{R}^n$ be a definable open set and $f \colon X \to \mathbb R^m$ be a definable mapping. Then for each positive integer $p,$ the set of points where $f$ is not of class $C^p$ is a definable set of dimension less than $n.$ \end{lemma} In the sequel we will make use of Hardt's triviality (see \cite{Hardt1980, Dries1998}). \begin{theorem}[Hardt's triviality] \label{HardtTheorem} Consider a definable continuous mapping $f \colon X \rightarrow Y$ where $X \subset \mathbb{R}^n$ and $Y \subset \mathbb{R}^m$ are definable sets. 
Then there exists a finite partition $Y = Y_1 \cup \cdots \cup Y_k$ of $Y$ into definable sets $Y_i$ such that $f$ is definably trivial over each $Y_i,$ that is, there exists a definable set $F_i \subset \mathbb{R}^{n_i},$ for some $n_i,$ and a definable homeomorphism $h_i \colon f^{-1} (Y_i) \rightarrow Y_i \times F_i$ such that the composition of $h_i$ with the projection $Y_i \times F_i \rightarrow Y_i, (y, z) \mapsto y,$ is equal to the restriction of $f$ to $f^{-1} (Y_i).$ \end{theorem} \section{Results and proofs} \label{Section3} The following result provides necessary and sufficient conditions for a definable continuous mapping to be open. For related results, we refer the reader to \cite{Chernavski1964, Chernavski1965, Church1960, Church1963, Church1967, Church1978, Denkowski2017, Gamboa1996, Hirsch2002, Titus1952, Vaisala1966}. \begin{theorem}\label{MainTheorem} Let ${f} \colon \Omega \to \mathbb{R}^n$ be a definable continuous mapping, where $\Omega$ is a definable connected open set in $\mathbb R^n.$ Then the following conditions are equivalent: \begin{enumerate}[{\rm (i)}] \item The mapping ${f}$ is open. \item The fibers of ${f}$ are finite and the Jacobian $Jf$ does not change sign on $D_{f}.$ \item The fibers of ${f}$ are finite and the Jacobian $Jf$ does not change sign on $R_{f}.$ \item The fibers of ${f}$ are finite and the branch set $B_f$ has dimension at most $n - 2.$ \end{enumerate} \end{theorem} In order to prove Theorem~\ref{MainTheorem}, we need some lemmas. The first one is \cite[Fact~2.2]{Peterzil2007}, which is a consequence of \cite[Proposition~2]{Johns2001}; we recall it here together with a direct proof.
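Before turning to the lemmas, a simple polynomial (hence definable) example of ours illustrating the conditions of Theorem~\ref{MainTheorem} is the planar squaring mapping:
```latex
f \colon \mathbb{R}^2 \to \mathbb{R}^2,\qquad
f(x, y) = (x^2 - y^2,\ 2xy),\qquad
Jf(x, y) = \det \begin{pmatrix} 2x & -2y \\ 2y & 2x \end{pmatrix}
= 4(x^2 + y^2) \ \ge\ 0.
```
Every fiber has at most two points, $Jf$ never changes sign, and $B_f = \{(0,0)\}$ has dimension $0 = n - 2;$ correspondingly $f,$ which is $z \mapsto z^2$ in the complex coordinate $z = x + iy,$ is indeed open.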
\begin{lemma} \label{Lemma31} Let $\Omega$ be a definable connected open subset of $\mathbb{R}^n$ and $R$ be a definable open dense subset of $\Omega.$ Consider the graph $\Gamma$ whose vertices are the connected components of $R$ with two components $R_i$ and $R_j$ connected by an edge if and only if $\dim (\Omega \cap \overline{R}_i\cap \overline{R}_j)\geqslant n - 1.$ Then $\Gamma$ is connected. \end{lemma} \begin{proof} Suppose for contradiction that the graph $\Gamma$ is not connected. There exists a connected component of $\Gamma$ whose vertices are $R_1, \ldots, R_p$ for some $p \geqslant 1,$ and let $R_{p + 1}, \ldots, R_{q}$ with $q \geqslant p + 1$ be the remaining vertices of $\Gamma.$ By assumption, we have for all $i = 1, \ldots, p,$ and all $j = p + 1, \ldots, q,$ $$\dim(\Omega \cap \overline R_i\cap\overline R_j) < n - 1.$$ Set $$X_1:= \bigcup_{i=1}^p (\Omega \cap \overline R_i) \quad \text{ and } \quad X_2 := \bigcup_{j=p+1}^q (\Omega \cap \overline R_j).$$ Observe that $$X_1\cap X_2 = \left(\bigcup_{i=1}^p (\Omega \cap \overline R_i) \right)\bigcap\left(\bigcup_{j=p+1}^q (\Omega \cap \overline R_j)\right)=\bigcup_{\substack{i=1,\dots,p\\ j=p + 1,\dots,q}} (\Omega \cap \overline R_i\cap \overline R_j).$$ Hence $\dim (X_1\cap X_2) < n - 1$ and so $\Omega \setminus(X_1\cap X_2)$ is path connected. On the other hand, since $R = \cup_{i = 1, \ldots, q} R_i$ is dense in $\Omega,$ we have $\Omega = X_1 \cup X_2$ and so $$\Omega\setminus(X_1\cap X_2) = (X_1 \setminus X_2) \cup (X_2 \setminus X_1).$$ Therefore, there is a continuous path $$\gamma\colon[0,1]\to\Omega\setminus(X_1\cap X_2)$$ such that $\gamma(0)\in X_1 \setminus X_2$ and $\gamma(1)\in X_2 \setminus X_1.$ Set $$t_* :=\sup \{t \in[0, 1]:\ \gamma(s) \in X_1 \setminus X_2 \ \textrm{ for all } \ s \in [0, t) \}.$$ Then it is easy to check that $\gamma(t_*) \in X_1\cap X_2$, which is a contradiction. The lemma is proved. \end{proof} The following result is taken from \cite[Lemma~3.1]{Peterzil2007}. 
\begin{lemma} \label{Lemma32} Let $W\subset \mathbb R^{n-1}$ and $U \subset\mathbb R^n$ be definable open sets such that the set $\widetilde W : = W \times \{0\}$ is contained in the boundary of $U.$ Let $f \colon U \cup \widetilde W\to \mathbb R$ be a definable function continuous on $U \cup \widetilde W$ and $C^1$ on $U.$ Let $g \colon W \to \mathbb R$ be the function $y \mapsto f(y, 0)$ and $a \in W$ be a generic point. Then $g$ is differentiable at $a$ and, for all $i = 1,\dots, n - 1,$ $$\frac{\partial g}{\partial x_i}(a)\ =\ \lim_{x \to (a, 0),\, x \in U}\frac{\partial f}{\partial x_i}(x).$$ \end{lemma} The following fact is simple but useful. \begin{lemma} \label{Lemma33} Let ${f} \colon \Omega \to \mathbb{R}^n$ be a definable continuous mapping, where $\Omega$ is a definable open set in $\mathbb R^n.$ Assume that the fibers of $f$ are finite. Then the following two statements hold: \begin{enumerate}[{\rm (i)}] \item If $X$ is a definable subset of $\Omega,$ then $\dim X = \dim f(X).$ \item The set $R_f$ is dense in $\Omega.$ \end{enumerate} \end{lemma} \begin{proof} (i) Let $X$ be a definable subset of $\Omega.$ Applying Hardt's triviality theorem (Theorem~\ref{HardtTheorem}) to the restriction mapping $f|_X$ of $f$ to $X,$ we obtain a finite definable partition of $f(X)$ into $Y_1, \ldots, Y_k$ such that $f|_X$ is definably trivial over each $Y_i.$ Since the fibers of $f$ are finite, it follows that $$\dim (f|_X)^{-1}(Y_i)\ =\ \dim Y_i \quad \textrm{ for } \quad i = 1, \ldots, k.$$ Hence $$\dim X\ =\ \max_{i = 1, \ldots, k} \dim (f|_X)^{-1}(Y_i)\ =\ \max_{i = 1, \ldots, k} \dim Y_i\ =\ \dim f(X),$$ which yields (i). (ii) Let $B$ be the set of points $x \in \Omega$ such that ${f}$ is not $C^1$ in a neighborhood of $x.$ Then $B$ is closed in $\Omega.$ Further, in view of Lemma~\ref{DiffrentiableLemma}, $B$ is a definable set of dimension less than $n.$ Consider the definable set \begin{eqnarray*} {C} &:=& \{x \in \Omega \setminus B \ : \ J{f}(x) = 0 \}.
\end{eqnarray*} Applying Sard's theorem to the $C^1$-mapping $$\Omega \setminus B \to \mathbb{R}^n, \quad x \mapsto {f}(x),$$ we have that ${f}({C})$ has measure zero, and so it has dimension less than $n$ in view of Lemma~\ref{DimensionLemma}(i). This, together with the statement~(i), implies that $$\dim {C}\ =\ \dim f({C}) < n.$$ Consequently, $\dim (B \cup {C}) < n.$ Now (ii) follows immediately since $R_f = \Omega \setminus (B \cup {C}).$ This ends the proof of the lemma. \end{proof} We are now in a position to prove Theorem~\ref{MainTheorem}. \begin{proof}[Proof of Theorem~\ref{MainTheorem}] (cf. \cite[Theorem~3.14]{Denkowski2017}, \cite[Theorem~3.2]{Peterzil2007}). \medskip (ii) $\Rightarrow$ (iii): This is obvious, since $R_f \subset D_f.$ \medskip (iii) $\Rightarrow$ (i): To see that $f$ is open, take any $\overline{x} \in \Omega$ and let $W$ be a neighborhood of $\overline{x}.$ Since the fiber ${f}^{-1}({f}(\overline{x}))$ is finite, there exists an open ball $U$ centered at $\overline{x}$ with $\overline{U} \subset W$ such that $\overline{U} \cap {f}^{-1}({f}(\overline{x})) = \{\overline{x}\}.$ Since $\partial{U}$ is compact and ${f}(\overline{x}) \not \in {f}(\partial U),$ we can find an open ball $V \subset \mathbb{R}^n$ centered at ${f}(\overline{x})$ such that ${f}(\partial U) \cap V = \emptyset.$ Now the conclusion follows if we can show that ${f}(U) \supset V.$ Recall that $R_f$ denotes the set of points $x \in \Omega$ such that ${f}$ is of class $C^1$ on a neighborhood of $x$ and the Jacobian $Jf(x)$ is nonzero. In view of Lemma~\ref{Lemma33}(ii), $R_f$ is dense in $\Omega.$ Since $U$ is an open subset of $\Omega,$ there exists a point $u$ in $U \cap R_f.$ Then ${f}$ is of class $C^1$ on a neighborhood of $u$ and the Jacobian matrix of $f$ at $u$ is nonsingular.
By the inverse mapping theorem, the image ${f}(U)$ must contain an open set $Y \subset V.$ On the other hand, it follows from Lemmas~\ref{Lemma33}~and~\ref{DimensionLemma}(iv) that \begin{eqnarray*} \dim f(\Omega \setminus R_f) &=& \dim(\Omega \setminus R_f) \ < \ n. \end{eqnarray*} So there must exist some point $\overline{y}$ in $Y$ with $f^{-1}(\overline{y}) \subset R_f.$ Therefore, $U \cap f^{-1}(\overline{y})$ is nonempty and for every $x \in U \cap f^{-1}(\overline{y}),$ the Jacobian matrix $df(x)$ exists and is nonsingular. Furthermore, by construction, $\overline{y}$ is not in ${f}(\partial U).$ For simplicity of notation, we let ${g}$ stand for the restriction of $f$ to $\overline{U}.$ Then the Brouwer degree $\deg ({g}, U, \overline{y})$ is defined. In light of Lemma~\ref{Brouwer3}(ii), we have \begin{eqnarray*} \deg ({g}, U, \overline{y}) &=& \sum_{x \in {g}^{-1}(\overline{y})} \mathrm{sign} J{g}(x). \end{eqnarray*} On the other hand, by assumption, the Jacobian $Jf$ is positive (or negative) on $R_{f},$ and so \begin{eqnarray*} \sum_{x \in {g}^{-1}(\overline{y})} \mathrm{sign} J{g}(x) &\ne& 0. \end{eqnarray*} Therefore $\deg ({g}, U, \overline{y}) \ne 0.$ Finally, to prove that $V \subset {f}(U),$ take any $y \in V$ and consider the continuous curve $$\gamma \colon [0, 1] \to V, \quad t \mapsto (1 - t)y + t \overline{y},$$ connecting $y$ to $\overline{y}.$ Since ${f}(\partial U) \cap V = \emptyset,$ we have $\gamma(t) \not \in {g}(\partial U)$ for all $t \in [0, 1].$ This, together with Lemma~\ref{Brouwer2}(iii), implies that $$\deg ({g}, U, y) \ = \ \deg ({g}, U, \overline{y}) \ \ne \ 0.$$ By Lemma~\ref{Brouwer2}(i), ${g}(x) = y$ for some $x$ in $U,$ and this proves the openness of $f.$ \medskip (i) $\Rightarrow$ (ii): Assume that ${f}$ is open. If $n = 1,$ then ${f}$ is strictly monotone and there is nothing to prove.
So for the rest of the proof we assume that $n > 1.$ By \cite[Theorem~3.10]{Denkowski2017} (see also \cite[Proposition, page~298]{Gamboa1996}), the fibers of $f$ are finite. We first show that the Jacobian $Jf$ has constant sign on $R_f$ which means that $Jf$ is positive (or negative) on $R_f.$ Obviously, the set $R_f$ is definable open, and according to Lemma~\ref{Lemma33}(ii), it is dense in $\Omega.$ By Lemma~\ref{PathConnectedness}, $R_f$ has a finite number of connected components, say ${R}_1, \ldots, {R}_k,$ and these components are path connected. By definition, $J{f}$ has constant sign on each ${R}_i.$ According to Lemma~\ref{Lemma31}, we need to show that for any two components ${R}_i, {R}_j$ with $\dim (\Omega \cap \overline{R}_i \cap \overline{R}_j) \geqslant n - 1,$ the sign of $J{f}$ on ${R}_i$ is the same as on ${R}_j.$ We consider two such components ${R}_i, {R}_j$ and assume that they are ${R}_1$ and ${R}_2.$ Let $\Sigma := \Omega \cap \overline{R}_1 \cap \overline{R}_2.$ In view of Lemma~\ref{DimensionLemma}(iii), it is easy to see that $\dim \Sigma = n - 1.$ Let $\overline{x}$ be a generic point in $\Sigma;$ then $\Sigma$ is a $C^1$-submanifold of $\mathbb{R}^n$ of dimension $n - 1$ near $\overline{x}.$ Hence, there is a definable connected open neighborhood $U \subset \Omega$ of $\overline{x}$ and a definable diffeomorphism $\Phi$ from $U$ onto an open subset of $\mathbb R^n$ such that $\Phi(U \cap \Sigma) \subset \{x_n = 0\}.$ Shrinking $U$ and composing $\Phi$ with the reflection with respect to the hyperplane $\{x_n = 0\}$ if necessary, we may assume that $\Phi(U \cap {R}_1) \subset \{x_n > 0\}$ and $\Phi(U \cap {R}_2) \subset \{x_n < 0\}.$ Since $\Phi$ is a diffeomorphism and $U$ is connected, the sign of $J\Phi$ is constant on $U.$ Furthermore ${f} \circ \Phi^{-1}$ is open on $\Phi(U).$ Hence we may replace $\Omega$ by $\Phi(U),$ ${f}$ by ${f} \circ \Phi^{-1}$ and assume that $${R}_1 \subset \{x_n > 0\}, \quad {R}_2 \subset \{x_n < 0\} \quad 
\text{ and } \quad \Sigma \subset \{x_n = 0\}.$$ On the other hand, since the fibers of $f$ are finite, it follows from Lemma~\ref{Lemma33}(i) that \begin{eqnarray*} \dim f(\Sigma) &=& \dim \Sigma \ = \ n - 1. \end{eqnarray*} Let $\overline{y}$ be a generic point in $f(\Sigma);$ then $f(\Sigma)$ is a $C^1$-submanifold of $\mathbb{R}^n$ of dimension $n - 1$ near $\overline{y}.$ As before, by applying an appropriate definable diffeomorphism on an open neighborhood of $\overline{y}$ we may assume that ${f}(\Sigma) \subset \{y_n = 0\}.$ Applying Hardt's Triviality Theorem~\ref{HardtTheorem} to the restriction of $f$ to $f^{-1}(f(\Sigma)),$ we obtain a finite definable partition of $f(\Sigma)$ into $Y_1, \ldots, Y_m$ such that this restriction mapping is definably trivial over each $Y_i.$ By Lemma~\ref{PathConnectedness}, $f^{-1}(Y_i)$ has a finite number of connected components and each such component is homeomorphic to $Y_i$ because $f$ has finite fibers. Observe that there exists an index $i$ such that the set $f^{-1}(Y_i) \cap \Sigma$ is of dimension $n - 1.$ Let $\widetilde{\Sigma}$ be a connected component of $f^{-1}(Y_i) \cap \Sigma$ of dimension $n - 1.$ Now, by shrinking $\Omega$ we may assume that $\Omega \cap f^{-1}(f(\widetilde{\Sigma})) = \widetilde{\Sigma}.$ By construction, it is easy to see that each of the (connected open) sets $f(R_1)$ and $f(R_2)$ is contained in either $\{y_n > 0\}$ or $\{y_n < 0\}$ but not in both. Furthermore, since $f$ is open, the sets $f(R_1)$ and $f(R_2)$ cannot lie in the same half-space $\{y_n > 0\}$ or $\{y_n < 0\}.$ Hence, without loss of generality, we may assume that $${f}({R}_1) \subset \{y_n > 0\} \quad \textrm{ and } \quad {f}({R}_2) \subset \{y_n < 0\}.$$ Write ${f} := ({f}_1, \dots, {f}_n)$ and let $(a, 0) \in \mathbb{R}^{n - 1} \times \{0\}$ be a generic point in $\widetilde{\Sigma}.$ Observe that $$f_n(a, -t)\ <\ f_n(a, 0)\ =\ 0\ <\ f_n(a, t)$$ for all $t > 0$ sufficiently small. 
By Lemma~\ref{MonotonicityLemma}, it is easy to see that ${f}_n(a, t)$ is strictly increasing in $t$ near $0.$ Consequently, we can find $\epsilon > 0$ small enough such that $\frac{\partial {f}_n}{\partial x_n}(a, t) > 0$ for all $t \in (-\epsilon, \epsilon)$ different from $0.$ Since $(a, 0)$ is generic in $\widetilde{\Sigma},$ we may assume, by shrinking $\Omega$ if needed, that for all $(x_1,\dots, x_n) \in \Omega,$ if $x_n \ne 0,$ then $\frac{\partial {f}_n}{\partial x_n}(x_1,\dots, x_n) > 0.$ If this last change of $\Omega$ makes the point $(a, 0)$ not generic in $\widetilde{\Sigma},$ we replace $(a, 0)$ by another generic point and continue to assume that $(a, 0)$ is generic in $\widetilde{\Sigma}.$ Define the definable continuous mapping $\Psi\colon \Omega \to \mathbb R^n$ by $$\Psi(x_1,\dots, x_n)\ :=\ (x_1,\dots, x_{n-1}, {f}_n(x_1, \dots , x_n)).$$ Obviously, $\Psi$ is the identity on $\widetilde{\Sigma}$ and differentiable on $\Omega \setminus \widetilde{\Sigma}.$ Furthermore, $\Psi$ is injective since for every $(x_1,\dots, x_{n-1}, 0) \in \widetilde{\Sigma},$ the function $x_n \mapsto {f}_n(x_1,\dots, x_n)$ is strictly increasing. 
In light of Lemma~\ref{Brouwer1}, $\Psi$ is a homeomorphism from $\Omega$ onto $\Psi(\Omega).$ Observe that $J\Psi(x) = \frac{\partial {f}_n}{\partial x_n}(x)$ for all $x \in \Omega \setminus \widetilde{\Sigma},$ hence $J\Psi$ is positive outside of $\widetilde{\Sigma}.$ Consequently, we can replace the mapping ${f}$ by the mapping ${f} \circ \Psi^{-1}$ without changing the sign of the Jacobian of ${f}$ on $\Omega \setminus \widetilde{\Sigma}.$ Without loss of generality, assume from now on that ${f}_n(x_1,\dots, x_n) = x_n$ on $\Omega.$ For $x = (x_1,\dots, x_n) \in \Omega \setminus \widetilde{\Sigma}$ we have \begin{equation}\label{Eqn1} J{f}(x)\ =\ \left|\begin{array}{ccccc} \frac{\partial {f}_1}{\partial x_1}(x)& \cdots& \frac{\partial {f}_1}{\partial x_{n-1}}(x)&\frac{\partial {f}_1}{\partial x_n}(x)\\ \vdots&\ddots&\vdots&\vdots\\ \frac{\partial {f}_{n-1}}{\partial x_1}(x)& \cdots& \frac{\partial {f}_{n-1}}{\partial x_{n-1}}(x)&\frac{\partial {f}_{n-1}}{\partial x_n}(x)\\ 0&\cdots&0&1 \end{array}\right| \ =\ \left|\begin{array}{ccccc} \frac{\partial {f}_1}{\partial x_1}(x)& \cdots& \frac{\partial {f}_1}{\partial x_{n-1}}(x)\\ \vdots&\ddots&\vdots\\ \frac{\partial {f}_{n-1}}{\partial x_1}(x)& \cdots& \frac{\partial {f}_{n-1}}{\partial x_{n-1}}(x) \end{array}\right|. \end{equation} As ${f}$ is open, the restriction of ${f}$ to ${{f}^{-1}(\mathbb R^{n-1}\times\{0\})}$ is an open mapping from ${{f}^{-1}(\mathbb R^{n-1}\times\{0\})}$ into $\mathbb R^{n-1}\times\{0\}.$ Hence the mapping $${g} \colon W \to \mathbb R^{n-1}, \quad x' \mapsto ({f}_1(x', 0), \dots, {f}_{n - 1}(x', 0)),$$ is definable, open and continuous, where $W := \{x' \in \mathbb R^{n-1}:\ (x', 0) \in \widetilde{\Sigma}\}$ is a definable open subset of $\mathbb{R}^{n - 1}.$ By \cite[Theorem~3.10]{Denkowski2017} (see also \cite[Proposition, page~298]{Gamboa1996}), the fibers of $g$ are finite. 
Applying Lemma~\ref{Lemma33}(ii) to the mapping $g,$ we have that the set $R_g$ is dense in $W.$ Thus, replacing $a$ by a point in $R_g$ if necessary, we may assume that $J{g}(a)\ne 0.$ By Lemma~\ref{Lemma32}, for $1\leqslant i, j \leqslant n - 1,$ we have $$\lim_{x \to (a, 0), \, x \in \Omega}\frac{\partial {f}_i}{\partial x_{j}}(x)\ =\ \frac{\partial {g}_i}{\partial x_{j}}(a).$$ It follows then from~\eqref{Eqn1} that $$\lim_{x \to (a, 0), \, x \in\Omega}J{f}(x)\ =\ J{g}(a).$$ Hence, for $x \in \Omega$ close enough to $(a, 0),$ whether in ${R}_1$ or in ${R}_2,$ the sign of $J{f}(x)$ is the same as the sign of $J{g}(a).$ In particular, the sign of $J{f}$ is the same in ${R}_1$ and in ${R}_2.$ We have thus proved that the Jacobian $Jf$ has constant sign on $R_f.$ So, without loss of generality, we may assume that the Jacobian $Jf$ is positive on $R_f.$ It remains to show that $$Jf(x) \geqslant 0 \quad \textrm{ for all } \quad x \in D_f.$$ To see this, let $x \in D_f$ be such that $Jf(x) \ne 0.$ In view of Lemma~\ref{Brouwer3}(i), there exists a definable open and bounded neighborhood $W$ of $x$ such that $\overline{W} \cap f^{-1}(f(x)) = \{x\}$ and \begin{eqnarray} \label{Eqn2} \deg (f, W, f(x)) &=& \mathrm{sign} Jf(x). \end{eqnarray} On the other hand, by Lemma~\ref{Lemma33}, the set $f(R_f)$ is dense in $f(\Omega).$ Note that $f(W)$ is an open subset of $f(\Omega)$ because the mapping $f$ is open. Consequently, there exists a point $y \in f(W)$ with $\|y - f(x)\| < \mathrm{dist}(f(x), f(\partial W))$ such that $f^{-1}(y) \subset R_f.$ In particular, for every $w \in f^{-1}(y),$ the Jacobian matrix $df(w)$ exists and is nonsingular. It follows from Lemmas~\ref{Brouwer2}(ii) and \ref{Brouwer3}(ii) that \begin{eqnarray} \label{Eqn3} \deg(f, W, f(x)) &=& \deg(f, W, y) \ = \ \sum_{w \in f^{-1}(y)} \mathrm{sign} Jf(w) \ > \ 0. \end{eqnarray} Combining \eqref{Eqn2} and \eqref{Eqn3}, we get $Jf(x) > 0,$ which completes the proof of the implication (i)~$\Rightarrow$~(ii). 
\medskip (i) $\Rightarrow$ (iv): If $n = 1,$ then ${f}$ is strictly monotone and there is nothing to prove. So for the rest of the proof we assume that $n > 1.$ By \cite[Theorem~3.10]{Denkowski2017} (see also \cite[Proposition, page~298]{Gamboa1996}), the fibers of $f$ are finite. Hence, it suffices to show that $\dim B_f \leqslant n - 2.$ (Recall that $B_f$ denotes the set of points at which $f$ is not a local homeomorphism.) By the inverse mapping theorem, we have $B_f \subset \Omega \setminus R_f.$ This, together with Lemmas~\ref{DimensionLemma} and \ref{DiffrentiableLemma}, implies \begin{eqnarray*} \dim B_f &\leqslant& \dim (\Omega \setminus R_f) \ \leqslant \ n - 1. \end{eqnarray*} Suppose for contradiction that $\dim B_f = n - 1.$ By analysis similar to that in the proof of the implication (i)~$\Rightarrow$~(ii), we may assume the following conditions hold: \begin{enumerate}[{\rm (a)}] \setcounter{enumi}{0} \item $\Omega\setminus R_f = B_f$; \item $B_f \subset \{x_n = 0\}$ and $f(B_f)\subset \{y_n=0\}$; \item $\Omega\setminus B_f$ has two connected components, denoted by $R_1$ and $R_2,$ with $${R}_1 \subset \{x_n > 0\},\ \ {R}_2 \subset \{x_n < 0\},\ \ {f}({R}_1) \subset \{y_n > 0\}\ \text{ and }\ {f}({R}_2) \subset \{y_n < 0\};$$ \item ${f}_n(x_1,\dots, x_n) = x_n$ on $\Omega;$ \item the mapping $${g} \colon W \to \mathbb R^{n-1}, \quad x' \mapsto ({f}_1(x', 0), \dots, {f}_{n - 1}(x', 0)),$$ is definable, open and continuous, where $W := \{x' \in \mathbb R^{n-1}:\ (x', 0) \in B_f\}$ is a definable open subset of $\mathbb{R}^{n - 1};$ \item $J{g}(x')\ne 0$ for all $x' \in W.$ \end{enumerate} Let $\overline{x} := ({a}, 0) \in B_f.$ Then $f$ is not a local homeomorphism at $\overline{x},$ and so, by the invariance of domain theorem (Lemma~\ref{Brouwer1}), it is not injective on any neighborhood of $\overline{x}.$ 
Hence there are sequences $x^k\to\overline x$ and $y^k\to\overline x$ such that $x^k\ne y^k$ and $f(x^k)= f(y^k)$ for all $k.$ Taking subsequences if needed, we can suppose that each of the sequences $x^k$ and $y^k$ lies entirely in one of the sets $R_1$, $R_2$ and $B_f.$ Note that by Item~(f), the restriction of $f$ to $B_f$ is a local diffeomorphism at $\overline x.$ So $x^k, y^k\not\in B_f$ for all $k$ large enough. Without loss of generality, assume that $x^k, y^k\in R_1.$ Furthermore, by construction, we can assume that the segment joining $x^k$ and $y^k$ is contained in $R_1$ for all $k$. Clearly $f$ is $C^1$ on $R_1$. So for each $i=1,\dots,n-1$ and for each $k$, by the mean value theorem, there is a point $z^{ik}$ in the segment joining $x^k$ and $y^k$ such that \begin{eqnarray}\label{mean} 0 & = & f_i(y^k)- f_i(x^k) \ = \ [d f_i(z^{ik})](y^k-x^k). \end{eqnarray} Let $$v^k:=\frac{y^k-x^k}{\|y^k-x^k\|}.$$ By Item~(d), we have $f_n(x) = x_n$ for all $x\in R_1,$ so the condition $f(x^k)= f(y^k)$ implies $x^k_n=y^k_n$, i.e., $v^k_n=0$ for all $k.$ Furthermore, in view of~\eqref{mean}, we have $$[d f_i(z^{ik})](v^k)=\frac{1}{\|y^k-x^k\|}[d f_i(z^{ik})](y^k-x^k)=0.$$ Since $v^k_n = 0,$ this is equivalent to \begin{equation}\label{sum} \sum_{j=1}^{n-1}\frac{\partial { f}_i}{\partial x_{j}}(z^{ik})v^k_j=0. \end{equation} On the other hand, by Lemma~\ref{Lemma32}, for $1\leqslant i, j \leqslant n - 1,$ we have $$\lim_{x \to ({a}, 0), \, x \in \Omega}\frac{\partial { f}_i}{\partial x_{j}}(x) = \frac{\partial {g}_i}{\partial x_{j}}({a}).$$ In addition, since $x^k\to\overline x$ and $y^k\to\overline x$, it follows that $z^{ik}\to\overline x$. 
Hence $$\lim_{k\to+\infty}\frac{\partial { f}_i}{\partial x_{j}}(z^{ik}) = \frac{\partial {g}_i}{\partial x_{j}}({a}).$$ Furthermore, taking a subsequence if necessary, we can assume that the sequence $v^k$ converges to a limit $v = (v', 0).$ It follows then from~\eqref{sum} that $$\sum_{j=1}^{n-1}\frac{\partial {g}_i}{\partial x_{j}}({a}) v_j=0.$$ Equivalently, $d{g}({a})(v')=0.$ On the other hand, $d{g}({a})$ is a linear isomorphism by Item~(f). In addition, since $v^k\to v = (v', 0)$ and $\|v^k\|=1$ for all $k$, it follows that $\|v'\|=\|v\|=1$. These imply $d{g}({a})(v')\ne 0,$ which is a contradiction. Therefore the dimension of $B_f$ must be smaller than $n - 1.$ \medskip (iv) $\Rightarrow$ (iii): Let $x^0, x^1 \in R_f.$ By the inverse mapping theorem, $x^0, x^1 \in \Omega \setminus B_f.$ On the other hand, the set $\Omega \setminus B_f$ is path connected because of our assumption that $\dim B_f \leqslant n - 2.$ Hence, there exists a continuous curve $\alpha \colon [0, 1] \to \Omega \setminus B_f$ such that $\alpha(0) =x^0$ and $\alpha(1) = x^1.$ For each $t \in [0, 1],$ we can find an open neighborhood $U_t$ of $\alpha(t)$ with $\overline{U}_t \subset \Omega$ such that the restriction of $f$ to $U_t,$ denoted by $f|_{U_t},$ is a homeomorphism. Using properties of the Brouwer degree, it is not hard to check that the function $$[0, 1] \to \mathbb{Z}, \quad t \mapsto \deg(f|_{U_t}, U_t, f(\alpha(t))),$$ is constant. In particular, we have $$\deg(f|_{U_0}, U_0, f(x^0)) = \deg(f|_{U_1}, U_1, f(x^1)).$$ This relation, together with Lemma~\ref{Brouwer3}(i), gives $$\mathrm{sign} Jf(x^0) = \mathrm{sign} Jf(x^1).$$ Since $x^0, x^1$ are two arbitrary points in $R_f,$ we get the desired conclusion. 
\end{proof} Recall that a continuous mapping $f \colon \mathbb{R}^n \to \mathbb{R}^n$ is {\em piecewise affine} if there exists a set of triples $(\Omega_i, A_i, b_i), i = 1, \ldots, k,$ such that each $\Omega_i$ is a polyhedral set in $\mathbb{R}^n$ with non-empty interior, each $A_i$ is an $n \times n$ matrix, each $b_i$ is a vector in $\mathbb{R}^n,$ and \begin{enumerate}[{\rm (a)}] \item $\mathbb{R}^n = \cup_{i = 1}^k \Omega_i$; \item for $i \ne j,$ $\Omega_i \cap \Omega_j$ is either empty or a proper common face of $\Omega_i$ and $\Omega_j$; \item $f(x) = A_ix + b_i$ on $\Omega_i,$ $i = 1, \ldots, k.$ \end{enumerate} We say that $f$ is {\em coherently oriented} if the determinants of the matrices $A_i$ are nonzero and of the same sign. The following result is well-known; for more details, we refer the reader to \cite[Theorem~2.3.1]{Scholtes2012} and the references therein. \begin{corollary}\label{Corollary31} Let $f \colon \mathbb{R}^n \to \mathbb{R}^n$ be a piecewise affine mapping. Then ${f}$ is open if and only if it is coherently oriented. \end{corollary} \begin{proof} This is a direct consequence of Theorem~\ref{MainTheorem}. \end{proof} Finally, we prove that Whyburn's conjecture is true for definable mappings. \begin{theorem}\label{WhyburnConjecture} Let $f \colon \overline{\mathbb{B}^n_r} \to \overline{\mathbb{B}^n_s}$ be a definable open continuous mapping such that $f^{-1}(\mathbb{S}_s^{n - 1}) = \mathbb{S}_r^{n - 1}$ and the restriction of $f$ to $\mathbb{S}_r^{n - 1}$ is a homeomorphism. Then $f$ is a homeomorphism. \end{theorem} \begin{proof} By the invariance of domain theorem (Lemma~\ref{Brouwer1}), it suffices to show that $f$ is injective. To this end, define the mapping ${g} \colon \mathbb{R}^n \to \mathbb{R}^n $ by $${g}(x) := \begin{cases} f(x) & \textrm{ if } \ \|x\| < r,\\ \frac{\|x\|}{r} f \left(\frac{r}{\|x\|} x\right) & \textrm{ otherwise}. \end{cases}$$ Then it is easy to see that ${g}$ is a definable open continuous mapping. 
By Theorem~\ref{MainTheorem}, the fibers of ${g}$ are finite and the Jacobian $Jg$ does not change sign on $R_{g}.$ Moreover, in view of Lemma~\ref{Lemma33}(ii), $R_g$ is dense in $\mathbb{R}^n.$ Fix $r' > r.$ There exists a point $x \in R_g$ with $r < \|x\| < r'.$ By construction, $g^{-1}(g(x)) = \{x\}.$ It follows from Lemma~\ref{Brouwer3} that $$\deg(g|_{\overline{\mathbb{B}^n_{r'}}}, \mathbb{B}^n_{r'}, g(x)) = \mathrm{sign} Jg(x) = \pm 1,$$ where $g|_{\overline{\mathbb{B}^n_{r'}}}$ stands for the restriction of $g$ to ${\overline{\mathbb{B}^n_{r'}}}.$ This, together with Lemma~\ref{Brouwer2}(iii), yields \begin{equation} \label{PT6} \deg(g|_{\overline{\mathbb{B}^n_{r'}}}, \mathbb{B}^n_{r'}, y) = \pm 1 \quad \textrm{ for all } \quad y \in \mathbb{B}^n_s. \end{equation} We now show that $f$ is injective, or equivalently, the restriction of $g$ to $\mathbb{B}_r^n$ is injective. By contradiction, suppose that there exist two distinct points $x^0, x^1 \in \mathbb{B}_r^n$ with the same image point $y \in g(\mathbb{B}_r^n) = \mathbb{B}_s^n.$ Let $U^0$ and $U^1$ be disjoint open sets containing $x^0$ and $x^1,$ respectively. Then $g(U^0) \cap g(U^1)$ is a nonempty open set and thus contains a point $y'$ of $\mathbb{B}_s^n \setminus g(\mathbb{R}^n \setminus R_g).$ This means that $g^{-1}(y')$ is a subset of $\mathbb{B}_r^n \cap R_g$ and it contains at least two points. Since the Jacobian $Jg$ does not change sign on $R_g,$ it follows from Lemma~\ref{Brouwer3} that $$\big |\deg(g|_{\overline{\mathbb{B}^n_{r'}}}, \mathbb{B}_{r'}^n, y') \big | = \big | \sum_{x \in g^{-1}(y')} \mathrm{sign} Jg(x) \big | \geqslant 2,$$ which contradicts \eqref{PT6}. The theorem is proved. \end{proof} \begin{remark}{\rm Another proof of Theorem~\ref{WhyburnConjecture} can be obtained by applying Theorem~\ref{MainTheorem} and \cite[Theorem~5.5]{Vaisala1966}; the details are left to the reader. }\end{remark} \bibliographystyle{abbrv}
\section{Introduction} Online platforms have been criticized for their role in a profusion of societal ills, from the increased spread of political misinformation \cite{benham2012web}, to extrajudicial killings in the Philippines \cite{pimentel_facebook_nodate}. Platforms have sought to address these issues with various \emph{interventions}, from attempts to filter or restrict various forms of content (e.g. hate speech) \cite{hern_how_2020} to attempts to decrease discriminatory practices by users \cite{dickey_heres_2016}. Few, if any, would argue that these interventions have fully addressed the problems at hand. Underlying agreement to this high-level assertion, however, lies a more debated question--\emph{what are the limits of a platform's ability to intervene on society's ills?} Given the documented problems platforms have helped give rise to, it would be wrong to assume that they have no role to play in positive change. At the same time, however, we must be careful not to ascribe too heavy a causal arrow to platforms; doing so is destined to lead to an ill-informed aura of technological determinism \cite{marx1994does}, where we search for solutions to societal problems by endlessly tweaking the structure of our technology, ignoring other root underlying causes \cite{fisman2016fixing}.

How might we understand the limits to a platform's ability to intervene to reduce a given societal ill? Empirical work, especially based on natural experiments \cite{malik2016identifying} and using algorithmic auditing \cite{sandvig2014auditing,hannak2013measuring}, is critical, but is restricted to studying the platform as it currently exists. Exploration of the effects of interventions is thus limited to what platforms are willing and able to implement given their profit-oriented goals. Even beyond these profit-related restrictions, there are ethical questions about the use of experimentation without the consent of platform users \cite{siroker2013b}. 
Even still, scholars have pursued ethical empirical research not bound by profit-oriented goals, for example, by constructing platforms of their own \cite{chen2017gender,may2019gender}. But doing so is time-consuming and expensive; it would be useful if we could first scope potential solutions and their potential side effects in a more rapid and expansive way. Social theory can partially fulfill this role \cite{joseph_theory}, but it can be difficult to theoretically characterize, much less operationalize, the myriad causal factors underlying sociotechnical systems. What is needed is a methodology that exists in this middle ground between difficult empirical work on the one hand, and often abstract theory on the other. Simulation has long filled this role of bridging theory and empirical work in the social sciences \cite{schelling_dynamic_1971,gilbert_simulation_2005,epstein_agent-based_1999}, and more recently has proved useful in the study of algorithmic fairness. For example, \cite{lum_predict_2016}, and later \cite{ensign_runaway_2017}, use simulation to show concretely how the abstract notion of feedback loops in predictive policing can lead to settings where police unfairly target one population over another, despite equal crime rates. Simulation has the added benefit of being applied to hypothetical people, allowing us to carefully and fully consider the harms that may be caused by a particular intervention before it has any real effects. In the present work, we use simulation, and specifically, an \emph{agent-based model (ABM)}, to make concrete a set of proposed interventions to address a given societal ill on a given online platform, and to provide new ideas about their implications and side-effects.

\paragraph{Online Dating} Specifically, we construct an ABM of an online dating platform and assess how interventions theorized recently by Hutson et al. 
in their award-winning CSCW paper \cite{hutson2018debiasing} affect the degree of racial homogamy in a simulated society. \emph{Racial homogamy} is defined as the \emph{tendency of individuals to seek out and engage in long-term relationships with individuals who are the same race}. Critically, we do not argue that racial homogamy is in and of itself a societal ill; there are many valid reasons why individuals, particularly those identifying with an oppressed racial group, may want to maintain racial homogamy in their relationships. However, racial homogamy has been well-established to contribute to the reinforcement of racial inequality by increasing racial segregation \cite{baldassarri2007dynamics} and racialized income inequality \cite{grodsky2001structure}. \citet{hutson2018debiasing} argue that racial homogamy is hence worthy of dating-platform intervention, as it is one racializer of economic and social inequality. After detailing this connection between racial homogamy and racial inequality, \citet{hutson2018debiasing} explore an array of mechanisms within online dating platforms that present opportunities for interventions to reduce racial homogamy. For example, ``search, sort, and filter tools'' allow users to perform initial searches for romantic partners, and thus might encourage users to make particular decisions about the race of their partner. Hutson et al. therefore take, at least implicitly, the stance that platforms could construct interventions that reduce racial homogamy. Others, however, dispute this claim. In particular, \citet{anderson2014political} study racial bias in online dating from a sociological perspective, and conclude that ``merely altering the structural environment [on the platform]—say, by creating more opportunities for individuals of different races to interact—would not necessarily ameliorate persistent patterns of racial homogamy in romantic relationships'' (pp. 38-39). 
This claim puts the ideas presented in Anderson et al.'s work in contrast with those presented by Hutson et al., who explicitly argue that structural changes to platforms can reduce racial homogamy. These two papers therefore present a difference of opinion on the extent to which interventions on online dating platforms can impact racial homogamy, and ultimately racial inequality. However, empirical analysis of this debate is difficult for the reasons listed above. We therefore employ a simulation model to bridge the middle ground between the interventions theorized by \citet{hutson2018debiasing} and potential future empirical work. Our work helps to operationalize the theoretical constructs and to explore their impacts and potentially unintended consequences in an \emph{artificial}, but \emph{empirically and theoretically grounded}, complex sociotechnical system. \paragraph{Contributions} The present work makes three contributions: \begin{itemize} \item We provide a blueprint (including an open-source modeling framework)\footnote{Repository link: https://github.com/StefaniaI/ABM-for-Online-Dating.git.} for the use of agent-based modeling to help operationalize theory and explore the potential impacts of interventions on online platforms. \item Specific to racial homogamy and online dating platforms, we find that various interventions theorized by \citet{hutson2018debiasing} indeed decrease racial homogamy in our simulated society according to our selected measure by as much as 10\%. \item \emph{However}, the benefits of these interventions are dependent on specific assumptions about the sociocultural structure of that society, and are often at odds with other societal outcomes and with the profit-oriented goals of the platform. \end{itemize} Like many social science endeavors \cite{watts2011everything}, our core findings present claims that, once stated, seem quite obvious. 
However, given their motivation from established differences of opinion in published literature by leading scholars, our work serves a critical role in the path forward to understanding the potential of platforms to intervene to reduce racial homogamy, and as a broader methodology to address similar questions in other domains. \section{Related Work} \subsection{Online Dating} Online dating plays an increasingly central role in the process of finding romantic partners, with 15\% of Americans reporting having used an online dating site \cite{Cacioppo10135}. Indeed, at least for heterosexual couples, online dating has become the predominant means of meeting romantic partners \cite{rosenfeld2019disintermediating}. This change in the means for dating has opened the door to a debate on whether platforms can disrupt patterns of assortative mating, and in particular decrease racial homogamy. While \citet{hutson2018debiasing} argue that platforms have the means of decreasing racial homogamy through interventions, \citet{anderson2014political} argue that this is likely not the case, as homogamy is partly caused by same-race preferences, which cannot be changed through opportunity alone. These preferences influence and are influenced by social norms, and their formation is based on imitation and recurrent behaviour \cite{opp1982evolutionary} and enforced through social verification \cite{hardin1996shared}. To capture these phenomena, we consider norms and their impact on preferences. In alignment with \citet{hutson2018debiasing}, we acknowledge that platforms can potentially intervene to change norms on the platform, so, in our model, we differentiate between offline and online norms. The process people go through while dating online is complex in multiple ways. First, one goes through a variety of different ``phases'' of dating, from seeking information about dating platforms to developing a long-term relationship \cite{finkel2012online}. 
Second, the decision-making process depends on distinct factors such as the perceived level of compatibility \cite{bruch2018aspirational}, how much we can validly infer about others online \cite{frost2008people}, and one's own personal stereotypes \cite{krueger1996personal}. The complexity of studying dating has naturally given rise to the use of simulations. Previous agent-based models for dating have looked at, for example, how the selection of partners is influenced by space and mobility \cite{smaldino2012human}, and whether assortative mating patterns could explain previously observed statistics on marriages and divorce \cite{hills2008population}. We build upon these examples in our work, but focus on the novel problem of understanding the role of platform interventions on racial homogamy in the context of online dating. 
For example, various forms of simulation have been used to study fairness and feedback loops in the context of predictive policing (as noted above \cite{lum_predict_2016,ensign_runaway_2017}), the effects of recommender systems \cite{bountouridis_siren_2019}, and income inequality for drivers on rideshare platforms \cite{bokanyi2020understanding}. Like \citet{bokanyi2020understanding}, we use an agent-based model here. Agent-based modeling involves the construction of virtual worlds in which artificial agents interact according to a set of pre-specified rules \cite{epstein_agent-based_1999}. While behavior is thus predictable at the individual level, ABMs are useful for understanding how these pre-specified micro-level behaviors result in emergent properties, e.g., social biases across social groups \cite{joseph_coevolution_2014} or opinion polarization \cite{schweighofer2019weighted}. Our work complements existing literature using simulation modeling to study questions of fairness and equality in three ways. First, we study a concrete, contemporary, and contested question about the role that online dating platforms can play in reducing racial homogamy. Second, rather than focus on algorithmic fairness specifically, we emphasize other ways in which online platforms can have impacts on society. Finally, and at the same time, we develop an agent-based model that can be easily extended to study algorithmic fairness, complementing existing System Dynamics and reinforcement learning platforms provided by prior work \cite{damour_fairness_2020,martin_jr_extending_2020}. \section{Model} Our model contains a variety of parameters, listed in Table~\ref{tab:virt_exp} alongside references, where applicable, to the works used to determine their values. Below, we first provide an overview of the model, and then further details on each major element of the model. 
\subsection{Model Overview} \begin{figure}[t] \centering \begin{subfigure}{.45\linewidth} \centering \includegraphics[width=\linewidth]{creating_profile.png} \caption{Creating a profile} \label{fig:create_prof} \end{subfigure} \hfill \begin{subfigure}{.45\linewidth} \centering \includegraphics[width=\linewidth]{filtering.png} \caption{Searching (using filters)} \label{fig:filtering} \end{subfigure} \begin{subfigure}{.45\linewidth} \centering \includegraphics[width=\linewidth]{browse.png} \caption{Browsing the (filtered) list of potential partners} \label{fig:browsing} \end{subfigure} \hfill \begin{subfigure}{.45\linewidth} \centering \includegraphics[width=\linewidth]{messages.png} \caption{Looking at (and potentially responding to) messages} \label{fig:sub-second} \end{subfigure} \caption{The actions agents can take on the platform} \label{fig:dating_platform} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{Paper_diagrams.pdf} \caption{Overview of relationship phases and transitions in each turn of an iteration, explained through the blue agent's choices and actions. Quote boxes represent decisions the blue agent makes, and the 0/1 vectors represent the agent's attributes, or the blue agent's knowledge of potential partner's attributes.} \label{fig:rel_phases} \end{figure*} \begin{table}[ht] \caption{\label{tab:virt_exp} Tabular description of model parameters (Left) and the values taken in the virtual experiment (Right). 
For parameters with a red-coloured value, we simulate each value for all simulated interventions while keeping the other parameters at their red values (see Section~\ref{sec:virtual_experiment}).} \small \begin{tabular}{p{5.7 cm} p {2.25 cm}} \hline Parameters & Values Taken \\ \hline {\bf Varied in Virtual Experiment} & \\ \ \ \ Tolerance to unsuccessful search turns & 20/ {\color{red} 25}/ 30\\ \ \ \ Out-platform norms for protected attributes & 0/0.1/ {\color{red} 0.2}/0.3\\ \ \ \ Strength of updates: norms $\to$ prefs. & 0/ {\color{red} 1\%}/ 5\%\\ \ \ \ Initial degree of ethnocentrism & 0/ {\color{red} 0.25}/ 0.5/ 1\\ \ \ \ Correlations in attribute generation: $\beta, \gamma$ & 0/ {\color{red}0.4}/0.6/0.8, {\color{red} 0.2}/0.6\\ \hline {\bf Fixed in Virtual Experiment} & \\ \ \ \ Strength of updates: interaction $\to$ pref. & 2\%\\ \ \ \ Strength of updates: average pref. $\to$ norms & 5\%\\ \ \ \ Relative importance of out-platform norms when generating initial preferences & 20\%\\ \ \ \ Failure tolerance to unsuccessful relationships & 20\\ \ \ \ \# initial acquaintances in-group \& out-group & 300, 100 \lit{\cite{mcpherson_birds_2001,tajfel_social_1971,joseph_coevolution_2014}} \\ \ \ \ Weight of searchable vs experiential attributes & 1 - 3 \lit{\cite{frost2008people}}\\ \ \ \ Pr. searchable attribute specified on profile & 50\% \lit{\cite{lewis2016preferences}}\\ \ \ \ Pr. learning searchable \& experiential attributes & 50\% \lit{\cite{finkel2012online}}\\ \ \ \ \# rounds until offline \& long-term & 7, 37 \lit{\cite{ramirez2015online, brym2001love}}\\ \ \ \ Pr. to meet while offline & 14\%\lit{\cite{brym2001love, munoz2007aggression}}\\ \ \ \ Pr. 
to consider a message (while in search) & 50\%\lit{\cite{finkel2012online}}\\ \ \ \ Maximum \# agents considered in a search turn & 30 \lit{\cite{frost2008people, finkel2012online}}\\ \ \ \ \# iterations & 2000\\ \ \ \ Initial population size & 300\\ \ \ \ \# agents added per iteration & 4 \\ \hline \end{tabular} \end{table} \emph{Agents} in our model are people who have at some point used our simulated dating website. Agents are assumed to carry out a series of activities on the site in pursuit of a romantic partner. These activities include creating a profile (Figure~\ref{fig:dating_platform}a), searching for a partner (b), browsing the list of potential partners returned by that search (c), and messaging (d). Agents in our model perceive other agents according to their \emph{attributes}. As in most ABMs \cite{gilbert_agent-based_2007}, attributes are just a vector of 0s and 1s, but are assumed to represent real-world traits that might be important in the context of dating, e.g. partisanship. All agents also have a ``protected attribute,'' that is, an attribute relevant to questions about discriminatory dating practices. We focus on race as the protected attribute here. Moreover, we focus on a binarized model of race, that is, agents can be only one of two races in the model. In the context of the prior work we discuss above, the two races of interest here are White and Black. Section~\ref{sec:agents} provides full details on how agents are modeled. As shown in Figure~\ref{fig:rel_phases}, relationships between agents evolve over a series of \emph{phases}, from finding (or being found by) a potential partner, to interacting with each other on the platform via messages (online relationship), and finally to an offline relationship \cite{finkel2012online}. At any phase, a relationship might end, at which point the agents will resume their search for another romantic partner, or exit the platform.
If a relationship continues for long enough, it becomes a \emph{long-term} relationship. Our primary outcome measures focus on racial homogamy in these long-term relationships, and Section~\ref{sec:phases} provides complete details on the different relationship phases. The simulation is carried out via a sequence of iterations, or \emph{turns}, with one turn being loosely the equivalent of one day. On the first turn of the simulation, 300 agents are initialized and become the population of users on the online dating site. On each subsequent turn, we add four new agents, modeling a constant flow of new users. After new agents are added, all agents get a turn to act. During their turn, agents can 1) conduct a search, potentially using a set of search \emph{filters} provided by the site, 2) send a message, 3) start a new relationship, 4) continue or end an existing relationship, or 5) leave the platform, either because they have found a long-term partner, or because they have become frustrated with the site. As shown in Figure~\ref{fig:rel_phases}, which of these things agents decide to do is based on which relationship phase they are in, and on the \emph{decision model} we construct for agents. The decisions made by an agent are determined by 1) their attributes, 2) the attributes of their potential (or current) partner, 3) the \emph{stereotypes} they hold of how different attributes are related, and 4) their \emph{preferences} for others with certain attributes. These preferences are themselves informed by \emph{social norms} that exist both on and off the platform. Section~\ref{sec:decisions} provides details on how agents make decisions. Agents' attributes are static in our model. However, their stereotypes and preferences \emph{evolve} as they interact with others and move through relationship phases. The social norms governing all agents also evolve throughout the simulation.
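As a minimal sketch of this turn structure (class and method names here are our own, not taken from the model's code, and \texttt{act} is a placeholder for the phase-dependent behaviour described in the following sections), the simulation loop looks roughly as follows:

```python
import random

class Agent:
    """Illustrative agent stub; real agents carry attributes,
    stereotypes, preferences, and a relationship phase."""
    def __init__(self, rng):
        self.phase = "searching"   # searching / online / offline
        self.on_platform = True
        self.rng = rng

    def act(self, population):
        # Placeholder for the phase-dependent actions described in the
        # text: search, message, continue/end a relationship, or leave.
        pass

def run_simulation(n_turns, rng, initial_size=300, added_per_turn=4):
    # 300 agents are initialized on the first turn.
    population = [Agent(rng) for _ in range(initial_size)]
    for turn in range(n_turns):
        if turn > 0:
            # Four new agents arrive on every turn after the first,
            # modeling a constant flow of new users.
            population.extend(Agent(rng) for _ in range(added_per_turn))
        for agent in list(population):
            if agent.on_platform:
                agent.act(population)
        # Agents who left (long-term partner or frustration) are removed.
        population = [a for a in population if a.on_platform]
    return population
```

Since \texttt{act} is a stub here, the sketch only illustrates the bookkeeping of the turn loop, not agent behaviour.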
Section~\ref{sec:bel_updates} details when and how these updates to stereotypes, preferences, and norms occur. \subsection{Agents} \label{sec:agents} As in all ABMs, our agents have state and can take actions. Agents' states encompass their \emph{type}, \emph{attributes}, \emph{stereotypes}, \emph{preferences}, and \emph{knowledge of others}. The actions agents take, based on these states, are a set of \emph{decisions}, which we describe in Section~\ref{sec:decisions}. \subsubsection{Type} Agents can be of two types, and are only attracted to agents of the opposite type. We thus loosely mimic a hetero- and mono-normative online dating website, as was studied by \citet{anderson2014political}. We return to this limitation in our conclusion. Agents are assigned to the two modeled types with equal probability. \subsubsection{Attributes} Each agent $i$ has a vector of binary attributes, $a^{(i)}$. We distinguish between race, $a^{(i)}_0$, and $K$ other attributes. Following theory in the online dating literature \cite{finkel2012online,bruch2018aspirational}, we make two additional distinctions among attributes. First, we distinguish between \emph{matching} and \emph{competing} attributes. For \textit{matching} attributes, agents try to find partners with similar values, while for \textit{competing} attributes, they try to find partners with the highest possible values. We assume that the protected attribute is matching. Second, we distinguish between attributes visible only offline, which we call \emph{experiential} attributes, and attributes visible both online and offline, which we call \emph{searchable} attributes. Examples of searchable attributes include personal interests and location; sense of humor is an example of an experiential attribute \cite{finkel2012online}. Since people do not share everything about themselves in their profiles \cite{lewis2016preferences}, agents have a probability of $50\%$ of specifying each searchable attribute on the platform.
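A sketch of this attribute bookkeeping, with illustrative attribute names that are not taken from the model itself, might look as follows:

```python
import random

# Each attribute is tagged along the two dimensions from the text:
# matching vs. competing, and searchable vs. experiential.
# Index 0 is the protected attribute (race). Names are illustrative.
ATTRIBUTE_KINDS = [
    ("race",     "matching",  "searchable"),
    ("music",    "matching",  "searchable"),
    ("humor",    "matching",  "experiential"),
    ("income",   "competing", "searchable"),
    ("charisma", "competing", "experiential"),
]

def make_profile(attributes, rng, p_specify=0.5):
    """Build the public profile for a binary attribute vector.

    Each searchable attribute is shown with probability 0.5
    (people do not share everything about themselves); experiential
    attributes are never visible online."""
    profile = {}
    for (name, _, visibility), value in zip(ATTRIBUTE_KINDS, attributes):
        if visibility == "searchable" and rng.random() < p_specify:
            profile[name] = value
    return profile
```

Attributes left off a profile are simply absent from the dictionary, which is how the incomplete knowledge $\hat{a}^{(j)}$ used later arises.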
In the example from Figure~\ref{fig:dating_platform}, the agent chose to specify their music taste, but not their race. We describe the complete process of generating attributes for new agents in the Supplementary Materials. \subsubsection{Stereotypes} Agents' \emph{stereotypes} encompass their beliefs about \emph{which attribute combinations are more likely to be encountered} \cite{krueger1996personal}. We model the stereotypes of agent $i$ as a matrix $G^{(i)}$, initialized via a theoretically grounded procedure based on prior work. Specifically, when generating a new agent, we first generate a sample of $300$ agents having the same race, and $100$ agents with a different race. The imbalance in sample sizes is motivated by homophily: people are, in general, more likely to meet and know people similar to themselves \cite{mcpherson_birds_2001}. In addition to homophily, we also model in-group favoritism, i.e. the tendency to look favorably on people like oneself \cite{hastorf_they_1954,tajfel_social_1971}. To capture this in our model, we follow prior agent-based models \cite{joseph_coevolution_2014} and assume that agents also have an initial degree of ethnocentrism. For example, as in \cite{joseph_coevolution_2014}, if the degree of ethnocentrism is $0.3$, then the agent will, on average, expect 30\% of the attributes of agents in the other-race sample to have values that the agent does not like. The Supplementary Materials contain full details on stereotype initialization, and Section~\ref{sec:bel_updates} describes how stereotypes evolve over time. \subsubsection{Preferences} An agent's \emph{preferences} define which attributes they believe are most important in a potential partner. We model the preferences of agent $i$ as a weight vector, $w^{(i)}\in \Delta^{K}$. This vector lies in the $K$-simplex, i.e. its entries sum to one, thus capturing the \emph{relative} importance of different attributes in the eyes of the agent.
For example, a weight vector with $w^{(i)}_j = 1$ and all other entries $0$ corresponds to an agent who only considers the $j$th attribute to be important in a potential partner. The preferences of agents influence and are influenced by two different types of social norms, i.e. sociocultural regulations on one's preferences \cite{lizardo2010skills}.\footnote{We discuss initialization of preferences and norms in the supplementary material.} First, while interacting online, an agent impacts and is impacted by \emph{platform norms}. Second, when in an offline relationship, an agent's preferences inform and are informed by off-platform norms. Critically, in our model as in the real world \cite{lizardo2010skills}, agents' preferences vary in part based on social norms, which change as agents move between the context of the platform and their offline social life. At the same time, norms are aggregations of the preferences of individuals, except, as we will discuss, when explicit efforts are made by the platform to fix norms. Agent preferences are initialized based on a procedure common in the decision-making literature \cite{cantwell2020thresholding}. We note here that two parameters govern important aspects of this process. First, a parameter $\beta$ captures the level of correlation between race and other attributes. Second, $\gamma$ captures the level of correlation between the non-protected attributes. Varying these two parameters changes the predictability of one attribute given another. In our virtual experiment, both $\beta$ and $\gamma$ are varied. Most importantly, $\beta$ is varied to reflect different degrees to which race can be inferred by agents based on other available attributes when the agent chooses not to display race in their online profile. \subsubsection{Knowledge of other agents} As they interact with another agent, agents can become aware of the values of new attributes of that agent.
This knowledge is used to make decisions about, e.g., whether to continue relationships. How agents acquire this knowledge is discussed in Section~\ref{sec:phases}. \subsection{Relationship Phases} \label{sec:phases} Agents can be in one of three relationship phases: 1) searching, 2) in an online relationship, or 3) in an offline relationship.\footnote{We make the simplifying assumption that an agent can only be in one of the aforementioned phases at one time, e.g. one cannot be both searching and dating offline.} \subsubsection{Searching phase} When first entering the platform, an agent is in the searching phase. An agent can search in two ways. First, the agent can perform a query for potential partners using a search interface. Second, the agent can look through messages in their inbox that they have received from other agents. In one turn of the simulation, agents can perform up to $30$ search actions, mimicking the typical length of a user's session on an online dating website \cite{frost2008people, finkel2012online}. A search action represents looking at either a single search result or a single message in the agent's inbox. The agent randomly decides which of these two actions to take. If the agent decides to view a profile from their search result list, they look at the profile of the next search result and decide whether or not to send that person a message. If the agent decides to view a message in their inbox, they look at the oldest unread message and consider whether or not to reply. Search results are a randomly ordered subset of other agents currently on the platform. Importantly, however, the agent \emph{decides which other agents will appear in their search results}. They do so by \emph{filtering} for potential dating partners. The agent filters on a single attribute, the one that, on this turn, is most important to them (i.e. for agent $i$, the attribute with the highest value of $w^{(i)}$).
In Figure~\ref{fig:dating_platform}, for example, interest in music is the most important factor for the blue agent, and they hence use this as a filter. The blue agent is then only shown profiles of other agents having a passion for music. If the agent decides to reply to a message in their inbox, their search ends, and the two agents involved proceed to an online relationship phase. However, the agent can send as many initial messages to agents in their search results as they would like. An \emph{unsuccessful search} is a turn in which the agent is not interested in answering any of the messages they have looked at, and in which they did not find at least half of the profiles they have looked at interesting enough to send an initial message. Too many unsuccessful search turns make the agent exit the platform. \subsubsection{Online relationship phase} After answering or getting a reply to an initial message, the agent moves to an Online Relationship Phase with another agent. While in this phase, in each iteration the two agents within the relationship have an online interaction. During this interaction, three things happen. First, agents may update their knowledge about the attributes of the other. Going back to the example in Figures~\ref{fig:dating_platform} and~\ref{fig:rel_phases}, when entering the relationship, the blue agent only knows the value of the green agent's interest in music. After one interaction, they have also learned the self-identified race of the green agent. Second, agents update their stereotypes based on what they learn about the other; see Section~\ref{sec:bel_updates}. Finally, agents decide whether or not to continue the relationship. If both decide to continue, the interaction is \emph{positive}, and the relationship will continue with a new interaction in the next iteration. Otherwise, the interaction is \emph{negative}, and the relationship ends.
After deciding whether or not to continue the relationship, the preferences of agents are updated based on whether the interaction was positive or negative (see Section~\ref{sec:bel_updates}) and the turn of the agents ends. If the agents have decided to end the relationship, their \emph{tolerance to failed relationships decreases}. Agents will exit the platform if this tolerance reaches $0$, and otherwise they return to searching. If the agents decide to continue the relationship, they either continue the online relationship, or, if the relationship has lasted long enough, move to the Offline Relationship Phase. \subsubsection{Offline relationship phase} The turn of an agent in an Offline Relationship Phase progresses similarly to the Online one, with two key differences. First, if the interaction was positive, the agents continue the relationship (as above); however, once the relationship has lasted for $30$ iterations, the agents exit the platform into a \emph{long-term} relationship. Second, while offline, in each iteration there is a fixed $1/7$ chance of interacting. This means that, in expectation, people in offline relationships meet once per week \cite{brym2001love, munoz2007aggression}. \subsection{Agent Decision-Making} \label{sec:decisions} In the Searching phase, agents answer the question (posed to themselves), ``should I reply to/send an initial message to this agent?''. When in a relationship, the agent answers questions of the type ``should I continue being in a relationship with this agent?''. Both of these are yes/no questions, and agents use the same decision function, described below, to answer them. Suppose that agent $i$ must make a decision about sending a message or continuing a relationship with agent $j$. We assume agent $i$ may know only an incomplete vector of agent $j$'s attributes, which we call $\hat{a}^{(j)}$. We write $\hat{a}^{(j)} \subseteq a$ if $a$ extends the incomplete vector of attributes $\hat{a}^{(j)}$, i.e.
$a$ is a complete vector that has the same entries as $\hat{a}^{(j)}$ on the known attributes. Lastly, for the agents $i, j$ there is some time value, $t$, capturing the amount of time spent in that specific phase with one another (for somebody in the searching phase, or who just started an online or an offline relationship, $t=0$). In making decisions, agents consider a) the time spent so far with the other agent and b) how well the attributes of the other agent align with their own attributes and preferences. Time is an important predictor for breakups, as agents that have stayed longer in a relationship are less likely to end it. For example, people tend to be especially selective on their first offline date together \cite{long2010scripts}. Consequently, using the notation introduced above, the probability that agent $i$ answers yes to a decision question regarding agent $j$ is given by the following sigmoid function:$$\mathbb{P}(\text{yes}) = \frac{1}{1 + \exp \left(- \mathbb{E} \left[w^{(i)}\cdot s^{(i)}(a) | \hat{a}^{(j)} \subseteq a \right]- t \right)}.$$ Two aspects of the above equation require further explanation. First, the expected value is taken over all possible attribute combinations extending the incomplete information of $i$ over the attributes of $j$. The probability of each combination is given by the current stereotypes of $i$. Thus, $i$'s stereotypes impact what they \emph{expect} the unknown attributes of $j$ to be. Second, the function $s^{(i)}(a)$ is an attribute scoring function that, for a given agent and complete attribute vector, returns a vector with entries $1$, $0$, and $-1$ depending on whether each attribute is viewed as good, neutral, or bad. Agents see a matching attribute value as being \textit{good} if it is the same as their own and \textit{bad} if it is different.
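As a sketch of this decision rule (variable names are ours, and the stereotype is represented simply as a probability distribution over complete attribute vectors; the scoring rule for competing attributes is included so the sketch is self-contained):

```python
import math
from itertools import product

def score(own, other, kinds):
    """Per-attribute score: +1 good, 0 neutral, -1 bad."""
    s = []
    for a, b, kind in zip(own, other, kinds):
        if kind == "matching":
            # Good if the partner's value equals one's own, else bad.
            s.append(1 if a == b else -1)
        else:
            # Competing: prefer a partner with the larger value.
            s.append(1 if b > a else (0 if b == a else -1))
    return s

def p_yes(own, known, kinds, weights, stereotype, t):
    """Probability of answering yes: a sigmoid of the
    stereotype-weighted expected score plus the time spent together.

    `known` maps attribute index -> observed value (unknowns absent);
    `stereotype` maps complete attribute tuples -> probabilities."""
    k = len(own)
    total_p, total_score = 0.0, 0.0
    for a in product((0, 1), repeat=k):
        if any(a[i] != v for i, v in known.items()):
            continue  # inconsistent with what is already known
        p = stereotype.get(a, 0.0)
        total_p += p
        total_score += p * sum(
            w * s for w, s in zip(weights, score(own, a, kinds)))
    expected = total_score / total_p if total_p else 0.0
    return 1.0 / (1.0 + math.exp(-expected - t))
```

With all attributes known, the expectation collapses and the formula reduces to a plain sigmoid of $w^{(i)} \cdot s^{(i)}(a) + t$.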
Similarly, they see a competing attribute value as \textit{good} if it is larger than their own, \textit{neutral} if it is equal, and \textit{bad} if it is lower. The Supplementary Material provides a formalization of this scoring function. \subsection{Belief, Preferences, and Norm Updates} \label{sec:bel_updates} Agents' stereotypes, preferences, and the norms of the platform change on each iteration. \subsubsection{Agent Stereotype Updates} Recall that when generating an agent, we construct $G^{(i)}$, a sample of attribute combinations they have encountered so far. After each interaction, an agent observes a new combination of attributes. If that observation is a complete list of attributes, they add it to the sample. Otherwise, they find the probabilities of each possible complete combination of attributes that could extend the observed one and add a proportional fraction of the observation to each of these categories. For example, consider somebody who observes the combination $(1, 0, \bot)$ after an interaction, when their sample contained $25$ observations of $(1, 0, 0)$ and $75$ of $(1, 0, 1)$. Post-interaction, their sample changes to $25.25$ and $75.75$. \subsubsection{Norm and Preference Updates} Preferences and on-platform norms both change during each iteration. First, agent preferences change according to their interactions, based on a) whether the interaction was positive or negative and b) the attributes of the agents involved. For example, a positive interaction with somebody of the same race will increase the agent's preference for dating someone of the same race. In contrast, a negative interaction with someone of the same race will make the agent \emph{less} likely to seek out a same-race partner in the future. At the end of each simulation turn, norms and preferences are updated to reflect their convergence \cite{lizardo2010skills}.
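The proportional stereotype update in the example above, together with the end-of-turn convergence between norms and preferences, can be sketched as follows (function names are ours; the update rates mirror the default strengths listed in Table~\ref{tab:virt_exp}):

```python
def update_stereotype(counts, observation):
    """Add one observation to the sample of seen attribute combinations.

    A partial observation (None = unknown) is split proportionally
    across the consistent complete combinations. For brevity, the
    sketch assumes every consistent combination already appears in
    the sample."""
    consistent = [c for c in counts
                  if all(o is None or o == v
                         for o, v in zip(observation, c))]
    total = sum(counts[c] for c in consistent)
    for c in consistent:
        counts[c] += counts[c] / total  # fractions sum to one observation
    return counts

def converge(norm, preferences, norm_rate=0.05, pref_rate=0.01):
    """End-of-turn convex combinations: the platform norm moves toward
    the mean preference, and each preference moves toward the norm."""
    mean_pref = [sum(p[j] for p in preferences) / len(preferences)
                 for j in range(len(norm))]
    new_norm = [(1 - norm_rate) * n + norm_rate * m
                for n, m in zip(norm, mean_pref)]
    new_prefs = [[(1 - pref_rate) * pj + pref_rate * nj
                  for pj, nj in zip(p, new_norm)]
                 for p in preferences]
    return new_norm, new_prefs
```

Running `update_stereotype` on the worked example from the text, a sample of 25 and 75 becomes 25.25 and 75.75 after observing $(1, 0, \bot)$.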
Specifically, the on-platform norm gets closer to the average preference of agents on the platform, and the preferences of agents are updated by getting closer to the respective norm. Each update is carried out by taking convex combinations between the original value of the norms (preferences) and the values of the influencing preferences (norms). The extent to which norms determine preferences, and vice versa, is set via model parameters. See the supplementary information for more details on how these updates are made, as well as details on initialization. \section{Virtual Experiment}\label{sec:virtual_experiment} We now describe how we use our model to explore three different classes of \emph{interventions} outlined by \citet{hutson2018debiasing}. In Section~\ref{sec:interventions}, we operationalize these interventions, which are also listed in Table~\ref{table:filter_inter}. We then discuss how we vary model parameters to evaluate the effectiveness of these interventions under different assumptions about the underlying society. Finally, we detail the outcome measures we use to understand the impacts of these interventions. \subsection{Interventions} \label{sec:interventions} \citet{hutson2018debiasing} suggest three places where platforms might intervene to reduce bias and discrimination on intimate platforms: 1) at the search, sort, and filter level, 2) at the matching algorithm level, and 3) at the community policy and messaging level. Here, we focus on the first and the last, and leave out the second for two reasons. First, not all platforms use matching algorithms, and even when they do, the algorithms differ substantially across platforms. Second, the literature on possible matching algorithms is vast~\cite{gale1962college, shapley1974cores, abdulkadirouglu2005boston, hitsch2010matching, tu2014online, xia2015reciprocal, brozovsky2007recommender}, offering many different alternatives that could substitute for an existing matching algorithm.
A comparison between these various procedures is beyond the scope of this paper. \subsubsection{Filtering Interventions} The ease of filtering out people based on their attributes when searching for potential partners in online systems may exacerbate racial homogamy in two ways: 1) it reduces the diversity in search results and 2) it makes race a legitimate basis for decisions \cite{lewis2016preferences, hutson2018debiasing, bowker2000sorting}. The first solution suggested by \citet{hutson2018debiasing} to mitigate this problem is to purposefully introduce diversity into search results, e.g. by removing the ability to filter by certain attributes entirely. We refer to any intervention along these lines as a \emph{filter intervention}. We consider four different filter interventions, comparing them to a baseline of traditional platform behaviors. As explained above, without any intervention, agents will filter their searches for romantic partners by the attribute they consider to be most important. We refer to this baseline option as \emph{Strong filtering}. From this baseline, \citet{hutson2018debiasing} note that interventions to reduce racial homogamy can be constructed along two dimensions. First, we can restrict the possibility of filtering on race entirely. To signal such an intervention we add a ``non-race'' label to the filter name. Second, we can introduce diversity by showing profiles of agents who did not provide the given attribute in their profile. We label filtering strategies that only include the desired setting of the given attribute as ``Strong,'' and those that also include potential partners who simply do not fill out a value for this attribute as ``Weak''. Combining interventions on these two dimensions gives three more filtering options, namely Strong non-race, Weak, and Weak non-race. We also add a fourth intervention where no filtering is allowed at all, on any attribute, i.e. the ``Off'' filtering option.
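As an illustrative sketch (not the model's actual code), the candidate sets induced by the Strong, Weak, and Off conditions can be written as follows; the non-race variants simply apply the same logic to the agent's most important non-race attribute:

```python
def search_results(seeker_value, profiles, mode, attribute="race"):
    """Candidate set under each filtering condition.

    `profiles` maps agent id -> dict of publicly specified attributes;
    a missing key means the attribute was left off the profile."""
    if mode == "off":
        return set(profiles)  # no filtering at all
    out = set()
    for agent, profile in profiles.items():
        specified = profile.get(attribute)
        if specified == seeker_value:
            out.add(agent)            # shown under strong and weak
        elif specified is None and mode == "weak":
            out.add(agent)            # weak also shows unknown values
    return out
```

Under Strong filtering only agents who publicly specified the sought value appear; Weak filtering additionally surfaces agents who left the attribute unspecified, injecting diversity into the result list.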
The five filtering conditions are most clearly stated with an example. Consider the agents in Figure~\ref{fig:dating_platform}. Let us assume the platform opts to use \emph{Strong} filtering. Further, let us assume that the blue agent's strongest preference is for a romantic partner of the same race. In this case, the blue agent will search for others with the same race, and will see only other potential partners who publicly self-identified with that race on the platform. If the platform had \emph{Weak} filtering, the agent would also see any other agents who a) have the same self-identified race or b) do not specify any race on the platform. If the platform employed \emph{Strong non-race} filtering, the platform would not allow the agent to search by race at all. The agent would therefore use their second strongest preference (say, music) to perform their search. Weak non-race follows similarly, and in the Off condition, agents would simply not be able to filter at all. Finally, note that we have described the process with a matching attribute; for competing attributes, agents are always assumed to search for other agents who hold the desired attribute. \begin{table}[t] \caption{The three types of Interventions, the baselines for each, and the different intervention conditions we test.} \label{table:filter_inter} \small \begin{tabular}{p{1.8cm} p{6.2cm}} {\bf Intervention} & {\bf Description} \\ \hline\hline \multicolumn{2}{c}{{\bf Filtering Interventions}} \\ \emph{Strong} & \emph{Baseline. Only see others with the preferred attribute value on the most important attribute.} \\ Strong non-race & Strong filtering, if not on race. Otherwise, Strong filter on second most important.\\ Weak & Also shows profiles with unknown values on most important attribute\\ Weak non-race & Weak filtering, but not on race.\\ Off & No filtering allowed.
\\ \hline \multicolumn{2}{c}{{\bf Attribute Intervention}} \\ \emph{5 attributes} & \emph{Each agent has the protected attribute, plus one of each kind of attribute} \\ 9 attributes& Each agent has the protected attribute, plus {\bf two} of each kind of attribute \\ \hline \multicolumn{2}{c}{{\bf Norm Intervention}} \\ \emph{Off} & \emph{Baseline. On-platform norms are an average of agent preferences} \\ On & On-platform norms are fixed to 0 on race, encouraging agents not to consider it\\ \end{tabular} \end{table} \subsubsection{Attribute Intervention} Removing filtering options can make it harder for agents to find the partners they desire. A solution suggested by \citet{hutson2018debiasing} to mitigate this problem is to add, or give additional attention to, new non-race attributes on which agents can filter (e.g. political views or smoking preferences). We refer to this intervention as an \emph{Attribute Intervention}. In our model, this intervention is implemented by increasing the number of dating-relevant attributes from a baseline of 5 to a total of 9, i.e. from having one non-racial attribute of each type to having two of them.\footnote{As a reminder, attributes differ along two dimensions, matching-competing and searchable-experiential, thus giving 4 possible types. Note also that \citet{hutson2018debiasing} additionally suggest a variant of this intervention, namely one based on the minimal group paradigm in social psychology \cite{tajfel_social_1971}. We provide results for this similar intervention in the supplemental material.} \subsubsection{Norm Intervention} The final intervention from \citet{hutson2018debiasing} that we model is an attempt to change or outline platform norms. \citet{hutson2018debiasing} suggest that different community policies, such as introducing pledges, educating users on the effect of expressing racial preferences, and detailing the disparities faced by racial minorities, can foster social norms that encourage interracial dating.
Norms, in turn, are known to be perhaps the most powerful mechanism for changing discriminatory behaviors \cite{tankard2016norm}. In our experiment, the platform norm intervention fixes the value of the on-platform norm for the race attribute to $0$. This means that agents, via the link between norms and agent preferences, are encouraged not to consider race at all in their decisions. Differently from the baseline condition, when on-platform norms are updated to reflect the average of preferences, the value on the race dimension does not change and continues to be 0 (see the Supplemental Material). \subsection{Assumptions about Society} \label{sec:society} Table~\ref{tab:virt_exp} lists the model parameters we vary or keep constant in our virtual experiment. As we will show, consideration of these parameters is critical to understanding how well claims about intervention effects generalize to different assumptions about society. At the same time, a full exploration of the parameter space is infeasible. We take a three-pronged approach to addressing this. First, as noted in the table, many of these parameters and their values are theoretically grounded and/or empirically informed by prior work. Second, we engaged in sensitivity testing, described in the appendix, across a range of values for all parameters. Finally, for our virtual experiment, we fixed parameters that did not appear to have significant impacts on model outcomes. For parameters that did appear to have an effect, we select a default value for each (coloured in red in Table~\ref{tab:virt_exp}); when varying one such parameter, we keep the others at their default values. As shown in Table~\ref{tab:virt_exp}, we vary five model parameters in our simulation. First, we vary the tolerance to unsuccessful search turns, which allows us to see how results change if agents are more or less tolerant of seeing undesired profiles and messages while browsing.
Second, we vary racial bias in the out-platform norms, capturing how much importance society puts on same-race relationships compared to the other attributes of individuals. Third, we vary the influence norms have on preferences, i.e. the percentage of individual preferences explained by social norms. Fourth, we alter the initial degree of ethnocentrism, which intuitively corresponds to the extent to which agents are likely to prefer same-race relationships. The last parameters we change are the levels of correlation between attributes, i.e. $\beta$ and $\gamma$; we test in turn scenarios with zero, low, mild, and high correlations between race and the other attributes ($\beta$), and low and medium correlations between the other non-race attributes ($\gamma$). Each simulation runs over $2000$ iterations, which corresponds roughly to a period of over $5$ years. Moreover, we use $20$ distinct random seeds for each scenario, thus having a total of $3000$ runs. \subsection{Outcomes} \label{sec:outcomes} We treat racial \emph{hetero}gamy---the opposite of racial homogamy---as the main outcome variable when studying the impact of the simulated interventions on racial homogamy.\footnote{We do so since we would like to emphasize the positive nature of the proposed change.} We operationalize racial heterogamy as the percentage of long-term relationships that are interracial. However, a proportion-based measurement can of course vary based on changes in the numerator or denominator. Thus, an increase in racial heterogamy might be caused by a decrease in efficiency, i.e. the overall number of long-term relationships, or by an increase in the number of interracial relationships. We therefore investigate these two quantities separately as well. Lastly, we provide two measures of user satisfaction, which are important for two reasons. First, profit-oriented decisions by platforms depend in part on the number, and hence also on the happiness, of users.
This in turn suggests platforms may be less likely to implement interventions which threaten these metrics. Second, even if a platform was somehow persuaded to implement such interventions, the free nature of the market permits unsatisfied users to easily move to a different dating platform. To capture user satisfaction, we use two measures informed by the literature on online dating \cite{frost2008people}. First, we measure the percentage of time spent in an offline relationship, relative to online dating phases. Second, we measure the number of users who exit the platform because they have exhausted their tolerance for unsuccessful online searches. \section{Results} \subsection{Average Effects on Racial Heterogamy} Each intervention proposed by \citet{hutson2018debiasing} increases racial heterogamy. However, as shown in Figure~\ref{fig:res_pr}, the magnitude of these effects varies. The largest positive effect of any intervention is the Strong non-race filtering condition, i.e. when our simulated platform removes the option of filtering searches by race but allows agents to search on all other attributes. Doing so increases racial heterogamy from 10.5\% to 18.3\%. In contrast, a Weak filtering option increases the percentage by 3.4\% over the baseline. Combining the Strong non-race filtering intervention with the norm and attribute interventions further increases racial heterogamy. However, the effects are non-additive. For example, the effect of the norm intervention decreases after implementing a filtering intervention. Applying the norm intervention to the baseline filtering option results in a 4\% increase in interracial relationships, while if it is applied on top of a Strong non-race intervention, the impact reduces to less than 3\%. Finally, adding dating-relevant attributes produces an additional and relatively stable increase of 2\% to 3\%.
These results suggest the potential of interventions on the platform, but also that their impacts may quickly reach a saturation point. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{pr_outgroup_longterm_2.png} \caption{Racial heterogamy (y-axis) given a particular filter intervention (x-axis), depending on whether the norm intervention is On (green lines, Xs as points) or Off (blue lines, triangles as points) and whether the attribute intervention is off (left subplot) or on (right subplot).} \label{fig:res_pr} \end{figure} \subsection{Effects on Total Number of Relationships} We also find that increases in racial heterogamy, the \emph{percent} of long-term interracial relationships, are not solely due to an increase in the \emph{number} of interracial long-term relationships. Rather, as depicted in the first part of Figure~\ref{fig:res_no}, most interventions are also paired with a significant decrease in the overall number of long-term relationships. With respect to the filtering interventions, removing the ability to filter by race causes an almost 20\% drop in the total number of long-term relationships. This happens, in short, because agents, and people, often \emph{want} to filter by race, whether they are conscious of that fact or not \cite{anderson2014political}. Thus, removing this filter makes it harder for agents to find the partners they desire. The attribute intervention causes this decrease for a separate, non-obvious reason. Specifically, including additional attributes makes the search space more diverse, thus making it harder to find the ``right'' partner. Most obviously, with more attributes, the chance of two agents matching on all attributes decreases. Finally, with respect to the norm intervention, influencing the preferences of agents by changing on-platform norms produces fluctuating evaluation criteria, especially when paired with racially biased offline norms.
Agents enter the simulation with their own potentially racially-biased preferences. As they interact on the platform, their preferences slowly converge towards the racially unbiased on-platform norm. However, when going through the offline relationship phase, their preferences again adapt to the racially biased societal (offline) norms, thus shifting the evaluation criteria of agents again. These fluctuations mean it takes longer to find suitable partners; so, if the process extends beyond the tolerance of agents for either bad recommendations or failed relationships, they will exit the platform without entering a long-term relationship. In our model, as in modern cultural sociological theory \cite{patterson_making_2014}, the impact of the norm intervention is therefore contingent on the extent to which norms within the context of the platform permeate into other normative contexts by way of individual preferences. \subsection{Effects on User Satisfaction} Our simulations suggest that most interventions, on average, will decrease user satisfaction, especially filter interventions that do not allow users to filter on race. However, the magnitude, and in select cases even the sign, of these effects varies according to the settings of the other interventions, and across the two metrics considered. These findings present evidence that interventions can potentially be carried out without decreasing user satisfaction, but that such interventions typically are weaker (in terms of reducing racial homogamy), that effects on one satisfaction metric may contradict results on another, and that which combinations of interventions are most effective is not obvious a priori. Specifically, Figure~\ref{fig:time_offline} shows that filtering interventions, to varying degrees, all decrease the percentage of agents' time spent in offline relationships, and in doing so likely decrease user satisfaction.
The most drastic impacts are from the Strong non-race intervention, where 19.0\% of agents' time is spent offline, as opposed to 24.2\% in the baseline condition. This effect is, however, mitigated when paired with the attribute intervention. This is likely because agents become more satisfied with the non-race filtering options they have available to them. Other interventions, while less capable of reducing racial homogamy than the Strong non-race filter, have a much more limited effect on user satisfaction. These observations for time spent offline also appear when measuring user satisfaction in terms of the percentage of users who leave the platform without a partner. As shown in Figure~\ref{fig:user_sat}, however, certain filter interventions can actually \emph{increase} user satisfaction along this metric. Specifically, the use of Weak or no filtering can increase satisfaction on this metric because agents often do not report all of their attributes, and thus including profiles with unknown attribute values lowers the chance of excluding possibly interesting partners. Full descriptions of other patterns in Figures~\ref{fig:time_offline} and \ref{fig:user_sat} are included in the Supplemental Material. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{no_longterm.png} \caption{Number of total (left) and interracial (right) long-term relationships (y-axis) for the filter conditions (x-axis) and the norm intervention (Blue: off, Green: on).
Results are only shown without the attribute intervention.} \label{fig:res_no} \end{figure} \begin{figure}[t] \begin{subfigure}{.95\linewidth} \centering \includegraphics[width=\linewidth]{time_offline.png} \caption{The percentage of time spent in offline relationships.} \label{fig:time_offline} \end{subfigure} \begin{subfigure}{.95\linewidth} \centering \includegraphics[width=\linewidth]{exit_reason.png} \caption{The percentage of exits due to exhausting the tolerance for unsuccessful search turns.} \label{fig:bad_rec} \end{subfigure} \caption{Plots showing the level of user satisfaction with the application of different interventions.} \label{fig:user_sat} \end{figure} \subsection{Intervention Impacts under Different Assumptions about Society} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{dependencies_2nd_top.png} \caption{The first- and second-order marginal effects of the varied parameters on the percentage of out-group long-term relationships, for the 10 effects of strongest absolute value.} \label{fig:2nd_dep} \end{figure} Finally, we explore how interventions fare under different assumptions about society. Broadly speaking, we find, as reflected in Figure~\ref{fig:2nd_dep}, that interventions do have strong stand-alone beneficial effects, but that their impact is diluted if certain, often reasonable, assumptions about society are met. The figure shows coefficients resulting from a linear regression with racial heterogamy as the outcome variable, and all (centered and scaled) model parameters in Table~\ref{tab:virt_exp} \emph{and their pairwise interactions} as independent variables. For readability, Figure~\ref{fig:2nd_dep} only includes the 10 coefficients with the strongest absolute effect.
The regression model fits the data quite well ($R^2 = 0.83$), indicating that first- and second-order effects explain much of the dynamics of the complex system we model, but also that the model cannot be fully explained by phenomena enumerable by theory alone.\footnote{See the Supplementary Material for full results of the first-order effects.} Filter interventions have the largest absolute effect on racial heterogamy. On average, across all simulated societies, removing the possibility to filter by race increases racial heterogamy by $7.65\%$ (CI = $(7.43, 7.88)$)\footnote{Note that confidence intervals are provided here but should be cautiously interpreted, as they can be made infinitely small simply by running additional simulations.}. The Norm intervention also has a considerable positive impact, as it alone induces an average increase of $3.95\%$ (CI = $(3.73, 4.17)$). The impact of these interventions reflects their potential to effect real and possibly lasting changes in society. Not all interventions we tested, however, reduce racial homogamy in the manner theorized by Hutson et al. \cite{hutson2018debiasing}. Specifically, we find that only \emph{matching} (as opposed to competing) and \emph{searchable} (as opposed to experiential) attributes produce a meaningful increase in racial heterogamy. Adding other types of attributes can actually have negative effects, either decreasing the overall number of relationships while not impacting the number of interracial relationships, or simply leading to a decrease in racial heterogamy.\footnote{See the Supplement for full results.} These findings underscore the importance of using simulation to think through the operationalization of potential interventions, especially by combining various theoretical models (here, of intervention and attribute types) into a single simulation model.
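As a concrete illustration of this analysis, the following minimal sketch (our own reconstruction, not the actual analysis code; function names and test data are hypothetical) fits an ordinary-least-squares model on centered and scaled parameters together with all of their pairwise interactions:

```python
import numpy as np
from itertools import combinations

def interaction_design(X):
    """Center and scale each parameter column, then append all
    pairwise interaction terms plus an intercept column."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    pairs = [Z[:, i] * Z[:, j] for i, j in combinations(range(Z.shape[1]), 2)]
    return np.column_stack([np.ones(len(Z)), Z] + pairs)

def fit_ols(X, y):
    """Least-squares fit of y on the interaction design; returns
    the coefficient vector and the model's R^2."""
    D = interaction_design(X)
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    resid = y - D @ coef
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return coef, r2
```

The coefficients of the standardized first-order and interaction columns can then be ranked by absolute value to select the strongest effects for display.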
Other results in Figure~\ref{fig:2nd_dep} show, unsurprisingly, that different assumptions about society reflect drastically different baselines from which this improvement might come. For example, we see that racial heterogamy is significantly \emph{reduced} as correlations between race and the other attributes increase, and with an increase in racial bias in off-platform norms. Perhaps most importantly, however, Figure~\ref{fig:2nd_dep} reveals that intervention effects and societal conditions can \emph{interact} to change the effectiveness of an intervention. For example, the Norm intervention is particularly effective when we assume that agent preferences are strongly impacted by social norms. But adding a non-race filtering option is less effective with increased values of the $\beta$ parameter. Put another way, the effectiveness of not allowing individuals to search by race on an online platform is limited by the extent to which race is signaled by, or correlated with, other attributes that the platform \emph{does} allow one to search on. \section{Conclusion} In their recent articles, \citet{hutson2018debiasing} and \citet{anderson2014political} present contrasting views on the extent to which online dating platforms can intervene to impact long-term patterns of racial homogamy in online dating. In the present work, we developed an agent-based modeling approach that synthesizes modern perspectives in (cultural) sociology, the psychology of online dating, and social computing to help us explore this debate. Before discussing the implications of our work, we again explicitly call out two critical assumptions that our model makes. While we allow race to co-vary with other individual attributes, reflecting its multidimensional nature \cite{sen_race_2016}, our virtual experiment does ultimately assume that race and gender are both binary, i.e. that one can easily and completely specify an individual's race or gender.
Doing so lends an aura of racial and gender determinism to our work that FAccT scholars are rightfully skeptical of \cite{hanna2020towards}. Our experiment also assumes that these variables are independent, and thus ignores important intersectional identities. However, we emphasize that these assumptions are made only in the present virtual experiment, and are by no means hard-coded into our open source simulation framework. We hope that we and others can therefore easily explore other constructions of race, dating, gender, and sexual preference in future work. With these limitations in mind, our findings shed critical light on the effectiveness and side effects of the interventions proposed by \citet{hutson2018debiasing} in reducing racial homogamy, and ultimately racial inequality. Our work implies that the extent to which these interventions will be effective depends on four things. First, it depends on what we assume about society. We find that some of the proposed interventions are effective only if we assume that people are unable to infer (perceived) race from other cultural preferences along which race can be performed online, an assumption that is potentially untenable given what is known about correlations between cultural attributes in general \cite{dellaposta_why_2015} and perceptions of race in particular \cite{freeman_dynamic_2011}. Other interventions are effective only if we assume that 1) the platform can significantly impact social norms towards interracial dating on their site and that 2) these norms have more significant effects on individuals than other ``offline'' normative structures (e.g. the family) that have historically encouraged racial homogamy. Second, it depends on what platforms, and society, are willing to accept as sacrifice for progress on reducing racial homogamy.
Our model makes explicit these potentially non-obvious tradeoffs, quantifying how certain interventions to reduce racial homogamy have negative impacts on user satisfaction and rates of overall long-term relationships. Third, it depends on how long one is willing to wait. We find that changing on-platform norms has a limited effect; however, our model only studies short-term effects. Changes in large-scale social norms can take much longer, and thus the payoff of these normative interventions may exist only much further into the future, as on-platform norms influence agent preferences, which in turn slowly reshape norms in other contexts. Finally, it depends on what one's ultimate goal is. If one's ultimate goal is to reduce racial homogamy to any degree, our results provide evidence that this is possible under a fairly wide range of assumptions. However, if one's goal is to eradicate racial homogamy, then interventions at the platform level seem to hit a saturation point well below true equality. These technological interventions will therefore almost certainly never be enough; they must be supplemented with structural and normative social change beyond the platform. While one could argue that these findings are somewhat obvious post-hoc, our work is motivated by a disagreement in the existing literature over the potential for platform intervention. This should serve as an important reminder that everything is obvious once it is observed \cite{watts2011everything}, and we hope it motivates the use of our approach in other domains. Especially when unsuitable changes could have large negative effects, constructing a model and simulating different possible outcomes can help explore potential problems with interventions that emerge within a complex sociotechnical system. \begin{acks} We thank Yuhao Du, David Garcia, Atri Rudra, Christoph Stadtfeld, Aleksandra Urman, and the FAccT reviewers for their valuable feedback on this work.
KJ was supported by NSF award IIS-1939579, with partial support from Amazon. \end{acks} \bibliographystyle{ACM-Reference-Format} \section{Further Details on the Decision Model} Recall that the decision function is based on the following equation: $$\mathbb{P}(\text{yes}) = \frac{1}{1 + \exp \left(- \mathbb{E} \left[w^{(i)}\cdot s^{(i)}(a) \mid \hat{a}^{(j)} \subseteq a \right]- t \right)}.$$ Formally, $\mathbb{E} \left[w^{(i)}\cdot s^{(i)}(a) \mid \hat{a}^{(j)} \subseteq a \right] = \sum_{a' \supseteq \hat{a}^{(j)}} \left( w^{(i)}\cdot s^{(i)}(a') \right) \cdot \mathbb{P}_{G^{(i)}} (A = a'\mid\hat{a}^{(j)}\subseteq A)$, where $\mathbb{P}_{G^{(i)}} (A = a'\mid\hat{a}^{(j)}\subseteq A)$ is the probability that an agent with the known attributes $\hat{a}^{(j)}$ has the vector of attributes $a'$ in the view of agent $i$. This probability is obtained by dividing the entry for $a'$ in $G^{(i)}$ by the sum of the entries corresponding to all possible extensions of $\hat{a}^{(j)}$. \subsection{Attribute Scoring Function} In the main text, we informally introduced the attribute scoring function. Below, we include its formal definition: \[ \left(s^{(i)}(a)\right)_k= \left\{ \begin{array}{rl} 1 & \text{if } k \text{ matching and } a^{(i)}_k = a_k \text{, or } k \text{ competing and } a^{(i)}_k < a_k, \\ 0 & \text{if } k \text{ competing and } a^{(i)}_k = a_k,\\ -1 & \text{if } k \text{ matching and } a^{(i)}_k \neq a_k \text{, or } k \text{ competing and } a^{(i)}_k > a_k. \end{array} \right. \] \subsection{Time Variable} Finally, note that the time contribution gets reset to $0$ when moving from an online to an offline relationship. This is consistent with the literature, as the first offline date is a crucial point in deciding whether or not the relationship will continue \cite{finkel2012online}.
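For concreteness, the scoring function and the logistic decision rule above can be sketched in Python as follows. This is a minimal illustration of our own (function names are hypothetical); the expectation over attribute completions is assumed to be passed in precomputed:

```python
import math

def attribute_score(own, other, kinds):
    """s^(i)(a): +1 for a matched matching-attribute or a favourably
    compared competing-attribute, -1 for the opposite, and 0 for a
    tie on a competing attribute."""
    out = []
    for mine, theirs, kind in zip(own, other, kinds):
        if kind == "matching":
            out.append(1 if mine == theirs else -1)
        else:  # competing
            out.append(0 if mine == theirs else (1 if mine < theirs else -1))
    return out

def p_yes(weights, expected_score, t):
    """Decision rule P(yes) = 1 / (1 + exp(-E[w . s] - t))."""
    ws = sum(w * s for w, s in zip(weights, expected_score))
    return 1.0 / (1.0 + math.exp(-(ws + t)))
```

With a zero expected score and $t = 0$, the rule is maximally uncertain, returning a probability of $0.5$.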
\section{Further Details on Initialization} \subsection{Agent attributes} To generate agent attributes during initialization of a new agent, we use a generative model common in both the decision-making literature \cite{cantwell2020thresholding} and in other areas of computational social science, e.g. topic modeling \cite{blei2007correlated}. For each agent $i$, we start by generating one instance, $d^{(i)}$, of a $(K+1)$-dimensional normally distributed random variable $A \sim \mathcal{N}(\mu, \Sigma)$. For interpretability, we use a zero-mean distribution, i.e. we fix $\mu = 0$. Therefore, $d^{(i)}$ will be the degree to which agent $i$ fits with a $1$ category on each attribute. For example, if $d^{(i)}_0 = 0.5$ then $i$ identifies more as having a race-value of $1$, while if $d^{(i)}_0 = -1$ they identify more as having a race-value of $0$. Moreover, without loss of generality, we fix the variance of each component to $1$, i.e. we let $\Sigma_{j, j} = 1$. To complete the values of the covariance matrix, $\Sigma$, we use two parameters: 1) a parameter $\beta$ capturing the level of correlation between the protected attribute and the others, and 2) a parameter $\gamma$ capturing the level of correlation between the non-protected attributes. That is, $\Sigma_{0, j} = \beta$ and $\Sigma_{j, t} = \gamma$ for every $j \neq t$ non-zero. Varying these two parameters changes the predictability of one attribute, given another. In our virtual experiment, both $\beta$ and $\gamma$ are varied to reflect various degrees of correlation between the protected attribute and all other attributes (none, low, medium, and high, via $\beta$), and between all non-protected attributes (low and medium, via $\gamma$); see Table~\ref{tab:virt_exp}. The binary attribute vector for each agent is obtained from the signs of the entries in $d^{(i)}$: negative values of $d^{(i)}_j$ correspond to $a^{(i)}_j = 0$, while non-negative values correspond to $a^{(i)}_j = 1$.
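The attribute-generation step above can be sketched with NumPy as follows (a minimal sketch with a function name of our own; it assumes the chosen $\beta$ and $\gamma$ yield a valid, positive semi-definite $\Sigma$):

```python
import numpy as np

def sample_binary_attributes(n_agents, K, beta, gamma, seed=0):
    """Draw d^(i) ~ N(0, Sigma) with unit variances, Sigma[0, j] = beta
    (race vs. other attributes) and Sigma[j, t] = gamma (other pairs),
    then threshold at 0 to obtain binary attribute vectors.
    Component 0 is the race attribute."""
    rng = np.random.default_rng(seed)
    dim = K + 1
    sigma = np.full((dim, dim), gamma)
    sigma[0, :] = beta
    sigma[:, 0] = beta
    np.fill_diagonal(sigma, 1.0)
    d = rng.multivariate_normal(np.zeros(dim), sigma, size=n_agents)
    return (d >= 0).astype(int)  # negative -> 0, non-negative -> 1
```

Raising $\beta$ makes the non-race columns more predictive of the race column, which is exactly the mechanism that can undercut the non-race filtering intervention.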
Also generated from this distribution is the \emph{contingency table}, $G$, which is used by agents to construct their \emph{stereotypes}. \subsection{Stereotypes} Agents do not necessarily know the true contingency table $G$. Instead, based on their previous experience, they form a belief over this table, namely $G^{(i)}$ for agent $i$. We refer to this as the personal stereotypes of agents \cite{krueger1996personal}. This matrix is generated together with a new agent, and, as mentioned in the main text, changes throughout their time in the simulation. In this subsection we discuss its initialization. Since this matrix is a result of the experience of the agent prior to entering the platform, we first create a sample of agents that they encountered (either directly or indirectly) before entering the platform. Because racial homophily~\cite{mcpherson_birds_2001} and in-group favoritism~\cite{hastorf_they_1954,tajfel_social_1971} influence the formation of stereotypes, this sampling is not race-balanced. We first generate, using the true contingency table $G$, $300$ agents having the same race as agent $i$ and $100$ having a different race. Next, similar to previous work \cite{joseph_coevolution_2014}, we use the degree of ethnocentrism, $e$, to alter the true attributes of the different-race agents in the second sample. More precisely, for each agent of a different race in the sample, and each attribute that is perceived as good by agent $i$, there is a chance $e$ of changing it into a negatively-perceived attribute. The (modified) sample of $400$ agents induces a belief for agent $i$ over which attribute combinations are more likely to be encountered. Formally, this belief is captured in the stereotype matrix $G^{(i)}$. \subsection{Norms and preferences} As mentioned in the main text, out-platform norms, agent preferences, and on-platform norms are initialized sequentially. To start, the out-platform norm vector is generated inside the $K$th simplex.
More precisely, since there is a restriction on the value of the race-component and on the sum of searchable attributes versus the sum of experiential attributes, three parts are generated separately. The first part corresponds to the race attribute and is a $(K+1)$-dimensional vector with the first value given by the respective parameter and 0s on all the remaining positions. We refer to this as $n^{off}_{race}$. The second part corresponds to the searchable attributes. The sum over the components corresponding to the searchable attributes is obtained by accounting for the weight of the race attribute and for the relative importance of searchable vs experiential attributes, i.e. $\text{sum}_{\text{searchable}} = \left(1 - \text{the norm value on the race attribute}\right)\times \left(\text{the importance of searchable attributes}\right)$. Next we take an $S$-dimensional vector uniformly at random from the $(S-1)$th simplex, complete it with $(K+1-S)$ values of $0$ (one in the first position, and the remaining at the end), and multiply it by $\text{sum}_{\text{searchable}}$. This gives the searchable part of the out-platform norm, namely $n^{off}_{searchable}$. The last part, $n^{off}_{experiential}$, is obtained similarly to the searchable one. The out-platform norm is the sum of these three components, i.e. $n^{off} = n^{off}_{race} + n^{off}_{searchable} + n^{off}_{experiential}$. Next, the individual preferences are obtained by taking the out-platform norm and adding some noise. Last, the on-platform norm vector is obtained by averaging the preferences on the searchable components. More precisely, for each agent $i$ on the platform we consider $w^{(i, s)}$ to be the searchable entries of the preference vector of that agent. Since $w^{(i)}\in \Delta^{K}$, the sum of the searchable entries of that vector might be below $1$, say $s_i = \sum_j w^{(i, s)}_j$.
A re-scaling of $w^{(i, s)}$ will ensure that it is in the $(S-1)$th simplex, therefore $w^{(i, s)}$ becomes $\frac{1}{s_i} \cdot w^{(i, s)}$. We refer to this version of $w^{(i, s)}$ as the searchable vector of preferences. Their average is the initial on-platform norm, i.e. $n^{on} = \frac{1}{N} \cdot \sum_{i\leq N} w^{(i, s)}$. When intervening on the on-platform norm, its value on the protected characteristic, i.e. its first entry, remains 0. To reflect this, after initially setting the on-platform norm as the average of preferences, we set its first entry to $0$ and re-scale the others to keep the norm in the $(S-1)$th simplex. Similarly, after moving the norm closer to the average of preferences, we once again set its first entry to $0$ and re-scale the other ones. With such an intervention in place, at the end of each iteration, when preferences move closer to norms, the agents' weight on the protected characteristic will decrease. \section{Further Details on Updates} \subsection*{Norms and preferences} We now turn our attention to norm and preference updates, which, as mentioned in the main text, are carried out in sequence during each iteration. First, the preferences update after each interaction to reflect its outcome. Second, at the end of each iteration, the on-platform norm moves closer to the average preference of agents. Third, the preferences are updated by moving closer to norms. Each of these three updates is done by taking convex combinations between the original value of the norms/preferences and the values of the influencing vector. Since both preferences and norms are in a simplex, which is closed under convex combinations, they will remain in the simplex after the update. The remainder of this section explains the details of these updates. First, the preferences of each agent in a relationship update based on the interaction. Using the decision function, agents decide whether or not to continue the relationship.
We consider a mutual continuation decision as a sign of a \textit{positive interaction}, and any other decision combination as a sign of a \textit{negative interaction}. Let agent $A$ update their preferences after an interaction of quality $q$ with agent $B$. In this case, we construct an interaction vector $i$. The preference of $A$ will then update to the convex combination between this interaction vector and the old preference vector. We construct the interaction vector in stages. First, based on the quality of the interaction and how good each attribute is perceived to be, we determine whether the weight of each attribute should increase, decrease, or stagnate (see Table~\ref{table:change_interaction}). For example, if there was a negative interaction with an agent having a good attribute, then perhaps that attribute is not so important, and its weight should decrease. Second, for each attribute $k$ whose value should decrease, we put $i_k = 0$, and for each attribute $k$ whose value should stagnate we put the previous weight value, i.e. $i_k = w^{(A)}_k$. Last, the interaction values for attributes whose weight should increase are completed in proportion to their old value, such that $i\in\Delta^K$. As a note, we only construct such a vector if there is some relative change, i.e. some attribute weight values should increase and some should decrease. If, for example, there is some positive interaction with somebody with all attributes perceived as being bad, then there will be no post-interaction preference update, as this indicates no relative change in preferences. Using this interaction vector, the preferences of agent $A$ update to a convex combination between $i$ and the old preference, with a strength given by some parameter $\theta$, i.e. $\theta \cdot i +(1-\theta) \cdot w^{(A)}$. \begin{table}[h!]
\centering \begin{tabular}{c | c c c} & $k$ good & $k$ neutral/unknown & $k$ bad \\ \hline\hline $q$ = positive & $\uparrow$ & $-$ &$\downarrow$ \\ $q$ = negative & $\downarrow$ & $-$ & $\uparrow$ \\ \end{tabular} \caption{Table showing whether the interaction vector should dictate an increase, decrease, or stagnation in each attribute $k$ depending on the value of the interaction $q$.} \label{table:change_interaction} \end{table} Next, at the end of each turn, the on-platform norms are changed to move closer to the average of the searchable preferences of agents. As a reminder from Section~\ref{sec:gen}, the searchable vector of preferences is obtained by taking from the weight vector of preferences only the searchable entries, and re-scaling it to be in the $(S-1)$th simplex. The on-platform norms are updated to the convex combination between the average of agents' searchable vectors of preferences and the old value of the on-platform norms. Lastly, each preference becomes the convex combination between its old value and the norm. Depending on whether the agent is online or offline, the norm in question could be the on-platform norm, which updates only the searchable attributes, or the out-platform norm. \section{Further Details on Fixed Parameter Values} As outlined in the table showcasing the parameters in our model, some parameters remain constant throughout the experiment. In some cases, their values are chosen based on past work, while for others we ran a sensitivity test to see whether they meaningfully change the results. Below, we discuss in turn some of the reasoning behind fixing each of these parameters: \begin{itemize} \item \textit{Strength of updates: interaction $\to$ preferences.} In the initial experiment we also tried more extreme values, such as a $50\%$ and a $100\%$ influence. Increasing this parameter also increases the unpredictability of the results: the variance and the time taken to achieve convergent behaviour surge.
Therefore, we settled for more reasonable values of this parameter. \item \textit{Strength of updates: preferences $\to$ norms.} For this parameter, we also tested larger values, such as $10\%$, $20\%$ and $50\%$. Using such values did not affect the results, especially when paired with low levels of influence of interactions on preferences, i.e. when preferences were more stable. \item \textit{Relative importance of out-platform norms when generating initial preferences.} This parameter controls how much alike the preferences of agents are originally, and how much they resemble the out-platform norms. Variations of this parameter influenced the short-term results, i.e. the ones after only a couple of hundred iterations, more than the end results observed after $2000$ iterations. \item \textit{Number of initial acquaintances in-group and out-group.} We also tried having fewer such acquaintances, namely $150$ and $50$. Besides a slight increase in variability, no difference was observed. \item \textit{Weight of searchable versus experiential attributes.} \citet{frost2008people} conducted a study on how much weight searchable attributes have when compared to experiential attributes when considering a potential partner. We chose the values of this parameter based on their results. \item \textit{The probability a searchable attribute is specified on the profile and the probability of learning an attribute of someone else.} Past work indicates that users do not always specify all the attributes they could on their dating profile~\cite{lewis2016preferences} and that attributes are discovered during interactions \cite{finkel2012online}. This motivates the use of these two parameters in our model. The exact numbers influence only the amount of time the two agents act on incomplete information. Small variations to these numbers do not qualitatively change our results.
\item \textit{The number of rounds until a relationship becomes offline and long-term, and the frequency of offline interactions.} The time it takes one to move to the next step of their relationship, as well as how often they interact, varies a lot across people. Therefore, we used values close to the averages reported in previous work to set these parameters \cite{ramirez2015online, brym2001love, munoz2007aggression}. \item \textit{The relative time spent looking at received messages versus new profiles during search, and the number of profiles and messages considered per iteration.} During their time searching, users can both look at profiles and consider messages \cite{finkel2012online}. Based on previous work, we estimated that, on average, one considers around $30$ potential partners (either for sending an initial message, or for replying to such a message and starting an online relationship) \cite{frost2008people, finkel2012online}. \item \textit{Time-span for observing the platform.} The simulation runs for $2000$ iterations, the rough equivalent of $5$ years. In our sensitivity test we also tracked $5000$ iterations, but the results revealed that $2000$ are enough to fully observe the effects of interventions, i.e. achieve convergence and small variance. \item \textit{Initial population size and the number of agents added per iteration.} The sensitivity test also experimented with using fewer agents ($150$ initially and $2$ added per iteration). This decrease did not produce qualitative changes. Due to computational limitations we did not further increase the number of agents. \item \textit{Types of agents.} In the main text, agents can be of two types, loosely mimicking a heteronormative online dating website. However, we also tested a platform with only one type of agent, thus moving from a two-sided matching problem to a one-sided one. The results remain qualitatively similar.
\end{itemize} Further analysing the impact of changing these parameters could potentially reveal other interesting phenomena. For example, we know that users usually meet within $1-2$ weeks, but what would happen if an intervention encouraged them to meet sooner? That being said, this is outside the scope of our paper and is thus left for future research. Even though such an analysis could reveal new dependencies between parameters as well as potentially new effective interventions, it should not change the causal implications observed in this paper. \section{Additional Results and Result Details} \subsection{Explaining Patterns in User Satisfaction Metrics} Norm interventions produce smaller and often insignificant drops in user satisfaction, as they change the underlying preferences of users, and thus also the way they perceive potential partners. The slight drop, of only 1.06\% on average, in offline time when intervening on on-platform norms is due to the persistence of off-platform norms: even though users change their preferences while on the platform because of the intervention there, they still go through an offline state, in which their preferences are altered to reflect these different norms. When increasing the number of attributes, the negative effects of adding non-race filtering options decrease. For example, a shift to Strong non-race changes the percentage of time spent offline from 23.6\% to 21.2\%. The reason for this is that removing the possibility to search by one attribute affects users less if they have multiple others to search by; with more attributes there is a higher chance of having a second attribute that is almost as important as the first. Slightly different results can be noted with respect to the dissatisfaction resulting from unsuccessful search turns (see Figure 5b in the main text).
While the addition of non-race filtering options on top of both strong and weak filtering increases the level of dissatisfaction, adding variety with weak filtering, and sometimes even with no filtering, does not necessarily do so. Since agents often do not report all of their attributes, including the profiles with unknown attribute values lowers the chance of excluding possibly interesting partners. Consequently, when around 50\% of users do not report their value on the search attribute, a strong filtering option excludes many potentially suitable candidates, thus explaining the gaps. As with our other measures of satisfaction, regardless of the filtering or norm intervention, the addition of attributes improves satisfaction. When agents have more attributes to search on, excluding one of these or indirectly influencing the preferences through norms interferes less with the ability of users to signal and search based on their desires. \subsection{Regression Model with First-Order Effects} \begin{figure}[h] \centering \includegraphics[width=.9\linewidth]{dependencise.png} \caption{The marginal effect of each varied parameter on the percentage of out-group long-term relationships. } \label{fig:1st_dep} \end{figure} In the main text, we used a linear regression with racial heterogamy as the outcome variable and the varied model parameters and their pairwise interactions as independent variables to analyse the impact of parameters and pairs of parameters on the percentage of out-group long-term relationships. That model had high predictive power ($R^2 = 0.83$). However, the first-order terms alone also fit the data quite well ($R^2 = 0.79$). In this part of the supplementary material we look at all the first-order effects that were not already discussed in the main text. Figure \ref{fig:1st_dep} provides an overview of these effects.
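To make the regression setup concrete, the sketch below fits a first-order model and a model augmented with pairwise interaction terms via ordinary least squares. The data are synthetic stand-ins generated purely for illustration; the parameter names, coefficients, and resulting $R^2$ values are not those of our simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Four illustrative "varied parameters" (e.g. attribute correlations, norm bias).
X = rng.uniform(0, 1, size=(n, 4))
# Synthetic outcome with two main effects, one interaction, and noise.
y = (0.3 - 0.2 * X[:, 0] - 0.1 * X[:, 1]
     + 0.15 * X[:, 0] * X[:, 2] + rng.normal(0, 0.02, n))

def r_squared(F, y):
    """Fit OLS with an intercept and return the in-sample R^2."""
    A = np.column_stack([np.ones(len(y)), F])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# First-order terms only.
r2_first = r_squared(X, y)

# Add all pairwise interaction terms x_i * x_j (i < j).
pairs = [X[:, i] * X[:, j] for i in range(4) for j in range(i + 1, 4)]
r2_inter = r_squared(np.column_stack([X] + pairs), y)

print(f"R^2 first-order: {r2_first:.3f}, with interactions: {r2_inter:.3f}")
```

Since the interaction model nests the first-order one, its in-sample $R^2$ is never lower; the gap between the two quantifies how much the pairwise interactions add, mirroring the $0.83$ versus $0.79$ comparison above.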
Before, we have seen that racial heterogamy is significantly \emph{reduced} as the correlation between race and the other attributes, i.e. $\beta$, increases, and with an increase in racial bias in off-platform norms. Figure \ref{fig:1st_dep} shows that the correlation between non-race attributes also negatively impacts heterogamy. In contrast, as expected, there are also societal structures that can \emph{increase} racial heterogamy. For example, adding more attributes usually improves the percentage of out-group relationships, with the only exception being the addition of matching experiential attributes. To understand why this is the case, one needs to remember that the effect on the percentage is a mix between the effect on the total number of long-term relationships and the effect on the number of out-group ones. Adding competing attributes decreases the total number of relationships and has no effect on the number of out-group relationships. Consequently, adding them increases the percentage. Conversely, adding matching attributes has no effect on the total number of relationships, so the effect on the percentage is given by the change in the number of out-group ones. Increasing the number of matching attributes that are observable while online results in more out-group relationships, while adding matching attributes that are only observable when offline produces a decrease. In conclusion, even though adding attributes generally increases the percentage of out-group relationships, the only attributes that produce a meaningful improvement are the matching searchable ones, as their effect is due to an increase in the number of out-group long-term relationships rather than to a decrease in the total number of relationships. The level of stereotyping is another interesting parameter, as values close to $50\%$ have a more negative effect on the percentage of out-group relationships than values at either extreme.
Put another way, stereotyping under which agents of a different race are perceived to have half of their ``good'' attributes ``bad'' has a worse effect than both perceiving different-race agents as they are and believing that all of their attributes are ``bad'' (where ``good'' and ``bad'' matching attributes are in relation to the agent considering them). To give an example, the blue agent in Figure~\ref{fig:rel_phases} has a higher chance of finding a partner within a different bias group if either they believe that people with a race of $1$ have exactly the true distribution over music liking ($0\%$ stereotyping) or that they never like music ($100\%$ stereotyping) than if they think half of the ones that actually do like music and have a different race do not have a passion for it ($50\%$ stereotyping). To understand why this is the case, let us consider the case when the blue agent observes another agent with a music passion and an unspecified race. If the blue agent has a $100\%$ level of stereotyping, they will assume that the agent has the same race and possibly also associate other positive attributes with them. When they learn the true race value of the agent they will also have learned more of the true attributes of the agent, so by that time, race might not be a determining factor in decision making. On the other hand, if the blue agent has a $50\%$ level of stereotyping, the blue agent still perceives them being of a different race as a possibility, which might result in projecting the perceived negative attributes onto the agent in question, consequently resulting in a lower probability of forming and continuing a relationship. Two smaller effects are due to the level at which norms influence preferences and the tolerance level of agents to unsuccessful search turns. Although their marginal effect on the percentage of out-group relationships is small, they affect user satisfaction.
For example, an increased level of influence of the norms on preferences results in higher dissatisfaction when implementing norm interventions, due to an increase in the drop rate after the first relationships, while a lowered bad-recommendation tolerance decreases both the time spent offline and the percentage of exits due to too many unsuccessful search turns. \subsection{Minimal Group Paradigm Intervention} As a variant of the attribute intervention, \citet{hutson2018debiasing} also suggest creating attributes that help users ``categorize potential partners along new axes unassociated with protected characteristics''. This intervention is based on the minimal group paradigm in social psychology \cite{tajfel_social_1971}. To implement it, we added a new searchable and matching attribute that is correlated neither with race nor with any other preexisting attribute. We also made the assumption that agents value this newly introduced attribute in the same way they value the others. That is, when generating and updating preferences and norms we treat this attribute like any other matching searchable non-bias attribute. Figure \ref{fig:minimal_gr_par} shows the impact of this intervention on racial heterogamy as well as on the total and out-group numbers of long-term relationships.
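The defining property of this intervention, namely that the added attribute is uncorrelated with race while existing attributes may not be, can be sketched as follows. This is a simplified, hypothetical attribute-generation scheme for illustration, not our exact implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents = 1000
race = rng.integers(0, 2, n_agents)

beta = 0.6  # assumed strength of the race-attribute correlation
# Existing binary attribute: copies race with probability beta, else random.
existing = np.where(rng.random(n_agents) < beta,
                    race, rng.integers(0, 2, n_agents))
# Intervention attribute: sampled independently of race and of all others.
new_attr = rng.integers(0, 2, n_agents)

corr_existing = np.corrcoef(race, existing)[0, 1]
corr_new = np.corrcoef(race, new_attr)[0, 1]
print(round(corr_existing, 2), round(corr_new, 2))
```

Searching or matching on `new_attr` therefore partitions agents along an axis statistically unrelated to race, which is exactly the mechanism the minimal group paradigm intervention exploits.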
\begin{figure}[h] \centering \includegraphics[width=\linewidth]{minimal_gr_par.png} \caption{The percentage and number of out-group long-term relationships after the minimal group paradigm intervention.} \label{fig:minimal_gr_par} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{no_and_pr_outgroup_longterm_9_attr.png} \caption{The percentage and number of out-group long-term relationships after the attribute intervention.} \label{fig:attr_intvn_2} \end{figure} Similarly to the attribute number intervention, this intervention produces an increase in the percentage of out-group long-term relationships over the baseline (see the main text for this figure). Moreover, under the assumption mentioned before, i.e. that creating this new attribute can be done in such a way that users value it as much as the other attributes, this intervention is more effective than the general attribute intervention (Figure \ref{fig:attr_intvn_2}). More precisely, the increase in the percentage of out-group long-term relationships is the same, but the decrease in the total number of relationships is smaller and the increase in the number of out-group ones is larger. The analysis of the first-order effects in the section above explains this difference. As observed there, matching searchable attributes are the only type of attributes with meaningful effects, i.e. an improvement in heterogamy due largely to an increase in the number of out-group long-term relationships rather than to a decrease in the total number of relationships. Thus, one would expect the minimal group paradigm intervention, which only adds matching searchable attributes, to have more positive effects than the general one, which adds one attribute of each type. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Reconfigurable intelligent surfaces (RISs) have recently been introduced for improved energy efficiency (EE), spectrum efficiency (SE), localization accuracy, sensing capability, as well as network/physical-layer security~\cite{huang2019reconfigurable, risTUTORIAL2020, wymeersch2019radio,hu2020reconfigurable,RISE6G_COMMAG}. The RIS, either being passive, active, or hybrid, is used to smartly control the radio propagation environment by virtue of multi-function capabilities, e.g., reflection, refraction, diffraction, polarization, scattering, and even absorption~\cite{di2019smart,huang2019reconfigurable}. In the literature, the RIS is commonly used as an intelligent reflector, which breaks the well-known law of reflection~\cite{Alexandropoulos2020}, to mitigate blockage and shadowing effects and to expand the communication coverage, especially for millimeter wave (mmWave) and Terahertz (THz) communications. Recently, a new type of RIS has appeared, termed the simultaneously transmitting and reflecting RIS (STAR-RIS)~\cite{Yuanwei2021}, which can be regarded as a promising bridge linking indoor and outdoor connectivity. Unlike the conventional RIS, the STAR-RIS can simultaneously realize two different functionalities, e.g., reflection and refraction. Namely, mobile stations (MSs) on both sides of the STAR-RIS, for instance, one indoor and the other outdoor, can be served by it at the same time. In addition, the two functionalities can be realized in different manners, e.g., energy splitting, time switching, and mode switching. Even though the potential of traditional reflective RISs for radio localization has been studied in the literature~\cite{Jiguang2020,Elzanaty2021}, we envision that the localization capability can be further extended by the introduction of one or multiple STAR-RISs.
With the aid of such a surface, an outdoor base station (BS) is capable of localizing an indoor user, in addition to an outdoor user, via uplink sounding reference signal (SRS) transmission, not only in sub-6GHz but also in mmWave frequency bands. In this paper, we study the Cram\'er Rao lower bounds (CRLBs) for the intermediate channel parameter estimation based on Fisher information analysis, and extend it to the three-dimensional (3D) localization of one indoor and one outdoor MS by virtue of the Jacobian matrix. We also examine the effect of the energy splitting between the two STAR-RIS functionalities and of the power allocation between the two MSs during SRS transmission. Besides, we find the optimal design of the STAR-RIS based on the CRLBs, which offers better performance than alternative designs. \textit{Notations}: A bold lowercase letter $\mathbf{a}$ denotes a vector, and a bold capital letter $\mathbf{A}$ denotes a matrix. $(\cdot)^\mathsf{T}$ and $(\cdot)^\mathsf{H}$ denote the matrix or vector transpose and Hermitian transpose, respectively.
$(\cdot)^{-1}$ denotes the inverse of a matrix, $\mathrm{tr}(\cdot)$ denotes the trace operator, $\mathrm{diag}(\mathbf{a})$ denotes a square diagonal matrix with the entries of $\mathbf{a}$ on its diagonal, $\mathbf{A} \otimes \mathbf{B}$ and $\mathbf{A} \diamond \mathbf{B}$ denote the Kronecker and Khatri-Rao products of $\mathbf{A}$ and $\mathbf{B}$, respectively, $\mathbb{E}[\cdot]$ and $\mathrm{var}(\cdot)$ are the expectation and variance operators, $\mathbf{1}$ is the all-one vector, $\bI_{M}$ denotes the $M\times M$ identity matrix, $j = \sqrt{-1}$, and $\|\cdot\|_2$ denotes the Euclidean norm of a vector.
$[\mathbf{a}]_i$, $[\mathbf{A}]_{ij}$, and $[\mathbf{A}]_{i:j, i:j}$ denote the $i$th element of $\mathbf{a}$, the $(i,j)$th element of $\mathbf{A}$, and the submatrix of $\mathbf{A}$ formed by rows $i,i+1, \ldots, j$ and columns $i,i+1, \ldots, j$, respectively. Finally, $|\cdot|$ returns the absolute value of a complex number. \section{System Model} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{System_Model} \caption{The considered STAR-RIS-assisted mmWave MIMO system for simultaneous indoor and outdoor 3D localization.} \label{System_Model} \vspace{-0.5cm} \end{figure} We consider a nearly-passive STAR-RIS-aided mmWave multiple-input multiple-output (MIMO) system, which consists of one multi-antenna BS, one multi-element STAR-RIS, one indoor MS, and one outdoor MS, and is intended for simultaneous indoor and outdoor 3D localization, as shown in Fig.~\ref{System_Model}. The BS and the STAR-RIS are equipped with $M$ antennas and $N$ almost passive scattering elements, respectively, while each MS is equipped with a single antenna. The extension to multi-antenna MSs is feasible. Both the BS and the STAR-RIS employ a uniform planar array (UPA) structure parallel to the $x$--$z$ plane. The STAR-RIS has two operational functionalities, i.e., reflection and refraction, which can be realized simultaneously.
\subsection{Channel Model} The system is assumed to operate in the mmWave frequency band, and we consider the Saleh-Valenzuela parametric channel model for all four individual channels. The direct line-of-sight (LoS) channel between the outdoor MS and the $M$-antenna cellular BS is denoted by $\mathbf{h}_1\in\mathbb{C}^{M \times 1}$ and is mathematically expressed as follows: \begin{equation}\label{h_1} \mathbf{h}_1 = \frac{e^{-j 2\pi d_1/\lambda}}{\sqrt{\rho_1}} \boldsymbol{\alpha}_x(\theta_{1},\phi_{1}) \otimes \boldsymbol{\alpha}_z(\phi_{1}), \end{equation} where $d_1$ (in meters) and $\rho_1$ (for the sake of simplicity, we assume that $\rho_1 = d_1^2$) are the distance and path loss between the outdoor MS and the BS, respectively, $\lambda$ is the wavelength of the carrier frequency, and $\theta_1$ and $\phi_1$ are the azimuth and elevation angles of arrival associated with $\mathbf{h}_1$, respectively.\footnote{In fact, we can also consider the free-space path loss, which is modeled as: \begin{equation} \rho_1 = d_1^2 f_c^2 / 10^{8.755}, \nonumber \end{equation} where $f_c$ (in kHz) is the carrier frequency, defined as $f_c = \frac{c}{\lambda}$ with $c$ being the speed of light ($3\times 10^8$ m/s). Alternatively, the standard 3GPP urban micro (UMi) path loss model can be considered, according to which: \begin{equation} \rho_1 = 10^{2.27} d_1^{3.67} f_c^{2.6}, \nonumber \end{equation} where $f_c$ is expressed in GHz~\cite{Akdeniz2014}. 
} The steering vectors $\boldsymbol{\alpha}_x(\theta_{1},\phi_{1})$ and $\boldsymbol{\alpha}_z(\phi_{1})$ can be written as~\cite{Tsai2018}: \begin{align} \boldsymbol{\alpha}_x(\theta_{1},\phi_{1}) =& \Big[e^{-j \frac{2\pi d_x}{\lambda} (\frac{M_x -1}{2}) \cos(\theta_1) \sin(\phi_1)},\nonumber \\ & \cdots, e^{j \frac{2\pi d_x}{\lambda} (\frac{M_x -1}{2}) \cos(\theta_1)\sin(\phi_1)} \Big]^{\mathsf{T}},\\ \boldsymbol{\alpha}_z(\phi_{1}) =& \Big[e^{-j \frac{2\pi d_z}{\lambda} (\frac{M_z -1}{2}) \cos(\phi_1) },\nonumber \\ & \cdots, e^{j \frac{2\pi d_z}{\lambda} (\frac{M_z -1}{2}) \cos(\phi_1)} \Big]^{\mathsf{T}}, \end{align} where $M = M_x M_z$ with $M_x$ and $M_z$ being the numbers of horizontal and vertical BS antennas, respectively, and $d_x$ and $d_z$ denote the inter-element spacings along the horizontal and vertical axes, which are set to half a wavelength without loss of generality. Similarly, the other channels, i.e., $\mathbf{h}_2 \in \mathbb{C}^{N \times 1}$ and $\mathbf{h}_3\in \mathbb{C}^{N \times 1}$, can be expressed in the same manner as: \begin{equation}\label{h_2_h_3} \mathbf{h}_i = \frac{e^{-j2\pi d_i/\lambda}}{\sqrt{\rho_i}} \boldsymbol{\alpha}_x(\theta_{i},\phi_{i}) \otimes \boldsymbol{\alpha}_z(\phi_{i}), \end{equation} for $i = 2$ and $3$, where $N = N_x N_z$ is the number of STAR-RIS elements, with $N_x$ and $N_z$ denoting the numbers in the horizontal and vertical axes, respectively. Note that the array response vectors $\boldsymbol{\alpha}_x(\cdot)$ and $\boldsymbol{\alpha}_z(\cdot)$ may have a different dimension compared to those in~\eqref{h_1} but possess the same form. Finally, $\rho_2$ and $\rho_3$ follow the same assumption as $\rho_1$. 
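As a numerical sanity check of the channel model above, the following minimal NumPy sketch builds the UPA steering vectors and the LoS channel of \eqref{h_1} under the half-wavelength spacing and $\rho = d^2$ assumptions. All concrete values (angles, distance, wavelength, array size) are illustrative choices, not system parameters from the paper.

```python
import numpy as np

def steering_x(theta, phi, Mx):
    # Symmetric element indices -(Mx-1)/2, ..., (Mx-1)/2 with d_x = lambda/2,
    # so the per-element phase is pi * m * cos(theta) * sin(phi).
    m = np.arange(Mx) - (Mx - 1) / 2
    return np.exp(1j * np.pi * m * np.cos(theta) * np.sin(phi))

def steering_z(phi, Mz):
    m = np.arange(Mz) - (Mz - 1) / 2
    return np.exp(1j * np.pi * m * np.cos(phi))

def los_channel(theta, phi, d, lam, Mx, Mz):
    rho = d**2                                 # simple path loss rho = d^2
    a = np.kron(steering_x(theta, phi, Mx), steering_z(phi, Mz))
    return np.exp(-2j * np.pi * d / lam) / np.sqrt(rho) * a

# Illustrative values: ~30 GHz carrier (lam = 1 cm), 4x4 UPA, 30 m distance.
h1 = los_channel(theta=0.4, phi=1.1, d=30.0, lam=0.01, Mx=4, Mz=4)
print(h1.shape, np.abs(h1[0]))
```

Each entry of the resulting channel vector has modulus $1/d$, as implied by the unit-modulus steering entries and $\rho = d^2$.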
The wireless channel between the STAR-RIS and the BS, i.e., $\bH_4 \in \mathbb{C}^{M \times N}$, is expressed as follows: \begin{equation}\label{H_4} \bH_4 = \frac{e^{-j 2\pi d_4/\lambda}}{\sqrt{\rho_4}} (\boldsymbol{\alpha}_x(\theta_{4},\phi_{4}) \otimes \boldsymbol{\alpha}_z(\phi_{4}))(\boldsymbol{\alpha}_x(\theta_{4},\phi_{4}) \otimes \boldsymbol{\alpha}_z(\phi_{4}))^\mathsf{H}, \end{equation} assuming that the two UPAs (i.e., one at the BS and the other at the STAR-RIS) are deployed in parallel without any orientation offset. Due to the wall between the BS and the indoor MS, there is no direct LoS path between them. The only path is the refraction route via the STAR-RIS. In this work, we consider only the LoS path in each individual channel; the extension to the multipath scenario and to arbitrary orientations between the UPAs is left for future work. \subsection{Geometric Relationship} The Cartesian coordinates of the BS, the STAR-RIS, the outdoor MS, and the indoor MS are $\mathbf{p}_\text{B} = (x_\text{B},y_\text{B},z_\text{B})^\mathsf{T}$, $\mathbf{p}_\text{R} = (x_\text{R},y_\text{R},z_\text{R})^\mathsf{T}$, $\mathbf{p}_{\text{U},1} = (x_{\text{U},1},y_{\text{U},1},z_{\text{U},1})^\mathsf{T}$, and $\mathbf{p}_{\text{U},2} = (x_{\text{U},2},y_{\text{U},2},z_{\text{U},2})^\mathsf{T}$, respectively. 
The relationships between the distances and the pairs of Cartesian coordinates are listed below: \begin{align} d_1 &= \|\mathbf{p}_\text{B} - \mathbf{p}_{\text{U},1} \|_2, \\ d_i &= \|\mathbf{p}_\text{R} - \mathbf{p}_{\text{U},i-1} \|_2,\;\text{for}\; i = 2,3, \\ d_4 &= \|\mathbf{p}_\text{B} - \mathbf{p}_{\text{R}} \|_2. 
\end{align} By introducing the three-element vector $\boldsymbol{\xi}_i \triangleq [\cos(\theta_i) \cos(\phi_i), \sin(\theta_i) \cos(\phi_i), \sin(\phi_i) ]^\mathsf{T}$ for $i = 1,2,3,4$, the geometric relationship between the angular parameters and the Cartesian coordinates of the nodes can be expressed as \begin{align}\label{Geometry} \mathbf{p}_{\text{R}} &= \mathbf{p}_{\text{B}} + d_4 \boldsymbol{\xi}_4, \\ \mathbf{p}_{\text{U},1} & = \mathbf{p}_{\text{B}} + d_1 \boldsymbol{\xi}_1 = \mathbf{p}_{\text{R}} + d_2 \boldsymbol{\xi}_2, \label{p_u_1_1} \\ \mathbf{p}_{\text{U},2} & = \mathbf{p}_{\text{R}} + d_3 \boldsymbol{\xi}_3. \label{p_u_2} \end{align} \subsection{Signal Model} Recall that the STAR-RIS has two operational functionalities, i.e., reflection and refraction. 
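The geometric relations in \eqref{Geometry} above can be checked numerically: given a reference position, a distance, and the azimuth/elevation pair, the direction vector $\boldsymbol{\xi}$ reproduces the target position. The coordinates below are illustrative, not taken from the paper.

```python
import numpy as np

def xi(theta, phi):
    # Direction vector [cos(theta)cos(phi), sin(theta)cos(phi), sin(phi)]^T.
    return np.array([np.cos(theta) * np.cos(phi),
                     np.sin(theta) * np.cos(phi),
                     np.sin(phi)])

p_B = np.array([0.0, 0.0, 10.0])      # illustrative BS position
p_R = np.array([20.0, 15.0, 5.0])     # illustrative STAR-RIS position
d4 = np.linalg.norm(p_B - p_R)

# Recover the (azimuth, elevation) pair of the BS-RIS direction, then
# reconstruct p_R = p_B + d4 * xi(theta4, phi4).
u = (p_R - p_B) / d4
phi4 = np.arcsin(u[2])
theta4 = np.arctan2(u[1], u[0])
p_R_hat = p_B + d4 * xi(theta4, phi4)
print(np.round(p_R_hat, 6))           # matches p_R
```

The same distance-plus-direction mapping underlies \eqref{p_u_1_1} and \eqref{p_u_2}, where it converts the estimated channel parameters into the MS positions.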
Two separate series of phase shifters are leveraged for controlling them, so each functionality is represented by its own phase control matrix, i.e., $\boldsymbol{\Omega}_1$ for controlling refraction and $\boldsymbol{\Omega}_2$ for controlling reflection. $\boldsymbol{\Omega}_1$ and $\boldsymbol{\Omega}_2$ are diagonal matrices whose diagonal elements satisfy the unit-modulus constraints, i.e., $|[\boldsymbol{\Omega}_1]_{jj}| = |[\boldsymbol{\Omega}_2]_{jj}| = 1$, $\forall j=1,2,\ldots,N$. We consider 3D localization via uplink transmission, where the two MSs send SRSs towards the BS simultaneously. The received signal during the $k$th time slot, for $k =1,2,\ldots,K$, can be mathematically expressed as \begin{equation} \label{by_k} \mathbf{y}_k = \mathbf{h}_1 x_{1,k} + \epsilon_2 \bH_4\boldsymbol{\Omega}_{2,k}\mathbf{h}_2 x_{1,k} + \epsilon_1 \bH_4\boldsymbol{\Omega}_{1,k}\mathbf{h}_3 x_{2,k} + \bn_k, \end{equation} where $x_{1,k}$ is the SRS from the outdoor MS, $x_{2,k}$ is the SRS from the indoor MS, while $\epsilon_1$ (for refraction) and $\epsilon_2$ (for reflection) control the energy splitting between the two operational modes of the STAR-RIS, satisfying the condition $\epsilon_1^2 + \epsilon_2^2 = 1$. The received signal at the BS is further corrupted by the white Gaussian noise $\bn_k$, each element of which follows $\mathcal{CN}(0, \sigma^2)$. During the $k$th time slot, the refraction matrix $\boldsymbol{\Omega}_{1,k}$ and the reflection matrix $\boldsymbol{\Omega}_{2,k}$ are applied. In order to ensure good estimates, $\boldsymbol{\Omega}_{1,k}$ and $\boldsymbol{\Omega}_{2,k}$ vary from one time slot to another, i.e., $\boldsymbol{\Omega}_{1,1} \neq \boldsymbol{\Omega}_{1,2} \neq \cdots \neq \boldsymbol{\Omega}_{1,K}$, and $\boldsymbol{\Omega}_{2,1} \neq \boldsymbol{\Omega}_{2,2} \neq \cdots \neq \boldsymbol{\Omega}_{2,K}$. 
The benefit of this refractive/reflective beam sweeping is verified through our numerical results on the STAR-RIS design in Section~\ref{STAR_RIS_Design}. Based on the received signals across the $K$ time slots, the BS estimates the Cartesian coordinates of both the indoor and outdoor MSs, enabling 3D localization. Without loss of generality, we assume that a sum power constraint is applied in each time slot, i.e., $\mathbb{E}[|x_{1,k}|^2] + \mathbb{E}[|x_{2,k}|^2] = P$, $\forall k$. The vector $\mathbf{y}_k$ in \eqref{by_k} can be further expressed as \begin{align} \mathbf{y}_k = &\mathbf{h}_1 x_{1,k} + \epsilon_2 \bH_4 \mathrm{diag}(\mathbf{h}_2) \boldsymbol{\omega}_{2,k} x_{1,k} \nonumber\\ &+ \epsilon_1 \bH_4 \mathrm{diag}(\mathbf{h}_3)\boldsymbol{\omega}_{1,k} x_{2,k} + \bn_k, \end{align} where $\boldsymbol{\Omega}_{1,k} = \mathrm{diag} (\boldsymbol{\omega}_{1,k})$ and $\boldsymbol{\Omega}_{2,k} = \mathrm{diag} (\boldsymbol{\omega}_{2,k})$, $\forall k$. By stacking all the $\mathbf{y}_k$'s column by column, we obtain the expression: \begin{align} \label{bY} \bY =& \eta_1 \sqrt{P} \mathbf{h}_1 \mathbf{1}^\mathsf{T} + \eta_1 \sqrt{P} \epsilon_2 \bH_4 \mathrm{diag}(\mathbf{h}_2) \bar{\boldsymbol{\Omega}}_2 \nonumber \\ &+ \eta_2 \sqrt{P} \epsilon_1 \bH_4 \mathrm{diag}(\mathbf{h}_3)\bar{\boldsymbol{\Omega}}_1 + \bN, \end{align} where $\mathbf{1}$ denotes the $K$-element all-one vector, $\bY = [\mathbf{y}_1, \cdots, \mathbf{y}_K]$, $\bN = [\bn_1, \cdots, \bn_K]$, $\bar{\boldsymbol{\Omega}}_1 = [\boldsymbol{\omega}_{1,1}, \cdots, \boldsymbol{\omega}_{1,K}]$, and $\bar{\boldsymbol{\Omega}}_2 = [\boldsymbol{\omega}_{2,1}, \cdots, \boldsymbol{\omega}_{2,K}]$. We have also set $|x_{1,k}|^2 = \eta_1^2 P$ and $|x_{2,k}|^2 = \eta_2^2 P$, where $\eta_1^2 + \eta_2^2 =1$. 
Applying vectorization to $\bY$ in~\eqref{bY}, the following expression is deduced: \begin{align}\label{vec_Y} \mathbf{y} =& \eta_1\sqrt{P} (\mathbf{1} \otimes \bI_{M}) \mathbf{h}_1+ \eta_1 \sqrt{P} \epsilon_2 (\bar{\boldsymbol{\Omega}}_2^\mathsf{T} \otimes \bI_M)(\bI_N \diamond \bH_4) \mathbf{h}_2\nonumber\\ &+ \eta_2 \sqrt{P} \epsilon_1 (\bar{\boldsymbol{\Omega}}_1^\mathsf{T} \otimes \bI_M)(\bI_N \diamond \bH_4) \mathbf{h}_3 + \bn, \end{align} where $\mathbf{y} = \mathrm{vec}(\bY)$ and $\bn = \mathrm{vec}(\bN)$. We further introduce the following notations to simplify the analysis that follows: \begin{align} \mathbf{A}_1 &= (\mathbf{1} \otimes \bI_M), \\ \mathbf{A}_2 &= (\bar{\boldsymbol{\Omega}}_2^\mathsf{T} \otimes \bI_M)(\bI_N \diamond \bH_4), \\ \mathbf{A}_3 & = (\bar{\boldsymbol{\Omega}}_1^\mathsf{T} \otimes \bI_M)(\bI_N \diamond \bH_4). \end{align} To this end, the expression in~\eqref{vec_Y} can be re-written as: \begin{equation} \label{vec_Y1} \mathbf{y} =\sqrt{P} \eta_1\mathbf{A}_1 \mathbf{h}_1 + \sqrt{P}\eta_1 \epsilon_2 \mathbf{A}_2 \mathbf{h}_2 + \sqrt{P}\eta_2\epsilon_1\mathbf{A}_3 \mathbf{h}_3 + \bn. \end{equation} Upon the STAR-RIS deployment, we assume that the BS knows the exact location of the STAR-RIS. 
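The vectorization step leading to \eqref{vec_Y} relies on the identity $\mathrm{vec}(\bH_4 \mathrm{diag}(\mathbf{h})\bar{\boldsymbol{\Omega}}) = (\bar{\boldsymbol{\Omega}}^\mathsf{T} \otimes \bI_M)(\bI_N \diamond \bH_4)\mathbf{h}$, where $\diamond$ is the column-wise Khatri-Rao product. The following short NumPy check verifies it for random matrices of small, illustrative dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 3, 4, 5
H4 = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
h = rng.normal(size=N) + 1j * rng.normal(size=N)
# Random unit-modulus phase profiles, one column per time slot.
Omega_bar = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(N, K)))

# Left-hand side: stack the per-slot contributions and vectorize column-wise.
Y = H4 @ np.diag(h) @ Omega_bar
lhs = Y.reshape(-1, order="F")               # vec(Y)

# Right-hand side via the Kronecker and Khatri-Rao products:
# column j of (I_N ⋄ H4) is e_j ⊗ H4[:, j].
khatri_rao = np.einsum("ij,kj->ikj", np.eye(N), H4).reshape(M * N, N)
rhs = np.kron(Omega_bar.T, np.eye(M)) @ khatri_rao @ h

print(np.allclose(lhs, rhs))                 # True
```

The check is a direct application of $\mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^\mathsf{T} \otimes \mathbf{A})\mathrm{vec}(\mathbf{X})$ together with $\mathrm{vec}(\bH_4\mathrm{diag}(\mathbf{h})) = (\bI_N \diamond \bH_4)\mathbf{h}$.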
Thus, we assume that the BS has exact information on $\bH_4$ in terms of the parameters $\theta_4$, $\phi_4$, and $d_4$.\footnote{This assumption needs to be further relaxed, since perfect information is usually infeasible in practice.} Therefore, $\mathbf{A}_1$, $\mathbf{A}_2$, and $\mathbf{A}_3$ in~\eqref{vec_Y1} are measurement matrices known to the BS (the BS also knows the refractive/reflective phase configurations due to its interaction with the STAR-RIS controller). \section{Cram\'er Rao Lower Bound Analyses} In this section, we provide the CRLBs on the estimation of the intermediate channel parameters, followed by the estimation of the 3D Cartesian coordinates. We also present the refractive/reflective optimization of the STAR-RIS for the indoor/outdoor localization objective. \subsection{Estimation of Channel Parameters} The channel parameters to be estimated are those included in $\mathbf{h}_1$, $\mathbf{h}_2$, and $\mathbf{h}_3$, i.e., the nine-tuple $\boldsymbol{\nu} \triangleq [\theta_1, \phi_1, d_1,\theta_2, \phi_2, d_2,\theta_3, \phi_3, d_3]^\mathsf{T}$. 
Since the additive noise is complex Gaussian distributed, by introducing $\boldsymbol{\mu}(\boldsymbol{\nu}) \triangleq \eta_1{\mathbf{A}}_1 \boldsymbol{h}_1 + \eta_1 \epsilon_2 {\mathbf{A}}_2 \boldsymbol{h}_2 + \eta_2\epsilon_1{\mathbf{A}}_3 \boldsymbol{h}_3$ for~\eqref{vec_Y1}, the Fisher information matrix for $\boldsymbol{\nu}$ is obtained as: \begin{equation}\label{Fisher_parameter} [\bJ(\boldsymbol{\nu}) ]_{i,j} = \frac{P}{\sigma^2}\Re \Big\{\frac{\partial \boldsymbol{\mu}^\mathsf{H}} {\partial \nu_i} \frac{ \partial \boldsymbol{\mu}}{ \partial \nu_j} \Big\}. \end{equation} The partial derivatives in~\eqref{Fisher_parameter} with respect to the parameters in $\boldsymbol{h}_1$, $\boldsymbol{h}_2$, and $\boldsymbol{h}_3$ are provided below: \begin{align} \frac{\partial \boldsymbol{\mu}} {\partial \theta_1} &= \eta_1{\mathbf{A}}_1 (\bD_{11} \otimes \bI_{M_x})\boldsymbol{h}_1,\\ \frac{\partial \boldsymbol{\mu}} {\partial \phi_1} &= \eta_1{\mathbf{A}}_1 (\bD_{12} \otimes \bI_{M_x})\boldsymbol{h}_1 + \eta_1{\mathbf{A}}_1 (\bI_{M_z} \otimes \bD_{13}) \boldsymbol{h}_1 \nonumber \\ & = \eta_1 {\mathbf{A}}_1 (\bD_{12} \otimes \bI_{M_x} + \bI_{M_z} \otimes \bD_{13})\boldsymbol{h}_1, \\ \frac{\partial \boldsymbol{\mu}} {\partial d_1} & = \eta_1 \frac{-j2\pi d_1/\lambda -1}{d_1}
{\mathbf{A}}_1 \boldsymbol{h}_1,\\ \frac{\partial \boldsymbol{\mu}} {\partial \theta_2} &= \eta_1\epsilon_2 {\mathbf{A}}_2 (\bD_{21} \otimes \bI_{N_x})\boldsymbol{h}_2,\\ \frac{\partial \boldsymbol{\mu}} {\partial \phi_2} &= \eta_1\epsilon_2{\mathbf{A}}_2 (\bD_{22} \otimes \bI_{N_x})\boldsymbol{h}_2 + \eta_1\epsilon_2 {\mathbf{A}}_2 (\bI_{N_z} \otimes \bD_{23}) \boldsymbol{h}_2 \nonumber \\ & = \eta_1 \epsilon_2{\mathbf{A}}_2 (\bD_{22} \otimes \bI_{N_x} + \bI_{N_z} \otimes \bD_{23})\boldsymbol{h}_2, \\ \frac{\partial \boldsymbol{\mu}} {\partial d_2} & = \eta_1\epsilon_2 \frac{-j2\pi d_2/\lambda -1}{d_2} {\mathbf{A}}_2 \boldsymbol{h}_2,\\ \frac{\partial \boldsymbol{\mu}} {\partial \theta_3} &= \eta_2\epsilon_1 {\mathbf{A}}_3 (\bD_{31} \otimes \bI_{N_x})\boldsymbol{h}_3,\\ \frac{\partial \boldsymbol{\mu}} {\partial \phi_3} &= \eta_2\epsilon_1{\mathbf{A}}_3 (\bD_{32} \otimes \bI_{N_x})\boldsymbol{h}_3 + \eta_2\epsilon_1 {\mathbf{A}}_3 (\bI_{N_z} \otimes \bD_{33}) \boldsymbol{h}_3 \nonumber \\ & = \eta_2 \epsilon_1{\mathbf{A}}_3 (\bD_{32}
\otimes \bI_{N_x} + \bI_{N_z} \otimes \bD_{33})\boldsymbol{h}_3, \\ \frac{\partial \boldsymbol{\mu}} {\partial d_3} & = \eta_2\epsilon_1 \frac{-j2\pi d_3/\lambda -1}{d_3} {\mathbf{A}}_3 \boldsymbol{h}_3, \end{align} where we have used the following matrix definitions: \begin{align} \bD_{11} &= \mathrm{diag}\Big(\Big[-j\pi \Big(-\frac{M_x -1}{2}\Big) \sin(\theta_1) \sin(\phi_1), \cdots, \nonumber\\ &\quad -j\pi \Big(\frac{M_x -1}{2}\Big) \sin(\theta_1) \sin(\phi_1)\Big]\Big), \end{align} \begin{align} \bD_{12} &= \mathrm{diag}\Big(\Big[j\pi \Big(-\frac{M_x -1}{2}\Big) \cos(\theta_1) \cos(\phi_1), \cdots,\nonumber\\ &\quad j\pi \Big(\frac{M_x -1}{2}\Big) \cos(\theta_1) \cos(\phi_1)\Big]\Big),\\ \bD_{13} &= \mathrm{diag}\Big(\Big[-j\pi \Big(-\frac{M_x -1}{2}\Big) \sin(\phi_1), \cdots,\nonumber\\ &\quad -j\pi \Big(\frac{M_x -1}{2}\Big) \sin(\phi_1)\Big]\Big),\\ \bD_{i1} &= \mathrm{diag}\Big(\Big[-j\pi \Big(-\frac{N_x -1}{2}\Big) \sin(\theta_i) \sin(\phi_i), \cdots,\nonumber\\ &\quad -j\pi \Big(\frac{N_x -1}{2}\Big) \sin(\theta_i) \sin(\phi_i)\Big]\Big), \;\text{for}\; i = 2,3, \\ \bD_{i2} &= \mathrm{diag}\Big(\Big[j\pi \Big(-\frac{N_x -1}{2}\Big) \cos(\theta_i) \cos(\phi_i), \cdots,\nonumber\\ &\quad j\pi \Big(\frac{N_x -1}{2}\Big) \cos(\theta_i) \cos(\phi_i)\Big]\Big),\;\text{for}\; i = 2,3,\\ \bD_{i3} &= \mathrm{diag}\Big(\Big[-j\pi \Big(-\frac{N_x -1}{2}\Big) \sin(\phi_i), \cdots, \nonumber \\ &\quad -j\pi \Big(\frac{N_x -1}{2}\Big) \sin(\phi_i)\Big]\Big), \;\text{for}\; i = 2,3, \end{align} under the assumption of half-wavelength inter-element spacing for both the BS and the STAR-RIS UPAs.
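Numerically, once the nine partial derivatives above are stacked as columns of a single matrix, the FIM in~\eqref{Fisher_parameter} reduces to one Gram-matrix computation. A minimal sketch, where a random complex matrix stands in for the stacked derivative vectors:

```python
import numpy as np

def fim(G, P, sigma2):
    """FIM for y = sqrt(P)*mu(nu) + n with n ~ CN(0, sigma2*I):
    [J]_{ij} = (P/sigma2) * Re{ d(mu)^H/d(nu_i) * d(mu)/d(nu_j) },
    where column i of G is d(mu)/d(nu_i)."""
    return (P / sigma2) * np.real(G.conj().T @ G)

# Illustrative stand-in for the stacked derivatives (9 channel parameters).
rng = np.random.default_rng(1)
G = rng.standard_normal((64, 9)) + 1j * rng.standard_normal((64, 9))
J = fim(G, P=1.0, sigma2=0.1)
```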
For any unbiased estimator $\hat{\boldsymbol{\nu}}(\boldsymbol{y})$ of the channel parameters, the CRLB on the error covariance matrix is: \begin{equation}\label{CRLB_NU} \mathbb{E}\{(\boldsymbol{\nu}- \hat{\boldsymbol{\nu}}(\boldsymbol{y}))(\boldsymbol{\nu}- \hat{\boldsymbol{\nu}}(\boldsymbol{y}))^\mathsf{H} \} \succeq \bJ^{-1}(\boldsymbol{\nu}), \end{equation} where the notation ${\mathbf{A}} \succeq \bB$ for square matrices ${\mathbf{A}}$ and $\bB$ means ${\mathbf{a}}^\mathsf{H} {\mathbf{A}} {\mathbf{a}} \geq {\mathbf{a}}^\mathsf{H} \bB {\mathbf{a}}$ for any valid vector ${\mathbf{a}}$. \subsection{Estimation of 3D Cartesian Coordinates} We are interested in estimating the 3D Cartesian coordinates of the indoor and outdoor MSs.
Therefore, after estimating the channel parameters, we need to map them to 3D Cartesian coordinates, i.e., $\boldsymbol{\kappa} = [x_{\text{U},1}, y_{\text{U},1}, z_{\text{U},1},x_{\text{U},2}, y_{\text{U},2}, z_{\text{U},2}]^\mathsf{T}$, based on the geometrical relationship among the BS, the STAR-RIS, and the two MSs. For the CRLB evaluation of $\boldsymbol{\kappa}$, we resort to the Jacobian matrix $\bT$, which captures the relationship between the channel parameters $\boldsymbol{\nu}$ and the 3D Cartesian coordinates $\boldsymbol{\kappa}$ of the two MSs. Each $(i,j)$th element of $\bT$ is expressed as: \begin{equation} [\bT]_{ij} = \frac{\partial [\boldsymbol{\nu}]_j}{\partial [\boldsymbol{\kappa}]_i}. \end{equation} We provide the following derivatives for the calculation of the submatrix of the Jacobian matrix related to the outdoor MS: \begin{align}\label{theta_x_U_1} \partial \theta_i / \partial x_{\text{U},1} & =- \frac{\sin(\theta_i)}{d_i \cos(\phi_i)}, \\ \partial \theta_i/ \partial y_{\text{U},1} & = \frac{\cos(\theta_i)}{d_i \cos(\phi_i)}, \\ \partial \theta_i / \partial z_{\text{U},1} & = 0, \\ \partial \phi_i / \partial x_{\text{U},1} & =- \frac{\cos(\theta_i)\sin(\phi_i)}{d_i }, \\ \partial \phi_i/ \partial y_{\text{U},1} & =- \frac{\sin(\theta_i)\sin(\phi_i)}{d_i }, \end{align} \begin{align} \partial \phi_i / \partial z_{\text{U},1} & = \frac{\cos(\phi_i)}{d_i},\\ \partial d_i / \partial x_{\text{U},1} & = \cos(\theta_i) \cos(\phi_i), \\ \partial d_i/ \partial y_{\text{U},1} & = \sin(\theta_i) \cos(\phi_i), \\ \partial d_i / \partial z_{\text{U},1} & = \sin(\phi_i),\label{d1_z_U_1} \end{align} for $i =1$ and $2$. The submatrix of $\bT$ related to the indoor MS can be computed in the same manner.
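The closed-form partial derivatives above can be checked against finite differences. The sketch below assumes the spherical convention $x = d\cos(\phi)\cos(\theta)$, $y = d\cos(\phi)\sin(\theta)$, $z = d\sin(\phi)$ relative to the anchor point (the anchor and MS positions used here are only illustrative), which reproduces, e.g., $\partial d/\partial x = \cos(\theta)\cos(\phi)$ and $\partial \theta/\partial x = -\sin(\theta)/(d\cos(\phi))$:

```python
import numpy as np

def geo(p, anchor):
    """Map a 3D position to (theta, phi, d) as seen from `anchor`
    (azimuth theta, elevation phi, distance d)."""
    v = np.asarray(p, float) - np.asarray(anchor, float)
    d = np.linalg.norm(v)
    theta = np.arctan2(v[1], v[0])
    phi = np.arcsin(v[2] / d)
    return theta, phi, d

# Illustrative anchor (e.g., STAR-RIS) and MS positions.
pR = np.array([2.0, 2.0, 5.0])
pU = np.array([5.0, 1.0, 2.0])
theta, phi, d = geo(pU, pR)

h = 1e-6  # finite-difference step in the x coordinate
num_dth, num_dphi, num_dd = [(a - b) / h for a, b in
                             zip(geo(pU + np.array([h, 0, 0]), pR),
                                 geo(pU, pR))]
ana_dd = np.cos(theta) * np.cos(phi)            # analytic d(d)/dx
ana_dth = -np.sin(theta) / (d * np.cos(phi))    # analytic d(theta)/dx
ana_dphi = -np.cos(theta) * np.sin(phi) / d     # analytic d(phi)/dx
```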
In addition, it can be easily seen that only the channel parameters $\{ \theta_1, \phi_1, d_1,\theta_2, \phi_2, d_2\}$ are related to the coordinates $(x_{\text{U},1}, y_{\text{U},1}, z_{\text{U},1})$ of the outdoor MS, and only the parameters $\{\theta_3, \phi_3, d_3\}$ are related to the coordinates $(x_{\text{U},2}, y_{\text{U},2}, z_{\text{U},2})$ of the indoor MS, as concluded from~\eqref{p_u_1_1} and \eqref{p_u_2}. Therefore, the Jacobian matrix $\bT$ has the following block-diagonal form: \begin{equation} \bT = \begin{bmatrix} \bT_{1} & \mathbf{0} \\ \mathbf{0} & \bT_{2} \end{bmatrix}, \end{equation} where the submatrix $\bT_{1} \in \mathbb{R}^{3\times 6}$ consists of the partial derivatives in~\eqref{theta_x_U_1}--\eqref{d1_z_U_1}, and the submatrix $\bT_{2}\in \mathbb{R}^{3\times 3}$ consists of the partial derivatives related to the indoor MS; the latter is omitted here due to space limitations. The Fisher information matrix of $\boldsymbol{\kappa}$ can then be expressed as~\cite{Elzanaty2021} \begin{equation}\label{J_kappa} \bJ(\boldsymbol{\kappa}) = \bT \bJ(\boldsymbol{\nu}) \bT^\mathsf{T}. \end{equation} Similar to~\eqref{CRLB_NU}, we have the following inequality for the CRLB: \begin{equation}\label{CRLB_kappa} \mathbb{E}\{(\boldsymbol{\kappa}- \hat{\boldsymbol{\kappa}}(\boldsymbol{y}))(\boldsymbol{\kappa}- \hat{\boldsymbol{\kappa}}(\boldsymbol{y}))^\mathsf{T} \} \succeq \bJ^{-1}(\boldsymbol{\kappa}). \end{equation} The lower bounds on the root mean square error (RMSE) of the position estimates of the outdoor and indoor MSs are: \begin{align} \text{RMSE}_{\text{U},1} = \sqrt{\mathrm{var}(\hat{\boldsymbol{x}}_{\text{U},1})} &\geq \sqrt{\mathrm{tr}\{[\bJ^{-1}(\boldsymbol{\kappa})]_{1:3,1:3}\}}, \\ \text{RMSE}_{\text{U},2} = \sqrt{\mathrm{var}(\hat{\boldsymbol{x}}_{\text{U},2})} &\geq \sqrt{\mathrm{tr}\{[\bJ^{-1}(\boldsymbol{\kappa})]_{4:6,4:6}\}}.
\end{align} \subsection{Localization-Optimal Design for the STAR-RIS}\label{Optimal_Design_of_STAR_RIS} By introducing $\bG_1 \triangleq [\frac{\partial \boldsymbol{\mu}} {\partial \theta_1}, \frac{\partial \boldsymbol{\mu}} {\partial \phi_1}, \frac{\partial \boldsymbol{\mu}} {\partial d_1}, \frac{\partial \boldsymbol{\mu}} {\partial \theta_2}, \frac{\partial \boldsymbol{\mu}} {\partial \phi_2}, \frac{\partial \boldsymbol{\mu}} {\partial d_2} ]$, $\hat{\bG}_1 \triangleq \bG_1 \bT_{1}^\mathsf{H}$, $\bG_2 \triangleq [\frac{\partial \boldsymbol{\mu}} {\partial \theta_3}, \frac{\partial \boldsymbol{\mu}} {\partial \phi_3}, \frac{\partial \boldsymbol{\mu}} {\partial d_3}]$, and $\hat{\bG}_2 \triangleq \bG_2 \bT_{2}^\mathsf{H}$, the matrix $\bJ^{-1}(\boldsymbol{\kappa})$ in~\eqref{CRLB_kappa} can be expressed as~\cite{scharf1993geometry, Pakrooh2015} \begin{equation}\label{bJ_inv} \bJ^{-1}\!(\boldsymbol{\kappa}) \!=\! \frac{\sigma^2}{P} \begin{bmatrix} (\hat{\bG}_1^\mathsf{H} (\bI \!-\! {\mathbf{P}}_{\hat{\bG}_2})\hat{\bG}_1)^{-1} & * \\ * & (\hat{\bG}_2^\mathsf{H} (\bI \!-\! {\mathbf{P}}_{\hat{\bG}_1})\hat{\bG}_2 )^{-1} \end{bmatrix}\!\!, \end{equation} where ${\mathbf{P}}_{\hat{\bG}_i} = \hat{\bG}_i( \hat{\bG}_i^\mathsf{H} \hat{\bG}_i)^{-1}\hat{\bG}_i^\mathsf{H}$ is the orthogonal projection onto the column space of $\hat{\bG}_i$ for $i = 1$ and $2$.
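The block-diagonal terms in~\eqref{bJ_inv} follow from the standard Schur-complement/projection identity for Gram matrices. A real-valued sketch of this identity, with random matrices as illustrative stand-ins for $\hat{\bG}_1$ and $\hat{\bG}_2$:

```python
import numpy as np

rng = np.random.default_rng(2)
G1 = rng.standard_normal((40, 3))  # stand-in for the outdoor-MS block
G2 = rng.standard_normal((40, 3))  # stand-in for the indoor-MS block
G = np.hstack([G1, G2])
J = G.T @ G                        # Gram-type Fisher matrix (scale omitted)

def proj(A):
    """Orthogonal projection onto the column space of A."""
    return A @ np.linalg.inv(A.T @ A) @ A.T

I = np.eye(40)
block11 = np.linalg.inv(J)[:3, :3]                    # (1,1) block of J^{-1}
schur11 = np.linalg.inv(G1.T @ (I - proj(G2)) @ G1)   # projection form
```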
In order to simplify the diagonal terms in~\eqref{bJ_inv}, we rewrite $\bG_1$ and $\bG_2$ as $\bG_1 = [\eta_1{\mathbf{A}}_1\bH_1, \; \eta_1 \epsilon_2 {\mathbf{A}}_2 \bH_2]$ and $\bG_2 = \eta_2 \epsilon_1 {\mathbf{A}}_3\bH_3$, where $\bH_1 = [(\bD_{11} \otimes \bI_{M_x})\boldsymbol{h}_1, \; (\bD_{12} \otimes \bI_{M_x} + \bI_{M_z} \otimes \bD_{13})\boldsymbol{h}_1, \; \frac{-j2\pi d_1/\lambda -1}{d_1} \boldsymbol{h}_1]$, $\bH_2 = [(\bD_{21} \otimes \bI_{N_x})\boldsymbol{h}_2, \; (\bD_{22} \otimes \bI_{N_x} + \bI_{N_z} \otimes \bD_{23})\boldsymbol{h}_2, \; \frac{-j2\pi d_2/\lambda -1}{d_2} \boldsymbol{h}_2]$, and $\bH_3 = [(\bD_{31} \otimes \bI_{N_x})\boldsymbol{h}_3, \; (\bD_{32} \otimes \bI_{N_x} + \bI_{N_z} \otimes \bD_{33})\boldsymbol{h}_3, \; \frac{-j2\pi d_3/\lambda -1}{d_3} \boldsymbol{h}_3]$. It can be seen that ${\mathbf{A}}_2$ is a function of $\bar{\boldsymbol{\Omega}}_2$ and ${\mathbf{A}}_3$ is a function of $\bar{\boldsymbol{\Omega}}_1$, while ${\mathbf{A}}_1$ is independent of both $\bar{\boldsymbol{\Omega}}_1$ and $\bar{\boldsymbol{\Omega}}_2$.
By dividing $\bT_{1}$ into two submatrices, i.e., $\bT_{1} = [\tilde{\bT}_{1}, \bar{\bT}_{1}]$ with $\tilde{\bT}_{1}, \bar{\bT}_{1}\in \mathbb{R}^{3\times 3}$, we can derive the expressions $\hat{\bG}_1 = \eta_1{\mathbf{A}}_1\bH_1\tilde{\bT}_{1}^\mathsf{H} + \eta_1 \epsilon_2 {\mathbf{A}}_2 \bH_2\bar{\bT}_{1}^\mathsf{H}$ and $\hat{\bG}_2 = \eta_2 \epsilon_1 {\mathbf{A}}_3\bH_3\bT_{2}^\mathsf{H}$. For the sake of tractability of the STAR-RIS optimization (i.e., over $\bar{\boldsymbol{\Omega}}_1$ and $\bar{\boldsymbol{\Omega}}_2$), we maximize the principal angle (within $[0,\pi/2]$) between the subspaces of $\hat{\bG}_1$ and $\hat{\bG}_2$ by following~\cite{scharf1993geometry}. The optimal solutions are found when $[\mathbf{1}, \bar{\boldsymbol{\Omega}}_2^\mathsf{T}]$ is orthogonal to $\bar{\boldsymbol{\Omega}}_1^\mathsf{T}$; in this case, the largest principal angle, i.e., $\pi/2$, is obtained~\cite{golub2013matrix}. $\bar{\boldsymbol{\Omega}}_1$ and $\bar{\boldsymbol{\Omega}}_2$ can be chosen as non-overlapping parts of a DFT or a Hadamard matrix.\footnote{Hadamard matrices possess good properties, since they only contain entries in $\{-1, 1\}$. Therefore, only $1$-bit quantization is needed for both the reflection and refraction matrices at the STAR-RIS.} Note that we need to assume $K \geq 2N$ here in order to guarantee that both $\bar{\boldsymbol{\Omega}}_1$ and $\bar{\boldsymbol{\Omega}}_2$ are optimal.
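The orthogonality condition above can be checked directly for a DFT-based design: taking $\bar{\boldsymbol{\Omega}}_2^\mathsf{T}$ and $\bar{\boldsymbol{\Omega}}_1^\mathsf{T}$ as disjoint column blocks of a $K\times K$ DFT matrix, with the all-ones column placed inside the $\bar{\boldsymbol{\Omega}}_2$ block, makes $[\mathbf{1}, \bar{\boldsymbol{\Omega}}_2^\mathsf{T}]$ orthogonal to $\bar{\boldsymbol{\Omega}}_1^\mathsf{T}$. A sketch with illustrative dimensions ($K = 2N$):

```python
import numpy as np

N, K = 8, 16  # K >= 2N, as required
F = np.exp(-2j * np.pi * np.outer(np.arange(K), np.arange(K)) / K)  # DFT matrix
Om2_T = F[:, :N]        # refraction profiles; first column is the all-ones vector
Om1_T = F[:, N:2 * N]   # reflection profiles; a disjoint set of DFT columns
B = np.hstack([np.ones((K, 1)), Om2_T])  # [1, Omega_2^T]
```

Since distinct DFT columns are orthogonal and $\mathbf{1}$ already lies in the $\bar{\boldsymbol{\Omega}}_2$ block, the product $[\mathbf{1}, \bar{\boldsymbol{\Omega}}_2^\mathsf{T}]^\mathsf{H}\bar{\boldsymbol{\Omega}}_1^\mathsf{T}$ vanishes, i.e., the principal angle between the two subspaces is $\pi/2$.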
\section{Numerical Results} In this section's numerical investigation, we set the system parameters as follows: ${\mathbf{p}}_\text{B} =(0, 0, 8)^\mathsf{T}$, ${\mathbf{p}}_\text{R} =(2, 2, 5)^\mathsf{T}$, ${\mathbf{p}}_{\text{U},1} =(5, 1, 2)^\mathsf{T}$, and ${\mathbf{p}}_{\text{U},2} =(1, 5, 2)^\mathsf{T}$. The numbers of BS antennas, STAR-RIS elements, and SRSs from each MS were set as $M = 16$, $N = 64$, and $K = 128$, respectively. The signal-to-noise ratio (SNR) is defined as $P/\sigma^2$. \subsection{Results on the Channel Parameters' Estimation} We evaluate the estimation of the intermediate parameters with different setups for the STAR-RIS energy splitting coefficient $\epsilon_1$ and the power allocation coefficient $\eta_1$, i.e., $\epsilon_1 = \eta_1 = \sqrt{0.5}$; $\epsilon_1 = \sqrt{0.9}, \eta_1 = \sqrt{0.5}$; and $\epsilon_1 = \sqrt{0.5}, \eta_1 = \sqrt{0.9}$. The simulation results on the estimation of the $\theta_i$'s and $d_i$'s are shown in Figs.~\ref{MSE_Theta_effect_eta_epsilon} and~\ref{MSE_d_effect_eta_epsilon} in terms of the mean square error (MSE). Since the estimation performance for the $\phi_i$'s is similar to that for the $\theta_i$'s, their results are omitted here.
From the simulation results, we can see that when $\epsilon_1$ increases with fixed $\eta_1$, better performance on the estimation of parameters in $\mbox {\boldmath $h$}_3$ can be achieved. Meanwhile, the performance degrades for the parameter estimation in $\mbox {\boldmath $h$}_1$ and $\mbox {\boldmath $h$}_2$ when $\eta_1$ decreases (less transmission power at the outdoor MS). Note that when fixing $\eta_1$, the performance for the parameter estimation in $\mbox {\boldmath $h$}_1$ will not change with $\epsilon_1$. \begin{figure}[t] \centering \includegraphics[width=0.99\linewidth]{MSE_Theta_effect_eta_epsilon} \caption{CRLB on the estimation of $\{\theta_i\}$ for $i =1,2$ and $3$ with different pairs of $\epsilon_1$ and $\eta_1$. } \label{MSE_Theta_effect_eta_epsilon} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{MSE_d_effect_eta_epsilon} \caption{CRLB on the estimation of $\{d_i\}$ for $i =1,2$ and $3$ with different pairs of $\epsilon_1$ and $\eta_1$. } \label{MSE_d_effect_eta_epsilon} \end{figure} \subsection{CRLBs on 3D Localization} Based on the expressions~\eqref{J_kappa} and \eqref{CRLB_kappa}, we can calculate the CRLBs for the estimation of the 3D Cartesian coordinates of the indoor and outdoor MSs. The simulation results are shown in Fig.~\ref{RMSE_3D_Loc_effect_eta_epsilon} in terms of RMSE. The performance of 3D localization has been controlled by the two parameters $\epsilon_1$ and $\eta_1$. When $\epsilon_1 = \eta_1 =\sqrt{0.5}$, the performance gap between the two MSs' position estimation is small. However, in the other two cases, the gap is obvious. In this sense, well-selected $\epsilon_1$ and $\eta_1$ can satisfy the quality of services (QoSs) and user fairness for both MSs simultaneously. 
\begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{RMSE_3D_Loc_effect_eta_epsilon} \caption{CRLB on the estimation of the 3D Cartesian coordinates of the two MSs with different pairs of $\epsilon_1$ and $\eta_1$. } \label{RMSE_3D_Loc_effect_eta_epsilon} \vspace{-0.5cm} \end{figure} More comprehensive results on the effect of $\epsilon_1$ and $\eta_1$ on the 3D localization are shown in Fig.~\ref{heat_map}. In general, a small $\epsilon_1$ and a large $\eta_1$ offer poor performance for the indoor MS. On the contrary, a large $\epsilon_1$ and a small $\eta_1$ result in poor performance for the outdoor MS. However, for most cases where $\eta_1,\epsilon_1 >0.3$ holds, we can achieve promising 3D localization in terms of the RMSE for both MSs. \begin{figure}[t] \centering \includegraphics[width=0.99\linewidth]{heat_map} \caption{The effect of $\eta_1$ and $\epsilon_1$ on the 3D localization, when the SNR is fixed to $15$ dB. } \label{heat_map} \vspace{-0.5cm} \end{figure} \subsection{Effect of the STAR-RIS Design}\label{STAR_RIS_Design} We evaluate the effect of the STAR-RIS design by considering three different cases: i) $\bar{\boldsymbol{\Omega}}_1$ and $\bar{\boldsymbol{\Omega}}_2$ are part of a DFT matrix; ii) $\bar{\boldsymbol{\Omega}}_1$ and $\bar{\boldsymbol{\Omega}}_2$ are part of a Hadamard matrix; and iii) the phases of $\bar{\boldsymbol{\Omega}}_1$ and $\bar{\boldsymbol{\Omega}}_2$ are randomly generated. The simulation results are shown in Fig.~\ref{Effect_of_STAR_RIS_Design}. We observe that the first two cases have the same performance and outperform the third one, since according to Section~\ref{Optimal_Design_of_STAR_RIS} they are optimal. The performance gain is not pronounced because, for large $K$ and $N$, case iii) approximates the optimal solutions (i.e., the subspaces of $\hat{\bG}_1$ and $\hat{\bG}_2$ are nearly orthogonal).
\begin{figure}[t] \centering \includegraphics[width=0.99\linewidth]{Effect_of_STAR_RIS_Design} \caption{The effect of the STAR-RIS design on the 3D localization. } \label{Effect_of_STAR_RIS_Design} \vspace{-0.5cm} \end{figure} \section{Conclusion and Future Work} In this paper, we studied the fundamental 3D localization performance limits of STAR-RIS-aided mmWave MIMO systems for simultaneously serving one indoor MS and one outdoor MS. We also investigated the effect of energy splitting at the STAR-RIS and the power allocation between the two MSs to offer some useful insights for practical implementations. Moreover, the optimal design of the STAR-RIS reflection and refraction configuration matrices was derived by maximizing the principal angle of two associated subspaces. In the future, we will focus on the design of practical localization algorithms, which attain this paper's theoretical performance limits. We will also extend the CRLB analysis to general multipath scenarios and consider arbitrary orientations between the UPAs of the BS and the STAR-RIS. \bibliographystyle{IEEEtran}
\section{Introduction} \label{introduction} Community detection in networks is a fundamental problem in the area of graph mining and machine learning, with many interesting applications such as social networks, image segmentation, and biological networks (see, e.g., the survey by~\cite{fortunato2010community}). The main goal is to partition the network into communities that are ``well-connected''; no standard definition for communities exists, and a large number of methods have been proposed, e.g.,~\cite{Blondel2008FastUO,girvan2002community,holland1983stochastic}, but, in general, there is a limited theoretical basis for the performance of these methods. One exception is the stochastic block model (SBM)~\cite{holland1983stochastic}, which is a probabilistic generative model for generating networks with underlying communities, providing a rigorous framework for detection algorithms. In the simplest canonical form of an SBM, the $n$ vertices are partitioned into $r$ communities, and a pair of vertices connect with probability $p$ within communities and with probability $q$ across communities, where $p > q$. ``Recovering'' communities in a graph generated from an SBM (defined formally in Section~\ref{sec:preliminaries_and_problem_statement}) has been a very active area of research, e.g.,~\cite{condon2001algorithms, arias2014community, abbe2015exact, hajek2016achieving}. The exact conditions for recoverability are well understood in terms of the scaling of $p$ and $q$ (more specifically the difference between $p$ and $q$). In particular, in the dense regime (the focus of this paper), with $p = a \log(n)/n$ and $q = b \log(n)/n$, for some constants $a > b > 0$, it is known that exact recovery is possible \emph{if and only if} $\sqrt{a} - \sqrt{b} > \sqrt{r}$ (see~\cite{abbe2017community} for a comprehensive survey). 
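As an illustration of the model and threshold just stated, the sketch below samples a symmetric SBM with $p = a\log(n)/n$ and $q = b\log(n)/n$ and checks the exact-recovery condition $\sqrt{a}-\sqrt{b} > \sqrt{r}$ (the function names and the small problem size are our own, purely illustrative choices):

```python
import numpy as np

def sample_sbm(n, a, b, r=2, rng=None):
    """Sample a symmetric SBM with r equal-size communities (r must divide n),
    intra-community probability p = a*log(n)/n, inter-community q = b*log(n)/n."""
    rng = rng if rng is not None else np.random.default_rng()
    labels = np.repeat(np.arange(r), n // r)
    p, q = a * np.log(n) / n, b * np.log(n) / n
    probs = np.where(labels[:, None] == labels[None, :], p, q)
    U = np.triu(rng.random((n, n)) < probs, 1)  # sample the upper triangle only
    A = (U | U.T).astype(int)                   # symmetrize; diagonal stays zero
    return A, labels

def exactly_recoverable(a, b, r=2):
    """Information-theoretic exact-recovery condition in the dense regime."""
    return np.sqrt(a) - np.sqrt(b) > np.sqrt(r)
```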
Efficient algorithms for recovering communities have been developed using spectral methods and semi-definite programming (SDP) ~\cite{boppana1987eigenvalues, mcsherry2001spectral,abbe2015exact, massoulie2014community,gao2017achieving, hajek2016achieving, abbe2020entrywise, wang2020nearly}. \begin{table*}[t] \centering \begin{tabular}{| c |c | c | c | c | c |} \hline & MLE-Stability & SDP-Stability & Bayesian & Exponential & RR + SDP \\ \hline \hline $\epsilon$ & \small{$\mathcal{O}(1)$} & \small{$\mathcal{O}(1)$} & \small{$\Omega(\log(a/b))$} & \small{$\mathcal{O}(1)$} & \small{$\Omega(\log(n))$} \\ \hline $\delta$ & $1/n$ & $1/n$ & 0 & 0 & $0$ \\ \hline \small{$\sqrt{a} - \sqrt{b} \geq $} & \small{$\sqrt{2} \cdot \sqrt{1 + 1/\epsilon}$} & \small{$4\sqrt{2} \cdot {(1 + 1/\sqrt{\epsilon})}$} & Theorem \ref{thm:bayesian-privacy} & Theorem \ref{thm:utility_exponential} & Theorem \ref{thm:private_threshold_condition} \\ \hline Time complexity & \small{$\mathcal{O}(\exp(n))$} & \small{$n^{(\mathcal{O}(\log{(n)}))}$} & \small{$\mathcal{O}(\exp(n))$} & \small{$\mathcal{O}(\exp(n))$} & \small{$\mathcal{O}(\operatorname{poly}(n))$} \\ \hline \end{tabular} \vspace{-5pt} \caption{Summary of the recovery threshold(s), complexity and privacy for Differentially Private Community Detection Algorithms, for $r=2$.} \label{table:SBM_summary_results_approach2} \vspace{-10pt} \end{table*} In many applications, e.g., healthcare, social networks, and finance, network data is often private and sensitive, and there is a risk of revealing private information through adversarial queries. Differential Privacy (DP) \cite{dwork2014algorithmic} is the \textit{de facto} standard notion for providing rigorous privacy guarantees. DP ensures that each user's presence in the dataset has minimal statistical influence (measured by the privacy budget $\epsilon$) on the output of queries. 
Within the context of network/graph data, two privacy models have been considered---edge and node privacy---and DP algorithms have also been developed for a few network problems, e.g., counting subgraphs such as stars and triangles, cuts, dense subgraphs, communities, and releasing synthetic graphs~\cite{Kasiviswanathan:2013:AGN:2450206.2450232, blocki:itcs13, mulle2015privacy, nguyen2016detecting, qin2017generating, imola2021locally}; most of them focus on edge privacy models, especially when the output is not a count. Finally, there has been very little work on community detection with privacy. \cite{nguyen2016detecting} consider communities based on modularity. Very recently,~\cite{hehir2021consistency,ji2019differentially} considered community detection in SBM models subject to edge privacy constraints (also see the related work in Section \ref{sec:related_work}); however, neither provides rigorous bounds on the accuracy or on the impact of edge privacy on the recovery threshold. \vspace{-5pt} \subsection{Contributions} In this paper, \emph{we present the first differentially private algorithms for community detection in SBMs with rigorous bounds on recoverability}, under the edge privacy model. Informally, a community recovery algorithm satisfies edge privacy if the output has a similar distribution irrespective of the presence or absence of an edge between any two vertices in the network (see Definition \ref{def:edgeDP}). Edge DP is the most natural privacy notion for community detection, as it involves outputting the partition of the nodes into communities. Our focus is on characterizing the recoverability threshold under edge DP, i.e., how much the difference between $p$ and $q$ has to change in order to ensure recoverability with privacy. We analyze three classes of mechanisms for this problem. \noindent \emph{1.
Stability based mechanisms.} We show that the stability mechanism~\cite{thakurta2013differentially} gives $(\epsilon, \delta)$-DP algorithms for our problem. The main idea is to determine if a non-private community recovery estimator is stable with respect to the graph $G$, i.e., the estimate of the community structure does not change if a few edges are perturbed; if the estimator is stable, the non-private estimate of the community labels can be released; otherwise, we release a random label. We analyze the stability based mechanism for two estimators---the maximum likelihood estimator (MLE), which involves solving a min-bisection problem, and an SDP based estimator. We also derive sufficient conditions for exact recovery for $r =2$ and $r > 2$ communities for both these types of algorithms---these require a slightly larger separation between $p$ and $q$ as a function of the privacy budget $\epsilon$; further, the threshold converges to the well known non-private bound as $\epsilon$ becomes large. The SDP based stability mechanism can be implemented in quasi-polynomial time. Stability based mechanisms are less common in the DP literature, compared to other mechanisms, e.g., exponential or randomized response, since proving stability turns out to be very challenging, in general, and doing so is one of our important technical contributions. Stability of the MLE scheme requires showing that the optimal bisection does not change when $k=\mathcal{O}(\log{n})$ edges are perturbed, with high probability. This becomes even harder for the SDP based algorithm, which does not always produce an optimal solution. \cite{hajek2016achieving} construct a ``certificate'' for proving optimality of the SDP solution, with high probability. A technical contribution of ours is to identify a new condition that makes the certificate \emph{deterministic}---this is crucial in our stability analysis. \noindent \emph{2.
Sampling based mechanisms.} In the second approach, we design two different sampling based mechanisms: (1) Bayesian Estimation and (2) Exponential mechanism. We show that these algorithms are differentially private (with constant $\epsilon$ for Bayesian Estimation and arbitrary small $\epsilon$ for the Exponential mechanism) and guarantee exact recovery under certain regimes of $\epsilon, a, b$; note that, in contrast to the stability based mechanisms, we have $\delta=0$. \noindent \emph{3. Randomized Response (RR) based mechanism.} We also study and analyze a baseline approach, in which one can use a randomized response (RR) technique to perturb the adjacency matrix, and subsequently run an SDP based algorithm for community recovery on the perturbed graph. Due to the post-processing properties of DP, this mechanism satisfies $\epsilon$-DP for any $\epsilon>0$. We show that in contrast to stability and sampling based methods, the baseline RR approach requires $\epsilon= \Omega(\log(n))$ for exact recovery. \noindent \emph{4. Empirical evaluation.} We also present simulation results\footnote{Source code for experiments is available at: \href{https://www.dropbox.com/sh/8jqhdoouroib6m0/AACNhSQGsHvPNzLaaqQhQ95Ja?dl=0}{\textcolor{blue}{https://www.dropbox.com/sh/8jqhdoouroib6m0/AACNhSQGsHvPNzLaaqQhQ95Ja?dl=0}}} on both synthetic and real-world graphs to validate our theoretical findings (Section \ref{sec:empirical_results}). We observe that the stability based mechanism generally outperforms the others in terms of the error, which is quite small even for fairly small $\epsilon$. Interestingly, the error is low even in real world networks. \begin{figure}[!t] \centering \begin{minipage}{.4\textwidth} \centering \subcaptionbox{$\epsilon = 2$. (\textit{high} privacy regime)} {\includegraphics[width=\linewidth]{thresholds_a_b_all_mechanisms.pdf}} \end{minipage} \begin{minipage}{.4\textwidth} \centering \subcaptionbox{$\epsilon = 4$. 
(\textit{low} privacy regime)} {\includegraphics[width=\linewidth]{thresholds_a_b_all_mechanisms_modified.pdf}} \end{minipage} \caption{Exact Recovery Threshold as a function of $(a, b)$, and the privacy budget $\epsilon$ for $r = 2$ communities.} \label{fig:plot_of_thresholds_private} \vspace{-15pt} \end{figure} \noindent \emph{Comparison between different mechanisms.} We summarize our theoretical results for differentially private community recovery in Table \ref{table:SBM_summary_results_approach2}, which shows the tradeoffs between $(a, b)$, $(\epsilon, \delta)$ as well as the computational complexity of the mechanisms for $r=2$ communities. Note that none of the mechanisms is redundant---each is the best in some part of the complex space consisting of the parameters $a, b, \epsilon, \delta$, and the running time. To further illustrate these tradeoffs, we plot the recovery threshold conditions for these mechanisms in Fig. \ref{fig:plot_of_thresholds_private}. From Fig. \ref{fig:plot_of_thresholds_private}(a), we observe that in the high privacy regime (smaller $\epsilon$), the MLE based stability mechanism requires the least separation between $a$ and $b$ for exact recovery among all the algorithms. In the low privacy regime (larger $\epsilon$), as shown in Fig. \ref{fig:plot_of_thresholds_private}(b), the threshold of the exponential mechanism nearly coincides with the non-private recovery threshold \cite{abbe2015exact}, whereas the stability based and RR based mechanisms require more separation between $a$ and $b$. Complete proofs are presented in the Appendix. \vspace{-5pt} \subsection{Related Work}\label{sec:related_work} We first summarize a few of the main results on the complexity of different recovery algorithms and then discuss some relevant work on SBMs with DP.
The seminal work of \cite{abbe2015exact} showed that the optimal reconstruction of graph partitions is achieved by the maximum likelihood (ML) estimator, which is computationally intractable. \cite{boppana1987eigenvalues, mcsherry2001spectral} designed polynomial time algorithms for exact recovery; however, they did not achieve the optimal information theoretic bound, i.e., $\sqrt{a} - \sqrt{b} > \sqrt{r}$. \cite{abbe2015exact} gave the first computationally efficient algorithm that achieves the information theoretic limit. This algorithm has two phases: the first phase performs partial recovery via the algorithm of \cite{massoulie2014community}; the second phase uses a local improvement to refine the recovery. \cite{hajek2016achieving} showed that an SDP based rounding algorithm achieves the optimal recovery threshold in polynomial time, and settled the conjecture of \cite{abbe2015exact}. Since then, several computationally efficient recovery algorithms \cite{gao2017achieving, hajek2016achieving, abbe2020entrywise, wang2020nearly} have been proposed that achieve the optimal recovery threshold in polynomial or quasi-linear time for different settings, e.g., multiple communities with different sizes. As mentioned earlier, there has been little work on community detection with differential privacy. \cite{nguyen2016detecting} consider the problem of finding communities by modularity maximization. \cite{qin2017generating} design heuristics for models which are related to the SBM. Other related work is on estimating parameters of graphons, which are generalizations of SBMs. \cite{borgs2015private} developed an exponential time algorithm for estimating properties in the node DP model, and derived optimal information theoretic error bounds. \cite{sealfon2019efficiently} improved this and designed a polynomial time algorithm.
\cite{hehir2021consistency} study the problem of privacy-preserving community detection on SBMs using a simple spectral method \cite{lei2015consistency} for multiple communities. They generalized the convergence rate analysis of the spectral algorithm and showed the impact of the privacy parameters on the misclassification rate between the ground truth labels and the estimated labels. \cite{ji2019differentially} propose a DP gradient based community detection algorithm. However, neither of these works analyzes the thresholds for recoverability, which has remained an open problem (under edge DP constraints) till now. \vspace{-5pt} \section{Problem Statement \& Preliminaries} \label{sec:preliminaries_and_problem_statement} We consider an undirected graph $G = (\mathcal{V}, E)$ consisting of $n$ vertices, which are divided into $r$ communities with $\frac{n}{r}$ vertices in each community. The community label for vertex $i$ is denoted by $\sigma^{*}_{i} \in \{1, 2, \cdots, r\}, \forall i \in [n]$. We focus on the setting where the graph $G$ is generated through a stochastic block model (SBM), in which the edges within the communities are generated independently with probability $p$ and the edges between the communities are generated independently with probability $q$. More specifically, the connections between vertices are represented by an adjacency matrix $\mathbf{A} \in \{0, 1\}^{n \times n}$, where the elements in $\mathbf{A}$ are drawn as follows: \begin{align} A_{i,j} \sim \begin{cases} \operatorname{Bern}(p), & i < j, ~~\sigma^{*}_{i} = \sigma^{*}_{j}, \\ \operatorname{Bern}(q), & i < j, ~~\sigma^{*}_{i} \neq \sigma^{*}_{j}, \end{cases} \end{align} with $A_{i,i}=0$ and $A_{i,j}=A_{j,i}$. For the scope of this paper, we focus on the so-called ``dense'' connectivity regime, where $p = \frac{a \log(n)}{n}$ and $q = \frac{b \log(n)}{n}$, and $a, b\geq 0$ are fixed constants.
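For concreteness, the generative model above can be sketched in a few lines of NumPy; the function name \texttt{sample\_sbm} and its interface are illustrative choices of ours, not part of any released code for this paper.

```python
import numpy as np

def sample_sbm(n, r, a, b, rng=None):
    """Sample an SBM adjacency matrix with r equal-sized communities,
    p = a*log(n)/n within and q = b*log(n)/n across communities."""
    rng = np.random.default_rng(rng)
    p = a * np.log(n) / n
    q = b * np.log(n) / n
    # Ground-truth labels: community i gets n/r consecutive vertices.
    sigma = np.repeat(np.arange(r), n // r)
    same = sigma[:, None] == sigma[None, :]
    probs = np.where(same, p, q)
    # Draw the strict upper triangle independently, then symmetrize;
    # this enforces A_{ii} = 0 and A_{ij} = A_{ji}.
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    A = (upper | upper.T).astype(int)
    return A, sigma
```

For example, $r=2$ with $a=5$, $b=1$ lies inside the non-private exact recovery regime, since $\sqrt{5}-\sqrt{1}>\sqrt{2}$.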
Note that one can consider other regimes for $p$ and $q$, such as the ``sparse'' regime \cite{decelle2011asymptotic}, i.e., $ p = \frac{a}{n}$ and $q = \frac{b}{n}$; however, exact recovery is not possible in this regime since the graph is disconnected with high probability. In the dense regime, on the other hand, one can still exactly recover the labels of the graph with high probability. The goal of the community detection problem is to design a (stochastic) estimator $\hat{\bm{\sigma}}: \mathbf{A} \rightarrow \{1, 2, \cdots, r\}^{n}$ for community recovery (i.e., recovering the true label vector $\bm{\sigma}^{*} = \{{\sigma}^{*}_{1}, {\sigma}^{*}_{2}, \cdots, {\sigma}^{*}_{n}\}$) upon observing the adjacency matrix. We next define the notion of exact asymptotic recovery as a measure of performance of an estimator. \begin{definition} [Exact Recovery] An estimator $\hat{\bm{\sigma}} = \{\hat{\sigma}_{1}, \hat{\sigma}_{2}, \cdots, \hat{\sigma}_{n}\}$ satisfies exact recovery (up to a global permutation of the community labels) if the probability of error behaves as \begin{align} \operatorname{Pr}(\hat{\bm{\sigma}} \neq \bm{\sigma}^{*} ) = o(1), \end{align} where the probability is taken over both the randomness of the graph $G$ as well as the stochastic estimation process. \end{definition} In addition to exact recovery, we require that the recovery algorithm for community detection also protects the individual relationships (i.e., the edges in the graph $G$) in the network. Specifically, we adopt the notion of $(\epsilon, \delta)$-edge differential privacy (DP) \cite{karwa2011private}, defined next.
\begin{definition} \label{def:edgeDP} [$(\epsilon, \delta)$-edge DP] An estimator $\hat{\bm{\sigma}}$ satisfies $(\epsilon, \delta)$-edge DP for some $\epsilon \in \mathds{R}^{+}$ and $\delta \in [0, 1]$, if for any pair of adjacency matrices $\mathbf{A}$ and $\mathbf{A}'$ that differ in one edge, we have \begin{align}\label{eq:edgeDP} \operatorname{Pr}(\hat{\bm{\sigma}}(\mathbf{A}) = \bm{\sigma}) \leq e^{\epsilon} \operatorname{Pr}(\hat{\bm{\sigma}}(\mathbf{A}') = \bm{\sigma}) + \delta. \end{align} For the privacy constraint in \eqref{eq:edgeDP}, the probabilities are computed only over the randomness in the estimation process. The case of $\delta = 0$ is called pure $\epsilon$-edge DP. \end{definition} \subsection{Prior results on exact recovery without privacy} The optimal maximum likelihood (ML) estimator for community detection, given by $\hat{\bm{\sigma}}_{\text{ML}} = \arg \max_{\bm{\sigma}} p(\mathbf{A}|\bm{\sigma})$, has been analyzed in a series of papers \cite{boppana1987eigenvalues, mcsherry2001spectral, choi2012stochastic, abbe2015exact, mossel2015consistency}. It has been shown that for SBMs in the ``dense'' regime, i.e., $p = \frac{a \log(n)}{n}$ and $q = \frac{b \log(n)}{n}$, exact recovery is possible if and only if $\sqrt{a} - \sqrt{b} > \sqrt{r}$. This condition is often referred to as the phase transition boundary or exact recovery threshold. Even for $r=2$ communities, the ML estimator is equivalent to finding the minimum bisection of the graph, which is known to be NP-hard \cite{abbe2015exact}. Specifically, the ML estimator of $\bm{\sigma}^{*}$ is the solution of the following optimization problem: \begin{align} \hat{\bm{\sigma}}_{\text{ML}} = \arg \max_{\bm{\sigma}} \{\bm{\sigma}^{T}\mathbf{A} \bm{\sigma}: \mathbf{1}^{T} \bm{\sigma} =0, \sigma_{i} = \pm 1\}. \end{align} Subsequently, several works have studied whether polynomial time algorithms can still achieve the exact recovery threshold.
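To make the min-bisection formulation concrete, the following is a brute-force sketch of the ML estimator for $r=2$ that enumerates all balanced $\pm 1$ labellings and maximizes $\bm{\sigma}^{T}\mathbf{A}\bm{\sigma}$. It is our own illustration, feasible only for very small $n$; it is not a practical algorithm.

```python
import itertools
import numpy as np

def ml_estimate(A):
    """Brute-force ML estimator for r = 2 equal-sized communities:
    maximize sigma^T A sigma over balanced +/-1 label vectors
    (equivalent to finding a minimum bisection). Exponential in n."""
    n = A.shape[0]
    best_val, best_sigma = -np.inf, None
    # Fix vertex 0 in community +1 to quotient out the global label flip.
    for plus in itertools.combinations(range(1, n), n // 2 - 1):
        sigma = -np.ones(n)
        sigma[0] = 1.0
        sigma[list(plus)] = 1.0
        val = sigma @ A @ sigma
        if val > best_val:
            best_val, best_sigma = val, sigma
    return best_sigma
```

On a toy graph consisting of two disjoint triangles, the estimator recovers the planted bisection exactly.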
For instance, it has been shown \cite{hajek2016achieving}, \cite{hajek2016achieving_extensions} that an SDP relaxation of the ML estimator can also achieve the same recovery threshold. Recently, Abbe et al. \cite{abbe2020entrywise} have analyzed the spectral clustering estimator \cite{lei2015consistency}, and showed that it achieves the same recovery threshold as ML for $r = 2$. \vspace{-5pt} \section{Main Results \& Discussions} \label{sec:main_results} In this section, we present three different approaches for the design of community detection algorithms that achieve exact recovery while satisfying edge differential privacy. In the first approach, we analyze the stability property of the ML based and SDP based algorithms. For the MLE based algorithm, the stability of the min-bisection hinges on the concentration properties of SBMs in terms of the intra- and inter-community edges. For the SDP based algorithm, we introduce a concept of concentration that both (1) provides sufficient conditions for the dual certificate of the SDP and (2) persists under certain degrees of connection perturbation. In the second approach, we study and analyze sampling based mechanisms, which release a differentially private estimate of the community labels via sampling. In the third approach, we perturb the adjacency matrix $\mathbf{A}$ to satisfy DP (using randomized response (RR)), and estimate the community labels on the perturbed graph using the computationally efficient SDP relaxation of the maximum-likelihood estimator. In Table \ref{table:SBM_summary_results_approach2}, we summarize our main results for the case of $r=2$ communities. Specifically, we show the constraints on the privacy budget $(\epsilon, \delta)$ and sufficient conditions on $(a,b)$ for exact recovery for the proposed mechanisms.
\vspace{-5pt} \subsection{Stability-based Mechanisms} The basic idea behind stability based mechanisms is as follows: Consider a non-private estimator for community detection $\hat{\bm{\sigma}}$. We first \emph{privately} compute the stability of this estimator with respect to a graph $G$, which is essentially the \emph{minimum} number of edge modifications on $G$ needed so that the estimator output on the modified graph $G'$ differs from that on $G$, i.e., $\hat{\bm{\sigma}}(G)\neq \hat{\bm{\sigma}}(G')$. If the graph $G$ is stable enough (i.e., if the estimate of the stability is larger than a threshold, which depends on $(\epsilon, \delta)$), then we release the non-private estimate $\hat{\bm{\sigma}}(G)$; otherwise we release a random label vector. The key intuition is that from the output of a stable estimator, one cannot precisely infer the presence or absence of a single edge (thereby providing an edge DP guarantee). Before presenting the general stability mechanism, we formally define $d_{\hat{\bm{\sigma}}}(G)$, which quantifies the stability of an estimator $\hat{\bm{\sigma}}$ with respect to a graph $G$. \begin{definition} [Stability of $\hat{\bm{\sigma}}$] The stability of an estimator $\hat{\bm{\sigma}}$ with respect to a graph $G$ is defined as follows: \begin{align} \label{eqn:stab-sigma} d_{\hat{\bm{\sigma}}}(G) = \min\{k: \exists G', \text{dist}(G, G')\leq k, \ \hat{\bm{\sigma}}(G)\neq \hat{\bm{\sigma}}(G')\}. \end{align} \end{definition} We now present the general stability based mechanism in Algorithm $1$. \begin{algorithm} \caption{$\mathcal{M}^{\hat{\bm{\sigma}}}_{\operatorname{Stability}}(G)$: Stability Based Mechanism} \label{algo:stability} \begin{algorithmic}[1] \STATE {\bfseries Input:} $G(\mathcal{V}, E) \in \mathcal{G}$ \STATE {\bfseries Output:} labelling vector $\hat{\bm{\sigma}}_{\text{Private}}$.
\STATE $d_{\hat{\bm{\sigma}}}(G) \leftarrow$ stability of $\hat{\bm{\sigma}}$ with respect to graph $G$ \STATE $\tilde{d} \leftarrow d_{\hat{\bm{\sigma}}}(G) + \operatorname{Lap}(1/\epsilon)$ \IF{$\tilde{d} > \frac{\log{1/\delta}}{\epsilon}$} \STATE Output $\hat{\bm{\sigma}}(G)$ \ELSE \STATE Output $\perp$ (random label) \ENDIF \end{algorithmic} \end{algorithm} We first state the following claim about the privacy guarantee of the above mechanism \cite{dwork2014algorithmic}. \begin{lemma}\label{thm:privacy_guarantee_stability} For any community detection algorithm $\hat{\bm{\sigma}}$, $\mathcal{M}^{\hat{\bm{\sigma}}}_{\operatorname{Stability}}(G)$ satisfies $(\epsilon, \delta)$-edge DP. \end{lemma} The proof of Lemma \ref{thm:privacy_guarantee_stability} is presented in the Appendix. In the above algorithm, Step $4$ ensures that the stability is computed privately, and Step $5$ ensures that the non-private estimate is released only if the estimator is stable enough (i.e., $\tilde{d} > \frac{\log{1/\delta}}{\epsilon}$). Our first main contribution is to analyze the performance of $\mathcal{M}^{\hat{\bm{\sigma}}}_{\operatorname{Stability}}(G)$ and establish sharp phase transition thresholds for exact recovery as a function of $(p, q)$ and $(\epsilon, \delta)$. Specifically, we focus on two possible choices for $\hat{\bm{\sigma}}$: a) the MLE estimator, i.e., $\hat{\bm{\sigma}}=\hat{\bm{\sigma}}_{\text{MLE}}$, and b) the computationally efficient SDP relaxation, i.e., $\hat{\bm{\sigma}}=\hat{\bm{\sigma}}_{\text{SDP}}$. \noindent \textbf{\emph{Stability of MLE.}} We first present the results for the MLE based approach, for $r=2$ communities and then for $r>2$ communities. The proofs of all the theorems are presented in the Appendix.
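The noisy-threshold release step of Algorithm 1 can be sketched as follows, assuming the stability $d_{\hat{\bm{\sigma}}}(G)$ has already been computed. The helper \texttt{stability\_release} is a hypothetical illustration of ours; it deliberately sidesteps the expensive stability computation.

```python
import numpy as np

def stability_release(d_stable, sigma_hat, n, eps, delta, rng=None):
    """Release step of the stability mechanism: add Lap(1/eps) noise
    to the (pre-computed) stability d_stable and release the
    non-private estimate only if the noisy stability exceeds
    log(1/delta)/eps; otherwise output a random labelling."""
    rng = np.random.default_rng(rng)
    d_tilde = d_stable + rng.laplace(scale=1.0 / eps)
    if d_tilde > np.log(1.0 / delta) / eps:
        return sigma_hat
    # Otherwise output a uniformly random balanced labelling
    # (playing the role of the symbol "bottom" in Algorithm 1).
    return rng.permutation(np.repeat([1, -1], n // 2))
```

The $(\epsilon, \delta)$-DP guarantee comes from the Laplace noise on the stability estimate together with the threshold test, per Lemma \ref{thm:privacy_guarantee_stability}.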
\begin{theorem}\label{thm:error_stability_mechanism_two} For $r=2$ communities, $\mathcal{M}^{\text{MLE}}_{\operatorname{Stability}}(G)$ satisfies exact recovery if \begin{align} \label{eqn:mle-2comm} \sqrt{a} - \sqrt{b} > \sqrt{2} \times \sqrt{1 + \frac{1}{\epsilon}} \end{align} for any $\epsilon>0$ and $\delta \geq 1/n$. \end{theorem} We note two important points: \emph{(1) In contrast to the non-private recovery threshold $\sqrt{a} - \sqrt{b} > \sqrt{2}$, the impact of edge DP shows up explicitly in the threshold condition; (2) As we relax the privacy budget, namely as $\epsilon\rightarrow \infty$, the privacy constrained threshold converges to the non-private threshold.} We next generalize our results to $r>2$ equal sized communities and present a sufficient condition on $a$ and $b$ for exact recovery. \begin{theorem}\label{thm:error_stability_mechanism_general} For $r>2$ communities, $\mathcal{M}^{\text{MLE}}_{\operatorname{Stability}}(G)$ satisfies exact recovery if \begin{align} \label{eqn:mle-rcomm} \sqrt{a} - \sqrt{b} > \sqrt{r} \times \sqrt{ 1 + \frac{1}{\epsilon} \times \bigg(2 + \log \bigg(\frac{a}{b} \bigg)\bigg)} \end{align} for any $\epsilon>0$ and $\delta \geq 1/n$. \end{theorem} The result for $r>2$ communities is slightly weaker than that for the $r=2$ case. However, \emph{it still converges to the non-private optimal threshold $(\sqrt{a} - \sqrt{b} > \sqrt{r})$ as the privacy budget $\epsilon\rightarrow \infty$}. \emph{Main ideas behind the proofs of Theorems \ref{thm:error_stability_mechanism_two} and \ref{thm:error_stability_mechanism_general} and intuition behind the private recovery threshold:} Analyzing the error probability of the stability based mechanism for the SBM is highly non-trivial.
Specifically, two types of error events occur in this mechanism when estimating the true labels $\bm{\sigma}^{*}$: $(1)$ When the stability mechanism outputs the ML estimate $\hat{\bm{\sigma}}_{\text{MLE}}$, we are interested in bounding $\operatorname{Pr}(\hat{\bm{\sigma}}_{\text{MLE}} \neq \bm{\sigma}^{*})$. This error probability can be analyzed using existing results on exact recovery \cite{abbe2015exact}, and the error vanishes as $o(1)$ if $\sqrt{a} - \sqrt{b} > \sqrt{r}$. $(2)$ The second source of error is when the mechanism outputs a random label $\perp$, whose probability is bounded by $\operatorname{Pr}(\tilde{d} \leq \frac{\log{1/\delta}}{\epsilon})$. The key technical challenge arises in the analysis of this probability. Specifically, we show that when the graph $G$ is drawn from an SBM, the ML estimator is $\mathcal{O}(\log(n))$-stable with high probability. By leveraging this result, we bound the probability $\operatorname{Pr}(\tilde{d} \leq \frac{\log{1/\delta}}{\epsilon})$, and in order to make this probability decay as $o(1)$ for exact recovery, we obtain the sufficient conditions on $(a, b)$ presented in Theorems \ref{thm:error_stability_mechanism_two} and \ref{thm:error_stability_mechanism_general}. \textbf{\emph{Stability of SDP relaxation.}} We show that the SDP relaxation (SDP for short) method also has the stability property, i.e., a graph $G$ generated by an SBM is $\Omega(\log{n})$-stable with respect to the SDP with high probability, which gives us the following result for $r\geq 2$ communities. \begin{theorem} \label{theorem:sdp-rcomm} For $r\geq 2$ communities, $\mathcal{M}^{\text{SDP}}_{\operatorname{Stability}}(G)$ satisfies exact recovery if \begin{align} \label{eqn:sdp-rcomm} \sqrt{a} - \sqrt{b} > \sqrt{r}\times 4 \left( 1 + \frac{1}{\sqrt{\epsilon}} \right) \end{align} for any $\epsilon >0$ and $\delta \geq 1/n$.
\end{theorem} In contrast with the threshold condition (\ref{eqn:mle-rcomm}), we have a larger constant in (\ref{eqn:sdp-rcomm}) for $\mathcal{M}^{\text{SDP}}_{\operatorname{Stability}}(G)$, arising out of the concentration bounds for the SDP relaxation algorithm. \noindent \emph{Main ideas in the proof of Theorem~\ref{theorem:sdp-rcomm}.} The proof of the stability of $\mathcal{M}^{\text{SDP}}_{\operatorname{Stability}}(G)$ is more involved than that for the MLE, because the SDP admits the ground truth label as its optimal solution only in some regimes; further, arguing about the stability of a solution $\hat{\bm{\sigma}}=\hat{\bm{\sigma}}_{\text{SDP}}$ is not easy (since it may not be the min bisection). \cite{hajek2016achieving} design a sophisticated ``certificate'' for proving that the SDP solution is indeed optimal, and show that the certificate holds with high probability when $\sqrt{a} - \sqrt{b} > \sqrt{r}$ (note that this certificate is much more complex than the primal-dual based certificate used earlier for $r=2$ communities~\cite{lecture7}). The high probability bound for the certificate is unfortunately not sufficient, since we need to argue about the stability for a graph $G$ generated from the SBM deterministically, and there are $n^{\mathcal{O}(\log{n})}$ graphs within distance $\mathcal{O}(\log{n})$ of $G$. Specifically, the high probability bound for the certificate does not hold after flipping $\Omega(\log{n})$ connections, which is required to maintain the stability of the optimal solution. Instead, we define a notion of ``concentration'', and show that if a graph is concentrated, then $\text{SDP}(G)$ is optimal at the ground truth label; \emph{note that this holds deterministically, not with high probability}. We then use this notion of concentration to establish stability, by showing that all graphs within $\mathcal{O}(\log{n})$ distance of $G$ are also concentrated.
Finally, we derive a lower bound on $\sqrt{a} - \sqrt{b}$ that is both (1) sufficient for concentration and (2) able to preserve concentration after flipping up to $\Omega(\log{n})$ connections. \noindent \emph{Complexity of Stability Based Mechanisms.} A naive implementation of $\mathcal{M}^{\hat{\bm{\sigma}}}_{\operatorname{Stability}}(G)$, which involves computing $d_{\hat{\bm{\sigma}}}(G)$ in Step 3 using (\ref{eqn:stab-sigma}), requires computing $\hat{\bm{\sigma}}(G')$ for all graphs $G'$. It can be shown that the algorithm works if we use $\min\{d_{\hat{\bm{\sigma}}}(G), \mathcal{O}(\log{n})\}$ instead of $d_{\hat{\bm{\sigma}}}(G)$, for which it suffices to compute $\hat{\bm{\sigma}}(G')$ only for those graphs $G'$ with $d(G, G')=\mathcal{O}(\log{n})$. The MLE algorithm takes exponential time, so $\mathcal{M}^{\text{MLE}}_{\operatorname{Stability}}(G)$ still takes exponential time; however, $\mathcal{M}^{\text{SDP}}_{\operatorname{Stability}}(G)$ can be implemented in quasi-polynomial time, i.e., $n^{\mathcal{O}(\log{n})}$, using the above observation. \subsection{Sampling Mechanisms} We present two sampling based approaches for private community detection. In the first approach, \emph{Bayesian Sampling}, presented in Algorithm \ref{algo:bayesian}, we compute the posterior probability of label vectors given the graph $G$ and release a label estimate by sampling from this posterior distribution. \begin{algorithm} \caption{$\mathcal{M}_{\text{Bayesian}}(G)$: Bayesian Sampling Mechanism } \begin{algorithmic}[1] \STATE {\bfseries Input:} $G(\mathcal{V}, E) \in \mathcal{G}$ \STATE {\bfseries Output:} A labelling vector $\hat{\bm{\sigma}} \in \mathcal{L}$.
\STATE For every $\bm{\sigma} \in{\mathcal{L}}$, calculate $p(\bm{\sigma}|G) = \frac{p(\bm{\sigma}) \times p(G|\bm{\sigma})}{p(G)}$ \STATE Sample and output a labelling $\hat{\bm{\sigma}}\in{\mathcal{L}}$ with probability $\Pr(\hat{\bm{\sigma}}|G)$ \label{algo:bayesian} \end{algorithmic} \end{algorithm} Surprisingly, we show that this mechanism satisfies pure $\epsilon$-edge DP whenever $\epsilon$ is larger than a threshold, namely, $\epsilon\geq \log(a/b)$. This is in contrast with the stability mechanisms, which achieve approximate $(\epsilon, \delta)$-edge DP for any $\epsilon>0$ but require $\delta\geq 1/n$. Our main result for the Bayesian mechanism is stated in the following theorem, along with the corresponding recovery threshold. \begin{theorem} \label{thm:bayesian-privacy} The mechanism $\mathcal{M}_{\text{Bayesian}}$(G) satisfies $\epsilon$-edge DP, $\forall \epsilon \geq \epsilon_0 = \log \big(\frac{a}{b}\big)$, and for $r=2$ communities, satisfies exact recovery if \begin{align} \sqrt{a} - \sqrt{b} & > \max \left[ \sqrt{2}, \frac{{2}}{(\sqrt{2} - 1)(1- e^{- \epsilon_{0}})} \right]. \end{align} \end{theorem} Although the Bayesian mechanism provides pure edge DP, one disadvantage is that it requires the knowledge of $(a, b)$ for computing the posterior distribution. To this end, we present and analyze the exponential sampling mechanism in Algorithm \ref{algo:expo_1}, where we sample from a distribution over the labels which can be computed directly from the graph and does not require the knowledge of $(a,b)$. Specifically, for any label vector $\bm{\sigma}$ (partition of the graph into two communities), the score $\text{score}(\bm{\sigma}) = {E}_{\text{inter}}(G, \bm{\sigma})$ is defined as the number of cross-community edges in the partition $\bm{\sigma}$; the corresponding sampling probability is computed as a function of this score and the privacy budget.
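The score-based sampling just described can be sketched as follows for $r=2$; this brute-force enumeration of balanced labellings is our own illustration (it takes exponential time, and we normalize the weights $\exp(-\epsilon\cdot\text{score}(\bm{\sigma}))$ explicitly into a probability distribution).

```python
import itertools
import numpy as np

def exponential_mechanism(A, eps, rng=None):
    """Sample a balanced 2-community labelling with probability
    proportional to exp(-eps * #inter-community edges).
    Exponential time: enumerates all balanced labellings."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    labellings, scores = [], []
    # Fix vertex 0 in community +1 to quotient out the global label flip.
    for plus in itertools.combinations(range(1, n), n // 2 - 1):
        sigma = -np.ones(n, dtype=int)
        sigma[0] = 1
        sigma[list(plus)] = 1
        # score = number of edges crossing the partition
        cross = sigma[:, None] != sigma[None, :]
        scores.append(np.triu(A * cross, k=1).sum())
        labellings.append(sigma)
    scores = np.array(scores, dtype=float)
    w = np.exp(-eps * (scores - scores.min()))  # shift for numerical stability
    return labellings[rng.choice(len(labellings), p=w / w.sum())]
```

Subtracting the minimum score leaves the normalized distribution unchanged while avoiding underflow.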
\begin{algorithm} \caption{$\mathcal{M}_{\text{Expo.}}(G)$: Exponential Mechanism } \begin{algorithmic}[1] \STATE {\bfseries Input:} $G(\mathcal{V}, E) \in \mathcal{G}$ \STATE {\bfseries Output:} A labelling vector $\hat{\bm{\sigma}} \in \mathcal{L}$. \STATE For every $\bm{\sigma} \in \mathcal{L}$, calculate $\text{score}(\bm{\sigma}) = {E}_{\text{inter}}(G, \bm{\sigma})$ \STATE Sample and output a labelling $\hat{\bm{\sigma}}\in \mathcal{L}$ with probability proportional to $\exp(-\epsilon\times \text{score}(\bm{\sigma}))$ \label{algo:expo_1} \end{algorithmic} \end{algorithm} \begin{theorem}\label{thm:utility_exponential} The exponential sampling mechanism $\mathcal{M}_{\text{Expo.}}(G)$ satisfies $\epsilon$-edge DP and for $r=2$ communities, performs exact recovery if \begin{align} \sqrt{a} - \sqrt{b} & > \max \left[\sqrt{2}, \frac{{2}}{(\sqrt{2} - 1) \epsilon} \right]. \end{align} \end{theorem} \noindent \emph{Complexity and comparison with stability based mechanisms.} A key advantage of the sampling based mechanisms over stability based mechanisms is that they give $\epsilon$-DP solutions. However, implementing the sampling step in these mechanisms takes exponential time, as no efficient algorithm is known for sampling $\bm{\sigma}$ with probability depending on its utility. \subsection{Graph Perturbation Mechanisms} In this section, we present and analyze a randomized response (RR) based mechanism for private community detection. The basic idea is to perturb the edges of the random graph (i.e., the entries of the adjacency matrix $\mathbf{A}$), where each element $A_{i,j}$ is perturbed independently to satisfy $\epsilon$-edge DP. For a graph with adjacency matrix $\mathbf{A}$, the perturbed matrix is denoted by $\tilde{\mathbf{A}}$, where $\mu = \operatorname{Pr}(\tilde{A}_{i,j} = 1 | A_{i,j} = 0) = \operatorname{Pr}(\tilde{A}_{i,j} = 0 | A_{i,j} = 1) $. By picking $\mu = \frac{1}{e^{\epsilon} + 1}$, it can be readily shown that the mechanism satisfies $\epsilon$-edge DP.
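The edge perturbation step with flip probability $\mu = \frac{1}{e^{\epsilon}+1}$ can be sketched as follows; \texttt{randomized\_response} is an illustrative name of ours, and only the strict upper triangle is flipped so that the output remains a valid symmetric adjacency matrix.

```python
import numpy as np

def randomized_response(A, eps, rng=None):
    """Flip each upper-triangular entry of the adjacency matrix
    independently with probability mu = 1/(exp(eps)+1), then
    symmetrize. Any post-processing of the output (e.g., running
    SDP recovery on it) preserves the eps-edge DP guarantee."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    mu = 1.0 / (np.exp(eps) + 1.0)
    flips = np.triu(rng.random((n, n)) < mu, k=1)
    upper = np.triu(A, k=1).astype(bool) ^ flips  # XOR flips the bit
    return (upper | upper.T).astype(int)
```

As $\epsilon \rightarrow \infty$, $\mu \rightarrow 0$ and the graph is released essentially unperturbed; as $\epsilon \rightarrow 0$, $\mu \rightarrow 1/2$ and the output approaches a uniformly random graph.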
One can then apply any community recovery algorithm (MLE, SDP or spectral methods) on the perturbed matrix $\tilde{\mathbf{A}}$. This mechanism is presented in Algorithm \ref{algo:randomized_response}. \begin{algorithm} \caption{$\mathcal{M}^{\hat{\bm{\sigma}}}_{\text{RR}}(G)$: Graph Perturbation Mechanism via Randomized Response } \begin{algorithmic}[1] \STATE {\bfseries Input:} $G(\mathcal{V}, E) \in \mathcal{G}$ \STATE {\bfseries Output:} A labelling vector $\hat{\bm{\sigma}} \in \mathcal{L}$. \STATE Perturb $\mathbf{A} \rightarrow \tilde{\mathbf{A}} $ via randomized response mechanism \STATE Apply community detection algorithm on $\tilde{\mathbf{A}}$ \STATE Output $\hat{\bm{\sigma}}(\tilde{\mathbf{A}})$ \label{algo:randomized_response} \end{algorithmic} \end{algorithm} From the perspective of computational complexity, this algorithm is faster than the stability and sampling based approaches. However, in the next theorem, we state our main result, which shows that the RR based mechanism requires $\epsilon = \Omega (\log(n))$ for exact recovery, i.e., the privacy leakage must grow with $n$. \begin{theorem} \label{thm:private_threshold_condition} The mechanism $\mathcal{M}^{\text{SDP}}_{\operatorname{RR}}(G)$ satisfies $\epsilon$-edge DP, $\forall \epsilon \geq \epsilon_{n} = \Omega(\log(n))$, and for $r=2$ communities, satisfies exact recovery if \begin{align} \sqrt{a} - \sqrt{b} > \sqrt{2} \times \frac{\sqrt{e^{\epsilon} + 1}}{\sqrt{e^{\epsilon} - 1}} + \frac{1}{\sqrt{e^{\epsilon} - 1}}. \end{align} \end{theorem} \noindent The proof is presented in the Appendix. In order to understand the intuition behind the larger privacy leakage required by the RR mechanism for exact recovery, it is instructive to consider the statistics of the perturbed adjacency matrix $\tilde{\mathbf{A}}$ as a function of $\epsilon$.
Specifically, the perturbed elements of the adjacency matrix $\tilde{\mathbf{A}}$ are distributed as follows: \begin{align} \tilde{A}_{i,j} \sim \begin{cases} \operatorname{Bern}(\tilde{p}), & i < j, \sigma^{*}_{i} = \sigma^{*}_{j}, \\ \operatorname{Bern}(\tilde{q}), & i < j, \sigma^{*}_{i} \neq \sigma^{*}_{j}, \end{cases} \end{align} where \begin{align} \tilde{p} & = \underbrace{\left[\frac{n}{(e^{\epsilon} + 1) \times \log(n)} + \frac{e^{\epsilon} -1}{e^{\epsilon} + 1} \times a \right]}_{a_{n}} \times \frac{\log (n)}{n} \nonumber \\ \tilde{q} & = \underbrace{\left[\frac{n}{(e^{\epsilon} + 1) \times \log(n)} + \frac{e^{\epsilon} -1}{e^{\epsilon} + 1} \times b \right]}_{b_{n}} \times \frac{\log (n)}{n}. \end{align} Note that $\tilde{p}$ and $\tilde{q}$ are the intra- and inter-community connection probabilities for the perturbed matrix. From the above, we note that if $\epsilon$ is chosen as a constant, then as $n$ grows, both $a_{n}$ and $b_{n}$ are dominated by the noise term $\frac{n}{(e^{\epsilon} + 1) \log(n)}$, so that $\tilde{p}/\tilde{q} \rightarrow 1$; i.e., if we insist on a constant $\epsilon$, then asymptotically the statistics of the inter- and intra-community edges become indistinguishable and exact recovery is impossible. The result of Theorem \ref{thm:private_threshold_condition} shows that one can indeed get exact recovery by allowing the leakage to grow logarithmically with $n$.
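The expressions for $a_{n}$ and $b_{n}$ above can be checked numerically; the sketch below (with an illustrative function name of ours) makes the key point visible: the gap $a_{n}-b_{n}$ does not depend on $n$, while both terms grow like $n/\log n$ for fixed $\epsilon$, so the effective separation $\sqrt{a_{n}}-\sqrt{b_{n}}$ vanishes.

```python
import numpy as np

def effective_params(a, b, eps, n):
    """Effective (a_n, b_n) after randomized response:
    a_n = n / ((e^eps + 1) log n) + a (e^eps - 1)/(e^eps + 1),
    and similarly for b_n."""
    shift = n / ((np.exp(eps) + 1.0) * np.log(n))   # noise term, grows with n
    scale = (np.exp(eps) - 1.0) / (np.exp(eps) + 1.0)
    return shift + scale * a, shift + scale * b
```

For fixed $\epsilon$, increasing $n$ shrinks $\sqrt{a_{n}}-\sqrt{b_{n}}$ toward $0$, which is exactly why exact recovery under RR requires $\epsilon = \Omega(\log(n))$.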
\vspace{-7pt} \section{Numerical Experiments} \label{sec:empirical_results} \begin{figure*}[t] \centering \begin{minipage}{.24\textwidth} \centering \subcaptionbox{Impact of changing $a$ where $r = 2$ and $n = 100$ vertices.} {\includegraphics[width=\linewidth]{HPC_threshold_mechanisms_vs_a.pdf}} \end{minipage} \begin{minipage}{.24\textwidth} \centering \subcaptionbox{Impact of $\epsilon$ where $r = 2$ and $n = 200$ vertices.} {\includegraphics[width=\linewidth]{HPC_comparisons_mechanisms_a_b_vs_epsilon_final.pdf}} \end{minipage} \begin{minipage}{.24\textwidth} \centering \subcaptionbox{Impact of $\epsilon$ where $r = 3$ and $n = 200$ vertices.} {\includegraphics[width=\linewidth]{HPC_three_communities_comparisons.pdf}} \end{minipage} \begin{minipage}{.24\textwidth} \centering \subcaptionbox{SDP vs. Spectral method for $r = 2$ communities.} {\includegraphics[width=\linewidth]{Spectral_vs_SDP_vs_n_two_communities.pdf}} \end{minipage} \begin{minipage}{.24\textwidth} \centering \subcaptionbox{Comparison between stability and RR + SDP where $r = 2$.} {\includegraphics[width=\linewidth]{HPC_RR_vs_stability_vs_n.pdf}} \end{minipage} \begin{minipage}{.24\textwidth} \centering \subcaptionbox{Comparison between stability and RR + SDP where $r = 3$.} {\includegraphics[width=\linewidth]{HPC_three_communities_vs_n.pdf}} \end{minipage} \begin{minipage}{.24\textwidth} \centering \subcaptionbox{Impact of $\epsilon$ for Karate Club dataset where $r = 2$.} {\includegraphics[width=\linewidth]{Karate_real_world_dataset.pdf}} \end{minipage} \begin{minipage}{.24\textwidth} \centering \subcaptionbox{Impact of $\epsilon$ for Political Blogosphere dataset where $r =2$.} {\includegraphics[width=\linewidth]{blog_data_two_communities_vs_epsilon.pdf}} \end{minipage} \caption{Synopsis of Numerical results: (a) Shows the impact of changing $a$ for fixed $b, \epsilon$. (b)-(c) Show the impact of $\epsilon$ for $r = 2, 3$, respectively. (d)-(f) Show the error probability as function of $n$. 
(g)-(h) Show the performance on real-world datasets.} \label{fig:impact_changing_parameters} \vspace{-10pt} \end{figure*} In this section, we present experimental results to assess the performance of our proposed private community detection algorithms, and the associated tradeoffs between privacy and community recovery, for both synthetically generated graphs (SBMs) as well as real-world graphs. The proposed mechanisms are implemented in MATLAB 2020b and the optimization (SDP) is done through the CVX solver \cite{grant2009cvx}. Our numerical experiments address the following questions: \textbf{Q1: How does the error probability change with $a$ and $b$?} We first study community recovery on synthetic graphs (SBM) with $n = 100$ vertices, $r = 2$ communities, $b = 0.1$, and vary the parameter $a$. Fig. \ref{fig:impact_changing_parameters}(a) shows the impact of increasing $a$ on the error probability of (i) non-private recovery; (ii) the SDP-stability mechanism; and (iii) the randomized-response SDP mechanism. For a fixed privacy budget $\epsilon$, we observe that when the difference between $a$ and $b$ increases, the error probabilities of all private mechanisms decrease, but are no better than the non-private case. For a fixed $\epsilon$, the SDP-stability mechanism achieves a smaller error probability than the RR+SDP mechanism; however, this comes at the expense of an approximate edge DP guarantee (i.e., the SDP-stability mechanism requires $\delta>0$). \textbf{Q2: What is the impact of $\epsilon$ on the error probability?} In Figs. \ref{fig:impact_changing_parameters} (b) and (c), we fix $n=200$, $a=3.5$, $b=0.1$ and study the impact of the privacy budget $\epsilon$ on the error probability for the case of $r=2$ and $r=3$ communities. Specifically, for $r=2$, we observe that the SDP-stability mechanism outperforms RR+SDP; furthermore, as $\epsilon$ increases beyond a certain threshold, the error probabilities of both converge to $0$.
For $r=3$ communities, the difference in performance between SDP-stability and RR+SDP is even more pronounced. In this setting, however, we do not expect the error probability to converge to $0$ even as $\epsilon\rightarrow \infty$, since the chosen values $(a=3.5, b=0.1)$ do not satisfy the exact recovery threshold $(\sqrt{a}-\sqrt{b}> \sqrt{r})$. \textbf{Q3: What is the impact of the problem size on the accuracy (SDP-Stability, RR+SDP, RR+Spectral)?} In Fig. \ref{fig:impact_changing_parameters} (d), we compare the performance of SDP-relaxation-based recovery against the spectral method proposed in \cite{hehir2021consistency}, both under randomized response, for $a = 3.5$, $b = 0.1$ and $r = 2$ communities. We observe that RR+SDP achieves a lower error probability as a function of $n$ than RR+Spectral, albeit at a higher computational cost. In Fig. \ref{fig:impact_changing_parameters}(e), we show the error probability as a function of the number of vertices $n$ for $r = 2$ communities and different privacy levels. From the figure, we observe that for the RR-based approach, the privacy level should scale as $\Omega(\log(n))$ to achieve exact recovery, which is consistent with our theoretical findings. On the other hand, the stability-based mechanisms can still provide exact recovery for finite $\epsilon$. We can draw similar conclusions for the case of $r = 3$ communities in Fig. \ref{fig:impact_changing_parameters}(f). \textbf{Q4: How do the \emph{private} community detection mechanisms perform on real-world datasets?} We now discuss our results for two real-world datasets (shown in Figs. \ref{fig:impact_changing_parameters} (g) \& (h)): $(1)$ Zachary's Karate Club dataset, which contains a social network of friendships between $34$ members of a karate club at a US university in the 1970s
\cite{girvan2002community}; and $(2)$ the Political Blogosphere dataset \cite{adamic2005political}, which consists of $1490$ political blogs captured during the $2004$ US elections. Each blog is classified as left/liberal or right/conservative (i.e., $r=2$ communities), and links between blogs were automatically extracted from a crawl of each blog's front page (see the original dataset \cite{adamic2005political} for details). For the smaller Karate Club dataset ($n=34$), Fig. \ref{fig:impact_changing_parameters}(g) shows the impact of choosing $\delta$ on the SDP-stability mechanism. Specifically, when $\delta = 10^{-3} \ll 1/n$, RR+SDP achieves a lower error probability than SDP-stability for smaller $\epsilon$. On the other hand, when we increase $\delta$ to $10^{-2}$ (which is comparable to $1/n$), we observe that SDP-stability outperforms RR+SDP for all $\epsilon$. For the larger Political Blogosphere dataset ($n=1490$), the SDP-stability mechanism outperforms RR+SDP for all values of $\epsilon$. Thus, with a suitably chosen $\delta$, SDP-stability performs better than RR+SDP on both datasets.
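The experimental pipeline above combines SBM graph generation with an $\epsilon$-edge-DP perturbation before recovery. The following Python/NumPy sketch illustrates these two ingredients. It is our illustrative reconstruction, not the MATLAB/CVX code used for the figures; the function names and the logarithmic-degree parametrization ($a\log n/n$ within communities, $b\log n/n$ across) are assumptions, chosen to be consistent with the stated exact-recovery threshold $\sqrt{a}-\sqrt{b}>\sqrt{r}$.

```python
import numpy as np

def sbm_adjacency(n, r, a, b, rng):
    """Sample a symmetric SBM adjacency matrix (illustrative sketch).

    Within-community edge probability a*log(n)/n, across-community
    b*log(n)/n -- the regime in which exact recovery requires
    sqrt(a) - sqrt(b) > sqrt(r).
    """
    labels = rng.integers(r, size=n)               # community assignment
    p_in, p_out = a * np.log(n) / n, b * np.log(n) / n
    P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    A = np.triu((rng.random((n, n)) < P).astype(int), 1)
    return A + A.T, labels                         # symmetric, zero diagonal

def randomized_response(A, eps, rng):
    """eps-edge-DP perturbation: flip each edge indicator independently
    with probability 1/(exp(eps)+1) (classical randomized response)."""
    p_flip = 1.0 / (np.exp(eps) + 1.0)
    flips = np.triu((rng.random(A.shape) < p_flip).astype(int), 1)
    return A ^ (flips + flips.T)                   # XOR applies the flips
```

The perturbed matrix is what a downstream (non-private) SDP or spectral recovery routine would see under the RR-based approach; the SDP-stability mechanism instead randomizes at the level of the algorithm's output.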
\section{Introduction} \label{sec:introduction} \input{sections/1_introduction} \section{Simulation components} \label{sec:simulation_components} The AdePT workflow was built on top of the GPU ports of the main particle transport simulation components. While the VecGeom GPU functionality pre-dated our project, modules such as physics and magnetic field were rewritten or adapted from their original CPU-based versions. \subsection{Geometry integration, validation and optimisation} \input{sections/2.1_geometry} \label{sec:geometry} \subsection{Physics integration and validation} \input{sections/2.2_physics} \label{sec:physics} \subsection{Magnetic field} \input{sections/2.3_field} \label{sec:field} \section{Stepping workflow} \input{sections/3_workflow} \label{sec:workflow} \section{Geant4 integration} \input{sections/4_integration} \label{sec:integration} \section{Preliminary analysis} \label{sec:analysis} \input{sections/5_analysis} \section{Conclusions and next steps} \label{sec:conclusions} \input{sections/6_conclusions} \ack The oneAPI-related work was conducted with the support of Intel in the framework of the CERN openlab-Intel collaboration agreement. The authors acknowledge funding from the EPSRC and STFC agencies. The single-precision work was supported by the GSoC 2021 program. \section*{References} \bibliographystyle{iopart-num}
\section*{Acknowledgement}}{} \newenvironment{romenumerate}[1][-10pt] \addtolength{\leftmargini}{#1}\begin{enumerate \renewcommand{\labelenumi}{\textup{(\roman{enumi})}}% \renewcommand{\theenumi}{\textup{(\roman{enumi})}}% }{\end{enumerate}} \newcounter{oldenumi} \newenvironment{romenumerateq {\setcounter{oldenumi}{\value{enumi}} \begin{romenumerate} \setcounter{enumi}{\value{oldenumi}}} {\end{romenumerate}} \newcounter{thmenumerate} \newenvironment{thmenumerate} {\setcounter{thmenumerate}{0}% \renewcommand{\thethmenumerate}{\textup{(\roman{thmenumerate})}}% \def\item{\pa \refstepcounter{thmenumerate}\textup{(\roman{thmenumerate})\enspace}} } {} \newcounter{xenumerate} \newenvironment{xenumerate} {\begin{list} {\upshape(\roman{xenumerate})} {\setlength{\leftmargin}{0pt} \setlength{\rightmargin}{0pt} \setlength{\labelwidth}{0pt} \setlength{\itemindent}{\labelsep} \setlength{\topsep}{0pt} \usecounter{xenumerate}} } {\end{list}} \newcommand\xfootnote[1]{\unskip\footnote{#1}$ $} \newcommand\pfitem[1]{\par(#1):} \newcommand\pfitemx[1]{\par#1:} \newcommand\pfitemref[1]{\pfitemx{\ref{#1}}} \newcommand\pfcase[2]{\smallskip\noindent\emph{Case #1: #2} \noindent} \newcommand\step[2]{\smallskip\noindent\emph{Step #1: #2} \noindent} \newcommand\stepx{\smallskip\noindent\refstepcounter{steps}% \emph{Step \arabic{steps}:}\noindent} \newcommand{\refT}[1]{Theorem~\ref{#1}} \newcommand{\refC}[1]{Corollary~\ref{#1}} \newcommand{\refL}[1]{Lemma~\ref{#1}} \newcommand{\refR}[1]{Remark~\ref{#1}} \newcommand{\refS}[1]{Section~\ref{#1}} \newcommand{\refSS}[1]{Subsection~\ref{#1}} \newcommand{\refP}[1]{Proposition~\ref{#1}} \newcommand{\refD}[1]{Definition~\ref{#1}} \newcommand{\refE}[1]{Example~\ref{#1}} \newcommand{\refF}[1]{Figure~\ref{#1}} \newcommand{\refApp}[1]{Appendix~\ref{#1}} \newcommand{\refTab}[1]{Table~\ref{#1}} \newcommand{\refand}[2]{\ref{#1} and~\ref{#2}} \newcommand{\ignore}[1]{} \newcommand\marginal[1]{\marginpar{\raggedright\parindent=0pt\tiny #1}} 
\newcommand\JF{\marginal{JF}} \newcommand\WH{\marginal{WH}} \newcommand{\marginal{D}}{\marginal{D}} \newcommand\SJ{\marginal{SJ}} \newcommand\kolla{\marginal{CHECK! SJ}} \newcommand\ms[1]{\texttt{[ms #1]}} \newcommand\XXX{XXX \marginal{XXX}} \newcommand\REM[1]{{\raggedright\texttt{[#1]}\par\marginal{XXX}}} \newcommand\rem[1]{{\texttt{[#1]}\marginal{XXX}}} \newenvironment{OLD}{\Small \REM{Old stuff to be edited:}\par}{} \newcommand\linebreakx{\unskip\marginal{$\backslash$linebreak}\linebreak} \begingroup \count255=\time \divide\count255 by 60 \count1=\count255 \multiply\count255 by -60 \advance\count255 by \time \ifnum \count255 < 10 \xdef\klockan{\the\count1.0\the\count255} \else\xdef\klockan{\the\count1.\the\count255}\fi \endgroup \newcommand\nopf{\qed} \newcommand\noqed{\renewcommand{\qed}{}} \newcommand\qedtag{\eqno{\qed}} \DeclareMathOperator*{\sumx}{\sum\nolimits^{*}} \DeclareMathOperator*{\sumxx}{\sum\nolimits^{**}} \newcommand{\sum_{i=0}^\infty}{\sum_{i=0}^\infty} \newcommand{\sum_{j=0}^\infty}{\sum_{j=0}^\infty} \newcommand{\sum_{j=1}^\infty}{\sum_{j=1}^\infty} \newcommand{\sum_{k=0}^\infty}{\sum_{k=0}^\infty} \newcommand{\sum_{k=1}^\infty}{\sum_{k=1}^\infty} \newcommand{\sum_{m=0}^\infty}{\sum_{m=0}^\infty} \newcommand{\sum_{n=0}^\infty}{\sum_{n=0}^\infty} \newcommand{\sum_{n=1}^\infty}{\sum_{n=1}^\infty} \newcommand{\sum_{i=1}^n}{\sum_{i=1}^n} \newcommand{\sum_{j=1}^n}{\sum_{j=1}^n} \newcommand{\sum_{k=1}^n}{\sum_{k=1}^n} \newcommand{\prod_{i=1}^n}{\prod_{i=1}^n} \newcommand\set[1]{\ensuremath{\{#1\}}} \newcommand\bigset[1]{\ensuremath{\bigl\{#1\bigr\}}} \newcommand\Bigset[1]{\ensuremath{\Bigl\{#1\Bigr\}}} \newcommand\biggset[1]{\ensuremath{\biggl\{#1\biggr\}}} \newcommand\lrset[1]{\ensuremath{\left\{#1\right\}}} \newcommand\xpar[1]{(#1)} \newcommand\bigpar[1]{\bigl(#1\bigr)} \newcommand\Bigpar[1]{\Bigl(#1\Bigr)} \newcommand\biggpar[1]{\biggl(#1\biggr)} \newcommand\lrpar[1]{\left(#1\right)} \newcommand\bigsqpar[1]{\bigl[#1\bigr]} 
\newcommand\Bigsqpar[1]{\Bigl[#1\Bigr]} \newcommand\biggsqpar[1]{\biggl[#1\biggr]} \newcommand\lrsqpar[1]{\left[#1\right]} \newcommand\xcpar[1]{\{#1\}} \newcommand\bigcpar[1]{\bigl\{#1\bigr\}} \newcommand\Bigcpar[1]{\Bigl\{#1\Bigr\}} \newcommand\biggcpar[1]{\biggl\{#1\biggr\}} \newcommand\lrcpar[1]{\left\{#1\right\}} \newcommand\abs[1]{|#1|} \newcommand\bigabs[1]{\bigl|#1\bigr|} \newcommand\Bigabs[1]{\Bigl|#1\Bigr|} \newcommand\biggabs[1]{\biggl|#1\biggr|} \newcommand\lrabs[1]{\left|#1\right|} \def\rompar(#1){\textup(#1\textup)} \newcommand\xfrac[2]{#1/#2} \newcommand\xpfrac[2]{(#1)/#2} \newcommand\xqfrac[2]{#1/(#2)} \newcommand\xpqfrac[2]{(#1)/(#2)} \newcommand\parfrac[2]{\lrpar{\frac{#1}{#2}}} \newcommand\bigparfrac[2]{\bigpar{\frac{#1}{#2}}} \newcommand\Bigparfrac[2]{\Bigpar{\frac{#1}{#2}}} \newcommand\biggparfrac[2]{\biggpar{\frac{#1}{#2}}} \newcommand\xparfrac[2]{\xpar{\xfrac{#1}{#2}}} \newcommand\innprod[1]{\langle#1\rangle} \newcommand\expbig[1]{\exp\bigl(#1\bigr)} \newcommand\expBig[1]{\exp\Bigl(#1\Bigr)} \newcommand\explr[1]{\exp\left(#1\right)} \newcommand\expQ[1]{e^{#1}} \def\xexp(#1){e^{#1}} \newcommand\ceil[1]{\lceil#1\rceil} \newcommand\floor[1]{\lfloor#1\rfloor} \newcommand\lrfloor[1]{\left\lfloor#1\right\rfloor} \newcommand\frax[1]{\{#1\}} \newcommand\setn{\set{1,\dots,n}} \newcommand\nn{[n]} \newcommand\ntoo{\ensuremath{{n\to\infty}}} \newcommand\Ntoo{\ensuremath{{N\to\infty}}} \newcommand\asntoo{\text{as }\ntoo} \newcommand\ktoo{\ensuremath{{k\to\infty}}} \newcommand\mtoo{\ensuremath{{m\to\infty}}} \newcommand\stoo{\ensuremath{{s\to\infty}}} \newcommand\ttoo{\ensuremath{{t\to\infty}}} \newcommand\xtoo{\ensuremath{{x\to\infty}}} \newcommand\bmin{\wedge} \newcommand\norm[1]{\|#1\|} \newcommand\bignorm[1]{\bigl\|#1\bigr\|} \newcommand\Bignorm[1]{\Bigl\|#1\Bigr\|} \newcommand\downto{\searrow} \newcommand\upto{\nearrow} \newcommand\half{\tfrac12} \newcommand\thalf{\tfrac12} \newcommand\punkt{.\spacefactor=1000} \newcommand\iid{i.i.d\punkt} 
\newcommand\ie{i.e\punkt} \newcommand\eg{e.g\punkt} \newcommand\viz{viz\punkt} \newcommand\cf{cf\punkt} \newcommand{a.s\punkt}{a.s\punkt} \newcommand{a.e\punkt}{a.e\punkt} \renewcommand{\ae}{\vu} \newcommand\whp{w.h.p\punkt} \newcommand\ii{\mathrm{i}} \newcommand{\longrightarrow}{\longrightarrow} \newcommand\dto{\overset{\mathrm{d}}{\longrightarrow}} \newcommand\pto{\overset{\mathrm{p}}{\longrightarrow}} \newcommand\asto{\overset{\mathrm{a.s.}}{\longrightarrow}} \newcommand\eqd{\overset{\mathrm{d}}{=}} \newcommand\neqd{\overset{\mathrm{d}}{\neq}} \newcommand\op{o_{\mathrm p}} \newcommand\Op{O_{\mathrm p}} \newcommand\bbR{\mathbb R} \newcommand\bbC{\mathbb C} \newcommand\bbN{\mathbb N} \newcommand\bbT{\mathbb T} \newcommand\bbQ{\mathbb Q} \newcommand\bbZ{\mathbb Z} \newcommand\bbZleo{\mathbb Z_{\le0}} \newcommand\bbZgeo{\mathbb Z_{\ge0}} \newcounter{CC} \newcommand{\CC}{\stepcounter{CC}\CCx} \newcommand{\CCx}{C_{\arabic{CC}}} \newcommand{\CCdef}[1]{\xdef#1{\CCx}} \newcommand{\CCname}[1]{\CC\CCdef{#1}} \newcommand{\CCreset}{\setcounter{CC}0} \newcounter{cc} \newcommand{\cc}{\stepcounter{cc}\ccx} \newcommand{\ccx}{c_{\arabic{cc}}} \newcommand{\ccdef}[1]{\xdef#1{\ccx}} \newcommand{\ccname}[1]{\cc\ccdef{#1}} \newcommand{\ccreset}{\setcounter{cc}0} \renewcommand\Re{\operatorname{Re}} \renewcommand\Im{\operatorname{Im}} \renewcommand\L{\operatorname{L}} \newcommand\E{\operatorname{\mathbb E{}}} \renewcommand\P{\operatorname{\mathbb P{}}} \newcommand\Var{\operatorname{Var}} \newcommand\Cov{\operatorname{Cov}} \newcommand\Corr{\operatorname{Corr}} \newcommand\Exp{\operatorname{Exp}} \newcommand\Po{\operatorname{Po}} \newcommand\Bi{\operatorname{Bi}} \newcommand\Bin{\operatorname{Bin}} \newcommand\Be{\operatorname{Be}} \newcommand\Ge{\operatorname{Ge}} \newcommand\NBi{\operatorname{NegBin}} \newcommand\Res{\operatorname{Res}} \newcommand\fall[1]{^{\underline{#1}}} \newcommand\rise[1]{^{\overline{#1}}} \newcommand\supp{\operatorname{supp}} \newcommand\sgn{\operatorname{sgn}} 
\newcommand\Tr{\operatorname{Tr}} \newcommand\ga{\alpha} \newcommand\gb{\beta} \newcommand\gd{\delta} \newcommand\gD{\Delta} \newcommand\gf{\varphi} \newcommand\gam{\gamma} \newcommand\gG{\Gamma} \newcommand\gk{\varkappa} \newcommand\gl{\lambda} \newcommand\gL{\Lambda} \newcommand\go{\omega} \newcommand\gO{\Omega} \newcommand\gs{\sigma} \newcommand\gss{\sigma^2} \newcommand\gth{\theta} \newcommand\eps{\varepsilon} \newcommand\ep{\varepsilon} \newcommand\bJ{\bar J} \newcommand\cA{\mathcal A} \newcommand\cB{\mathcal B} \newcommand\cC{\mathcal C} \newcommand\cD{\mathcal D} \newcommand\cE{\mathcal E} \newcommand\cF{\mathcal F} \newcommand\cG{\mathcal G} \newcommand\cH{\mathcal H} \newcommand\cI{\mathcal I} \newcommand\cJ{\mathcal J} \newcommand\cK{\mathcal K} \newcommand\cL{{\mathcal L}} \newcommand\cM{\mathcal M} \newcommand\cN{\mathcal N} \newcommand\cO{\mathcal O} \newcommand\cP{\mathcal P} \newcommand\cQ{\mathcal Q} \newcommand\cR{{\mathcal R}} \newcommand\cS{{\mathcal S}} \newcommand\cT{{\mathcal T}} \newcommand\cU{{\mathcal U}} \newcommand\cV{\mathcal V} \newcommand\cW{\mathcal W} \newcommand\cX{{\mathcal X}} \newcommand\cY{{\mathcal Y}} \newcommand\cZ{{\mathcal Z}} \newcommand\ett[1]{\boldsymbol1_{#1}} \newcommand\bigett[1]{\boldsymbol1\bigcpar{#1}} \newcommand\Bigett[1]{\boldsymbol1\Bigcpar{#1}} \newcommand\etta{\boldsymbol1} \newcommand\smatrixx[1]{\left(\begin{smallmatrix}#1\end{smallmatrix}\right)} \newcommand\limn{\lim_{n\to\infty}} \newcommand\limN{\lim_{N\to\infty}} \newcommand\qw{^{-1}} \newcommand\qww{^{-2}} \newcommand\qq{^{1/2}} \newcommand\qqw{^{-1/2}} \newcommand\qqq{^{1/3}} \newcommand\qqqb{^{2/3}} \newcommand\qqqw{^{-1/3}} \newcommand\qqqbw{^{-2/3}} \newcommand\qqqq{^{1/4}} \newcommand\qqqqc{^{3/4}} \newcommand\qqqqw{^{-1/4}} \newcommand\qqqqcw{^{-3/4}} \newcommand\intii{\int_{-1}^1} \newcommand\intoi{\int_0^1} \newcommand\intoo{\int_0^\infty} \newcommand\intoooo{\int_{-\infty}^\infty} \newcommand\oi{[0,1]} \newcommand\ooo{[0,\infty)} 
\newcommand\ooox{[0,\infty]} \newcommand\oooo{(-\infty,\infty)} \newcommand\setoi{\set{0,1}} \newcommand\dtv{d_{\mathrm{TV}}} \newcommand\dd{\,\mathrm{d}} \newcommand\ddx{\mathrm{d}} \newcommand{probability generating function}{probability generating function} \newcommand{moment generating function}{moment generating function} \newcommand{characteristic function}{characteristic function} \newcommand{uniformly integrable}{uniformly integrable} \newcommand\rv{random variable} \newcommand\lhs{left-hand side} \newcommand\rhs{right-hand side} \newcommand\gnp{\ensuremath{G(n,p)}} \newcommand\gnm{\ensuremath{G(n,m)}} \newcommand\gnd{\ensuremath{G(n,d)}} \newcommand\gnx[1]{\ensuremath{G(n,#1)}} \newcommand\etto{\bigpar{1+o(1)}} \newcommand\GW{Galton--Watson} \newcommand\GWt{\GW{} tree} \newcommand\cGWt{conditioned \GW{} tree} \newcommand\GWp{\GW{} process} \newcommand\tX{{\widetilde X}} \newcommand\tY{{\widetilde Y}} \newcommand\kk{\varkappa} \newcommand\spann[1]{\operatorname{span}(#1)} \newcommand\tn{\cT_n} \newcommand\tnv{\cT_{n,v}} \newcommand\rea{\Re\ga} \newcommand\wgay{{-\ga-\frac12}} \newcommand\qgay{{\ga+\frac12}} \newcommand\ex{\mathbf e} \newcommand\hX{\hat X} \newcommand\sgt{simply generated tree} \newcommand\sgrt{simply generated random tree} \newcommand\hh[1]{d(#1)} \newcommand\WW{\widehat W} \newcommand\coi{C\oi} \newcommand\out{\gd^+} \newcommand\zne{Z_{n,\eps}} \newcommand\ze{Z_{\eps}} \newcommand\gatoo{\ensuremath{\ga\to\infty}} \newcommand\rtoo{\ensuremath{r\to\infty}} \newcommand\Yoo{Y_\infty} \newcommand\bes{R} \newcommand\tex{\tilde{\ex}} \newcommand\tbes{\tilde{\bes}} \newcommand{\tilde{c}}{\tilde{c}} \newcommand\Woo{W_\infty} \newcommand{m_1}{m_1} \newcommand{\tilde m_1}{\tilde m_1} \newcommand{B^{(3)}}{B^{(3)}} \newcommand{r^{1/2}}{r^{1/2}} \newcommand\coo{C[0,\infty)} \newcommand\coT{\ensuremath{C[0,T]}} \newcommand\expx[1]{e^{#1}} \newcommand\gdtau{\gD\tau} \newcommand\ygam{Y_{(\gam)}} \newcommand\EE{V} \newcommand\pigsqq{\sqrt{2\pi\gss}} 
\newcommand\pigsqqw{\frac{1}{\sqrt{2\pi\gss}}} \newcommand\gapigsqqw{\frac{(\ga-\frac12)\qw}{\sqrt{2\pi\gss}}} \newcommand\gdd{\frac{\gd}{2}} \newcommand\raisetagbase{\raisetag{\baselineskip}} \newcommand{H\"older}{H\"older} \newcommand{P\'olya}{P\'olya} \newcommand\CS{Cauchy--Schwarz} \newcommand\CSineq{\CS{} inequality} \newcommand{L\'evy}{L\'evy} \newcommand{Tak\'acs}{Tak\'acs} \newcommand{Fr\'echet}{Fr\'echet} \newcommand{\texttt{Maple}}{\texttt{Maple}} \newcommand\citex{\REM} \newcommand\refx[1]{\texttt{[#1]}} \newcommand\xref[1]{\texttt{(#1)}} \newcommand{\ro}[1]{\uppercase\expandafter{\romannumeral #1}} \hyphenation{Upp-sala} \begin{document} \maketitle \vspace{-.3in} \begin{center} \begin{small} September~29, 2021 \end{small} \end{center} \begin{abstract} We prove that, for every $0 \leq t \leq 1$, the limiting distribution of the scale-normalized number of key comparisons used by the celebrated algorithm {\tt QuickQuant} to find the $t$th quantile in a randomly ordered list has a Lipschitz continuous density function $f_t$ that is bounded above by~$10$. Furthermore, this density $f_t(x)$ is positive for every $x > \min\{t, 1 - t\}$ and, uniformly in~$t$, enjoys superexponential decay in the right tail. We also prove that the survival function $1 - F_t(x) = \int_x^{\infty}\!f_t(y) \dd y$ and the density function $f_t(x)$ both have the right tail asymptotics $\exp [-x \ln x - x \ln \ln x + O(x)]$. We use the right-tail asymptotics to bound large deviations for the scale-normalized number of key comparisons used by {\tt QuickQuant}. \end{abstract} \section{Introduction and summary} \label{S:intro} \subsection{Introduction} {\tt QuickQuant} is closely related to an algorithm called {\tt QuickSelect}, which in turn can be viewed as a one-sided analogue of {\tt QuickSort}. In brief, {\tt QuickSelect}$(n, m)$ is an algorithm designed to find a number of rank $m$ in an unsorted list of size~$n$. 
It works by recursively applying the same partitioning step as {\tt QuickSort} to the sublist that contains the item of rank~$m$ until the pivot we pick \emph{has} the desired rank or the sublist to be explored has size one. Let $C_{n, m}$ denote the number of comparisons needed by {\tt QuickSelect}$(n, m)$, and note that $C_{n, m}$ and $C_{n, n + 1 - m}$ have the same distribution, by symmetry. Knuth~\cite{MR0403310} finds the formula \begin{equation} \label{E:exp_select} \E C_{n,m} = 2 \left[ (n+1)H_n - (n+3-m) H_{n+1-m} - (m+2) H_m + (n+3) \right] \end{equation} for the expectation. The algorithm {\tt QuickQuant}$(n, t)$ refers to {\tt QuickSelect}$(n, m_n)$ such that the ratio $m_n / n$ converges to a specified value $t \in [0, 1]$ as $n \to \infty$. It is easy to see that \eqref{E:exp_select} tells us about the limiting behavior of the expected number of comparisons after standardizing: \begin{equation} \label{EZ} \lim_{n \to \infty} \E[n^{-1} C_{n,m_n}] = 2 + 2 H(t), \end{equation} where $H(x) := - x \ln x - (1-x) \ln(1 - x)$ with $0 \ln 0 := 0$. We follow the set-up and notation of Fill and Nakama~\cite{MR3102458}, who use an infinite sequence $(U_i)_{i \geq 1}$ of independent Uniform$(0,1)$-distributed random variables to couple the number of key comparisons $C_{n,m_n}$ for all $n$. Let $L_0(n) := 0$ and $R_0(n) := 1$. For $k \geq 1$, inductively define \[ \tau_k(n) := \inf \{i \leq n : L_{k-1}(n) < U_i < R_{k-1}(n)\}, \] and let $r_k(n)$ be the rank of the pivot $U_{\tau_k(n)}$ in the set $\{U_1, \dots, U_n\}$ if $\tau_k(n) < \infty$, and $m_n$ otherwise. [Recall that the infimum of the empty set is~$\infty$; hence $\tau_k(n) = \infty$ if and only if $L_{k-1}(n) = R_{k-1}(n)$.]
Also, inductively define \begin{align} \label{left} L_k(n) :=& \mathbb{1}(r_k(n) \leq m_n) U_{\tau_k(n)} + \mathbb{1}(r_k(n) > m_n) L_{k-1}(n),\\ \label{right} R_k(n) :=& \mathbb{1}(r_k(n) \geq m_n) U_{\tau_k(n)} + \mathbb{1}(r_k(n) < m_n) R_{k-1}(n), \end{align} if $\tau_k(n) < \infty$, but \[ (L_k(n), R_k(n)) := (L_{k-1}(n), R_{k-1}(n)) \] if $\tau_k(n) = \infty$. The number of comparisons at the $k$th step is then \[ S_{n,k} := \sum_{i:\, \tau_k(n) < i \leq n} \mathbb{1}(L_{k-1}(n) < U_i < R_{k-1}(n)), \] and the normalized total number of comparisons equals \begin{equation} \label{E:1} n^{-1} C_{n,m_n} := n^{-1} \sum_{k \geq 1} S_{n,k}. \end{equation} Mahmoud, Modarres and Smythe~\cite{MR1359052} studied {\tt QuickSelect} in the case that the rank~$m$ is taken to be a random variable $M_n$ uniformly distributed on $\{1, \dots, n\}$ and assumed to be independent of the numbers in the list. They used the Wasserstein metric to prove that $Z_n := n^{-1} C_{n,M_n} \xrightarrow{\mathcal{L}} Y$ as $n \to \infty$ and identified the distribution of $Y$. In particular, they proved that $Y$ has an absolutely continuous distribution function. Gr\"{u}bel and R\"{o}sler~\cite{MR1372338} treated all the quantiles $t$ simultaneously by letting $m_n \equiv m_n(t)$. Specifically, they considered the normalized process $X_n$ defined by \begin{equation} \label{E:2} X_n(t) := n^{-1} C_{n,\floor{n t} +1} \mbox{ for } 0 \leq t < 1, \quad X_n(t) := n^{-1} C_{n, n} \mbox{\ for\ } t = 1. \end{equation} Working in the Skorohod topology (see Billingsley~\cite[Chapter~3]{MR1700749}), they proved that this process has a limiting distribution as $n \to \infty$, and the value of the limiting process at argument $t$ is the sum of the lengths of all the intervals encountered in all the steps of searching for population quantile~$t$. We can use the \emph{same} sequence $(U_i)_{i \geq 1}$ of Uniform$(0, 1)$ random variables to express the limiting stochastic process.
For $t \in [0, 1]$, let $L_0(t) := 0$ and $R_0(t) := 1$, and let $\tau_0(t) := 0$. For $t \in [0, 1]$ and $k \geq 1$, inductively define \begin{align} \label{taukt} \tau_k(t) :=& \inf \{i > \tau_{k - 1}(t) : L_{k-1}(t) \leq U_i \leq R_{k-1}(t) \},\\ \label{Lkt} L_k(t) :=& \mathbb{1} {(U_{\tau_k(t)} \leq t)}\, U_{\tau_k(t)} + \mathbb{1} {(U_{\tau_k(t)} > t)}\, L_{k-1}(t),\\ \label{Rkt} R_k(t) :=& \mathbb{1} {(U_{\tau_k(t)} \leq t)}\, R_{k-1}(t) + \mathbb{1} {(U_{\tau_k(t)} > t)}\, U_{\tau_k(t)}. \end{align} It is not difficult to see that \[ \P(\tau_k(t) < \infty\mbox{\ and\ }0 \leq L_k(t) \leq t \leq R_k(t) \leq 1 \mbox{\ for all $0 \leq t \leq 1$ and $k \geq 0$}) = 1 \] and that for each fixed $t \in (0, 1)$ we have \[ \P(L_k(t) < t < R_k(t)\mbox{\ for all $k \geq 0$}) = 1. \] The limiting process~$Z$ can be expressed as \begin{equation} \label{E:3} Z(t) := \sum_{k=0}^{\infty}{\left[R_k(t) - L_k(t)\right]} = 1 + \sum_{k=1}^{\infty}{\left[R_k(t) - L_k(t)\right]}; \end{equation} it is not hard to see that \[ \P(1 < Z(t) < \infty\mbox{\ for all $0 \leq t \leq 1$}) = 1. \] Note also that the processes~$Z$ and $(Z(1 - t))_{t \in [0, 1]}$ have the same finite-dimensional distributions. Gr\"{u}bel and R\"{o}sler~\cite[Theorem 8]{MR1372338} proved that we can replace the subscript $\floor{n t} +1$ in \eqref{E:2} by any $m_n(t)$ with $0 \leq m_n(t) \leq n$ such that $m_n(t) / n \to t$ as $n \to \infty$, and then the normalized random variables $n^{-1} C_{n, m_n(t)}$ converge (univariately) to the limiting random variable $Z(t)$ for each $t \in [0, 1]$. Among the univariate distributions of $Z(t)$ for $t \in [0, 1]$, only the common distribution of $Z(0)$ and $Z(1)$ is known at all explicitly. As established by Hwang and Tsai~\cite{MR1918722}, this distribution is the Dickman distribution; see their paper for a wealth of information about the distribution. 
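The recursion \eqref{taukt}--\eqref{Rkt} also makes $Z(t)$ straightforward to simulate: conditionally on the current interval, the next pivot $U_{\tau_k(t)}$ is uniform on $(L_{k-1}(t), R_{k-1}(t))$, and the interval shrinks to the side containing~$t$. The following Python sketch (our illustration, not part of the cited works; the truncation tolerance is an arbitrary choice) samples a truncated version of the series \eqref{E:3} and compares the Monte Carlo mean with the value $2 + 2H(t)$ suggested by \eqref{EZ}:

```python
import math
import random

def simulate_Z(t, rng, tol=1e-12):
    """One sample of the truncated series Z(t) = sum_k (R_k - L_k).

    Each pivot is uniform on the current interval (L, R); the interval
    then shrinks to the side containing t.  Terms once the interval is
    shorter than `tol` are discarded (they contribute O(tol) in total).
    """
    L, R, total = 0.0, 1.0, 0.0
    while R - L > tol:
        total += R - L            # add the current interval length
        U = rng.uniform(L, R)     # next pivot, uniform on (L, R)
        if U <= t:
            L = U                 # t lies to the right of the pivot
        else:
            R = U                 # t lies to the left of the pivot
    return total

def H(x):
    # entropy in nats, with 0 ln 0 := 0
    return 0.0 if x in (0.0, 1.0) else -x * math.log(x) - (1 - x) * math.log(1 - x)

if __name__ == "__main__":
    rng = random.Random(2021)
    for t in (0.0, 0.5):
        est = sum(simulate_Z(t, rng) for _ in range(20000)) / 20000
        print(t, est, 2 + 2 * H(t))
```

For $t = 0$ the interval never moves its left endpoint, the sampled sums follow the Dickman distribution mentioned above, and the empirical mean is close to $2 + 2H(0) = 2$.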
The overarching goal of this paper is to establish basic properties of the distributions of $Z(t)$ for other values of~$t$. Kodaj and M\'ori~\cite{MR1454110} proved the (univariate) convergence of \eqref{E:1} to $Z(t)$ in the Wasserstein metric. Using the coupling technique and induction, they proved that \eqref{E:1} is stochastically smaller than its continuous counterpart \eqref{E:3}. Combining this fact with knowledge of their expectations (see \eqref{E:exp_select} and~\cite[Lemma 2.2]{MR1454110}), they proved that \eqref{E:1} converges to \eqref{E:3} in the Wasserstein metric and thus in distribution. Gr\"{u}bel~\cite{MR1622443} connected {\tt QuickSelect}$(n,m_n)$ to a Markov chain to identify the limiting process. For each fixed $n \geq 1$, he considered the Markov chain $(Y_m^{(n)})_{m \geq 0}$ on the state space $I_n := \left\{(i, j):1 \leq j \leq i \leq n \right\}$ with $Y_0^{(n)} := (n, m_n)$. Transition probabilities of $Y^{(n)}$ from the state $(i, j)$ are determined by the partition step of {\tt QuickSelect}$(i, j)$ as follows. If $Y_m^{(n)} = (i, j)$, then $Y_{m+1}^{(n)}$ is selected uniformly at random from the set \[ \left\{ (i - k, j - k):k = 1, \dots, j - 1 \right\} \cup \left\{ (1, 1) \right\} \cup \left\{ (i-k, j):k = 1, \dots, i-j \right\}; \] in particular, $(1, 1)$ is an absorbing state for $Y^{(n)}$. If we write $Y_m^{(n)} = (S_m^{(n)}, Q_m^{(n)})$, then we know \begin{equation} \label{E:4} n^{-1} C_{n,m_n} \stackrel{\mathcal{L}}{=} n^{-1} \sum_{m \geq 0} \left(S_m^{(n)} - 1\right). \end{equation} Gr\"{u}bel~\cite{MR1622443} constructed another Markov chain $Y = (Y_m) = ((S_m, Q_m))$, which is a continuous-state counterpart of the process $Y^{(n)}$, and he proved that for all $m \geq 0$, the normalized random vector $n^{-1} Y^{(n)}_m$ converges to $Y_m$ almost surely.
Using the dominated convergence theorem, he proved that the random variables $n^{-1} \sum_{m=0}^{\infty}{\left(S_m^{(n)}-1\right)}$ converge almost surely to $\sum_{m=0}^{\infty}{S_m}$; the limiting random variable here is exactly $Z(t)$ of~\eqref{E:3}. Combining this with \eqref{E:4}, he concluded that $n^{-1} C_{n,m_n}$ converges in distribution to \eqref{E:3}. As previously mentioned, Hwang and Tsai~\cite{MR1918722} identified the limiting distribution of \eqref{E:1} when $m_n = o(n)$ as the Dickman distribution. Fill and Nakama~\cite{MR3102458} studied the limiting distribution of the cost of using {\tt QuickSelect} for a variety of cost functions. In particular, when there is simply \emph{unit} cost of comparing any two keys, their work reduces to the study of the number of key comparisons, to which we limit our focus here. They proved $L^p$-convergence of \eqref{E:1} for {\tt QuickQuant}$(n, t)$ to \eqref{E:3} for $1 \leq p < \infty$ by first studying the distribution of the number of key comparisons needed for another algorithm called {\tt QuickVal}, and then comparing the two algorithms. The algorithm {\tt QuickVal}$(n, t)$ finds the rank of the \emph{population} $t$-quantile in the sample, while its cousin {\tt QuickQuant}$(n, t)$ looks for the sample $t$-quantile. Intuitively, when the sample size is large, we expect the rank of the population $t$-quantile to be close to $n t$. Therefore, the two algorithms should behave similarly when $n$ is large. Given a set of keys $\{U_1, \dots, U_n\}$, where the $U_i$ are i.i.d. Uniform$(0,1)$ random variables, one can regard the operation of {\tt QuickVal}$(n, t)$ as that of finding the rank of the value $t$ in the augmented set $\{U_1, \dots, U_n, t\}$. It works by first selecting a pivot uniformly at random from the set of keys $\{U_1, \dots, U_n\}$ and then using the pivot to partition the \emph{augmented} set (we do not count the comparison of the pivot with~$t$).
We then recursively do the same partitioning step on the subset that contains $t$ until the set of keys on which the algorithm operates reduces to the singleton $\{t\}$. For {\tt QuickVal}$(n, t)$ with the definitions \eqref{taukt}--\eqref{Rkt} and with \[ S_{n, k}(t) := \sum_{\tau_k(t) < i \leq n} \mathbbm{1}(L_{k - 1}(t) < U_i < R_{k - 1}(t)), \] Fill and Nakama~\cite{MR3102458} showed that $n^{-1} \sum_{k \geq 1} S_{n, k}(t)$ converges (for fixed~$t$) almost surely and also in $L^p$ for $1 \leq p < \infty$ to \eqref{E:3}. They then used these facts to prove the $L^p$-convergence (for fixed~$t$) of \eqref{E:1} to \eqref{E:3} for {\tt QuickQuant}$(n, t)$ for $1 \leq p < \infty$. Fill and Matterer~\cite{MR3249225} treated distributional convergence for the worst-case cost of {\tt Find} for a variety of cost functions. Suppose, for example, that we continue, as at the start of this section, to assign unit cost to the comparison of any two keys, so that $C_{n, m}$ is the total cost for ${\tt QuickSelect}(n, m)$. Then (for a list of length~$n$) the cost of worst-case {\tt Find} is $\max_{1 \leq m \leq n} C_{n, m}$, and its distribution depends on the joint distribution of $C_{n, m}$ for varying~$m$. We shall not be concerned here with worst-case {\tt Find}, but we wish to review the approach and some of the results of~\cite{MR3249225}, since their work is relevant to ${\tt QuickQuant}(n, t)$ for fixed~$t$. Fill and Matterer~\cite{MR3249225} considered tree-indexed processes closely related to the operation of the {\tt QuickSelect} algorithm, as we now describe. For each node in a given rooted ordered binary tree, let $\theta$ denote the binary sequence (or \emph{string}) representing the path from the root to this node, where $0$ corresponds to taking the left child and $1$ to taking the right. The value of~$\theta$ for the root is thus the empty string, denoted $\varepsilon$.
Define $L_{\varepsilon} := 0$, $R_{\varepsilon} := 1$, and $\tau_{\varepsilon} := 1$. Given a sequence of i.i.d.\ Uniform$(0,1)$ random variables $U_1, U_2, \dots$, recursively define \begin{align*} \tau_{\theta} &:= \inf \{ i: L_{\theta} < U_i < R_{\theta}\},\\ L_{\theta 0} &:= L_{\theta}, \quad L_{\theta 1} := U_{\tau_{\theta}},\\ R_{\theta 0} &:= U_{\tau_{\theta}}, \quad R_{\theta 1} := R_{\theta}. \end{align*} Here the concatenated string $\theta 0$ corresponds to the left child of the node with string~$\theta$, while $\theta 1$ corresponds to the right child. Observe that, when inserting a key $U_i$ arriving at time $i > \tau_{\theta}$ into the binary tree, this key is compared with the ``pivot'' $U_{\tau_{\theta}}$ if and only if $U_i \in (L_{\theta}, R_{\theta})$. For~$n$ insertions, the total cost of comparing keys with pivot $U_{\tau_{\theta}}$ is therefore \[ S_{n, \theta} := \sum_{\tau_{\theta} < i \leq n} \mathbbm{1}(L_{\theta} < U_i < R_{\theta}). \] We define a binary-tree-indexed stochastic process $S_n = (S_{n, \theta})_{\theta \in \Theta}$, where $\Theta$ is the collection of all finite-length binary sequences. For each $1 \leq p \leq \infty$, Fill and Matterer~\cite[Definition~3.10 and Proposition~3.11]{MR3249225} defined a Banach space $\mathcal{B}^{(p)}$ of binary-tree-indexed stochastic processes that corresponds in a natural way to the Banach space $L^p$ for random variables. Let $I_{\theta} := R_{\theta} - L_{\theta}$ and consider the process $I = (I_{\theta})_{\theta \in \Theta}$. Fill and Matterer~\cite[Theorem~4.1 with $\beta \equiv 1$]{MR3249225} proved the convergence of the processes $n^{-1} S_n$ to $I$ in the Banach space $\mathcal{B}^{(p)}$ for each $2 \leq p < \infty$. 
For the simplest application in~\cite{MR3249225}, namely, to {\tt QuickVal}$(n, t)$ with~$t$ fixed, let $\gamma(t)$ be the infinite path from the root toward the value~$t$ in the (almost surely) complete binary search tree formed by successive insertions of $U_1, U_2, \dots$ into an initially empty tree. The total cost (call it $V_n$) of {\tt QuickVal}$(n,t)$ can then be computed by summing the cost of comparisons with each (pivot-)node along the path, that is, \[ V_n := \sum_{\theta \in \gamma(t)}{S_{n,\theta}}. \] Using their tree-process convergence theorem described in the preceding paragraph, Fill and Matterer~\cite[Proposition~6.1 with $\beta \equiv 1$]{MR3249225} established $L^p$-convergence, for each $0 < p < \infty$, of $n^{-1} V_n$ to $I_{\gamma(t)}$ as $n \to \infty$, where $I_{\gamma(t)} := \sum_{\theta \in \gamma(t)} I_{\theta}$. Moreover (\cite[Theorem~6.3 with $\beta \equiv 1$]{MR3249225}), they also proved $L^p$-convergence of $n^{-1} Q_n$ to the same limit, again for every $0 < p < \infty$, where $Q_n$ denotes the cost of ${\tt QuickQuant}(n, t)$. Throughout this paper, we will use the standard notation $\mathbb{1}(A)$ to denote the indicator function of the event $A$ and $\E [f;\,A]$ for $\E[f \mathbb{1}(A)]$. \subsection{Summary} In \refS{S:density} we establish, by explicit construction, the existence of densities $f_t$ for the random variables $Z(t)$ defined in~\eqref{E:3}. In \refS{S:boundedness} we prove that these densities are uniformly bounded and in \refS{S:uniform continuity} that they are uniformly continuous. As shown in \refS{S:Integral equation}, the densities satisfy a certain integral equation for $0 < t < 1$. The right-tail behavior of the density functions is examined in \refS{S:decay}, and the left-tail behavior in \refS{S:left}.
In \refS{S:other} we prove that $f_t(x)$ is positive if and only if $x > \min\{t, 1 - t\}$, and we improve the result of \refS{S:uniform continuity} by showing that $f_t(x)$ is Lipschitz continuous in~$x$ for fixed~$t$ and jointly continuous in $(t, x)$. Sections~\ref{S:improved}--\ref{S:lower} are devoted to sharp logarithmic asymptotics for the right tail of $f_t$, and \refS{S:Large_deviation} uses the results of those two sections to treat right-tail large deviation behavior of {\tt QuickQuant}$(n, t)$ for large but finite~$n$. \section{Existence (and construction) of density functions} \label{S:density} In this section, we prove that $Z \equiv Z(t)$ defined in \eqref{E:3} for fixed $0 \leq t \leq 1$ has a density. To simplify notation, we let $L_k \equiv L_k(t)$ and $R_k \equiv R_k(t)$. Let $J \equiv J(t) := Z(t)-1 = \sum_{k=1}^{\infty}{\Delta_k}$ with $\Delta_k \equiv \Delta_k(t) := R_k(t)-L_k(t)$. We use convolution notation as in Section V.4 of Feller~\cite{MR0270403}. The following lemma is well known and can be found, for example, in Feller~\cite[Theorem V.4.4]{MR0270403} or Durrett~\cite[Theorem 2.1.11]{MR2722836}. \begin{lemma} \label{L:Feller} If $X$ and $Y$ are independent random variables with respective distribution functions~$F$ and~$G$, then $Z = X + Y$ has the distribution function $F \star G$. If, in addition, $X$ has a density~$f$ (with respect to Lebesgue measure), then $Z$ has a density $f \star G$. \end{lemma} Let $X = \Delta_1 + \Delta_2$ and $Y = \sum_{k=3}^{\infty}{\Delta_k}$. For the remainder of this paragraph, we suppose $0 < t < 1$. If we condition on $(L_3, R_3) = (l_3, r_3)$ for some $0 \leq l_3 < t < r_3 \leq 1$ with $(l_3, r_3) \ne (0,1)$, we then have \begin{equation} \label{E:Y} Y = (r_3-l_3) \sum_{k=3}^{\infty}{\frac{R_k-L_k}{r_3-l_3}} = (r_3-l_3) \sum_{k=3}^{\infty}{(R_k'-L_k')}, \end{equation} where we set $L_k' = (L_k - l_3) / (r_3 - l_3)$ and $R_k' = (R_k - l_3)/(r_3 - l_3)$ for $k \geq 3$.
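For intuition about the random variables being studied, the following small simulation sketch (our own illustration; {\tt sample\_Z} and its parameters are not from the paper) generates $Z(t) = 1 + J(t)$ directly from the interval recursion \eqref{taukt}--\eqref{Rkt}: each pivot is uniform on the current interval $(L_k, R_k)$, the interval shrinks to the side containing~$t$, and the successive interval lengths are summed. For $t = 0$ one can check that $\E\,\Delta_k = 2^{-k}$, so the sample mean should be close to $\E\,Z(0) = 2$:

```python
import random

def sample_Z(t, depth=80, rng=random):
    """One (truncated) sample of Z(t) = sum_{k>=0} Delta_k; the series
    is cut at `depth` terms since the intervals shrink geometrically."""
    L, R = 0.0, 1.0
    total = 1.0                      # Delta_0 = R_0 - L_0 = 1
    for _ in range(depth):
        U = rng.uniform(L, R)        # next pivot, uniform on (L_k, R_k)
        if U <= t:
            L = U                    # t lies to the right of the pivot
        else:
            R = U                    # t lies to the left of the pivot
        total += R - L               # Delta_{k+1}
    return total

rng = random.Random(7)
sample_mean = sum(sample_Z(0.0, rng=rng) for _ in range(20000)) / 20000
print(sample_mean)                   # close to E Z(0) = 2
```

Replacing $t = 0$ by any $t \in [0, 1]$ gives samples of $Z(t)$.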
Observe that, by definitions \eqref{taukt}--\eqref{Rkt}, the stochastic process $((L_k', R_k'))_{k \geq 3}$, conditionally given $(L_3, R_3) = (l_3, r_3)$, has the same distribution as the (unconditional) stochastic process of intervals $((L_k, R_k))_{k \geq 0}$ encountered in all the steps of searching for population quantile $(t - l_3)/(r_3-l_3)$ (rather than~$t$) by {\tt QuickQuant}. Note also that (again conditionally) the stochastic processes $((L_k, R_k))_{0 \leq k \leq 2}$ and $((L_k', R_k'))_{k \geq 3}$ are independent. Thus (again conditionally) $Y / (r_3 - l_3)$ has the same distribution as the (unconditional) random variable $Z\left((t - l_3) / (r_3 - l_3)\right)$ and is independent of~$X$. We will prove later (Lemmas~\ref{L:main}--\ref{L:main01}) that, conditionally given $(L_3, R_3) = (l_3, r_3)$, the random variable $X$ has a density. Let \[ f_{l_3, r_3}(x) := \P(X \in \ddx x \mid (L_3, R_3) = (l_3, r_3)) / \ddx x \] be such a conditional density. We can then use \refL{L:Feller} to conclude that $J = X + Y$ has a conditional density \[ h_{l_3, r_3}(x) := \P(J \in \ddx x \mid (L_3, R_3) = (l_3, r_3)) / \ddx x. \] By mixing $h_{l_3, r_3}(x)$ for all possible values of $l_3, r_3$, we will obtain an unconditional density function for~$J$, as summarized in the following theorem. \begin{theorem} \label{T:main} For each $0 \leq t \leq 1$, the random variable $J(t) = Z(t) - 1$ has a density \begin{equation} \label{E:density} f_t(x) := \int{\P((L_3, R_3) \in \ddx (l_3, r_3))} \cdot h_{l_3, r_3}(x), \end{equation} and hence the random variable $Z(t)$ has density $f_t(x-1)$. \end{theorem} Now, as promised, we prove that, conditionally given $(L_3, R_3) = (l_3, r_3)$, the random variable $X$ has a density $f_{l_3, r_3}$. We begin with the case $0 < t < 1$. \begin{lemma} \label{L:main} Let $0 \leq l_3 < t < r_3 \leq 1$ with $(l_3, r_3) \ne (0, 1)$. 
Conditionally given $(L_3, R_3) = (l_3, r_3)$, the random variable $X = \Delta_1 + \Delta_2$ has a right continuous density $f_{l_3, r_3}$. \end{lemma} \begin{proof} We consider three cases based on the values of $(l_3, r_3)$. \smallskip \par\noindent \emph{Case 1}: $l_3 = 0$ and $r_3 < 1$. Since $L_k$ is nondecreasing in~$k$, from $L_3 = 0$ it follows that $L_1 = L_2 = 0$. The unconditional joint distribution of $(L_1, R_1, L_2, R_2, L_3, R_3)$ satisfies \begin{align} &\P(L_1 = 0, R_1 \in \ddx r_1, L_2 = 0, R_2 \in \ddx r_2, L_3 = 0, R_3 \in \ddx r_3) \nonumber \\ &= \mathbb{1}{(t < r_3 < r_2 < r_1 < 1)} \dd r_1 \frac{\ddx r_2}{r_1} \frac{\ddx r_3}{r_2} \label{joint1} \end{align} and hence \begin{align} \P(L_3 = 0, R_3 \in \ddx r_3) &= \mathbb{1}{(t < r_3 <1)} \dd r_3\int_{r_2 = r_3}^{1}{\frac{\ddx r_2}{r_2}}\, \int_{r_1=r_2}^{1}{\frac{\ddx r_1}{r_1}} \nonumber\\ &= \mathbb{1}{(t < r_3 <1)} \dd r_3\int_{r_2 = r_3}^{1}{\frac{\ddx r_2}{r_2}}(-\ln r_2) \nonumber\\ &= \frac{1}{2}\, (\ln r_3)^2\, \mathbb{1}{(t < r_3 <1)} \dd r_3. 
\label{joint2} \end{align} Dividing \eqref{joint1} by \eqref{joint2}, we find \begin{align} \lefteqn{\hspace{-0.5in}\P(L_1 = 0, R_1 \in \ddx r_1, L_2 = 0, R_2 \in \ddx r_2 \mid L_3 = 0, R_3= r_3)} \nonumber \\ &=\frac{2\, r_1^{-1}r_2^{-1}\dd r_1\dd r_2}{(\ln r_3)^2} \, \mathbb{1}{(t < r_3 < r_2 < r_1 < 1)}.\label{cond1} \end{align} Thus for $x \in (2 r_3, 2)$, we find \begin{align} f_{0,r_3}(x) &= \P(X \in \ddx x \mid L_3 = 0, R_3 = r_3) / \ddx x \nonumber\\ &= \int_{r_2}\!\P(R_2 \in \ddx r_2,\,R_1 \in \ddx x - r_2 \mid L_3 = 0, R_3 = r_3) / \ddx x \nonumber\\ &= \frac{2}{(\ln r_3)^2}\int_{r_2=r_3 \vee (x-1)}^{x / 2}{(x-r_2)^{-1}r_2^{-1}} \dd r_2 \nonumber\\ \label{CD1} &= \frac{2}{\left(\ln \frac{1}{r_3}\right)^2} \frac{1}{x} \bigg[ \ln \left(\frac{x-r_3}{r_3}\right) \mathbb{1}{(2r_3 \leq x < 1+r_3)}\nonumber\\ &{} \qquad \qquad \qquad \quad +\ln \left(\frac{1}{x-1}\right) \mathbb{1}{(1+r_3 \leq x < 2)}\bigg]; \end{align} we set $f_{0, r_3}(x) = 0$ for $x \notin (2 r_3, 2)$. \smallskip \par\noindent \emph{Case 2}: $l_3 > 0$ and $r_3 = 1$. This condition implies that $R_1=R_2=1$. Invoking symmetry, we can skip the derivation and immediately write \begin{align} f_{l_3,1}(x) = \frac{2}{\left(\ln \frac{1}{1-l_3}\right)^2} \frac{1}{x}&\biggl[ \ln \left(\frac{x-1+l_3}{1-l_3}\right)\mathbb{1}{(2-2l_3 \leq x < 2-l_3)}\nonumber \\ \label{CD2} &{} \qquad + \ln \frac{1}{x-1} \mathbb{1}{(2-l_3 \leq x < 2)}\biggr] \end{align} for $x \in (2 - 2 l_3, 2)$; we set $f_{l_3, 1}(x) = 0$ for $x \notin (2 - 2 l_3, 2)$. \smallskip \par\noindent \emph{Case 3}: $0 < l_3 < t < r_3 < 1$. 
There are six possible scenarios for the random vector $(L_1, R_1, L_2, R_2, L_3, R_3)$, and to help us discuss the cases, we consider values $l_2, r_2$ satisfying $0 < l_2 < l_3 < t < r_3 < r_2 < 1$.\\ \\ (a) $L_1 = l_2, L_2 = L_3 = l_3$ and $R_1 = R_2 =1, R_3 = r_3$.\\ In this subcase, we consider the event that the first pivot we choose lies between $0$ and $l_3$, the second pivot has value $l_3$, and the third pivot has value $r_3$. Denote this event by $E_{llr}$ (with $llr$ indicating that we shrink the search intervals by moving the lefthand, lefthand, and then righthand endpoints). We have \begin{align} \lefteqn{\hspace{-.5in}\P(L_1 \in \ddx l_2, R_1 = 1, L_2 \in \ddx l_3 , R_2 = 1, L_3 \in \ddx l_3, R_3 \in \ddx r_3)} \nonumber \\ \label{E:3a} &= \mathbb{1}{(0 < l_2 < l_3 < t < r_3 < 1)} \dd l_2 \frac{\ddx l_3}{1-l_2} \frac{\ddx r_3}{1-l_3}. \end{align} Integrating over all possible values of $l_2$, we get \begin{align*} \lefteqn{\hspace{-0.5in}\P(L_3 \in \ddx l_3, R_3 \in \ddx r_3, E_{llr})} \\ &= \mathbb{1}{(0<l_3<t<r_3<1)}\, \frac{1}{1-l_3} \ln \left(\frac{1}{1-l_3}\right) \dd l_3 \dd r_3. \end{align*} (b) $L_1 = L_2 = 0, L_3 = l_3$ and $R_1 = r_2, R_2 = R_3 = r_3$.\\ In this and all subsequent subcases, we use notation like that in subcase~(a). In this subcase, we invoke symmetry in comparison with subcase~(a). The results are \begin{align} \lefteqn{\hspace{-0.5in}\P(L_1 = 0, R_1 \in \ddx r_2, L_2 = 0, L_3 \in \ddx l_3 , R_2 = R_3 \in \ddx r_3)} \nonumber \\ \label{E:3b} &= \mathbb{1}{(0 < l_3 < t < r_3 < r_2 < 1)} \dd r_2 \frac{\ddx r_3}{r_2} \frac{\ddx l_3}{r_3} \end{align} and \[ \P(L_3 \in \ddx l_3, R_3 \in \ddx r_3, E_{rrl}) = \mathbb{1}{(0<l_3<t<r_3<1)}\, \frac{1}{r_3} \ln \left(\frac{1}{r_3}\right) \dd l_3 \dd r_3.
\] (c) $L_1 = L_2 = l_2, L_3 = l_3$ and $R_1 = 1, R_2 = R_3 = r_3$.\\ In this subcase we have \begin{align} \lefteqn{\hspace{-0.5in}\P(R_1 = 1, L_1 = L_2 \in \ddx l_2, L_3 \in \ddx l_3 , R_2 = R_3 \in \ddx r_3)} \nonumber \\ \label{E:3c} &= \mathbb{1}{(0 < l_2 < l_3 < t < r_3 < 1)} \dd l_2 \frac{\ddx r_3}{1-l_2} \frac{\ddx l_3}{r_3-l_2}. \end{align} Integrating over the possible values of $l_2$, we find \begin{align*} \lefteqn{\P(L_3 \in \ddx l_3, R_3 \in \ddx r_3, E_{lrl})/(\ddx l_3 \dd r_3)} \\ &= \mathbb{1}{(0<l_3<t<r_3<1)} \, \frac{1}{1-r_3} \left[ \ln \left(\frac{1}{r_3-l_3}\right)- \ln \left(\frac{1}{r_3}\right) - \ln \left(\frac{1}{1-l_3}\right) \right]. \end{align*} (d) $L_1 = 0, L_2 = L_3 = l_3$ and $R_1 = R_2 = r_2, R_3 = r_3$.\\ In this subcase, by symmetry with subcase~(c), we have \begin{align} \lefteqn{\hspace{-0.5in} \P(L_1 = 0, L_2 = L_3 \in \ddx l_3 , R_1 = R_2 \in \ddx r_2, R_3 \in \ddx r_3)} \nonumber\\ \label{E:3d} &= \mathbb{1}{(0 < l_3 < t < r_3 < r_2 < 1)} \dd r_2 \frac{\ddx l_3}{r_2} \frac{\ddx r_3}{r_2-l_3} \end{align} and \begin{align*} \lefteqn{\hspace{-0.1in}\P(L_3 \in \ddx l_3, R_3 \in \ddx r_3, E_{rlr})/(\ddx l_3 \dd r_3)} \\ &= \mathbb{1}{(0<l_3<t<r_3<1)}\, \frac{1}{l_3} \left[ \ln \left(\frac{1}{r_3-l_3}\right) - \ln \left(\frac{1}{1-l_3}\right) - \ln \left(\frac{1}{r_3}\right) \right]. \end{align*} (e) $L_1 = 0, L_2 = l_2, L_3 = l_3$ and $R_1 = R_2 = R_3 = r_3$.\\ In this subcase we have \begin{align} \lefteqn{\hspace{-0.5in}\P(L_1 = 0, L_2 \in \dd l_2, L_3 \in \ddx l_3, R_1 = R_2 = R_3 \in \ddx r_3)} \nonumber\\ \label{E:3e} &= \mathbb{1}{(0 < l_2 < l_3 < t < r_3 < 1)} \dd r_3 \frac{\ddx l_2}{r_3} \frac{\ddx l_3}{r_3-l_2}. \end{align} Integrating over the possible values of $l_2$, we have \begin{align*} \lefteqn{\hspace{-0.2in} \P(L_3 \in \ddx l_3, R_3 \in \ddx r_3, E_{rll})} \\ &= \mathbb{1}{(0<l_3<t<r_3<1)}\, \frac{1}{r_3} \left[ \ln \left(\frac{1}{r_3-l_3}\right) - \ln \left(\frac{1}{r_3}\right) \right] \dd l_3 \dd r_3.
\end{align*} (f) $L_1 = L_2 = L_3 = l_3$ and $R_1 = 1, R_2 = r_2, R_3 = r_3$.\\ In this final subcase, by symmetry with subcase~(e), we have \begin{align} \lefteqn{\hspace{-0.5in}\P(L_1 = L_2 = L_3 \in \ddx l_3 , R_1 = 1, R_2 \in \ddx r_2, R_3 \in \ddx r_3)} \nonumber\\ \label{E:3f} &= \mathbb{1}{(0 < l_3 < t < r_3 < r_2 < 1)} \dd l_3 \frac{\ddx r_2}{1-l_3} \frac{\ddx r_3}{r_2-l_3} \end{align} and \begin{align*} \lefteqn{\hspace{-0.2in}\P(L_3 \in \ddx l_3, R_3 \in \ddx r_3, E_{lrr})/(\ddx l_3 \dd r_3)} \\ &= \mathbb{1}{(0<l_3<t<r_3<1)}\, \frac{1}{1-l_3} \left[ \ln \left(\frac{1}{r_3-l_3}\right) - \ln \left(\frac{1}{1-l_3}\right) \right]. \end{align*} Summing results from the six subcases, we conclude in Case~3 that \begin{equation} \label{E:4} \P(L_3 \in \ddx l_3, R_3 \in \ddx r_3) = \mathbb{1}{(0<l_3<t<r_3<1)} \, g(l_3, r_3) \dd l_3 \dd r_3, \end{equation} where \begin{align} g(l_3, r_3) &:= \left[\frac{1}{l_3(1-l_3)}+\frac{1}{r_3(1-r_3)}\right]\ln \left(\frac{1}{r_3-l_3}\right) \label{gdef} \\ &{} \qquad \qquad - \left(\frac{1}{l_3}+\frac{1}{1-r_3}\right)\left[\ln \left(\frac{1}{r_3}\right) + \ln \left(\frac{1}{1-l_3}\right)\right]. \nonumber \end{align} The conditional joint distribution of $(L_1, R_1, L_2, R_2)$ given $(L_3, R_3) = (l_3, r_3)$ can be derived by dividing each of \eqref{E:3a}--\eqref{E:3f} by~\eqref{E:4}, and we can then compute $f_{l_3,r_3}$ from these conditional distributions. Let us write \begin{equation} \label{E:5} f_{l_3,r_3}(x) = \mathbb{1}{(0 < l_3 < t < r_3 <1)}\, \frac{1}{g(l_3,r_3)} \sum_{i = 1}^{6}{f_{l_3,r_3}^{(i)}(x)}, \end{equation} where $f_{l_3, r_3}^{(i)}(x)\dd l_3\dd r_3\dd x$ is the contribution to \[ \P(L_3 \in \ddx l_3,\,R_3 \in \ddx r_3, \,X \in \ddx x) \] arising from the $i$th subcase of the six. In subcase~(a) we know that $X = R_1 - L_1 + R_2 - L_2 = 2 - l_2 - l_3$. 
Changing variables from $l_2$ to~$x$, from~\eqref{E:3a} we find \[ f_{l_3,r_3}^{(1)}(x) = \mathbb{1}{(2-2l_3 \leq x < 2-l_3)}\, \frac{1}{1-l_3} \, \frac{1}{x-1+l_3}. \] In subcase~(b) we know that $X = r_2 + r_3$. Changing variables from $r_2$ to~$x$, from~\eqref{E:3b} we find \[ f_{l_3,r_3}^{(2)}(x) = \mathbb{1}{(2r_3 \leq x < 1+ r_3)}\, \frac{1}{r_3} \, \frac{1}{x-r_3}. \] In subcase~(c), we know that $X = 1 - 2 l_2 + r_3$. Changing variables from $l_2$ to~$x$, from~\eqref{E:3c} we find \[ f_{l_3,r_3}^{(3)}(x) = \mathbb{1}{(1+r_3-2l_3 \leq x < 1+r_3)}\, \frac{1}{x+1-r_3} \, \frac{2}{x+r_3-1}. \] In subcase~(d), we know that $X = 2 r_2 - l_3$. Changing variables from $r_2$ to~$x$, from~\eqref{E:3d} we find \[ f_{l_3,r_3}^{(4)}(x) = \mathbb{1}{(2r_3-l_3 \leq x < 2-l_3)}\, \frac{2}{x+l_3} \, \frac{1}{x-l_3}. \] In subcase~(e), we know that $X = 2 r_3 - l_2$. Changing variables from $l_2$ to~$x$, from~\eqref{E:3e} we find \[ f_{l_3,r_3}^{(5)}(x) = \mathbb{1}{(2r_3-l_3 \leq x < 2r_3)}\, \frac{1}{r_3} \, \frac{1}{x-r_3}. \] Finally, in subcase~(f), we know that $X = 1 + r_2 - l_3$. Changing variables from $r_2$ to~$x$, from~\eqref{E:3f} we find \[ f_{l_3,r_3}^{(6)}(x) = \mathbb{1}{(r_3-2l_3+1 \leq x < 2-2l_3)}\, \frac{1}{1-l_3} \, \frac{1}{x-1+l_3}. \] The density functions $f_{0,r_3}$ and $f_{l_3,1}$ we have found in Cases~1 and~2 are continuous. We have chosen to make the functions $f_{l_3, r_3}^{(i)}$ (for $i = 1, \dots, 6$) right continuous in Case~3. Thus the density $f_{l_3, r_3}$ we have determined at~\eqref{E:5} in Case~3 is right continuous. \end{proof} Our next lemma handles the cases $t = 0$ and $t = 1$ that were excluded from \refL{L:main}, and its proof is the same as for Cases~1 and~2 in the proof of \refL{L:main}. \begin{lemma} \label{L:main01} {\rm (a)} Suppose $t = 0$. Let $0 < r_3 < 1$. 
Conditionally given $(L_3, R_3) = (0, r_3)$, the random variable $X = \Delta_1 + \Delta_2$ has the right continuous density $f_{0, r_3}$ specified in the sentence containing~\eqref{CD1}. {\rm (b)} Suppose $t = 1$. Let $0 < l_3 < 1$. Conditionally given $(L_3, R_3) = (l_3, 1)$, the random variable $X = \Delta_1 + \Delta_2$ has the right continuous density $f_{l_3, 1}$ specified in the sentence containing~\eqref{CD2}. \end{lemma} Before proceeding to the derivation of the density function of $J$, we need to check the trivariate measurability of the function $f_t(l_3, r_3, x) := f_{l_3, r_3}(x)$. Given a topological space~$S$, let $\mathcal{B}(S)$ denote its Borel $\sigma$-field, that is, the $\sigma$-field generated by the open sets of~$S$. Also, given $0 < t < 1$, let \[S_t := \{(l_3, r_3) \ne (0, 1):0 \leq l_3 < t < r_3 \leq 1\}. \] \begin{lemma} \label{L:mble} {\rm (a)}~For $0 < t < 1$, the conditional density function $f_t(l_3, r_3, x)$, constructed to be a right continuous function of $x$, is measurable with respect to $\mathcal{B}(S_t \times \mathbb{R})$. {\rm (b)}~For $t = 0$, the conditional density function $f_0(r_3, x) := f_{0, r_3}(x)$, continuous in~$x$, is measurable $\mathcal{B}((0, 1) \times \mathbb{R})$. {\rm (c)}~For $t = 1$, the conditional density function $f_1(l_3, x) := f_{l_3, 1}(x)$, continuous in~$x$, is measurable $\mathcal{B}((0, 1) \times \mathbb{R})$. \end{lemma} We next state the special case for real-valued~$f$ of a lemma of Gowrisankaran~\cite[Theorem 3]{MR291403}, which gives a sufficient condition for the measurability of certain functions~$f$ defined on product spaces. The lemma will help us prove \refL{L:mble}. \begin{lemma}[Gowrisankaran~\cite{MR291403}] \label{L:Gowrisankaran} Let $(\Omega, \cF)$ be a measurable space. Let $f:\Omega \times \mathbb{R} \to \mathbb{R}$.
Suppose that the section mapping $f(\cdot, y)$ is $\cF$-measurable for each $y \in \mathbb{R}$ and that the section mapping $f(\omega, \cdot)$ is either right continuous for each $\omega \in \Omega$ or left continuous for each $\omega \in \Omega$. Then~$f$ is measurable with respect to the product $\sigma$-field $\cF \otimes \mathcal{B}(\mathbb{R})$. \end{lemma} \begin{proof}[Proof of \refL{L:mble}] We prove~(b), then~(c), and finally~(a). \smallskip (b)~Recall the expression~\eqref{CD1} for $f_0(r_3, x)$ [for $0 < r_3 < 1$ and $x \in (2 r_3, 2)$]. We apply \refL{L:Gowrisankaran} with $(\Omega, \cF) = ((0, 1), \mathcal{B}((0, 1)))$. The right continuity of $f_0(r_3, \cdot)$ has already been established in \refL{L:main01}(a). On the other hand, when we fix $x$ and treat $f_0(r_3, x)$ as a function of $r_3$, the conditional density function can be separated into the following cases: \begin{itemize} \item If $x \leq 0$ or $x \geq 2$, then $f_0(r_3, x) \equiv 0$. \item If $0 < x < 2$, then from~\eqref{CD1} we see that $f_0(r_3, x)$ is piecewise continuous (with a finite number of measurable domain-intervals), and hence measurable, in $r_3$. \end{itemize} Since the product $\sigma$-field $\mathcal{B}((0, 1)) \otimes \mathcal{B}(\mathbb{R})$ equals $\mathcal{B}((0, 1) \times \mathbb{R})$, the desired result follows. \smallskip (c)~Assertion~(c) can be proved by a similar argument or by invoking symmetry. \smallskip (a)~We apply \refL{L:Gowrisankaran} with $(\Omega, \cF) = (S_t, \mathcal{B}(S_t))$. The right continuity of $f_t(l_3, r_3, \cdot)$ has already been established in \refL{L:main}, so it suffices to show for each $x \in \mathbb{R}$ that $f_t(l_3, r_3, x)$ is measurable in $(l_3, r_3) \in S_t$. For this it is clearly sufficient to show that $f_t(0, r_3, x)$ is measurable in $r_3 \in (t, 1)$, that $f_t(l_3, 1, x)$ is measurable in $l_3 \in (0, t)$, and that $f_t(l_3, r_3, x)$ is measurable in $(l_3, r_3) \in (0, t) \times (t, 1)$.
All three of these assertions follow from the fact that piecewise continuous functions (with a finite number of measurable domain-pieces) are measurable; in particular, for the third assertion, note that the function~$g$ defined at~\eqref{gdef} is continuous in $(l_3, r_3) \in (0, t) \times (t, 1)$ and that each of the six expressions $f_{l_3, r_3}^{(i)}(x)$ appearing in~\eqref{E:5} is piecewise continuous (with a finite number of measurable domain-pieces) in these values of $(l_3, r_3)$ for each fixed~$x \in \mathbb{R}$. \smallskip This completes the proof. \end{proof} As explained at the outset of this section, a conditional density $\mbox{$h_{l_3, r_3}(\cdot)$}$ for $J(t)$ given $(L_3, R_3) = (l_3, r_3)$ can now be formed by convolving the conditional density function of $X$, namely $f_{l_3,r_3}(\cdot)$, with the conditional distribution function of~$Y$. That is, we can write \begin{equation} \label{E:6} h_{l_3, r_3}(x) = \int{f_{l_3,r_3}(x-y)\,\P\left(Y \in \ddx y \mid (L_3, R_3) = (l_3, r_3)\right)}. \end{equation} We now prove in the next two lemmas that the joint measurability of $f_{l_3, r_3}(x)$ with respect to $(l_3, r_3, x)$ ensures the same for $h_{l_3, r_3}(x)$. \begin{lemma} \label{L:mble_h_x} Let $(\Omega, \mathcal{F})$ be a measurable space. Let $g: \Omega \times \bbR \to \bbR$ be a nonnegative function measurable with respect to the product $\sigma$-field $\mathcal{F} \otimes \mathcal{B}(\bbR)$. Let $V$ and $Y$ be two measurable functions defined on a common probability space and taking values in $\Omega$ and $\bbR$, respectively. Then a regular conditional probability distribution $\P(Y \in \ddx y \mid V )$ for $Y$ given $V$ exists, and the function $\cT g:\Omega \times \bbR \to \bbR$ defined by \[ \cT g(v, x) := \int g(v,x-y) \P(Y \in \ddx y \mid V = v ) \] is measurable with respect to the product $\sigma$-field $\mathcal{F} \otimes \mathcal{B}(\bbR)$.
\end{lemma} \ignore{ \begin{proof} Since $Y$ is a real-valued random variable, by Billingsley~\cite[Theorem 33.3]{MR2893652} or Durrett~\cite[Theorem 4.1.18]{MR2722836} there exists a conditional probability distribution for $Y$ given $V$. Consider the restricted collection \[ \mathcal{H} := \{f \geq 0:Tf\mbox{\ is measurable\ }\mathcal{F} \otimes \mathcal{B}(\bbR)\} \] of functions defined on $\Omega \times \bbR$ and the $\pi$-system \[ \mathcal{A} := \{B_1 \times B_2 : B_1 \in \mathcal{F} \mbox{ and } B_2 \in \mathcal{B}(\bbR)\} \] of all measurable rectangles in $\mathcal{F} \otimes \mathcal{B}(\bbR)$. If we show that the indicator function $\mathbb{1}(A)$ is in~$\mathcal{H}$ for every $A \in \mathcal{A}$, it then follows from the monotone convergence theorem and the monotone class theorem in Durrett~\cite[Theorem~5.2.2]{MR2722836} that~$\mathcal{H}$ contains all nonnegative functions measurable with respect to $\sigma(\mathcal{A}) = \mathcal{F} \otimes \mathcal{B}(\bbR)$, as desired. We thus let $A = B_1 \times B_2 \in \mathcal{A}$ for some $B_1 \in \mathcal{F}$ and $B_2 \in \mathcal{B}(\bbR)$. Then \[ T \mathbb{1}(A)(v, x) = \mathbb{1}_{B_1}(v) \P(Y \in x - B_2 \mid V = v ). \] We claim that \begin{equation} \label{meas} (v, x) \mapsto \P(Y \in x - B \mid V = v )\mbox{\ is\ }\mathcal{F} \otimes \mathcal{B}(\bbR)\mbox{\ measurable}, \end{equation} for any $B \in \mathcal{B}(\bbR)$ and thus $\mathbb{1}(A) \in \mathcal{H}$; so the proof of the claim~\eqref{meas} will complete the proof of the lemma. Since the collection of sets $B \in \mathcal{B}(\bbR)$ satifying~\eqref{meas} is a $\lambda$-system, we need only check~\eqref{meas} for intervals of the form $B = [a,b)$ with $b > a$; we can then apply Dynkin's $\pi$--$\lambda$ theorem to complete the proof of the claim. For fixed $x$, by Billingsley~\cite[Theorem 34.5]{MR2893652} the mapping $v \mapsto \P(Y \in x - B \mid V = v )$ is a version of $\E[\mathbb{1}_B (x - Y) \mid V]$ and so is $\mathcal{F}$-measurable. 
Furthermore, for fixed $v \in \Omega$ and $x, z \in \bbR$ with $z > x$ and $z$ close enough to $x$, we have \begin{align*} | \P(Y \in x &- B \mid V = v ) - \P(Y \in z - B \mid V = v ) |\\ &\leq \P(Y \in (x-b, z-b] \mid V = v ) + \P(Y \in (x-a, z-a] \mid V = v ), \end{align*} and the bound here is small by the right continuity of the conditional distribution function. We have established right continuity of $\P(Y \in x - B \mid V = v )$ in $x$ for fixed $v$, and thus we can apply \refL{L:Gowrisankaran} to conclude that $(v, x) \mapsto \P(Y \in x - B \mid V = v )$ is $\mathcal{F} \otimes \mathcal{B}(\bbR)$ measurable. This completes the proof of the lemma. \end{proof} } \begin{proof} This is standard. For completeness, we provide a proof making use of Kallenberg~\cite[Lemma 3.2(i)]{MR4226142}. First, since~$Y$ is a real-valued random variable, by Billingsley~\cite[Theorem 33.3]{MR2893652} or Durrett~\cite[Theorem 4.1.18]{MR2722836} or Kallenberg~\cite[Theorem 8.5]{MR4226142} there exists a regular conditional probability distribution for~$Y$ given~$V$; this is a probability kernel from~$\Omega$ to~$\bbR$ and can trivially be regarded as a kernel from $\Omega \times \bbR$ to~$\bbR$. Let $S := \Omega \times \bbR$, $T := \bbR$, $\mu_{v, x}(\ddx y) := \P(Y \in \ddx y \mid V = v)$, and $f((v, x), y) := g(v, x - y)$. The conclusion of our lemma is then an immediate consequence of the first assertion in the aforementioned Kallenberg lemma. \end{proof} We can now handle the measurability of $(l_3, r_3, x) \mapsto h_{l_3,r_3}(x)$. \begin{lemma} \label{L:mble_h} \ \\ \indent {\rm (a)}~For $0 < t < 1$, the mapping $(l_3,r_3, x) \mapsto h_{l_3, r_3}(x)$ is $\mathcal{B}(S_t \times \bbR)$ measurable. {\rm (b)}~For $t = 0$, the mapping $(r_3, x) \mapsto h_{0, r_3}(x)$ is $\mathcal{B}((0, 1) \times \bbR)$ measurable. {\rm (c)}~For $t = 1$, the mapping $(l_3, x) \mapsto h_{l_3, 1}(x)$ is $\mathcal{B}((0, 1) \times \bbR)$ measurable. 
\end{lemma} \begin{proof} We prove {\rm (a)}; the claims {\rm (b)} and {\rm (c)} are proved similarly. Choosing $\Omega = S_t$ and $g(l_3, r_3, x) = f_{l_3,r_3}(x)$ with $(l_3,r_3) \in \Omega$ in~\refL{L:mble_h_x}, we conclude that \[ (l_3,r_3, x) \mapsto h_{l_3, r_3}(x) = \cT g(l_3, r_3, x) \] is $\mathcal{B}(S_t \times \bbR)$ measurable. \end{proof} Recall the definition of $f_t(x)$ at~\eqref{E:density}. It then follows from~\refL{L:mble_h} that $f_t(x)$ is well defined and measurable with respect to $x \in \bbR$ for fixed $0 \leq t \leq 1$. This completes the proof of~\refT{T:main}. \section{Uniform boundedness of the density functions} \label{S:boundedness} In this section, we prove that the functions $f_t$ are uniformly bounded for $0 \leq t \leq 1$. \begin{theorem} \label{T:bdd} The densities $f_t$ are uniformly bounded by $10$ for $0 \leq t \leq 1$. \end{theorem} The proof of~\refT{T:bdd} is given by Lemmas~\ref{L:bound} and~\ref{L:bdd_rho_0} below. In particular, the numerical value $10$ comes from the bound in the last line of the proof of \refL{L:bound} plus two times the bound in the last sentence of the proof of \refL{L:bdd_rho_0}. A bound on the function $f_t$ is established by first finding a bound on the conditional density function $f_{l_3,r_3}$. Observe that the expressions in the proof of \refL{L:main} for $f^{(i)}_{l_3, r_3}(x)$ (for $i = 1, \dots, 6$) in Case~3 all involve indicators of intervals. The six endpoints of these intervals are \begin{center} $2 r_3 - l_3$, $2 r_3$, $1 + r_3 - 2 l_3$, $1 + r_3$, $2 - 2 l_3$, and $2 - l_3$, \end{center} with $0 < l_3 < t < r_3 < 1$. The relative order of these six endpoints is determined once we know the value of $\rho = \rho (l_3, r_3) := l_3/(1 - r_3)$. Indeed: \begin{itemize} \item When $\rho \in (0, 1/2)$, the order is \[ 2 r_3 - l_3 < 2 r_3 < 1+ r_3 - 2 l_3 < 1 + r_3 < 2 - 2 l_3 < 2 - l_3.
\] \item When $\rho \in (1/2, 1)$, the order is \[ 2 r_3 - l_3 < 1+ r_3 - 2 l_3 < 2 r_3 < 2 - 2 l_3 < 1 + r_3 < 2 - l_3. \] \item When $\rho \in (1, 2)$, the order is \[ 1+ r_3 - 2 l_3 < 2 r_3 - l_3 < 2 - 2 l_3 < 2 r_3 < 2 - l_3 < 1 + r_3. \] \item When $\rho \in (2, \infty)$, the order is \[ 1+ r_3 - 2 l_3 < 2 r_3 - l_3 < 2 - 2 l_3 < 2 - l_3 < 2 r_3 < 1 + r_3. \] \end{itemize} When $\rho = 0$ (i.e.,\ in Case~1 in the proof of \refL{L:main}:\ $l_3 = 0 < r_3 < 1$), the function $f_{l_3, r_3}$ is given by $f_{0, r_3}$ at~\eqref{CD1}. When $\rho = \infty$ (i.e.,\ in Case~2 in the proof of \refL{L:main}:\ $0 < l_3 < r_3 = 1$), the function $f_{l_3, r_3}$ is given by $f_{l_3, 1}$ at~\eqref{CD2}. The result~\eqref{E:5} for Case~3 in the proof of \refL{L:main} can be reorganized as follows; to simplify notation, we define \begin{align*} m_1(x,l_3,r_3) :=& \frac{1}{r_3(x-r_3)}+\frac{2}{(x+l_3)(x-l_3)},\\ m_2(x,l_3,r_3) :=& \frac{1}{(1-l_3)(x-1+l_3)}+\frac{2}{(x+l_3)(x-l_3)}, \\ m_3(x,l_3,r_3) :=& \frac{2}{(x+1-r_3)(x+r_3-1)}+\frac{1}{(1-l_3)(x+l_3-1)}, \\ m_4(x,l_3,r_3) :=& \frac{1}{r_3 (x-r_3)}+\frac{2}{(x+1-r_3)(x+r_3-1)}. \end{align*} When $\rho \in (0, 1)$, the conditional density $f_{l_3, r_3}$ satisfies \begin{align} f_{l_3,r_3}(x)\, g(l_3,r_3)&=\mathbb{1}{(2r_3-l_3 \leq x < 1+r_3-2l_3)}\, m_1(x,l_3,r_3) \label{rho<1}\\ &+\mathbb{1}{(1+r_3-2l_3\leq x<1+r_3)}\, [m_2(x,l_3,r_3)+m_4(x,l_3,r_3)] \nonumber\\ &+\mathbb{1}{(1+r_3 \leq x < 2-l_3)}\, m_2(x,l_3,r_3). \nonumber \end{align} Lastly, when $\rho \in (1, \infty)$, the conditional density $f_{l_3, r_3}$ satisfies \begin{align} f_{l_3,r_3}(x)\, g(l_3,r_3) &=\mathbb{1}{(1+r_3-2l_3 \leq x < 2r_3-l_3)}\, m_3(x,l_3,r_3) \label{rho>1}\\ &+\mathbb{1}{(2r_3-l_3\leq x<2-l_3)}\, [m_2(x,l_3,r_3) + m_4(x,l_3,r_3)] \nonumber\\ &+\mathbb{1}{(2-l_3 \leq x < 1+r_3)}\, m_4(x,l_3,r_3). \nonumber \end{align} Recall the definition of $f_t(x)$ at~\eqref{E:density}.
For any $x \in \bbR$ we can decompose $f_t(x)$ into three contributions: \begin{align*} f_t(x) &= \E\,h_{L_3, R_3}(x) \\ &= \E [h_{L_3, R_3}(x);\,\rho(L_3, R_3) = 0] + \E [h_{L_3, R_3}(x);\,\rho(L_3, R_3) = \infty] \\ &{} \qquad + \E [h_{L_3, R_3}(x);\,0 < \rho(L_3, R_3) < \infty]. \end{align*} We first consider the contribution from the case $0 < \rho(L_3, R_3) < \infty$ for any $0 < t < 1$, noting that this case does not contribute to $f_t(x)$ when $t = 0$ or $t=1$. Define \begin{equation} \label{E:bdd_rho_finite} b(l_3, r_3) = \frac{1}{g(l_3, r_3)} \frac{3}{2} \left[ \frac{1}{r_3(r_3 - l_3)} + \frac{1}{(1-l_3)(r_3 - l_3)}\right]. \end{equation} \begin{lemma} \label{L:bound} The contribution to the density function $f_t$ from the case $0 < \rho(L_3, R_3) <\infty$ is uniformly bounded for $0 < t < 1$. More precisely, given $0 < t < 1$ and $0 < l_3 < t < r_3 < 1$, we have \begin{equation} \label{fhbound} f_{l_3,r_3}(x) \leq b(l_3, r_3)\mbox{\rm \ \ and\ \ }h_{l_3,r_3}(x) \leq b(l_3, r_3)\mbox{\rm \ for all\ }x \in \bbR; \end{equation} and, moreover, $\E [h_{L_3, R_3}(x);\,0 < \rho(L_3, R_3) < \infty]$ is uniformly bounded for $0 < t < 1$. \end{lemma} \begin{proof} For~\eqref{fhbound}, we need only establish the bound on~$f$. We begin by bounding \eqref{rho<1} in the case $0 < \rho < 1$. The function $m_1$ is a decreasing function of $x$ and thus reaches its maximum in~\eqref{rho<1} when $x = 2r_3-l_3$: \[ m_1(x, l_3, r_3) \leq m_1(2r_3 -l_3, l_3, r_3) = \frac{3}{2} \frac{1}{r_3 (r_3 - l_3)}. \] The function $m_2 + m_4$ is also a decreasing function of $x$, and the maximum in~\eqref{rho<1} occurs at $x = 1+r_3 - 2 l_3$. Plug in this $x$-value and use the fact that $1 - l_3 > r_3$ when $\rho < 1$ to obtain \[ (m_2 + m_4)(x, l_3, r_3) \leq \frac{3}{2} \frac{1}{r_3 (r_3 - l_3)} + \frac{3}{2} \frac{1}{(1-l_3) (r_3 - l_3)}. \] Finally, the function $m_2$ is again a decreasing function of $x$, and the maximum in~\eqref{rho<1} occurs at $x = 1 + r_3$.
Plug in this $x$-value and use the facts that $1+r_3 - l_3 > 2 r_3$ and $1 + r_3 + l_3 > r_3 - l_3$ to conclude \[ m_2(x, l_3, r_3) \leq \frac{1}{r_3 (r_3 - l_3)} + \frac{1}{(1-l_3) (r_3 - l_3)}. \] Combining the above three inequalities, we conclude that for $0 < \rho < 1$ we have, for all~$x$, \[ f_{l_3, r_3}(x) \leq \frac{1}{g(l_3, r_3)} \frac{3}{2} \left[ \frac{1}{r_3 (r_3 - l_3)} + \frac{1}{(1-l_3) (r_3 - l_3)} \right]. \] The method to upper-bound \eqref{rho>1} is similar to that for~\eqref{rho<1}, or one can again invoke symmetry, and we omit the details. For the expectation of $b(L_3, R_3)$, we see immediately that \begin{align*} \E [b\left(L_3, R_3\right);\,0 < \rho(L_3, R_3) < \infty] &= \frac{3}{2} \int_0^{t}{\int_{t}^1\!\left[ \frac{1}{r (r - l)} + \frac{1}{(1-l) (r - l)} \right] \dd r \dd l} \\ &= \frac{\pi^2}{4} + \frac{3}{2} (\ln t)[\ln(1 - t)] \leq \frac{\pi^2}{4} + \frac{3}{2} (\ln 2)^2. \end{align*} \end{proof} For the cases $\rho(L_3, R_3) = 0$ and $\rho(L_3, R_3) = \infty$, we cannot find a constant bound $b(l_3, r_3)$ on the function $f_{l_3, r_3}$ such that the corresponding contributions to $\E b(L_3, R_3)$ are bounded for $0 \leq t \leq 1$. Indeed, although we shall omit the proof since it would take us too far afield, there exists no such bound $b(l_3, r_3)$. Instead, to prove the uniform boundedness of the contributions in these two cases, we take a different approach. The following easily-proved lemma comes from Gr\"{u}bel and R\"{o}sler~\cite[proof of Theorem~9]{MR1372338}. \begin{lemma} \label{L:Stochastic_upper_bdd} Consider a sequence of independent random variables $V_1, V_2, \dots$, each uniformly distributed on $(1/2, 1)$, and let \begin{equation} \label{E:V} V := 1+ \sum_{n=1}^{\infty} \prod_{k=1}^n V_k. \end{equation} Then the random variables $Z(t)$, $0 \leq t \leq 1$, defined at~\eqref{E:3} are all stochastically dominated by $V$.
Furthermore, $\E V = 4$; and $V$ has everywhere finite moment generating function~$m$ and therefore superexponential decay in the right tail, in the sense that for any $\theta \in (0, \infty)$ we have $\P(V \geq x) = o(e^{-\theta x})$ as $x \to \infty$. \end{lemma} The following lemma pairs the stochastic upper bound~$V$ on $Z(t)$ with a stochastic lower bound. These stochastic bounds will be useful in later sections. Recall that the Dickman distribution with support $[1, \infty)$ is the distribution of $Z(0)$. \begin{lemma} \label{L:Stochastic_lower_bdd} Let $D$ be a random variable following the Dickman distribution with support $[1, \infty)$. Then for all $0 \leq t \leq 1$ we have $D \leq Z(t) \leq V$ stochastically. \end{lemma} \begin{proof} Recall that $\Delta_1(t) = R_1(t) - L_1(t)$. We first use a coupling argument to show that $\Delta_1(t)$ is stochastically increasing for $0 \leq t \leq 1 /2$. Let $U = U_1 \sim \mbox{uniform}(0, 1)$ be the first key in the construction of~$Z$ as described in \eqref{taukt}--\eqref{E:3}. Let $0 \leq t_1 < t_2 \leq 1/2$. It is easy to see that $\Delta_1(t_1) = \Delta_1(t_2)$ unless $t_1 < U < t_2$, in which case $\Delta_1(t_1) = U < t_2 \leq 1/2 \leq 1 - t_2 < 1 - U = \Delta_1(t_2)$. Let $V_1 \sim \mbox{uniform}(1/2, 1)$, as in \refL{L:Stochastic_upper_bdd}, and let $0 \leq t \leq 1/2$. Since $\Delta_1(0) \overset{\mathcal{L}}{=} U$ and $\Delta_1(1/2) \overset{\mathcal{L}}{=} V_1$, we immediately have $U \leq \Delta_1(t) \leq V_1$ stochastically. This implies by a simple induction argument on~$k$ involving the conditional distribution of $\Delta_k(t)$ given $\Delta_{k - 1}(t)$ that $D \leq Z(t) \leq V$ stochastically. The case $1/2 < t \leq 1$ then follows by symmetry, since replacing each key $U_i$ by $1 - U_i$ shows that $Z(t) \overset{\mathcal{L}}{=} Z(1 - t)$. \end{proof} \begin{remark} (a)~Note that we do \emph{not} claim that $Z(t)$ is stochastically increasing in $t \in [0, 1/2]$. Indeed, other than the stochastic ordering $D = Z(0) \leq Z(t)$, we do not know whether any stochastic ordering relations hold among the random variables $Z(t)$.
(b)~The random variable~$V$ can be interpreted as a sort of ``limiting greedy (or `on-line') worst-case {\tt QuickQuant} normalized key-comparisons cost''. Indeed, if upon each random bisection of the search interval one always chooses the half of greater length and sums the lengths to get $V^{(n)}$, then the limiting distribution of $V^{(n)} / n$ is that of~$V$. \end{remark} \begin{lemma} \label{L:bdd_rho_0} The contributions to the density function $f_t$ from the cases $\rho(L_3, R_3) = 0$ and $\rho(L_3, R_3) = \infty$ are uniformly bounded for $0 \leq t \leq 1$. \end{lemma} \begin{proof} Because the Dickman density is bounded above by $e^{-\gamma}$, we need only consider $0 < t < 1$. The case $\rho(L_3(t), R_3(t)) = 0$ corresponds to $L_3(t) = 0$, while the case $\rho(L_3(t), R_3(t)) = \infty$ corresponds to $R_3(t) = 1$. By symmetry, the contribution from $R_3(t) = 1$ is the same as the contribution from $L_3(1 - t) = 0$, so we need only show that the contribution from $L_3(t) = 0$ is uniformly bounded. We will do this by showing that the larger contribution from $L_2(t) = 0$ is uniformly bounded. By conditioning on the value of $R_2(t)$, the contribution from $L_2(t) = 0$ is \begin{align} c_t(x) &:= \P(L_2(t) = 0,\,J(t) \in \ddx x) / \ddx x \nonumber \\ \label{cx} &= \int_{r \in (t, 1)}\!\int_{z > 1}\!{\bf 1}(r \leq x - r z < 1)\,(x - r z)^{-1}\,\P(Z(t / r) \in \ddx z)\dd r. \end{align} If $z > 1$ and $r \leq x - r z$, then $(x / r) - 1 \geq z > 1$ and so $r < x / 2$. Therefore we find \[ c_t(x) \leq \int_t^{\min\{1,\,x / 2\}}\!\int_{\max\{1,\,(x - 1) / r\} < z \leq (x / r) - 1}\!(x - r z)^{-1}\,\P(Z(t / r) \in \ddx z)\dd r. \] The integrand (including the implicit indicator function) in the inner integral is an increasing function of~$z$ over the interval $(- \infty, (x / r) - 1]$, with value $r^{-1}$ at the upper endpoint of the interval. 
We can thus extend it to a nondecreasing function $\phi \equiv \phi_{x, r}$ with domain $\bbR$ by setting $\phi(z) = r^{-1}$ for $z > (x / r) - 1$. It then follows that \begin{align} c_t(x) &\leq \int_0^{\min\{1,\,x / 2\}} \left[ \int_{\max\{1,\,(x - 1) / r\} < z \leq (x / r) - 1}\!(x - r z)^{-1}\,\P(V \in \ddx z) \right. \nonumber\\ &{} \hspace{1.5in} + \left. r^{-1}\,\P\left( V > \frac{x}{r} - 1 \right) \right] \dd r \nonumber \\ &\leq \int_0^{\min\{1,\,x / 2\}}\!\int_{\max\{1,\,(x - 1) / r\} < z \leq (x / r) - 1}\!(x - r z)^{-1}\,\P(V \in \ddx z)\dd r \nonumber\\ \label{cxbound} &{} \hspace{1.5in} + \int_0^{x / 2}\!r^{-1}\,\P\left( V > \frac{x}{r} - 1 \right) \dd r. \end{align} By the change of variables $v = (x / r) - 1$, the second term in~\eqref{cxbound} equals \[ \int_1^{\infty} (v + 1)^{-1} \P(V > v)\dd v \leq \frac{1}{2} \int_0^{\infty} \P(V > v)\dd v = \frac{1}{2} \E V = 2. \] Comparing the integrals $c_t(x)$ at~\eqref{cx} and the first term in~\eqref{cxbound}, we see that the only constraint that has been discarded is $r > t$. We therefore see by the same argument that produces~\eqref{cx} that the first term in~\eqref{cxbound} is the value of the density for $W := U_1 (1 + U_2 V)$ at~$x$, where $U_1$, $U_2$, and $V$ are independent and $U_1$ and $U_2$ are uniformly distributed on $(0, 1)$. Thus to obtain the desired uniform boundedness of $f_t$ we need only show that~$W$ has a bounded density. For that, it suffices to observe that the conditional density of~$W$ given $U_2$ and~$V$ is bounded above by~$1$ (for any values of $U_2$ and~$V$), and so the unconditional density is bounded by~$1$. We conclude that $c_t(x) \leq 3$, and this completes the proof. \end{proof} \begin{remark} Based on simulation results, we conjecture that the density functions $f_t$ are uniformly bounded by $e^{-\gamma}$ (the sup-norm of the right-continuous Dickman density $f_0$) for $0 \leq t \leq 1$. 
\end{remark} \section{Uniform continuity of the density function $f_t$} \label{S:uniform continuity} From the previous section, we know that for $0 < t < 1$ in the case $0 < l < t < r < 1$ (i.e.,\ the case $0 < \rho < \infty$) the function $f_{l, r}$ is c\`adl\`ag (that is, a right-continuous function with left limits) and bounded above by $b(l, r)$, where the corresponding contribution $\E[b(L_3,R_3);\,0 < \rho(L_3, R_3) < \infty]$ is finite. Applying the dominated convergence theorem, we conclude that the contribution to $f_t$ from this case is also c\`adl\`ag. For the cases $0 = l < t < r < 1$ ($\rho = 0$) and $0 < l < t < r = 1$ ($\rho = \infty$), the functions $f_{0,r}$ and $f_{l,1}$ are both continuous on the real line. In this section, we will build bounds $b_t(l, r)$ (note that these bounds depend on $t$) for these two cases (\refL{L:b1} for $\rho = 0$ and \refL{L:b2} for $\rho = \infty$) in a similar fashion to~\refL{L:bdd_rho_0} such that both $\E[b_t(L_3,R_3);\,\rho(L_3, R_3) = 0]$ and $\E[b_t(L_3,R_3);\,\rho(L_3, R_3) = \infty]$ are finite for any $0 < t < 1$. Given these bounds, we can apply the dominated convergence theorem to conclude that the density $f_t$ is c\`adl\`ag. Later, this result will be sharpened substantially in~\refT{T:uniform_cont}. Let $\alpha \approx 3.59112$ be the unique real solution of $1 + x - x \ln x = 0$ and let $\beta := 1 / \alpha \approx 0.27846$. Define \[ b_1(r) := \frac{2}{\ln r^{-1}} \frac{1}{1+r} \quad \mbox{and} \quad b_2(r) := \frac{2}{(\ln r^{-1})^2} \frac{1}{r} \beta. \] \begin{lemma} \label{L:b1} Suppose $\rho = 0$, i.e.,\ $0 = l_3 < t < r_3 < 1$. If $t \geq \beta$, then the optimal constant upper bound on $f_{l_3, r_3}$ is \[ b_t(l_3, r_3) = b_1(r_3), \] with corresponding contribution \[ \E[b_t(L_3(t), R_3(t));\,\rho\left(L_3(t), R_3(t)\right) = 0] = \int_t^1 \frac{\ln r^{-1}}{1 + r}\dd r \leq \int_{\beta}^1\!\frac{\ln r^{-1}}{1 + r}\dd r < \frac{1}{4} \] to $\E b_t(L_3(t), R_3(t))$.
If $t < \beta$, then the optimal constant upper bound on $f_{l_3, r_3}$ is the continuous function \[ b_t(l_3, r_3) = b_1(r_3) \mathbb{1}{(\beta \leq r_3 < 1)} + b_2(r_3) \mathbb{1}{(t < r_3 < \beta)} \] of $r_3 \in (t, 1)$, with corresponding contribution \begin{align*} \E[b_t\left(L_3(t), R_3(t)\right);\,\rho\left(L_3(t), R_3(t)\right) = 0] &= \int_{\beta}^1 \frac{\ln r^{-1}}{1 + r}\dd r + \beta (\ln \beta - \ln t) \\ &< \frac{1}{4} + \beta (\ln \beta - \ln t). \end{align*} \end{lemma} \begin{lemma} \label{L:b2} Suppose $\rho = \infty$, i.e.,\ $0 < l_3 < t < r_3 = 1$. If $t \leq 1-\beta$, then the optimal constant upper bound on $f_{l_3, r_3}$ is \[ b_t(l_3, r_3) = b_1(1 - l_3), \] with corresponding contribution \[ \E[b_t\left(L_3(t), R_3(t)\right);\,\rho\left(L_3(t), R_3(t)\right) = \infty] = \int_{1 - t}^1 \frac{\ln r^{-1}}{1 + r}\dd r \leq \int_{\beta}^1\!\frac{\ln r^{-1}}{1 + r}\dd r < \frac{1}{4}. \] If $t > 1-\beta$, then the optimal constant upper bound on $f_{l_3, r_3}$ is the continuous function \[ b_t(l_3, r_3) = b_1(1-l_3)\mathbb{1}{(0 < l_3 \leq 1-\beta)} + b_2(1-l_3) \mathbb{1}{(1-\beta < l_3 < t)} \] of $l_3 \in (0, t)$, with corresponding contribution \begin{align*} \E[b_t\left(L_3(t), R_3(t)\right);\,\rho\left(L_3(t), R_3(t)\right) = \infty] &= \int_{\beta}^1 \frac{\ln r^{-1}}{1 + r}\dd r + \beta [\ln \beta - \ln(1 - t)] \\ &< \frac{1}{4} + \beta [\ln \beta - \ln(1 - t)]. \end{align*} \end{lemma} We prove \refL{L:b1} here, and \refL{L:b2} follows similarly or by symmetry. \begin{proof}[Proof of \refL{L:b1}] When $\rho = 0$, we have $l_3 = 0$, and the conditional density function is $f_{0,r_3}$ in~\eqref{CD1}. The expression in square brackets at~\eqref{CD1} is continuous and unimodal in~$x$, with maximum value at $x = 1 + r_3$.
Because the factor $1 / x$ is decreasing, it follows that the maximum value of $f_{0, r_3}(x)$ is the maximum of \[ \frac{2}{\left(\ln r_3^{-1} \right)^2} \frac{1}{x} \ln \left(\frac{x-r_3}{r_3}\right) \] over $x \in [2 r_3, 1 + r_3]$, i.e.,\ the maximum of \[ \frac{2}{r_3 \left(\ln r_3^{-1} \right)^2} \frac{\ln y}{1 + y} \] over $y \in [1, 1 / r_3]$. A simple calculation shows that the displayed expression is strictly increasing for $y \in [1, \alpha]$ and strictly decreasing for $y \in [\alpha, \infty)$. Thus the maximum for $y \in [1, 1 / r_3]$ is achieved at $y = \alpha$ if $\alpha \leq 1 / r_3$ and at $y = 1 / r_3$ if $\alpha \geq 1 / r_3$. Equivalently, $f_{0, r_3}(x)$ is maximized at $x = r_3 (\alpha + 1)$ if $r_3 \leq \beta$ and at $x = 1 + r_3$ if $r_3 \geq \beta$. The claims about the optimal constant upper bound on $f_{l_3, r_3}$ and the contribution to $\E b_t(L_3(t), R_3(t))$ now follow readily. \end{proof} \begin{remark} \label{R:simple_bdd} If we are not concerned about finding the best possible upper bound, then for the case $\rho = 0$ we can choose $b_t(l , r) := b_2(r)$; for the case $\rho = \infty$, we can choose $b_t(l , r) := b_2(1-l)$. These two bounds still get us the desired finiteness of the contributions to $\E b_t(L_3, R_3)$ for any $0 < t < 1$. \end{remark} \begin{theorem} \label{T:uniform_cont} For $0 < t < 1$, the density function $f_t:\bbR \to [0, \infty)$ is uniformly continuous. \end{theorem} \begin{proof} Fix $0 < t < 1$. By the dominated convergence theorem, the contributions to $f_t(x)$ from $0 = l < t < r < 1$ and $0 < l < t < r = 1$, namely, the functions \[ c_0(x) := \int_{r, y}\!{f_{0, r}(x - y)\,\P(L_3(t) = 0,\,R_3(t) \in \ddx r,\,Y \in \ddx y)} \] and \[ c_1(x) := \int_{l, y}\!{f_{l, 1}(x - y)\,\P(L_3(t) \in \ddx l,\,R_3(t) = 1,\,Y \in \ddx y)}, \] are continuous for $x \in \bbR$. 
Further, according to~\eqref{E:4} and~\eqref{E:5}, the contribution from $0 < l < t < r < 1$ is $\sum_{i = 1}^6 c^{(i)}(x)$, where we define \[ c^{(i)}(x) := \int_{l, r, y}\!{f^{(i)}_{l, r}(x - y)\,\P(Y \in \ddx y \mid (L_3(t), R_3(t)) = (l, r))}\dd l\dd r. \] It is easy to see that all the functions $c_0$, $c_1$, and $c^{(i)}$ for $i = 1,\dots,6$ vanish for arguments $x \leq 0$. To prove the uniform continuity of $f_t(x)$ for $x \in \bbR$, it thus suffices to show that each of the six functions $c^{(i)}$ for $i = 1,\dots,6$ is continuous on the real line and that each of the eight functions $c_0$, $c_1$, and $c^{(i)}$ for $i = 1,\dots,6$ vanishes in the limit as argument $x \to \infty$. Fix $i \in \{1, \dots, 6\}$. Continuity of $c^{(i)}$ holds since $f^{(i)}_{l, r}$ is bounded by $b(l,r)$ defined at~\eqref{E:bdd_rho_finite} and is continuous except at the boundary of its support. To illustrate, consider, for example, $i = 3$. For each fixed $0 < l < t < r < 1$ and $x \in \bbR$, we have $f_{l, r}^{(3)}(x + h - y) \to f_{l, r}^{(3)}(x - y)$ as $h \to 0$ for all but two exceptional values of~$y$, namely, $y = x - (1 + r - 2 l)$ and $y = x - (1 - r)$. From the discussion following~\eqref{E:Y} and from \refT{T:main}, we know that the conditional law of~$Y$ given $(L_3(t), R_3(t)) = (l, r)$ has a density with respect to Lebesgue measure, and hence the set of two exceptional points has zero measure under this law. We conclude from the dominated convergence theorem that \[ \int_{y}\!{f^{(i)}_{l, r}(x - y)\,\P(Y \in \ddx y \mid (L_3(t), R_3(t)) = (l, r))} \] is continuous in $x \in \bbR$. It now follows by another application of the dominated convergence theorem that $c^{(i)}$ is continuous on the real line. 
Since the eight functions $f_{0, r}$, $f_{l, 1}$, and $f_{l, r}^{(i)}$ for $i = 1, \dots, 6$ all vanish for all sufficiently large arguments, another application of the dominated convergence theorem shows that $c_0(x)$, $c_1(x)$, and $c^{(i)}(x)$ for $i = 1, \dots, 6$ all vanish in the limit as $x \to \infty$. This completes the proof. \end{proof} \begin{remark} For any $0 < t < 1$, by the fact that \[ J(t) \geq R_1(t) - L_1(t) \geq \min (t, 1 - t), \] we have $\P(J(t) < \min (t, 1 - t)) = 0$ and thus $f_t(\min (t, 1 - t)) = 0$ by~\refT{T:uniform_cont}. This is a somewhat surprising result since we know that the right-continuous Dickman density $f_0$ satisfies $f_0(0) = e^{-\gamma} > 0$. \end{remark} \begin{remark} Since $f_0$ is both (uniformly) continuous and piecewise differentiable on $(0, \infty)$, it might be natural to conjecture that the densities $f_t$ for $0<t<1$ are also piecewise differentiable. Later, in \refT{T:Lipschitz_cont}, we prove that the densities $f_t$ are Lipschitz continuous, which implies that each of them is almost everywhere differentiable. \end{remark} \section{Integral equation for the density functions} \label{S:Integral equation} In this section we prove that for $0 \leq t < 1$ and $x \in \bbR$, the density function $f_t(x)$ is jointly Borel measurable in $(t, x)$. By symmetry, we can conclude that $f_t(x)$ is jointly Borel measurable in $(t, x)$ for $0 \leq t \leq 1$. We then use this result to establish an integral equation for the densities. \smallskip Let $F_t$ denote the distribution function for $J(t)$. Because $F_t$ is right continuous, it is Borel measurable (for each~$t$).
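Before turning to measurability, we record a quick numerical sanity check, not used in any proof: the recursive interval construction \eqref{taukt}--\eqref{E:3} of $Z(t) = 1 + J(t)$ is straightforward to simulate. The sketch below (the helper name \verb|sample_J| and the tolerances are ours) illustrates both the support bound $J(t) \geq \min(t, 1-t)$ from the remark above and the known median value $\E J(1/2) = 1 + 2 \ln 2$ for {\tt QuickQuant}.

```python
import math
import random

def sample_J(t, eps=1e-9, rng=random):
    """Simulate J(t): the sum of the lengths of the nested random
    intervals containing t, truncated once the current interval is
    shorter than eps (the neglected tail has mean of order eps)."""
    l, r, total = 0.0, 1.0, 0.0
    while r - l > eps:
        u = rng.uniform(l, r)   # next key, uniform in the current interval
        if u > t:
            r = u               # t lies in the left subinterval
        else:
            l = u               # t lies in the right subinterval
        total += r - l          # Delta_k(t) = R_k(t) - L_k(t)
    return total

random.seed(1)
t = 0.5
samples = [sample_J(t) for _ in range(50_000)]
mean_J = sum(samples) / len(samples)  # should be near 1 + 2 ln 2 = 2.386...
min_J = min(samples)                  # should respect J(t) >= min(t, 1 - t)
```

Truncating the recursion once the interval is shorter than $10^{-9}$ biases each sample downward only negligibly, since the neglected tail equals the remaining interval length times an independent copy of $Z$.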
\begin{lemma} \label{L:discmeas} For each positive integer~$n$, the mapping \[ (t, x) \mapsto F_{\frac{\lfloor n t \rfloor + 1}{n}}(x) \quad (0 \leq t < 1,\ x \in \bbR) \] is Borel measurable. \end{lemma} \begin{proof} Note that \[ F_{\frac{\lfloor n t \rfloor + 1}{n}}(x) = \sum_{j = 1}^n \mathbb{1}\left( \frac{j - 1}{n} \leq t < \frac{j}{n} \right) F_{\frac{j}{n}}(x). \] Each term is the product of a Borel measurable function of~$t$ and a Borel measurable function of~$x$ and so is a Borel measurable function of $(t, x)$. The same is then true of the sum. \end{proof} \begin{lemma} \label{L:Frc} For each $0 \leq t < 1$ and $x \in \bbR$, as $n \to \infty$ we have \[ F_{\frac{\lfloor n t \rfloor + 1}{n}}(x) \to F_t(x). \] \end{lemma} \begin{proof} We reference Gr\"{u}bel and R\"{o}sler~\cite{MR1372338}, who construct a process $J = (J(t))_{0 \leq t \leq 1}$ with $J(t)$ having distribution function $F_t$ for each~$t$ and with right continuous sample paths. It follows (for each $t \in [0, 1)$) that $F_u$ converges weakly to $F_t$ as $u \downarrow t$. But we know that $F_t$ is a continuous (and even continuously differentiable) function, so for each $x \in \bbR$ we have $F_u(x) \to F_t(x)$ as $u \downarrow t$. The result follows. \end{proof} \begin{proposition} \label{P:Fmeas} The mapping \[ (t, x) \mapsto F_t(x) \quad (0 \leq t < 1,\ x \in \bbR) \] is Borel measurable. \end{proposition} \begin{proof} According to Lemmas~\ref{L:discmeas}--\ref{L:Frc}, this mapping is the pointwise limit as $n \to \infty$ of the Borel measurable mappings in \refL{L:discmeas}. \end{proof} Let $f_t$ denote the continuous density for $F_t$, as in \refT{T:uniform_cont}. \begin{theorem} \label{T:fmeas} The mapping \[ (t, x) \mapsto f_t(x) \quad (0 \leq t < 1,\ x \in \bbR) \] is Borel measurable. \end{theorem} \begin{proof} By the fundamental theorem of calculus, $f_t = F'_t$.
The mapping in question is thus the (sequential) limit of difference quotients that are Borel measurable by \refP{P:Fmeas} and hence is Borel measurable. \end{proof} Now we are ready to derive integral equations. We start with an integral equation for the distribution functions $F_t$. \begin{proposition} \label{P:Fint} The distribution functions $(F_t)$ satisfy the following integral equation for $0 \leq t \leq 1$ and $x \in \bbR$: \begin{equation} \label{Fint} F_t(x) = \int_{l \in (0, t)}\!F_{\frac{t - l}{1 - l}}\!\left( \frac{x}{1 - l} - 1 \right)\dd l + \int_{r \in (t, 1)}\!F_{\frac{t}{r}}\!\left( \frac{x}{r} - 1 \right)\dd r. \end{equation} \end{proposition} \begin{proof} This follows by conditioning on the value of $(L_1(t), R_1(t))$. Observe that each of the two integrands is (by \refP{P:Fmeas} for $t \notin \{0, 1\}$ and by right continuity of $F_0$ and $F_1$ for $t \in \{0, 1\}$) indeed [for fixed $(t, x)$] a Borel measurable function of the integrating variable. \end{proof} \begin{remark} It follows from (i)~the changes of variables from~$l$ to $v = (t - l) / (1 - l)$ in the first integral in~\eqref{Fint} and from~$r$ to $v = t / r$ in the second integral, (ii)~the joint continuity of $f_t(x)$ in $(t, x)$ established later in \refC{C:joint_continuous}, and (iii)~Leibniz's formula that $F_t(x)$ is differentiable with respect to $t \in (0, 1)$ for each fixed $x \in \bbR$. \end{remark} Integral equation~\eqref{Fint} for the distribution functions $F_t$ immediately leads us to an integral equation for the density functions $f_t$. \begin{proposition} \label{P:fint} The continuous density functions $(f_t)$ satisfy the following integral equation for $0 < t < 1$ and $x \in \bbR$: \[ f_t(x) = \int_{l \in (0, t)}\!(1 - l)^{-1} f_{\frac{t - l}{1 - l}}\!\left( \frac{x}{1 - l} - 1 \right)\dd l + \int_{r \in (t, 1)}\!r^{-1} f_{\frac{t}{r}}\!\left( \frac{x}{r} - 1 \right)\dd r. \] \end{proposition} \begin{proof} Fix $t \in (0, 1)$. 
Differentiate~\eqref{Fint} with respect to~$x$. It is easily proved by an argument applying the dominated convergence theorem to difference quotients and the mean value theorem that we can differentiate under the integral signs in~\eqref{Fint} provided that \begin{equation} \label{f*} \int_{l \in (0, t)}\!(1 - l)^{-1} f_{\frac{t - l}{1 - l}}^*\dd l + \int_{r \in (t, 1)}\!r^{-1} f_{\frac{t}{r}}^*\dd r \end{equation} is finite, where $f^*_t$ denotes any upper bound on $f_t(x)$ as~$x$ varies over~$\bbR$. By \refT{T:bdd} we can simply choose $f^*_t = 10$. Then~\eqref{f*} equals~$10$ times \[ - \ln (1-t) - \ln t, \] which is finite. \end{proof} In the next proposition, we provide an integral equation based on the formula for $f_t$ in~\eqref{E:density}; this integral equation will be useful in the next section. Recall that $Y(t) = \sum_{k=3}^\infty \Delta_k(t)$. By \eqref{E:Y}, the conditional distribution of $Y(t) / (r_3 - l_3)$ given $(L_3, R_3) = (l_3, r_3)$ is the (unconditional) distribution of $Z(\frac{t - l_3}{r_3-l_3}) = 1 + J(\frac{t - l_3}{r_3-l_3})$. Applying~\refT{T:main} to $Z(\frac{t - l_3}{r_3-l_3})$ leads us to an integral equation for the density function of $J(t)$. \begin{proposition} \label{Integral equation} The continuous density functions $f_t$ for the random variables $J(t) = Z(t) - 1$ satisfy the integral equation \[ f_t(x) = \int{\P((L_3(t), R_3(t)) \in \ddx (l_3, r_3))} \cdot h_t(x \mid l_3, r_3) \] for $x \geq 0$, where \[ h_t(x \mid l_3, r_3) = \int{f_{l_3,r_3}(x-y)\, (r_3 - l_3)^{-1} \, f_{\frac{t - l_3}{r_3 - l_3}}\!\left(\frac{y}{r_3 - l_3} - 1\right)\dd y}. \] \end{proposition} \section{Right-tail behavior of the density function} \label{S:decay} In this section we will prove, \emph{uniformly} for $0 < t < 1$, that the continuous density functions $f_t$ enjoy the same superexponential decay bound as Gr\"{u}bel and R\"{o}sler~\cite[Theorem~9]{MR1372338} proved for the survival functions $1 - F_t$.
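Before proceeding, we record a small numerical illustration, again not used in any proof: the dominating random variable $V$ of \refL{L:Stochastic_upper_bdd} is easy to simulate by truncating the series~\eqref{E:V} (the helper name \verb|sample_V| and the truncation tolerance are ours), and the empirical mean matches $\E V = 1 + \sum_{n \geq 1} (3/4)^n = 4$.

```python
import random

def sample_V(rng, tol=1e-9):
    """Simulate V = 1 + sum_{n>=1} prod_{k<=n} V_k with V_k ~ uniform(1/2, 1),
    truncating once the running product drops below tol; the neglected
    tail then has mean about 3 * tol."""
    total, prod = 1.0, 1.0
    while prod > tol:
        prod *= rng.uniform(0.5, 1.0)
        total += prod
    return total

rng = random.Random(2)
vals = [sample_V(rng) for _ in range(50_000)]
mean_V = sum(vals) / len(vals)                       # E V = 4
tail_frac = sum(v > 10 for v in vals) / len(vals)    # right tail is very thin
```

At this sample size the fraction of samples exceeding $10$ is tiny, consistent with the superexponential tail decay asserted in \refL{L:Stochastic_upper_bdd}.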
By a separate and easier argument, one could include the cases $t = 0, 1$. Let $m_t$ denote the moment generating function of $Z(t)$ and recall that~$m$ denotes the moment generating function of $V$ at~\eqref{E:V}. By \refL{L:Stochastic_upper_bdd}, the random variables $Z(t)$, $0 \leq t \leq 1$, are stochastically dominated by $V$. As a consequence, if $\theta \geq 0$, then \[ m_t(\theta) \leq m(\theta)< \infty \] for every $t \in (0, 1)$. \begin{theorem} \label{T:Exp_decay} Uniformly for $t \in (0, 1)$, the continuous {\tt QuickQuant} density functions $f_t(x)$ enjoy superexponential decay when $x$ is large. More precisely, for any $\theta > 0$ we have \[ f_t (x) < 4 \theta^{-1} e^{2 \theta} m(\theta) e^{-\theta x} \] for $x \geq 3$, where~$m$ is the moment generating function of the random variable $V$ at~\eqref{E:V}. \end{theorem} \begin{proof} Our starting point is the following equation from the discussion preceding~\refP{Integral equation}: \begin{align} \label{heart} \lefteqn{f_t(x)} \\ &= \int_{l, r} \P((L_3, R_3) \in (\ddx l, \ddx r))\,\int_y\!f_{l, r}(x - y)\,\P\left( (r - l) Z\!\left( \frac{t - l}{r - l} \right) \in \ddx y \right) \nonumber \\ &= \int_{l, r} \P((L_3, R_3) \in (\ddx l, \ddx r))\,\int_z\!f_{l, r}(x - (r - l) z)\,\P\left( Z\!\left( \frac{t - l}{r - l} \right) \in \ddx z \right). \nonumber \end{align} By \refL{L:Stochastic_upper_bdd}, for any $\theta \in \bbR$ we can obtain a probability measure $\mu_{t, \theta}(\ddx z) := m_t(\theta)^{-1} e^{\theta z} \P(Z(t) \in \ddx z)$ by exponential tilting. 
Since $m_t(\theta) \leq m(\theta) < \infty$ for every $\theta \geq 0$ and $t \in (0, 1)$, we can rewrite and bound~\eqref{heart} as follows (for any $\theta \geq 0$): \begin{align*} \lefteqn{f_t(x)} \\ &= \int_{l, r} \P((L_3, R_3) \in (\ddx l, \ddx r))\,m_{\frac{t - l}{r - l}}(\theta)\,\int_z\!e^{- \theta z}\,f_{l, r}(x - (r - l) z)\,\mu_{\frac{t - l}{r - l}, \theta}(\ddx z) \\ &\leq m(\theta) \int_{l, r} \P((L_3, R_3) \in (\ddx l, \ddx r))\,\int_z\!e^{- \theta z}\,f_{l, r}(x - (r - l) z)\,\mu_{\frac{t - l}{r - l}, \theta}(\ddx z). \end{align*} Recall that $f_{l, r}(x)$ is bounded above by $b_t(l, r)$ (Lemmas~\ref{L:bound} and \ref{L:b1}--\ref{L:b2}) and vanishes for $x \geq 2$. Therefore, if $\theta \geq 0$ then \begin{align} \lefteqn{f_t(x)} \nonumber \\ &\leq m(\theta) \int_{l, r} \P((L_3, R_3) \in (\ddx l, \ddx r))\,b_t(l, r) \int_{z > \frac{x - 2}{r - l}}\!e^{- \theta z}\,\mu_{\frac{t - l}{r - l}, \theta}(\ddx z) \nonumber \\ \label{exp} &\leq m(\theta) \int_{l, r} \P((L_3, R_3) \in (\ddx l, \ddx r))\,b_t(l, r)\,\exp\left(- \theta\,\frac{x - 2}{r - l} \right). \end{align} Suppose $x \geq 3$ and $\theta > 0$. We now consider in turn the contribution to~\eqref{exp} for $l = 0$, for $r = 1$, and for $0 < l < t < r < 1$. For $l = 0$, the contribution is $m(\theta)$ times the following: \begin{align*} \lefteqn{\hspace{-0.5in}\int_t^1\!r^{-1} \beta \exp[- \theta r^{-1} (x - 2)]\dd r} \\ &\leq \beta \int_0^1\!r^{-2} \exp[- \theta r^{-1} (x - 2)]\dd r \\ &= \beta\,[\theta (x - 2)]^{-1} \exp[- \theta (x - 2)] \leq \beta \theta^{-1} e^{2 \theta} e^{- \theta x}. \end{align*} Similarly (or symmetrically), the contribution for $r = 1$ is bounded by the same $\beta \theta^{-1} e^{2 \theta} m(\theta) e^{- \theta x}$. 
For $0 < l < t < r < 1$, by symmetry we may without loss of generality suppose that $0 < t \leq 1/2$, and then the contribution is $\frac{3}{2} m(\theta)$ times the following: \begin{align*} \lefteqn{\int_0^t{\int_{t}^1\!\left[ \frac{1}{r (r - l)} + \frac{1}{(1-l) (r - l)} \right] \exp\left(- \theta\,\frac{x - 2}{r - l} \right) \dd r \dd l}} \\ &= \int_0^t{\int_{t - l}^{1 - l}\!\left[ \frac{1}{(s + l) s} + \frac{1}{(1-l) s} \right] \exp[- \theta s^{-1} (x - 2)] \dd s \dd l} \\ &= \int_0^t{\int_{t - l}^{1 - l}\!(1 - l)^{-1} (s + l)^{-1} (1 + s) s^{-1} \exp[- \theta s^{-1} (x - 2)] \dd s \dd l} \\ &\leq 4 \int_0^t{\int_{t - l}^{1 - l}\!s^{-2} \exp[- \theta s^{-1} (x - 2)] \dd s \dd l} \\ &\leq 4 \int_0^{1/2}{\int_0^1\!s^{-2} \exp[- \theta s^{-1} (x - 2)] \dd s \dd l} \\ &= 2 [\theta (x - 2)]^{-1} \exp[- \theta (x - 2)] \leq 2 \theta^{-1} e^{2 \theta} e^{- \theta x}. \end{align*} Summing all the contributions, we find \begin{equation} \label{decay} f_t(x) \leq (3 + 2 \beta) \theta^{-1} e^{2 \theta} m(\theta) e^{- \theta x} < 4\,\theta^{-1} e^{2 \theta} m(\theta) e^{- \theta x}, \end{equation} for any $0 < t < 1$, $x \geq 3$, and $\theta > 0$, demonstrating the uniform superexponential decay. \end{proof} \begin{remark} Since $f_t$ is uniformly bounded by $10$ by~\refT{T:bdd}, for any $\theta > 0$, by choosing the coefficient $C_{\theta} := \max \{ 10 e^{3 \theta}, 4 \theta^{-1} e^{2 \theta} m(\theta)\}$, we can extend the superexponential bound on $f_t(x)$ in~\refT{T:Exp_decay} for $x \geq 3$ to $x \in \bbR$ as \begin{equation} \label{E:exp_bdd} f_t(x) \leq C_{\theta} e^{-\theta x} \mbox{ for } x \in \bbR \mbox{ and } 0 < t < 1. \end{equation} Note that this bound is not informative for $x \leq \min \{t,1-t \}$ since we know $f_t(x) = 0$ for such~$x$ by \refT{T:uniform_cont}, but it will simplify our proof of~\refT{T:Lipschitz_cont}. 
\end{remark} \section{Positivity and Lipschitz continuity of the continuous density functions} \label{S:other} In this section we establish several properties of the continuous density function $f_t$. We prove that $f_t(x)$ is positive for every $x > \min \{t, 1-t\}$ (\refT{T:pos}), Lipschitz continuous for $x \in \bbR$ (\refT{T:Lipschitz_cont}), and jointly continuous for $(t,x) \in (0,1) \times \bbR$ (\refC{C:joint_continuous}). \subsection{Positivity} \label{S:pos} \begin{theorem} \label{T:pos} For any $0 < t < 1$, the continuous density $f_t$ satisfies \[ f_t(x) > 0\mbox{\rm \ if and only if $x > \min\{t, 1 - t\}$}. \] \end{theorem} We already know that $f_t(x) = 0$ if $x \leq \min\{t, 1 - t\}$, so we need only prove the ``if'' assertion. Our starting point for the proof is the following lemma. Recall from Chung~\cite[Exercise 1.6]{MR1796326} that a point~$x$ is said to belong to the support of a distribution function~$F$ if for every $\epsilon > 0$ we have \begin{equation} \label{supp} F(x + \epsilon) - F(x - \epsilon) > 0. \end{equation} Note that to prove that~$x$ is in the support of~$F$ we may choose any $\epsilon_0(x) > 0$ and establish~\eqref{supp} for all $\epsilon \in (0, \epsilon_0(x))$. \begin{lemma} \label{L:supp} For any $0 < t < 1$, the support of $F_t$ is $[\min\{t, 1 - t\}, \infty)$. \end{lemma} \begin{proof} Clearly the support of $F_t$ is contained in $[\min\{t, 1 - t\}, \infty)$, so we need only establish the reverse containment. Since $F_t = F_{1 - t}$ by symmetry, we may fix $t \leq 1 / 2$. Also fixing $x \geq t$, write \[ x = t + K + b \] where $K \geq 0$ is an integer and $b \in [0, 1)$. We will show that~$x$ belongs to the support of $F_t$. Let \[ A := \bigcap_{k = 1}^K \{1 - k \epsilon < R_k < 1 - (k - 1) \epsilon\}. \] We break our analysis into four cases:\ (i) $t < b < 1$, (ii) $b = t$, (iii) $0 < b < t$, and (iv) $b = 0$. \medskip \par\noindent (i) $t < b < 1$. 
Let \[ B := \{b < R_{K + 1} < b + \epsilon\}\,\bigcap\,\{t < R_{K + 2} < t + \epsilon\}\,\bigcap\,\{t - \epsilon < L_{K + 3} < t\} \] and \begin{equation} \label{Cdef} C := \left\{ 0 \leq \sum_{k = K + 4}^{\infty} \Delta_k < 6 \epsilon \right\}. \end{equation} Upon observing that for $\delta_1, \delta_2 \in (0, \epsilon)$ we have by use of Markov's inequality that \begin{align} \lefteqn{\P(C \mid (L_{K + 3}, R_{K + 3}) = (t - \delta_1, t + \delta_2))} \nonumber \\ &= \P\left( (\delta_1 + \delta_2) J\left( \frac{\delta_1}{\delta_1 + \delta_2} \right) < 6 \epsilon \right) \geq \P\left( J\left( \frac{\delta_1}{\delta_1 + \delta_2} \right) < 3 \right) \nonumber \\ \label{Markov} &\geq 1 - \frac{1}{3} \E J\left( \frac{\delta_1}{\delta_1 + \delta_2} \right) \geq 1- \frac{1}{3} \left[ 1 + 2 H\left( \frac{1}{2} \right) \right] > 0.2 > 0. \end{align} We then see that $\P(A \cap B \cap C) > 0$ for all sufficiently small~$\epsilon$. But if the event $A \cap B \cap C$ is realized, then \[ J(t) > \sum_{k = 1}^K (1 - k \epsilon) + b + t = x - {{K + 1} \choose {2}} \epsilon \] and \[ J(t) < \sum_{k = 1}^K [1 - (k - 1) \epsilon] + (b + \epsilon) + (t + \epsilon) + 2 \epsilon + 6 \epsilon \leq x + 10 \epsilon. \] We conclude that~$x$ is in the support of $F_t$. \medskip \par\noindent (ii) $b = t$. Let \[ B := \{t < R_{K + 2} < R_{K + 1} < t + \epsilon\}\,\bigcap\,\{t - \epsilon < L_{K + 3} < t\} \] and define~$C$ by~\eqref{Cdef}. We then see that $\P(A \cap B \cap C) > 0$ for all sufficiently small~$\epsilon$. But if the event $A \cap B \cap C$ is realized, then \[ J(t) > \sum_{k = 1}^K (1 - k \epsilon) + t + t = x - {{K + 1} \choose {2}} \epsilon \] and \[ J(t) < \sum_{k = 1}^K [1 - (k - 1) \epsilon] + 2 (t + \epsilon) +2 \epsilon + 6 \epsilon \leq x + 10 \epsilon. \] We conclude that~$x$ is in the support of $F_t$. \medskip \par\noindent (iii) $0 < b < t$. 
Let \[ B := \{t < R_{K + 1} < t + \epsilon\} \bigcap \{t - b - \epsilon < L_{K + 2} < t - b\} \bigcap \{t - \epsilon < L_{K + 3} < t\} \] and define~$C$ by~\eqref{Cdef}. We then see that $\P(A \cap B \cap C) > 0$ for all sufficiently small~$\epsilon$. But if the event $A \cap B \cap C$ is realized, then \[ J(t) > \sum_{k = 1}^K (1 - k \epsilon) + t + b = x - {{K + 1} \choose {2}} \epsilon \] and \[ J(t) < \sum_{k = 1}^K [1 - (k - 1) \epsilon] + (t + \epsilon) + (b + 2 \epsilon) + 2 \epsilon + 6 \epsilon \leq x + 11 \epsilon. \] We conclude that~$x$ is in the support of $F_t$. \medskip \par\noindent (iv) $b = 0$. Let \[ B := \{t < R_{K + 1} < t + \epsilon\} \bigcap \{t - \epsilon < L_{K + 2} < t\} \] and define~$C$ by~\eqref{Cdef}, but with $K + 4$ there changed to $K + 3$. We then see that $\P(A \cap B \cap C) > 0$ for all sufficiently small~$\epsilon$. But if the event $A \cap B \cap C$ is realized, then \[ J(t) > \sum_{k = 1}^K (1 - k \epsilon) + t = x - {{K + 1} \choose {2}} \epsilon \] and \[ J(t) < \sum_{k = 1}^K [1 - (k - 1) \epsilon] + (t + \epsilon) +2 \epsilon + 6 \epsilon \leq x + 9 \epsilon. \] We conclude that~$x$ is in the support of $F_t$. \end{proof} We next use~\eqref{cx} together with \refL{L:supp} to establish \refT{T:pos} in a special case. \begin{lemma} \label{L:pos} For any $0 < t < 1$, the continuous density $f_t$ satisfies \[ f_t(x) > 0\mbox{\rm \ for all $x > 2 \min\{t, 1 - t\}$}. \] \end{lemma} \begin{proof} We may fix $t \leq 1/2$ and $x > 2 t$ and prove $f_t(x) > 0$. To do this, we first note from~\eqref{cx} that \begin{align*} f_t(x) &\geq c_t(x) = \P(L_2(t) = 0,\,J(t) \in \ddx x) / \ddx x \\ &= \int_{r \in (t, 1)} \int\!{\bf 1}(r \leq x - r z < 1)\,(x - r z)^{-1}\,\P(Z(t / r) \in \ddx z)\dd r \\ &\geq \int_{r \in (t, 1)} \int_{(x - 1) / r}^{(x / r) - 1}\,\P(Z(t / r) \in \ddx z)\dd r \\ &\geq \int_{r \in (t, 1)} \P\left( \frac{x - 1}{r} < Z\left( \frac{t}{r} \right) < \frac{x}{r} - 1 \right)\dd r. 
\end{align*} According to \refL{L:supp}, for the integrand in this last integral to be positive, it is necessary and sufficient that $(x - 1) / r < (x / r) - 1$ (equivalently, $r < 1$) and \[ \tfrac{x}{r} - 1 > 1 + \min\{\tfrac{t}{r}, 1 - \tfrac{t}{r}\} \] [for which it is sufficient that $r < (x + t) / 3$]. Thus \[ f_t(x) \geq \int_{r \in (t, \min\{(x + t) / 3, 1\})} \P\left( \frac{x - 1}{r} < Z\left( \frac{t}{r} \right) < \frac{x}{r} - 1 \right)\dd r > 0 \] because (recalling $x > 2 t$) the integrand here is positive over the nondegenerate interval of integration. \end{proof} Finally, we use a different contribution to $f_t(x)$ together with \refL{L:pos} to establish \refT{T:pos}. \begin{proof}[Proof of \refT{T:pos}] We may fix $t \geq 1 / 2$ and $x > 1 - t$ and prove $f_t(x) > 0$. To do this, we first note that \begin{align*} f_t(x) &\geq \int_{l \in (0, t)}\!\int_{r \in (t, 1)}\!\P(L_1(t) = L_2(t) \in \ddx l,\,R_2(t) \in \ddx r) \\ &{} \qquad \left[ \P\left( (r - l) J\left( \frac{t - l}{r - l} \right) \in \ddx x - [(1 - l) + (r - l)] \right) / \ddx x \right] \\ &= \int_{l \in (0, t)}\!\int_{r \in (t, 1)}\! (1-l)^{-1} (r - l)^{-1} f_{\frac{t - l}{r - l}} \left( \frac{x - (1 + r - 2 l)}{r - l} \right)\dd r\dd l. \end{align*} According to \refL{L:pos}, for the integrand in this double integral to be positive, it is sufficient that \[ \frac{x - (1 + r - 2 l)}{r - l} > 2 \min\left\{ \frac{t - l}{r - l},\,\frac{r - t}{r - l} \right\}, \] or, equivalently, \[ x > \min\{1 + 2 t + r - 4 l, 1 - 2 t + 3 r - 2 l\}. \] This strict inequality is true (because $x > 1 - t$) when $l = t$ and $r = t$ and so, for sufficiently small $\epsilon > 0$, is true for $l \in (t - \epsilon, t)$ and $r \in (t, t + \epsilon)$.
Thus \[ f_t(x) \geq \int_{l \in (t - \epsilon, t)}\!\int_{r \in (t, t + \epsilon)}\!(1 - l)^{-1} (r - l)^{-1} f_{\frac{t - l}{r - l}} \left( \frac{x - (1 + r - 2 l)}{r - l} \right)\dd r\dd l > 0 \] because the integrand here is positive over the fully two-dimensional rectangular region of integration. \end{proof} \subsection{Lipschitz continuity} \label{S:Lip} We now prove that, for each $0 < t < 1$, the density function $f_t$ is Lipschitz continuous, which is a result stronger than~\refT{T:uniform_cont}. \begin{theorem} \label{T:Lipschitz_cont} For each $0 < t < 1$, the density function $f_t$ is Lipschitz continuous. \end{theorem} That is, there exists a constant $\Lambda_t \in (0, \infty)$ such that for any $x, z \in \bbR$, we have $| f_t(z) - f_t(x) | \leq \Lambda_t | z - x |$. The proof of \refT{T:Lipschitz_cont} will reveal that one can take $\Lambda_t = \Lambda [t^{-1} \ln t] [(1 - t)^{-1} \ln (1 - t)]$ for some constant $\Lambda < \infty$. Thus the densities $f_t$ are in fact uniformly Lipschitz continuous for~$t$ in any compact subinterval of $(0, 1)$. \smallskip We break the proof of~\refT{T:Lipschitz_cont} into two lemmas: \refL{L:Lipschitz_1} deals with the contribution to $f_t$ from the disjoint-union event $\{0 = L_3(t) < t < R_3(t) < 1\} \cup \{0 < L_3(t) < t < R_3(t) = 1\}$, while \refL{L:Lipschitz_2} deals with the contribution from the event $\{0 < L_3(t) < t < R_3(t) < 1\}$. \begin{lemma} \label{L:Lipschitz_1} For each $0 < t < 1$, the contribution to $f_t$ from the event $\{0 = L_3(t) < t < R_3(t) < 1\} \cup \{0 < L_3(t) < t < R_3(t) = 1\}$ is Lipschitz continuous. \end{lemma} \begin{proof} Fix $0 < t < 1$. By symmetry, we need only consider the contribution to $f_t(x)$ from the event $\{0 = L_3(t) < t < R_3(t) < 1\}$.
Recall that this contribution is \[ c_0(x) := \frac{1}{2} \int_{r, y}\!{(\ln r)^2 \, f_{0, r}(x - y) \, \P(Y \in \dd y \, | \,L_3(t) = 0,\,R_3(t) = r)\dd r}, \] and that the conditional probability in the integrand can be written as \[ \P(Y \in \dd y \, | \,L_3(t) = 0,\,R_3(t) = r) = \frac{1}{r} \, f_{\frac{t}{r}}\left(\,\frac{y}{r} - 1\,\right)\dd y. \] Let $z, x \in \bbR$ with $z > x$, and fix $r \in (t, 1)$. Writing \[ d_r (x, z, y) :=\frac{1}{2} (\ln r)^2 [f_{0,r}(z-y) - f_{0,r}(x-y)], \] we are interested in bounding the absolute difference \[ | c_0(z) - c_0(x) | \leq \int_{r, y}\!{ | d_r (x, z, y) | \, \frac{1}{r} \, f_{\frac{t}{r}}\left(\,\frac{y}{r} - 1\,\right)\dd y. } \] \smallskip \par\noindent \emph{Case} 1.\ $z - x \leq 1- r$. We bound $d_r (x, z, y)$ for~$y$ in each of the seven subintervals of the real line determined by the six partition points \[ x - 2 < z - 2 \leq x - (1+r) < z - (1+r) \leq x - 2 r < z - 2r, \] and then bound the resulting contribution to $| c_0(z) - c_0(x) |$ from all~$y$ in that subinterval (and all~$r$ satisfying the restriction of Case~1). For the two subcases $y \leq x - 2$ and $y > z - 2 r$, we have $d_r (x, z, y) = 0$. We bound the five nontrivial subcases as follows. \smallskip \par\noindent \emph{Subcase} 1(a).\ $x - 2 < y \leq z-2$. We have \[ | d_r (x, z, y) | = \left| \frac{1}{x- y} \right| \ln \left(\frac{1}{x- y - 1}\right) \leq \frac{1}{1+r} \ln \frac{1}{r}, \] and the contribution to $| c_0(z) - c_0(x) | $ is bounded by \[ \int_{r=t}^1 \frac{1}{r(1+r)} \left( \ln \frac{1}{r} \right) \int_{y = x-2}^{z-2} {f_{\frac{t}{r}}\left(\,\frac{y}{r} - 1\,\right) \dd y \dd r} \leq 10(z-x) \frac{1-t}{t(1+t)} \ln \frac{1}{t} \] since $f_{t/r}$ is bounded by $10$. \smallskip \par\noindent \emph{Subcase} 1(b).\ $z - 2 < y \leq x - (1+r)$.
We have \begin{align*} d_r (x, z, y) &= \frac{1}{z-y} \ln \left(\frac{1}{z-y-1}\right) - \frac{1}{x-y} \ln \left(\frac{1}{x-y-1}\right)\\ &= \frac{1}{z-y} \left[ \ln \left(\frac{1}{z-y-1} \right) - \ln \left( \frac{1}{x-y-1}\right) \right] \\ &{} \qquad + \left(\frac{1}{z-y} - \frac{1}{x-y} \right) \ln \left(\frac{1}{x-y-1}\right). \end{align*} Observe that $z - y > x - y > 1+ r$ and that the function $\ln[1 / (x-1)]$ is differentiable for $x > 1$. We then use the mean value theorem to obtain \begin{align*} | d_r (x, z, y) | &\leq \frac{1}{1+r} \left| \ln \left(\frac{1}{z-y-1} \right) - \ln \left( \frac{1}{x-y-1}\right) \right| + \frac{(z-x)}{(1+r)^2} \ln \frac{1}{r}\\ &\leq (z-x) \left[ \frac{1}{r(1+r)}+ \frac{1}{(1+r)^2} \ln \frac{1}{r} \right]. \end{align*} The contribution to $| c_0(z) - c_0(x) | $ is then bounded by \[ (z-x) \int_{r=t}^1 \left[ \frac{1}{r(1+r)}+ \frac{1}{(1+r)^2} \ln \frac{1}{r} \right] \dd r \leq (z-x) \frac{1-t}{1+t} \left(\frac{1}{t}+ \frac{1}{1+t} \ln \frac{1}{t}\right). \] \smallskip \par\noindent \emph{Subcase} 1(c).\ $x - (1+r) < y \leq z - (1+r)$. We have \begin{align*} d_r (x, z, y) &= \frac{1}{z-y} \ln \left(\frac{1}{z-y-1}\right) - \frac{1}{x-y} \ln \left(\frac{x-y-r}{r}\right)\\ &= \frac{1}{z-y} \left[ \ln \left(\frac{1}{z-y-1}\right) - \ln \left(\frac{x-y-r}{r}\right) \right] \\ &+ \left(\frac{1}{z-y} - \frac{1}{x-y}\right) \ln \left(\frac{x-y-r}{r}\right). \end{align*} Using the inequalities $z - y \geq 1+r$ and $x - y > 2r$, we have \[ | d_r (x, z, y) | \leq \frac{1}{1+r} \left|\ln \left(\frac{1}{z-y-1}\right) - \ln \left(\frac{x-y-r}{r}\right) \right| + \frac{z-x}{2r(1+r)} \ln \frac{1}{r}.
\] We can bound the absolute-value term here by \begin{align*} &\left|\ln \frac{1}{z-y-1} - \ln \frac{1}{(1+r) - 1} \right| + \left|\ln \frac{(1+r) - r}{r} - \ln \frac{x-y-r}{r} \right|\\ &\leq \frac{1}{r} [z - y - (1+r)] + \frac{1}{r} [(1+r) - (x-y)] = (z-x)\frac{1}{r}, \end{align*} where the above inequality comes from two applications of the mean value theorem. The contribution to $| c_0(z) - c_0(x) | $ is then bounded by \[ (z-x) \int_{r = t}^1 \left[ \frac{1}{r(1+r)} + \frac{1}{2r(1+r)} \ln \frac{1}{r} \right] \dd r \leq (z-x) \frac{1-t}{t(1+t)} \left(1+\frac{1}{2} \ln \frac{1}{t} \right). \] \smallskip \par\noindent \emph{Subcase} 1(d).\ $z - (1+r) < y \leq x - 2r$. We have \begin{align*} d_r (x, z, y) &= \frac{1}{z-y} \ln \left( \frac{z-y-r}{r}\right) - \frac{1}{x-y} \ln \left( \frac{x-y-r}{r}\right)\\ &= \frac{1}{z-y} \left[ \ln \left( \frac{z-y-r}{r}\right) - \ln \left( \frac{x-y-r}{r}\right) \right]\\ &{} \qquad + \left(\frac{1}{z-y} - \frac{1}{x-y} \right) \ln \left( \frac{x-y-r}{r}\right). \end{align*} Using the inequality $z - y > x - y \geq 2r$, we obtain \begin{align*} | d_r (x, z, y)| &\leq \frac{1}{2 r} [\ln(z-y-r) - \ln (x-y-r)] + \frac{z-x}{(2 r)^2} \ln \frac{1}{r}\\ &\leq (z-x) \left[ \frac{1}{2 r} \frac{1}{r} + \frac{1}{(2 r)^2} \ln \frac{1}{r} \right] \end{align*} by the differentiability of $\ln(x-r)$ for $x > r$ and the mean value theorem. The contribution to $| c_0(z) - c_0(x) | $ is then bounded by \[ (z-x) \int_{r = t}^1 \left[ \frac{1}{2 r} \frac{1}{r} + \frac{1}{(2 r)^2} \ln \frac{1}{r} \right] \dd r = (z-x)\, \frac{(1 - t )+ \ln(1 / t)}{4 t}. \] \smallskip \par\noindent \emph{Subcase} 1(e).\ $x - 2 r < y \leq z - 2r$. 
Using the inequality $2r \leq z - y < 1+r$, we have \[ | d_r (x, z, y) | = \frac{1}{z-y} \ln \left( \frac{z-y-r}{r} \right) \leq \frac{1}{2r} \ln \frac{1}{r}, \] and the contribution to $| c_0(z) - c_0(x) | $ is then bounded by \[ \int_{r=t}^1 \frac{1}{2r^2} \left( \ln \frac{1}{r} \right) \int_{y = x-2r}^{z-2r} {f_{\frac{t}{r}}\left(\,\frac{y}{r} - 1\,\right) \dd y \dd r} \leq 10(z-x) \frac{1-t}{2t^2} \ln \frac{1}{t}. \] This completes the proof for Case~1. \smallskip \par\noindent \emph{Case} 2.\ $z - x > 1-r$. We directly bound \[ | d_r (x, z, y) | \leq \frac{1}{2} (\ln r)^2 [f_{0,r}(z-y) + f_{0,r}(x-y)]. \] If $z - x \leq 1 - t$, use the bound in~\refR{R:simple_bdd}; we can then bound the contribution to $| c_0(z) - c_0(x) |$ by \[ \int_{r = 1 - (z-x)}^1 {\frac{2 \beta}{r} \dd r} \leq (z-x) \frac{2 \beta}{t}. \] On the other hand, if $z - x > 1 - t$, then we can bound the contribution to $| c_0(z) - c_0(x) | $ by \[ \frac{z-x}{1-t} \int_{r = t}^1 {\frac{2 \beta}{r} \dd r} \leq (z-x) \frac{2 \beta}{t}. \] This completes the proof for Case~2. We conclude that $c_0$ is a Lipschitz continuous function; note that the Lipschitz constant we have obtained depends on~$t$. \end{proof} \begin{lemma} \label{L:Lipschitz_2} For each $0 < t < 1$, the contribution to $f_t$ from the event $\{0 < L_3(t) < t < R_3(t) < 1\}$ is Lipschitz continuous. \end{lemma} \begin{proof} Fix $0 < t < 1$. According to~\eqref{E:4} and~\eqref{E:5}, the contribution from the event in question to $f_t(x)$ is $\sum_{i = 1}^6 c^{(i)}(x)$, where we define \[ c^{(i)}(x) := \int_{l, r, y}\!{f^{(i)}_{l, r}(x - y)\,\P(Y \in \ddx y \mid (L_3(t), R_3(t)) = (l, r))}\dd l\dd r. \] We show here that $c^{(3)}$ is Lipschitz continuous, and the claims that the other contributions $c^{(i)}$ are Lipschitz continuous are proved similarly. Let $x, z \in \bbR$ with $z > x$ and consider $(l,r)$ satisfying $0 < l < t < r < 1$. 
Define \[ d_{l,r}(x, z, y) := f_{l,r}^{(3)}(z-y) - f_{l,r}^{(3)}(x-y) \] and recall that the expression for $f_{l,r}^{(3)}(x)$ found in \refS{S:density} can be rewritten as \[ f_{l,r}^{(3)}(x) = \mathbb{1}(1+r-2l\leq x < 1+r) \frac{1}{x}\left(\frac{1}{x+1-r} + \frac{1}{x+r-1}\right). \] We are interested in bounding the quantity \begin{equation} \label{E:c_3_Lip} |c^{(3)}(z) - c^{(3)}(x)| \leq \int_{l, r, y}\!{|d_{l,r}(x, z, y)|\,\P(Y \in \ddx y \mid (L_3, R_3) = (l, r))}\dd l\dd r, \end{equation} where the conditional probability can also be written in density terms as \[ \P(Y \in \ddx y \mid (L_3, R_3) = (l, r)) = \frac{1}{r-l} f_{\frac{t-l}{r-l}}\left(\frac{y}{r-l} - 1\right)\dd y. \] Just as we did for \refL{L:Lipschitz_1}, we break the proof into consideration of two cases. \smallskip \par\noindent \emph{Case} 1.\ $z -x < 2 l$. As in the proof for Case~1 of \refL{L:Lipschitz_1}, we bound $d_{l, r}(x, z, y)$ for~$y$ in each of the five subintervals of the real line determined by the four partition points \[ x-(1+r) < z -(1+r) < x - (1+r - 2l) < z-(1+r-2l). \] For the two subcases $y \leq x-(1+r)$ and $y > z-(1+r-2l)$, we have $d_{l,r}(x, z, y) = 0$. We bound the three nontrivial subcases (listed in order of convenience of exposition, not in natural order) as follows. \smallskip \par\noindent \emph{Subcase} 1(a).\ $z -(1+r) < y \leq x - (1+r - 2l)$. We have \begin{align*} &d_{l,r}(x, z, y) \\ &= \frac{1}{z-y} \left(\frac{1}{z-y+1-r} + \frac{1}{z-y+r-1}\right) \\ &- \frac{1}{x-y} \left(\frac{1}{x-y+1-r} + \frac{1}{x-y+r-1}\right)\\ &= \frac{1}{z-y} \left(\frac{1}{z-y+1-r} - \frac{1}{x-y+1-r}\right) + \left(\frac{1}{z-y} - \frac{1}{x-y}\right) \frac{1}{x-y+1-r}\\ &+ \frac{1}{z-y} \left(\frac{1}{z-y+r-1} - \frac{1}{x-y+r-1} \right) + \left(\frac{1}{z-y} - \frac{1}{x-y}\right) \frac{1}{x-y+r-1}.
\end{align*} Using the inequality $z-y > x-y \geq 1+r-2l$, we obtain \begin{align*} & |d_{l,r}(x, z, y) | \\ &\leq \frac{1}{1+r-2l} \frac{z-x}{(2-2l)^2} + \frac{z-x}{(1+r-2l)^2}\frac{1}{2-2l}\\ &{} \qquad + \frac{1}{1+r-2l} \frac{z-x}{(z-y+r-1)(x-y+r-1)} + \frac{z-x}{(1+r-2l)^2} \frac{1}{2(r-l)}. \end{align*} Except for the third term, it is easy to see (by direct computation) that the corresponding contribution to the bound~\eqref{E:c_3_Lip} on $|c^{(3)}(z) - c^{(3)}(x)|$ is bounded by a constant (depending on~$t$) times $z - x$. So we now focus on bounding the contribution from the third term. Note that since $1+r-2l > 1-t > 0$, we need only bound \begin{equation} \label{focus} \int_{l, r, y}\!{\frac{1}{(z-y+r-1)(x-y+r-1)}\,\P(Y \in \ddx y \mid (L_3, R_3) = (l, r))}\dd l\dd r \end{equation} by a constant (which is allowed to depend on~$t$, but our constant will not). We first focus on the integral in~\eqref{focus} with respect to~$y$ and write it, using a change of variables, as \begin{equation} \label{E:int_y} \int_{y \in I}\!{d_{l,r}^*(x, z, y)\,f_{\frac{t-l}{r-l}}(y) \dd y}, \end{equation} with \[ d_{l,r}^*(x, z, y) = \frac{1}{[z-(r-l)(y+1)+r-1][x-(r-l)(y+1)+r-1]} \] and $I := \left\{y: \frac{z-(1+r)}{r-l} -1 < y \leq \frac{x-(1+r-2l)}{r-l} - 1\right\}$. Because the support of the density $f_{\frac{t - l}{r - l}}$ is contained in the nonnegative real line, the integral~\eqref{E:int_y} vanishes unless the right endpoint of the interval~$I$ is positive, which is true if and only if \[ r < \frac{x-1+3l}{2}. \] So we see that the integral of~\eqref{E:int_y} over $r \in (t, 1)$ vanishes unless this upper bound on~$r$ is larger than~$t$, which is true if and only if \begin{equation} \label{lbound} l > \frac{1-x+2t}{3}. 
\end{equation} But then the integral of~\eqref{E:int_y} over $\{(l, r):0 < l < t < r < 1\}$ vanishes unless this lower bound on~$l$ is smaller than~$t$, which is true if and only if $x > 1 - t$; we conclude that for $x \leq 1-t$, that integral vanishes. So we may now suppose $x > 1 - t$, and we have seen that the integral of~\eqref{E:int_y} over $\{(l, r):0 < l < t < r < 1\}$ is bounded above by its integral over the region \[ R := \left\{ (l, r) : \frac{1 - x + 2 t}{3} \vee 0 < l < t < r < 1 \wedge \frac{x - 1 + 3 l}{2} \right\}. \] Observe that on~$R$ we have \begin{equation} \label{on_R} \frac{x-(1+r-2l)}{r-l} - 1 = \frac{x-1+l}{r-l} -2 > \frac{2}{3} \frac{x+t-1}{r-l} -2 > \frac{1}{2} \frac{x+t-1}{r-l} -2. \end{equation} Define \[ B := \left\{ (l,r) : \frac{x+t-1}{2(r-l)} - 2 > 0 \right\}. \] We now split our discussion of the contribution to the integral of~\eqref{E:int_y} over $(l, r) \in R$ into two terms, corresponding to (i) $R \cap B^c$ and (ii) $R \cap B$. \smallskip \par\noindent \emph{Term} (i).\ $R \cap B^c$. Using~\eqref{on_R}, we can bound~\eqref{E:int_y} by extending the range of integration from~$I$ to \[ I^* := \left\{y : \frac{x+t-1}{2(r-l)} - 2 < y \leq \frac{x-(1+r-2l)}{r-l} - 1\right\}. \] Making use of the inequality~\eqref{E:exp_bdd}, the integral~\eqref{E:int_y} is bounded, for any $\theta > 0$, by \[ \int_{y \in I^*} {\frac{1}{4(r-l)^2}\, C_{\theta} e^{-\theta y} \dd y} \leq \frac{C_{\theta}}{4 \theta(r-l)^2} \exp \left[-\frac{x+t-1}{2(r-l)} \theta + 2 \theta \right]. 
\] The integral over $(l, r) \in R \cap B^c$ of~\eqref{E:int_y} is therefore bounded by \begin{align} &\frac{C_{\theta}}{4 \theta} \, e^{2 \theta} \int_{l = (1-x+2t)/3}^{t}\,\int_{r=t}^{(x-1+3l)/2} {\frac{1}{(r-l)^2} \exp \left[ -\frac{x+t-1}{2(r-l)} \theta \right] \dd r \dd l} \nonumber \\ &= \frac{C_{\theta}}{4 \theta} \, e^{2 \theta} \int_{l = (1-x+2t)/3}^{t} \int_{s=t-l}^{(x-1+l)/2} {\frac{1}{s^2} \exp \left(- \frac{x+t-1}{2} \theta s^{-1} \right) \dd s \dd l} \nonumber \\ &\leq \frac{C_{\theta}}{4 \theta} \, e^{2 \theta} \frac{2}{\theta (x+t-1)} \int_{l = (1-x+2t)/3}^{t} {\exp \left(- \frac{x+t-1}{2} \theta \frac{2}{x-1+l} \right) \dd l} \nonumber \\ \label{previous} &\leq \frac{C_{\theta}}{2 \theta^2} \, e^{2 \theta} \frac{1}{x+t-1} e^{-\theta} \left(t - \frac{1-x+2t}{3} \right) = \frac{C_{\theta}}{6 \theta^2} \, e^{\theta} < \infty. \end{align} \smallskip \par\noindent \emph{Term} (ii).\ $R \cap B$. We can bound~\eqref{E:int_y} by the sum of the integrals of the same integrand over the intervals $I^*$ and \[ I' := \left\{y: 0 < y \leq \frac{x+t-1}{2(r-l)} - 2\right\}. \] The bound for the integral over $I^*$ is the same as the bound for the $R \cap B^c$ term. To bound the integral over $I'$, we first observe that \[ d_{l,r}^*(x, z, y) \leq \frac{1}{[\frac{1}{2} (x-t-1) + 2r - l]^2} \leq \frac{4}{(x+t-1)^2}, \] where the last inequality holds because $l < t < r$. The contribution to~\eqref{focus} can be bounded by integrating $4 / (x + t - 1)^2$ with respect to $(l,r) \in R \cap B$. We then extend this region of integration to~$R$, and thus bound the contribution by \begin{align*} \frac{4}{(x+t-1)^2} \int_{l = \frac{2t+1-x}{3}}^t {\left(\frac{x-1+3l}{2} - t\right) \dd l} &\leq \frac{2}{(x+t-1)} \left(t - \frac{2t+1-x}{3}\right)\\ &=2/3. \end{align*} This completes the proof for Subcase 1(a). \smallskip \par\noindent \emph{Subcase} 1(b).\ $x -(1+r -2l) < y \leq z - (1+r-2l)$. First note that in this subcase we have $f^{(3)}(x-y) = 0$. 
We proceed in a similar fashion to that for Subcase 1(a), this time setting \[ I := \left\{y: \frac{x-1+l}{r-l} -2 < y \leq \frac{z-1+l}{r-l} -2\right\}. \] Again using a linear change of variables, the integral (with respect to~$y$ only, in this subcase) appearing on the right in~\eqref{E:c_3_Lip} can be written as \begin{equation} \label{E:int_y_2} \int_{y \in I} {d^*_{l,r}(z, y) f_{\frac{t-l}{r-l}}(y) \dd y} \end{equation} where now \[ d^*_{l,r}(z, y) = \frac{1}{z-(r-l)(y+1) +1-r} \times \frac{2}{z-(r-l)(y+1)+r-1}. \] Note that, unlike its analogue in Subcase 1(a), here $d^*_{l, r}(z, y)$ does not possess an explicit factor $z - x$. By the same discussion as in Subcase 1(a), we are interested in the integral of~\eqref{E:int_y_2} with respect to $(l,r) \in R$, where this time \[ R := \left\{ (l, r) : \frac{1 - z + 2 t}{3} \vee 0 < l < t < r < 1 \wedge \frac{z - 1 + 3 l}{2} \right\} \] and we may suppose that $z > 1 - t$. Observe that on~$R$ we have \[ \frac{z-1+l}{r-l} - 2 > \frac{2}{3} \frac{z + t - 1}{r-l} - 2 > \frac{1}{2} \frac{z + t - 1}{r-l} - 2. \] Following a line of attack similar to that for Subcase 1(a), we define \[ W := \left\{(l,r) : \frac{x-1+l}{r-l} - 2 > \frac{z + t - 1}{2(r-l)} - 2\right\} \] and split our discussion of the integral of~\eqref{E:int_y_2} over $(l, r) \in R$ into two terms, corresponding to (i) $R \cap W$ and (ii) $R \cap W^c$. \smallskip \par\noindent \emph{Term} (i).\ $R \cap W$. We bound~\eqref{E:int_y_2} by using the inequality~\eqref{E:exp_bdd} (for any $\theta > 0$) and obtain \begin{align*} \lefteqn{\hspace{-0.5in}\int_{y \in I} {\frac{1}{2-2l} \frac{1}{r-l} C_{\theta} \exp\left[ -\theta \left(\frac{z + t - 1}{2(r-l)} - 2\right) \right] \dd y}} \\ &\leq \frac{1}{2} \frac{1}{1-t} \frac{1}{(r-l)^2} C_{\theta} e^{2\theta} \exp\left[ -\theta \left(\frac{z + t - 1}{2(r-l)}\right) \right] (z-x).
\end{align*} Integrating this term with respect to $(l,r) \in R \cap W$, we get no more than \[ \frac{1}{2} \frac{(z-x) }{1-t} C_{\theta} e^{2\theta} \int_{l = (1-z+2t)/3}^t \int_{r = t}^{(z-1+3l)/2} {\frac{1}{(r-l)^2} \exp\left[ - \frac{z + t - 1}{2(r-l)} \theta \right] \dd r \dd l}, \] which [consult~\eqref{previous}] is bounded by $(z-x)$ times a constant depending only on~$t$ and~$\theta$. \smallskip \par\noindent \emph{Term} (ii).\ $R \cap W^c$. We partition the interval~$I$ of $y$-integration into the two subintervals \[ I^* := \left\{y : \frac{z + t - 1}{2(r-l)} - 2 < y \leq \frac{z-1+l}{r-l} -2\right\} \] and \[ I' := \left\{y: \frac{x-1+l}{r-l} -2 < y \leq \frac{z + t - 1}{2(r-l)} - 2\right\}. \] Observe that the length of each of the intervals $I^*$ and $I'$ is no more than the length of~$I$, which is $(z - x) / (r - l)$. We can bound the integral over $y \in I^*$ and $(l,r) \in R \cap W^c$ just as we did for Term~(i). For the integral over $y \in I'$ and $(l,r) \in R \cap W^c$, observe the following inequality: \begin{align*} d^*_{l,r}(z, y) \leq \frac{1}{2-2t} \frac{2}{\frac{1}{2} (z + t - 1) + 2r-l-t}. \end{align*} Using the constant bound in~\refT{T:bdd}, the integral of $d^*_{l, r}(z, y) f_{\frac{t-l}{r-l}}(y)$ with respect to $y \in I'$ and $(l,r) \in R \cap W^c$ is bounded above by \begin{equation} \label{10eq} 10 \frac{(z-x)}{1-t} \int_{l = (1-z+2t)/3}^t \int_{r = t}^{(z-1+3l)/2} {\frac{1}{r-l} \frac{1}{\frac{1}{2} (z + t - 1) + 2r-l-t} \dd r \dd l}. \end{equation} Write the integrand here in the form \[ \frac{1}{r-l} \frac{1}{\frac{z + t - 1}{2} + 2r-l-t} = \left(\frac{1}{r-l} - \frac{2}{2r-l-t + \frac{z + t - 1}{2} }\right) \frac{1}{l-t+\frac{z + t - 1}{2}}, \] and observe that $l-t+\frac{z + t - 1}{2} > \frac{z + t - 1}{6} > 0$.
Hence we can bound~\eqref{10eq} by \begin{align*} &10 \frac{(z-x)}{1-t} \int_{l = (1-z+2t)/3}^t {\frac{1}{l-t+\frac{z + t - 1}{2}} \left[ \ln \frac{z-1+l}{2} - \ln (t-l) \right] \dd l}\\ &\leq 10 \frac{(z-x)}{1-t} \frac{6}{z + t - 1} \left[ \frac{z + t - 1}{3} \ln \frac{z + t - 1}{2} - \int_{l = \frac{1-z+2t}{3}}^t {\ln (t-l) \dd l }\right]\\ &= 20 \frac{(z-x)}{1-t} \left(\ln \frac{z + t - 1}{2} - \ln \left(\frac{z + t - 1}{3}\right) + 1 \right)\\ &= 20 \left(1 + \ln \frac{3}{2} \right) \frac{(z-x)}{1-t}. \end{align*} This completes the proof for Subcase 1(b). \smallskip \par\noindent \emph{Subcase} 1(c).\ $x -(1+r) < y \leq z - (1+r)$. In this case, the contribution from $f^{(3)}(z-y)$ vanishes. Without loss of generality we may suppose $z - x < t$, otherwise we can insert a factor $(z-x)/t$ in our upper bound, and the desired upper bound follows from the fact that the densities $f_{\tau}$ are all bounded by~$10$. Observe that the integrand $|d_{l, r}(x, z, y)|$ in the bound~\eqref{E:c_3_Lip} is \[ \frac{1}{x - y + 1 -r} \frac{2}{x - y + r -1} \leq \frac{1}{x - z + 2} \frac{2}{x - z + 2r} \leq \frac{1}{2-t} \frac{2}{2r - t} \leq \frac{2}{t (2 - t)}. \] Integrate this constant bound directly with respect to \[ P(Y \in \ddx y | (L_3,R_3) = (l,r)) \dd r \dd l \] on the region $x -(1+r) < y \leq z - (1+r)$ and $0 < l < t < r < 1$ and use the fact that the density is bounded by $10$; we conclude that this contribution is bounded by $(z-x)$ times a constant that depends on $t$. This completes the proof for Subcase 1(c) and also for Case~1. \smallskip \par\noindent \emph{Case}~2.\ $z -x \geq 2 l$. In this case we simply use \[ | d_{l,r}(x,z, y) | \leq f^{(3)}(z-y) + f^{(3)}(x-y), \] and show that each of the two terms on the right contributes at most a constant (depending on~$t$) times $(z - x)$ to the bound in~\eqref{E:c_3_Lip}. Accordingly, let~$w$ be either~$x$ or~$z$. 
We are interested in bounding \begin{equation} \label{E:int_y_3} \int_{l = 0}^{\frac{z-x}{2} \wedge t} \int_{r=t}^1 \int_{y = w-(1+r)}^{w-(1+r-2l)} {\frac{1}{w-y+1-r} \frac{2}{w-y+r-1} \mu(\ddx y, \ddx r, \ddx l)} \end{equation} with $\mu(\ddx y, \ddx r, \ddx l) := \P(Y \in \ddx y \mid (L_3,R_3) =(l,r)) \dd r \dd l$. We bound the integrand as follows: \[ \frac{1}{w-y+1-r} \frac{2}{w-y+r-1} \leq \frac{1}{2-2l} \frac{2}{2r - 2l} \leq \frac{1}{2} \frac{1}{1-t} \frac{1}{r-l}. \] We first suppose $z-x < t$ and bound~\eqref{E:int_y_3} by \begin{align*} \frac{1}{2} \frac{1}{1-t} \int_{l = 0}^{\frac{z-x}{2}} \int_{r=t}^1 {\frac{1}{r-l} \dd r \dd l} &\leq \frac{1}{2} \frac{1}{1-t} \int_{l = 0}^{\frac{z-x}{2}} {[- \ln (t-l)] \dd l}\\ &\leq \frac{1}{2} \frac{1}{1-t} \left[ - \ln \left(t - \frac{z-x}{2}\right) \right] \frac{z-x}{2}\\ &\leq (z-x) \frac{\ln(2 / t)}{4(1-t)}. \end{align*} If instead $z - x \geq t$, we bound~\eqref{E:int_y_3} by \[ \frac{1}{2} \frac{z-x}{t} \int_{l = 0}^{t} \int_{r=t}^1 {\frac{1}{1-l}\frac{1}{r-l} \dd r \dd l} \leq (z-x) \frac{\pi^2}{12\,t}. \] This completes the proof for Case~2 and thus the proof of Lipschitz continuity of $c^{(3)}$. \end{proof} We immediately get the following corollary from the proof of~\refT{T:Lipschitz_cont}. \begin{corollary} \label{C:equi_continuous} For any $0 < \eta < 1/2$, the family $\{f_t : t \in [\eta, 1-\eta]\}$ of uniformly continuous densities is uniformly equicontinuous. \end{corollary} \begin{proof} We observe from the proof of \refT{T:Lipschitz_cont} that for any $0 < \eta < 1/2$, the Lipschitz constants $\Lambda_t$ in~\refT{T:Lipschitz_cont} are bounded for $t \in [\eta, 1-\eta]$ by some finite constant $C$ depending only on~$\eta$. The result follows. \end{proof} \subsection{Joint continuity} \label{S:joint} As noted in the proof of~\refL{L:Frc}, we reference Gr\"{u}bel and R\"{o}sler~\cite{MR1372338} to conclude that for each $t \in [0, 1)$, the distribution functions $F_u$ converge weakly to $F_t$ as $u \downarrow t$.
It follows by symmetry that the convergence also holds for each $t \in (0, 1]$ as $u \uparrow t$. We now deduce the convergence of $f_u$ to $f_t$ for each $t \in (0,1)$ as $u \to t$, as stated in the following lemma. \begin{lemma} \label{L:density_converge} For each $0 < t < 1$ we have $f_u \to f_t$ uniformly as $u \to t$. \end{lemma} \begin{proof} We fix $0 < t \leq 1/2$ (as we may, by the symmetry $f_t \equiv f_{1-t}$) and choose $0< \eta < t$. By the weak convergence of $F_u$ to $F_t$ as $u \to t$, the uniform boundedness of the density functions (\refT{T:bdd}), the fact that $f_t(x) \to 0$ as $x \to \pm \infty$, and the uniform equicontinuity of the family $\{f_u:u \in [\eta, 1-\eta]\}$ (\refC{C:equi_continuous}), we conclude from Boos~\cite[Lemma 1]{MR773179} (a converse to Scheff\'{e}'s theorem) that $f_u \to f_t$ uniformly as $u \to t$. \end{proof} \begin{remark} The uniform equicontinuity in~\refC{C:equi_continuous} does not hold for the family $\{f_t:t \in (0, 1)\}$. Here is a proof. Suppose, for the sake of contradiction, that it does. We symmetrize $f_t(x)$ at $x=0$ for every $0 \leq t \leq 1$ to create another family of continuous densities $g_t$; that is, consider $g_t(x) := [f_t(x) + f_t(-x)]/2$. Observe that the supposed uniform equicontinuity of the functions $f_t$ for $t \in (0,1)$ extends to the functions $g_t$. Now suppose (for each $t \in [0, 1]$) that $W(t)$ is a random variable with density $g_t$. By a simple calculation we have $W(t) \Rightarrow W(0)$ as $t \downarrow 0$, and it follows by Boos~\cite[Lemma 1]{MR773179} that $g_t(x) \to g_0(x)$ uniformly in $x$. This contradicts the fact that $g_t(0) = 0$ for all $t \in (0,1)$ but $g_0(0) = e^{-\gamma}$.
\end{remark} \begin{remark} \label{R:KS_conti} Since $(F_t)_{t \in [0, 1]}$ is weakly continuous in~$t$ and $F_t$ is atomless for $0 \leq t \leq 1$, it follows from a theorem of P\'olya\ (\cite[Exercise~4.3.4]{MR1796326}) that $(F_t)_{t \in [0, 1]}$ is continuous in the sup-norm metric, \ie,\,that $(J(t))$ [or $(Z(t))$] is continuous in the Kolmogorov--Smirnov metric on distributions. \end{remark} \begin{corollary} \label{C:joint_continuous} The density $f_{t}(x)$ is jointly continuous in $(t,x) \in (0,1) \times \bbR$. \end{corollary} \begin{proof} As $(t', x') \to (t, x) \in (0, 1) \times \bbR$, we have \begin{align*} \limsup |f_{t'}(x') - f_t(x)| &\leq \limsup |f_{t'}(x') - f_t(x')| + \limsup |f_t(x') - f_t(x)| \\ &\leq \limsup \|f_{t'} - f_t\|_{\infty} + \delta_t(|x' - x|) \\ &= 0, \end{align*} where the sup-norm $\|f_{t'} - f_t\|_{\infty}$ tends to~$0$ as $t' \to t$ by \refL{L:density_converge} and the modulus of uniform continuity $\delta_t$ of the function $f_t$ tends to~$0$ as $x' \to x$ by \refT{T:uniform_cont}. \end{proof} \begin{remark} The positivity of $f_t(x)$ for each $0 < t < 1$ and $x > \min \{t,1-t\}$ in~\refT{T:pos} can be proved alternatively by using the integral equation of~\refP{P:fint} and the joint continuity result of \refC{C:joint_continuous}. Here is the proof. Fix (for now) $t_0, t_1 \in (0,1)$ with $t_1 > t_0$. We will show that $f_{t_0}(x) > 0$ for all $x > t_0$, using $t_1$ in auxiliary fashion. Since this is true for arbitrarily chosen $t_0$, invoking symmetry ($f_t \equiv f_{1 - t}$) then completes the proof. We certainly know that $f_{t_0}(y_0) > 0$ for some $y_0 > t_0$; choose and fix such a $y_0$. Use~\refP{P:fint} to represent the density $f_{t_1}(x)$. We observe that the integrand of the integral with respect to~$l$ is positive at $l = l_1 = (t_1 - t_0) / (1 - t_0)$ and $x = y_1 = (1-l_1)(y_0+1)$.
From \refC{C:joint_continuous} we conclude that the integrand is positive in a neighborhood of $l_1$ and thus $f_{t_1}(y_1) > 0$. Further, use~\refP{P:fint} to represent the density $f_{t_0}(x)$. We observe that the integrand of the integral with respect to~$r$ is positive at $r = r_2 = \frac{t_0}{t_1}$ and $x = y_2 = r_2 (y_1 + 1)$. From $f_{t_1}(y_1) > 0$ and~\refC{C:joint_continuous} we conclude that $f_{t_0}(y_2) > 0$. Now letting $y_2 = y_0 + \epsilon_1$, we have \[ \epsilon_1 = \left(\frac{t_0}{t_1} \frac{1-t_1}{1-t_0} - 1\right) y_0 + \frac{t_0}{t_1} \left(1+\frac{1-t_1}{1-t_0}\right). \] Observe that as $t_1 \downarrow t_0$ we have $\epsilon_1 \to 2$, while as $t_1 \uparrow 1$ we have $\epsilon_1 \to t_0 - y_0 < 0$. Thus, by continuity, given $\delta \in (0, 2 - t_0 + y_0)$ it is possible to choose $t_1 \in (t_0, 1)$ such that $\epsilon_1 = - y_0 + t_0 + \delta$, \ie, $y_2 = t_0 + \delta$. We conclude that $f_{t_0}(x)$ is positive for every $x \in (t_0, y_0 + 2)$; since any such~$x$ can now play the role of~$y_0$, iterating the argument shows that $f_{t_0}(x)$ is positive for every $x > t_0$, as desired. \end{remark} \section{Left-tail behavior of the density function} \label{S:left} We consider the densities $f_t$ with $t \in (0, 1)$; since $f_t \equiv f_{1 - t}$ by symmetry, we may without loss of generality suppose $t \in (0, 1/2]$. As previously noted (recall Theorems~\ref{T:pos} and~\ref{T:uniform_cont}), $f_t(x) = 0$ for all $x \leq t$ and $f_t(x) > 0$ for all $x > t$. In this section we consider the left-tail behavior of $f_t$, by which we mean the behavior of $f_t(x)$ as $x \downarrow t$. As a warm-up, we first show that $f_t$ has a positive right-hand derivative at~$t$ that is large when~$t$ is small. \begin{lemma} \label{L:RHderiv} \ \vspace{-.1in}\\ {\rm (a)}~Fix $t \in (0, 1 / 2)$. Then the density function $f_t$ has right-hand derivative $f_t'(t)$ at~$t$ equal to $c_1 / t$, where \[ c_1 := \int_0^1\!\E [2 - w + J(w)]^{-2}\dd w \in (0.0879, 0.3750). \] {\rm (b)}~Fix $t = 1 / 2$. Then the density function $f_t$ has right-hand derivative $f_t'(t)$ at~$t$ equal to $2 c_1 / t = 4 c_1$.
\end{lemma} \begin{proof} (a)~We begin with two key observations. First, if $L_1(t) > 0$, then $J(t) > 1 - t$. Second, if $1 > R_1(t) > R_2(t)$, then $J(t) > 2 t$. It follows that if $0 < z < \min\{1 - 2 t, t\}$, then, with $Y \equiv Y(t)$ as defined at~\eqref{E:Y}, \begin{align*} \lefteqn{f_t(t + z)\dd z} \\ &= \P(J(t) - t \in \ddx z) \\ &= \P(R_1(t) < 1,\,L_2(t) > 0,\,J(t) - t \in \ddx z) \\ &= \int\!\!\!\int_{\substack{y > x > 0:\,x + y < z, \\ x < 1 - t,\ y - x < t}}\! \P(R_1(t) - t \in \ddx x,\,t + x - L_2(t) \in \ddx y,\,Y(t) \in \ddx z - x - y) \\ &= \int\!\!\!\int_{\substack{y > x > 0:\,x + y < z, \\ x < 1 - t,\ y - x < t}}\! \dd x\,\frac{\ddx y}{t + x}\,\P\left( y\,J\left(\frac{y - x}{y}\right) \in \ddx z - x - y \right) \\ &= \int\!\!\!\int_{\substack{y > x > 0:\,x + y < z, \\ x < 1 - t,\ y - x < t}}\! \dd x\,\frac{\ddx y}{t + x}\,y^{-1}\,f_{1 - \frac{x}{y}}\left( \frac{z - x - y}{y} \right) \dd z. \end{align*} Now make the changes of variables from~$x$ to $u = x / z$ and from~$y$ to $v = y / z$. We then find \begin{align*} f_t(t + z) &= z \int\!\!\!\int_{\substack{v > u > 0:\,u+ v < 1 \\ u < (1 - t) / z,\ v - u < t / z}}\! (t + u z)^{-1}\,v^{-1}\,f_{1 - \frac{u}{v}}\left( \frac{1 - u - v}{v} \right)\dd u\dd v \\ &= z \int\!\!\!\int_{v > u > 0:\,u+ v < 1}\! (t + u z)^{-1}\,v^{-1}\,f_{1 - \frac{u}{v}}\left( \frac{1 - u - v}{v} \right)\dd u\dd v, \end{align*} where the second equality follows because $(1 - t) / z > (1 - 2 t) / z > 1$ and $t / z > 1$ by assumption. Thus, as desired, \[ f_t(t + z) \sim \frac{c_1 z}{t} \] as $z \downarrow 0$ by the dominated convergence theorem, if we can show that \[ \tilde{c} := \int\!\!\!\int_{v > u > 0:\,u+ v < 1}\!v^{-1}\,f_{1 - \frac{u}{v}}\left( \frac{1 - u - v}{v} \right)\dd u\dd v \] equals~$c_1$. 
For that, make another change of variables from~$u$ to $w = u / v$; then we find \begin{align*} \tilde{c} &= \int_0^1\,\int_0^{(1 + w)^{-1}}\!f_{1 - w}(v^{-1} - (1 + w)) \dd v\dd w \\ &= \int_0^1\,\int_0^{(2 - w)^{-1}}\!f_w(v^{-1} + w - 2) \dd v\dd w. \end{align*} Make one last change of variables, from~$v$ to $s = v^{-1} + w - 2$, to conclude \[ \tilde{c} = \int_0^1\,\int_0^{\infty}\!(2 - w + s)^{-2}\,f_w(s)\dd s\dd w = c_1, \] as claimed. To obtain the claimed upper bound on~$c_1$, we note, using the facts that $J(w)$ and $J(1 - w)$ have the same distribution and that $J(w) > w$ for $w \in (0, 1 / 2)$, that \begin{align*} c_1 &= \int_0^{1 / 2}\!\E [2 - w + J(w)]^{-2}\dd w + \int_{1 / 2}^1\!\E [2 - w + J(w)]^{-2}\dd w \\ &= \int_0^{1 / 2}\!\E [2 - w + J(w)]^{-2}\dd w + \int_0^{1 / 2}\!\E [1 + w + J(w)]^{-2}\dd w \\ &< \int_0^{1 / 2}\!\tfrac{1}{4}\dd w + \int_0^{1 / 2}\!(1 + 2 w)^{-2}\dd w = \tfrac{1}{8} + \tfrac{1}{4} = \tfrac{3}{8} = 0.3750. \end{align*} To obtain the claimed lower bound on~$c_1$, we combine Jensen's inequality with the known fact [\cf\,\eqref{EZ}] that $\E J(w) = 1 + 2 H(w)$ with $H(w) = - w \ln w - (1 - w) \ln(1 - w)$: \begin{align*} c_1 &= \int_0^1\!\E [2 - w + J(w)]^{-2}\dd w \\ &\geq \int_0^1\!(\E [2 - w + J(w)])^{-2}\dd w = \int_0^1\!(3 - w + 2 H(w))^{-2}\dd w > 0.0879. \end{align*} (b)~By an argument similar to that at the start of the proof of~(a), if $0 < z < 1 / 2$, then, using symmetry at the third equality, \begin{align*} f_t(t + z)\dd z &= \P(J(t) - t \in \ddx z) \\ &= \P(R_1(t) < 1,\,L_2(t) > 0,\,J(t) - t \in \ddx z) \\ &{} \quad + \P(L_1(t) > 0,\,R_2(t) < 1,\,J(t) - t \in \ddx z) \\ &= 2 \P(R_1(t) < 1,\,L_2(t) > 0,\,J(t) - t \in \ddx z) \\ &\sim \frac{2 c_1 z}{t} = 4 c_1 z. \end{align*} Here the asymptotic equivalence is as $z \downarrow 0$ and follows by the same argument as used for~(a). \end{proof} We are now prepared for our main result about the left-tail behavior of~$f_t$. 
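Before stating it, we record a quick numerical sanity check of the bound $c_1 > 0.0879$ just obtained. The following sketch (an illustrative aside, not part of the argument) approximates the Jensen lower bound $\int_0^1 (3 - w + 2 H(w))^{-2}\,dw$ by a midpoint Riemann sum:

```python
import math

def H(w):
    # Binary entropy in nats: H(w) = -w ln w - (1 - w) ln(1 - w).
    return -w * math.log(w) - (1.0 - w) * math.log(1.0 - w)

# Midpoint-rule approximation of the Jensen lower bound
#   c_1 >= int_0^1 (3 - w + 2 H(w))^(-2) dw.
# Midpoints avoid w = 0 and w = 1, where w*log(w) is an
# indeterminate 0 * (-inf) form (its limit is 0).
N = 200_000
estimate = sum(
    (3.0 - (k + 0.5) / N + 2.0 * H((k + 0.5) / N)) ** (-2)
    for k in range(N)
) / N

# Consistent with the claim c_1 in (0.0879, 0.3750).
assert 0.0879 < estimate < 0.375
print(f"midpoint estimate of the lower-bound integral: {estimate:.5f}")
```

The estimate comes out just above $0.0879$, so the stated lower bound leaves very little slack.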
\begin{theorem} \label{T:left} \ \vspace{-.1in}\\ {\rm (a)}~Fix $t \in (0, 1 / 2)$. Then $f_t(t + t z)$ has the uniformly absolutely convergent power series expansion \[ f_t(t + t z) = \sum_{k = 1}^{\infty} (-1)^{k - 1} c_k z^k \] for $z \in [0, \min\{t^{-1} - 2, 1\})$, where for $k \geq 1$ the coefficients \[ c_k := \int_0^1\!(1 - w)^{k - 1} \E [2 - w + J(w)]^{-(k + 1)}\dd w, \] not depending on~$t$, are strictly positive, have the property that $2^k c_k$ is strictly decreasing in~$k$, and satisfy \[ 0 < (0.0007) 2^{- (k + 1)} (k + 1)^{-2} < c_k < 2^{- (k + 1)} k^{-1} (1 + 2^{-k}) \leq 0.375 < \infty. \] {\rm [}In particular, $2^k c_k$ is both $O(k^{-1})$ and $\Omega(k^{-2})$.{\rm ]} {\rm (b)}~Fix $t = 1 / 2$. Then $f_t(t + t z)$ has the uniformly absolutely convergent power series expansion \[ f_t(t + t z) = 2 \sum_{k = 1}^{\infty} (-1)^{k - 1} c_k z^k \] for $z \in [0, 1)$. \end{theorem} \begin{proof} (a)~As shown in the proof of \refL{L:RHderiv}, for $z \in [0, \min\{t^{-1} - 2, 1\})$ we have \begin{align} \label{ft_integral} f_t(t + t z) &= z \int\!\!\!\int_{v > u > 0:\,u+ v < 1}\!(1 + u z)^{-1}\,v^{-1}\,f_{1 - \frac{u}{v}}\left( \frac{1 - u - v}{v} \right)\dd u\dd v. \end{align} Note that the expression on the right here does not depend on~$t$. Further, since $z \leq 1$ and $0 < u < 1/2$ in the range of integration, \begin{align*} \lefteqn{\hspace{-.5in}\frac{1}{2} \int\!\!\!\int_{v > u > 0:\,u+ v < 1}\!(1 - u z)^{-1}\,v^{-1}\, f_{1 - \frac{u}{v}}\left( \frac{1 - u - v}{v} \right)\dd u\dd v} \\ &< \int\!\!\!\int_{v > u > 0:\,u+ v < 1}\!v^{-1}\,f_{1 - \frac{u}{v}}\left( \frac{1 - u - v}{v} \right)\dd u\dd v \\ &= \tilde{c} = c_1 < 3 / 8 < \infty, \end{align*} with $\tilde{c}$ and $c_1$ as in the proof of \refL{L:RHderiv}.
It follows that $f_t(t + t z)$ has the uniformly absolutely convergent power series expansion \[ f_t(t + t z) = \sum_{k = 1}^{\infty} (-1)^{k - 1} c_k z^k \] for $z \in [0, \min\{t^{-1} - 2, 1\})$, where for $k \geq 1$ we have \begin{align*} c_k &= 2 \times 2^{- k} \int\!\!\!\int_{v > u > 0:\,u+ v < 1}\!(2 u)^{k - 1}\,v^{-1}\,f_{1 - \frac{u}{v}}\left( \frac{1 - u - v}{v} \right)\dd u\dd v \\ &= \int_0^1\!(1 - w)^{k - 1} \E [2 - w + J(w)]^{-(k + 1)}\dd w; \end{align*} the second equality follows just as for $c = c_1$ in the proof of \refL{L:RHderiv}. From the first equality it is clear that these coefficients have the property that $2^k c_k$ is strictly decreasing in~$k$. To obtain the claimed upper bound on~$c_k$, proceed just as in the proof of \refL{L:RHderiv} to obtain \begin{align*} c_k &< 2^{- (k + 1)} \int_0^{1 / 2}\!(1 - w)^{k - 1}\dd w + \int_0^{1 / 2}\!w^{k - 1} (1 + 2 w)^{- (k + 1)}\dd w \\ &= 2^{- (k + 1)} k^{-1} (1 - 2^{-k}) + k^{-1} 4^{-k} = 2^{- (k + 1)} k^{-1} (1 + 2^{-k}). \end{align*} The claimed lower bound on~$c_k$ follows from \refL{L:RHderiv} for $k = 1$ but for $k \geq 2$ requires more work. We begin by establishing a lower bound on $\P(J(w) \leq 2 w)$ for $w \leq 1/3$, using what we have already proved: \begin{align*} \P(J(w) \leq 2 w) &= \int_0^w f_w(w + x) \dd x = w \int_0^1 f_w(w + w z) \dd z \\ &\geq w \int_0^1 (c_1 z - c_2 z^2) \dd z = w (\tfrac{1}{2} c_1 - \tfrac{1}{3} c_2) \\ &> [0.04395 - (1/3) (1/8) (1/2) (5/4)] w > 0.0179\,w. \end{align*} Thus $c_k$ is at least $0.0179\ 2^{- (k + 1)}$ times the following expression: \begin{align*} \lefteqn{2^{k + 1} \int_0^{1 / 3}\!w (1 - w)^{k - 1} (2 + w)^{- (k + 1)} \dd w} \\ &\geq \int_0^{1 / 3}\!w \exp[-2 (k + 1) w] \exp[- (k + 1) w / 2] \dd w \\ &\geq \int_0^{1 / (k + 1)}\!w \exp[- 5 (k + 1) w / 2] \dd w \\ &\geq e^{-5 / 2} \int_0^{1 / (k + 1)}\!w \dd w = \frac{1}{2} e^{-5 / 2} (k + 1)^{-2}. \end{align*} (b) The claim of part~(b) is clear from the proof of \refL{L:RHderiv}. 
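As a numerical sanity check (ours, not part of the proof), the closed-form evaluation $\int_0^{1/2} w^{k-1} (1 + 2 w)^{-(k+1)}\,\dd w = k^{-1} 4^{-k}$ used in the upper-bound display, and the resulting simplification to $2^{-(k+1)} k^{-1} (1 + 2^{-k})$, can be verified for small~$k$:

```python
def midpoint(f, a, b, n=50000):
    # composite midpoint rule for a smooth integrand
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

for k in range(1, 6):
    # integral over (0, 1/2) of w^(k-1) (1 + 2 w)^(-(k+1))
    num = midpoint(lambda w: w ** (k - 1) * (1 + 2 * w) ** (-(k + 1)), 0.0, 0.5)
    assert abs(num - 4 ** (-k) / k) < 1e-8  # closed form k^{-1} 4^{-k}
    # the two pieces of the upper bound combine to 2^{-(k+1)} k^{-1} (1 + 2^{-k})
    total = 2 ** (-(k + 1)) * (1 - 2 ** (-k)) / k + 4 ** (-k) / k
    assert abs(total - 2 ** (-(k + 1)) * (1 + 2 ** (-k)) / k) < 1e-15
```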
\end{proof} \begin{corollary} \label{C:nice} \ \vspace{-.1in}\\ {\rm (a)}~Fix $t \in (0, 1 / 2)$. Then, for all $x \in (t, \min\{1 - t, 2 t\})$, the density $f_t(x)$ is infinitely differentiable, strictly increasing, strictly concave, and strictly log-concave. {\rm (b)}~Fix $t = 1 / 2$. Then for all $x \in [1 / 2, 1)$, the density $f_{1/2}(x)$ is infinitely differentiable, strictly increasing, strictly concave, and strictly log-concave. \end{corollary} \begin{proof} Once again it is clear that we need only prove~(a). The result is actually a corollary to~\eqref{ft_integral}, rather than to \refT{T:left}. It is easy to justify repeated differentiation with respect to~$z$ under the double integral of~\eqref{ft_integral}. In particular, for $z \in (0, \min\{t^{-1} - 2, 1\})$ we have \begin{align*} t f_t'(t + t z) &= \int\!\!\!\int_{v > u > 0:\,u+ v < 1}\!(1 + u z)^{-1}\,v^{-1}\,f_{1 - \frac{u}{v}}\left( \frac{1 - u - v}{v} \right)\dd u\dd v \\ &{} \quad - z \int\!\!\!\int_{v > u > 0:\,u+ v < 1}\!u (1 + u z)^{-2}\,v^{-1}\,f_{1 - \frac{u}{v}}\left( \frac{1 - u - v}{v} \right)\dd u\dd v \\ &= \int\!\!\!\int_{v > u > 0:\,u+ v < 1}\!(1 + u z)^{-2}\,v^{-1}\,f_{1 - \frac{u}{v}}\left( \frac{1 - u - v}{v} \right)\dd u\dd v > 0 \end{align*} and \begin{align*} t^2 f_t''(t + t z) &= -2 \int\!\!\!\int_{v > u > 0:\,u+ v < 1}\!u (1 + u z)^{-3}\,v^{-1}\,f_{1 - \frac{u}{v}}\left( \frac{1 - u - v}{v} \right)\dd u\dd v \\ &< 0. \end{align*} Strict log-concavity of the positive function $f_t$ follows immediately from strict concavity. \end{proof} \begin{remark} (a)~By extending the computations of the first and second derivatives of $f_t$ in the proof of \refC{C:nice} to higher-order derivatives, it is easy to see that $f_t(x)$ is real-analytic for~$x$ in the intervals as specified in \refC{C:nice}(a)--(b). For the definition of real analytic function, see Krantz and Parks~\cite[Definition 1.1.5]{MR1916029}. 
(b)~It may be that, like the Dickman density $f_0$, the densities $f_t$ with $0 < t < 1$ are log-concave everywhere and hence strongly unimodal. Even if this is false, we conjecture that the densities $f_t$ are all unimodal. \end{remark} \section{Improved right-tail asymptotic upper bound} \label{S:improved} In this section, we will prove that for $0 < t < 1$ and $x > 4$, the continuous density function $f_t$ satisfies \[ f_t(x) \leq \exp [-x \ln x -x \ln \ln x + O(x)] \] uniformly in~$t$. We first bound the moment generating function of the random variable $V$ treated in~\refL{L:Stochastic_upper_bdd}. \begin{lemma} Denote the moment generating function of $V$ by~$m$. Then for every $\epsilon > 0$ there exists a constant $a \equiv a(\epsilon) > 0$ such that for all $\theta > 0$ we have \begin{equation} \label{E:bdd_mgf_V} m(\theta) \leq \exp [(2+\epsilon) \theta^{-1} e^\theta + a \theta]. \end{equation} \end{lemma} \begin{proof} The idea of the proof comes from Janson~\cite[Lemma 6.1]{janson2015tails}. Observe that the random variable $V$ satisfies the following distributional identity \[ V \overset{\mathcal{L}}{=} 1 + V_1 \cdot V \] where $V_1 \sim \mbox{Uniform}(1 / 2, 1)$ is independent of $V$. It follows by conditioning on $V_1$ that the moment generating function~$m$ satisfies \begin{equation} \label{E:int_eq_mgf_V} m(\theta) = 2 e^\theta \int_{v = 1/2}^1 {m(\theta v) \dd v} = 2 e^\theta \int_{u = 0}^{1/2} {m(\theta (1-u)) \dd u}. \end{equation} Since~$m$ is continuous and $m(0) = 1$, there exists a $\theta_1 > 0$ such that the inequality~\eqref{E:bdd_mgf_V} holds (for \emph{any} constant $a > 0$) for $\theta \in [0,\theta_1]$. Choose and fix $\theta_2 > \max\{\theta_1, 5\}$ and choose $a \in [1, \infty)$ large enough such that the inequality~\eqref{E:bdd_mgf_V} holds for $\theta \in [\theta_1, \theta_2]$. We now suppose for the sake of contradiction that~\eqref{E:bdd_mgf_V} fails at some $\theta > \theta_2$. 
Define $T := \inf \{\theta > \theta_2:\mbox{\eqref{E:bdd_mgf_V} fails}\}$; then by continuity we have $m(T) = \exp [(2+\epsilon) T^{-1} e^T + a T]$. Since $m(\theta u) \geq 1$ for any $\theta > 0$ and $0 < u < 1/2$, we can conclude from~\eqref{E:int_eq_mgf_V} that $m$ satisfies \begin{equation} \label{E:int_ineq_mgf_V} m(\theta) \leq 2 e^{\theta} \int_{u = 0}^{1/2} {m(\theta u)\,m(\theta (1-u)) \dd u} \end{equation} for every $\theta > 0$, including for $\theta = T$. The proof is now completed effortlessly by applying exactly the same argument as for the limiting {\tt QuickSort} moment generating function in Fill and Hung~\cite[proof of Lemma~2.1]{MR3909444}; indeed, using only~\eqref{E:int_ineq_mgf_V} they prove that when $\theta = T$ the right-hand side of~\eqref{E:int_ineq_mgf_V} is strictly smaller than $m(T)$, which is the desired contradiction. \end{proof} Thus, for $\epsilon > 0$ and $\theta > 0$, the moment generating functions $m_t$ all satisfy \begin{equation} \label{mtbound} m_t(\theta) \leq m(\theta) \leq \exp[(2 + \epsilon) \theta^{-1} e^{\theta} + a \theta]. \end{equation} We now deduce a uniform right-tail upper bound on the survival functions $1 - F_t$ for $0 < t < 1$. \begin{theorem} \label{L:improved_rt_bdd_distribution} Uniformly in $0 < t < 1$, for $x > 1$ the distribution function $F_t$ for $J(t)$ satisfies \[ 1 - F_t(x) \leq \exp [-x \ln x - x \ln \ln x + O(x)]. \] \end{theorem} \begin{proof} The proof is essentially the same as for Fill and Hung~\cite[proof of Proposition~1.1]{MR3909444}, but for completeness we sketch the simple proof here. Fix $\epsilon >0$. For any $\theta > 0$ we have the Chernoff bound \begin{align*} 1 - F_t(x) = \P(J(t) > x) &\leq \P(Z(t) > x) \\ &\leq e^{-\theta x} m_t(\theta) \leq e^{-\theta x} \exp[(2 + \epsilon) \theta^{-1} e^{\theta} + a \theta] \end{align*} by~\eqref{mtbound}. 
Letting $\theta = \ln [(2+\epsilon)^{-1} x \ln x]$ and then letting $\epsilon \downarrow 0$, we get the desired upper bound---in fact, with the following improvement, which we will not find useful in the sequel: \[ 1 - F_t(x) \leq \exp [-x \ln x - x \ln \ln x + (1 + \ln 2) x + o(x)]. \] \end{proof} The continuous density function $f_t(x)$ enjoys the same uniform asymptotic bound for $0 < t < 1$ and $x > 4$. \begin{theorem} \label{T:improved_rt_bdd_density} Uniformly in $0 < t < 1$, for $x > 4$ the continuous density function $f_t$ satisfies \[ f_t(x) \leq \exp [-x \ln x - x \ln \ln x + O(x)]. \] \end{theorem} \begin{proof} Fix $0 < t < 1$ and let $x > 4$. We first use the integral equation in~\refP{Integral equation}, namely, \[ f_t(x) = \int{\P((L_3(t), R_3(t)) \in \ddx (l, r))} \cdot h_t(x \mid l, r), \] for $x \geq 0$, where, by a change of variables, \begin{equation} \label{htxlr} h_t(x \mid l, r) = \int{f_{l,r}((r-l)(y-1)) \, f_{\frac{t - l}{r - l}}\!\left(\frac{x}{r - l} - y\right)\dd y}; \end{equation} we consider the contribution to $f_t(x)$ from values $(l, r)$ satisfying $0 < l < t < r < 1$. Recall that the conditional density $f_{l, r}(z)$ vanishes if $z \geq 2$. Thus the only nonzero contribution to~\eqref{htxlr} is from values of~$y$ satisfying \[ y \leq \frac{2}{r-l} + 1. \] If this inequality holds, then the argument for the factor $f_{(t - l) / (r - l)}$ satisfies \[ \frac{x}{r-l} - y \geq \frac{x-2}{r-l} - 1 \geq x - 3. \] Using $b(l, r)$ of \refL{L:bound} and~\eqref{E:bdd_rho_finite} to bound the $f_{l,r}$ factor, we obtain \[ h_t(x \mid l, r) \leq b(l, r) \left(1 - F_{\frac{t-l}{r-l}} (x-3)\right). \] By \refT{L:improved_rt_bdd_distribution} and the last display in the proof of \refL{L:bound}, the contribution in question is thus bounded by $\exp[ - x \ln x - x \ln \ln x + O(x)]$, uniformly in~$t$, for $x > 4$.
For the contribution to $f_t(x)$ corresponding to the cases $0 = L_3(t) < t < R_3(t) < 1$ and $0 < L_3(t) < t < R_3(t) = 1$, we use the same idea as in the proof of~\refL{L:bdd_rho_0}. By symmetry, we need only consider the first of these two cases. Recall from the proof of~\refL{L:bdd_rho_0} that the contribution in question is bounded by the sum of $f_W(x)$, which is the density of $W = U_1 (1 + U_2 V)$ evaluated at $x$ [where $U_1$, $U_2$, and $V$ are independent, $U_1$ and $U_2$ are uniformly distributed on $(0, 1)$, and~$V$ is as in \refL{L:Stochastic_upper_bdd}], and the integral \begin{align*} \int_{r = 0}^1\!{r^{-1} \P\left( V > \frac{x}{r} - 1 \right) \dd r} &= \int_{v = x-1}^{\infty} {(v+1)^{-1} \P (V > v) \dd v}\\ &\leq x^{-1} \int_{v = x-1}^{\infty} {\P (V > v) \dd v} \\ &\leq \exp [-x \ln x - x \ln \ln x + O(x)]. \end{align*} The last inequality here is obtained by applying a Chernoff bound and~\eqref{E:bdd_mgf_V} to the integrand and integrating; we omit the straightforward details. To bound the density of $W$ at~$x$, observe that by conditioning on the values of $U_2$ and $V$, we have \begin{align*} f_W(x) &= \int_{u,v} (1 + u v)^{-1}\,\mathbb{1}(0 \leq x \leq 1+uv)\,\P(U_2 \in \ddx u,\,V \in \ddx v)\\ &= \int_{u = 0}^1 \int_{v = (x-1)/u}^\infty (1 + u v)^{-1}\,\P(V \in \ddx v) \dd u\\ &\leq x^{-1} \int_{u = 0}^1\!\P\!\left(V > \frac{x-1}{u} \right) \dd u\\ &\leq x^{-1} \P(V > x-1) \leq \exp [-x \ln x - x \ln \ln x + O(x)]. \end{align*} This completes the proof. \end{proof} \section{Matching right-tail asymptotic lower bound} \label{S:lower} In this section we will prove for each fixed $t \in (0, 1)$ that the continuous density function $f_t$ satisfies \[ f_t(x) \geq \exp[- x \ln x - x \ln \ln x + O(x)]\mbox{\ as $x \to \infty$}, \] matching the upper bound of \refT{T:improved_rt_bdd_density} to two logarithmic asymptotic terms, with remainder of the same order of magnitude.
While we are able to get a similarly matching lower bound to \refT{L:improved_rt_bdd_distribution} for the survival function $1 - F_t$ that is uniform in~$t$, we are unable to prove uniformity in~$t$ for the density lower bound. We begin with consideration of the survival function. \begin{theorem} \label{T:improved_rt_bdd_distribution_lower} Uniformly in $0 < t < 1$, the distribution function $F_t$ for $J(t)$ satisfies \[ 1 - F_t(x) \geq \exp[- x \ln x - x \ln \ln x + O(x)]. \] \end{theorem} \begin{proof} With~$D$ denoting a random variable having the Dickman distribution with support $[1, \infty)$, for any $0 < t < 1$ we have from \refL{L:Stochastic_lower_bdd} that \begin{align*} 1 - F_t(x) &= \P(J(t) > x) = \P(Z(t) > x + 1) \geq \P(D > x + 1) \\ &= \exp[- x \ln x - x \ln \ln x + O(x)]\mbox{\ as $x \to \infty$}. \end{align*} The asymptotic lower bound here follows by substitution of $x + 1$ for~$u$ in equation~(1.6) (for the unnormalized Dickman function) of Xuan~\cite{MR1245402}, who credits earlier work of de Bruijn~\cite{MR43838} and of Hua~\cite{Hua1951integral}. \end{proof} Now we turn our attention to the densities. \begin{theorem} \label{T:improved_rt_bdd_density_lower} For each fixed $t \in (0, 1)$ we have \[ f_t(x) \geq \exp[- x \ln x - x \ln \ln x + O(x)]\mbox{\ as $x \to \infty$}. \] \end{theorem} \begin{proof} From the calculations at the beginning of the proof of \refL{L:RHderiv}, for all $z > 0$ we have \[ f_t(t + z) \geq z \int\!\!\!\int_{\substack{v > u > 0:\,u+ v < 1, \\ u < (1 - t) / z,\ v - u < t / z}}\! (t + u z)^{-1}\,v^{-1}\,f_{1 - \frac{u}{v}}\left( \frac{1 - u - v}{v} \right) \dd u \dd v. \] Thus, changing variables from~$u$ to $w = 1 - (u / v)$, we have \[ f_t(t + t z) \geq z \int_0^1\!\int_0^{\Upsilon(t, z, w)}\![1 + v (1 - w) z]^{-1}\,f_w\left( v^{-1} + w - 2 \right) \dd v \dd w, \] where $\Upsilon(t, z, w) := {\min\{(2 - w)^{-1},\ (1 - t) (t z)^{-1} (1 - w)^{-1},\ z^{-1} w^{-1}\}}$.
Now let \[ \Lambda(t, z, w) := \max\{0, t (1 - t)^{-1} z (1 - w) + w - 2, z w + w - 2\} \] and change variables from~$v$ to $s = v^{-1} + w - 2$ to find \begin{align*} \lefteqn{f_t(t + t z)} \\ &\geq z \int_0^1\!\int_{\Lambda(t, z, w)}^{\infty}\![1 + (2 - w + s)^{-1} (1 - w) z]^{-1}\,(2 - w + s)^{-2} f_w(s) \dd s \dd w. \end{align*} Observe that if $\delta > 0$ and $t \leq w \leq (1 + \delta) t \leq 1$, then \[ \Lambda(t, z, w) < (1 + \delta) t z \] and so \begin{align*} \lefteqn{f_t(t + t z)} \\ &\geq z \int_t^{(1 + \delta) t}\!\int_{(1 + \delta) t z}^{\infty}\![1 + (2 - w + s)^{-1} (1 - w) z]^{-1}\,(2 - w + s)^{-2} f_w(s) \dd s \dd w. \end{align*} If $\delta \leq 1$, it follows that \begin{align*} \lefteqn{f_t(t + t z)} \\ &\geq z \int_t^{(1 + \delta) t}\!\int_{(1 + \delta) t z}^{2 t z} [1 + (2 - w + s)^{-1} (1 - w) z]^{-1}\,(2 - w + s)^{-2} f_w(s) \dd s \dd w \\ &\geq \frac{z}{(2 + 2 t z)^2} \int_t^{(1 + \delta) t}\!\int_{(1 + \delta) t z}^{2 t z} \frac{1}{1 + (2 - w + (1 + \delta) t z)^{-1} (1 - w) z}\,f_w(s) \dd s \dd w \\ &\geq \frac{z}{(2 + 2 t z)^2} \int_t^{(1 + \delta) t}\!\int_{(1 + \delta) t z}^{2 t z} \left[ 1 + \frac{1 - w}{(1 + \delta) t} \right]^{-1}\,f_w(s) \dd s \dd w \\ &\geq \frac{z}{(2 + 2 t z)^2} \frac{(1 + \delta) t}{1 + \delta t} \int_t^{(1 + \delta) t}\!\int_{(1 + \delta) t z}^{2 t z} f_w(s) \dd s \dd w \\ &\geq \frac{t z}{(2 + 2 t z)^2} \int_t^{(1 + \delta) t}\!\int_{(1 + \delta) t z}^{2 t z} f_w(s) \dd s \dd w \\ &= \frac{t z}{(2 + 2 t z)^2} \int_t^{(1 + \delta) t}\big[ \P(J(w) > (1 + \delta) t z) - \P(J(w) > 2 t z) \big] \dd w. \end{align*} Recall that~$D$ defined in~\refL{L:Stochastic_lower_bdd} is a random variable having the Dickman distribution on $[1, \infty)$ and that~$V$ is defined in~\eqref{E:V}. 
By~\refL{L:Stochastic_lower_bdd}, we have $D - 1 \leq J(w) \leq V - 1$ stochastically, and thus we can further lower-bound the density function as follows: \begin{align*} f_t(t + t z) &\geq \frac{t z}{(2 + 2 t z)^2} \int_t^{(1 + \delta) t}\big[ \P(D - 1 > (1 + \delta) t z) - \P(V > 2 t z) \big] \dd w \\ &= \delta t \frac{t z}{(2 + 2 t z)^2} \big[ \P(D - 1 > (1 + \delta) t z) - \P(V > 2 t z) \big]. \end{align*} That is, if $0 < \delta \leq \min\{1, t^{-1} - 1\}$, then for every $z > 0$ we have \[ f_t(t + z) \geq \delta t \frac{z}{(2 + 2 z)^2} \big[ \P(D - 1 > (1 + \delta) z) - \P(V > 2 z) \big]. \] If $z \geq \max\{1, t / (1 - t)\}$, then we can choose $\delta \equiv \delta_z = z^{-1}$ and conclude \[ f_t(t + z) \geq t (2 + 2 z)^{-2} \big[ \P(D - 1 > z + 1) - \P(V > 2 z) \big]. \] Moreover, as $z \to \infty$, we have \[ (2 + 2 z)^{-2} \big[\P(D - 1 > z + 1) - \P(V > 2 z) \big] = \exp[- z \ln z - z \ln \ln z + O(z)]. \] The stated result follows readily. \end{proof} \begin{remark} The proof of \refT{T:improved_rt_bdd_density_lower} reveals that the result in fact holds uniformly for~$t$ in any closed subinterval of $(0, 1)$. Indeed, the proof shows that the result holds uniformly in $t \in (0, 1)$ and $x \to \infty$ satisfying $x = \Omega(\ln[1 / \min\{t, 1 - t\}])$. \end{remark} \section{Right-tail large deviation behavior of {\tt QuickQuant}$(n, t)$} \label{S:Large_deviation} In this section, we investigate the right-tail large deviation behavior of {\tt QuickQuant}$(n, t)$, that is, of {\tt QuickSelect}$(n,m_n(t))$. Throughout this section, for each fixed $0 \leq t \leq 1$ we consider any sequence $1 \leq m_n(t) \leq n$ such that $m_n(t)/n \to t$ as $n \to \infty$. We abbreviate the normalized number of key comparisons of {\tt QuickSelect}$(n,m_n(t))$ discussed in~\refS{S:intro} as $C_n(t) := n^{-1} C_{n, m_n(t)}$.
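Concretely, the comparison count $C_{n, m}$ can be sketched as follows (an illustrative pivot-first implementation with invented names, not the paper's code; each partitioning step compares the pivot against every other key in the current sublist):

```python
import random

def quickselect_comparisons(keys, m):
    """Return (m-th smallest key, number of key comparisons),
    taking the first element of each sublist as the pivot."""
    comparisons = 0
    target = m
    while True:
        pivot, rest = keys[0], keys[1:]
        comparisons += len(rest)          # pivot vs. every other key
        smaller = [x for x in rest if x < pivot]
        larger = [x for x in rest if x > pivot]
        rank = len(smaller) + 1           # rank of the pivot in this sublist
        if target == rank:
            return pivot, comparisons
        if target < rank:
            keys = smaller
        else:
            keys = larger
            target -= rank

random.seed(12345)
n, t = 1000, 0.3
m = int(n * t) + 1                        # one choice of m_n(t) with m/n -> t
perm = random.sample(range(n), n)         # uniformly random permutation
value, comps = quickselect_comparisons(perm, m)
assert value == m - 1                     # m-th smallest of 0..n-1
assert n - 1 <= comps <= n * (n - 1) // 2
print(comps / n)                          # one realization of C_n(t)
```

Running this on a uniformly random permutation corresponds to the probabilistic model considered here.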
Kodaj and M\'ori~\cite[Corollary 3.1]{MR1454110} bound the convergence rate of $C_n(t)$ to its limit $Z(t)$ in the Wasserstein $d_1$-metric, showing that the distance is $O (\delta_{n, t} \log (\delta_{n, t}^{-1}))$, where $\delta_{n, t} = | n^{-1} m_n(t) - t | + n^{-1}$. Using their result, we bound the convergence rate in Kolmogorov--Smirnov distance in the following lemma. \begin{lemma} \label{L:KS_distance} Let $d_{\rm KS}( \cdot , \cdot )$ be the Kolmogorov--Smirnov {\rm (KS)} distance. Then \begin{equation} \label{E:KS_distance} d_{\rm KS}(C_n(t), Z(t)) \leq \exp \left[-\frac{1}{2} \ln \frac{1}{\delta_{n, t}} + \frac{1}{2} \ln \ln \frac{1}{\delta_{n, t}} + O(1)\right]. \end{equation} \end{lemma} \begin{proof} The lemma is an immediate consequence of Fill and Janson~\cite[Lemma 5.1]{MR1932675}, since the random variable $Z(t)$ has a density function bounded by~$10$, according to~\refT{T:bdd}. Indeed, by that result we have \[ d_{\rm KS}(C_n(t), Z(t)) \leq 2^{1/2} [10 \, d_1(C_n(t), Z(t))]^{1/2} = O([\delta_{n, t} \log (\delta_{n, t}^{-1})]^{1/2}). \] \end{proof} Using the right-tail asymptotic bounds on the limiting {\tt QuickQuant}$(t)$ distribution function $F_t$ in Theorems~\ref{L:improved_rt_bdd_distribution} and~\ref{T:improved_rt_bdd_distribution_lower} (which extend to $t \in \{0, 1\}$ by known results about the Dickman distribution), we can now derive the right-tail large-deviation behavior of $C_n(t)$. \begin{theorem} \label{T:Large_deviation} Fix $t \in [0, 1]$ and abbreviate $\delta_{n, t}$ as $\delta_n$. Let $(\omega_n)$ be any sequence diverging to $+\infty$ as $n \to \infty$ and let $c > 1$. For integer $n \geq 3$, consider the interval \[ I_n := \left[c, \frac{1}{2} \frac{\ln \delta_n^{-1}}{\ln \ln \delta_n^{-1}} \left(1-\frac{\omega_n}{\ln \ln \delta_n^{-1}}\right)\right].
\] \vspace{.01in} \par\noindent {\rm (a)}~Uniformly for $x \in I_n$ we have \begin{equation} \label{E:LD1} \P(C_n(t) > x) = (1 + o(1)) \P(Z(t) > x) \quad \mbox{as $n \to \infty$}. \end{equation} {\rm (b)}~If $x_n \in I_n$ for all large~$n$, then \begin{equation} \label{E:LD2} \P(C_n(t) > x_n) = \exp[- x_n \ln x_n - x_n \ln \ln x_n + O(x_n)]. \end{equation} \end{theorem} \begin{proof} The proof is similar to that of Fill and Hung~\cite[Theorem~3.3]{MR3909444} or its improvement in~\cite[Theorem 3.3]{MR3978217}. We prove part~(a) first. By~\refL{L:KS_distance}, it suffices to show that \[ \exp \left[-\frac{1}{2} \ln \frac{1}{\delta_n} + \frac{1}{2} \ln \ln \frac{1}{\delta_n} + O(1)\right] = o(\P (Z(t) > x_n)) \] with $x_n \equiv \frac{1}{2} \frac{\ln \delta_n^{-1}}{\ln \ln \delta_n^{-1}} \left(1-\frac{\omega_n}{\ln \ln \delta_n^{-1}}\right)$ and $\omega_n = o(\ln \ln \delta_n^{-1})$. Since, by~\refT{T:improved_rt_bdd_distribution_lower}, we have \[ \P(Z(t) > x_n) \geq \exp [-x_n \ln x_n - x_n \ln \ln x_n +O(x_n)], \] it suffices to show that for any constant $C < \infty$ we have \begin{equation} \label{minus_infinity} -\frac{1}{2} \ln \frac{1}{\delta_n} + \frac{1}{2} \ln \ln \frac{1}{\delta_n} + C + x_n \ln x_n + x_n \ln \ln x_n + C x_n \to -\infty. \end{equation} This is routine and similar to what is done in \cite[proof of Theorem~3.3]{MR3909444}. This completes the proof of part~(a). Part~(b) is immediate from part~(a) and Theorems~\ref{L:improved_rt_bdd_distribution} and~\ref{T:improved_rt_bdd_distribution_lower}. \end{proof} \begin{remark} Consider the particular choice $m_n(t) = \lfloor nt \rfloor + 1$ (for $t \in [0, 1)$, with $m_n(1) = n$) of the sequences $(m_n(t))$. That is, suppose that $C_n(t) = X_n(t)$ as defined in~\eqref{E:2}. In this case, large-deviation upper bounds based on tail estimates of the limiting $F_t$ have broader applicability than as described in \refT{T:Large_deviation} and are easier to derive, too.
The reason is that, by Kodaj and M\'ori~\cite[Lemma 2.4]{MR1454110}, the random variable $X_n(t)$ is stochastically dominated by its continuous counterpart $Z(t)$. Then, by \refT{L:improved_rt_bdd_distribution}, uniformly in $t \in [0, 1]$, we have \begin{equation} \label{VLD} \P(X_n(t) > x) \leq \P(Z(t) > x) \leq \exp [-x \ln x - x \ln \ln x + O(x)] \end{equation} for $x > 1$; there is \emph{no restriction at all} on how large $x$ can be in terms of~$n$ or~$t$. Here is an example of a \emph{very} large value of~$x$ for which the tail probability is nonzero and the aforementioned bound still matches logarithmic asymptotics to leading order of magnitude, albeit not to the leading-order term. The largest possible value for the number $C_{n, m}$ of comparisons needed by {\tt QuickSelect}$(n, m)$ is ${n \choose 2}$, corresponding in the natural coupling to any permutation of the~$n$ keys for which the $m - 1$ keys smaller than the target key appear in increasing order, the $n - m$ keys larger than the target key appear in decreasing order, and the target key appears last; thus \[ \P\left( C_{n, m} = {n \choose 2} \right) = \frac{1}{n!} {n - 1 \choose m - 1}, \] which lies between $1 / n!$ and ${n - 1 \choose \lceil (n - 1) / 2 \rceil} / n! \sim 2^{ n - (1 / 2)} / (n! \sqrt{\pi n})$. We conclude that for $x_n = (n - 1) / 2$ we have, uniformly in $t \in [0, 1]$, that \[ \P\left( X_n(t) \geq x_n \right) = \P\left( X_n(t) = x_n \right) = \exp[- 2 x_n \ln x_n + O(x_n)]. \] The bound~\eqref{VLD} on $\P(X_n(t) > x)$ is in fact also (by the same proof) a bound on the larger probability $\P(X_n(t) \geq x)$, and in this case implies \[ \P\left( X_n(t) \geq x_n \right) \leq \exp[- x_n \ln x_n + O(x_n \log \log x_n)]. \] The bound~\eqref{VLD} is thus loose only by an asymptotic factor of~$2$ in the logarithm of the tail probability.
\end{remark} \begin{remark} (a)~We can use another result of Kodaj and M\'ori, namely, \cite[Lemma 3.2]{MR1454110}, in similar fashion to quantify the Kolmogorov--Smirnov continuity of the process~$Z$ discussed in~\refR{R:KS_conti}. Let $0 \leq t < u \leq 1/2$ and $\delta = u - t$. Then the lemma asserts \[ d_1(Z(t), Z(u)) < 4 \delta (1 + 2 \log \delta^{-1}). \] It follows using Fill and Janson~\cite[Lemma 5.1]{MR1932675} that \[ d_{\rm KS}(Z(t), Z(u)) \leq O((\delta \log \delta^{-1})^{1/2}) = \exp \left[-\tfrac{1}{2} \ln \delta^{-1} + \tfrac{1}{2} \ln \ln \delta^{-1} + O(1) \right], \] uniformly for $|u - t| \leq \delta$, as $\delta \downarrow 0$. We thus have \emph{uniform} Kolmogorov--Smirnov continuity of~$Z$. (b)~Kodaj and M\'{o}ri~\cite{MR1454110} did not consider a lower bound on either of the distances in~(a), but we can rather easily obtain a lower bound on the KS distance that is of order $\delta^2$ uniformly for $t$ and~$u$ satisfying $0 < t < t + \delta = u \leq \min\{1 / 2, 2 t\}$. Indeed, for such~$t$ and~$u$ we have $\P(J(u) \leq u) = 0$ and, by \refT{T:left} (since $u \leq 2 t$ by assumption and $u \leq 1 / 2 < 1 - t$, so that $u \leq \min\{1 - t, 2 t\}$, as required by the hypotheses of the theorem) and in the notation of that theorem, \begin{align*} \P(J(t) \leq u) &= \int_t^u\!f_t(x) \dd x = t \int_0^{(u / t) - 1} \sum_{k = 1}^{\infty} (-1)^{k - 1} c_k z^k\,\dd z \\ &\geq t \int_0^{(u / t) - 1} (c_1 z - c_2 z^2)\,\dd z \\ &= t \left[ \frac{1}{2} c_1 \left( \frac{u}{t} - 1 \right)^2 - \frac{1}{3} c_2 \left( \frac{u}{t} - 1 \right)^3 \right] \\ &\geq \frac{1}{3} c_1 t \left( \frac{u}{t} - 1 \right)^2 > \frac{1}{150} (u - t)^2 = \frac{1}{150} \delta^2, \end{align*} where the penultimate inequality holds because $\frac{u}{t} - 1 = \frac{\delta}{t} < 1$ and $0 < c_2 \leq \frac{1}{2} c_1$. (c)~The lower bound in~(b) can be improved to order~$\delta$ when $t = 0$.
Then for every $u \in [0, 1]$ we have $\P(J(0) \leq u) = e^{-\gamma} u$, and so for $u \in [0, 1 / 2]$ we have \[ d_{\rm KS}(Z(0), Z(u)) \geq e^{- \gamma} u. \] \end{remark} \begin{ack} We thank Svante Janson for helpful comments on a draft of this paper; they led to significant improvements. \end{ack}
\section{\label{sec:Intro}Introduction} Studies of the production of jets in association with a \Zg{} boson, henceforth referred to as \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} processes, are central topics in hadron collider physics. Differential cross section measurements provide stringent tests for perturbative quantum chromodynamics (QCD) predictions~\cite{Gross:1973ju}. In addition, \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} production is a background to many rare standard model (SM) processes, such as Higgs-boson production, and searches for non-SM physics. Dedicated measurements can help to improve the theoretical modeling of \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} production. Differential cross sections have been previously measured in proton-antiproton collisions by the CDF~\cite{Aaltonen:2007ae} and D0~\cite{Abazov:2008ez, *Abazov:2009av, *Abazov:2009pp} collaborations as functions of several variables, including the jet transverse momentum, the jet rapidity, and various angular observables. These measurements are in qualitative agreement with predictions from perturbative QCD at the next-to-leading order (NLO) expansion in the strong-interaction coupling, but are limited by the small number of events with high multiplicity of jets. Recently, measurements have also been published by the ATLAS~\cite{Aad:2013ysa, *Aad:2011qv} and CMS~\cite{Chatrchyan:2011ne, *Chatrchyan:2013tna, *Khachatryan:2014zya} collaborations in proton-proton collisions at the LHC, since the understanding of these SM processes is essential in the search for non-SM physics at the LHC. In this article, measurements of differential cross sections for \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} production are presented, using the full data sample of proton-antiproton collisions collected with the CDF II detector in Run II of the Tevatron Collider, which corresponds to 9.6~fb$^{-1}$ of integrated luminosity. 
The results include differential cross sections as functions of jet transverse momentum, \ensuremath{p_{\textrm{T}}}, and rapidity, $y$~\footnote{The rapidity is defined as $y=\frac{1}{2}\ln(\frac{E+p_Z}{E-p_Z})$; the transverse momentum and energy are defined by $\ensuremath{p_{\textrm{T}}} = p \sin{\theta}$ and $\ensuremath{E_{\textrm{T}}} = E \sin{\theta}$}, extended for the first time at CDF to the \mbox{\ensuremath{Z/\gamma^* + \geqslant 3~\textrm{jets}}}{} final state; the total cross section as a function of jet multiplicity up to four jets; and several differential distributions for events with a \Zg{} boson and at least one or two jets. Measurements are compared to NLO~\cite{Campbell:2002tg, Berger:2008sj} and approximate next-to-next-to-leading order (NNLO) perturbative QCD predictions~\cite{Rubin:2010xp}, to NLO QCD predictions including NLO electroweak corrections~\cite{Denner:2011vu}, and to distributions from various Monte Carlo (MC) generators that use parton showers interfaced with fixed-order calculations~\cite{Mangano:2002ea, Alioli:2010qp}. This paper is organized as follows: Section~\ref{sec:CDF} contains a brief description of the CDF II detector. The data sample and the event selection are presented in Sec.~\ref{sec:Data}. The MC samples used across the analysis are listed in Sec.~\ref{sec:MCsamples}. The estimation of the background contributions is described in Sec.~\ref{sec:Backgrounds}. The unfolding procedure is explained in Sec.~\ref{sec:unf}. The systematic uncertainties are addressed in Sec.~\ref{sec:sys}. The theoretical predictions are described in Sec.~\ref{sec:pred}. The measured differential cross sections are shown and discussed in Sec.~\ref{sec:results}. Section~\ref{sec:conclusion} summarizes the results. 
\section{\label{sec:CDF} The CDF II Detector} The CDF II detector, described in detail in Ref.~\cite{Abulencia:2005ix}, is composed of a tracking system embedded in a $1.4$~T magnetic field, surrounded by electromagnetic and hadronic calorimeters and muon spectrometers. The CDF experiment uses a cylindrical coordinate system in which the $z$ axis lies along the proton beam direction, $\phi$ is the azimuthal angle, and $\theta$ is the polar angle, which is often expressed as pseudorapidity $\eta = -\ln [\tan(\theta/2)]$. The tracking system includes a silicon microstrip detector~\cite{Sill:2000zz} covering a pseudorapidity range of $|\eta|<2$, which provides precise three-dimensional reconstruction of charged-particle trajectories (tracks). The silicon detector is surrounded by a $3.1$~m long open-cell drift chamber~\cite{Affolder:2003ep}, which covers a pseudorapidity range $|\eta|<1$, providing efficient pattern recognition and accurate measurement of the momentum of charged particles. The calorimeter system is arranged in a projective-tower geometry and measures energies of photons and electrons in the $|\eta|<3.6$ range. The electromagnetic calorimeter~\cite{Balka:1987ty, Hahn:1987tx} is a lead-scintillator sampling calorimeter, which also contains proportional chambers at a depth corresponding approximately to the maximum intensity of electron showers. The hadronic calorimeter~\cite{Bertolucci:1987zn} is an iron-scintillator sampling calorimeter. The muon detectors~\cite{Ascoli:1987av}, located outside the calorimeters, consist of drift chambers and scintillation counters covering a pseudorapidity range of $|\eta|<1.0$. Finally, the luminosity is computed from the rate of inelastic \ensuremath{p\bar{p}}{} collisions determined by the Cherenkov counters~\cite{Elias:1999qg} located close to the beam pipe. 
\section{\label{sec:Data}Data Sample and Event Selection} The data sample consists of \Zee{} and \Zmm{} + jets candidate events, which have been collected using a three-level online event selection system (trigger)~\cite{Winer:2001gj} between February 2002 and September 2011. In the electron channel, the trigger requires a central (\mbox{$|\eta|\leqslant1$}) electromagnetic calorimeter cluster with \mbox{$\ensuremath{E_{\textrm{T}}}{}\geqslant 18$}~GeV matched to a charged particle with \mbox{$\ensuremath{p_{\textrm{T}}}\geqslant9$}~\gevc{}. In the analysis, \Zee{} events are selected by requiring two central electrons with \mbox{$\ensuremath{E_{\textrm{T}}} \geqslant 25$}~GeV and reconstructed invariant mass in the range \mbox{$66 \leqslant M_{ee} \leqslant 116$}~\gevcsq{}. Details on the electron identification requirements are given in Ref.~\cite{Abulencia:2005ix}. In the muon channel, the trigger requires a signal in the muon detectors associated with a charged particle reconstructed in the drift chamber with \mbox{$|\eta|\leqslant1$} and \mbox{$\ensuremath{p_{\textrm{T}}} \geqslant 18$}~\gevc{}. In the analysis, \Zmm{} events are selected by requiring two reconstructed muons of opposite electric charge with \mbox{$|\eta|\leqslant1$} and \mbox{$\ensuremath{p_{\textrm{T}}} \geqslant 25$}~\gevc{}, and reconstructed invariant mass in the range \mbox{$66 \leqslant M_{\mu \mu} \leqslant 116$}~\gevcsq{}. Quality requirements are applied to the tracks in order to reject misidentified muons, and all the muon candidates are required to be associated with an energy deposit in the calorimeter consistent with a minimum ionizing particle. More details on the muon reconstruction and identification can be found in Ref.~\cite{Abulencia:2005ix}. In addition to a \Z{} boson candidate, one or more jets with \mbox{$\ensuremath{p_{\textrm{T}}} \geqslant 30$}~\gevc{} and rapidity \mbox{$|y|\leqslant2.1$} are required.
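Schematically, the dilepton mass-window requirement amounts to the following (an illustrative sketch with made-up four-vectors, not CDF analysis code; leptons are treated as massless):

```python
import math

def invariant_mass(p4s):
    # p4 = (E, px, py, pz) in GeV; returns M = sqrt(E^2 - |p|^2) of the sum
    E = sum(p[0] for p in p4s)
    px = sum(p[1] for p in p4s)
    py = sum(p[2] for p in p4s)
    pz = sum(p[3] for p in p4s)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def passes_z_window(lep1, lep2, lo=66.0, hi=116.0):
    # mass window 66 <= M_ll <= 116 GeV/c^2 from the event selection
    return lo <= invariant_mass([lep1, lep2]) <= hi

# back-to-back 45.6 GeV leptons: M = 91.2 GeV/c^2, inside the window
assert passes_z_window((45.6, 45.6, 0, 0), (45.6, -45.6, 0, 0))
# collinear massless pair: M = 0, rejected
assert not passes_z_window((20.0, 20.0, 0, 0), (10.0, 10.0, 0, 0))
```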
Jets are reconstructed using the midpoint algorithm~\cite{Abulencia:2005yg} in a cone of radius \mbox{$R=0.7$}~\footnote{The jet cone radius $R$ is defined as $R = \sqrt{\Delta\eta^{2}+\Delta\phi^{2}}$, the angular distance from the jet axis.}. Calorimeter towers are clustered if the energy deposits correspond to a transverse energy larger than 0.1~GeV~\footnote{The transverse energy is evaluated using the position of the tower with respect to the primary interaction vertex.} and used as seeds if larger than 1~GeV. Towers associated with reconstructed electrons and muons are excluded. A split-merge procedure is used, which merges a pair of cones if the fraction of the softer cone's transverse momentum shared with the harder cone is above a given threshold; otherwise the shared calorimeter towers are assigned to the cone to which they are closer. The split-merge threshold is set to 0.75. Jet four-momenta are evaluated by adding the four-momenta of the towers according to the E-scheme, $p^{\mu}_{\textrm{jet}} = \sum{p^{\mu}_{\textrm{towers}}}$, described in Ref.~\cite{Blazey:2000qt}. With such a recombination scheme, jets are in general massive, and in order to study the jet kinematic properties, the variables \ensuremath{p_{\textrm{T}}}{} and $y$ are used, which account for the difference between $E$ and $p$ due to the jet mass. Since the jet transverse momentum measured by the calorimeter, \ptcal, is affected by instrumental effects, an average correction~\cite{Bhatti:2005ai} is applied to \ptcal. These effects, mainly due to the noncompensating nature of the calorimeter and the presence of inactive material, are of the order of 30\% for \ptcal{} around 40~\gevc{} and reduce to about 11\% for high \ptcal{} jets. A further correction is applied to account for the energy contributions to jets from multiple \ensuremath{p\bar{p}}{} interactions, but no modification is made to account for underlying-event contributions or fragmentation effects. 
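The E-scheme recombination and the massive-jet variables $p_{\textrm{T}}$ and $y$ described above can be sketched as follows (the tuple layout and function names are illustrative assumptions, not CDF code):

```python
import math

def e_scheme_jet(towers):
    """E-scheme recombination: the jet four-momentum is the plain sum of
    the tower four-momenta, given here as (E, px, py, pz) tuples."""
    E = sum(t[0] for t in towers)
    px = sum(t[1] for t in towers)
    py = sum(t[2] for t in towers)
    pz = sum(t[3] for t in towers)
    return (E, px, py, pz)

def jet_pt(p):
    """Transverse momentum of a four-momentum (E, px, py, pz)."""
    return math.hypot(p[1], p[2])

def jet_rapidity(p):
    """Rapidity y = 0.5 ln((E+pz)/(E-pz)); for a massive jet this differs
    from the pseudorapidity eta, which is why pT and y are used."""
    return 0.5 * math.log((p[0] + p[3]) / (p[0] - p[3]))

def jet_mass(p):
    """Invariant mass from E^2 - |p|^2 (clamped against rounding)."""
    return math.sqrt(max(p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2, 0.0))
```

Summing two massless towers pointing in different directions already yields a massive jet, which illustrates why $E$ and $|\vec{p}|$ of a jet differ in general.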
The requirement of \mbox{$\ensuremath{p_{\textrm{T}}} \geqslant 30$}~\gevc{} is applied to the corrected jet transverse momentum. Events are selected if the leptons are separated from the selected jets by $\Delta{R}_{\ell-\textrm{jet}} \geqslant 0.7$~\footnote{$\Delta{R}$ is defined as $\Delta{R} = \sqrt{\Delta{y}^{2}+\Delta{\phi}^{2}}$}. \section{\label{sec:MCsamples}Monte Carlo Simulation} Samples of \ensuremath{Z/\gamma^* \rightarrow e^+e^- + \textrm{jets}}{}, \ensuremath{Z/\gamma^* \rightarrow \mu^+\mu^- + \textrm{jets}}{}, and \ensuremath{Z/\gamma^* \rightarrow \tau^+\tau^- + \textrm{jets}}{} events are generated using \alpgen{} v2.14~\cite{Mangano:2002ea} interfaced to \pythia{} 6.4.25~\cite{Sjostrand:2006za} for the parton shower, with CTEQ5L parton distribution functions (PDF)~\cite{Lai:1999wy} and using the set of \emph{tuning} parameters denoted as Tune Perugia 2011~\cite{Skands:2010ak}. The MLM matching procedure~\cite{Alwall:2007fs} is applied to avoid double-counting of processes between the matrix-element calculations and the parton-shower algorithm of \pythia. In addition, samples of \mbox{\ensuremath{t \overline{t}}}, associated production of \ensuremath{W}{} and \Z{} bosons ($WW$, $WZ$, $ZZ$), and inclusive \Zg{} production are generated using \pythia{} v6.2 with the same PDF set and Tune A~\cite{Affolder:2001xt}. All the samples are passed through a full CDF II detector simulation based on \textsc{geant}~\cite{Brun:1987ma}, where the \textsc{gflash}~\cite{Grindhammer:1989zg} package is used for parametrization of the energy deposition in the calorimeters, and corrected to account for differences between data and simulation in the trigger selection and lepton identification efficiencies. 
The electron \ensuremath{E_{\textrm{T}}}{} and the muon \ensuremath{p_{\textrm{T}}}{} scale and resolution are corrected to match the dilepton invariant mass distributions $M_{\ell\ell}$ observed in the data in the region \mbox{$84 \leqslant M_{\ell\ell} \leqslant 98$}~\gevcsq{}. Simulated \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} samples are also reweighted with respect to the number of multiple \ensuremath{p\bar{p}}{} interactions in the same bunch crossing so as to have the same instantaneous luminosity profile as the data. The MC samples are used to determine background contributions and derive the unfolding correction factors described in Sec.~\ref{sec:unf}. \section{\label{sec:Backgrounds}Background Contributions} The selected sample of \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} data events is expected to include events from various background processes. The largest background contributions come from pair production of \ensuremath{W}{} and \Z{} bosons, $WW$, $WZ$, $ZZ$, and top-antitop quarks, \mbox{\ensuremath{t \overline{t}}}; a smaller contribution comes from \ensuremath{Z/\gamma^* \rightarrow \tau^+\tau^- + \textrm{jets}}{} events. Inclusive jets and \mbox{\ensuremath{W + \textrm{jets}}}{} events contribute to the background if one or more jets are misidentified as electrons or muons. Various strategies are used to estimate the background contributions. In the \Zee{} channel, a data-driven method is used to estimate the inclusive jets and \mbox{\ensuremath{W + \textrm{jets}}}{} background contribution. First, the probability for a jet to pass the electron selection requirements is evaluated using an inclusive jet data sample. This probability is denoted as the \emph{fake} rate and is parametrized as a function of the jet transverse energy. 
The fake rate is applied to jets from a sample of events with one reconstructed electron: for each event, all the possible electron-jet combinations are considered as \Zg{} candidates, the jet transverse energy is corrected to match on average the corresponding electron energy, and all the electron-jet pairs that fulfill the selection requirements are weighted with the corresponding fake rate associated with the jet, and used to estimate the background rate for each observed distribution. In the muon channel, the \mbox{\ensuremath{W + \textrm{jets}}}{} and inclusive jets processes constitute a source of background if a track inside a jet is identified as a muon. To estimate this background contribution, events containing muon pairs are reconstructed following the analysis selection but requiring the two muons to have the same electric charge. The other background contributions, originating from \mbox{\ensuremath{t \overline{t}}}, associated production of \ensuremath{W}{} and \Z{} bosons ($WW$, $WZ$, $ZZ$), and \Ztt{}~+~jets, are estimated with simulated samples. The \mbox{\ensuremath{t \overline{t}}}{} sample is normalized according to the approximate NNLO cross section~\cite{oai:arXiv.org:0807.2794}, the $WW$, $WZ$ and $ZZ$ samples are normalized according to the NLO cross sections~\cite{oai:arXiv.org:hep-ph/9905386}, and the \Ztt{} + jets sample is normalized according to the \Z{} inclusive NNLO cross section~\cite{Abulencia:2005ix}. The total background varies from about $2\%$ to $6\%$ depending on jet multiplicity as shown in Table~\ref{tab:backg}, which reports the sample composition per jet-multiplicity bin in the electron and muon channels. 
\begin{table} \caption{Estimated background contributions, background systematic uncertainties, and data yield for (a) \ensuremath{Z/\gamma^* \rightarrow e^+e^- + \geqslant N_{\textrm{jets}}}{} and (b) \ensuremath{Z/\gamma^* \rightarrow \mu^+\mu^- + \geqslant N_{\textrm{jets}}}{} channels, with the number of jets $N_{\textrm{jets}} \geqslant 1,2,3$, and $4$.} \begin{center} \label{tab:backg} \begin{tabular}{lcccc} \toprule \multicolumn{1}{c}{$\Zee{}$ + jets } & \multicolumn{4}{c}{Estimated events} \\ \colrule Backgrounds & $\geqslant 1$ jet & $\geqslant 2$ jets & $\geqslant 3$ jets & $\geqslant 4$ jets \\ \colrule QCD, $W$ + jets & $ 25.9 \pm 3.9$ & $ 4.0 \pm 0.6$ & $0.6 \pm 0.1$ & $\leqslant 0.1$\\ $WW$, $ZZ$, $ZW$ & $ 119 \pm 36$ & $ 43 \pm 13$ & $4.2 \pm 1.3$ & $ 0.3 \pm 0.1$\\ \mbox{\ensuremath{t \overline{t}}}{} & $ 45 \pm 13$ & $ 25.4 \pm 7.6$ & $2.9 \pm 0.9$ & $ 0.2 \pm 0.1$\\ \Ztt{} + jets & $ 7.2 \pm 2.2$ & $ 0.5 \pm 0.1$ & $<0.1$ & $<0.1$ \\ \colrule Total background & $ 197 \pm 38 $ & $ 73 \pm 15 $ & $ 7.8 \pm 1.5$ & $ 0.7 \pm 0.1$\\ \colrule Data & $ 12910 $ & $ 1451 $ & $ 137 $ & $ 13 $\\ \botrule \end{tabular}\\ \subfigure[]{} \end{center} \begin{center} \begin{tabular}{lcccc} \toprule \multicolumn{1}{c}{$\Zmm{}$ + jets } & \multicolumn{4}{c}{Estimated events} \\ \colrule Backgrounds & $\geqslant 1$ jet & $\geqslant 2$ jets & $\geqslant 3$ jets & $\geqslant 4$ jets \\ \colrule QCD, $W$ + jets & $ 51 \pm 51$ & $ 18 \pm 18$ & $3 \pm 3$ & $ 1 \pm 1$ \\ $WW$, $ZZ$, $ZW$ & $ 190 \pm 57$ & $ 69 \pm 21$ & $6.7 \pm 2.0$ & $ 0.5 \pm 0.2$\\ \mbox{\ensuremath{t \overline{t}}}{} & $ 68 \pm 21$ & $ 38 \pm 12$ & $4.5 \pm 1.3$ & $ 0.5 \pm 0.1$\\ \Ztt{} + jets & $ 9.4 \pm 2.8$ & $ 1.2 \pm 0.3$ & $\leqslant 0.1$& $< 0.1$ \\ \colrule Total background & $ 318 \pm 79 $ & $ 126 \pm 30 $ & $ 14.3 \pm 3.8$ & $ 2.0 \pm 1.0$\\ \colrule Data & $ 19578$ & $ 2247 $ & $ 196$ & $ 13$\\ \botrule \end{tabular}\\ \subfigure[]{} \end{center} \end{table} Figure~\ref{fig:invmass} shows 
the invariant mass distribution for \begin{figure*} \subfigure[]{\includegraphics[width=\figsize]{CC_ZMass1_Cal_el.pdf}} \subfigure[]{\includegraphics[width=\figsize]{CC_ZMass1_Cal_mu.pdf}} \caption{Dilepton invariant mass distributions for events with at least one jet in the (a) \Zee{} and (b) \Zmm{} channels. The observed number of events divided by the integrated luminosity (black dots) is compared to the MC expectation (solid blue line), including signal and background contributions (filled histograms). } \label{fig:invmass} \end{figure*} \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events in the electron and muon decay channels. The region outside the mass range used in the analysis contains a larger fraction of background processes. Table~\ref{tab:backg_sidebands} shows the comparison between data and \begin{table*} \caption{Estimated background events and \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} MC prediction compared to the data in the low- and high-mass regions outside the mass range used in the analysis, for $\Zee + \geqslant$ 1 jet and $\Zmm + \geqslant$ 1 jet events. Invariant mass ranges are given in \gevcsq{}. 
Background systematic uncertainties and statistical uncertainties of the \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} MC prediction are shown.} \begin{center} \label{tab:backg_sidebands} \begin{tabular}{lcccc} \toprule & \multicolumn{2}{c}{$\Zee{} + \geq 1$ jet } & \multicolumn{2}{c}{$\Zmm{} + \geq 1$ jet} \\ Backgrounds & $40 \leqslant M_{ee} < 66\quad$ & $\quad116 < M_{ee} \leqslant 145\quad$ & $\quad40 \leqslant M_{\mu\mu} < 66\quad$ & $\quad116 < M_{\mu\mu} \leqslant 145$ \\ \colrule QCD, $W$ + jets & $ 15.9 \pm 2.4$ & $ 2.9 \pm 0.4$ & $ 37 \pm 37$ & $ 8 \pm 8$ \\ $WW$, $ZZ$, $ZW$ & $ 5.2 \pm 1.6$ & $ 3.2 \pm 1.0$ & $ 7.5 \pm 2.3$ & $ 4.6 \pm 1.4$ \\ \mbox{\ensuremath{t \overline{t}}}{} & $ 19.7 \pm 5.9$ & $ 15.6 \pm 4.7$ & $ 30.1 \pm 9.0$ & $ 22.4 \pm 6.7$ \\ \Ztt{} + jets & $ 10.9 \pm 3.3$ & $ 0.3 \pm 0.1$ & $ 17.5 \pm 5.2$ & $ 0.3 \pm 0.1$ \\ \colrule Total background & $ 51.7 \pm 7.3$ & $ 21.9 \pm 4.8$ & $ 92 \pm 39$ & $ 35 \pm 11$ \\ \Zg{} + jets (\alpgen{}) & $238.6 \pm 6.5$ & $196.7 \pm 5.6$ & $ 335.4 \pm 7.2$ & $ 289.0 \pm 6.4$ \\ \colrule Total prediction & $290.3 \pm 9.8$ & $218.6 \pm 7.3$ & $ 428 \pm 39$ & $ 324 \pm 12$ \\ Data & $312$ & $226$ & $ 486$ & $ 334$ \\ \botrule \end{tabular} \end{center} \end{table*} \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} signal plus background prediction for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events in the low- and high-mass regions $40 \leqslant M_{\ell\ell} < 66$~\gevcsq{} and $116 < M_{\ell\ell} \leqslant 145$~\gevcsq{}, respectively. The good agreement between data and expectation supports the method used to estimate the sample composition. \section{\label{sec:unf}Unfolding} Measured cross sections need to be corrected for detector effects in order to be compared to the theoretical predictions. 
The comparison between data and predictions is performed at the particle level, which refers to experimental signatures reconstructed from quasi-stable (lifetime greater than 10 ps) and color-confined final-state particles including hadronization and underlying-event contributions, but not the contribution of multiple \ensuremath{p\bar{p}}{} interactions in the same bunch crossing~\cite{Buttar:2008jx}. Detector-level cross sections are calculated by subtracting the estimated background from the observed events and dividing by the integrated luminosity. Measured cross sections are unfolded from detector level to particle level with a bin-by-bin procedure. For each bin of a measured observable $\alpha$, the \alpgenpythia{} \Zee{} + jets and \Zmm{} + jets MC samples are used to evaluate the unfolding factors, which are defined as $U_{\alpha}=\frac{d\sigma^{\textrm{MC}}_{\textrm{p}}}{d\alpha}/\frac{d\sigma^{\textrm{MC}}_{\textrm{d}}}{d\alpha}$, where $\sigma^{\textrm{MC}}_{\textrm{p}}$ and $\sigma^{\textrm{MC}}_{\textrm{d}}$ are the simulated particle-level and detector-level cross sections, respectively. Measured particle level cross sections are evaluated as $\frac{d\sigma_{\textrm{p}}}{d\alpha} = \frac{d\sigma_{\textrm{d}}}{d\alpha} \cdot U_{\alpha}$, where $\sigma_{\textrm{d}}$ is the detector-level measured cross section. The simulated samples used for the unfolding are validated by comparing measured and predicted cross sections at detector level. The unfolding factors account for \Zll{} reconstruction efficiency, particle detection, and jet reconstruction in the calorimeter. Unfolding factors are typically around 2.5 (1.7) in value and vary between 2.3 (1.6) at low \ensuremath{p_{\textrm{T}}}{} and 3 (2) at high \ensuremath{p_{\textrm{T}}}{} for the \Zee{} (\Zmm) channel. At particle level, radiated photons are recombined with leptons following a scheme similar to that used in Ref.~\cite{Denner:2011vu}. 
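The bin-by-bin unfolding defined above reduces to a per-bin ratio and product; a minimal sketch with hypothetical bin contents (illustrative, not the analysis code):

```python
def bin_by_bin_unfold(data_detector, mc_particle, mc_detector):
    """For each bin: U = (MC particle-level) / (MC detector-level), and the
    unfolded result is (measured detector-level) * U, as in
    d(sigma_p)/d(alpha) = d(sigma_d)/d(alpha) * U_alpha."""
    return [d * (p / r)
            for d, p, r in zip(data_detector, mc_particle, mc_detector)]
```

With unfolding factors around 2.5, as quoted above for the electron channel, a detector-level bin content of 2 (in arbitrary cross-section units) would unfold to about 5.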
A photon and a lepton from \Zll{} decays are recombined when $\Delta{R}_{\gamma-\ell} \leqslant 0.1$. If both charged leptons in the final state are close to a photon, the photon is recombined with the lepton with the smallest $\Delta{R}_{\gamma-\ell}$. Photons that are not recombined to leptons are included in the list of particles for the jet clustering. With such a definition, photons are clustered into jets at the particle level, and \Zg{} + $\gamma$ production is included in the definition of \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{}. The contribution of the \Zg{} + $\gamma$ process to the \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} cross section is at the percent level, and taken into account in the \pythia{} simulation through photon initial- (ISR) and final-state radiation (FSR). Reconstruction of experimental signatures and kinematic requirements applied at particle level establish the measurement definition. Requirements applied at the detector level are also applied to jets and leptons at the particle level so as to reduce the uncertainty of the extrapolation of the measured cross section. Jets are reconstructed at particle level in the simulated sample with the midpoint algorithm in a cone of radius $R=0.7$, the split-merge threshold set to 0.75, and using as seeds particles with $\ensuremath{p_{\textrm{T}}} \geqslant 1$ \gevc{}. The measured cross sections are defined in the kinematic region \mbox{$66 \leqslant M_{\ell\ell} \leqslant 116$}~\gevcsq{}, \mbox{$|\eta^{\ell}|\leqslant1$}, \mbox{$\ensuremath{p_{\textrm{T}}}^{\ell} \geqslant 25$}~\gevc{} ($\ell=e,~\mu$), \mbox{$\ptjet \geqslant 30$}~\gevc{}, \mbox{$|\yjet|\leqslant2.1$}, and \mbox{$\Delta{R}_{\ell-\textrm{jet}} \geqslant 0.7$}. \section{\label{sec:sys}Systematic Uncertainties} All significant sources of systematic uncertainties are studied. The main systematic uncertainty of the \Zll{} + jets measurement is due to the jet-energy-scale correction. 
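The photon-lepton recombination scheme of the preceding paragraph can be sketched as follows; particles are represented only by their $(\eta, \phi)$ directions and the four-momentum addition is omitted (an illustrative simplification, not the analysis code):

```python
import math

def delta_r(a, b):
    """Angular distance between two (eta, phi) directions, with the
    phi difference wrapped into [-pi, pi]."""
    deta = a[0] - b[0]
    dphi = (a[1] - b[1] + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def recombine_photons(leptons, photons, r_max=0.1):
    """Assign each photon to the nearest lepton if their separation is
    within r_max; photons left unassigned would be passed to the jet
    clustering, as in the particle-level definition above."""
    assignments = {}   # photon index -> lepton index
    leftover = []      # photon indices that go to the jet clustering
    for ip, photon in enumerate(photons):
        dist, il = min((delta_r(photon, lep), i)
                       for i, lep in enumerate(leptons))
        if dist <= r_max:
            assignments[ip] = il
        else:
            leftover.append(ip)
    return assignments, leftover
```

Taking the minimum over leptons implements the rule that a photon close to both charged leptons is recombined with the one at smaller $\Delta R_{\gamma-\ell}$.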
The jet-energy scale is varied according to Ref.~\cite{Bhatti:2005ai}. Three sources of systematic uncertainty are considered: the absolute jet-energy scale, multiple \ensuremath{p\bar{p}}{} interactions, and the $\eta$-dependent calorimeter response. The absolute jet-energy scale uncertainty depends on the response of the calorimeter to individual particles and on the accuracy of the simulated model for the particle multiplicity and \ensuremath{p_{\textrm{T}}}{} spectrum inside a jet. This uncertainty significantly affects observables involving high-\ensuremath{p_{\textrm{T}}}{} jets and high jet multiplicity. The jet-energy uncertainty related to multiple \ensuremath{p\bar{p}}{} interactions arises from inefficiency in the reconstruction of multiple interaction vertices, and mainly affects jets with low \ensuremath{p_{\textrm{T}}}{} and high rapidity, and events with high jet multiplicity. The $\eta$-dependent uncertainty accounts for residual discrepancies between data and simulation after the calorimeter response is corrected for the dependence on $\eta$. Trigger efficiency and lepton identification uncertainties are of the order of $1\%$ and give small contributions to the total uncertainty. A $30\%$ uncertainty is applied to the MC background yield estimation, to account for missing higher-order corrections on the cross-section normalizations~\cite{Aaltonen:2007ae}. In the \Zee{} channel, a $15\%$ uncertainty is assigned to the data-driven QCD and \mbox{\ensuremath{W + \textrm{jets}}}{} background yield estimation, to account for the statistical and systematic uncertainty of the fake-rate parametrization. In the \Zmm{} channel, a $100\%$ uncertainty is applied to the subtraction of QCD and \mbox{\ensuremath{W + \textrm{jets}}}{} background, which accounts for any difference between the observed same-charge yield and the expected opposite-charge background contribution. 
The impact of both sources on the uncertainties of the measured cross sections is less than $2\%$. The primary-vertex acceptance is estimated by fitting the beam luminosity as a function of $z$ using minimum-bias data; the resulting uncertainty is approximately $1\%$. Finally, the luminosity estimation has an uncertainty of $5.8\%$, which is applied to the measurements~\cite{Klimenko:2003if}. As examples, systematic uncertainties as functions of inclusive jet \ensuremath{p_{\textrm{T}}}{} in the \Zee{} channel and inclusive jet rapidity in the \Zmm{} channel are shown in Fig.~\ref{fig:Total_Sys_1J}; \begin{figure*} \centering \subfigure[]{\includegraphics[width=\figsize]{CC_Pt1_Incl_Syst_Zee.pdf}} \subfigure[]{\includegraphics[width=\figsize]{CC_Y1_Incl_Syst_Zmm.pdf}} \caption{Relative systematic uncertainties as functions of (a) inclusive jet \ensuremath{p_{\textrm{T}}}{} in the \Zee{} channel and (b) inclusive jet rapidity in the \Zmm{} channel, for events with \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}. \label{fig:Total_Sys_1J}} \end{figure*} the corresponding systematic uncertainties as functions of inclusive jet \ensuremath{p_{\textrm{T}}}{} in the \Zmm{} channel and inclusive jet rapidity in the \Zee{} channel have similar trends. \section{\label{sec:pred}Theoretical Predictions} Measured \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} differential cross sections are compared to several theoretical predictions such as NLO perturbative QCD calculations evaluated with \mcfm~\cite{Campbell:2002tg} and \blackhatsherpa~\cite{Berger:2008sj}, approximate NNLO \loopsimmcfm{} predictions~\cite{Rubin:2010xp}, perturbative NLO QCD predictions including NLO electroweak corrections~\cite{Denner:2011vu}, and to generators based on LO matrix element (ME) supplemented by parton showers (PS), like \alpgenpythia~\cite{Mangano:2002ea, Sjostrand:2006za}, and NLO generators interfaced to PS as \powhegpythia~\cite{Alioli:2010qp}. 
For the \loopsimmcfm{} predictions, the notation $\overline{\textrm{n}}^p$N$^q$LO introduced in Ref.~\cite{Rubin:2010xp} is used, which denotes an approximation to the N$^{p+q}$LO result in which the $q$ lowest loop contributions are evaluated exactly, whereas the $p$ highest loop contributions are evaluated with the \loopsim{} approximation; according to such a notation, the approximate NNLO \loopsimmcfm{} predictions are denoted with \nnlo{}. The NLO \mcfm{} predictions are available for final states from \Zg{} production in association with one or more, and two or more jets, \loopsimmcfm{} only for the \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} final state, NLO \blackhatsherpa{} for jet multiplicity up to \mbox{\ensuremath{Z/\gamma^* + \geqslant 3~\textrm{jets}}}{}, and \powhegpythia{} predictions are available for all jet multiplicities but have NLO accuracy only for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}. The \alpgen{} LO calculation is available for jet multiplicities up to \Zg{} + 6 jets but, for the current comparison, the calculation is restricted to up to \mbox{\ensuremath{Z/\gamma^* + \geqslant 4~\textrm{jets}}}. Electroweak corrections at NLO are available for the \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} final state. Table~\ref{tab:theory_predictions} lists the theoretical predictions which are compared to measured cross sections. \begin{table*} \caption{Summary of the theoretical predictions compared to the measured cross sections. The order of the expansion in the strong-interaction coupling (QCD order), the order of the expansion in the fine-structure constant (EW order), the matching to a parton shower, and the available jet multiplicities in \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} production are shown for each prediction. 
} \begin{center} \label{tab:theory_predictions} \begin{tabular}{lllll} \toprule Prediction & QCD order & EW order & Parton shower & Jet multiplicity \\ \colrule \mcfm{} & LO/NLO & LO & no & $\Zg{} + \geqslant 1$ and $2$ jets \\ \blackhatsherpa{} & LO/NLO & LO & no & $\Zg{} + \geqslant 1, 2$, and 3 jets \\ \loopsimmcfm{} & \nnlo{} & LO & no & $\Zg{} + \geqslant 1$ jet \\ \nloqcdew{} & NLO & NLO & no & $\Zg{} + \geqslant 1$ jet \\ \alpgenpythia{} & LO & LO & yes & $\Zg{} + \geqslant 1,2,3$, and $4$ jets \\ \powhegpythia{} & NLO & LO & yes & $\Zg{} + \geqslant 1,2,3$, and $4$ jets \\ \botrule \end{tabular} \end{center} \end{table*} The input parameters of the various predictions are chosen to be homogeneous in order to emphasize the differences between the theoretical models. The MSTW2008~\cite{Martin:2009iq} PDF sets are used as the default choice in all the predictions. The LO PDF set and one-loop order for the running of the strong-interaction coupling constant $\alpha_s$ are used for the LO \mcfm{} and \blackhatsherpa{} predictions; the NLO PDF set and two-loop order for the running of $\alpha_s$ for \powheg{}, \alpgen{}, NLO \mcfm{}, and NLO \blackhat{} predictions; the NNLO PDF set and three-loop order for the running of $\alpha_s$ for the \nnlo{} \loopsim{} prediction. The contribution to the NLO \mcfm{} prediction uncertainty due to the PDF is estimated with the MSTW2008NLO PDF set at the 68\% confidence level (CL), by using the Hessian method~\cite{Pumplin:2001ct}. There are 20 eigenvectors and a pair of uncertainty PDFs associated with each eigenvector. The pair of PDFs corresponds to positive and negative 68\% CL excursions along the eigenvector. The PDF contribution to the prediction uncertainty is the quadrature sum of prediction uncertainties from each uncertainty PDF. The impact of different PDF sets is studied in \mcfm{}, \alpgen{} and \powheg{}. 
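A common symmetric variant of the Hessian quadrature sum described above can be sketched as follows (the analysis may use an asymmetric prescription; this is an illustration under that simplifying assumption):

```python
import math

def hessian_pdf_uncertainty(eigenvector_pairs):
    """Symmetric Hessian PDF uncertainty: for each eigenvector, take half
    the difference between the predictions obtained with the '+' and '-'
    uncertainty PDFs, then add the eigenvector contributions in quadrature.
    eigenvector_pairs is a list of (plus_prediction, minus_prediction)."""
    return math.sqrt(sum(((up - dn) / 2.0) ** 2
                         for up, dn in eigenvector_pairs))
```

For MSTW2008NLO the list would contain 20 entries, one per eigenvector, each evaluated from the corresponding pair of 68\% CL excursion PDFs.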
The variation in the predictions with CTEQ6.6~\cite{Nadolsky:2008zw}, NNPDF2.1~\cite{Ball:2011mu}, CT10~\cite{Lai:2010vv}, and MRST2001~\cite{Martin:2001es} PDF sets is of the same order as the MSTW2008NLO uncertainty. The LHAPDF 5.8.6 library~\cite{Whalley:2005nh} is used to access PDF sets, except in \alpgen{}, where PDF sets are provided within the MC program. The nominal choice~\cite{Berger:2010vm, *Berger:2009ep, Bauer:2009km} for the functional form of the renormalization and factorization scales is $\mu_0=\hat{H}_T/2=\frac{1}{2}\big(\sum_j p_{T}^{j} + p_{T}^{\ell^+} + p_{T}^{\ell^-}\big)$~\footnote{In \blackhat{} and \powheg{} predictions, the alternative definition $\mu_0=\hat{H'}_{\textrm{T}}/2=\frac{1}{2}\big(\sum_j p_{\textrm{T}}^{j} + E_{\textrm{T}}^{Z}\big)$ with $E_{\textrm{T}}^{Z} = \sqrt{M_{Z}^{2} + p_{\textrm{T},Z}^{2}}$ is used, where the index $j$ runs over the partons in the final state.}, where the index $j$ runs over the partons in the final state. An exception to this default choice is the \alpgen{} prediction, which uses $\mu_0=\sqrt{m_{Z}^{2} + \sum_j \big(p_{T}^{j}\big)^{2}}$; the difference with respect to $\mu_0=\hat{H}_T/2$ was found to be negligible~\cite{Camarda:2012yha}. The factorization and renormalization scales are varied simultaneously between half and twice the nominal value $\mu_0$, and the corresponding variations in the cross sections are considered as an uncertainty of the prediction. This is the largest uncertainty associated with the theoretical models, except for the \alpgenpythia{} prediction, where the largest uncertainty is associated with the variation of the renormalization scale using the Catani, Krauss, Kuhn, Webber (CKKW) scale-setting procedure~\cite{Catani:2001cc}. 
In the \alpgen{} prediction, the value of the QCD scale, $\Lambda_{QCD}$, and the running order of the strong-interaction coupling constant in the CKKW scale-setting procedure, $\alpha_{s}^{\textrm{CKKW}}$, are set to $\Lambda_{QCD}=0.26$~GeV and one loop, respectively~\cite{Cooper:2011gk}. These settings match the corresponding values of $\Lambda_{QCD}$ and the running order of $\alpha_s$ for ISR and FSR of the \pythia{} Tune Perugia 2011. The variation of the CKKW renormalization scale is introduced together with an opposite variation of $\Lambda_{QCD}$ in the \pythia{} tune. Simultaneous variations of the renormalization and factorization scales for the matrix element generation in \alpgen{} were found to be smaller than the variation of the CKKW scale~\cite{Camarda:2012yha}. The differences with respect to the previously used Tune A and Tune DW~\cite{Albrow:2006rt,*Field:2006gq} are studied, with the $\alpha_s$-matched setup of Tune Perugia 2011 providing a better modeling of the shape and normalization of the \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} differential cross sections. In the case of Tune A and Tune DW, the running of $\alpha_{s}^{\textrm{CKKW}}$ in \alpgen{} and $\Lambda_{QCD}$ in \pythia{} is determined by the PDF set, which is CTEQ5L in both cases, to avoid a mismatch. The \powheg{} calculation is performed with the weighted-events option, and the Born suppression factor for the reweighting is set to $10$~\gevc{}, following Ref.~\cite{Alioli:2010qp}. Further studies on the impact of different choices of the functional form of the renormalization and factorization scales have been performed in Ref.~\cite{Camarda:2012yha}. In the LO and NLO \mcfm{} predictions, jets are clustered with the native \mcfm{} \emph{cone} algorithm with $R=0.7$. This is a seedless cone algorithm that follows the jet clustering outlined in Ref.~\cite{Blazey:2000qt}. 
The split-merge threshold is set to 0.75, and the maximum $\Delta{R}$ separation $R_{sep}$ for two partons to be clustered in the same jet~\cite{Ellis:1992qq}, is set to \mbox{$R_{sep}=1.3R$}~\cite{Aaltonen:2007ae}. For the \loopsimmcfm{} prediction the minimum jet \ensuremath{p_{\textrm{T}}}{} for the generation is set to $1$ \gevc{}, and the jet clustering is performed with the fastjet~\cite{Cacciari:2011ma} interface to the SISCone~\cite{Salam:2007xv} jet algorithm with cone radius $R=0.7$ and a split-merge threshold of 0.75. The same parameters and setup for the jet clustering are used in the \blackhatsherpa{} calculation, and the predictions are provided by the \blackhat{} authors. A recently developed MC program allows the calculation of both NLO electroweak and NLO QCD corrections to the \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} cross sections~\cite{Denner:2011vu}. In such a prediction, the QCD and electroweak part of the NLO corrections are combined with a factorization ansatz: NLO QCD and electroweak corrections to the LO cross section are evaluated independently and multiplied. Such a combined prediction is referred to as \nloqcdew{}. The prediction is evaluated with the configuration described in Ref.~\cite{Denner:2011vu}, except for the renormalization and factorization scales, which are set to $\mu_0=\hat{H}_T/2$, and the predictions are provided by the authors. Fixed-order perturbative QCD predictions need to be corrected for nonperturbative QCD effects in order to compare them with the measured cross sections, including the underlying event associated with multiparton interactions, beam remnants, and hadronization. Another important effect that is not accounted for in the perturbative QCD predictions and needs to be evaluated is the quantum electrodynamics (QED) photon radiation from leptons and quarks. Both ISR and FSR are considered, with the main effect coming from FSR. 
The inclusion of QED radiation also corrects the \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} cross sections for the contribution of \Zg{} + $\gamma$ production, which enters the definition of the \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} particle level used in this measurement. The nonperturbative QCD effects and the QED radiation are estimated with the MC simulation based on the $\alpha_s$-matched Perugia 2011 configuration of \alpgenpythia{}, where \pythia{} handles the simulation of these effects. To evaluate the corrections, parton-level and particle-level \alpgenpythia{} cross sections are defined: parton-level cross sections are calculated with QED radiation, hadronization, and multiparton interactions disabled in the \pythia{} simulation, whereas these effects are simulated for the particle-level cross sections. Kinematic requirements on leptons and jets and jet-clustering parameters for the parton and particle levels are the same as those used for the measured cross sections, and photons are recombined to leptons in $\Delta{R}=0.1$ if radiated photons are present in the final state. The corrections are obtained by evaluating the ratio of the particle-level cross sections over the parton-level cross sections, bin-by-bin for the various measured variables. Figure~\ref{fig:CC_Pt1Y1_P2H} shows the parton-to-particle \begin{figure*} \begin{center} \subfigure[]{\includegraphics[width=\figsize]{CC_Pt1_Incl_P2H.pdf}} \subfigure[]{\includegraphics[width=\figsize]{CC_Y1_Incl_P2H.pdf}} \caption{Parton-to-particle corrections as functions of (a) inclusive jet \ensuremath{p_{\textrm{T}}}{} and (b) inclusive jet rapidity for \Zg{} + $\geqslant 1$ jet events. The relative contributions of QED radiation, hadronization, and underlying event are shown. 
\label{fig:CC_Pt1Y1_P2H}} \end{center} \end{figure*} corrections as functions of inclusive jet \ensuremath{p_{\textrm{T}}}{} and inclusive jet rapidity for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events, with the contributions from QED ISR and FSR radiation, hadronization, and underlying event. The corrections have a moderate dependence on jet multiplicity, as shown in Fig.~\ref{fig:CC_NJ_Incl_P2H}. Figure~\ref{fig:CC_Pt1Y1_TUNE} shows \begin{figure} \begin{center} \includegraphics[width=\figsize]{CC_NJ_Incl_P2H.pdf} \caption{Parton-to-particle corrections as a function of jet multiplicity. The relative contributions of QED radiation, hadronization, and underlying event are shown. \label{fig:CC_NJ_Incl_P2H}} \end{center} \end{figure} \begin{figure*} \begin{center} \subfigure[]{\includegraphics[width=\figsize]{CC_Pt1_Incl_P2H_TUNES.pdf}} \subfigure[]{\includegraphics[width=\figsize]{CC_Y1_Incl_P2H_TUNES.pdf}} \caption{Parton-to-particle corrections as functions of (a) inclusive jet \ensuremath{p_{\textrm{T}}}{} and (b) inclusive jet rapidity for \Zg{} + $\geqslant 1$ jet events, with various choices of the \pythia{} tune and different matrix element generators \alpgen{} or \powheg{}. \label{fig:CC_Pt1Y1_TUNE}} \end{center} \end{figure*} the parton-to-particle corrections evaluated with various tunes of the underlying-event and hadronization model in \pythia{}, namely Tune A~\cite{Affolder:2001xt}, Tune DW~\cite{Albrow:2006rt,*Field:2006gq}, Tune Perugia 2011~\cite{Skands:2010ak}, and Tune Z1~\cite{Field:2011iq}, and with the \alpgenpythia{} or \powhegpythia{} simulations. The corrections are generally below $10\%$, and independent of the \pythia{} MC tune and of the underlying matrix-element generator. The \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} cross sections are measured using the midpoint algorithm for the reconstruction of the jets in the final state. The midpoint algorithm belongs to the class of iterative cone algorithms. 
Though they present several experimental advantages, iterative cone algorithms are not infrared and collinear safe, which means that the number of hard jets found by such jet algorithms is sensitive to a collinear splitting or to the addition of a soft emission. In particular, the midpoint jet algorithm used in this measurement is infrared unsafe, as divergences appear in a fixed-order calculation for configurations with three hard particles close in phase space plus a soft one, as discussed in Refs.~\cite{Salam:2009jx,Salam:2007xv}. In order to compare the measured cross sections with a fixed-order prediction, an infrared and collinear safe jet algorithm that is as similar as possible to the midpoint algorithm is used in the prediction. This is the SISCone algorithm with the same split-merge threshold of 0.75 and the same jet radius $R=0.7$ as the midpoint algorithm used for the measured cross sections. The additional uncertainty coming from the use of different jet algorithms between data and theory is estimated by comparing the particle-level cross sections for the two jet algorithms. Figure~\ref{fig:CC_Pt1_Incl_JETALG} shows the cross section ratios \begin{figure*} \begin{center} \subfigure[]{\includegraphics[width=\figsize]{CC_Pt1_Incl_JALGSIS.pdf}} \subfigure[]{\includegraphics[width=\figsize]{CC_Y1_Incl_JALGSIS.pdf}} \caption{Ratio of differential cross sections evaluated with the midpoint and with the SISCone jet algorithms, as functions of (a) inclusive jet \ensuremath{p_{\textrm{T}}}{} and (b) inclusive jet rapidity in \Zg{} + $\geqslant 1$ jet events. \label{fig:CC_Pt1_Incl_JETALG}} \end{center} \end{figure*} of midpoint and SISCone jet algorithms for inclusive jet \ensuremath{p_{\textrm{T}}}{} and rapidity in the \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} final state. The difference at parton level between SISCone and midpoint is between $2\%$ and $3\%$.
Larger differences between midpoint and SISCone are observed if the underlying event is simulated; however, they do not affect the comparison with fixed-order predictions. Figure~\ref{fig:CC_NJ_Incl_JETALG} shows the same \begin{figure} \begin{center} \includegraphics[width=\figsize]{CC_NJ_Incl_JALGSIS.pdf} \caption{Ratio of differential cross sections evaluated with the midpoint and with the SISCone jet algorithms, as a function of jet multiplicity in \mbox{\ensuremath{Z/\gamma^* + \geqslant N_{\textrm{jets}}}}{}. \label{fig:CC_NJ_Incl_JETALG}} \end{center} \end{figure} comparison as a function of jet multiplicity. The difference at parton level between midpoint and SISCone is always below $3\%$ and generally uniform. \section{\label{sec:results}Results} The differential cross sections of \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} production in \ensuremath{p\bar{p}}{} collisions are measured independently in the \Zee{} and \Zmm{} decay channels and combined using the best linear unbiased estimate (BLUE) method~\cite{Lyons:1988rp}. The BLUE algorithm returns a weighted average of the measurements taking into account different types of uncertainty and their correlations. Systematic uncertainties related to trigger efficiencies, lepton reconstruction efficiencies, and QCD and \mbox{\ensuremath{W + \textrm{jets}}}{} background estimation are considered uncorrelated between the two channels; all other contributions are treated as fully correlated. Inclusive \mbox{\ensuremath{Z/\gamma^* + \geqslant N_{\textrm{jets}}}}{} cross sections are measured for number of jets $N_{\textrm{jets}} \geqslant 1,2,3$, and $4$; various differential cross sections are measured in the \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{}, \mbox{\ensuremath{Z/\gamma^* + \geqslant 2~\textrm{jets}}}{}, and \mbox{\ensuremath{Z/\gamma^* + \geqslant 3~\textrm{jets}}}{} final states. Table \ref{tab:results} summarizes the measured cross sections.
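For two measurements of the same quantity, the BLUE combination reduces to a variance-minimizing weighted average whose weights account for the correlation between the channels. The following Python sketch illustrates the idea for a single combined quantity; the two-measurement simplification and the numbers are assumptions for illustration, not the analysis code:

```python
def blue_combine(x1, x2, s1, s2, rho):
    """Best linear unbiased estimate of two measurements x1, x2 with
    uncertainties s1, s2 and correlation coefficient rho between them."""
    c12 = rho * s1 * s2
    # Weights minimize the variance of w1*x1 + w2*x2 subject to w1 + w2 = 1.
    denom = s1 ** 2 + s2 ** 2 - 2.0 * c12
    w1 = (s2 ** 2 - c12) / denom
    w2 = (s1 ** 2 - c12) / denom
    xbar = w1 * x1 + w2 * x2
    var = w1 ** 2 * s1 ** 2 + w2 ** 2 * s2 ** 2 + 2.0 * w1 * w2 * c12
    return xbar, var ** 0.5

# Illustrative combination of a cross section measured in the Z->ee and
# Z->mumu channels; the numbers are invented for this sketch.
xbar, err = blue_combine(10.2, 9.8, 0.5, 0.6, 0.4)
```

With $\rho = 0$ this reduces to the familiar inverse-variance weighting; in the analysis the correlated part of the covariance collects the systematic uncertainties treated as fully correlated between the two channels.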
\begin{table*} \caption{Summary of measured cross sections for each \mbox{\ensuremath{Z/\gamma^* + \geqslant N_{\textrm{jets}}}}{} final state. } \begin{center} \label{tab:results} \begin{tabular}{ll} \toprule Final state & Measured quantity~(Fig.) \\ \colrule \mbox{\ensuremath{Z/\gamma^* + \geqslant N_{\textrm{jets}}}}{} & Inclusive cross section for $N_{\textrm{jets}} \geqslant 1,2,3$, and $4$~(\ref{fig:CC_NJ_Incl_APLB}) \\ \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} & Leading jet \ensuremath{p_{\textrm{T}}}{}~(\ref{fig:CC_Pt1_Lead_APBEL}), inclusive jet \ensuremath{p_{\textrm{T}}}{}~(\ref{fig:CC_Pt1_Incl_APML},\ref{fig:CC_Pt1_Incl_MCFM}), inclusive jet $y$~(\ref{fig:CC_Y1_Incl_APBL},\ref{fig:CC_Y1_Incl_MCFM}), $\ensuremath{p_{\textrm{T}}}^Z$~(\ref{fig:CC_ZPt1_APMEL}), $\Delta{\phi}_{Z,\textrm{jet}}$~(\ref{fig:CC_Zj_DPhi_LPA}), \ensuremath{H_{\textrm{T}}^{\textrm{jet}}}{}~(\ref{fig:CC_Htj1_MAPL})\\ \mbox{\ensuremath{Z/\gamma^* + \geqslant 2~\textrm{jets}}}{} & $2\textit{nd}$ leading jet \ensuremath{p_{\textrm{T}}}{}~(\ref{fig:CC_Pt2_Lead}), inclusive-jet $y$~(\ref{fig:CC_Y2_Incl}), $M_{\mathit{jj}}$~(\ref{fig:CC_DJ_Mass_AM}), dijet $\Delta{R}$~(\ref{fig:CC_DJ_DR_AM}), dijet $\Delta{\phi}$~(\ref{fig:CC_DJ_DPhi_AM}), dijet $\Delta{y}$~(\ref{fig:CC_DJ_DY_AM}), $\theta_{Z,{\mathit{jj}}}$~(\ref{fig:CC_Zjj_Theta_AM})\\ \mbox{\ensuremath{Z/\gamma^* + \geqslant 3~\textrm{jets}}}{} & $3\textit{rd}$ leading jet \ensuremath{p_{\textrm{T}}}{}~(\ref{fig:CC_Pt3_Y3_BH} a), inclusive-jet $y$~(\ref{fig:CC_Pt3_Y3_BH} b) \\ \botrule \end{tabular} \end{center} \end{table*} \subsection{Cross section for the production of a \Zg{} boson in association with $N$ or more jets\label{sec:Znjet_results}} The \mbox{\ensuremath{Z/\gamma^* + \geqslant N_{\textrm{jets}}}}{} production cross sections are measured for $N_{\textrm{jets}}$ up to four and compared to LO and NLO perturbative QCD \blackhatsherpa{}, LO-ME+PS \alpgenpythia{}, and NLO+PS \powhegpythia{} predictions.
The \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} cross section is compared also to the \nnlo{} \loopsimmcfm{} prediction. Figure~\ref{fig:CC_NJ_Incl_APLB} shows the inclusive cross \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_NJ_Incl_APLB_Zll.pdf} \caption{Inclusive \Zg{} + $\geqslant N$ jets cross section as a function of jet multiplicity. The measured cross section (black dots) is compared to the \blackhatsherpa{} NLO prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The lower and right panels show the data-to-theory ratio with respect to other theoretical predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$. \label{fig:CC_NJ_Incl_APLB}} \end{center} \end{figure*} section as a function of jet multiplicity for \Zg{} + $\geqslant$ 1, 2, 3, and 4 jets. The measured cross section is generally in good agreement with all the predictions. The blue dashed bands show the theoretical uncertainty associated with the variation of the renormalization and factorization scales, except for the \alpgenpythia{} prediction, where the band shows the uncertainty associated with the variation of the CKKW renormalization scale. The \alpgenpythia{} LO-ME+PS prediction provides a good model of the measured cross sections, but has large theoretical uncertainty at higher jet multiplicities. The \blackhatsherpa{} NLO perturbative QCD prediction shows a reduced scale dependence with respect to the \alpgenpythia{} LO-ME+PS prediction.
The \powhegpythia{} NLO+PS prediction has NLO accuracy only for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}, but it can be compared to data in all the measured jet multiplicities, where generally good agreement is observed. The \loopsimmcfm{} \nnlo{} prediction is currently available only for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}, where it shows very good agreement with the measured cross section and a reduced scale-variation uncertainty at the level of $5\%$. The \mbox{\ensuremath{Z/\gamma^* + \geqslant 3~\textrm{jets}}}{} \blackhatsherpa{} NLO perturbative QCD calculation appears to be approximately $30\%$ lower than data, with the difference covered by the scale-variation uncertainty. Such a difference is not observed in the comparison with LO-ME+PS \alpgenpythia{} and NLO+PS \powhegpythia{} predictions, in agreement with recent measurements using the anti-$k_t$ jet algorithm~\cite{Aad:2013ysa, *Aad:2011qv}, which do not show any difference from the NLO predictions at high jet multiplicities. The reason for this difference has been found to be related to the different $\Delta R$ angular reach~\cite{Salam:2009jx} between the SISCone and anti-$k_t$ algorithms, and how it is influenced by additional radiation between two hard particles~\cite{Camarda:2012yha}. The difference between data or LO-ME+PS with respect to the NLO prediction in the \mbox{\ensuremath{Z/\gamma^* + \geqslant 3~\textrm{jets}}}{} final state is explained by the presence of higher-order QCD radiation, which reduces the angular reach of the SISCone algorithm and increases the cross section in this particular configuration.
\subsection{Cross section for the production of a \Zg{} boson in association with one or more jets\label{sec:Z1jet_results}} Figures~\ref{fig:CC_Pt1_Lead_APBEL} and~\ref{fig:CC_Pt1_Incl_APML} show the leading-jet and inclusive-jet cross sections differential in \ensuremath{p_{\textrm{T}}}{} for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events. \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_Pt1_Lead_APBEL_Zll.pdf} \caption{Differential cross section as a function of leading jet \ensuremath{p_{\textrm{T}}}{} for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events. The measured cross section (black dots) is compared to the \loopsimmcfm{} \nnlo{} prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The lower and right panels show the data-to-theory ratio with respect to other theoretical predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$. \label{fig:CC_Pt1_Lead_APBEL}} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_Pt1_Incl_APML_Zll.pdf} \caption{Differential cross section as a function of inclusive jet \ensuremath{p_{\textrm{T}}}{} for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events. The measured cross section (black dots) is compared to the \loopsimmcfm{} \nnlo{} prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity.
The lower and right panels show the data-to-theory ratio with respect to other theoretical predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$. The red dashed band shows the PDF uncertainty evaluated with the \mcfm{} prediction. \label{fig:CC_Pt1_Incl_APML}} \end{center} \end{figure*} All the theoretical predictions are in reasonable agreement with the measured cross sections. The NLO electroweak corrections give a $5\%$ negative contribution in the last \Zg{} and leading jet \ensuremath{p_{\textrm{T}}}{} bin, due to the large Sudakov logarithms that appear in the virtual part of the calculation~\cite{Denner:2011vu}. The scale-variation uncertainty is quite independent of the jet \ensuremath{p_{\textrm{T}}}{} and of the order of $4\%$--$6\%$ for the \nnlo{} \loopsim{} prediction. Figure~\ref{fig:CC_Pt1_Incl_MCFM} shows variations in the \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_Pt1_Incl_MCFM_Zll.pdf} \caption{Differential cross section as a function of inclusive jet \ensuremath{p_{\textrm{T}}}{} for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events. The measured cross section (black dots) is compared to the \mcfm{} NLO prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The right panels show, from top to bottom, the data-to-theory ratio including variations of $\alpha_s(M_Z)$ (red dashed band) and factorization scale (green dashed band); various PDF sets and PDF uncertainty (red dashed band); and various choices of the functional form of the factorization and renormalization scales and scale-variation uncertainty (blue dashed band).
\label{fig:CC_Pt1_Incl_MCFM}} \end{center} \end{figure*} \mcfm{} prediction with different values of the strong-interaction coupling constant at the $Z$ boson mass, $\alpha_s(M_Z)$, factorization scale, PDF sets, and choice of the functional form of the factorization and renormalization scales. Figure~\ref{fig:CC_Y1_Incl_APBL} shows the inclusive-jet cross \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_Y1_Incl_APBL_Zll.pdf} \caption{Differential cross section as a function of inclusive jet rapidity for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events. The measured cross section (black dots) is compared to the \loopsimmcfm{} \nnlo{} prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The lower and right panels show the data-to-theory ratio with respect to other theoretical predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$. The red dashed band shows the PDF uncertainty evaluated with the \mcfm{} prediction. \label{fig:CC_Y1_Incl_APBL}} \end{center} \end{figure*} sections differential in rapidity for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events. All predictions correctly model this quantity. In the high-rapidity region the measured cross section is higher than predictions; however, the difference is covered by the uncertainty due to the contribution of multiple \ensuremath{p\bar{p}}{} interactions. The \nnlo{} \loopsimmcfm{} prediction has the lowest scale-variation theoretical uncertainty, which is of the order of $4\%$--$6\%$, and the PDF uncertainty is between $2\%$ and $4\%$.
In the high-rapidity region the \alpgen{} prediction is lower than other theoretical models; however, the difference with data is covered by the large CKKW renormalization scale-variation uncertainty of this prediction. Figure~\ref{fig:CC_Y1_Incl_MCFM} shows variations in the \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_Y1_Incl_MCFM_Zll.pdf} \caption{Differential cross section as a function of inclusive jet rapidity for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events. The measured cross section (black dots) is compared to the \mcfm{} NLO prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The right panels show, from top to bottom, the data-to-theory ratio including variations of $\alpha_s(M_Z)$ (red dashed band) and factorization scale (green dashed band); various PDF sets and PDF uncertainty (red dashed band); and various choices of the functional form of the factorization and renormalization scales and scale-variation uncertainty (blue dashed band). \label{fig:CC_Y1_Incl_MCFM}} \end{center} \end{figure*} \mcfm{} prediction with different values of $\alpha_s(M_Z)$, factorization scale, PDF sets, and choice of the functional form of the factorization and renormalization scales. Figure~\ref{fig:CC_ZPt1_APMEL} shows the production cross section \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_ZPt1_APMEL_Zll.pdf} \caption{Differential cross section as a function of \Zg{} \ensuremath{p_{\textrm{T}}}{} for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events. The measured cross section (black dots) is compared to the \loopsimmcfm{} \nnlo{} prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity.
The lower and right panels show the data-to-theory ratio with respect to other theoretical predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$. \label{fig:CC_ZPt1_APMEL}} \end{center} \end{figure*} differential in $\ensuremath{p_{\textrm{T}}}(\Zg{})$ for the \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} final state. The perturbative QCD fixed-order calculations \mcfm{} and \loopsimmcfm{} fail to describe the region below the $30$~\gevc{} jet \ensuremath{p_{\textrm{T}}}{} threshold, where multiple-jet emission and nonperturbative QCD corrections are significant. The low \Zg{} \ensuremath{p_{\textrm{T}}}{} region is better described by the \alpgenpythia{} and \powhegpythia{} predictions, which include parton shower radiation, and in which the nonperturbative QCD corrections are applied as part of the \pythia{} MC event evolution. In the intermediate \Zg{} \ensuremath{p_{\textrm{T}}}{} region, the ratios of the data to the NLO \mcfm{}, NLO+PS \powhegpythia{}, and \nnlo{} \loopsimmcfm{} predictions show a slightly concave shape, which is covered by the scale-variation uncertainty. The NLO electroweak corrections related to the large Sudakov logarithms are negative and of the order of $5\%$ in the last \ensuremath{p_{\textrm{T}}}{} bin. Figure~\ref{fig:CC_Zj_DPhi_LPA} shows the differential cross section \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_Zj_DPhi_LPA_Zll.pdf} \caption{Differential cross section as a function of \Zg{}-jet $\Delta{\phi}$ for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events. The measured cross section (black dots) is compared to the \alpgenpythia{} prediction (open circles).
The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The right panels show the data-to-theory ratio with respect to other theoretical predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$. \label{fig:CC_Zj_DPhi_LPA}} \end{center} \end{figure*} as a function of the \Zg{}-leading jet $\Delta{\phi}$ variable in \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events. The \alpgenpythia{} prediction shows good agreement with the measured cross section in the region $\Delta{\phi} \geqslant \pi / 2$. In the region $\Delta{\phi} < \pi / 2$ the \alpgenpythia{} prediction is lower than the data, with the difference covered by the scale-variation uncertainty. The \powhegpythia{} prediction shows very good agreement with the data over the full \Zg{}-jet $\Delta{\phi}$ spectrum, and is affected by a smaller scale-variation uncertainty. The difference between the \alpgenpythia{} and \powhegpythia{} predictions is comparable to the experimental systematic uncertainty, which is dominated by the uncertainty from the contribution of multiple \ensuremath{p\bar{p}}{} interactions. Hence, the measured cross section cannot be used to distinguish between the two models. The NLO \mcfm{} prediction fails to describe the region $\Delta{\phi} < \pi / 2$ because it does not include the \Zg{} + 3 jets configuration, whereas \nnlo{} \loopsimmcfm{}, which includes the \Zg{} + 3 jets with only LO accuracy, predicts a rate approximately 2--3 times smaller than the rate observed in data in this region.
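For reference, the azimuthal separation $\Delta\phi$ used in these distributions is conventionally folded into the interval $[0, \pi]$, so that back-to-back configurations sit at $\Delta\phi = \pi$. A minimal sketch of this folding (illustrative only):

```python
import math

def delta_phi(phi1, phi2):
    """Azimuthal separation between two directions, folded into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    return 2.0 * math.pi - dphi if dphi > math.pi else dphi

# A Z boson and a jet that are exactly back to back in azimuth.
assert abs(delta_phi(0.2, 0.2 + math.pi) - math.pi) < 1e-12
```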
Some \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} observables have larger NLO-to-LO K-factors, defined as the ratio of the NLO prediction to the LO prediction, and are expected to have significant corrections at higher order than NLO~\cite{Rubin:2010xp}. The most remarkable example is the \ensuremath{H_{\textrm{T}}^{\textrm{jet}}}, defined as $\ensuremath{H_{\textrm{T}}^{\textrm{jet}}} = \sum{\ptjet}$, in \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events. Figure~\ref{fig:CC_Htj1_MAPL} shows the measured \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_Htj1_MAPL_Zll.pdf} \caption{Differential cross section as a function of $\ensuremath{H_{\textrm{T}}^{\textrm{jet}}} = \sum{\ptjet}$ for \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} events. The measured cross section (black dots) is compared to the \loopsimmcfm{} \nnlo{} prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The lower and right panels show the data-to-theory ratio with respect to other theoretical predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$. \label{fig:CC_Htj1_MAPL}} \end{center} \end{figure*} cross section as a function of \ensuremath{H_{\textrm{T}}^{\textrm{jet}}}{} compared to the available theoretical predictions. The NLO \mcfm{} prediction fails to describe the shape of the \ensuremath{H_{\textrm{T}}^{\textrm{jet}}}{} distribution; in particular, it underestimates the measured cross section in the high \ensuremath{H_{\textrm{T}}^{\textrm{jet}}}{} region, where the NLO-to-LO K-factor is greater than approximately two and a larger NLO scale-variation uncertainty is observed.
The LO-ME+PS \alpgenpythia{} prediction is in good agreement with data, but suffers from the large LO scale uncertainty. The \powhegpythia{} prediction also is in good agreement with data, but is still affected by the larger NLO scale-variation uncertainty in the high \ensuremath{p_{\textrm{T}}}{} tail. The \nnlo{} \loopsimmcfm{} prediction provides a good modeling of the data distribution, and shows a significantly reduced scale-variation uncertainty. \subsection{Cross section for the production of a \Zg{} boson in association with two or more jets \label{sec:Z2jet_results}} Figures~\ref{fig:CC_Pt2_Lead} to~\ref{fig:CC_Zjj_Theta_AM} show measured differential cross sections in the \mbox{\ensuremath{Z/\gamma^* + \geqslant 2~\textrm{jets}}}{} final state. Figures~\ref{fig:CC_Pt2_Lead} and~\ref{fig:CC_Y2_Incl} show the \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_Pt2_Lead_AB_Zll.pdf} \caption{Differential cross section as a function of $2\textit{nd}$ leading jet \ensuremath{p_{\textrm{T}}}{} for \mbox{\ensuremath{Z/\gamma^* + \geqslant 2~\textrm{jets}}}{} events. The measured cross section (black dots) is compared to the \blackhatsherpa{} NLO prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The right panels show the data-to-theory ratio with respect to \alpgenpythia{} and \blackhatsherpa{} predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$.
\label{fig:CC_Pt2_Lead}} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_Y2_Incl_AB_Zll.pdf} \caption{Differential cross section as a function of inclusive jet rapidity for \mbox{\ensuremath{Z/\gamma^* + \geqslant 2~\textrm{jets}}}{} events. The measured cross section (black dots) is compared to the \blackhatsherpa{} NLO prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The right panels show the data-to-theory ratio with respect to \alpgenpythia{} and \blackhatsherpa{} predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$. \label{fig:CC_Y2_Incl}} \end{center} \end{figure*} measured cross section as a function of the $2\textit{nd}$ leading jet \ensuremath{p_{\textrm{T}}}{} and inclusive jet rapidity compared to \alpgenpythia{} and \blackhatsherpa{} predictions. Measured distributions are in good agreement with the theoretical predictions. Figure~\ref{fig:CC_DJ_Mass_AM} shows the measured cross \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_DJ_Mass_AM_Zll.pdf} \caption{Differential cross section as a function of dijet mass $M_{\mathit{jj}}$ for \mbox{\ensuremath{Z/\gamma^* + \geqslant 2~\textrm{jets}}}{} events. The measured cross section (black dots) is compared to the \mcfm{} NLO prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity.
The right panels show the data-to-theory ratio with respect to \alpgenpythia{} and \mcfm{} predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$. \label{fig:CC_DJ_Mass_AM}} \end{center} \end{figure*} section as a function of the dijet mass, $M_{\mathit{jj}}$. The cross section in the first bin is overestimated by the \mcfm{} prediction, but correctly described by the \alpgenpythia{} prediction. In the $M_{\mathit{jj}}$ region above approximately 160~\gevcsq{}, the measured cross sections are $10\%$--$20\%$ higher than both predictions. However, the systematic uncertainty, mainly due to the jet-energy scale, is as large as the observed difference. Figure~\ref{fig:CC_DJ_DR_AM} shows the measured cross section as a \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_DJ_DR_AM_Zll.pdf} \caption{Differential cross section as a function of dijet $\Delta{R}$ for \mbox{\ensuremath{Z/\gamma^* + \geqslant 2~\textrm{jets}}}{} events. The measured cross section (black dots) is compared to the \mcfm{} NLO prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The right panels show the data-to-theory ratio with respect to \alpgenpythia{} and \mcfm{} predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$. \label{fig:CC_DJ_DR_AM}} \end{center} \end{figure*} function of the dijet $\Delta{R}$ compared to \alpgenpythia{} and \mcfm{} predictions.
Some differences between data and theory are observed at high $\Delta{R}$, where the measured cross section is approximately $50\%$ higher than the theoretical predictions. The dijet $\Delta{\phi}$ and $\Delta{y}$ differential cross sections also are measured, and the results are shown in Figs.~\ref{fig:CC_DJ_DPhi_AM} and~\ref{fig:CC_DJ_DY_AM}. The dijet $\Delta{\phi}$ appears reasonably \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_DJ_DPhi_AM_Zll.pdf} \caption{Differential cross section as a function of dijet $\Delta{\phi}$ for \mbox{\ensuremath{Z/\gamma^* + \geqslant 2~\textrm{jets}}}{} events. The measured cross section (black dots) is compared to the \mcfm{} NLO prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The right panels show the data-to-theory ratio with respect to \alpgenpythia{} and \mcfm{} predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$. \label{fig:CC_DJ_DPhi_AM}} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_DJ_DY_AM_Zll.pdf} \caption{Differential cross section as a function of dijet $\Delta{y}$ for \mbox{\ensuremath{Z/\gamma^* + \geqslant 2~\textrm{jets}}}{} events. The measured cross section (black dots) is compared to the \mcfm{} NLO prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity.
The right panels show the data-to-theory ratio with respect to \alpgenpythia{} and \mcfm{} predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$. \label{fig:CC_DJ_DY_AM}} \end{center} \end{figure*} modeled by the \alpgenpythia{} and \mcfm{} predictions, whereas the dijet $\Delta{y}$ shows a shape difference, which reaches $50\%$ at $\Delta{y} = 3$--$3.6$, and is related to the observed difference between data and theory at $\Delta{R} \gtrsim 4$. This region is affected by large experimental uncertainties, mainly due to the pile-up subtraction, and large theoretical uncertainty. Figure~\ref{fig:CC_Zjj_Theta_AM} \begin{figure*} \begin{center} \includegraphics[width=\figsizestar]{CC_Zjj_Theta_AM_Zll.pdf} \caption{Differential cross section as a function of the dihedral angle $\theta_{Z,{\mathit{jj}}}$ for \mbox{\ensuremath{Z/\gamma^* + \geqslant 2~\textrm{jets}}}{} events. The measured cross section (black dots) is compared to the \mcfm{} NLO prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The right panels show the data-to-theory ratio with respect to \alpgenpythia{} and \mcfm{} predictions, with the blue dashed bands showing the scale-variation uncertainty of each prediction, which is associated with the variation of the renormalization and factorization scales $\mu$ or with the combined variation of $\alpha_{s}^{\textrm{CKKW}}$ and $\Lambda_{QCD}$.
\label{fig:CC_Zjj_Theta_AM}} \end{center} \end{figure*} shows the measured cross section as a function of the dihedral angle $\theta_{Z,{\mathit{jj}}}$ between the \Zll{} decay plane and the jet-jet plane~\footnote{$\theta_{Z,{\mathit{jj}}}$ is defined as $\theta_{Z,{\mathit{jj}}} = \arccos{ \frac{(\vec{\ell_1} \times \vec{\ell_2}) \cdot (\vec{j_1} \times \vec{j_2})} {|\vec{\ell_1} \times \vec{\ell_2}||\vec{j_1} \times \vec{j_2}|}}$, where $\vec{\ell}$ and $\vec{j}$ are the momentum three-vectors of leptons and jets.}. The measured cross section is in good agreement with the \alpgenpythia{} and \mcfm{} predictions. \subsection{Cross section for the production of a \Zg{} boson in association with three or more jets \label{sec:Z3jet_results}} Figure~\ref{fig:CC_Pt3_Y3_BH} shows the differential cross sections as \begin{figure*} \begin{center} \subfigure[]{\includegraphics[width=\figsize]{CC_Pt3_Lead_BH_Zll.pdf}} \subfigure[]{\includegraphics[width=\figsize]{CC_Y3_Incl_BH_Zll.pdf}} \caption{Differential cross section as a function of (a) $3\textit{rd}$ leading jet \ensuremath{p_{\textrm{T}}}{} and (b) inclusive jet rapidity for \mbox{\ensuremath{Z/\gamma^* + \geqslant 3~\textrm{jets}}}{} events. The measured cross section (black dots) is compared to the \blackhatsherpa{} NLO prediction (open circles). The black vertical bars show the statistical uncertainty, and the yellow bands show the total systematic uncertainty, except for the $5.8\%$ uncertainty on the luminosity. The lower panels show the data-to-theory ratio, with the blue dashed bands showing the scale-variation uncertainty, which is associated with the variation of the renormalization and factorization scales $\mu$. \label{fig:CC_Pt3_Y3_BH}} \end{center} \end{figure*} functions of $3\textit{rd}$ leading jet \ensuremath{p_{\textrm{T}}}{} and inclusive jet rapidity in events with a reconstructed \Zll{} decay and at least three jets.
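The dihedral angle defined in the footnote above can be evaluated directly from the lepton and jet momentum three-vectors; a small illustrative Python sketch (not the analysis code):

```python
import math

def dihedral_angle(l1, l2, j1, j2):
    """Angle between the plane of the two lepton momenta and the plane of
    the two leading jet momenta; inputs are momentum three-vectors."""
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n_leptons, n_jets = cross(l1, l2), cross(j1, j2)
    cosang = dot(n_leptons, n_jets) / math.sqrt(
        dot(n_leptons, n_leptons) * dot(n_jets, n_jets))
    # Clamp against round-off before taking the arccosine.
    return math.acos(max(-1.0, min(1.0, cosang)))

# Orthogonal decay and dijet planes give theta = pi/2.
assert abs(dihedral_angle((1, 0, 0), (0, 1, 0),
                          (1, 0, 0), (0, 0, 1)) - math.pi / 2) < 1e-12
```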
The NLO \blackhatsherpa{} prediction is approximately $30\%$ lower than the measured cross sections for \Zg{} + $\geqslant 3$ jets events, but data and predictions are still compatible within the approximately $25\%$ scale-variation uncertainty and the $15\%$ systematic uncertainty, dominated by the jet-energy scale. Apart from the difference in the normalization, the shape of the measured differential cross sections is in good agreement with the NLO \blackhatsherpa{} prediction. \section{\label{sec:conclusion}Summary and Conclusions} The analysis of the full proton-antiproton collision sample collected with the CDF II detector in Run II of the Tevatron, corresponding to $9.6$~fb$^{-1}$ integrated luminosity, allows for precise measurements of \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} inclusive and differential cross sections, which constitute an important legacy of the Tevatron physics program. The cross sections are measured using the decay channels \Zee{} and \Zmm{} in the kinematic region \mbox{$\ensuremath{p_{\textrm{T}}}^{\ell} \geqslant 25$~\gevc{}}, \mbox{$|\eta^{\ell}| \leqslant 1$}, \mbox{$66 \leqslant M_{\ell^{+}\ell^{-}} \leqslant 116$~\gevcsq{}}, \mbox{$\ptjet \geqslant 30$~\gevc{}}, \mbox{$|\yjet| \leqslant 2.1$}, and \mbox{$\Delta{R}_{\ell-\textrm{jet}} \geqslant 0.7$}, with jets reconstructed using the midpoint algorithm with a radius $R=0.7$. The measured cross sections are unfolded to the particle level and the decay channels combined. Results are compared with the most recent theoretical predictions, which properly model the measured differential cross sections in \mbox{$\Zg + \geqslant 1$}, 2, and 3 jets final states. The main experimental uncertainty is related to the jet-energy scale, whereas the largest uncertainty of the theoretical predictions is generally associated with the variation of the renormalization and factorization scales. 
Among perturbative QCD predictions, \loopsimmcfm{} shows the lowest scale-variation uncertainty and, therefore, gives the most accurate cross-section prediction for the \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} final state. The \mcfm{} and \blackhatsherpa{} fixed-order NLO predictions are in reasonable agreement with the data in the \mbox{$\Zg + \geqslant 1$}, 2, and 3 jets final states. The \alpgenpythia{} prediction provides a good modeling of differential distributions for all jet multiplicities. The \powhegpythia{} prediction, due to the NLO accuracy of the matrix elements and to the inclusion of nonperturbative QCD effects, provides precise modeling of \mbox{\ensuremath{Z/\gamma^* + \geqslant 1~\textrm{jet}}}{} final states both in the low- and high-\ensuremath{p_{\textrm{T}}}{} kinematic regions. The effect of NLO electroweak virtual corrections to the \Zg{} + jet production is studied and included in the comparison with the measured cross sections: in the high \ensuremath{p_{\textrm{T}}}{} kinematic region, corrections are of the order of $5\%$, which is comparable with the accuracy of predictions at higher order than NLO. The large theoretical uncertainty associated with the variation of the renormalization and factorization scales suggests that the inclusion of higher order QCD corrections, by means of exact or approximate calculations, will improve the theoretical modeling of \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} processes. The understanding of associated production of vector bosons and jets is fundamental in searches for non-SM physics, and the results presented in this paper support the modeling of \mbox{\ensuremath{Z/\gamma^* + \textrm{jets}}}{} currently employed in Higgs-boson measurements and searches for physics beyond the standard model. \begin{acknowledgments} We thank the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported by the U.S. 
Department of Energy and National Science Foundation; the Italian Istituto Nazionale di Fisica Nucleare; the Ministry of Education, Culture, Sports, Science and Technology of Japan; the Natural Sciences and Engineering Research Council of Canada; the National Science Council of the Republic of China; the Swiss National Science Foundation; the A.P. Sloan Foundation; the Bundesministerium f\"ur Bildung und Forschung, Germany; the Korean World Class University Program, the National Research Foundation of Korea; the Science and Technology Facilities Council and the Royal Society, UK; the Russian Foundation for Basic Research; the Ministerio de Ciencia e Innovaci\'{o}n, and Programa Consolider-Ingenio 2010, Spain; the Slovak R\&D Agency; the Academy of Finland; the Australian Research Council (ARC); and the EU community Marie Curie Fellowship contract 302103. \end{acknowledgments} \cleardoublepage
\section{Introduction} \label{sec:intro} To fully exploit the research and scientific possibilities of the European XFEL \cite{Decking:2020a}, novel detectors have been developed in order to satisfy the challenging FEL requirements, e.g. single photon sensitivity at the specified energy range, time resolution at the level of individual FEL pulses or a high dynamic range~\cite{Graafsma:2009a, Kuster:2014a}. The European XFEL is a high repetition rate facility, delivering pulse trains with a repetition rate of $10\,\text{Hz}$. Every train contains up to $2700$ high intensity X-ray pulses separated by $220\,\text{ns}$ each. An X-ray beam with such characteristics poses a high risk of damaging the detector systems by radiation, thus creating the need for radiation-hard detectors. For a detector operated in a scattering geometry close to the sample and being directly illuminated by scattered X-rays, the absorbed doses can amount to up to $1\,\text{GGy}$ in a silicon sensor of $500\,\mu\text{m}$ thickness when considering three years of facility operation \cite{Graafsma:2009a}. While some of the European XFEL detectors have been optimized to incorporate a higher level of radiation hardness into their design, e.g. the Adaptive Gain Integrating Detector (AGIPD)~\cite{Zhang:2014a} and the Large Pixel Detector (LPD)~\cite{Koch:2013a}, other detectors, not specifically built for operation at the European XFEL, have been tested for and offer a certain level of radiation tolerance, e.g. the JUNGFRAU~\cite{Jungmann:2015a} detector. The ePix100a detector, designed for applications at FEL facilities \cite{Blaj:2015a}, was studied to evaluate its radiation hardness at the European XFEL. The aim of the presented study is to determine the level of damage caused by the X-ray laser beam, understand and characterize the radiation-induced damage effects and assess their impact on the quality of scientific data. 
\section{Radiation-Induced Damage Effects on Silicon Detectors} \label{sec:RadEff-theory} Operating silicon sensors in the harsh radiation environment of FELs can have severe implications for a detector's performance and lifetime. Due to their high peak brilliance, FELs can deliver up to $10^{12}\,\text{photons/pulse}$ to the sample interaction region. Due to the direct X-ray illumination of the sensor, radiation tolerance of the sensor and Application Specific Integrated Circuit (ASIC) are of high importance. Sensors based on metal-oxide-semiconductor (MOS) structures, where charge flow takes place close to the surface, are known to be {especially} sensitive to damage induced by ionizing radiation. Therefore, thorough studies of the influence of the FEL radiation on silicon sensors have been conducted during the design and development phase of the first generation MHz detectors for the European XFEL. Results reported by Zhang et al.~\cite{Zhang:2011a, Zhang:2012a, Zhang:2014a} give insight into the parameters determining the damage as a function of the delivered dose and their influence on the operation of different silicon sensor designs. These studies provided valuable input for detector designs minimizing radiation damage effects and optimization of sensor operation parameters as reported by Schwandt et al.~\cite{Schwandt:2012a, Schwandt:2013a}. Klanner et al.~\cite{Klanner:2013a} provide an exemplary overview of the sensor design challenges related to the AGIPD detector~\cite{Allahgholi:2019a}. The studies cited above provide useful knowledge and valuable observations for the experimental results presented here. In general, two fundamentally different damage mechanisms dominate the radiation/particle interaction with silicon. These are displacement damage (bulk damage) and surface damage. 
Bulk damage results in point defects (Frenkel pairs) or agglomerations of defects caused by {highly} energetic hadrons, leptons or high-energy gamma rays knocking out the primary atom from its lattice position (interstitial) through a non-ionizing energy loss (NIEL) interaction~\cite{Affolder:2011a,Lindstroem:2003a}. { The displacement energy $E_\text{d}$ required to create such a defect is in the range between $10\,\text{eV}$ and $36\,\text{eV}$ \cite{Bourgoin:1976a,Holmstroem:2008a}. $E_\text{d}$ depends amongst other parameters on the lattice orientation. Since the energy transfer of a photon with an energy less than $20\,\text{keV}$ to the silicon atom is significantly below $E_\text{d}$, displacement damage is negligible at these photon energies. Consequently, surface damage is the dominating damage mechanism of relevance for our study.} In contrast to bulk damage, surface damage originates from ionization energy losses of X-ray photons or charged particles and subsequently leads to an accumulation of space charges in or close to an interface between e.g. an insulating or dielectric layer and silicon~\cite{Ling:99a}. Typical examples are interfaces between $\text{SiO}_2$ and silicon, where $\text{SiO}_2$ is used as a gate oxide or as a field oxide insulating layer between semiconductor structures. The density of the created space charge is proportional to the amount of energy absorbed at or close to the interface. The accumulation of space charge in turn can significantly influence the performance properties of the sensor, such as leakage current, noise and dynamic range. For an in-depth overview of surface damage mechanisms, we refer the reader to the available literature \cite{McLean:1987a, Dressendorfer:1989a, oldham1999ionizing} and references therein. 
\begin{figure} \centering \includegraphics[width=0.35\textwidth]{Images/Figure-epixReadout.png} \qquad \includegraphics[width=0.5\textwidth]{Images/Irradarea_onSensor_white.png} \caption{\label{fig:ASIC_beamSpot} Left: Schematic view of the ePix100a sensor as seen from the experiment's interaction region, showing the location of the signal readout nodes, the arrangement of the 4 ASICs required to read out one sensor module with a size of $352\,\text{pixels} \times 384\,\text{pixels}$. Each ASIC is divided into four banks accommodating 96 columns multiplexed to a single analog output digitized by an external ADC. {Four sensor modules and their ASICs are arranged in a rectangular geometry to build one complete detector module consisting of $704\,\text{pixels} \times 768\,\text{pixels}$.} Right: Noise map of the ePix100a visualizing placement and size of the area irradiated by the FEL beam ({white} square). The area on the bottom left shows a higher noise level in comparison to the two modules on the top. The bottom right module does not provide data. The image qualitatively illustrates the performance before irradiation.} \end{figure} \section{The ePix100a Detector Module} \label{subsec:ePix} The ePix100a is a backside illuminated direct detection hybrid pixel detector optimized for low noise applications in the energy range between $2\,\text{keV}$ and $18\,\text{keV}$ \cite{Blaj:2016a}. The ePix detector family is based on a modular design, where each ePix100a detector module consists of a fully depleted silicon sensor, which is flip-chip bonded to 4 ePix ASICs, and connected to front-end electronics, the cooling system, its mechanics and housing. One single ASIC provides the readout and signal processing architecture for a sensor region of $352\,\text{pixels} \times 384\,\text{pixels}$, each pixel with a size of $50\,\mu\text{m} \times 50\,\mu\text{m}$. 
The analog signal provided by a sensor pixel is processed by a low noise charge integrator and subsequently low pass filtered before the signal passes a correlated double sampling (CDS) stage for baseline correction and noise filtering and is finally stored in a buffer. Further processing of the analog signal is organized in a column-parallel fashion. The analog output of the pixels of one bank accommodating $96$ columns is multiplexed to a single analog output node, before it is digitized by an external {$14\,\text{bit}$} sigma-delta analog to digital converter (ADC). { An overview of the ePix100a ASIC pixel layout is schematically shown in the left part of figure~\ref{fig:SensorCrosssection}}. The dynamic range of the ePix100a detector, defined by the number of ADC digitization levels available for photon detection, allows measuring up to $220\,\text{ke}^-\approx100\times 8\,\text{keV}$ photons per pixel at a maximum frame rate of $240\,\text{Hz}$. Figure~\ref{fig:ASIC_beamSpot} (left panel) illustrates the geometric arrangement of the readout banks and the ADCs. The analog output nodes are arranged on the top and bottom sides of the ASICs and sensor. For a detailed description of the ePix ASIC, detector design and a performance review, we refer the interested reader to Markovic et al.~\cite{Markovic:2014a}, Blaj et al.~\cite{Blaj:2016a} and Nishimura et al.~\cite{nishimura2016design}. Figure~\ref{fig:SensorCrosssection} shows a schematic view of the vertical cross-section of the ePix100a sensor {produced by SINTEF}. The photon entrance window is coated with aluminum acting as a light blocking filter for optical, UV and IR light. The bulk of the sensor is made of p-doped high resistivity {$<100>$} silicon. Due to its backside illuminated design, structures potentially vulnerable to radiation damage, i.e. interfaces to the Low Temperature Oxide (LTO) and Field Oxide, are located on the front side of the chip. 
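As a quick plausibility check of the dynamic-range figure quoted above, the full well capacity can be converted into a photon count. This back-of-the-envelope sketch is illustrative and not from the paper; it assumes the standard mean ionization energy of about $3.65\,\text{eV}$ per electron-hole pair in silicon:

```python
# Express the 220 ke- full well capacity in 8 keV photons.
# Assumes w = 3.65 eV per electron-hole pair in silicon (not stated in the text).
EV_PER_EH_PAIR = 3.65
full_well_electrons = 220e3
photon_energy_ev = 8000.0

electrons_per_photon = photon_energy_ev / EV_PER_EH_PAIR  # ~2192 e- per 8 keV photon
photons_per_pixel = full_well_electrons / electrons_per_photon
print(round(photons_per_pixel))  # -> 100, i.e. "~100 x 8 keV photons per pixel"
```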
The interconnection between the sensor and ASIC is provided through $30\,\mu\text{m}$ solder bump bonds {attached to Ti/Cu metal gates. The pixels and guard rings are biased through the ASIC.} The $500\,\mu\text{m}$ thick sensor enables the detection of X-ray photons with energies between $3\,\text{keV}$ and $13\,\text{keV}$ with a quantum efficiency $\ge 80\,\%$ and at the same time efficiently shields the underlying ASIC from X-ray radiation at low photon energies. The ePix100a camera is currently being used at the High Energy Density (HED) \cite{HED:2021a} and Materials Imaging and Dynamics (MID) \cite{MID:2021a,Madsen:2021b} instruments at the European XFEL. \begin{figure} \centering \includegraphics[width=0.53\textwidth]{Images/epix_pixel_scheme.png} \hfill \includegraphics[width=0.43\textwidth]{Images/ePix-Sensor-Schematics.eps} \caption{\label{fig:SensorCrosssection}{Left: Schematic drawing of the ePix100a ASIC pixel layout. Right: }Schematic view of the ePix100a sensor cross-section including interconnection bump bonds and the readout ASIC. The sensor is illuminated from the backside, i.e. from the bottom. Please note that this drawing is not to scale.} \end{figure} \section{Experiment Setup and Methodology} \label{sec:Experiment Setup and Methodology} The radiation damage study was performed at the HED instrument at the European XFEL. The HED instrument can provide a beam of photon pulses with an energy between $5\,\text{keV}$ and $20\,\text{keV}$ and a maximum pulse energy of approximately $1\,\text{mJ}$ \cite{HED:2021a}. For our study we used an ePix100a test module equipped with four ASICs and arranged as shown in figure~\ref{fig:ASIC_beamSpot}. 
Two of the ASICs were fully functional (area on the top left and right shown in the right image of figure~\ref{fig:ASIC_beamSpot}), one had a significantly higher mean noise before irradiation (image area on the bottom left part of the same image) and one was unresponsive (bottom right part of the image). The beam spot, having an area of approximately $1\,\text{mm}^2$ and covering approximately $20\,\text{pixels}\times20\,\text{pixels}$ of the ePix100a sensor, was {used to irradiate} the area with the lowest pre-irradiation noise, referred to as region of interest (ROI) during the remainder of this publication (see figure~\ref{fig:ASIC_beamSpot}, right panel). {Exposing the detector to the direct beam could cause instantaneous and permanent physical damage to the sensor, e.g. by ablation or melting of the sensor material. As shown by Koyama et al.~\cite{Koyama:2013}, the silicon ablation threshold for a $10\,\text{keV}$ photon is $E_{th}=0.78\,\mu\text{J}/\mu\text{m}^2$. To avoid this kind of damage, we initially attenuated the FEL beam to energies well below $E_{th}$ with a configurable stack of Chemical Vapor Deposition (CVD) diamond and Si foils of various thicknesses available at the instrument. Apart from physical damage to the sensor surface, components of the pixel ASIC could be damaged by irradiation with excessive beam energies, as the ASIC [23, 39] does not implement protection circuitry for signals significantly above the detector's dynamic range. To find a beam energy which is sufficient to achieve the anticipated dose rate of $200\,\text{kGy/h}$ and at the same time allows safe operation of the detector, we gradually reduced the absorption of the stack until we found a configuration which allowed us to illuminate the detector for exposure times of $> 5\,\text{min}$ without observing failure of the irradiated pixels. 
By following this procedure, we found that the ePix100a pixels can withstand beam energies of up to $1\,\mu\text{J}$ for longer time periods without individual pixels experiencing permanent failure. Hence, we irradiated the sensor with a beam energy of $1\,\mu\text{J}$. As discussed later in the text, a beam energy of $1\,\mu\text{J}$ causes pixels to become unresponsive for the duration of the irradiation, but it does not cause permanent failure of the irradiated pixels. This beam energy setting was used deliberately to trigger pixel unresponsiveness to allow for beam position tracking (explained in section~\ref{sec:DoseCalib}).} The beam energy was continuously monitored with the X-ray Gas Monitors (XGM) installed at the HED instrument throughout our study {(see \cite{Maltezopoulos:2019a,Sorokin:2019a} for a technical description of and details about the performance of the European XFEL photon diagnostic XGMs)}. The XGMs are capable of non-invasively measuring the single X-ray pulse energy with an absolute accuracy of $7\% - 10\%$ depending on the measured signal strength and of providing a beam position measurement with a precision between $10\,\mu\text{m}$ and $50\,\mu\text{m}$ if the beam position stays within $\pm 6\,\text{mm}$ of the absolutely calibrated reference position. In addition to the diagnostic information provided by the XGMs, we used the ePix100a detector to measure the X-ray spot position and potential drifts of the X-ray spot. A schematic view of the experiment setup is shown in figure~\ref{fig:EXPalignment}. To monitor the system noise, leakage current, gain and energy resolution of the detector, we acquired pre- and post-irradiation calibration data with $t_\text{Int}=50\,\mu\text{s}$ consisting of { dark} and flat-field measurements. 
{The flat-field measurements were performed by homogeneously illuminating the ePix100a} with {$\text{Cu-K}_{\alpha}$} fluorescence photons originating in a $50\,\mu\text{m}$ thick copper foil which could be moved into the FEL beam. {To acquire dark data, the FEL beam was blocked and the detector was located in flat-field configuration in the dark HED vacuum chamber (see figure~\ref{fig:EXPalignment}), excluding illumination with visible light.} A summary of the FEL beam and detector parameters used during our experiment is provided in Table~\ref{tab:parameters}. Operational constraints prevented us from taking calibration data more frequently, e.g. after the completion of each individual irradiation cycle. Noise and offset were monitored on an hourly time scale shortly after the last irradiation cycle and later on time scales of days, after the self-annealing effects slowly started to diminish in significance. We irradiated the ePix100a module with an attenuated beam of $9\,\text{keV}$ photons. The facility was set up to deliver $100$ pulses per pulse train (i.e. $100$ pulses per $100\,\text{ms}$). We operated the detector with a frame rate of $10\,\text{Hz}$, corresponding to the typical use case of the ePix100a detector at the European XFEL. The irradiation was performed in cycles of $20\,\text{min}$ long individual exposures. Radiation-induced changes of the noise and offset were monitored regularly between two consecutive cycles with two different integration time settings, $t_\text{Int}=50\,\mu\text{s}$ and $t_\text{Int}=800\,\mu\text{s}$. The choice of the $800\,\mu\text{s}$ long integration time is motivated by the higher sensitivity to potentially very small changes of the leakage current, which would manifest through a proportional change of the offset. On the other hand, $t_\text{Int}=50\,\mu\text{s}$ is a typical value used during scientific experiments at HED. In total we executed $14$ such irradiation cycles in the course of our study. 
During each cycle we achieved a typical dose rate of $({204 \pm 18})\,\text{kGy}/\text{h}$ translated to the depth of the $\text{Si}/\text{SiO}_2$ interfaces in the sensor. {We determined the dose rate with a Monte Carlo simulation using the measured average photon flux at the ePix100a sensor surface and the sensor geometry as input parameters, as discussed extensively in section~\ref{sec:DoseCalib}}. Throughout our irradiation cycles the detector was operated under conditions mimicking the detector's typical experimental usage scenario, that is, at a pressure of $1\times10^{-5}\,\text{mbar}$, cooled to $-9\,^{\circ}\,\text{C}$ and biased with a voltage of $200\,\text{V}$. After the two-day-long irradiation experiment the detector was stored at room temperature under ambient atmospheric conditions, and only powered and {cooled again to the same conditions} during post-irradiation follow-up performance measurements. \begin{table} \centering \caption{\label{tab:parameters}Summary of the detector operation and beam line parameters as used during our irradiation experiment.} \smallskip \begin{tabular}{ll} \hline Beam parameters & \\ \hline Average beam energy at the detector & $10\,$nJ/Pulse \\ Photon energy & $9\,$keV \\ Number of X-ray pulses per pulse train & $100\,$Pulses \\ Dose rate at the $\text{Si/SiO}_2$ interfaces & $200\,$kGy/h \\ Beam intensity monitoring & XGMs at HED beamline \\\hline Detector parameters & \\\hline Pixel size & $50\,\mu\text{m} \times 50\,\mu\text{m}$\\ Sensor size & $704\,\text{pixels} \times 768\,\text{pixels}$ \\ Sensor thickness (depleted bulk) & $500\,\mu\text{m}$ \\ Irradiated sensor area & $20\,\text{pixels}\times20$ pixels ($1\,\text{mm}^2$)\\ Full well capacity & $220\,\text{ke}^-$ \\ Frame rate & $10\,\text{Hz}$ \\ Integration time & $50\,\mu$s and $800\,\mu$s \\ Bias voltage & $200\,$V \\ Sensor temperature & $-9^{\circ}\,\text{C}$ \\ {Operating pressure} & ${\leq 1\times 10^{-5}\,\text{mbar}}$ \\ \end{tabular} 
\end{table} \begin{figure} \centering \includegraphics[width=0.69\textwidth]{Images/ePix-HED-Experiment-Flatfield-Setup.eps} \caption{\label{fig:EXPalignment} Experimental setup used to irradiate and calibrate the ePix100a detector module at the HED instrument (not to scale). The image shows the setup used for flat-field performance and calibration measurements {(flat-field configuration)}. The attenuated beam is directed onto a {copper} target and the ePix100a module is shifted off the beam axis. The resulting fluorescence radiation illuminates the ePix100a module homogeneously. Since we used the {copper} target in a transmission geometry, the attenuated direct FEL beam is visible on the detector in addition to the {copper} fluorescence photons (see figure~\ref{fig:Cu_calibSpec}). During irradiation measurements the ePix100a is facing the beam and the fluorescent target is moved out of the field of view of the ePix100a module {(irradiation configuration)}. In both configurations the primary beam intensity is attenuated by two absorbing and configurable stacks located before and after the XGM.} \end{figure} \subsection{Detector Performance Characterization} \begin{figure} \includegraphics[width=0.48\columnwidth]{Images/AnalysisFlowchart.png} \hfill \includegraphics[width=0.48\textwidth]{Images/Cu_calibSpecROI_pre.png} \caption{\label{fig:FlowchartCalib}\label{fig:Cu_calibSpec} Left: Flowchart illustrating the data correction steps applied to the dark and flat-field data and their sequence. Results derived from the analysis of the dark data are used as input for processing the flat-field data as illustrated by blue lines. 
Right: Energy spectrum derived from flat-field data, showing the {$\text{Cu-K}_{\alpha}$} fluorescence line blend at {$8.041\,\text{keV}$}, a line originating in photons of the FEL beam with an energy of $9\,\text{keV}$ and a fit of {three} Gaussian peaks to the spectral data.} \end{figure} We characterized the offset (mean dark signal), the root mean square (RMS) noise, the pixel-to-pixel variation of the signal amplification, the pixel averaged energy resolution and the absolute gain, i.e. the {Analog Digital Unit (ADU)} to energy conversion, before and after irradiation. Particular attention was paid to ensuring that offset and noise changes could be detected with a sensitivity of {$2\%$}. {The detector offset is commonly defined as the average value of the dark signal of the detector. The detector noise, quantifying the variations of the dark signal relative to its mean, is calculated as the standard deviation of the measured dark signal. Variations in the voltage supplying the readout electronics can cause additional offset variations affecting groups of channels with a similar amplitude; this effect is known as common mode. For a detailed description of the nature of the components contributing to the dark signal and detector noise we refer the reader to e.g. Knoll~\cite{Knoll:2011} and Lutz~\cite{Lutz:2007}.} Flat-field data {resulted from homogeneous irradiation of the ePix100a sensor area} with {copper} fluorescence photons with energies of $E_{\text{K}\alpha_1} = 8047.78\,\text{eV}$ and $E_{\text{K}\alpha_2} = 8027.83\,\text{eV}$, originating from a {copper} target installed in the FEL beam (see figure~\ref{fig:EXPalignment}). 
Since the {two $\text{Cu-K}_{\alpha_1}$ and $\text{Cu-K}_{\alpha_2}$ } fluorescent lines cannot be resolved with the ePix100a detector, we observed instead a blend of lines with a {yield weighted average energy of $8041.13\,\text{eV}$}{, to which we refer as $\text{Cu-K}_{\alpha}$ lines in the remainder of this paper.} The procedure used for dark and flat-field data processing is outlined in figure~\ref{fig:FlowchartCalib}. {As the first step, the per-pixel offset $O_{x,y}$ is calculated as the mean signal for each pixel individually using $2000$ images of the dark data. With an ASIC-, column- and row-wise common mode (CM) correction we removed signal baseline variations apparent in the dark data before calculating the RMS noise for each pixel. The common mode value is evaluated from the sorted vector $S_{i}$ as \begin{equation} \bar{m}_k = \left\{ \begin{array}{ll} S_{(N+1)/2} & \mathrm{for\, an \,odd\,number\, of\, pixels} \, N\\ \frac{1}{2} (S_{N/2}+S_{N/2+1}) & \mathrm{for\, an \,even\,number\, of\, pixels}\, N, \end{array} \right. \end{equation} where $S_{i}$ is the offset-subtracted signal measured in pixel number $i$ of the sorted vector, belonging to a specific ASIC area, line or column numbered with $k$. This common mode value is subsequently subtracted from the signal of the pixels belonging to the area the common mode value was calculated from. Finally, the RMS noise results from} \begin{equation} \sigma_{x,y}=\sqrt{\frac{1}{N}\sum_{n=0}^{N-1}(c_{x,y,n}-\overline{c}_{x,y})^{2}}, \end{equation} {where $c_{x,y,n}$ is the offset and common mode subtracted dark signal measured in the pixels with column and line coordinates $x$ and $y$ of the image numbered $n$. The per-pixel average of the dark signal corrected for offset and common mode is denoted with $\overline{c}_{x,y}$}. 
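The offset, common-mode and RMS-noise definitions above can be summarized in a short sketch. This is a minimal illustration rather than the calibration code actually used: only the row-wise common-mode step is shown, and `np.median` realizes the odd/even-$N$ cases of the median definition given in the equation:

```python
import numpy as np

def dark_calibration(dark_frames):
    """Per-pixel offset and RMS noise from a stack of dark images.

    dark_frames: float array of shape (N_frames, rows, cols).
    Returns (offset, noise) maps of shape (rows, cols).
    """
    # Per-pixel offset O_{x,y}: mean dark signal over all frames.
    offset = dark_frames.mean(axis=0)
    corrected = dark_frames - offset
    # Row-wise common mode: the median of each offset-subtracted row,
    # subtracted from every pixel of that row.
    cm = np.median(corrected, axis=2, keepdims=True)
    corrected = corrected - cm
    # RMS noise sigma_{x,y} of the offset- and common-mode-corrected signal.
    noise = np.sqrt(np.mean(corrected ** 2, axis=0))
    return offset, noise
```

For dark frames consisting only of a fixed level plus a per-frame baseline shift, the recovered offset equals the fixed level and the residual noise vanishes, which is a convenient sanity check.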
After applying the offset and a common mode correction to the flat-field data, we classified clustered signals of discrete photon events by the number of adjacent pixels where the charge is detected (split event/charge sharing correction, see~\cite{Kuster:2021a, pyDetLib:2020a} for a more detailed description of the algorithm). To achieve the best possible energy resolution for subsequent spectral analysis, clusters of events with a {pixel event} multiplicity larger than one were rejected. A per-pixel gain correction removes small pixel-to-pixel variations of the characteristics of the pre-amplifiers implemented in each pixel. The data used for calculating the energy spectrum shown in figure~\ref{fig:Cu_calibSpec} was treated in this way. Fitting a model consisting of three Gaussian lines (two lines for modelling the two {line} peaks and one for the low energy shoulder of the {$\text{Cu-K}_{\alpha}$ line} peak) to the spectrum derived from the data of the ROI before irradiation yields the ADU to energy conversion factor (absolute gain) $g=({70.2}\pm0.5)\,\text{eV}/\text{ADU}$ and an energy resolution of $(641\pm 45)\,\text{eV}$ full width half maximum (FWHM). The spatial distribution of the noise and offset is consistent with a uniform distribution across the pixels inside the ROI, with a mean offset and RMS noise of $1756\,\text{ADU}$ and ${139}\,\text{eV}$, respectively; the latter value corresponds to an equivalent noise charge (ENC) of $38\,\text{e}^-$. \subsection{Dose Calibration}\label{sec:DoseCalib} As described in section~\ref{sec:Experiment Setup and Methodology}, we monitored the photon flux throughout our irradiation experiment with the XGM with an estimated uncertainty {of $7\,\%$}. However, the XGM does not provide spatial information, i.e.\ the intensity distribution of the beam profile with a position resolution equivalent to or better than the spatial resolution of the ePix100a sensor. 
Instead we determined the beam profile from direct beam measurements acquired at low X-ray intensities {with the ePix100a}. These measurements provided spatially resolved information about the beam shape and the position of the beam within the ROI. \begin{figure} \centering \includegraphics[width=.5\textwidth]{Images/TotalPhotonCounts.png} \hfill \includegraphics[width=.48\textwidth]{Images/SiO2DoseProfile.png} \includegraphics[width=.49\textwidth]{Images/DepthDose_row.png} \hfill \includegraphics[width=.49\textwidth]{Images/DepthDose_col.png} \caption{\label{fig:dosedepth} {The top left plot shows the spatial distribution of the total number of photons in the region of interest integrated over the time of irradiation with the beam. The spatial distribution of the dose deposited at the depth of the $\text{SiO}_2$ layers is shown in the top right plot for the same region. The dose was simulated using the Monte Carlo simulation tool MULASSIS. The data shown on the top left side of this figure was used as input for the simulation.} The bottom left corner of this picture corresponds to the bottom left corner of the region of interest marked as a blue rectangle and labelled ``Irradiated Area'' in figure~\ref{fig:ASIC_beamSpot}. Bottom: Depth profiles of the dose along column 10 (left) and row 9 (right) of the irradiated area. The orientation of the image is the same as the schematic view of the sensor shown in figure~\ref{fig:SensorCrosssection}, i.e. the $1\,\mu\text{m}$ thick aluminum entrance window is located at the bottom and the pixel structure on the top of the images. Please note that the dose values are shown on a linear scale in the top image and on a logarithmic scale in the bottom images. 
{The panels on top and bottom of all images show minimum (orange), mean (blue) and maximum (green) values calculated along rows, columns or the sensor depth, respectively.}} \end{figure} Irradiating the detector with high intensity caused {the directly irradiated pixel area} to become unresponsive for the duration of the {irradiation. T}he affected pixels provided a signal {significantly below the detected dark signal, with an average value of $130\,\text{ADU}$.} This effect is discussed in more detail in section~\ref{sec:Experiment Results}. The position of the affected area on the sensor relative to the centre of the beam does not change over time. The importance of this effect lies in the possibility to track the position of the beam on the ePix100a sensor, enabling a calculation of the per-pixel dose in the presence of beam position jitter. By using {the Canny} edge detection algorithm~{\cite{Canny:1986}}, we determined the borders of the area with unresponsive pixels and assigned a circle to it. The centre of this circle provides the pixel coordinates of the beam core for each image frame individually. {This procedure allows us to align the area with unresponsive pixels to the spatial beam distribution measured prior to the irradiation experiment and finally to reconstruct the number of photons delivered to individual ePix100a pixels with each pulse train as measured by the XGM. The per-pixel photon counts reconstructed for every ePix100a image frame and integrated over the duration of the irradiation result in the top left plot of figure~\ref{fig:dosedepth}.} Finally, we used the GEANT4~\cite{pia:2003} based Monte Carlo simulation tool MULASSIS~\cite{Lei:2002a}, originally developed for dose and particle fluence analysis of shielding materials, to estimate the absorbed dose at different depths of the ePix100a sensor, taking {photoelectric} absorption in the different sensor materials into account.
These are specifically the aluminum entrance window, the silicon bulk and the $\text{SiO}_2$ layers in the pixel structure as outlined in figure~\ref{fig:SensorCrosssection}. {The output of the simulation is normalized to the dose deposited per primary $\text{photon}$ and unit area $\text{cm}^2$, such that multiplying it with the measured per-pixel integrated photon number results in the total per-pixel dose.} The resulting spatial distribution of the dose deposited in the sensor at the depth of the $\text{SiO}_2$ structures and integrated over the time of all irradiation cycles is shown in the top {right} image of figure~\ref{fig:dosedepth}. The horizontal asymmetry of the distribution has its origin in the horizontal motion of the beam spot during the course of the irradiation. The vertical cuts through the sensor shown in figure~\ref{fig:dosedepth} illustrate the depth profile of the dose along the central column (bottom left image of figure~\ref{fig:dosedepth}) and row (bottom right image of figure~\ref{fig:dosedepth}). While the region closest to the surface at the entrance window has received a maximum dose of $(1.3{\pm 0.2})\,\text{MGy}$ at column number $10$ and row number $9$, the dose deposited at the depth of the $\text{SiO}_2$ structures is reduced by a factor of $240$ to $(5.4{\pm 0.7)}\,\text{kGy}$ due to absorption by the sensor material above. This corresponds to a total dose of $(180{\pm 13)}\,\text{MGy}$ delivered to the surface of the sensor when neglecting absorption in the sensor material and integrating the dose over the spot profile. The dose received by the ASIC is further reduced by the shielding effect caused by the bump bonds. It can be estimated to be $< 24\,\text{kGy}$ {for the ASIC areas located below the bump bonds and to $(670 \pm 140)\,\text{kGy}$ for the unshielded areas between the bump bonds.
These numbers are translated to the surface of the ASIC, neglecting further structural details of the ASIC chip and absorption therein.} \section{Experiment Results} \label{sec:Experiment Results} We characterized the pre- and post-irradiation performance of the ePix100a following the methodology described in section~\ref{sec:Experiment Setup and Methodology}. The observed offset, noise and gain changes can be categorized into immediate and post-irradiation effects. We observed immediate effects during irradiation on time scales shorter than seconds or minutes. With post-irradiation effects we refer to changes of the detector performance on timescales of hours and days after the last irradiation cycle was completed. In the following sections we describe both effect categories in detail. \subsection{Immediate Effects} \label{sec:ImmediateEffects} During irradiation with the high intensity FEL beam at an energy of $E_\text{Beam} = 1\,\mu\text{J}$, the pixels in the $20\,\text{pixels}\times20\,\text{pixels}$ large ROI {were} unresponsive. {We assume that this effect originates in the ASIC~\cite{Markovic:2014a, Carini:2012}. There is no obvious mechanism in the silicon sensor that we are aware of which could cause the signal of individual pixels to drop to values as low as $130\,\text{ADU}$ on average. The mechanism by which this occurs has to be investigated further.} {We would like to} emphasise that the observed effect {is of short duration} and only present during irradiation. It does not lead to permanent damage: after irradiation, these pixels exhibit a dark signal saturating the detector's dynamic range, but {their signal} recovers to an offset level comparable to the surrounding pixels within the following $48\,\text{hours}$. Furthermore, these pixels are fully functional during dark signal measurements, as will be shown in the following.
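The unresponsive-pixel signature described here is what makes the beam-position tracking of section~\ref{sec:DoseCalib} possible. As an illustration only, the sketch below replaces the Canny-plus-circle-fit procedure with a much simpler centroid of all pixels reading far below the dark level; the threshold, frame size and signal values are invented for the example.

```python
import numpy as np

def beam_center(frame: np.ndarray, threshold_adu: float) -> tuple:
    """Estimate the beam-core position from the unresponsive-pixel cluster.

    Simplified stand-in for the Canny edge detection + circle fit used in
    the paper: unresponsive pixels read ~130 ADU, far below the ~1756 ADU
    dark level, so a plain threshold isolates them and their centroid
    approximates the circle centre.
    """
    rows, cols = np.nonzero(frame < threshold_adu)
    if rows.size == 0:
        raise ValueError("no unresponsive pixels in this frame")
    return float(rows.mean()), float(cols.mean())

# Synthetic frame: dark level 1756 ADU with a 5x5 unresponsive patch.
frame = np.full((20, 20), 1756.0)
frame[8:13, 10:15] = 130.0
print(beam_center(frame, threshold_adu=500.0))  # -> (10.0, 12.0)
```

Tracking this centre frame by frame is what allows the per-pixel photon counts to be accumulated correctly in the presence of beam position jitter.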
\begin{figure}[b] \centering \includegraphics[width=0.49\textwidth]{Images/Saturated_pixels_marked.png} \hfill \includegraphics[width=0.49\textwidth]{Images/OffsetDecay_exp_zoomed.png} \caption{\label{fig:SaturatedPx} Left: Image of the per-pixel offset as observed in the ROI right after completion of the last irradiation cycle and after recovery from saturation. Pixels delivering an offset signal close to or beyond the saturation threshold of the ADC have signal values $> 15500\,\text{ADU}$ (yellow color). Right: Evolution of the offset of those pixels located inside the magenta area {in the figure on the left.} The offset was monitored during the two and a half hours after completion of the last irradiation cycle.} \end{figure} With an increasing number of irradiation cycles the number of pixels with an offset surpassing the upper end of the dynamic range of the ADC increased. The left part of figure~\ref{fig:SaturatedPx} shows the situation shortly after the last irradiation cycle was completed. The pixels shown in yellow were completely saturated. \subsection{Post-Irradiation and Long Term Effects} \label{sec:Post-IrraditionEffects} \subsubsection{Offset and Noise} \label{sec:OffsetandNoise} \begin{figure}[htbp] \centering \includegraphics[width=0.49\textwidth]{Images/MaxOffsetChange_Days.png} \hfill \includegraphics[width=0.49\textwidth]{Images/MaxNoiseChange_Days.png} \caption{\label{fig:maxChanges} Left: Evolution of the offset as observed in the pixel showing the highest relative offset {increase} for the two integration time settings, i.e. $t_\text{Int}=50\,\mu\text{s}$ and $t_\text{Int}=800\,\mu\text{s}$. The exponential decrease of the offset observed during the first day is followed by a stabilized state at higher offset values in comparison to the pre-irradiation level. Right: Relative change of the noise as observed during the days following the last irradiation cycle.
The temporal behaviour of the noise mirrors the temporal evolution of the offset. {The average dose observed in the pixels showing the highest increase was $(4319 \pm 772)\,\text{Gy}$. The detector was kept at a temperature of $20\,^{\circ}\text{C}$ between the individual measurements.}} \end{figure} During the first three hours following the last irradiation cycle the offset of individual pixels decreased exponentially with time with a decay coefficient of $-0.413\,\text{h}^{-1}$, as shown in the right part of figure~\ref{fig:SaturatedPx} for $t_\text{Int} = 800\,\mu\text{s}$. The offset stabilized three days after irradiation at a level of $1832\,\text{ADU}$, higher than the pre-irradiation level of $1762\,\text{ADU}$ (see figure~\ref{fig:maxChanges}). As is evident from figure~\ref{fig:maxChanges}, the scale of this effect measured $3$ days after irradiation remains the same for the following measurements. In general we observe a larger offset and corresponding ENC for $t_\text{Int}=800\,\mu\text{s}$ when comparing pre- and post-irradiation conditions. Evaluating the offset change $46$ days after irradiation yielded an offset increase by approximately $15\%$ for $t_\text{Int}=800\,\mu\text{s}$ and by $1\%$ for $t_\text{Int}=50\,\mu\text{s}$. The measured increase in offset scales linearly with the integration time, i.e. with a factor of $800\,\mu\text{s}/50\,\mu\text{s} = 16$, which is expected if the effect is predominantly due to an increase in dark current. As shown in figure~\ref{fig:IntegrationTimeRatio} the ratio of the offset change for the two integration times approaches the expected factor of $16$ when the leakage current increases, which happens for doses above $\approx 4000\,\text{Gy}$. {The figure can be divided into three regions, as illustrated by the dotted, solid and dashed lines.
In the first region (dotted line) the contribution of the leakage current is negligible at $0\,\text{Gy}$, hence no difference in the offset between the shorter and longer integration time exists and the observed ratio is $1$. As the dose increases, the leakage current and in turn the ratio increase (second region, solid line). Finally, in the third region, the ratio approaches the ratio between the two integration times of $16$ (dashed line), which is expected when the leakage current dominates.} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Images/Factor_800_50_regions.png} \caption{Ratio of the offset change measured at $800\,\mu\text{s}$ and $50\,\mu\text{s}$ ({grey} dots). The black dashed line shows the scaling factor of $16$ expected from the ratio of the integration times when the leakage current dominates. If the leakage current is negligible the ratio equals $1$ (dotted line). {The solid black line visualizes the region where the leakage current increases with the increasing dose.}} \label{fig:IntegrationTimeRatio} \end{figure} As shown in the right part of figure~\ref{fig:maxChanges}, the RMS noise observed in these pixels follows the same behaviour. While the noise at $t_\text{Int}=800\,\mu\text{s}$ has increased by 85\%, for $t_\text{Int} = 50\,\mu\text{s}$ the increase is at the level of 30\%. {The maximal increase of the offset and noise 46 days after irradiation is measured for an average dose of $(4683 \pm 659)\,\text{Gy}$.} The spatial distribution of the induced offset (left) and noise (right) changes is shown in figure~\ref{fig:DarkChangesABS} for $t_\text{Int}=800\,\mu\text{s}$.
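The three-region behaviour of figure~\ref{fig:IntegrationTimeRatio} follows from a simple two-component picture of the offset change: a dose-dependent part that is independent of the integration time, plus a leakage-current term that scales linearly with it. A sketch under this assumption (the model and all parameter values are ours, for illustration only):

```python
def offset_change_ratio(leak_adu_per_us: float, base_adu: float,
                        t_long_us: float = 800.0,
                        t_short_us: float = 50.0) -> float:
    """Ratio of the radiation-induced offset change at two integration times.

    Two-component model (our illustration, not a fit from the paper): the
    offset change is an integration-time-independent part `base_adu` plus
    a leakage-current contribution proportional to the integration time.
    """
    return ((base_adu + leak_adu_per_us * t_long_us) /
            (base_adu + leak_adu_per_us * t_short_us))

# Negligible leakage current -> ratio 1; dominant leakage current -> the
# ratio of the two integration times, 800/50 = 16.
print(offset_change_ratio(0.0, 10.0))         # -> 1.0
print(round(offset_change_ratio(5.0, 1e-3)))  # -> 16
```

The solid-line transition region in the figure corresponds to intermediate values of the leakage-current term in this model.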
\begin{figure}[htbp] \centering \includegraphics[width=0.47\textwidth]{Images/OffsetChange800us_absolute.png} \qquad \includegraphics[width=0.47\textwidth]{Images/NoiseChange800us_absolute.png} \caption{Radiation-induced changes of the offset (left) and noise (right) for $t_\text{Int}=800\,\mu\text{s}$ evaluated 46 days after irradiation.} \label{fig:DarkChangesABS} \end{figure} Since different pixels within the ROI have received a different dose, we can evaluate the change of the offset and RMS noise between pre- and post-irradiation conditions as a function of dose, assuming that the pixels inside the ROI react similarly to radiation induced damage. Since the design of the pixels is the same {and the observed pixel-to-pixel variations of the dark current, noise and amplification are $\approx 3\,\%$, we consider this assumption to be justified.} Figure~\ref{fig:Dose_vsChanges} shows the offset (left) and noise (right) changes depending on the absorbed dose measured at the depth of the $\text{SiO}_{2}$ interface 46 days post-irradiation. Here again the influence of the longer integration time is clearly visible. The slopes derived from fitting a linear function to the data yield an offset and ENC change rate of $(56.0\pm0.6)\,\text{ADU}/\text{kGy}$ and $(8.7\pm0.1)\,\text{e}^-/\text{kGy}$ for $t_\text{Int}=800\,\mu\text{s}$ and $(1.0\pm0.2)\,\text{ADU}/\text{kGy}$ and $(2.0\pm0.1)\,\text{e}^-/\text{kGy}$ for $t_\text{Int}=50\,\mu\text{s}$, respectively. If absorption of the radiation in silicon is neglected when calculating the dose, the offset and ENC change rates yield $(235.9\pm2.6)\,\text{ADU}/\text{MGy}$ and $(37.4\pm0.6)\,\text{e}^-/\text{MGy}$ for $t_\text{Int}=800\,\mu\text{s}$ and $(4.2\pm0.6)\,\text{ADU}/\text{MGy}$ and $(8.3\pm0.4)\,\text{e}^-/\text{MGy}$ for $t_\text{Int}=50\,\mu\text{s}$.
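The change rates above come from straight-line fits of the per-pixel changes against dose. A minimal sketch of such a fit; the synthetic data assume the quoted $56\,\text{ADU}/\text{kGy}$ slope for $t_\text{Int}=800\,\mu\text{s}$, and the scatter is invented for the example:

```python
import numpy as np

# Synthetic per-pixel (dose, offset-change) points: slope 56 ADU/kGy as
# quoted in the text for t_int = 800 us, with invented Gaussian scatter.
rng = np.random.default_rng(seed=1)
dose_kgy = np.linspace(0.0, 5.0, 100)
offset_change_adu = 56.0 * dose_kgy + rng.normal(0.0, 3.0, dose_kgy.size)

# First-order polynomial fit returns (slope, intercept).
slope_adu_per_kgy, intercept_adu = np.polyfit(dose_kgy, offset_change_adu, 1)
print(f"offset change rate: {slope_adu_per_kgy:.1f} ADU/kGy")
```

The same procedure applied to the ENC data yields the change rates in $\text{e}^-/\text{kGy}$.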
The maximum observed increase of the offset reduces the available dynamic range of the detector by approximately $2\%$ for $t_\text{Int}=800\,\mu\text{s}$ and $\approx 0.1\%$ for $t_\text{Int}=50\,\mu\text{s}$. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{Images/Dose_vs_MeanOffsetChange.png} \hfill \includegraphics[width=0.49\textwidth]{Images/Dose_vs_MeanNoiseChange.png} \caption{\label{fig:Dose_vsChanges} Left: The change of the offset observed in different pixels of the ROI {46 days after irradiation} when comparing pre- and post-irradiation conditions depending on the accumulated dose at the depth of the $\text{SiO}_2$ layers is shown for $t_\text{Int}=50\,\mu\text{s}$ and $t_\text{Int}=800\,\mu\text{s}$. Right: Dependency of the ENC observed in different pixels of the ROI on the accumulated dose in units of electrons for the same integration time settings.} \end{figure} \subsubsection{Gain and Energy Resolution} \label{subsec:Gain&EnergyResolution} We took post-irradiation {copper} fluorescence flat-field data approximately one and a half hours after completion of the last irradiation cycle. At that time most of the pixels in the central part of the {region of interest} were still in saturation, thus not capable of detecting charge created by a photon interaction. Later calibration measurements (with the detector in a stabilized post-irradiation state) were not possible due to time-constrained access to the instrument. In order to characterise the gain and energy resolution, we therefore used pixels located in the periphery of the saturated area to compare the pre- and post-irradiation performance.
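Throughout this comparison, measured quantities in ADU are converted to energy and charge units via the pre-irradiation absolute gain and the pair-creation energy in silicon. A minimal sketch of these conversions (function names are ours; the constants are the values quoted in the text):

```python
W_SI_EV = 3.66           # eV per electron-hole pair in silicon (from the text)
GAIN_EV_PER_ADU = 70.2   # pre-irradiation absolute gain (from the text)

def adu_to_ev(value_adu: float) -> float:
    """Convert a signal or noise value from ADU to eV via the absolute gain."""
    return value_adu * GAIN_EV_PER_ADU

def ev_to_electrons(value_ev: float) -> float:
    """Convert an energy-equivalent value to electrons (e.g. noise to ENC)."""
    return value_ev / W_SI_EV

# The 139 eV RMS noise quoted for the pre-irradiation state corresponds
# to an equivalent noise charge of ~38 electrons.
print(round(ev_to_electrons(139.0)))  # -> 38
```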
\begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{Images/Cu_pre-post-normed.png} \hfill \includegraphics[width=0.49\textwidth]{Images/FWHMvsDose_20x20_mskdpx.png} \caption{\label{fig:Cu_spec} Left: The spectral distribution of the {$\text{Cu-K}_{\alpha}$} photons and the $9\,\text{keV}$ photon peak as detected by pixels in the periphery of the {region of interest} before irradiation (black) and after irradiation (red). Right: Relation of the {$\text{Cu-K}_{\alpha}$ line} peak width (assessed as FWHM) to the dose absorbed in the $\text{SiO}_2$ layer. Values extracted from the measurement are marked with green dots, while the yellow squares show the expected FWHM calculated from the noise in the pixels.} \end{figure} The left part of figure~\ref{fig:Cu_spec} shows a comparison of the measured spectrum of the { $\text{Cu-K}_{\alpha}$} lines and the $9\,\text{keV}$ line resulting from the photon beam before (black) and after irradiation (red) calculated from {the peripheral} pixels in the {region of interest}. The FWHM of the lines is larger after irradiation and the lines have a more pronounced low energy tail. Figure~\ref{fig:Cu_spec} on the right shows the dependency of the peaks' FWHM on the dose accumulated by each individual pixel. We find an increase of the FWHM of $100\,\text{eV}$ per $866\,\text{Gy}$. Following the relation for the intrinsic resolution limit of a semiconductor detector, $\Delta E \propto \sqrt{FWE}$, with the Fano factor {$F = 0.120$, the photon energy $E$ and the energy required to create an electron-hole pair {$W=3.66\,\text{eV}$} \cite{Lowe:2007a, Mazziotta:2008a} }, we would expect the behaviour indicated by the yellow squares (labeled as "Calculated"). The increase of the FWHM values follows the same slope (within the estimated errors) as the calculated intrinsic resolution values.
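For reference, the intrinsic limit behind the "Calculated" values can be written as $\Delta E_\text{FWHM} = 2.355\sqrt{FWE}$, with the electronic noise added in quadrature. The quadrature combination below is our assumption of the standard detector-resolution model, not the authors' exact fit:

```python
import math

F_FANO = 0.120  # Fano factor for silicon (value used in the text)
W_EV = 3.66     # energy per electron-hole pair in eV (value used in the text)

def fano_fwhm_ev(energy_ev: float) -> float:
    """Intrinsic (Fano-limited) FWHM of a photopeak in silicon."""
    return 2.355 * math.sqrt(F_FANO * W_EV * energy_ev)

def total_fwhm_ev(energy_ev: float, noise_rms_ev: float) -> float:
    """Fano limit and electronic noise combined in quadrature
    (standard resolution model; our sketch)."""
    return math.hypot(fano_fwhm_ev(energy_ev), 2.355 * noise_rms_ev)

# Cu-K_alpha at ~8.04 keV: the Fano limit alone is ~140 eV FWHM, so the
# measured resolution is dominated by electronic noise.
print(round(fano_fwhm_ev(8040.0)))  # -> 140
print(round(total_fwhm_ev(8040.0, 139.0)))  # -> 356
```

Any radiation-induced noise increase enters this expression through the second term, which is why the FWHM-vs-dose slope tracks the noise-vs-dose slope.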
The intercept values of the linear models are separated by ${184}\,\text{eV}$ (best case scenario including $1\,\sigma$ uncertainties), which is approximately consistent with the increased mean noise value measured in the {peripheral pixels of the region of interest} after irradiation. These results suggest that the observed broadening of the {$\text{Cu-K}_{\alpha}$ line} is driven by the radiation-induced noise increase. Note that in order to convert ADU values to eV/$\text{e}^-$ units a pre-irradiation absolute gain value of $g=({70.2}\pm0.5)\,\text{eV}/\text{ADU}$ was used. Before irradiation we find $(114.75\pm0.07)\,\text{ADU}$ for the position of the { $\text{Cu-K}_\alpha$} line blend. Figure~\ref{fig:PeakPos} shows the position of the { $\text{Cu-K}_{\alpha}$} line extracted from single pixel spectra of irradiated pixels in dependence of the absorbed dose. Applying a linear model to the data yields a slope which is consistent with $0$, indicating no change of the {$\text{Cu-K}_{\alpha}$ line} position with dose. The { $\text{Cu-K}_\alpha$ line} after irradiation is located at $(113.62\pm0.37)\,\text{ADU}$, as we determined from the linear model. \begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{Images/PeakPosvsDose_20x20_mskdpx.png} \caption{\label{fig:PeakPos} Position of the {$\text{Cu-K}_{\alpha}$} line in dependency of the accumulated dose in different pixels. The line position was determined by fitting our Gaussian model to the spectra.} \end{figure} Since the flat-field calibration measurement contained only pixels with an absorbed dose up to a maximum of $2\,\text{kGy}$, the gain behaviour for the most irradiated pixels could not be studied. In order to study the gain behaviour of these pixels, we performed a charge injection scan with the current sources implemented in each pixel $240$ days after irradiation.
During the linearity scan an increasing signal is injected into the preamplifier with an internal 10-bit pulser, thus simulating a charge created by photon interactions in the sensor material. For our measurement, 1024 steps of the pulser were used to scan the full dynamic range of the detector. The observed gain values derived from {$\text{Cu-K}_{\alpha}$} fluorescence data are shown in dependency of the absorbed dose in the left part of figure~\ref{fig:Gains}. The x-axis range is limited to dose values below $2\,\text{kGy}$, as the pixels which have seen a higher dose were still saturated at the time the flat-field data was taken (see section \ref{sec:ImmediateEffects}). The slope of $(-2\pm 9)\times 10^{-5}\,\text{ADU}\,\text{keV}^{-1}\text{Gy}^{-1}$ derived from a linear model fitted to the data is consistent with zero, indicating that the gain is not changing significantly up to a dose of $\approx 2\,\text{kGy}$. This behavior changes as soon as higher dose levels are reached. If we take the charge injection data covering the range between $2750\,\text{Gy}$ and $5500\,\text{Gy}$ into consideration (right part of figure~\ref{fig:Gains}), we find the gain decreasing at a rate of {$(-6\pm 3)\times 10^{-5}\,\text{ADU}\,\text{keV}^{-1}\text{Gy}^{-1}$}, as indicated by the black {solid} line in the right panel of {f}igure~\ref{fig:Gains}. {Gain values for lower deposited doses, i.e. deposited doses $\leq 2600\,\text{Gy}$, do not show a decrease, as shown by the dashed red line. The significance of the gain decrease at higher doses is visualised by the residuals of a constant function fitted to the charge injection data up to a dose of $2600\,\text{Gy}$ and extended to the full range of deposited doses. As shown by the red dots in the residuals plot in figure~\ref{fig:Gains}, the gain values for doses above $4\,\text{kGy}$ deviate by up to $-3\sigma$, thus indicating a weak gain decrease.
On the other hand, the residuals visualized by black dots corresponding to the linear fit to gain values for doses above $2750\,\text{Gy}$ show a very good agreement between the measured values and the fitted function.} \begin{figure}[htbp] \centering \includegraphics[width=0.46\textwidth]{Images/Gain_dose_FF_newCImodel.png} \qquad \includegraphics[width=0.46\textwidth]{Images/Gain_CI_wResiduals.png} \caption{\label{fig:Gains} Left: Gain estimated from the {$\text{Cu-K}_{\alpha}$} fluorescence data depending on dose. The black line shows a model derived from charge injection data (marked as "CI model") estimated for absorbed doses in the range of $500\,\text{Gy}$ to $2600\,\text{Gy}$. Right: Gain calculated from the internal charge injection data in relation to the absorbed dose. { Two regions were fitted separately, the lower dose region ($\leq 2600\,\text{Gy}$) where no gain decrease is observed and the higher dose region ($> 2600\,\text{Gy}$) showing a weak gain decrease. Residuals (red dots) of the constant function extended to the whole deposited dose range show a systematic deviation from the charge injection data of up to $-3\sigma$ for doses above $\approx 4\,\text{kGy}$, demonstrating a weak gain decrease.}} \end{figure} \section{Interpretation} We attribute the measured increase of the leakage current and consequently the higher offset after irradiation to radiation induced damage in the sensor by a mechanism discussed by Schwandt et al.~\cite{Schwandt:2012a}. The generation of $\text{e}^{-}\text{-- h}$ pairs close to the $\text{Si--SiO}_2$ interface and the build-up of positive charge lead to high electric fields near the interface, causing changes to the depletion boundary.
Coulomb repulsion between the positively doped boron implantation and the positive charges accumulating near the oxide layer might result in a shrinking of the boron-implanted region at its edges, thus exposing the metal contact and allowing the depleted area to extend to the region close to the metal contact. Potentially, the bending of the depletion boundary can reach the edges of the metal contact and thus increase the electron leakage current. When the generation of new charge carriers is interrupted, the recombination process dominates and the leakage current decreases exponentially, as observed and shown in the right part of figure~\ref{fig:SaturatedPx}. The effects on the gain were found to be pronounced at higher doses and are visible in the charge injection scan data. This indicates a radiation effect on the readout electronics in the ASIC, as the sensor-induced signal does not significantly contribute to those measurements. We see different mechanisms which could lead to the observed gain decrease. However, it is presently not fully understood which of these mechanisms contribute to the gain change observed in the ASIC. To explain the gain changes and the pixel saturation, both of which occur at the ASIC level, in more detail, a device simulation based on the specific design of the ePix100a sensor would be required, which is beyond the scope of this work. \section{Detector Lifetime Estimate} The radiation-induced damage presented in the previous section can be used to estimate the lifetime of the detector depending on the beam energy used during experiments, and to derive limits for the measurement time beyond which the performance of the detector will significantly degrade. The estimates presented here are based on an extrapolation of the measured relationship between the induced damage and the dose absorbed in the $\text{SiO}_2$ layer. Figure~\ref{fig:LifetimeRange} illustrates the time needed for a specific dynamic range reduction depending on the beam energy.
A reduction of the dynamic range by $50\%$ can be expected at a dose of ca. $(131{\pm 18)}\,\text{kGy}$ for $t_\text{Int}=800\,\mu\text{s}$ and at ca. $(7.4{\pm 1.0)}\,\text{MGy}$ for $t_\text{Int}=50\,\mu\text{s}$ absorbed in a maximally irradiated pixel{. Assuming a beam spatial distribution similar to the one used during this radiation damage experiment, i.e. the most irradiated pixel receiving $1\,\%$ of the total beam energy, the dose per $20\,\text{pixels}\times20\,\text{pixels}$ area} amounts to ca. $(13{\pm 1)}\,\text{MGy}$ for $t_\text{Int}=800\,\mu\text{s}$ and $(740{\pm 64)}\,\text{MGy}$ for $t_\text{Int}=50\,\mu\text{s}$. Saturation of the ADC dynamic range will occur at $(262{\pm 36)}\,\text{kGy}$ ($t_\text{Int}=800\,\mu\text{s}$), respectively at $(14.8{\pm 2.0)}\,\text{MGy}$ ($t_\text{Int}=50\,\mu\text{s}$), i.e. at $(26{\pm 2)}\,\text{MGy}$ ($t_\text{Int}=800\,\mu\text{s}$) and at $(1.48{\pm 0.13)}\,\text{GGy}$ ($t_\text{Int}=50\,\mu\text{s}$) of the total absorbed dose in the ROI. The left panel of figure \ref{fig:LifetimeRange} shows three exemplary cases for the expected dynamic range behaviour: the dynamic range reduction as observed in this radiation hardness study (blue dots), an extrapolated loss of $50\%$ of the ADC range (orange dots) and saturation of the ADC dynamic range due to the leakage current (green dots) for $800\,\mu\text{s}$ integration time. The same cases are plotted for $t_\text{Int} = 50\,\mu\text{s}$ on the right. The lowest beam energy shown in both graphs corresponds to an energy {deposited to the detector which is} equivalent to the upper limit of the ePix100a dynamic range {within one integration cycle}, i.e. 100 photons at $8\,\text{keV}$. For the estimate we assumed that photons are impinging on mostly the same region of the detector during scientific experiments. This assumption is reasonable for small angle scattering experiments.
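The exposure-time axis of figure~\ref{fig:LifetimeRange} rests on a simple beam-hours budget, following the assumptions stated in the text (planned 2021 X-ray delivery time, one beamline shared by two instruments, detector exposed half of the allocated time):

```python
def beam_hours_per_year(delivery_h: float = 4216.0,
                        instrument_share: float = 0.5,
                        exposure_fraction: float = 0.5) -> float:
    """Hours of beam on the detector per calendar year.

    Assumptions from the text: the planned X-ray delivery time is shared
    equally between the two instruments of a beamline, and the detector
    sees beam only half of the allocated time.
    """
    return delivery_h * instrument_share * exposure_fraction

# 4216 h/year x 0.5 x 0.5 = 1054 h/year of beam on the detector.
print(beam_hours_per_year())  # -> 1054.0
```

Any other usage scenario can be accommodated by adjusting the three factors.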
As an example, the horizontal lines in both plots visualize the number of hours the detector can be exposed to the beam during one, three and five years of operation at the European XFEL to reach the corresponding dynamic range reduction. For our estimate we assumed $4216$ hours of beam time operation per calendar year at the European XFEL. This value corresponds to the planned X-ray delivery time for the year $2021$. As one beamline serves two scientific instruments, the allocated time is assumed to be shared equally between the two. Moreover, we estimate the detector to be exposed to X-rays only $50\%$ of the available time{, i.e. the detector will be exposed to radiation for $4216\,\text{h}/\text{year} \times 0.5 \times 0.5 =1054\,\text{h}/\text{year}$. Hence the black line visualizing the dynamic range reduction during one year of operation corresponds to $1054$ hours of beam on the detector}. The numbers presented in figure \ref{fig:LifetimeRange} are of a general nature and can be transferred to any other usage scenario, e.g. at other X-ray facilities. A significant reduction of the dynamic range is not expected if the detector is illuminated with a beam energy below the dynamic range of the ADC, i.e. $\leq 100 \times 8\,\text{keV}$ photons. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Images/LifetimeHours_800us.png} \hfill \includegraphics[width=0.49\textwidth]{Images/LifetimeHours_50us.png} \caption{Estimate of the time needed to reach a specific level of dynamic range reduction at a certain beam energy. Three scenarios are shown: the reduction level observed during this radiation hardness study (blue dots), reduction to $50\,\%$ of the initial dynamic range (orange dots) and complete saturation of the ADC by the leakage current (green dots).
The left plot shows the estimate for $t_\text{Int} = 800\,\mu\text{s}$ and the right plot is for $t_\text{Int} = 50\,\mu\text{s}$.} \label{fig:LifetimeRange} \end{figure} The ePix100a detector was designed for low noise spectroscopy applications, hence requiring single photon sensitivity, i.e. good photon-to-noise discrimination down to the lowest photon energies. In this context the detector's noise is an important performance parameter. A common requirement for imaging detectors used at FEL facilities is a false hit detection probability per megapixel area of $P(0|1)< 10^{-6}$, which corresponds to a photon peak-to-noise separation of approximately $5\,\sigma$ at a given energy. The lowest acceptable signal-to-noise value is usually considered to be $3\,\sigma$. Figure~\ref{fig:LifetimeSeparation} shows the evolution of the signal-to-noise ratio if the detector is exposed to a given beam energy for a specific amount of time. The $5\,\sigma$ (cyan) and $3\,\sigma$ (magenta) peak separations are indicated for a photon energy of $9\,\text{keV}$. The plot on the left shows the separation power reduction at $800\,\mu\text{s}$ integration time and the right plot at the integration time of $50\,\mu\text{s}$. As in the previous figure, the horizontal lines mark the beam time hours at the European XFEL per calendar year. A critical noise increase, hence a reduction in peak-to-noise separation, is only expected at beam intensities above the detector's dynamic range. Irradiating the detector for $2\,\text{years}$ with an energy of $2.5\,\text{nJ}$ would cause a drop of the signal-to-noise ratio below $3\,\sigma$ at $800\,\mu\text{s}$ integration time, while at $50\,\mu\text{s}$ the same energy during $5\,\text{years}$ would lead to a drop below $5\,\sigma$. {The dose deposited to the $\text{SiO}_2$ layers delivered to the area of $20\,\text{pixels} \times 20\,\text{pixels}$ resulting from the beam energy of $2.5\,\text{nJ}$ corresponds to $(1.1 \pm 0.1)\,\text{GGy}$.
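The correspondence between the $P(0|1)$ requirement mentioned above and a $\approx 5\,\sigma$ peak-to-noise separation follows from the one-sided Gaussian tail probability; a sketch:

```python
import math

def false_hit_probability(n_sigma: float) -> float:
    """One-sided Gaussian tail: probability that pure noise in a pixel
    fluctuates above a threshold placed n_sigma above the offset."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

# A ~5 sigma separation keeps the per-pixel false-hit probability below
# the 1e-6 requirement; at 3 sigma it is ~1.3e-3.
print(false_hit_probability(5.0) < 1e-6)  # -> True
print(f"{false_hit_probability(3.0):.1e}")
```

As the radiation-induced noise grows, the separation in units of $\sigma$ shrinks for a fixed photon energy, which is what the figure translates into exposure-time limits.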
The same beam energy leads to a dose of $(2.7 \pm 0.2)\,\text{GGy}$ in 5 years. The beam energy needed to degrade the peak-to-noise separation to below $5\,\sigma$ in 5 years of operation at $800\,\mu\text{s}$ integration time is $\approx 0.4\,\text{nJ}$, which exceeds the dynamic range of the ePix100a by $\approx 3$ orders of magnitude.} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Images/PeakSeparDegrading_time_800us.png} \hfill \includegraphics[width=0.49\textwidth]{Images/PeakSeparDegrading_time_50us.png} \caption{Estimate of the time needed to reduce the peak-to-noise separation power below $5\,\sigma$ (green) and $3\,\sigma$ (pink) for $t_\text{Int} = 800\,\mu\text{s}$ (left plot) and $t_\text{Int} = 50\,\mu\text{s}$ (right plot), in dependency of the used beam energy.} \label{fig:LifetimeSeparation} \end{figure} Table~\ref{tab:LimitDoses} summarizes the dose thresholds deposited at the $\text{SiO}_2$ above which the detector's dynamic range and peak separation power will degrade. \begin{table} \centering \caption{Estimated dose thresholds above which a significant degradation of the detectors' performance will occur, i.e.
a reduction of its dynamic range and peak separation power.} \label{tab:LimitDoses} \smallskip \begin{tabular}{crrrr} \hline\hline \multicolumn{1}{l}{} & \multicolumn{4}{c}{Dose absorbed in $\text{SiO}_2$}\\ \cline{2-5} Integration time & \multicolumn{2}{c}{ADC range reduction} & \multicolumn{2}{c}{Peak separation}\\ \hline \multicolumn{1}{l}{} & \multicolumn{1}{c}{$50\,\%$} & \multicolumn{1}{c}{$100\,\%$} & \multicolumn{1}{c}{$<5\,\sigma$} & \multicolumn{1}{c}{$<3\,\sigma$}\\ \cline{2-5} \multicolumn{1}{r}{ $50\,\mu\text{s}$} & $(7.4{\pm 1.0)}\,\text{MGy}$ & $(14.8{\pm 2.0)}\,\text{MGy}$ & $(28{\pm 4)}\,\text{kGy}$ & $(32{\pm 4)}\,\text{kGy}$\\ \multicolumn{1}{r}{$800\,\mu\text{s}$} & $(131{\pm 18)}\,\text{kGy}$ & $(262{\pm 36)}\,\text{kGy}$ & $(9{\pm 1)}\,\text{kGy}$ & $(10{\pm 1})\,\text{kGy}$\\ \hline\hline \end{tabular} \end{table} \section{Conclusions \& Outlook} We have performed a systematic study of the influence of radiation induced damage on the performance of the ePix100a detector. We irradiated the ePix100a detector under controlled conditions with the direct and attenuated European XFEL beam, using X-ray photons with an energy of $9\,\text{keV}$ and a beam energy of $1\,\mu\text{J}$. Pixels irradiated at this energy do not show a signal dependent response during irradiation but remain functional under normal operating conditions. {Irradiating the detector} beyond the beam energy of $1\,\mu\text{J}$ {for longer time periods, e.g. $> 5\,\text{min}$} will cause {failure of} the irradiated pixels. Furthermore, we provide irradiation limits for typical usage scenarios at the European XFEL, each corresponding to a certain damage level. Our results can be transferred to experimental conditions at other facilities and experiments. The irradiated area of $1\,\text{mm}^2$ has received a dose of approximately $760\,\text{kGy}$ at the depth of the $\text{Si}/\text{SiO}_2$ interface in the sensor, which corresponds to $180\,\text{MGy}$ delivered to the surface of the sensor.
The dose-dependent increase of the offset and noise is mainly caused by an increase of the leakage current. The observed broadening of the $\text{Cu-K}_{\alpha}$ fluorescence line measured $90\,\text{min}$ post irradiation scales with the increased noise in the pixels and is thus caused by the radiation-induced leakage current. A change of the gain is not expected for a dose $< 4\,\text{kGy}$. Nevertheless, a charge injection scan showed a weak gain decrease for the most irradiated pixels, suggesting weak damage at the pixel preamplifier. Single photon discrimination at a significance level of $>5\,\sigma$ can be achieved with the ePix100a up to a dose of $9\,\text{kGy}$ at $t_\text{Int} = 800\,\mu\text{s}$ and up to $28\,\text{kGy}$ at $t_\text{Int} = 50\,\mu\text{s}$. In the near future, we plan to investigate sensor annealing as a possibility to mitigate the radiation-induced performance changes and thereby conclude the ePix100a radiation hardness study. \acknowledgments We acknowledge the European XFEL in Schenefeld, Germany, for provision of an X-ray free-electron laser beamtime at the HED instrument and would like to thank the beamline staff for their assistance. The work presented in this publication was funded by the European XFEL. We would like to thank specifically the following European XFEL groups for their fruitful collaboration, vital contribution to this work and their continuous effort in supporting this project: control devices were developed by the Controls group led by Darren Spruce; data acquisition and storage are provided by the Information Technology and Data Management (ITDM) group led by Krzysztof Wrona. We would like to thank Theophilos Maltezopoulos from the X-ray Photon Diagnostics (XPD) group for his support in analyzing the XGM data and Dionisio Doering and Maciej Kwiatkowski for support with the execution of the charge injection scan.
\section{Introduction} \label{sec:introduction} While the recent advent of deep learning has led to impressive progress in Natural Language Processing (NLP), these techniques are known to be particularly data hungry, limiting their applicability in many practical scenarios. An increasingly popular approach to alleviate this issue is to first learn general language representations on unlabeled data, which are then integrated in task-specific downstream systems. This approach was first popularized by word embeddings \citep{mikolov2013distributed,pennington2014glove}, but has recently been superseded by sentence-level representations \citep{peters2018deep,devlin2018bert}. Nevertheless, all these works learn a separate model for each language and are thus unable to leverage information across different languages, greatly limiting their potential performance for low-resource languages. In this work, we are interested in \textbf{universal language agnostic sentence embeddings}, that is, vector representations of sentences that are general with respect to two dimensions: the input language and the NLP task. The motivations for such representations are multiple: the hope that languages with limited resources benefit from joint training over many languages, the desire to perform zero-shot transfer of an NLP model from one language (typically English) to another, and the possibility to handle code-switching. To that end, we train a single encoder to handle multiple languages, so that semantically similar sentences in different languages are close in the embedding space. While previous work in multilingual NLP has been limited to either a few languages \citep{schwenk2017learning,yu2018multilingual} or specific applications like typology prediction \citep{malaviya2017learning} or machine translation \citep{neubig2018rapid}, we learn general purpose sentence representations for 93 languages (see Table \ref{tab:languages}). 
Using a single pre-trained BiLSTM encoder for all the 93 languages, we obtain very strong results in various scenarios without any fine-tuning, including cross-lingual natural language inference (XNLI dataset), cross-lingual classification (MLDoc dataset), bitext mining (BUCC dataset) and a new multilingual similarity search dataset we introduce covering 112 languages. To the best of our knowledge, this is the first exploration of general purpose massively multilingual sentence representations across a large variety of tasks. \section{Related work} \label{sec:related} Following the success of word embeddings \citep{mikolov2013distributed,pennington2014glove}, there has been an increasing interest in learning continuous vector representations of longer linguistic units like sentences \citep{le2014distributed,kiros2015skipthought}. These sentence embeddings are commonly obtained using a Recurrent Neural Network (RNN) encoder, which is typically trained in an unsupervised way over large collections of unlabelled corpora. For instance, the skip-thought model of \citet{kiros2015skipthought} couples the encoder with an auxiliary decoder, and trains the entire system to predict the surrounding sentences over a collection of books. It was later shown that more competitive results could be obtained by training the encoder over labeled Natural Language Inference (NLI) data \citep{conneau2017supervised}. This was later extended to multitask learning, combining different training objectives like that of skip-thought, NLI and machine translation \citep{cer2018universal,subramanian2018learning}. While the previous methods consider a single language at a time, multilingual representations have attracted considerable attention in recent times. Most of this research focuses on cross-lingual word embeddings \citep{ruder2017survey}, which are commonly learned jointly from parallel corpora \citep{gouws2015bilbowa,luong2015bilingual}.
An alternative approach that is becoming increasingly popular is to separately train word embeddings for each language, and map them to a shared space based on a bilingual dictionary \citep{mikolov2013exploiting,artetxe2018generalizing} or even in a fully unsupervised manner \cite{conneau2018word,artetxe2018robust}. Cross-lingual word embeddings are often used to build bag-of-words representations of longer linguistic units by taking their respective (IDF-weighted) average \citep{klementiev2012inducing,dufter2018embedding}. While this approach has the advantage of requiring weak or no cross-lingual signal, it has been shown that the resulting sentence embeddings work poorly in practical cross-lingual transfer settings \citep{conneau2018xnli}. A more competitive approach that we follow here is to use a sequence-to-sequence encoder-decoder architecture \citep{schwenk2017learning,hassan2018achieving}. The full system is trained end-to-end on parallel corpora akin to multilingual neural machine translation \citep{johnson2017google}: the encoder maps the source sequence into a fixed-length vector representation, which is used by the decoder to create the target sequence. This decoder is then discarded, and the encoder is kept to embed sentences in any of the training languages. While some proposals use a separate encoder for each language \citep{schwenk2017learning}, sharing a single encoder for all languages also gives strong results \citep{schwenk2018filtering}. Nevertheless, most existing work is either limited to a few, rather close languages \citep{schwenk2017learning,yu2018multilingual} or, more commonly, considers pairwise joint embeddings with English and one foreign language \citep{espana2017empirical,guo2018effective}.
To the best of our knowledge, existing work on learning multilingual representations for a large number of languages is limited to word embeddings \citep{ammar2016massively,dufter2018embedding} or specific applications like typology prediction \citep{malaviya2017learning} or machine translation \citep{neubig2018rapid}, ours being the first paper exploring general purpose massively multilingual sentence representations. All the previous approaches learn a fixed-length representation for each sentence. A recent research line has obtained very strong results using variable-length representations instead, consisting of contextualized embeddings of the words in the sentence \citep{dai2015semisupervised,peters2018deep,howard2018universal,devlin2018bert}. For that purpose, these methods train either an RNN or self-attentional encoder over unannotated corpora using some form of language modeling. A classifier can then be learned on top of the resulting encoder, which is commonly further fine-tuned during this supervised training. Concurrent to our work, \citet{lample2019cross} propose a cross-lingual extension of these models, and report strong results in cross-lingual natural language inference, machine translation and language modeling. In contrast, our focus is on scaling to a large number of languages, for which we argue that fixed-length approaches provide a more versatile and compatible representation form.\footnote{For instance, there is not always a one-to-one correspondence among words in different languages (e.g. a single word of a morphologically complex language might correspond to several words of a morphologically simple language), so having a separate vector for each word might not transfer as well across languages.} Also, our approach achieves strong results without task-specific fine-tuning, which makes it interesting for tasks with limited resources.
\begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{fig/architecture-deep.pdf} \caption{Architecture of our system to learn multilingual sentence embeddings.} \label{fig:architecture} \end{figure*} \section{Proposed method} \label{sec:embeddings} We use a single, language agnostic BiLSTM encoder to build our sentence embeddings, which is coupled with an auxiliary decoder and trained on parallel corpora. From Section \ref{subsec:architecture} to \ref{subsec:data}, we describe its architecture, our training strategy to scale to 93 languages, and the training data used for that purpose. \subsection{Architecture} \label{subsec:architecture} Figure \ref{fig:architecture} illustrates the architecture of the proposed system, which is based on \citet{schwenk2018filtering}. As can be seen, sentence embeddings are obtained by applying a max-pooling operation over the output of a BiLSTM encoder. These sentence embeddings are used to initialize the decoder LSTM through a linear transformation, and are also concatenated to its input embeddings at every time step. Note that there is no other connection between the encoder and the decoder, as we want all relevant information of the input sequence to be captured by the sentence embedding. We use a single encoder and decoder in our system, which are shared by all languages involved. For that purpose, we build a joint byte-pair encoding (BPE) vocabulary with 50k operations, which is learned on the concatenation of all training corpora. This way, the encoder has no explicit signal on what the input language is, encouraging it to learn language-independent representations. In contrast, the decoder takes a language ID embedding that specifies the language to generate, which is concatenated to the input and sentence embeddings at every time step. Scaling up to almost one hundred languages calls for an encoder with sufficient capacity.
In this paper, we limit our study to a stacked BiLSTM with 1 to 5 layers, each 512-dimensional. The resulting sentence representations (after concatenating both directions) are 1024-dimensional. The decoder always has one layer of dimension 2048. The input embedding size is set to 320, while the language ID embedding has 32 dimensions. \begin{table*}[t!] \insertTabLanguages \caption{List of the 93 languages along with their training size, the resulting similarity error rate on Tatoeba, and the number of sentences in the Tatoeba test set. Dashes denote language pairs excluded for containing fewer than 100 test sentences.} \label{tab:languages} \end{table*} \subsection{Training strategy} \label{subsec:strategies} In preceding work \cite{schwenk2017learning,schwenk2018filtering}, each input sentence was jointly translated into all other languages. However, this approach has two obvious drawbacks when trying to scale to a large number of languages. First, it requires an N-way parallel corpus, which is difficult to obtain for all languages. Second, it has a quadratic cost with respect to the number of languages, making training prohibitively slow as the number of languages is increased. In our preliminary experiments, we observed that similar results can be obtained using only two target languages.\footnote{Note that, if we had a single target language, the only way to train the encoder for that language would be auto-encoding, which we observe to work poorly. Having two target languages avoids this problem.} At the same time, we relax the requirement for N-way parallel corpora by considering separate alignments for each language combination. Training minimizes the cross-entropy loss on the training corpus, alternating over all combinations of the languages involved. For that purpose, we use Adam with a constant learning rate of 0.001 and dropout set to 0.1, and train for a fixed number of epochs.
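The pooling step described above is easy to make concrete. The sketch below is our own illustration with toy dimensions (the actual system is implemented in fairseq): the sentence embedding is the element-wise maximum of the per-token BiLSTM outputs, so its size is fixed regardless of sentence length.

```python
import numpy as np

def sentence_embedding(hidden_states):
    """Max-pool per-token BiLSTM outputs over the time axis.

    hidden_states: (seq_len, 2 * hidden_dim) array holding the concatenated
    forward/backward states for each token. With hidden_dim = 512 as in the
    paper, this yields 1024-dimensional sentence embeddings.
    """
    return hidden_states.max(axis=0)

# Toy example: 3 tokens, 4-dimensional bidirectional states.
h = np.array([[ 0.1, -1.0,  0.3, 2.0],
              [ 0.5,  0.2, -0.3, 1.0],
              [-0.2,  0.8,  0.0, 0.5]])
emb = sentence_embedding(h)  # element-wise max over the 3 tokens
```

Unlike keeping only the last hidden state, max-pooling lets every token contribute directly to the embedding, which matters for long sentences.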
Our implementation is based on \texttt{fairseq},\footnote{\url{https://github.com/pytorch/fairseq}} and we make use of its multi-GPU support to train on 16 NVIDIA V100 GPUs with a total batch size of 128,000 tokens. Unless otherwise specified, we train our model for 17 epochs, which takes about 5 days. Stopping training earlier decreases the overall performance only slightly. \begin{table*}[t] \insertTabXNLI \caption{Test accuracies on the XNLI cross-lingual natural language inference dataset. All results from \citet{conneau2018xnli} correspond to max-pooling, which outperforms the last-state variant in all cases. Results involving MT do not use a multilingual model and are not directly comparable with zero-shot transfer. Overall best results are in bold, the best ones in each group are underlined. \\ $^*$ Results for BERT \cite{devlin2018bert} are extracted from its GitHub README\footnotemark[9] \\ % $^\dagger$ Monolingual BERT model for Thai from \url{https://github.com/ThAIKeras/bert} } \label{tab:results_xnli} \end{table*} \subsection{Training data and pre-processing} \label{subsec:data} As described in Section \ref{subsec:strategies}, training requires bitexts aligned with two target languages. We choose English and Spanish for that purpose, as most of the data is aligned with these languages.\footnote{Note that it is not necessary that all input languages are systematically aligned with both target languages. Once we have several languages with both alignments, the joint embedding is well conditioned, and we can add more languages with one alignment only, usually English.} We collect training corpora for 93 input languages by combining the Europarl, United Nations, OpenSubtitles2018, Global Voices, Tanzil and Tatoeba corpus, which are all publicly available on the OPUS website\footnote{\url{http://opus.nlpl.eu}} \cite{tiedmann2012parallel}. 
Appendix~\ref{app:data} provides a more detailed description of this training data, while Table~\ref{tab:languages} summarizes the list of all languages covered and the size of the bitexts. Our training data comprises a total of 223 million parallel sentences. All pre-processing is done with Moses tools:\footnote{\url{http://www.statmt.org/moses}} punctuation normalization, removing non-printing characters and tokenization. As the only exception, Chinese and Japanese were segmented with Jieba\footnote{\url{https://github.com/fxsjy/jieba}} and Mecab,\footnote{\url{https://github.com/taku910/mecab}} respectively. All the languages are kept in their original script with the exception of Greek, which we romanize into the Latin alphabet. It is important to note that the joint encoder itself has no information on the language or writing script of the tokenized input texts. It is even possible to mix multiple languages in one sentence. \section{Experimental evaluation} \label{sec:experiments} In contrast with the well-established evaluation frameworks for English sentence representations \cite{conneau2017supervised,wang2018glue}, there is not yet a commonly accepted standard to evaluate multilingual sentence embeddings. The most notable effort in this regard is arguably the XNLI dataset \cite{conneau2018xnli}, which evaluates the transfer performance of an NLI model trained on English data over 14 additional test languages (Section \ref{subsec:xnli}). So as to obtain a more complete picture, we also evaluate our embeddings in cross-lingual document classification (MLDoc, Section \ref{subsec:mldoc}), and bitext mining (BUCC, Section \ref{subsec:bucc}). However, all these datasets only cover a subset of our 93 languages, so we also introduce a new test set for multilingual similarity search in 112 languages, including several languages for which we have no training data but whose language family is covered (Section \ref{subsec:tatoeba}). 
We remark that we use the same pre-trained BiLSTM encoder for all tasks and languages without any fine-tuning. \subsection{XNLI: cross-lingual NLI} \label{subsec:xnli} \begin{table*}[t] \insertTabMLDoc \caption{Accuracies on the MLDoc zero-shot cross-lingual document classification task (test set).} \label{tab:results_mldoc} \end{table*} NLI has become a widely used task to evaluate sentence representations \citep{bowman2015large,williams2018broad}. Given two sentences, a premise and a hypothesis, the task consists in deciding whether there is an \textit{entailment}, \textit{contradiction} or \textit{neutral} relationship between them. XNLI is a recent effort to create a dataset similar to the English MultiNLI for several languages \cite{conneau2018xnli}. It consists of 2,500 development and 5,000 test instances translated from English into 14 languages by professional translators, making results across different languages directly comparable. We train a classifier on top of our multilingual encoder using the usual combination of the two sentence embeddings: $(p, h, p \cdot h, |p-h|)$, where $p$ and $h$ are the premise and hypothesis. For that purpose, we use a feed-forward neural network with two hidden layers of size 512 and 384, trained with Adam. All hyperparameters were optimized on the English XNLI development corpus only, and then, the same classifier was applied to all languages of the XNLI test set. As such, we did not use any training or development data in any of the foreign languages. Note, moreover, that the multilingual sentence embeddings are fixed and not fine-tuned on the task or the language. We report our results in Table~\ref{tab:results_xnli}, along with several baselines from \citet{conneau2018xnli} and the multilingual BERT model \citep{devlin2018bert}.\footnote{\label{foot:bert}Note that the multilingual variant of BERT is not discussed in its paper \citep{devlin2018bert}. 
Instead, the reported results were extracted from the README of the official GitHub project at \url{https://github.com/google-research/bert/blob/master/multilingual.md} on July 5, 2019.} Our proposed method obtains the best results in zero-shot cross-lingual transfer for all languages but Spanish. Moreover, our transfer results are strong and homogeneous across all languages: for 11 of them, the zero-shot performance is (at most) 5\% lower than the one on English, including distant languages like Arabic, Chinese and Vietnamese, and we also achieve remarkably good results on low-resource languages like Swahili. In contrast, BERT achieves excellent results on English, outperforming our system by 7.5 points, but its transfer performance is much weaker. For instance, the loss in accuracy for both Arabic and Chinese is 2.5 points for our system, compared to 19.3 and 17.6 points for BERT.\footnote{Concurrent to our work, \citet{lample2019cross} report superior results using another variant of BERT, outperforming our method by 4.5 points on average. However, note that these results are not fully comparable because 1) their system uses development data in the foreign languages, whereas our approach is fully zero-shot, 2) their approach requires fine-tuning on the task, 3) our system handles a much larger number of languages, and 4) our transfer performance is substantially better (an average loss of 4 vs 10.6 points with respect to the respective English system).} Finally, we also outperform all baselines of \citet{conneau2018xnli} by a substantial margin, with the additional advantage that we use a single pre-trained encoder, whereas \mbox{X-BiLSTM} learns a separate encoder for each language. We also provide results involving Machine Translation (MT) from \citet{conneau2018xnli}.
This can be done in two ways: 1) translate the test data into English and apply the English NLI classifier, or 2) translate the English training data and train a separate NLI classifier for each language. Note that we are not evaluating multilingual sentence embeddings anymore, but rather the quality of the MT system and a monolingual model. Moreover, the use of MT incurs a significant overhead with either strategy: translating the test data makes inference substantially more expensive, whereas translating the training data results in a separate model for each language. As shown in Table~\ref{tab:results_xnli}, our approach outperforms all translation baselines of \citet{conneau2018xnli}. We also outperform MT BERT for Arabic and Thai, and are very close for Urdu. Thanks to its multilingual nature, our system can also handle premises and hypotheses in different languages. As reported in Appendix \ref{app:xnli_cross}, the proposed method obtains very strong results in these settings, even for distant language combinations like French-Chinese. \subsection{MLDoc: cross-lingual classification} \label{subsec:mldoc} \begin{table*}[t!] \insertTabBucc \caption{F1 scores on the BUCC mining task.} \label{tab:results_bucc} \end{table*} Cross-lingual document classification is a typical application of multilingual representations. In order to evaluate our sentence embeddings in this task, we use the MLDoc dataset of \citet{schwenk2018corpus}, which is an improved version of the Reuters benchmark \citep{lewis2004rcv1,klementiev2012inducing} with uniform class priors and a wider language coverage. There are 1,000 training and development documents and 4,000 test documents for each language, divided into 4 different genres.
Just as with the XNLI evaluation, we consider the zero-shot transfer scenario: we train a classifier on top of our multilingual encoder using the English training data, optimizing hyper-parameters on the English development set, and evaluating the resulting system in the remaining languages. We use a feed-forward neural network with one hidden layer of 10 units. As shown in Table~\ref{tab:results_mldoc}, our system obtains the best published results for 5 of the 7 transfer languages. We believe that our weaker performance on Japanese can be attributed to the domain and sentence length mismatch between MLDoc and the parallel corpus we use for this language. \subsection{BUCC: bitext mining} \label{subsec:bucc} Bitext mining is another natural application for multilingual sentence embeddings. Given two comparable corpora in different languages, the task consists in identifying sentence pairs that are translations of each other. For that purpose, one would commonly score sentence pairs by taking the cosine similarity of their respective embeddings, so parallel sentences can be extracted through nearest neighbor retrieval and filtered by setting a fixed threshold over this score \citep{schwenk2018filtering}. However, it was recently shown that this approach suffers from scale inconsistency issues \citep{guo2018effective}, and \citet{artetxe2018margin} proposed the following alternative score addressing it: \begin{multline*} \score(x, y) = \margin (\cos(x, y), \\ \sum_{z \in \nn_k(x)}{\frac{\cos(x, z)}{2k}} + \sum_{z \in \nn_k(y)}{\frac{\cos(y, z)}{2k}}) \end{multline*} where $x$ and $y$ are the source and target sentences, and $\nn_k(x)$ denotes the $k$ nearest neighbors of $x$ in the other language. The paper explores different margin functions, with \textit{ratio} ($\margin(a, b) = \frac{a}{b}$) yielding the best results. This notion of margin is related to CSLS \citep{conneau2018word}. 
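To make the scoring rule concrete, here is a minimal NumPy sketch of the ratio margin with brute-force nearest-neighbour search (our own illustration; function and variable names are not from the paper, and real mining pipelines use approximate search over millions of sentences):

```python
import numpy as np

def margin_scores(x_emb, y_emb, k=4):
    """Ratio-margin scores between all source (x) and target (y) embeddings.

    Implements score(x, y) = cos(x, y) divided by the average of the mean
    cosine similarities to the k nearest neighbours of x and of y, i.e. the
    margin formula above with margin(a, b) = a / b.
    """
    # L2-normalise so that dot products equal cosine similarities.
    x = x_emb / np.linalg.norm(x_emb, axis=1, keepdims=True)
    y = y_emb / np.linalg.norm(y_emb, axis=1, keepdims=True)
    cos = x @ y.T  # (n_x, n_y) cosine similarity matrix
    # Mean similarity to the k nearest neighbours in the other language.
    knn_x = np.sort(cos, axis=1)[:, -k:].mean(axis=1)  # one value per x
    knn_y = np.sort(cos, axis=0)[-k:, :].mean(axis=0)  # one value per y
    return cos / ((knn_x[:, None] + knn_y[None, :]) / 2.0)
```

Candidate pairs are then ranked by this score and kept above a fixed threshold, which is how the BUCC mining below is carried out.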
We use this method to evaluate our sentence embeddings on the BUCC mining task \citep{zweigenbaum2017overview,zweigenbaum2018overview}, using the exact same hyper-parameters as \citet{artetxe2018margin}. The task consists in extracting parallel sentences from a comparable corpus between English and four foreign languages: German, French, Russian and Chinese. The dataset consists of 150K to 1.2M sentences for each language, split into a sample, training and test set, with about 2--3\% of the sentences being parallel. As shown in Table \ref{tab:results_bucc}, our system establishes a new state-of-the-art for all language pairs with the exception of the English-Chinese test set. We also outperform \citet{artetxe2018margin} themselves, who use two separate models covering 4 languages each. Not only are our results better, but our model also covers many more languages, so it can potentially be used to mine bitext for any combination of the 93 languages supported. \subsection{Tatoeba: similarity search} \label{subsec:tatoeba} While XNLI, MLDoc and BUCC are well-established benchmarks with comparative results available, they only cover a small subset of our 93 languages. So as to better assess the performance of our model in all these languages, we introduce a new test set of similarity search for 112 languages based on the Tatoeba corpus. The dataset consists of up to 1,000 English-aligned sentence pairs for each language. Appendix~\ref{app:tatoeba} describes how the dataset was constructed in more detail. Evaluation is done by finding the nearest neighbor for each sentence in the other language according to cosine similarity and computing the error rate. We report our results in Table~\ref{tab:languages}.
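The evaluation protocol just described fits in a few lines. The sketch below (our own; it uses plain cosine retrieval rather than the margin score used for BUCC) computes the error rate from two row-aligned embedding matrices, where row $i$ of each matrix embeds the $i$-th sentence of an aligned pair:

```python
import numpy as np

def similarity_error_rate(src_emb, tgt_emb):
    """Fraction of source sentences whose cosine nearest neighbour in the
    target language is not the aligned sentence (row i pairs with row i)."""
    s = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    t = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    predicted = (s @ t.T).argmax(axis=1)  # nearest-neighbour indices
    return float((predicted != np.arange(len(s))).mean())
```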
Contrasting these results with those of XNLI suggests that similarity error rates below 5\% are indicative of strong downstream performance.\footnote{We consider the average of en$\rightarrow$xx and xx$\rightarrow$en.} This is the case for 37 languages, while there are 48 languages with an error rate below 10\% and 55 with less than 20\%. There are only 15 languages with error rates above 50\%. Additional result analysis is given in Appendix \ref{app:tatoeba_analysis}. We believe that our competitive results for many low-resource languages are indicative of the benefits of joint training, which is also supported by our ablation results in Section \ref{subsec:ablation_languages}. In relation to that, Appendix~\ref{app:unseen} reports similarity search results for 29 additional languages without any training data, showing that our encoder can also generalize to unseen languages to some extent as long as it was trained on related languages. \section{Ablation experiments} \label{sec:ablation} In this section, we explore different variants of our approach and study the impact on the performance for all our evaluation tasks. We report average results across all languages. For XNLI, we also report the accuracy on English. \subsection{Encoder depth} Table~\ref{tab:results:depth} reports the performance on the different tasks for encoders with 1, 3 or 5 layers. We were not able to achieve good convergence with deeper models. It can be seen that all tasks benefit from deeper models, in particular XNLI and Tatoeba, suggesting that a single-layer BiLSTM does not have enough capacity to encode so many languages. \begin{table}[t] \insertTabDepth \caption{Impact of the depth of the BiLSTM encoder.} \label{tab:results:depth} \end{table} \subsection{Multitask learning} Multitask learning has been shown to be helpful for learning English sentence embeddings \cite{subramanian2018learning,cer2018universal}.
The most important task in this approach is arguably NLI, so we explored adding an additional NLI objective to our system with different weighting schemes. As shown in Table \ref{tab:results:nli}, the NLI objective leads to a better performance on the English NLI test set, but this comes at the cost of a worse cross-lingual transfer performance in XNLI and Tatoeba. The effect in BUCC is negligible. \begin{table}[t] \insertTabNLI \caption{Multitask training with an NLI objective and different weightings.} \label{tab:results:nli} \end{table} \subsection{Number of training languages} \label{subsec:ablation_languages} So as to better understand how our architecture scales to a large number of languages, we train a separate model on a subset of 18 evaluation languages, and compare it to our main model trained on 93 languages. We replaced the Tatoeba corpus with the WMT 2014 test set to evaluate the multilingual similarity error rate. This covers English, Czech, French, German and Spanish, so results between both models are directly comparable. As shown in Table~\ref{tab:results:nb_langs}, the full model matches or outperforms the model covering only the evaluation languages on all tasks but MLDoc. This suggests that the joint training also yields overall better representations. \begin{table}[t] \insertTabNLangs \caption{Comparison between training on 93 languages and training on the 18 evaluation languages only.} \label{tab:results:nb_langs} \end{table} \section{Conclusions} \label{sec:conclusions} In this paper, we propose an architecture to learn multilingual fixed-length sentence embeddings for 93 languages. We use a single language-agnostic \mbox{BiLSTM} encoder for all languages, which is trained on publicly available parallel corpora and applied to different downstream tasks without any fine-tuning.
Our experiments on cross-lingual natural language inference (XNLI), cross-lingual document classification (MLDoc), and bitext mining (BUCC) confirm the effectiveness of our approach. We also introduce a new test set for multilingual similarity search in 112 languages, and show that our approach is competitive even for low-resource languages. To the best of our knowledge, this is the first successful exploration of general purpose massively multilingual sentence representations. In the future, we would like to explore alternative encoder architectures like self-attention \citep{vaswani2017attention}. We would also like to explore strategies to exploit monolingual data, such as using pre-trained word embeddings, back-translation \citep{sennrich2016improving,edunov2018understanding}, or other ideas from unsupervised MT \citep{artetxe2018unsupervised,lample2018phrase}. Finally, we would like to replace our language-dependent pre-processing with a language-agnostic approach like SentencePiece.\footnote{\url{https://github.com/google/sentencepiece}} Our implementation, the pre-trained encoder and the multilingual test set are freely available at \url{https://github.com/facebookresearch/LASER}. \newpage
\subsection{Effect of a perpendicular magnetic field on resonant tunnelling: experiment and theory} Landau level quantisation induces weak features in $I(V_b)$ when $V_g=0$ for $0.08$ V $<V_b<0.35$ V (see region of the red curve in Fig. \ref{fig:1split2}{\bf a} indicated by the green horizontal bar) and sharp, large amplitude, resonant features in the differential conductance, $G(V_b)=dI/dV_b$, as shown in Fig.~\ref{fig:1split2}{\bf b} for gate voltages $V_g=\pm 40$ V. By combining similar plots at intermediate gate voltages, we generate the colour maps of $G(V_b,V_g)$ shown in Figs.~\ref{fig:Gmaps}{\bf a} and {\bf c}, for $B=2$ and $4$ T, respectively. The regions of high conductance are patterned by small ``islands'' that originate from resonant tunnelling of electrons when LLs in the two graphene layers become aligned in energy (shown schematically in Fig.~\ref{fig:cones}{\bf a},{\bf b}). These islands are sharply defined close to $V_b=0$ but become broadened at high $|V_b|$, which could arise from carrier heating due to high current levels and/or increased lifetime broadening. \begin{figure*}[!] \centering \includegraphics*[width=.7\linewidth]{4} \caption{\textbf{a,b} Dirac cones showing the energy-wavevector dispersion relation, $E(\mathbf{k})$, for electrons in the bottom (red) and top (blue) graphene layers when $B=0$ and $V_b=0.28$ V \textbf{a} and $0.58$ V \textbf{b}. Rings of constant energy on the surface of the cones show the energies and semiclassical $k-$space radii of LLs with indices $n_b$ and $n_t$. The black rings in \textbf{a} and \textbf{b} highlight $n_b=1$ to $n_t=3$ and $n_b=2$ to $n_t=16$ transitions, respectively. Occupied electron states in the bottom (top) layer are shaded dark red (blue) up to the Fermi level, $\mu_{b,t}$, in that layer. {\bf c} Colour map showing tunnelling rates, $W(\nB,\nT)$, for scattering-assisted transitions (taking $\sigma=9$ nm) between LLs with indices $\nB$ and $\nT$ in the bottom and top electrodes. 
The dotted and solid curves show the loci calculated using Eq. (\ref{eq:rTminusrB}). For all panels, $B=4$ T. \label{fig:cones}} \end{figure*} We model our data (see Fig.~\ref{fig:Gmaps}{\bf b},{\bf d}) using a Bardeen transfer-Hamiltonian approach, taking the full two component form of the LL eigenstates and the following device parameters: the doping densities in the bottom and top graphene layers are $2.0 \times 10^{11}$ cm$^{-2}$ (p-type) and $3.6 \times 10^{11}$ cm$^{-2}$ (n-type) respectively, and the twist angle $\theta=1^\circ$. A fit to the $I(V_b,V_g)$ curves at $B=0$ provides accurate values of these parameters \cite{Mishchenko2014} (also see \cite{Li2009,Ponomarenko2010} and Supplementary Information, SI, for further details). Our model gives a good fit to the magneto-tunnelling data, in particular the shape and relative strength of the islands of high conductance. It enables a detailed analysis of the pattern of conductance peaks (see SI). We now focus on the underlying physics that controls the overall pattern of peak amplitudes, in particular the effect of twist angle and chirality on the tunnelling process. \subsection{Transition rates between chiral LL eigenstates} The displacement, $\Delta K$, of the Dirac cones due to the twist angle is shown schematically in Figs.~\ref{fig:cones}{\bf a},{\bf b}. It can be represented by, and is equivalent to, the effect of a strong pseudo-magnetic field applied parallel to the graphene layers \cite{Leadbeater1991}. We describe the combined effects of the misalignment and the Landau-quantising applied magnetic field by a vector potential in the Landau gauge, \begin{equation} \mathbf{A}_{b,t}=\left(l\hbar\Delta K_{x}^{\pm},-eBx+l\hbar\Delta K_{y}^{\pm},0\right)/e, \label{eq:vectpot} \end{equation} \noindent where $l=0,1$ for the b, t layers. 
In a perpendicular magnetic field, the electron wavefunctions at the $K^+$ point have the analytic forms \cite{Shon1998,Zheng2002} \begin{equation} \Psi_{\nBT,k}^{K^{+}}(\mathbf{r})\propto\exp\left(iky\right)\left(\begin{array}{c} \phi_{|\nBT|}\\ \textrm{-sgn}(\nBT)i\phi_{|\nBT|-1} \end{array}\right). \label{eq:Psiplusmt} \end{equation} \noindent The two-component chiral states comprise plane waves along $y$ and simple harmonic oscillator (SHO) waves, $\phi$, along $x$ with indices that differ by $1$. The centres of the SHO wavefunctions in the top and bottom layers are shifted by $l_{B}^{2}\Delta K_{y}^{+}$ and there is an additional plane wave factor for the top layer whose argument is $\Delta K_{x}^{+}(x-X_t)$, where $X_{t}=l_B^2(k+\Delta K^+_y)$. The Bloch states near the $K^-$ point have a similar form and make an equivalent contribution to the tunnelling matrix element, see SI. The tunnel rates between LLs, $W(\nB,\nT)$, depend on the overlap integrals of the initial and final wavefunctions summed over the $k-$states in the two layers (see SI) and therefore permit tunnelling between SHO states with a range of different $n$ indices. Fig.~\ref{fig:cones}{\bf a},{\bf b} show the energies and semiclassical trajectories (yellow rings) of the quantised Landau states. Fig.~\ref{fig:cones}\textbf{c} is a colour map of the inter-LL transition rate $W(\nB,\nT)$ at $B=4$ T (see Eq. (25) of the SI). It reveals narrow yellow regions where $W(\nB,\nT)$ is high. In other areas (black), tunnelling is suppressed. The regions of high $W(\nB,\nT)$ originate from the {\it spatial} form and relative positions of the wavefunctions in the bottom and top electrodes. Within the upper right and lower left quadrants of the colour map, transitions between equivalent bands (conduction-conduction, c-c, and valence-valence, v-v) are strongly enhanced compared to tunnelling between different bands (c-v and v-c).
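This band asymmetry can be illustrated with a short numerical sketch. The code below is a deliberately simplified one-dimensional toy model, not the full matrix element of the SI: it keeps only the two SHO components of the LL spinors for $|n|\geq1$, drops the $\Delta K_x$ plane-wave phase, the $C_n$ factors and the $k$-space sums, and takes an illustrative orbit-centre offset of one magnetic length. For a scalar scatterer the two overlap terms add when $\mathrm{sgn}(n_b)\mathrm{sgn}(n_t)=+1$ (equivalent bands) and subtract when it is $-1$ (different bands):

```python
# Toy 1-D illustration of the chirality asymmetry (illustrative only; the
# full SI matrix element also carries the Delta K_x phase and k-space sums).
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def sho(n, x, centre=0.0):
    """Normalised SHO wavefunction phi_n (lengths in units of l_B)."""
    c = [0.0] * n + [1.0]                      # coefficients selecting H_n
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * np.exp(-0.5 * (x - centre)**2) * hermval(x - centre, c)

x = np.linspace(-15.0, 15.0, 20001)
dx = x[1] - x[0]
shift = 1.0   # orbit-centre offset l_B^2 * DeltaK_y, here 1 l_B (illustrative)

def I(nb, nt):   # 1-D overlap of phi_nb (bottom layer) and phi_nt (top layer)
    return float(np.sum(sho(nb, x) * sho(nt, x, shift)) * dx)

nb, nt = 1, 3    # LL index magnitudes in the two layers
M_cc = I(nb, nt) + I(nb - 1, nt - 1)   # sgn(n_b)sgn(n_t) = +1: equivalent bands
M_cv = I(nb, nt) - I(nb - 1, nt - 1)   # sgn(n_b)sgn(n_t) = -1: different bands
print(abs(M_cc)**2, abs(M_cv)**2)      # the spinor structure makes these differ
```

A single-component (non-chiral) state would give $|I(|n_b|,|n_t|)|^2$ in both cases, so the asymmetry disappears.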
This asymmetry, found for all values of $B$, is a consequence of {\it chirality}. In contrast, when we remove the effect of chirality from our model by using pure (single component) LL wavefunctions, the tunnelling matrix elements are the same for transitions between equivalent and different bands (see SI). \subsection{Effect of chirality on tunnel current} The asymmetry in the transition rate colour map in Fig.~\ref{fig:cones}{\bf c} manifests itself in the observed pattern of conductance peak amplitudes. In certain regions of the $(V_b,V_g)$ plot, tunnelling is exclusively between equivalent bands, as shown in Fig.~\ref{fig:Gmaps}. Here, the black and white dashed curves bound the regions of $V_b-V_g$ space where tunnelling is either only c-c (upper region, $V_g>0$) or v-v (lower region, $V_g<0$), respectively. Within these regions the amplitudes of the resonant peaks are high, i.e. dark red. Increasing $V_b$ beyond the lower region induces a changeover from tunnelling between equivalent bands to a mixture of tunnelling between equivalent {\it and} different bands and is therefore accompanied by a suppression of the conductance peaks. This is a direct manifestation of electron chirality. This changeover also occurs as $V_b$ decreases across the left hand edge of the upper bounded region of Fig.~\ref{fig:Gmaps}{\bf a}-{\bf d}. The effect of chirality on the peak amplitudes in these regions is seen more clearly in the enlarged lower region of the $G(V_b,V_g)$ maps at $B=2$ T shown in Figs. \ref{fig:Gmapscoff}{\bf a}-{\bf c}. In both our experiment, {\bf a}, and calculations, {\bf b}, the conductance peak amplitudes are larger within the bounded region in the lower left-hand side of the plot, labelled L in Fig.~\ref{fig:Gmapscoff}{\bf d}, where v-v tunnelling dominates and smaller in the bounded region in the lower right-hand side of the plot where tunnelling is a mixture of v-v and v-c transitions (region labelled R in Fig.~\ref{fig:Gmapscoff}{\bf d}). 
For comparison, in Fig.~\ref{fig:Gmapscoff}{\bf c} we show $G(V_b,V_g)$ calculated when chirality is ``switched off'', i.e. with each eigenstate represented by a single SHO wavefunction with no pseudospin component (see SI). In contrast to the chiral theory and experimental data, the conductance peaks for the non-chiral calculations have similar amplitudes in regions L (v-v) and R (v-v and v-c). To quantify the effect of chirality on the tunnel current, we calculate the ratio of the mean conductance in region L to that in region R, $\langle G \rangle_{L}/\langle G \rangle_{R}$ (see Fig.~\ref{fig:Gmapscoff}{\bf d}). In the bar chart in Fig.~\ref{fig:Gmapscoff}{\bf e} we show $\langle G \rangle_{L}/\langle G \rangle_{R}$ when $B=0$, 2 and 4 T. For each field value, $\langle G \rangle_{L}/\langle G \rangle_{R}$ for the measured data (red) and the chiral calculations (yellow) are similar to each other. In contrast $\langle G \rangle_{L}/\langle G \rangle_{R}$ is significantly smaller for the non-chiral calculations (blue). In addition, with increasing $B$ the difference between the chiral and non-chiral results becomes larger: at higher $B$ there are fewer LL transitions within regions L and R and, for those transitions that do occur, the difference between the chiral and non-chiral conductance is more pronounced. Hence, the measured dependence of the conductance peak amplitudes on $V_g$, $V_b$, and $B$ demonstrates the chiral nature of the electrons and the associated asymmetry in the tunnelling rates (see Fig.~\ref{fig:cones}{\bf c}). \begin{figure}[t!] \centering \includegraphics*[width=1.\linewidth]{5} \caption{{\bf a-c} Colour maps showing $G(V_b,V_g)$ when $B=2$ T. {\bf a,b} are enlargements of the lower parts of the colour maps in Fig. \ref{fig:Gmaps}{\bf a,b} respectively. Panel {\bf a} shows experimental data ($T=4$ K), {\bf b} is calculated using the full model with chiral electrons, and {\bf c} calculated using non-chiral wavefunctions i.e.
comprising a single simple harmonic oscillator state. Colour bars in \textbf{a} and \textbf{b,c} are in $\mu$S and normalised units, respectively. Solid curves in {\bf a-c} enclose regions of the colour map where tunnelling is only v-v (labelled L in {\bf d}) or a mixture of v-v and v-c (labelled R in {\bf d}). Bar charts in {\bf e} show the ratio, $\langle G \rangle_{L}/ \langle G \rangle_{R}$, of the mean conductance in regions L and R (see {\bf d}) for the measured data (red), and calculated for chiral (yellow) and non-chiral (blue) electrons. \label{fig:Gmapscoff}} \end{figure} \subsection{Nested and figure of 8 cyclotron orbits} A semiclassical picture, in which electrons undergo cyclotron motion in both real- and $k$-space, provides further insights into the physics of tunnelling in these devices. In $k$-space, the orbital radii $\kappa_{b,t}=\sqrt{2|n_{b,t}|}/l_B$ in the two graphene layers are separated by $\mathbf{\Delta K}^\pm$. The solid and dotted curves in Fig. \ref{fig:cones}{\bf c} are loci of initial and final states along which the corresponding semiclassical orbits just touch, so that the tunnelling electrons can make a continuous classical trajectory in the $(k_x,k_y)$ plane. These loci are defined by \noindent \begin{equation} \kappa_{t}=\Delta K \pm \kappa_b. \label{eq:rTminusrB} \end{equation} \noindent Here the $-$ and $+$ signs specify, respectively, cyclotron orbits that describe a ``figure of 8'' (F-8) and nested (N) form. Examples are shown by the projected circles in the lower parts of Figs. \ref{fig:orbits}{\bf a} and {\bf b}. The spatial variation of the real (dark) and imaginary (light) components of the corresponding two-component LL wavefunctions are also shown ($x$ axis re-scaled by $1/l_B^2$ to enable comparison between the k-space trajectories and the spatial form of the SHO wavefunctions). The maxima in the wavefunction amplitude are located at the turning points of the semiclassical orbit so that, when Eq. 
(\ref{eq:rTminusrB}) is satisfied, i.e. along the solid (dotted) locus in Fig.~\ref{fig:cones}{\bf c} for N (F-8) orbits, the wavefunction overlap integral is large. The N and F-8 semiclassical orbits determine the dependence of $G$ on $B$, $V_b$ and $V_g$. At the onset of current (see red curve and arrow labelled $V_1$ in Fig.~\ref{fig:1split2}{\bf a}) the energetically aligned LLs correspond to semiclassical orbits with the F-8 form, see black rings in Fig.~\ref{fig:cones}{\bf a}. Consequently the matrix elements are large, allowing tunnel current to flow. At the resonant current peak (see red curve and arrow labelled $V_2$ in Fig.~\ref{fig:1split2}{\bf a}) the Dirac cones just touch and their intersection is a straight line. As a result, all energetically aligned LLs have high matrix elements because all the corresponding semiclassical orbits have either F-8 or N forms, see black and yellow rings in Fig.~\ref{fig:cones}{\bf b}. When $V_b$ increases beyond the current peak, many LLs that become aligned energetically have cyclotron orbits that do not overlap spatially and so the tunnelling matrix elements and current decrease. The semiclassical analysis also highlights the effect of the lattice misalignment on the electron dynamics. At the point of intersection of the $n_b=1$ and $n_t=3$ orbits, the electron ``back-scatters'' in both $k$-space and real space, making a $180^\circ$ direction change where the orbits touch in the $(k_x,k_y)$ plane, see lower part of Fig. \ref{fig:orbits}{\bf a}. This change in kinetic momentum at the intersection between the two orbits is induced by the impulse, $\hbar \mathbf{\Delta K}^{\pm}$, arising from the misorientation of the two graphene layers and the associated vector potential, which acts like an in-plane pseudo-magnetic field for tunnelling electrons, see Eq. (\ref{eq:vectpot}).
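For the stated device parameters ($\theta=1^\circ$, $B=4$ T), the touching condition of Eq. (\ref{eq:rTminusrB}) can be checked with a few lines of arithmetic; the sketch below recovers the $n_b=1$ to $n_t=3$ figure-of-8 transition highlighted in Fig.~\ref{fig:cones}{\bf a}:

```python
# Semiclassical check of kappa_t = DeltaK - kappa_b at B = 4 T, theta = 1 deg.
import math

hbar, e, a = 1.054571817e-34, 1.602176634e-19, 2.46e-10
B, theta = 4.0, math.radians(1.0)

K = 4 * math.pi / (3 * a)            # Dirac-point wavevector |K^+|
dK = 2 * K * math.sin(theta / 2)     # |DeltaK| = |(R(theta) - 1)K^+|
lB = math.sqrt(hbar / (e * B))       # magnetic length

def kappa(n):                        # semiclassical k-space orbit radius of LL n
    return math.sqrt(2 * abs(n)) / lB

kt = dK - kappa(1)                   # figure-of-8 condition, starting from n_b = 1
nt = (kt * lB) ** 2 / 2              # invert kappa(n) for the top-layer index
print(round(nt))                     # close to 3: the n_b=1 -> n_t=3 transition
```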
As shown in Fig.~\ref{fig:orbits}{\bf a}, for the F-8 orbits, the tunnelling transition reverses the wavevector in the bottom and top electrodes, $\mathbf{k}_b$ and $\mathbf{k}_t$, measured relative to the Dirac point of the two layers. In contrast, for N orbits the direction of the wavevector in the two electrodes is unchanged during tunnelling; only its magnitude changes (Fig.~\ref{fig:orbits}{\bf b}). \subsection{Cyclotron orbits and Klein tunnelling} In graphene, the chiral nature of an electron in the absence of a magnetic field can be expressed by the expectation value of the pseudospin operator with respect to the eigenstate. For the $K^\pm$ valley this expectation value is $\langle \boldsymbol\sigma \rangle = s (\pm \cos\varphi,\sin \varphi)$, where $\varphi$ is the polar direction of the wavevector. Therefore, in our semiclassical model, for N orbits in both valleys $\langle \boldsymbol \sigma \rangle$ is unchanged for equivalent band transitions but is rotated by $180^\circ$ for transitions between different bands. In contrast, for F-8 orbits $\langle \boldsymbol \sigma \rangle$ is reversed for transitions between equivalent bands and unchanged for transitions between different bands. When $\langle \boldsymbol\sigma \rangle$ is unchanged, the inter-layer tunnelling process bears an analogy with intra-layer Klein tunnelling \cite{Katsnelson2006,Kim2009,Liu2011}. The Klein paradox is predicted to occur for electrons tunnelling through a barrier in planar graphene where unity transmission is expected when the pseudospin is conserved. In our device, the tunnelling electron makes a ``quantum jump'' across the barrier; hence, the tunnelling rate can be high even if pseudospin is reversed, provided there is strong spatial overlap between the initial and final LL wavefunctions. However, as for the case of Klein tunnelling in planar graphene, the orientation of $\langle \boldsymbol \sigma \rangle$ in the initial and final states determines the tunnelling rate.
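The pseudospin expectation value quoted above can be verified directly by diagonalising the $B=0$ Dirac Hamiltonian at the $K^+$ point; a minimal sketch (the angle $\varphi$ is arbitrary):

```python
# <sigma> for the B = 0 Dirac Hamiltonian at K+: H = sigma_x kx + sigma_y ky
# (in units hbar*vF*|k| = 1). The conduction and valence eigenstates at the
# same k carry opposite pseudospin, so equivalent-band transitions conserve
# <sigma> while interband transitions rotate it by 180 degrees.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

phi = 0.6                                # polar angle of k (arbitrary choice)
kx, ky = np.cos(phi), np.sin(phi)        # unit |k|
H = kx * sx + ky * sy

E, V = np.linalg.eigh(H)                 # E[0] valence (-1), E[1] conduction (+1)

def sigma_exp(psi):
    return np.array([np.vdot(psi, sx @ psi), np.vdot(psi, sy @ psi)]).real

s_v, s_c = sigma_exp(V[:, 0]), sigma_exp(V[:, 1])
print(s_c, s_v)                          # (cos phi, sin phi) and its reverse
```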
Physically this is due to the interference between the A and B sublattices of graphene (see Eq. (13) of the SI and \cite{Liu2011}). In our experiments, resonant tunnelling is enabled by the twist of the graphene electrodes. This provides the impulse to induce the momentum and orbit centre change required for energy- and \textbf{k}-conserving tunnel transitions with high matrix elements. In particular, our data indicate that the pseudospin of the electrons is conserved for the tunnelling transitions at the current peak. \begin{figure}[!t] \centering \includegraphics*[width=1.\linewidth]{6} \caption{\textbf{a,b} Upper: vertical (horizontal) curves show the real (imaginary) parts of the real space electron wavefunction in the bottom (red curves) and top (blue curves) graphene electrodes respectively with $B=4$ T and \textbf{a} $n_b=1$ (red) and $n_t=3$ (blue) and \textbf{b} $n_b=2$ (red) and $n_t=16$ (blue). The $x$ axis is scaled by $l_B^2$ for comparison with lower plots: circles show corresponding figure of 8 and nested cyclotron orbits in $k$- space ($k_x,k_y$ axes inset and direction of motion marked by arrows) with orbit centres separated by $\Delta K$. The vertical black lines connecting upper and lower parts of the figure show the classical turning points. \label{fig:orbits}} \end{figure} \subsection{Conclusions} We have investigated how LL quantisation of Dirac-Weyl Fermions reveals the effects of chirality on the resonant tunnelling transitions in graphene-hBN-graphene heterostructures. Semiclassically, when the electron tunnelling trajectory takes the form of off-centred ``nested'' or ``figure of 8'' transitions, the pseudospin is either unchanged or undergoes a pseudospin-flip transition of 180$^\circ$. At the resonant peak of our measured and calculated current-voltage curves the pseudospin is conserved for all transitions, in analogy with Klein tunnelling in single-layer graphene. 
Analysis of the experimental data confirms that the Dirac-Weyl model for the electronic states of electrons in graphene provides an accurate description of the tunnel current flowing perpendicular to the plane of the barrier in these stacked van der Waals heterostructures, so-called ``vertical'' transport. Our results demonstrate that the chirality provides an important contribution to the characteristics of graphene-based tunnelling devices, and should therefore be taken into account when designing future electronic components based on materials with Dirac-like energy spectra. \subsection{Acknowledgments} This work was supported by the EU Graphene Flagship Programme and ERC Synergy Grant, Hetero2D. M.T.G. acknowledges The Leverhulme Trust for support of an Early Career Fellowship. V.I.F. acknowledges support of a Royal Society Wolfson Research Merit Award. \newpage \onecolumngrid \section{Supplementary Information} \section{Model} \label{sec:model} The graphene lattices in our device are slightly misorientated by an angle $\theta\approx1^{\circ}$ which results in a relative displacement in the positions of the Dirac points in $K$ space, $\Delta\mathbf{K}^\pm=(R(\theta)-1)\textbf{K}^{\pm}$, where $R(\theta)$ is the rotation matrix. The label $\pm$ corresponds to the two inequivalent $K$ points with positions given by $\textbf{K}^{\pm}=\pm\left(4\pi/3a,0\right)$, where $a=2.46$ \AA\ is the lattice constant of graphene. The Dirac points in the bottom electrode are at $\mathbf{K}_b^{\pm}$ and in the top electrode $\mathbf{K}_t^{\pm}+\Delta\mathbf{K}^{\pm}$. The relative shift of the Dirac points is analogous to an in-plane magnetic field.
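For the quoted twist angle this displacement is readily evaluated; a minimal sketch using the definitions above:

```python
# Displacement of the Dirac points, DeltaK^+ = (R(theta) - 1)K^+, for theta = 1 deg.
import numpy as np

a = 2.46e-10                               # graphene lattice constant (m)
theta = np.radians(1.0)
K = np.array([4 * np.pi / (3 * a), 0.0])   # K^+ point

R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dK = (R - np.eye(2)) @ K
print(np.linalg.norm(dK))   # ~3e8 m^-1, i.e. |DeltaK| = 2|K^+| sin(theta/2)
```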
Therefore, we describe the displacement of the $K$ points using the following vector potential for electrons in the bottom and top layers, which also includes the effect of a magnetic field, $\mathbf{B}$, that is applied perpendicular to the graphene layers, \begin{equation} \mathbf{A}_{b,t}=\left(l\hbar\Delta K_{x}^{\pm},-eBx+l\hbar\Delta K_{y}^{\pm},0\right)/e, \label{eq:vectpot} \end{equation} \noindent where $l=0,1$ in the bottom (b) and top (t) layers respectively. The electron momentum takes the form $\mathbf{p}\rightarrow\mathbf{p}+e\mathbf{A}$, so that the effective mass Hamiltonian for Dirac electrons in graphene becomes \begin{equation} H^{\pm}_{b,t}=v_{F}\left(\begin{array}{cc} 0 & \pm\left(p_{x}+eA_{x,b,t}\right)-i\left(p_{y}+eA_{y,b,t}\right)\\ \pm\left(p_{x}+eA_{x,b,t}\right)+i\left(p_{y}+eA_{y,b,t}\right) & 0 \end{array}\right), \end{equation} where $v_{F}=10^{6}$ ms$^{-1}$. The Hamiltonian has the form of a quantum harmonic oscillator so that the electron has discrete Landau energy levels given by \begin{equation} E_{n_{b,t}}={\rm sgn}(n_{b,t})\sqrt{2|n_{b,t}|eB\hbar v_{F}^{2}}, \label{eq:LLEn} \end{equation} where $n_{b,t}$ is an integer that labels the energy levels in the two electrodes, positive for electrons in the conduction band and negative in the valence band and \begin{equation} \textrm{sgn}(n)=\begin{cases} 1 & (n>0)\\ 0 & (n=0)\\ -1 & (n<0).
\end{cases} \end{equation} The electron wavefunctions at the two Dirac points are therefore \begin{equation} \Psi_{n_{b,t},k_{b,t}}^{K^{+}}(\mathbf{r})=\frac{C_{n_{b,t}}}{\sqrt{L}}\exp\left(ik_{b,t}y\right)\left(\begin{array}{c} \phi_{|n_{b,t}|}\\ \textrm{-sgn}(n_{b,t})i\phi_{|n_{b,t}|-1} \end{array}\right) \label{eq:Psiplus} \end{equation} and \begin{equation} \Psi_{n_{b,t},k_{b,t}}^{K^{-}}(\mathbf{r})=\frac{C_{n_{b,t}}}{\sqrt{L}}\exp\left(ik_{b,t}y\right)\left(\begin{array}{c} \textrm{sgn}(n_{b,t})i\phi_{|n_{b,t}|-1}\\ \phi_{|n_{b,t}|} \end{array}\right), \label{eq:Psiminus} \end{equation} where \begin{equation} C_{n}=\begin{cases} 1 & (n=0)\\ 1/\sqrt{2} & (n\neq0) \end{cases} \end{equation} with \begin{equation} \phi_{|n_b|}=\frac{1}{\sqrt{2^{|n_b|}|n_b|!\sqrt{\pi}l_B}}\exp\left[-\frac{1}{2l_B^{2}}\left(x-X_b\right)^{2}\right]H_{|n_b|}\left(\frac{1}{l_B}(x-X_b)\right), \label{eq:SHOb} \end{equation} and \begin{equation} \phi_{|n_t|}=\frac{1}{\sqrt{2^{|n_t|}|n_t|!\sqrt{\pi}l_B}}\exp\left[-\frac{1}{2l_B^{2}}\left(x-X_t\right)^{2}-i\Delta K^\pm_{x}(x-X_t)\right]H_{|n_t|}\left(\frac{1}{l_B}(x-X_t)\right). \label{eq:SHOt} \end{equation} Here $l_B=\sqrt{\hbar/eB}$ and $H_{n}$ is the $n$th-order Hermite polynomial. The orbit centres in the bottom and top electrodes are given by $X_b=l_B^{2}k_{b}$ and $X_t=l_B^{2}(k_{t}+\Delta K_{y}^{\pm})$ respectively. The effect of the misorientation of the two graphene sheets is to shift the relative position of their orbit centres by $l_B^{2}\Delta K_{y}^{\pm}$ and introduce a phase difference of $\Delta K_{x}^{\pm}(x-X_t)$. \subsection{Matrix element} We assume that electrons can undergo elastic scattering which we describe using a Gaussian scattering potential: \begin{equation} V_{S}(x,y)=V_{0}e^{-x{}^{2}/2\sigma^{2}-y{}^{2}/2\sigma^{2}}, \end{equation} where $\sigma\approx10$ nm is the scattering length scale.
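For reference, the Landau fan of Eq. (\ref{eq:LLEn}) can be evaluated numerically for the fields used in the experiment; a short sketch at $B=4$ T:

```python
# Landau-level fan: E_n = sgn(n) * sqrt(2|n| e B hbar vF^2), evaluated in meV.
import math

hbar, e, vF, B = 1.054571817e-34, 1.602176634e-19, 1.0e6, 4.0

def E_meV(n):
    if n == 0:
        return 0.0
    E = math.sqrt(2 * abs(n) * e * B * hbar * vF**2)   # magnitude in joules
    return math.copysign(E, n) / e * 1e3               # signed, in meV

print([round(E_meV(n), 1) for n in (-2, -1, 0, 1, 2)])
# n = 1 lies close to 73 meV above the Dirac point at 4 T;
# the spacing shrinks as sqrt(|n|), giving the characteristic graphene fan.
```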
The matrix element for tunnelling between the bottom and top electrodes is given by \begin{equation} M_{bt}=\int_{V}dV\Psi_{t}^{*}(\mathbf{r},z)V_{S}\Psi_{b}(\mathbf{r},z). \label{eq:mateleform} \end{equation} First we consider the integral in the $z$ direction. We assume that the electron wavefunctions decay exponentially into the barrier regions so that the integral is a constant, equal to \begin{equation} \Xi=\frac{V_0}{D}e^{-\kappa d} \end{equation} where $d$ is the barrier width. We assume $\kappa$ to be independent of energy to facilitate analysis of the current. For full analysis of different $V_b$ dependent models for $\kappa$, see Ref. \cite{Britnell2012a}. In the basis of Bloch wavefunctions \cite{Feenstra2012,Mishchenko2014}, the matrix element is given by \begin{align} M_{bt}(\nB,\nT,\kB,\kT)=\frac{1}{L}C_{\nB}C_{\nT}\Xi I_{y}(\kB,\kT)\left[I_{x}(|\nB|,|\nT|,\kB,\kT)\mp i\textrm{sgn}(\nB)I_{x}(|\nB|-1,|\nT|,\kB,\kT)\right.\nonumber \\ \left.\pm i \textrm{sgn}(\nT)I_{x}(|\nB|,|\nT|-1,\kB,\kT)+\textrm{sgn}(\nB)\textrm{sgn}(\nT)I_{x}(|\nB|-1,|\nT|-1,\kB,\kT)\right] \label{eq:matrixele_allcombs} \end{align} where $I_{x}$ and $I_{y}$ are the overlap integrals of the wavefunctions along the $x$ and $y$ axes respectively. On first inspection, Eq. (\ref{eq:matrixele_allcombs}) appears to reveal that the matrix element is different for tunnelling between $K^+$ valleys (upper sign) compared to that between $K^-$ valleys (lower sign). However, $\Delta\mathbf{K}^{+}=-\Delta\mathbf{K}^{-}$ and, consequently, it can be shown that the matrix elements for transitions between the same valleys are equivalent. Our matrix element does not explicitly include the cell-periodic parts of the Bloch functions, $u^{\alpha,\beta} (\mathbf{r})$, where $\alpha$ and $\beta$ label the two atoms in graphene's unit cell.
This is because for small relative rotations of the two layers, the spatial overlap integrals of the cell-periodic parts of the wavefunction $\int dS u^{*\alpha,\beta} (R(\theta )\mathbf{r}) u^{\alpha,\beta} (\mathbf{r})$ are approximately equivalent for all combinations of $\alpha$ and $\beta$, and therefore will only have a small quantitative effect on the matrix element \cite{Feenstra2012}. \subsubsection{Overlap integrals for scattering assisted tunnelling} The overlap integrals $I_{y}$ and $I_{x}$ can be shown \cite{Drallos1986} to have the following form: \begin{equation} I_{y}=\sqrt{2\pi}\sigma\exp\left(-\Delta k^{2}\sigma^{2}/2\right), \end{equation} within which $\Delta k=k_{b}-k_{t}$. The overlap integral in the $x$ direction, $I_{x}$, is given by: \begin{align} I_{x}\left(\nB,\nT,\kB,\kT\right)=&\frac{1}{\zeta l_B}P_{bt}\left(\nB,\nT,\kB,\kT\right)\sum_{j=0}^{\min\left(n_{b},n_{t}\right)}j!\left(\begin{array}{c} \nB\\ j \end{array}\right)\left(\begin{array}{c} \nT\\ j \end{array}\right)\left(1-a^{2}\right)^{(\nB+\nT)/2-j}\nonumber\\ &\times\left(2a^{2}\right)^{j}H_{\nB-j}\left[\frac{a\Upsilon-l_Bk_{b}}{\left(1-a^{2}\right)^{1/2}}\right]H_{\nT-j}\left[\frac{a\Upsilon-l_B\left(k_{t}+\Delta K_{y}^{+}\right)}{\left(1-a^{2}\right)^{1/2}}\right] \label{eq:Ixarbn} \end{align} where $a=1/\zeta l_B$, \begin{equation} \zeta^{2}=\left(\frac{1}{l_B^{2}}+\frac{1}{2\sigma^{2}}\right), \end{equation} \begin{equation} P_{bt}\left(\nB,\nT,\kB,\kT\right)=\frac{\exp\left[\vartheta\left(k_{b},k_{t}\right)\right]}{\sqrt{2^{\nT}\nT!2^{\nB}\nB!}}, \end{equation} within which \begin{equation} \vartheta=\Upsilon^{2}-\frac{l_B^{2}}{2}\left(\left(k_{t}+\Delta K_{y}^{\pm}\right)^{2}+k_{b}^{2}\right)-i\Delta K_{x}^{\pm}\left(k_{t}+\Delta K_{y}^{\pm}\right), \end{equation} and \begin{equation} \Upsilon=\frac{1}{2\zeta}\left(k_{t}+\Delta K_{y}^{\pm}+k_{b}+i\Delta K_{x}^{\pm}\right).
\end{equation} \subsection{Current} The current between the layers is given by the sum over states in the top and bottom layers: \begin{equation} I=g_{V}\frac{4\pi e}{\hbar}\sum_{bt}|M_{bt}|^{2}\left[f_{b}(E_{b})-f_{t}(E_{t})\right]\delta(E_{b}-E_{t}),\label{eq:currentsum} \end{equation} where the Fermi functions for the bottom and top layers are given, respectively, by \begin{equation} f_{b}(E_{b})=\frac{1}{1+e^{(E_{b}-\mu_{b})/k_{B}T}} \end{equation} and \begin{equation} f_{t}(E_{t})=\frac{1}{1+e^{(E_{t}-\mu_{t})/k_{B}T}}, \end{equation} where $k_B T$ is the thermal energy. We assume that the Landau levels (LLs) are broadened in energy by $\Gamma_{b,t}$ in the bottom and top electrodes respectively due to electron-electron interactions, which we model with a set of Gaussian functions (to ensure convergence at low magnetic fields) centred on the energies of the LLs $E_n$ (see equation \ref{eq:LLEn}) \cite{Ponomarenko2010} \begin{equation} \Gamma\left(E\right)=\sum_{n=-\infty}^{\infty}\frac{1}{\sqrt{2\pi}\Gamma_{b,t}}\exp\left(-\frac{\left(E-E_{n}\right)^{2}}{2\Gamma_{b,t}^{2}}\right). \end{equation} The density of states is then given by $D(E)=(2/\pi l_B^{2})\Gamma(E)$. We convert the sum over $k$ states in equation (\ref{eq:currentsum}) to an integral and find that the contribution to the current from transitions between LLs $\nB$ and $\nT$ is given by \begin{equation} W(\nB,\nT)=\frac{2L^{4}}{\pi^{2}l_B^{4}}\int\int|M_{bt}|^{2}d\kB d\kT,\label{eq:LLtunnellingrate} \end{equation} where $L$ is the device length, so that after using the $\delta$ function to integrate out $E_t$, we find that the current can be expressed as: \begin{equation} I=g_{V}\frac{4\pi e}{\hbar}\int W(\nB,\nT)\left[f_{b}(E_{b})-f_{t}(E_{t})\right]D_b(E_{b})D_t(E_b-\phi) dE_{b}.\label{eq:currentsum-1} \end{equation} We model the electrostatics, i.e.
the values of $\mu_{b,t}$ and the electrostatic potential energy difference $\phi$ between the graphene layers, by solving the following equation: \begin{equation} \phi + \mu_t(\rho_t,\Gamma_t) - \mu_b(\rho_b,\Gamma_b)+eV_b=0 \end{equation} where $d=1.4$ nm is the barrier width, $\rho_{b,t}$ is the charge density on the bottom and top electrodes and the function $\mu(\rho,\Gamma)$ is found using the density of states, $D(E)$ \cite{Britnell2012a}. From Gauss's law, and ensuring charge neutrality, we obtain the following relationships between $V_b$, $V_g$, $\phi$ and $n_{b,t}$: \begin{align} \epsilon \left( F_b - F_g \right)= \rho_b \\ - \epsilon F_b = \rho_t, \end{align} where $F_b=\phi/ed$ and $F_g=(e V_g-\mu_b)/e D_g$ are the fields in the tunnel barrier and gate-oxide barrier respectively and $D_g=300$ nm is the oxide thickness. \section{Analysis of conductance peaks} Fig. \ref{fig:Gmaps} shows colour maps of $G(V_b,V_g)=dI/dV_b$ measured ({\bf a},{\bf c}) and calculated ({\bf b}, {\bf d}) when $B=2$ T ({\bf a},{\bf b}) and $B=4$ T ({\bf c},{\bf d}). The parameters used to model the measured data are $\sigma=9$ nm and the LL broadening in the bottom and top graphene electrodes, $\Gamma_{b}$ and $\Gamma_{t}$, is set at 4 meV and 4 meV (6 meV and 8 meV) respectively when $B$ = 2 T (4 T). \begin{figure}[!t] \centering \includegraphics*[width=.8\linewidth]{suppmat_Gmaps04_includecombs} \caption{Colour maps showing $G(V_b,V_g)$ measured (\textbf{a}) and calculated (\textbf{b}) when $B = 2$ T and when $B = 4$ T (\textbf{c} measured, \textbf{d} calculated). Colour scales for \textbf{a,c} (\textbf{b,d}) are in $\mu$S (arbitrary units). Filled black circles show loci along which the chemical potential in the top and bottom layer, respectively, intersects with the Dirac point in that layer.
Lower panels {\bf A}-{\bf D} show the density of states, $D$, calculated versus energy, $E$, in the bottom (red) and top (blue) graphene electrodes and correspond to the features labelled {\bf A}-{\bf D} in colour maps \textbf{c} and \textbf{d}. Horizontal red and blue dashed lines show the positions of the chemical potentials in the bottom and top electrodes. \label{fig:Gmaps}} \end{figure} In this section we explain in more detail the origin of the conductance peaks observed in $G(V_b,V_g)$. The filled black circles in Fig. \ref{fig:Gmaps} show the calculated $(V_b,V_g)$ loci for which the chemical potential in the top layer intersects with the zeroth LL in that layer, see inset {\bf A} (filled circles running bottom left to top right), and for which the chemical potential in the bottom layer coincides with the zeroth LL in the bottom layer, see inset {\bf B} (filled circles running top left to bottom right). Therefore, the local conductance peaks that lie along the X-shaped loci correspond to the alignment of the chemical potential in one graphene layer with the peak in the density of states for the LL at the Dirac point. Fig. \ref{fig:Gmaps} shows that in both our experiments and theory, when $V_g\approx5$ V and $V_b \lesssim 0.2$ V, increasing $V_b$ initially has little effect on $G$. But when $V_b\approx\pm0.2$ V, there is a sharp increase in conductance. When $V_b$ increases beyond $\approx0.5$ V, $G$ decreases, becoming negative after the peak in $I(V_b)$. The regions of high $G$ in Fig. \ref{fig:Gmaps} form stripe patterns with similar shapes to the loci marked by the filled circles. This is because they also originate from alignment of the chemical potential and LLs when $\mu_{b,t}=E_{n_{b,t}}$ where, in contrast to the loci marked by the filled circles, $n_{b,t}\neq0$. The crossing of these loci gives rise to more islands of high $G$, for example those labelled ``B-D'' in Fig. \ref{fig:Gmaps} {\bf c},{\bf d}.
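A useful cross-check on these alignments is the Landau-level filling factor. Each LL in graphene holds $4eB/h$ electrons per unit area (spin and valley degeneracy); with the top-layer doping density quoted above ($3.6\times10^{11}$ cm$^{-2}$), only a few LLs are occupied at these fields, so the chemical potential sits among low-index levels, consistent with the low-index alignments in insets {\bf A}-{\bf D}:

```python
# Filling factor nu = rho h / (eB) for the top-layer doping density,
# using the four-fold (spin x valley) LL degeneracy of graphene.
h, e = 6.62607015e-34, 1.602176634e-19
rho_t = 3.6e11 * 1e4            # top-layer density, converted to m^-2

for B in (2.0, 4.0):
    nu = rho_t * h / (e * B)
    print(B, round(nu, 2))      # roughly 7.4 at 2 T and 3.7 at 4 T
```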
\begin{figure}[!t] \centering \includegraphics*[width=.8\linewidth]{current_fits_playwithdoping_zoomed_pub_01} \caption{Colour maps showing comparison of {\bf a} measured and {\bf b} modelled $G(V_g,V_b)$ for $V_g<0$ and $V_b<0.2$ V when $B=4$ T. Theory curves are calculated with $\Gamma_{b}=3$ meV and $\Gamma_{t}=5$ meV, $\sigma=9$ nm, and misalignment angle = $1^\circ$. Circled features and corresponding inset plots show the alignments of LLs in bottom (red) and top (blue) electrodes with the chemical potentials indicated by the top of the block colour. \label{fig:5}} \end{figure} When $B=4$ T, we find good qualitative agreement between the measured and calculated $G(V_b,V_g)$ colour maps. Along the loci marked by filled circles in Fig. \ref{fig:Gmaps}{\bf c} both maps reveal a series of conductance maxima in similar positions, for example those labelled ``B-D'' in Fig. \ref{fig:Gmaps}{\bf c} and {\bf d}. As explained above, along the loci, the maxima occur as $\mu_t$ sweeps through the LL spectra in the top and bottom layers. The maxima labelled ``B'' and ``C'' occur when $\mu_t$ coincides with $n_t=-1$ and $n_t=-2$ LLs (see insets labelled ``B'' and ``C''). The strength of the maxima depends on the alignment of the LLs. For example, the conductance maximum labelled ``D'' is stronger than ``B'', because at ``B'' the LL spectra in the top and bottom layers are aligned and tunnelling occurs from $n_b=0$ and $-1$ to $n_t=0$ and $-1$, which have low matrix elements (see main text). By contrast, for case ``D'' the matrix element for tunnelling between the energetically aligned LLs $n_b=3$ and $n_t=1$ is high. \subsection{Conductance peaks in lower island} We now analyse the features that appear in the $G(V_b,V_g)$ colour maps at low bias, $|V_b|\lesssim0.2$ V, when $B=2$ T and 4 T. These features occur whenever the chemical potential in either the bottom or top layer is aligned energetically with one of the LLs in the top or bottom layer respectively.
The resulting local maxima in $G(V_b,V_g)$ occur at similar positions in the measured (Figs. \ref{fig:Gmaps}{\bf a},{\bf c}) and calculated (Figs. \ref{fig:Gmaps}{\bf b},{\bf d}) colour maps. However, when $B=2$ T, the theoretical results reveal many more features than the measured data. This is because our calculations assume a constant LL width and therefore omit the increased LL broadening that could occur at high $V_b$ in the actual device, e.g. due to electron heating. However, the general features of the measured and calculated colour maps are similar, in particular the positions of the resonant peaks and the width and shape of the X-shaped low $G$ region. In Figs. \ref{fig:5}{\bf a} and {\bf b} we show an enlargement of Fig. \ref{fig:Gmaps}{\bf c} and {\bf d} focusing on the series of conductance peaks found for low $V_b$ and negative $V_g$ when $B=4$ T. To model the data at low $V_b$, where electron heating is low, we use a narrower broadening ($\Gamma_b=3$ meV and $\Gamma_t=5$ meV) than that used for the full range of bias voltage. There is very good correspondence between the positions of the peaks in the modelled and measured data. As for the local conductance peaks considered previously, the peaks arise from a series of alignments of LLs of different index and the alignments of the chemical potentials. To aid understanding of these features we highlight two series of resonant peaks labelled, respectively, ``A-F'' and ``i-v'', showing the alignment of the LLs and the position of the chemical potentials in the graphene layers. \section{Model for non-chiral electrons} \begin{figure}[!t] \centering \includegraphics*[width=.75\linewidth]{chiral_nonchiral_mateles_suppmat} \caption{Colour map showing normalised tunnelling rates, $W(\nB,\nT)$ (see Eq.
\ref{eq:LLtunnellingrate}), for scattering-assisted transitions (taking $\sigma=9$ nm) between LLs with indices $\nB$ and $\nT$ in the bottom and top electrodes calculated using non-chiral {\bf a} and chiral {\bf b} wavefunctions. \label{fig:matelenonchiral}} \end{figure} To understand the effect of pseudospin on our conductance calculations we derive a model for non-chiral electrons. The model has the same structure as that presented for chiral electrons but with the electron described by a single component wavefunction of the form \begin{equation} \Psi_{n_{b,t},k_{b,t}}^{K^{+}}(\mathbf{r})=\frac{1}{\sqrt{L}}\exp\left(ik_{b,t}y\right)\phi_{|n_{b,t}|} \label{eq:Psinonchiral} \end{equation} where the variables have the same form as those given in section \ref{sec:model}. Although this form of the wavefunction does not correspond to a physical system (it is similar to LL states in III-V materials but with massless fermions) it allows us to distinguish clearly the effect of chirality on the measured and calculated conductance. In Fig. \ref{fig:matelenonchiral} we show $W(\nB,\nT)$ calculated for non-chiral {\bf a} and chiral {\bf b} electrons (see Fig. 3{\bf c} of the main text). The figure reveals that for non-chiral electrons, {\bf a}, transitions between equivalent (c-c and v-v) and different bands (v-c and c-v) have the same magnitude; by contrast, for chiral electrons, c-c and v-v transitions are strongly enhanced compared to v-c and c-v transitions. Fig. \ref{fig:chiral} compares our conductance calculation for chiral electrons (see section \ref{sec:model}) with that for non-chiral electrons. When $B=2$ T and 4 T within the upper (lower) region, above (below) the dotted and dot-dashed yellow curves, the measurements (Figs \ref{fig:chiral}{\bf a},{\bf d}) and full calculation (Figs.
\ref{fig:chiral}{\bf c},{\bf f}) reveal that the peak amplitudes are largest where c-c (v-v) transitions dominate (within the region bounded by the black and white dashed curves). Increasing or decreasing $V_b$ outside of this region suppresses the conductance peaks. By contrast, in the calculations using non-chiral wavefunctions (Eq. \ref{eq:Psinonchiral}), the conductance peaks in the lower and upper regions have a constant amplitude over the whole range of $V_b$ (see Fig. \ref{fig:chiral}{\bf b},{\bf e}). This is because the matrix element in the chiral calculations depends on the initial and final band of the tunnelling electron and is enhanced for transitions between equivalent bands compared to those between different bands. However, in our non-chiral model, the matrix element is equal for transitions between states with the same LL index magnitude, whether in alike or different bands, and therefore the conductance peak amplitudes are constant across the lower and upper regions. \begin{figure}[!h] \centering \includegraphics*[width=.9\linewidth]{Gmaps03_fullcolouron_off} \caption{Colour maps showing $G(V_b,V_g)$ maps when $B=2$ T ({\bf a}-{\bf c}) and $B=4$ T ({\bf d}-{\bf f}) measured {\bf a},{\bf d} and calculated for non-chiral (\textbf{b},{\bf e}) and chiral (\textbf{c},{\bf f}) electrons. Colour scales are in $\mu$S\xspace for panels {\bf a} and {\bf d} and in arbitrary units for panels {\bf b},{\bf c},{\bf e} and {\bf f}. Black and white dashed curves enclose islands within which \emph{only} conduction-conduction ($V_g>0$) or valence-valence ($V_g<0$) tunnelling occurs. The filled black circles show loci along which the chemical potential in the top and bottom layer, respectively, intersects the Dirac point in the corresponding layer. \label{fig:chiral}} \end{figure} A changeover between regions of high and low conductance can also be seen in our recent studies of the $G(V_b,V_g)$ characteristics of similar tunnel structures when $B$ = 0.
However, in the present work, the changeover is more pronounced because the quantizing magnetic field strongly reduces the number of distinct tunnel-coupled states that contribute to the current flow; hence the effect of chirality is strongly magnetic-field dependent. Consequently, the conductance is more sensitive to the matrix elements for tunnelling between each of these states. \subsection{Model for non-chiral electrons in zero field} In zero field we calculate the current for chiral electrons using the model presented in \cite{Feenstra2012,Mishchenko2014}. For our calculation of the current for non-chiral electrons, we describe the electrons by plane wave states with the form \begin{equation} \Psi_{\mathbf{k}_{b,t}}^{K^{+}}(\mathbf{r})=\frac{1}{\sqrt{A}}\exp\left(i\mathbf{k}_{b,t}\cdot\mathbf{r}\right) \label{eq:Psinonchiralzerofield} \end{equation} where $\mathbf{k}_{b,t}=(k_x,k_y)$ are the wavevectors in the bottom and top electrodes. Therefore the matrix element can be found using Eq. \ref{eq:mateleform}: \begin{equation} M_{bt}=\frac{\Xi}{A}\exp(-\sigma^2|\mathbf{k_b}-\mathbf{k_t}-\Delta \mathbf{K^\pm}|^2/2). \end{equation} We then calculate the current by using this form of the matrix element in Eq. (\ref{eq:currentsum}) summing over $k$-states in zero field. \newpage
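The Gaussian dependence of $M_{bt}$ on wavevector mismatch can be illustrated numerically. Below is a minimal sketch, using the $\sigma=9$ nm value quoted above but hypothetical mismatch values, of how the factor $\exp(-\sigma^2|\Delta k|^2/2)$ suppresses tunnelling as the momentum mismatch grows; the prefactor $\Xi/A$ is omitted, so only the relative suppression is shown:

```python
import math

def gaussian_matrix_element_factor(delta_k, sigma):
    """Momentum-mismatch suppression factor exp(-sigma^2 |dk|^2 / 2)
    appearing in M_bt (delta_k in nm^-1, sigma in nm)."""
    return math.exp(-(sigma ** 2) * (delta_k ** 2) / 2.0)

sigma = 9.0  # nm, the value quoted in the text
# The mismatch values below are hypothetical, for illustration only.
aligned = gaussian_matrix_element_factor(0.0, sigma)     # perfectly aligned
mismatched = gaussian_matrix_element_factor(0.5, sigma)  # 0.5 nm^-1 mismatch
print(aligned, mismatched)
```

Even a modest mismatch of 0.5 nm$^{-1}$ reduces the factor by several orders of magnitude, consistent with tunnelling being dominated by nearly momentum-conserving transitions.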
\section{Introduction} In regression analysis, sufficient dimension reduction (SDR) provides a useful statistical framework to analyze a high-dimensional dataset without losing any information. It finds the fewest linear combinations of predictors that capture the full regression relationship. Let $Y$ be a univariate response and $X=(x_1,\ldots,x_p)^{\top}$ be a $p\times 1$ predictor vector; SDR aims to find a $p\times d$ matrix $\bm{\beta}$ such that \begin{equation}\label{eqn1.1} Y \perp\!\!\!\perp X \,|\, \bm{\beta}^{\top} X, \tag{1.1} \end{equation} where $\perp\!\!\!\perp$ denotes statistical independence. The column space of $\bm{\beta}$ satisfying \eqref{eqn1.1} is called a dimension reduction subspace. Under mild conditions \citep{cook1996graphics,yin2008successive}, the intersection of all the dimension reduction subspaces exists and is unique. In this case, if the intersection itself is also a dimension reduction subspace, we call it the central subspace \citep{cook1994interpretation,cook1996graphics} for the regression of $Y$ on $X$ and denote it by $\mathcal{S}_{Y|X}$. Note that the dimension of $\mathcal{S}_{Y|X}$, denoted by $\mbox{dim}(\mathcal{S}_{Y|X})$, is usually much smaller than the original predictor dimension $p$. Thus, we reduce the dimensionality of the predictor space. The primary interest of SDR is to find this central subspace $\mathcal{S}_{Y|X}$.
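To make \eqref{eqn1.1} concrete, here is a minimal simulation sketch in which $Y$ depends on $X$ only through a single index $\bm{\beta}^{\top}X$; the basis $\bm{\beta}$, the link function, and the noise level below are hypothetical, chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, d = 500, 10, 1

# Hypothetical basis of the central subspace: Y depends on X only
# through the single linear index beta^T X.
beta = np.zeros((p, d))
beta[0, 0], beta[1, 0] = 1.0, -1.0

X = rng.standard_normal((n, p))
index = X @ beta                                  # n x d sufficient predictor
Y = np.sin(index[:, 0]) + 0.1 * rng.standard_normal(n)

# Conditional on beta^T X, the remaining p - d directions of X carry
# no further information about Y, which is exactly condition (1.1).
print(index.shape)
```

Here $d=1$, so the $p$-dimensional regression collapses to a one-dimensional one once the index is known.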
Since the introduction of sliced inverse regression \citep[SIR;][]{li1991sliced} and sliced average variance estimation \citep[SAVE;][]{cook1991sliced}, many methods have been proposed for estimating the basis of $\mathcal{S}_{Y|X}$, including inverse regression \citep[IR;][]{cook2005sufficient}, directional regression \citep[DR;][]{li2007directional}, the minimum average variance estimation method \citep[MAVE;][]{xia2002adaptive}, sliced regression \citep[SR;][]{wang2008sliced}, the ensemble approach \citep{yin2011sufficient}, the Fourier transform approach \citep{zhu2006fourier}, the integral transform method \citep{zeng2010integral}, the Kullback-Leibler distance based estimator \citep{yin2005direction}, the likelihood based method \citep{cook2009likelihood}, and the semiparametric approach \citep{ma2012semiparametric}, etc. All of the aforementioned dimension reduction methods require certain conditions on the predictors or complicated smoothing techniques. In reality, these conditions are not easy to verify and the results of these methods may be misleading if the conditions are violated. Recently, \citet{sheng2013direction,sheng2016sufficient} proposed a method using distance covariance \citep[DCOV;][]{szekely2007measuring,szekely2009brownian} for estimating the central subspace $\mathcal{S}_{Y|X}$. Distance covariance is an elegant measure that quantifies the dependence strength between two random vectors. Consequently, the DCOV-based SDR method requires only mild conditions on the predictors and does not require any link function or nonparametric estimation. It can also be easily extended to handle regression with multivariate responses. The most challenging part of the DCOV-based SDR methods is that they involve solving a nonconvex and nonsmooth optimization problem over the Stiefel manifold. Existing work \citep[e.g.,][]{sheng2013direction,sheng2016sufficient,chen2018efficient} tackled the problem by using sequential quadratic programming \citep[SQP;][chap.
6]{gill1981practical}. The SQP method works well when the dimension $p$ and the sample size $n$ are not too large, but the optimization is often computationally difficult in moderately high-dimensional settings. Another method that seems to work is to use the Matlab package \verb|manopt| by \citet{boumal2014manopt}. This package provides iterative Riemannian optimization techniques, including Trust-regions, BFGS, SGD, Nelder-Mead, and so on. Unfortunately, directly applying this package to solve the DCOV-based SDR problems often fails since it requires an analytical first-order derivative. Beyond the above, the literature on solving this kind of problem is scarce. In this article, we propose a new algorithm which makes three major contributions to the literature on sufficient dimension reduction and manifold optimization. First, we equivalently rewrite the DCOV objective function of the model as a difference of convex functions. This allows us to design a highly efficient algorithm for solving the corresponding optimization problem based on the new form of the objective function. Second, we establish the convergence of the proposed algorithm over the Stiefel manifold. Third, we extend our method to sufficient variable selection based on distance covariance. Simulation studies show our algorithm is ten to a hundred times faster than the methods relying on the SQP algorithm. A toy example is given to visualize what SDR does and to compare the performance of our algorithm with the competitor's.
In this example, we generate 800 independent copies from \begin{equation*} X = {\bm\Gamma} [\cos(2\pi Y), \sin(2\pi Y)]^\top + 0.1{\bm\Phi}^{1/2}\epsilon, \nonumber \end{equation*} where \begin{equation*} \bm\Gamma = \left( \begin{array}{ccccccc} 1 & 1 & \ldots & 1 & 1\\ 1 & -1 & \ldots & 1 & -1 \\ \end{array} \right)^\top \in \mathbb{R}^{20 \times 2}, \end{equation*} $Y$ is generated from the uniform distribution over the interval $(0, 1)$, $\Phi_{ij}=0.5^{|i-j|}$ and $\epsilon$ is a standard normal error. In the following figure, we can see how the first two SDR components recover a circle pattern. Our algorithm (MMRN, see details in a later section) is about 20 times faster than the competitor. \begin{figure}[!htbp] \centering \includegraphics[scale=.75]{fig/circle} \caption{Computational performance comparison.} \label{FIG:1} \end{figure} \subsection{Notation and the Stiefel Manifold} The following notations and facts about the Stiefel manifold, discussed in \citet{absil2009optimization,edelman1998geometry}, will be used in our exposition. The trace of a matrix ${\bf A}$ is $\mathrm {tr}({\bf A})$ and the Euclidean inner product of two matrices ${\bf A},{\bf B}$ is $\langle {\bf A}, {\bf B} \rangle=\mathrm {tr}({\bf A}^{\top}{\bf B})$. We use $\|\cdot\|_2$ and $\|\cdot\|_{\rm F}$ to denote the Euclidean norm of a vector and the Frobenius norm of a matrix respectively. The notation ${\rm St}(d,p)=\left\{ {\bm \gamma}\in\mathbb{R}^{p\times d}|{\bm \gamma}^{\top}{\bm \gamma}={\bf I}_d \right\}$ with $d\leq p$ is referred to as the Stiefel manifold and $\mathcal{T}_{{\bm \gamma}}{\rm St}(d,p)$ is the tangent space to ${\rm St}(d,p)$ at a point ${\bm \gamma}\in{\rm St}(d,p)$. According to \citet{edelman1998geometry}, $\mathcal{T}_{{\bm \gamma}}{\rm St}(d,p)=\left\{{\bm \gamma}{\bf U}+{\bm \gamma}_{\perp}{\bf V}|{\bf U}\in{\rm Skew}(d),{\bf V}\in\mathbb{R}^{(p-d)\times d} \right\}$.
Here ${\bm \gamma}_{\perp}$ is the orthogonal complement of ${\bm \gamma}$ and ${\rm Skew}(d)$ denotes the set of $d\times d$ skew-symmetric matrices. We use ${\rm vec}({\bf W})$ to denote the vector formed by stacking the column vectors of ${\bf W}$. For a skew-symmetric matrix ${\bf W}\in {\rm Skew}(d)$, ${\rm veck}({\bf W})$ denotes a $d(d-1)/2$-dimensional column vector obtained by stacking the columns of the lower triangular part of ${\bf W}$. For a square matrix ${\bf W}$, we use ${\rm sym}({\bf W})=\left( {\bf W}+{\bf W}^{\top}\right)/2$ and ${\rm skew}({\bf W})=\left( {\bf W}-{\bf W}^{\top}\right)/2$ to denote the symmetric and skew-symmetric parts of ${\bf W}$ respectively. Induced from the Euclidean inner product, the Riemannian metric on ${\rm St}(d,p)$ we consider here is defined as $\langle {\bm \xi}_1,\,{\bm \xi}_2 \rangle_{{\bm \gamma}}=\mathrm {tr}({\bm \xi}_1^{\top}{\bm \xi}_2),\; \mbox{for any } {\bm \xi}_1,{\bm \xi}_2 \in \mathcal{T}_{{\bm \gamma}}{\rm St}(d,p)$. Under this metric, the orthogonal projection of ${\bf W}$ onto the tangent space $\mathcal{T}_{{\bm \gamma}}{\rm St}(d,p)$ is expressed as $\mbox{Proj}_{\mathcal{T}_{{\bm \gamma}}{\rm St}(d,p)}({\bf W})={\bf W}-{\bm \gamma} {\rm sym} \left({\bm \gamma}^{\top}{\bf W}\right)$. Let $f$ be a smooth function and $\nabla f$ its Euclidean gradient; the Riemannian gradient at a point ${\bm \gamma} \in {\rm St}(d,p)$ is defined as ${\rm grad}f({\bm \gamma})= \mbox{Proj}_{\mathcal{T}_{{\bm \gamma}}{\rm St}(d,p)}(\nabla f({\bm \gamma}))$.
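The projection and Riemannian-gradient formulas above can be sketched directly. The following minimal example (with arbitrary dimensions and a random stand-in for the Euclidean gradient) checks that the projected matrix is indeed tangent, i.e., that ${\bm \gamma}^{\top}{\bm \xi}$ is skew-symmetric for tangent vectors ${\bm \xi}$:

```python
import numpy as np

def sym(W):
    """Symmetric part (W + W^T) / 2."""
    return (W + W.T) / 2.0

def proj_tangent(gamma, W):
    """Orthogonal projection of W onto the tangent space of St(d, p)
    at gamma: W - gamma * sym(gamma^T W)."""
    return W - gamma @ sym(gamma.T @ W)

rng = np.random.default_rng(1)
p, d = 6, 2
gamma, _ = np.linalg.qr(rng.standard_normal((p, d)))  # a point on St(d, p)

W = rng.standard_normal((p, d))   # stand-in for a Euclidean gradient
grad = proj_tangent(gamma, W)     # corresponding Riemannian gradient

# Tangency: gamma^T xi is skew-symmetric for tangent vectors xi,
# so sym(gamma^T grad) should vanish (up to rounding error).
print(np.linalg.norm(sym(gamma.T @ grad)))
```

The projection is also idempotent, as expected of an orthogonal projection: applying it twice gives the same tangent vector.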
Correspondingly, the Riemannian Hessian at a point ${\bm \gamma} \in {\rm St}(d,p)$ acting on ${\bm \xi} \in \mathcal{T}_{{\bm \gamma}}{\rm St}(d,p)$ is defined as $\mbox{Hess}\,f({\bm \gamma})[{\bm \xi}] =\mbox{Proj}_{\mathcal{T}_{{\bm \gamma}}{\rm St}(d,p)}\left({\rm D}(\mbox{grad}\,f)({\bm \gamma})[{\bm \xi}]\right)$ and ${\rm D}(\mbox{grad}\,f)({\bm \gamma})[{\bm \xi}]$ is the directional derivative of $\mbox{grad}\,f({\bm \gamma})$ along the direction ${\bm \xi}$. We use Retr to denote the retraction operation. For the Stiefel manifold, the QR retraction is used in this article. \subsection{Organization} The rest of the article is organized as follows. Section 2 briefly reviews key knowledge of the DCOV-based SDR method and illustrates our motivation. Section 3 describes the proposed algorithm for solving DCOV-based SDR models in detail and Section 4 extends the proposed algorithm to DCOV-based SVS models. In Section 5, we evaluate the superior numeric performance of the proposed algorithm through various simulation studies. Finally, we draw some concluding remarks in Section 6. All proofs are given in the Appendix. \section{Background Review and Motivation} \subsection{DCOV-based SDR Model} Let $({\bf X},{\bf Y})=\left\{ (X_i,Y_i): i=1,\ldots,n \right\}$ be a random sample from $(X,Y)$. ${\bf X}$ denotes a $p\times n$ data matrix and ${\bf Y}$ denotes a $1 \times n$ response data matrix. We present here a univariate response; however, due to the nature of DCOV, the method extends naturally to multivariate responses. The empirical solution of the DCOV-based SDR method for these $n$ observations relies on solving the following optimization problem: \begin{alignat}{1}\label{eqn2.1} \underset{ \bm{\beta}\in\mathbb{R}^{p\times d} }{\mbox{max}} \; & \mathcal{V}_{n}^2(\bm{\beta}^{\top}{\bf X},{\bf Y}) := \frac{1}{n^2} \sum_{k,l=1}^{n} A_{kl}({\bm{\beta}})B_{kl}, \mbox{ s.t.
} \bm{\beta}^{\top} \widehat{{\bf\Sigma}}_{X} \bm{\beta} ={\bf I}_d, \tag{2.1} \end{alignat} where $\widehat{{\bf\Sigma}}_X$ is the sample covariance matrix of $X$, ${\bf I}_d$ is a $d$-dimensional identity matrix and for $k,l=1,\ldots,n,$ \begin{eqnarray*} A_{kl}({\bm{\beta}}) &=& a_{kl}({\bm\beta})- \overline{a}_{k\cdot }({\bm\beta})- \overline{a}_{\cdot l}({\bm\beta})+\overline{a}_{\cdot \cdot}({\bm\beta}), \\ a_{kl}({\bm\beta}) &=& \| \bm{\beta}^{\top} X_k-\bm{\beta}^{\top} X_l \|_2, \; \overline{a}_{k\cdot }(\bm{\beta})=\frac{1}{n} \sum_{l=1}^{n} a_{kl}(\bm{\beta}), \\ \overline{a}_{\cdot l}({\bm\beta}) &=& \frac{1}{n} \sum_{k=1}^{n} a_{kl}({\bm\beta}),\; \overline{a}_{\cdot \cdot}({\bm\beta}) = \frac{1}{n^2} \sum_{k,l=1}^{n} a_{kl}({\bm\beta}). \end{eqnarray*} Similarly, define $b_{kl}=\|Y_k-Y_l\|_2$ and $B_{kl}= b_{kl}- \overline{b}_{k\cdot }- \overline{b}_{\cdot l}+\overline{b}_{\cdot \cdot}$. \citet{sheng2013direction,sheng2016sufficient} showed that under mild conditions, the solution of the above problem $(\ref{eqn2.1})$ is a $\sqrt{n}$-consistent estimator of a basis of $\mathcal{S}_{Y|X}$. \subsection{Motivation} In the Appendix of \citet*{szekely2007measuring}, it was proved that $\mathcal{V}_{n}^2(\bm{\beta}^{\top}{\bf X},{\bf Y})$ has another expression, i.e., \begin{equation}\tag{2.2}\label{eqn2.2} \mathcal{V}_{n}^2(\bm{\beta}^{\top}{\bf X},{\bf Y})=S_1+S_2-2S_3, \end{equation} where \begin{equation} \tag{2.3}\label{eqn2.3} \begin{aligned} S_1 &= \frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta})b_{kl},\\ S_2 &= \frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta})\frac{1}{n^2}\sum_{k,l=1}^{n} b_{kl}=\frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta})\overline{b}_{\cdot\cdot},\\ S_3 &= \frac{1}{n^3}\sum_{k=1}^{n}\sum_{l,m=1}^{n}a_{kl}(\bm{\beta})b_{km}= \frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta})\overline{b}_{k\cdot}. 
\end{aligned} \end{equation} Notice that $\displaystyle \frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta})\overline{b}_{k\cdot}= \frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta})\overline{b}_{\cdot l}$ because for any $k,l=1,\ldots,n$, $a_{kl}(\bm{\beta})\overline{b}_{k\cdot}=a_{lk}(\bm{\beta})\overline{b}_{\cdot k}$. Then, we have the following way to express $2S_3$: \begin{equation}\tag{2.4}\label{eqn2.4} 2S_3=\frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta})\overline{b}_{k\cdot}+ \frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta})\overline{b}_{\cdot l}=\frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta}) \left( \overline{b}_{k\cdot}+\overline{b}_{\cdot l}\right). \end{equation} Substituting equations $(\ref{eqn2.3})$ and $(\ref{eqn2.4})$ into $(\ref{eqn2.2})$, we obtain \begin{equation}\tag{2.5}\label{eqn2.5} \begin{aligned} \mathcal{V}_{n}^2(\bm{\beta}^{\top}{\bf X},{\bf Y}) &= \frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta})b_{kl}+\frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta})\overline{b}_{\cdot\cdot}-\frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta}) \left( \overline{b}_{k\cdot}+\overline{b}_{\cdot l}\right), \\ &= \frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta}) \left( b_{kl}+ \overline{b}_{\cdot\cdot} -\overline{b}_{k\cdot}-\overline{b}_{\cdot l} \right),\\ &= \frac{1}{n^2}\sum_{k,l=1}^{n} a_{kl}(\bm{\beta}) B_{kl}. \end{aligned} \end{equation} In addition, it can be verified that $\sum_{k,l=1}^{n}B_{kl}=0$ and $a_{kl}(\bm{\beta})$ is convex with respect to $\bm{\beta}$. These observations show that the objective function $(\ref{eqn2.5})$ admits a difference-of-convex (DC) decomposition.
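As a quick numerical sanity check of the identity $(\ref{eqn2.5})$ and of $\sum_{k,l}B_{kl}=0$, one can compare the double-centred form from $(\ref{eqn2.1})$ with the reduced form on randomly generated data; the data and dimensions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, d = 50, 5, 2
X = rng.standard_normal((p, n))          # p x n data matrix
Y = rng.standard_normal((1, n))          # 1 x n response matrix
beta = np.linalg.qr(rng.standard_normal((p, d)))[0]

def dist_matrix(M):
    """Pairwise Euclidean distances between the columns of M."""
    diff = M[:, :, None] - M[:, None, :]
    return np.sqrt((diff ** 2).sum(axis=0))

def double_center(D):
    """D_kl minus row mean, minus column mean, plus grand mean."""
    return (D - D.mean(axis=1, keepdims=True)
              - D.mean(axis=0, keepdims=True) + D.mean())

a = dist_matrix(beta.T @ X)              # a_kl(beta)
b = dist_matrix(Y)                       # b_kl
A, B = double_center(a), double_center(b)

v_centered = (A * B).mean()              # (1/n^2) sum A_kl B_kl, as in (2.1)
v_reduced = (a * B).mean()               # (1/n^2) sum a_kl B_kl, as in (2.5)
print(v_centered - v_reduced, B.sum())
```

Both quantities agree to machine precision, because the row and column sums of the double-centred matrix ${\bf B}$ vanish, so the centring terms of $A_{kl}$ drop out of the sum.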
Indeed, we can write the function $(\ref{eqn2.5})$ in a DC formulation \begin{equation}\tag{2.6}\label{eqn2.6} \mathcal{V}_{n}^2(\bm{\beta}^{\top}{\bf X},{\bf Y})=\left( \frac{1}{n^2}\sum_{k,l=1}^{n}a_{kl}(\bm{\beta})B_{kl}I(B_{kl}>0)\right)- \left( -\frac{1}{n^2}\sum_{k,l=1}^{n}a_{kl}(\bm{\beta})B_{kl}I(B_{kl}<0)\right) \end{equation} through the indicator function $I(\cdot)$. This equivalent function form $(\ref{eqn2.6})$ motivates us to design a highly efficient algorithm from the viewpoint of the difference-of-convex algorithm \citep[DCA;][]{tao1997convex}. More details about DCA and some of its recent developments can be found in \citet{tao2005dc,le2018dc}; \citet{tao1997convex,tao1998dc,dinh2014recent}. Thus, the objective function $(\ref{eqn2.1})$ of the DCOV-based SDR model can be equivalently transformed to \begin{equation}\label{eqn2.7}\tag{2.7} \begin{split} \underset{ \bm{\beta} \in\mathbb{R}^{p\times d}}{\mbox{max}} \; & \mathcal{V}_{n}^2(\bm{\beta}^{\top}{\bf X},{\bf Y}) := \frac{1}{n^2} \sum_{k,l=1}^{n} a_{kl}({\bm{\beta}})B_{kl}, \mbox{ s.t. }\; \bm{\beta}^{\top} \widehat{{\bf\Sigma}}_{X} \bm{\beta} ={\bf I}_d. \end{split} \end{equation} Letting ${\bm \gamma}={\widehat{{\bf\Sigma}}_X}^{\frac 12}\bm{\beta}$ and ${\bf Z}={\widehat{{\bf\Sigma}}_X}^{-\frac 12} {\bf X}$, the above problem $(\ref{eqn2.7})$ can be rewritten as \begin{equation}\label{eqn2.8}\tag{2.8} \begin{split} \underset{ {\bm \gamma}}{\mbox{max}}\; \mathcal{V}_{n}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y}):=\frac{1}{n^2} \sum_{k,l=1}^{n} a_{kl}({{\bm \gamma}})B_{kl}, \mbox{ s.t. } \; {\bm \gamma} \in {\rm St}(d,p), \end{split} \end{equation} where $a_{kl}({\bm \gamma})=\|{\bm \gamma}^{\top}Z_k-{\bm \gamma}^{\top}Z_l\|_2$. In later sections, we will make full use of the equivalent form $(\ref{eqn2.8})$ rather than $(\ref{eqn2.7})$. \section{Methodology} \subsection{Preliminaries} In fact, DCA is based on the MM algorithm, a general principle for designing algorithms.
The idea of designing an MM algorithm for finding $\hat{x}=\underset{ x \in \mathscr{X} }{\arg\max}\, f(x)$, where ${\mathscr X}$ is the constraint region, is as follows. At each iterate $x^{(t)}$, we need to construct a surrogate function $g(x|x^{(t)})$ satisfying \begin{equation*} \begin{split} f(x^{(t)}) &= g(x^{(t)}|x^{(t)}) \\ f(x) &\geq g(x|x^{(t)}),\qquad \mbox{for any $x \in \mathscr{X} $ }. \end{split} \end{equation*} Then, the MM algorithm updates the estimate with \begin{equation*} x^{(t+1)} =\underset{ x \in \mathscr{X} }{\arg\max }\; g(x|x^{(t)}). \end{equation*} Because \begin{equation*} f(x^{(t+1)}) \geq g(x^{(t+1)}|x^{(t)}) \geq g(x^{(t)}|x^{(t)}) = f(x^{(t)}), \end{equation*} the iterates generated by the MM algorithm drive the objective function uphill. Under mild conditions, the MM algorithm generally converges to a stationary point of the objective function. The most important component of designing an MM algorithm is to find an appropriate surrogate function $g(x|x^{(t)})$. In general, many surrogate functions may be derived from various inequalities stemming from convexity or concavity; see, e.g., \citet{lange2000optimization} or \citet{hunter2004tutorial22}. One of the most commonly used inequalities for constructing a surrogate function is the supporting hyperplane inequality. Suppose $f(x)$ is convex with gradient $\nabla f(x)$; the supporting hyperplane inequality is \begin{equation}\tag{3.1} \label{eqn3.1} f(y) \geq f(x)+ \langle \nabla f(x),\; y-x \rangle. \end{equation} Our derivation of the MM algorithm for the DCOV-based SDR model hinges on the convexity of the two functions mentioned in the next lemma. \begin{lem}\label{lem1} (a) The scalar function $\displaystyle f(x)=x^{\frac 12}-\epsilon \log\left(1+ \frac{ x^{\frac 12}}{\epsilon} \right) $ is concave and differentiable in $x>0$ where $\epsilon >0$ is a constant.
(b) The matrix function $\displaystyle f({\bf A})=\|{\bf A}c\|_2-\epsilon\log \left( 1 + \frac{\|{\bf A}c\|_2}{\epsilon} \right)$ is convex and differentiable in the $n\times p$ matrix ${\bf A}$, where $ c\in \mathbb{R}^p$ is a constant vector and $\epsilon >0$ is a constant scalar. \end{lem} \subsection{MM Algorithm} It is often challenging to directly optimize the objective function $(\ref{eqn2.8})$ due to its non-smoothness. One way to tackle the difficulty is to perturb the objective function slightly to render it differentiable, and then to optimize this differentiable function using an MM algorithm \citep{hunter2005variable,yu2015high}. Motivated by this idea, we introduce a perturbed version $\mathcal{V}_{n,\epsilon}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y})$ of the objective function $(\ref{eqn2.8})$ for the DCOV-based SDR model: \begin{equation}\tag{3.2}\label{eqn3.2} \displaystyle \begin{split} \mathcal{V}_{n,\epsilon}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y}) &= \frac{1}{n^2} \sum_{k,l=1}^{n} \left\{ a_{kl}({\bm \gamma}) - \epsilon\log \left( 1 + \frac{a_{kl}({\bm \gamma})}{\epsilon} \right) \right\} B_{kl}, \\ &=\frac{1}{n^2} \sum_{k,l=1}^{n} \left\{ \|{\bm \gamma}^{\top}(Z_k-Z_l) \|_2- \epsilon\log \left( 1 + \frac{\|{\bm \gamma}^{\top}(Z_k-Z_l)\|_2}{\epsilon} \right) \right\} B_{kl}. \end{split} \end{equation} Below we collect some properties of the perturbed objective function $\mathcal{V}_{n,\epsilon}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y})$.
\begin{Pro}\label{Pro1} For $\epsilon>0$, (i) $\mathcal{V}_{n,\epsilon}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y})$ is a continuous and differentiable DC function and a DC decomposition of it is \begin{equation}\tag{3.3}\label{eqn3.3} \begin{split} \mathcal{V}_{n,\epsilon}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y}) &= \left( \frac{1}{n^2} \sum_{k,l=1}^{n} \left\{ a_{kl}({\bm \gamma}) - \epsilon\log \left( 1 + \frac{a_{kl}({\bm \gamma})}{\epsilon} \right) \right\} B_{kl}I(B_{kl}>0) \right) \\ &- \left( -\frac{1}{n^2} \sum_{k,l=1}^{n} \left\{ a_{kl}({\bm \gamma}) - \epsilon\log \left( 1 + \frac{a_{kl}({\bm \gamma})}{\epsilon} \right) \right\} B_{kl}I(B_{kl}<0)\right) , \end{split} \end{equation} where $I(\cdot)$ is an indicator function, (ii) $\mathcal{V}_{n,\epsilon}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y})$ converges to $\mathcal{V}_{n}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y})$ uniformly on the Stiefel manifold ${\bm \gamma}\in {\rm St}(d,p)$ as $\epsilon$ approaches zero. \end{Pro} Now let ${\bm \gamma}^{(t)}$ denote the current estimate; we construct the minorization $g_{\epsilon}({\bm \gamma}|{\bm \gamma}^{(t)})$ for the perturbed objective function $\mathcal{V}_{n,\epsilon}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y})$ based on the DC decomposition $(\ref{eqn3.3})$.
The convexity of the function $\displaystyle {\bf A} \mapsto \|{\bf A}c\|_2-\epsilon\log \left( 1 + \frac{\|{\bf A}c\|_2}{\epsilon} \right)$ implies that \begin{equation}\notag \begin{split} a_{kl}({\bm \gamma}) - \epsilon\log \left( 1 + \frac{a_{kl}({\bm \gamma})}{\epsilon} \right) &=\|{\bm \gamma}^{\top}(Z_k-Z_l)\|_2- \epsilon\log \left( 1 + \frac{\|{\bm \gamma}^{\top}(Z_k-Z_l)\|_2}{\epsilon} \right) \\ &\geq \|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l) \|_2- \epsilon\log \left( 1 + \frac{\|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l)\|_2}{\epsilon} \right) \\ & \quad +\big\langle \frac{(Z_k-Z_l)(Z_k-Z_l)^{\top}{\bm \gamma}^{(t)}}{ \|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l)\|_2+\epsilon },\; {\bm \gamma}-{\bm \gamma}^{(t)} \big\rangle. \end{split} \end{equation} Multiplying both sides by a nonnegative term $B_{kl}I(B_{kl}>0)$ and averaging over all pairs $(k,l)$ leads to the minorization \begin{equation}\tag{3.4}\label{eqn3.4} \begin{split} & \frac{1}{n^2} \sum_{k,l=1}^{n} \left\{ a_{kl}({\bm \gamma}) - \epsilon\log \left( 1 + \frac{a_{kl}({\bm \gamma})}{\epsilon} \right) \right\} B_{kl}I(B_{kl}>0) \\ &\geq \frac{1}{n^2} \sum_{k,l=1}^{n} \left\{ a_{kl}({\bm \gamma}^{(t)}) - \epsilon\log \left( 1 + \frac{a_{kl}({\bm \gamma}^{(t)})}{\epsilon} \right) \right\} B_{kl}I(B_{kl}>0) \\ & \quad + \frac{1}{n^2} \sum_{k,l=1}^{n} \big\langle \frac{(Z_k-Z_l)(Z_k-Z_l)^{\top}{\bm \gamma}^{(t)}}{ \|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l)\|_2+\epsilon },\; {\bm \gamma}-{\bm \gamma}^{(t)} \big\rangle B_{kl}I(B_{kl}>0). 
\end{split} \end{equation} Next focusing on the term $\displaystyle a_{kl}({\bm \gamma}) - \epsilon\log \left( 1 + \frac{a_{kl}({\bm \gamma})}{\epsilon} \right) B_{kl}I(B_{kl}<0)$, we use the fact that $\displaystyle f(x)=x^{\frac 12}-\epsilon \log\left( 1+\frac{ x^{\frac 12}}{\epsilon} \right)$ is concave in $x>0$ to show \begin{equation}\notag x^{\frac 12}-\epsilon \log\left( 1+\frac{ x^{\frac 12}}{\epsilon} \right) \leq {x^{(t)}}^{\frac 12}-\epsilon \log\left( 1+\frac{ {x^{(t)}}^{\frac 12}}{\epsilon} \right)+ \frac{x-x^{(t)}}{2\left( {x^{(t)}}^{\frac 12}+\epsilon \right)}. \end{equation} Then, we take $x=\|{\bm \gamma}^{\top}(Z_k-Z_l)\|^2_2$ and $x^{(t)}=\|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l)\|^2_2$, the above inequality becomes \begin{equation}\notag \begin{split} & \|{\bm \gamma}^{\top}(Z_k-Z_l) \|_2- \epsilon\log \left( 1 + \frac{\|{\bm \gamma}^{\top}(Z_k-Z_l)\|_2}{\epsilon} \right) \\ & \leq \|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l) \|_2- \epsilon\log \left( 1 + \frac{\|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l)\|_2}{\epsilon} \right)\\ &\quad + \frac{ \|{\bm \gamma}^{\top}(Z_k-Z_l)\|^2_2-\|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l)\|^2_2 }{2\left( \|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l) \|_2 + \epsilon \right)}. 
\end{split} \end{equation} Multiplying both sides by a nonpositive term $B_{kl}I(B_{kl}<0)$ and averaging over all pairs $(k,l)$, we obtain the minorization \begin{equation}\tag{3.5}\label{eqn3.5} \begin{split} & \frac{1}{n^2} \sum_{k,l=1}^{n} \left\{ a_{kl}({\bm \gamma}) - \epsilon\log \left( 1 + \frac{a_{kl}({\bm \gamma})}{\epsilon} \right) \right\} B_{kl}I(B_{kl}<0) \\ &\geq \frac{1}{n^2} \sum_{k,l=1}^{n} \left\{ a_{kl}({\bm \gamma}^{(t)}) - \epsilon\log \left( 1 + \frac{a_{kl}({\bm \gamma}^{(t)})}{\epsilon} \right) \right\} B_{kl}I(B_{kl}<0) \\ &\quad + \frac{1}{n^2} \sum_{k,l=1}^{n} \frac{ \|{\bm \gamma}^{\top}(Z_k-Z_l)\|_2^2-\|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l)\|_2^2 }{2\left( \|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l) \|_2 + \epsilon \right)} B_{kl}I(B_{kl}<0). \end{split} \end{equation} Combining the minorizations $(\ref{eqn3.4})$ and $(\ref{eqn3.5})$ gives the overall minorization \begin{equation}\tag{3.6}\label{eqn3.6} \begin{split} g_{\epsilon}({\bm \gamma}|{\bm \gamma}^{(t)}) &= \frac{1}{n^2} \sum_{k,l=1}^{n} \frac{ B_{kl}I(B_{kl}<0) }{2\left( \|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l) \|_2+ \epsilon \right)}\|{\bm \gamma}^{\top}(Z_k-Z_l)\|^2_2 \\ &\quad + \frac{1}{n^2} \sum_{k,l=1}^{n} \big\langle \frac{(Z_k-Z_l)(Z_k-Z_l)^{\top}{\bm \gamma}^{(t)}}{ \|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l)\|_2+\epsilon },\; {\bm \gamma} \big\rangle B_{kl}I(B_{kl}>0) + c^{(t)}, \end{split} \end{equation} where $c^{(t)}$ is an irrelevant constant. To make the surrogate function explicit, we write it in matrix form.
Let ${\bf C}$ be a $n \times n$ matrix with every entry $\displaystyle C_{kl}= \frac{ B_{kl}I(B_{kl}<0) }{ \|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l)\|_2 + \epsilon }$ and ${\bf D}$ be a $n \times n$ matrix with every entry $\displaystyle D_{kl}= \frac{ B_{kl}I(B_{kl}>0) }{ \|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l)\|_2 + \epsilon }$, then the surrogate function $(\ref{eqn3.6})$ becomes \begin{equation*} \begin{split} g_{\epsilon}({\bm \gamma}|{\bm \gamma}^{(t)}) &= \frac{1}{n^2} \sum_{k,l=1}^{n} \frac{C_{kl}}{2} \|{{\bm \gamma}}^{\top}(Z_k-Z_l)\|^2_2 \\ &\quad + \frac{1}{n^2} \sum_{k,l=1}^{n} \big\langle D_{kl}(Z_k-Z_l)(Z_k-Z_l)^{\top}{\bm \gamma}^{(t)},\; {\bm \gamma} \big\rangle + c^{(t)}. \end{split} \end{equation*} After some algebraic manipulation, we have \begin{equation*} \begin{split} g_{\epsilon}({\bm \gamma}|{\bm \gamma}^{(t)}) &= \frac{1}{2} \mathrm {tr}\left( {\bm \gamma}^{\top} {\bf Z} \frac{ 2(\mathrm {diag}({\bf C}{\bf 1}_n)-{\bf C} ) }{ n^2 } {\bf Z}^{\top} {\bm \gamma} \right) \\ &\quad + \mathrm {tr}\left( {{\bm \gamma}^{(t)}}^{\top} {\bf Z} \frac{ 2(\mathrm {diag}({\bf D}{\bf 1}_n)-{\bf D} ) }{ n^2 } {\bf Z}^{\top} {\bm \gamma} \right) + c^{(t)}, \end{split} \end{equation*} where ${\bf 1}_n$ is a $n\times 1$ column vector having all $n$ elements equal to one and $\mathrm {diag}(a)$ is the $n\times n$ diagonal matrix whose entries are the $n$ elements of the vector $a$. 
Let $\displaystyle {\bf Q}={\bf Z} \frac{ 2(\mathrm {diag}({\bf C}{\bf 1}_n)-{\bf C} ) }{ n^2 } {\bf Z}^{\top}$ and $\displaystyle {\bf L}= {\bf Z} \frac{ 2(\mathrm {diag}({\bf D}{\bf 1}_n)-{\bf D} ) }{ n^2 } {\bf Z}^{\top}{\bm \gamma}^{(t)}$; then the surrogate function $g_{\epsilon}({\bm \gamma}|{\bm \gamma}^{(t)})$ finally has the form \begin{equation} \tag{3.7}\label{eqn3.7} g_{\epsilon}({\bm \gamma}|{\bm \gamma}^{(t)})=\frac{1}{2} \mathrm {tr}\left( {\bm \gamma}^{\top} {\bf Q} {\bm \gamma} \right)+ \mathrm {tr}\left( {\bm \gamma}^{\top} {\bf L} \right), \end{equation} subject to ${\bm \gamma}\in{\rm St}(d,p)$. Maximizing the surrogate function $g_{\epsilon}({\bm \gamma}|{\bm \gamma}^{(t)})$ under the constraint drives the objective function uphill. However, due to the manifold constraint, it is still difficult to solve the subproblem $(\ref{eqn3.7})$ accurately, although the objective function is only quadratic. In fact, the validity of the ascent property depends only on increasing $g_{\epsilon}({\bm \gamma}|{\bm \gamma}^{(t)})$ over the Stiefel manifold ${\rm St}(d,p)$, not on maximizing $g_{\epsilon}({\bm \gamma}|{\bm \gamma}^{(t)})$. Similar to \citet{lange1995gradient} and \citet{xu2018majorization}, we propose inexactly maximizing the surrogate function $g_{\epsilon}({\bm \gamma}|{\bm \gamma}^{(t)})$ by taking a single Newton step, but over the Stiefel manifold ${\rm St}(d,p)$. At each iterate ${\bm \gamma}^{(t)}$, we need to solve the following Newton's equation of the problem $(\ref{eqn3.7})$ \begin{equation}\tag{3.8}\label{eqn3.8} \mbox{Hess}\, g_{\epsilon}({\bm \gamma}^{(t)})[{\bm \xi}]=-\mbox{grad}\,g_{\epsilon}({\bm \gamma}^{(t)}), \end{equation} subject to ${\bm \xi} \in {\cal T}_{{\bm \gamma}^{(t)}} {\rm St}(d,p)$.
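The assembly of ${\bf C}$, ${\bf D}$, ${\bf Q}$, and ${\bf L}$ at the current iterate can be sketched as follows; this is a minimal illustration with a random symmetric stand-in for the centred matrix ${\bf B}=(B_{kl})$ and arbitrary dimensions, and the Euclidean gradient of the quadratic surrogate $(\ref{eqn3.7})$ is then ${\bf Q}{\bm \gamma}+{\bf L}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, d, eps = 40, 5, 2, 1e-3
Z = rng.standard_normal((p, n))                       # whitened predictors
B = rng.standard_normal((n, n)); B = B + B.T          # stand-in for (B_kl)
gamma = np.linalg.qr(rng.standard_normal((p, d)))[0]  # current iterate

# Pairwise norms ||gamma^T (Z_k - Z_l)||_2, shifted by the perturbation eps.
G = gamma.T @ Z
dist = np.sqrt(((G[:, :, None] - G[:, None, :]) ** 2).sum(axis=0)) + eps

C = np.where(B < 0, B, 0.0) / dist                    # B_kl I(B_kl < 0) terms
D = np.where(B > 0, B, 0.0) / dist                    # B_kl I(B_kl > 0) terms

Q = Z @ (2.0 * (np.diag(C.sum(axis=1)) - C) / n**2) @ Z.T
L = Z @ (2.0 * (np.diag(D.sum(axis=1)) - D) / n**2) @ Z.T @ gamma

grad_g = Q @ gamma + L    # Euclidean gradient of the quadratic surrogate
print(Q.shape, L.shape)
```

Since ${\bf B}$ and the distance matrix are symmetric, ${\bf C}$ is symmetric and hence so is ${\bf Q}$, which is what makes the surrogate a genuine quadratic form in ${\bm \gamma}$.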
After obtaining the Newton direction ${\bm \xi}$ at the current estimate ${\bm \gamma}^{(t)}$, we can update the estimate by \begin{equation*} {\bm \gamma}^{(t+1)} =\mbox{Retr}_{{\bm \gamma}^{(t)}}({\bm \xi})={\rm qf}({\bm \gamma}^{(t)}+{\bm \xi}), \end{equation*} where ${\rm qf}(\cdot)$ denotes the Q factor of the QR decomposition of its matrix argument. To safeguard the ascent property of the MM algorithm, we can apply a step-halving strategy at every iterate. We call this MM algorithm for solving the DCOV-based SDR model the MMRN algorithm, and the following Algorithm (\ref{alg1}) summarizes the MMRN algorithm using step-halving based on satisfying the Armijo condition. \begin{algorithm} \label{alg1} \caption{MMRN Algorithm for $(\ref{eqn2.7})$} \KwIn{${\bf X}\in\mathbb{R}^{p\times n}$, ${\bf Y}\in\mathbb{R}^{1\times n}$, perturbation constant $\epsilon$} Initialize ${\bm \gamma}^{(0)}\in {\rm St}(d,p)$, $\alpha\in (0,1)$, $\sigma\in (0,1)$, $t= 0$ \\ Precompute ${\widehat{{\bf\Sigma}}_{X}}^{\frac 12}$, ${\bf B}=\left( B_{kl}\right)$, ${\bf Z}={\widehat{{\bf\Sigma}}_{X}}^{-\frac 12} {\bf X} $ \\ \Repeat{objective value converges}{ $\displaystyle C_{kl} \gets \frac{B_{kl}I(B_{kl}<0) }{ \|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l)\|_2+\epsilon } $,\quad $\displaystyle D_{kl} \gets \frac{B_{kl}I(B_{kl}>0) }{ \|{{\bm \gamma}^{(t)}}^{\top}(Z_k-Z_l)\|_2+\epsilon } $, \quad for any $k,l=1,\ldots,n$ \\[0.1in] $\displaystyle {\bf Q}\gets {\bf Z}\frac{2(\mathrm {diag}({\bf C}{\bf 1}_n)-{\bf C}) }{n^2} {\bf Z}^{\top} $, \quad $\displaystyle {\bf L}\gets {\bf Z}\frac{2(\mathrm {diag}({\bf D}{\bf 1}_n)-{\bf D}) }{n^2} {\bf Z}^{\top} {\bm \gamma}^{(t)}$\\[0.1in] Solve the Newton's equation \begin{equation}\notag \mbox{Hess}\, g_{\epsilon}({\bm \gamma}^{(t)})[{\bm \xi}]=-\mbox{grad}\,g_{\epsilon}({\bm \gamma}^{(t)}), \end{equation} for unknown ${\bm \xi} \in {\cal T}_{{\bm \gamma}^{(t)}} {\rm St}(d,p)$ \\ $s\gets 1$ \\ \Repeat{$\displaystyle \mathcal{V}_{n,\epsilon}^2({{\rm Retr}_{{\bm
\gamma}^{(t)}}(s{\bm \xi})}^{\top}{\bf Z},{\bf Y}) \geq \mathcal{V}_{n,\epsilon}^2({{\bm \gamma}^{(t)}}^{\top}{\bf Z},{\bf Y})+\alpha s\|{\bm \xi}\|_{\rm F}^2 $}{ $s\gets \sigma s$ } ${\bm \gamma}^{(t+1)}\gets\mbox{Retr}_{{\bm \gamma}^{(t)}}(s{\bm \xi})$\\ $t\gets t+1$\\ } \KwOut{$\hat{{\bm \gamma}}_{\epsilon}={\bm \gamma}^{(t+1)},\; \hat{\bm{\beta}}_{\epsilon}={\widehat{{\bf\Sigma}}_{X}}^{-\frac 12} \hat{{\bm \gamma}}_{\epsilon}$} \end{algorithm} \subsection{Solving the Riemannian Newton's equation $(\ref{eqn3.8})$} The MM algorithm is a simple and widely applicable algorithmic framework for solving DC problems. The key to making the proposed algorithm numerically efficient lies in solving the equation $(\ref{eqn3.8})$. \citet{aihara2017matrix} and \citet{sato2017riemannian} recently proposed an effective way of solving Newton's equation on the Stiefel manifold. The idea of the method is to rewrite the original Newton's equation, expressed as a system of matrix equations, into a standard linear system through the Kronecker product and the vec and veck operators. The resultant linear system can be solved effectively while reducing the dimension of the equation to that of the Stiefel manifold. Before formally applying their method to solve the Newton's equation $(\ref{eqn3.8})$ of our subproblem, we introduce some useful properties of the Kronecker, vec, and veck operators. \begin{enumerate} \item For any ${\bf A}\in \mathbb{R}^{m\times p}$, ${\bf X} \in \mathbb{R}^{p\times q}$, and ${\bf B}\in\mathbb{R}^{q\times n}$, we have \begin{equation}\notag {\rm vec}({\bf A}{\bf X}{\bf B})=\left( {\bf B}^{\top}\otimes{\bf A} \right){\rm vec}({\bf X}). \end{equation} \item For any matrix ${\bf U}\in {\rm Skew}(d)$, we have \begin{equation} \notag {\rm vec}({\bf U})= {\bf D}_d {\rm veck}({\bf U}), \end{equation} and \begin{equation} \notag {\rm veck}({\bf U})= \frac 12 {\bf D}_d^{\top} {\rm vec}({\bf U}).
\end{equation} Here ${\bf D}_d$ is a $d^2 \times d(d-1)/2$ matrix defined by \begin{equation*} {\bf D}_d=\sum_{ d\geq i>j\geq 1 } \left( {\bf E}^{(d^2 \times d(d-1)/2)}_{d(j-1)+i,\,j(d-(j+1)/2)-d+i } - {\bf E}^{(d^2 \times d(d-1)/2)}_{d(i-1)+j,\,j(d-(j+1)/2)-d+i } \right), \end{equation*} where ${\bf E}^{(p\times q)}_{i,\, j}$ denotes the $p\times q$ matrix that has the $(i,j)$-component equal to $1$ and all other components equal to $0$. \item There exists an $n^2\times n^2$ permutation matrix ${\bf T}_n$ such that \begin{equation}\notag {\rm vec}({\bf W}^{\top})={\bf T}_n{\rm vec}({\bf W}), \quad {\bf W} \in \mathbb{R}^{n\times n}, \end{equation} where ${\bf T}_n=\sum_{i,j=1}^{n}{\bf E}_{ij}^{(n\times n)}\otimes {\bf E}_{ji}^{(n\times n)} $. \end{enumerate} From the above properties, we can easily derive that \begin{equation} \notag {\rm vec}({\rm skew}({\bf W}))=\frac 12 ({\bf I}_{n^2}-{\bf T}_n ) {\rm vec}({\bf W} ), \quad \mbox{for any } {\bf W} \in \mathbb{R}^{n\times n}. \end{equation} After these preparations, we begin to solve the Newton's equation $(\ref{eqn3.8})$. For a given $\tilde{{\bm \gamma}}\in {\rm St}(d,p)$, the Newton's equation $(\ref{eqn3.8})$ is equivalent to \begin{equation}\tag{3.9} \label{eqn3.9} \mbox{Hess}\, g_{\epsilon}(\tilde{{\bm \gamma}})[{\bm \xi}]=-\mbox{grad}\,g_{\epsilon}(\tilde{{\bm \gamma}}), \end{equation} subject to ${\bm \xi} \in {\cal T}_{\tilde{{\bm \gamma}}} {\rm St}(d,p)$. 
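These operator identities are straightforward to verify numerically. Below is an illustrative Python/NumPy sketch (not taken from the paper's Matlab implementation) that constructs ${\bf D}_d$ and ${\bf T}_n$ for a small dimension, taking ${\rm vec}$ column-major and ${\rm veck}$ as the strictly lower-triangular entries stacked column by column, and checks the stated properties on random matrices.

```python
import numpy as np

vec = lambda M: M.reshape(-1, order='F')                 # column-major vectorization

def veck(U):
    # strictly lower-triangular entries of U, stacked column by column
    d = U.shape[0]
    return np.concatenate([U[j + 1:, j] for j in range(d)])

def build_Dd(d):
    # D_d maps veck(U) to vec(U) for skew-symmetric U
    Dd = np.zeros((d * d, d * (d - 1) // 2))
    col = 0
    for j in range(d):
        for i in range(j + 1, d):
            Dd[j * d + i, col] = 1.0                     # position of U[i, j] in vec(U)
            Dd[i * d + j, col] = -1.0                    # position of U[j, i] = -U[i, j]
            col += 1
    return Dd

def build_Tn(n):
    # commutation matrix: T_n vec(W) = vec(W^T)
    T = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            T[j * n + i, i * n + j] = 1.0
    return T

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d))
U = A - A.T                                              # a skew-symmetric matrix
W = rng.standard_normal((d, d))
Dd, Tn = build_Dd(d), build_Tn(d)

assert np.allclose(Dd @ veck(U), vec(U))                 # vec(U) = D_d veck(U)
assert np.allclose(0.5 * Dd.T @ vec(U), veck(U))         # veck(U) = (1/2) D_d^T vec(U)
assert np.allclose(Tn @ vec(W), vec(W.T))                # vec(W^T) = T_n vec(W)
assert np.allclose(0.5 * (np.eye(d * d) - Tn) @ vec(W), vec((W - W.T) / 2))
```

The last assertion is the derived identity ${\rm vec}({\rm skew}({\bf W}))=\frac 12 ({\bf I}_{n^2}-{\bf T}_n){\rm vec}({\bf W})$.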
Specifically, the gradient of $g_{\epsilon}$ at a point $\tilde{{\bm \gamma}}\in {\rm St}(d,p)$ is expressed as \begin{equation}\tag{3.10}\label{eqn3.10} \mbox{grad}\,g_{\epsilon}(\tilde{{\bm \gamma}})={\bf Q}\tilde{{\bm \gamma}}+{\bf L}-\tilde{{\bm \gamma}}{\bf S}, \end{equation} and the Hessian acts on ${\bm \xi} \in {\cal T}_{\tilde{{\bm \gamma}}} {\rm St}(d,p)$ as \begin{equation}\tag{3.11}\label{eqn3.11} \mbox{Hess}\, g_{\epsilon}(\tilde{{\bm \gamma}})[{\bm \xi}] = {\bf Q} {\bm \xi} -{\bm \xi}{\bf S}-\tilde{{\bm \gamma}}\mbox{sym}\left( {\tilde{{\bm \gamma}}}^{\top} {\bf Q} {\bm \xi} -{\tilde{{\bm \gamma}}}^{\top}{\bm \xi}{\bf S} \right), \end{equation} where ${\bf S} = \mbox{sym}( {\tilde{{\bm \gamma}}}^{\top} {\bf Q}\tilde{{\bm \gamma}} + {\tilde{{\bm \gamma}}}^{\top} {\bf L} )$. Any tangent vector ${\bm \xi} \in {\cal T}_{\tilde{{\bm \gamma}}} {\rm St}(d,p)$ can be expressed as \begin{equation}\tag{3.12}\label{eqn3.12} {\bm \xi} = \tilde{{\bm \gamma}} {\bf U} + \tilde{{\bm \gamma}}_{\perp} {\bf V}, \quad {\bf U} \in \mbox{Skew}(d), {\bf V} \in \mathbb{R}^{(p-d)\times d}. \end{equation} $\mbox{Hess}\, g_{\epsilon}(\tilde{{\bm \gamma}})[{\bm \xi}] \in {\cal T}_{\tilde{{\bm \gamma}}} {\rm St}(d,p)$ can also be written as \begin{equation}\tag{3.13}\label{eqn3.13} \mbox{Hess}\, g_{\epsilon}(\tilde{{\bm \gamma}})[{\bm \xi}] = \tilde{{\bm \gamma}} {\bf U}_{H} + \tilde{{\bm \gamma}}_{\perp} {\bf V}_{H}, \quad {\bf U}_{H} \in \mbox{Skew}(d), {\bf V}_{H} \in \mathbb{R}^{(p-d)\times d}. \end{equation} Substituting the equation $(\ref{eqn3.12})$ into the equation $(\ref{eqn3.11})$ and combining the result with the equation $(\ref{eqn3.13})$, we obtain a relationship between ${\bf U}_{H}, {\bf V}_{H}$ and ${\bf U}, {\bf V}$. The following proposition gives this relationship. \begin{Pro} Let $\tilde{{\bm \gamma}} \in {\rm St}(d,p) $ and $\tilde{{\bm \gamma}}_{\perp}$ be its orthonormal complement.
If a tangent vector ${\bm \xi} \in {\cal T}_{\tilde{{\bm \gamma}}} {\rm St}(d,p) $ is expressed as $(\ref{eqn3.12})$, then the Hessian $\rm{Hess}\, g_{\epsilon}(\tilde{{\bm \gamma}})[{\bm \xi}]$ of the function $(\ref{eqn3.7})$ acts on ${\bm \xi}$ as $\rm{Hess}\, g_{\epsilon}(\tilde{{\bm \gamma}})[{\bm \xi}] = \tilde{{\bm \gamma}} {\bf U}_{H} + \tilde{{\bm \gamma}}_{\perp} {\bf V}_{H}$ with \begin{equation}\tag{3.14}\label{eqn3.14} {\bf U}_{H}= \rm{skew} \left( { \tilde{{\bm \gamma}} }^{\top} {\bf Q} \tilde{{\bm \gamma}} {\bf U} + { \tilde{{\bm \gamma}} }^{\top} {\bf Q} \tilde{{\bm \gamma}}_{\perp} {\bf V} -{\bf U}{\bf S} \right), \end{equation} and \begin{equation}\tag{3.15}\label{eqn3.15} {\bf V}_{H}= {\tilde{{\bm \gamma}}_{\perp}}^{\top} {\bf Q}\tilde{{\bm \gamma}} {\bf U} + {\tilde{{\bm \gamma}}_{\perp}}^{\top} {\bf Q} \tilde{{\bm \gamma}}_{\perp} {\bf V} -{\bf V}{\bf S}. \end{equation} \label{pro2} \end{Pro} From Equations $(\ref{eqn3.14})$ and $(\ref{eqn3.15})$, we know that the Hessian $\rm{Hess} \, g_{\epsilon}(\tilde{{\bm \gamma}})$ at $\tilde{{\bm \gamma}}\in {\rm St}(d,p) $ is a linear transformation $\bf H$ on $\mathbb{R}^{K}$ that transforms the $K$-dimensional vector ${ \left( {\rm{veck}({\bf U})}^{\top}, {{\rm vec}({\bf V})}^{\top} \right) }^{\top}$ into ${ \left( {{\rm veck}({\bf U}_{H})}^{\top}, {{\rm vec}({\bf V}_{H})}^{\top} \right) }^{\top}$. The goal of the method is to obtain this linear transformation $\bf H$ explicitly.
\begin{Pro} Let $K={\rm dim}({\rm St}(d,p))=d(d-1)/2+(p-d)d$. Then there exists a linear transformation $\bf H$ on $\mathbb{R}^{K}$ such that \begin{equation*} \bf H \left( \begin{array}{c} \rm{veck}({\bf U}) \\ {\rm vec}({\bf V}) \end{array} \right)= \left( \begin{array}{c} {\rm veck}({\bf U}_{H})\\ {\rm vec}({\bf V}_{H}) \\ \end{array} \right), \end{equation*} and the linear transformation $\bf H$ is given by \begin{equation*} \bf H= \left( \begin{array}{cc} \bf H_{11} & \bf H_{12}\\ \bf H_{21} & \bf H_{22}\\ \end{array} \right), \end{equation*} where \begin{eqnarray*} \bf H_{11} &=& \frac 14 {\bf D}^{\top}_d \left[ {\bf I}_d \otimes ({\tilde{{\bm \gamma}}}^{\top}{\bf Q}\tilde{{\bm \gamma}}-{\bf S}) + ({\tilde{{\bm \gamma}}}^{\top}{\bf Q}\tilde{{\bm \gamma}}-{\bf S})\otimes {\bf I}_d \right] {\bf D}_d , \\ \bf H_{12} &=& \frac 14 {\bf D}_d^{\top}({\bf I}_{d^2}-{\bf T}_d ) \left( {\bf I}_d\otimes {\tilde{{\bm \gamma}}}^{\top}{\bf Q}\tilde{{\bm \gamma}}_{\perp} \right),\\ \bf H_{21} &=& ({\bf I}_d\otimes \tilde{{\bm \gamma}}_{\perp}^{\top}{\bf Q}\tilde{{\bm \gamma}}){\bf D}_d, \\ \bf H_{22} &=& {\bf I}_d \otimes {\tilde{{\bm \gamma}}_{\perp}}^{\top}{\bf Q}\tilde{{\bm \gamma}}_{\perp} -{\bf S}\otimes {\bf I}_{p-d}. \end{eqnarray*} \label{pro3} \end{Pro} From the Newton's equation $(\ref{eqn3.9})$ together with Equation $(\ref{eqn3.13})$, we have \begin{equation}\tag{3.16}\label{eqn3.16} \begin{cases} {\bf U}_{H}&=-\tilde{{\bm \gamma}}^{\top} {\rm grad}\, g_{\epsilon}(\tilde{{\bm \gamma}}),\\ {\bf V}_{H}&=-\tilde{{\bm \gamma}}^{\top}_{\perp} {\rm grad}\, g_{\epsilon}(\tilde{{\bm \gamma}}).
\end{cases} \end{equation} Applying the veck and vec operators to the equations $(\ref{eqn3.16})$ respectively and using equation $(\ref{eqn3.10})$, we immediately obtain \begin{equation}\notag \begin{cases} {\rm veck}({\bf U}_{H})&=-{\rm veck} \left( {\rm skew}( {\tilde{{\bm \gamma}}}^{\top}{\bf Q}\tilde{{\bm \gamma}}+ {\tilde{{\bm \gamma}}}^{\top}{\bf L} ) \right),\\ {\rm vec}({\bf V}_{H})&=-{\rm vec}(\tilde{{\bm \gamma}}^{\top}_{\perp}{\bf Q}\tilde{{\bm \gamma}}+\tilde{{\bm \gamma}}^{\top}_{\perp}{\bf L}). \end{cases} \end{equation} By Proposition 3, we have a standard linear system \begin{equation}\notag \bf H \left( \begin{array}{c} \rm{veck}({\bf U}) \\ {\rm vec}({\bf V}) \end{array} \right)= - \left( \begin{array}{c} {\rm veck} \left( {\rm skew}( {\tilde{{\bm \gamma}}}^{\top}{\bf Q}\tilde{{\bm \gamma}}+ {\tilde{{\bm \gamma}}}^{\top}{\bf L} ) \right)\\ {\rm vec}(\tilde{{\bm \gamma}}^{\top}_{\perp}{\bf Q}\tilde{{\bm \gamma}}+\tilde{{\bm \gamma}}^{\top}_{\perp}{\bf L}) \\ \end{array} \right). \end{equation} If $\bf H$ is invertible, we can solve the above linear equation as \begin{equation}\notag \left( \begin{array}{c} \rm{veck}({\bf U}) \\ {\rm vec}({\bf V}) \end{array} \right)= -\bf H^{-1} \left( \begin{array}{c} {\rm veck} \left( {\rm skew}( {\tilde{{\bm \gamma}}}^{\top}{\bf Q}\tilde{{\bm \gamma}}+ {\tilde{{\bm \gamma}}}^{\top}{\bf L} ) \right)\\ {\rm vec}(\tilde{{\bm \gamma}}^{\top}_{\perp}{\bf Q}\tilde{{\bm \gamma}}+\tilde{{\bm \gamma}}^{\top}_{\perp}{\bf L}) \\ \end{array} \right). \end{equation} In our numerical studies, we have never encountered a case in which $\bf H$ is not invertible. After ${\rm veck}({\bf U})$ and ${\rm vec}({\bf V})$ are obtained, we can easily recover ${\bf U} \in {\rm Skew}(d)$ and ${\bf V} \in \mathbb{R}^{(p-d)\times d}$ from them. Therefore, we can calculate the solution of Newton's equation $(\ref{eqn3.9})$ by ${\bm \xi}=\tilde{{\bm \gamma}}{\bf U}+\tilde{{\bm \gamma}}_{\perp}{\bf V}$. Detailed information can be seen in Algorithm $(\ref{alg2})$.
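As a sanity check, the whole pipeline can be exercised on random data. The following Python/NumPy sketch (an illustration under assumed small dimensions, not the authors' Matlab code) assembles $\bf H$ from Proposition 3, solves for ${\rm veck}({\bf U})$ and ${\rm vec}({\bf V})$, reconstructs ${\bm \xi}$, and verifies that Newton's equation $\mbox{Hess}\, g_{\epsilon}(\tilde{{\bm \gamma}})[{\bm \xi}]=-\mbox{grad}\,g_{\epsilon}(\tilde{{\bm \gamma}})$ holds; note that in the ${\bf H}_{22}$ block the second identity factor must have size $p-d$ for the Kronecker products to conform.

```python
import numpy as np

rng = np.random.default_rng(1)
p, d = 6, 2
m = d * (d - 1) // 2                                  # dim of Skew(d)

vec = lambda M: M.reshape(-1, order='F')              # column-major vectorization
sym = lambda M: (M + M.T) / 2
skw = lambda M: (M - M.T) / 2

def build_Dd(d):
    # D_d maps veck(U) to vec(U) for skew-symmetric U
    Dd = np.zeros((d * d, d * (d - 1) // 2))
    col = 0
    for j in range(d):
        for i in range(j + 1, d):
            Dd[j * d + i, col], Dd[i * d + j, col] = 1.0, -1.0
            col += 1
    return Dd

def build_Tn(n):
    # commutation matrix: T_n vec(W) = vec(W^T)
    T = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            T[j * n + i, i * n + j] = 1.0
    return T

Dd, Td = build_Dd(d), build_Tn(d)

Q = rng.standard_normal((p, p)); Q = Q + Q.T          # symmetric, like Z(...)Z^T
L = rng.standard_normal((p, d))
full = np.linalg.qr(rng.standard_normal((p, p)))[0]
g, g_perp = full[:, :d], full[:, d:]                  # gamma~ and its complement

S = sym(g.T @ Q @ g + g.T @ L)
grad = Q @ g + L - g @ S                              # equation (3.10)

A, Bm, Cm = g.T @ Q @ g, g.T @ Q @ g_perp, g_perp.T @ Q @ g_perp
Id, Ip = np.eye(d), np.eye(p - d)
H11 = 0.25 * Dd.T @ (np.kron(Id, A - S) + np.kron(A - S, Id)) @ Dd
H12 = 0.25 * Dd.T @ (np.eye(d * d) - Td) @ np.kron(Id, Bm)
H21 = np.kron(Id, g_perp.T @ Q @ g) @ Dd
H22 = np.kron(Id, Cm) - np.kron(S, Ip)                # second identity is I_{p-d}
H = np.block([[H11, H12], [H21, H22]])

rhs = -np.concatenate([0.5 * Dd.T @ vec(skw(g.T @ Q @ g + g.T @ L)),
                       vec(g_perp.T @ Q @ g + g_perp.T @ L)])
x = np.linalg.solve(H, rhs)
U = (Dd @ x[:m]).reshape(d, d, order='F')
V = x[m:].reshape(p - d, d, order='F')
xi = g @ U + g_perp @ V                               # the Newton direction

hess_xi = Q @ xi - xi @ S - g @ sym(g.T @ Q @ xi - g.T @ xi @ S)  # (3.11)
assert np.allclose(hess_xi, -grad)                    # Newton's equation (3.9)
```

The final assertion confirms, on this random instance, that the coordinate solve via $\bf H$ reproduces the Riemannian Newton direction defined by $(\ref{eqn3.9})$.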
\begin{algorithm} \label{alg2} \caption{Solving the Riemannian Newton's equation $(\ref{eqn3.8})$} \KwIn{${\bf Q}\in\mathbb{R}^{p\times p}$, ${\bf L}\in\mathbb{R}^{p\times d}$, ${\bm \gamma}^{(t)}\in \mathbb{R}^{p\times d}$, $\displaystyle {\bf D}_d\in\mathbb{R}^{d^2\times \frac{d(d-1)}{2}}$, and ${\bf T}_d \in \mathbb{R}^{d^2\times d^2}$ } Compute ${{\bm \gamma}^{(t)}}_{\perp}$ such that ${{\bm \gamma}^{(t)}}^{\top}{\bm \gamma}^{(t)}_{\perp}={\bf 0}$ and ${{\bm \gamma}^{(t)}_{\perp}}^{\top}{\bm \gamma}^{(t)}_{\perp}={\bf I}_{p-d}$ \\ Compute ${\bf S}={\rm sym}( {{\bm \gamma}^{(t)}}^{\top} {\bf Q} {{\bm \gamma}^{(t)}}+{{\bm \gamma}^{(t)}}^{\top}{\bf L} )$\\ Compute the linear transformation ${\bf H }\in \mathbb{R}^{ K \times K}$ by \begin{equation*} \bf H= \left( \begin{array}{cc} \bf H_{11} & \bf H_{12}\\ \bf H_{21} & \bf H_{22}\\ \end{array} \right), \end{equation*} where \begin{eqnarray*} \bf H_{11} &=& \frac 14 {\bf D}^{\top}_d \left[ {\bf I}_d \otimes ({{\bm \gamma}^{(t)}}^{\top}{\bf Q}{\bm \gamma}^{(t)}-{\bf S}) + ({{\bm \gamma}^{(t)}}^{\top}{\bf Q}{\bm \gamma}^{(t)}-{\bf S})\otimes {\bf I}_d \right] {\bf D}_d , \\ \bf H_{12} &=& \frac 14 {\bf D}_d^{\top}({\bf I}_{d^2}-{\bf T}_d ) \left( {\bf I}_d\otimes {{\bm \gamma}^{(t)}}^{\top}{\bf Q}{\bm \gamma}^{(t)}_{\perp} \right),\\ \bf H_{21} &=& ({\bf I}_d\otimes {{\bm \gamma}^{(t)}}_{\perp}^{\top}{\bf Q}{\bm \gamma}^{(t)}){\bf D}_d, \\ \bf H_{22} &=& {\bf I}_d \otimes {{\bm \gamma}^{(t)}_{\perp}}^{\top}{\bf Q}{\bm \gamma}^{(t)}_{\perp} -{\bf S}\otimes {\bf I}_{p-d}.
\end{eqnarray*}\\ Compute ${\rm veck}({\bf U})$ and ${\rm vec}({\bf V})$ using \begin{equation}\notag \left( \begin{array}{c} \rm{veck}({\bf U}) \\ {\rm vec}({\bf V}) \end{array} \right)= -\bf H^{-1} \left( \begin{array}{c} {\rm veck} \left( {\rm skew}( {{\bm \gamma}^{(t)}}^{\top}{\bf Q}{\bm \gamma}^{(t)}+ {{\bm \gamma}^{(t)}}^{\top}{\bf L} ) \right)\\ {\rm vec}({{\bm \gamma}^{(t)}}^{\top}_{\perp}{\bf Q}{\bm \gamma}^{(t)}+{{\bm \gamma}^{(t)}}^{\top}_{\perp}{\bf L}) \\ \end{array} \right). \end{equation}\\ Construct ${\bf U}\in {\rm Skew}(d)$ and ${\bf V} \in \mathbb{R}^{(p-d)\times d}$ from $\rm{veck}({\bf U})$ and ${\rm vec}({\bf V})$ \\ Compute ${\bm \xi}={{\bm \gamma}^{(t)}}{\bf U}+{\bm \gamma}^{(t)}_{\perp}{\bf V} $\\ \KwOut{${\bm \xi} \in \mathcal{T}_{{\bm \gamma}^{(t)}}{\rm St}(d,p) $} \end{algorithm} \subsection{Convergence Analysis} In this section, we construct the convergence property of the proposed algorithm for solving the DCOV-based SDR model. We first show that the sequence $\left\{ \hat{{\bm \gamma}}_{\epsilon}^{(t)} \right\}_{t\geq 0}$ generated by the MMRN algorithm converge to a stationary point of the perturbed function $(\ref{eqn3.2})$. Then, we show that a maximizer $\hat{{\bm \gamma}}_{\epsilon}$ of the perturbed objective function $(\ref{eqn3.2})$ exhibits a minimal difference from a maximizer $\hat{{\bm \gamma}}$ of the true objective $(\ref{eqn2.8})$ for sufficiently small $\epsilon$. \begin{Pro} Let ${\bm \gamma} \in {\rm St}(d,p)$, $\alpha\in(0,1)$, and $\sigma\in (0,1)$, there exists an integer $t>0$ such that \begin{equation}\notag \mathcal{V}^2_{n,\epsilon}({{\rm Retr}_{{\bm \gamma}}(\sigma^t{\bm \xi})}^{\top}{\bf Z},{\bf Y}) \geq \mathcal{V}^2_{n,\epsilon}({{\bm \gamma}^{\top}{\bf Z}},{\bf Y})+\alpha \sigma^t\|{\bm \xi}\|^2_{\rm F}, \end{equation} where ${\bm \xi}$ is a solution of $\mbox{Hess}\, g_{\epsilon}({\bm \gamma})[{\bm \xi}]=-\mbox{grad}\,g_{\epsilon}({\bm \gamma})$. 
\label{pro4} \end{Pro} We now prove the convergence of our perturbed MM algorithm safeguarded by the Armijo step-halving strategy. \begin{Pro} For any $\epsilon>0$, the limit point $\hat{{\bm \gamma}}_{\epsilon}$ generated by the Algorithm 1 is a stationary point of $\mathcal{V}^2_{n,\epsilon}({\bm \gamma}^{\top}{\bf Z},{\bf Y})$, that is $\mbox{grad}\,\mathcal{V}^2_{n,\epsilon}({\hat{{\bm \gamma}}_{\epsilon}}^{\top}{\bf Z},{\bf Y}) =0.$ \label{pro5} \end{Pro} \begin{Pro} Consider an arbitrary decreasing sequence $\left\{\epsilon_{m}\right\}_{m=1}^{\infty}$ that converges to $0$. Then, any limit point of $\hat{{\bm \gamma}}_{\epsilon_m}$ is a maximizer of $\mathcal{V}^2_{n}({\bm \gamma}^{\top}{\bf Z},{\bf Y})$ over the Stiefel manifold, provided that $\left\{{\bm \gamma}|\mathcal{V}_{n}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y}) = \mathcal{V}_{n}^2({\hat{{\bm \gamma}}}^{\top}{\bf Z},{\bf Y}) \mbox{ and } {\bm \gamma}^{\top}{\bm \gamma}={\bf I}_d\right\}$ is nonempty. \label{pro6} \end{Pro} Combining Propositions 5 and 6, it is straightforward to see that the MM algorithm generates solutions that converge to a stationary point of $\mathcal{V}_{n}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y})$ as $\epsilon$ decreases to zero. \begin{thm} The sequence of the solutions $\left\{ \hat{{\bm \gamma}}^{(t)}_{\epsilon} \right\}_{t\geq 0}$ generated by the proposed perturbed MM algorithm converges to a maximizer of $\mathcal{V}_{n}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y})$ over the Stiefel manifold. Moreover, the sequence of functionals $\left\{ \mathcal{V}^2_{n,\epsilon}( \hat{{\bm \gamma}}^{(t)\top}_{\epsilon} {\bf Z},{\bf Y} ) \right\}_{t\geq 0}$ converges to the maximum value of $\mathcal{V}_{n}^2({\bm \gamma}^{\top}{\bf Z},{\bf Y})$. \label{thm1} \end{thm} \section{Extension} In this section, we extend the proposed method to sufficient variable selection (SVS) using distance covariance.
The DCOV-based SVS method was developed by \citet*{chen2018efficient} by combining DCOV-based SDR with penalty terms, such as LASSO-type penalty terms \citep{tibshirani1996regression, yuan2006model,chen2010coordinate} or the adaptive LASSO \citep{zou2006adaptive}, to achieve a sparse solution. Specifically, the model solves the following problem \begin{alignat}{1} \underset{ \bm{\beta} }{\mbox{maximize}} \quad & \mathcal{V}_{n}^2(\bm{\beta}^{\top}{\bf X},{\bf Y})-\lambda \sum_{i=1}^{p}\theta_i\|\beta_i\|_2 , \tag{4.1} \label{eqn4.1} \end{alignat} subject to $\bm{\beta}^{\top} \widehat{{\bf\Sigma}}_X \bm{\beta} ={\bf I}_d $, where $\beta_i$ denotes the $i$-th row vector of $\bm{\beta}$, $\theta_i\geq 0$ serves as the $i$-th penalty weight, and $\lambda>0$ is a tuning parameter. Plugging ${\bm \gamma}=\widehat{{\bf\Sigma}}_{X}^{\frac 12}\bm{\beta}$ and ${\bf Z}=\widehat{{\bf\Sigma}}_{X}^{-\frac 12}{\bf X}$ into the equation $(\ref{eqn4.1})$ and using the equivalent expression $(\ref{eqn2.5})$ for $\mathcal{V}_{n}^2(\bm{\beta}^{\top}{\bf X},{\bf Y})$, we can transform the objective function $(\ref{eqn4.1})$ into \begin{equation}\tag{4.2}\label{eqn4.2} \phi_{\lambda}({\bm \gamma})=\frac{1}{n^2}\sum_{k,l=1}^{n}a_{kl}({\bm \gamma})B_{kl} -\lambda \sum_{i=1}^{p}\theta_i \rho_i({\bm \gamma}), \end{equation} subject to ${\bm \gamma}\in{\rm St}(d,p)$, where $\rho_i({\bm \gamma})=\|e_{i}^{\top} \widehat{{\bf\Sigma}}_{X}^{-\frac 12}{\bm \gamma} \|_2$ and $e_i$ denotes a column vector with one in the $i$-th position and zero in the others.
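The reparameterization used here is easy to sanity-check: with ${\bm \gamma}=\widehat{{\bf\Sigma}}_{X}^{1/2}\bm{\beta}$ and ${\bf Z}=\widehat{{\bf\Sigma}}_{X}^{-1/2}{\bf X}$ we have $\bm{\beta}^{\top}{\bf X}={\bm \gamma}^{\top}{\bf Z}$, and $\rho_i({\bm \gamma})$ is exactly the norm of the $i$-th row of $\bm{\beta}$, so the group penalty transfers unchanged. A minimal Python/NumPy sketch (illustrative only; all data and dimensions are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, d = 50, 5, 2

X = rng.standard_normal((p, n)) * rng.uniform(0.5, 2.0, size=(p, 1))
Sigma = np.cov(X)                                    # sample covariance (p x p)
w, E = np.linalg.eigh(Sigma)
Sig_half = E @ np.diag(np.sqrt(w)) @ E.T             # Sigma_hat^{1/2}
Sig_inv_half = E @ np.diag(1 / np.sqrt(w)) @ E.T     # Sigma_hat^{-1/2}

beta = rng.standard_normal((p, d))
gamma = Sig_half @ beta                              # gamma = Sigma_hat^{1/2} beta
Z = Sig_inv_half @ X                                 # Z = Sigma_hat^{-1/2} X

# the reduced predictors are unchanged by the reparameterization
assert np.allclose(beta.T @ X, gamma.T @ Z)
# and rho_i(gamma) = ||e_i^T Sigma_hat^{-1/2} gamma||_2 equals the norm of row i of beta
rho = np.linalg.norm(Sig_inv_half @ gamma, axis=1)
assert np.allclose(rho, np.linalg.norm(beta, axis=1))
```

The second assertion is why penalizing $\rho_i({\bm \gamma})$ zeroes out whole rows of $\bm{\beta}$, i.e., removes predictors.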
Correspondingly, a perturbed version $\phi_{\lambda,\epsilon}({\bm \gamma})$ of the objective function $(\ref{eqn4.2})$ is given by \begin{equation}\tag{4.3}\label{eqn4.3} \displaystyle \begin{split} \phi_{\lambda,\epsilon}({\bm \gamma}) &= \frac{1}{n^2} \sum_{k,l=1}^{n} \left\{ a_{kl}({\bm \gamma}) - \epsilon\log \left( 1 + \frac{a_{kl}({\bm \gamma})}{\epsilon} \right) \right\} B_{kl}-\lambda\sum_{i=1}^{p} \theta_i\left\{ \rho_i({\bm \gamma})-\epsilon\log \left( 1 + \frac{\rho_{i}({\bm \gamma})}{\epsilon} \right) \right\}, \\ &=\frac{1}{n^2} \sum_{k,l=1}^{n} \left\{ \|{\bm \gamma}^{\top}(Z_k-Z_l) \|_2- \epsilon\log \left( 1 + \frac{\|{\bm \gamma}^{\top}(Z_k-Z_l)\|_2}{\epsilon} \right) \right\} B_{kl}\\ &\quad -\lambda\sum_{i=1}^{p}\theta_i\left\{ \|e_{i}^{\top} \widehat{{\bf\Sigma}}_{X}^{-\frac 12}{\bm \gamma} \|_2 -\epsilon\log\left( 1+\frac{ \|e_{i}^{\top} \widehat{{\bf\Sigma}}_{X}^{-\frac 12}{\bm \gamma} \|_2 }{\epsilon} \right) \right\}. \end{split} \end{equation} Since the first term is already minorized by $(\ref{eqn3.7})$, it only remains to minorize the penalty function in the equation $(\ref{eqn4.3})$ to obtain a surrogate function of $\phi_{\lambda,\epsilon}({\bm \gamma})$. The supporting hyperplane minorization for $\displaystyle -\lambda\theta_i\left\{ x^{\frac 12}-\epsilon\log\left(1+\frac{ x^{\frac 12} }{\epsilon} \right) \right\}$ is \begin{equation}\tag{4.4}\label{eqn4.4} -\lambda\theta_i\left\{ x^{\frac 12}-\epsilon \log\left( 1+\frac{ x^{\frac 12}}{\epsilon} \right) \right\} \geq -\lambda\theta_i\left\{ {x^{(t)}}^{\frac 12}-\epsilon \log\left( 1+\frac{ {x^{(t)}}^{\frac 12}}{\epsilon} \right)\right\} + \frac{-\lambda\theta_i(x-x^{(t)})}{2\left( {x^{(t)}}^{\frac 12}+\epsilon \right)}.
\end{equation} Taking $x=\|e_i^{\top}\widehat{{\bf\Sigma}}_{X}^{-\frac 12} {\bm \gamma} \|^2_2$ and $x^{(t)}= \|e_i^{\top}\widehat{{\bf\Sigma}}_{X}^{-\frac 12} {\bm \gamma}^{(t)} \|^2_2 $, and summing over $i=1,\ldots,p$ gives the minorization for the penalty function $-\lambda \sum_{i=1}^{p}\theta_i \rho_i({\bm \gamma})$, i.e., \begin{equation}\tag{4.5}\label{eqn4.5} -\lambda \sum_{i=1}^{p}\theta_i \rho_i({\bm \gamma}) \geq \sum_{i=1}^{p} \frac{-\lambda\theta_i\|e_i^{\top}\widehat{{\bf\Sigma}}_{X}^{-\frac 12} {\bm \gamma} \|^2_2}{2(\|e_i^{\top}\widehat{{\bf\Sigma}}_{X}^{-\frac 12} {\bm \gamma}^{(t)} \|_2+\epsilon)}+c, \end{equation} where $c$ is an irrelevant constant. After some algebraic manipulation, we have \begin{equation}\tag{4.6}\label{eqn4.6} \sum_{i=1}^{p} \frac{-\lambda\theta_i\|e_i^{\top}\widehat{{\bf\Sigma}}_{X}^{-\frac 12} {\bm \gamma} \|^2_2}{2(\|e_i^{\top}\widehat{{\bf\Sigma}}_{X}^{-\frac 12} {\bm \gamma}^{(t)} \|_2+\epsilon)} =\frac 12 \mathrm {tr}\left( {\bm \gamma}^{\top} \widehat{{\bf\Sigma}}_{X}^{-\frac 12} \mathrm {diag}({ \Lambda}) \widehat{{\bf\Sigma}}_{X}^{-\frac 12} {\bm \gamma} \right), \end{equation} where $\displaystyle \Lambda=\left( \frac{-\lambda\theta_1 }{\|e_1^{\top}\widehat{{\bf\Sigma}}_{X}^{-\frac 12} {\bm \gamma}^{(t)} \|_2+\epsilon},\ldots, \frac{-\lambda\theta_p }{\|e_p^{\top}\widehat{{\bf\Sigma}}_{X}^{-\frac 12} {\bm \gamma}^{(t)} \|_2+\epsilon} \right)^{\top}$ is a $p\times 1$ column vector. Combining the minorizations $(\ref{eqn3.7})$ and $(\ref{eqn4.6})$ gives the overall minorization \begin{equation}\tag{4.7}\label{eqn4.7} g_{\lambda,\epsilon}({\bm \gamma}|{\bm \gamma}^{(t)})=\frac 12 \mathrm {tr}\left( {\bm \gamma}^{\top} \left[ {\bf Q}+\widehat{{\bf\Sigma}}_{X}^{-\frac 12} \mathrm {diag}({\Lambda}) \widehat{{\bf\Sigma}}_{X}^{-\frac 12} \right] {\bm \gamma} \right)+\mathrm {tr}({\bm \gamma}^{\top}{\bf L}).
\end{equation} Note that the surrogate function $(\ref{eqn4.7})$ for the DCOV-based SVS model has the same form as the surrogate function $(\ref{eqn3.7})$ for the DCOV-based SDR model. Thus, we can use the same method to solve the DCOV-based SVS model. \section{Numerical Studies} We compare our proposed unified algorithm for solving both DCOV-based SDR and DCOV-based SVS with the corresponding existing algorithms, focusing on computational cost. Since the method in \citet*{chen2018efficient} for solving DCOV-based SVS combines SQP and local quadratic approximation \citep[LQA;][]{fan2001variable}, we denote it by SQP+LQA for convenience. SQP and SQP+LQA in all of the simulation studies use the default setups in the original work to guarantee accuracy. In MMRN, we set the stepsize multiplicative factor $\sigma=0.5$ and the perturbation constant $\epsilon=10^{-10}$ to avoid machine precision error. Besides, we set $\alpha=10^{-20}$ to reduce the number of line-search steps. The MMRN algorithm terminates at the $t$-th step when the relative error of the objective function, computed as $|f({\bm \gamma}^{(t)})-f({\bm \gamma}^{(t-1)})|/|f({\bm \gamma}^{(t-1)})|$, becomes smaller than $10^{-7}$ or the iteration number $t$ exceeds $1000$. Here the function $f$ denotes the objective functions in DCOV-based SDR and SVS. All algorithms use the solutions from existing dimension reduction methods such as SIR or DR as initial values. All code is implemented in \verb|Matlab| and run on a standard PC (Intel Core i9-8950HK CPU (2.90 GHz) and 32 GB RAM). For specific details about the implementation of our proposal, please refer to \url{https://github.com/runxiong-wu/MMRN}. \subsection{Simulation for DCOV-based SDR} We use the same simulation settings as in \citet*{sheng2016sufficient} to compare the performance of the MMRN algorithm and the SQP algorithm in solving DCOV-based SDR models.
There are three different models and two sample size configurations $(n,p)=(100,6)$ and $(500,20)$. Let $\epsilon$, $\epsilon_1$, and $\epsilon_2$ be independent standard normal random variables; the three models are: \begin{equation*} \begin{array}{l} \text { (A) } \quad Y= (\beta_1^{\top} X )^{2}+(\beta_2^{\top} X )+0.1\epsilon, \\ \text { (B) } \quad Y=\operatorname{sign}\left(2 \beta_1^{\top} X +\epsilon_{1}\right) \times \log \left|2 \beta_2^{\top} X +4+\epsilon_{2}\right|, \\ \text { (C) } \quad Y=\exp ( \beta_3^{\top} X ) \epsilon, \end{array} \end{equation*} where $\beta_1,\beta_2,$ and $\beta_3$ are $p$-dimensional vectors with their first six components being $(1,0,0,0,0,0)^{\top},(0,1,0,0,0,0)^{\top},$ and $(1,0.5,1,0,0,0)^{\top}$ and the last $p-6$ components being $0$ if $p>6$. Each model has three different kinds of $ X=(x_1,\ldots,x_p)^{\top}$: Part (1), standard normal predictors $X\sim N(0,{\bf I}_p)$; Part (2), nonnormal predictors; and Part (3), discrete predictors. Specific predictor setups for Part (2) and Part (3) in each model are summarized in Table 1. \begin{table}[width=.9\linewidth,cols=3,pos=h] \caption{Setups for Part (2) and Part (3).
Here iid means independent and identically distributed.} \label{tabel1} \begin{tabular*}{\tblwidth}{@{} LLL@{} } \toprule & Part (2) & Part (3) \\ \midrule Model A & $\displaystyle \left\{ \frac{x_i+2}{5}\right\}_{i=1}^{p} \stackrel{\rm iid}{\sim}\mbox{Beta}(0.75, 1)$ & $ \left\{ x_i\right\}_{i=1}^{p} \stackrel{\rm iid}{\sim}\mbox{Poisson}(1)$ \\ Model B & $\left\{ x_i\right\}_{i=1}^{p} \stackrel{\rm iid}{\sim}\mbox{Uniform}(-2, 2)$ & $\left\{ x_i\right\}_{i=1}^{p} \stackrel{\rm iid}{\sim}\mbox{Binomial}(10, 0.1)$ \\ Model C & $\displaystyle \left\{ \frac{x_i+1}{2}\right\}_{i=1}^{p} \stackrel{\rm iid}{\sim} \mbox{Beta}(1.5, 1)$ & $ \left\{ x_i \right\}_{i \not = 6} \stackrel{\rm iid}{\sim} \mbox{Poisson}(1) \mbox{ and } x_6 \sim \mbox{Binomial}(10, 0.3) $ \\ \bottomrule \end{tabular*} \end{table} Each simulation scenario repeats 100 times. At each repetition, we use the following distance to measure the accuracy of the estimator $\hat{\bm{\beta}}$ \begin{equation}\notag \Delta_m(P_{\hat{\bm{\beta}}}, P_{\bm{\beta}})=\|P_{\hat{\bm{\beta}}}-P_{\bm{\beta}} \|, \end{equation} where $\bm{\beta}$ is a basis of the true central subspace, $P_{\hat{\bm{\beta}}}$ and $P_{\bm{\beta}}$ are the respective projections of $\hat{\bm{\beta}}$ and $\bm{\beta}$, and $\|\cdot\|$ is the maximum singular value of a matrix. The smaller the $\Delta_m$ is, the more accurate the estimator is. We report the mean and the standard error of the $\Delta_m$'s and CPU times in Table \ref{tabel2}. We can observe that both the SQP algorithm and the MMRN algorithm have satisfactory performance in terms of estimation accuracy, but the MMRN algorithm takes less time than the SQP algorithm. For Part (3) of Model A at $n=500$ and $p=20$, the MMRN algorithm takes about 2 seconds on average while the SQP algorithm averages more than 50 seconds. It is approximately 25 times faster. Also, the MMRN algorithm is more stable than the SQP algorithm since the standard deviation of its running time is smaller.
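The distance $\Delta_m$ takes only a few lines to compute. The following Python/NumPy sketch (illustrative, not the paper's code) implements it via projection matrices and checks the two extreme cases of identical and mutually orthogonal subspaces.

```python
import numpy as np

def proj(B):
    # orthogonal projection onto the column space of B
    return B @ np.linalg.inv(B.T @ B) @ B.T

def delta_m(B_hat, B):
    # Delta_m: maximum singular value of the difference of the two projections
    return np.linalg.norm(proj(B_hat) - proj(B), ord=2)

p = 6
beta = np.eye(p)[:, :2]                            # true basis: first two coordinates
same = beta @ np.array([[2.0, 1.0], [0.0, 3.0]])   # a different basis of the same subspace
orth = np.eye(p)[:, 2:4]                           # a basis of an orthogonal subspace

assert np.isclose(delta_m(same, beta), 0.0)        # identical subspaces: distance 0
assert np.isclose(delta_m(orth, beta), 1.0)        # orthogonal subspaces: distance 1
```

Since $\Delta_m$ depends only on the projections, it is invariant to the choice of basis, which is why it is a fair accuracy measure for subspace estimates.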
Overall, the MMRN algorithm has almost the same performance as the SQP algorithm across various models, but in less time. \begin{table}[width=\linewidth,cols=7,pos=!htbp] \caption{Simulation results under the same settings as in \citet*{sheng2016sufficient}. The means (standard errors), averaged over $100$ datasets, are reported.} \label{tabel2} \begin{tabular*}{\tblwidth}{@{}LLLLLLL@{} } \toprule \multirow{2}{*}{$(n,\;p)$}&\multirow{2}{*}{Model}&\multirow{2}{*}{Part}&\multicolumn{2}{L}{ SQP }&\multicolumn{2}{L}{ MMRN }\cr \cmidrule(rr){4-5} \cmidrule(rr){6-7} &&& $\bar{\Delta}_m$ &Time (sec)& $\bar{\Delta}_m$ &Time (sec)\cr \midrule $n=100,\; p=6$& A & (1) &0.19(0.06)&0.52(0.16)&0.19(0.06)&0.08(0.03)\\ & & (2) &0.19(0.06)&0.55(0.09)&0.19(0.06)&0.07(0.02)\\ & & (3) &0.00(0.01)&1.18(0.26)&0.00(0.01)&0.12(0.08)\\ & B & (1) &0.29(0.10)&0.49(0.20)&0.29(0.10)&0.18(0.09)\\ & & (2) &0.22(0.07)&0.44(0.08)&0.22(0.07)&0.10(0.03)\\ & & (3) &0.28(0.18)&0.48(0.17)&0.27(0.18)&0.13(0.10)\\ & C & (1) &0.20(0.07)&0.38(0.19)&0.20(0.07)&0.16(0.06)\\ & & (2) &0.31(0.12)&0.33(0.08)&0.30(0.10)&0.25(0.13)\\ & & (3) &0.22(0.10)&0.39(0.11)&0.22(0.10)&0.11(0.05)\\ \midrule $n=500,\; p=20$& A & (1) &0.16(0.02)&11.41(1.84)&0.16(0.02)&1.27(0.12)\\ & & (2) &0.17(0.03)&13.47(1.96)&0.17(0.03)&1.31(0.14)\\ & & (3) &0.00(0.00)&53.61(4.84)&0.00(0.00)&2.02(0.58)\\ & B & (1) &0.24(0.04)&10.26(1.63)&0.24(0.04)&3.03(0.37)\\ & & (2) &0.19(0.03)&10.56(2.40)&0.19(0.03)&1.92(0.20)\\ & & (3) &0.18(0.07)&14.72(3.64)&0.18(0.07)&2.24(0.47)\\ & C & (1) &0.15(0.03)&9.64(0.96)&0.15(0.03)&4.13(0.67)\\ & & (2) &0.24(0.04)&11.20(1.16)&0.24(0.04)&10.59(3.16)\\ & & (3) &0.14(0.03)&12.29(1.37)&0.14(0.03)&3.34(0.55)\\ \bottomrule \end{tabular*} \end{table} To test the performance of our proposed MMRN algorithm on large datasets, we use four different levels for the sample size configuration $(n,p)$: $(500, 50)$, $(1000, 100)$, $(2000, 200)$, and $(3000, 300)$.
Here, we only consider the cases with the standard normal predictors and generate 20 datasets for each study. Figure \ref{fig2} displays the average runtime for each algorithm under the different problem sizes considered. We can see that our proposed algorithm outperforms the SQP algorithm even on large datasets. Note that we did not run the SQP algorithm on sample size $(n,p)=(3000,300)$ for Model C with standard normal predictors since it would take too much time ($> 7$ hours per run) to solve the problem. \begin{figure}[pos=!htbp] \centering \subfigure[Model A part (1)]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{fig/ModelA} \end{minipage}% }% \subfigure[Model B part (1)]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{fig/ModelB} \end{minipage}% }% \\ \subfigure[Model C part (1)]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{fig/ModelC} \end{minipage} } \centering \caption{ Computational performance comparison on large problem sizes for three different models with standard normal predictors. The mean CPU time, averaged over 20 datasets, is reported. There was no significant difference between the two methods in estimation accuracy; therefore, estimation accuracy is not displayed in the graph.} \label{fig2} \end{figure} \subsection{Simulation for DCOV-based SVS} This part compares the performance of our proposed MMRN algorithm and the SQP+LQA algorithm in solving DCOV-based SVS models. We consider two sample size configurations $(n,p)=(60,24)$ and $(120,24)$ and generate 100 datasets for each simulation. To assess how well the algorithms select variables, we define the true positive rate TPR as the proportion of correctly identified active predictors, and the false positive rate FPR as the proportion of irrelevant predictors that are incorrectly identified as active.
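These two rates can be sketched in a few lines of Python (illustrative only; the toy estimate, the active set, and the truncation tolerance below are placeholders). A predictor counts as selected when the corresponding row of the estimated basis is numerically nonzero.

```python
import numpy as np

def tpr_fpr(beta_hat, active, tol=1e-7):
    # a predictor is selected if its row of beta_hat is not numerically zero
    selected = np.linalg.norm(beta_hat, axis=1) > tol
    tpr = selected[active].mean()      # fraction of true actives recovered
    fpr = selected[~active].mean()     # fraction of irrelevant predictors selected
    return tpr, fpr

p = 6
beta_hat = np.zeros((p, 2))
beta_hat[[0, 1, 3], 0] = [0.5, 0.5, 1e-9]   # rows 0, 1 selected; row 3 below tolerance
active = np.array([True, True, True, False, False, False])

tpr, fpr = tpr_fpr(beta_hat, active)
assert np.isclose(tpr, 2 / 3) and np.isclose(fpr, 0.0)
```

A perfect selection corresponds to TPR $=1$ and FPR $=0$.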
When computing the TPR and FPR in practice, the estimate obtained by the MMRN algorithm is truncated by zeroing out its entries whose magnitude is smaller than $10^{-7}$. In addition, we use the Bayesian information criterion (BIC) to select the tuning parameters; see \citet*{chen2018efficient}. We conduct the following simulation studies with the same model settings as the scenarios $n > p$ in \citet*{chen2018efficient}. \begin{itemize}[leftmargin= 60 pt] \item[{\it Study 1.}] A nonlinear regression model with four active predictors: \begin{equation}\notag Y=(\beta_1^{\top}X+0.5)^2+0.5\epsilon, \end{equation} where $\epsilon\sim N(0,1)$ and $X\sim N(0, {\bf\Sigma})$ with $\Sigma_{ij}=0.5^{|i-j|}$ for $1\leq i,j\leq 24$. The central subspace is spanned by the vector $\beta_1=(0.5,0.5,0.5,0.5,0_{p-4})^{\top}$. \item[{\it Study 2.}] A nonlinear regression model with two active predictors: \begin{equation}\notag Y=\frac{\beta_1^{\top}X}{ 0.5+(\beta_2^{\top}X+1.5)^2 }+0.2\epsilon, \end{equation} where $\epsilon\sim N(0,1)$ and $X\sim N(0, {\bf\Sigma})$ with $\Sigma_{ij}=0.5^{|i-j|}$ for $1\leq i,j\leq 24$. The central subspace is spanned by the vectors $\beta_1=(1,0,0_{p-2})^{\top}$ and $\beta_2=(0,1,0_{p-2})^{\top}$. \item[{\it Study 3.}] A nonlinear regression model with four active predictors: \begin{equation}\notag Y=(\beta_1^{\top}X)^2+|\beta_2^{\top}X|+0.5\epsilon, \end{equation} where $\epsilon\sim N(0,1)$. The predictor $X=(x_1,\ldots,x_{24})^{\top}$ is defined as follows: the last $23$ components $(x_2,\ldots,x_{24})^{\top} \sim N(0, {\bf\Sigma})$ with $\Sigma_{ij}=0.5^{|i-j|}$ for $1\leq i,j\leq 23$ and the first component $x_1=|x_2+x_3|+\xi$, where $\xi\sim N(0,1)$. The central subspace is spanned by the vectors $\beta_1=(0.5,0.5,0.5,0.5,0_{p-4})^{\top}$ and $\beta_2=(0.5,-0.5,0.5,-0.5,0_{p-4})^{\top}$.
\item[{\it Study 4.}] A multivariate response model with four active predictors: \begin{equation}\notag \left\{ \begin{aligned} Y_1 &=\beta_1^{\top} X+\epsilon_{1}, \\ Y_2 &=(\beta_2^{\top} X+0.5)^2+\epsilon_{2}, \\ \end{aligned} \right. \end{equation} where $\epsilon_{1}, \epsilon_{2} \stackrel{\rm iid}{\sim} N(0,1)\mbox{ and } X \sim N(0, \Sigma)$ with $\Sigma_{ij}=0.5^{|i-j|}$ for $1\leq i,j\leq 24$. The central subspace is spanned by the vectors $\beta_1=(0.5,0.5,0.5,0.5,0_{p-4})^{\top}$ and $\beta_2=(0.5,-0.5,0.5,-0.5,0_{p-4})^{\top}$. \end{itemize} Table \ref{tabel3} gives the simulation results. The MMRN algorithm is much less time-consuming than the SQP+LQA algorithm while achieving the same or even slightly better TPR and FPR. In Study 2 and Study 4 in particular, the MMRN algorithm outperforms SQP+LQA in both TPR and FPR while running nearly 100 times faster. \begin{table}[width=.9\linewidth,cols=8,pos=!htpb] \centering \caption{Simulation results under the same settings as in \citet*{chen2018efficient}.
The means, averaged over $100$ datasets, are reported.} \label{tabel3} \begin{tabular*}{\tblwidth}{@{} LLLLLLLL@{}} \toprule \multirow{2}{*}{}&\multirow{2}{*}{}&\multicolumn{3}{L}{ SQP+LQA }&\multicolumn{3}{L}{ MMRN }\cr \cmidrule(rr){3-5} \cmidrule(rr){6-8} & & TPR &FPR &Time (sec)& TPR &FPR &Time (sec)\cr \midrule Study 1&$n=60$ &0.695&0.063&385.5&0.685&0.077&12.1 \cr &$n=120$&0.990&0.004&532.9&0.988&0.002&27.6\cr Study 2&$n=60$ &0.770&0.031&1051.2&0.870&0.016&5.4 \cr &$n=120$&0.930&0.010&1518.3&0.975&0.004&9.4\cr Study 3&$n=60$ &0.715&0.010&1122.4&0.725&0.002&11.9 \cr &$n=120$&0.785&0.002&1746.3&0.785&0.001&26.8\cr Study 4&$n=60$ &0.655&0.029&1293.8&0.700&0.011&12.9\cr &$n=120$&0.905&0.009&1778.4&0.930&0.007&30.0\cr \bottomrule \end{tabular*} \end{table} \subsection{Real Data Analysis} In this part, we revisit the Boston housing data from \citet{HARRISON197881}, \citet{zhou2008} and \citet{chen2018efficient} to compare our proposed MMRN algorithm with the SQP+LQA algorithm. Following the previous studies, we remove the observations with crime rate greater than $3.2$. The trimmed Boston housing data contains $374$ observations, with the response variable $Y$ being the median value of owner-occupied homes in each of the $374$ census tracts in the Boston Standard Metropolitan Statistical Area. There are $13$ predictors, which correspond to per capita crime rate by town; proportion of residential land zoned for lots over $25,000$ sq.ft; proportion of nonretail business acres per town; Charles River dummy variable; nitric oxides concentration; average number of rooms per dwelling; proportion of owner-occupied units built prior to $1940$; weighted distances to five Boston employment centers; index of accessibility to radial highways; full-value property-tax rate; pupil-teacher ratio by town; proportion of blacks by town; and percentage of lower status of the population. It has been found that two directions are adequate to estimate the central subspace.
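Before comparing the fits, it may help to recall the quantity both algorithms optimize. A minimal sketch of the squared empirical distance covariance, via the standard double-centering construction (not our implementation), is:

```python
import numpy as np

def dcov2(X, Y):
    """Squared empirical distance covariance of paired samples X (n x p)
    and Y (n x q), computed from double-centered pairwise Euclidean
    distance matrices."""
    def centered(Z):
        D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()
    return (centered(X) * centered(Y)).mean()
```

The empirical quantity is always nonnegative and vanishes when one sample is constant; DCOV-based SVS maximizes it over sparse projections of the predictors.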
After these preparations, we fit the DCOV-based SVS model using the SQP+LQA algorithm and the MMRN algorithm. There is little difference in their predictive performance, but the computing time for the two methods is very different: the total optimization time is about $2464$ seconds for the SQP+LQA algorithm and about $52.34$ seconds for the MMRN algorithm. Our algorithm is approximately $47$ times faster than the competitor. \section{Conclusion} In this article, we observe that the empirical distance covariance admits a difference-of-convex-functions decomposition. Based on this observation, we leverage the MM principle to design powerful and versatile algorithms that apply uniformly to DCOV-based SDR and DCOV-based SVS models. The proposed algorithms take a single Riemannian Newton step at each iteration to handle the manifold constraints. The simulation studies show that our proposed algorithms are highly efficient and very stable even in large-$n$ and large-$p$ scenarios. Furthermore, we establish the convergence property of our proposed algorithms under mild conditions. As possible future work, we plan to design a new algorithm that handles the large-$p$, small-$n$ scenario directly, rather than incorporating it into the framework of sequential SDR \citep{yin2015sequential}. \section*{Acknowledgments} We gratefully thank the Editor, the Associate Editor and two referees for their questions, constructive comments and suggestions. This work is supported by SUSTech startup funding.
\section{Full excluded middle with restricted conclusions} \label{sec:full-excluded-middle} Consider a system of natural deduction for intuitionistic arithmetic, to which we add restricted classical reasoning in the form of rule $\textsc{em}^{-}$ : \begin{prooftree} \AxiomC{$\Gamma, A \vdash \exists x \emp{}$} \AxiomC{$\Gamma, \neg A \vdash \exists x \emp{}$} \RightLabel{$\textsc{em}^{-}$ } \BinaryInfC{$\Gamma \vdash \exists x \emp{}$} \end{prooftree} That is, we allow the elimination of instances of the excluded middle for arbitrary formulas $A$, but only when the conclusion is a $\Sigma_1^0$ formula. The following deduction, similar to the one of the previous chapter, gives a proof of Markov's principle by using the excluded middle rule on the formula $\exists \alpha \emp{}^\bot$: \begin{prooftree} \AxiomC{$[\exists \alpha \emp{}^{\bot}]_{\textsc{em}^-}$} \AxiomC{$[\neg \forall \alpha \emp{}]_{(1)}$} \AxiomC{$[\neg \exists \alpha \emp{}^\bot]_{\textsc{em}^{-}}$} \noLine \UnaryInfC{$\mathcal{D}$} \noLine \UnaryInfC{$\forall \alpha \emp{}$} \BinaryInfC{$\bot$} \UnaryInfC{$\exists \alpha \emp{}^{\bot}$} \RightLabel{$\textsc{em}^{-}$ } \BinaryInfC{$\exists \alpha \emp{}^{\bot}$} \RightLabel{(1)} \UnaryInfC{$\neg \forall \alpha \emp{} \to \exists \alpha \emp{}^{\bot}$} \end{prooftree} where $\mathcal{D}$ is, as in the previous section, \begin{prooftree} \AxiomC{$\emp{}(\alpha) \lor \emp{}^\bot(\alpha)$} \AxiomC{$[\emp{}(\alpha)]_{\lor\mbox{-E}}$} \AxiomC{$\neg \exists \alpha \emp{}^{\bot}(\alpha)$} \AxiomC{$[\emp{}^\bot (\alpha)]_{(1)}$} \UnaryInfC{$\exists \alpha \emp{}^{\bot} (\alpha)$} \BinaryInfC{$\bot$} \RightLabel{(1)} \UnaryInfC{$\neg \emp{}^\bot(\alpha)$} \AxiomC{$[\emp{}^\bot(\alpha)]_{\lor\mbox{-E}}$} \BinaryInfC{$\bot$} \UnaryInfC{$\emp{}(\alpha)$} \RightLabel{$\lor \mbox{-E}$} \TrinaryInfC{$\emp{}(\alpha)$} \UnaryInfC{$\forall \alpha \emp{} (\alpha)$} \end{prooftree} Conversely, given a system of intuitionistic arithmetic $\HA$ with Markov's principle as axiom
$\textsc{mrk}$ : $\neg \forall \alpha \emp{} \to \exists \alpha \emp{}^{\bot}$, we can obtain rule $\textsc{em}^{-}$ as follows: assuming we have proofs \AxiomC{$A$} \noLine \UnaryInfC{\vdots} \noLine \UnaryInfC{$\exists \alpha \emp{}$} \DisplayProof and \AxiomC{$\neg A$} \noLine \UnaryInfC{\vdots} \noLine \UnaryInfC{$\exists \alpha \emp{}$} \DisplayProof we build the following proof of $\exists \alpha \emp{}$: \begin{prooftree} \AxiomC{$[\forall \alpha \emp{}^{\bot}]_{(1)} $} \AxiomC{$[\neg A]_{(2)}$} \noLine \UnaryInfC{\vdots} \noLine \UnaryInfC{$\exists \alpha \emp{}$} \noLine \BinaryInfC{$\mathcal{D}$} \noLine \UnaryInfC{$\bot$} \RightLabel{(2)} \UnaryInfC{$\neg \neg A$} \AxiomC{$[\forall \alpha \emp{}^{\bot}]_{(1)}$} \AxiomC{$[A]_{(3)}$} \noLine \UnaryInfC{\vdots} \noLine \UnaryInfC{$\exists \alpha \emp{}$} \noLine \BinaryInfC{$\mathcal{D}$} \noLine \UnaryInfC{$\bot$} \RightLabel{(3)} \UnaryInfC{$\neg A$} \BinaryInfC{$\bot$} \RightLabel{(1)} \UnaryInfC{$\neg \forall \alpha \emp{}^{\bot}$} \AxiomC{\textsc{mrk}} \noLine \UnaryInfC{$\neg \forall \alpha \emp{}^{\bot} \to \exists \alpha \emp{}$} \BinaryInfC{$\exists \alpha \emp{}$} \end{prooftree} where $\mathcal{D}$ is given by \begin{prooftree} \AxiomC{$\exists \alpha \emp{}(\alpha)$} \AxiomC{$\forall \alpha \emp{}^{\bot}$} \UnaryInfC{$\emp{}^\bot(\alpha)$} \AxiomC{$[\emp{}(\alpha)]_{\exists}$} \BinaryInfC{$\bot$} \RightLabel{$\exists$} \BinaryInfC{$\bot$} \end{prooftree} We now have a more general result than the one from the previous chapter: Markov's principle is equivalent to allowing instances of the excluded middle to be used as axioms, provided the conclusion of the $\lor$-elimination rule is a $\Sigma_1^0$ formula. In one sense this tells us that when conclusions are restricted to $\Sigma_1^0$ formulas, allowing premises of arbitrary complexity does not let us prove more than we could already prove with simply existential premises.
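The propositional core of this equivalence can be checked mechanically. The following Lean 4 sketch (the names are ours) derives Markov's principle from double-negation elimination restricted to a $\Sigma_1^0$ statement, which is exactly the logical content of rule $\textsc{em}^{-}$; the hypothesis is phrased as $\neg \forall n \, \neg p(n)$, which for decidable $\emp{}$ matches $\neg \forall \alpha \, \emp{}$ with $p$ the complement of $\emp{}$:

```lean
-- Sketch: from double-negation elimination for the Σ⁰₁ statement
-- ∃ n, p n (the content of rule em⁻), Markov's principle follows
-- by a purely intuitionistic argument.
theorem markov (p : Nat → Prop)
    (dne : ¬¬(∃ n, p n) → ∃ n, p n)
    (h : ¬ ∀ n, ¬ p n) : ∃ n, p n :=
  dne (fun hne => h (fun n hp => hne ⟨n, hp⟩))
```

Note that no classical axiom is used here: all the classical strength sits in the hypothesis `dne`.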
\section{A new negative translation} \label{sec:new-negat-transl} \subsubsection{Negative translations} \label{sec:negat-transl} Negative translations have been known for a long time as a tool to embed classical reasoning into intuitionistic logic. Essentially, they consist in a method to transform every formula provable in a classical theory into another formula that is classically equivalent to it and that, although not intuitionistically equivalent, is provable from the translated theory. The most prominent example is probably the so-called \emph{G\"odel-Gentzen} translation \cite{Gödel33}, also known as the \emph{double negation} translation. It assigns to every formula $F$ a formula $F^N$ defined by induction on its structure: \begin{itemize} \item If $F$ is atomic, $F^N = \neg \neg \ F$ \item $(F_1 \land F_2)^N$ is $F_1^N \land F_2^N$ \item $(F_1 \lor F_2)^N$ is $\neg (\neg F_1^N \land \neg F_2^N)$ \item $(F_1 \to F_2)^N$ is $F_1^N \to F_2^N$ \item $(\neg F)^N$ is $\neg F^N$ \item $(\forall x \ F)^N$ is $\forall x \ \ F^N$ \item $(\exists x \ F)^N$ is $\neg \forall x \ \neg \ F^N$ \end{itemize} The following theorem states the result we anticipated informally: \begin{thm}[G\"odel-Gentzen translation] \label{thm:negat-transl} Let $A_0,\dots, A_n$ be formulas. Then $A_1,\dots, A_n \vdash A_0$ is classically derivable if and only if $A_1^N,\dots, A_n^N \vdash A_0^N$ is intuitionistically derivable. \end{thm} For a complete discussion and a proof of this result, one may refer to \cite{Troelstra73}. The translation proves especially useful in the case of arithmetic, thanks to the following theorem: \begin{thm} \label{thm:neg-trans-ha} For any formula $A$ in the language of arithmetic, if $\PA \vdash A$ then $\HA \vdash A^N$ \end{thm} \begin{proof} Thanks to \cref{thm:negat-transl}, we already know that $\PA \vdash A$ if and only if $\HA^N \vdash A^N$. What we need to show is that if $\HA^N \vdash A^N$, then $\HA \vdash A^N$.
In order to do so, we need to prove the translated axioms in $\HA$. We know that $\HA \vdash ((s=t) \to \neg \neg (s=t)) \land (\neg \neg (s=t) \to (s=t))$, and therefore, since the axioms for equality only use $\forall$ and $\to$, their translations are easily shown to be equivalent to the original axioms. Consider the translation of an instance of the induction axiom: \[(\forall x (F(x) \to F (\mathbf{s}x)) \to F (0) \to \forall x F(x))^N = \forall x (F^N(x) \to F^N (\mathbf{s}x)) \to F^N (0) \to \forall x F^N(x)\] Since the second formula is just the instance of the axiom of induction for the formula $F^N$, it is provable in $\HA$. Therefore, we can conclude that $\HA \vdash \HA^N$, and thus $\HA \vdash A^N$. \end{proof} The negative translation allows us to embed all of classical arithmetic inside intuitionistic arithmetic. However, the resulting statements often do not provide a clear computational interpretation. Consider for example the translation of an existential statement: we obtain something of the form $\neg \forall x \ \neg F$, and it is not clear how one could extract a witness. To address this issue, one needs another translation such as the A-translation of Friedman \cite{Friedman78}. Essentially, it consists in replacing every atomic predicate $\emp{}$ with $\emp{} \lor A$ for an arbitrary formula $A$.
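As a concrete instance of the phenomenon just described, computing the $N$-translation of a simply existential statement directly from the clauses above gives
$$(\exists x \ \emp{}(x))^N \;=\; \neg \forall x \ \neg \, (\emp{}(x))^N \;=\; \neg \forall x \ \neg\neg\neg \ \emp{}(x),$$
from which no witness for $x$ can be read off directly.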
When we combine it with the G\"odel translation, we obtain the following definition: given formulas $F_1,F_2,A$, where no free variable of $A$ is quantified in $F_1$ or $F_2$, \begin{itemize} \item $\neg_A F_1 = F_1 \to A$, $\bot^A = A$ \item $F_1^A = \neg_A \neg_A F_1$ if $F_1$ is atomic \item $(F_1 \land F_2)^A = F_1^A \land F_2^A$ \item $(F_1 \lor F_2)^A = \neg_A(\neg_A F_1^A \land \neg_A F_2^A)$ \item $(F_1 \to F_2)^A = F_1^A \to F_2^A$ \item $(\forall x \ F_1)^A = \forall x \ F_1^A$ \item $(\exists x \ F_1)^A = \neg_A \forall x \ \neg_A F_1^A$ \end{itemize} We can see that it behaves very similarly to the usual G\"odel-Gentzen translation, but with the addition that negation is parametrized by the formula $A$. With techniques very similar to those of \cref{thm:negat-transl} and \cref{thm:neg-trans-ha} we have that if $\Gamma \vdash F$ in $\PA$, then $\Gamma^A \vdash F^A$ in $\HA$. However, the new translation also allows for a major result for the constructive interpretation of some statements of classical arithmetic: \begin{thm}[Friedman] Let $\emp{}$ be an atomic predicate. Then $\PA \vdash \exists x \ \emp{}(x)$ if and only if $\HA \vdash \exists x \ \emp{}(x)$ \end{thm} \begin{proof} Suppose $\PA \vdash \exists x \ \emp{}(x)$; then $\HA \vdash (\exists x \ \emp{}(x))^A$, i.e. $\HA \vdash (\forall x \ \neg_A \neg_A \neg_A \emp{}(x)) \to A$. Since it can be seen that $\neg_A \neg_A \neg_A F \dashv \vdash \neg_A F$ in $\HA$ for all $F$, $\HA \vdash (\forall x \ \neg_A \emp{}(x)) \to A$. Now, since we can use any formula for $A$, we use $\exists x \ \emp{}(x)$: in this way we get $\HA \vdash \forall x \ (\emp{}(x) \to \exists x \ \emp{}(x)) \to \exists x \ \emp{}(x)$. Since the antecedent of the formula is provable, we get $\HA \vdash \exists x \ \emp{}(x)$. \end{proof} \subsubsection{The $\exists$-translation} We will now introduce a new translation and consider it for statements of arithmetic.
Like the usual negative translations, it will have the property that translated formulas are classically equivalent to the original ones, and that the translated axioms of arithmetic are intuitionistically provable in $\HA$. However, we will not immediately present a result linking classical provability and intuitionistic provability as we did before; indeed, the syntactic translation method presented here will be used in the next section together with a more proof-theoretic technique in order to provide a new interpretation of the simply existential statements of classical arithmetic. Our translation is particularly simple when compared with the usual ones. It leaves all logical connectives untouched, except for the case of $\forall$, which is replaced by $\neg \exists \neg$. Formally, we define the translation $\cdot^\exists$ by induction on the structure of the formula: \begin{itemize} \item If $F$ is atomic, $F^\exists = F$ \item $(F_1 \land F_2)^\exists$ is $F_1^\exists \land F_2^\exists$ \item $(F_1 \lor F_2)^\exists$ is $F_1^\exists \lor F_2^\exists$ \item $(F_1 \to F_2)^\exists$ is $F_1^\exists \to F_2^\exists$ \item $(\forall x \ F)^\exists$ is $\neg \exists x \ \neg F^\exists$ \item $(\exists x \ F)^\exists$ is $\exists x \ F^\exists$ \end{itemize} We know that $\forall x \ A(x)$ is classically equivalent to $\neg \exists x \ \neg A(x)$ regardless of $A$, and thus we can easily state that $\PA \vdash \PA^\exists$ and $\PA^\exists \vdash \PA$. So the following is also easy to see: \begin{proposition} \label{prop:pa-to-pa-e} $\PA \vdash F$ if and only if $\PA^\exists \vdash F^\exists$ \end{proposition} \begin{proof} By a straightforward induction on the derivation. \end{proof} The question is a bit more complicated for intuitionistic arithmetic: in general, the translated formula is not intuitionistically equivalent to the original one. Nevertheless, we have the following result: \begin{thm} $\HA \vdash \HA^\exists$.
So, every formula provable in $\HA^\exists$ is provable in $\HA$. \label{thm:exists-translation} \end{thm} \begin{proof} The axioms for equality and the definition of the successor are left untouched by the translation. Consider now the translation of the axiom of induction for an arbitrary formula $P$: $$ (Ind)^\exists = (P(0) \land (\forall \alpha \ (P(\alpha) \to P(\alpha+1))) \to \forall \alpha \ P(\alpha))^\exists = $$ $$P(0) \land (\neg \exists \alpha. \ \neg (P(\alpha) \to P(\alpha+1))) \to \neg \exists \alpha \neg P(\alpha)$$ \noindent The formal derivation in \cref{fig:ind-exists} gives a proof of this formula in $\HA$. Therefore, we have that $\HA \vdash \HA^\exists$, and so whenever $\HA^\exists \vdash F$, also $\HA \vdash F$. \end{proof} \begin{sidewaysfigure} \small{ \begin{prooftree} \AxiomC{$[p]_{(1)} : P(0) \land (\neg \exists \alpha. \ \neg (P(\alpha) \to P(\alpha+1)))$} \UnaryInfC{$\pi_1(p) : \neg \exists \alpha. \ \neg (P(\alpha) \to P(\alpha+1))$} \AxiomC{$[v]_{(5)} : \neg \neg P (\alpha)$} \AxiomC{$[u]_{(6)} : \neg P(\alpha + 1)$} \AxiomC{$[y]_{(7)} : P(\alpha) \to P(\alpha+1)$} \AxiomC{$[z]_{(8)} : P(\alpha)$} \BinaryInfC{$yz : P(\alpha+1)$} \BinaryInfC{$u(yz) : \bot$} \RightLabel{(8)} \UnaryInfC{$\lambda z. u(yz) : \neg P(\alpha)$} \BinaryInfC{$v(\lambda z . u(yz)) : \bot$} \RightLabel{$(7)$} \UnaryInfC{$ \lambda y \ v(\lambda z . u(yz)) : \neg (P(\alpha) \to P(\alpha+1))$} \UnaryInfC{$ \langle \alpha, \lambda y \ v(\lambda z . u(yz)) \rangle : \exists \alpha. \ \neg (P(\alpha) \to P(\alpha+1))$} \BinaryInfC{$\pi_1(p)( \langle \alpha, \lambda y \ v(\lambda z . u(yz)) \rangle) : \bot$} \RightLabel{$(6)$} \UnaryInfC{$ \lambda u. \pi_1(p)( \langle \alpha, \lambda y \ v(\lambda z . u(yz)) \rangle) : \neg \neg P(\alpha + 1)$} \RightLabel{$(5)$} \UnaryInfC{$ \lambda v \lambda u. \pi_1(p)( \langle \alpha, \lambda y \ v(\lambda z . u(yz)) \rangle) : \neg \neg P (\alpha) \to \neg \neg P(\alpha + 1)$} \UnaryInfC{$ \lambda \alpha \lambda v \lambda u.
\pi_1(p)( \langle \alpha, \lambda y \ v(\lambda z . u(yz)) \rangle) : \forall \alpha (\neg \neg P (\alpha) \to \neg \neg P(\alpha + 1))$} \end{prooftree} \caption{Proof of the inductive step ($\mathcal{D}_1$)} \begin{prooftree} \AxiomC{$[q]_{(2)} : \exists \alpha \neg P(\alpha)$} \AxiomC{$[x]_{(4)} : \neg P(0)$} \AxiomC{$[p]_{(1)} : P(0) \land (\neg \exists \alpha. \ \neg (P(\alpha) \to P(\alpha+1)))$} \UnaryInfC{$\pi_0(p) : P(0)$} \BinaryInfC{$ x \pi_0(p) : \bot$} \RightLabel{$(4)$} \UnaryInfC{$ \lambda x.x \pi_0(p) : \neg \neg P(0)$} \AxiomC{$\mathcal{D}_1$} \noLine \UnaryInfC{$ \lambda \alpha \lambda v \lambda u. \pi_1(p)( \langle \alpha, \lambda y \ v(\lambda z . u(yz)) \rangle) : \forall \alpha (\neg \neg P (\alpha) \to \neg \neg P(\alpha + 1))$} \insertBetweenHyps{\hskip -40pt} \RightLabel{Ind} \BinaryInfC{$\lambda \alpha \mathbf{R} (\alpha \ \lambda x.x \pi_0(p) \ \lambda \alpha \lambda v \lambda u. \pi_1(p)( \langle \alpha, \lambda y \ v(\lambda z . u(yz)) \rangle)) : \forall \alpha \neg \neg P(\alpha)$} \UnaryInfC{$\lambda \alpha \mathbf{R} (\alpha \ \lambda x.x \pi_0(p) \ \lambda \alpha \lambda v \lambda u. \pi_1(p)( \langle \alpha, \lambda y \ v(\lambda z . u(yz)) \rangle)) \beta: \neg \neg P(\beta)$} \AxiomC{$[t]_{(3) (\exists)} : \neg P(\beta)$} \insertBetweenHyps{\hskip -100pt} \BinaryInfC{$(\lambda \alpha \ (\mathbf{R} (\alpha \ \lambda x.x \pi_0(p) \ \lambda \alpha \lambda v \lambda u. \pi_1(p)( \langle \alpha, \lambda y \ v(\lambda z . u(yz)) \rangle)))\beta ) t : \bot$} \insertBetweenHyps{\hskip -120pt} \RightLabel{(3) ($\exists$-E)} \BinaryInfC{$q [(\beta,t).((\lambda \alpha \ (\mathbf{R} (\alpha \ \lambda x.x \pi_0(p) \ \lambda \alpha \lambda v \lambda u. \pi_1(p)( \langle \alpha, \lambda y \ v(\lambda z . u(yz)) \rangle))) \beta) t)] : \bot$} \RightLabel{(2)} \UnaryInfC{$\lambda q \ q [(\beta,t).((\lambda \alpha \ (\mathbf{R} (\alpha \ \lambda x.x \pi_0(p) \ \lambda \alpha \lambda v \lambda u. \pi_1(p)( \langle \alpha, \lambda y \ v(\lambda z . 
u(yz)) \rangle))) \beta) t)] : \neg \exists \alpha \neg P(\alpha)$} \RightLabel{(1)} \UnaryInfC{$\lambda p \lambda q \ q [(\beta,t).((\lambda \alpha \ (\mathbf{R} (\alpha \ \lambda x.x \pi_0(p) \ \lambda \alpha \lambda v \lambda u. \pi_1(p)( \langle \alpha, \lambda y \ v(\lambda z . u(yz)) \rangle))) \beta) t)] : P(0) \land (\neg \exists \alpha. \ \neg (P(\alpha) \to P(\alpha+1))) \to \neg \exists \alpha \neg P(\alpha) $} \end{prooftree} } \caption{Proof of $(Ind)^\exists$ in $\HA$} \label{fig:ind-exists} \end{sidewaysfigure} \section{Embedding classical proofs in $\HA + \EM^-$} We now go back to the system $\HA+\EM^-$ defined in \cref{sec:full-excluded-middle}. Since in this new system instances of the excluded middle are allowed on arbitrary formulas, we might be tempted to investigate how much of a classical proof can be reconstructed in it. A first approach can be the following: in case the statement to be proved is itself simply existential, we could allow occurrences of the excluded middle rule whenever we are sure they are the lowermost inferences. More formally, we introduce the notation \begin{prooftree} \AxiomC{$\mathcal{D}_1$} \noLine \UnaryInfC{$\exists \alpha \emp{}$} \AxiomC{$\mathcal{D}_2$} \noLine \UnaryInfC{$\exists \alpha \emp{}$} \AxiomC{\dots} \AxiomC{$\mathcal{D}_n$} \noLine \UnaryInfC{$\exists \alpha \emp{}$} \doubleLine \RightLabel{$\textsc{em}^{-}$ } \QuaternaryInfC{$\exists \alpha \emp{}$} \end{prooftree} to indicate that $\mathcal{D}_1$, $\mathcal{D}_2$ \dots $\mathcal{D}_n$ are proofs of $\exists \alpha \emp{}$ not using $\textsc{em}^{-}$ , possibly with open assumptions, and the conclusion is obtained by repeated usage of the $\textsc{em}^{-}$ rule on them (note that $\textsc{em}^{-}$ is indeed used only on a $\Sigma_1^0$ formula). We define the same notation similarly for $\textsc{em}$ .
Then clearly the new construct for $\textsc{em}^{-}$ can be directly replaced by instances of Markov's principle using the proof tree from \cref{sec:full-excluded-middle}. Our task for this section is thus to show that any proof (in $\PA$, i.e. $\HA+\EM$) of a simply existential statement can be rewritten into a proof in $\HA+\EM^-$ of the above form. In order to do so, we employ new permutation rules extending the ones defined in \cite{Aschieri16} to move the use of classical reasoning below purely intuitionistic proofs. In general, we could have an unrestricted use of the excluded middle, in the form of the rule $\textsc{em}$ . For every intuitionistic rule, one needs to move the classical rule below it: $\to$-introduction: $$\begin{aligned} \AxiomC{$[A]$} \AxiomC{$[B]_{(1)}$} \noLine \BinaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \AxiomC{$[\neg A]$} \AxiomC{$[B]_{(1)}$} \noLine \BinaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$C$} \RightLabel{(1)} \UnaryInfC{$B\to C$} \DisplayProof & \mbox{ $\leadsto$ } & \AxiomC{$[A]$} \AxiomC{$[B]_{(1)}$} \noLine \BinaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \RightLabel{(1)} \UnaryInfC{$B \to C$} \AxiomC{$[\neg A]$} \AxiomC{$[B]_{(2)}$} \noLine \BinaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \RightLabel{(2)} \UnaryInfC{$B \to C$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$B\to C$} \DisplayProof \\ \end{aligned} $$ $\to$-elimination/1: $$\begin{aligned} \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B \to C$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B \to C$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$B\to C$} \AxiomC{$\vdots$} \noLine \UnaryInfC{$B$} \BinaryInfC{$C$} \DisplayProof & \qquad \mbox{$\leadsto$ } \qquad & \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B \to C$} \AxiomC{$\vdots$} \noLine \UnaryInfC{$B$} \BinaryInfC{$C$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B \to C$} \AxiomC{$\vdots$} \noLine
\UnaryInfC{$B$} \BinaryInfC{$C$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$C$} \DisplayProof \\ \end{aligned} $$ $\to$-elimination/2: $$\begin{aligned} \AxiomC{$\vdots$} \noLine \UnaryInfC{$B \to C$} \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$B$} \BinaryInfC{$C$} \DisplayProof & \qquad \mbox{$\leadsto$ } \qquad & \AxiomC{$\vdots$} \noLine \UnaryInfC{$B \to C$} \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B$} \BinaryInfC{$C$} \AxiomC{$\vdots$} \noLine \UnaryInfC{$B \to C$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B$} \BinaryInfC{$C$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$C$} \DisplayProof \\ \end{aligned} $$ $\land$-introduction/1: $$\begin{aligned} \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$B$} \AxiomC{$\vdots$} \noLine \UnaryInfC{$C$} \BinaryInfC{$B \land C$} \DisplayProof & \qquad \mbox{$\leadsto$ } \qquad & \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B$} \AxiomC{$\vdots$} \noLine \UnaryInfC{$C$} \BinaryInfC{$B \land C$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B$} \AxiomC{$\vdots$} \noLine \UnaryInfC{$C$} \BinaryInfC{$B \land C$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$B \land C$} \DisplayProof \\ \end{aligned} $$ $\land$-introduction/2: $$\begin{aligned} \AxiomC{$\vdots$} \noLine \UnaryInfC{$B$} \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$C$} \BinaryInfC{$B \land C$} \DisplayProof & \qquad \mbox{$\leadsto$ } \qquad & \AxiomC{$\vdots$} \noLine \UnaryInfC{$B$} \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \BinaryInfC{$B \land C$} 
\AxiomC{$\vdots$} \noLine \UnaryInfC{$B$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \BinaryInfC{$B \land C$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$B \land C$} \DisplayProof \\ \end{aligned} $$ $\land$-elimination/1, $\land$-elimination/2: $$\begin{aligned} \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$A_1 \land A_2$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$A_1 \land A_2$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$A_1 \land A_2$} \UnaryInfC{$A_i$} \DisplayProof & \qquad \mbox{$\leadsto$ } \qquad & \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$A_1 \land A_2$} \UnaryInfC{$A_i$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$A_1 \land A_2$} \UnaryInfC{$A_i$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$A_i$} \DisplayProof \\ \end{aligned} $$ And similarly for $\lor$-introduction, $\lor$-elimination. $\exists$-introduction: $$\begin{aligned} \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B[m/\alpha]$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B[m/\alpha]$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$B[m/\alpha]$} \UnaryInfC{$\exists \alpha B$} \DisplayProof & \mbox{ $\leadsto$ } & \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B[m/\alpha]$} \UnaryInfC{$\exists \alpha B$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B[m/\alpha]$} \UnaryInfC{$\exists \alpha B$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$\exists \alpha B$} \DisplayProof \\ \end{aligned} $$ $\exists$-elimination/1: $$\begin{aligned} \AxiomC{$[A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$\exists \alpha B$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$\exists \alpha B$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$\exists \alpha B$} \AxiomC{$[B]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \BinaryInfC{$C$} \DisplayProof & \qquad \mbox{$\leadsto$ } \qquad & \AxiomC{$[A]$} \noLine 
\UnaryInfC{$\vdots$} \noLine \UnaryInfC{$\exists \alpha B$} \AxiomC{$[B]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \BinaryInfC{$C$} \AxiomC{$[\neg A]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$\exists \alpha B$} \AxiomC{$[B]$} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \BinaryInfC{$C$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$C$} \DisplayProof \\ \end{aligned} $$ $\exists$-elimination/2: $$\begin{aligned} \centerfloat \AxiomC{\vdots} \noLine \UnaryInfC{$\exists \alpha B$} \AxiomC{$[A]$} \AxiomC{$[B]$} \noLine \BinaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \AxiomC{$[\neg A]$} \AxiomC{$[B]$} \noLine \BinaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$C$} \BinaryInfC{$C$} \DisplayProof & \mbox{$\leadsto$ } & \AxiomC{\vdots} \noLine \UnaryInfC{$\exists \alpha B$} \AxiomC{$[A]$} \AxiomC{$[B]$} \noLine \BinaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \BinaryInfC{$C$} \AxiomC{\vdots} \noLine \UnaryInfC{$\exists \alpha B$} \AxiomC{$[\neg A]$} \AxiomC{$[B]$} \noLine \BinaryInfC{$\vdots$} \noLine \UnaryInfC{$C$} \BinaryInfC{$C$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$C$} \DisplayProof \\ \end{aligned} $$ Now, we would like to define a permutation for the case of the universal quantifier. However, it turns out that this is not possible: for the case of $\forall$-I we have no general way of defining one. Consider for example the proof \begin{prooftree} \AxiomC{$[P(x)]_\textsc{em}$} \UnaryInfC{$(P(x) \lor \neg P(x))$} \AxiomC{$[\neg P(x)]_\textsc{em}$} \UnaryInfC{$(P(x) \lor \neg P(x))$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$(P(x) \lor \neg P(x))$} \RightLabel{$\forall$-I} \UnaryInfC{$\forall x \ (P(x) \lor \neg P(x))$} \end{prooftree} \noindent Here clearly we have no way of moving the excluded middle below the universal introduction, since the variable $x$ is free before $\textsc{em}$ lets us discharge the assumptions.
This is where the translation from \cref{sec:new-negat-transl} comes to the rescue: clearly, proofs in $\PA^\exists$ will not contain applications of rules for the universal quantifier, and are thus suitable for our transformations. Therefore, the last rule for which we should give a permutation is the translated rule of induction $(Ind)^\exists$ for $\PA^\exists$: \begin{prooftree} \AxiomC{$\Gamma \vdash A(0)$} \AxiomC{$\Gamma \vdash \neg \exists \alpha \neg \ (A(\alpha) \to A(\mathsf{S}(\alpha)))$} \RightLabel{$Ind^\exists$} \BinaryInfC{$\Gamma \vdash \neg \exists \alpha \ \neg A(\alpha)$} \end{prooftree} The permutations for $Ind^\exists$ will be: \begin{prooftree} \AxiomC{\vdots} \noLine \UnaryInfC{$B(0)$} \AxiomC{[$A$]} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$\neg \exists \alpha \ \neg (B(\alpha)\rightarrow B(\mathsf{S}(\alpha)))$} \AxiomC{[$\neg A$]} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$\neg \exists \alpha \ \neg (B(\alpha)\rightarrow B(\mathsf{S}(\alpha)))$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$\neg \exists \alpha \ \neg (B(\alpha)\rightarrow B(\mathsf{S}(\alpha)))$} \BinaryInfC{$\neg \exists \alpha \ \neg B$} \end{prooftree} converts to: \begin{prooftree} \AxiomC{\vdots} \noLine \UnaryInfC{$B(0)$} \AxiomC{[$A$]} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$\neg \exists \alpha \ \neg (B(\alpha)\rightarrow B(\mathsf{S}(\alpha)))$} \BinaryInfC{$\neg \exists \alpha \ \neg B$} \AxiomC{\vdots} \noLine \UnaryInfC{$B(0)$} \AxiomC{[$\neg A$]} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$\neg \exists \alpha \ \neg (B(\alpha) \rightarrow B(\mathsf{S}(\alpha)))$} \BinaryInfC{$\neg \exists \alpha \ \neg B$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$\neg \exists \alpha \ \neg B$} \end{prooftree} and \begin{prooftree} \AxiomC{[$A$]} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B(0)$} \AxiomC{[$\neg A$]} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B(0)$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$B(0)$} \AxiomC{\vdots} \noLine
\UnaryInfC{$\neg \exists \alpha \ \neg (B(\alpha)\rightarrow B(\mathsf{S}(\alpha)))$} \BinaryInfC{$\neg \exists \alpha \ \neg B$} \end{prooftree} converts to: \begin{prooftree} \AxiomC{[$A$]} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B(0)$} \AxiomC{\vdots} \noLine \UnaryInfC{$\neg \exists \alpha \ \neg (B(\alpha)\rightarrow B(\mathsf{S}(\alpha)))$} \BinaryInfC{$\neg \exists \alpha \ \neg B$} \AxiomC{[$\neg A$]} \noLine \UnaryInfC{$\vdots$} \noLine \UnaryInfC{$B(0)$} \AxiomC{\vdots} \noLine \UnaryInfC{$\neg \exists \alpha \ \neg (B(\alpha)\rightarrow B(\mathsf{S}(\alpha)))$} \BinaryInfC{$\neg \exists \alpha \ \neg B$} \RightLabel{$\textsc{em}$ } \BinaryInfC{$\neg \exists \alpha \ \neg B$} \end{prooftree} \noindent By employing the permutation rules just defined, we can state \begin{proposition} \label{lemma:normal-pa-exists} Every proof of a formula $F$ in $\PA^\exists$ can be transformed into a proof \begin{prooftree} \AxiomC{$\mathcal{D}_1$} \noLine \UnaryInfC{$F$} \AxiomC{$\mathcal{D}_2$} \noLine \UnaryInfC{$F$} \AxiomC{\dots} \AxiomC{$\mathcal{D}_n$} \noLine \UnaryInfC{$F$} \doubleLine \RightLabel{$\textsc{em}$ } \QuaternaryInfC{ $F$} \end{prooftree} where $\mathcal{D}_1, \mathcal{D}_2 \dots \mathcal{D}_n$ are purely intuitionistic proofs. \end{proposition} \begin{proof} Proceed by induction on the structure of the proof. The base case, where the proof only contains axioms and a single rule, is trivial. Otherwise, assume there is at least one use of $\textsc{em}$ (otherwise the thesis holds vacuously) and consider the lowermost rule application: \begin{itemize} \item If it is $\textsc{em}$ , then the induction hypothesis can be applied to the subtrees corresponding to the two premises of the rule, yielding the thesis.
\item As an example of the case of unary rules, consider $\exists$-introduction; then the proof has the shape \AxiomC{\vdots} \noLine \UnaryInfC{$F^\prime[m/\alpha]$} \RightLabel{$\exists$-I} \UnaryInfC{$\exists \alpha F^\prime$} \DisplayProof Applying the induction hypothesis to the subproof corresponding to the premise, by our assumption we get a proof of the form \begin{prooftree} \AxiomC{$\mathcal{D}_1$} \noLine \UnaryInfC{$F^\prime[m/\alpha]$} \AxiomC{$\mathcal{D}_2$} \noLine \UnaryInfC{$F^\prime[m/\alpha]$} \AxiomC{\dots} \AxiomC{$\mathcal{D}_n$} \noLine \UnaryInfC{$F^\prime[m/\alpha]$} \doubleLine \RightLabel{$\textsc{em}$ } \QuaternaryInfC{$F^\prime[m/\alpha]$} \end{prooftree} Substituting this into the original proof and applying the permutation rule for $\exists$-introduction $n-1$ times, we move the $\exists$-introduction right below the intuitionistic parts; the proof then becomes \begin{prooftree} \AxiomC{$\mathcal{D}_1$} \noLine \UnaryInfC{$F^\prime[m/\alpha]$} \UnaryInfC{$\exists \alpha F^\prime$} \AxiomC{$\mathcal{D}_2$} \noLine \UnaryInfC{$F^\prime[m/\alpha]$} \UnaryInfC{$\exists \alpha F^\prime$} \AxiomC{\dots} \AxiomC{$\mathcal{D}_n$} \noLine \UnaryInfC{$F^\prime[m/\alpha]$} \UnaryInfC{$\exists \alpha F^\prime$} \doubleLine \RightLabel{$\textsc{em}$ } \QuaternaryInfC{$\exists \alpha F^\prime$} \end{prooftree} This satisfies the thesis. The cases of the other unary rules are analogous. \item As an example of the case of binary rules, consider $\to$-elimination; then the proof has the shape \AxiomC{\vdots} \noLine \UnaryInfC{$G \to F$} \AxiomC{\vdots} \noLine \UnaryInfC{$G$} \RightLabel{$\to$-E} \BinaryInfC{$F$} \DisplayProof Applying the induction hypothesis to the subproofs corresponding to the premises, by our assumption we get two proofs, in at least one of which the last rule used is $\textsc{em}$; we select one where this is the case.
Say we chose the proof of the first premise (the other case is symmetric); then from the induction hypothesis we have obtained a proof of the shape \begin{prooftree} \AxiomC{$\mathcal{D}_1$} \noLine \UnaryInfC{$G \to F$} \AxiomC{$\mathcal{D}_2$} \noLine \UnaryInfC{$G \to F$} \AxiomC{\dots} \AxiomC{$\mathcal{D}_n$} \noLine \UnaryInfC{$G \to F$} \doubleLine \RightLabel{$\textsc{em}$ } \QuaternaryInfC{$G \to F$} \end{prooftree} After substituting this into the original proof, we can apply the permutation for $\to$-elimination $n-1$ times and obtain the proof \begin{prooftree} \AxiomC{$\mathcal{D}_1$} \noLine \UnaryInfC{$G \to F$} \AxiomC{\vdots} \noLine \UnaryInfC{$G$} \BinaryInfC{$F$} \AxiomC{$\mathcal{D}_2$} \noLine \UnaryInfC{$G \to F$} \AxiomC{\vdots} \noLine \UnaryInfC{$G$} \BinaryInfC{$F$} \AxiomC{\dots} \AxiomC{$\mathcal{D}_n$} \noLine \UnaryInfC{$G \to F$} \AxiomC{\vdots} \noLine \UnaryInfC{$G$} \BinaryInfC{$F$} \doubleLine \RightLabel{$\textsc{em}$ } \QuaternaryInfC{$F$} \end{prooftree} If the proof of $G$ is intuitionistic, we have the thesis, so assume it is not.
Just as before, we can use the induction hypothesis on it, and obtain: \begin{prooftree} \AxiomC{$\mathcal{D}_1$} \noLine \UnaryInfC{$G \to F$} \AxiomC{$\mathcal{E}_1$} \noLine \UnaryInfC{$G$} \AxiomC{\dots} \AxiomC{$\mathcal{E}_m$} \noLine \UnaryInfC{$G$} \doubleLine \RightLabel{$\textsc{em}$ } \TrinaryInfC{$G$} \BinaryInfC{$F$} \AxiomC{\dots} \AxiomC{$\mathcal{D}_n$} \noLine \UnaryInfC{$G \to F$} \AxiomC{$\mathcal{E}_1$} \noLine \UnaryInfC{$G$} \AxiomC{\dots} \AxiomC{$\mathcal{E}_m$} \noLine \UnaryInfC{$G$} \doubleLine \RightLabel{$\textsc{em}$ } \TrinaryInfC{$G$} \BinaryInfC{$F$} \doubleLine \RightLabel{$\textsc{em}$ } \TrinaryInfC{$F$} \end{prooftree} After applying the second permutation for $\to$-elimination $m-1$ times, we obtain \begin{prooftree} \centerfloat \AxiomC{$\mathcal{D}_1$} \noLine \UnaryInfC{$G \to F$} \AxiomC{$\mathcal{E}_1$} \noLine \UnaryInfC{$G$} \BinaryInfC{$F$} \AxiomC{\dots} \AxiomC{$\mathcal{D}_1$} \noLine \UnaryInfC{$G \to F$} \AxiomC{$\mathcal{E}_m$} \noLine \UnaryInfC{$G$} \BinaryInfC{$F$} \doubleLine \RightLabel{$\textsc{em}$ } \TrinaryInfC{$F$} \AxiomC{\dots} \AxiomC{$\mathcal{D}_n$} \noLine \UnaryInfC{$G \to F$} \AxiomC{$\mathcal{E}_1$} \noLine \UnaryInfC{$G$} \BinaryInfC{$F$} \AxiomC{\dots} \AxiomC{$\mathcal{D}_n$} \noLine \UnaryInfC{$G \to F$} \AxiomC{$\mathcal{E}_m$} \noLine \UnaryInfC{$G$} \BinaryInfC{$F$} \doubleLine \RightLabel{$\textsc{em}$ } \TrinaryInfC{$F$} \doubleLine \RightLabel{$\textsc{em}$ } \TrinaryInfC{$F$} \end{prooftree} This satisfies the thesis. The cases of the other binary rules are analogous. \end{itemize} \end{proof} After these transformations, the excluded middle is used only with the statement to be proved as its conclusion. A similar result was obtained by Seldin \cite{Seldin89}, but with a rule for reductio ad absurdum in place of the excluded middle and without induction.
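The permutation dynamics behind this proof can also be pictured operationally. The following Python fragment is only an illustration (the tuple encoding of proof trees and all function names are ours, not part of the formal development): it pushes every application of $\textsc{em}$ towards the root of the tree, so that the result is a tower of $\textsc{em}$s over purely intuitionistic subproofs, exactly as in the statement of the proposition.

```python
# Proof trees encoded as tuples (an ad hoc encoding for illustration):
#   ("ax", name)             -- an axiom or assumption
#   ("em", left, right)      -- an application of EM to two subproofs
#   ("rule", name, premises) -- any other (unary, binary, ...) rule

def normalize(t):
    """Permute every EM below the other rules: the result is a tower of
    "em" nodes whose leaves are EM-free (purely intuitionistic) subproofs."""
    if t[0] == "ax":
        return t
    if t[0] == "em":
        return ("em", normalize(t[1]), normalize(t[2]))
    _, name, premises = t
    premises = [normalize(p) for p in premises]
    for i, p in enumerate(premises):
        if p[0] == "em":
            # Permutation step: duplicate the rule over both EM branches.
            left = premises[:i] + [p[1]] + premises[i + 1:]
            right = premises[:i] + [p[2]] + premises[i + 1:]
            return ("em",
                    normalize(("rule", name, left)),
                    normalize(("rule", name, right)))
    return ("rule", name, premises)

def em_free(t):
    """True if the subproof contains no application of EM."""
    if t[0] == "ax":
        return True
    if t[0] == "em":
        return False
    return all(em_free(p) for p in t[2])

def branches(t):
    """The intuitionistic subproofs D_1 ... D_n under the EM tower."""
    if t[0] == "em":
        return branches(t[1]) + branches(t[2])
    return [t]
```

On finite trees the recursion terminates, mirroring the induction in the proof above: each distribution step strips one $\textsc{em}$ from the root of a premise.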
This means that if the statement we are proving is of a certain complexity, we do not need classical reasoning on formulas of higher complexity. \begin{proposition} \label{lemma:pa-ha-exists} Every proof in $\PA^\exists$ of a $\Sigma_1^0$ statement can be transformed into a proof in $\HA^\exists+\EM_1^-$. \end{proposition} \begin{proof} By \cref{lemma:normal-pa-exists}, we know that any proof in $\PA^\exists$ of a statement $\exists \alpha \emp{}$ can be transformed into a proof of the form \begin{prooftree} \AxiomC{$\mathcal{D}_1$} \noLine \UnaryInfC{$\exists \alpha \emp{}$} \AxiomC{$\mathcal{D}_2$} \noLine \UnaryInfC{$\exists \alpha \emp{}$} \AxiomC{\dots} \AxiomC{$\mathcal{D}_n$} \noLine \UnaryInfC{$\exists \alpha \emp{}$} \doubleLine \RightLabel{$\textsc{em}$ } \QuaternaryInfC{$\exists \alpha \emp{}$} \end{prooftree} Since every application of $\textsc{em}$ happens on a simply existential statement, we can directly replace each of them with $\textsc{em}^{-}$. Moreover, from \cref{sec:full-excluded-middle} we know that $\textsc{em}^{-}$ is equivalent to $\textsc{em}_1^{-}$, and thus we obtain a proof in $\HA^\exists + \EM_1^-$, as desired. \end{proof} Finally, we can conclude the section with the main theorem: \begin{thm} If $\PA \vdash \exists x \ \emp{}$ with $\emp{}$ atomic, then $\HA + \EM_1^- \vdash \exists x \ \emp{}$. \end{thm} \begin{proof} Given a proof of $\exists x \ \emp{}$ in $\PA$, by \cref{prop:pa-to-pa-e} we can apply the $\exists$-translation and obtain a proof of $(\exists x \ \emp{})^\exists = \exists x \ \emp{}$ in $\PA^\exists$. Then, by \cref{lemma:pa-ha-exists}, we can transform this into a proof in $\HA^\exists+\EM_1^-$. Finally, thanks to \cref{thm:exists-translation}, we know that $\HA +\EM_1^- \vdash \exists x \ \emp{}$.
\end{proof} \section{Exceptions and classical logic} Though successful in establishing links between intuitionistic theories and computational mechanisms, the Curry-Howard correspondence was for a long time regarded as incompatible with classical theories. Indeed, if we try to extend to classical logic the system of natural deduction we introduced in \cref{cha:introduction}, we need to add a rule either for the excluded middle or for double negation elimination (i.e.\ \emph{reductio ad absurdum}). In the first case, we need to perform a disjunction elimination without any possibility of knowing which of the two disjuncts actually holds; in the second, we assert a formula knowing only that its negation leads to absurdity. It looks as if we have no possibility of recovering any computational construct. \begin{figure} \begin{align*} \AxiomC{$[\neg(\alpha \to \beta)]$} \noLine \UnaryInfC{$\Pi_1$} \noLine \UnaryInfC{$\bot$} \RightLabel{$\bot_c$} \UnaryInfC{$\alpha \to \beta$} \DisplayProof \qquad \leadsto \qquad \AxiomC{$[\neg \beta]_{(1)}$} \AxiomC{$[\alpha \to \beta]_{(2)}$} \AxiomC{$[\alpha]_{(3)}$} \BinaryInfC{$\beta$} \BinaryInfC{$\bot$} \RightLabel{$\to$-I$_{(2)}$} \UnaryInfC{$\neg (\alpha \to \beta)$} \noLine \UnaryInfC{$\Pi_1$} \noLine \UnaryInfC{$\bot$} \RightLabel{$\bot_{c(1)}$} \UnaryInfC{$\beta$} \RightLabel{$\to$-I$_{(3)}$} \UnaryInfC{$\alpha \to \beta$} \DisplayProof \end{align*} \caption{Prawitz's normalization step for \emph{reductio ad absurdum} on an implication} \label{fig:prawitzabs} \end{figure} However, we have also mentioned that classical systems of natural deduction, too, are equipped with a normalization theorem. It was exactly in this observation that the solution to the riddle lay undiscovered for many years\footnote{The link between Prawitz's reductions and the typing of the $\mathcal{C}$ operator was established only \emph{a posteriori}, for example in \cite{de01}.}.
Let us take a look at the rules that Prawitz gave for the normalization of double negation elimination in \cref{fig:prawitzabs}: the aim is to apply the rule $\bot_c$ to a formula of lower complexity, and so one assumes the negation of the conclusion together with the entire implication and its antecedent; classical reasoning is then applied only to the negated assumption. Similar rules were given for the other logical connectives. This reduction looks very similar to the one that Felleisen gave for his $\mathcal{C}$ operator: \[ \mathcal{C} (\lambda k. M) \to \lambda n . \mathcal{C} (\lambda k. M[k:=\lambda f .k (f n)]) \] Presented in \cite{Felleisen88}, this operator introduced the notion of \emph{continuation}, and was the basis for the introduction of such constructs in programming languages (an example being the construct \texttt{call}/\texttt{cc} available in Scheme). It was then Griffin who, in \cite{Griffin89}, proposed to type Felleisen's operator as $\neg \neg A \to A$. The idea that sequential control operators could provide a computational counterpart to classical reasoning (as opposed to the purely functional flow of computation and intuitionistic reasoning) proved to be very successful, and breathed new life into the \emph{propositions as types} paradigm. Starting from ideas similar to Griffin's, several other systems were developed, such as the ones by Parigot \cite{Parigot92} and Krivine \cite{Krivine09}. Generalizing to other control operators, de Groote \cite{de95} showed that exception mechanisms can be put in correspondence with uses of the excluded middle. The approach of enriching systems of lambda calculus with imperative constructs also provided a new way to approach semi-classical principles, by extending Kreisel's modified realizability with delimited control operators.
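To make the connection between exceptions and control concrete, here is a small sketch in Python (which stands in for any language with exceptions; all names are ours). It implements only the weaker \emph{escape} facility that exception mechanisms provide: the captured jump can be used to abort the surrounding computation, but, unlike Felleisen's $\mathcal{C}$, the rest of the computation is not reified as a reusable function.

```python
# A one-shot "escape continuation" built from exceptions (illustrative sketch).
# call_ec(f) passes f a function k; invoking k(v) aborts whatever remains of f
# and makes call_ec return v.
class _Escape(Exception):
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

def call_ec(f):
    tag = object()              # unique tag: nested escapes do not interfere
    def k(value):
        raise _Escape(tag, value)
    try:
        return f(k)
    except _Escape as e:
        if e.tag is tag:
            return e.value
        raise                   # an outer escape: let it propagate

def product(xs):
    """Multiply a list, escaping immediately when a 0 is encountered."""
    def body(k):
        acc = 1
        for x in xs:
            if x == 0:
                k(0)            # jump out: the remaining multiplications never run
            acc *= x
        return acc
    return call_ec(body)
```

Here `product([7, 0, n])` returns without ever multiplying by `n`: the exception plays the role of the control jump in the reduction above.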
Using this method, Hugo Herbelin introduced in \cite{Herbelin10} a system of intuitionistic logic with the addition of two logical rules crafted to correspond to \texttt{catch} and \texttt{throw} instructions for a system of delimited exceptions. \section{Interactive realizability} The possibilities opened by the new Curry-Howard correspondences for classical logic did not, on the other hand, provoke a similar number of new systems in the field of realizability semantics. The first and major example remains the work of Krivine \cite{Krivine10}, who recently applied ideas of classical realizability to set theory in order to obtain a technique alternative to forcing. Interactive realizability is a new realizability semantics for classical systems introduced by Aschieri and Berardi \cite{Aschieri12,Aschieri13} based on the concept of \emph{learning}: the main idea is that realizers are programs that \emph{make hypotheses}, \emph{test} them and \emph{learn} by refuting the incorrect ones. This is achieved by means of systems of lambda calculus with exception mechanisms: a program will continue to execute under some assumptions, and whenever it uses an instance of an assumption, the instance gets tested; if the assumption is discovered to be false, an exception is raised, and the program can continue to run using the new knowledge gained from the counterexample. Different systems of interactive realizability have been put forward for various theories, such as Heyting Arithmetic with limited classical principles and, more recently, full first order logic and non-classical logics.
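The hypothesize/test/learn cycle just described can be sketched in a few lines of Python (our own informal encoding, not the formal calculus): the first branch runs under the hypothesis that a decidable predicate holds everywhere, each instance it actually uses is tested on the fly, and a refuted instance raises an exception whose payload is the counterexample, which the second branch then consumes.

```python
# Sketch of the learning cycle of interactive realizability (names are ours).
class Counterexample(Exception):
    def __init__(self, n):
        self.n = n

def run_learning(u, v, P):
    """u: computation assuming 'P(n) for all n'; it receives a tester H.
    v: computation consuming a witness n such that P(n) is false."""
    def H(n):
        # Using the hypothesis at n triggers a test of this very instance.
        if not P(n):
            raise Counterexample(n)   # hypothesis refuted: we have learned n
        return True
    try:
        return u(H)
    except Counterexample as e:
        return v(e.n)                 # resume with the new knowledge

# Example: P is false exactly at 5; the first branch stumbles on it.
P = lambda n: n * n != 25
u = lambda H: "hypothesis survived" if all(H(n) for n in range(10)) else None
v = lambda n: ("witness", n)
```

If the first branch never uses a false instance, it simply returns its own result; the exception is raised only at the moment a wrong instance is actually needed.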
\section{The system $\HA+\EM_{1}$} \label{sec:system-ha+em_1} Following the terminology of \cite{Akama04}, we will now consider the semi-classical principle $\EM_1$, that is, the excluded middle restricted to formulas of the form $\exists \alpha \emp{}$ with $\emp{}$ an atomic predicate\footnote{This class of formulas corresponds to the class of $\Sigma_1^0$ formulas of the arithmetical hierarchy.}. The system $\HA+\EM_{1}$, introduced in \cite{Aschieri13}, applies the idea of interactive realizability to an intuitionistic logic extended with this principle. We could view this as adding the axiom $\forall \alpha^\Nat \emp{} \lor \exists \alpha^\Nat \neg \emp{}$ for every atomic $\emp{}$; this, however, carries no useful computational meaning. The new principle is therefore treated as a disjunction elimination, where the main premise is the classical axiom and gets cut. If we try to fit this into a Curry-Howard system, we now have two proof terms representing a construction of the same conclusion, corresponding to the two proof branches where the first and then the second disjunct are assumed. By looking at the shape of the two assumptions, we can see that in the first case we need a condition to hold for all values, while in the second we are looking for a counterexample. The idea is that we should create a new proof term where we include both possible computations, and during the computation itself we might switch from the first to the second. Hence the $\textsc{em}_1$ rule that we add to the system has the following form: \[ \begin{array}{c} \Gamma, a: \forall \alpha^{\Nat} \emp{} \vdash u: C\ \ \ \Gamma, a: \exists \alpha^{\Nat} \neg \emp{} \vdash v:C\\ \hline \Gamma\vdash \E{a}{u}{v} : C \end{array} \] Here $a$ represents a communication channel between the two possible computations.
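As a sanity check, the axiom formulation mentioned above is derivable with this rule. The following derivation sketch uses the proof terms of \cref{fig:termassignment}: taking $C = \forall \alpha^{\Nat} \emp{} \lor \exists \alpha^{\Nat} \neg \emp{}$, each branch simply injects its own assumption into the corresponding disjunct.

```latex
\[
\begin{array}{c}
\Gamma, a: \forall \alpha^{\Nat} \emp{} \vdash \inj_{0}(\hyp{a}{\alpha}{P}): C
\ \ \
\Gamma, a: \exists \alpha^{\Nat} \neg \emp{} \vdash \inj_{1}(\wit{a}{\alpha}{P}): C\\
\hline
\Gamma\vdash \E{a}{\inj_{0}(\hyp{a}{\alpha}{P})}{\inj_{1}(\wit{a}{\alpha}{P})} : C
\end{array}
\]
```

Of course, the interest of the rule lies not in this derivation but in the reduction behaviour of the resulting term.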
The hypothesis $\forall \alpha^{\Nat} \emp{}$ is computationally void: it only serves as a certificate for the correctness of $u$; conversely, the branch where we assume $\exists \alpha^{\Nat}\lnot \emp{}$ might ask for an actual witness in order to proceed. Informally, what we want to accomplish with the reduction rules is that we should reduce inside $u$ and check, for all the used instances of the universal hypothesis, whether $\emp{}[n/\alpha]$ is actually true. Whenever one such instance is refuted, we have found a witness for $\neg \emp{}$, and we can employ it for the execution of $v$. This is achieved by new terms that we use when introducing assumptions that are to be eliminated via classical reasoning: we introduce the two typing rules \[\Gamma, a:{\forall \alpha^{\Nat} \emp{}}\vdash \hyp{a}{\alpha}{P}: \forall\alpha^{\Nat} \emp{}\] \[\Gamma, a:{\exists \alpha^{\Nat} \lnot\emp{}}\vdash \mathtt{W}_{a}^{\exists \alpha \neg \mathsf{P}}: \exists\alpha^{\Nat} \lnot \emp{}\] In the first case, we introduce a term that makes the \emph{hypothesis} that $\emp{}$ holds for all values of $\alpha$; in the second, the proof term waits for a \emph{witness} for which $\emp{}$ does not hold. From an operational point of view, terms of the form $\hyp{}{\alpha}{P}$ are the ones that can raise an exception, and terms of the form $\mathtt{W}_{a}^{\exists \alpha \neg \mathsf{P}}$ are those that will catch it. \begin{figure*}[!htb] \begin{description} \item[Grammar of Untyped Terms] \[t,u, v::=\ x\ |\ tu\ |\ tm\ |\ \lambda x u\ |\ \lambda \alpha u\ |\ \langle t, u\rangle\ |\ \pi_0u\ |\ \pi_{1} u\ |\ \inj_{0}(u)\ |\ \inj_{1}(u)\ |\ (m, t)\ |\ t[x.u, y.v]\ |\ t[(\alpha, x). u]\] \[|\ \E{a}{u}{v}\ |\ \hyp{a}{\alpha}{P}\ |\ \wit{a}{\alpha}{P}\ |\ \True \ |\ \rec u v m \ |\ \mathsf{r}t_{1}\ldots t_{n}\] where $m$ ranges over terms of $\Language$, $x$ over variables of the lambda calculus and $a$ over $\EM_1$ hypothesis variables.
Moreover, in terms of the form $\E{a}{u}{v}$ there is a $\emp{}$ such that all the free occurrences of $a$ in $u$ are of the form $\hyp{a}{\alpha}{P}$ and those in $v$ are of the form $\wit{a}{\alpha}{P}$. \item[Contexts] With $\Gamma$ we denote contexts of the form $e_1:A_1, \ldots, e_n:A_n$, where $e_{i}$ is either a proof-term variable $x, y, z\ldots$ or an $\EM_{1}$ hypothesis variable $a, b, \ldots$. \item[Axioms] $\begin{array}{c} \Gamma, x:{A}\vdash x: A \end{array}$ $\begin{array}{c} \Gamma, a:{\forall \alpha^{\Nat} \emp{}}\vdash \hyp{a}{\alpha}{P}: \forall\alpha^{\Nat} \emp{} \end{array}$ $\begin{array}{c} \Gamma, a:{\exists \alpha^{\Nat} \emp{}^\bot}\vdash \wit{a}{\alpha}{P}: \exists\alpha^{\Nat} \emp{}^\bot \end{array}$ \item[Conjunction] $\begin{array}{c} \Gamma \vdash u: A\ \ \ \Gamma\vdash t: B\\ \hline \Gamma\vdash \langle u,t\rangle: A\wedge B \end{array}\ \ \ \ $ $\begin{array}{c} \Gamma \vdash u: A\wedge B\\ \hline\Gamma \vdash\pi_0u: A \end{array}\ \ \ \ $ $\begin{array}{c} \Gamma \vdash u: A\wedge B\\ \hline \Gamma\vdash\pi_1 u: B \end{array}$ \item[Implication] $\begin{array}{c} \Gamma\vdash t: A\rightarrow B\ \ \ \Gamma\vdash u:A \\ \hline \Gamma\vdash tu:B \end{array}\ \ \ \ $ $\begin{array}{c} \Gamma, x:A \vdash u: B\\ \hline \Gamma\vdash \lambda x u: A\rightarrow B \end{array}$ \item[Disjunction Intro.] $\begin{array}{c} \Gamma \vdash u: A\\ \hline \Gamma\vdash \inj_{0}(u): A\vee B \end{array}\ \ \ \ $ $\begin{array}{c} \Gamma \vdash u: B\\ \hline \Gamma\vdash\inj_{1}(u): A\vee B \end{array}$ \item[Disjunction Elim.]
$\begin{array}{c} \Gamma\vdash u: A\vee B\ \ \ \Gamma, x: A \vdash w_1: C\ \ \ \Gamma, y:B\vdash w_2: C\\ \hline \Gamma\vdash u [x.w_{1}, y.w_{2}]: C \end{array}$ \item[Universal Quantification] $\begin{array}{c} \Gamma \vdash u:\forall \alpha^{\Nat} A\\ \hline \Gamma\vdash ut: A[t/\alpha] \end{array} $ $\begin{array}{c} \Gamma \vdash u: A\\ \hline \Gamma\vdash \lambda \alpha u: \forall \alpha^{\Nat} A \end{array}$ \\ where $t$ is a term of the language $\Language$ and $\alpha$ does not occur free in any formula $B$ occurring in $\Gamma$. \item[Existential Quantification] $\begin{array}{c}\Gamma\vdash u: A[t/\alpha]\\ \hline \Gamma\vdash ( t,u): \exists \alpha^\Nat A \end{array}$ \ \ \ \ $\begin{array}{c} \Gamma\vdash u: \exists \alpha^\Nat A\ \ \ \Gamma, x: A \vdash t:C\\ \hline \Gamma\vdash u [(\alpha, x). t]: C \end{array} $\\ where $\alpha$ is not free in $C$ nor in any formula $B$ occurring in $\Gamma$. \item[Induction] $\begin{array}{c} \Gamma\vdash u: A(0)\ \ \ \Gamma\vdash v:\forall \alpha^{\Nat} (A(\alpha)\rightarrow A(\mathsf{S}(\alpha)))\\ \hline \Gamma\vdash \rec uv m : A[m/\alpha] \end{array}$ \item[Post Rules] $\begin{array}{c} \Gamma\vdash u_1: A_1\ \Gamma\vdash u_2: A_2\ \cdots \ \Gamma\vdash u_n: A_n\\ \hline\Gamma\vdash u: A \end{array}$ where $A_1,A_2,\ldots,A_n,A$ are atomic formulas of $\HA$ and the rule is a Post rule for equality, for a Peano axiom, for a classical propositional tautology, or for booleans; if $n>0$, then $u=\mathsf{r} u_{1}\ldots u_{n}$, otherwise $u=\True$.
\item[$\EM_1$]$\begin{array}{c} \Gamma, a: \forall \alpha^{\Nat} \emp{} \vdash w_1: C\ \ \ \Gamma, a: \exists \alpha^{\Nat} \emp{}^\bot \vdash w_2: C\\ \hline \Gamma\vdash \E{a}{w_{1}}{w_{2}} : C \end{array}$ \end{description} \caption{Term Assignment Rules for $\HA+\EM_{1}$} \label{fig:termassignment} \end{figure*} In \cref{fig:termassignment}, we define a system of natural deduction for $\HA+\EM_{1}$ together with a term assignment in the spirit of the Curry-Howard correspondence for classical logic; for a general treatment of this kind of systems, one can refer to textbooks such as \cite{Sorensen06}. Let $\Language$ be the language of $\HA$; three distinct classes of variables appear in the proof terms: one for proof terms, usually denoted by $x, y,\ldots$; one for quantified variables of $\Language$, usually denoted by $\alpha, \beta, \ldots$; and one for hypotheses bound by $\EM_{1}$, usually denoted by $a, b, \ldots$. Atomic predicates are denoted by $\emp{}, \emp{0}, \emp{1}, \ldots$; moreover, by $\emp{}^\bot$ we denote the complement predicate of $\emp{}$, and since atomic predicates are decidable in $\HA$ we have that $\emp{}^\bot \equiv \neg \emp{}$. In the term $\E{a}{u}{v}$ all the occurrences of $a$ in $u$ and $v$ are bound. We assume the usual capture-avoiding substitution for the lambda calculus, and in addition we introduce a new kind of substitution: \begin{definition}[Witness substitution] Let $v$ be any term and $n$ a closed term of $\Language$. Then \[ v[a:=n] \] is the term obtained by replacing every occurrence of $\wit{a}{\alpha}{P}$ in $v$ with $(n,\True)$ if $\emp{}[n/\alpha] \equiv \False$, and with $(n,\hyp{a}{\alpha}{\alpha=0}\mathsf{S} 0)$ otherwise. \end{definition} Note that the reduction rules for the system in \cref{fig:F} make it clear that the second case will never actually happen; however, it is needed in order to prove the normalization of the system. \begin{figure*}[!htb] \begin{description} \item[Reduction Rules for $\HA$] \[(\lambda x.
u)t\mapsto u[t/x]\qquad (\lambda \alpha. u)t\mapsto u[t/\alpha]\] \[ \pi_{i}\pair{u_0}{u_1}\mapsto u_i, \mbox{ for i=0,1}\] \[\inj_{i}(u)[x_{1}.t_{1}, x_{2}.t_{2}]\mapsto t_{i}[u/x_{i}], \mbox{ for i=0,1} \] \[(n, u)[(\alpha,x).v]\mapsto v[n/\alpha][u/x], \mbox{ for each numeral $n$} \] \[\rec u v 0 \mapsto u\] \[\rec u v (\mathsf{S} n) \mapsto v n (\rec u v n), \mbox{ for each numeral $n$} \] \item[Permutation Rules for $\textsc{em}_1$ ] \[(\E{a} u v) w \mapsto \E{a}{uw}{vw} \] \[\pi_{i}(\E{a} u v) \mapsto \E{a}{\pi_{i}u}{\pi_{i}v} \] \[(\E{a} u v)[x.w_{1}, y.w_{2}] \mapsto \E{a}{u[x.w_{1}, y.w_{2}]}{v[x.w_{1}, y.w_{2}]} \] \[(\E{a} u v)[(\alpha, x).w] \mapsto \E{a}{u[(\alpha, x).w]}{v[(\alpha, x).w]} \] \item[Reduction Rules for $\textsc{em}_1$ ] \[\E{a} u v\mapsto u,\ \mbox{ if $a$ does not occur free in $u$ }\] \[\E{a} u v\mapsto v[a:=n],\ \mbox{ if $\hyp{a}{\alpha}{P}n$ occurs in $u$ and $\emp{} [n/\alpha]$ is closed and $\emp{} [n/\alpha]=\False$ }\] \[(\hyp{a}{\alpha}{P})n \mapsto \True \mbox{ if $\emp{}[n/\alpha]$ is \emph{closed} and $\emp{}[n/\alpha] \equiv \True$}\] \end{description} \caption{Reduction Rules for $\HA$ + $\EM_{1}$} \label{fig:F} \end{figure*} \section{Realizability interpretation of $\HA+\EM_1$} \label{sec:real-interpr-ha+em_1} As we anticipated, this system can be equipped with a realizability interpretation based on the ideas of interactive realizability. 
In order to do this, we first need to define some classes of terms: \begin{definition}[Terms in normal form]\mbox{} \begin{itemize} \item $\mathsf{SN}$ is the set of strongly normalizing untyped proof terms \item $\mathsf{NF}$ is the set of normal untyped proof terms \item $\mathsf{PNF}$ is the set of the Post normal forms (intuitively, normal terms representing closed proof trees made only of Post rules whose leaves are universal hypotheses each followed by an elimination rule), that is: $\True\in\mathsf{PNF}$; for every closed term $n$ of $\Language$, if $\hyp{a}{\alpha}{P}n\in\mathsf{NF}$, then $\hyp{a}{\alpha}{P}n\in\mathsf{PNF}$; if $t_{1}, \ldots, t_{n}\in\mathsf{PNF}$, then $\mathsf{r}t_{1}\ldots t_{n}\in\mathsf{PNF}$. \end{itemize} \end{definition} \begin{definition}[Quasi-Closed terms] If $t$ is an untyped proof term which contains as free variables only $\EM_{1}$-hypothesis variables $a_{1}, \ldots, a_{n}$, such that each of their occurrences is of the form $\hyp{a_i}{\alpha}{P_i}$ for some $\emp{i}$, then $t$ is said to be \emph{quasi-closed}. \end{definition} We can now give the definition of realizers for $\HA+\EM_1$. Realizers will be quasi-closed terms, and the definition will be by induction on the formula to be realized; the cases for $\land$, $\to$ and $\forall$ are the same as the ones for the intuitionistic realizability we are already familiar with. The case for atomic formulas will need to be extended to take into account the situation where we have open universal assumptions (since realizers are quasi-closed). Finally, the realizers for $\lor$ and $\exists$ will need a different kind of definition, with the induction performed also on the shape of the term. \begin{definition}[Realizability for $\HA +\EM_{1}$] \label{definition-reducibility} Assume $t$ is a {\em quasi-closed} term in the grammar of untyped proof terms of $\HA+\EM_{1}$ and $C$ is a {\em closed} formula. We define the relation $t\real C$ by induction on $C$.
\begin{enumerate} \item $t\real \emp{}$ if and only if one of the following holds: \begin{enumerate}[i)] \item $t\in\mathsf{PNF}$ and $\emp{}\evaluates \False$ implies $t$ contains a subterm $\hyp{a}{\alpha}{Q}n$ with $\mathsf{Q}[n/\alpha]\evaluates \False$;\\ \item $t\notin\mathsf{NF}$ and for all $t'$, $t\mapsto t'$ implies $t'\real \emp{}$\\ \end{enumerate} \item $t\real {A\wedge B}$ if and only if $\pi_0t \real {A}$ and $\pi_1t\real {B}$\\ \item $t\real {A\rightarrow B}$ if and only if for all $u$, if $u\real {A}$, then $tu\real {B}$\\ \item $t\real {A\vee B}$ if and only if one of the following holds:\\ \begin{enumerate}[i)] \item $t={\inj_{0}(u)}$ and $u\real A$ or $t={\inj_{1}(u)}$ and $u\real B$;\\ \item $t=\E{a}{u}{v}$ and $u\real A\lor B$ and $v[a:=m]\real A\lor B$ for every numeral $m$;\\ \item $t\notin\mathsf{NF}$ is neutral and for all $t'$, $t\mapsto t'$ implies $t'\real A\lor B$.\\ \end{enumerate} \item $t\real {\forall \alpha^{\Nat} A}$ if and only if for every closed term $n$ of $\Language$, $t{n}\real A[{n}/\alpha]$\\ \item $t\real \exists \alpha^{\Nat} A$ if and only if one of the following holds: \begin{enumerate}[i)] \item $t={(n,u)}$ for some numeral $n$ and $u \real A[{n}/\alpha]$;\\ \item $t=\E{a}{u}{v}$ and $u\real \exists \alpha^{\Nat}A$ and $v[a:=m]\real \exists \alpha^{\Nat}A$ for every numeral $m$;\\ \item $t\notin\mathsf{NF}$ is neutral and for all $t'$, $t\mapsto t'$ implies $t'\real \exists \alpha^{\Nat}A$.\\ \end{enumerate} \end{enumerate} \end{definition} As we said, realizers are quasi closed terms: this means that in general realizers could contain open universal assumptions, and thus their correctness depends on them. 
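For a concrete instance of these clauses (a toy example of ours): the term $\True$ is in $\mathsf{PNF}$ and the atomic formula $\mathsf{S}0 = \mathsf{S}0$ does not evaluate to $\False$, so clause 1(i) gives the first relation below, and clause 6(i) then yields the second.

```latex
\[
\True \real \mathsf{S}0 = \mathsf{S}0
\qquad\text{and hence}\qquad
(\mathsf{S}0, \True) \real \exists \alpha^{\Nat}\ \alpha = \mathsf{S}0.
\]
```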
The base cases of the definition of the realizers for the disjunction and existential quantifiers are again the same as the ones for modified realizability; however, we add a second clause that takes into account the situation where the realizer has used some assumptions; in these cases, we require that both parts of a term of the shape $\E{a}{u}{v}$ are in turn realizers of the formula. In a realizer of such a shape, $u$ will then be a realizer with a new open assumption in the form of a term $\hyp{a}{\alpha}{P}$, of the kind just described; $v$, by contrast, needs a witness in order to compute, and therefore we must substitute a witness into it in order to obtain a realizer. What this means is that these realizers will still contain a realizer in the usual shape of clauses (i), but in the form of a \emph{prediction}, as we will see in \cref{proposition-disj}. We conclude the section by giving some properties of the system, as they are found in the original paper \cite{Aschieri13}, that will be employed in the rest of the thesis. First of all, we define a version of the properties of reducibility candidates in the style of Girard \cite{Girard89}. \begin{definition} For a term $t$ and a formula $A$, define the following properties, plus an inhabitation property \crcinque\ for $A$: \cruno\ If $t\real A$, then $t\in \mathsf{SN}$.\\ \crdue\ If $t \real A$ and $t\mapsto^{*} t'$, then $t' \real A$.\\ \crtre\ If $t\notin\mathsf{NF}$ is neutral and for every $t'$, $t\mapsto t'$ implies $t'\real A$, then $t \real A$.\\ \crquattro\ If $t=\E{a}{u}{v}$, $u\real A$ and $v[a:=m]\real A$ for every numeral $m$, then $t\real A$. \\ \crcinque\ There is a $u$ such that $u\real A$. \end{definition} \begin{proposition}\label{proposition-candidates} Every term $t$ has the properties \cruno, \crdue, \crtre, \crquattro, and the inhabitation property \crcinque\ holds. \end{proposition} \begin{proof} By induction on $C$.
\begin{itemize} \item $C=\emp{}$ is atomic.\\ \cruno.\ By induction on the definition of $t\real \emp{}$. If $t\in\mathsf{PNF}$, then $t\in\mathsf{SN}$. If $t\notin\mathsf{NF}$ is neutral, then $t\mapsto t'$ implies $t\real \emp{}$ and thus by induction hypothesis $t'\in\mathsf{SN}$; so $t\in\mathsf{SN}$. Suppose then $t=\E{a}{u}{v}$. Since $u\real \emp{}$ and for all numerals $n$, $v[a:=n]\real \emp{}$, we have by induction hypothesis $u\in\mathsf{SN}$ and for all numerals $n$, $v[a:=n]\in\mathsf{SN}$; but these last two conditions are easily seen to imply $\E{a}{u}{v}\in\mathsf{SN}$. \\ \crdue.\ Suppose $t\real \emp{}$. It suffices to assume that $t\mapsto t'$ and show that $t'\real \emp{}$. We proceed by induction on the number of the occurrences of the symbol $\E{}{}{}$ in $t$. If $t$ is neutral, since it is not the case that $t\in\mathsf{PNF}$, by definition of $t \real \emp{}$ we obtain $t'\real \emp{}$. Therefore, assume $t$ is not neutral and thus $t=\E{a}{u}{v}$, with $u\real \emp{}$ and for all numerals $n$, $v[a:=n]\real \emp{}$. If $t'=u$ or $t'=v[a:=m]$ for some numeral $m$, we obtain the thesis. If $t'=\E{a}{u'}{v}$, with $u\mapsto u'$, then by induction hypothesis, $u'\real \emp{}$. So $\E{a}{u'}{v}\real \emp{}$ by definition. If $t'=\E{a}{u}{v'}$, with $v\mapsto v'$, then for every numeral $n$, $v[a:=n]\mapsto v'[a:=n]$, and thus by induction hypothesis $v'[a:=n]\real \emp{}$. So $\E{a}{u}{v'}\real \emp{}$ by definition. \\ \crtre\ and \crquattro\ are trivially true by definition of $t\real \emp{}$. \\ \crcinque.\ We have that $\hyp{a}{\alpha}{\,\alpha=0}\mathsf{S} 0\real \emp{}$.\\ \item $C=A\rightarrow B$.\\ \cruno.\ Suppose $t\real A\rightarrow B$. By induction hypothesis \crcinque,\ there is an $u$ such that $u\real A$; therefore, $tu\real B$. By induction hypothesis \cruno,\ $tu\in\mathsf{SN}$ and thus $t\in\mathsf{SN}$. \\ \crdue\ and \crtre\ are proved as in \cite{Girard89}.\\ \crquattro. 
($\Rightarrow$) Suppose $\E{a}{u}{v}\real A\rightarrow B$ and let $t\real A$. Then $(\E{a}{u}{v})t\real B$ and by \crdue,\ $\E{a}{ut}{vt}\real B$. By \crquattro,\ $ut\real B$ and for all numerals $n$, $v[a:=n]t=vt[a:=n]\real B$. We conclude that $u\real A\rightarrow B$ and $v[a:=n]\real A\rightarrow B$.\\ ($\Leftarrow$) Suppose $u\real A\rightarrow B$ and $v[a:=n]\real A\rightarrow B$ for every numeral $n$. Let $t\real A$. We show by induction on the sum of the heights of the reduction trees of $u, v, t$ (they are all in $\mathsf{SN}$ by \cruno) that $(\E{a}{u}{v})t\real B$. By induction hypothesis \crtre,\ it is enough to assume $(\E{a}{u}{v})t\mapsto z$ and show $z\real B$. If $z=ut$ or $v[a:=n]t$, we are done. If $z=(\E{a}{u'}{v})t$ or $z=(\E{a}{u}{v'})t$ or $(\E{a}{u}{v})t'$, with $u\mapsto u'$, $v\mapsto v'$ and $t\mapsto t'$, we obtain $z\real B$ by \crdue\ and induction hypothesis. If $z=(\E{a}{ut}{vt})$, by induction hypothesis \crquattro,\ $z\real B$. \\ \crcinque.\ By induction hypothesis \crcinque,\ there is a term $u$ such that $u\real B$. We want to show that $\lambda\_. u\real A\rightarrow B$. Suppose $t\real A$: we have to show that $(\lambda\_.u)t\real B$. We proceed by induction on the sum of the heights of the reduction trees of $u$ and $t$ (by \cruno,\ $u, t\in\mathsf{SN}$). By induction hypothesis \crtre,\ it is enough to assume $(\lambda\_.u)t\mapsto z$ and show $z\real B$. If $z=u$, we are done. If $z= (\lambda\_.u')t$ or $z=(\lambda\_.u)t'$, with $u\mapsto u'$ and $t\mapsto t'$, then by \crdue,\ $u'\real B$ and $t'\real A$, and we obtain $z\real B$ by induction hypothesis.\\ \item $C=\forall \alpha^{\Nat} A$ or $C=A\land B$. Similar to the case $C=A\rightarrow B$.\\ \item $C=A_{0}\lor A_{1}$.\\ \cruno\ By induction on the definition of $t\real A_{0}\lor A_{1}$. If $t=\inj_{i}(u)$, then $u\real A_{i}$, and by induction hypothesis \cruno, $u\in\mathsf{SN}$; therefore, $t\in\mathsf{SN}$.
If $t\notin\mathsf{NF}$ is neutral, then $t\mapsto t'$ implies $t'\real A_{0}\lor A_{1}$ and thus $t'\in\mathsf{SN}$ by induction hypothesis; therefore, $t\in\mathsf{SN}$. Suppose then $t=\E{a}{u}{v}$. Since $u\real A_{0}\lor A_{1}$ and for all numerals $n$, $v[a:=n]\real A_{0}\lor A_{1}$, we have by induction hypothesis $u\in\mathsf{SN}$ and for all numerals $n$, $v[a:=n]\in\mathsf{SN}$. We conclude as in the case $C=\emp{}$ that $t\in \mathsf{SN}$. \\ \crdue.\ Suppose $t\real A_{0}\lor A_{1}$. It suffices to assume that $t\mapsto t'$ and show that $t'\real A_{0}\lor A_{1}$. We proceed by induction on the definition of $t\real A_{0}\lor A_{1}$. If $t=\inj_{i}(u)$, then $t'=\inj_{i}(u')$, with $u\mapsto u'$. By definition of $t\real A_{0}\lor A_{1}$, we have $u \real A_{i}$. By induction hypothesis \crdue,\ $u'\real A_{i}$ and thus $t'\real A_{0}\lor A_{1}$. If $t\notin\mathsf{NF}$ is neutral, by definition of $t\real A_{0}\lor A_{1}$, we obtain that $t'\real A_{0}\lor A_{1}$. Finally, suppose $t=\E{a}{u}{v}$, with $u\real A_{0}\lor A_{1}$ and, for all numerals $n$, $v[a:=n]\real A_{0}\lor A_{1}$. If $t'=u$ or $t'=v[a:=m]$, we are done. If $t'=\E{a}{u'}{v}$, with $u\mapsto u'$, then by induction hypothesis, $u'\real A_{0}\lor A_{1}$. So $\E{a}{u'}{v}\real A_{0}\lor A_{1}$ by definition. If $t'=\E{a}{u}{v'}$, with $v\mapsto v'$, then for every numeral $n$, $v[a:=n]\mapsto v'[a:=n]$ and thus by induction hypothesis $v'[a:=n]\real A_{0}\lor A_{1}$. So $\E{a}{u}{v'}\real A_{0}\lor A_{1}$ by definition. \\ \crtre\ and \crquattro\ are trivial.\\ \crcinque.\ By induction hypothesis \crcinque,\ there is a term $u$ such that $u\real A_{0}$. Thus $\inj_{0}(u)\real A_{0}\lor A_{1}$. \\ \item $C=\exists \alpha^{\Nat} A$. Similar to the case $C=A_{0}\lor A_{1}$.
\end{itemize} \end{proof} \noindent This first property can be used in order to state a first result on the meaning of realizers: if we denote by $\econt{u}$ a term of the form $\E{}{\E{}{\E{}{u}{v_{1}}}{v_{2}}\ldots}{v_{n}}$ for any $n \ge 0$, then \begin{proposition}[Weak Disjunction and Numerical Existence Properties]\label{proposition-disj}\mbox{} \begin{enumerate} \item Suppose $t\real A\lor B$. Then either $t\mapsto^{*} \econt{\inj_{0}(u)}$ and $u\real A$ or $t\mapsto^{*} \econt{\inj_{1}(u)}$ and $u\real B$. \item Suppose $t\real \exists \alpha^{\Nat} A$. Then $t\mapsto^{*} \econt{(n,u)}$ for some numeral $n$ such that $u\real A[n/\alpha]$. \end{enumerate} \end{proposition} \begin{proof}\mbox{} \begin{enumerate} \item Since $t\in\mathsf{SN}$ by \cruno,\ let $t'$ be such that $t\mapsto^{*} t'\in\mathsf{NF}$. By \crdue,\ $t'\real A\lor B$. If $t'=\inj_{0}(u)$ or $t'=\inj_{1}(u)$, we are done. The only possibility left is that $t'= \E{}{\E{}{\E{}{v}{v_{1}}}{v_{2}}\ldots}{v_{n}}$, with $v$ not of the form $\E{}{w_{0}}{w_{1}}$. By \cref{definition-reducibility}.4.(ii) we have $v\real A\lor B$, and since $v$ is normal and not of the form $\E{}{w_{0}}{w_{1}}$, by \cref{definition-reducibility}.4.(i) we have either $v=\inj_{0}(u)$, with $u\real A$, or $v=\inj_{1}(u)$, with $u\real B$. \item Similar to 1. \end{enumerate} \end{proof} Informally, this means that a realizer of a disjunction ``contains'' a realizer of one of the disjuncts, and a realizer of an existential statement similarly contains a witness. However, these realizers might rely on universal assumptions. We can specialize this result to the case of simpler existential formulas: \begin{thm}[Existential Witness Extraction]\label{theorem-extraction} Suppose $t$ is closed, $t\real \exists \alpha^{\Nat} \emp{}$ and $t\mapsto^{*} t'\in\mathsf{NF}$. Then $t'=(n,u)$ for some numeral $n$ such that $\emp{}[n/\alpha]\evaluates \True$.
\end{thm} \begin{proof}\mbox{} By \cref{proposition-disj}, there is some numeral $n$ such that $t'=\econt{(n,u)}$ and $u\real \emp{}[n/\alpha]$. So $$t'= \E{a_{m}} {\E{a_{2}} {\E{a_{1}} {(n,u)}{v_{1}}} {v_{2}}\ldots} {v_{m}}$$ Since $t'$ is closed, $u$ is quasi-closed and all its free variables are among $a_{1}, a_{2},\ldots, a_{m}$. We observe that $u$ must be closed. Otherwise, by \cref{definition-reducibility}.1.(i) and $u\real \emp{}[n/\alpha]$ we deduce that $u\in\mathsf{PNF}$, and thus $u$ should contain a subterm $\hyp{a_i}{\alpha}{Q}n$; moreover, $\mathsf{Q}[n/\alpha]\evaluates \False$, otherwise $u$ would not be normal; but then we would have either $m\neq 0$ and $t'\notin\mathsf{NF}$, because $t' \mapsto \E{a_{m}}{\E{a_{2}}{v_{1}[a_1:=n]}{v_{2}}\ldots}{v_{m}}$, or $m=0$ and $t'$ not closed. Since $u$ is closed, we obtain $t'=(n,u)$, for otherwise $t' \mapsto \E{a_{m}}{\E{a_{2}}{(n,u)}{v_{2}}\ldots}{v_{m}}$ and $t'\notin\mathsf{NF}$. Since $u\real \emp{}[n/\alpha]$, by \cref{definition-reducibility}.1.(i) it must be that $\emp{}[n/\alpha]\evaluates \True$. \end{proof} \noindent We now come to the main theorem, the soundness of the realizability semantics: \begin{thm}[Adequacy Theorem]\label{AdequacyTheorem} Suppose that $\Gamma\vdash w: A$ in the system $\HA + \EM_1$, with $$\Gamma=x_1: {A_1},\ldots,x_n:{A_n}, a_{1}: \exists \alpha_{1}^{\Nat} \emp{}_{1}^{\bot},\ldots, a_{m}: \exists \alpha_{m}^{\Nat} \emp{}_{m}^{\bot}, b_{1}: \forall \alpha_{1}^{\Nat}\mathsf{Q}_{1},\ldots, b_{l}:\forall \alpha_{l}^{\Nat}\mathsf{Q}_{l}$$ and that the free variables of the formulas occurring in $\Gamma $ and $A$ are among $\alpha_1,\ldots,\alpha_k$.
For all closed terms $r_1,\ldots,r_k$ of $\Language$, if there are terms $t_1, \ldots, t_n$ such that \[\text{ for $i=1,\ldots, n$, }t_i\real A_i[{r}_1/\alpha_1\cdots {r}_k/\alpha_k]\] then \[w[t_1/x_1\cdots t_n/x_n\ {r}_1/\alpha_1\cdots {r}_k/\alpha_k\ a_{1}:=i_{1}\cdots a_{m}:=i_{m} ]\real A[{r}_1/\alpha_1\cdots {r}_k/\alpha_k]\] for all numerals $i_{1}, \ldots, i_{m}$. \end{thm} Before proving this theorem, we need an auxiliary lemma: \begin{lemma}\label{proposition-somecases}\mbox{} \begin{enumerate} \item If for every $t\real A$, $u[t/x]\real B$, then $\lambda x\, u\real A\rightarrow B$. \item If for every closed term $m$ of $\Language$, $u[m/\alpha]\real B[m/\alpha]$, then $\lambda \alpha\, u\real \forall \alpha^{\Nat} B$. \item If $u\real A_{0}$ and $v\real A_{1}$, then $\pi_{i}\pair{u}{v}\real A_{i}$. \item If ${w_{0}[x_{0}.u_{0}, x_{1}.u_{1}]}\real C$ and for all numerals $n$, ${w_{1}[x_{0}.u_{0}, x_{1}.u_{1}]}[a:=n]\real C$, then $(\E{a}{w_{0}}{w_{1}})[x_{0}.u_{0}, x_{1}.u_{1}]\real C$. \item If $t\real A_{0}\lor A_{1}$ and for every $t_{i}\real A_{i}$ it holds $u_{i}[t_{i}/x_{i}]\real C$, then $t[x_{0}.u_{0}, x_{1}.u_{1}]\real C$. \item If $t\real \exists \alpha^{\Nat} A$ and for every term $n$ of $\Language$ and $v\real A[n/\alpha]$ it holds $u[n/\alpha][v/x]\real C$, then $t[(\alpha, x).u]\real C$. \end{enumerate} \end{lemma} \begin{proof}[Proof of \cref{proposition-somecases}] \mbox{} \begin{enumerate} \item As in \cite{Girard89}. \item As in \cite{Girard89}. \item As in \cite{Girard89}.\\ \item We may assume $a$ does not occur in $u_{0}, u_{1}$. By hypothesis, $w_{0}[x_{0}.u_{0}, x_{1}.u_{1}]\real C$ and for every numeral $n$, $w_{1}[x_{0}.u_{0}, x_{1}.u_{1}][a:=n]\real C$. By \cruno, in order to show $\E{a}{w_{0}}{w_{1}}[x_{0}.u_{0}, x_{1}.u_{1}]\real C$, we may proceed by induction on the sum of the sizes of the reduction trees of $w_{0}, w_{1}, u_{0}, u_{1}$.
By \crtre,\ it then suffices to assume that $\E{a}{w_{0}}{w_{1}}[x_{0}.u_{0}, x_{1}.u_{1}]\mapsto z$ and show $z\real C$. If $z=w_{0}[x_{0}.u_{0}, x_{1}.u_{1}]$ or $w_{1}[a:=n][x_{0}.u_{0}, x_{1}.u_{1}]$ for some numeral $n$, we are done. If $z=\E{a}{w_{0}'}{w_{1}}[x_{0}.u_{0}, x_{1}.u_{1}]$ or $z=\E{a}{w_{0}}{w_{1}'}[x_{0}.u_{0}, x_{1}.u_{1}]$ or $z=\E{a}{w_{0}}{w_{1}}[x_{0}.u_{0}', x_{1}.u_{1}]$ or $z=\E{a}{w_{0}}{w_{1}}[x_{0}.u_{0}, x_{1}.u_{1}']$, with $w_{i}\mapsto w_{i}'$ and $u_{i}\mapsto u_{i}'$, then by \crdue\ we can apply the induction hypothesis and obtain $z\real C$. If $$z=\E{a}{(w_{0}[x_{0}.u_{0}, x_{1}.u_{1}])}{(w_{1}[x_{0}.u_{0}, x_{1}.u_{1}])}$$ then $z\real C$ by \crquattro. \\ \item Suppose $t\real A_{0}\lor A_{1}$ and for every $t_{i}\real A_{i}$ it holds $u_{i}[t_{i}/x_{i}]\real C$. In order to show $t[x_{0}.u_{0}, x_{1}.u_{1}]\real C$, we reason by induction on the definition of $t\real A_{0}\lor A_{1}$. Since by \crcinque\ there are $v_{0},v_{1}$ such that $v_{i}\real A_{i}$, we have $u_{i}[v_{i}/x_{i}]\real C$, and thus by \cruno,\ $u_{i}[v_{i}/x_{i}]\in\mathsf{SN}$ and $t\in\mathsf{SN}$. We have three cases:\\ \begin{itemize} \item $t=\inj_{i}(u)$. Then $u\real A_{i}$. We want to show that $\inj_{i}(u)[x_{0}.u_{0}, x_{1}.u_{1}]\real C$. By \crtre,\ it suffices to assume that $\inj_{i}(u)[x_{0}.u_{0}, x_{1}.u_{1}]\mapsto z$ and show $z\real C$. We reason by induction on the sum of the sizes of the reduction trees of $u, u_{0}, u_{1}$. If $z=\inj_{i}(u')[x_{0}.u_{0}, x_{1}.u_{1}]$ or $z=t[x_{0}.u_{0}', x_{1}.u_{1}]$ or $z=t[x_{0}.u_{0}, x_{1}.u_{1}']$, with $u\mapsto u'$ and $u_{i}\mapsto u_{i}'$, then by \crdue\ we can apply the induction hypothesis and obtain $z\real C$. If $z=u_{i}[u/x_{i}]$, since $u\real A_{i}$, we obtain $z\real C$.\\ \item $t=\E{a}{w_{0}}{w_{1}}$. By induction hypothesis $w_{0}[x_{0}.u_{0}, x_{1}.u_{1}]\real C$ and for all numerals $n$, $w_{1}[a:=n][x_{0}.u_{0}, x_{1}.u_{1}]\real C$.
By 4., $\E{a}{w_{0}}{w_{1}}[x_{0}.u_{0}, x_{1}.u_{1}]\real C$.\\ \item $t\notin \mathsf{NF}$ is neutral. We reason by induction on the sum of the sizes of the reduction trees of $u_{0}, u_{1}$. By \crtre,\ it suffices to assume that $t[x_{0}.u_{0}, x_{1}.u_{1}]\mapsto z$ and show $z\real C$. If $z=t'[x_{0}.u_{0}, x_{1}.u_{1}]$, with $t\mapsto t'$, we apply the (main) induction hypothesis and obtain $z\real C$. If $z=t[x_{0}.u_{0}', x_{1}.u_{1}]$ or $z=t[x_{0}.u_{0}, x_{1}.u_{1}']$, with $u_{i}\mapsto u_{i}'$, then by \crdue\ we can apply the induction hypothesis and obtain $z\real C$.\\ \end{itemize} \item Analogous to 5. \end{enumerate} \end{proof} \begin{proof}[Proof of the Adequacy Theorem] \newcommand{\substitution} [1] { {\overline{#1}} } Notation: for any term $v$ and formula $B$, we denote \[v[t_1/x_1\cdots t_n/x_n\ {r}_1/\alpha_1\cdots {r}_k/\alpha_k\ a_{1}:=i_{1}\cdots a_{m}:=i_{m} ]\] with $\substitution{v}$ and \[B[{r}_1/\alpha_1\cdots {r}_k/\alpha_k]\] with $\substitution{B}$. We proceed by induction on $w$. Consider the last rule in the derivation of $\Gamma\vdash w: A$: \begin{enumerate} \item If it is the rule $\Gamma \vdash \hyp{b_{j}}{\alpha_{j}}{P_{j}}: \forall\alpha_{j}^{\Nat} \emp{j}$, then $w=\hyp{b_{j}}{\alpha_{j}}{P_{j}}$ and $A= \forall\alpha_{j}^{\Nat} \emp{j}$. So $\substitution{w}=\hyp{b_{j}}{\alpha_{j}}{\substitution{P}_{j}} $. Let $n$ be any closed term of $\Language$. We must show that $\substitution{w}n\real \substitution{\emp{j}}[n/\alpha_{j}]$. We have $\hyp{b_{j}}{\alpha_{j}}{\substitution{P}_{j}}n\in \mathsf{SN}$; moreover, if $\hyp{b_{j}}{\alpha_{j}}{\substitution{P}_{j}}n\mapsto z$, then $z$ is $\True$ and $\substitution{\emp{j}}[n/\alpha_{j}]\evaluates \True$, and thus $z\real \substitution{\emp{j}}[n/\alpha_{j}]$; if $\hyp{b_{j}}{\alpha_{j}}{\substitution{P}_{j}}n\in\mathsf{NF}$, then $\substitution{\emp{j}}[n/\alpha_{j}]\evaluates \False$.
We conclude $\hyp{b_{j}}{\alpha_{j}}{\substitution{P}_{j}} \real \forall\alpha_{j}^{\Nat} \substitution{\emp{j}}=\substitution{A}$.\\ \item If it is the rule $ \Gamma \vdash \wit{a_{j}}{\alpha_{j}}{P_{j}}: \exists\alpha_{j}^{\Nat} \emp{}_{j}^{\bot}$, then $w=\wit{a_{j}}{\alpha_{j}}{P_{j}}$ and $A= \exists \alpha_{j}^{\Nat} \emp{}_{j}^{\bot}$. We have two possibilities. i) $\substitution{w}=(i_{j},\True)$ and $\substitution{\emp{j}}[i_{j}/\alpha_{j}]\evaluates \False$. But this means that $\substitution{w}\real \exists \alpha_{j}^{\Nat} \substitution{\emp{j}}^{\bot}$. ii) $\substitution{w}=(i_{j}, \hyp{a_{j}}{\alpha}{\,\alpha=0}\mathsf{S} 0)$. Again, $\substitution{w}\real \exists \alpha_{j}^{\Nat} \substitution{\emp{j}}^{\bot}$.\\ \item If it is a $\lor$-I rule, say left (the other case is symmetric), then $w=\inj_{0}(u)$, $A=B\vee C$ and $\Gamma \vdash u: B$. So, $\substitution{w}=\inj_{0}(\substitution{u})$. By induction hypothesis $\substitution{u}\real \substitution{B}$ and thus $\substitution{u}\in\mathsf{SN}$. We conclude $\inj_{0}(\substitution{u}) \real \substitution{B}\lor\substitution{C}= \substitution{A}$. \\ \item If it is a $\vee$-E rule, then \[w= u [x.w_1, y.w_2] \] and $\Gamma \vdash u: B\vee C$, $\Gamma, x: B \vdash w_1: D$, $\Gamma, y: C \vdash w_2: D$, $A=D$. By induction hypothesis, we have $\substitution{u}\real \substitution{B}\lor \substitution{C}$; moreover, for every $t\real \substitution{B}$, we have $\substitution{w}_{1}[t/x]\real \substitution{D}$ and for every $t\real \substitution{C}$, we have $\substitution{w}_{2}[t/y]\real \substitution{D}$. By \cref{proposition-somecases}, we obtain $\substitution{w}=\substitution{u} [x.\substitution{w}_1, y.\substitution{w}_2]\real \substitution{D}$. \\ \item The cases $\exists$-I and $\exists$-E are similar respectively to $\lor$-I and $\lor$-E.\\ \item If it is the $\forall$-E rule, then $w=ut$, $A=B[t/\alpha]$ and $\Gamma \vdash u: \forall \alpha^{\Nat} B$.
So, $\substitution{w}=\substitution{u}\substitution{t}$. By induction hypothesis $\substitution{u}\real \forall\alpha^{\Nat} \substitution{B}$ and so $\substitution{u}\substitution{t}\real \substitution{B}[\substitution{t}/\alpha]$. \\ \item If it is the $\forall$-I rule, then $w=\lambda \alpha u$, $A=\forall \alpha^{\Nat} B$ and $\Gamma \vdash u: B$ (with $\alpha$ not occurring free in the formulas of $\Gamma$). So, $\substitution{w}=\lambda \alpha \substitution{u}$, since we may assume $\alpha\neq \alpha_1, \ldots, \alpha_k$. Let $t$ be any closed term of $\Language$; by \cref{proposition-somecases}, it is enough to prove that $\substitution{u}[t/\alpha]\real \substitution{B}[{t}/\alpha]$, which amounts to showing that the induction hypothesis can be applied to $u$. For this purpose, we observe that, since $\alpha\neq \alpha_1, \ldots, \alpha_k$, for $i=1, \ldots, n$ we have \[t_i\real \substitution{A}_i=\substitution{A}_i[t/\alpha]\] \item If it is the induction rule, then $w= \rec u v t$, $A=B(t)$, $\Gamma \vdash u: B(0)$ and $\Gamma \vdash v: \forall \alpha^{\Nat}. B(\alpha)\rightarrow B(\mathsf{S}(\alpha))$. So, $\substitution{w}= \rec \substitution{u}\substitution{v}l$, for some numeral $l=\substitution{t}$. We prove that for all numerals $n$, $\rec \substitution{u}\substitution{v} n \real \substitution{B}({n})$. By \crtre,\ it is enough to suppose that $\rec \substitution{u}\substitution{v} n \mapsto w$ and show that $w\real \substitution{B}({n})$. By induction hypothesis $\substitution{u}\real \substitution{B}(0)$ and $\substitution{v}{m}\real \substitution{B}({m})\rightarrow \substitution{B}({\mathsf{S}(m)})$ for all closed terms $m$ of $\Language$. So by \cruno,\ we can reason by induction on the sum of the sizes of the reduction trees of $\substitution{u}$ and $\substitution{v}$ and the size of $n$. If $n=0$ and $w=\substitution{u}$, then we are done.
If $n=\mathsf{S}(m)$ and $w=\substitution{v}m(\rec \substitution{u}\substitution{v}m)$, by induction hypothesis $\rec \substitution{u}\substitution{v}m\real \substitution{B}({m})$; therefore, $w\real \substitution{B}(\mathsf{S}(m))$. If $w=\rec u' \substitution{v}n$, with $\substitution{u}\mapsto u'$, by induction hypothesis $w\real\substitution{B}(n)$. We conclude the same if $w=\rec \substitution{u} {v}'n$, with $\substitution{v}\mapsto v'$. We thus obtain that $\substitution{w}\real \substitution{B}(l)=\substitution{B(t)}$. \\ \item If it is the $\EM_{1}$ rule, then $w= \E{a}{u}{v}$, $\Gamma, a: \forall \alpha^{\Nat} \emp{} \vdash u: C$ and $\Gamma, a:\exists \alpha^{\Nat}\emp{}^{\bot} \vdash v: C$ and $A=C$. By induction hypothesis, $\substitution{u}\real \substitution{C}$ and for all numerals $m$, $\substitution{v}[a:=m]\real \substitution{C}$. By \crquattro,\ we conclude $\substitution{w}=\E{a}{\substitution{u}}{\substitution{v}}\real \substitution{C}$. \\ \item If it is a Post rule, the case where $w$ is $\True$ is trivial, so we may assume $w=\mathsf{r}t_{1}\ldots t_{n}$, $A=\emp{}$ and $\Gamma\vdash t_{1}: \emp{1}, \ldots, \Gamma \vdash t_{n}: \emp{n}$. By induction hypothesis, for $i=1,\ldots, n$, we have $\substitution{t}_{i}\real \substitution{\emp{i}}$. By \cruno,\ we can argue by induction on the size of the reduction tree of $\substitution{w}$. We have two cases. i) $\substitution{w}\in\mathsf{NF}$. For $i=1,\ldots, n$, by \cref{theorem-extraction}, we obtain $\substitution{t}_{i}\in\mathsf{PNF}$. Therefore, also $\substitution{w}\in\mathsf{PNF}$. Assume now $\substitution{\emp{}}\evaluates \False$. Then, for some $i$, $\substitution{\emp{i}}\evaluates \False$. Therefore, $\substitution{t}_{i}$ contains a subterm $[a]\Hyp{Q}{\alpha}n$ with $\mathsf{Q}[n/\alpha]\evaluates \False$, and thus so does $\substitution{w}$. We conclude $\substitution{w}\real \substitution{\emp{}}$. ii) $\substitution{w}\notin\mathsf{NF}$.
By \crtre,\ it is enough to suppose $\substitution{w}\mapsto z$ and show $z\real \substitution{\emp{}}$. We have $z=\mathsf{r}\substitution{t}_{1}\ldots \substitution{t}_{i}'\ldots \substitution{t}_{n}$, with $\substitution{t}_{i}\mapsto \substitution{t}_{i}'$, and by \crdue,\ $\substitution{t}_{i}'\real \substitution{\emp{i}}$. By induction hypothesis, $z\real \substitution{\emp{}}$. \end{enumerate} \end{proof} As an easy corollary, we get strong normalization of the system: \begin{corollary}[Strong Normalization of $\HA+\EM_1$] All terms of $\HA+\EM_{1}$ are strongly normalizing. \end{corollary} \subsubsection{Future Work} As is known, for example from the field of proof mining \cite{Kohlenbach08}, Markov's principle is fundamental for extracting constructive information from proofs that are not purely constructive. A similarly important principle is the \emph{double negation shift}, stated as \[ \forall x \neg \neg A(x) \to \neg \neg \forall x A(x) \] for $A$ atomic. In \cite{Ilik12}, Danko Ilik showed that an intuitionistic logic extended with this principle retains the disjunction and existential properties, using techniques similar to those of Herbelin. In the same work, he mentions that Herbelin had also extended his calculus of delimited control operators to a system proving this principle. Given the relation between Herbelin's work on Markov's principle and our current work, it would be interesting to see whether one could develop a modified version of $\HA + \EM^-$ that is able to interpret the double negation shift. A candidate could be the system $\IL +\EM$ presented in \cite{Aschieri16}: we conjecture that a version of this system for arithmetic, with restrictions similar to those presented in this thesis, would be constructive; its relationship with other principles remains to be studied.
\section{Another constructive system based on delimited exceptions} We have just seen that any proof of a simply existential statement can be adequately modified to only use Markov's principle and no other classical reasoning. Given the constructive properties of Markov's principle, this suggests that proofs of this kind should also allow a direct interpretation in the spirit of the Curry-Howard correspondence (in addition to the one obtained by simply transforming the proofs into proofs using Markov's principle). However, as we noted before, the $\textsc{em}^{-}$ rule does not lead to a clear computational interpretation. We therefore define a new system and prove its equivalence with $\HA+\EM^-$. We consider a generalization of the $\textsc{em}_1^{-}$ rule in the form of the \emenne{n} rules, derived naturally from the ones defined in \cite{Aschieri16}: \begin{prooftree} \AxiomC{$\Gamma, \forall \alpha \emp{} \vdash \exists x C$} \AxiomC{$\Gamma, \exists \alpha \emp{}^\bot \vdash \exists x C$} \RightLabel{$\textsc{em}_1^{-}$ } \BinaryInfC{$\Gamma \vdash \exists x C$} \end{prooftree} \begin{prooftree} \AxiomC{$\Gamma, \forall \alpha A \vdash \exists x C$} \AxiomC{$\Gamma, \exists \alpha A^\bot \vdash \exists x C$} \RightLabel{\emenne{n}} \BinaryInfC{$\Gamma \vdash \exists x C$} \end{prooftree} where $C$ is quantifier-free, and $A$ is a formula in prenex normal form with $n-1$ alternating quantifiers, the outermost being $\exists$.
$A^\bot$ is defined as follows: \begin{itemize} \item For atomic predicates $\emp{}$, $\emp{}^\bot$ is the complement of the predicate $\emp{}$. \item For prenex formulas with alternating quantifiers $A$, if $A=\mathtt{Q}_{1} \alpha_{1}\,\mathtt{Q}_{2} \alpha_{2}\ldots \mathtt{Q}_{n} \alpha_{n}\, \emp{}$, with $\mathtt{Q}_{i}\in\{\forall,\exists\}$ for $i=1\ldots n$, then $A^{\bot}=\overline{\mathtt{Q}}_{1} \alpha_{1}\,\overline{\mathtt{Q}}_{2} \alpha_{2}\ldots \mathtt{\overline{Q}}_{n} \alpha_{n}\, \emp{}^{\bot}$, where $\mathtt{\overline{Q}}_{i}\in \{\forall,\exists\}\setminus \{\mathtt{Q}_{i}\}$, for $i=1\ldots n$. \end{itemize} For instance, if $A=\exists \alpha_{1}\,\forall \alpha_{2}\, \emp{}$, then $A^{\bot}=\forall \alpha_{1}\,\exists \alpha_{2}\, \emp{}^{\bot}$. The system $\HA+\EM^\bot$ is obtained by adding the \emenne{n} rule to $\HA$ for all $n$. The new system is equivalent to the one we have studied so far: \begin{thm} $\HA+\EM^\bot \vdash F$ if and only if $\HA+\EM^- \vdash F$ \end{thm} \begin{proof} System $\HA+\EM^\bot$ is at least as strong as $\HA + \EM^-$: since it contains the rule $\textsc{em}_1^{-}$, it can prove Markov's principle; in the first section we have seen that the rule $\textsc{em}^{-}$ is equivalent to Markov's principle and thus $\HA+\EM^\bot \vdash \textsc{em}^-$. Conversely, it is not stronger than $\HA+\EM^-$. We know that we can formalize in the system $\HA + \EM^-$ any classical proof of a simply existential statement. It can be easily seen that $\forall \alpha A \lor \exists \alpha A^\bot$ is a classical tautology, and therefore using an \emenne{n} rule corresponds directly to a classical proof making use of the proof of the tautology in a disjunction elimination. Therefore any \emenne{n} rule can be formalized in $\HA+\EM^-$. \end{proof} \subsubsection{Curry-Howard correspondence for $\HA+\EM^\bot$} In order to provide a proof term for the rule $\textsc{em}_1$, we introduced a mechanism of delimited exception handling. How can we generalize this mechanism to the new setting?
Clearly, in the case of the rule $\textsc{em}_1^{-}$ in $\HA+\EM^\bot$ we can just reuse the term we introduced in $\HA+\EM_1^-$: $$\begin{array}{c} \Gamma, a: \forall \alpha\, \emp{} \vdash u: C\ \ \ \ \ \ \ \Gamma, a: \exists \alpha\,\emp{}^\bot \vdash v:C\\ \hline \Gamma\vdash {u}\, | |_{a}\,{v} : C \end{array}\ \textsc{em}^-_1$$ If we proceed in a similar way, we can introduce the term ${u}\, | | |_{a}\,{v}$ to decorate the conclusion of the rule \emenne{2}: $$\begin{array}{c} \Gamma, a: \forall \alpha\,\exists \beta\, \emp{}^\bot \vdash u: C\ \ \ \ \ \ \ \Gamma, a: \exists \alpha\,\forall \beta\,\emp{} \vdash v: C\\ \hline \Gamma\vdash {u}\, | | |_{a}\,{v} : C \end{array}\ \textsc{em}^-_2$$ \begin{figure*}[!htb] \footnotesize{ \begin{description} \item[Grammar of Untyped Proof Terms] \[t,u, v::=\ x\ |\ tu\ |\ tm\ |\ \lambda x\, u\ |\ \lambda \alpha\, u\ |\ \langle t, u\rangle\ |\ u\,\pi_0\ |\ u\,\pi_{1} \ |\ \inj_{0}(u)\ |\ \inj_{1}(u)\ |\ t[x.u, y.v]\ |\ (m,t)\ |\ t[(\alpha, x). u]\] \[|\ (\Ez{}{u}{v})\ |\ (u\, |\,|\,\ldots |_{a}\,{v})\ |\ \Hypz{a}{\forall \alpha A}\ |\ {\Witz{a}{\exists \alpha A^\bot}}\ |\ \True \ |\ \rec u v m \ |\ \mathsf{r}t_{1}\ldots t_{n} \] where $m$ ranges over terms of $\Language$, $x$ over proof-term variables, $\alpha$ over first-order variables, $a$ over hypothesis variables and ${A}$ is a prenex formula with alternating quantifiers and negative propositional matrix or is a simply universal formula. We assume that in the term ${u}\, |\,|\,\ldots |_{a}\,{v}$, there is some formula $A$, such that $a$ occurs free in $u$ only in subterms of the form $\Hypz{a}{\forall \alpha A}$ and $a$ occurs free in $v$ only in subterms of the form $\Witz{a}{\exists \alpha A^\bot}$, and the occurrences of the variables in $A$ different from $\alpha$ are free in both $u$ and $v$.
\item[Contexts] With $\Gamma$ we denote contexts of the form $e_1:A_1, \ldots, e_n:A_n$, where each $e_{i}$ is either a proof-term variable $x, y, z\ldots$ or an $\EM$ hypothesis variable $a, b, \ldots$, and $e_{i}\neq e_{j}$ for $i\neq j$. \item[Axioms] $\begin{array}{c} \Gamma, x:{A}\vdash x: A \end{array}\ \ \ \ $ $\begin{array}{c} \Gamma, a:{\forall {\alpha}\, A}\vdash \Hypz{a}{\forall \alpha A}: \forall{\alpha}\, A \end{array}\ \ \ \ $ $\begin{array}{c} \Gamma, a:{\exists \alpha\, A^{\bot}}\vdash {\Witz{a}{\exists \alpha A^\bot}}: \exists{\alpha}\, A^{\bot} \end{array}$\\ \item[Conjunction] $\begin{array}{c} \Gamma \vdash u: A\ \ \ \Gamma\vdash t: B\\ \hline \Gamma\vdash \langle u,t\rangle: A\wedge B \end{array}\ \ \ \ $ $\begin{array}{c} \Gamma \vdash u: A\wedge B\\ \hline\Gamma \vdash u\,\pi_0: A \end{array}\ \ \ \ $ $\begin{array}{c} \Gamma \vdash u: A\wedge B\\ \hline \Gamma\vdash u\,\pi_1 : B \end{array}$\\\\ \item[Implication] $\begin{array}{c} \Gamma\vdash t: A\rightarrow B\ \ \ \Gamma\vdash u:A \\ \hline \Gamma\vdash tu:B \end{array}\ \ \ \ $ $\begin{array}{c} \Gamma, x:A \vdash u: B\\ \hline \Gamma\vdash \lambda x\, u: A\rightarrow B \end{array}$\\\\ \item[Disjunction Introduction] $\begin{array}{c} \Gamma \vdash u: A\\ \hline \Gamma\vdash \inj_{0}(u): A\vee B \end{array}\ \ \ \ $ $\begin{array}{c} \Gamma \vdash u: B\\ \hline \Gamma\vdash\inj_{1}(u): A\vee B \end{array}$\\\\ \item[Disjunction Elimination] $\begin{array}{c} \Gamma\vdash u: A\vee B\ \ \ \Gamma, x: A \vdash w_1: C\ \ \ \Gamma, y:B\vdash w_2: C\\ \hline \Gamma\vdash u\, [x.w_{1}, y.w_{2}]: C \end{array}$\\\\ \item[Universal Quantification] $\begin{array}{c} \Gamma \vdash u:\forall \alpha A\\ \hline \Gamma\vdash um: A[m/\alpha] \end{array} $ $\begin{array}{c} \Gamma \vdash u: A\\ \hline \Gamma\vdash \lambda \alpha\, u: \forall \alpha A \end{array}$\\ where $m$ is any term of the language $\Language$ and $\alpha$ does not occur free in any formula $B$ occurring in $\Gamma$.\\ \item[Existential
Quantification] $\begin{array}{c}\Gamma\vdash u: A[m/\alpha]\\ \hline \Gamma\vdash ( m,u): \exists \alpha A \end{array}$ \ \ \ \ $\begin{array}{c} \Gamma\vdash u: \exists \alpha A\ \ \ \Gamma, x: A \vdash t:C\\ \hline \Gamma\vdash u\, [(\alpha, x). t]: C \end{array} $\\ where $\alpha$ is not free in $C$ nor in any formula $B$ occurring in $\Gamma$.\\ \item[Induction] $\begin{array}{c} \Gamma\vdash u: A(0)\ \ \ \Gamma\vdash v:\forall \alpha^{\Nat}. A(\alpha)\rightarrow A(\mathsf{S}(\alpha))\\ \hline \Gamma\vdash\lambda \alpha^{\Nat} \rec uv\alpha : \forall \alpha^{\Nat} A \end{array}$ \item[Post Rules] $\begin{array}{c} \Gamma\vdash u_1: A_1\ \Gamma\vdash u_2: A_2\ \cdots \ \Gamma\vdash u_n: A_n\\ \hline\Gamma\vdash u: A \end{array}$ where $A_1,A_2,\ldots,A_n,A$ are atomic formulas of $\HA$ and the rule is a Post rule for equality, for a Peano axiom or for a classical propositional tautology or for booleans and if $n>0$, $u=\mathsf{r} u_{1}\ldots u_{n}$, otherwise $u=\True$. \item[$\EM_n^-$]$\begin{array}{c} \Gamma, a: \forall {\alpha}\, A \vdash u: C\ \ \ \ \Gamma, a: \exists \alpha\, A^{\bot} \vdash v: C\\ \hline \Gamma\vdash u\underbrace{|\,|\,\ldots |_{a}}_{n+1 \mbox{{\tiny bars}}}\, v : \mathsf{C} \end{array}$\\ where $\mathsf{C}$ is atomic, $A$ is a formula with alternating quantifiers and: i) when $n=1$, $A=\prop{P}$ with $\prop{P}$ negative; ii) when $n> 1$, $A=\exists \alpha_{1}\,\forall \alpha_{2}\exists \alpha_{3}\ldots\mathtt{Q}_{n-1} \alpha_{n-1}\, \prop{P}$, with $\prop{P}$ negative and $\mathtt{Q}_{n-1}\in\{\forall,\exists\}$.\\\\ \end{description} } \caption{Term Assignment Rules for $\HA+\EM^\bot$}\label{fig:system} \end{figure*} The complete term system together with the type assignment is described in \cref{fig:system}. But how can we use this term?
Remaining faithful to the ideas of \emph{learning}, assume we start by reducing inside $u$ in order to get a proof of $C$; when in the course of this reduction we need to use an instance $\exists \beta\, \emp{}^\bot[m/\alpha]$ of the universal assumption at a closed first-order term $m$, we do not have a direct way to check its truth: therefore an exception is automatically thrown, which results in the creation of a duplicated environment. In one environment, we assume that the instance holds; therefore, we have \emph{learned} new information: we erase from $u$ the premise $\forall \alpha\,\exists \beta\,\emp{}^\bot$ of all eliminations of $\forall \alpha\,\exists\beta\, \emp{}^\bot$ having as conclusion $\exists\beta\,\emp{}^\bot[m/\alpha]$ and obtain the term $u^-$. In the other environment, we assume that it does not hold: here we make much greater progress, since we know that $\forall \beta\, \emp{}[m/\alpha]$ holds. We obtain $v^{+}$ by replacing all occurrences of the hypothesis $\exists \alpha\,\forall\beta\,\emp{}$ with a proof of it by an introduction rule with premise $\forall\beta\,\emp{}[m/\alpha]$. We can see that the two assumptions are now of a lower complexity: the two environments can be put in communication using the rule $\textsc{em}_1$.
Therefore the application of \emenne{2} can be reduced to \begin{prooftree} \AxiomC{$\Gamma, b:\forall \beta\, \emp{}[m/\alpha] \vdash v^{+}: C\qquad$} \AxiomC{$\Gamma, a: \forall \alpha\,\exists \beta\, \emp{}^{\bot}, b:\exists\beta\, \emp{}^{\bot}[m/\alpha]\vdash u^{-}: C$} \AxiomC{$\Gamma, a: \exists \alpha\,\forall \beta\,\emp{}\vdash v: C$} \RightLabel{\emenne{2}} \insertBetweenHyps{\hskip 8pt} \BinaryInfC{$\Gamma, b: \exists \beta\,\emp{}^{\bot}[m/\alpha]\vdash {u^{-}}\, | | |_{a}\,{v}: C$} \RightLabel{\emenne{1}} \insertBetweenHyps{\hskip -19pt} \BinaryInfC{$\Gamma\vdash \E{b}{v^{+}}{({u^{-}}\, | | |_{a}\, {v})}: C$} \end{prooftree} The two environments are parallel, but can still communicate with each other: an exception may at any moment be raised by $v^{+}$ and a term be passed in particular to $u^{-}$. The reduction rules for all the \emenne{n} rules are just the generalization of this reduction. In order to implement the reduction rules, we will need to introduce new terms associated to the assumptions. In exactly the same way as before, we have $$\begin{array}{c} \Gamma, a: \forall \alpha\, A \vdash \Hypz{a}{\forall \alpha A} : \forall \alpha\, A \end{array}\ \ \ \ \begin{array}{c} \Gamma, a:{\exists \alpha\, A^{\bot}}\vdash \Witz{a}{\exists \alpha A^\bot}: \exists\alpha\, A^{\bot} \end{array}$$ where the first term introduces a hypothesis, the second waits for a witness (again, they can be thought of as corresponding to \texttt{raise} and \texttt{catch}) and $a$ represents a communication channel. However, there is a major difference with the system of the previous chapter: as we said, an exception is now automatically raised whenever the term $\Hypz{a}{\forall \alpha A}$ is used, since we cannot check its truth; the ordinary computation might continue, and raise multiple exceptions. The computation is aborted only in the case of an exception relative to an atomic formula, when we can finally check the value of the term.
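To fix ideas, here is a symbolic sketch of one exception step for \emenne{2}; it merely instantiates the reduction rules and the exception substitution given below, with $u$, $v$ generic proof terms and $m$ a generic closed term of $\Language$. If $u$ uses its assumption at $m$, that is, if it contains an active subterm $\Hypz{a}{\forall \alpha\,\exists \beta\, \emp{}^{\bot}}m$, then, taking $e=(m,b)$ with $b$ a fresh $\EM$-hypothesis variable,
$$u\, | | |_{a}\, v\;\mapsto\; \E{b}{(v[a:=e])}{((u[a:=e])\, | | |_{a}\, v)}$$
where the substitution $[a:=e]$ replaces in $u$ the subterm $\Hypz{a}{\forall \alpha\,\exists \beta\, \emp{}^{\bot}}m$ by $\Witz{b}{\exists \beta\, \emp{}^{\bot}[m/\alpha]}$, and in $v$ every occurrence of $\Witz{a}{\exists \alpha\,\forall \beta\, \emp{}}$ by $(m, \Hypz{b}{\forall \beta\, \emp{}[m/\alpha]})$. The right-hand side is precisely the term $\E{b}{v^{+}}{({u^{-}}\, | | |_{a}\, {v})}$ of the derivation above, and the residual assumptions on the channel $b$ are of $\EM_{1}$ complexity.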
Before defining the reduction rules, we should define a new exception substitution, replacing the old witness substitution. \begin{definition}[Exception Substitution]\label{def:witsub} \label{definition-witsub} Suppose $v$ is any proof term and $e=(m,b)$, where $m$ is a term of $\Language$ and $b$ is an $\EM$-hypothesis variable. Then: \begin{enumerate} \item If every free occurrence of $a$ in $v$ is of the form $\hyp{a}{\alpha}{A}$, and $\forall \alpha\, A$ is prenex with alternating quantifiers with $A$ non propositional, we define $$v[a:=e]$$ as the term obtained from $v$ by replacing, without capture of $b$, each subterm $\Hypz{a}{\forall \alpha A}m$ corresponding to a free occurrence of $a$ in $v$ by $ \Witz{b}{A[m/\alpha]}$. \item If every free occurrence of $a$ in $v$ is of the form $\Witz{a}{\exists \alpha A}{}$, and $\exists \alpha\, A$ is prenex with alternating quantifiers with $A$ non propositional, we define $$v[a:=e]$$ as the term obtained from $v$ by replacing, without capture of $b$, each subterm $\Witz{a}{\exists \alpha A}$ corresponding to a free occurrence of $a$ in $v$ by $(m, \Hypz{b}{A[m/\alpha]})$ \item If every free occurrence of $a$ in $v$ is of the form $\Witz{a}{\exists \alpha \mathsf{P^\bot}}{}$, and $\emp{}^\bot$ is atomic, we define $$v[a:=e]$$ as the term obtained from $v$ by replacing every occurrence of $\wit{a}{\alpha}{P}$ in $v$ by $(m,\True)$ if $\emp{}[m/\alpha] \equiv \False$, and by $(m,\hyp{b}{\alpha}{\alpha=0}\mathsf{S} 0)$ otherwise. \end{enumerate} \end{definition} We can now describe the reduction rules (\cref{fig:red}). \begin{figure*}[!htb] \footnotesize{ \begin{description} \item[Reduction Rules for $\HA$] \[(\lambda x. u)t\mapsto u[t/x]\qquad (\lambda \alpha.
u)t\mapsto u[t/\alpha]\] \[ \pi_{i}\pair{u_0}{u_1}\mapsto u_i, \mbox{ for i=0,1}\] \[\inj_{i}(u)[x_{1}.t_{1}, x_{2}.t_{2}]\mapsto t_{i}[u/x_{i}], \mbox{ for i=0,1} \] \[(n, u)[(\alpha,x).v]\mapsto v[n/\alpha][u/x], \mbox{ for each numeral $n$} \] \[\rec u v 0 \mapsto u\] \[\rec u v (\mathsf{S} n) \mapsto v n (\rec u v n), \mbox{ for each numeral $n$} \] \item[Permutation Rules for $\EM_{n}$] \[(\Ecrom{a}{u}{v}) w \mapsto \Ecrom{a}{uw}{vw},\mbox{ if $a$ does not occur free in $w$} \] \[(\Ecrom{a}{u}{v})\pi_{i} \mapsto \Ecrom{a}{u\pi_{i}}{v\pi_{i}} \] \[(\Ecrom{a}{u}{v})[x.w_{1}, y.w_{2}] \mapsto \Ecrom{a}{u[x.w_{1}, y.w_{2}]}{v[x.w_{1}, y.w_{2}]},\mbox{ if $a$ does not occur free in $w_{1},w_{2}$} \] \[(\Ecrom{a}{u}{v})[(\alpha, x).w] \mapsto \Ecrom{a}{u[(\alpha, x).w]}{v[(\alpha, x).w]}, \mbox{ if $a$ does not occur free in $w$} \] \item[Reduction Rules for $\EM_1$] \[\E{a} u v\mapsto u,\ \mbox{ if $a$ does not occur free in $u$ }\] \[\E{a} u v\mapsto v[a:=n],\ \mbox{ if $\hyp{a}{\alpha}{P}n$ occurs in $u$ and $\emp{} [n/\alpha]$ is closed and $\emp{} [n/\alpha]\equiv\False$ }\] \[(\hyp{a}{\alpha}{P})n \mapsto \True \mbox{ if $\emp{}[n/\alpha]$ is \emph{closed} and $\emp{}[n/\alpha] \equiv \True$}\] \item[Reduction Rules for $\EM_{n}$ (n > 1)] \[\Hyp{a}{A}{\alpha}m \mapsto \Witz{b}{A[m/\alpha]}, \mbox{ $b$ fresh}\] \[\Ecrom{a}{u}{v}\mapsto u,\ \mbox{ if $a$ does not occur free in $u$ }\] \[\Esucc{a}{u}{v}\mapsto \Ecrom{\mathsf{b}}{v[a:=e]}{(\Esucc{a}{u[a:=e]}{v})} ,\mbox{ whenever $u$ has some \emph{active} subterm $\Hyp{a}{A}{\alpha}m$, $e=(m,b)$ and}\] \[\mbox{$b$ is a fresh $\EM$-hypothesis variable }\] \end{description}} \caption{Reduction Rules for $\HA$ + $\EM^\bot$}\label{fig:red} \end{figure*} \begin{proposition}[Normal Form Property]\label{prop:pnf} Let $\prop{P}, \prop{P}_{1}, \ldots, \prop{P}_{n}$ be negative propositional formulas.
Suppose that $$\Gamma=x_1: \prop{P}_{1},\ldots,x_n: \prop{P}_{n}, a_{1}: \forall {\alpha}_{1}{A}_{1},\ldots, a_{m}:\forall {\alpha}_{m}{A}_{m}, $$ and $\Gamma\vdash t:\exists {\alpha}\, \prop{P}$ or $\Gamma\vdash t: \prop{P}$, with $t\in\mathsf{NF}$ and having all its free variables among $x_{1}, \ldots, x_{n}, a_{1}, \ldots, a_{m}$. Then:\\ \begin{enumerate} \item Either every occurrence in $t$ of every term $\Hyp{a_{i}}{{A_i}}{{\alpha_{i}}}$ is of the active form $\Hyp{a_{i}}{{A_i}}{{\alpha_{i}}}m$, where $m$ is a term of $\Language$; or $t$ has an active subterm of the form $\Hyp{a_{i}}{{A_i}}{{\alpha_{i}}}m$, for some non simply universal formula $A_i$ and term $m$ of $\Language$. \\ \item Either $t=(m, u)$ or $t=\lambda x\, u$ or $t=\langle u, v \rangle$ or $t=\Ez{}{u}{v}$ or $t=\Ecrom{a}{u}{v}$ or $t=\Hypz{}{\prop{P}}$ or $t=x_{i}\, t_{1}\, t_{2}\ldots t_{n} $ or $t=\Hyp{a_{i}}{{A}_{i}}{{\alpha}}m\, t_{2}\ldots t_{n}$. \end{enumerate} \end{proposition} \begin{proof} We prove 1. and 2. simultaneously and by induction on $t$. There are several cases, according to the shape of $t$:\\ \begin{itemize} \item $t=(m, u)$, $\Gamma\vdash t:\exists {\alpha}\, \prop{P}$ and $\Gamma \vdash u: \prop{P}[m/\alpha]$. We immediately get 1. by induction hypothesis applied to $u$, while 2. is obviously verified.\\ \item $t=\lambda x\, u$, $\Gamma \vdash t: \prop{P}=\prop{Q}\rightarrow \prop{R}$ and $\Gamma, x: \prop{Q} \vdash u: \prop{R}$. We immediately get 1. by induction hypothesis applied to $u$, while 2. is obviously verified.\\ \item $t=\langle u, v\rangle$, $\Gamma \vdash t: \prop{P}=\prop{Q}\land \prop{R}$, $\Gamma \vdash u: \prop{Q}$ and $\Gamma \vdash v: \prop{R}$. We immediately get 1. by induction hypothesis applied to $u$, while 2. is obviously verified.\\ \item $t=\Ez{}{u}{v}$, $\Gamma, a: \prop{Q}^{\bot} \vdash u: \exists {\alpha}\, \prop{P}$ (resp. $u: \prop{P}$) and $\Gamma, a: \prop{Q} \vdash v: \exists {\alpha}\,\prop{P}$ (resp. $v: \prop{P}$). 
We immediately get 1. by induction hypothesis applied to $u$ and $v$, while 2. is obviously verified.\\ \item $t=\Ecrom{a}{u}{v}$, $\Gamma, a: \forall \beta\, A \vdash u: \exists {\alpha}\, \prop{P}$ (resp. $u: \prop{P}$) and $\Gamma, a: \exists {\beta}\, A^{\bot}\vdash v: \exists {\alpha}\,\prop{P}$ (resp. $v: \prop{P}$). We first observe that $a$ must occur free in $u$: otherwise, $t=\Ecrom{a}{u}{v}\mapsto u$, which would contradict $t\in \mathsf{NF}$. Now, by induction hypothesis, 1. holds with respect to $u$. Moreover, it cannot be that every occurrence in $u$ of every hypothesis variable $a_{i}$ or $a$ corresponds to an active term: otherwise, in particular, $u$ would have an active subterm of the form $\Hyp{a}{A}{\beta}m$, for some $m\in\Language$, and thus $t=\Esucc{a}{u}{v}\mapsto \Ecrom{\mathsf{b}}{v[a:=e]}{(\Esucc{a}{u[a:=e]}{v})}$, with $e=(m,\mathsf{b})$: but $t\in\mathsf{NF}$. Therefore, $u$ has an active subterm of the form $\Hyp{a_{i}}{{A_i}}{{\alpha_{i}}}m$, for some non simply universal formula $A_i$ and $m\in\Language$. We have thus established 1. for $t$, while 2. is obviously verified. \\ \item $t=\Hyp{a_{i}}{A_{i}}{{\alpha}}$. This case is not possible, since $\Gamma\vdash t:\exists {\alpha}\, \prop{P}$ or $\Gamma\vdash t: \prop{P}$. \\ \item $t=\Hypz{}{\prop{P}}$. In this case, 1. and 2. are trivially true. \\ \item $t$ is obtained by an elimination rule and we can write it as $r\, t_{1}\,t_{2}\ldots t_n$ where $r$ is not an application (this notation has been explained in Section~\ref{section-system}). Notice that in this case $r$ can be neither a redex nor a term of the form $\Ecrom{a}{u}{v}$ or $\Ez{}{u}{v}$, because of the permutation rules and $t\in \mathsf{NF}$. We have now two cases:\\ \begin{enumerate} \item $r=x_{i}$ (resp. $r=\Hypz{}{\prop{P}}$). Then, since $\Gamma \vdash x_{i}: \prop{P}_{i}$ (resp.
$\Gamma \vdash \Hypz{}{\prop{P}}: \prop{P}$), we have that for each $i$, either $t_{i}$ is $\pi_{j}$ or $\Gamma\vdash t_{i}: \prop{Q}$, where $\prop{Q}$ is a propositional formula. By induction hypothesis, each $t_{i}$ satisfies 1. and thus also $t$. 2. is obviously verified. \\ \item $r=\Hyp{a_{i}}{{A}_{i}}{{\alpha_i}}$. Then, $t_{1}$ is $m$, for some closed term $m$ of $\Language$. If $A_{i}$ is not simply universal, we obtain that $t$ satisfies 1., for $t=\Hyp{a_{i}}{{A}_{i}}{{\alpha_i}}m\, t_{2}\ldots t_{n}$. If $A_{i}=\forall\gamma_1\ldots\gamma_k\, \prop{Q}$, with $\prop{Q}$ propositional, we have that for each $i$, either $t_i$ is a closed term $m_i$ of $\Language$ or $t_{i}$ is $\pi_{j}$ or $\Gamma\vdash t_{i}: \prop{R}$, where $\prop{R}$ is a propositional formula. By induction hypothesis, each $t_{i}$ satisfies 1. and thus also $t$. 2. is obviously verified. \end{enumerate} \end{itemize} \end{proof} \begin{definition}[Herbrand Normal Forms] \label{definition-hnf} We define by induction a set of proof terms, called \emph{Herbrand normal forms}, as follows: \begin{itemize} \item Every normal proof-term $(m,u)$ is an Herbrand normal form; \item if $u$ and $v$ are Herbrand normal forms, $\Ez{}{u}{v}$ is an Herbrand normal form. \end{itemize} \end{definition} \begin{thm}[Herbrand Disjunction Extraction]\label{theorem-extraction} Let $\exists\alpha\,\prop{P}$ be any closed formula where $\prop{P}$ is atomic. Suppose $\Gamma \vdash t: \exists \alpha\, \prop{P}$, $t$ is quasi-closed and $t\mapsto^{*} t'\in\mathsf{NF}$. Then $\Gamma \vdash t': \exists \alpha\, \prop{P}$ and $t'$ is an Herbrand normal form $$\Ez{}{\Ez{}{\Ez{}{(m_{0}, u_{0})}{(m_{1}, u_{1})}}{}\ldots}{(m_{k}, u_{k})}$$ Moreover, $\Gamma \vdash \prop{P}[m_{0}/\alpha]\lor \dots \lor \prop{P}[m_{k}/\alpha]$. \end{thm} \begin{proof}\mbox{} We proceed by induction on $t'$. By the Subject Reduction Theorem \ref{subjectred}, $t': \exists \alpha\, \prop{P}$.
By Proposition \ref{prop:pnf}, $t'$ can only have three possible shapes: \begin{enumerate} \item $t'=\Ecrom{a}{u}{v}$. We show that this cannot happen. First, $a$ must occur free in $u$, otherwise $t'\notin\mathsf{NF}$. By Proposition \ref{prop:pnf}, we have two possibilities. i) Every occurrence in $u$ of every term $\Hyp{a_{i}}{{A_i}}{{\alpha_{i}}}$, with $a_{i}$ free, is of the active form $\Hyp{a_{i}}{{A_i}}{{\alpha_{i}}}m$, where $m\in\Language$; in particular this is true when $a_{i}=a$, which implies $t'\notin\mathsf{NF}$. ii) $u$ has an active subterm of the form $\Hyp{a_{i}}{{A_i}}{{\alpha_{i}}}m$, for some non simply universal formula $A_i$ and $m\in\Language$: since $t'$ is quasi-closed, $a_{i}=a$, which again implies $t'\notin\mathsf{NF}$. In any case, we have a contradiction. \item $t'=\Ez{}{u}{v}$; then, by induction hypothesis, $u, v$ are Herbrand normal forms, and thus by Definition \ref{definition-hnf}, $t'$ is an Herbrand normal form as well. \item $t'=(m, u)$; then, we are done. \end{enumerate} We have thus shown that $t'$ is an Herbrand normal form $$\Ez{}{\Ez{}{\Ez{}{(m_{0}, u_{0})}{(m_{1}, u_{1})}}{}\ldots}{(m_{k}, u_{k})}$$ Finally, we have that for each $i$, $\Gamma_{i}\vdash u_{i}: \prop{P}[m_{i}/\alpha]$, for the very same $\Gamma_{i}$ that assigns $(m_{i}, u_{i})$ the type $\exists \alpha\, \prop{P}$ in $t'$. Therefore, for each $i$, $\Gamma_{i}\vdash u_{i}^{+}: \prop{P}[m_{0}/\alpha]\lor \dots \lor \prop{P}[m_{k}/\alpha]$, where $u_{i}^{+}$ is of the form $\inj_{i_{1}}(\ldots \inj_{i_{k}}(u_{i})\ldots )$. We conclude that $$\Gamma \vdash \Ez{}{\Ez{}{\Ez{}{u_{0}^{+}}{u_{1}^{+}}}{}\ldots}{u_{k}^{+}}: \prop{P}[m_{0}/\alpha]\lor \dots \lor \prop{P}[m_{k}/\alpha]$$ \end{proof} \section{Heyting Arithmetic and Markov's principle} Throughout the introduction we made repeated references to \emph{Arithmetic}. By this name we mean, in its broadest sense, the theory of natural numbers with the usual operations of sum and product.
From the point of view of logic, although a complete axiomatization cannot exist because of G\"odel's theorems, the most common axiom system for this theory is known as Peano Arithmetic, $\mathsf{PA}$. It takes its name from Giuseppe Peano, and in its modern presentation it consists of a classical theory over the language with the constant $0$, the function symbols $\mathbf{s}, +, \cdot$ and the predicate $=$, with the axioms \begin{itemize} \item $\forall x (x = x)$ \item $\forall x \forall y (x = y \to y = x)$ \item $\forall x \forall y \forall z (x = y \to y = z \to x = z)$ \item $\forall x \forall y (x = y \to \mathbf{s} x = \mathbf{s} y)$ \item $\forall x \forall y (\mathbf{s} x = \mathbf{s}y \to x = y)$ \item $\forall x (\mathbf{s}x = 0 \to \bot)$ \item $\forall x (x + 0 = x)$ \item $\forall x \forall y (x + \mathbf{s}y = \mathbf{s}(x + y))$ \item $\forall x (x \cdot 0 = 0)$ \item $\forall x \forall y (x \cdot \mathbf{s}y = (x \cdot y) + x)$ \item $\forall x (\varphi(x) \to \varphi (\mathbf{s}x)) \to \varphi (0) \to \forall x \varphi(x)$, for all formulas $\varphi$ \end{itemize} The first four axioms define our notion of equality as an equivalence relation preserved by the successor operation. Then the following two state that the successor is a bijection between the naturals and the naturals greater than zero. After them we have the definitions for addition and multiplication, and finally the induction axiom scheme. By Heyting Arithmetic, $\HA$, we mean the intuitionistic theory of the same axioms. In this context, we formulate Markov's principle as the statement \[\neg \neg \exists x A(x) \to \exists x A(x)\] where $A$ is a quantifier-free formula. Alternatively, we can also use the following form, which is equivalent under the axioms of $\HA$: \[ \neg \forall x A (x) \to \exists x \neg A(x)\] It was mentioned in the introduction that the intuitionists did not accept Markov's principle.
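Since the equivalence of the two formulations is asserted without proof, here is a sketch of the direction from the first form to the second; the converse is analogous. The only non-logical ingredient is the $\HA$-provable decidability of quantifier-free formulas, which yields $\neg\neg A \to A$ for quantifier-free $A$. Assume $\neg \forall x\, A(x)$ and suppose $\neg \exists x\, \neg A(x)$; then intuitionistically
\[
\neg \exists x\, \neg A(x) \;\vdash\; \forall x\, \neg\neg A(x) \;\vdash\; \forall x\, A(x),
\]
contradicting the assumption. Hence $\neg\neg \exists x\, \neg A(x)$, and applying the first form of Markov's principle to the quantifier-free formula $\neg A$ gives $\exists x\, \neg A(x)$.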
In line with this, neither of the formulas we just presented can be proved in the system of Heyting's intuitionistic arithmetic; however, as we are going to see, realizability interpretations provide a mixed answer on this point. \section{G\"odel's \emph{Dialectica} interpretation} \label{sec:godels-dialec-interpr} Although not usually included under the category of realizability interpretations, the functional interpretation of intuitionistic arithmetic introduced by G\"odel, commonly referred to as the \emph{Dialectica} interpretation \cite{Goedel58}, is probably the first step into this line of research. As is made explicit in the title of the series of lectures where he first introduced his ideas, \emph{In what sense is intuitionistic logic constructive} \cite{Goedel41}, G\"odel aimed at making clearer the constructive meaning of the intuitionistic logical constants. In order to do this, he proposed a system of typed recursive functionals in which to interpret intuitionistic theories; this approach was in his opinion finitist, as we noted in the introduction, and therefore more suitable for developing an analysis of constructivity and consistency. Formally, the \emph{Dialectica} interpretation assigns to every formula $F$ of $\HA$ a formula $F_D$ in a system of typed functionals that we will call \textbf{T}; $F_D$ is of the form $\exists y \forall z A(y,z,x)$, where $x,y,z$ are lists of variables of arbitrary type and $A$ is quantifier free.
The definition is by induction on the structure of the formula: for $A$ atomic, $A_D=A$ (identifying the symbols of the languages $\HA$ and \textbf{T}); if $F_D = \exists x \forall y A(x,y)$ and $G_D = \exists u \forall v B(u,v)$, then \begin{itemize} \item $(F \land G)_D = \exists x,u \forall y,v (A(x,y)\land B(u,v))$ \item $(F \lor G)_D = \exists t,x,u \forall y,v ((t=0 \to A(x,y)) \land (t=1 \to B(u,v)))$ \item $(\forall z F)_D = \exists X \forall z,y A(X(z),y,z)$ \item $(\exists z F)_D = \exists z,x \forall y A(x,y,z)$ \item $(F \to G)_D = \exists U,Y \forall x,v (A(x, Y(x,v)) \to B(U(x),v))$ \item $(\neg F)_D = \exists Y \forall x \neg A(x, Y(x))$ \end{itemize} Note that the last clause follows from the one for implication when defining $\neg F = F \to \bot$. If we compare this with the usual BHK semantics, which also forms the basis of other realizability semantics, we can see that it is substantially different in particular in the definition of the implication: here we find no mention of a method to transform ``any proof'' as we had in BHK.\footnote{With regard to this, G\"odel noted: ``[the fact that one does not need to quantify over all proofs] shows that the interpretation of intuitionistic logic, in terms of computable functions, in no way presupposes Heyting's and that, moreover, it is constructive and evident in a higher degree than Heyting's. For it is exactly the elimination of such vast generalities as ``any proof'' which makes for greater evidence and constructivity.'' \cite{Gödel72}} If one thinks of the Dialectica as a Game Semantics, its peculiarity becomes clearer: consider a game between two players, where we win if we find a term $u$ such that there is no $t$ for which $A_D(u,t)$ fails; then we have a winning strategy if we can state $\exists x \forall y \ A_D(x,y)$.
The cases for the connectives different from $\to$ are quite intuitive in this framework: \begin{itemize} \item In the case of $A \land B$, we need to find winning strategies $x$ for $A$ and $u$ for $B$. \item In the case of $A \lor B$, we declare (depending on $t$) whether we are going to give a winning strategy $x$ for $A$ or $u$ for $B$. \item In the case of $\forall z \ A$, we need to give a winning strategy $X(z)$ for $A(z)$ for every numeral $z$ the opponent might give. \item In the case of $\exists z \ A$, we need to give a numeral $z$, together with a winning strategy for $A(z)$. \end{itemize} The case of implication requires more explanation. Here, the opponent gives us a strategy $x$ for $A$: note that it need not be a winning one. In order to win, we need either to provide a winning strategy for $B$, or to show that the strategy he gave us was actually not winning. From this comes the shape of the interpretation of the implication: we need to give a method $U$ to obtain a strategy for $B$ such that, whenever $v$ is a strategy that wins against $U(x)$, we can build a strategy $Y(x,v)$ that wins against $x$. \subsubsection{Markov's principle and the \emph{Dialectica}} The difference between the BHK semantics and the \emph{Dialectica} interpretation goes in fact much further than this, and although one can easily check that all formulas provable in $\HA$ can be interpreted in \textbf{T}, the converse is not the case. It turns out that Markov's principle is precisely one of the formulas that obtain a justification in \textbf{T} but are not provable in $\HA$.
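To preview why the principle is so easy for the Dialectica (the computation is carried out explicitly below), note that its interpretation turns out to be an implication whose premise and conclusion coincide, so the identity functional $U = \lambda x.x$ witnesses it trivially. A toy spot-check in Python; the predicate \texttt{A} is an arbitrary stand-in of our choosing, not anything fixed by the text:

```python
# The matrix of the Dialectica interpretation of Markov's principle is
#   forall x (not A(x) -> not A(U(x))),
# which the identity functional satisfies for trivial reasons: premise
# and conclusion are literally the same formula.
U = lambda x: x

def matrix(A, x):
    # "P implies P", encoded with Python booleans as (not p) or q
    return (not (not A(x))) or (not A(U(x)))

A = lambda x: x % 3 == 0          # any decidable predicate will do
assert all(matrix(A, x) for x in range(1000))
```

Of course this checks only finitely many instances; the point is that the check succeeds for purely logical reasons, independently of the predicate chosen.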
If we consider the second form of Markov's principle introduced in the previous section, we have that \begin{align*} (\forall x A)_D &= \forall x\, A(x) & (\neg \forall x A)_D &= \exists x\, \neg A(x)\\ (\neg A)_D &= \neg A & (\exists x \neg A)_D &= \exists x\, \neg A(x) \end{align*} and therefore \[(\neg \forall x A \to \exists x \neg A)_D = \exists U \forall x\, (\neg A(x) \to \neg A(U(x)))\] Since $\exists x \neg A(x)$ is already in the required form, it is not touched by the Dialectica. In the case of $\neg \forall x \ A$, the Dialectica interpretation of the negation states that there should be a counterexample, and asks for a functional that maps witnesses of $\forall x \ A$ (which are void in the interpretation) to counterexamples of $A$; this means that the interpretation is once again $\exists x \neg A(x)$. Therefore, since both formulas get the same interpretation, Markov's principle can be trivially interpreted. It is interesting to note that G\"odel was aware of this result and viewed it as yet another example of the fact that intuitionistic logic was not well suited as a basic constructive logic, while the system \textbf{T} was, on the other hand, behaving much better.\footnote{``The higher degree of constructivity also appears in other facts, e.g., that Markov's principle $\neg \forall x A(x) \to \exists x \neg A(x)$ (see \cite{Kleene60}, page 157, footnote) is trivially provable for any primitive recursive $A$ and, in a more general setting, for any decidable property of any objects $x$. This, incidentally, gives an interest to this interpretation of intuitionistic logic (no matter whether in terms of computable functions of higher types or of Turing functions) even if Heyting's logic is presupposed.'' \cite{Gödel72}} One might now wonder how such an interpretation can be used in practice. Consider the case where we have an interpretation of the premise $\neg \forall x A$, and we want to use modus ponens together with Markov's principle to get the conclusion.
We can easily see that the Dialectica interpretation validates modus ponens, as shown for example in \cite{Kohlenbach08}: assume we have the two formulas in \textbf{T} \noindent $\forall y \ A_D(t_1,y)$\\ $\forall x,v \ (A_D(x,t_2 (x,v)) \to B_D(t_3(x),v))$ \\ \noindent Then we can take $t_1$ for $x$ in the second formula, and $t_2(t_1,v)$ for $y$ in the first. This results in \noindent $A_D(t_1,t_2(t_1,v))$\\ $A_D(t_1,t_2 (t_1,v)) \to B_D(t_3(t_1),v)$ \\ \noindent Therefore we have $B_D(t_3(t_1),v)$ for all $v$, and thus the functional assigned to $B$ is $t_3(t_1)$. Thus, we can view modus ponens as functional application. In our case we have \noindent $\neg A(t_1)$ \\ $\forall y (\neg A(y) \to \neg A (U(y)))$ \noindent Applying modus ponens therefore results in the application $U(t_1) = t_1$, since $U =\lambda x.x$. \section{Kleene's realizability} Kleene was the first to investigate the notion of realizability, and indeed he was the one to introduce the word itself\footnote{As mentioned in \cite{Kleene45}, the initial development of the system is actually due to Kleene's first student David Nelson.}. He aimed at making the system of intuitionistic arithmetic ``more precise'', and he planned to do so by employing the system of recursive functions he had helped to formalize. More precisely, the objects of the domain of the interpretation (i.e. the realizers) are the G\"odel numbers of the recursive functions: thus Kleene's realizability is often referred to as \emph{number realizability}. Consider the standard model of arithmetic $\mathbb{N}$ and a standard pairing function $\langle -,-\rangle: \mathbb{N}^2 \to \mathbb{N}$, together with its corresponding projection functions $\pi_1$, $\pi_2$ such that $\pi_i(\langle n_1, n_2 \rangle) = n_i$.
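The pairing machinery can be made concrete. Below is a sketch using the Cantor pairing function, one standard choice among many; the particular coding is immaterial for the interpretation, and the function names are ours:

```python
def pair(n1, n2):
    # Cantor pairing: a bijection N x N -> N
    return (n1 + n2) * (n1 + n2 + 1) // 2 + n2

def unpair(n):
    # invert by locating the diagonal w = n1 + n2, then reading off n2
    w = 0
    while (w + 1) * (w + 2) // 2 <= n:
        w += 1
    n2 = n - w * (w + 1) // 2
    return w - n2, n2

# the projections pi_1, pi_2 of the text
pi1 = lambda n: unpair(n)[0]
pi2 = lambda n: unpair(n)[1]

assert (pi1(pair(2, 3)), pi2(pair(2, 3))) == (2, 3)
```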
By $\{n\}m$ we represent the result of the computation of the $n$-th partial recursive function on $m$, in a suitable model of the partial recursive functions; by $\overline{n}$ we mean the numeral (in $\HA$) representing $n$. In Kleene's classic definition, any number $n$ is a realizer of a formula $F$ under the following circumstances: \begin{description} \item[$n \, \mathbf{r}\, s=t$] if $s=t$ \item[$n \, \mathbf{r}\, A \land B$] if $\pi_1(n) \, \mathbf{r}\, A$ and $\pi_2(n) \, \mathbf{r}\, B$ \item[$n \, \mathbf{r}\, A \to B$] if for all $m$ such that $m \, \mathbf{r}\, A$, $\{n\}m$ is a terminating computation and $\{n\}m \, \mathbf{r}\, B$ \item[$n \, \mathbf{r}\, A \lor B$] if $\pi_1(n) = 0$ and $\pi_2(n) \, \mathbf{r}\, A$, or if $\pi_1(n) = 1$ and $\pi_2(n) \, \mathbf{r}\, B$ \item[$n \, \mathbf{r}\, \forall x A(x) $] if for all $m$, $\{n\}m$ is a terminating computation and $\{n\}m \, \mathbf{r}\, A (\overline{m})$ \item[$n \, \mathbf{r}\, \exists x A(x) $] if $\pi_1(n) \, \mathbf{r}\, A(\;\overline{ \pi_2(n)}\;)$ \end{description} We can build a realizer for Markov's principle according to this definition. Consider the number $n$ such that $\{n\}m = \langle 0, \mu i .A(i) \rangle$; here, $\mu$ denotes the usual minimization operation from the theory of partial recursive functions. This is a realizer of $\neg \neg \exists x A(x) \to \exists x A(x)$ precisely when, whenever $m$ is a realizer of $\neg \neg \exists x A(x)$, $\{n\}m$ is a realizer of $\exists x A(x)$. Unraveling the definitions, we need $\langle 0, \mu i .A(i) \rangle$ to be a realizer of $\exists x A(x)$, i.e. $0 \, \mathbf{r}\, A(\;\overline{\mu i .A(i) }\;)$.
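Rendered as an ordinary program, with Python functions standing in for G\"odel numbers and tuples for coded pairs (an encoding of ours, chosen only for illustration), the realizer just described is unbounded search:

```python
def mu(A):
    # unbounded minimization mu i.A(i): least i with A(i);
    # diverges if there is no such i
    i = 0
    while not A(i):
        i += 1
    return i

def mp_realizer(A):
    # the number n with {n}m = <0, mu i.A(i)>: the realizer m of the
    # double negation is received and simply discarded
    return lambda m: (0, mu(A))

A = lambda i: i * i >= 10        # a sample decidable predicate
assert mp_realizer(A)(None) == (0, 4)
```

The argument that follows in the text is exactly the question of why the call to `mu` must terminate.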
If one assumes that $\mu i .A(i)$ does not correspond to a terminating computation, then this would mean that $\exists x A (x)$ is not realizable; in turn, if there is no realizer of $\exists x A (x)$ then any number is a realizer of $\exists x A (x) \to \bot \equiv \neg \exists x A (x)$; finally, since $m$ is a realizer of $\neg \exists x A (x) \to \bot$, applying it would give us a realizer of $\bot$, and thus a contradiction. This ensures the termination of the computation, and therefore we have $A(\;\overline{\mu i .A(i) }\;) \equiv \top$, and any number is a realizer of $\top$. We can easily see the catch here: the termination of the computation is ensured by classical reasoning, and what we have done is a simple shift of the classical reasoning contained in Markov's principle to the metalevel, in this case the theory of partial recursive functions. This is of course not satisfying at all from a strictly constructive point of view. \section{Kreisel's modified realizability} A big step forward in the field of realizability in the direction of computer science was made by Georg Kreisel with his system of \emph{modified realizability} \cite{Kreisel59}. Kreisel's realizability differentiates itself from Kleene's by using a typed lambda calculus as the domain of interpretation. Types here are put in correspondence with formulas of $\HA$, somehow predating by some years Howard's idea of completely identifying them; moreover, the subsequent success of lambda calculus as the foundation for functional programming languages laid the groundwork for the link between computer science and proof theory. We begin the presentation of modified realizability by presenting the system of lambda calculus.
First we need to introduce the types we are going to use: \begin{itemize} \item $\Nat$ is a type (intuitively, the type of naturals) \item If $\sigma$, $\tau$ are types, then $\sigma \to \tau$, $\sigma \times \tau$, $\sigma + \tau$ are types \end{itemize} Then, we introduce the typed terms of the system: \begin{itemize} \item For every type $\sigma$, a countable set of variables $x^\sigma,y^\sigma,\dots$ \item $0: \Nat$, $\mathbf{s}: \Nat \to \Nat$ \item For all types $\sigma$, $\rec ^\sigma : \sigma \to (\Nat \to \sigma \to \sigma) \to \Nat \to \sigma$ \item For all types $\sigma$, $\tau$, projections $\pi_1^{\sigma,\tau} : \sigma \times \tau \to \sigma$, $\pi_2^{\sigma,\tau} : \sigma \times \tau \to \tau$ and pairing $\langle -,- \rangle : \sigma \to \tau \to \sigma \times \tau$ \item If $t: \tau$, then $\lambda x^\sigma.t: \sigma \to \tau$ \item If $s: \sigma \to \tau$, $t: \sigma$, then $st : \tau$ \end{itemize} Third, the set of reduction rules: \begin{itemize} \item $(\lambda x.t)s \mapsto t[s/x]$ \item $\pi_1(\langle s,t \rangle) \mapsto s$, $\pi_2(\langle s,t \rangle) \mapsto t$ \item $\rec x y 0 \mapsto x$, $\rec x y (\mathbf{s}z) \mapsto y z (\rec x y z)$ \end{itemize} We are now ready to define the realizability interpretation.
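Of the reduction rules above, $\beta$-reduction and the projections are mirrored directly by Python's own function application and tuples, so in an (untyped, toy) rendering only $\rec$ needs code:

```python
def rec(x, y, n):
    # rec x y 0     -> x
    # rec x y (S z) -> y z (rec x y z)
    # after z loop iterations, acc equals rec x y z
    acc = x
    for z in range(n):
        acc = y(z)(acc)
    return acc

# addition defined by recursion on the second argument, T-style
add = lambda m: lambda n: rec(m, lambda z: lambda acc: acc + 1, n)
assert add(3)(4) == 7
```

The typing discipline is of course lost in this sketch; it exists only to make the operational reading of $\rec$ concrete.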
We will not treat directly the case of $\lor$, but we will assume that $A\lor B$ is a shorthand for $\exists x ((x=0 \to A) \land (\neg(x=0) \to B))$. We do so by first assigning a type $tp(A)$ to every formula $A$: \begin{align*} tp(\bot) &= tp(s=t) = \Nat & tp (A \land B) &= tp(A) \times tp(B) & tp(A \to B) &= tp(A) \to tp(B) \\ tp(\forall x A) &= \Nat \to tp(A) & tp (\exists x A) &= \Nat \times tp(A) \end{align*} \noindent Finally, we can state \begin{description} \item[$t \, \mathbf{mr}\, r=s$] if $r=s$ \item[$t \, \mathbf{mr}\, A \land B$] if $\pi_1(t) \, \mathbf{mr}\, A$ and $\pi_2(t) \, \mathbf{mr}\, B$ \item[$t \, \mathbf{mr}\, A \to B$] if for all $s : tp(A)$ such that $s \, \mathbf{mr}\, A$, $ts \, \mathbf{mr}\, B$ \item[$t \, \mathbf{mr}\, \forall x A(x) $] if for all $m : \Nat$, $tm \, \mathbf{mr}\, A (\overline{m})$ \item[$t \, \mathbf{mr}\, \exists x A(x) $] if $\pi_1(t) \, \mathbf{mr}\, A(\;\overline{ \pi_2(t)}\;)$ \end{description} The term calculus comes with some important properties, the main one being strong normalization: every reduction sequence terminates, so in particular every term reduces to a normal form after a finite number of reduction steps. If we analyze modified realizability from a game semantical point of view as we did with the Dialectica, we will notice that it only differs in the definition of the implication. Indeed, here we go back to a definition in the style of the BHK. Game semantically, here we are only talking about winning strategies: this means that when playing on the formula $A \to B$, the opponent will always give us a winning strategy for $A$, to which we should answer with a winning strategy for $B$. However, winning strategies cannot be effectively recognized, so the correctness of moves cannot be checked: this is why, when it comes to game semantics, the Dialectica represents a clearer interpretation.
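The type assignment $tp(\cdot)$ is a straightforward structural recursion; a sketch, with formulas encoded as nested tuples (an encoding of ours, chosen only for illustration):

```python
NAT = 'N'

def tp(F):
    # tp(bot) = tp(s=t) = N ;  tp(A & B) = tp(A) x tp(B)
    # tp(A -> B) = tp(A) -> tp(B)
    # tp(all x A) = N -> tp(A) ;  tp(ex x A) = N x tp(A)
    tag = F[0]
    if tag in ('atom', 'bot'):
        return NAT
    if tag == 'and':
        return ('x', tp(F[1]), tp(F[2]))
    if tag == 'imp':
        return ('->', tp(F[1]), tp(F[2]))
    if tag == 'all':
        return ('->', NAT, tp(F[2]))
    if tag == 'ex':
        return ('x', NAT, tp(F[2]))
    raise ValueError(tag)

# tp(exists x forall y A) = N x (N -> N) for atomic A
assert tp(('ex', 'x', ('all', 'y', ('atom', 'A')))) == ('x', NAT, ('->', NAT, NAT))
```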
The fact that Markov's principle cannot be interpreted by means of the modified realizability was already shown by Kreisel \cite{Kreisel62}, and was indeed presented as one of the main points of his system. One can argue as follows: assume that Markov's principle is realizable. Then in particular, for every value of $n$ one could realize $\neg \forall x \ \mathbf{T}^\bot(n,n,x) \to \exists x \ \mathbf{T}(n,n,x)$, where $\mathbf{T}$ is Kleene's $T$ predicate, interpreted as saying ``the Turing machine $\phi_n$ terminates the computation after $x$ steps on input $n$'' (this is known to be primitive recursive and thus representable in $\HA$). Let then $n$ be fixed; since we have that $tp(\neg \forall x \mathbf{T}^\bot(n,n,x)) = (\Nat \to \Nat) \to \Nat$, consider the dummy term $d := \lambda x^{\Nat \to \Nat}.0 : (\Nat \to \Nat) \to \Nat$. By applying the realizer of Markov's principle to this dummy term, we will get a term of type $tp(\exists x \mathbf{T}) = \Nat \times \Nat$; this last term will then normalize to a term of the form $\langle m, t \rangle$, such that $m$ is a numeral. Distinguish two cases: \begin{enumerate} \item If $\mathbf{T}(n,n,m)$ holds, then we have found that the $n$th Turing machine will halt on input $n$ after $m$ steps \item If $\mathbf{T}(n,n,m)$ does not hold, we claim that the $n$th Turing machine does not halt on input $n$. Suppose that it halts: then $\forall x \ \mathbf{T}^\bot (n,n,x)$ would be false and thus not realizable; this in turn means that $\neg \forall x \ \mathbf{T}^\bot (n,n,x)$ is trivially realizable by any term, and in particular by the dummy term $d$; by the definition of realizability, the realizer for Markov's principle applied to $d$ gives a realizer for $\exists x \mathbf{T}(n,n,x)$. We have already denoted the normal form of this term as $\langle m, t \rangle$, and since it is a realizer of $\exists x \mathbf{T}(n,n,x)$ it must be the case that $t$ is a realizer of $\mathbf{T}(n,n,m)$.
This means that $\mathbf{T}(n,n,m)$ holds, which is a contradiction. \end{enumerate} Since the term calculus is strongly normalizing, we would have described a procedure that, given any $n$, decides in finite time whether the \emph{n}th Turing machine will halt on input $n$, which is well known to be an undecidable problem. \section{The formalist approach} The birth itself of the modern conception of proof theory is often associated with Hilbert's famous program. As stated in the \emph{Grundlagen der Geometrie}, the program posed four major problems that should be addressed in order to develop a reliable foundation for a mathematical theory: \begin{itemize} \item The formalization of the theory, including a choice of its basic objects, relations, and axioms. \item The proof of the consistency of the axioms. \item The question of the mutual independence and completeness of the axioms. \item The decision problem: is there an automatic method for deciding truth of statements in the theory? \end{itemize} In this thesis, we are mainly concerned with the first two points. These underline the two main characteristics of Hilbert's thought: \emph{formalism} and \emph{finitism}. The Hilbertian formalism requires the elements of the theory to be expressed as certain statements in a formal language; the mathematical practice thus could be viewed as a manipulation of these statements, in accordance with some rules. This attracted criticism from other philosophical schools, first and foremost the intuitionist school, since it seemed to remove the concept of \emph{mathematical truth} in favour of a mere mechanical game of symbols. However, the second main aspect of the Hilbertian standpoint further clarifies the approach also in relation to this criticism: the main feature of the axiomatic system that was to be sought was its \emph{consistency}, i.e.
the inability of deriving a contradiction from the axioms; in the original plan, this crucial feature had to be proved by \emph{finitistic} means. Hilbert meant by this that such proofs should rely on inspectable\footnote{In German \emph{anschaulich}} evidence. Such a consistency proof was seen as something that nobody could doubt. Hilbert's program is tightly linked to G\"odel's famous incompleteness results. We will not enter into the debate of what incompleteness meant for the development of the program; however, it is interesting to mention that G\"odel clearly specified his views with respect to the Hilbertian finitism in his Yale lectures \cite{Goedel41}. There, he states that he regards a system as finitist if it satisfies the following points: \begin{itemize} \item All functions and relations that are primitive in the system are respectively computable and decidable. \item The existential quantifier is not primitive in the system. That is, existential quantifications are only an abbreviation for an explicit construction of a witness. \item Universal quantifications can be negated only in the sense that there exists a counterexample as just defined, that is, an explicit construction of a counterexample. \end{itemize} In particular, we will draw inspiration from the second point for our notion of constructive system: \begin{definition}[Constructive system] \label{def:constructive} We call a logical system \emph{constructive} if it satisfies the following two properties: \begin{description} \item[Disjunctive property] Whenever $A \lor B$ is provable in the system, then either $A$ is provable or $B$ is provable. \item[Existential property] Whenever $\exists x A(x)$ is provable in the system, then there exists a term $t$ such that $A(t)$ is provable. \end{description} \end{definition} \section{Intuitionistic logic and realizability semantics} The intuitionistic school of L.E.J. Brouwer was probably the main opponent of the formalist approach.
We mentioned before that the intuitionists accused Hilbert of reducing the mathematical practice to a game of symbol manipulation without a real meaning. Indeed, the intuitionists appealed to a much more sophisticated notion of mathematics, conceiving mathematical objects essentially as free creations of the mind of the mathematician. The mathematical practice is then a matter of human communication. Therefore, an object exists only in the moment a mathematician can mentally construct it: how can one accept an indirect argument as a mental construction? Clearly, if we can only prove the impossibility of the non-existence of an object, we have no way to obtain a construction we can communicate. \subsection*{The BHK explanation of intuitionistic truth} The refusal of formalism made by Brouwer also prevented him from really accepting any formalization of an ``intuitionistic logic''. An explanation of the usual logical connectives from the intuitionistic point of view, and the beginning of the development of an intuitionistic logical system, are due to Brouwer's student Arend Heyting; this is usually known as the Brouwer-Heyting-Kolmogorov interpretation, and provides an informal notion of intuitionistic truth: \begin{itemize} \item There is no construction of $\bot$. \item A construction of $A\land B$ consists of a construction of $A$ and a construction of $B$. \item A construction of $A \lor B$ consists of a construction of $A$ or a construction of $B$. \item A construction of $A\to B$ is a construction which transforms any construction of $A$ into a construction of $B$. \item A construction of $\exists x A(x)$ consists of an element $d$ of the domain and a construction of $A(d)$. \item A construction of $\forall x A(x)$ is a method which transforms every element $d$ of the domain into a construction of $A(d)$. \end{itemize} Negation is then interpreted as $\neg A := A \to \bot$.
We can already see from this that the principle of excluded middle $A \lor \neg A$ is not justified under this interpretation: it expands to $A \lor (A \to \bot)$, and asks for either a proof of $A$ or a method to transform proofs of $A$ into absurdity; but clearly we have no way to produce either in general. The underivability of the excluded middle as a rule proved to be the common feature of different systems of constructive logic, and thus intuitionistic logic quickly became interesting per se, regardless of the intuitionistic standpoint in mathematics. \subsection*{Realizability semantics} \label{sec:realizability} The BHK semantics we have defined in the previous paragraph allows us to draw some conclusions and obtain some initial results about systems of intuitionistic logic, such as the simple argument we have used to show that the excluded middle is not justifiable. However, one immediately notices that this semantics is deliberately informal: the notions of construction and method that it mentions are left unspecified. Realizability semantics are a family of semantics that can be thought of as concrete versions of the BHK semantics, obtained by fixing a specific intuitionistic theory. Historically, the first such example was the original \emph{number realizability} of Kleene for the intuitionistic system of arithmetic $\HA$ \cite{Kleene45}, which used objects of recursion theory in order to give concrete meaning to the concepts of \emph{construction} and \emph{algorithm} we used previously. More formally, it defines when a number $e$ realizes a formula $E$ by induction on the shape of the formula: \begin{itemize} \item $e$ realizes $(r = t)$, if $(r = t)$ is true. \item $e$ realizes $(A \land B)$, if $e$ codes a pair $(f,g)$ such that $f$ realizes $A$ and $g$ realizes $B$. \item $e$ realizes $A\lor B$, if $e$ codes a pair $(f,g)$ such that if $f = 0$ then $g$ realizes $A$, and if $f > 0$ then $g$ realizes $B$.
\item $e$ realizes $A\to B$, if, whenever $f$ realizes $A$, then the $e$-th partial recursive function is defined at $f$ and its value realizes $B$. \item $e$ realizes $\neg A$, if no $f$ realizes $A$. \item $e$ realizes $\forall x A(x)$, if, for every $n$, the $e$-th partial recursive function is defined at $n$ and its value realizes $A(n)$. \item $e$ realizes $\exists x A(x)$, if $e$ codes a pair $(n,g)$ and $g$ realizes $A(n)$. \end{itemize} Since the objects of the domain of interpretation are numbers, we can internalize the notion we have just defined by formalizing it inside the same theory of arithmetic we are interpreting. A formalized realizability semantics together with a semantic soundness theorem (which is often called adequacy in this framework) allows a finer analysis of intuitionistic systems. For example, given the adequacy of Kleene semantics for a system of intuitionistic arithmetic, we can conclude that the system is constructive in the sense of our \cref{def:constructive}: whenever $A \lor B$ is provable then by adequacy it is realizable, and therefore we have a realizer coding either a realizer of $A$ or one of $B$; similarly, whenever $\exists x A(x)$ is provable, then by adequacy it is realizable and the realizer codes some $n$ and a realizer of $A(n)$. Moreover, realizability tells us more about the computational content of intuitionistic systems. Kleene realizers are understood as codes for a G\"odel numbering of the recursive functions, and thus represent something that we can use in order to compute. Going further in this direction, Kreisel's \emph{modified realizability} \cite{Kreisel59} defines realizers as elements of a system of typed $\lambda$-calculus: these are in turn very similar to programs of a modern functional programming language. We can therefore think of realizability interpretations as the link between constructive systems and computational systems.
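The clauses for $\lor$ and $\exists$ need nothing more than a recursive pairing of numbers. The following sketch is our own illustration (Kleene's definition of course works with a fixed G\"odel numbering of the partial recursive functions); we use the Cantor pairing function as a stand-in for the coding of pairs:

```python
# Illustrative sketch: coding Kleene-style realizers of ∨ and ∃ as numbers,
# with the Cantor pairing function standing in for a recursive coding.

def pair(f, g):
    return (f + g) * (f + g + 1) // 2 + g

def unpair(e):
    # invert the Cantor pairing: find the diagonal, then the offset on it
    w = 0
    while (w + 1) * (w + 2) // 2 <= e:
        w += 1
    g = e - w * (w + 1) // 2
    return w - g, g

# e realizes A ∨ B if e codes (f, g): g realizes A when f = 0, B when f > 0.
def realizer_of_left(g):
    return pair(0, g)

# e realizes ∃x A(x) if e codes (n, g): n is the witness, g realizes A(n).
def realizer_of_exists(n, g):
    return pair(n, g)

n, g = unpair(realizer_of_exists(4, 7))
print(n, g)  # 4 7
```

Decoding a realizer of an existential statement thus literally hands us the witness, which is the computational heart of the disjunctive and existential properties mentioned above.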
\section{Constructive Recursive Mathematics and the controversy about Markov's principle} Constructive recursive mathematics (CRM) was developed by the Russian school of constructivism starting from the 1940s. Its main contributor was A.A. Markov \cite{Markov54}, and most of the research developments in this field happened until the 1970s. In a fashion similar to the finitistic approach, the focus in CRM is on the fact that mathematical objects should be finitely representable. In particular, they should be representable by means of suitably defined \emph{algorithms}. The main points of the approach of CRM, as found in \cite{Troelstra88}, are the following: \begin{itemize} \item The objects of mathematics are algorithms. Algorithms are meant in a mathematically precise sense: they should be presented as ``words'' in some finite alphabet of symbols. \item Limitations due to finite memory capacity are disregarded; the length of symbol strings is unbounded (though always finite). \item Logically compound statements not involving $\exists$, $\lor$ are understood in a direct way, but existential statements and disjunctions always have to be made explicit. \item If it is impossible that an algorithmic computation does not terminate, we may assume that it does terminate. \end{itemize} The last of these points is what is commonly referred to as ``Markov's principle'', and it was the main point of controversy between the intuitionists and the Russian school. Indeed, all the points listed fit naturally in classical recursion theory; if we think of Markov's principle in this context, it represents unbounded search: it is certain that the algorithm will halt at some point, but there is no guarantee that this will happen before the end of the universe. The intuitionists firmly disagreed with this principle, and indeed we will see that it cannot be proved in intuitionistic logic.
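In programming terms, Markov's principle licenses exactly the following unbounded search, sketched here in Python (our own illustration): for a decidable predicate that cannot fail everywhere, the loop terminates, yet no bound on the number of iterations can be read off in advance.

```python
# Markov's principle as unbounded search (illustrative sketch).
# p must be a decidable predicate on the natural numbers; termination is
# guaranteed only by the impossibility that p fails everywhere.

def markov_search(p):
    """Return the least n such that p(n) holds; diverges if there is none."""
    n = 0
    while not p(n):
        n += 1
    return n

# Example: the least n with n * n > 50.
print(markov_search(lambda n: n * n > 50))  # 8
```

The decidability of `p` corresponds to the restriction of the principle to $\Sigma_1^0$ statements; for an undecidable property the `while` test itself would not be computable.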
\section{Natural deduction and the Curry-Howard isomorphism} In \cref{sec:realizability} we highlighted how realizability sets up a correspondence between constructive systems and models of computation. An even deeper link was noted by Haskell Curry: the rules for implication introduction and elimination of natural deduction (\cref{fig:natded}) can be put in correspondence with the rules for abstraction and application of Church's simply typed lambda calculus. Even though it was known from the 1940s, this correspondence was not further explored until some decades later. A reason for this delay can be found in the similar lack of success of the proof system of Natural Deduction. Introduced by Gentzen together with the immediately more popular Sequent Calculus, Natural Deduction presents inference rules in pairs of \emph{introduction} and \emph{elimination} rules for every logical connective. Its other distinctive feature is that proofs depend on \emph{assumptions} that can be made and then discharged (represented by bracketing), thus rendering the proof independent of the previously made assumption. A system of natural deduction for intuitionistic logic is presented in \cref{fig:natded}.
\begin{figure}[!htb] \begin{align*} \AxiomC{\vdots} \noLine \UnaryInfC{$A_1$} \AxiomC{\vdots} \noLine \UnaryInfC{$A_2$} \RightLabel{$\land$-I} \BinaryInfC{$A_1 \land A_2$} \DisplayProof \qquad \AxiomC{\vdots} \noLine \UnaryInfC{$A_1 \land A_2$} \RightLabel{$\land$-E$_1$} \UnaryInfC{$A_1$} \DisplayProof \qquad \AxiomC{\vdots} \noLine \UnaryInfC{$A_1 \land A_2$} \RightLabel{$\land$-E$_2$} \UnaryInfC{$A_2$} \DisplayProof \end{align*} \begin{align*} \AxiomC{$[A]$} \noLine \UnaryInfC{$B$} \RightLabel{$\to$-I} \UnaryInfC{$A \to B$} \DisplayProof \qquad \AxiomC{\vdots} \noLine \UnaryInfC{$A \to B$} \noLine \AxiomC{\vdots} \noLine \UnaryInfC{$A$} \RightLabel{$\to$-E} \BinaryInfC{$B$} \DisplayProof \end{align*} \begin{align*} \AxiomC{\vdots} \noLine \UnaryInfC{$A$} \RightLabel{$\lor$-I$_1$} \UnaryInfC{$A \lor B$} \DisplayProof \qquad \AxiomC{\vdots} \noLine \UnaryInfC{$B$} \RightLabel{$\lor$-I$_2$} \UnaryInfC{$A \lor B$} \DisplayProof \qquad \AxiomC{\vdots} \noLine \UnaryInfC{$A \lor B$} \AxiomC{$[A]$} \noLine \UnaryInfC{$C$} \AxiomC{$[B]$} \noLine \UnaryInfC{$C$} \RightLabel{$\lor$-E} \TrinaryInfC{$C$} \DisplayProof \end{align*} \begin{align*} \AxiomC{\vdots} \noLine \UnaryInfC{$A$} \RightLabel{$\forall$-I ($x$ not free in the assumptions)} \UnaryInfC{$\forall x A$} \DisplayProof \qquad \AxiomC{\vdots} \noLine \UnaryInfC{$\forall x A$} \RightLabel{$\forall$-E} \UnaryInfC{$A[t/x]$} \DisplayProof \end{align*} \begin{align*} \AxiomC{\vdots} \noLine \UnaryInfC{$A[t/x]$} \RightLabel{$\exists$-I} \UnaryInfC{$\exists x A$} \DisplayProof \qquad \AxiomC{\vdots} \noLine \UnaryInfC{$\exists x A$} \AxiomC{$[A]$} \noLine \UnaryInfC{$C$} \RightLabel{$\exists$-E ($x$ not free in $C$ and in the assumptions)} \BinaryInfC{$C$} \DisplayProof \end{align*} \begin{align*} \AxiomC{$\bot$} \RightLabel{$\bot$-E} \UnaryInfC{$A$} \DisplayProof \end{align*} \caption{Natural deduction for intuitionistic logic} \label{fig:natded} \end{figure} Sequent calculus provided a more technically convenient 
presentation of classical logic; moreover, Gentzen introduced it with the specific aim of proving its consistency, by means of what came to be known as Gentzen's \emph{Hauptsatz}, or cut-elimination theorem. Since we are not interested in sequent calculus, we will not discuss this theorem further. We are however interested in a corresponding notion in the framework of natural deduction, namely \emph{proof normalization}. A normal proof is one in which no detours appear; formally, a detour is a configuration in which an introduction rule is immediately followed by an elimination of the same connective that was introduced. Given that the two kinds of rules are inverses of each other, such an inference can be removed in order to make the proof more direct: an example of this procedure is shown in \cref{fig:normal}. Normalization is then the process of removing detours from a proof, with the aim of obtaining a normal one. As we mentioned, the unavailability of a normalization theorem\footnote{In his thesis Gentzen had actually included a set of detour conversions and a proof of normalization for intuitionistic natural deduction. However, this remained unknown until 2005, when a manuscript of the thesis was found. For more details see \cite{Von08}.}, stating that every proof can be normalized, meant that sequent calculus remained the system of choice for a long period, until Dag Prawitz finally crafted a direct normalization proof for natural deduction in 1965.
\begin{figure} \centering \begin{align*} \AxiomC{$[A]$} \noLine \UnaryInfC{\vdots} \noLine \UnaryInfC{$B$} \RightLabel{$\to$-I} \UnaryInfC{$A\to B$} \AxiomC{\vdots} \noLine \UnaryInfC{$A$} \RightLabel{$\to$-E} \BinaryInfC{$B$} \DisplayProof \qquad \leadsto \qquad \AxiomC{\vdots} \noLine \UnaryInfC{$A$} \noLine \UnaryInfC{\vdots} \noLine \UnaryInfC{$B$} \DisplayProof \end{align*} \caption{Normalization of a non-normal proof} \label{fig:normal} \end{figure} In his work (see for example \cite{Prawitz06}), Prawitz further clarified a key feature of the rules of natural deduction: the introduction rules can be thought of as \emph{definitional} rules that describe when one is allowed to assert a certain connective, and thus its meaning; in the same way, elimination rules can be seen as \emph{operational} rules that describe how one can use a formula depending on its main connective\footnote{This idea was already expressed by Gentzen: \emph{The introductions constitute, as it were, the ``definitions'' of the symbols concerned, and the eliminations are, in the final analysis, only consequences of this, which may be expressed something like this: At the elimination of a symbol, the formula with whose outermost symbol we are dealing may be used only ``in respect of what it means according to the introduction of that symbol''}. (\cite{Gentzen35})}. As natural deduction started gathering more interest, William Howard studied more in depth the relationship between the deduction rules of natural deduction and the typing rules of typed lambda calculus, and presented what came to be known as the Curry-Howard isomorphism \cite{Howard69}. Under this isomorphism, formulas are put in correspondence with types, hence the title of Howard's work \emph{The formulae as types notion of construction}; the correspondence stretches even further, and takes different names according to the different traditions that originated from the original work.
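Read through this correspondence, the detour of \cref{fig:normal} is exactly a $\beta$-redex: the $\to$-introduction is a $\lambda$-abstraction, the $\to$-elimination an application, and removing the detour is one evaluation step. A small Python illustration (the encoding is ours):

```python
# Illustrative: the detour of fig:normal under Curry-Howard.
# An →-introduction immediately followed by an →-elimination is a β-redex;
# removing the detour amounts to evaluating it.

def derivation_of_B(proof_of_A):
    # stands for the open derivation of B from the assumption A
    return ("B obtained from", proof_of_A)

def non_normal(proof_of_A):
    implication = lambda a: derivation_of_B(a)   # →-I: discharge A
    return implication(proof_of_A)               # →-E: apply to the proof of A

def normal(proof_of_A):
    # the detour-free proof plugs the proof of A in directly
    return derivation_of_B(proof_of_A)

print(non_normal("a") == normal("a"))  # True
```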
We borrow the terminology of Wadler \cite{Wadler15} and state the full framework as: \begin{itemize} \item \emph{Propositions} as \emph{types}: the original intuition of Howard. \item \emph{Proofs} as \emph{programs}: since every proof tree can be made to correspond with a type derivation, we have a lambda term corresponding to the proof. \item \emph{Simplification of proofs} as \emph{evaluation of programs}: the process of detour removal is nothing but a computation, where a complex term gets reduced in order to obtain the result of the computation. \end{itemize} \section{Contents of the thesis} Modern research in the Curry-Howard tradition draws heavily from all the standpoints we have briefly discussed. It stems from constructivism, and intuitionistic systems are the basis for most Curry-Howard systems; it is formalist in the sense that proofs are the main object of investigation; it is finitist in the sense that, in addition to the requirement that objects of computation should be finite, it tries to make sense of classical reasoning by these means. We will sit in this tradition, and therefore, although the main object of the discussion will be a mathematical principle, we will be interested in its computational and metalogical properties. As was already mentioned, Markov's principle was controversial in the debate about constructivism and foundations in the first half of the twentieth century: \cref{cha:intu-real-mark} will be devoted to a more in-depth account of the birth of realizability semantics and of the status of Markov's principle in each of them. After that, we will introduce some results in the more modern line of research on realizability and Curry-Howard systems for classical logic. In \cref{cha:real-class-syst}, we shall introduce a Curry-Howard system able to provide a realizability semantics for the semi-classical system of arithmetic with limited excluded middle ($\HA + \EM_1$).
In \cref{cha:markovs-principle-ha} we will prove some additional results on the computational and constructive properties of $\HA + \EM_1$, and we will use them to give a new computational interpretation of Markov's principle. Based on the intuitions of \cref{cha:markovs-principle-ha}, \cref{cha:furth-gener} will introduce a Curry-Howard system for full classical arithmetic and a corresponding restricted version that will be shown constructive thanks to Markov's principle. \section{Subject reduction for $\mathsf{HA+EM}_1^{-}$ } The subject reduction property asserts that whenever a proof term has a certain type and is reduced some number of times, the reduced term has the same type. When types are taken to correspond to formulas, subject reduction gives us two very important facts: \begin{itemize} \item From the paradigmatic point of view, it connects the concepts of \emph{proof normalization} and \emph{computation}. Reduction rules for the proof terms are usually direct simulations of proof normalization steps. If the system enjoys the subject reduction property, we can effectively identify these two notions. \item From a proof-theoretic point of view, combined with an adequate realizability interpretation it enables one to draw conclusions on the logical system based on the behaviour of the proof terms. A crucial example of this is given in \cref{sec:disj-exist-prop}. \end{itemize} More formally, we can write \begin{definition}[Subject reduction] \label{subred} A system enjoys subject reduction if whenever $\Gamma \vdash M : \tau$ and $M \mapsto^* N$, then also $\Gamma \vdash N : \tau$. \end{definition} In \cite{Aschieri13} it is mentioned that the system $\mathsf{HA+EM}_1$ has the subject reduction property; however, the result is not proved there. Moreover, classic textbooks such as \cite{Sorensen06} only offer a full proof for simply typed systems (i.e.\ systems whose only rules are $\to$-I and $\to$-E).
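For the simply typed fragment just mentioned, subject reduction can at least be observed mechanically: type a term, contract a $\beta$-redex, and type the contractum. The following sketch is our own toy representation (not the proof-term system of this chapter), with substitution assuming all bound variable names distinct:

```python
# Toy simply typed λ-calculus (only →-I and →-E), to observe subject reduction.
# Terms: ("var", x) | ("lam", x, ty, body) | ("app", u, v)
# Types: a string "A", "B", ... or ("->", s, t)

def typecheck(ctx, t):
    if t[0] == "var":
        return ctx[t[1]]
    if t[0] == "lam":                             # →-I
        _, x, ty, body = t
        return ("->", ty, typecheck({**ctx, x: ty}, body))
    _, u, v = t                                   # →-E
    fun = typecheck(ctx, u)
    assert fun[0] == "->" and typecheck(ctx, v) == fun[1], "ill-typed application"
    return fun[2]

def subst(t, x, s):
    # naive substitution t[s/x]; assumes distinct bound variable names
    if t[0] == "var":
        return s if t[1] == x else t
    if t[0] == "lam":
        return t if t[1] == x else ("lam", t[1], t[2], subst(t[3], x, s))
    return ("app", subst(t[1], x, s), subst(t[2], x, s))

def beta(t):
    """Contract an outermost redex (λx:ty.u)v to u[v/x], if present."""
    if t[0] == "app" and t[1][0] == "lam":
        return subst(t[1][3], t[1][1], t[2])
    return t

# t = (λx:A. x) y  with  y : A; the type is preserved by reduction.
t = ("app", ("lam", "x", "A", ("var", "x")), ("var", "y"))
ctx = {"y": "A"}
print(typecheck(ctx, t) == typecheck(ctx, beta(t)))  # True
```

The generation lemma below plays, for the formal proof, the role that pattern matching on the last typing rule plays in this toy checker.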
We shall now give a detailed proof for the system $\mathsf{HA+EM}_1^{-}$. We first need two preliminary lemmas, similar to the ones presented in \cite{Sorensen06} but extended to our new rules. The main one is the \emph{Generation Lemma}, which, given a typed term, allows us to talk about the terms and types used in its type derivation. Then we will need to make sure that substitutions (both ordinary substitution and the witness substitution we have previously defined) do not affect the typing of a term. \begin{lemma}[Generation Lemma] \label{lemma:gen} Suppose $\Gamma \vdash t: \tau$. \begin{enumerate}[(i)] \item If $t$ is of the form $\lambda x.u$ and $x \not \in dom(\Gamma)$, then $\tau = \tau_1 \to \tau_2$ and $\Gamma, x:\tau_1 \vdash u: \tau_2$ \item If $t$ is of the form $uv$, then $\Gamma \vdash u: \sigma \to \tau$ and $\Gamma \vdash v : \sigma$ for some $\sigma$ \item If $t$ is of the form $\lambda \alpha.u$ and $\alpha$ is not free in $\Gamma$, then $\tau = \forall \alpha^{\Nat} \sigma$ and $\Gamma \vdash u: \sigma$ \item If $t$ is of the form $um$, where $m$ is a term in $\mathcal{L}$, then $\tau = \sigma[m/\alpha]$ and $\Gamma \vdash u : \forall \alpha^{\Nat} \sigma$. \item If $t$ is of the form $u[x.w_1,x.w_2]$, then there are $\tau_1$,$\tau_2$ such that $\Gamma \vdash u: \tau_1 \lor \tau_2$, $\Gamma,x:\tau_1 \vdash w_1 : \tau$, $\Gamma,x:\tau_2 \vdash w_2 : \tau$ \item If $t$ is of the form $\inj_i(u)$, then $\tau = \tau_1 \lor \tau_2$ and $\Gamma \vdash u : \tau_i$ \item If $t$ is of the form $\pair{u}{v}$, then $\tau = \tau_1 \land \tau_2$ and $\Gamma \vdash u:\tau_1$, $\Gamma \vdash v:\tau_2$ \item If $t$ is of the form $\pi_i(u)$, then $\Gamma \vdash u : \tau \land \sigma$ or $\Gamma \vdash u:\sigma \land \tau$ (for $i=1$ and $i=2$ respectively) \item If $t$ is of the form $u[(\alpha,x).v]$, where $\alpha$ is not free in $\tau$ and $\Gamma$, then there is $\sigma$ such that $\Gamma, x: \sigma \vdash v : \tau$ and $\Gamma \vdash u : \exists \alpha^{\Nat}.
\sigma $ \item If $t$ is of the form $(m,u)$, then $\tau = \exists \alpha^{\Nat}. \tau_1$ and $\Gamma \vdash u: \tau_1[m/\alpha]$ \item If $t$ is of the form $\rec u v m$, then $\tau = \sigma(m)$, $\Gamma \vdash u : \sigma (0)$, $\Gamma \vdash v : \forall \alpha^{\Nat}.\sigma (\alpha) \to \sigma(\mathsf{S} \alpha)$ \item If $t$ is of the form $[a]\Hyp{P}{\alpha} $, then $\Gamma \vdash [a]\Hyp{P}{\alpha} : \forall \alpha^{\Nat} \emp{}$ and $\Gamma \vdash a: \forall \alpha^{\Nat} \emp{}$ \item If $t$ is of the form $\E{a}{u}{v}$, then $\Gamma, a: \forall \alpha^{\Nat} \emp{} \vdash u : \tau$ and $\Gamma, a: \exists \alpha^{\Nat} \emp{}^{\bot} \vdash v : \tau$. Moreover, $\tau=\exists \alpha \emp{}$. \end{enumerate} \end{lemma} \begin{proof} Consider for example the case of $t = \lambda x.u$. Then since the term has a type, the type derivation must end with the $\to$-introduction rule. Then it follows that $\tau = \tau_1 \to \tau_2$ and $\Gamma, x:\tau_1 \vdash u: \tau_2$. The other cases are similar. \end{proof} \begin{lemma}[Substitution preserves types] \label{lemma:subs} \mbox{} \begin{enumerate}[(i)] \item If $\Gamma \vdash u: \tau$ and $\Gamma (x) = \Gamma^{\prime} (x)$ for all $x$ free in $u$, then $\Gamma^{\prime} \vdash u : \tau$ \item If $\Gamma, x : \sigma \vdash u : \tau$ and $\Gamma \vdash t : \sigma$, then $\Gamma \vdash u[t/x] : \tau$ \item If $\Gamma \vdash u : \tau$, $m \in \mathcal{L}$, then $\Gamma[m/\alpha] \vdash u[m/\alpha] : \tau[m/\alpha]$ \end{enumerate} \end{lemma} \begin{proof} \mbox{} \begin{enumerate}[(i)] \item By induction on the structure of $u$. The base case is straightforward. Consider $u$ of the form $\lambda y v$. We can rename variable $y$ in a way such that it is not free in $\Gamma \cup \Gamma^\prime$. Then, $\tau = \tau_1 \to \tau_2$ by \cref{lemma:gen} and $\Gamma, y: \tau_1 \vdash v: \tau_2 $. 
From the induction hypothesis, $\Gamma^\prime, y: \tau_1 \vdash v: \tau_2 $ and using an implication introduction $\Gamma^\prime \vdash \lambda y\, v : \tau$. Other cases are analogous. \item By induction on the structure of $u$. \begin{itemize} \item Base case: assume $u = y$ is a variable. Then if $y=x$, $\tau=\sigma$ and $u[t/x]=t$; if $y \not = x$, then the thesis follows from the first point. If $u$ is an $\textsc{em}_1^{-}$ hypothesis, the thesis follows from the first point. \item If $u = \lambda y v$, then we can assume, by (i), that $y \not = x$ and $y$ does not occur in $\Gamma$. By the generation lemma we have $\tau= \tau_1 \to \tau_2 $ and $\Gamma, x: \sigma, y: \tau_1 \vdash v: \tau_2$. By the induction hypothesis $\Gamma, y : \tau_1 \vdash v[t/x]: \tau_2$, and applying implication introduction $\Gamma \vdash \lambda y v[t/x]: \tau_1 \to \tau_2 = \tau$ \item If $u = vw$, then by the generation lemma $\Gamma, x: \sigma \vdash v: \rho \to \tau$ and $\Gamma, x: \sigma \vdash w : \rho$ for some $\rho$. Then by the induction hypothesis $\Gamma \vdash v[t/x]: \rho \to \tau$ and $\Gamma \vdash w[t/x] : \rho$, and applying the implication elimination rule $\Gamma \vdash v[t/x]w[t/x] : \tau$. By the definition of substitution this also means $\Gamma \vdash (vw)[t/x] : \tau$. \item If $u = \inj_i(v)$, then by \cref{lemma:gen} $\tau = \tau_1 \lor \tau_2$ and $\Gamma, x: \sigma \vdash v : \tau_i$. By the induction hypothesis, $\Gamma \vdash v[t/x] : \tau_i$, and using the disjunction introduction rule $\Gamma \vdash \inj_i(v[t/x]) : \tau_1 \lor \tau_2 = \tau$. By definition of substitution this also means $\Gamma \vdash \inj_i(v)[t/x] : \tau$ \item If $u = v[y.w_1,y.w_2]$, then by \cref{lemma:gen} there are $\tau_1$,$\tau_2$ such that $\Gamma, x:\sigma \vdash v: \tau_1 \lor \tau_2$, $\Gamma,x:\sigma,y:\tau_1 \vdash w_1 : \tau$, $\Gamma,x:\sigma,y:\tau_2 \vdash w_2 : \tau$.
We can apply the induction hypothesis to all these terms and get $\Gamma \vdash v[t/x]: \tau_1 \lor \tau_2$, $\Gamma,y:\tau_1 \vdash w_1[t/x] : \tau$, $\Gamma,y:\tau_2 \vdash w_2[t/x] : \tau$. Then, using the disjunction elimination rule we obtain $\Gamma \vdash v[t/x][y.w_1[t/x],y.w_2[t/x]]:\tau$, which by definition of substitution is the same as $\Gamma \vdash (v[y.w_1,y.w_2])[t/x]:\tau$ \end{itemize} The other cases are similar. \item Again by induction on the structure of $u$. \begin{itemize} \item Base case: if $u=x$ is a variable and the judgement $x: \tau$ is in $\Gamma$, we have the judgement $x: \tau[m/\alpha]$ in $\Gamma [m/\alpha]$. Similarly for the cases of $\hyp{a}{\alpha}{P}$ and $\wit{a}{\alpha}{P}$ \item If $u = vn$, then by \cref{lemma:gen} $\tau = \sigma[n/\beta]$ and $\Gamma \vdash v: \forall \beta^{\Nat} \sigma$. By the induction hypothesis $\Gamma[m/\alpha] \vdash v[m/\alpha]: \forall \beta^{\Nat} \sigma[m/\alpha]$. If $\alpha=\beta$, then $\forall \beta^{\Nat} \sigma[m/\alpha] = \forall \alpha^{\Nat} \sigma$; by using universal elimination we have $\Gamma [m/\alpha] \vdash v [m/\alpha] (n[m/\alpha]) : \sigma [n [m/\alpha] /\alpha] = \sigma [n/\alpha] [m/\alpha]$. If $\alpha \not = \beta$, then note that $\forall \beta^{\Nat} \sigma[m/\alpha] = \forall \beta^{\Nat} (\sigma[m/\alpha])$, and again using universal elimination $\Gamma[m/\alpha] \vdash v[m/\alpha](n[m/\alpha]): \sigma[n[m/\alpha]/\beta] = \sigma[n/\beta][m/\alpha]$. \item If $u = \lambda \beta v$ then by \cref{lemma:gen} $\beta$ is not free in $\Gamma$, $\tau = \forall \beta^\Nat \sigma$ and $\Gamma \vdash v: \sigma$. Consider first $\alpha \not = \beta$. By the induction hypothesis, $\Gamma[m/\alpha] \vdash v[m/\alpha]: \sigma[m/\alpha]$.
Using universal introduction (since, by renaming of the bound variable, $\beta$ is never free in $\Gamma [m/\alpha]$) we get $\Gamma[m/\alpha] \vdash \lambda \beta\, v[m/\alpha] : \forall \beta^{\Nat} (\sigma[m/\alpha])$, and $\forall \beta^{\Nat} (\sigma[m/\alpha]) = (\forall \beta^{\Nat} \sigma)[m/\alpha] = \tau[m/\alpha]$ since $\alpha \not = \beta$. Otherwise, if $\alpha = \beta$, since $\beta$ is not free in $\Gamma$, $v$ and $\sigma$, the result holds vacuously. \end{itemize} \end{enumerate} The other cases are similar. \end{proof} \begin{lemma}[Witness substitution preserves type] \label{lemma:wsubs} If $\Gamma \vdash u : \tau$, then $\Gamma \vdash u[a:=n] : \tau$. \end{lemma} \begin{proof} Direct consequence of \cref{lemma:subs} (ii). \end{proof} We are now ready to state the main result of this section: \begin{thm} \label{thm:subj} $\mathsf{HA+EM}_1^{-}$ has the subject reduction property. \end{thm} \begin{proof} Assume $\Gamma \vdash t : \tau$ and $t \mapsto_{\beta} t^{\prime}$. Proceed by structural induction on the beta reduction. Reduction rules for $\HA$: \begin{itemize} \item $t = (\lambda x. u)v : \tau$ and $t \mapsto u[v/x]$. By the generation lemma, $\Gamma \vdash (\lambda x. u) : \sigma \to \tau$ and $\Gamma \vdash v : \sigma$ for some $\sigma$. Again by the generation lemma, $\Gamma, x: \sigma \vdash u : \tau$. Therefore by \cref{lemma:subs}, $\Gamma \vdash u[v/x] : \tau$. \item $t = (\lambda \alpha. u)v$ and $t \mapsto u[v/\alpha]$. By the generation lemma, $\tau = \tau_1[v/\alpha]$ and $\Gamma \vdash \lambda \alpha. u : \forall \alpha^{\Nat}\tau_1$. Again by the generation lemma, $\Gamma \vdash u:\tau_1$, and by \cref{lemma:subs}, $\Gamma \vdash u[v/\alpha] : \tau_1[v/\alpha]$ \item $t = \pi_{i}\pair{u_0}{u_1}$ and $t\mapsto u_i$. Then by \cref{lemma:gen} $\Gamma \vdash \pair{u_0}{u_1} : \tau_{i} \land \tau_{1-i}$ (with $\tau_0 = \tau$), and again by \cref{lemma:gen} $\Gamma \vdash u_0 : \tau_i$ and $\Gamma \vdash u_1 : \tau_{1-i}$.
Then for $i=0,1$ we have $\Gamma \vdash u_i : \tau_0=\tau$. \item $t = \inj_{i}(u)[x_{1}.t_{1}, x_{2}.t_{2}]$ and $ t \mapsto t_{i}[u/x_{i}]$. By \cref{lemma:gen} there are $\tau_1$ and $\tau_2$ such that $\Gamma \vdash \inj_{i}(u) : \tau_1 \lor \tau_2$, $\Gamma, x: \tau_1 \vdash t_1 : \tau$ and $\Gamma, x: \tau_2 \vdash t_2 : \tau$. Again by \cref{lemma:gen}, $\Gamma \vdash u: \tau_i$, and by \cref{lemma:subs} $\Gamma \vdash t_{i}[u/x_{i}] : \tau$ \item $t = (n, u)[(\alpha,x).v]$ and $t \mapsto v[n/\alpha][u/x]$, where $\alpha$ is not free in $\Gamma$ nor in $\tau$. By \cref{lemma:gen}, there is a $\sigma$ such that $\Gamma, x: \sigma \vdash v : \tau$ and $\Gamma \vdash (n,u) : \exists \alpha^{\Nat}. \sigma $. Again by \cref{lemma:gen}, $\Gamma \vdash u: \sigma[n/\alpha]$. Using \cref{lemma:subs} and the fact that $\alpha$ is not free in $\Gamma$ and $\tau$, we can write $\Gamma, x: \sigma[n/\alpha] \vdash v[n/\alpha] : \tau$; finally, again by \cref{lemma:subs}, $\Gamma \vdash v[n/\alpha][u/x] : \tau$ \end{itemize} Rules for induction: \begin{itemize} \item $t = \rec u v 0$ and $t \mapsto u$. By \cref{lemma:gen}, $\tau = \sigma (0)$ and $\Gamma \vdash u : \sigma (0)$. \item $t = \rec u v (\mathsf{S} n)$ and $t \mapsto v n (\rec u v n)$. By \cref{lemma:gen}, $\tau = \sigma (\mathsf{S} n)$, $\Gamma \vdash u: \sigma (0)$ and $\Gamma \vdash v : \forall \alpha^{\Nat}.\sigma (\alpha) \to \sigma(\mathsf{S} \alpha)$. In addition, by the generation lemma on the term $\rec u v n$ we have $\Gamma \vdash \rec u v n : \sigma_1(n)$ and $\Gamma \vdash u : \sigma_1(0)$. Therefore $\sigma_1 = \sigma$. Using the universal elimination rule on $v$ we get $\Gamma \vdash vn : \sigma (n) \to \sigma (\mathsf{S} n)$.
Using the implication elimination rule on this and $\rec u v n$, we get $\Gamma \vdash vn (\rec u v n) : \sigma(\mathsf{S} n)$ \end{itemize} Reduction rules for $\textsc{em}_1^{-}$ (there is no difference from the case of $\textsc{em}_1$): \begin{itemize} \item $\Gamma \vdash ([a]\Hyp{P}{\alpha}) n : \tau $ and $([a]\Hyp{P}{\alpha}) n \mapsto \True$. By the generation lemma, $\Gamma \vdash [a]\Hyp{P}{\alpha} : \forall \alpha^{\Nat} \emp{} $ and also $\Gamma \vdash [a]\Hyp{P}{\alpha} : \forall \alpha^{\Nat} \tau_1$ with $\tau = \tau_1[n/\alpha]$. Therefore $\emp{} = \tau_1$, and by the condition of the rewrite rule $\tau = \emp{}[n/\alpha] = \True$. \item $\Gamma \vdash \E{a}{u}{v} : \tau$ and $\E{a}{u}{v} \mapsto u$. Then by the generation lemma we have $\Gamma, a: \forall \alpha^\Nat P \vdash u : \tau$. But $a$ is not free in $u$ by the definition of the reduction rule, and so $\Gamma \vdash u : \tau$ \item $\Gamma \vdash \E{a}{u}{v} : \tau$ and $\E{a}{u}{v}\mapsto v[a:=n]$. From \cref{lemma:gen} $\Gamma, a : \exists \alpha^\Nat\neg P \vdash v:\tau$. From \cref{lemma:wsubs}, $\Gamma, a : \exists \alpha^\Nat\neg P \vdash v[a:=n] :\tau$. Since there are no free occurrences of $a$ in $v[a:=n]$, $\Gamma \vdash v[a:=n] :\tau$. \end{itemize} Permutation rules for $\textsc{em}_1^{-}$: \begin{itemize} \item $t = (\E{a}{u}{v}) w$ and $t \mapsto \E{a}{uw}{vw}$, where $a$ does not occur free in $w$. From the generation lemma, $\Gamma \vdash \E{a}{u}{v} : \sigma \to \tau$ and $\Gamma \vdash w : \sigma$ for some $\sigma$. Again by the generation lemma, $\Gamma, a: \forall \alpha^{\Nat} \emp{} \vdash u : \sigma \to \tau$ and $\Gamma, a: \exists \alpha^{\Nat} \emp{}^{\bot} \vdash v : \sigma \to \tau$. Applying the implication elimination rule to both terms, and then $\textsc{em}_1^{-}$, we get $\Gamma \vdash \E{a}{uw}{vw} : \tau$ \item $t=(\E{a}{u}{v})[x.w_{1}, y.w_{2}]$ and $t \mapsto \E{a}{u[x.w_{1}, y.w_{2}]}{v[x.w_{1}, y.w_{2}]}$.
From \cref{lemma:gen} there are $\tau_1$, $\tau_2$ s.t.\ $\Gamma \vdash \E{a}{u}{v} : \tau_1 \lor \tau_2$ and $\Gamma, x: \tau_1 \vdash w_1 :\tau$, $\Gamma, x: \tau_2 \vdash w_2 :\tau$. From \cref{lemma:gen} again, $\Gamma, a: \forall \alpha^{\Nat} \emp{} \vdash u : \tau_1 \lor \tau_2$ and $\Gamma, a: \exists \alpha^{\Nat} \emp{}^{\bot} \vdash v : \tau_1 \lor \tau_2$. Using disjunction elimination on both terms, followed by $\textsc{em}_1^{-}$, we get $\Gamma \vdash \E{a}{u[x.w_{1}, y.w_{2}]}{v[x.w_{1}, y.w_{2}]} : \tau$. \item Cases $\pi_{i}(\E{a}{u}{v}) \mapsto \E{a}{\pi_{i}u}{\pi_{i}v}$ and $(\E{a}{u}{v})[(\alpha, x).w] \mapsto \E{a}{u[(\alpha, x).w]}{v[(\alpha, x).w]}$ are similar to the previous points. \end{itemize} \end{proof} \section{Disjunction and existential properties} \label{sec:disj-exist-prop} The subject reduction theorem we have just proved ensures that, whenever we reduce a proof term with one of the reduction rules, we obtain another proof term of the same type. This, combined with \cref{AdequacyTheorem} (the adequacy theorem), allows us to draw conclusions on the behaviour of the logical system based on the behaviour of the proof terms. These tools will now be employed to prove two important constructive properties of the system $\mathsf{HA+EM}_1^{-}$. Let us first recall the two main theorems we have seen in \cref{sec:real-interpr-ha+em_1} (the proofs can easily be adapted to the new system $\HA+\EM_1^-$). \begingroup \def\thethm{\ref{theorem-extraction}} \begin{thm}[Existential Witness Extraction] Suppose $t$ is closed, $t\real \exists \alpha^{\Nat} \emp{}$ and $t\mapsto^{*} t'\in\mathsf{NF}$. Then $t'=(n,u)$ for some numeral $n$ such that $\emp{}[n/\alpha]\evaluates \True$. \end{thm} \addtocounter{thm}{-1} \endgroup Although this theorem only talks about $\Sigma_1^0$ formulas, this is enough for the purpose of proving the constructivity of $\mathsf{HA+EM}_1^{-}$.
Indeed, this is the only kind of existential statement that we are allowed to prove with our rule. In order to use the properties of the realizers to talk about the logical system, we will need the adequacy theorem: \begingroup \def\thethm{\ref{AdequacyTheorem}} \begin{thm}[Adequacy Theorem] Suppose that $\Gamma\vdash w: A$ in the system $\HA + \EM_1$, with $$\Gamma=x_1: {A_1},\ldots,x_n:{A_n}, a_{1}: \exists \alpha_{1}^{\Nat} \emp{}_{1}^{\bot},\ldots, a_{m}: \exists \alpha_{m}^{\Nat} \emp{}_{m}^{\bot}, b_{1}: \forall \alpha_{1}^{\Nat}\mathsf{Q}_{1},\ldots, b_{l}:\forall \alpha_{l}^{\Nat}\mathsf{Q}_{l}$$ and that the free variables of the formulas occurring in $\Gamma $ and $A$ are among $\alpha_1,\ldots,\alpha_k$. For all closed terms $r_1,\ldots,r_k$ of $\Language$, if there are terms $t_1, \ldots, t_n$ such that \[\text{ for $i=1,\ldots, n$, }t_i\real A_i[{r}_1/\alpha_1\cdots {r}_k/\alpha_k]\] then \[w[t_1/x_1\cdots t_n/x_n\ {r}_1/\alpha_1\cdots {r}_k/\alpha_k\ a_{1}:=i_{1}\cdots a_{m}:=i_{m} ]\real A[{r}_1/\alpha_1\cdots {r}_k/\alpha_k]\] for all numerals $i_{1}, \ldots, i_{m}$. \end{thm} \addtocounter{thm}{-1} \endgroup Combining these theorems with the new subject reduction theorem, we can now state \begin{thm}[Disjunction property] Suppose $\vdash t : A \lor B$ in the system $\mathsf{HA+EM}_1^{-}$, where $t$ and $A \lor B$ are closed. Then there exists a term $u$ s.t.\ $\vdash u : A$ or a term $v$ s.t.\ $\vdash v : B$. \end{thm} \begin{proof} If $t$ is not in normal form, take $t^\prime$ such that $t \mapsto^* t^\prime$ and $t^\prime$ is in normal form. By \cref{thm:subj}, $\vdash t^\prime : A \lor B$, and then by the adequacy theorem $t^\prime \real A \lor B$. Consider now the possible cases given by the definition of realizer: \begin{itemize} \item If $t^\prime = \inj_i(u)$, from \cref{lemma:gen} we have that $\vdash u : A$ or $\vdash u : B$, respectively when $i=1,2$.
\item If $t^\prime= \E{a}{u}{v}$, then by \cref{lemma:gen} we would have $\vdash t^\prime : \exists \alpha \emp{}$ for some atomic $\emp{}$, but this contradicts the fact that $\vdash t^\prime : A \lor B$; so this case is impossible. \item Since $t^\prime$ is already in normal form, the third case is impossible as well. \end{itemize} \end{proof} With a very similar argument, we also have \begin{thm}[Existential property] Suppose $\vdash t : \exists \alpha A$ in the system $\mathsf{HA+EM}_1^{-}$, where $t$ and $\exists \alpha A$ are closed. Then there exist a numeral $n$ and a term $u$ s.t. $\vdash u : A[n/\alpha]$. \end{thm} \begin{proof} By the adequacy theorem, $t \real \exists \alpha A$. Distinguish cases on the definition of the realizability relation: \begin{itemize} \item If $t = (n, u)$, then by the generation lemma $\vdash u : A[n/\alpha]$. \item If $t = \E{a}{u}{v}$, then by \cref{lemma:gen} $A$ is atomic. Let $t^\prime$ be such that $t \mapsto^* t^\prime \in \mathsf{NF}$; then, by \cref{theorem-extraction}, $t^\prime = (n, u^\prime)$ for some numeral $n$ and term $u^\prime$. By \cref{thm:subj} $\vdash (n, u^\prime) : \exists \alpha A$, and by \cref{lemma:gen} we have $\vdash u^\prime : A[n/\alpha]$. \end{itemize} \end{proof} \section{Rule $\textsc{em}_1^{-}$ is equivalent to Markov's principle} The fundamental reason behind the constructive analysis of the system $\mathsf{HA+EM}_1^{-}$ was its resemblance to Markov's principle. The discussion so far has not depended directly on this; however, the fact that our system is indeed constructive (in the broader sense we have used so far) provides even stronger evidence that the $\textsc{em}_1^{-}$ rule should be equivalent to Markov's principle. Consider the usual system $\mathsf{HA+EM}_1^{-}$, and state Markov's principle as the axiom $\textsc{mrk}$: $\neg \forall \alpha \emp{} \to \exists \alpha \emp{}^{\bot}$.
This gives a proof of the axiom: \begin{prooftree} \AxiomC{$[\neg \forall \alpha \emp{}]_{(1)}$} \AxiomC{$[\forall \alpha \emp{}]_{\textsc{em}_1^{-}}$} \BinaryInfC{$\bot$} \UnaryInfC{$\exists \alpha \emp{}^{\bot}$} \AxiomC{$[\exists \alpha \emp{}^{\bot}]_{\textsc{em}_1^-}$} \RightLabel{$\textsc{em}_1^{-}$ } \BinaryInfC{$\exists \alpha \emp{}^{\bot}$} \RightLabel{(1)} \UnaryInfC{$\neg \forall \alpha \emp{} \to \exists \alpha \emp{}^{\bot}$} \end{prooftree} Conversely, consider the system $\HA$ plus the axiom $\textsc{mrk}$. We can obtain rule $\textsc{em}_1^{-}$ as follows: assuming we have proofs \AxiomC{$\forall \alpha \emp{}$} \noLine \UnaryInfC{\vdots} \noLine \UnaryInfC{$\exists \alpha C$} \DisplayProof and \AxiomC{$\exists \alpha \emp{}^{\bot}$} \noLine \UnaryInfC{\vdots} \noLine \UnaryInfC{$\exists \alpha C$} \DisplayProof build this proof of $\exists \alpha C$: \begin{prooftree} \AxiomC{$[\forall \alpha C^{\bot}]_{(1)} $} \AxiomC{$[\forall \alpha \emp{}]_{(2)}$} \noLine \UnaryInfC{\vdots} \noLine \UnaryInfC{$\exists \alpha C$} \BinaryInfC{$\mathcal{D}_1$} \UnaryInfC{$\bot$} \RightLabel{(2)} \UnaryInfC{$\neg \forall \alpha \emp{}$} \AxiomC{$[\forall \alpha C^{\bot}]_{(1)}$} \AxiomC{$[\exists \alpha \emp{}^{\bot}]_{(3)}$} \noLine \UnaryInfC{\vdots} \noLine \UnaryInfC{$\exists \alpha C$} \BinaryInfC{$\mathcal{D}_1$} \UnaryInfC{$\bot$} \RightLabel{(3)} \UnaryInfC{$\neg \exists \alpha \emp{}^{\bot}$} \UnaryInfC{$\mathcal{D}_2$} \UnaryInfC{$\forall \alpha \emp{}$} \BinaryInfC{$\bot$} \RightLabel{(1)} \UnaryInfC{$\neg \forall \alpha C^{\bot}$} \AxiomC{$\textsc{mrk}$} \noLine \UnaryInfC{$\neg \forall \alpha C^{\bot} \to \exists \alpha C$} \BinaryInfC{$\exists \alpha C$} \end{prooftree} Where $\mathcal{D}_1$ is given by \begin{prooftree} \AxiomC{$\forall \alpha C^{\bot}$} \UnaryInfC{$C^\bot(\alpha)$} \AxiomC{$[C(\alpha)]_{\exists}$} \BinaryInfC{$\bot$} \AxiomC{$\exists \alpha C(\alpha)$} \RightLabel{$\exists$} \BinaryInfC{$\bot$} \end{prooftree} And $\mathcal{D}_2$ 
is given by \begin{prooftree} \AxiomC{$\emp{}(\alpha) \lor \emp{}^\bot(\alpha)$} \AxiomC{$[\emp{}(\alpha)]_{\lor\mbox{-E}}$} \AxiomC{$\neg \exists \alpha \emp{}^{\bot}(\alpha)$} \AxiomC{$[\emp{}^\bot (\alpha)]_{(1)}$} \UnaryInfC{$\exists \alpha \emp{}^{\bot} (\alpha)$} \BinaryInfC{$\bot$} \RightLabel{(1)} \UnaryInfC{$\neg \emp{}^\bot(\alpha)$} \AxiomC{$[\emp{}^\bot(\alpha)]_{\lor\mbox{-E}}$} \BinaryInfC{$\bot$} \UnaryInfC{$\emp{}(\alpha)$} \RightLabel{$\lor \mbox{-E}$} \TrinaryInfC{$\emp{}(\alpha)$} \UnaryInfC{$\forall \alpha \emp{} (\alpha)$} \end{prooftree} Note that in the last proof we used the axiom $\emp{}(\alpha) \lor \emp{}^\bot(\alpha)$, since $\emp{}$ is atomic and thus decidable in $\HA$. \section{A realizer for Markov's principle} \label{sec:real-mark-princ} Now that we have a proof tree for Markov's principle in $\mathsf{HA+EM}_1^{-}$, we can decorate it in order to get a realizer of the principle: \begin{prooftree} \AxiomC{$[x : \neg \forall \alpha B]_{(1)}$} \AxiomC{$[\hyp{a}{\alpha}{B} : \forall \alpha B]_{\textsc{em}_1^{-}}$} \BinaryInfC{$x \hyp{a}{\alpha}{B} : \bot$} \UnaryInfC{$\mathsf{r}x \hyp{a}{\alpha}{B} : B^\bot[0/\alpha]$} \UnaryInfC{$(0,\mathsf{r}x \hyp{a}{\alpha}{B}) : \exists \alpha B^\bot$} \AxiomC{$[\wit{a}{\alpha}{B} : \exists \alpha B^{\bot}]_{\textsc{em}_1^-}$} \RightLabel{$\textsc{em}_1^{-}$} \BinaryInfC{$\E{a}{(0, \mathsf{r} x \hyp{a}{\alpha}{B})}{\wit{a}{\alpha}{B}} : \exists \alpha B^{\bot}$} \RightLabel{(1)} \UnaryInfC{$\lambda x.(\E{a}{(0, \mathsf{r} x \hyp{a}{\alpha}{B})}{\wit{a}{\alpha}{B}}): \neg \forall \alpha B \to \exists \alpha B^{\bot}$} \end{prooftree} The extracted term fully exploits the properties of the system in order to give a more precise computational meaning to Markov's principle. When a realizer for $\neg \forall \alpha B$ is given, it is applied to the hypothetical term. Thus, the computation can proceed by using this assumption and reducing inside the left-hand side of the proof term.
At some point, however, we are guaranteed that the program will use the hypothesis on a term $m$ for which $B[m/\alpha]$ does not hold. At that point, an exception is raised and we get the witness we were waiting for. \chapter{Introduction} \label{cha:introduction} \input{intro.tex} \chapter{Intuitionistic realizability and Markov's principle} \label{cha:intu-real-mark} \input{intreal.tex} \chapter{Realizability and classical systems} \label{cha:real-class-syst} \input{classreal.tex} \chapter{Markov's principle in $\HA + \EM_1$} \label{cha:markovs-principle-ha} \input{markov.tex} \chapter{Further generalizations} \label{cha:furth-gener} \input{classic.tex} \chapter{Conclusions} \label{cha:conclusions} \input{conclusions.tex} \backmatter \nocite{*} \printbibliography \end{document}
\section{Background in Conley Theory} In this section we review fundamental notions of dynamical systems, especially from the point of view of Conley theory. The standard reference for chain recurrence, Conley's decomposition theorem and the Conley index is the monograph \cite{conley1978isolated}. An approachable introduction to chain recurrence and Conley's decomposition theorem is~\cite{norton1995fundamental}, wherein the theorem is christened `The Fundamental Theorem of Dynamical Systems'. A concise and rigorous treatment of the theorem is given in \cite[Chapter 9]{robinson1998dynamical}. Attractors and repellers are of central importance in dynamical systems theory, as they form the means of filtering the global dynamics, and are dual to decomposing the behavior into recurrent and gradient-like parts. These are particularly important topics in Conley theory, and an algebraic treatment of attractors using order theory (and developing the corresponding duality theory to Morse decompositions and chain recurrence) is given in \cite{robbin1992lyapunov,kmv,kmv2,kmv3,kaliespriestly}. A concise review of the Conley index is found in \cite{salamon1985connected}, and overviews in \cite{mischaikow1995conley, mischaikow2002conley}. The relationship between Conley indices is captured using connection matrix theory \cite{franzosa1986index,franzosa1988continuation,franzosa1988connection, franzosa1989connection,harker2021computational}, and our main theorem (Theorem \ref{thm:imposs}) uses elementary results in this direction. Conley index theory has been extended to discrete time dynamics \cite{arai2002tangencies, mrozek1990leray, franks2000shift, richeson1998connection, robbin1992lyapunov, szymczak1995conley} (see applications in \cite{arai2009database,bush2012combinatorial}), as well as multi-valued dynamics~\cite{kaczynski1995conley}. 
\subsection{Attractors and repellers} We will consider a compact metric space $X$ and a dynamical system $\varphi$ on $X$ (in our application of interest, $X$ will be a product of simplices). As we have already mentioned, the standard mathematical conception of dynamics is a \emph{semi-flow}, a continuous map $\varphi: {\mathbb{R}^+} \times X\to X$, where ${\mathbb{R}^+}=\{t\in {\mathbb{R}}: t\geq 0\}$, satisfying i) $\varphi(0,x)=x$ and ii) $\varphi(s+t,x)=\varphi(s,\varphi(t,x))$. If in this definition ${\mathbb{R}^+}$ is replaced with ${\mathbb{R}}$ then $\varphi$ is called a \emph{flow}.\footnote{The distinction between semi-flows and flows is `negative time', which corresponds to invertibility of the dynamics.} By a dynamical system, or dynamics, we mean either a flow or semi-flow. We are interested in the asymptotic behavior of dynamics: for a set $Y\subset X$, we define the $\omega$-limit set of $Y$ to be $\omega(Y) = \bigcap_{t>0} \text{cl}\big(\varphi([t,\infty), Y)\big)$. One basic notion in dynamical systems is that of an \emph{attractor}: intuitively, a set to which trajectories converge asymptotically. A set $A\subset X$ is an \emph{attractor} if there exists a neighborhood $U$ of $A$ such that $\omega(U)= A$. This is a useful concept for our purposes: we seek a flow such that the set of Nash equilibria of a game (and the collection of connecting orbits between them) is an attractor. The \emph{dual repeller} of an attractor $A$ is defined as the set \[ R = \{x\in X: \omega (x)\cap A = \emptyset\}; \] that is, the set of points that do not converge to $A$. The pair $(A,R)$ is called an {\em attractor-repeller pair}. The concept of repeller is also key for us: it is not enough that the set of Nash equilibria and connecting orbits constitute an attractor; the corresponding repeller should be empty --- otherwise some starting points will never reach the Nash equilibria.
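A toy example may help fix these notions (the example and code are ours, purely for illustration). For the flow of $\dot{x} = x - x^{3}$ on $X = [-2,2]$, the set $A = \{-1, 1\}$ is an attractor with dual repeller $R = \{0\}$: every point other than $0$ has its $\omega$-limit in $A$. A crude forward-Euler approximation of the flow exhibits this:

```python
def flow(x0, t, dt=1e-3):
    """Crude forward-Euler approximation of the flow of x' = x - x^3."""
    x = float(x0)
    for _ in range(round(t / dt)):
        x += dt * (x - x**3)
    return x

# Points off the repeller {0} converge to the attractor A = {-1, +1};
# the repeller point 0 stays put forever, so its omega-limit misses A.
left, right, origin = flow(-0.5, 20), flow(0.5, 20), flow(0.0, 20)
```

Here `left` and `right` land (numerically) on $-1$ and $+1$, while `origin` remains at the repeller.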
The set of attractors of $\varphi$, denoted by ${\mathsf{ Att}}(\varphi)$, together with binary operations: \[ A\vee A' := A\cup A',\quad\quad A\wedge A' := \omega (A \cap A'), \] and partial order determined by inclusion, forms a bounded distributive lattice~\cite{kmv, kmv2, robbin1992lyapunov}. Similarly, the set of repellers forms a bounded, distributive lattice ${\mathsf{ Rep}}(\varphi)$, which is (anti)-isomorphic to ${\mathsf{ Att}}(\varphi)$ \cite{kmv}. \subsection{Chain recurrence} An \emph{$(\epsilon, \tau)$ chain} from $x$ to $y$ is a finite sequence $\{(x_i,t_i)\}\subset X\times [0,\infty)$, $i=1,\ldots,n$, such that $x=x_1$, $t_i\geq \tau$, $d(\varphi(t_i,x_i),x_{i+1})\leq \epsilon$ and $d(\varphi(t_n,x_n),y)\leq \epsilon$. If there is an $(\epsilon,\tau)$ chain from $x$ to $y$ for all $(\epsilon,\tau)$ we write $x\geq y$. The \emph{chain recurrent set}, denoted $CR(\varphi)$, is defined by \[ CR(\varphi) = \{x\in X:x\geq x\}. \] The chain recurrent set can be partitioned into a disjoint union of its connected components, called \emph{chain recurrent components}; the chain recurrent components are partially ordered by the dynamics (inheriting the partial order $\geq$)~\cite{conley1978isolated}. The fundamental theorem of dynamical systems, due to Conley, states that any dynamical system decomposes into its chain recurrent set together with gradient-like behavior off of it: every trajectory outside the chain recurrent set runs from one chain recurrent component toward another. Thus there is a deep dichotomy between points in the phase space: they are either chain recurrent or gradient-like. This can be phrased in terms of a Lyapunov function as follows. \begin{theorem}[Fundamental Theorem of Dynamical Systems] Let $CR_i(\varphi)$ denote the chain components of $CR(\varphi)$.
There exists a continuous function $V: X\to [0,1]$ such that \begin{enumerate} \item if $x\not\in CR(\varphi)$ and $t>0$ then $V(x)>V(\varphi(t,x))$; \item for each $i$, there exists $\sigma_i\in [0,1]$ such that $CR_i(\varphi)\subset V^{-1}(\sigma_i)$ and furthermore, the $\sigma_i$ can be chosen such that $\sigma_i\neq \sigma_j$ if $i\neq j$. \end{enumerate} \end{theorem} \subsection{Conley index} The broader focus of Conley theory is on isolating neighborhoods and isolated invariant sets. If $\varphi$ is a semi-flow, then a set $S$ is an \emph{invariant set} if $\varphi(t,S)=S$ for all $t\in {\mathbb{R}^+}$. Given a set $N$ we denote the \emph{maximal invariant set} in $N$ by \[ \text{Inv}(N) = \bigcup \{S\subset N : \text{$S$ is an invariant set} \}. \] An \emph{isolating neighborhood} is a compact set $N$ such that $\text{Inv}(N)\subset \text{int} (N)$; a set $S$ is an \emph{isolated invariant set} if $S = \text{Inv}(N)$ for some isolating neighborhood $N$. Both an attractor and its dual repeller are examples of isolated invariant sets. The construction of the Conley index proceeds through index pairs. For an isolated invariant set $S$, an \emph{index pair} is a pair of compact sets $(N,L)$ with $L\subset N$ such that \begin{enumerate} \item $S = \text{Inv}(\text{cl}(N\setminus L))$ and $N\setminus L$ is a neighborhood of $S$, \item $L$ is positively invariant in $N$, i.e., if $x\in L$ and $\varphi([0,t],x) \subset N$, then $\varphi([0,t],x)\subset L$, \item $L$ is an exit set for $N$, i.e., if $x\in N$ and $t_1>0$ such that $\varphi(t_1,x)\not\in N$ then there exists a $t_0\in [0,t_1]$ for which $\varphi([0,t_0],x)\subset N$ and $\varphi(t_0,x)\in L$. \end{enumerate} The \emph{Conley index} of an isolated invariant set $S$ is the (relative) homology group \[ CH_\bullet(S) := H_\bullet (N,L), \] where $H_\bullet$ denotes singular homology with integer coefficients\footnote{See \ref{app:singular} for a primer on algebraic topology and singular homology theory.} and $(N,L)$ is an index pair for $S$.
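Returning to chain recurrence for a moment: the $(\epsilon,\tau)$-chain definition suggests a crude combinatorial approximation in the spirit of computational Conley theory (the map, grid, and tolerances below are our illustrative choices, not from the paper). Cover the space with boxes, connect box $i$ to box $j$ when the discretized dynamics sends the center of $i$ to within $\epsilon$ of $j$, and keep the boxes lying on directed cycles:

```python
import numpy as np

def chain_recurrent_boxes(f, lo=-2.0, hi=2.0, n=80, eps=0.05):
    """Crude outer approximation of the chain recurrent set of a map f on [lo, hi].

    Edge i -> j whenever f maps the center of box i to within eps of box j;
    boxes lying on a directed cycle (reachable from themselves) survive.
    """
    width = (hi - lo) / n
    centers = lo + width * (np.arange(n) + 0.5)
    edges = np.zeros((n, n), dtype=bool)
    for i, c in enumerate(centers):
        y = f(c)
        for j, d in enumerate(centers):
            if abs(y - d) <= width / 2 + eps:
                edges[i, j] = True
    reach = edges.copy()
    for k in range(n):          # boolean Floyd-Warshall transitive closure
        reach |= np.outer(reach[:, k], reach[k, :])
    return [centers[i] for i in range(n) if reach[i, i]]

# Time-0.5 Euler map of x' = x - x^3: the surviving boxes cluster around
# the chain recurrent set {-1, 0, 1} of the underlying flow.
boxes = chain_recurrent_boxes(lambda x: x + 0.5 * (x - x**3))
```

The surviving boxes approximate the three chain recurrent components of the flow; the gradient-like region in between is discarded.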
The Conley index is independent of the particular choice of index pair and is an algebraic topological invariant of an isolated invariant set; it is a purely topological generalization of another, integer-valued, invariant called the {\em Morse index}, see \cite{conley1978isolated, MI}. Moreover index pairs can be found so that $H_\bullet(N,L)\cong \tilde{H}_\bullet(N/L)$. See Fig. \ref{fig:conley_indices} for elementary Conley index computations for four example isolated invariant sets.\footnote{The final three examples in Fig.~\ref{fig:conley_indices} instantiate a general result: if $S$ is a hyperbolic fixed point with an unstable manifold of dimension $n$ (i.e., the number of positive eigenvalues of the Jacobian), then $CH_i(S) = {\mathbb{Z}}$ if $i=n$, otherwise $CH_i(S) = 0$.} In each case, an appropriate index pair $(N,L)$ is illustrated; the Conley index is computed via $CH_\bullet(S) = H_\bullet(N,L)\cong \tilde{H}_\bullet(N/L)$, the reduced homology of $N/L$. \begin{figure}[h!] \centering \begin{minipage}{.2\textwidth} \centering \begin{tikzpicture}[scale=.5] \def.1{.1} \def2{2} \def1.5{1.5} \filldraw [fill=orange!5,even odd rule] (0,0) circle[radius=2cm] circle[radius=1cm]; \draw (0,0) circle [radius=1.5cm]; \draw [->] (1.5,0) -- (1.5,.1); \draw [->] (-1.5,0) -- (-1.5,-.1); \draw (-.5,.5) node{\scriptsize $N$}; \draw[->] (2,2) .. controls (1.5,1.75) and (1,1.75) .. (0, 1.6); \draw[->] (-2,-2) .. controls (-1.5,-1.75) and (-1,-1.75) .. 
(0, -1.6); \end{tikzpicture} {\scriptsize \[ CH_i(S) = \begin{cases} {\mathbb{Z}} & i = 0,1 \\ 0 & i > 1 \end{cases} \] } \end{minipage} \begin{minipage}{.2\textwidth} \centering \begin{tikzpicture}[scale=.5] \def.1{.1} \def2{2} \def1.5{1.5} \filldraw[fill=orange!5] (-1.5,-1.5) rectangle (1.5,1.5); \fill (0,0) circle (2pt); \draw[->] (0:2) to (0:.1); \draw[->] (90:2) to (90:.1); \draw[->] (180:2) to (180:.1); \draw[->] (270:2) to (270:.1); \draw (-.5,.5) node{\scriptsize $N$}; \end{tikzpicture} {\scriptsize \[ CH_i(S) = \begin{cases} {\mathbb{Z}} & i = 0 \\ 0 & i > 0 \end{cases} \] } \end{minipage} \begin{minipage}{.2\textwidth} \centering \begin{tikzpicture}[scale=.5] \def.1{.1} \def2{2} \def1.5{1.5} \def.5{.5} \filldraw[fill=orange!5] (-1.5,-1.5) rectangle (1.5,1.5); \filldraw[fill=red!5] (-1.5,-1.5) rectangle (-1.5+.5,1.5); \filldraw[fill=red!5] (1.5,-1.5) rectangle (1.5-.5,1.5); \fill (0,0) circle (2pt); \draw[->] (0:.1) to (0:2); \draw[->] (90:2) to (90:.1); \draw[->] (180:.1) to (180:2); \draw[->] (270:2) to (270:.1); \draw (-.5,.5) node{\scriptsize $N$}; \draw (-1.25,.5) node{\scriptsize $L$}; \end{tikzpicture} {\scriptsize \[ CH_i(S) = \begin{cases} {\mathbb{Z}} & i = 1 \\ 0 & i \neq 1 \end{cases} \] } \end{minipage} \begin{minipage}{.2\textwidth} \centering \begin{tikzpicture}[scale=.5] \def.1{.1} \def2{2} \def1.5{1.5} \def.5{.5} \filldraw[fill=orange!5] (0,0) circle (1.35cm); \filldraw [fill=red!5,even odd rule] (0,0) circle[radius=1.75cm] circle[radius=1.35cm]; \fill (0,0) circle (2pt); \draw[->] (0:.1) to (0:2); \draw[->] (90:.1) to (90:2); \draw[->] (180:.1) to (180:2); \draw[->] (270:.1) to (270:2); \draw (-.5,.5) node{\scriptsize $N$}; \draw (-1.475,.5) node{\scriptsize $L$}; \end{tikzpicture} {\scriptsize \[ CH_i(S) = \begin{cases} {\mathbb{Z}} & i = 2 \\ 0 & i \neq 2 \end{cases} \] } \end{minipage} \caption{Stable periodic orbit with $N$ as the annulus and $L=\emptyset$ [left]. Asymptotically stable fixed point with $L=\emptyset$ [center left]. 
Hyperbolic saddle point with 1-dimensional unstable manifold [center right]. Hyperbolic unstable fixed point with 2-dimensional unstable manifold, $N$ as the ball, $L$ as the thickened boundary [right].} \label{fig:conley_indices} \end{figure} A fundamental property of the Conley index is the following. \begin{proposition}[Wa{\.z}ewski Property]\label{prop:wazewski} If $S$ is an isolated invariant set and $S=\emptyset$, then $CH_\bullet(S)= 0$. More importantly, if $CH_\bullet(S)\neq 0$ then $S\neq \emptyset$. \end{proposition} The Wa{\.z}ewski Property shows that the Conley index $CH_\bullet(S)$ can be used to deduce information about the structure of the associated isolated invariant set $S$. The Wa{\.z}ewski Property is the most elementary result of this type. More sophisticated results can be used to prove the existence of connecting orbits \cite{franzosa1989connection}, periodic orbits \cite{mccord1995zeta}, or chaos \cite{mischaikow1995chaos}, and in the case that $\varphi$ is a flow, \citet{mccord_1988} has shown Lefschetz fixed point type theorems for Conley indices. \begin{proposition}[Fixed point theorem for Conley indices]\label{prop:fp_conley} If $S$ is an isolated invariant set for a flow $\varphi$ and there is an $n$ such that \[ CH_i(S) = \begin{cases} {\mathbb{Z}} & i = n \\ 0 & i \neq n, \end{cases} \] then $S$ contains a fixed point. \end{proposition} We prove our main result by examining the relationship between the Conley indices of an attractor-repeller pair. These Conley indices are related by an exact sequence~\cite{conley1978isolated, franzosa1988connection,franzosa1986index}, which follows from the long exact sequence of the triple \cite{hatcher}.
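The homology groups in Fig.~\ref{fig:conley_indices} are small enough to check mechanically. For the stable periodic orbit, $L = \emptyset$ and $N$ is an annulus, homotopy equivalent to the circle, so $CH_\bullet(S) \cong H_\bullet(S^1)$. The following sketch (our own illustration) recovers the Betti numbers of a minimal triangulation of the circle from boundary-matrix ranks:

```python
import numpy as np

# Minimal triangulation of S^1: vertices v0, v1, v2 and oriented edges
# e0 = [v0, v1], e1 = [v1, v2], e2 = [v2, v0]; the boundary of [a, b] is b - a.
d1 = np.array([
    [-1,  0,  1],   # coefficient of v0 in each edge boundary
    [ 1, -1,  0],   # coefficient of v1
    [ 0,  1, -1],   # coefficient of v2
])

rank_d1 = np.linalg.matrix_rank(d1)
betti0 = 3 - rank_d1                  # dim C_0 - rank d1   (d0 = 0)
betti1 = (3 - rank_d1) - 0            # dim ker d1 - rank d2 (no 2-cells)
```

Both Betti numbers come out to $1$, matching $CH_i(S) = {\mathbb{Z}}$ for $i = 0,1$ in the leftmost panel.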
\begin{proposition}[Exact sequence for attractor-repeller pair]\label{prop:les} If $(A,R)$ is an attractor-repeller pair, then there is an exact sequence of Conley indices: \[ \cdots \xrightarrow{j_{n+1}}CH_{n+1}(R)\xrightarrow{\partial_{n+1}} CH_n(A)\xrightarrow{i_n} CH_n(X)\xrightarrow{j_n} CH_n(R) \xrightarrow{\partial_n} \cdots \xrightarrow{j_0} CH_0(R)\to 0. \] \end{proposition} \medskip\noindent We are now in a position to prove a result about Conley indices of an attractor-repeller pair, which is central to our impossibility theorem. \begin{theorem}\label{prop:r0} Let $(A,R)$ be an attractor-repeller pair. If $CH_\bullet(R)=0$ then $CH_\bullet(A)$ and $CH_\bullet(X)$ are isomorphic. \end{theorem} \begin{proof} It follows from Proposition~\ref{prop:les} that there is an exact sequence relating the Conley indices of the attractor-repeller pair. If $CH_n(R)=0$ for all $n$, this becomes \[ \cdots \xrightarrow{j_{n+1}} 0 \xrightarrow{\partial_{n+1}} CH_n(A)\xrightarrow{i_n} CH_n(X)\xrightarrow{j_n} 0 \xrightarrow{\partial_n} \cdots \] Exactness implies that 1) $0= \text{im } \partial_{n+1} = \text{ker} i_n $, thus $i_n$ is injective, and 2) $\text{im } i_n = \text{ker} j_n = CH_n(X)$, so $i_n$ is surjective. Thus $i_n$ is an isomorphism for all $n$. \end{proof} \begin{corollary}\label{cor:neq} Let $(A,R)$ be an attractor-repeller pair. If $CH_\bullet(A)\neq CH_\bullet(X)$ then $R\neq\emptyset$. \end{corollary} \begin{proof} From Theorem~\ref{prop:r0} we have that $CH_\bullet(R)\neq 0$. Therefore $R\neq \emptyset$ by the Wa{\.z}ewski Property. \end{proof} \section{Game dynamics and the Conley index} For a game $g$, we say $\varphi$ is a dynamical system {\em for} $g$ if $\varphi$ is a dynamical system on $X$, where $X$ is the space of the game $g$ (product of simplices). Let $NE(g)$ denote the set of Nash equilibria for $g$, $Fix(\varphi)$ the set of fixed points of $\varphi$, and $CR(\varphi)$ the chain recurrent set of $\varphi$.
We highlight three compatibility conditions that $\varphi$ can take with respect to $g$. \begin{description} \item[{\bf Type I}] $NE(g) \subset Fix(\varphi)$, \item[{\bf Type II}] $NE(g) = Fix(\varphi)$, \item[{\bf Type III}] $NE(g) = CR(\varphi)$. \end{description} \begin{remark} Types I and II are only assumptions on the fixed points of the dynamics, and leave open the possibility that there are fixed points which are not Nash equilibria, or more generally, that there are recurrent dynamics that do not contain Nash equilibria (e.g., some trajectories cycle and never encounter a Nash equilibrium). We note, however, that Type III does not assume the Nash equilibria are fixed points, i.e. Type III does not imply Type I or Type II. \end{remark} \begin{remark} Note that we do not require the dynamics to be payoff increasing, uncoupled, etc. Our only assumption is that the dynamics form a semi-flow (continuous and deterministic). \end{remark} \begin{theorem}[Impossibility Theorem]\label{thm:imposs} There exists a game $g$ that does not admit Type III dynamics. \end{theorem} \begin{proof} Define $g$ to be the following bimatrix game as in~\citet{kohlberg1986strategic, sorin_benaim}: \begin{align}\label{eqn:game} (M_1, M_2) = \begin{pmatrix} 1,1 & 0,-1 & -1,1\\ -1,0 & 0,0 & -1,0\\ 1,-1 & 0,-1 & -2,-2 \end{pmatrix}. \end{align} By way of contradiction, suppose that there exist Type III dynamics $\varphi$ for $g$, i.e., $NE(g)=CR(\varphi)$. In the case of this specific $g$, the set of Nash equilibria forms a topological circle (i.e. a homeomorphic copy of $S^1$)~\cite{kohlberg1986strategic}; thus the chain recurrent set $CR(\varphi)$ consists of a single chain recurrent component. As the number of chain recurrent components is finite, there is an explicit duality to the lattice of attractors ${\mathsf{ Att}}(\varphi)$ \cite[Theorem 6]{kmv3}.
In particular, since there is just a single chain recurrent component, it is an attractor and in fact the unique (maximal) attractor. That is, $A :=\omega(X) = NE(g)$ is the unique (maximal) attractor. Choosing $N$ appropriately as a compact neighborhood encompassing the circle of Nash equilibria such that $A=NE(g)\subset \text{int}(N)$ (see Fig.~\ref{fig:index_pair}), the pair $(N,\emptyset)$ forms an index pair for $A$. Thus the Conley index of $A$ is determined via \begin{align}\label{CH:A} CH_i(A) = H_i(N,\emptyset) = H_i(S^1) = \begin{cases} {\mathbb{Z}} & i = 0,1 \\ 0 & i > 1. \end{cases} \end{align} On the other hand, since $X$ is a product of simplices we have that \begin{align}\label{CH:X} CH_i(X) = H_i(X,\emptyset) = \begin{cases} {\mathbb{Z}} & i = 0 \\ 0 & i > 0. \end{cases} \end{align} As $CH_\bullet(A)\neq CH_\bullet(X)$, it follows from Corollary~\ref{cor:neq} that $R\neq \emptyset$. Thus $A$ cannot be maximal and there cannot exist such a $\varphi$. \end{proof} \begin{figure}[h!]
\centering \begin{minipage}{.45\textwidth} \centering \begin{tikzpicture}[scale=.5] \def{\mathbb{R}}{1} \def2{2} \def3{3} \filldraw[fill=orange!5] (0:3) \foreach .1 in {60,120,...,359} { -- (.1:3) }-- cycle ; \draw[thick] (0:2) \foreach .1 in {60,120,...,359} { -- (.1:2) }-- cycle ; \filldraw[fill=white] (0:{\mathbb{R}}) \foreach .1 in {60,120,...,359} { -- (.1:{\mathbb{R}}) }-- cycle ; \draw (0,2) node{\scriptsize $NE$}; \draw (-2.5,0) node{\scriptsize $N$}; \end{tikzpicture} \end{minipage} \begin{minipage}{.45\textwidth} \centering \begin{tikzpicture}[scale=.5] \def{\mathbb{R}}{1} \def2{2} \def3{3} \draw (0:2) \foreach .1 in {60,120,...,359} { -- (.1:2) }-- cycle ; \fill plot[smooth cycle, tension=.7] coordinates { (.75,0) (.5,.5) (0,.5) (-.5,0) (-.5,-.5) (0,-.5) }; \draw [->] (30:.85) to (30:1.65); \draw [->] (90:.85) to (90:1.65); \draw [->] (150:.85) to (150:1.65); \draw [->] (210:.85) to (210:1.65); \draw [->] (-30:.85) to (-30:1.65); \draw [->] (-90:.85) to (-90:1.65); \draw (0,2) node{\scriptsize $A$}; \draw (-.45,.55) node{\scriptsize $R$}; \end{tikzpicture} \end{minipage} \caption{Cartoon rendering of circle of Nash equilibria ($NE(g)$ lives in 4-dimensions). Index pair $(N,\emptyset)$ for the circle of Nash equilibria $NE$ [left]. The set of Nash equilibria cannot be a maximal attractor; the dual repeller must be nonempty [right].} \label{fig:index_pair} \end{figure} \begin{remark}\label{remark:discrete} As mentioned earlier, an analogous impossibility result may be obtained for discrete time dynamics, i.e., where the dynamics are given by a continuous function $f: X\to X$. The proof is similar to that of Theorem~\ref{thm:imposs}, modified to use the appropriate Conley index theory for discrete time dynamics \cite{mrozek1990leray, franks2000shift, richeson1998connection}. 
In the discrete time case, for an appropriate notion of index pair $(N,L)$ for an isolated invariant set $S$, there is an induced map $f_{N,L}:H_\bullet(N,L)\to H_\bullet(N,L)$, where $H_\bullet(N,L)$ is singular homology with field coefficients. Define \[ CH_\bullet(S) := \bigcap_{n>0} f_{N,L}^n\big(H_\bullet(N,L)\big), \] and take $\chi: CH_\bullet(S)\to CH_\bullet(S)$ to be the automorphism induced by $f_{N,L}$. In this case, the Conley index of $S$ is denoted $\text{Con}(S)$ and defined as $\text{Con}_\bullet(S):= (CH_\bullet(S),\chi_\bullet(S))$; this pair is independent of the choice of index pair (up to isomorphism) and is an invariant of $S$. For the game \eqref{eqn:game}, postulating discrete time dynamics $f$ instead of a semi-flow $\varphi$, we would have $CH_\bullet(A)$ and $CH_\bullet(X)$ as in \eqref{CH:A} and \eqref{CH:X} (where ${\mathbb{Z}}$ is replaced with a copy of the field), and in addition equipped with automorphisms $\chi_\bullet(A), \chi_\bullet(X)$. However, as $CH_\bullet(A)\neq CH_\bullet(X)$, the result again follows from the principle that if $\text{Con}_\bullet(A)\neq \text{Con}_\bullet(X)$, then $\text{Con}_\bullet(R)\neq 0$, and thus $R\neq \emptyset$. This principle is proved with an appropriate long exact sequence of $\text{Con}_\bullet(A), \text{Con}_\bullet(X)$ and $\text{Con}_\bullet(R)$, see \cite[Proposition 3.1]{richeson1998connection}, and is analogous to the proof of Theorem~\ref{prop:r0} and Corollary~\ref{cor:neq}. \end{remark} For $g$ given by \eqref{eqn:game}, we have shown that $NE(g)$ cannot be the maximal attractor of a dynamical system. It may, however, be the case that $NE(g)$ is an attractor for some $\varphi$, with non-empty dual repeller. In this case, we have $CH_\bullet(A)$ as in \eqref{CH:A}, $CH_\bullet(X)$ as in \eqref{CH:X}, and we may compute $CH_\bullet(R)$ with an elementary homological algebra argument. In fact, $CH_\bullet(R)$ is the Conley index of an unstable fixed point (see Fig. \ref{fig:conley_indices}).
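As a concrete sanity check on the game \eqref{eqn:game}, one can verify directly that the pure profile in which both players use their middle strategy is a Nash equilibrium (it is one point of the equilibrium component). The sketch below is our own check, with 0-based indexing; it tests for profitable unilateral pure deviations:

```python
import numpy as np

# Bimatrix from the Kohlberg-Mertens example: M1[i, j] and M2[i, j] are the
# payoffs to players 1 and 2 under the pure profile (row i, column j).
M1 = np.array([[ 1,  0, -1],
               [-1,  0, -1],
               [ 1,  0, -2]])
M2 = np.array([[ 1, -1,  1],
               [ 0,  0,  0],
               [-1, -1, -2]])

def is_pure_nash(i, j):
    """True iff no unilateral pure deviation strictly improves either payoff."""
    return bool(M1[:, j].max() <= M1[i, j] and M2[i, :].max() <= M2[i, j])

# (1, 1) -- both middle strategies -- is an equilibrium; (2, 2) is not,
# since player 1 gains by deviating.
```

Of course, this only certifies a single point; the full equilibrium set (the circle) contains mixed profiles as well.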
\begin{proposition}\label{prop:conley_example} If $\varphi$ is a (semi)-flow with attractor-repeller pair $(A,R)$ such that $CH_\bullet(A)$ and $CH_\bullet(X)$ are given by \eqref{CH:A} and \eqref{CH:X} respectively, then \begin{align}\label{CH:R} CH_i(R) = \begin{cases} {\mathbb{Z}} & i = 2 \\ 0 & i \neq 2. \end{cases} \end{align} \end{proposition} \begin{proof} The Conley indices fit together in an exact sequence: \[ \cdots \xrightarrow{\partial_{n+1}} CH_n(A)\xrightarrow{i_n} CH_n(X)\xrightarrow{j_n} CH_n(R)\xrightarrow{\partial_n} CH_{n-1}(A)\xrightarrow{i_{n-1}} \cdots \xrightarrow{j_0} CH_0(R)\to 0. \] By hypothesis for $i\geq 3$ the sequence takes the form \[ \cdots \to 0\xrightarrow{j_i} CH_i(R)\xrightarrow{\partial_i} 0\to \cdots \] By exactness $0 = \text{im } j_i = \text{ker} \partial_i = CH_i(R)$. For $i = 2$, the sequence has the form: \[ \cdots \to 0 \xrightarrow{j_2} CH_2(R) \xrightarrow{\partial_2} {\mathbb{Z}} \xrightarrow{i_1} 0 \to \cdots \] Note that $\partial_2$ is injective ($\text{ker}\partial_2 = \text{im } j_2 = 0$) and surjective ($\text{im } \partial_2 = \text{ker} i_1 = {\mathbb{Z}}$). Therefore $\partial_2$ is an isomorphism and $CH_2(R)\cong {\mathbb{Z}}$. Finally, the remainder of the sequence is \[ \cdots \to 0\xrightarrow{j_1} CH_1(R) \xrightarrow{\partial_1} \mathbb{Z}\xrightarrow{i_0} \mathbb{Z}\xrightarrow{j_0} CH_0(R)\to 0. \] The map $i_0$ is an isomorphism, mapping the single connected component of $CH_0(A)$ to $CH_0(X)$. By exactness, $0=\text{im } j_1 = \text{ker}\partial_1$. Thus $CH_1(R)\cong \text{im } \partial_1 = \text{ker} i_0 = 0$, where the last equality follows as $i_0$ is an isomorphism. Similarly, ${\mathbb{Z}} = \text{im } i_0 = \text{ker} j_0$. Thus $\text{im } j_0 = 0$. However, $j_0$ is surjective, since by exactness $CH_0(R) = \text{im } j_0$. Therefore $CH_0(R) = 0$, which completes the proof.
\end{proof} One might imagine weakening Type III to a Type III$_A$, which instead requires that the set of Nash equilibria $NE(g)$ (taken together with the connecting orbits between them) forms an attractor $A$; Type III dynamics are then necessarily Type III$_A$. We can show an incompatibility result for flows in this case (we expect the result also holds for semi-flows). \begin{theorem}[Incompatibility Theorem] There exists a game $g$ such that no flow $\varphi$ for $g$ can be both Type III$_A$ and Type II. \end{theorem} \begin{proof} We consider again the game $g$ given in \eqref{eqn:game}. If $\varphi$ is Type III$_A$, then the same argument as in the proof of Theorem~\ref{thm:imposs} shows that $\varphi$ has an attractor-repeller pair $(A,R)$ such that $CH_\bullet(A)$ and $CH_\bullet(X)$ are given by \eqref{CH:A} and \eqref{CH:X}, and it follows from Proposition~\ref{prop:conley_example} that $CH_\bullet(R)$ is given by \eqref{CH:R}. It follows from Proposition \ref{prop:fp_conley} that $R$ contains a fixed point. Thus $NE(g)\subsetneq Fix(\varphi)$, and $\varphi$ cannot be a Type II flow. \end{proof} \section{Introduction} The Nash equilibrium, defined and shown universal by John F.~Nash in 1950 \citep{Nash1950}, is paramount in game theory, routinely considered as the default solution concept --- the ``meaning of the game.'' Over the years --- and especially in the past two decades during which game theory has come under intense computational scrutiny --- the Nash equilibrium has been noted to suffer from a number of disadvantages of a computational nature. There are no efficient algorithms for computing the Nash equilibrium of a game, and in fact the problem has been shown to be intractable \citep{DGP,CDT,EY}. Also, there are typically many Nash equilibria in a game, and the selection problem leads to conceptual complications and further intractability, see e.g.~\citep{GP}.
A common defense of the Nash equilibrium is the informal argument that ``the players will eventually get there.'' However, no learning behavior has been shown to converge to the Nash equilibrium, and many {\em game dynamics} of various sorts proposed in the past have typically been shown to fail to reach a Nash equilibrium for some games (see the section on related work for further discussion). Given a game, the deterministic way players move from one mixed strategy profile to the next is defined in the theory of dynamical systems as a continuous function $\varphi$ assigning to each point $x$ in the strategy space and each time $t>0$ another point $\varphi(t,x)$: the point where the players will be after time $t$; the curve $\varphi(t,x)$ parameterized by $t$ is called the {\em trajectory} of $x$. Obviously, this function must satisfy $\varphi(t',\varphi(t,x))=\varphi(t+t', x)$. This {\em continuous time dynamics} framework affords a rich analytical arsenal, which we exploit in this paper.\footnote{ In game theory, the dynamics of player behavior (see e.g.~the next section) are often described in terms of discrete time. The concepts we invoke (including the Conley index) have mirror images in discrete time dynamics and our results hold there as well; see Remark \ref{remark:discrete} for more detail.} A very well known and natural dynamical system of this sort is the {\em replicator dynamics} \citep{RD_first_paper}, in which each strategy's probability grows at a rate proportional to its payoff advantage over the current average; the discrete time analogue is the multiplicative weights update dynamics \citep{MWU_WMA_Littlestone,MWU_Arora} --- but of course the possibilities are boundless.
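For concreteness, here is a minimal sketch of replicator dynamics for a single population in a symmetric two-strategy game (the payoff matrix, initial condition, and Euler discretization are our illustrative choices):

```python
import numpy as np

def replicator(A, x0, t=40.0, dt=1e-3):
    """Forward-Euler integration of replicator dynamics on the simplex:
    x_i' = x_i * ((A x)_i - x . A x)."""
    x = np.array(x0, dtype=float)
    for _ in range(round(t / dt)):
        payoff = A @ x
        x += dt * x * (payoff - x @ payoff)
    return x

# Coordination game: both pure profiles are equilibria; starting above the
# mixed equilibrium x = (1/3, 2/3), the trajectory converges to strategy 0.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
x_final = replicator(A, [0.6, 0.4])
```

Note that the sum of the increments vanishes, so the Euler scheme stays on the simplex; in this dominance-free coordination game the trajectory converges, but in other games replicator trajectories may cycle forever.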
We should note immediately that, despite its apparent sweeping generality, this framework does come with certain restrictions: the dynamics thus defined are {\em deterministic and memoryless.} Stochastic techniques, or dynamics in which the direction of motion is computed based on the history of play, are excluded. {\color{black} Of course deterministic and memoryless algorithms suffice for a non-constructive (i.e. non-convergent) proof of the existence of Nash equilibria in general games via Brouwer's fixed-point theorem~\citep{nash1951non}.} At this point it seems natural to ask: {\em are there general dynamics of this sort, which are guaranteed to converge to a Nash equilibrium in all games?} Our main result is an impossibility theorem: \medskip\noindent{\bf Main Result (informal)}: {\em There are games in which any continuous, or discrete time, dynamics fail to converge to a Nash equilibrium.} \smallskip\noindent That is to say, we exhibit games in which any dynamics must possess long term behavior which does not coincide with the set of Nash equilibria --- or even approximate Nash equilibria. Thus the concept of Nash equilibria is insufficient for a description of the global dynamics: for some games, any dynamics must have asymptotic behaviors that are {\em not} Nash equilibria. Hence the Nash equilibrium concept is plagued by a form of {\em incompleteness:} it is incapable of capturing the full set of asymptotic behaviors of the players. What does it mean for the dynamics to converge to the game's Nash equilibria? In past work on the subject \citep{DemichelisRitzberger, DeMichelisGermano1}, the requirement that has been posed is that the set of fixed points of the dynamics be precisely the set of Nash equilibria (or, that the remaining fixed points may be perturbed away). Unfortunately, such a requirement is insufficient for describing global dynamics, e.g. it allows {\em cycling}, a behavior that obviously means that not all trajectories converge to the Nash equilibria.
\paragraph{What is the appropriate notion of global convergence?} Given a game and a dynamics, where does the dynamics ``converge''? It turns out that this is a deep question, and it took close to a century to pin down. In the benign case of two dimensions, the answer is almost simple: no matter where you start, the dynamics will effectively either converge to a point, or it will cycle. This is the Poincar\'e-Bendixson Theorem from the beginning of the 20th century \citep{SmaleChaosBook}, stating that the asymptotic behavior of any dynamics is either a fixed point or a cycle (or a slightly more sophisticated configuration combining the two). The intuitive reason is that in two dimensions {\em trajectories cannot cross,} and this guarantees some measure of good behavior. This is immediately lost in three or more dimensions (that is, in games other than two-by-two), since dynamics in high dimensions can be chaotic, and hence convergence becomes meaningless. Topologists strived for decades to devise a conception of a ``cycle'' that would restore the simplicity of two dimensions, and in the late 1970s they succeeded! The definition is simple (and has much computer science appeal). Ordinarily a point is called {\em \color{black} periodic} with respect to specific dynamics if it will return to itself: it is either a fixed point, or it lies on a cycle. 
{\color{black} If we slightly expand our definition to allow points that get arbitrarily close to where they started infinitely often, then we get the notion of {\em recurrent points}.} Now let us generalize this as follows: a point $x_0$ is {\em chain recurrent} if for every $\epsilon>0$ there is an integer $n$ and a cyclic sequence of points $x_0, x_1,\ldots,x_n =x_0$ such that for each $i<n$ the dynamical system will bring point $x_i$ {\em inside the $\epsilon$-ball around $x_{i+1}$.} That is, the system, started at $x_0$, will cycle after $n$ segments of dynamical system trajectory, interleaved with $<\epsilon$ jumps. Intuitively, it is as if an adversary manipulates the round-off errors of our computation to convince us that we are on a cycle! Denote by $CR(\varphi)$ the set of all chain recurrent points of the system. The main result of this theory is a decomposition theorem due to Conley, called the Fundamental Theorem of Dynamical Systems, which states that the dynamics decompose into the chain recurrent set and a gradient-like part \citep{conley1978isolated}. Informally, the dynamical system will eventually converge to $CR(\varphi)$. Now that we know what convergence means, the scope of our main result becomes clearer: there is a game $g$ for which, given any dynamical system $\varphi$, $CR(\varphi)$ is \emph{not} $NE(g)$, the set of Nash equilibria of $g$. That is, some initial conditions will necessarily cause the players to cycle, or converge to something that is not Nash, or abandon a Nash equilibrium. This is indeed what we prove. Very inspirational for our main result was the work of~\citet{sorin_benaim}, wherein they make in passing a statement, without a complete proof, suggesting our main result: that there are games $g$ such that for all dynamics $\varphi$ (in fact, a more general class of multi-valued dynamics) $NE(g)\neq CR(\varphi)$.
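The $\epsilon$-chain construction can also be made concrete computationally. The sketch below approximates the chain recurrent set of a discrete-time map restricted to a finite sample of the state space (the map, the grid, and $\epsilon$ are hypothetical choices for illustration): a point is marked chain recurrent precisely when it lies on a directed cycle of the ``trajectory segment plus $<\epsilon$ jump'' graph.

```python
import numpy as np

def chain_recurrent(points, f, eps):
    """Approximate chain recurrence for the map f on a finite sample.
    Draw an edge i -> j whenever f(points[i]) lands within eps of
    points[j]; point i is chain recurrent iff it can reach itself."""
    n = len(points)
    adj = [[j for j in range(n)
            if abs(f(points[i]) - points[j]) < eps] for i in range(n)]
    recurrent = []
    for s in range(n):
        seen, stack, on_cycle = set(), list(adj[s]), False
        while stack:
            v = stack.pop()
            if v == s:
                on_cycle = True
                break
            if v not in seen:
                seen.add(v)
                stack.extend(adj[v])
        if on_cycle:
            recurrent.append(points[s])
    return recurrent

# Hypothetical example: the contraction x -> x/2 on a grid over [0, 1].
# Only a small neighborhood of the fixed point 0 is eps-chain recurrent:
# the adversary's <eps jumps cannot fight the contraction from far away.
grid = [k / 10 for k in range(11)]
cr = chain_recurrent(grid, lambda x: x / 2, eps=0.06)
```

On this sample, `cr` contains only the two grid points near the fixed point; shrinking $\epsilon$ (with a finer grid) shrinks the computed set toward the true chain recurrent set $\{0\}$.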
The argument sketched in~\citep{sorin_benaim} ultimately rests on the development of a fixed point index for components of Nash equilibria, and its comparison to the Euler characteristic of the set of Nash. However, the argument is incomplete (indeed, the reader is directed to two papers, one of which is a preprint that seems not to have been published). Instead, in our proof of our main theorem we leverage an existing index theory more closely aligned to attractors: that of the {\em Conley index.} Conley index theory \citep{conley1978isolated} provides a very general setting in which to work, requiring minimal technical assumptions on the space and dynamics. We first establish a general principle for dynamical systems stated in terms of the Conley index: if the Conley index of an attractor and that of the entire space are not isomorphic, then there is a non-empty dual repeller, and hence some trajectories are trapped away from the attractor. The proof of our main result applies this principle to a classical degenerate two-player game $g$ with three strategies for each player --- a variant of rock-paper-scissors due to \citet{kohlberg1986strategic}. The Nash equilibria of $g$ form a continuum, namely a six-sided closed walk in 4-dimensional strategy space. We then consider an arbitrary dynamics on $g$ assumed to converge to the Nash equilibria, and show that the Conley index of the Nash equilibria attractor is not isomorphic to that of the whole space (due to the unusual topology of the former), which implies the existence of a nonempty dual repeller. In turn, this implies that $NE(g)$ is a strict subset of $CR$. An additional algebraic topological argument shows that the dual repeller contains a fixed point, thus $NE(g)$ is in fact a strict subset of the set of fixed points. Two objections can be raised to this result: degenerate games are known to have measure zero --- so are there dynamics that work for almost all games? (Interestingly, the answer is ``yes, but''.)
Secondly, in view of intractability, exact equilibria may be asking too much; are there dynamics that converge to an arbitrarily good approximation of the Nash equilibrium? There is much work that needs to be done in pursuing these research directions, but here we show two results. First, we establish that, in some sense, degeneracy is {\em required} for the impossibility result: we give an algorithm which, given any nondegenerate game, specifies somewhat trivial dynamics whose $CR$ is precisely the set of Nash equilibria of the game.\footnote{Assuming that PPAD $\neq$ NP, this construction provides a counterexample to a conjecture by \citet{DynMeaningOfTheGame} (last paragraph of Section 5), where impossibility was conjectured unless P = NP.} The downside of this positive result is that the algorithm requires exponential time unless P = PPAD, and {\em we conjecture that such intractability is inherent.} In other words, we suspect that, in non-degenerate games, it is {\em complexity theory,} and not topology, that provides the proof of the impossibility result. Proving this conjecture would require the development of a novel complexity-theoretic treatment of dynamics, which seems to us a very attractive research direction. Second, we exhibit a family of games, in fact with nonzero measure (in particular, perturbations of the game used for our main result), for which any dynamics will fail to converge (in the above sense) to an $\epsilon$-approximate Nash equilibrium, for some fixed additive $\epsilon>0$ (our technique currently gives an $\epsilon$ up to about $0.09$ for utilities normalized to $[0,1]$). \section{Related work} The work on dynamics in games is vast, starting from the 1950s with fictitious play \citep{BrownFictitious,RobinsonFictitious}, the first of many alternative definitions of dynamics that converge to Nash equilibria in zero-sum games (or, sometimes, also in $2\times s$ games), see, e.g., \citet{KaniovskyYoung}.
There are many wonderful books about dynamics in games: \citet{Fudenberg} examine very general dynamics, often involving learning (and therefore memory) and populations of players; populations are also involved, albeit implicitly, in evolutionary game dynamics, see the book of \citet{HofbauerSigmund} for both inter- and intra-species games, the book of \citet{sandholm_population_2010} for an analysis of both deterministic and stochastic dynamics of games, the book of \citet{weibull_evolutionary_1998} for a viewpoint on evolutionary dynamics pertaining to rationality and economics and the book of \citet{hart2013simple} focusing on simple dynamics including some of the earliest impossibility results for convergence to Nash for special classes of uncoupled dynamics \citep{hart2003uncoupled}. Regarding convergence to Nash equilibria, we have already discussed the closely related work of \citet{sorin_benaim}. The work of \citet{DemichelisRitzberger} considers \emph{Nash dynamics}, whose fixed points are precisely the set of Nash equilibria, while \citet{DeMichelisGermano1} considers {\em Nash fields}, where fixed points which are not Nash equilibria may be perturbed away; in both cases they use (fixed point) index theory to put conditions on when components of Nash equilibria are stable. Here we point out that both Nash dynamics and Nash fields (akin to what we call Type I or Type II dynamics here) have the undesirable property of recurrence. Uncoupled game dynamics (where each player decides their next move in isolation) of several forms are considered by \citet{HartMasColell2}; some are shown to fail to converge to Nash equilibria through ``fooling arguments,'' while another {\em is shown to converge} --- in apparent contradiction to our main result. 
The converging dynamic is very different from the ones considered in our main theorem: it converges to approximate equilibria (but we have results for that), and is discrete-time (our results apply to this case as well). The important departure is that the dynamics is {\em stochastic,} and such dynamics can indeed converge almost surely to approximate equilibria. From the perspective of optimization theory and theoretical computer science, regret-minimizing dynamics in games have been the subject of careful investigation. The standard approach examines their time-averaged behaviour and focuses on its convergence to coarse correlated equilibria (see, e.g.,~\citep{roughgarden2015intrinsic,stoltz2007learning}). This type of analysis, however, is not able to capture the evolution of day-to-day dynamics. Indeed, in many cases, such dynamics are non-convergent in a strong formal sense even for the seminal class of zero-sum games~\citep{piliouras2014optimization,mertikopoulos2018cycles,bailey2019multi,BaileyEC18}. Perhaps even more alarmingly, strong time-average convergence guarantees may hold regardless of whether the system is divergent~\citep{BaileyEC18}, periodic~\citep{boone2019darwin}, recurrent~\citep{bailey2020finite}, or even formally chaotic~\citep{palaiopanos2017multiplicative,CFMP2019,cheung2019vortices,cheung2020chaos,bielawski2021follow,kuen2021chaos}. Recently, \textit{all} FTRL dynamics have been shown to fail to achieve (even local) asymptotic stability on \textit{any} partially mixed Nash in effectively \textit{all} normal form games despite their optimal regret guarantees \citep{flokas2020no,pmlr-v134-giannou21a}. Finally, \citet{andrade21} establish that the orbits of replicator dynamics can be \textit{arbitrarily complex}, e.g., form Lorenz-chaos limit sets, even in two-agent normal form games.
The proliferation of multi-agent architectures in machine learning such as Generative Adversarial Networks (GANs), along with the aforementioned failure of standard learning dynamics to converge to equilibria (even in the special case of zero-sum games), has put strong emphasis on the development of alternative, novel learning algorithms. In the special case of zero-sum games, several other algorithms provably converge to Nash equilibria, such as optimistic mirror descent \citep{rakhlin2013optimization,daskalakis2018training,daskalakis2018last}, the extra-gradient method (and variants thereof) \citep{korpelevich1976extragradient,gidel2019a,mertikopoulos2019optimistic}, as well as several other dynamics \citep{gidel2019negative,letcher2019differentiable,perolat2021poincare}. Such results raise the hopeful prospect that a simple, practical algorithm exists that reliably converges to Nash equilibria in all games \textit{at least asymptotically}. Naturally, the difficulty of learning Nash equilibria grows significantly when one broadens their scope to a more general class of games than merely zero-sum games~\citep{daskalakis2010learning,paperics11,galla2013complex,DynMeaningOfTheGame}. Numerical studies suggest that chaos is typical~\citep{sanders2018prevalence} and emerges even in low dimensional systems~\citep{sato2002chaos,palaiopanos2017multiplicative, 2017arXiv170109043P}. Such non-convergence results have inspired a program on the intersection of game theory and dynamical systems \citep{Entropy18,DynMeaningOfTheGame}, specifically using Conley's fundamental theorem of dynamical systems \citep{conley1978isolated}. Interestingly, it is exactly Conley index theory that can be utilized to establish a universal negative result for game dynamics, even if we relax our global attractor requirements from the set of exact Nash to approximations thereof.
In an even more general context, {\em computational} (as opposed to topological) impossibility results are known for the problem of finding price equilibria in markets \citep{PapYannPrice}. If one extends even further to the machine-learning-inspired class of differential games, several negative results have recently been established~\citep{letcher2021impossibility,balduzzi2020smooth,hsieh2021limits,farnia2020gans}. \section{Preliminaries on Game Theory} In this section we outline some fundamental game theoretic concepts, essential for the development of our paper. For more detailed background, we refer the interested reader to the books by \citet{AGT_main_book,HofbauerSigmund,weibull_evolutionary_1998}. Consider a finite set of $K$ players, each of whom has a finite set of actions/strategies (say, for example, that player $k\in [K]$ can play any strategy $s_k$ within their strategy space $S_k$). The utility that player $k$ receives from playing strategy $s_k\in S_k$ when the other players of the game choose strategies $s_{-k} \in \prod_{l\in [K]\setminus \{k\}} S_l$ is a function $u_k : \prod_{l\in [K]} S_l \to {\mathbb{R}}$. These definitions are then extended multilinearly to mixed (randomized) strategies: each player randomizes independently over their pure strategies, and utilities are taken in expectation over the induced product distribution. We call the resulting triplet $(K, \prod_{l\in [K]} S_l, \prod_{l\in [K]} u_l)$ a game in {\em normal form}. A two-player game in normal form is said to be a {\em bimatrix} game, because the utilities in this case may equivalently be described by two matrices $M_1, M_2\in{\mathbb{R}}^{m\times n}$ for the two players respectively, when player 1 has $m$ (pure) strategies available and player 2 has $n$ (pure) strategies available. We say that such a game is an $m\times n$ bimatrix game.
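In code, a bimatrix game is just a pair of matrices, and the basic quantities --- expected payoffs and best-response gaps --- are one-liners. The following minimal sketch (with a hypothetical example game) takes both matrices to be $m\times n$, so that player 2's payoff at $(x,y)$ is $\langle x, M_2 y\rangle$ as in the equilibrium conditions that follow:

```python
import numpy as np

def payoffs(M1, M2, x, y):
    """Expected utilities of the two players at the mixed profile (x, y)."""
    return x @ M1 @ y, x @ M2 @ y

def best_response_gaps(M1, M2, x, y):
    """How much each player could gain by a unilateral deviation. By
    multilinearity the best deviation is a pure strategy, so maximizing
    over the entries of M1 y (resp. of x M2) suffices."""
    u1, u2 = payoffs(M1, M2, x, y)
    return (M1 @ y).max() - u1, (x @ M2).max() - u2

# Hypothetical example: matching pennies at uniform play (its equilibrium).
M1 = np.array([[1., -1.], [-1., 1.]])
M2 = -M1
g1, g2 = best_response_gaps(M1, M2, np.array([.5, .5]), np.array([.5, .5]))
```

A profile $(x,y)$ is a Nash equilibrium exactly when both gaps are $0$, and an $\epsilon$-approximate Nash equilibrium exactly when both gaps are at most $\epsilon$.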
For mixed strategy profiles $x \in {\mathbb{R}_+^m}, y \in {\mathbb{R}_+^n}$ with $\sum_{i\in [m]} x_i = 1, \sum_{j\in [n]} y_j = 1$, we say that $(x, y)$ is a Nash equilibrium for the bimatrix game as above if \[ \begin{cases} \langle x, M_1 y \rangle \geq \langle x', M_1 y \rangle\ \text{for all } x' \in {\mathbb{R}_+^m}, \sum_{i\in [m]} x_i' = 1 \\ \langle x, M_2 y \rangle \geq \langle x, M_2 y' \rangle\ \text{for all } y' \in {\mathbb{R}_+^n}, \sum_{j\in [n]} y_j' = 1. \end{cases} \] (or alternatively, that $x$ is best-response of player 1 to player 2's $y$ mixed strategy, and $y$ is best-response of player 2 to player 1's $x$ mixed strategy) where $\langle \cdot, \cdot \rangle$ denotes the inner product of the respective vector space. We also say that $(x, y)$ is an $\epsilon$-approximate Nash equilibrium for the bimatrix game if \[ \begin{cases} \langle x, M_1 y \rangle \geq \langle x', M_1 y \rangle - \epsilon\ \text{for all } x' \in {\mathbb{R}_+^m}, \sum_{i\in [m]} x_i' = 1 \\ \langle x, M_2 y \rangle \geq \langle x, M_2 y' \rangle - \epsilon\ \text{for all } y' \in {\mathbb{R}_+^n}, \sum_{j\in [n]} y_j' = 1. \end{cases} \] Finally, we denote by $(M_1 y)_i$ the $i$-th coordinate of the vector $M_1 y\in{\mathbb{R}}^m$. \input{conley3} \section{Nondegenerate games and approximate equilibria} \subsection{Nondegenerate games} Our impossibility result in the previous section is constructed around a degenerate normal-form game with a continuum of equilibria. What if the game is nondegenerate? \begin{theorem}\label{thm:nondeg_dynamics} For any nondegenerate game $g$ there is a Type III dynamical system $\varphi_g$. \end{theorem} \begin{proof} Since $g$ is nondegenerate, it has an odd number of isolated Nash equilibria \citep{Shapley1974}. Fix one such equilibrium and call it $y$. We next define $\varphi_g$ in terms of $y$. 
We shall define it at point $x$ implicitly, in terms of the direction of motion, and the speed of motion; if this is done, $\varphi_g(t,x)$ is easily computed through integration on $t$. The direction of motion is the unit vector of $y-x$: the dynamics heads to $y$. The speed of motion is defined to be $c\cdot D_g(x)$, where $c>0$ is a constant, and by $D_g(x)$ we denote the {\em deficit at $x$:} the sum, over all players, of the difference between the best-response utility at $x$, and the actual utility at $x$. It is clear that $D_g(x)\geq 0$, and it becomes zero precisely at any Nash equilibrium. Now it is easy to check that $\varphi_g$ is a well defined dynamics. Furthermore, since the underlying flow is acyclic in a very strong star-like sense, its chain recurrent set coincides with the set of its fixed points, which coincides with the set of Nash equilibria of $g$ --- since there is no opportunity to close extra cycles by $\epsilon$ jumps --- completing the proof. \end{proof} Now note that the algorithm for specifying $\varphi_g$ requires finding $y$, a PPAD-complete (and FIXP-complete for more than two players) problem. We believe that the dependence is inherent: \begin{conjecture} The computational task of finding, from the description of a game $g$, either a degeneracy of $g$ or an algorithm producing the direction of motion and speed of a dynamical system of Type III is PPAD-hard (and FIXP-hard for three or more players). \end{conjecture} We believe this is an important open question in the boundary between computation, game theory, and the topology of dynamical systems, whose resolution is likely to require the development of new complexity-theoretic techniques pertinent to dynamical systems. \subsection{$\epsilon$-approximate Nash equilibria} Next, it may seem plausible that the difficulty of designing dynamics that converge to Nash equilibria can be overcome when only $\epsilon$-approximation is sought, for some $\epsilon>0$. 
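Before turning to approximate equilibria, we note that the deficit-driven construction in the proof of Theorem~\ref{thm:nondeg_dynamics} is straightforward to simulate numerically. The sketch below uses forward-Euler integration on a hypothetical $2\times 2$ game with a known target equilibrium (all concrete choices here are illustrative assumptions):

```python
import numpy as np

# Matching pennies, whose unique Nash equilibrium is uniform play.
M1 = np.array([[1., -1.], [-1., 1.]])
M2 = -M1
target = np.array([.5, .5, .5, .5])  # the fixed equilibrium y of the proof

def deficit(z):
    """D_g(z): summed gap between best-response and actual utility over
    both players; nonnegative, and zero exactly at Nash equilibria."""
    x, y = z[:2], z[2:]
    d1 = (M1 @ y).max() - x @ M1 @ y
    d2 = (x @ M2).max() - x @ M2 @ y
    return d1 + d2

def flow(z, dt=0.01, steps=500, c=1.0):
    """Euler integration: head straight at the target with speed c*D_g."""
    for _ in range(steps):
        d = target - z
        norm = np.linalg.norm(d)
        if norm == 0:
            break  # exactly at the equilibrium: the speed is zero anyway
        z = z + dt * c * deficit(z) * d / norm
    return z

z_final = flow(np.array([1., 0., 1., 0.]))
```

Because every trajectory is a straight segment aimed at the equilibrium and the speed vanishes only on Nash equilibria, no $\epsilon$-jumps can close a spurious cycle; this is the star-like acyclicity that forces $CR(\varphi_g)=NE(g)$.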
Recall that an $\epsilon$-Nash equilibrium is a mixed strategy profile in which all players' utilities are within an additive $\epsilon>0$ of their respective best response. We go on to show that our impossibility theorem extends to this case as well. Let us denote by $NE_\epsilon(g)$ the set of $\epsilon$-approximate Nash equilibria of a game $g$. Finally, let us say that a dynamics $\varphi$ for a game $g$ is of Type III$_\epsilon$ if $CR(\varphi)=NE_{\epsilon}(g)$. \begin{theorem} There is a game $g$ which admits no Type III$_\epsilon$ dynamics. In fact, the set of games which admit no Type III$_\epsilon$ dynamics has positive measure. \end{theorem} \begin{proof} Consider again the Kohlberg-Mertens game \eqref{eqn:game}. \iffalse where we choose $\delta = \epsilon/2$: \begin{align*} A = \begin{pmatrix} 1+\delta & 0 & -1\\ -1 & \delta & -1\\ 1 & 0 & -2+\delta \end{pmatrix} \end{align*} is non-degenerate, with NE of $(1,0,0 \| 1,0,0)$, $(0,1,0 \| 0,1,0)$, $(\frac{\delta}{2(1+\delta)}, 1-\frac{\delta}{2(1+\delta)}, 0 \| \frac{\delta}{2(1+\delta)}, 1-\frac{\delta}{2(1+\delta)}, 0)$. \fi We claim that its set of $\epsilon$-Nash equilibria $NE_{\epsilon}(g)$ is homotopy equivalent to $S^1$ (a circle) for sufficiently small $\epsilon > 0$. To prove the claim, we subdivide the set of all strategy profiles into nine polytopes $P^{ij}$ for all $i,j \in\{1,2,3\}$, where the polytope $P^{ij}$ is the subset of the strategy space such that the best response of player 1 is $i$, and that of player 2 is $j$. Obviously, these regions can be defined by linear inequalities. Let $P^{ij}_{\epsilon}$ denote the intersection of $P^{ij}$ and $NE_{\epsilon}(g)$; it can be defined by the linear inequalities defining $P^{ij}$ plus the inequality stating that $(x,y)$ is in $NE_\epsilon(g)$: \[ x^T M_1 y \geq (M_1 y)_i - \epsilon, \] and a similar inequality for player 2.
\footnote{As an example, for $P^{22}_{\epsilon}$ and $\epsilon=0.27$, the solution of these inequalities is of the form \[ x_1=0\land \left(\left(0\leq x_2<0.73\land y_1=0\land \frac{x_2-0.73}{x_2-1}\leq y_2\leq 1\right)\lor \left(0.73\leq x_2\leq 1\land y_1=0\land 0\leq y_2\leq 1\right)\right) . \]} Now it is clear that $NE_{\epsilon}(g)$ is the union of these nine manifolds. However, it is known for the Kohlberg-Mertens game that strategies 2 and 3 are weakly dominated by the first strategy for the first player (and by symmetry, also for the second player) \citep{DemichelisRitzberger}.\footnote{This can straightforwardly be observed by verifying that the following system of inequalities is tautological: \[ \begin{cases} 2y_1 + y_2 - 1 \geq y_2 - 1 \\ 2y_1 + y_2 - 1 \geq 3y_1 + 2y_2 - 2. \end{cases} \]} Hence, all manifolds $P^{ij}_\epsilon$ are contained within $P^{11}_\epsilon$. Thus $NE_\epsilon(g)$ is a connected, compact $4$-dimensional manifold (with boundary) which is homotopy equivalent to $NE(g)$ for a sufficiently small $\epsilon$.\footnote{Fig. \ref{fig:approx_projection} shows a projection (in a particular direction of ${\mathbb{R}}^4$) of $NE_\epsilon(g)$ for $\epsilon = 0.09$. See the supplementary material for a video of {\em 3-dimensional slices} of $NE_\epsilon(g)$. Computations performed using Mathematica show that $NE_\epsilon(g)$ is homotopy equivalent to $S^1$ up to at least $\epsilon = 0.09$, and it becomes homotopy equivalent to a ball for some $\epsilon \in (0.09, 0.12)$.} Assuming $NE_\epsilon(g)$ is an attractor for a dynamical system $\varphi$, we may reason similarly to the proof of Theorem~\ref{thm:imposs} and again invoke Corollary~\ref{cor:neq} to show that there cannot exist Type III$_\epsilon$ dynamics for $g$. 
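As an aside, the weak-dominance claim used above can be verified mechanically. The matrix below is reconstructed from the footnote's utility expressions ($2y_1+y_2-1$, $y_2-1$ and $3y_1+2y_2-2$ for rows 1--3, after substituting $y_3=1-y_1-y_2$); treating it as the payoff matrix of \eqref{eqn:game} is an assumption, since that display is not reproduced in this excerpt:

```python
import numpy as np

# Player 1's payoff matrix (rows: own pure strategies, columns: the
# opponent's), reconstructed from the footnote's utilities -- see above.
A = np.array([[ 1., 0., -1.],
              [-1., 0., -1.],
              [ 1., 0., -2.]])

def weakly_dominates(M, i, k):
    """Row i weakly dominates row k: never worse against any pure (hence
    any mixed) opponent strategy, and strictly better against some."""
    gap = M[i] - M[k]
    return bool((gap >= 0).all() and (gap > 0).any())

dominated = [weakly_dominates(A, 0, k) for k in (1, 2)]
```

By the symmetry of the game the same check covers player 2, matching the claim that strategies 2 and 3 are weakly dominated by strategy 1 for both players.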
To show that the set of such games which admit no Type III$_\epsilon$ dynamics has positive measure, consider the approximation problem for an $\epsilon$ substantially smaller than the limit near $0.09$ --- say $\epsilon = 0.03$ --- and all perturbations of the same game where each utility value of the normal form of $g$ in \eqref{eqn:game} is perturbed independently, so that the 18-dimensional vector of perturbations has norm $\frac{\epsilon}{c}$, for some appropriately large constant $c$. It is clear that this set of games has positive measure. Let us consider a game $g'$ in this ball. First, it is self-evident that any equilibrium of $g$ is an $\epsilon$-equilibrium of $g'$, hence the set of $NE_{\epsilon}(g')$ contains $NE(g)$. Furthermore, it is also clear that any strategy profile that is {\em not} a $0.09$-equilibrium of $g$ is not in $NE_{\epsilon}(g')$. Thus $NE_\epsilon(g')$ is contained in $NE_{0.09}(g)$. It follows that $NE_\epsilon(g')$ is homotopy equivalent to $S^1$, and thus the argument above holds for $g'$, which completes the proof. \end{proof} \begin{figure}[ht] \centering \includegraphics[scale=0.4]{images/projection.png} \caption{3-dimensional projection of $NE_\epsilon(g)$ when $g$ is the Kohlberg-Mertens game \eqref{eqn:game}, and for utility-normalized $\epsilon=0.09$. The object depicted is homotopy equivalent to $S^1$ (a circle).} \label{fig:approx_projection} \end{figure} \begin{remark} A generic perturbation of the Kohlberg-Mertens game has a finite set of isolated Nash equilibria (and is nondegenerate), and we know from Theorem~\ref{thm:nondeg_dynamics} that Type III dynamics do exist for this perturbed game. It may, therefore, appear surprising that we can prove a stronger impossibility result (positive measure of such games) despite the goal being more modest (just approximation instead of an exact equilibrium). 
The reason is that, as soon as the sought approximation $\epsilon$ becomes much larger than the perturbation of the game (equivalently, as soon as the perturbation becomes much smaller than the approximation) and it is required that the approximate equilibria be the only chain recurrent points of the dynamics, $NE_\epsilon(g)$ is once again homotopy equivalent to $S^1$, and our characterization of the Conley indices (Corollary~\ref{cor:neq}) once again applies. \end{remark} \section{Conclusion} In this paper we have argued that the notion of Nash equilibria is fundamentally incomplete for describing the global dynamics of games. More precisely, we have shown that there are games where the set of Nash equilibria does not comprise the entire chain recurrent set, and thus the Nash equilibria cannot account for all of the long-term dynamical behavior. Moreover, this is true even when one relaxes the focus from Nash equilibria to approximate Nash equilibria. We have utilized the chain recurrent set in this paper in order to characterize game dynamics. However, ultimately we believe that it is not the right tool for the analysis of game dynamics. The chain recurrent set is a brittle description of the fine structure of dynamics, i.e. not robust under perturbations. In contrast, the rules which are suggested to govern players' behavior are not meant as precise models derived from first principles, but instead as rough approximations. Thus the appropriate mathematical objects for the analysis of game dynamics must be robust to perturbation; in the words of~\citet{conley1978isolated}, \emph{``...if such rough equations are to be of use it is necessary to study them in rough terms''}. Instead, we propose a focus on the coarser concept of \emph{Morse decomposition} (if a system has a finest Morse decomposition, then it is the chain recurrent set).
Morse decompositions are posets of isolated invariant sets which possess a duality theory with lattices of attractors, and in addition have an associated homological theory using the Conley index \citep{conley1978isolated}. Ultimately, these ideas culminate in the theory of the connection matrix, which describes the global dynamics by means of a chain complex, and which would provide a robust, homological theory of the global dynamics of games. The intersection of these ideas with the solution concepts of online learning and optimization (e.g., regret) as well as those of game theory (e.g., Nash/(coarse) correlated equilibria) holds the promise of a significantly more precise understanding of learning in games. Although these future theories are yet to be devised, one of their aspects is certain: Nash equilibria are not enough. \section*{Acknowledgements} The work of K.S. was partially supported by EPSRC grant EP/R018472/1. K.S. would like to thank Rob Vandervorst for numerous enlightening discussions regarding Conley theory.
\section*{Acknowledgements} \label{sec:acknowledgements} This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No.~865855). The authors also acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant No.~INST~40/467-1~FUGG (JUSTUS cluster) and by the Stuttgart Center for Simulation Science (SimTech). \section{\texorpdfstring{$U_{\textrm{eff}}$ parameter space}{Ueff parameter space}}\label{sec:pho} Results calculated by the DFT$+U$ method are sensitive to the input parameter $U_{\textrm{eff}}$, as shown by Table~\ref{tab:pre}. We have explored the DFT$+U$ parameter space by testing a range of $U_{\textrm{eff}}$ values from \SI{0}{\electronvolt} to \SI{8}{\electronvolt} for the FM cubic BaFeO$_{3}$. Figure~\ref{fig:u}(a) shows the $U_{\textrm{eff}}$ dependence of the phonon dispersion. With a relatively small $U_{\textrm{eff}}$ value ($\leq \SI{2}{\electronvolt}$), imaginary modes (which indicate dynamic instability at \SI{0}{\kelvin}) are found at all commensurate $\boldsymbol{q}$-points included here, i.e., $\Gamma$, M, R, X. Fewer imaginary modes are present at larger $U_{\textrm{eff}}$. Specifically, imaginary modes at the M point disappear when $U_{\textrm{eff}} \approx \SI{6}{\electronvolt}$, while at the R point they disappear when $U_{\textrm{eff}} \approx \SI{5}{\electronvolt}$. No imaginary mode is found for phonon calculations with a $U_{\textrm{eff}}$ value larger than \SI{7}{\electronvolt}. The result for $U_{\textrm{eff}} = \SI{5}{\electronvolt}$ is found to be different from previous studies~\cite{Cherair2017, Zhang2018a} (shown in Table~\ref{tab:pre}), which may be due to the use of different functionals or codes.
\begin{figure*}[tbp] \centering \includegraphics[width=17.8cm]{fig/u.pdf} \caption{$U_{\textrm{eff}}$ dependence of (a) the phonon dispersion, (b) the lattice constant and (c) the local magnetic moment on Fe for the FM cubic structure BaFeO$_{3}$. Results with $U_{\textrm{eff}} = \SI{3}{\electronvolt}$ are highlighted by red lines. The phonon dispersion calculated with $U_{\textrm{eff}} = \SI{3}{\electronvolt}$ is also shown in Fig.~\ref{fig:pho3}(b). Commensurate $\boldsymbol{q}$-points are indicated by tick labels. Imaginary modes are shown by negative frequencies. Experimental data are obtained from Ref.~\cite{Hayashi2011}. } \label{fig:u} \end{figure*} The change of the lattice constant is shown in Fig.~\ref{fig:u}(b). With $U_{\textrm{eff}} = \SI{0}{\electronvolt}$ (GGA functional), the calculated value is higher than the experimental value~\cite{Hayashi2011} (by about 0.5\%). The overestimation of the lattice constant calculated by the GGA functional is also observed for the SrTiO$_{3}$ and the BaTiO$_{3}$ cubic perovskites~\cite{Wahl2008}. The DFT$+U$ method increases the magnitude of the overestimation. With $U_{\textrm{eff}} = \SI{3}{\electronvolt}$, the lattice constant is about 1\% larger than the experimental value. The change of the local magnetic moment on Fe is shown in Fig.~\ref{fig:u}(c). The GGA functional underestimates the local magnetic moment by about 15\%, which is corrected by the DFT$+U$ method. With $U_{\textrm{eff}} = \SI{3}{\electronvolt}$, the calculated local magnetic moment is between the values given by experiments~\cite{Hayashi2011} and HSE06 (calculated in this study). \begin{comment} \section{Magnetic states}\label{sec:mag} While various magnetic configurations of cubic BaFeO$_3$ were previously investigated with \textit{ab initio} simulations~\cite{Ribeiro2013, Maznichenko2016, Rahman2016}, to our knowledge, no such investigation has been reported for vacancy-ordered monoclinic BaFeO$_{2.67}$.
In this Appendix, we consider several possible magnetic configurations for BaFeO$_{2.67}$ and compare their energies with each other and with cubic BaFeO$_3$. Figure~\ref{fig:mag} shows four magnetic configurations for cubic BaFeO$_{3}$. In the FM state, all Fe atoms have the same spin orientation and carry essentially all of the magnetic moment, while the magnetic moments on the Ba and O atoms are negligible. For A-, C-, and G-AFM, the Fe atoms on the same $\{100\}_\mathrm{cub}$, $\{110\}_\mathrm{cub}$, and $\{111\}_\mathrm{cub}$ planes, respectively, show the same spin orientation. Magnetic configurations for the vacancy-ordered monoclinic phase can be derived by removing the corresponding O atoms from the cubic phase as discussed in Sec.~\ref{sec:str}. Note that for the monoclinic phase, there are several symmetrically inequivalent A- and C-AFM states due to the existence of the O vacancies. For example, the $(011)_\mathrm{cub}$ and the $(110)_\mathrm{cub}$ planes give different C-AFM configurations. We have systematically considered such different spin arrangements by utilizing sufficiently large supercells. \begin{figure}[tbp] \centering \includegraphics[width=8.6cm]{fig/mag.pdf} \caption{The spin ordering of (a) FM, (b) A-, (c) C- and (d) G-AFM for the cubic structure. Arrows show the spin orientations carried by the Fe atoms. The highlighted planes for the AFM phases indicate the Fe atoms with the same spin orientation.} \label{fig:mag} \end{figure} As stressed in Sec.~\ref{sec:intro}, results from DFT$+U$ depend on the input parameter $U_{\textrm{eff}}$, and this likewise applies to the magnetic phase stability. Similar to the dynamic stability investigation, the more accurate HSE06 hybrid functional was used as the reference for the calibration of $U_{\textrm{eff}}$. A range of $U_\mathrm{eff}$ values from \SI{0}{\electronvolt}, i.e., no Hubbard correction, to \SI{9}{\electronvolt} was tested for the cubic structure.
Figure~\ref{fig:ueng} shows energies of FM cubic BaFeO$_3$ as a function of $U_\mathrm{eff}$ with the non-magnetic (NM) state as the reference. It is found that $U_{\textrm{eff}} = \SI{5.3}{\electronvolt}$ gives almost the same relative energy for DFT$+U$ and HSE06. Therefore, we have determined the magnetic ground states of cubic BaFeO$_{3}$ and monoclinic BaFeO$_{2.67}$ using $U_{\textrm{eff}} = \SI{5.3}{\electronvolt}$. Note that the parameter utilized here is larger than that used for the dynamic stability investigation ($U_{\textrm{eff}} = \SI{3}{\electronvolt}$), implying that the proper $U_\mathrm{eff}$ value depends on the materials property considered. \begin{figure}[tbp] \centering \includegraphics[width=8.6cm]{fig/ueng.pdf} \caption{$U_{\textrm{eff}}$ dependence of the relative energy for the cubic structure. NM is set as the reference. } \label{fig:ueng} \end{figure} Table~\ref{tab:energy} shows the relative energies of the structures in various magnetic states. Cubic BaFeO$_{3}$, intermediate BaFeO$_{2.67}$, and ideal monoclinic BaFeO$_{2.67}$ correspond to Figs.~\ref{fig:trans}(d)--(f), respectively. For the cubic structure, FM has the lowest energy. During the vacancy formation process, several O atoms, originally located between the Fe atoms, are removed. The removal of the O atoms (without further relaxation) leads to an intermediate configuration where pairs of Fe atoms are created in which the two Fe atoms directly face each other. Thereby, the FM configuration is dramatically destabilized (by \SI{834.6}{\milli\electronvolt\per{f.u.}}) compared to the G-AFM configuration, for which the two Fe atoms facing each other obtain antiparallel spins. During the structural optimization, the interaction of the neighboring Fe atoms is weakened due to the distortion of the corresponding Fe coordination tetrahedra. The energy of the FM structure decreases strongly, although the G-AFM state is still more stable (\SI{81.7}{\milli\electronvolt\per{f.u.}} lower in energy).
The local magnetic moments on the Fe atoms in the G-AFM state amount to XXX, while the Ba and O atoms have negligible moments. For the ideal monoclinic phase, two classes of AFM configurations can be observed: a lower-energy class with energies of a few meV/f.u. and a higher-energy class with energies of 40 to 60 meV/f.u. The AFM configurations of the lower-energy class contain a finite component along the third lattice vector in their spin plane normal, i.e., AFM ($xx$1)$_{\textrm{cub}}$, with $x$ either 0 or 1. The orientation of the alternating spin planes ensures that the Fe atoms facing each other have anti-parallel spins. For the AFM configurations of the higher-energy class, these Fe atoms have parallel spins. In summary, the magnetic ground states of the cubic and the monoclinic phases are found to be FM and G-AFM, respectively, which is consistent with previous studies~\cite{Ribeiro2013, Maznichenko2016, Rahman2016, Wollstadt2021}. \begin{table}[tbp] \caption{Relative energies of the cubic BaFeO$_{3}$, intermediate BaFeO$_{2.67}$, and monoclinic BaFeO$_{2.67}$ phases with different magnetic states. The magnetic ground states are set as references. Planes with the same spin ordering are indicated for each AFM. The DFT+$U$ method with $U=\SI{5.3}{\electronvolt}$ was used.
} \begin{ruledtabular} \begin{tabular}{cccc} Phase & Shown~in & Magnetic state & $\Delta E$ (\SI{}{\milli\electronvolt\per{f.u.}}) \\ \hline \multirowcell{4}{cubic\\BaFeO$_{3}$} & \multirowcell{4}{Fig.~\ref{fig:trans}(d)} & FM & \num{0.0} \\ & & A-AFM $\{001\}_{\textrm{cub}}$ & \num{39.4} \\ & & C-AFM $\{110\}_{\textrm{cub}}$ & \num{57.1} \\ & & G-AFM $\{111\}_{\textrm{cub}}$ & \num{71.3} \\ \hline \multirowcell{2}{intermediate\\BaFeO$_{2.67}$} & \multirowcell{2}{Fig.~\ref{fig:trans}(e)} & G-AFM $\{111\}_{\textrm{cub}}$ & \num{0.0} \\ & & FM & \num{834.6} \\ \hline \multirowcell{8}{monoclinic\\BaFeO$_{2.67}$} & \multirowcell{8}{Fig.~\ref{fig:trans}(f)} & G-AFM $\{111\}_{\textrm{cub}}$ & \num{0.0} \\ & & C-AFM (011)$_{\textrm{cub}}$ & \num{3.5} \\ & & C-AFM (101)$_{\textrm{cub}}$ & \num{3.5} \\ & & A-AFM (001)$_{\textrm{cub}}$ & \num{6.3} \\ & & C-AFM (110)$_{\textrm{cub}}$ & \num{40.9} \\ & & A-AFM (100)$_{\textrm{cub}}$ & \num{59.8} \\ & & A-AFM (010)$_{\textrm{cub}}$ & \num{59.8} \\ & & FM & \num{81.7} \\ \end{tabular} \end{ruledtabular} \label{tab:energy} \end{table} \begin{table}[tbp] \caption{Relative energies of the cubic BaFeO$_{3}$, intermediate BaFeO$_{2.67}$, and monoclinic BaFeO$_{2.67}$ phases with different magnetic states. The magnetic ground states are set as references. Planes with the same spin ordering are indicated for each AFM. The DFT+$U$ method with $U=\SI{3}{\electronvolt}$ was used.
} \begin{ruledtabular} \begin{tabular}{cccc} Phase & Shown~in & Magnetic state & $\Delta E$ (\SI{}{\milli\electronvolt\per{f.u.}}) \\ \hline \multirowcell{4}{cubic\\BaFeO$_{3}$} & \multirowcell{4}{Fig.~\ref{fig:trans}(d)} & FM & \num{0.0} \\ & & A-AFM $\{001\}_{\textrm{cub}}$ & \num{} \\ & & C-AFM $\{110\}_{\textrm{cub}}$ & \num{} \\ & & G-AFM $\{111\}_{\textrm{cub}}$ & \num{} \\ \hline \multirowcell{2}{intermediate\\BaFeO$_{2.67}$} & \multirowcell{2}{Fig.~\ref{fig:trans}(e)} & G-AFM $\{111\}_{\textrm{cub}}$ & \num{0.0} \\ & & FM & \num{} \\ \hline \multirowcell{8}{monoclinic\\BaFeO$_{2.67}$} & \multirowcell{8}{Fig.~\ref{fig:trans}(f)} & G-AFM $\{111\}_{\textrm{cub}}$ & \num{0.0} \\ & & C-AFM (011)$_{\textrm{cub}}$ & \num{-41.1} \\ & & C-AFM (101)$_{\textrm{cub}}$ & \num{-41.1} \\ & & A-AFM (001)$_{\textrm{cub}}$ & \num{-16.5} \\ & & C-AFM (110)$_{\textrm{cub}}$ & \num{27.3} \\ & & A-AFM (100)$_{\textrm{cub}}$ & \num{48.4} \\ & & A-AFM (010)$_{\textrm{cub}}$ & \num{48.5} \\ & & FM & \num{96.4} \\ \end{tabular} \end{ruledtabular} \label{tab:energy} \end{table} \end{comment} \section{Introduction} \label{sec:intro} Vacancy-ordered perovskites attract increasing attention in fields such as optoelectronics~\cite{Shao2019}, photovoltaics~\cite{Ju2018}, and electrochemistry~\cite{Sengodan2015, IqbalWaidha2021} due to their tunable electronic, magnetic, and catalytic properties. The versatility of these materials is related to a stability competition among various structural arrangements. The Ba--Fe--O system offers a particularly rich class of perovskite-type structures with various vacancy-orderings, e.g., hexagonal BaFeO$_{2.65}$ ($P6_{3}/mmc$)~\cite{Gomez2001}, monoclinic BaFeO$_{2.5}$ ($P2_{1}/c$)~\cite{Clemens2014} or BaFeO$_{2.67}$ ($P2_{1}/m$)~\cite{Wollstadt2021}. 
Despite considerable experimental efforts, the actual vacancy ordering and, thus, the exact arrangement of the atoms in the non-stoichiometric Ba--Fe--O phases are often not resolved~\cite{Clemens2014} on account of the structural complexity introduced by the vacancies. Experiments face practical challenges, e.g., in achieving high phase purity or in coping with multi-stage phase transitions that hinder the observation of single phases in extended temperature intervals~\cite{Wollstadt2021}. From a more fundamental perspective, it is the intricate interplay of vacancy ordering and the dynamic stability that poses a decisive challenge. Traditionally, the Goldschmidt tolerance factor~\cite{Goldschmidt1926} and its extended forms~\cite{Bartel2019, Sato2016, Kieslich2015}, most of which take the chemical formula and the ionic radii as the basic input data, have been used to predict the dynamic stability of perovskites. These empirical rules are, however, oversimplified and hence insufficient to reveal the details of the dynamic stability. For example, they do not capture structural deformations and, likewise, do not predict the dynamic stabilization at elevated temperatures. In order to properly take such features into account, \textit{ab initio} simulations, specifically in the form of density-functional theory (DFT), are required. In \textit{ab initio} simulations, the harmonic approximation is commonly used to determine the dynamic (in)stability at \SI{0}{\kelvin}~\cite{Togo2015}. Indeed, for BaFeO$_3$ (i.e., the ``perfect'' cubic perovskite without ordered vacancies), a few studies have been reported to date, as summarized in Table~\ref{tab:pre}, specifically employing the DFT$+U$ approach~\cite{Dudarev1998}.
DFT+$U$ is computationally much more affordable than DFT supplemented with advanced hybrid functionals, but an additional input parameter, i.e., $U_{\textrm{eff}}$, is needed to correct for the over-delocalization error of the standard exchange--correlation functionals of DFT. The specific $U_{\textrm{eff}}$ value is not trivial to determine and, as Table~\ref{tab:pre} reveals, a small change can lead to qualitatively different results. For example, cubic BaFeO$_{3}$ was predicted to be dynamically stable with $U_{\textrm{eff}} \approx \SI{5}{\electronvolt}$~\cite{Cherair2017, Zhang2018a}, while Jahn--Teller distortions were observed with $U_{\textrm{eff}} = \SI{4}{\electronvolt}$~\cite{Cherair2018, Hoedl2021}. These differences highlight that the $U_{\textrm{eff}}$ value needs to be chosen with care, ideally by calibration with respect to a higher-level method. For vacancy-ordered Ba--Fe--O perovskites (i.e., compositions with lower O content than BaFeO$_3$), to our knowledge, \textit{ab initio} investigations of the dynamic (in)stability do not exist. \begin{table}[tbp] \caption{Theoretical studies of the \SI{0}{\kelvin} dynamic stability of Ba--Fe--O perovskites. Cubic BaFeO$_{3}$ is ferromagnetic (FM) while monoclinic BaFeO$_{2.67}$ is G-type (in the Wollan--Koehler notation~\cite{Wollan1955}) anti-ferromagnetic (AFM). Discussions on the phonon dispersions are given in the Appendix. } \begin{ruledtabular} \begin{tabular}{ccccc} Year & Phase & Method & $U_{\textrm{eff}}$ (eV) & Stability \\ \hline 2017~\cite{Cherair2017} & cub. BaFeO$_{3}$ & DFT+\textit{U} & 5 & Stable \\ 2018~\cite{Cherair2018} & cub. BaFeO$_{3}$ & DFT+\textit{U} & 4 & Unstable \\ 2018~\cite{Zhang2018a} & cub. BaFeO$_{3}$ & DFT+\textit{U} & 5.2 & Stable \\ 2021~\cite{Hoedl2021} & cub. BaFeO$_{3}$ & DFT+\textit{U} & 4 & Unstable \\ this study & cub. BaFeO$_{3}$ & HSE06 & N/A & Unstable \\ this study & cub. BaFeO$_{3}$ & DFT+\textit{U} & 3 & Unstable \\ this study & mon.
BaFeO$_{2.67}$ & DFT+\textit{U} & 3 & Unstable \\ \end{tabular} \end{ruledtabular} \label{tab:pre} \end{table} Dynamic stabilization at finite temperatures can be investigated via \textit{ab initio} molecular dynamics (AIMD) simulations. One class of approaches utilizes effective force constants extracted from AIMD. For example, the temperature-dependent effective potential (TDEP) method~\cite{Hellman2011} fits effective harmonic force constants to the forces generated by AIMD. With the self-consistent phonon (SCPH) method, Tadano \textit{et al.}~\cite{Tadano2015} showed how to obtain renormalized phonon frequencies from anharmonic force constants extracted from AIMD. In general, sufficiently long AIMD runs are required for these methods to obtain reliable results, especially for materials with significant anharmonicity. Another strategy is to analyze the lattice distortions of the structures directly with AIMD. The dynamic stabilization can then be visualized explicitly as a function of temperature, with anharmonic contributions fully taken into account. Such an approach was applied to cubic perovskites~\cite{Klarbring2018, Sun2014}, though not yet to vacancy-ordered perovskites. The aim of the present study is to compare the dynamic stability of cubic BaFeO$_{3}$ ($Pm\bar{3}m$)~\cite{Hayashi2011} (no ordered vacancies) with vacancy-ordered monoclinic BaFeO$_{2.67}$ ($P2_{1}/m$)~\cite{Wollstadt2021} at \SI{0}{\kelvin} and at finite temperatures with AIMD and, thereby, to reveal the impact of ordered vacancies on the dynamic stability of perovskites. We first demonstrate that $U_\mathrm{eff} = \SI{3}{\electronvolt}$ should be chosen, since it closely reproduces the dynamic instability predicted by the more accurate HSE06 functional~\cite{Krukau2006, Wahl2008}. Based on the optimized $U_\mathrm{eff}$, we investigate the link between the ordered vacancies and the imaginary harmonic phonon modes at \SI{0}{\kelvin}.
We analyze trajectories of the atoms near the ordered vacancies at finite temperatures and investigate the influence of the ordered vacancies on the stabilization mechanism. To this end, we introduce a structural descriptor that captures the temperature-driven transformation for both the vacancy-free cubic and the vacancy-ordered monoclinic structures. \section{Methodology} \label{sec:metho} \subsection{Cubic and monoclinic structures}\label{sec:str} Figure~\ref{fig:trans}(a) shows the utilized simulation cell of the ideal cubic BaFeO$_{3}$ structure. While the unit cell contains 5 atoms, in the present study, a \num{2 x 2 x 2} supercell (40 atoms), which contains eight symmetrically equivalent corner-shared regular Fe coordination octahedra, was considered to capture the low-temperature distortion of the structure. Each octahedron consists of one central Fe atom and six surrounding O atoms. Ba atoms are located between the octahedra. To derive the vacancy-ordered structure, a coordinate transformation~\cite{Clemens2015, Wollstadt2021, IqbalWaidha2021} is performed on the unit cell of the cubic structure. The lattice vectors of the transformed unit cell $\boldsymbol{a}_{\textrm{cub}'}$, $\boldsymbol{b}_{\textrm{cub}'}$ and $\boldsymbol{c}_{\textrm{cub}'}$ are related to those of the original unit cell $\boldsymbol{a}_{\textrm{cub}}$, $\boldsymbol{b}_{\textrm{cub}}$ and $\boldsymbol{c}_{\textrm{cub}}$ according to \begin{equation} \begin{pmatrix} \boldsymbol{a}_{\textrm{cub}'} \\ \boldsymbol{b}_{\textrm{cub}'} \\ \boldsymbol{c}_{\textrm{cub}'} \end{pmatrix} = \begin{pmatrix} 1 & -1 & 0 \\ 1 & 1 & 1 \\ -1 & -1 & 2 \end{pmatrix} \begin{pmatrix} \boldsymbol{a}_{\textrm{cub}} \\ \boldsymbol{b}_{\textrm{cub}} \\ \boldsymbol{c}_{\textrm{cub}} \end{pmatrix}.
\end{equation} Although this transformation already leads to a six-times larger unit cell, the cell was further expanded by \num{1 x 1 x 2} to simulate the dynamic stability of the vacancy-ordered structure and to realize the G-type AFM ordering (see Sec.~\ref{sec:comp} for details). The simulation cell obtained in this way contains 12 corner-shared regular Fe coordination polyhedra, as shown in Fig.~\ref{fig:trans}(b). The orientation relationship between the two unit cells is shown in Fig.~\ref{fig:trans}(c). The $(101)_{\textrm{cub}'}$ plane projection of the transformed cubic structure with four highlighted octahedra is shown in Fig.~\ref{fig:trans}(d). Ordered vacancies are created by removing the four O atoms belonging to these four octahedra, as shown in Fig.~\ref{fig:trans}(e). The composition of the supercell changes from Ba$_{12}$Fe$_{12}$O$_{36}$ to Ba$_{12}$Fe$_{12}$O$_{32}$, i.e., the formula unit (f.u.) changes from BaFeO$_{3}$ to BaFeO$_{2.67}$. After the formation of the vacancies, four of the twelve previously octahedrally coordinated Fe cations become tetrahedrally coordinated. The symmetry of the structure is lowered, and the space group is changed to monoclinic $P2_{1}/m$. As a final step, the vacancy-ordered structure is optimized in an \textit{ab initio} manner, during which relaxation of the four tetrahedra and a shear of the supercell in the $[100]_{\textrm{mon}}$ direction take place, as shown in Fig.~\ref{fig:trans}(f). \begin{figure*}[tbp] \centering \includegraphics[width=17.8cm]{fig/trans.pdf} \caption{Derivation of the vacancy-ordered monoclinic structure from the cubic structure. (a) Simulation cell (\num{2 x 2 x 2} supercell) of the cubic structure. (b) Simulation cell (\num{1 x 1 x 2} supercell) after coordinate transformation. (c) Orientation comparison of the original and the transformed unit cells. (d) $(010)_{\textrm{cub}'}$ plane projection of the supercell (b) with the four highlighted octahedra. (e) Vacancy creation on the O sites.
(f) Simulation cell (\num{1 x 1 x 2} supercell) of the monoclinic structure, with indication of the shearing during the structural optimization process. The numbers of atoms (formula units) in the cells are shown below panels (d)--(f). Crystal structures are visualized by \textsc{vesta}~\cite{Momma2011}.} \label{fig:trans} \end{figure*} \subsection{\texorpdfstring{Structural descriptor $\Delta$}{Structural descriptor Delta}}\label{sec:strdes} To quantify the displacive phase transformation at the atomistic level, we introduce a structural descriptor $\Delta$ along lines similar to previous work on materials showing the bcc--$\omega$ phase transformation~\cite{Korbmacher2019, Ikeda2021, Gubaev2021}. The key requirements are as follows: \begin{itemize} \item The structural descriptor $\Delta(T)$ is a temperature-dependent scalar value that condenses the relevant information on the displacive phase transformation from AIMD trajectories. \item Thermal vibrations (i.e., random displacements) are filtered out to a good degree to increase the contrast in $\Delta$ between the low-temperature, low-symmetry phase and the high-temperature, high-symmetry phase. \item Displacements of atoms unrelated to symmetry breaking are likewise filtered out effectively, in a comparable manner for structures restricted by different space groups. \item The vacancy-free cubic and the vacancy-ordered monoclinic structure are treated on an equal footing, i.e., O anions in octahedral environments are treated in the same way as O anions in tetrahedral environments. \end{itemize} To fulfill these requirements we define $\Delta$ as follows.
For a polyhedron composed of a central Fe cation $i$ and surrounding O anions labelled with $j$, the temperature-dependent relative position vectors $\boldsymbol{r}_{ij}(T)$ are defined as \begin{equation} \boldsymbol{r}_{ij}\left(T\right) = \left\langle\boldsymbol{R}_{j}\right\rangle_{T} - \left\langle\boldsymbol{R}_{i}\right\rangle_{T}, \label{eq:posvec} \end{equation} where $\boldsymbol{R}_i$ is the position vector of the $i$th cation and $\boldsymbol{R}_j$ that of the $j$th anion. The time-averaged position of an ion at a given temperature $T$ is represented by $\left<\cdots\right>_{T}$. Figures~\ref{fig:fe_o}(a) and \ref{fig:fe_o}(b) give examples of the relative position vectors $\boldsymbol{r}_{ij}$ in the octahedron and the tetrahedron cases, respectively. Based on the position vectors $\boldsymbol{r}_{ij}$, we define a structure-dependent projection as \begin{numcases}{p_{ij} = } \boldsymbol{r}_{ij} \cdot \hat{\boldsymbol{r}}_{ij} & cubic, \label{eq:cub} \\ \boldsymbol{r}_{ij} \cdot \hat{\boldsymbol{u}}_{[010]_{\textrm{mon}}} & monoclinic, \label{eq:mon} \end{numcases} which defines the parameter $p_{ij}$. The relative position vector $\boldsymbol{r}_{ij}$ is projected onto the unit vector along $\boldsymbol{r}_{ij}$ itself ($\hat{\boldsymbol{r}}_{ij}$) for the cubic structure and onto the unit vector along the symmetry-broken direction ($\hat{\boldsymbol{u}}_{[010]_{\textrm{mon}}}$) for the monoclinic structure. \begin{figure}[tbp] \centering \includegraphics[width=8.6cm]{fig/fe_o.pdf} \caption{Illustration of relative position vectors in (a) the octahedron and (b) the tetrahedron cases. } \label{fig:fe_o} \end{figure} To emphasize the distortion, the parameter of the ideal structure $p^{\textrm{ideal}}$ is subtracted from the simulated parameters $p_{ij}$ at different temperatures.
The structural descriptor is obtained as the mean value of the referenced $p_{ij}$ parameters of the cation--anion pairs inside the polyhedra, \begin{equation} \Delta = \frac{1}{I\times J} \sum_{i = 1}^{I} \sum_{j = 1}^{J} \left|p_{ij} - p^{\textrm{ideal}}\right|, \label{eq:delta} \end{equation} where $I$ is the number of cations and $J$ is the number of anions surrounding each cation. The ideal cubic structure belongs to the space group $Pm\bar{3}m$ (No.~221), and all O atoms are at the Wyckoff site $3c$ for which all three fractional coordinates are fixed. This means that the space group of the cubic structure restricts each O atom to a single spatial point in the simulation cell. Any elongation or contraction of a cation--anion bond, in any direction, thus contributes to the distortion of the cubic structure. Therefore, the relative position vector $\boldsymbol{r}_{ij} (T)$ is projected onto itself, i.e., the modulus of $\boldsymbol{r}_{ij}$ is calculated [Eq.~\eqref{eq:cub}]. All Fe--O bonds in each of the octahedra in the simulation cell are taken into account for Eq.~\eqref{eq:delta}. For the monoclinic structure, not all atoms are spatially restricted by the symmetry operations included in the space group, i.e., the structure can still belong to the same space group $P2_{1}/m$ as long as the displacements of atoms do not break the symmetry. As will be discussed in Sec.~\ref{sec:mon}, the dynamic instability of the monoclinic structure is mainly related to the distortion of the four tetrahedra, in which only the four O atoms moving in the $[010]_{\textrm{mon}}$ direction contribute to the breaking of the symmetry. To investigate the dynamic instability of $P2_{1}/m$ BaFeO$_{2.67}$, we therefore consider the Fe--O pairs [Fig.~\ref{fig:fe_o}(b)] involving the four O atoms for estimating the distortion and calculating $\Delta$.
Mathematically, the four $\boldsymbol{r}_{ij} (T)$ [Eq.~\eqref{eq:posvec}] are projected onto the unit vector along the $[010]_{\textrm{mon}}$ direction [Eq.~\eqref{eq:mon}], and $I=4, J=1$ in Eq.~\eqref{eq:delta}. \subsection{Computational details}\label{sec:comp} Spin alignment is important for Fe oxides. Experiments revealed that cubic BaFeO$_{3}$ has an A-type spiral spin structure near \SI{0}{\kelvin}~\cite{Hayashi2011}, which was subsequently investigated by \textit{ab initio} simulations~\cite{Li2012a}. Further, a magnetic transition to the FM state was observed experimentally at \SI{111}{\kelvin}~\cite{Hayashi2011}. The FM state was also shown to be energetically the most stable within calculations with collinear spin alignment~\cite{Ribeiro2013, Maznichenko2016, Rahman2016}. For the vacancy-ordered monoclinic BaFeO$_{2.67}$ structure, experimental and \textit{ab initio} results show agreement on a G-AFM ordering~\cite{Wollstadt2021}. In the present study, collinear spin alignment was applied, and the FM and the G-AFM states were considered for the cubic and the monoclinic phases, respectively. Electronic structure calculations were carried out under the DFT framework using the projector augmented wave (PAW) method~\cite{Bloechl1994} and the generalized gradient approximation (GGA) in the Perdew--Burke--Ernzerhof (PBE) parametrization~\cite{Perdew1996} as implemented in \textsc{vasp}~\cite{Kresse1995, Kresse1996, Kresse1999}. Electrons in the atomic orbitals $5s^{2}5p^{6}6s^{2}$ (Ba), $3d^{6}4s^{2}$ (Fe) and $2s^{2}2p^{4}$ (O) were treated as valence electrons. For calculations using the DFT$+U$ method, the effective Hubbard potential ($U_{\textrm{eff}}$)~\cite{Dudarev1998} was added to electrons in the $d$-orbitals of the Fe atoms to capture the strong on-site Coulomb interaction. Additionally, DFT calculations with the HSE06 hybrid functional~\cite{Krukau2006} were performed.
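For orientation, a DFT$+U$ setup of the kind described here could be collected in a \textsc{vasp} INCAR along the following lines. This is an illustrative sketch consistent with the stated parameters (Dudarev scheme, $U_{\textrm{eff}}=\SI{3}{\electronvolt}$ on the Fe $d$ states, FM order and an assumed Ba--Fe--O species order for the 40-atom cubic cell), not the authors' actual input file:

```text
SYSTEM   = cubic BaFeO3, FM, DFT+U
ISPIN    = 2                    # collinear spin polarization
MAGMOM   = 8*0.0 8*4.0 24*0.0   # initial moments: 8 Ba, 8 Fe, 24 O (assumed order)
ENCUT    = 520                  # plane-wave cutoff (eV)
EDIFF    = 1E-5                 # electronic convergence (eV per cell)
EDIFFG   = -1E-2                # ionic convergence on forces (eV/Angstrom)
ISMEAR   = 1                    # first-order Methfessel-Paxton smearing
SIGMA    = 0.1                  # smearing width (eV)
LDAU     = .TRUE.               # switch on DFT+U
LDAUTYPE = 2                    # Dudarev scheme (single U_eff = U - J)
LDAUL    = -1 2 -1              # apply U to the d states (l = 2) of Fe only
LDAUU    = 0 3 0                # U (eV) per species: Ba, Fe, O
LDAUJ    = 0 0 0                # J (eV); zero, so U_eff = 3 eV on Fe
LMAXMIX  = 4                    # mix the d-channel density (recommended for +U)
```

The $\boldsymbol{k}$-point mesh is supplied separately (KPOINTS file), and the HSE06 reference calculations would replace the $+U$ tags by the hybrid-functional settings.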
The first-order Methfessel--Paxton scheme~\cite{Methfessel1989} with a smearing width of \SI{0.1}{\electronvolt} was used for structural optimization and force calculations, and the tetrahedron method with Blöchl corrections~\cite{Bloechl1994a} was used for accurate energy calculations. The plane-wave cutoff was set to \SI{520}{\electronvolt}. The reciprocal space was sampled by $\Gamma$-centered \num{4 x 4 x 4} and \num{3 x 6 x 3} $\boldsymbol{k}$-point meshes for the 40-atom cubic BaFeO$_{3}$ and the 54-atom monoclinic BaFeO$_{2.67}$ simulation cells, respectively. For the Kohn--Sham self-consistent calculation, the energy was minimized until the energy difference converged to less than \SI{d-5}{\electronvolt} per simulation cell. Ionic relaxation was performed with the conjugate gradient algorithm until the maximum residual force was less than \SI{d-2}{\electronvolt\angstrom{}^{-1}}. The \SI{0}{\kelvin} harmonic phonon dispersions were calculated using the finite displacement method implemented in \textsc{phonopy}~\cite{Togo2015}. The displacement amplitude was set to \SI{d-2}{\angstrom}, and a $\Gamma$-centered \num{14 x 14 x 14} $\boldsymbol{q}$-point mesh was used for sampling the reciprocal space. Tests showed that tighter energy convergence criteria do not significantly alter the results, so the current criterion (\SI{d-5}{\electronvolt} per simulation cell) was also used for the phonon calculations. The residual forces in the optimized structure were subtracted from the forces of the displaced structure for accurate calculations of the interatomic force constants. AIMD simulations were conducted using the Langevin thermostat and the canonical ensemble implemented in \textsc{vasp}~\cite{Kresse1995, Kresse1996, Kresse1999}. A \SI{2}{\femto\second} time step, a \SI{10}{\pico\second^{-1}} friction coefficient and the Fermi--Dirac smearing adjusted to the MD temperature were utilized.
For each AIMD step, the criterion for energy convergence was set to \SI{d-3}{\electronvolt} per simulation cell, and the atomic positions were adjusted to fix the center of mass. Other parameters were the same as those chosen for the electronic structure calculations. In total 5000 AIMD steps (\SI{10}{\pico\second}) were performed, and the first 300 steps were discarded for thermalization. \section{Results}\label{sec:result} \subsection{\texorpdfstring{Cubic BaFeO\textsubscript{3}}{Cubic BaFeO3}} \label{sec:cubic} Figure~\ref{fig:pho3}(a) shows the \SI{0}{\kelvin} phonon dispersion calculated by HSE06 for the cubic BaFeO$_{3}$ structure. As indicated by the arrows, one imaginary mode $\boldsymbol{e}^{\textrm{M}}_{1}$ at the M point [$\boldsymbol{q} = \left(1/2, 1/2, 0\right)$] and two degenerate imaginary modes $\boldsymbol{e}^{\textrm{R}}_{1}$ and $\boldsymbol{e}^{\textrm{R}}_{2}$ at the R point [$\boldsymbol{q} = \left(1/2, 1/2, 1/2\right)$] are observed, which reveals dynamic instability of the cubic structure at \SI{0}{\kelvin}. The imaginary modes correspond to collective displacements on the O atom sublattice. These displacements are related to the Jahn--Teller effect~\cite{Cherair2018, Hoedl2021} and they deform the octahedra in specific ways as illustrated in Figs.~\ref{fig:pho3}(c) and~\ref{fig:pho3}(d). There are, for example, breathing-type displacements, in which the O atoms in one plane move simultaneously inward or outward. The displacements cause a decrease of the symmetry, i.e., the space group is changed from cubic $Pm\bar{3}m$ to tetragonal $P4/mbm$, $I4/mcm$ and $I4/mmm$ for distortions along the $\boldsymbol{e}_{1}^{\textrm{M}}$, $\boldsymbol{e}_{1}^{\textrm{R}}$ and $\boldsymbol{e}_{2}^{\textrm{R}}$ imaginary modes, respectively. \begin{figure*}[tbp] \centering \includegraphics[width=17.8cm]{fig/pho3.pdf} \caption{Dynamic instability of the cubic structure at \SI{0}{\kelvin}.
Phonon dispersions calculated by (a) HSE06 and (b) DFT$+U$ ($U_{\textrm{eff}} = \SI{3}{\electronvolt}$). Commensurate $\boldsymbol{q}$-points are indicated by tick labels. Negative frequencies indicate the imaginary modes. (c) Four distortion types of an octahedron. Arrows indicate the movement of the atoms. Stationary atoms are anchored by black circles. (d) Collective motion of atoms for the three imaginary modes indicated in (a). (e) The double-well potentials of the three imaginary modes. The corresponding displacements of the O atoms are indicated in (c).} \label{fig:pho3} \end{figure*} The double-well potentials corresponding to the imaginary modes are shown in Fig.~\ref{fig:pho3}(e) (black for HSE06). They reveal that the minimum in energy is reached already at a sub-ångström level for any of the three imaginary modes. It can also be observed that the potential wells are rather shallow (a few tens of meV/f.u., which translates to a few meV/atom) such that the O anions should be able to overcome the potential barrier by thermal energy already at a low temperature. As shown later, AIMD simulations do confirm this statement. For the other simulations in this study (AIMD, vacancy-ordered structure), HSE06 can hardly be used due to its high computational cost, a fact that motivates the application of the computationally cheaper DFT+$U$ approach. In the DFT$+U$ method, the dynamic stability of the Ba--Fe--O perovskites depends on the input parameter $U_\mathrm{eff}$, as discussed in Sec.~\ref{sec:intro}. By comparing calculated and experimental oxidation energies, Wang~\textit{et al.}~\cite{Wang2006} recommended $U_{\mathrm{eff}}\approx\SI{4}{\electronvolt}$, which, however, is not necessarily suitable for dynamic stability investigations. In the present study, the $U_{\textrm{eff}}$ parameter has been calibrated by the HSE06 results just discussed.
A range of $U_{\textrm{eff}}$ values was tested with a focus on the phonon dispersion of the FM cubic structure (results are shown in the Appendix). The best match between the two methods is obtained with $U_{\textrm{eff}} = \SI{3}{\electronvolt}$, which is close to the value used by Wollstadt \textit{et al.}~\cite{Wollstadt2021}. A comparison between Figs.~\ref{fig:pho3}(a) and~\ref{fig:pho3}(b) exemplifies the good agreement for the phonon dispersion. The qualitative behavior of the imaginary branches is similar, with quantitative differences of about \SI{1}{\THz}. The reasonable agreement between the potential energies [Fig.~\ref{fig:pho3}(e) red~vs.~black] further supports the usage of $U_{\textrm{eff}} = \SI{3}{\electronvolt}$. DFT$+U$ predicts a similar width but only about half the depth of the potential wells as compared with HSE06. Since the absolute energy difference at the lowest point of the potential well is quite small (less than \SI{50}{\milli\electronvolt\per{f.u.}}), the DFT$+U$ method with $U_{\textrm{eff}} = \SI{3}{\electronvolt}$ is considered acceptable for the dynamic stability analysis. To investigate the dynamic stability at elevated temperatures, AIMD simulations have been performed from \SI{2}{\kelvin} up to \SI{1500}{\kelvin} for the FM cubic BaFeO$_{3}$. At low temperatures, large displacements are observed for the O anions, as compared to the Ba or Fe cations, which confirms the dynamic instability. The instability falls into the imaginary-mode regime identified from the phonon calculations, in which only O atoms show displacements [Figs.~\ref{fig:pho3}(c) and~\ref{fig:pho3}(d)].
Further relaxation (with fixed cell shape and volume) of the low-temperature distorted structure lowers the energy to \SI{-28.5}{\milli\electronvolt\per{f.u.}} relative to the ideal cubic structure, which coincides with the energy minimum of the potential well of the $\boldsymbol{e}_{2}^{\textrm{R}}$ mode [red lines in Fig.~\ref{fig:pho3}(e), cf.~\SI{-26.9}{\milli\electronvolt\per{f.u.}} for $\boldsymbol{e}_{1}^{\textrm{R}}$ and \SI{-11.2}{\milli\electronvolt\per{f.u.}} for $\boldsymbol{e}_{1}^{\textrm{M}}$]. Since all O atoms within the cubic structure are symmetrically equivalent, one specific O atom is used for demonstration, as depicted in Fig.~\ref{fig:cubmd}(a). The trajectories of the O anion at various temperatures are displayed in Fig.~\ref{fig:cubmd}(b). At \SI{2}{\kelvin}, the O anion is trapped at a displaced position in the $x$ direction at a distance of about \SI{0.1}{\angstrom}. The potential barrier is overcome at about \SI{30}{\kelvin}, and a relatively homogeneous distribution of the O anion is observed at \SI{130}{\kelvin} and at higher temperatures. \begin{figure*}[tbp] \centering \includegraphics[width=17.8cm]{fig/cubmd.pdf} \caption{AIMD simulation of the FM cubic BaFeO$_{3}$. (a) Indication of one specific O atom. (b) Trajectories of the O anion at elevated temperatures. (c) The change of the structural descriptor with temperature. The DFT$+U$ method with $U_{\textrm{eff}} = \SI{3}{\electronvolt}$ was used. } \label{fig:cubmd} \end{figure*} To quantitatively describe the transformation observed in AIMD, we utilize the structure descriptor $\Delta$ introduced in Sec.~\ref{sec:strdes}. Figure~\ref{fig:cubmd}(c) shows the change of $\Delta$ as a function of temperature. A sharp decrease of $\Delta$ is observed just below \SI{130}{\kelvin}, while it remains almost constant at higher temperatures.
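The reported well parameters can be connected by the simplest even-symmetry double-well model. The quartic form and its coefficients below are our own illustration (not fitted values from this work), chosen only so that the analytic minimum reproduces a depth of \SI{-28.5}{\milli\electronvolt\per{f.u.}} at a displacement of about \SI{0.1}{\angstrom}:

```python
import math

# Minimal quartic double-well sketch, E(u) = A*u**2 + B*u**4 with A < 0, B > 0.
# The coefficients are hypothetical illustrations, chosen to place the well
# minimum at -28.5 meV/f.u. and a displacement of 0.1 Angstrom; they are not
# fitted values from the calculations reported here.
A = -5.7e3   # meV/f.u. per Angstrom^2 (assumed)
B = 2.85e5   # meV/f.u. per Angstrom^4 (assumed)

u_min = math.sqrt(-A / (2 * B))   # displacement at the well minimum
depth = -A**2 / (4 * B)           # E(u_min), the well depth

print(f"u_min = {u_min:.2f} Angstrom, depth = {depth:.1f} meV/f.u.")
```

The closed forms $u_{\min}=\sqrt{-A/2B}$ and $E(u_{\min})=-A^{2}/4B$ follow directly from setting $dE/du=0$.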
The AIMD simulations thus predict that the FM cubic structure is stabilized at about \SI{130}{\kelvin} due to vibrational entropy. The local magnetic moment for the ideal cubic structure at \SI{0}{\kelvin} is $\SI{3.7}{\mu_{\textrm{B}}}$ per Fe ion. For all simulated temperatures, the FM spin state remains unchanged. \subsection{\texorpdfstring{Monoclinic BaFeO\textsubscript{2.67}}{Monoclinic BaFeO2.67}} \label{sec:mon} The phonon dispersion of the G-AFM monoclinic BaFeO$_{2.67}$ structure with the symmetry of $P2_1/m$ [Fig.~\ref{fig:trans}(f)], obtained from the construction process described in Sec.~\ref{sec:str}, has been calculated at \SI{0}{\kelvin} with the DFT$+U$ method ($U_{\textrm{eff}} = \SI{3}{\electronvolt}$). Figure~\ref{fig:pho2.67}(a) shows the frequencies of the first few eigenmodes at the $\Gamma$ point (the only commensurate $\boldsymbol{q}$-point) in ascending order. Seven imaginary modes are found, which clearly reveals the dynamic instability of the $P2_1/m$ monoclinic structure at \SI{0}{\kelvin}. Note that the relaxation applied during the construction process cannot ``remove'' the dynamic instability observed in Fig.~\ref{fig:pho2.67}(a) due to symmetry constraints. The monoclinic structure after the relaxation corresponds to a saddle point on the potential energy surface, and a vibrational mode analysis as performed here is necessary to detect the dynamic instability. The analysis of the phonon eigenvectors shows that the first four imaginary modes [red shaded in Fig.~\ref{fig:pho2.67}(a)] mainly correspond to displacements of several specific O anions. The heavier Ba and Fe cations are instead involved in the other three imaginary modes at less negative frequencies (about~\SI{-3}{\THz}). Since the lighter O atoms move faster than the Ba and Fe atoms, it seems reasonable to assume that the monoclinic structure is destabilized along a combination of the lowest four imaginary modes as the temperature is lowered.
This assumption is indeed confirmed by the AIMD simulations (discussed in more detail below), which reveal that over $90\%$ of the low-temperature distortion is contributed by these four lowest imaginary modes. We can therefore also deduce that displacements along the lowest four imaginary modes stabilize the other three imaginary modes involving Ba and Fe atom displacements through phonon--phonon interactions. \begin{figure*}[tbp] \centering \includegraphics[width=17.8cm]{fig/pho2.67.pdf} \caption{Dynamic instability of the monoclinic structure at \SI{0}{\kelvin}. (a) Frequencies of the first few eigenmodes at the commensurate $\Gamma$ point for the G-AFM monoclinic structure. The first four imaginary modes (highlighted in red) are of relevance for the \SI{0}{\kelvin} dynamic instability. (b) The double-well potentials of these modes. The corresponding displacements of the O atoms are indicated in (d). (c) Projections of the optimized monoclinic structure [Fig.~\ref{fig:trans}(f)] onto the $yz$ and $xz$ planes. Only the four highlighted tetrahedra and the O atoms forming them are displayed. (d) The two distortion types of a tetrahedron. Arrows indicate the movement of the atoms. Stationary atoms are anchored by black circles. (e) Collective motion of atoms for the four highlighted imaginary modes. Shearing of the simulation cell during the structural optimization process [Fig.~\ref{fig:trans}(f)] is indicated.} \label{fig:pho2.67} \end{figure*} The four lowest imaginary modes are visualized in real space in Fig.~\ref{fig:pho2.67}(c)--(e). Only the relevant parts of the monoclinic structure are displayed, specifically the four tetrahedra with their corresponding O atoms. These are the essential components describing the low-temperature distortion. The four lowest imaginary modes can be built up from two types of symmetrically related deformations of the tetrahedra [Fig.~\ref{fig:pho2.67}(d)].
We recall that the tetrahedra are a result of the introduction of the ordered vacancies and the subsequent relaxation [Figs.~\ref{fig:trans}(d)--(f)]. The ideal monoclinic BaFeO$_{2.67}$ belongs to the space group $P2_{1}/m$ (No.~11). During the transformation of a tetrahedron, the O atom at the Wyckoff site $2e$ is displaced by $a_{n}^{\Gamma}$ along the $[010]_{\textrm{mon}}$ direction, which breaks the reflection symmetry with respect to the $(010)_\mathrm{mon}$ plane and thus lowers the symmetry from $P2_1/m$ to, e.g., $P2_1$. The two O atoms of the tetrahedron located at the Wyckoff site $4f$ move along the $[100]_{\textrm{mon}}$ direction, each by the same distance $b_{n}^{\Gamma}$ but in opposite directions. Their positions are not restricted by the space group. No quantitative relationship between $a_{n}^{\Gamma}$ and $b_{n}^{\Gamma}$ can be derived from the analysis of the imaginary phonon modes. Figure~\ref{fig:pho2.67}(b) shows the potential energy along the four lowest imaginary modes. The double-well potentials affirm the dynamic instability of the ideal monoclinic structure at \SI{0}{\kelvin}. These potential wells are shallower than the ones of the cubic structure [Fig.~\ref{fig:pho3}(e)], which, however, does not immediately mean that the monoclinic structure is more easily stabilized at elevated temperatures. The true local minimum of the distorted monoclinic structure is not captured by displacements along single modes. Further relaxation of the low-temperature distorted structure (obtained from AIMD, which includes a combination of the imaginary modes) gives a lower energy minimum of \SI{-16.1}{\milli\electronvolt\per{f.u.}} at larger O atom displacements (about $\SI{0.2}{\angstrom}$). Additionally, the energy scales of the cubic and the monoclinic structures are not directly comparable because of the different formula units.
To properly capture the low-temperature displacive transformation and the corresponding dynamic stabilization at elevated temperatures, AIMD simulations have been performed from \SI{2}{\kelvin} up to \SI{1500}{\kelvin} for G-AFM BaFeO$_{2.67}$. We focus on one of the O atoms involved in the low-temperature transformation for demonstration [Fig.~\ref{fig:monmd}(a)]. Figure~\ref{fig:monmd}(b) shows the trajectories of this O atom at various temperatures. At \SI{2}{\kelvin}, the O atom is trapped in a displaced position in the $y$ direction at a distance of about \SI{0.2}{\angstrom}, which once more substantiates the dynamic instability of the G-AFM monoclinic structure at low temperatures. At \SI{50}{\kelvin}, the O atom is able to pass the barrier to the other side within the available simulation time. As the temperature further increases, the O atom can frequently cross the barrier until the trajectory becomes homogeneously spread over a larger region (at about \SI{130}{\kelvin}), with no more obvious signs of the displacement (i.e., the average position of the O anion gradually shifts to the ideal position). It is noteworthy that for temperatures \SI{130}{\kelvin} and higher the trajectories become asymmetric along the $z$ axis (which is perpendicular to the double-well potential along $y$), with an increased probability to find the O anion at positive $z$ displacements. This finding indicates that the potential energy hypersurface is asymmetric along the $z$ direction. To quantify the transformation, we again utilize the structure descriptor $\Delta$ extracted from the AIMD simulations. Figure~\ref{fig:monmd}(c) reveals a similar temperature dependence of $\Delta$ as observed for the cubic structure. In particular, the original $P2_{1}/m$ monoclinic structure is stabilized at about \SI{130}{\kelvin} due to vibrational entropy.
For the ideal monoclinic structure at \SI{0}{\kelvin}, the local magnetic moment for the tetrahedrally coordinated Fe ion is $\SI{3.6}{\mu_{\textrm{B}}}$, and those for the other two types of Fe ions are $\SI{3.9}{\mu_{\textrm{B}}}$ and $\SI{4.0}{\mu_{\textrm{B}}}$. The G-AFM state remains stable at all simulated temperatures. \begin{figure*}[tbp] \centering \includegraphics[width=17.8cm]{fig/monmd.pdf} \caption{AIMD simulation of the G-AFM vacancy-ordered monoclinic BaFeO$_{2.67}$. (a) Indication of one specific O atom. (b) Trajectories of the O anion at elevated temperatures. (c) The change of the structural descriptor with temperature. The DFT$+U$ method with $U_{\textrm{eff}} = \SI{3}{\electronvolt}$ was used. } \label{fig:monmd} \end{figure*} \section{Discussion} \label{sec:dis} The dynamic instability of the ideal monoclinic BaFeO$_{2.67}$ structure is inherently linked to the ordered vacancies. For the cubic BaFeO$_{3}$ structure, in contrast, all O atoms are symmetrically equivalent and thus contribute equally to the dynamic instability. Due to the formation of the ordered vacancies, four of the twelve Fe coordination polyhedra change from octahedra to tetrahedra, and the site symmetries of the O atoms are modified. For the monoclinic structure, the O atoms near the ordered vacancies mainly contribute to the dynamic instability (Sec.~\ref{sec:mon}). This may be intuited by considering that the O atoms near the ordered vacancies can move more freely than other atoms. The instability may also be related to the Jahn--Teller effect on the Fe$^{4+}$, which was assigned to the tetrahedral site in the previous study~\cite{Wollstadt2021}. The ordered vacancies also induce additional anharmonicity for the investigated perovskites. Of course, both the vacancy-free cubic and the vacancy-ordered monoclinic structures are inherently anharmonic along the imaginary modes.
As clarified by the double-well potentials [Figs.~\ref{fig:pho3}(e) and \ref{fig:pho2.67}(b)], higher-order polynomials with even symmetry are required to stabilize the potential energies. Consistently, the trajectories at temperatures above the transition temperature (about \SI{130}{\kelvin}) show an even-symmetric distribution along the symmetry-breaking direction ($x$ in Fig.~\ref{fig:cubmd} and $y$ in Fig.~\ref{fig:monmd}). However, for the vacancy-ordered monoclinic structure an additional anharmonicity in the direction perpendicular to the symmetry-breaking direction is observed ($z$ direction in Fig.~\ref{fig:monmd}). The trajectory distribution along this perpendicular direction is asymmetric, i.e., the O anion prefers on average to be located at positive $z$ displacements. Thus, the creation of ordered vacancies induces an asymmetric local effective interatomic potential for some O atoms, specifically along the direction perpendicular to the symmetry-breaking direction. With the DFT$+U$ method, both the cubic and the monoclinic structures are predicted to be stabilized at about \SI{130}{\kelvin}. Based on the comparison of the double-well potentials [Fig.~\ref{fig:pho3}(e)], in which deeper potential wells are found for HSE06, a stabilization temperature higher than \SI{130}{\kelvin} is expected for both structures from AIMD with HSE06. In contrast, quantum fluctuations, which are not included in AIMD, may decrease the stabilization temperature. In the case of SrTiO$_{3}$, for which displacing along an antiferrodistortive mode was shown to produce an energy decrease of about $\SI{10}{\milli\electronvolt\per{f.u.}}$~\cite{Wahl2008} (depending on the exchange-correlation functional), the transition temperature decreases from \SI{130}{\kelvin} to \SI{110}{\kelvin} due to the quantum fluctuations~\cite{Zhong1996}.
\section{Conclusions} \label{sec:conclusions} The structures of perovskites with ordered vacancies are much more complex than their vacancy-free cubic counterparts. This complicates an accurate determination of the exact arrangement of atoms. A particularly intricate aspect is the interplay of the ordered vacancies with the dynamic (in)stability of perovskites. In order to gain information on this interplay, we have investigated the dynamic stability of the vacancy-free cubic BaFeO$_{3}$ ($Pm\bar{3}m$) and the vacancy-ordered monoclinic BaFeO$_{2.67}$ ($P2_1/m$) structures with AIMD in this study. Our results reveal that the ideal monoclinic structure for vacancy-ordered BaFeO$_{2.67}$ is---in contrast to previous expectations~\cite{Wollstadt2021}---dynamically unstable at \SI{0}{\kelvin}. As temperature increases, the ideal monoclinic structure is dynamically stabilized at about \SI{130}{\kelvin}. Interestingly, the ordered vacancies do not significantly alter the critical temperature at which Ba--Fe--O perovskites are dynamically stabilized. The calculated critical temperature is consistent with the dynamic stability requirement in the temperature range at which the high-symmetry phases are experimentally observed (about \SI{300}{\kelvin} to \SI{700}{\kelvin})~\cite{Wollstadt2021}.
From a broader perspective, similar results, i.e., dynamical instability at \SI{0}{\kelvin} and stabilization at a relatively low temperature, may also be expected for the vacancy-ordered structures proposed for other perovskite-type phases composed of Ba--Fe--O or even other elements. Such a generalization implies that the dynamic stability is \textit{not} a critical issue for the determination of vacancy orderings in perovskites. Additionally, we have found that strong anharmonicity is induced by the ordered vacancies for the monoclinic structure, along a direction that is perpendicular to the symmetry-breaking direction. This can result in characteristic properties of the material, e.g., a small diffusion barrier for O atoms, which may contribute to fast ionic diffusion. While the origin of fast ionic diffusion remains an interesting open question~\cite{Wollstadt2021a}, our results provide hints for a possible atomistic mechanism.
\section{Two-line expression for $I_{l}(r)$} \label{sec:integrating} KT give their $S_{2n+1}(\tilde{r})$ (not quite equal to ours, see Sec.~\ref{sec:checks-comparisons}) in their Eqs.~(29--36) in terms of a quadruple sum of rational polynomials and the $\arctan$ function. We find two-line expressions for $I_{l}(r)$ in terms of special functions: \begin{widetext} \begin{align} \label{eq:I_l-using-Q} I_{l}(r) ={}& -\frac{(l+1)}{24(a^{2}+r^{2})^{4}} \left\{\frac{r}{a} \left[a^4 (l^{3}-l^{2}-16l+4) +2 a^2 r^{2} \left(l^2 (l+3)-16\right) + r^{4} (l+2)^2 (l+3)\right] \left[Q_l\left(+{\textstyle\frac{ir}{a}} \right)+Q_l\left(-{\textstyle\frac{ir}{a}}\right) \right]\right.\nonumber\\ &\qquad\left.{}+2 \left[a^4 \left(l^2+l-2\right)-2 a^2 r^{2} \left(l^2+l-8\right)-3 r^{4} \left(l^2+l+2\right)\right] \frac{1}{i} \left[Q_{l+1}\left(+{\textstyle\frac{ir}{a}}\right)-Q_{l+1}\left(-{\textstyle\frac{ir}{a}}\right)\right]\right\}\,, \end{align} or equivalently, \begin{align} \label{eq:I_l-using-F} I_{l}(r) ={}& \frac{ \varepsilon(l)\sqrt{\pi } \Gamma (l+2) \, (a/r)^{l}} {3\, (2)^{l+3} \left(a^2+r^2\right)^4 \Gamma (l+\frac{3}{2})} \left\{ \left[a^4 (l^{3}-l^{2}-16l+4) +2 a^2 r^{2} \left(l^2 (l+3)-16\right) + r^{4} (l+2)^2 (l+3)\right] F(\textstyle\frac{l+1}{2},\frac{l+2}{2};l+\frac{3}{2};-\frac{a^{2}}{r^{2}}) \right. \nonumber\\ &\left. -\frac{2(l+1)a^{2}}{(2l+3)r^{2}} \left[a^4 \left(l^2+l-2\right)-2 a^2 r^{2} \left(l^2+l-8\right)-3 r^{4} \left(l^2+l+2\right)\right] F(\textstyle\frac{l+2}{2},\frac{l+3}{2};l+\frac{5}{2};-\frac{a^{2}}{r^{2}}) \right\} \end{align} \end{widetext} where $\varepsilon(l)=+1$ for $l=1,5,9,\ldots$, $\varepsilon(l)=-1$ for $l=3,7,11,\ldots$, and $\varepsilon(l)=0$ for even $l$. Below we describe how to find these expressions.
We make use of the identity% \footnote{This identity is correct in the fourth edition~\cite{1965tisp.book.....G} but incorrect in the seventh edition~\cite{2007tisp.book.....G}. I did not have access to other editions to check where the error was made.} \begin{equation} \begin{split} \int_{-1}^{+1} c^k (z-c)^{-1} (1-c^2)^{m/2} P_l^m(c) dc \\ = (+2) (z^2-1)^{m/2} Q_l^m(z) z^k \end{split} \end{equation} where $m\le l$, $k=0,1,\ldots,l-m$, and $z$ is in the complex plane with a cut along $(-1,+1)$ on the real axis. To use this identity, we set $m=k=0$: \begin{equation} \label{eq:Pl-identity} \int_{-1}^{+1} \frac{P_l(c)}{z-c} dc = +2 Q_l(z) \end{equation} with the same restriction on $z$ as above. To get Eq.~\eqref{eq:I-l-def} into a form where Eq.~\eqref{eq:Pl-identity} may be applied, use a complex partial fractions decomposition for the rational polynomial (i.e.~the factor $r^{2}+a^{2}c^{2}$ in the denominator is irreducible over $\mathbb{R}$ but reducible over $\mathbb{C}$). The decomposition is \begin{multline} \frac{a c r (3r^{2}-a^{2}c^{2}) (r^{2}-3a^{2}c^{2})}{(r^{2}+a^{2}c^{2})^{5}} = \frac{r}{2 (a c - i r)^5} + \frac{r}{2 (a c + i r)^5} \\ - \frac{i}{4 (a c - i r)^4} + \frac{i}{4 (a c + i r)^4} \,. \end{multline} Now the original integral $I_{l}$ has been converted to integrals of the form $\int P_{l}(c)/(ac\pm ir)^{n} dc$ where $n=4,5$. The power $n$ may be reduced through integration by parts, i.e.~integrating $(ac\pm ir)^{-n} dc$ while differentiating $P_{l}(c)$. Performing a partial fractions decomposition again creates two types of terms. First, terms of the form $\int P_{l'}(c)/(ac\pm ir)^{n'} dc$ where $n'=1,2,\ldots, n-1$ and $l' = l, l+1$, via~\cite{NIST:DLMF} \begin{equation} (1-x^{2})\frac{dP^{\mu}_{\nu}(x)}{dx} =(\mu-\nu-1)P^{\mu}_{\nu+1}(x)+(\nu+1)xP^{\mu}_{\nu}(x)\,. \end{equation} Second, terms of the form $\int P_{l'}(c)/(c\pm 1) dc$.
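Both ingredients just used, the complex partial-fractions decomposition and the DLMF derivative identity (specialized to $\mu=0$), lend themselves to quick numerical spot-checks. The sketch below (sampling ranges and tolerances are our choices, not part of the derivation) verifies them at random points:

```python
import random
import numpy as np
from numpy.polynomial.legendre import Legendre

random.seed(0)

# 1) Spot-check the complex partial-fractions decomposition of the real
#    rational factor of the integrand at random (a, r, c).
for _ in range(100):
    a = random.uniform(0.5, 2.0)
    r = random.uniform(0.5, 2.0)
    c = random.uniform(-1.0, 1.0)
    lhs = a * c * r * (3 * r**2 - a**2 * c**2) * (r**2 - 3 * a**2 * c**2) \
        / (r**2 + a**2 * c**2)**5
    rhs = (r / (2 * (a * c - 1j * r)**5) + r / (2 * (a * c + 1j * r)**5)
           - 1j / (4 * (a * c - 1j * r)**4) + 1j / (4 * (a * c + 1j * r)**4))
    assert abs(lhs - rhs) < 1e-9

# 2) Spot-check the derivative identity for mu = 0:
#    (1 - x^2) P_n'(x) = -(n + 1) P_{n+1}(x) + (n + 1) x P_n(x).
x = np.linspace(-1, 1, 201)
for n in range(1, 8):
    lhs = (1 - x**2) * Legendre.basis(n).deriv()(x)
    rhs = -(n + 1) * Legendre.basis(n + 1)(x) + (n + 1) * x * Legendre.basis(n)(x)
    assert np.allclose(lhs, rhs)

print("identities verified")
```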
The former terms with $n'=1$ may be evaluated directly with Eq.~\eqref{eq:Pl-identity} and the other $n'$ may be repeatedly integrated by parts as just described. The remaining terms of the form $\int P_{l'}(c)/(c\pm 1) dc$ can all be combined into integrals of the form \begin{equation} \int \frac{P_{l'-1}(c)-P_{l'+1}(c)}{1-c^2}dc\,. \end{equation} Here the integrand is subject to the identity \begin{equation} P_{n-1}(x)-P_{n+1}(x)=\frac{(2 n+1) \left(1-x^2\right) P_n'(x)}{n(n+1)} \end{equation} which immediately yields \begin{equation} \int_{a}^{b} \frac{P_{l'-1}(c)-P_{l'+1}(c)}{1-c^2}dc = \frac{(2 l'+1)}{l'(l'+1)}\left[P_{l'}(b)-P_{l'}(a)\right]\,. \end{equation} Applying these identities allows us to integrate $I_{l}(r)$ and gives Eq.~\eqref{eq:I_l-using-Q}. Though the argument of $Q_{l}$ is purely imaginary in Eq.~\eqref{eq:I_l-using-Q}, the combinations $(Q_{l}(ix)+Q_{l}(-ix))$ and $\frac{1}{i}(Q_{l}(+ix)-Q_{l}(-ix))$ are purely real. This can be seen with the identity~\cite{NIST:DLMF} \begin{equation} Q_{l}(z) = \frac{\sqrt{\pi}\Gamma(l+1)}{(2z)^{l+1}\Gamma(l+\frac{3}{2})} F\left(\frac{l+1}{2},\frac{l+2}{2};l+\frac{3}{2};\frac{1}{z^{2}}\right)\,. \end{equation} Using this identity gives Eq.~\eqref{eq:I_l-using-F} which is manifestly real. \section{Checks and comparisons} \label{sec:checks-comparisons} For any given $l$, typical computer algebra systems (such as \textsc{Mathematica}) can perform the explicit integral $I_{l}(r)$, since its integrand is nothing but a rational polynomial function. We have checked that Eq.~\eqref{eq:I_l-using-F} agrees with the explicit evaluation of these integrals for a large number of values of $l$. We have also compared our expressions (given in Appendix~\ref{sec:source-moments-small-l}) with those given in KT. We have verified the relationship \begin{equation} S_{l}^{\text{KT}} = \frac{2l+1}{2} I_{l} \end{equation} where $S_{l}^{\text{KT}}$ are the expressions given in Appendix A of KT.
This suggests that KT have dropped the factor of $96 C$ (they scale all dimensional quantities by $M$). Their expressions should be multiplied by this factor, which they take as $3/2\pi$. \newpage \acknowledgments The author would like to acknowledge Barry Wardell for helpful discussions. LCS acknowledges that support for this work was provided by the National Aeronautics and Space Administration through Einstein Postdoctoral Fellowship Award Number PF2-130101 issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics and Space Administration under contract NAS8-03060. \newpage
\section{Introduction} Consider the following abstract game which proceeds as a sequence of $T$ rounds. In each round $t$, a player has to choose a subset $S_t$ from a universe $U$ of $n$ objects. Without loss of generality, assume $U=\{1,2,\dots,n\}=[n]$. Each object $i \in U$ has an associated loss $c_{t,i}$, which is unknown to the player and may be chosen by an adversary. On choosing $S_t$, the player incurs the cost $c_t(S_t) = \sum_{i \in S_t} c_{t,i}$. In addition, the player receives some feedback about the costs of this round. The goal of the player is to choose the subsets such that the total cost incurred over all rounds is close to the total cost of the best subset in hindsight. This difference in costs is called the \textit{regret} of the player. Formally, regret is defined as: $$R_T = \sum_{t=1}^T c_t(S_t) - \min_{S \subseteq U} \sum_{t=1}^T c_t(S)$$ We can reformulate the problem as follows. The $2^n$ subsets of $U$ can be mapped to the vertices of the $\{0,1\}^n$ hypercube. The vertex corresponding to the set $S$ is represented by its characteristic vector $X(S) = \sum_{i=1}^n 1\{i \in S\} e_i$. From now on, we will work with the hypercube instead of sets and use losses $l_{t,i}$ instead of costs. In each round, the player chooses $X_t \in \{0,1\}^n$. The loss vector $l_{t}$ is chosen by an adversary and is unknown to the player. The loss of choosing $X_t$ is $X_t^\top l_t$. The player receives some feedback about the loss vector. The goal is to minimize regret, which is now defined as: $$R_T = \sum_{t=1}^T X_t^\top l_t - \min_{X \in \{0,1\}^n}\sum_{t=1}^T X^\top l_t$$ This is the \textit{Online Linear Optimization (OLO)} problem on the hypercube. As the loss vector $l_t$ can be set by an adversary, the player has to use some randomization in its decision process in order to avoid being foiled by the adversary.
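Because the decision set is the whole cube, the benchmark $\min_{X \in \{0,1\}^n}\sum_{t} X^\top l_t$ separates across coordinates: the best fixed vertex keeps exactly the coordinates whose cumulative loss is negative. A small brute-force sketch of this observation (variable names and parameters are ours):

```python
import itertools
import random

random.seed(3)
n, T = 4, 6
losses = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(T)]
G = [sum(l[i] for l in losses) for i in range(n)]   # cumulative loss vector

# The objective X . G is linear and the cube is a product set, so the minimum
# is attained coordinate-wise: take x_i = 1 exactly when G_i < 0.
brute = min(sum(x * g for x, g in zip(X, G))
            for X in itertools.product([0, 1], repeat=n))
closed_form = sum(g for g in G if g < 0)
assert abs(brute - closed_form) < 1e-12

print("best fixed vertex in hindsight:", closed_form)
```

Note that this only makes the *offline* benchmark easy; the online problem against an adversary remains nontrivial.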
At each round $t=1,2,\dots,T$, the player chooses an action $X_t$ from the decision set $\{0,1\}^n$, using some internal randomization. Simultaneously, the adversary chooses a loss vector $l_t$, without access to the internal randomization of the player. Since the player's strategy is randomized and the adversary could be adaptive, we consider the expected regret of the player as a measure of the player's performance. Here the expectation is with respect to the internal randomization of the player and the adversary's randomization. We consider two kinds of feedback for the player. \begin{enumerate} \item \textit{Full Information setting:} At the end of each round $t$, the player observes the loss vector $l_t$. \item \textit{Bandit setting:} At the end of each round $t$, the player only observes the scalar loss $X_t^\top l_t$ incurred. \end{enumerate} In order to make quantifiable statements about the regret of the player, we need to restrict the loss vectors the adversary may choose. Here we assume that $\norm{l_t}_{\infty} \leq 1$ for all $t$, also known as the $L_\infty$ assumption. There are three major strategies for online optimization, which can be tailored to the problem structure and type of feedback. Although these can be shown to be equivalent to each other in some form, not all of them may be efficiently implementable. These strategies are: \begin{enumerate} \item Exponential Weights (EW)\cite{freund1997decision,littlestone1994weighted} \item Follow the Leader (FTL)\cite{kalai2005efficient} \item Online Mirror Descent (OMD) \cite{nemirovsky1983problem}. \end{enumerate} For problems of this nature, a commonly used EW-type algorithm is Exp2 \cite{audibert2011minimax, audibert2013regret, bubeck2012towards}. For the specific problem of Online Linear Optimization on the hypercube, it was previously unknown if the Exp2 algorithm can be efficiently implemented \cite{bubeck2012towards}.
Hence, previous works have resorted to using OMD algorithms for problems of this kind. The main reason for this is that Exp2 explicitly maintains a probability distribution on the decision set. In our case, the size of the decision set is $2^n$. So a straightforward implementation of Exp2 would need exponential time and space. \subsection{Our Contributions} We use the following key observation: in the case of linear losses, the probability distribution of Exp2 can be factorized as a product of $n$ Bernoulli distributions. Using this fact, we design an efficient polynomial-time algorithm called \textit{PolyExp} for sampling from and updating these distributions. We show that PolyExp is equivalent to Exp2. In addition, we show that PolyExp is equivalent to OMD with entropic regularization and Bernoulli sampling. This allows us to analyze PolyExp using the powerful analysis techniques of OMD. \begin{proposition}For the Online Linear Optimization problem on the $\{0,1\}^n$ Hypercube, Exp2, OMD with Entropic regularization and Bernoulli sampling, and PolyExp are equivalent. \end{proposition} This kind of equivalence is rare. To the best of our knowledge, the only other scenario where this equivalence holds is on the probability simplex for the so-called experts problem. In our paper, we focus on the $L_\infty$ assumption. Directly analyzing Exp2 gives regret bounds different from PolyExp. In fact, PolyExp's regret bounds are a factor of $\sqrt{n}$ better than Exp2's. These results are summarized by the table below.
\begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{$L_\infty$}\\ \hline & Full Information & Bandit \\ \hline Exp2 (direct analysis) & $O(n^{3/2} \sqrt{T})$ & $O(n^{2} \sqrt{T})$ \\ \hline PolyExp & $O(n \sqrt{T})$ & $O(n^{3/2} \sqrt{T})$ \\ \hline Lower bound & $\Omega(n \sqrt{T})$ & $\Omega(n^{3/2} \sqrt{T})$ \\ \hline \end{tabular} \end{center} However, since we show that Exp2 and PolyExp are equivalent, they must have the same regret bound. This implies an improvement on Exp2's regret bound. \begin{proposition}For the Online Linear Optimization problem on the $\{0,1\}^n$ Hypercube with $L_\infty$ adversarial losses, Exp2, OMD with Entropic regularization and Bernoulli sampling, and PolyExp have the following regret: \begin{enumerate} \item Full Information: $O(n\sqrt{T})$ \item Bandit: $O(n^{3/2} \sqrt{T})$. \end{enumerate} \end{proposition} We also show matching lower bounds, proving that these algorithms are optimal. \begin{proposition}For the Online Linear Optimization problem on the $\{0,1\}^n$ Hypercube with $L_\infty$ adversarial losses, the regret of any algorithm is at least: \begin{enumerate} \item Full Information: $\Omega\left(n\sqrt{T} \right)$ \item Bandit: $\Omega(n^{3/2} \sqrt{T})$. \end{enumerate} \end{proposition} Finally, in \cite{bubeck2012towards}, the authors state that it is not known if it is possible to sample from the exponential weights distribution in polynomial time for the $\{-1,+1\}^n$ hypercube. We show how to use PolyExp on $\{0,1\}^n$ for $\{-1,+1\}^n$, and we show that the regret of the resulting algorithm on $\{-1,+1\}^n$ is within a constant factor of the regret of the algorithm on $\{0,1\}^n$. Thus, we can use PolyExp to obtain a polynomial-time algorithm for the $\{-1,+1\}^n$ hypercube. We present the proofs of equivalence and regret of PolyExp within the main body of the paper. The remaining proofs are deferred to the appendix.
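Two of the claims above lend themselves to quick numerical sanity checks: the coordinate-wise factorization of the exponential-weights distribution that underlies PolyExp, and the affine map $Y = 2X - \mathbf{1}$ behind the reduction from $\{-1,+1\}^n$ to $\{0,1\}^n$ (which gives $Y^\top l = 2X^\top l - \mathbf{1}^\top l$, a decision-independent offset). The sketch below (helper names and parameters are ours, not the paper's exact construction) verifies both by brute force for a small $n$:

```python
import itertools
import math
import random

random.seed(1)
n, eta = 4, 0.3
L = [random.uniform(-1, 1) for _ in range(n)]   # cumulative loss vector

# 1) Exp2 with linear losses puts weight w(X) proportional to exp(-eta * X.L)
#    on each vertex; the exponent is linear in X, so the distribution
#    factorizes and each marginal is Bernoulli(sigmoid(-eta * L_i)).
weights = {X: math.exp(-eta * sum(x * l for x, l in zip(X, L)))
           for X in itertools.product([0, 1], repeat=n)}
Z = sum(weights.values())
exp2_marginals = [sum(w for X, w in weights.items() if X[i] == 1) / Z
                  for i in range(n)]
polyexp_marginals = [1 / (1 + math.exp(eta * L[i])) for i in range(n)]
assert all(abs(p - q) < 1e-12
           for p, q in zip(exp2_marginals, polyexp_marginals))

# 2) With Y = 2X - 1 we have Y.L = 2(X.L) - sum(L), so the optimal values
#    over the two cubes differ only by a factor of 2 and a fixed offset.
best_pm = min(sum(y * l for y, l in zip(Y, L))
              for Y in itertools.product([-1, 1], repeat=n))
best_01 = min(sum(x * l for x, l in zip(X, L))
              for X in itertools.product([0, 1], repeat=n))
assert abs(best_pm - (2 * best_01 - sum(L))) < 1e-12

print("factorization and reduction verified")
```

The first check is exactly why PolyExp needs only $n$ Bernoulli parameters instead of $2^n$ weights.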
\subsection{Relation to Previous Works} In previous works on OLO \cite{dani2008price, koolen2010hedging, audibert2011minimax, cesa2012combinatorial, bubeck2012towards, audibert2013regret}, the authors consider arbitrary subsets of $\{0,1\}^n$ as their decision set. This is also called Online Combinatorial Optimization. In our work, the decision set is the entire $\{0,1\}^n$ hypercube. Moreover, the assumptions on the adversarial losses are different. Most of the previous works use the $L_2$ assumption \cite{bubeck2012towards,dani2008price, cesa2012combinatorial} and some use the $L_\infty$ assumption \cite{koolen2010hedging,audibert2011minimax}. The Exp2 algorithm has been studied under various names, each with its own modifications and improvements. In its most basic form, it corresponds to the Hedge algorithm from \cite{freund1997decision} for full information. For combinatorial decision sets, it has been studied by \cite{koolen2010hedging} for full information. In the bandit case, several variants of Exp2 exist based on the exploration distribution used. These were studied in \cite{dani2008price, cesa2012combinatorial} and \cite{bubeck2012towards}. It has been proven in \cite{audibert2011minimax} that Exp2 is provably suboptimal for some decision sets and losses. Follow-the-Leader-style algorithms were introduced by \cite{kalai2005efficient} for the full information setting, and they can be extended to the bandit setting as well. Mirror-descent-style algorithms were introduced in \cite{nemirovsky1983problem}. For online learning, several works \cite{aber2009compe, koolen2010hedging, bubeck2012towards, audibert2013regret} consider OMD-style algorithms. Other algorithms such as Hedge, FTRL, etc.\ can be shown to be equivalent to OMD with the right regularization function. In fact, \cite{srebro2011universality} show that OMD can always achieve a nearly optimal regret guarantee for a general class of online learning problems.
Under the $L_\infty$ assumption, \cite{koolen2010hedging} and \cite{audibert2011minimax} present lower bounds that match ours. However, they prove that there exists a subset $S \subset \{0,1\}^n$ and a sequence of losses on $S$ such that the regret is at least some lower bound, so these results are not directly applicable in our case. We therefore derive lower bounds specific to the entire hypercube, showing that there exists a sequence of losses on $\{0,1\}^n$ such that the regret is at least some lower bound. We refer the readers to the books by \cite{cesa2006prediction}, \cite{bubeck2012regret}, \cite{shalev2012online}, \cite{hazan2016introduction} and lectures by \cite{rakhlin2009lecture}, \cite{bubeck2011introduction} for a comprehensive survey of online learning algorithms. \section{Algorithms, Equivalences and Regret} In this section, we describe and analyze the Exp2, OMD with Entropic regularization and Bernoulli Sampling, and PolyExp algorithms and prove their equivalence. \subsection{Exp2} \begin{figure}[h] \noindent\fbox{% \parbox{\textwidth}{% \textbf{Algorithm:} Exp2\\ \textbf{Parameters:} Learning Rate $\eta$\\ Let $w_1(X) = 1$ for all $X \in \{0,1\}^n$. For each round $t=1,2,\dots,T$: \begin{enumerate} \item Sample $X_t$ as below. Play $X_t$ and incur the loss $X_t^\top l_t$. \begin{enumerate} \item Full Information: $X_t \sim p_t(X) = \frac{w_t(X)}{\sum \limits_{Y \in \{0,1\}^n} w_t(Y)}$. \item Bandit: $X_t \sim q_t(X) = (1-\gamma)p_t(X) + \gamma \mu(X)$. Here $\mu$ is the exploration distribution. \end{enumerate} \item See Feedback and construct $\tilde{l}_t$. \begin{enumerate} \item Full Information: $\tilde{l}_t = l_t$.
\item Bandit: $\tilde{l}_{t} = P_t^{-1}X_t X_t^\top l_t$, where $P_t = \mathbb{E}_{X \sim q_t}[XX^\top]$ \end{enumerate} \item Update for all $X \in \{0,1\}^n$: $$w_{t+1}(X) = \exp(-\eta X^\top \tilde{l}_t)w_t(X)$$or equivalently$$w_{t+1}(X) = \exp(-\eta \sum_{\tau=1}^tX^\top \tilde{l}_\tau)$$ \end{enumerate} }% } \end{figure} In each setting, the loss vector used to update Exp2 must satisfy the condition $\mathbb{E}_{X_t}[\tilde{l}_t] = l_t$, and it can be verified that this holds in both the full information and bandit cases. In the bandit case, the estimator was first proposed by \cite{dani2008price}. Here, $\mu$ is the exploration distribution and $\gamma$ is the mixing coefficient. We use uniform exploration over $\{0,1\}^n$ as proposed in \cite{cesa2012combinatorial}. Exp2 has several computational drawbacks. First, it uses $2^n$ parameters to maintain the distribution $p_t$. Sampling from this distribution in step 1 and updating it in step 3 require exponential time. For the bandit setting, even computing $\tilde{l}_t$ requires exponential time. We state the following regret bounds obtained by analyzing Exp2 directly. The proofs are in the appendix. Later, we prove that these bounds can be improved. These regret bounds are under the $L_\infty$ assumption. \begin{restatable}{theorem}{ExpFReg} \label{Theorem1}In the full information setting, if $\eta = \sqrt{\frac{\log 2}{nT}}$, Exp2 attains the regret bound: $$\mathbb{E}[R_T] \leq 2 n^{3/2}\sqrt{T\log 2}$$ \end{restatable} \begin{restatable}{theorem}{ExpBanReg} In the bandit setting, if $\eta = \sqrt{\frac{\log 2}{9n^2T}}$ and $\gamma = 4n^2 \eta$, Exp2 with uniform exploration on $\{0,1\}^n$ attains the regret bound: $$\mathbb{E}[R_T] \leq 6 n^{2} \sqrt{T \log 2}$$ \end{restatable} \subsection{PolyExp} \begin{figure}[h] \noindent\fbox{% \parbox{\textwidth}{% \textbf{Algorithm:} PolyExp\\ \textbf{Parameters:} Learning Rate $\eta$\\ Let $x_{i,1} = 1/2$ for all $i \in [n]$.
For each round $t=1,2,\dots,T$: \begin{enumerate} \item Sample $X_t$ as below. Play $X_t$ and incur the loss $X_t^\top l_t$. \begin{enumerate} \item Full information: $X_{i,t} \sim Bernoulli(x_{i,t})$ \item Bandit: With probability $1-\gamma$ sample $X_{i,t} \sim Bernoulli(x_{i,t})$ and with probability $\gamma$ sample $X_t \sim \mu$ \end{enumerate} \item See Feedback and construct $\tilde{l}_t$ \begin{enumerate} \item Full information: $\tilde{l}_t = l_t$ \item Bandit: $\tilde{l}_{t} = P_t^{-1}X_t X_t^\top l_t$, where $P_t = (1-\gamma)\Sigma_t + \gamma \mathbb{E}_{X \sim \mu}[XX^\top]$. The matrix $\Sigma_t$ satisfies $\Sigma_t[i,j] = x_{i,t}x_{j,t}$ if $i\neq j$ and $\Sigma_t[i,i] = x_{i,t}$ for all $i,j \in [n]$ \end{enumerate} \item Update for all $i \in [n]$: $$x_{i,t+1} = \frac{x_{i,t}}{x_{i,t} + (1-x_{i,t})\exp(\eta \tilde{l}_{i,t})}$$or equivalently $$x_{i,t+1} = \frac{1}{1+\exp(\eta \sum_{\tau=1}^t \tilde{l}_{i,\tau})}$$ \end{enumerate} }% } \end{figure} To get a polynomial time algorithm, we replace the sampling and update steps with polynomial time operations. PolyExp uses $n$ parameters represented by the vector $x_t$. Each element of $x_t$ corresponds to the mean of a Bernoulli distribution. It uses the product of these Bernoulli distributions to sample $X_t$ and uses the update equation in step 3 to obtain $x_{t+1}$. In the bandit setting, we can sample $X_t$ by sampling from $\prod_{i=1}^n Bernoulli(x_{i,t})$ with probability $1-\gamma$ and sampling from $\mu$ with probability $\gamma$. As we use the uniform distribution over $\{0,1\}^n$ for exploration, this is equivalent to sampling from $\prod_{i=1}^n Bernoulli(1/2)$, so we can sample from $\mu$ in polynomial time. The matrix $P_t = \mathbb{E}_{X\sim q_t}[XX^\top] = (1-\gamma)\Sigma_t +\gamma \Sigma_\mu$. Here $\Sigma_t$ and $\Sigma_\mu$ are the second moment matrices $\mathbb{E}[XX^\top]$ when $X \sim \prod_{i=1}^n Bernoulli(x_{i,t})$ and $X \sim \prod_{i=1}^n Bernoulli(1/2)$ respectively.
It can be verified that $\Sigma_t[i,j] = x_{i,t}x_{j,t}$, $\Sigma_\mu[i,j] = 1/4$ if $i\neq j$ and $\Sigma_t[i,i] = x_{i,t}$, $\Sigma_\mu[i,i] = 1/2$ for all $i,j \in [n]$. So $P_t^{-1}$ can be computed in polynomial time. \subsection{Equivalence of Exp2 and PolyExp} We prove that running Exp2 is equivalent to running PolyExp. \begin{restatable}{theorem}{Equiv}Under linear losses $\tilde{l}_t$, Exp2 on $\{0,1\}^n$ is equivalent to PolyExp. At round $t$, the probability that PolyExp chooses $X$ is $\prod_{i=1}^n (x_{i,t})^{X_i} (1-x_{i,t})^{(1-X_i)}$ where $x_{i,t} = (1+\exp(\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau}))^{-1}$. This is equal to the probability of Exp2 choosing $X$ at round $t$, i.e.: $$\prod_{i=1}^n (x_{i,t})^{X_i} (1-x_{i,t})^{(1-X_i)} = \frac{\exp(-\eta \sum_{\tau=1}^{t-1}X^\top \tilde{l}_\tau)}{Z_t}$$ where $Z_t = \sum_{Y \in \{0,1\}^n}\exp(-\eta\sum_{\tau=1}^{t-1} Y^\top \tilde{l}_\tau)$. \end{restatable} At every round, the probability distribution $p_t$ in Exp2 is the same as the product of Bernoulli distributions in PolyExp. Lemma \ref{Lemma2} is crucial in proving the equivalence between the two algorithms. In a strict sense, Lemma \ref{Lemma2} holds only because our decision set is the entire $\{0,1\}^n$ hypercube. The vector $\tilde{l}_t$ computed by Exp2 and PolyExp will be the same. Hence, Exp2 and PolyExp are equivalent. Note that this equivalence holds for any sequence of losses as long as they are linear. \subsection{Online Mirror Descent} We present the OMD algorithm for linear losses on general finite decision sets. Our exposition is adapted from \cite{bubeck2012regret} and \cite{shalev2012online}. Let $\mathcal{X} \subset \mathbb{R}^n$ be an open convex set and $\mathcal{\bar{X}}$ be the closure of $\mathcal{X}$. Let $\mathcal{K} \subset \mathbb{R}^n$ be a finite decision set such that $\mathcal{\bar{X}}$ is the convex hull of $\mathcal{K}$. The following definitions will be useful in presenting the algorithm.
\begin{definition}\textbf{Legendre Function:} A continuous function $F: \mathcal{\bar{X}} \to \mathbb{R}$ is Legendre if \begin{enumerate} \item $F$ is strictly convex and has continuous partial derivatives on $\mathcal{X}$. \item $\lim \limits_{x \to \mathcal{\bar{X}} \setminus \mathcal{X}} \|\nabla F(x)\| = +\infty$ \end{enumerate} \end{definition} \begin{definition}\textbf{Legendre-Fenchel Conjugate:} Let $F:\mathcal{\bar{X}} \to \mathbb{R}$ be a Legendre function. The Legendre-Fenchel conjugate of $F$ is: $$F^\star(\theta) = \sup_{x \in \mathcal{X}} (x^\top \theta - F(x))$$ \end{definition} \begin{definition} \textbf{Bregman Divergence:} Let $F(x)$ be a Legendre function, the Bregman divergence $D_F:\mathcal{\bar{X}} \times \mathcal{X} \to \mathbb{R} $ is: $$D_F(x\|y) = F(x) - F(y) - \nabla F(y)^\top (x-y)$$ \end{definition} \begin{figure}[h] \noindent\fbox{% \parbox{\linewidth}{% \textbf{Algorithm:} Online Mirror Descent with Regularization $F(x)$\\ \textbf{Parameters:} Learning Rate $\eta$\\ Pick $x_1 = \arg \min \limits_{x \in \mathcal{\bar{X}}} F(x)$. For each round $t=1,2,\dots,T$: \begin{enumerate} \item Let $p_t$ be a distribution on $\mathcal{K}$ such that $\mathbb{E}_{X \sim p_t}[X] = x_t$. Sample $X_{t}$ as below and incur the loss $X_t^\top l_t$ \begin{enumerate} \item Full information: $X_{t} \sim p_t$ \item Bandit: With probability $1-\gamma$ sample $X_{t} \sim p_t$ and with probability $\gamma$ sample $X_t \sim \mu$. \end{enumerate} \item See Feedback and construct $\tilde{l}_t$ \begin{enumerate} \item Full information: $\tilde{l}_t = l_t$ \item Bandit: $\tilde{l}_{t} = P_t^{-1}X_t X_t^\top l_t$, where $P_t = (1-\gamma)\mathbb{E}_{X \sim p_t}[XX^\top] + \gamma \mathbb{E}_{X \sim \mu}[XX^\top]$.
\end{enumerate} \item Let $y_{t+1}$ satisfy: $y_{t+1} = \nabla F^\star(\nabla F(x_t)-\eta \tilde{l}_t)$ \item Update $x_{t+1} = \arg\min_{x \in \mathcal{\bar{X}}}D_F(x||y_{t+1})$ \end{enumerate} }% } \end{figure} \subsection{Equivalence of PolyExp and Online Mirror Descent} For our problem, $\mathcal{K} = \{0,1\}^n$, $\mathcal{\bar{X}} = [0,1]^n$ and $\mathcal{X} = (0,1)^n$. We use entropic regularization: $$F(x) = \sum_{i=1}^n x_i \log x_i + (1-x_i) \log (1-x_i)$$ This function is Legendre. The OMD algorithm does not specify the probability distribution $p_t$ that should be used for sampling. The only condition that needs to be met is $\mathbb{E}_{X \sim p_t}[X] = x_t$, i.e., $x_t$ should be expressed as a convex combination of the points of $\{0,1\}^n$, and the probability of picking $X$ is its coefficient in this convex decomposition of $x_t$. An easy way to achieve this is by using Bernoulli sampling as in PolyExp. Hence, we have the following equivalence theorem: \begin{restatable}{theorem}{EqOMD} Under linear losses $\tilde{l}_t$, OMD on $[0,1]^n$ with Entropic Regularization and Bernoulli Sampling is equivalent to PolyExp. The sampling procedure of PolyExp satisfies $\mathbb{E}[X_t] = x_t$. The update of OMD with Entropic Regularization is the same as PolyExp. \end{restatable} In the bandit case, if we use Bernoulli sampling, $\mathbb{E}_{X \sim p_t}[XX^\top] = \Sigma_t$. \subsection{Regret of PolyExp via OMD analysis} Since OMD and PolyExp are equivalent, we can use the standard analysis tools of OMD to derive a regret bound for PolyExp. These regret bounds are under the $L_\infty$ assumption.
\begin{restatable}{theorem}{RegPoly}\label{Theorem2} In the full information setting, if $\eta = \sqrt{\frac{\log 2}{T}}$, PolyExp attains the regret bound: $$\mathbb{E}[R_T] \leq 2n\sqrt{T\log 2}$$ \end{restatable} \begin{restatable}{theorem}{PExpBanReg} In the bandit setting, if $\eta = \sqrt{\frac{3\log 2}{8nT}}$ and $\gamma = 4n \eta$, PolyExp with uniform exploration on $\{0,1\}^n$ attains the regret bound: $$\mathbb{E}[R_T] \leq 4 n^{3/2} \sqrt{6T \log 2}$$ \end{restatable} We have shown that Exp2 on $\{0,1\}^n$ with linear losses is equivalent to PolyExp. We have also shown that PolyExp's regret bounds are tighter than the regret bounds that we were able to derive for Exp2. This naturally implies an improvement of Exp2's regret bounds, as Exp2 is equivalent to PolyExp and must attain the same regret. \section{Comparison of Exp2's and PolyExp's regret proofs} Consider the results we have shown so far. We proved that PolyExp and Exp2 on the hypercube are equivalent, so logically they should have the same regret bounds. But our proofs say that PolyExp's regret is a factor of $\sqrt{n}$ better than Exp2's regret. What is the reason for this apparent discrepancy? The answer lies in the choice of the learning rate $\eta$ and the application of the inequality $e^{-x} \leq 1-x+x^2$ in our proofs. This inequality is valid when $x \geq -1$. When analyzing Exp2, $x$ is $\eta X^\top l_t = \eta L_t(X)$. So, to satisfy the constraint $x \geq -1$ we enforce that $|\eta L_t(X)| \leq 1$. Since $|L_t(X)|\leq n$, this requires $\eta \leq 1/n$. This governs the optimal $\eta$ parameter that we are able to obtain using Exp2's proof technique. When analyzing PolyExp, $x$ is $\eta l_{t,i}$ and we enforce that $|\eta l_{t,i}| \leq 1$. Since we already assume $|l_{t,i}| \leq 1$, we get that $\eta \leq 1$. PolyExp's proof technique allows us to find a better $\eta$ and achieve a better regret bound.
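The equivalence discussed above can also be checked numerically for small $n$ by enumerating all $2^n$ vertices. The snippet below is an illustrative sanity check under synthetic losses, not part of the formal argument:

```python
import itertools

import numpy as np

# Compare Exp2's exponential-weights distribution over all 2^n vertices
# with PolyExp's product of Bernoulli marginals, for one cumulative loss.
n, eta = 4, 0.3
rng = np.random.default_rng(0)
L = rng.uniform(-1.0, 1.0, size=n)   # cumulative loss vector sum_tau l_tau

# Exp2: explicit O(2^n) representation.
verts = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
w = np.exp(-eta * verts @ L)
p_exp2 = w / w.sum()

# PolyExp: n Bernoulli means x_i = (1 + exp(eta * L_i))^{-1}.
x = 1.0 / (1.0 + np.exp(eta * L))
p_poly = np.prod(np.where(verts == 1.0, x, 1.0 - x), axis=1)

assert np.allclose(p_exp2, p_poly)   # the two distributions coincide
```

The assertion holds for any loss vector, reflecting the fact that the equivalence requires only linearity of the losses.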
\section{Lower bounds} We state the following lower bounds, which establish the least amount of regret that any algorithm must incur. The lower bounds match the upper bounds of PolyExp, proving that it is regret optimal. The proofs of the lower bounds can be found in the appendix. \begin{restatable}{theorem}{LBFull} For any learner there exists an adversary producing $L_{\infty} $ losses such that the expected regret in the full information setting is: \begin{equation*} \mathbb{E}\left[R_T \right] = \Omega\left(n \sqrt{T} \right). \end{equation*} \end{restatable} \begin{restatable}{theorem}{LBBandit} For any learner there exists an adversary producing $L_{\infty} $ losses such that the expected regret in the bandit setting is: \begin{equation*} \mathbb{E}\left[R_T \right] = \Omega\left(n^{3/2} \sqrt{T} \right). \end{equation*} \end{restatable} \section{$\{-1,+1\}^n$ Hypercube Case} Full information and bandit algorithms that work on $\{0,1\}^n$ can be modified to work on $\{-1,+1\}^n$. The general strategy is as follows: \begin{figure}[h] \noindent\fbox{% \parbox{\linewidth}{% \begin{enumerate} \item Sample $X_t \in \{0,1\}^n$, play $Z_t=2X_t - \textbf{1}$ and incur loss $Z_t^\top l_t$. \begin{enumerate} \item Full information: $X_t \sim p_t$ \item Bandit: $X_t \sim q_t = (1-\gamma)p_t + \gamma \mu$ \end{enumerate} \item See feedback and construct $\tilde{l}_t$ \begin{enumerate} \item Full information: $\tilde{l}_t = l_t$ \item Bandit: $\tilde{l}_t = P_t^{-1}Z_t{Z_t}^\top l_t$ where $P_t = \mathbb{E}_{X \sim q_t}[(2X-\textbf{1})(2X-\textbf{1})^\top]$ \end{enumerate} \item Update algorithm using $2\tilde{l}_t$ \end{enumerate} }% } \end{figure} \begin{restatable}{theorem}{Hypereq} Exp2 on $\{-1,+1\}^n$ using the sequence of losses $l_t$ is equivalent to PolyExp on $\{0,1\}^n$ using the sequence of losses $2\tilde{l}_t$. Moreover, the regret of Exp2 on $\{-1,+1\}^n$ will equal the regret of PolyExp using the losses $2\tilde{l}_t$.
\end{restatable} Hence, using the above strategy, PolyExp can be run in polynomial time on $\{-1,+1\}^n$, and since the losses are doubled its regret only changes by a constant factor. \section{Proofs} \subsection{Equivalence Proofs of PolyExp} \subsubsection{Equivalence to Exp2} \begin{lemma} \label{Lemma2} For any sequence of losses $\tilde{l}_t$, the following is true for all $t=1,2,\dots,T$: $$\prod_{i=1}^n(1+\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau})) = \sum_{Y \in \{0,1\}^n}\exp(-\eta\sum_{\tau=1}^{t-1} Y^\top \tilde{l}_\tau)$$ \end{lemma} \begin{proof} Consider $\prod_{i=1}^n(1+\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau}))$. It is a product of $n$ factors, each a sum of two terms: $1$ and $\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau})$. Expanding the product gives a sum of $2^n$ terms, each of which is a product of $n$ factors, where the $i$th factor is either $1$ (corresponding to $Y_i=0$) or $\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau})$ (corresponding to $Y_i=1$). So, \begin{align*} \prod_{i=1}^n(1+\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau})) &= \sum_{Y \in \{0,1\}^n} \prod_{i=1}^n \exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau})^{Y_i}\\ &= \sum_{Y \in \{0,1\}^n} \prod_{i=1}^n \exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau}{Y_i})\\ &=\sum_{Y \in \{0,1\}^n} \exp(-\eta \sum_{i=1}^n \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau}{Y_i})\\ &= \sum_{Y \in \{0,1\}^n}\exp(-\eta\sum_{\tau=1}^{t-1} Y^\top \tilde{l}_\tau) \end{align*} \end{proof} \Equiv* \begin{proof} The proof is via straightforward substitution of the expression for $x_{i,t}$ and an application of Lemma \ref{Lemma2}.
\begin{align*} &\prod_{i=1}^n (x_{i,t})^{X_i} (1-x_{i,t})^{(1-X_i)} = \prod_{i=1}^n \frac{\left(\exp(\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau}) \right)^{1-X_i}}{1+\exp(\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau})}\\ &= \prod_{i=1}^n \frac{\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau})^{X_i}}{1+\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau})} \end{align*} \begin{align*} &= \frac{\prod_{i=1}^n\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau})^{X_i} }{\prod_{i=1}^n(1+\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau}))}\\ &= \frac{\prod_{i=1}^n\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau}X_i)}{\prod_{i=1}^n(1+\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau}))}\\ &= \frac{\exp(-\eta\sum_{i=1}^n \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau}X_i)}{\prod_{i=1}^n(1+\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau}))}\\ &= \frac{\exp(-\eta\sum_{\tau=1}^{t-1} X^\top \tilde{l}_{\tau})}{\prod_{i=1}^n(1+\exp(-\eta \sum_{\tau=1}^{t-1} \tilde{l}_{i,\tau}))}\\ &= \frac{\exp(-\eta\sum_{\tau=1}^{t-1} X^\top \tilde{l}_{\tau})}{\sum_{Y \in \{0,1\}^n} \exp(-\eta \sum_{\tau=1}^{t-1} Y^\top \tilde{l}_\tau)} \end{align*} \end{proof} \subsubsection{Equivalence to OMD} \begin{lemma} \label{lemma4} The Fenchel Conjugate of $F(x) = \sum_{i=1}^n x_i \log x_i + (1-x_i) \log (1-x_i)$ is: $$F^\star(\theta) = \sum_{i=1}^n \log(1+\exp(\theta_i))$$ \end{lemma} \begin{proof}Differentiating $x^\top \theta - F(x)$ with respect to $x_i$ and equating to $0$: \begin{align*} \theta_i - \log x_i + \log (1-x_i) &= 0\\ \frac{x_i}{1-x_i} &= \exp(\theta_i)\\ x_i &= \frac{1}{1+\exp(-\theta_i)} \end{align*} Substituting this back in $x^\top \theta - F(x)$, we get $F^\star(\theta) = \sum_{i=1}^n \log(1+\exp(\theta_i))$. It is also straightforward to see that $\nabla F^\star(\theta)_i = (1+\exp(-\theta_i))^{-1}$. \end{proof} \EqOMD* \begin{proof} It is easy to see that $\mathbb{E}[X_{i,t}] = \Pr(X_{i,t}=1) = x_{i,t}$. Hence $\mathbb{E}[X_t] = x_t$. The update equation is $y_{t+1} = \nabla F^\star(\nabla F(x_t) - \eta \tilde{l}_t)$.
Evaluating $\nabla F$ and using $\nabla F^\star$ from Lemma \ref{lemma4}: \begin{align*} y_{t+1,i} &= \frac{1}{1 + \exp(- \log(x_{t,i}) + \log (1-x_{t,i}) + \eta \tilde{l}_{t,i})}\\ &= \frac{1}{1 + \frac{1-x_{t,i}}{x_{t,i}} \exp(\eta \tilde{l}_{t,i})}\\ &= \frac{x_{t,i}}{x_{t,i} + (1-x_{t,i})\exp(\eta\tilde{l}_{t,i})} \end{align*} Since $0\leq(1+\exp(-\theta))^{-1}\leq 1$, we have that $y_{i,t+1}$ is always in $[0,1]$, so the Bregman projection step is not required. Hence $x_{i,t+1} = y_{i,t+1}$, which gives the same update as PolyExp. \end{proof} \subsection{PolyExp Regret Proofs} \subsubsection{Full Information} \begin{lemma}[see Theorem 5.5 in \cite{bubeck2012regret}] \label{lemma3} For any $x \in \mathcal{\bar{X}}$, OMD with a Legendre regularizer $F(x)$ with domain $\mathcal{\bar{X}}$, such that $F^\star$ is differentiable on $\mathbb{R}^n$, satisfies: \begin{align*} \sum_{t=1}^T x_t^\top l_t - \sum_{t=1}^T x^\top l_t &\leq \frac{F(x)-F(x_1)}{\eta} \\&+ \frac{1}{\eta} \sum_{t=1}^T D_{F^\star} (\nabla F(x_t) - \eta l_t \|\nabla F(x_t)) \end{align*} \end{lemma} \begin{lemma}\label{lemma5} If $|\eta l_{t,i}| \leq 1$ for all $t \in [T]$ and $i \in [n]$, OMD with the entropic regularizer $F(x) = \sum_{i=1}^n x_i \log x_i + (1-x_i) \log (1-x_i) $ satisfies, for any $x \in [0,1]^n$: $$\sum_{t=1}^T x_t^\top l_t - \sum_{t=1}^T x^\top l_t \leq \frac{n \log 2}{\eta} + \eta \sum_{t=1}^T x_t^\top l_t^2$$ \end{lemma} \begin{proof} We start from Lemma \ref{lemma3}. Using the fact that $x \log(x) + (1-x) \log(1-x) \geq -\log 2$, we get $F(x)-F(x_1) \leq n \log 2$. Next we bound the Bregman term using Lemma \ref{lemma4}: \begin{align*} D_{F^\star} (\nabla F(x_t) &- \eta l_t \|\nabla F(x_t)) = F^\star(\nabla F(x_t) - \eta l_t) \\&- F^\star(\nabla F(x_t)) + \eta l_t^\top \nabla F^\star(\nabla F(x_t)) \end{align*} Using the fact that $\nabla F^\star = (\nabla F)^{-1}$, the last term is $\eta x_t^\top l_t$.
The first two terms can be simplified as: \begin{align*} &F^\star(\nabla F(x_t) - \eta l_t) - F^\star(\nabla F(x_t)) \\&= \sum_{i=1}^n \log \frac{1+\exp(\nabla F(x_t)_i - \eta l_{t,i})}{1+\exp(\nabla F(x_t)_i)} \\ &=\sum_{i=1}^n \log \frac{1+\exp(-\nabla F(x_t)_i + \eta l_{t,i})}{\exp(\eta l_{t,i})(1+\exp(-\nabla F(x_t)_i))} \end{align*} Using the fact that $\nabla F(x_t)_i = \log x_{t,i} - \log (1-x_{t,i})$: \begin{align*} &= \sum_{i=1}^n \log \frac{x_{t,i}+(1-x_{t,i})\exp(\eta l_{t,i})}{\exp(\eta l_{t,i})}\\ &=\sum_{i=1}^n \log(1-x_{t,i}+x_{t,i}\exp(-\eta l_{t,i})) \end{align*} Using the inequality $e^{-x} \leq 1-x+x^2$, valid when $x\geq -1$, so here when $|\eta l_{t,i}| \leq 1$: $$\leq \sum_{i=1}^n \log(1-\eta x_{t,i}l_{t,i}+\eta^2 x_{t,i}l_{t,i}^2)$$ Using the inequality $\log(1-x) \leq -x$: $$\leq -\eta x_t ^\top l_t +\eta^2 x_t^\top l_t^2 $$ The Bregman term can therefore be bounded by $-\eta x_t ^\top l_t +\eta^2 x_t^\top l_t^2 + \eta x_t ^\top l_t = \eta^2 x_t^\top l_t^2 $. Hence, we have: $$\sum_{t=1}^T x_t^\top l_t - \sum_{t=1}^T x^\top l_t \leq \frac{n \log 2}{\eta} + \eta \sum_{t=1}^T x_t^\top l_t^2$$ \end{proof} \RegPoly* \begin{proof} Applying expectation with respect to the randomness of the player to the definition of regret, we get: \begin{align*} \mathbb{E}[R_T] &= \mathbb{E}[\sum_{t=1}^T X_t^\top l_t - \min_{X^\star \in \{0,1\}^n} \sum_{t=1}^T {X^\star} ^\top l_t] \\&= \sum_{t=1}^T x_t^\top l_t - \min_{X^\star \in \{0,1\}^n} \sum_{t=1}^T {X^\star} ^\top l_t \end{align*} Applying Lemma \ref{lemma5}, we get $\mathbb{E}[R_T] \leq \frac{n \log 2}{\eta} + \eta \sum_{t=1}^T x_t^\top l_t^2$. Using the fact that $|l_{t,i}|\leq 1$, we get $\sum_{t=1}^T x_t^\top l_t^2 \leq nT$. $$\mathbb{E}[R_T] \leq \eta nT + \frac{n \log 2}{\eta}$$ Optimizing over the choice of $\eta$, we get that the regret is bounded by $2 n \sqrt{T \log 2}$ if we choose $\eta = \sqrt{\frac{\log 2}{T}}$. \end{proof} \subsubsection{Bandit} \begin{lemma} \label{lemma5_2} Let $ \tilde{l}_{t} = P_t^{-1}X_tX_t^\top l_t$.
If $|\eta \tilde{l}_{t,i}| \leq 1$ for all $t\in [T]$ and $i \in [n]$, OMD with entropic regularization and uniform exploration satisfies for any $x \in [0,1]^n$: $$\sum_{t=1}^T x_t^\top l_t - \sum_{t=1}^T x^\top l_t \leq \eta \mathbb{E}[\sum_{t=1}^T x_t^\top \tilde{l}_t^2] + \frac{n\log 2}{\eta} + 2\gamma nT$$ \end{lemma} \begin{proof} Writing $x_t = (1-\gamma)x_{p_t} + \gamma x_{\mu}$, where $x_{p_t}$ and $x_{\mu}$ are the means of $p_t$ and $\mu$, we have that: \begin{align*} \sum_{t=1}^T x_t^\top \tilde{l}_t - \sum_{t=1}^T x^\top \tilde{l}_t &= (1-\gamma)(\sum_{t=1}^T x_{p_t}^\top \tilde{l}_t - \sum_{t=1}^T x^\top \tilde{l}_t) \\&+ \gamma (\sum_{t=1}^T x_{\mu}^\top \tilde{l}_t - \sum_{t=1}^T x^\top \tilde{l}_t) \end{align*} Since the algorithm runs OMD on $\tilde{l}_t$ and $|\eta \tilde{l}_{t,i}| \leq 1$, we can apply Lemma \ref{lemma5}: \begin{align*} \sum_{t=1}^T x_t^\top \tilde{l}_t - \sum_{t=1}^T x^\top \tilde{l}_t &\leq (1-\gamma)(\eta \sum_{t=1}^T x_{p_t}^\top \tilde{l}_t^2 + \frac{n \log 2}{\eta}) \\&+ \gamma(\sum_{t=1}^T x_{\mu}^\top \tilde{l}_t - \sum_{t=1}^T x^\top \tilde{l}_t) \end{align*} Apply expectation with respect to $X_t$.
Using the fact that $\mathbb{E}[\tilde{l}_t] = l_t$ and $x_{\mu}^\top l_t - x^\top l_t \leq 2n$: \begin{align*} \sum_{t=1}^T x_t^\top l_t - \sum_{t=1}^T x^\top l_t &\leq (1-\gamma)(\eta \mathbb{E}[\sum_{t=1}^T x_{p_t}^\top \tilde{l}_t^2] + \frac{n \log 2}{\eta}) \\&+ 2\gamma nT\\ &\leq \eta \mathbb{E}[\sum_{t=1}^T x_{t}^\top \tilde{l}_t^2] + \frac{n \log 2}{\eta} + 2\gamma nT \end{align*} \end{proof} \PExpBanReg* \begin{proof} Applying expectation with respect to the randomness of the player to the definition of regret, we get: \begin{align*} \mathbb{E}[R_T] &= \mathbb{E}[\sum_{t=1}^T X_t^\top l_t - \min_{X^\star \in \{0,1\}^n}\sum_{t=1}^T {X^\star}^\top l_t ] \\&= \sum_{t=1}^T x_t^\top l_t - \min_{X^\star \in \{0,1\}^n}\sum_{t=1}^T {X^\star}^\top l_t \end{align*} Assuming $|\eta \tilde{l}_{t,i}| \leq 1$, we apply Lemma \ref{lemma5_2}: $$\mathbb{E}[R_T] \leq \eta \mathbb{E}[\sum_{t=1}^T x_{t}^\top \tilde{l}_t^2] + \frac{n \log 2}{\eta} + 2\gamma nT$$ We have that: $$\eta x_{t}^\top \tilde{l}_t^2 = \frac{1}{\eta}(\eta\tilde{l}_t)^\top \text{diag}(x_t)(\eta\tilde{l}_t) \leq \frac{\|\eta \tilde{l}_t\|^2_2}{\eta} \leq \frac{n}{\eta} \leq \frac{2n \log 2}{\eta}$$ This gives us: $$\mathbb{E}[R_T] \leq \frac{3n\log 2}{\eta} + 2\gamma n T$$ To satisfy $|\eta \tilde{l}_{t,i}| \leq 1$, we need the following condition: \begin{align*} |\eta \tilde{l}_{t,i}| &= \eta |\tilde{l}_t^\top e_i| = \eta |(P_t^{-1}X_tX_t^\top l_t)^\top e_i| \\&\leq n \eta |X_t^\top P_t^{-1}e_i| \leq n \eta |X_t^\top e_i|\|P_t^{-1}\| \end{align*} Since $P_t \succeq \frac{\gamma}{4}I_n$ and $|X_t^\top e_i|\leq 1$, we should have $\frac{4n \eta}{\gamma} \leq 1$. Taking $\gamma = 4n \eta$, we get: $$\mathbb{E}[R_T] \leq \frac{3n\log 2}{\eta} + 8\eta n^2 T$$ Optimizing over $\eta$, we get $\mathbb{E}[R_T] \leq 4n^{3/2} \sqrt{6T\log 2}$ if $\eta = \sqrt{\frac{3 \log 2}{8nT}}$. \end{proof} \bibliographystyle{abbrv}
\section{Introduction} The Faraday rotation technique provides a powerful probe of astrophysical magnetic fields across different elements of Large Scale Structure (LSS), from galaxies \citep{2015A&ARv..24....4B} to galaxy clusters and the inter-cluster medium \citep{galaxies6040142}. It has also been used to constrain the magnetic field strength in the intergalactic medium \citep{1994RPPh...57..325K,Blasi:1999hu,Neronov:2013lta,Pshirkov:2015tua,Aramburo-Garcia:2022ywn}. Measurements of the weakest intergalactic magnetic fields (IGMF) using the Faraday rotation technique are challenging. The observational signal is determined by the Rotation Measure (RM), which is an integral along the line of sight toward the source of the polarized signal: \begin{equation} \text{RM} = \frac{e^3}{2\pi m_{\rm e}^2}\int \frac{n_{\rm e} B_{\parallel}}{(1+z)^2} \frac{{\rm d}\ell}{{\rm d}z} {\rm d}z, \label{eq:FRMeq} \end{equation} where $e,\,m_{\rm e}$ are the charge and mass of the electron, $n_{\rm e}$ is the density of free electrons in the medium, $z$ is the redshift, and $B_{\parallel}$ is the magnetic field component parallel to the line of sight. This integral has contributions from both the intergalactic medium (IGM) and the Milky Way along a line of sight (LoS). The Milky Way part of the LoS is short but has large $n_{\rm e}$ and $B_{\parallel}$ values, while the IGM part is significantly longer but has smaller $n_{\rm e}$ and $B_{\parallel}$. In addition to the contributions from the Galactic RM and the IGM, the integral in Eq.~\eqref{eq:FRMeq} also has a contribution from the source host galaxy, and possibly from parts of the LoS passing through magnetized regions of other galaxies occasionally found close to the LoS \citep{Bernet:2008qp}.
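In practice, the integral in Eq.~\eqref{eq:FRMeq} is evaluated as a discretized sum over segments of the sampled LoS. The sketch below uses the standard low-redshift prefactor of $0.812$~rad~m$^{-2}$ for $n_{\rm e}$ in cm$^{-3}$, $B_{\parallel}$ in $\mu$G and path length in pc; the uniform profile is a toy assumption for illustration, not taken from any simulation:

```python
import numpy as np

def rotation_measure(n_e, B_par, dl_pc):
    # Discretized RM at z ~ 0: 0.812 * sum(n_e [cm^-3] * B_par [muG] * dl [pc]),
    # returned in rad/m^2.  At higher redshift each segment would carry an
    # additional (1 + z)^-2 factor, as in the integral above.
    return 0.812 * np.sum(n_e * B_par * dl_pc)

# Toy example: a 1 kpc path with uniform n_e = 0.03 cm^-3 and B_par = 1 muG.
segments = 1000
dl = np.full(segments, 1.0)                      # 1 pc per segment
rm = rotation_measure(np.full(segments, 0.03),
                      np.full(segments, 1.0), dl)
# rm is about 24.4 rad/m^2
```

Sign reversals of $B_{\parallel}$ along the path partially cancel in the sum, which is why the RM probes the net line-of-sight field rather than its magnitude.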
Uncertainties in modeling the Galactic component of the RM \citep{jansson12,2012A&A...542A..93O,Oppermann:2014cua,Hutschenreuter2020}, of the source host galaxy, as well as of the elements of LSS along the LoS, limit the sensitivity of searches for the contribution from the intergalactic medium and IGMF in the integral. Detailed modeling of both the evolution of the primordial field and the baryonic feedback from galaxies has been performed within the IllustrisTNG cosmological simulations \citep{nelson18,springel18,pillepich18,naiman18,marinacci18}. Recent work by \cite{Garcia:2020kxm} has specifically considered the effect of the baryonic feedback in IllustrisTNG, which leads to the appearance of cosmological-scale magnetic bubbles with magnetic field values in excess of $B\gtrsim 10^{-12}$~cG (comoving Gauss) that occupy up to $15\%$ of the simulation volume. \citet{bondarenko21} studied the effect of these bubbles on searches for the IGMF using $\gamma$-ray\ measurements, a technique sensitive mainly to the magnetic fields in the voids of the LSS, and showed that the presence of such magnetized bubbles has only a minor effect on the $\gamma$-ray\ measurements. However, unlike $\gamma$-ray\ measurements, which are sensitive only to the volume-filling IGMF but not to the free electron density $n_{\rm e}$, measurements using the RM technique may be much more influenced by these magnetized bubbles, where both $B$ and $n_{\rm e}$ are enhanced. Preliminary estimates from~\citet{Garcia:2020kxm} show that the contribution to RMs from these magnetic bubbles can be comparable to that of the Galactic RM and hence dominate over possible contributions to the RM from the adiabatically compressed primordial magnetic field.
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{Plots/Prim_vs_bubbles_B9_new2.pdf} \\ \includegraphics[width=0.48\textwidth]{Plots/Prim_vs_bubbles_Median_B9_new.pdf} \caption[]{Prediction for the mean (upper panel) and median (lower panel) values of $|\text{RM}|$ from the IllustrisTNG simulation as a function of redshift for the homogeneous primordial magnetic field with $B_0 = 10^{-9}$~cG. Red continuous lines show the conservative prediction for the primordial magnetic field, blue dashed lines show the contribution from magnetic bubbles, for which we excluded lines of sight with $|\text{RM}|>400\text{ rad}/\text{m}^2$ that come from intersecting galaxies; see text for details. For comparison we also show the prediction from~\cite{Pshirkov:2015tua} (upper panel) and~\cite{Blasi:1999hu} (lower panel), in which the RM was estimated based on an analytic log-normal distribution for the electron number density (green dotted line).} \label{fig:prim-vs-bubbles} \end{figure} In this work, we make a detailed assessment of the effect of magnetized bubbles around galaxies on the extragalactic RM. We show that the presence of these bubbles can account for a large part of the extragalactic RM at high redshifts $z\gtrsim 2$ and, in fact, that the IllustrisTNG model saturates the current upper limit on extragalactic RM. We also study the consistency of the baryonic feedback model in the IllustrisTNG simulations using RM data from the NVSS. The structure of this paper is as follows: in Section~\ref{sec:predictedRM} we describe the IllustrisTNG simulations and the properties of magnetic bubbles; we discuss the separation of the volume-filling component of the IGM from that of magnetic bubbles and extract redshift-dependent predictions for the RM from the volume-filling magnetic field component and the magnetic bubble component, respectively.
In Section~\ref{sec:RM-bubbles} we compare our predictions for the RM from magnetic bubbles in the IllustrisTNG simulations to observational measurements from the NVSS survey. In Section~\ref{sec:host-galaxies} we describe a simple analytic model for the RM contribution from host galaxies and compare it to predictions from the IllustrisTNG simulations. In Section~\ref{sec:conclusions} we describe the implications of these results and draw our conclusions. Cosmological parameters from \cite{Plank2016A&A...594A..13P} are assumed throughout this work. \section{Comparing RM from bubbles and primordial magnetic field} \label{sec:predictedRM} The IllustrisTNG simulations and the method used here to extract data on rotation measures for random lines of sight are described in detail in our companion paper~\cite{Aramburo-Garcia:2022ywn}. We note that IllustrisTNG is a state-of-the-art gravo-magnetohydrodynamic simulation incorporating a comprehensive model of galaxy formation. In our work, we use the high-resolution TNG100-1 simulation~\citep[hereinafter TNG100 or just TNG; ][]{Nelson2019ComAC...6....2N} with a box size of $\sim (110~\text{cMpc})^3$, which contains $1820^3$ dark matter particles and an equal number of initial gas cells. The initial seed magnetic field in this simulation was chosen to be homogeneous with a magnitude of $10^{-14}$~cG. We divide the simulation volume into magnetic bubble and primordial magnetic field components using a limiting magnetic field strength of $10^{-12}$~cG as the boundary condition between the two regions (see details in~\citealt{Garcia:2020kxm,Aramburo-Garcia:2022ywn}). The component with $|B|>10^{-12}$~cG is used to make predictions for magnetic bubbles, while the other component is rescaled by a factor of $B_0/(10^{-14}~\text{cG})$ and used to predict a conservative contribution from a primordial magnetic field of strength $B_0$. 
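The two-component split described above amounts to a simple threshold mask on the cell magnetic fields. The following minimal Python sketch (our own illustration with toy cell values; the function and variable names are not part of the IllustrisTNG toolchain) shows the operation:

```python
import numpy as np

B_SEED = 1e-14   # initial homogeneous seed field, comoving Gauss
B_CUT = 1e-12    # threshold separating bubbles from the volume-filling field

def split_components(B_cells, B0):
    """Split cell magnetic fields into bubble and primordial parts.

    Cells with |B| > B_CUT are attributed to magnetized bubbles; the
    remaining cells are rescaled by B0/B_SEED to model a primordial
    field of strength B0 (a conservative prescription, since it ignores
    any amplification of the seed field).
    """
    B = np.asarray(B_cells, dtype=float)
    bubble_mask = np.abs(B) > B_CUT
    bubbles = np.where(bubble_mask, B, 0.0)
    primordial = np.where(bubble_mask, 0.0, B) * (B0 / B_SEED)
    return bubbles, primordial

# toy example: three quiet cells and one feedback-amplified cell
cells = np.array([2e-14, -5e-15, 3e-11, 1e-14])
bub, prim = split_components(cells, B0=1e-9)
```

In this toy example only the third cell exceeds the threshold and is attributed to a bubble; the remaining cells are rescaled to the target primordial field strength.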
The IllustrisTNG simulation data between redshifts $z=0$ and $5$ are stored in the form of snapshots at 13 redshift points. From each of the snapshots, we extract data for the electron number density and magnetic field along 1000 random lines of sight. We found that some lines of sight intersect galaxies, which results in very large |RM| values. Of all the 13000 lines of sight, we exclude four that are strong outliers with |RM|$>400\text{ rad}/\text{m}^2$; see more details about outliers in Appendix~\ref{app:RM-distribution}. We create 1000 continuous random lines of sight between redshifts $z=0$ and $z=5$ following the procedure described in~\cite{Aramburo-Garcia:2022ywn}. In Figure~\ref{fig:prim-vs-bubbles} we show the predictions for the mean (upper panel) and median (lower panel) absolute RM value, $|\text{RM}|$, for the primordial magnetic field and for magnetic bubbles using $B_0 = 10^{-9}$~cG. The two contributions have different redshift dependences: the RM from the primordial magnetic field grows steadily with redshift, while the prediction from magnetic bubbles saturates around $z\sim 1.5$. This comes from the fact that magnetic bubbles are formed only at small redshifts $z\lesssim 2$~\cite{Garcia:2020kxm}, while the primordial magnetic field exists at all redshifts. We see that for $B_0 = 10^{-9}$~cG, the contribution from magnetic bubbles dominates the mean |RM| at low redshifts and is approximately equal to that of the primordial field at large redshifts. For the median value, the contribution from magnetic bubbles is more modest, resulting in the strong dominance of the primordial magnetic field contribution at large redshifts. These differences between the mean and median $|\text{RM}|$ values arise because the RM distribution for bubbles has a long high-RM tail that significantly influences estimates of the mean |RM|. For comparison we also show predictions from~\cite{Pshirkov:2015tua} and~\cite{Blasi:1999hu}. 
One can see that those previous results show a different redshift dependence from that of our primordial RM. This can be explained by the differences in the electron number density distribution between the analytical models of \cite{Pshirkov:2015tua,Blasi:1999hu} and the numerical model of IllustrisTNG; see \citet{Aramburo-Garcia:2022ywn} for details. In the following sections, we consider the case in which the $B_0$ value is small enough ($B_0 \ll 10^{-10}$~cG) for its contribution to be negligible compared to that of magnetic bubbles. \begin{figure} \centering \includegraphics[width=\linewidth]{Plots/AvRMbubbles.pdf} \\ \includegraphics[width=\linewidth]{Plots/MedianRM_bubbles_vs_obs.pdf} \caption{Mean (upper panel) and median (lower panel) observed $|\text{RRM}|$ at different redshifts (orange lines) and the contribution from magnetic bubbles in the TNG simulation calculated using 1000 random lines of sight (blue lines).} \label{fig:AvRMbubbles-vs-AvRRM-RMvoxel-20} \end{figure} \section{Comparison between TNG model of magnetic bubbles and observations} \label{sec:RM-bubbles} We use observations of 3650 radio sources with Faraday rotation measures and redshift information cataloged by~\cite{2012arXiv1209.1438H}, from which objects close to the Galactic plane ($|b| < 20^\circ$) are removed. These data were produced from the NRAO VLA Sky Survey~\citep{Condon1998,Taylor2009} catalog, in which polarization was measured at two close frequencies. This results in a wrapping uncertainty~\citep{Taylor2009}, which means that one cannot distinguish RMs that differ by integer multiples of $\delta\text{RM} = 652.9 \text{ rad}/\text{m}^2$. Therefore, all absolute RM values in the catalog are smaller than $520\text{ rad}/\text{m}^2$, and this is taken into account when we compare them to simulations. We estimate the extragalactic contribution as the residual rotation measure (RRM), which we obtain by subtracting the Galactic RM (GRM) using the model of~\cite{Hutschenreuter2020}. 
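The $n\,\delta\text{RM}$ ambiguity and the RRM definition can be illustrated with a short Python sketch (our own illustration of the principal-interval wrapping; the actual correction applied to the simulated lines of sight is described in Appendix~\ref{sec:wrapping-correction}):

```python
import numpy as np

DRM = 652.9  # rad/m^2: RMs differing by n*DRM are indistinguishable

def wrap_rm(rm):
    """Map an RM onto the principal interval around zero.

    A two-frequency survey cannot distinguish RMs differing by integer
    multiples of DRM, so a large intrinsic RM is observed wrapped.
    """
    return rm - DRM * np.round(np.asarray(rm, dtype=float) / DRM)

def residual_rm(rm_obs, grm):
    """Extragalactic estimate: observed RM minus the Galactic model."""
    return np.asarray(rm_obs) - np.asarray(grm)

# a true RM of 700 rad/m^2 would be reported as 700 - 652.9 = 47.1 rad/m^2
```

The second helper is the RRM subtraction used in the text, here reduced to its simplest element-wise form.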
The comparison between the observed RRM and our prediction for magnetic bubbles from the IllustrisTNG simulation, for both mean and median values of the RRM, is shown in Figure~\ref{fig:AvRMbubbles-vs-AvRRM-RMvoxel-20}. For the simulated data we here include lines of sight that intersect galaxies (in contrast to Section~\ref{sec:predictedRM}), as lines of sight crossing galaxies cannot be excluded from the observational data. However, we apply a wrapping correction in order to ensure consistency with the observations; see Appendix~\ref{sec:wrapping-correction} for details. From Figure~\ref{fig:AvRMbubbles-vs-AvRRM-RMvoxel-20}, it can be seen that the prediction of the mean $|\text{RM}|$ from bubbles in the simulation grows quickly with redshift and is almost constant at $z>2$. It is also interesting to notice that the prediction for the mean |RM| from magnetic bubbles at large redshifts almost coincides with the observed data. For median |RM| values at $z\gtrsim 2$, we see that the contribution from magnetic bubbles is much smaller than the observed RRM, so one might naively conclude that the observed extragalactic RM at large redshifts cannot be explained by magnetic bubbles. However, one should keep in mind that two systematic factors can increase the observed extragalactic RM, particularly for small RM values. First, the observed extragalactic RMs have large statistical errors for small RM values. Indeed, almost all data points with extragalactic RMs smaller than $10\text{ rad}/\text{m}^2$ have an associated uncertainty that is of the order of the measured value itself. The second factor is that the procedure for measuring the Galactic RM depends on the extragalactic sources themselves, which introduces a systematic error into the extragalactic RM, see e.g.~\cite{Oppermann:2014cua}. Both these factors result in a wider extragalactic RM distribution, creating a significant bias in the observed median values of the $|\text{RM}|$. 
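The way measurement scatter biases the median $|\text{RM}|$ upwards can be demonstrated with a toy Monte Carlo (our own illustration; the intrinsic width and error size are arbitrary values chosen only to make the effect visible):

```python
import numpy as np

def median_abs_rm(true_rm, sigma, rng):
    """Median |RM| after adding Gaussian measurement errors."""
    observed = true_rm + rng.normal(0.0, sigma, size=true_rm.size)
    return np.median(np.abs(observed))

rng = np.random.default_rng(42)
true_rm = rng.normal(0.0, 1.0, size=100000)   # narrow intrinsic distribution
clean = np.median(np.abs(true_rm))            # ~0.67 for a unit Gaussian
noisy = median_abs_rm(true_rm, sigma=5.0, rng=rng)
# noisy >> clean: measurement scatter alone inflates the median |RM|
```

Even with a symmetric, zero-mean error, the median of the *absolute* value is pushed up by the widened distribution, which is the bias discussed above.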
Consequently, from these results, we conclude that the magnetic bubbles in the IllustrisTNG simulation do not contradict the available observational data. If the model from the IllustrisTNG simulation is correct, the magnetic bubbles provide a lower bound for future extragalactic RM measurements at $z\gtrsim 2$, with the characteristic property that the mean value is much larger than the median. \section{Contribution from host galaxies} \label{sec:host-galaxies} An additional contribution to the extragalactic RM potentially comes from the host galaxies and could mask that from magnetic bubbles. In this section, we consider a simple model for host galaxies and argue that their contribution at large redshift should be small. In general, predicting the host galaxy contribution in simulations is challenging, as one should choose and correctly model sources of polarized radio emission that are as similar as possible to those present in the observational sample. In this section, we discuss a simple qualitative model for host galaxies and compare its predictions with those from the IllustrisTNG simulation. \subsection{Simple analytic model} \label{sec:host-model} The RM from a host galaxy at redshift $z$ is given by \begin{equation} \text{RM} \propto \frac{1}{(1+z)^2} \int n_e B_{\parallel} dL, \label{eq:RM-host} \end{equation} where the integral is taken along the line of sight in the circumgalactic medium of the host galaxy. Let us assume that the electron number density near the galaxy, $n_{\rm e}$, follows the cosmological average and is proportional to $(1+z)^3$. At the same time, the characteristic size of the region that gives a significant contribution to the RM is constant in comoving coordinates, $L\propto 1/(1+z)$. In this case, the $z$-dependence from $n_{\rm e}$ and $L$ in Eq.~\eqref{eq:RM-host} cancels out, so the redshift dependence of the RM from host galaxies is determined only by the evolution of the magnetic field near the galaxy. 
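The cancellation of the $(1+z)$ factors can be checked numerically; in the following sketch (our own illustration, with all dimensionful prefactors set to unity) the returned value tracks $B(z)$ alone:

```python
def rm_host_scaling(z, B):
    """Redshift scaling of the host-galaxy RM.

    n_e follows the cosmic mean, (1+z)^3; the contributing path length
    is fixed in comoving units, L ~ 1/(1+z); the (1+z)^-2 factor comes
    from the rest-frame wavelength shift.  All prefactors are set to 1,
    since only the z-dependence matters here.
    """
    n_e = (1.0 + z) ** 3
    L = 1.0 / (1.0 + z)
    return (1.0 + z) ** -2 * n_e * B * L

# the (1+z) factors cancel: RM(z) is proportional to B(z) at every z
```

For any redshift the product of the three $(1+z)$ factors is unity, so the host-galaxy RM inherits the redshift evolution of the circumgalactic magnetic field only.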
Magnetic field evolution in the circumgalactic medium was studied in detail by~\cite{2012MNRAS.422.2152B}: using cosmological MHD simulations and analytic models, it was shown that the magnetic field near galaxies grows quickly at large redshifts, reaches a maximum at some intermediate redshift, and slowly decays after that. We expect the RM contribution from host galaxies to exhibit similar behavior. \begin{figure} \centering \includegraphics[width=\linewidth]{Plots/RM_host2.pdf} \caption{Prediction for the mean (blue points) and median (red points) $|\text{RM}|$ values from host galaxies in the IllustrisTNG simulation. Error bars on the mean show an estimate of the standard errors, while the shaded red region marks the central interval containing $50\%$ of the data. Continuous lines of the corresponding colors show simple broken power-law fits to the data.} \label{fig:RM-host} \end{figure} \subsection{Host galaxies in IllustrisTNG} To model host galaxies in the IllustrisTNG simulation, we assume that the main sources of the observed RM are the radio lobes at the ends of AGN jets. We assume that two radio lobes are located symmetrically around an AGN and that we cannot resolve these two radio lobes in the observational data. We also assume that both radio lobes have the same intensity of polarized emission, so that the observed RM is the average of their individual RMs. For each given redshift in the simulations, we choose $\sim100$ random galaxies that contain supermassive black holes (the minimal mass of a supermassive black hole in IllustrisTNG is $10^6M_{\odot}$, and it is placed in the center of each dark matter halo once the halo reaches a virial mass of $6\cdot 10^{10}M_{\odot}$). For these galaxies, we generate two symmetric radio lobes pointing in a random direction from the galaxy center with an isotropic distribution and a randomly selected distance between the two radio lobes in the range from 50 to 300\,kpc. 
These distances are chosen according to the experimentally measured distribution from~\cite{2020MNRAS.499...68T}. For each pair of radio lobes, we generate six lines of sight in random directions and use the first 1\,Mpc along these lines of sight to define the contribution from the host galaxy. Using these data we calculate the mean and median |RM| from host galaxies, the result of which is shown in Figure~\ref{fig:RM-host}. The continuous lines show best fits to the model \begin{equation} |\text{RM}|(z) = \frac{a + b z^c}{1 + (z/d)^e}, \end{equation} with best-fit parameters $a=1.65(0.242)\text{ rad}/\text{m}^2$, $b=14.4(0.670)\text{ rad}/\text{m}^2$, $c=0.300(0.468)$, $d=3.81(2.26)$, $e = 29.8(10.6)$ for mean (median). Qualitatively, the RM from host galaxies changes with redshift according to the simple analytic model from Section~\ref{sec:host-model}: it decays at large redshifts and has a maximum at intermediate redshifts of $z\sim 3$ for the mean and $z\sim 1.5$ for the median. Comparing the contribution from host galaxies with Figure~\ref{fig:prim-vs-bubbles} we conclude that within the IllustrisTNG model the RM of high redshift sources ($z>3$) is dominated by the contribution from magnetic bubbles along the line-of-sight, rather than by the host galaxy RM, for both the mean and median absolute RM values. \section{Discussion and Conclusions} \label{sec:conclusions} In this work, we have considered the effect of magnetized bubbles around galaxies driven by baryonic feedback processes on the extragalactic Rotation Measure. We have used the IllustrisTNG simulation to separate the contributions of the volume-filling intergalactic magnetic fields and the contribution of magnetized outflows from galaxies to the RM integral. 
We have demonstrated that the IllustrisTNG model of such magnetized bubbles predicts that the extragalactic RM at $z>2$ almost saturates current estimates of the mean residual RM from the NVSS, see Figure~\ref{fig:AvRMbubbles-vs-AvRRM-RMvoxel-20}. The contribution of magnetized bubbles to the extragalactic RM at $z>2$ (including the wrapping correction, see Appendix~\ref{sec:wrapping-correction}) has a value of $\langle |{\rm RM}| \rangle \simeq 13$~rad/m$^2$, which is very close to the mean residual RM of $16$~rad/m$^2$ found from NVSS data in this redshift range when accounting for the Galactic RM model of \cite{Hutschenreuter2020}. Without the survey-dependent wrapping correction, the prediction for the mean absolute RM from magnetic bubbles is $\langle |{\rm RM}| \rangle \simeq 7$~rad/m$^2$, where rare lines of sight with |RM|$>400\text{ rad}/\text{m}^2$ that come from intersecting galaxies were excluded (see Section~\ref{sec:predictedRM} for details). Our work suggests that there are two main contributions from the IGM: (i) from magnetic bubbles and (ii) from the volume-filling magnetic field. The results found here indicate that these two components have different redshift dependencies: the volume-filling magnetic field exists at all redshifts, and its contribution grows steadily with $z$, while magnetic bubbles are formed at later times, mostly below $z \approx 2$, so that at larger redshifts their contribution is fixed. This should allow one to distinguish the separate contributions in future observations. We also consider a simple analytic model for the RM contribution from host galaxies and confirm it using data from the IllustrisTNG simulation. We show that the contribution from host galaxies to the mean and median |RM| values quickly decreases at large redshifts. This provides a possibility to isolate the contribution of magnetic bubbles along the line of sight to the overall extragalactic RM. 
This can be done through a comparison of the RM of high-redshift sources ($z\gtrsim 2...3$) with that of lower-redshift sources. If the main source of the extragalactic RM is the magnetic field around the source host galaxies, high-redshift sources should have systematically lower RMs. A caveat of this approach may be the cosmological evolution of the source population, which is not considered in our simple source model (radio lobes around the host galaxy). While the predicted mean absolute RM from the IllustrisTNG simulations found here is compatible with observational measurements from the NVSS survey, we note that the wrapping correction implemented in this work may represent a systematic uncertainty in this result that artificially lowers the predicted and measured RM. Next-generation polarisation surveys, such as those expected from the SKA telescope and its precursors, will not be subject to this wrapping uncertainty in their RM measurements due to the broadband nature of their measurements. The closeness of the current IllustrisTNG model predictions to the observed residual RM estimates derived from current data suggests that the IllustrisTNG model of baryonic feedback will be falsifiable with the improvement of RM measurements from these new surveys. Furthermore, compared to the NVSS data considered in this work, the SKA will provide an RM grid containing several orders of magnitude more sources than the $\sim 4\times 10^3$ source sample considered here. The denser RM grid provided by the new surveys and the better precision stemming from broadband rather than two-frequency sampling of the polarised signal will also improve the knowledge of the Galactic component of the RM. This will result in a smaller systematic uncertainty of the RRM (specifically for the median of the absolute value, which is possibly dominated by the systematic uncertainty). 
If the RRM level found with the SKA data is lower than the current estimates derived from NVSS, the IllustrisTNG model will be in tension with the data. This suggests that the IllustrisTNG baryonic feedback model is falsifiable through RM measurements. \section*{Acknowledgements} KB is partly funded by the INFN PD51 INDARK grant. AB is supported by the European Research Council (ERC) Advanced Grant ``NuBSM'' (694896). AMS gratefully acknowledges support from the UK Alan Turing Institute under grant reference EP/V030302/1. AS is supported by the Kavli Institute for Cosmological Physics at the University of Chicago through an endowment from the Kavli Foundation and its founder Fred Kavli. This work has been supported by the Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of High Energy Physics. \section*{Data Availability} The data underlying this article are available upon reasonable request. \bibliographystyle{mnras}
\section{Introduction}\label{sect:intro} The epoch of reionization (EoR) is one of the last unobserved eras of our Universe. During this era, the radiation emitted by the first cosmic structures causes the Universe to go from neutral to fully ionised. The study of the EoR thus provides invaluable information on the early stages of structure formation. The current constraints on reionization from quasar spectra \citep{fan2006,becker2013} and the Cosmic Microwave Background \citep[CMB,][]{wmap9-cosmo,zahn2012,planck-cosmo} imply a complex reionization history, going on from $z=20$--30 to $z=6$. The most promising way to probe the EoR is through the observation of the 21\,cm hyperfine transition of neutral hydrogen \citep[see e.g.,][for a review]{furlanetto2006, morales2010, pritchard2012}. A number of projects are currently under way to detect the 21\,cm signal from reionization: the Low Frequency Array (LOFAR)\footnote{http://www.lofar.org}; the Giant Metrewave Radio Telescope (GMRT)\footnote{http://gmrt.ncra.tifr.res.in}; the Murchison Widefield Array (MWA)\footnote{http://www.mwatelescope.org}; the Precision Array to Probe the Epoch of Reionization (PAPER)\footnote{http://astro.berkeley.edu/dbacker/eor}; the 21 Centimeter Array (21CMA)\footnote{21cma.bao.ac.cn}. Such instruments aim at a statistical detection of the signal and are currently providing the first upper limits \citep{paciga2013,dillon2013,parsons2013}. The next-generation instruments, like the Square Kilometer Array \citep[SKA;][]{mellema2013}\footnote{http://www.skatelescope.org}, will be able to accurately map this radiation over a wide redshift range. One of the most difficult aspects of the 21\,cm measurement is the presence of foreground emission, due to our Galaxy and extragalactic sources, which is about four orders of magnitude brighter than the cosmological signal \citep{jelic2008,jelic2010,bernardi2010,yatawatta2013,moore2013}. 
Most approaches envisage a first step, in which bright point-like sources are subtracted \citep{dimatteo2002,dimatteo2004}, and a second one, which deals with the diffuse components. The latter can be done by fitting the foregrounds in frequency under the assumption that they are spectrally smooth \citep{oh2003,santos2005,gleser2008,jelic2008,harker2009,liu2009,bernardi2010,liu2011,petrovic2011,liu2012,dillon2013,moore2013}. Alternatively, blind methods have been proposed, which do not rely on hypotheses regarding the foreground spectra \citep{chapman2012,chapman2013}. In fact, a concern related to the spectral fitting approach is the ability to correctly model and accurately fit the spectra \citep[see e.g.][for the effect of incorrect modelling and fitting]{morales2006}. In this work we adapt the Correlated Component Analysis \citep[CCA,][]{bonaldi2006,ricciardi2010} method to EoR data. The CCA is a ``model learning'' algorithm, which estimates the frequency spectrum of the foreground components from the data by exploiting second-order statistics. This method can be referred to as semi-blind, as it exploits some prior knowledge of the foreground emission but estimates the relevant information from the data. As such, it falls somewhere in between the two categories of approaches outlined above. The main motivation for introducing this approach is its ability to improve the understanding of the foreground components. For CMB studies, for which this method was originally developed, the CCA has been successfully used to improve the modelling of the poorly known anomalous microwave emission \citep{bonaldi2007, special, gouldbelt}. For the current application, some outstanding questions are whether the foreground components are spectrally smooth at the relevant frequencies and scales, and which spectral models describe them best. 
The ability to test our hypotheses regarding the spectral properties of the foregrounds and to refine our spectral models will be a crucial prerequisite for the application of all parametric foreground removal approaches. For this reason, we test the CCA method on foreground simulations of increasing complexity. For a direct comparison of the CCA method with other EoR foreground-removal methods we refer the reader to \cite{chapman_SC}. The paper is organised as follows: in Section \ref{sec:one} we describe the CCA method; in Section \ref{sec:two} we describe the simulations that we use to test our analysis; in Section \ref{sec:analysis} we describe in detail the analysis performed; in Section~\ref{sec:results} we assess the quality of the foreground cleaning on maps and power spectra. Finally, we present our conclusions in Section \ref{sec:conclu}. \section{The Correlated Component Analysis}\label{sec:one} \subsection{Fourier-domain CCA}\label{sec:method} For component separation purposes, it is convenient to model the data as a linear mixture of the components. In this work we apply the linear mixture model in the Fourier domain. This is the most natural choice for interferometric data, since measurements are performed directly in the $uv$ plane. What follows is a brief description of the Fourier-domain implementation of the CCA method \citep[for more details see][]{ricciardi2010}. For each point in the transformed $uv$ plane we write the data model as \begin{equation} \bmath{x}=\bmath{\sf B}\bmath{\sf H}\bmath{s}+\bmath{n}\label{modhcca}. \end{equation} The vectors $\bmath{x}$ and $\bmath{n}$ have dimension $N_{\rm f}$ (the number of frequency channels) and contain the data and the noise in Fourier space, respectively. 
The vector $\bmath{s}$ has dimension $N_{\rm c}$ (the number of components) and contains the astrophysical emission components; the diagonal $N_{\rm f}\times N_{\rm f}$ matrix $\bmath{\sf B}$ contains the instrumental beams in Fourier space; and the $N_{\rm f}\times N_{\rm c}$ matrix $\bmath{\sf H}$, called the mixing matrix, contains the intensity of the components at all frequencies. The mixing matrix is the key ingredient of component separation: if $\bmath{\sf H}$ is known, the problem reduces to a suitable inversion of eq.~(\ref{modhcca}). Unfortunately, the mixing matrix is in general not known, at least not to the precision required for an accurate component separation. This has led to the development of methods to estimate the mixing matrix from the data, of which the CCA is one. It exploits the correlation of the data between different frequencies to estimate the mixing matrix. The additional assumptions made by the CCA are that the mixing matrix is constant within the considered area of the sky, and that the number of its unknown elements can be reduced by adopting a suitable parametrization $\bmath{\sf H}=\bmath{\sf H}(\bmath{p})$. We will come back to these assumptions in Sect.~\ref{sec:analysis}. Starting from the linear mixture data model, and assuming that $\bmath{\sf H}$ is constant for all the points in the $uv$ plane, we can easily derive the following relation between the cross-spectra of the data $\bmath{\sf C}_{\bmath{x}}(k)$, sources $\bmath{\sf C}_{\bmath{s}}(k)$ and noise $\bmath{\sf C}_{\bmath{n}}(k)$, all depending on the Fourier mode $k$: \begin{equation} \bmath{\sf C}_{\bmath{x}}(k)=\bmath{\sf B}(k)\bmath{\sf H}\bmath{\sf C}_{\bmath{s}}(k)\bmath{\sf H}^{\rm T}\bmath{\sf B}^\dagger(k)+\bmath{\sf C}_{\bmath{n}}(k), \label{hcca_constr} \end{equation} where the dagger superscript denotes the adjoint matrix. 
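As a consistency check of the cross-spectrum relation above, one can draw correlated source samples, mix them with a random $\bmath{\sf H}$, and compare the empirical data cross-spectrum with $\bmath{\sf H}\bmath{\sf C}_{\bmath{s}}\bmath{\sf H}^{\rm T}$. The following Python sketch (our own illustration; unit beam, zero noise, and real-valued samples for simplicity) does this for a single spectral bin:

```python
import numpy as np

rng = np.random.default_rng(0)
n_f, n_c, n_samp = 4, 2, 200000   # channels, components, uv points in the bin

H = rng.normal(size=(n_f, n_c))            # mixing matrix (real, for clarity)
C_s = np.array([[2.0, 0.3], [0.3, 1.0]])   # source cross-spectrum in the bin

# draw correlated source samples with covariance C_s and mix them;
# the beam is set to identity and the noise to zero
L = np.linalg.cholesky(C_s)
s = L @ rng.normal(size=(n_c, n_samp))
x = H @ s

C_x = (x @ x.T) / n_samp       # empirical data cross-spectrum
C_model = H @ C_s @ H.T        # prediction of the cross-spectrum relation
```

Up to sampling noise, the empirical matrix `C_x` matches `C_model`, which is the relation the CCA inverts to estimate the mixing matrix.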
If $\bmath{x}(i,j)$ is the two-dimensional discrete Fourier transform of the data on a planar grid, the power spectrum can be obtained as the average of $\bmath{x}(i,j) \bmath{x}^{\dag}(i,j)$ in annular bins $D_{k}$, $k=1,\ldots,k_{\rm max}$: \begin{equation} \bmath{\sf C}_{\bmath{x}}(k) = \frac{1}{M_{k}} \sum_{i,j \in D_{k}} \bmath{x}(i,j) \bmath{x}^{\dag}(i,j), \label{dataspectrum} \end{equation} where $M_{k}$ is the number of pairs $(i,j)$ contained in the spectral bin denoted by $D_{k}$. The minimum and maximum $k$ for the spectral bins depend on the area of the sky considered and on the instrumental resolution, respectively. Since the foreground spectra are a smooth function of $k$, the number of bins has little effect on the results. To write the likelihood in a compact form, we define vectors containing all the elements of the matrices $\bmath{\sf C}$ in eq.~(\ref{hcca_constr}) for all frequencies/components and for a set of spectral bins $k$. If $\bmath{d}$ contains the elements of ${\bmath{\sf C}}_{\bmath{x}}(k) - {\bmath{\sf C}}_{\bmath{n}}(k)$ and $\bmath{c}$ contains the elements of ${\bmath{\sf C}}_{\bmath{s}}(k)$, eq.~(\ref{hcca_constr}) becomes \begin{equation} \bmath{d} = \bmath{\sf H}_{kB}\bmath{c}+ \bmath{\epsilon},\label{fd_cca_error} \end{equation} where $\bmath{\sf H}_{kB}$ contains the elements of the Kronecker product $[\bmath{\sf B}(k)\bmath{\sf H}]\otimes[\bmath{\sf B}(k)\bmath{\sf H}]$ and $\bmath{\epsilon}(k)$ represents the error on the noise power spectrum. 
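The annular averaging of eq.~(\ref{dataspectrum}) can be sketched as follows (a minimal numpy illustration of our own; the function name and the equal-width binning are arbitrary choices):

```python
import numpy as np

def annular_cross_spectra(x_ft, n_bins):
    """Average x x^dagger of gridded Fourier data in annular k-bins.

    x_ft: complex array of shape (n_freq, n, n), the 2-D discrete
    Fourier transform of each frequency map (zero frequency at [0, 0]).
    Returns (k_edges, C) with C of shape (n_bins, n_freq, n_freq).
    """
    n_f, n, _ = x_ft.shape
    ki = np.fft.fftfreq(n) * n                  # integer wavenumbers
    kmag = np.hypot(*np.meshgrid(ki, ki, indexing="ij"))
    edges = np.linspace(0.0, kmag.max(), n_bins + 1)
    which = np.digitize(kmag.ravel(), edges[1:-1])   # bin index per mode
    xv = x_ft.reshape(n_f, -1)
    C = np.empty((n_bins, n_f, n_f), dtype=complex)
    for b in range(n_bins):
        sel = which == b
        m = max(sel.sum(), 1)                        # M_k modes in the bin
        C[b] = (xv[:, sel] @ xv[:, sel].conj().T) / m
    return edges, C
```

Each `C[b]` is the $N_{\rm f}\times N_{\rm f}$ cross-spectrum matrix of one annular bin, i.e. one data point of the quantity entering the CCA likelihood.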
The unknowns (the parameter vector $\bmath{p}$ describing the mixing matrix and the source cross-spectra $\bmath{c}$) are finally obtained by minimizing the functional \begin{eqnarray}\label{hcca_objective} \!\!&&\bmath{\Phi}[\bmath{p},\bmath{c}]=\\ \!\!&&\!\!\!\![\bmath{d}-\bmath{\sf H}_{kB}(\bmath{p})\cdot \bmath{c}]^T \bmath{\sf N}_{\epsilon}^{-1} [\bmath{d}-\bmath{\sf H}_{kB}(\bmath{p})\cdot \bmath{c}]+\lambda\bmath{c}^T\bmath{C}\bmath{c} \nonumber \end{eqnarray} where the diagonal matrix $\bmath{\sf N}_{\epsilon}$ contains the covariance of the noise error $\bmath{\epsilon}$. The CCA method also provides an estimate of the statistical errors on the parameters \citep[see][for more details]{ricciardi2010}. The quadratic form in eq.~(\ref{hcca_objective}) represents the log-likelihood for $\bmath{p}$ and $\bmath{c}$. The term $\lambda\bmath{c}^T\bmath{C}\bmath{c}$ is a quadratic stabilizer for the source power cross-spectra, where the matrix $\bmath{C}$ must be suitably chosen. This term can be viewed as a log-prior density for the source power cross-spectra, with the parameter $\lambda$ tuned to balance the effects of data fit and regularization. However, in a high signal-to-noise case such as the one considered here, there is no need for such regularization, so in this work we have used $\lambda=0$. The minimization of eq.~(\ref{hcca_objective}) is performed with the ``simulated annealing'' (SA) method. This algorithm employs a random search which accepts not only changes that decrease the objective function $\bmath{\Phi}$, but also some changes that increase it. The latter are accepted with a probability which depends on $\Delta \bmath{\Phi}$ and on a control parameter, known as the system ``temperature'', that decreases as the minimization proceeds. The major advantage of SA over other methods is its ability to span the entire parameter range and avoid becoming trapped in local minima. 
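A minimal illustration of the SA accept/reject rule is given below (our own sketch on a one-dimensional toy objective; the cooling schedule, step size, and objective are arbitrary and not those of the actual CCA implementation):

```python
import math
import random

def simulated_annealing(phi, p0, step=0.5, t0=1.0, cooling=0.995,
                        n_iter=4000, seed=1):
    """Minimise phi(p) with random moves accepted a la Metropolis.

    Moves that lower phi are always accepted; uphill moves are accepted
    with probability exp(-d_phi / T), where the temperature T decays
    geometrically so that the search gradually freezes into a minimum.
    """
    rng = random.Random(seed)
    p, best = list(p0), list(p0)
    f, f_best, T = phi(p), phi(p0), t0
    for _ in range(n_iter):
        q = [v + rng.uniform(-step, step) for v in p]
        d = phi(q) - f
        if d < 0 or rng.random() < math.exp(-d / T):
            p, f = q, f + d
            if f < f_best:
                best, f_best = list(p), f
        T *= cooling
    return best, f_best

# a 1-D toy objective with minima near p = -2 and p = +2;
# the linear tilt makes the minimum near -2 the global one
phi = lambda p: (p[0] ** 2 - 4.0) ** 2 + 0.5 * p[0]
```

The probabilistic acceptance of uphill moves at high temperature is what lets the search cross barriers between minima before the geometric cooling freezes it.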
\subsection{Application to 21\,cm data}\label{sec:himodel} Essentially, the linear mixture data model with a constant mixing matrix postulates that the components have the same spatial distribution at all frequencies and vary only in intensity. This is a reasonable approximation for the diffuse foreground components we consider in this work (Galactic synchrotron and free-free emission). The spatial variation of their frequency spectra is due to changes in the properties of the interstellar medium responsible for the emission (see Sec.~\ref{sec:foreg}). It is therefore reasonable to assume that these properties vary smoothly and not significantly over limited regions of the sky. The 21\,cm signal, however, varies with frequency much more than the foregrounds. In this case, a different frequency corresponds to a different redshift, and the spatial variation is due to the combined effect of changing position along the line of sight and evolution. As a result, the 21\,cm signal is expected to be essentially uncorrelated over frequency separations of the order of MHz \citep[see, e.g,][]{Bharadwaj2005,santos2005,mellema2006}. The correlation between subsequent redshift slices for our simulation is computed in Sec.~\ref{sec:21cm}. Following \cite{chapman2012,chapman2013}, we can model the 21\,cm signal as a noise contribution. When applying eq.~(\ref{modhcca}) to our case, we interpret the source vector $\bmath{s}$ as containing the foreground components only, and we model the noise vector as $\bmath{n}=\bmath{n}_{\rm inst}+ \bmath{n}_{\rm HI}$, where $\bmath{n}_{\rm inst}$ is the instrumental noise and $\bmath{n}_{\rm HI}$ is the signal of interest. 
\subsection{Foreground cleaning} Starting from the linear mixture model of eq.~(\ref{modhcca}), we can obtain an estimate $\bmath{ \hat s}$ of the components $\bmath{s}$ through a suitable linear mixture of the data: \begin{equation}\label{recon} \bmath{ \hat s}=\bmath{\sf W}\bmath{x}, \end{equation} where $\bmath{\sf W}$ is called reconstruction matrix. In this work we use a reconstruction matrix given by \begin{equation}\label{gls} \bmath{\sf W}=[\bmath{\sf \hat H_{B}}^{T}\bmath{\sf C}_{\rm n}^{-1}\bmath{\sf \hat H_B}]^{-1}\bmath{\sf \hat H_B}^{T}\bmath{\sf C}_{\rm n}^{-1}, \end{equation} which is called the generalised least square solution (GLS). It depends on the noise covariance matrix $\bmath{\sf C}_{\rm n}$ and on $\bmath{\sf \hat H_B}=\bmath{\sf B} \bmath{\sf \hat H}$, where $\bmath{\sf \hat H}$ is the estimated mixing matrix. The beam matrix is necessary when we work with frequency maps of different resolution; in this case we recover deconvolved components which we need to convolve again with the beam at each frequency. If we work with common resolution data, the beam matrix can be substituted with the Identity matrix and no deconvolution/convolution is performed. We finally clean the frequency maps in Fourier space by subtracting the reconstructed foreground components scaled by means of the estimated mixing matrix \begin{equation}\label{subtr} \bmath{n}_{\rm HI}+\bmath{n}_{\rm inst}=\bmath{x}-\bmath{\sf \hat H}\bmath{\hat{s}}. \end{equation} As a refinement to this simple subtraction scheme, we may generalise eq.~(\ref{subtr}) as \begin{equation}\label{subtr2} \bmath{n}_{\rm HI}+\bmath{n}_{\rm inst}=\bmath{x}-\bmath{\sf R} \bmath{\sf \hat H}\bmath{\hat{s}}, \end{equation} where $\bmath{\sf R}$ is a diagonal $N_{\rm f} \times N_{\rm f}$ matrix whose diagonal elements, $r_{ii}$, are chosen to improve the subtraction. 
Specifically, each of them is a constant factor that adjusts the amplitude of the predicted foreground contamination to be subtracted at a given frequency. Such small adjustments may be necessary as a result of errors in the foreground model. The estimation of the $r_{ii}$ is performed at the foreground subtraction stage, when both the mixing matrix $\bmath{\sf \hat H}$ and the foreground components $\bmath{\hat s}$ have been recovered. For each frequency channel $i$, $r_{ii}$ can be found by minimizing the power spectrum of the residual $\bmath{n}_{\rm HI}+\bmath{n}_{\rm inst}$ for the considered frequency. In the present application, we used as a figure of merit the integral of the residual power spectrum within a specified $k$ range. This is conceptually similar to minimizing the variance of the solution, but with the additional option of selecting the angular scales that are dominated by foreground emission rather than noise or 21\,cm signal. We will come back to this in Sect.~\ref{sec:three}. Since the parameter space is one-dimensional and we only need to sample a limited range around 1, we implemented this minimization simply as a grid search. \section{Simulated data} \label{sec:two} \subsection{EoR signal}\label{sec:21cm} We used the semi-numerical code {\tt 21cmFast} \citep{mesinger2011} to generate 3D realizations of the 21\,cm signal, the brightness temperature $T_{\rm b}$, as a function of redshift. The {\tt 21cmFast} code uses the excursion-set formalism and perturbation theory; it runs much faster than hydrodynamic simulations and produces accurate results down to scales of $\sim 1$\,Mpc. We adopted the best-fit cosmological model from \cite{planck-cosmo} (\emph{Planck+WP} results), defined by $\Omega_{\rm m}=0.315$, $\Omega_{\rm b}=0.046$, $H_0=67.3$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\sigma_8=0.829$, $n_{\rm S}=0.96$.
Besides the cosmological parameters, there are others describing the reionization mechanism, which are poorly known and have a significant effect on the amplitude of the 21\,cm signal as a function of redshift. One of the most important is the reionization efficiency, $\zeta$ \citep{FZH2004}, which determines the mass of ionised material per unit mass of the overdensity. For our simulation we adopt $\zeta=25$. We simulated the evolution of the 21\,cm signal in a box of comoving size $(1200$\,Mpc$)^3$ from $z=6$ to $z=13$ through 70 redshifts spaced by $\Delta z=0.1$. The $T_{\rm b}$ maps are slices (of thickness 2.4\,Mpc) extracted from each box after choosing one direction as the line of sight (LoS). The $\Delta z=0.1$ separation between the boxes corresponds to a comoving separation \begin{equation}\label{comoving} \Delta r_i^{i+1}=\int_{z_i}^{z_{i+1}}c/H(z')\,dz' \end{equation} between two consecutive slices; this separation is larger at low redshift and smaller at high redshift (for our fiducial cosmology, $\Delta r\sim 40$\,Mpc at $z=6$ and $\sim 15$\,Mpc at $z=13$). We used eq.~(\ref{comoving}) to compute the position of the slice along the LoS for each cube. This mimics the realistic situation, in which the 21\,cm signal varies with frequency as an effect of evolution (change of the redshift box) and distance from the observer (change of the position of the slice within the box). We finally cut the 21\,cm maps to an angular size of $3.25^\circ\times3.25^\circ$ (the common field of view for all frequencies, see next section) at each redshift, and regridded them to $256\times256$ pixels. In Fig.~\ref{fig:21cm} we show the rms per redshift (frequency) of the 21\,cm maps (for a pixel size of $\sim 0.8$\,arcmin) and a map of the signal at $z=11$. The signal has an amplitude of $\sim 8$\,mK at $z=13$, which increases to $\sim 11$\,mK at $z=11.5$ and then decreases until it disappears around $z=7.5$.
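Eq.~(\ref{comoving}) can be evaluated numerically; a minimal sketch assuming a flat $\Lambda$CDM model with the fiducial parameters quoted above:

```python
import numpy as np

C_KM_S = 299792.458          # speed of light [km/s]
H0, OM = 67.3, 0.315         # fiducial cosmology; flat, so Omega_Lambda = 1 - Omega_m

def hubble(z):
    """H(z) in km/s/Mpc for a flat LCDM model."""
    return H0 * np.sqrt(OM * (1.0 + z) ** 3 + (1.0 - OM))

def slice_separation(z_lo, z_hi, n=1001):
    """Comoving separation int c/H(z) dz [Mpc], by the trapezoidal rule."""
    z = np.linspace(z_lo, z_hi, n)
    f = C_KM_S / hubble(z)
    return float(np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(z)))

# Delta z = 0.1 gives roughly 42 Mpc at z = 6 and 15 Mpc at z = 13:
print(slice_separation(6.0, 6.1), slice_separation(12.9, 13.0))
```

This reproduces the quoted trend: the same $\Delta z$ spans a larger comoving interval at low redshift, where $H(z)$ is smaller.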
As already mentioned, this particular behaviour depends on the reionization parameters used for the simulation. \begin{figure} \includegraphics[width=6cm,angle=90]{fig1a.ps} \includegraphics[width=8cm]{fig1b.eps} \caption{Simulated 21\,cm signal: rms per frequency for a pixel size of $\sim 0.8$\,arcmin (top) and map at $z=11$ in units of mK$_{\rm RJ}$. The sky patch is $3.25^\circ \times 3.25^\circ$ wide.} \label{fig:21cm} \end{figure} \begin{figure} \includegraphics[width=6cm,angle=90]{fig2.ps} \caption{Pearson correlation coefficients between the simulated 21\,cm maps at $z=8$, 9, 10, 11 and 12, as detailed in the legend, and all other redshifts. The maps are separated by $\Delta z=0.1$. No correlation is shown at $z<7.5$ because the signal is null.} \label{fig:corr} \end{figure} In Fig.~\ref{fig:corr} we show the Pearson correlation coefficients between the 21\,cm maps of the simulation at some representative redshifts ($z=8,9,10,11$ and 12) and all others. No correlation is shown for $z<7.5$ because the 21\,cm signal is null. The measured coefficients allow us to test the hypothesis on the statistics of the signal made in Sec.~\ref{sec:himodel}, namely that it can be modelled as a noise term, uncorrelated between frequencies. Indeed, maps separated by $\Delta z=0.1$ are still correlated at the $10$--20\% level; they can more safely be assumed to be uncorrelated at $\Delta z=0.2$. This separation is larger than what is typically considered for EoR experiments. Given the possible implications for component separation, the correlation of the HI signal between different frequency maps should not be neglected in the simulations. \subsection{Foregrounds}\label{sec:foreg} We neglect in this work the contamination from discrete sources, either resolved or as a background, and only consider the diffuse synchrotron and free-free emission from our Galaxy. Synchrotron radiation is due to cosmic-ray electrons spiralling in the Galactic magnetic field.
Its frequency behaviour reflects the spectrum of the electrons and can be described to first order by a power-law model with spectral index $\beta_{\rm s}$. At frequencies below a few GHz, $\beta_{\rm s}$ ranges from $-2.5$ to $-2.7$ as a function of the position in the sky \citep{broad1989}. Free-free radiation is due to bremsstrahlung emission. Its spectrum is quite uniform and can be predicted to good accuracy; in the optically-thin regime, which holds at high latitudes, it is well approximated between 100 and 200\,MHz by a power-law with spectral index of $-2.08$ \citep{draine2011}. To simulate the synchrotron and free-free emission we relied on existing foreground templates: the \cite{haslam} 408\,MHz map reprocessed by \cite{remazeilles2014}\footnote{http://lambda.gsfc.nasa.gov/product/foreground/2014\_\\haslam\_408\_info.cfm} and the H$\alpha$ map described in \cite{dickinson}, respectively. Using real data as a template for the emission components preserves the non-Gaussian statistics of the foreground emission and the spatial correlation between the two components. These properties could be relevant for component separation purposes and are difficult to reproduce when simulating the components as a random realization. We upgraded the templates in resolution from the original one ($\sim 1^\circ$) by adding to each map a Gaussian random field. We used for this field a power-law behaviour $\ell^{-\beta}$, which is consistent with what is observed for these components, from $\ell=200$ ($\theta=1^\circ$) to $\ell=4000$ ($\theta=3$\,arcmins). The spectral index $\beta$ and the intensity have been chosen to obtain a smooth power spectrum of the final template around the scale of the original beam. The processing of the full-sky templates has been done with the HEALPix \citep{gorski} package. In Fig.~\ref{fig:synch_templ} we show the full-sky power spectra of the original templates and the high-resolution ones.
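The addition of small-scale structure can be sketched, in the flat-sky approximation, as a Gaussian random field generated by filtering white noise in Fourier space with the square root of a power-law spectrum (the grid size and slope below are illustrative, not the values used for the templates):

```python
import numpy as np

def powerlaw_grf(n, beta, amplitude=1.0, seed=0):
    """n x n Gaussian random field with power spectrum P(k) propto k**(-beta),
    obtained by filtering white noise in Fourier space."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)
    k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    k[0, 0] = 1.0                            # avoid division by zero at k = 0
    filt = amplitude * k ** (-beta / 2.0)    # sqrt of the power spectrum
    filt[0, 0] = 0.0                         # zero out the mean mode
    white = np.fft.fft2(rng.normal(size=(n, n)))
    return np.fft.ifft2(white * filt).real

small_scale = powerlaw_grf(256, beta=2.5)    # illustrative slope
```

In practice the amplitude would be tuned, as described above, so that the power spectrum of template plus random field is smooth across the scale of the original beam.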
\begin{figure} \includegraphics[width=6cm,angle=90]{fig3.ps} \caption{Full-sky power spectra of the original (grey) and high-resolution (black) synchrotron and free-free templates at 408\,MHz. The vertical lines at lower and higher multipole correspond to the size of the sky patch and the size of the instrumental beam, respectively.} \label{fig:synch_templ} \end{figure} We extracted $3.25^\circ\times3.25^\circ$ patches from the high-resolution templates. The data on the sphere are projected onto the plane tangential to the centre of the patch and re-gridded with a suitable number of bins in order to correctly sample the original resolution. Each pixel in the projected image is associated with a specific vector normal to the tangential plane and it assumes the value of the HEALPix pixel nearest to the corresponding position on the sphere. Clearly, the projection and re-gridding process will create some distortion in the image at small scales. However, we verified that this has negligible impact on the scales considered in this work. By choosing different central coordinates for the patches we obtain different ``realizations'' of the foreground emission components, which are realistic in terms of their auto- and cross-correlation, at least at large scales. We produced a sample of ten foreground realizations, each extracted from a different area of the full-sky templates. The intensity of the components has been scaled to match typical high-latitude values (20\,K at 325\,MHz for synchrotron and two orders of magnitude lower for free-free, \citealt{jelic2008}). This means that different realizations have similar foreground contamination but a different morphology of the synchrotron and free-free amplitudes. A foreground realization at 150\,MHz is shown in Fig.~\ref{fig:foreg}. \begin{figure} \includegraphics[width=8cm]{fig4a.eps} \includegraphics[width=8cm]{fig4b.eps} \caption{Foreground realization at 150\,MHz in K$_{\rm RJ}$.
The sky patch is $3.25^\circ \times 3.25^\circ$ wide.} \label{fig:foreg} \end{figure} We scaled the free-free component in frequency with a power-law model, $I(\nu)\propto \nu^{\beta_{\rm ff}}$, with $\beta_{\rm ff}=-2.08$. For the synchrotron component we adopted four simulated spectra of different complexity, which we called S0, S1, S2 and S3: \begin{itemize} \item S0: power-law spectrum $I(\nu)\propto \nu^{\beta_{\rm s}}$, with constant spectral index $\beta_{\rm s}=-2.6$. This model represents the ideal case, where there are no spatial variations or departures from the smooth power-law behaviour. It is not a realistic model, but it is useful as a term of comparison for the more complicated models. \item S1: power-law spectrum $I(\nu)\propto \nu^{\beta_{\rm s}}$, with spatially-varying spectral index. The spectral index map is a random field with mean $-2.6$ and standard deviation 0.02 on the $3.25^\circ \times 3.25^\circ$ sky patch and for a pixel size of 0.8\,arcmin. The random field has a power-law behaviour in $\ell$, similar to what is expected for the amplitude of the components. \item S2: non-smooth spectrum oscillating around a power-law with $\beta_{\rm s}=-2.6$. The oscillations (at the 3\,\% level) are obtained by multiplying the power-law spectrum by random numbers drawn from a Gaussian distribution with mean 1 and standard deviation 0.03. Rather than being physically motivated, this model has been introduced to challenge the hypothesis of spectral smoothness, which is used by many EoR component separation approaches. \item S3: non-parametric, curved synchrotron spectrum produced with the {\tt GalProp} code \citep{galprop}. This spectrum is physically motivated and it exploits models for the Galactic magnetic field. For this particular model, the spectral index steepens significantly in the frequency range of interest: from $\sim -2.65$ at $\nu=100$\,MHz to $\sim -2.85$ at $\nu=200$\,MHz.
\end{itemize} In the left panel of Fig.~\ref{fig:foreg_models} we compare the synchrotron spectra for the four models considered; in the right panel we show the spectral index map for simulation S1 and the foreground realization of Fig.~\ref{fig:foreg}. \begin{figure*} \includegraphics[width=6cm,angle=90]{fig5a.ps} \includegraphics[width=8cm]{fig5b.eps} \caption{\emph{Left}: Synchrotron spectra for all the considered models divided by the spectrum S0. Black solid line: spectrum S0; grey shaded region: area spanned by the spatially-varying spectra of S1; diamonds: spectrum S2; black dashed line: spectrum S3. All the spectra are normalised to be the same at 100\,MHz. \emph{Right}: spectral index map for the spectrum S1.} \label{fig:foreg_models} \end{figure*} \subsection{Instrument} \label{sec:inst} The SKA instrument relevant for EoR observations is the low-frequency aperture array (LFAA). This is an aperture array consisting of around 260\,000 wide-bandwidth antennas of a single design, to be mounted at the Australian SKA site. The station diameter is around 35\,m. The configuration is very compact, with 75\% of the antennas within a 2\,km diameter core. The frequency coverage is from 50 to 350\,MHz. In this work we consider only the core of the low-frequency array for SKA phase 1 over a frequency range of 100--200\,MHz. With these instrumental specifications we obtain a field of view (FoV) of $\sim6.5^\circ\times6.5^\circ$ and a synthesized beam of FWHM $\sim6.7$\,arcmins at 100\,MHz, which become $\sim3.25^\circ\times3.25^\circ$ and $\sim3.35$\,arcmins, respectively, at 200\,MHz. However, building a simulation with a frequency-dependent beam is quite computationally demanding. For this work, we construct a mask for the $uv$ plane that only contains the $uv$ points sampled at all frequencies. By applying this mask to the data in the Fourier domain we equalize the resolution of the maps between frequencies.
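This $uv$-plane masking can be sketched as follows (the baseline list, grid size and cell size below are hypothetical stand-ins for the real array layout and imaging setup):

```python
import numpy as np

def common_uv_mask(baselines_m, freqs_hz, ngrid, cell_wav):
    """Boolean ngrid x ngrid mask of uv-grid cells sampled at EVERY frequency.
    baselines_m: (N, 2) array of baseline coordinates in metres."""
    c = 299792458.0
    mask = np.ones((ngrid, ngrid), dtype=bool)
    for nu in freqs_hz:
        uv = baselines_m * nu / c / cell_wav           # baselines in grid cells
        idx = np.round(uv).astype(int) % ngrid
        sampled = np.zeros((ngrid, ngrid), dtype=bool)
        sampled[idx[:, 0], idx[:, 1]] = True
        mask &= sampled                                # keep cells hit at all bands
    return mask

def equalise_resolution(image, mask):
    """Retain only the commonly-sampled Fourier modes of a map."""
    return np.fft.ifft2(np.fft.fft2(image) * mask).real

rng = np.random.default_rng(0)
baselines = rng.normal(scale=200.0, size=(500, 2))     # hypothetical compact layout [m]
freqs = np.linspace(100e6, 200e6, 5)                   # five bands across 100--200 MHz
mask = common_uv_mask(baselines, freqs, ngrid=64, cell_wav=50.0)
```

Because a physical baseline corresponds to a longer baseline in wavelengths at higher frequency, the intersection over bands discards the modes not sampled everywhere, which is the data loss discussed below.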
The price we pay is a data loss, which results in a worsening of the instrumental performance. In Fig.~\ref{fig:inst} we show the $uv$ coverage we used for all frequencies, obtained by selecting the points sampled at all bands from 100 to 200\,MHz. The single-frequency coverage corresponds to a proposed array configuration and a zenith observation; the rotation of the sky has been neglected for simplicity. The masking of the $uv$ plane results in a loss of $\sim 3$\,\% of the $uv$ samples with respect to a single-frequency observation (compare the black points to the grey ones in Fig.~\ref{fig:inst}, corresponding to 100\,MHz). A cut of the dirty beam corresponding to this sampling is shown in the bottom panel of Fig.~\ref{fig:inst}. The synthesized beam has a FWHM of 6.7\,arcmin and the common FoV is $3.25^\circ \times 3.25^\circ$. We stress that these specifications are pessimistic: by considering the whole array instead of just the core, and by optimising the way we deal with the frequency-dependent beam and FoV, they can be improved significantly. The sensitivity requirement for SKA phase 1 to measure the EoR signal is an rms noise of $\sim 1$\,mK on scales of 5\,arcmins at frequencies around 100\,MHz. For a given array configuration, the exact noise levels depend on the integration time, the declination of the source and the bandwidth. They also depend on frequency, since both the total noise and the gain corresponding to each element of the array are a function of $\nu$. An accurate simulation of the noise properties in our case should also account for the masking in the $uv$ space that we performed to equalize the resolution. Such a simulation is, however, beyond the scope of this paper. The focus of this work is on foreground emission which, for a sensitive instrument such as the SKA, is orders of magnitude stronger than noise.
Therefore, we simplified the noise description by neglecting the frequency dependence and we assumed a brightness sensitivity of 1\,mK over the whole frequency range. The noise maps have been simulated as a Gaussian random field in the $uv$ space and filtered with the $uv$ coverage shown in Fig.~\ref{fig:inst}. They have been subsequently transformed to pixel space and divided by their rms per pixel (computed for a pixel size of $3.04 \times 3.04$\,arcmins) to obtain the 1\,mK rms level. \begin{figure} \includegraphics[width=6cm,angle=90]{fig6a.eps} \includegraphics[width=6cm,angle=90]{fig6b.ps} \caption{Top: $uv$ coverage containing the points sampled at all frequencies (black dots), for a zenith observation and neglecting the rotation of the sky. The grey crosses show the additional points sampled at 100\,MHz that are lost after the $uv$ masking. Bottom: cut of the dirty beam corresponding to the coverage shown in the top panel.} \label{fig:inst} \end{figure} \section{Analysis}\label{sec:analysis} \subsection{Fitting for the synchrotron spectrum}\label{sec:cca_fit} We fitted for the spectra of the foregrounds for the simulations S0, S1, S2 and S3 with the CCA method described in Sect.~\ref{sec:method}. The estimation has been performed for ten foreground realizations (see Sect.~\ref{sec:foreg}). We modelled the mixing matrix as consisting of two components, free-free and synchrotron. We assumed the free-free spectrum to be known and focussed on the estimation of the synchrotron spectrum. This choice is reasonable because the uncertainties on the spectrum of the free-free emission are much smaller than those on the synchrotron emission. Moreover, the free-free emission is orders of magnitude fainter than synchrotron, so that the uncertainty on the latter component dominates the error budget. In principle, we should characterize the noise component as consisting of both instrumental noise and HI signal.
This is particularly true for the SKA, given that the instrumental noise is lower than the expected signal for a wide redshift range. Nonetheless, we characterised only the instrumental noise, so as not to exploit our prior knowledge of the intensity of the simulated 21\,cm signal. We verified that, because the foreground emission is so bright, the estimated synchrotron spectrum is very stable against appreciable changes of the assumed noise levels. We modelled the synchrotron emission as a power law, and fitted for a synchrotron spectral index. This model is fully adequate only for S0 and it is simpler than the actual input foreground models for the other simulations. This reproduces the typical situation in which the models that we use are only an approximate representation of the real data. In order to assess the goodness of the CCA results, for each model we computed a true ``effective'' spectral index $\beta_{\rm s}$ to be compared with the estimated one, $\hat \beta_{\rm s}$. This effective spectral index is the true spectral index for S0; the average of the spatially-varying spectral index map for S1; the slope of the underlying smooth power-law spectrum for S2; and the slope between 100 and 200\,MHz for S3. In Fig.~\ref{fig:histog} we show the histogram of the errors $\Delta \beta_{\rm s}=\hat{\beta}_{\rm s}-\beta_{\rm s}$ for the 10 foreground realizations and the 4 spectral models considered. This represents the random estimation error due to the CCA method, without considering errors due to incorrect modelling of the true synchrotron spectrum. Given the strength of the contaminants, it is very important that the random error is small. For S0 and S2 the estimation is very good both in terms of the width of the distribution and of its offset from zero. Spatial variations of the spectral index (S1) mainly broaden the error distribution while keeping the offset from zero low.
Conversely, the steepening of the spectrum (S3) mainly shifts the distribution from zero without enlarging it. In this case, the estimated index is a good representation of the spectral index at low frequencies, where the synchrotron emission is stronger and the spectral index is flatter. \begin{figure} \begin{center} \includegraphics[width=7cm,angle=90]{fig7.ps} \caption{Histogram of $\Delta \beta_{\rm s}= \hat{\beta}_{\rm s}-\beta_{\rm s}$ (estimated index$-$true effective index) for 10 foreground realizations and for the 4 simulated synchrotron spectra as detailed in the legend.} \label{fig:histog} \end{center} \end{figure} Overall, the CCA method is able to fit the slope of the spectrum with good accuracy, without being challenged too much by the higher complexity of the true spectrum with respect to the adopted model. For the simulation with steepening of the spectral index there is a bias towards the flattest slope. \subsection{Cleaned maps} \label{sec:three} In this section we visualise the residual foreground contamination due to the random and model errors on the estimated mixing matrix. We show the cleaned maps in pixel space at the central frequency ($\nu=150$\,MHz, $z=8.5$) and for the foreground realization shown in Fig.~\ref{fig:foreg}. The cleaned maps are obtained by Fourier-transforming the data cleaned in the visibility plane and are convolved with the instrumental beam. In Fig.~\ref{fig:true_150} we show the true foreground contamination and the true HI signal. Both maps are convolved with the beam. \begin{figure} \includegraphics[width=8cm]{fig8.eps} \caption{True foreground emission (left) and HI signal (right) at 150\,MHz ($z=8.5$).} \label{fig:true_150} \end{figure} \subsubsection{Simulation S0} In Fig.~\ref{fig:m0} we show the reconstructions at 150\,MHz for S0 with and without noise in the data. The residual foreground contamination is well below the noise, due to the lack of any model error and any non-idealities of the data.
\begin{figure} \includegraphics[width=8cm]{fig9.eps} \caption{Ideal case (S0): reconstructed HI map at 150\,MHz ($z=8.5$) with (left) and without (right) noise in the data, to be compared with the true input signal (right panel of Fig.~\ref{fig:true_150}).} \label{fig:m0} \end{figure} \subsubsection{Simulation S1} The spatial variability of the synchrotron spectral index (tested by S1) is something that we cannot model explicitly in the CCA method. In fact, its second-order statistics constraint assumes that the mixing matrix is constant in the considered area of the sky. In principle, one could divide the FoV into smaller areas and process them separately. This strategy is successfully used in \cite{ricciardi2010} for CMB data, where large sky areas are considered and the spectral properties of the foregrounds are likely to vary significantly across them. However, the sky patches cannot be arbitrarily small, because a robust computation of the data spectra and cross-spectra needs good statistics. This means that in the present application, where the sky areas considered are already relatively small, this approach could be inefficient. Alternatively, following \cite{stolyarov2005}, a single component having spatially-varying frequency scaling can be modelled as the sum of multiple components with uniform frequency scaling. A suitable decomposition is obtained by expanding the spectral model of the synchrotron component in a Taylor series around the mean spectral index. Thus, the first-order component has spectrum $I(\nu) \propto \nu^{\hat{\beta}_{\rm s}}$, the second-order one has spectrum $I(\nu) \propto \nu^{\hat{\beta}_{\rm s}-1}$, and so on. In Fig.~\ref{fig:m1} we show the results at 150\,MHz for two reconstructions: one where we fit for the synchrotron and free-free components only, and one where we included an additional foreground component, with spectrum $I(\nu) \propto \nu^{\hat{\beta}_{\rm s}-1}$, to absorb the errors due to the spatial variations of the synchrotron spectral index.
In the first case the errors in the foreground models cause a residual contamination which is too large compared to the signal we aim to recover. The morphology of the residual map reflects that of the spatially-varying spectral index map (shown in the right panel of Fig.~\ref{fig:foreg_models}). The inclusion of the extra component improves the results substantially; the residual map, reduced by $\sim 80\,\%$, is now below the 21\,cm signal. In principle, one could add more terms of the Taylor expansion to reach even higher accuracy; however, the noise in the reconstruction increases with the number of components. In our case, the inclusion of a fourth component in the model does not appreciably improve the cleaning. \begin{figure} \includegraphics[width=8cm]{fig10.eps} \caption{Effect of spatially-varying synchrotron spectral index (S1): reconstructed HI map at 150\,MHz ($z=8.5$) with two (left) and three (right) effective foreground components, to be compared with the true input signal (right panel of Fig.~\ref{fig:true_150}).} \label{fig:m1} \end{figure} \subsubsection{Simulation S2} In the left panel of Fig.~\ref{fig:m2} we show the reconstructed HI emission at 150\,MHz for S2 obtained with the simple subtraction method [eq.~(\ref{subtr})]. The 3\,\% oscillations around the power-law spectrum are enough to swamp the detection of the HI signal. The morphology of the reconstructed map is that of the foreground component shown in the left panel of Fig.~\ref{fig:true_150}. In the right panel of Fig.~\ref{fig:m2} we show the results of the improved subtraction method [eq.~(\ref{subtr2})]. This method is able to remove the foreground contamination efficiently. As mentioned in Sect.~\ref{sec:method}, the improved subtraction consists of estimating a factor, $r_{ii}$, to calibrate the foreground map to be subtracted from the $i$-th frequency map. These factors have been chosen as those minimising the power of the fluctuations of the residual map.
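A minimal sketch of this one-dimensional grid search for a single channel (the grid and the figure of merit below are illustrative; the variance stands in for the band-limited power integral used in the text):

```python
import numpy as np

def calibrate_subtraction(x_nu, fg_nu, grid=None):
    """Grid search for the factor r minimising the residual power of
    x_nu - r * fg_nu in one frequency channel (variance used here as a
    stand-in for the large-scale power figure of merit)."""
    if grid is None:
        grid = np.linspace(0.9, 1.1, 201)    # limited range around 1
    power = [np.var(x_nu - r * fg_nu) for r in grid]
    return grid[int(np.argmin(power))]

rng = np.random.default_rng(0)
fg = rng.normal(size=10000)                  # predicted foreground map (toy)
data = 1.03 * fg + 0.01 * rng.normal(size=10000)  # true contamination 3% stronger
r_best = calibrate_subtraction(data, fg)     # recovers a factor close to 1.03
```

The minimisation drives the residual towards the component of the data that is uncorrelated with the predicted foreground, which is why the recovered factor tracks the true amplitude mismatch.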
Because the estimated foreground component is obtained by linearly combining all the frequency maps, it should not correlate significantly with any single HI map or noise map. Therefore, this procedure is unlikely to subtract the signal of interest. However, we minimised only the large-scale power (scales larger than several arcminutes) which, in the presence of subtraction errors, is dominated by foreground emission. As we can see from Fig.~\ref{fig:recalib}, the power of the difference map varies substantially for small variations of $r_{ii}$, and may prefer a value of $r_{ii} \neq 1$. This means that the original model either slightly underestimated or overestimated the foreground contamination, as a result of the unmodelled random fluctuations. We verified that the improved subtraction method is only necessary in the presence of significant model errors; for the random oscillations considered here, the simple subtraction method of eq.~(\ref{subtr}) is good enough if the oscillations are below 1\,\%. \begin{figure} \includegraphics[width=8cm]{fig11.eps} \caption{Effect of 3\,\% random oscillations around the smooth spectrum (S2): reconstructed HI map at 150\,MHz ($z=8.5$) with the simple (left) and improved (right) subtraction method, to be compared with the true input signal (right panel of Fig.~\ref{fig:true_150}).} \label{fig:m2} \end{figure} \begin{figure} \includegraphics[width=6cm,angle=90]{fig12.ps} \caption{Power of the residual map versus the factor $r_{ii}$ applied to the predicted foreground contamination before subtraction (thick lines) and resulting optimal $r_{ii}$ value (thin lines), at five different frequencies (different line styles).} \label{fig:recalib} \end{figure} \subsubsection{Simulation S3} In Fig.~\ref{fig:m3} we show the results at 150\,MHz for the simulation S3.
Similarly to the previous results, the simple subtraction of the predicted power-law component (left panel) is not accurate enough, but the improved subtraction scheme of eq.~(\ref{subtr2}) (right panel) is able to correct for departures from the spectral model. \begin{figure} \includegraphics[width=8cm]{fig13.eps} \caption{Effect of curvature of the synchrotron spectrum (S3): reconstructed HI map at 150\,MHz ($z=8.5$) with the simple (left) and improved (right) subtraction method, to be compared with the true input signal (right panel of Fig.~\ref{fig:true_150}).} \label{fig:m3} \end{figure} \subsubsection{Summary} We were able to obtain good reconstructions of the underlying HI signal for all the simulations considered. The fitted CCA spectrum achieved a very good first-order subtraction of the foreground contamination (the contamination is reduced by more than 3 orders of magnitude). Second-order corrections were necessary to compensate for departures of the true spectra from the adopted models. In particular, the simulation S1 required the addition of a further foreground component, while simulations S2 and S3 required adjustments in the subtraction method. We note that these two corrections are conceptually different. The addition of extra components modifies the morphology of the total foreground emission with frequency, and therefore it compensates for errors in the foreground pattern, such as the one induced by a spatial variation of the spectral properties. On the other hand, the calibration of the subtraction modifies the intensity of the foreground emission and not its pattern, so it is useful in the presence of an error in the average foreground frequency spectrum, such as the one induced by an incorrect modelling. In a real situation, both errors are likely to be present, and these two corrections can be applied sequentially.
In the next section we show results obtained by performing both corrections for all the models, irrespective of whether or not they are needed by the data. When they are not, the amplitude of the correction turns out to be null or negligible. \section{Results}\label{sec:results} In this Section we present a more quantitative assessment of the results for the whole frequency range considered and all the foreground models. \subsection{Statistics on pixel maps} We first considered statistics on the 21\,cm maps in pixel space, in particular the rms of the foreground-cleaned signal compared to the true one, and the Pearson correlation coefficient between the cleaned and true signals. Both the cleaned and the true signals are convolved with the beam and sampled with a 0.8\,arcmin pixel size. The cleaned signal is noisy while the true one is noiseless. The rms and correlation are shown respectively in the top and in the bottom panel of Fig.~\ref{fig:correlations}. Note the difference between the rms of the true signal in Figs.~\ref{fig:correlations} and \ref{fig:21cm}. The signal in Fig.~\ref{fig:correlations} is significantly lower, especially at high redshift. This is the effect of the convolution of the maps with the 6.7\,arcmin instrumental beam. Since most of the power of the high-redshift signal comes from small scales, the convolution suppresses the rms significantly in this case. Both the rms and the correlation plots are also affected by the presence of noise in the recovered signal. To interpret the noise contribution correctly, in the rms plot we show the noise level, and in the correlation plot we show the correlation between the true noisy signal and the true noiseless signal, which represents the highest correlation we can obtain in the presence of noise.
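The two pixel-space statistics can be computed directly from the maps; a minimal sketch with hypothetical arrays in place of the cleaned and true maps:

```python
import numpy as np

def map_stats(recovered, true):
    """rms of the recovered map and its Pearson correlation with the truth."""
    rms = np.std(recovered)
    pearson = np.corrcoef(recovered.ravel(), true.ravel())[0, 1]
    return rms, pearson

rng = np.random.default_rng(0)
true_map = rng.normal(size=(256, 256))                       # toy "true" signal
recovered = true_map + 0.5 * rng.normal(size=(256, 256))     # noisy toy recovery
rms, r = map_stats(recovered, true_map)
```

As discussed above, the noise in the recovered map both inflates its rms above that of the true signal and caps the achievable correlation below unity, which is why the noise level and the true-noisy versus true-noiseless correlation are shown for reference.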
The rms and the correlation plots give very similar indications: there is a wide frequency range, $110$--150\,MHz ($z=12$--8.5), where the cleaning is very good; at higher frequencies the signal is dominated by noise, while at lower ones there is an excess due to residual foreground contamination. The details of the performance vary for the different foreground simulations considered. Overall, the worst results are obtained for S1, featuring spatial variability of the spectral properties. The non-smooth features, tested with the simulation S2, are troublesome mostly at frequencies lower than 110\,MHz. It is interesting to note that the curved synchrotron spectrum (S3) performs nearly at the level of the ideal model S0. \begin{figure} \includegraphics[width=6cm,angle=90]{fig14a.ps} \includegraphics[width=6cm,angle=90]{fig14b.ps} \caption{\emph{Top}: rms of the true signal (black line) compared to the rms of the foreground-subtracted signals for the different models (coloured lines). The foreground-subtracted signal is noisy; the noise level is shown by the black dot-dashed line. \emph{Bottom}: Pearson correlation coefficients between the true smoothed HI signal and the reconstructed one for the different models (coloured lines). The black line is the correlation with the true noisy signal and represents the best result we can achieve. Both the rms and the correlations refer to a pixel size of 0.8\,arcmin and a beam of 6.7\,arcmin.} \label{fig:correlations} \end{figure} The pixel statistics discussed so far are easy to interpret and very useful to visualize the results. However, they may not give a complete picture because they depend on the choice of the pixel size, which is quite arbitrary, and they typically probe the signal at the smallest scales. In the next subsection we also consider 2D power spectra, which give a description of the signal, noise and residual foregrounds as a function of the spatial scale.
\subsection{Power spectra} \label{sec:four} The 2D power spectra of the 21\,cm maps, ${C}_{\rm 2D}(k)$, are computed by averaging the 2D Fourier transform of the map in circular bins corresponding to a Fourier mode $k$ [eq.~(\ref{dataspectrum})]. The results are presented in terms of $\Delta^2_{\rm 2D} (k)=\frac{A}{2\pi} {C}_{2D}(k)$, where $A$ is the area of the simulation map. In Fig.~\ref{fig:2dspecs} we show the power spectrum of the true convolved signal at four redshifts ($z=6.5, 8.5, 10.5$ and 12.5, thick black lines) compared to the power spectrum of the residual foreground contamination (cleaned noiseless signal $-$ true signal) for all the models (coloured lines) and the power spectrum of the noise averaged over 100 noise Monte Carlo (MC) realizations (grey dotted lines). The accuracy of the spectra depends on the balance between the residual foreground contamination, the noise level, and the intensity of the 21\,cm signal. At $z=6.5$, where our simulation contains no HI signal, foreground residuals constitute an upper limit to the detection. At $z= 8.5$ and $z= 12.5$ the signal is above the residual contamination for a wide range of scales; the recovery of the largest scales is hampered by foreground contamination, at a level that varies for different foreground models. At $z=10.5$ we have the best recovery; foreground residuals are around two orders of magnitude below the input power spectrum. The simulation S1 gives the largest large-scale residuals at low redshift (high frequency) but performs reasonably well at high redshift (low frequency). This means that, for this model, the cleaning is more efficient for the most contaminated channels. This is a consequence of the spatial variations of the synchrotron spectral index, which cause the morphology of the synchrotron component to change with frequency.
Because this component is stronger at lower frequency, the reconstructed synchrotron map is more similar to the low-frequency synchrotron emission, hence it performs better when subtracted from the low-frequency channels. Conversely, the simulation S2 performs better at high frequency than at low frequency. The presence of non-smooth features in the spectrum causes a random error in the estimation of the mixing matrix. The resulting foreground contamination is worse where the foregrounds are stronger, i.e. at low frequency. For all the considered foreground models, we obtain a good cleaning of the signal over a wide range of scales and redshifts. This shows that our approach is a powerful one for cleaning the HI signal from foreground contamination. \begin{figure} \includegraphics[angle=90,width=8cm]{fig15.ps} \caption{Power spectra of the true HI signal (solid thick lines) for different redshifts, as specified in each panel, compared to the power spectrum of foreground residuals (coloured lines, colours and line styles as in Fig.~\ref{fig:correlations}) and of the noise averaged over the 100 noise realizations (grey dotted lines).} \label{fig:2dspecs} \end{figure} \section{Conclusions}\label{sec:conclu} We have tested the performance of the CCA component separation method \citep{bonaldi2006,ricciardi2010}, developed for CMB data analysis, on simulated SKA data to study the EoR. Our simulation includes the EoR signal, generated with the {\tt 21cmFast} code \citep{mesinger2011}, and diffuse synchrotron and free-free foregrounds, simulated starting from the 408\,MHz \cite{haslam} map reprocessed by Remazeilles et al. (in prep) and the H$\alpha$ \cite{dickinson} template. We considered 70 frequency bands from 100 to 200\,MHz ($z=6$--13, spaced by $\Delta z=0.1$).
The formalization of the component separation problem used in this and previous work \citep[e.g.,][]{chapman2012,chapman2013} models the 21\,cm signal from neutral hydrogen as noise, based on the low correlation between different redshift slices. We tested this hypothesis with our HI simulations and found that the correlation is still 10--20\% for a redshift separation of $\Delta z=0.1$ and can be neglected only for $\Delta z\geq0.2$. Another crucial aspect for EoR component separation is the complexity of the frequency behaviour of the foregrounds and our ability to model them accurately. In our case we focussed on the synchrotron emission, which is by far the dominant foreground component. We considered four synchrotron models to test the effect of different non-idealities in the frequency spectrum: spatial variations, curvature, and the presence of non-smooth features. Such effects are in principle very problematic for parametric methods such as the CCA. However, when modelling the synchrotron spectrum as a power law, we still get a good recovery of the spectral index. There is a small random error ($\Delta \beta_{\rm s}=0.01$--0.05, depending on the foreground simulation) and, for the model with a curved spectrum, a systematic error of 0.15. This accuracy in the synchrotron spectrum reduces the foreground contamination by several orders of magnitude when using a standard method to reconstruct the foreground maps and subtract them from the data. However, the cleaning is not always sufficient for the measurement of the tiny 21\,cm signal. The results can be improved substantially by modifying the foreground reconstruction and subtraction methods to make them robust against model errors. With these modifications, we finally obtain a very good cleaning of the cosmological signal for a wide range of frequencies (110--150\,MHz, $z=12$--8.5) and scales ($\log(k)\geq 1.5$\,Mpc$^{-1}$) for all the considered foreground simulations.
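The slice-decorrelation test described above (treating the 21\,cm signal as noise is justified only if nearby redshift slices are weakly correlated) can be sketched as follows; the function name and the list-of-slices interface are our own illustration, not the code used in this work:

```python
import numpy as np

def slice_correlation(slices):
    """Mean Pearson correlation between HI maps separated by
    1, 2, ... redshift steps (uniform spacing dz between slices)."""
    flat = [np.asarray(m, dtype=float).ravel() for m in slices]
    n = len(flat)
    corr = {}
    for sep in range(1, n):
        # correlate every pair of slices separated by `sep` steps
        rs = [np.corrcoef(flat[i], flat[i + sep])[0, 1]
              for i in range(n - sep)]
        corr[sep] = float(np.mean(rs))
    return corr
```

Applied to an HI simulation with $\Delta z=0.1$ spacing, a mean correlation of 10--20\% at a separation of one step that becomes negligible for two or more steps would reproduce the behaviour reported above.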
This work showed that the CCA method is very promising for improving our knowledge of the foreground emission and cleaning the EoR signal from foreground contamination, also in the presence of random and systematic departures of the true spectra from the parametric models adopted. The next steps towards applying this method to real SKA data are to test it against a frequency-dependent resolution and in the presence of point-source contaminants, both resolved and as a background. \section{Acknowledgements} We thank the anonymous referee for useful suggestions that improved the paper. AB and MLB acknowledge support from the European Research Council under the EC FP7 grant number 280127. MLB also acknowledges support from an STFC Advanced/Halliday fellowship. We thank K. Grainge for providing the SKA specifications, and F. Abdalla and E. Chapman for useful comments and suggestions. \bibliographystyle{mn2e}
\section{Introduction} \label{sec:intro} \vspace{-0.2cm} Dynamic Programming (DP) algorithms return an optimal policy, given a model of the environment. Their convergence in the presence of lookahead policies~\cite{bertsekas1996neuro,efroni2019combine} and their performance in different approximate settings~\cite{bertsekas1996neuro,munos2007performance,scherrer2012approximate,geist2013algorithmic,abel2017near,efroni2018multiple} have been well-studied. Standard DP algorithms require simultaneous access to the {\em entire} state space at run time, and as such, cannot be used in practice when the number of states is too large. Real Time Dynamic Programming (RTDP)~\cite{barto1995learning,strehl2006pac} is a DP-based algorithm that mitigates the need to access all states simultaneously. Similarly to DP, RTDP updates are based on the Bellman operator, calculated by accessing the model of the environment. However, unlike DP, RTDP learns how to act by interacting with the environment. In each episode, RTDP interacts with the environment, acts according to the greedy action w.r.t.~the Bellman operator, and samples a trajectory. RTDP is, therefore, an online planning algorithm. Despite the popularity and simplicity of RTDP and its extensions~\cite{bonet2000planning,bonet2003labeled,mcmahan2005bounded,bulitko2006learning,strehl2006pac,kolobov2012lrtdp}, a precise characterization of its convergence was only recently established for finite-horizon MDPs~\cite{efroni2019tight}. While lookahead policies in RTDP are expected to improve the convergence in some of these scenarios, as they do for DP~\cite{bertsekas1996neuro,efroni2019combine}, to the best of our knowledge, these questions have not been addressed in previous literature. Moreover, previous research has not addressed the question of how lookahead policies should be used in RTDP, nor studied RTDP's sensitivity to possible approximation errors.
Such errors can arise due to a misspecified model, or exist in value function updates, when e.g.,~function approximation is used. In this paper, we initiate a comprehensive study of lookahead-policy based RTDP with approximation errors in \emph{online planning}. We start by addressing the computational complexity of calculating lookahead policies and study its advantages in approximate settings. Lookahead policies can be computed naively by exhaustive search in $O(A^h)$ for deterministic environments or $O(A^{Sh})$ for stochastic environments. Since such an approach is infeasible, we offer in Section~\ref{sec: episodic complexity} an alternative approach for obtaining a lookahead policy with a computational cost that depends linearly on a natural measure: the total number of states reachable from a state in $h$ time steps. The suggested approach is applicable in both deterministic and stochastic environments. In Section~\ref{sec: mutiple step rtdp}, we introduce and analyze $h$-RTDP, an RTDP-based algorithm that replaces the 1-step greedy policy used in RTDP with an $h$-step lookahead policy. The analysis of $h$-RTDP reveals that the sample complexity is improved by increasing the lookahead horizon $h$. To the best of our knowledge, this is the first theoretical result that relates sample complexity to the lookahead horizon in the online planning setting. In Section~\ref{sec: approximate mutiple step rtdp}, we analyze $h$-RTDP in the presence of three types of approximation: when (i) an inexact model is used, instead of the true one, (ii) the value updates contain error, and finally (iii) approximate state abstraction is used. Interestingly, for approximate state abstraction, $h$-RTDP's convergence and computational complexity depend on the size of the \emph{abstract state space}.
In a broader context, this work shows that RTDP-like algorithms could be a good alternative to Monte Carlo tree search (MCTS)~\cite{browne2012survey} algorithms, such as upper confidence trees (UCT)~\cite{kocsis2006bandit}, an issue that was empirically investigated in~\cite{kolobov2012lrtdp}. We establish strong convergence guarantees for extensions of $h$-RTDP: under no assumption other than initial optimistic value, RTDP-like algorithms combined with lookahead policies converge in polynomial time to an optimal policy (see Table~\ref{tab:rtdp vs dp}), and their approximations inherit the asymptotic performance of approximate DP (ADP). Unlike RTDP, MCTS acts by using a $\sqrt{\log N/N}$ bonus term instead of optimistic initialization. However, in general, its convergence can be quite poor, even worse than uniformly random sampling~\cite{coquelin2007bandit,munos2014bandits}. \vspace{-0.2cm} \section{Preliminaries} \label{sec:prelim} \vspace{-0.2cm} {\bf Finite Horizon MDPs. } A finite-horizon MDP~\cite{bertsekas1996neuro} with time-independent dynamics\footnote{The results can also be applied to time-dependent MDPs, however, the notations will be more involved.} is a tuple $\mathcal{M} = \br*{\mathcal{S},\mathcal{A}, r, p, H}$, where $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces with cardinalities $S$ and $A$, respectively, $r(s,a)\in [0,1]$ is the immediate reward of taking action $a$ at state $s$, and $p(s'|s,a)$ is the probability of transitioning to state $s'$ upon taking action $a$ at state $s$. The initial state in each episode is arbitrarily chosen and $H\in \mathbb{N}$ is the MDP's horizon. For any $N\in \mathbb{N}$, denote $[N] := \brc*{1,\ldots,N}$. A deterministic policy $\pi: \mathcal{S}\times[H]\rightarrow \mathcal{A}$ is a mapping from states and time step indices to actions. We denote by $a_t := \pi_t(s)$ the action taken at time $t$ at state $s$ according to a policy $\pi$. 
The quality of a policy $\pi$ from a state $s$ at time $t$ is measured by its value function, i.e.,~$V_t^\pi(s) := \mathbb{E}\big[\sum_{t'=t}^H r\br*{s_{t'},\pi_{t'}(s_{t'})}\mid s_t=s\big]$, where the expectation is over all the randomness in the environment. An optimal policy maximizes this value for all states $s\in\mathcal{S}$ and time steps $t\in [H]$, i.e.,~$V_t^*(s) := \max_{\pi} V_t^\pi(s)$, and satisfies the optimal Bellman equation, \vspace{-0.15in} \begin{align} V_t^*(s) &= TV_{t+1}^*(s) := \max_{a}\big(r(s,a) + p(\cdot|s,a) V_{t+1}^*\big) \nonumber \\ &= \max_{a}\mathbb{E}\big[r(s_1,a) + V_{t+1}^*(s_2) \mid s_1 = s\big].\label{eq:bellman} \end{align} \vspace{-0.15in} By repeatedly applying the optimal Bellman operator $T$, for any $h\in[H]$, we have \vspace{-0.15in} \begin{align} \label{eq:multistep bellman} V_t^*(s) = T^{h}V_{t+h}^*(s) &= \max_{a}\big(r(s,a) + p(\cdot|s,a) T^{h-1}V_{t+h}^*\big) \nonumber \\ &= \max_{\pi_t,\ldots,\pi_{t+h-1}} \mathbb{E}\Big[\sum_{t'=1}^h r(s_{t'},\pi_{t+t'-1}(s_{t'})) + V_{t+h}^*(s_{h+1}) \mid s_1 = s\Big]. \end{align} \vspace{-0.15in} We refer to $T^h$ as the $h$-step optimal Bellman operator. A similar Bellman recursion is defined for the value of a given policy, $\pi$, i.e.,~$V^\pi$, as $V_t^\pi(s) = T^h_\pi V^\pi_{t+h}(s) := r(s,\pi_t(s)) + p(\cdot|s,\pi_t(s)) T^{h-1}_\pi V_{t+h}^\pi$, where $T^h_\pi$ is the $h$-step Bellman operator of policy $\pi$. {\bf $h$-Lookahead Policy.} An $h$-lookahead policy w.r.t.~a value function $V\in\mathbb R^{S}$ returns the optimal first action in an $h$-horizon MDP. For a state $s\in\mathcal S$, it returns \vspace{-0.2in} \begin{align} \hspace{-0.1in} a_h(s) &\in \arg\max_{a}\big(r(s,a) + p(\cdot|s,a) T^{h-1}V\big)\nonumber \\ &=\arg\max_{\pi_1(s)} \max_{\pi_2,\ldots,\pi_{h}} \mathbb{E}\Big[\sum_{t=1}^{h}r(s_t,\pi_t(s_t)) + V(s_{h+1}) | s_1 = s\Big] \label{eq: lookahead h greedy preliminaries}. \end{align} \vspace{-0.15in} We can see that $V$ represents our prior knowledge of the problem.
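As an illustration of eqs.~(\ref{eq:multistep bellman}) and~(\ref{eq: lookahead h greedy preliminaries}), a minimal tabular sketch of the $h$-step operator and the resulting lookahead action could look as follows; the array-based interface is ours and is not taken from the paper:

```python
import numpy as np

def h_lookahead_action(P, r, V, h):
    """Apply the optimal Bellman operator h-1 times to the terminal
    value V, then return the greedy first action and its value.
    P: transition tensor of shape (S, A, S); r: rewards of shape (S, A)."""
    W = np.asarray(V, dtype=float)
    for _ in range(h - 1):
        # one application of T: max_a [ r(s,a) + sum_s' p(s'|s,a) W(s') ]
        W = np.max(r + P @ W, axis=1)
    Q = r + P @ W  # Q-values of the first step, shape (S, A)
    return np.argmax(Q, axis=1), np.max(Q, axis=1)
```

For $h=1$ this reduces to the greedy policy w.r.t.~$V$; a larger $h$ propagates rewards that are up to $h$ steps away into the choice of the first action.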
For example, it is possible to show~\cite{bertsekas1996neuro} that if $V$ is close to $V^*$, then the value of an $h$-lookahead policy w.r.t.~$V$ is close to $V^*$. For a state $s\in\mathcal S$ and a number of time steps $h\in[H]$, we define the set of reachable states from $s$ in $h$ steps as $\mathcal{S}_h(s) = \brc*{s'\mid \exists \pi: p^\pi(s_{h+1}=s' \mid s_1=s,\pi)> 0}$, and denote by $S_h(s)$ its cardinality. We define the set of reachable states from $s$ in up to $h$ steps as $\mathcal{S}^{Tot}_{h}(s) := \cup_{t=1}^h \mathcal{S}_t(s)$, its cardinality as $S^{Tot}_{h}(s) := \sum_{t=1}^h S_{t}(s)$, and the maximum of this quantity over the entire state space as $S^{Tot}_h = \max_s S^{Tot}_h(s)$. Finally, we denote by $\mathcal{N} := S^{Tot}_1$ the maximum number of accessible states in $1$-step (neighbors) from any state. {\bf Regret and Uniform-PAC. } We consider an agent that repeatedly interacts with an MDP in a sequence of episodes $[K]$. We denote by $s_t^k$ and $a_t^k$, the state and action taken at the time step $t$ of the $k$'th episode. We denote by $\mathcal{F}_{k-1}$, the filtration that includes all the events (states, actions, and rewards) until the end of the $(k-1)$'th episode, as well as the initial state of the $k$'th episode. Throughout the paper, we denote by $\pi_k$ the policy that is executed during the $k$'th episode and assume it is $\mathcal{F}_{k-1}$-measurable. The performance of an agent is measured by its \textit{regret}, defined as $\mathrm{Regret}(K):= \sum_{k=1}^K \br*{V_1^*(s_1^k) - V_1^{\pi_k}(s_1^k)}$, as well as by the \textit{Uniform-PAC} criterion~\cite{dann2017unifying}, which we generalize to deal with approximate convergence. Let $\epsilon,\delta>0$ and $N_{\epsilon}=\sum_{k=1}^\infty \mathbbm{1}\brc*{V_1^*(s_1^k)-V_1^{\pi_k}(s_1^k)\geq\epsilon}$ be the number of episodes in which the algorithm outputs a policy whose value is $\epsilon$-inferior to the optimal value.
An algorithm is called Uniform-PAC, if $\Pr\br*{\exists \epsilon>0: N_\epsilon\geq F(S,1/\epsilon,\log1/\delta,H)}\leq \delta$, where $F(\cdot)$ depends polynomially (at most) on its parameters. Note that Uniform-PAC implies $(\epsilon,\delta)$-PAC, and thus, it is a stronger property. As we analyze algorithms with inherent errors in this paper, we use a more general notion of $\Delta$-Uniform-PAC by defining the random variable $N^{\Delta}_{\epsilon}\!\!=\!\!\sum_{k=1}^\infty \mathbbm{1}\brc*{V_1^*(s_1^k)\!-\!V_1^{\pi_k}(s_1^k)\geq \Delta \!+\! \epsilon}$, where $\Delta>0$. Finally, we use $\tilde{\mathcal{O}}(x)$ to represent $x$ up to constants and poly-logarithmic factors in $\delta$, and $O(x)$ to represent $x$ up to constants. \vspace{-0.2cm} \section{Computing $h$-Lookahead Policies} \label{sec: episodic complexity} \vspace{-0.2cm} Computing an action returned by an $h$-lookahead policy at a certain state is a main component of the RTDP-based algorithms we analyze in Sections~\ref{sec: mutiple step rtdp} and~\ref{sec: approximate mutiple step rtdp}. A `naive' procedure that returns such an action is exhaustive search. Its computational cost is $O(A^h)$ and $O(A^{Sh})$ for deterministic and stochastic systems, respectively. Such an approach is impractical, even for moderate values of $h$ or $S$. Instead of the naive approach, we formulate a Forward-Backward DP (FB-DP) algorithm, whose pseudo-code is given in Appendix~\ref{supp: epsiodic complexity h rtdp}. The FB-DP returns an action of an $h$-lookahead policy from a given state $s$. Importantly, in both deterministic and stochastic systems, the computation cost of FB-DP depends linearly on the total \emph{number of reachable states} from $s$ in up to $h$ steps, i.e.,~$S^{Tot}_h(s)$. In the worst case, we may have $S_h(s)=O(\min\br*{A^h,S})$. However, when $S^{Tot}_h(s)$ is small, significant improvement is achieved by avoiding unnecessary repeated computations. FB-DP has two subroutines.
It first constructs the set of reachable states from state $s$ in up to $h$ steps, $\{\mathcal{S}_t(s)\}_{t=1}^h$, in the `forward-pass'. Given this set, in the second `backward-pass' it simply applies backward induction (Eq.~\ref{eq: lookahead h greedy preliminaries}) and returns an action suggested by the $h$-lookahead policy, $a_h(s)$. Note that at each stage $t\in[h]$ of the backward induction (applied on the set $\{\mathcal{S}_t(s)\}_{t=1}^h$) there are $S_t(s)$ states on which the Bellman operator is applied. Since applying the Bellman operator costs $O(\mathcal NA)$ computations, the computational cost of the `backward-pass' is $O\big(\mathcal{N} AS^{Tot}_{h}(s)\big)$. In Appendix~\ref{supp: epsiodic complexity h rtdp}, we describe a DP-based approach to efficiently implement the `forward-pass' and analyze its complexity. Specifically, we show the computational cost of the `forward-pass' is equivalent to that of the `backward-pass' (see Proposition~\ref{proposition: computational complexity of forward pass}). That is, the computational cost of FB-DP is $O\big(\mathcal{N} AS^{Tot}_{h}(s)\big)$, the same order as the cost of backward induction given the set $\mathcal{S}^{Tot}_h(s)$. \vspace{-0.2cm} \section{Real-Time Dynamic Programming} \label{sec:RTDP} \vspace{-0.2cm} Real-time dynamic programming (RTDP)~\cite{barto1995learning} is a well-known online planning algorithm that assumes access to a transition model and a reward function. Unlike DP algorithms (policy iteration, value iteration, or asynchronous value iteration)~\cite{bertsekas1996neuro} that solve an MDP using offline calculations and sweeps over the entire state space (possibly in random order), RTDP solves it in real-time, using samples from the environment (either simulated or real) and DP-style Bellman updates from the current state. Furthermore, unlike DP algorithms, RTDP needs to trade off exploration and exploitation, since it interacts with the environment via sampling trajectories.
This makes RTDP a good candidate for problems in which having access to the entire state space is not possible, but interaction is. Algorithm~\ref{algo: RTDP} contains the pseudo-code of RTDP in finite-horizon MDPs. The value is initialized optimistically, $\bar{V}^0_{t+1}(s)=H-t\geq V^*_{t+1}(s)$. At each time step $t\in[H]$ and episode $k\in[K]$, the agent updates the value of the current state $s_t^k$ by the optimal Bellman operator. It then acts greedily w.r.t.~the current value at the next time step $\bar{V}^{k-1}_{t+1}$. Finally, the next state, $s_{t+1}^k$, is sampled either from the model or from the real world. When the model is exact, there is no difference between sampling from the model and from the real world, but the two differ when the model is inexact, as in Section~\ref{sec: appr model}. The following high probability bound on the regret of a Decreasing Bounded Process (DBP), proved in~\cite{efroni2019tight}, plays a key role in our analysis of exact and approximate RTDP with lookahead policies in Sections~\ref{sec: mutiple step rtdp} and~\ref{sec: approximate mutiple step rtdp}. An adapted process $\brc*{X_k,\mathcal{F}_k}_{k\geq 0}$ is a DBP, if for all $k\geq 0$, {\bf (i)} $X_k\leq X_{k-1}$ almost surely (a.s.), {\bf (ii)} $X_k\geq C_2$, and {\bf (iii)} $X_0=C_1\geq C_2$. Interestingly, contrary to the standard regret bounds (e.g.,~in bandits), this bound does not depend on the number of rounds $K$. \begin{theorem}[Regret Bound of a DBP \cite{efroni2019tight}] \label{theorem: regret of decreasing bounded process} Let $\brc*{X_k,\mathcal{F}_k}_{k\geq 0}$ be a DBP and $R_K = \sum_{k=1}^K X_{k-1} - \mathbb{E}[X_{k}\mid \mathcal{F}_{k-1}]$ be its $K$-round regret.
Then, $$\Pr\brc*{\exists K>0: R_K \geq 9(C_1-C_2)\ln(3/\delta)} \le \delta.$$ \end{theorem} \vspace{-0.2cm} \section{RTDP with Lookahead Policies} \label{sec: mutiple step rtdp} \vspace{-0.2cm} In this section, we devise and analyze a lookahead-based RTDP algorithm, called $h$-RTDP, whose pseudo-code is shown in Algorithm~\ref{algo: multi step RTDP}. Without loss of generality, we assume that $H/h\in \mathbbm{N}$. We divide the horizon $H$ into $H/h$ intervals, each of length $h$ time steps. $h$-RTDP stores $HS/h$ values in the memory, i.e.,~the values at time steps $\mathcal H=\{1,h+1,\ldots,H+1\}$.\footnote{In fact, $h$-RTDP does not need to store $V_1$ and $V_{H+1}$; they are only used in the analysis.} For each time step $t\in[H]$, we denote by $h_c\in\mathcal H$, the next time step for which a value is stored in the memory, and by $t_c=h_c-t$, the number of time steps until then (see Figure~\ref{fig:h greedy policy}). At each time step $t$ of an episode $k\in[K]$, given the current state $s_t^k$, $h$-RTDP selects an action $a_t^k$ returned by the $t_c$-lookahead policy w.r.t.~$\bar {V}_{h_c}^{k-1}$, \vspace{-0.15in} \begin{equation} a_t^k = a_{t_c}(s_t^k)\in \arg\max_{\pi_1(s_t^k)} \max_{\pi_2,\ldots,\pi_{t_c}}\mathbb{E}\Big[\sum_{t'=1}^{t_c}r(s_{t'},\pi_{t'}(s_{t'})) + \bar{V}^{k-1}_{h_c}(s_{t_c+1}) \mid s_1=s_t^k\Big]. \label{eq: h greedy policy} \end{equation} \vspace{-0.1in} \begin{wrapfigure}{r}{0.47\textwidth} \vspace{-0.15in} \centering \def5cm{5cm} \input{hgreedy.pdf_tex} \caption{\begin{small}Varying lookahead horizon of an $h$-greedy policy in $h$-RTDP (see Eq.~\ref{eq: h greedy policy}) with $h=3$ and $H=6$.
The blue arrows show the lookahead horizon from a specific time step $t$, and the red bars are the time steps for which a value is stored in memory, i.e.,~${\mathcal H=\{1\;,\;h+1=4\;,\;2h+1=H+1=7\}}$.\end{small}} \label{fig:h greedy policy} \end{wrapfigure} Thus, $h$-RTDP uses a varying lookahead horizon $t_c$ that depends on how far the current time step is from the next one for which a value is stored. Throughout the paper, with an abuse of notation, we refer to this policy as an $h$-lookahead policy. Finally, it can be seen that $h$-RTDP generalizes RTDP, as the two coincide for $h=1$. We are now ready to establish finite-sample performance guarantees for $h$-RTDP; see Appendix~\ref{supp: multistep rtdp} for the detailed proofs. We start with two lemmas from which we derive the main convergence result of this section. \begin{restatable}{lemma}{multistepRtdpProperties} \label{lemma:multistep rtdp properties} For all $s\in \mathcal{S}$, $n\in \{0\}\cup[\frac{H}{h}]$, and ${k\in [K]}$, the value function of $h$-RTDP is (i) Optimistic: $V^*_{nh+1}(s) \leq \bar{V}^k_{nh+1}(s)$, and (ii) Non-Increasing: $\bar{V}^{k}_{nh+1}(s) \leq \bar{V}^{k-1}_{nh+1}(s)$.
\end{restatable} \vspace{-0.2cm} \begin{minipage}[t]{0.49\linewidth} \footnotesize \begin{algorithm}[H] \begin{algorithmic} \caption{ \footnotesize Real-Time DP (RTDP)} \label{algo: RTDP} \STATE {\bf init:} $\forall s\in \mathcal S,\; \forall t\in \{0\}\cup[H],$ \STATE \quad \quad$\bar{V}^0_{t+1}(s)=H-t$ \FOR{$k\in [K]$} \STATE Initialize $s^k_1$ arbitrarily \FOR{$t\in [H]$} \STATE $\bar{V}^{k}_{t}(s_t^k) = T \bar{V}^{k-1}_{t+1}(s_t^k)$ \STATE $a_t^k\in \arg\max_a r(s_t^k,a)+ p(\cdot|s_t^k,a) \bar{V}^{k-1}_{t+1}$ \STATE Act by $\;a_t^k\;$, observe $\;s_{t+1}^k\sim p(\cdot \mid s_t^k,a_t^k)$ \ENDFOR \ENDFOR \end{algorithmic} \vspace*{1.605 cm} \end{algorithm} \end{minipage} \hspace{3pt} \begin{minipage}[t]{0.49\linewidth} \footnotesize \begin{algorithm}[H] \begin{algorithmic} \caption{\footnotesize RTDP with Lookahead ($h$-RTDP)} \label{algo: multi step RTDP} \STATE {\bf init:} $\forall s\in \mathcal S,\; n\in \{0\}\cup[\frac{H}{h}],$ \STATE \quad \quad$\bar{V}^0_{n h +1}(s)=H-nh$ \FOR{$k\in [K]$} \STATE Initialize $s^k_1$ arbitrarily \FOR{$t\in [H]$} \IF{$(t-1) \mod h = 0$} \STATE $h_c = t + h$ \STATE $\bar{V}^{k}_{t}(s_t^k) = T^{h}\bar{V}^{k-1}_{h_c}(s_t^k)$ \ENDIF \STATE $a_t^k\in$ \STATE ${\arg\max_a r(s_t^k,a)+ p(\cdot|s_t^k,a) T^{h_c-t-1}\bar{V}^{k-1}_{h_c}}$ \STATE Act by $\;a_t^k\;$, observe $\;s_{t+1}^k\sim p(\cdot \mid s_t^k,a_t^k)$ \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \vspace{0.2cm} \begin{restatable}[Optimality Gap and Expected Decrease]{lemma}{MultistepRdtpExpectedValueUpdate} \label{lemma: multistep RTDP expected value difference} The expected cumulative value update at the $k$'th episode of $h$-RTDP satisfies $\bar{V}_1^{k}(s^k_1)-V_1^{\pi_k}(s^k_1) = \sum_{n=1}^{\frac{H}{h}-1}\sum_{s\in\mathcal{S}} \bar{V}^{k-1}_{nh+1}(s) - \mathbb{E}[\bar{V}^{k}_{nh+1}(s)\mid \mathcal{F}_{k-1}]$.
\end{restatable} Properties (i) and (ii) in Lemma~\ref{lemma:multistep rtdp properties} show that $\{\bar V^k_{nh + 1}(s)\}_{k\geq 0}$ is a DBP, for any $s$ and $n$. Lemma~\ref{lemma: multistep RTDP expected value difference} relates $\bar{V}_1^{k}(s^k_1)-V_1^{\pi_k}(s^k_1)$ (LHS) to the expected decrease in $\bar V^{k}$ at the $k$'th episode (RHS). When the LHS is small, then $\bar V_1^{k}(s_1^k)\simeq V^*_1(s_1^k)$, due to the optimism of $\bar V_1^{k}$, and $h$-RTDP is about to converge to the optimal value. This is why we refer to the LHS as the {\em optimality gap}. Using these two lemmas and the regret bound of a DBP (Theorem~\ref{theorem: regret of decreasing bounded process}), we prove a finite-sample convergence result for $h$-RTDP (see Appendix~\ref{supp: multistep rtdp} for the full proof). \begin{restatable}[Performance of $h$-RTDP]{theorem}{TheoremRegretMultistepRTDP} \label{theorem: regret multistep rtdp} Let $\epsilon,\delta>0$. The following holds for $h$-RTDP: $\;\;$ 1. With probability $1-\delta$, for all $K>0$, $\;\;\mathrm{Regret}(K) \leq \frac{9SH(H-h)}{h}\ln(3/\delta)$. $\;\;$ 2. $\;\;\Pr\brc*{\exists \epsilon>0 \; : \; N_\epsilon \geq \frac{9SH(H-h)\ln(3/\delta)}{h\epsilon}}\leq \delta$. \end{restatable} \vspace{-0.2cm} \begin{proofsketch} Applying Lemmas~\ref{lemma:multistep rtdp properties} and~\ref{lemma: multistep RTDP expected value difference}, we may write \vspace{-0.25in} \begin{align} \mathrm{Regret}(K) &\leq \sum_{k=1}^K \bar{V}_1^{k}(s^k_1)- V_1^{\pi_k}(s^k_1) = \sum_{k=1}^K \sum_{n=1}^{\frac{H}{h}-1}\sum_s \bar{V}^{k-1}_{nh+1}(s) - \mathbb{E}[\bar{V}^{k}_{nh+1}(s)\mid \mathcal{F}_{k-1}] \nonumber \\ &= \sum_{k=1}^K X_{k-1} - \mathbb{E}[X_k \mid \mathcal{F}_{k-1}], \label{eq:bound-temp0} \end{align} \vspace{-0.15in} where we define ${X_k := \sum_{n=1}^{\frac{H}{h}-1}\sum_s \bar{V}^{k}_{nh+1}(s)}$ and use the linearity of expectation.
By Lemma~\ref{lemma:multistep rtdp properties}, $\brc*{X_k}_{k\geq 0}$ is decreasing and bounded from below by $\sum_{n=1}^{\frac{H}{h}-1}\sum_{s} V^*_{nh +1}(s) \geq 0$. We conclude the proof by observing that $X_0\leq \sum_{n=1}^{\frac{H}{h}-1}\sum_{s} V^0_{nh +1}(s) \leq SH(H-h)/h$, and applying Theorem~\ref{theorem: regret of decreasing bounded process}. \end{proofsketch} \begin{remark}[RTDP and Good Value Initialization] A closer look into the proof of Theorem~\ref{theorem: regret multistep rtdp} shows that we can easily obtain a stronger result that depends on the initial value $V^0$. The regret can be bounded by $\mathrm{Regret}(K)\leq \tilde{\mathcal{O}}\br*{\sum_{n=1}^{\frac{H}{h}-1}\sum_{s\in\mathcal{S}} \br*{V^0_{nh +1}(s) -V^*_{nh +1}(s)}},$ which formalizes the intuition that the algorithm improves as the initial value $V^0$ better estimates $V^*$. For clarity, we provide the worst-case bound. \end{remark} \begin{remark}[Computational Complexity of $h$-RTDP]\label{remark: space-comp compleixty of h rtdp} Using FB-DP (Section~\ref{sec: episodic complexity}) as a solver of an $h$-lookahead policy, the per-episode \emph{computation cost} of $h$-RTDP amounts to applying FB-DP for $H$ time steps, i.e.,~it is bounded by $O(H\mathcal{N}AS^{Tot}_{h})$. Since $S^{Tot}_{h}$ -- the total number of reachable states in up to $h$ time steps -- is an increasing function of $h$, the computation cost of $h$-RTDP increases with $h$, as expected. When $S^{Tot}_{h}$ is significantly smaller than $S$, the per-episode computational complexity of $h$-RTDP is $S$-independent. As discussed in Section~\ref{sec: episodic complexity}, using FB-DP, in place of exhaustive search, can significantly improve the computational cost of $h$-RTDP. \end{remark} \begin{remark}[Improved Sample Complexity of $h$-RTDP]\label{remark: sample compleixty of h rtdp} Theorem~\ref{theorem: regret multistep rtdp} shows that $h$-RTDP improves the \emph{sample complexity} of RTDP by a factor $1/h$.
This is consistent with the intuition that a larger horizon of the applied lookahead policy results in faster convergence (fewer samples). Thus, if RTDP is used in a real-time manner, one way to boost its performance is to combine it with lookahead policies. \end{remark} \begin{remark}[Sparse Sampling Approaches] In this work, we assume $h$-RTDP has access to an $h$-lookahead policy~\eqref{eq: lookahead h greedy preliminaries} solver, such as FB-DP presented in Section~\ref{sec: episodic complexity}. We leave studying the sparse sampling approach~\cite{kearns2002sparse, sidford2018variance} for approximately solving the $h$-lookahead policy to future work. \end{remark} \vspace{-0.4cm} \section{Approximate RTDP with Lookahead Policies} \label{sec: approximate mutiple step rtdp} \vspace{-0.2cm} In this section, we consider three approximate versions of $h$-RTDP in which the update deviates from its exact form described in Section~\ref{sec: mutiple step rtdp}. We consider the cases in which there are errors in the {\bf 1)} {\em model}, {\bf 2)} {\em value updates}, and when we use {\bf 3)} {\em approximate state abstraction}. We prove finite-sample bounds on the performance of $h$-RTDP in the presence of these approximations. Furthermore, in Section~\ref{sec: appr abstractions}, given access to an approximate state abstraction, we show that the convergence of $h$-RTDP depends on the cardinality of the \emph{abstract state space} -- which can be much smaller than the original one. The proofs of this section generalize that of Theorem~\ref{theorem: regret multistep rtdp}, while following the same `recipe'. This shows the generality of the proof technique, as it works for both exact and approximate settings. \subsection{$h$-RTDP with Approximate Model ($h$-RTDP-AM)}\label{sec: appr model} In this section, we analyze a more practical scenario in which the transition model used by $h$-RTDP to act and update the values is not exact.
We assume it is close to the true model in the total variation ($TV$) norm,~$\forall (s,a) \in\mathcal{S}\times \mathcal{A},\ ||p(\cdot|s,a) - \hat{p}(\cdot|s,a) ||_1\leq \epsilon_P$, where~$\hat{p}$ denotes the approximate model. Throughout this section and the relevant appendix (Appendix~\ref{supp: multistep rtdp approximate model}), we denote by $\hat T$ and $\hat{V}^*$ the optimal Bellman operator and optimal value of the approximate model $\hat{p}$, respectively. Note that $\hat T$ and $\hat{V}^*$ satisfy~\eqref{eq:bellman} and~\eqref{eq:multistep bellman} with $p$ replaced by $\hat{p}$. $h$-RTDP-AM is exactly the same as $h$-RTDP (Algorithm~\ref{algo: multi step RTDP}) with the model $p$ and optimal Bellman operator $T$ replaced by their approximations $\hat p$ and $\hat T$. We report the pseudocode of $h$-RTDP-AM in Appendix~\ref{supp: multistep rtdp approximate model}. Although we are given an approximate model, $\hat p$, we are still interested in the performance of (approximate) $h$-RTDP on the \emph{true MDP}, $p$, and relative to its optimal value, $V^*$. If we solve the approximate model and act by its optimal policy, the Simulation Lemma~\cite{kearns2002near,strehl2009reinforcement} suggests that the regret is bounded by $O(H^2\epsilon_P K)$. For $h$-RTDP-AM, the situation is more involved, as its updates are based on the approximate model and the samples are gathered by interacting with the true MDP. Nevertheless, by properly adjusting the techniques from Section~\ref{sec: mutiple step rtdp}, we derive performance bounds for $h$-RTDP-AM. These bounds reveal that the asymptotic regret increases by at most $O(H^2\epsilon_P K)$, similarly to the regret of the optimal policy of the approximate model. Interestingly, the proof technique follows that of the exact case in Theorem~\ref{theorem: regret multistep rtdp}. 
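To make the effect of a $TV$-bounded model error concrete, the following toy sketch applies an $h$-step Bellman backup under both a true model $p$ and a perturbed model $\hat p$ with $||p(\cdot|s,a)-\hat p(\cdot|s,a)||_1\leq \epsilon_P$. The two-state MDP, the stored values, and $\epsilon_P$ are illustrative assumptions, not objects from the paper, and the code is not the paper's FB-DP implementation:

```python
# Toy two-state, two-action MDP (illustrative, not the paper's benchmark).
S, A = 2, 2
r = {(s, a): 1.0 if s == a else 0.0 for s in range(S) for a in range(A)}
p = {(s, a): ([0.7, 0.3] if a == 0 else [0.2, 0.8])
     for s in range(S) for a in range(A)}

def h_step_bellman(V_next, model, h):
    """Backward induction computing (T^h V_next)(s) for every state s."""
    V = list(V_next)
    for _ in range(h):
        V = [max(r[(s, a)] + sum(model[(s, a)][s2] * V[s2] for s2 in range(S))
                 for a in range(A)) for s in range(S)]
    return V

h, eps_P = 2, 0.05
V_next = [0.5, 1.0]                    # plays the role of \bar V^{k-1}_{h_c}
exact = h_step_bellman(V_next, p, h)   # backup with the true model p

# Perturbed model \hat p: every row moved by exactly eps_P in L1 norm.
p_hat = {k: [q[0] + eps_P / 2, q[1] - eps_P / 2] for k, q in p.items()}
approx = h_step_bellman(V_next, p_hat, h)  # h-RTDP-AM-style backup with \hat p

gap = max(abs(u - v) for u, v in zip(exact, approx))
# Simulation-Lemma-style bound: each of the h steps contributes at most
# eps_P * ||V||_inf, with ||V||_inf <= h + max|V_next| for rewards in [0, 1].
assert gap <= h * eps_P * (h + max(abs(v) for v in V_next))
```

The resulting gap is controlled by a Simulation-Lemma-style argument, mirroring the $\epsilon_P$-proportional term in the bounds below.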
We generalize Lemmas~\ref{lemma:multistep rtdp properties} and~\ref{lemma: multistep RTDP expected value difference} from Section~\ref{sec: mutiple step rtdp} to the case in which the update rule uses an inexact model (see Lemmas~\ref{lemma: approximate model properties} and~\ref{lemma: RTDP approximate modle expected value difference} in Appendix~\ref{supp: multistep rtdp approximate model}). This allows us to establish the following performance bound for $h$-RTDP-AM (proof in Appendix~\ref{supp: multistep rtdp approximate model}). \begin{restatable}[Performance of $h$-RTDP-AM]{theorem}{TheoremRegretRTDPApproximateModel} \label{theorem: regret rtdp approximate model} Let $\epsilon,\delta>0$. The following holds for $h$-RTDP-AM: $\;\;$ 1. With probability $1-\delta$, for all $K>0$, $\;\;\mathrm{Regret}(K) \leq \frac{9SH(H-h)}{h}\ln(3/\delta)+ H(H-1)\epsilon_P K$. $\;\;$ 2. Let $\Delta_P=H(H-1)\epsilon_P$. Then, $\;\;\Pr\Big\{\exists \epsilon>0 \; : \; N^{\Delta_P}_\epsilon \geq \frac{9SH(H-h)\ln(3/\delta)}{h\epsilon}\Big\}\leq \delta$. \end{restatable} \vspace{-0.2cm} These bounds show the approximate convergence resulting from the approximate model. However, the asymptotic performance gaps -- both in terms of the regret and Uniform PAC -- of $h$-RTDP-AM approach those experienced by an optimal policy of the approximate model. Interestingly, although $h$-RTDP-AM updates using the approximate model, while interacting with the true MDP, its convergence rate (to the asymptotic performance) is similar to that of $h$-RTDP (Theorem~\ref{theorem: regret multistep rtdp}). \vspace{-0.1cm} \subsection{$h$-RTDP with Approximate Value Updates ($h$-RTDP-AV)}\label{sec: appr value} \vspace{-0.1cm} Another important question in the analysis of approximate DP algorithms is their performance under approximate value updates, motivated by the need to use function approximation.
This is often modeled by an extra noise $|\epsilon_V(s)|\leq \epsilon_V$ added to the update rule~\cite{bertsekas1996neuro}. Following this approach, we study such a perturbation in $h$-RTDP. Specifically, in $h$-RTDP-AV the value update rule is modified such that it contains an error term (see Algorithm~\ref{algo: multi step RTDP}), \begin{align*} \bar{V}^{k}_{t}(s_t^k) =\epsilon_V(s_t^k)+ T^{h}\bar{V}^{k-1}_{h_c}(s_t^k). \end{align*} For $\epsilon_V(s_t^k)=0$, the exact $h$-RTDP is recovered. The pseudocode of $h$-RTDP-AV is supplied in Appendix~\ref{supp: multistep rtdp approximate value updates}. Similarly to the previous section, we follow the same proof technique as for Theorem~\ref{theorem: regret multistep rtdp} to establish the following performance bound for $h$-RTDP-AV (proof in Appendix~\ref{supp: multistep rtdp approximate value updates}). \begin{restatable}[Performance of $h$-RTDP-AV]{theorem}{TheoremRegretApproximateValueRTDP} \label{theorem: regret rtdp appoximate value updates} Let $\epsilon,\delta>0$. The following holds for $h$-RTDP-AV: $\;\;$ 1. With probability $1-\delta$, for all $K>0$, $\;\;\mathrm{Regret}(K) \leq \frac{9SH(H-h)}{h}(1+\frac{H}{h}\epsilon_V)\ln(\frac{3}{\delta})+ \frac{2H}{h}\epsilon_V K$. $\;\;$ 2. Let $\Delta_V = 2H\epsilon_V$. Then, $\;\;\Pr\Big\{\exists \epsilon>0 \; : \; N^{\frac{\Delta_V}{h}}_\epsilon \geq\frac{9SH(H-h)(1+\frac{\Delta_V}{2h})\ln(\frac{3}{\delta})}{h \epsilon}\Big\}\leq \delta$. \end{restatable} \vspace{-0.2cm} As in Section~\ref{sec: appr model}, the results of Theorem~\ref{theorem: regret rtdp appoximate value updates} exhibit an asymptotic linear regret $O(H\epsilon_V K/h )$. As proven in Proposition~\ref{prop: approximate value updates} in Appendix~\ref{supp: approximate dp bounds}, such a performance gap also exists in ADP with approximate value updates.
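A back-of-the-envelope accounting shows why the error floor scales like $(H/h)\epsilon_V$: each stored value is rewritten by one noisy $h$-step backup every $h$ time steps, i.e., $H/h$ times per episode. The numbers and the worst-case additive accumulation below are our own illustrative assumptions; the theorem's constants differ:

```python
# Worst-case error accounting for noisy h-step backups (illustrative numbers;
# additivity of the noise is a worst-case assumption, not an exact constant).
H, eps_V = 12, 0.05

def episode_error_floor(h):
    err = 0.0
    for _ in range(H // h):   # one noisy T^h backup every h time steps
        err += eps_V          # backup is exact up to bounded additive noise
    return err

floors = {h: episode_error_floor(h) for h in (1, 2, 3, 4, 6, 12)}
# Larger lookahead -> fewer noisy backups per episode -> smaller error floor.
assert floors[12] < floors[6] < floors[2] < floors[1]
```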
Furthermore, the convergence rate in $S$ to the asymptotic performance of $h$-RTDP-AV is similar to that of its exact version (Theorem~\ref{theorem: regret multistep rtdp}). Unlike in $h$-RTDP-AM, the asymptotic performance of $h$-RTDP-AV \emph{improves} with $h$. This quantifies a clear benefit of using lookahead policies in online planning when the value function is approximate. \vspace{-0.1cm} \subsection{$h$-RTDP with Approximate State Abstraction ($h$-RTDP-AA)}\label{sec: appr abstractions} \vspace{-0.1cm} We conclude the analysis of approximate $h$-RTDP by exploring the advantages of combining it with approximate state abstraction~\cite{abel2017near}. The central result of this section establishes that given an approximate state abstraction, $h$-RTDP converges with sample, computation, and space complexity \emph{independent} of the size of the state space $S$, as long as $S^{Tot}_h$ is smaller than $S$ (i.e., when performing $h$-lookahead is $S$ independent, Remark~\ref{remark: space-comp compleixty of h rtdp}). This is in contrast to the computational complexity of ADP in this setting, which is still $O(HSA)$ (see Appendix~\ref{supp: adp approximate abstractions} for further discussion). State abstraction has been widely investigated in approximate planning~\cite{dearden1997abstraction,dean1997model,even2003approximate,abel2017near}, as a means to deal with large state space problems. Among existing approximate abstraction settings, we focus on the following one. For any $n\in\{0\}\cup[\frac{H}{h}-1]$, we define ${\phi_{nh+1}: \mathcal{S}\rightarrow \mathcal{S}_\phi}$ to be a mapping from the state space $\mathcal{S}$ to a reduced space $\mathcal{S}_\phi$,~$S_\phi=|\mathcal{S}_\phi|\ll S$.
We make the following assumption: \begin{restatable}[Approximate Abstraction, \cite{li2006towards}, Definition 3.3]{assumption}{assumptionModelAbstraction} \label{assumption: model abstraction} For any $s,s'\in \mathcal{S}$ and $n\in\{0\}\cup[\frac{H}{h}-1]$ for which $\phi_{nh+1}(s) = \phi_{nh+1}(s')$, we have $|V_{nh+1}^*(s)-V_{nh+1}^*(s')|\leq \epsilon_A$. \end{restatable} \vspace{-0.2cm} Let us denote by $\{\bar{V}^k_{\phi,nh+1}\}_{n=0}^{H/h}$ the values stored in memory by $h$-RTDP-AA at the $k$'th episode. Unlike previous sections, the value function per time step contains $S_\phi$ entries, $\bar V^k_{\phi,1+nh}\in \mathbb{R}^{S_\phi}$. Note that if $\epsilon_A=0$, then the optimal value function can be represented in the reduced state space~$\mathcal{S}_\phi$. However, if $\epsilon_A$ is positive, an exact representation of $V^*$ is not possible. Nevertheless, the asymptotic performance of $h$-RTDP-AA will be `close', up to an error of $\epsilon_A$, to that of the optimal policy. Furthermore, the definition of the multi-step Bellman operator~\eqref{eq:multistep bellman} and $h$-greedy policy~\eqref{eq: lookahead h greedy preliminaries} should be revised, and with some abuse of notation, defined as \vspace{-0.5cm} \begin{align} &a_t^k\in \arg\max_{\pi_0(s_t^k)} \max_{\pi_1,\ldots,\pi_{t_c-1}}\mathbb{E}\brs*{ \sum_{t'=0}^{t_c-1}r_{t'} + \bar{V}^{k-1}_{\phi, h_c}(\phi_{h_c}(s_{t_c}))\mid s_0=s_t^k}, \label{eq: h greedy policy and bellman abstraction}\\ &T_\phi^{h}\bar{V}^{k-1}_{\phi, h_c}(s_{t}^k) := \max_{\pi_0,\ldots,\pi_{h-1}}\mathbb{E}\brs*{ \sum_{t'=0}^{h-1} r_{t'} + \bar{V}^{k-1}_{\phi,t+h}(\phi_{t+h}(s_{h})) \mid s_0 =s_t^k}. \label{eq: h value and bellman abstraction} \end{align} \vspace{-0.4cm} Eq.~\eqref{eq: h greedy policy and bellman abstraction} and~\eqref{eq: h value and bellman abstraction} indicate that, similarly to~\eqref{eq: lookahead h greedy preliminaries}, the $h$-lookahead policy uses the given model to plan for $h$ time steps ahead.
Differently from~\eqref{eq: lookahead h greedy preliminaries}, the value after $h$ time steps is the one defined in the \emph{reduced state} space $\mathcal{S}_\phi$. Note that the definition of the $h$-greedy policy for $h$-RTDP-AA in~\eqref{eq: h greedy policy and bellman abstraction} is equivalent to the one used in Algorithm~\ref{algo: RTDP with abstractions}, obtained by a similar recursion as for the optimal Bellman operator~\eqref{eq:multistep bellman}. $h$-RTDP-AA modifies both the value update and the calculation of the $h$-lookahead policy (the value update and action choice in Algorithm~\ref{algo: multi step RTDP}). The $h$-lookahead policy is replaced by the $h$-lookahead policy defined in~\eqref{eq: h greedy policy and bellman abstraction}. The value update is substituted by~\eqref{eq: h value and bellman abstraction}, i.e., $\bar{V}_{\phi,t}^k(\phi_{t}(s_t^k)) = T_\phi^{h}\bar{V}^{k-1}_{\phi, h_c}(s_{t}^k)$. The full pseudocode of $h$-RTDP-AA is supplied in Appendix~\ref{supp: multistep rtdp abstractions}. By a similar technique as in the proof of Theorem~\ref{theorem: regret multistep rtdp}, we establish the following performance guarantees for $h$-RTDP-AA (proof in Appendix~\ref{supp: multistep rtdp abstractions}). \begin{restatable}[Performance of $h$-RTDP-AA]{theorem}{TheoremRegretRTDPAbstraction} \label{theorem: regret rtdp abstraction} Let $\epsilon,\delta>0$. The following holds for $h$-RTDP-AA: $\;\;$ 1. With probability $1-\delta$, for all $K>0$, $\;\;\mathrm{Regret}(K) \leq \frac{9S_\phi H(H-h)}{h}\ln(3/\delta)+ \frac{H\epsilon_A}{h}K$. $\;\;$ 2. Let $\Delta_A = H\epsilon_A$. Then, $\;\;\Pr\Big\{\exists \epsilon>0 \; : \; N^{\frac{\Delta_A}{h}}_\epsilon \geq \frac{9S_\phi H(H-h)\ln(3/\delta)}{h \epsilon}\Big\}\leq \delta$. \end{restatable} \vspace{-0.2cm} Theorem~\ref{theorem: regret rtdp abstraction} establishes $S$-independent performance bounds that depend on the size of the reduced state space $S_\phi$.
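To illustrate Assumption~\ref{assumption: model abstraction}, the following toy sketch merges states whose optimal values fall in the same bucket of width $\epsilon_A$. The values, $\epsilon_A$, and the bucketing construction are illustrative assumptions; in particular, it presumes $V^*$ is known, which serves only to exhibit an abstraction satisfying the assumption, not a construction the paper uses:

```python
# Toy value-based abstraction (hypothetical values; V* is assumed known here
# purely for illustration of the abstraction condition).
eps_A = 0.5
V_star = [0.05, 0.12, 2.4, 2.6, 2.7, 5.0, 5.3, 5.45]   # |S| = 8

# Merge states whose optimal values land in the same bucket of width eps_A.
phi = {s: int(v // eps_A) for s, v in enumerate(V_star)}
S_phi = len(set(phi.values()))

# Check the abstraction condition: aggregated states differ by at most eps_A.
for s, vs in enumerate(V_star):
    for t, vt in enumerate(V_star):
        if phi[s] == phi[t]:
            assert abs(vs - vt) <= eps_A

assert S_phi < len(V_star)   # the reduced space is strictly smaller
```

When the optimal values cluster, $S_\phi$ can be far smaller than $S$, which is exactly the regime where the bounds above pay off.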
The asymptotic regret and Uniform PAC guarantees are approximate, as the state abstraction is approximate. Furthermore, they improve with the quality of the approximation $\epsilon_A$, i.e.,~their asymptotic gap is $O(H\epsilon_A/h)$ relative to the optimal policy. Moreover, the asymptotic performance of $h$-RTDP-AA improves as $h$ is increased. Importantly, since the computation complexity of each episode of $h$-RTDP is independent of $S$ (Section~\ref{sec: episodic complexity}), the computation required to reach the approximate solution in $h$-RTDP-AA is also $S$-independent. This is in contrast to the computational cost of DP that depends on $S$ and is $O(SHA)$ (see Appendix~\ref{supp: adp approximate abstractions} for further discussion). \vspace{-0.2cm} \section{Discussion and Conclusions} \label{sec: rtdp vs dp} \vspace{-0.2cm} \paragraph{RTDP vs.~DP.} The results of Sections~\ref{sec: mutiple step rtdp} and~\ref{sec: approximate mutiple step rtdp} established finite-time convergence guarantees for the exact $h$-RTDP and its three approximations. In the approximate settings, as expected, the regret has a linear term of the form $\Delta K$, where $\Delta$ is linear in the approximation errors $\epsilon_P$, $\epsilon_V$, and $\epsilon_A$, and thus, the performance is continuous in these parameters, as we would desire. We refer to $\Delta K$ as the \emph{asymptotic regret}, since it dominates the regret as $K\rightarrow \infty$. A natural measure to evaluate the quality of $h$-RTDP in the approximate settings is to compare its regret to that of its corresponding approximate DP (ADP). Table~\ref{tab:rtdp vs dp} summarizes the regrets of the approximate $h$-RTDPs studied in this paper and their corresponding ADPs. ADP calculates approximate values $\{V_{nh+1}^*\}_{n=0}^{H/h}$ by backward induction. Based on these values, the same $h$-lookahead policy by which $h$-RTDP acts is evaluated.
In the analysis of ADP, we use standard techniques developed for the discounted case in~\cite{bertsekas1996neuro}. From Table~\ref{tab:rtdp vs dp}, we reach the following conclusion: \emph{the asymptotic performance (in terms of regret) of approximate $h$-RTDP is equivalent to that of a corresponding approximate DP algorithm}. Furthermore, it is important to note that the asymptotic error decreases with $h$ for the approximate value updates and approximate abstraction settings for both RTDP and DP algorithms. In these settings, the error is caused by approximation in the value function. By increasing the lookahead horizon $h$, the algorithm uses fewer such values and relies more on the model, which is assumed to be correct. Thus, the algorithm becomes less affected by the value function approximation. \begin{table}[t] \begin{center} \begin{tabular}{|c | c | c | c | }\hline { Setting} & {$h$-RTDP Regret (This work)} & {ADP Regret~\cite{bertsekas1996neuro}} & {UCT} \\ \hline Exact~(\ref{sec: mutiple step rtdp}) & $\tilde{\mathcal{O}}\big(SH(H\!-\!h)/h\big)$ & 0 & $\Omega(\exp(\exp(H)) )$~\cite{coquelin2007bandit}\\ \hline App. Model~(\ref{sec: appr model}) & $\tilde{\mathcal{O}}\big(SH(H\!-\!h)/h \!+\! \Delta_P K\big)$ & $\Delta_P K$ & N.A. \\ \hline App. Value~(\ref{sec: appr value})& $\tilde{\mathcal{O}}\big(SH(H\!-\!h)g^\epsilon_{H/h}/h \!+\!\Delta_V K/h\big)$ & $ \Delta_V K/h$ & N.A.\\ \hline App. Abstraction~(\ref{sec: appr abstractions}) & $\tilde{\mathcal{O}}\big(S_\phi H(H\!-\!h)/h +\Delta_A K/h\big) $ & $ \Delta_A K/h$ & N.A.\\ \hline \end{tabular} \end{center} \caption{\begin{small} The lookahead horizon is $h$ and the horizon of the MDP is $H$. We denote $g^\epsilon_{H/h}=(1+H\epsilon_V/h)$, $\Delta_P=H(H-1)\epsilon_P$, $\Delta_V = 2H\epsilon_V$, and $\Delta_A=H \epsilon_A$. The table summarizes the regret bounds of the $h$-RTDP settings studied in this work and compares them to those of their corresponding ADP approaches.
The performance of ADP is based on standard analysis, supplied in Propositions~\ref{prop: misspecified model bound},~\ref{prop: approximate value updates},~\ref{prop: approximate abstraction} in Appendix~\ref{supp: approximate dp bounds}.\end{small}} \label{tab:rtdp vs dp} \end{table} \paragraph{Conclusions.} In this paper, we formulated $h$-RTDP, a generalization of RTDP that acts by a lookahead policy, instead of by a 1-step greedy policy, as in RTDP. We analyzed the finite-sample performance of $h$-RTDP in its exact form, as well as in three approximate settings. The results indicate that $h$-RTDP converges in a very strong sense. Its regret is constant w.r.t. the number of episodes, unlike in, e.g., reinforcement learning where a lower bound of $\tilde{\mathcal{O}}(\sqrt{SAHT})$ exists~\cite{azar2017minimax,jin2018q}. Furthermore, the analysis reveals that the sample complexity of $h$-RTDP improves by increasing the lookahead horizon $h$ (Remark~\ref{remark: sample compleixty of h rtdp}). Moreover, the asymptotic performance of $h$-RTDP was shown to be equivalent to that of ADP (Table~\ref{tab:rtdp vs dp}), which, under no further assumption on the approximation error, is the best we can hope for. We believe this work opens interesting research avenues, such as studying alternatives to the solution of the $h$-greedy policy (see Section~\ref{supp: epsiodic complexity h rtdp}), studying a Receding-Horizon extension of RTDP, RTDP with function approximation, and formulating a Thompson-Sampling version of RTDP, as the standard RTDP is an `optimistic' algorithm. As the analysis developed in this work was shown to be quite generic, we hope that it can assist with answering some of these questions. On the experimental side, more needs to be understood, especially comparing RTDP with MCTS and studying how RTDP can be combined with deep neural networks as the value function approximator.
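The trends summarized in Table~\ref{tab:rtdp vs dp} can be sanity-checked with a short numerical sketch. The horizon, episode count, and error magnitudes below are illustrative assumptions, and log factors and constants are dropped:

```python
# Asymptotic regret terms from the comparison table, with log factors and
# constants dropped; all magnitudes below are illustrative assumptions.
H, K = 10, 1000
eps_P, eps_V, eps_A = 1e-3, 1e-2, 1e-2

def asymptotic_regret(h):
    return {
        "model":       H * (H - 1) * eps_P * K,   # Delta_P K: no h dependence
        "value":       2 * H * eps_V * K / h,     # Delta_V K / h
        "abstraction": H * eps_A * K / h,         # Delta_A K / h
    }

r1, r5 = asymptotic_regret(1), asymptotic_regret(5)
assert r1["model"] == r5["model"]                 # model error does not shrink
assert r5["value"] < r1["value"]                  # value error shrinks with h
assert r5["abstraction"] < r1["abstraction"]      # abstraction error too
```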
\section{Broader Impact} Online planning algorithms, such as $A^*$ and RTDP, have been extensively studied and applied in AI for well over two decades. Our work quantifies the benefits of using lookahead policies in this class of algorithms. Although lookahead policies have been widely used in online planning algorithms, their theoretical justification was lacking. Our study sheds light on the benefits of lookahead policies. Moreover, the results we provide in this paper suggest improved ways for applying lookahead policies in online planning, with benefits when dealing with various types of approximations. This work gives practitioners room to improve their algorithms and to base lookahead policies on solid theoretical ground. \section{Acknowledgements} We thank the reviewers for their helpful comments and feedback. \bibliographystyle{plain}
\section{Introduction} \label{sec:introduction} We know that dark matter exists, but we do not know the dark matter's particle nature. Even so, we can naturally imagine that the dark matter stays in the invisible sector, and the invisible and visible sectors are connected by the portal. Through the portal, the energy flows from the visible sector to the invisible sector, which is known as freeze-in~\cite{Hall:2009bx}, or vice versa, which is known as freeze-out. Hence, the dark matter abundance reaches $\Omega_{\text{DM}}h^2 \simeq 0.12$, compatible with the CMB anisotropy measurements~\cite{Holdom:1985ag, Planck:2018vyg}. The kinetic mixing~\cite{Holdom:1985ag}, one of the three major portals~\cite{Holdom:1985ag, Falkowski:2009yz, Lindner:2010rr, GonzalezMacias:2015rxl, Batell:2017cmf, Batell:2017rol, Berlin:2018ztp, Silveira:1985rk, McDonald:1993ex, Burgess:2000yq, Patt:2006fw}, connects the photon and the dark photon as $\mathcal{L} \supset \epsilon F_{\mu \nu} F'^{\mu \nu}/2$. In the last few decades, the research on the time-independent kinetic mixing has flourished on both the experimental and theoretical sides~\cite{Fabbrichesi:2020wbt, Caputo:2021eaa, Dienes:1996zr, Abel:2003ue, Goodsell:2009xc, Goodsell:2011wn, DelZotto:2016fju, Gherghetta:2019coi,Benakli:2020vng, Obied:2021zjc, Rizzo:2018ntg, Wojcik:2022rtk, Chiu:2022bni}, with only a few discussions on the spacetime-varying scenarios~\cite{Banerjee:2019asa, Baldes:2019tkl, Chakraborty:2020vec, Davoudiasl:2022ubh}. In the meantime, other varying constants are extensively discussed \cite{Bekenstein:1982eu, Olive:2001vz, Dvali:2001dd, Chacko:2002mf, Fardon:2003eh, Fardon:2005wc, Weiner:2005ac, Ghalsasi:2016pcj,Baker:2016xzo, Baker:2017zwx, Baker:2018vos, Bian:2018mkl, Bian:2018bxr, Croon:2020ntf, Hashino:2021dvx, Guo:2022vxr, Baldes:2016gaf, Bruggisser:2017lhc,Ellis:2019flb,Berger:2019yxb, Ipek:2018lhm, Croon:2019ugf, Berger:2020maa, Howard:2021ohe, Fung:2021wbz, Fung:2021fcj, Allali:2022yvx}.
Moreover, extremely strong tensions exist between the early-time dark matter production through the portal and the late-time constraints on the portal, such as the freeze-in of the dark matter as the dark photon~\cite{Pospelov:2008jk, Redondo:2008ec, An:2014twa} or the sterile neutrino~\cite{Dodelson:1993je, Abazajian:2017tcc}, and the freeze-out of the dark matter as the dark Higgs~\cite{Escudero:2016gzx}. To solve such a tension in the simplest way, we allow the portal to evolve during the universe's expansion. However, there is no free lunch: evading the constraints comes with consequences. To be more specific, controlling the portal leaves significant imprints in our universe, which changes the early cosmological history and can be detected by experiments designed for general relativity testing and ultralight dark matter detection. In this work, we study the scalar-controlled kinetic mixing by meticulously exploring the top-down and bottom-up theories, cosmological history, $\text{keV}-\text{MeV}$ dark photon dark matter production, and experimental signals from both the dark photon dark matter and the nonrelativistic ultralight scalar relic. To vary the kinetic mixing, we couple the ultralight scalar $\phi$, the CP-even degree of freedom predicted by the string theory~\cite{Wu:1986ac, Maeda:1987ku, Damour:1990tw, Damour:1994zq, Damour:1994ya, Damour:2002mi,Damour:2002nv}, to the heavy fermionic messengers doubly charged under the standard model $U(1)$ and dark $U(1)$. Here, the constant kinetic mixing is eliminated when the $\mathbb{Z}_2$ symmetry under the dark charge conjugation is imposed. Given this, in the low energy limit, the varying-mixing operator $\phi F F'$ emerges, along with the scalar-photon coupling, such as $\phi F^2$ or $\phi^2 F^2$~\footnote{When $\phi \sim f$, $\phi$ is replaced by $f \sin(\phi/f)$ according to the discussions in Sec.~\ref{sec:UV_model}.
However, since the perturbative form is already enough for the late-time experiments where $\phi \ll f$, we use it in the whole text to simplify the notation.}. Initially, $\phi$ has the early misalignment opening the portal for the dark photon dark matter production with the kinetic mixing $\epsilon_\text{FI} \sim 10^{-12}$, which stems from the early-time $\mathbb{Z}_2$-breaking of the system. Afterward, $\phi$'s damped oscillation gradually and partially closes the portal, which stems from the late-time $\mathbb{Z}_2$-restoration. Through the evolution during the cosmological expansion, $\phi$ sets the benchmark kinetic mixing of the dark photon dark matter, which is free from stringent late-time constraints, such as stellar energy loss~\cite{Redondo:2008aa, Redondo:2013lna, An:2013yfc, Hardy:2016kme}, direct detection~\cite{An:2014twa, Bloch:2016sjj, XENON:2018voc, XMASS:2018pvs, XENON:2019gfn, XENON:2020rca, XENONCollaboration:2022kmb}, and late-time decay bounds~\cite{Pospelov:2008jk, Redondo:2008ec, Essig:2013goa, Slatyer:2016qyl}. At the same time, via the scalar-photon coupling, the ultralight scalar as the nonrelativistic relic in the mass range $10^{-33}\text{eV} \lesssim m_0 \ll \text{eV}$ changes the fine-structure constant, and the scalar as the mediator contributes to the extra forces between two objects. Based on these facts, the experiments such as the equivalence principle~(EP) violation test~\cite{Smith:1999cr, Schlamminger:2007ht, Berge:2017ovy}, clock comparison~\cite{Arvanitaki:2014faa, VanTilburg:2015oza, Hees:2016gop, Barontini:2021mvu, collaboration2021frequency}, resonant-mass detector~\cite{Arvanitaki:2015iga}, PTA~\cite{Kaplan:2022lmz}, CMB~\cite{Stadnik:2015kia, Hart:2019dxi}, and BBN~\cite{Stadnik:2015kia, Sibiryakov:2020eir, Bouley:2022eer} can be used to test the scalar-controlled kinetic mixing, and the experimental targets are set by the dark photon freeze-in. 
In the meantime, the experimental targets for $\text{keV}-\text{MeV}$ dark photon dark matter detections are set by the ultralight scalar mass. If the signals from the dark photon dark matter and the ultralight scalar experiments appear consistently, we can confidently declare the verification of our model. In addition, given the scalar-photon coupling in the strong region, the scalar's high-temperature evolution is affected by the standard model plasma, which sets the early displacement, modifies the start of oscillation, and even enhances the scalar's signal. To understand the whole setup classified under the exactness of the $\mathbb{Z}_2$ symmetry, one can refer to \Fig{fig:phiKM}. \begin{figure}[t] \centering \includegraphics[width=0.497\columnwidth]{Dynamical_Portal.pdf} \includegraphics[width=0.497\columnwidth]{UV_IR_Small.pdf} \caption{{\bf Left}: The schematic diagram of the cosmologically varying kinetic mixing. The dark and standard model sectors are connected through the kinetic mixing controlled by the CP-even scalar $\phi$, the subcomponent of the dark matter in the late universe. Based on this model, the energy flows from the dark sector to the standard model sector in the early universe for the dark matter production with the portal opened. In the late universe, with the portal partially closed, the dark matter is safe from stringent constraints. {\bf Right}: The cosmologically varying kinetic mixing in the UV and IR theories. In the UV theory, there are heavy messengers $\Psi$ and $\Psi^\prime$, both charged under $U(1)_Y$ and $U(1)_d$. Here, $\Psi$ and $\Psi^\prime$ carry the same electromagnetic charge but the opposite dark charge. To generate the varying kinetic mixing, $\Psi$ and $\Psi^\prime$ are coupled with the scalar $\Phi$ via the Yukawa interactions $y$ and $y^\prime$. To eliminate the constant kinetic mixing, the mass degeneracy between $\Psi$ and $\Psi^\prime$ is imposed, which is protected by the $\mathbb{Z}_2$ symmetry. 
In the IR theory, the nondynamical heavy messengers are integrated out, which induces the varying kinetic mixing~($\phi F F'$) and the scalar-photon coupling~($\phi F^2$ or $\phi^2 F^2$). In the nonperturbative region, $\phi$ should be replaced by $f\sin(\phi/f)$ according to \Eq{eq:Min_eps} and \Eq{alpha_change} because it is the angular part of $\Phi$. Based on whether the $\mathbb{Z}_2$ symmetry is slightly broken by the Yukawas or not, we classify the theory into two types with totally different phenomenologies: The type-A model with the linear scalar-photon coupling~($y \neq 0, y'=0$) and the type-B model with the quadratic scalar-photon coupling~($y'=-y$). These scalar-photon interactions lead to a nontrivial cosmological history caused by the thermal effect and the signals from the $\alpha_\text{em}$ variation and the equivalence principle violation. } \label{fig:phiKM} \end{figure} To protect the CP-even scalar's naturalness against the corrections from the heavy messengers, and inspired by former works on the discrete symmetries~\cite{Frieman:1995pm, Hook:2018jle, Hook:2019mrd, Das:2020arz, Dror:2020zru, Brzeminski:2020uhm, DiLuzio:2021pxd, Banerjee:2022wzk}, we embed the varying kinetic mixing into $N$ copies of the universes, where the $\mathbb{Z}_N$ symmetry rebuilds the global $U(1)$ shift symmetry in the discrete form. In such a $\mathbb{Z}_N$-protected model, the scalar's lowest order mass term becomes $\Phi^{N}/\Lambda^{N-4}$, which reveals the exponential suppression of the quantum correction. For example, we only need $N \sim 10$ to suppress the scalar mass correction all the way down to $10^{-33}\text{eV}$.
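As a rough order-of-magnitude sketch of this exponential suppression: the operator $\Phi^{N}/\Lambda^{N-4}$ induces a scalar mass scaling like $m \sim f (f/\Lambda)^{(N-4)/2}$ once $\mathcal{O}(1)$, loop, and coupling factors are dropped. The scaling and the scales $f$ and $\Lambda$ below are our own illustrative assumptions, not values taken from the text:

```python
import math

# Order-of-magnitude sketch of the Z_N suppression from Phi^N / Lambda^(N-4):
# the induced scalar mass scales like m ~ f (f/Lambda)^((N-4)/2), dropping
# O(1), loop, and coupling factors. The scales below are purely illustrative.
f   = 1e15   # eV (= 10^6 GeV, hypothetical symmetry-breaking scale)
Lam = 1e28   # eV (roughly the Planck scale)

def mass_estimate(N):
    return f * (f / Lam) ** ((N - 4) / 2)

# Each unit of N buys one extra factor of (f/Lambda)^(1/2) of suppression.
assert math.isclose(mass_estimate(10) / mass_estimate(9), (f / Lam) ** 0.5)

# With these scales, an O(10) value of N already reaches m <= 1e-33 eV.
N_needed = min(N for N in range(5, 40) if mass_estimate(N) <= 1e-33)
```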
Furthermore, to understand the additional cancellation from the exact $\mathbb{Z}_2$ symmetry and gain an accurate analytical result, we expand the $\mathbb{Z}_N$ Coleman-Weinberg potential, formally calculated to the leading order in \cite{Hook:2018jle, Brzeminski:2020uhm}, to arbitrary orders with two entirely different methods, i.e., the Fourier transformation and the cosine sum rules. Other topics of the varying kinetic mixing from the supersymmetric Dirac gaugino model and the dark matter models via other cosmologically varying portals are preliminarily discussed in our work. The remainder of this paper is organized as follows. In Sec.~\ref{sec:UV_model}, we build the minimal model for the scalar-controlled kinetic mixing and show that the scalar-photon coupling appears simultaneously. Based on whether the scalar-messenger couplings are $\mathbb{Z}_2$-invariant, we categorize the theory into the type-A model with the linear scalar-photon coupling and the type-B model with the quadratic scalar-photon coupling. In Sec.~\ref{sec:cos_his}, for the type-A and the type-B models, we systematically classify the scalar evolution jointly affected by the thermal effect, bare potential effect, and cosmological expansion. In Sec.~\ref{sec:dpdm_fi}, we discuss the dark photon dark matter freeze-in via the varying kinetic mixing. We also discuss the detection of the dark photon dark matter with the experimental targets set by the scalar mass, as well as the experiments on the non-relativistic ultralight scalar relic with targets set by the dark photon dark matter freeze-in. In Sec.~\ref{sec:zn_varykm}, we build the $\mathbb{Z}_N$ model to protect the scalar's naturalness from the heavy messengers, discuss the extra cancellation from the exact $\mathbb{Z}_2$, and calculate the $\mathbb{Z}_N$-invariant Coleman-Weinberg potential utilizing the Fourier transformation and the cosine sum rules.
In Sec.~\ref{sec:dirac_gaugino}, we generate the varying kinetic mixing and the dark axion portal simultaneously from the Dirac gaugino model. In Sec.~\ref{sec:other_portal}, we preliminarily explore the dark matter models via other cosmologically varying portals and their experimental signals. Finally, in Sec.~\ref{sec:conclusion} we summarize our results. We also provide a series of appendices containing the calculational details, such as the analytical solutions of the high-temperature scalar evolution in \App{appx:analyt_sol}, the freeze-in of the dark photon dark matter in \App{appx:dpfi}, and the exact expansion of the $\mathbb{Z}_N$ Coleman-Weinberg potential using the Fourier transformation and cosine sum rules in \App{appx:zn_vcw}. \section{Minimal Model} \label{sec:UV_model} In the beginning, let us recall the well-known constant kinetic mixing of the field strengths of the standard model $U(1)_Y$ and the dark $U(1)$, which can be written as \begin{eqnarray}\begin{aligned} \frac{\epsilon}{2} F^{\mu \nu} F'_{\mu \nu}. \end{aligned}\end{eqnarray} Here, the kinetic mixing can be generated at the one-loop level by a pair of vector-like fermions $\Psi$ and $\Psi'$ charged as $(e, e')$ and $(e, -e')$ under $U(1)_Y \times U(1)_d$~\cite{Holdom:1985ag}. In the low energy limit, the value of the constant kinetic mixing is \begin{equation} \epsilon = \frac{e e'}{6 \pi^2} \log\frac{M}{M'}, \end{equation} where $M$ and $M^\prime$ are the masses of $\Psi$ and $\Psi^\prime$, respectively. To build the varying kinetic mixing, we eliminate the constant kinetic mixing by imposing the mass degeneracy $M = M'$ and promote $\epsilon$ to a dynamical variable $\epsilon(x)$ by imposing the $\mathbb{Z}_2$ symmetry under the dark charge conjugation, i.e., \begin{eqnarray}\begin{aligned} \label{eq:CD_0} \mathcal{C}_d: A'(x) \rightarrow - A'(x), \quad \epsilon(x) \rightarrow -\epsilon(x).
\end{aligned}\end{eqnarray} By doing so, the constant kinetic mixing is forbidden because $F F'$ is odd under $\mathcal{C}_d$, whereas the varying kinetic mixing $\epsilon(x) F F'$ is permitted. To realize this from the top-down theory, we introduce the Lagrangian \begin{eqnarray}\begin{aligned} \label{eq:simpuv} \mathcal{L}_\text{UV} \supset y \Phi \bar{\Psi} \Psi + y' \Phi \bar{\Psi}' \Psi' + \text{h.c.} - M (\bar{\Psi} \Psi + \bar{\Psi}' \Psi') - \lambda \left( \left|\Phi\right|^2 - \frac{f^2}{2}\right)^2, \end{aligned}\end{eqnarray} where $\Phi$ is a neutral complex scalar and $y^{(\prime)}$, $M$, $\lambda$, $f$ are chosen to be real parameters for simplicity. In the ground state, the approximate global $U(1)$ symmetry of the potential is spontaneously broken by $\Phi=\frac{f}{\sqrt{2}} e^{i( \phi/f+c)}$ whose angular components include a CP-even pseudo-Goldstone boson $\phi$ and an arbitrary phase $c$. Knowing that the transformation on the real scalar $\phi\rightarrow -\phi-\left(2c-\pi\right)f$ is equivalent to $\Phi\rightarrow -\Phi^\dagger$, under which the Yukawa interactions in \Eq{eq:simpuv} flip signs, we choose $c=\pi/2$ to fix the phase factor throughout the paper. Given this, we define the dark charge conjugation of the UV theory as \begin{eqnarray}\begin{aligned} \label{eq:CD_1} \mathcal{C}_d: A \rightarrow A, \,\, A' \rightarrow - A', \,\, \phi \rightarrow -\phi, \,\, \Psi \leftrightarrow \Psi'. \end{aligned}\end{eqnarray} If $y'=-y$ and $ M'=M$, the Lagrangian is invariant under $\mathcal{C}_d$. We will see from the later discussion that as long as $\phi$ has a nonzero vacuum expectation value~(VEV), the dynamical kinetic mixing can be generated. 
This is because, in this case, $\Psi^{(\prime)}$'s effective masses become \begin{eqnarray}\begin{aligned} M^{(\prime)}(\phi) = M\left[ 1 + r^{(\prime)} \sin\frac{\phi}{f} \right],\,\,\,\, \text{where $r^{(\prime)} = \frac{\sqrt{2}\, y^{(\prime)} f}{M}$.} \label{eq:M_phi} \end{aligned}\end{eqnarray} From \Eq{eq:M_phi}, we know that the degeneracy of the effective masses is broken by $\phi$ when $y\neq y'$. In \Eq{eq:M_phi} and the rest of this paper, we use \enquote{$(\prime)$} to denote the physical quantities in the SM sector~(without \enquote{$\prime$}) and the dark sector~(with \enquote{$\prime$}). After integrating out $\Psi^{(\prime)}$, the Lagrangian of the effective kinetic mixing becomes \begin{equation} \label{eq:Min_eps} \mathcal{L}_\text{IR} \supset \frac{\epsilon}{2} F_{\mu \nu} F'^{\mu \nu}, \text{\quad where $\epsilon = \frac{\sqrt{2}\, e e'\left( y - y'\right)}{ 6 \pi^2 M} f \sin\frac{\phi}{f}$}. \end{equation} When $\phi \ll f$, the kinetic mixing in \Eq{eq:Min_eps} can be linearized as \begin{eqnarray}\begin{aligned} \epsilon \simeq \frac{\phi}{\Lambda_\text{KM}}, \text{\quad where $\Lambda_\text{KM} = \frac{6\pi^2 M}{\sqrt{2}\, e e' (y-y')}$.} \end{aligned}\end{eqnarray} Therefore, in our model, the kinetic mixing varies with $\phi$'s cosmological evolution. Since the crucial point of this model is that the kinetic mixing vanishes when $\phi$ relaxes to the minimum, we should align the potential's minimum with the kinetic mixing's zero. To realize this, we introduce the $\mathcal{C}_d$-even tadpole term \begin{eqnarray}\begin{aligned} \mathcal{L}_\text{UV} \supset -\frac{im_0^2 f}{\sqrt2} \Phi + \text{h.c.}, \end{aligned}\end{eqnarray} which gives rise to the potential \begin{eqnarray}\begin{aligned} \label{eq:V0} V_{0} = m_0^2 f^2 \left[ 1 - \cos\left(\frac{\phi}{f}\right) \right] \end{aligned}\end{eqnarray} when the phase factor is $c=\pi/2$.
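One can check explicitly that this tadpole reproduces \Eq{eq:V0}: substituting $\Phi = \frac{f}{\sqrt{2}}\, e^{i(\phi/f + \pi/2)}$ gives \begin{eqnarray}\begin{aligned} -\frac{im_0^2 f}{\sqrt{2}}\, \Phi + \text{h.c.} = -\frac{i m_0^2 f^2}{2}\, e^{i(\phi/f + \pi/2)} + \text{h.c.} = m_0^2 f^2 \cos\frac{\phi}{f}, \end{aligned}\end{eqnarray} so the corresponding potential is $-m_0^2 f^2 \cos(\phi/f)$, which matches $V_0$ up to the field-independent constant $m_0^2 f^2$ fixing $V_0(0)=0$.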
More generally, because the system is invariant under $\mathcal{C}_d$ listed in \Eq{eq:CD_1}, $V_0$ has to be an even function of $\phi$, while $\epsilon$ is an odd function of $\phi$. In this case, the phase difference between $\epsilon$ and $V_0$ is $(n+1/2)\pi$, where $ n=0,\pm 1,\pm 2\cdots$. Therefore, under the protection of the $\mathbb{Z}_2$ symmetry of the dark charge conjugation $\mathcal{C}_d$, the potential's minima are necessarily aligned with the zeros of the kinetic mixing. During the cosmological evolution, $\phi$ initially stays at a nonzero displacement, so the portal is open. As the universe expands, the pseudo-Goldstone boson $\phi$ begins a damped oscillation around zero, so the portal is partially closed. In today's universe, $\phi$ constitutes a nonrelativistic ultralight relic in the mass range $10^{-33}\text{eV} \lesssim m_0 \ll \text{eV}$, so the kinetic mixing retains a nonzero residual. More fundamentally, we can interpret the time-varying kinetic mixing from the perspective of symmetry, since the nonzero kinetic mixing results from the $\mathbb{Z}_2$ violation. In the early universe, the $\mathbb{Z}_2$ symmetry is spontaneously broken by $\phi$'s nonzero displacement. In the late universe, $\phi$ oscillates toward zero, and the $\mathbb{Z}_2$ symmetry is restored. Such early-time breaking and late-time restoration of the global symmetry is known as inverse symmetry breaking, studied in \cite{Weinberg:1974hy, Ramazanov:2021eya, Chang:2022psj, Ireland:2022quc} with other motivations. Let us classify the theory based on whether or not the Lagrangian is exactly invariant under the dark charge conjugation $\mathcal{C}_d$. As shown in \Eq{eq:CD_1}, when $y' = -y$, the $\mathbb{Z}_2$ symmetry of $\mathcal{C}_d$ is strictly preserved. When $y' \neq -y$, the $\mathbb{Z}_2$ symmetry is broken, which induces a two-loop constant kinetic mixing and a one-loop tadpole of $\Phi$.
Here, the small Yukawas strongly suppress the two-loop constant mixing, which is $\epsilon \sim (M/\Lambda_\text{KM})^2/e e'$. Canceling the $\Phi$ tadpole requires fine-tuning, an issue that is entirely resolved in the framework of the $\mathbb{Z}_N$-protected model discussed in Sec.~\ref{sec:zn_varykm}. Therefore, the $y' \neq -y$ case still has an approximate $\mathbb{Z}_2$ symmetry. Based on the discussion above, we consider the following two types of models in this work: \begin{eqnarray}\begin{aligned} \label{eq:typeAB_Z2} \text{Type-A: $y'=0$, Approximate $\mathbb{Z}_2$}. \quad \text{Type-B: $y' = - y$, Exact $\mathbb{Z}_2$.} \end{aligned}\end{eqnarray} Due to the $\phi \mh \Psi^{(\prime)}$ interaction, the scalar-photon coupling emerges from UV physics. At the one-loop level, the coupling between the CP-even scalar $\phi$ and the photon can be written as \begin{eqnarray}\begin{aligned} \mathcal{L}_\text{IR} \supset \frac{1}{4} \left(\frac{\Delta \alpha_\text{em}}{\alpha_\text{em}}\right) F_{\mu \nu} F^{\mu \nu}, \end{aligned}\end{eqnarray} where \begin{eqnarray}\begin{aligned} \label{alpha_change} \frac{\Delta \alpha_\text{em}}{\alpha_\text{em}} = - \frac{e^2}{6 \pi^2} \left[ \log M(\phi) + \log M'(\phi) \right] \supset \frac{\sqrt{2} e^2}{6 \pi^2} \left[- \frac{\left(y+y'\right) f}{M} \sin\left(\frac{\phi}{f}\right)+\frac{\left(y^2 + y'^2\right) f^2}{\sqrt{2} M^2} \sin^2\left(\frac{\phi}{f}\right)\right]. \end{aligned}\end{eqnarray} Utilizing the classification in \Eq{eq:typeAB_Z2}, we have \begin{equation} \text{Type-A: $y'=0$,\,\, $\frac{\Delta \alpha_\text{em}}{\alpha_\text{em}} \sim \frac{\alpha_\text{em} y \phi}{M}$,\quad \quad \quad Type-B: $y'=-y$,\,\, $\frac{\Delta \alpha_\text{em}}{\alpha_\text{em}} \sim \frac{\alpha_\text{em} y^2 \phi^2}{M^2 }$}, \label{TypeAB_Def} \end{equation} where the type-A and type-B models have linear and quadratic scalar-photon couplings, respectively.
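The expansion in \Eq{alpha_change} follows from the effective masses in \Eq{eq:M_phi} via $\log(1+x) \simeq x - x^2/2$ with $x = r^{(\prime)} \sin(\phi/f)$: \begin{eqnarray}\begin{aligned} \log M(\phi) + \log M'(\phi) \simeq 2\log M + \left(r + r'\right)\sin\frac{\phi}{f} - \frac{r^2 + r'^2}{2}\sin^2\frac{\phi}{f}, \end{aligned}\end{eqnarray} so the linear term survives unless $r' = -r$~(i.e., $y' = -y$), in which case only the quadratic piece of the type-B model remains.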
To compare with the experiments testing the fine-structure constant variation and the equivalence principle violation, we define the dimensionless constants $d_{\gamma,i} \,\,(i=1,2)$ through \begin{eqnarray}\begin{aligned} \label{d1_d2_def} \frac{\Delta \alpha_\text{em}}{\alpha_\text{em}} \coloneqq \left( \frac{\sqrt{4\pi} \phi}{m_\text{pl}} \right)^i \frac{d_{\gamma,i}}{i} \quad \quad \,\, (i=1,2), \end{aligned}\end{eqnarray} where the indices ``$i=1$'' and ``$i=2$'' denote the type-A and type-B models, respectively. Comparing \Eq{d1_d2_def} with \Eq{TypeAB_Def}, we have \begin{equation} \label{d1_d2} d_{\gamma,i} \sim \left(\frac{m_\text{pl}}{e' \Lambda_\text{KM}}\right)^i \,\,\,\,\,(i=1,2). \end{equation} Some other papers quantify the fine-structure constant variation through the notation $\Delta \alpha_\text{em}/\alpha_\text{em} = \phi^i/\Lambda_{\gamma,i}^i \,\,(i=1,2)$. Combining this notation with \Eq{TypeAB_Def}, we have $\Lambda_{\gamma,i} \sim e' \Lambda_\text{KM} \,\,(i=1,2)$, which means that $e'$ determines the hierarchy between the effective energy scale of the varying kinetic mixing and that of the varying fine structure constant. In the rest of this paper, we use the first notation, i.e., $d_{\gamma,i}\,\,(i=1,2)$. From \Eq{d1_d2}, we see that for a fixed $\phi F F'$ operator inducing the effective kinetic mixing, a smaller $e'$ requires a larger $y^{(\prime)}$, so the testable signals of the $\alpha_\text{em}$ variation become stronger. Such a small dark gauge coupling can be naturally generated in the large volume scenario in string compactification~\cite{Burgess:2008ri, Goodsell:2009xc, Cicoli:2011yh}. To understand the setup of our model intuitively, one can refer to \Fig{fig:phiKM}. The left panel shows that when the $\mathbb{Z}_2$ symmetry is imposed, the constant kinetic mixing is canceled, but the scalar-controlled kinetic mixing survives. In this case, as $\phi$ evolves during the cosmological evolution, the kinetic mixing becomes time-dependent.
This provides a novel mechanism to generate a small but non-zero kinetic mixing. The right panel reveals that the scalar-photon coupling is also generated as a byproduct when UV physics is considered. Based on the exactness of the $\mathbb{Z}_2$ symmetry, the theory can be classified as the type-A model with the linear scalar-photon coupling and the type-B model with the quadratic scalar-photon coupling. We will see in the later discussion that such scalar-photon couplings affect $\phi$'s evolution via the thermal effect from the SM plasma. They also change the fine-structure constant and violate the equivalence principle, which provides essential prospects for the experimental tests. \section{Cosmological History} \label{sec:cos_his} In this section, we jointly discuss the ultralight scalar's cosmological evolution, which is affected by the scalar bare potential, the thermal effect, and the cosmological expansion. According to \cite{Brzeminski:2020uhm, Kapusta:2006pm}, the lowest order thermal contribution containing $\alpha_\text{em}$ is at the two-loop level, and the free energy contribution coming from it can be written as \begin{eqnarray}\begin{aligned} \label{2loop_F} F_T \simeq \frac{5 \pi \sum_i q_i^2}{72} \alpha_\text{em} T^4 \times \mathcal{S}(T), \,\,\,\,\, \text{where}\,\,\, \mathcal{S}(T)= \left\{ \begin{aligned} & 1 & (T \gtrsim m_e)\\ & \frac{18}{5 \pi^3} \frac{m_e^2}{T^2} e^{-2m_e/T} & (T \ll m_e) \end{aligned}. \right. \end{aligned}\end{eqnarray} In \Eq{2loop_F}, the suppression factor $\mathcal{S}(T)$ is $1$ when $e^{\pm}$ (as well as other species of heavier particles carrying the $U(1)_\text{em}$ charges) are relativistic but decreases exponentially after $e^{\pm}$ become non-relativistic. The factor $\sum_i q_i^2$~($q_i$ is the electric charge of the particle ``$i$'') quantifies the thermal contribution to the total free energy when $U(1)_\text{em}$ charged particles are relativistic in the thermal bath.
As shown in \Eq{alpha_change}, when $\phi$ has a nonzero VEV, the fine structure constant is modified, based on which $\phi$'s thermal potential can be obtained by replacing $\alpha_\text{em}$ in \Eq{2loop_F} by $\alpha_\text{em}(\phi)$. After combining \Eq{alpha_change} and \Eq{2loop_F}, we have \begin{eqnarray}\begin{aligned} \label{TypeAB_VT} \left\{ \begin{aligned} \text{Type-A:} & \quad V_T \simeq -m_T^2 f^2 \sin\frac{\phi}{f}, & \text{where} &\,\,\, \frac{m_T}{\mathcal{S}^{1/2}} \sim \alpha_\text{em} r^{1/2} \frac{T^2}{f}\\ \text{Type-B:} & \quad V_T \simeq \frac{1}{2} m_T^2 f^2 \sin^2 \frac{\phi}{f}, & \text{where} &\,\,\, \frac{m_T}{\mathcal{S}^{1/2}} \sim \alpha_\text{em} r \frac{T^2}{f} \end{aligned}, \right. \end{aligned}\end{eqnarray} where the definition of $r$ can be found in \Eq{eq:M_phi}. Here, we choose $T_\text{rh}$ to be much smaller than the heavy fermion mass $M$, so $\Psi$'s contribution to the thermal potential is exponentially suppressed. In addition, because we do not include the dark electrons and positrons, the thermal effect from the dark sector is also negligible. \Fig{fig:V_AB} shows how the potential changes as the temperature varies and how $\phi$ moves under different circumstances. From the lighter~(yellow) to the darker color~(dark red), the ratios of the thermal mass to the bare mass are $m_T/m_0=5,4,3,2,1,0$. For the bare potential, $V_0$ has its minimum at $\phi/f=0$. As for the thermal potential, the local minima of the type-A and type-B models are~(here $n=0, \pm 1, \pm 2, \cdots$) \begin{equation} \label{eq:TypeAB_VT_min} \text{Type-A:} \,\,\, \frac{\phi_{\min}}{f} = \arctan\left(\frac{m_T^2}{m_0^2}\right) + 2n\pi , \,\,\,\,\, \text{Type-B:} \,\,\, \frac{\phi_{\min}}{f} = \left\{ \begin{aligned} & n \pi & \,\,\, (m_T > m_0)\\ & 2 n \pi & \,\,\, (m_T \leq m_0) \end{aligned} \right. . \end{equation} In the following discussion, we focus on the range $- 2 \pi f \leq \phi \leq 2 \pi f$ without loss of generality.
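The type-A entry of \Eq{eq:TypeAB_VT_min} can be checked by extremizing the total potential $V_0 + V_T$ from \Eq{eq:V0} and \Eq{TypeAB_VT}: \begin{eqnarray}\begin{aligned} \partial_\phi \left(V_0 + V_T\right) = m_0^2 f \sin\frac{\phi}{f} - m_T^2 f \cos\frac{\phi}{f} = 0 \quad \Rightarrow \quad \tan\frac{\phi_{\min}}{f} = \frac{m_T^2}{m_0^2}, \end{aligned}\end{eqnarray} where the $2n\pi$ branch of the arctangent is selected by the condition $\partial_\phi^2 V > 0$.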
For the type-A model, since $m_T \gg m_0$ in the early epoch, the minimum is $\phi_{\min} \simeq \pi f/2$. When $m_T \ll m_0$, the potential's minimum continuously shifts to zero. For the type-B model, when $m_T > m_0$, within the $2\pi f$ periodicity there are three local minima, i.e., $\phi_{\min} = 0, \pm \pi f$. When $m_T \leq m_0$, only $\phi_{\min} =0 $ is the minimum. We should note that during the entire process, $\phi$ evolves classically without tunneling: Even though $\phi$ is initially located inside the false vacuum $ \pi f/2 \lesssim \phi_\text{rh} \lesssim 3\pi f/2$, because $f$ is large, the vacuum decay from $\pi f$ to $0$ is highly suppressed such that it is much slower than the universe's expansion~\cite{Coleman:1977py, Callan:1977pt, Linde:1981zj, Lee:1985uv, Duncan:1992ai, Intriligator:2006dd, Pastras:2011zr}. \begin{figure}[t] \centering \includegraphics[width=0.497\columnwidth]{ThermalHistory_eta_ll1.pdf} \hfill \includegraphics[width=0.497\columnwidth]{ThermalHistory_eta_gg1.pdf} \caption{The evolution of $m_T$~(red), $m_0$~(blue) and $H$~(orange) as the universe expands. Because $H$ and $m_T$ are both proportional to $T^2$ when $T \gtrsim m_e$, the orange and red lines are approximately parallel to each other in the $\log$-$\log$ diagram until $T \sim m_e$. Here, $\eta \sim m_T/H$ when $T \gtrsim m_e$, which is defined in \Eq{eq:eta_def}. {\bf Left}:\, $\eta \ll 1$. Because the thermal effect from the SM bath is negligible, the ultralight scalar's evolution can be classified as the standard misalignment. In this case, $\phi$ stays constant at the early stage and begins the oscillation when $H \sim m_0$, which is labeled as $T_{\text{osc}}$. {\bf Right}:\, $\eta \gg 1$. The thermal effect is important for the scalar's evolution. In the plot, point ``$T_*$'' and point ``$Q$'' denote the moments when $m_T = m_0$ and $H \sim m_T$, respectively. Here, $H_Q \sim 10^{-16}/\log^2 \eta \,\, \text{eV} $.
When $T\gg T_*$, $\phi$ converges to the minimum of the thermal potential obeying the power law $\abs{\phi - \phi_{\min}} \propto T^{1/2}$ where $\phi_{\min} = \pi f/2$ for the type-A model and $\phi_{\min} = \pm \pi f$ for the type-B model. We merely show the $m_0 \gtrsim H_Q$ case, because if $m_0 \lesssim H_Q$, $\phi$'s movement is nothing more than the late-time standard misalignment with the initial condition fixed by the early time thermal effect. For the type-A model, when $T \gtrsim m_e$, $\phi$ tracks the potential minimum $\phi_{\min} = f \arctan(m_T^2/m_0^2)$. As the temperature drops far below the electron mass, $\phi$ cannot track the minimum and begins to oscillate. For the type-B model, we focus on the $\pi f/2 \lesssim \phi_\text{rh} \lesssim 3 \pi f/2$ case. When $T \gg T_*$, $\phi$ is trapped inside the local minimum $\phi_{\min} = \pi f$. Thereafter, when $T \lesssim T_*$, $\phi$ begins oscillating around the bare minimum $\phi_{\min} = 0$. Here we have $T_* \ll T|_{3H=m_0}$, which means the oscillation is postponed. Therefore, the scalar is in the phase of trapped misalignment.} \label{fig:morHEvo} \end{figure} Knowing from \Eq{TypeAB_VT} that $m_T$ has the same temperature power law as $H$ when $T\gtrsim m_e$~($\mathcal{S} \simeq 1$), we define a dimensionless quantity \begin{equation} \label{eq:eta_def} \eta \coloneqq \frac{2 m_T}{\mathcal{S}^{1/2} H} \sim \alpha_\text{em} \frac{m_\text{pl}}{f} \times \left\{ \begin{aligned} & r^{1/2} & \text{(Type-A)}\\ & r & \text{(Type-B)} \end{aligned}\,\,\, \right. \end{equation} to classify $\phi$'s evolution. Here, $r \sim y f/M$, which is defined in \Eq{eq:M_phi}. The motion of $\phi$ is underdamped if $\eta > 1$, because the thermal effect dominates over the universe's expansion; In contrast, if $\eta < 1$, $\phi$'s motion is overdamped under the Hubble friction. $\eta=1$ is the critical value separating the two moving patterns mentioned above. 
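The parametric estimates in \Eq{eq:eta_def} can be recovered by inserting $H \sim T^2/m_\text{pl}$~(radiation domination, dropping $g_*$ factors) and the thermal masses from \Eq{TypeAB_VT}; for the type-A model, \begin{eqnarray}\begin{aligned} \eta = \frac{2 m_T}{\mathcal{S}^{1/2} H} \sim \frac{\alpha_\text{em}\, r^{1/2}\, T^2/f}{T^2/m_\text{pl}} \sim \alpha_\text{em}\, r^{1/2}\, \frac{m_\text{pl}}{f}, \end{aligned}\end{eqnarray} and analogously with $r^{1/2} \rightarrow r$ for the type-B model. The $T^2$ dependence cancels, which is why $\eta$ is temperature-independent when $T \gtrsim m_e$.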
To relate $\eta$ to experimental observables, which are quantified by $d_{\gamma,i}~(i=1,2)$, we write $\eta$ as \begin{equation} \label{eta_from_de} \text{Type-A: $\eta \sim \left(\frac{\alpha_\text{em} d_{\gamma,1} m_\text{pl}}{f} \right)^{\frac{1}{2}}$, \,\, Type-B: $\eta \sim \left( \alpha_\text{em} d_{\gamma,2} \right)^{\frac{1}{2}}$}. \end{equation} In the high-$T$ limit where $m_T$ dominates over $m_0$, $\phi$'s thermal evolution can be solved in the linear approximation of the equation of motion, which can be found in \App{appx:analyt_sol}. Since we compare $m_T$ and $m_0$ to determine whether the thermal effect dominates over the effect from the bare potential, we define the critical temperature $T_*$ at which $m_T = m_0$. Combining \Eq{TypeAB_VT} and \Eq{eq:eta_def}, we have \begin{eqnarray}\begin{aligned} \label{eq:T_thermal} T_* \sim \max\left[0.1 m_e, \left(\frac{m_0 \,m_\text{pl}}{\eta}\right)^{1/2} \right], \end{aligned}\end{eqnarray} where the concrete formulas of $\eta$ for the type-A and the type-B models can be found in \Eq{eta_from_de}. Now, let us briefly explain the meaning of \Eq{eq:T_thermal}. For $m_T$ to be smaller than $m_0$, we only need one of the following two conditions to be satisfied: The first condition is $T \ll m_e$ such that $m_T$ is exponentially suppressed. The second condition is that the unsuppressed part of the scalar thermal mass, i.e., $m_T/\mathcal{S}^{1/2}$, is smaller than $m_0$. Since $T_*$ is defined as the temperature when $m_T = m_0$, we use $T \lessgtr T_*$ and $m_T \gtrless m_0$ interchangeably. In \Fig{fig:phi_evo}, which shows the typical numerical solutions of the $\phi$ evolution, we define the dimensionless temperature $\widetilde{T} \coloneqq T/T|_{3H = m_0}$~(marked by the tilde); then we have the dimensionless critical temperature $\widetilde{T}_* \sim \max[m_e/(m_0 m_\text{pl})^{1/2},1/\eta^{1/2}]$ for the moment when $m_T = m_0$.
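The second entry of \Eq{eq:T_thermal} follows from setting $m_T = m_0$ in \Eq{TypeAB_VT} with $\mathcal{S} \simeq 1$ and using \Eq{eq:eta_def} to trade the model parameters for $\eta$~(shown here for type-A; type-B is analogous with $r^{1/2} \rightarrow r$): \begin{eqnarray}\begin{aligned} m_0 \sim \alpha_\text{em}\, r^{1/2}\, \frac{T_*^2}{f} \sim \eta\, \frac{T_*^2}{m_\text{pl}} \quad \Rightarrow \quad T_* \sim \left(\frac{m_0\, m_\text{pl}}{\eta}\right)^{1/2}, \end{aligned}\end{eqnarray} while the $0.1\, m_e$ floor accounts for the exponential suppression $\mathcal{S}(T)$, which drives $m_T$ below $m_0$ shortly after $T$ falls below $m_e$ regardless of $\eta$.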
Before discussing the scalar's evolution, let us give the following definitions: \begin{eqnarray}\begin{aligned} \label{eq:misalignment_classify} \text{\bf Standard Misalignment:}\,\, \eta \ll 1, \quad \quad \text{\bf Thermal Misalignment:}\,\, \eta \gtrsim 1. \end{aligned}\end{eqnarray} These definitions are based on the value of $\eta$, which quantifies how large the thermal effect is compared with the effect of the bare potential shown in \Eq{eq:V0}. When $\eta$ is much smaller than one, the thermal effect is negligible, so it can be classified as the standard misalignment in which one solely needs to compare the effects of the bare potential and the universe expansion. When $\eta$ is of order unity or even larger, the thermal effect can never be omitted, and one should consider all three effects~(thermal effect, bare potential, universe expansion) jointly. Now, let us discuss each situation one by one. \begin{figure}[t] \centering \begin{tikzpicture} \node at (-9,0){\includegraphics[width=0.43\columnwidth]{TypeA_eta_ll1.pdf}}; \node at (0, -0.025) {\includegraphics[width=0.429\columnwidth]{TypeB_eta_ll1.pdf}}; \node at (-9.12,-7.37){\includegraphics[width=0.44\columnwidth]{TypeA_eta_gg1.pdf}}; \node at (0.015,-7.3) {\includegraphics[width=0.423\columnwidth]{TypeB_eta_geq1.pdf}}; \end{tikzpicture} \caption{The typical scalar evolution trajectories in the early universe. The lighter~(darker) color in these diagrams denotes higher~(lower) temperature. From the yellow line to the dark red line, $m_T/m_0 = 5,4,3,2,1,0$. Here, $\eta$ quantifies the thermal effect, whose definition can be found in \Eq{eq:eta_def}. {\bf Upper left and upper right}: For both type-A and type-B models, when $\eta \ll 1$, the thermal effects are negligible. Here, $\phi$ is nearly frozen at the early stage~(gray arrowed line from point \rom{1} to point \rom{2}). 
When $3H \simeq m_0$~(point \rom{2}), $\phi$ begins the damped oscillation with the amplitude $\abs{\phi} \propto T^{3/2}$~(black arrowed line from point \rom{2} to point \rom{3}) and converges to the origin~(point \rom{3}). {\bf Lower left}: For the type-A model with $\eta \gg 1$, in the high-temperature environment, $\phi$ converges to $\pi f/2$~(point \rom{1}) following $\abs{\phi - \pi f/2} \propto T^{1/2}$. As the temperature falls, $m_T/m_0$ also decreases. During the stage when $T \gtrsim m_e$, the variation of the thermal potential is adiabatic, so $\phi$ tracks the potential minimum $\phi_{\min} = f \arctan(m_T^2/m_0^2) $ as shown in \Eq{eq:TypeAB_VT_min}~(gray arrowed line from point \rom{1} to point \rom{2}). When $T \ll m_e$, the overly rapid variation of the thermal potential violates the adiabatic condition, so $\phi$ can no longer stay at the minimum and starts to oscillate following $\abs{\phi} \propto T^{3/2}$. We do not show this stage of $\phi$'s movement in the diagram because the amplitude is too small to visualize compared with $f$. {\bf Lower right}: For the type-B model with $\eta \gtrsim 1$ and $\phi$ initially trapped inside the wrong vacuum, $\phi$ undergoes the damped oscillation $\abs{\phi - \pi f} \propto T^{1/2}$ when $T \gg T_*$ and converges to $\pi f$. When $T \lesssim T_*$, $V''|_{\phi = \pi f} \lesssim 0$, so $\phi$ begins the damped oscillation with $\abs{\phi} \propto T^{3/2}$~(black arrowed line from point \rom{1} to point \rom{2}). } \label{fig:V_AB} \end{figure} \subsection{Standard Misalignment: $\eta \ll 1$}\label{subsec:eta<<1} In this case, when $3H \gtrsim m_0$, $\phi$ obeys \begin{equation} \label{eq:approx_delphi_eta<<1} \phi - \phi_{\min} \propto T^{\frac{\eta^2}{4}}, \end{equation} which means $\phi$ is nearly frozen in the early universe.
To have an overall understanding of the cosmological history, one can refer to the left panel of \Fig{fig:morHEvo}: The $H$-line~(orange) is higher than the $m_T$-line~(red) during the whole process, meaning that the thermal effect is negligible, so one only needs to focus on the $H$-line~(orange) and the $m_0$-line~(blue). After the crossing of the $H$-line and the $m_0$-line, the scalar begins the damped oscillation whose amplitude obeys the power law $\abs{\phi} \propto T^{3/2}$. The upper left and upper right panels of \Fig{fig:V_AB} show how $\phi$ moves for the type-A and type-B models separately when $\eta\ll 1$: From point \rom{1} to point \rom{2}, $\phi$ remains nearly constant. This means that the initial field displacement determines $\phi$'s starting oscillation amplitude $\abs{\phi}_\text{osc}$. Here, we focus on the model with the natural initial condition, i.e., $\abs{\phi}_\text{osc}/{f} \sim \mathcal{O}(1)$. At late times, $\phi$ begins to oscillate at the temperature \begin{equation} \label{eq:T_phi_eta<<1} T_{\text{osc}} = T|_{3H=m_0} \sim \text{few} \times 10^{-1} \text{eV} \times \left( \frac{m_0}{10^{-28}\text{eV}}\right)^{\beta_T}, \quad \text{where} \,\, \beta_T = \left \{ \begin{aligned} \frac{1}{2} & \quad \,\,\, (m_0 \gtrsim 10^{-28}\text{eV}) \\ \frac{2}{3} & \quad \,\,\, ( 10^{-33} \text{eV} \lesssim m_0 \lesssim 10^{-28}\text{eV}) \end{aligned} \right. . \end{equation} $\beta_T$ in \Eq{eq:T_phi_eta<<1} and $\beta_\phi$ mentioned later in \Eq{eq:phi_osc_eta<<1} are determined by $H$'s power law in $T$, which is $T^{2}$ in the radiation-dominated universe and $T^{3/2}$ in the matter-dominated universe. In \Eq{eq:T_phi_eta<<1}, we choose $m_0 \simeq 10^{-28}\text{eV}$ as the reference value simply because the scalar oscillation happens at the matter-radiation equality $T \sim \text{eV}$ for such a scalar mass. Before and after $T\sim \text{eV}$, $H$ has different power laws in $T$.
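The two values of $\beta_T$ can be recovered by solving $3H(T_\text{osc}) = m_0$ with the appropriate expansion law~(dropping $g_*$ factors): \begin{eqnarray}\begin{aligned} H \sim \frac{T^2}{m_\text{pl}} \,\,\text{(radiation)} \,\Rightarrow\, T_\text{osc} \propto m_0^{1/2}, \quad\quad H \propto T^{3/2} \,\,\text{(matter)} \,\Rightarrow\, T_\text{osc} \propto m_0^{2/3}, \end{aligned}\end{eqnarray} matching $\beta_T = 1/2$ and $\beta_T = 2/3$ in \Eq{eq:T_phi_eta<<1}, respectively.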
Such oscillation is labeled by the black oscillatory lines from point \rom{2} to point \rom{3} in \Fig{fig:V_AB} and is also represented by the blue lines in \Fig{fig:phi_evo}. Because inflation smears out the field anisotropy, $\phi$ undergoes spatially homogeneous oscillation, which can be written as \begin{equation} \label{phi_evo_eta<<1} \phi(t) \simeq \abs{\phi} \cos\left(m_0 t\right), \,\, \text{where $\abs{\phi} = \frac{\sqrt{2 \rho_\phi}}{m_0} \propto T^{3/2}$.} \end{equation} From~\cite{Preskill:1982cy, Arias:2012az} one knows that $\phi$ satisfies $\rho_\phi \propto T^3$ and $w_\phi = p_\phi/\rho_\phi \simeq 0$, so $\phi$ is part of the dark matter with the fraction $\mathcal{F} = \Omega_\phi/\Omega_\text{dm}$. Without loss of generality, we choose $\mathcal{F}=10^{-3}$ as a benchmark value, such that astrophysical and cosmological measurements like the Lyman-$\alpha$ forest~\cite{Irsic:2017yje, Kobayashi:2017jcf, Armengaud:2017nkf, Zhang:2017chj, Nori:2018pka, Rogers:2020ltq}, CMB/LSS~\cite{Hlozek:2014lca, Lague:2021frh}, galaxy rotation curves~\cite{Bernal:2017oih, Robles:2018fur, Bar:2018acw, Bar:2019bqz, Bar:2021kti}, and the observations of the ultra-faint dwarf galaxies~\cite{Dalal:2022rmp} do not exclude the parameter space where $m_0 \lesssim 10^{-19}\text{eV}$. After substituting \Eq{eq:T_phi_eta<<1} into \Eq{phi_evo_eta<<1}, one can easily get the starting oscillation amplitude \begin{equation} \label{eq:phi_osc_eta<<1} \abs{\phi}_{\text{osc}} \sim \text{few} \times 10^{16}\text{GeV} \times \left( \frac{\mathcal{F}}{10^{-3}}\right)^{1/2} \left(\frac{10^{-28}\text{eV}}{ m_0}\right)^{\beta_\phi}, \quad \,\,\text{where}\,\,\beta_\phi = \left \{ \begin{aligned} \frac{1}{4} & \quad \,\,\, (m_0 \gtrsim 10^{-28}\text{eV})\\ 0 & \quad \,\,\, ( 10^{-33} \text{eV} \lesssim m_0 \lesssim 10^{-28}\text{eV}) \end{aligned} \right. 
\end{equation} In the case where $m_0 \gg 10^{-25}\text{eV}$, $\phi$'s de Broglie wavelength is much smaller than the scale of the Milky Way halo, so $\phi$ behaves like a point-like particle similar to other cold DM particles. For this reason, $\phi$'s local density is $\rho_{\phi,\text{local}} \simeq \mathcal{F} \rho_\text{local}$, where $\rho_\text{local} \simeq 0.4 \text{GeV}/\text{cm}^3$ is DM's local density near earth. By effectively applying an enhancement factor \begin{equation} \label{eq:Enhance_Factor} \mathcal{E} = \left(\frac{\rho_\text{local}}{\rho_{\text{average}}} \right)^{1/2} \simeq 6 \times 10^2, \end{equation} where $\rho_{\text{average}} = \rho_c \Omega_\text{DM} \simeq 1.3 \,\text{keV}/\text{cm}^{3} $, to $\abs{\phi}$ in \Eq{phi_evo_eta<<1}, we get $\phi$'s amplitude today near the earth, which is $\abs{\phi}_0 \simeq (2 \mathcal{F} \rho_\text{local})^{1/2}/m_0$. Here, $\abs{\phi}_0$ denotes $\phi$'s local oscillation amplitude today. In the opposite case, $m_0 \ll 10^{-25}\text{eV}$, $\phi$ cannot be trapped inside the Milky Way halo's gravitational potential well. In this case, today's $\phi$ field is homogeneous, so the enhancement factor in \Eq{eq:Enhance_Factor} should not be included, from which we have $\abs{\phi}_0 = (2 \mathcal{F} \rho_\text{average})^{1/2}/m_0$. $\phi$'s oscillation amplitude in the intermediate mass range requires numerical simulation, which is left for future exploration. \begin{figure}[t] \centering \begin{tikzpicture} \node at (-9.2,0){\includegraphics[width=0.472\columnwidth]{TypeA_phi_Evolution.png}}; \node at (0, -0.01) {\includegraphics[width=0.48\columnwidth]{TypeB_phi_Evolution.png}}; \end{tikzpicture} \caption{$\phi$'s typical numerical solutions. In these two plots, we define the dimensionless quantity $\widetilde{T} \coloneqq T/T|_{3H=m_0}$, so at the point $\widetilde{T} = 1$ we have $3H =m_0$. The red, green, and blue lines represent the $\eta \gg 1$, $\eta \sim 1$, and $\eta \ll 1$ cases, respectively.
{\bf Left}: The type-A model~($y'=0$). For the red line~($\eta\gg 1$), when $\widetilde{T} \gg \widetilde{T}_*$, $\phi$ undergoes the damped oscillation scaled as $\abs{\phi- \pi f/2} \propto \widetilde{T}^{1/2}$, so it converges to $\pi f/2$. Recalling $T_*$'s definition in \Eq{eq:T_thermal}, we label the point where $T \simeq T_*$ as ``$m_T \simeq m_0$''. As the temperature decreases, $\phi$ tracks the potential's minimum $\phi_{\min} = f \arctan(m_T^2/m_0^2)$ until the temperature drops below the electron mass. After that, $\phi$ cannot track the rapidly changing potential and begins the damped oscillation scaled as $\abs{\phi} \propto \widetilde{T}^{3/2}$. In this plot, we have not shown $\phi$'s oscillation afterward because its amplitude is too small to be visualized. For the green line~($\eta \sim 1$), when $\widetilde{T}\gtrsim 1$, $\phi$ gradually slides to the thermal potential minimum $\pi f/2$ following \Eq{eq:approx_delphi_eta}. When $\widetilde{T} \sim 1$, $\phi$ begins the damped oscillation obeying $\abs{\phi} \propto \widetilde{T}^{3/2}$. The blue line~($\eta \ll 1$) represents the standard misalignment where the thermal effect is negligible during the process. There, $\phi \simeq \text{const}$ until $\widetilde{T}\simeq 1$. After that, $\phi$ begins to oscillate following $\abs{\phi} \propto \widetilde{T}^{3/2}$. {\bf Right}: The type-B model~($y'=-y$). The red line represents the situation where $\eta\gg 1$ and $\phi$'s initial condition is $\pi f/2 \lesssim \phi_\text{rh} \lesssim 3 \pi f/2$. In this case, when $\widetilde{T} \gtrsim \widetilde{T}_*$, $\phi$ undergoes the damped oscillation $\abs{\phi- \pi f} \propto \widetilde{T}^{1/2}$ during which $\phi$ converges to $\pi f$. When $\widetilde{T} \lesssim \widetilde{T}_*$, or equivalently, $m_T \lesssim m_0$, we have $V''|_{\phi = \pi f} \lesssim 0$, so $\phi$ rolls down from $\pi f$ and oscillates following $\abs{\phi} \propto \widetilde{T}^{3/2}$.
We can see that the red line's oscillation is postponed compared with the standard misalignment beginning at $\widetilde{T} \simeq 1$. For this reason, the red line is in the phase of the trapped misalignment. The green line represents the situation where $\eta \sim 1$ and $\pi f/2 \lesssim \phi_\text{rh} \lesssim 3 \pi f/2$. In this case, $\phi$ gradually slides to the thermal potential minimum $\pi f$ as shown in \Eq{eq:approx_delphi_eta}. At the point $\widetilde{T} \simeq 1$, $\phi$ begins the damped oscillation scaled as $\abs{\phi} \propto \widetilde{T}^{3/2}$. The blue line represents the standard misalignment mechanism where the thermal effect is subdominant to the bare potential effect, i.e., $\eta \ll 1$. Similar to the discussion of the blue line in the left panel, $\phi$ stays constant in the early universe and then begins the oscillation when $\widetilde{T} \simeq 1$. Even though the red lines in the left and right panels are qualitatively different, the green and blue lines are quite similar in terms of the oscillation temperature~($\widetilde{T} \simeq 1$). The main difference between the green and blue lines is that the green lines' initial displacements are thermally determined to be $\abs{\phi}_\text{osc} \simeq \abs{\phi_{\min}}$, while the blue lines' oscillation amplitudes depend on their initial conditions. } \label{fig:phi_evo} \end{figure} \subsection{Thermal Misalignment: $\eta \gtrsim 1$}\label{subsec:eta>=1} As long as $\eta$ is of order unity or larger, the thermal effect from the SM plasma plays a decisive role in $\phi$'s evolution in the early universe. To understand the combined effects on $\phi$'s movement from the thermal potential, the bare potential, and the universe's expansion, one can refer to the right panel of \Fig{fig:morHEvo}: At the early stage, since $m_T$~(red) and $H$~(orange) are both proportional to $T^2$, their lines are approximately parallel in the plot with the $\log$-$\log$ scale.
Here we neglect the $g_*$ variation, which only leads to an $\mathcal{O}(1)$ modification. In the plot, we label the crossing point of the $m_T$-line~(red) and the $m_0$-line~(blue) with ``$T_*$'', which is already defined in \Eq{eq:T_thermal}. When $T \lesssim T_*$, the effect from the bare potential dominates over the effect from the thermal potential. When $T \ll m_e$, the $m_T$-line drops rapidly following $e^{-m_e/T}$ and crosses the $H$-line. We label the crossing point of the $m_T$-line and the $H$-line as ``$Q$'', and we have \begin{eqnarray}\begin{aligned} \label{eq:TQ_HQ} T_Q \sim m_e/\log \eta, \quad H_Q \sim 10^{-16}/\log^2 \eta \,\,\text{eV}. \end{aligned}\end{eqnarray} As we will see in the rest of this section, comparing $m_0$ and $H_Q$ is vital in determining the temperature at which $\phi$ starts the late-time oscillation. Let us first discuss $\phi$'s movement in the stage $T \gg T_*$, during which the bare potential can be neglected in the high-temperature environment. In \App{appx:analyt_sol}, we solve the scalar evolution for this situation. Because $\phi-\phi_{\min}$ is a linear combination of $T^{\frac{1\pm\sqrt{1-\eta^2}}{2}}$, to describe the thermal convergence more quantitatively, we need the specific value of $\eta$. We recall that the potential minima are $\phi_{\min}=\pi f/2$ for the type-A model and $\phi_{\min}=0, \pm \pi f$ for the type-B model. Given the initial condition $\dot{\phi}_\text{rh} \simeq 0$, we approximately have \begin{equation} \label{eq:approx_delphi_eta} \phi - \phi_{\min} \propto \left\{ \begin{aligned} & T^{\frac{1-\sqrt{1-\eta^2}}{2}} & \quad (\eta \leq 1)\\ & T^{\frac{1}{2}} \cos\big[ \sqrt{\eta^2-1} \log\left(T/T_\text{rh}\right)/2 \big] & \quad (\eta > 1) \end{aligned} \right. , \end{equation} where $\eta = 1$ is the critical value determining how $\phi$ evolves towards the local minimum.
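The exponents in \Eq{eq:approx_delphi_eta} can be sketched as follows~(see \App{appx:analyt_sol} for details). In radiation domination $H = 1/(2t)$, and \Eq{eq:eta_def} with $\mathcal{S} \simeq 1$ gives $m_T = \eta H/2 = \eta/(4t)$, so the linearized equation of motion for $\delta\phi \coloneqq \phi - \phi_{\min}$ admits the power-law ansatz $\delta\phi \propto t^{\kappa}$: \begin{eqnarray}\begin{aligned} \ddot{\delta\phi} + 3H \dot{\delta\phi} + m_T^2\, \delta\phi = 0 \quad \Rightarrow \quad \kappa^2 + \frac{\kappa}{2} + \frac{\eta^2}{16} = 0 \quad \Rightarrow \quad \kappa = -\frac{1 \mp \sqrt{1-\eta^2}}{4}. \end{aligned}\end{eqnarray} Using $T \propto t^{-1/2}$, this translates into $\delta\phi \propto T^{\frac{1 \mp \sqrt{1-\eta^2}}{2}}$; for $\eta > 1$ the square root becomes imaginary, yielding the $T^{1/2}$-damped oscillatory behavior, while expanding the decaying root for $\eta \ll 1$ reproduces the exponent $\eta^2/4$ in \Eq{eq:approx_delphi_eta<<1}.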
If $\eta \leq 1$~(but not too small), $\phi$ gradually slides to $\phi_{\min}$, as shown in the green lines in the left and right panels of \Fig{fig:phi_evo}. Incidentally, in the limit $\eta \ll 1$, we come back to \Eq{eq:approx_delphi_eta<<1}, where $\phi$ is nearly frozen in the early universe, which is related to the blue lines in both panels of \Fig{fig:phi_evo}. Here, the $T^{\frac{1+\sqrt{1-\eta^2}}{2}}$ term is negligible because it describes the movement with non-zero $\dot{\phi}_\text{rh}$. If $\eta > 1$, $\phi$'s movement towards $\phi_{\min}$ is oscillatory because $T^{\frac{1 \pm i\sqrt{\eta^2-1}}{2}} = T^{1/2} e^{\pm i \sqrt{\eta^2-1} \log T/2}$. Obeying the power law $T^{1/2}$, $\phi$'s oscillation amplitude decreases as the temperature goes down. This can be explained more intuitively: When $T \gtrsim m_e$, the adiabatic condition of the WKB approximation is satisfied~($\dot{m}_T/m_T^2 \sim 1/\eta \lesssim 1$), so there is no particle creation or depletion for $\phi$, or, equivalently, its number density is conserved. For this reason, $\phi$'s oscillation amplitude obeys $\abs{\phi - \phi_{\min}} \simeq \sqrt{2 n_\phi/m_T} \propto T^{\frac{1}{2}}$. \Eq{eq:approx_delphi_eta} reveals an interesting phenomenon: As long as there is a hierarchy between $T_\text{rh}$ and $m_e$, which is natural in most inflation models, $\phi$ converges to the local minimum of the thermal potential. For example, in the $\eta\gtrsim 1$ case, given that $T_\text{rh} \sim 1\text{TeV}$, when the universe's temperature drops to $T\sim m_e$, the field deviation from the local minimum becomes $\abs{\phi-\phi_{\min}}/\phi_{\min} \sim 10^{-3}$. A higher $T_\text{rh}$ leads to an even smaller field displacement from the thermal minimum. Such determination of the scalar's misalignment through the thermal effect is named the thermal misalignment in several recent works~\cite{Brzeminski:2020uhm, Batell:2021ofv, Batell:2022qvr}.
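The quoted residual misalignment follows from a one-line estimate (a sketch with the benchmark values assumed above, using the $T^{1/2}$ damping between $T_\text{rh}$ and $m_e$):

```python
m_e = 5.11e-4    # electron mass in GeV
T_rh = 1.0e3     # assumed reheating temperature ~ 1 TeV, in GeV

# For eta >~ 1, |phi - phi_min| damps as T**(1/2) from T_rh down to T ~ m_e,
# so the fractional deviation at T ~ m_e is roughly (m_e / T_rh)**(1/2)
residual = (m_e / T_rh) ** 0.5   # ~ 7e-4, i.e., of order 1e-3
```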
Other mechanisms setting the scalar's nonzero initial displacement can be found in~\cite{Co:2018mho, Takahashi:2019pqf, Huang:2020etx}. As shown in \Eq{eq:misalignment_classify}, in our paper, we use the term thermal misalignment for the cases satisfying $\eta \gtrsim 1$ to distinguish them from the standard misalignment, in which the thermal effect does not play a role. We use such a definition because when this condition is satisfied, the thermal effect from the SM bath washes out $\phi$'s sensitivity to the initial condition and dynamically sets the field displacement. When $T \lesssim T_*$, the bare potential becomes important in $\phi$'s evolution. In the $m_0 \lesssim H_Q$ case, the ultralight scalar's evolution is simply the combination of the early-time damped oscillation in the wrong vacuum and the late-time damped oscillation around zero in the true vacuum, with \begin{eqnarray}\begin{aligned} T_\text{osc} \simeq T|_{3H=m_0}, \quad \abs{\phi}_\text{osc} \simeq \abs{\phi_{\min}}, \end{aligned}\end{eqnarray} where $\phi_{\min}= \pi f/2$ for the type-A model and $\phi_{\min} = \pm \pi f$ for the type-B model with $\phi$ initially in the wrong vacuum. We do not dwell on this case because this kind of thermal misalignment can be treated as the standard misalignment with a thermally determined initial condition. Now, we shift our focus to the $m_0 \gtrsim H_Q$ case in the rest of this section. Since the total potentials of the type-A and type-B models have different shapes depending on the exactness of the $\mathbb{Z}_2$ symmetry, and the scalar evolution depends on the numerical value of $\eta$, let us describe the scalar evolution case by case: \begin{itemize} \item {\bf Type-A, $\eta \gg 1$}. When $T \gg T_*$, $\phi$ does the damped oscillation, which converges to $\pi f/2$.
Afterward, when $T \sim T_*$, or equivalently, $m_T \sim m_0$, the potential minimum begins to shift from $\pi f/2$ towards $0$, obeying $\phi= f \arctan(m_T^2/m_0^2)$, as shown in \Eq{eq:TypeAB_VT_min}. When the universe's temperature is much higher than $m_e$, the adiabatic condition is satisfied because $\dot{m}_T/m_T^2 \sim 1/\eta \ll 1$. We show the movement of $\phi$ in the lower left panel of \Fig{fig:V_AB} and the red line in the left panel of \Fig{fig:phi_evo}. However, as the temperature drops below $m_e$, because $\dot{m}_T/m_T^2 \sim e^{m_e/T}/\eta \gg 1$, $\phi$ is no longer able to respond to the sudden variation of the potential minimum and begins the oscillation. Here, the oscillation temperature and the starting amplitude can be written as \begin{equation} \label{phiTe_arctan} T_\text{osc} \sim T_Q , \quad \abs{\phi}_\text{osc} \sim f \,(T_Q/T_*)^4, \end{equation} where $T_Q\sim m_e/\log \eta$ as defined in \Eq{eq:TQ_HQ}. Given $f \ll m_\text{pl}$, the hierarchy between $T_*$ and $T_Q$ strongly suppresses $\phi$'s late-time oscillation amplitude, leading to $\phi$'s minuscule relic abundance. Even though this phase does not appear in the following context of the dark photon dark matter freeze-in in Sec.~\ref{sec:dpdm_fi}, we still discuss it in this section for completeness. \item {\bf Type-A, $\eta \sim 1$}. In the early time, when $T \gg T|_{3H=m_0}$, $\phi$ slides to the thermal minimum $\pi f/2$. When $T \sim T|_{3H=m_0}$, the scalar begins the late-time damped oscillation obeying $\abs{\phi} \propto T^{3/2}$, with the starting temperature and amplitude \begin{eqnarray}\begin{aligned} T_\text{osc} \sim T|_{3H = m_0}, \quad \abs{\phi}_\text{osc} \simeq \pi f/2. \end{aligned}\end{eqnarray} In \Fig{fig:V_AB}, we do not list such a case because, even though categorized as the thermal misalignment, it can be decomposed into the standard misalignment~(upper left panel of \Fig{fig:V_AB}) plus a thermally determined initial amplitude.
In the left panel of \Fig{fig:phi_evo}, the green line describes such a situation: When $\widetilde{T}\gtrsim 1$, the green line gradually slides to $\pi f/2$. When $\widetilde{T}\sim 1$, the green line begins the damped oscillation with the amplitude scaled as $\abs{\phi} \propto \widetilde{T}^{3/2}$. \item {\bf Type-B, $\eta \gtrsim 1$}. Unlike the type-A model, there is no continuous shift of the potential minimum during the cosmological evolution. Therefore, we only need to focus on the moment when the local minimum flips. Here, we mainly focus on the case in which $\phi$'s initial condition satisfies $\pi f/2 \lesssim \phi_\text{rh} \lesssim 3\pi f/2$. In this case, because of the thermal effect when $T \gg T_*$, $\phi$ converges to the local minimum $\pi f$ inside the wrong vacuum. When $T \lesssim T_*$, the second derivative at the point $\phi = \pi f$ becomes nonpositive, i.e., $V''|_{\phi = \pi f} \lesssim 0$, and thereafter $\phi$ begins the oscillation around zero. From \Eq{eq:T_thermal}, we have \begin{eqnarray}\begin{aligned} \label{eq:Tosc_phiosc_TypeB_eta<<1} T_\text{osc} \simeq T_*, \quad \abs{\phi}_\text{osc} \simeq \pi f. \end{aligned}\end{eqnarray} According to \Eq{eq:T_thermal}, we recall that $T_* \sim T|_{3H=m_0}/\eta^{1/2} \sim (m_0 m_\text{pl}/\eta)^{1/2}$ when $m_0 \gtrsim \eta H_Q$, and $T_* \sim 0.1 m_e$ when $H_Q \lesssim m_0 \lesssim \eta H_Q$. For the $\eta \gg 1$ case, one can look at the red line in the right panel of \Fig{fig:phi_evo}. Here, $\phi$ oscillates around the thermal minimum $\pi f$ with the power law $\abs{\phi - \pi f} \propto T^{1/2}$ when $T \gg T_*$. Afterward, when $T \simeq T_*$, or equivalently, $m_T \simeq m_0$, $\phi$ begins the oscillation following $\abs{\phi} \propto T^{3/2}$. The plot shows the apparent postponement of the scalar oscillation for the red line compared with the other two lines.
Because the CP-even scalar $\phi$'s oscillation is postponed, the evolution of $\phi$ can be classified as the trapped misalignment, which has been investigated in axion models~\cite{Nakagawa:2020zjr, DiLuzio:2021gos}. In this case, $\phi$'s relic abundance is enhanced given the same $\abs{\phi}_\text{osc}$ or $f$. We can also think about such characteristics inversely: For $\phi$ to reach the same abundance, quantified by $\mathcal{F}$, one only needs a smaller $\abs{\phi}_\text{osc}$ or $f$. To be more quantitative, one can write the misalignment at the beginning of the oscillation as \begin{eqnarray}\begin{aligned} \label{eq:typeB_phiosc_rescale} \abs{\phi}_\text{osc} \simeq \abs{\phi}_{\text{osc}, \, \text{std}} \left(T_*/T|_{3H=m_0}\right)^{3/2}, \end{aligned}\end{eqnarray} where $\abs{\phi}_{\text{osc},\text{std}}$ denotes the starting amplitude for the standard misalignment, as shown in \Eq{eq:phi_osc_eta<<1}. From \Eq{eq:typeB_phiosc_rescale}, one can see that the necessary early misalignment $\abs{\phi}_\text{osc}$ is rescaled by a factor of $(T_*/T|_{3H=m_0})^{3/2}$, which shows that $\abs{\phi}_\text{osc}$ is much smaller than $\abs{\phi}_{\text{osc},\text{std}}$ for fixed $\mathcal{F}$. For the $\eta \sim 1$ case, one can refer to the green line in the right panel of \Fig{fig:phi_evo}. In the early stage, i.e., $T \gtrsim T|_{3H=m_0}$, $\phi$ slowly moves to $\pi f$, obeying $\abs{\phi - \pi f} \propto T^{\frac{1-\sqrt{1-\eta^2}}{2}}$. When $T \simeq T|_{3H=m_0}$, $\phi$ begins the damped oscillation with the power law $\abs{\phi} \propto T^{3/2}$. From the discussion above, we find that the $\eta \sim 1$ case has the thermal determination of the initial condition but does not have the postponement of the oscillation. Therefore, $\phi$ is in the phase of the thermal misalignment but not in the phase of the trapped misalignment. Finally, let us briefly discuss the case where $- \pi f/2\lesssim \phi_\text{rh} \lesssim \pi f/2$.
Here, $\phi$ oscillates obeying $\abs{\phi} \propto T^{1/2}$ when $T \gg T_*$, and then oscillates obeying $\abs{\phi} \propto T^{3/2}$ when $T\lesssim T_*$. Because the late-time amplitude is suppressed by a factor of $(T_\text{rh}/T_*)^{1/2}$, given a nonnegligible fraction of $\phi$ among the dark matter~(for example, $\mathcal{F} \sim 10^{-3}$), $f$ depends on $T_\text{rh}$ and may be larger than $m_\text{pl}$. Therefore, this paper focuses on the case where $\phi$ is initially localized inside the wrong vacuum. \end{itemize} \section{Dark Photon Dark Matter} \label{sec:dpdm_fi} In this section, we discuss the freeze-in of the $\text{keV}-\text{MeV}$ dark photon dark matter via varying kinetic mixing. As shown in \cite{Pospelov:2008jk, Redondo:2008ec, An:2014twa}, the dark photon freeze-in through the time-independent kinetic mixing~($\epsilon_\text{FI} \sim10^{-12}$) is entirely ruled out since it causes a strong excess in stellar energy loss, the CMB energy injection, and the galactic photon spectrum. Alternative dark photon dark matter production mechanisms include misalignment~\cite{Nelson:2011sf, Arias:2012az, Alonso-Alvarez:2019ixv, Nakayama:2019rhg}\footnote{The minimal vector field misalignment model~\cite{Nelson:2011sf} is not viable because the inflationary e-folds exponentially suppress the vector field displacement. To avoid such washout, a non-minimal coupling to gravity or a modified kinetic term is needed~\cite{Arias:2012az, Alonso-Alvarez:2019ixv, Nakayama:2019rhg}.}, gravitational production~\cite{Graham:2015rva, Ema:2019yrd, Ahmed:2020fhc, Kolb:2020fwh, Wang:2022ojc, Redi:2022zkt}, the radiation of the cosmic string network~\cite{Long:2019lwl, Kitajima:2022lre}, and axion tachyonic instability~\cite{Agrawal:2018vin, Dror:2018pdh, Co:2018lka, Bastero-Gil:2018uel, Co:2021rhi}.
Realizing that none of the production mechanisms mentioned above rely on kinetic mixing, which is indispensable for dark photon detection, we provide a minimal extension of the dark photon dark matter freeze-in where the ultralight scalar's evolution dynamically sets the experimental benchmark of kinetic mixing. In addition, we want to stress the enormous significance of detecting the ultralight scalar in the whole mass range, i.e., $10^{-33}\text{eV} \lesssim m_0 \ll \text{eV}$: Even though it cannot be the main component of dark matter in the mass range $10^{-33}\text{eV} \lesssim m_0 \lesssim 10^{-17}\text{eV}$, which is excluded by the fuzzy dark matter constraints~\cite{Irsic:2017yje, Kobayashi:2017jcf, Armengaud:2017nkf, Zhang:2017chj, Nori:2018pka, Rogers:2020ltq, Dalal:2022rmp} and the superradiance constraints from the supermassive black holes~\cite{Arvanitaki:2014wva, Stott:2018opm, Davoudiasl:2019nlo, Unal:2020jiy}, such a tiny amount of the ultralight scalar relic can still open the gate for the production of the main dark matter component. At the beginning of this section, let us give a brief introduction to the setup: In our model, the dark photon dark matter is produced through the operator $\mathcal{L} \supset \phi F_{\mu \nu} F'^{\mu \nu}/2 \Lambda_\text{KM}$, whose effective kinetic mixing is supported by $\phi$'s VEV in the early universe. Thereafter, when $T \simeq T_\text{osc}$, $\phi$ begins the oscillation with its amplitude scaled as $\abs{\phi} \propto \left(T/T_\text{osc}\right)^{3/2}$. Because most of the existing constraints are imposed when $T \ll T_\text{osc}$, the dark photon's parameter space can be vastly extended, and the ratio $T_0/T_\text{osc}$ determines today's local kinetic mixing for the future dark photon dark matter detections.
In addition, since the $\phi F^2$ or $\phi^2 F^2$ operator is induced as a byproduct of the varying kinetic mixing from the UV theory discussed in Sec.~\ref{sec:UV_model}, testing the fine-structure constant variation and the equivalence principle violation through ground-based experiments~(torsion balance, clock comparison, resonant-mass detector, AION, MAGIS)~\cite{Smith:1999cr, Schlamminger:2007ht, VanTilburg:2015oza, Hees:2016gop, Hees:2018fpg, Barontini:2021mvu, collaboration2021frequency, Banerjee:2022sqg, Baggio:2005xp, Arvanitaki:2015iga, Badurina:2019hst, MAGIS-100:2021etm}, satellite-based experiments~(MICROSCOPE, Space-Q, AEDGE)~\cite{Berge:2017ovy, Tsai:2021lly, AEDGE:2019nxb, Brzeminski:2022sde}, astrophysics~(PTA, SKA)~\cite{Kaplan:2022lmz, Hamaide:2022rwi}, and cosmology~(CMB, BBN, Lyman-$\alpha$ forest)~\cite{Stadnik:2015kia, Hart:2019dxi, Sibiryakov:2020eir, Bouley:2022eer, Hamaide:2022rwi} opens another brand-new window for dark matter experimentalists. \subsection{Dark Photon Production} \begin{figure}[t] \centering \includegraphics[width=0.63\columnwidth]{DPDM_mAp_eps0_const.pdf} \caption{The parameter space of the dark photon dark matter~($\Omega_{A'} h^2 \simeq 0.12$) with the constant kinetic mixing. Here, $m_{A'}$ is the dark photon mass, and $\epsilon$ is the constant kinetic mixing. The yellow line labeled with ``$\epsilon_\text{FI}$'' is the dark photon freeze-in line, i.e., the kinetic mixing necessary for the dark photon to reach today's dark matter relic abundance. In the mass range $m_{A'}<2m_e$, the dominant dark photon production channel is the photon-to-dark-photon resonant conversion, i.e., $\gamma \rightarrow A'$. When $m_{A'} \geq 2m_e$, $A' \rightarrow e^- e^+$ dominates, which explains the slight decrease of the freeze-in line.
In the plot, the purple region represents the constraints from the stellar energy loss of red giants, horizontal branches, and the sun~\cite{Redondo:2008aa, Pospelov:2008jk, Redondo:2008ec, Redondo:2013lna, An:2013yfc, An:2014twa, Hardy:2016kme}. The shaded red region labeled by ``DD'' denotes the constraints from the dark matter direct detection experiments~\cite{An:2014twa, Bloch:2016sjj, XENON:2018voc, XMASS:2018pvs, XENON:2019gfn, XENON:2020rca, XENONCollaboration:2022kmb}. Up to now, the most stringent constraint within the $\text{keV}-\text{MeV}$ range comes from the XENONnT experiment~\cite{XENONCollaboration:2022kmb}. The dashed red line is the projection of the next-generation LUX-ZEPLIN~(LZ) experiment~\cite{LZ:2021xov}. The blue region denotes the constraints from the dark photon dark matter decay, whose dominant channel is $A'\rightarrow 3 \gamma$ when $m_{A'}<2 m_e$, or $A' \rightarrow e^- e^+$ when $m_{A'} \geq 2 m_e$. Since the dark photon decay affects the CMB and the late-time photon spectrum, constraints can be imposed on the dark photon parameter space~\cite{Pospelov:2008jk, Redondo:2008ec, An:2014twa, McDermott:2017qcg}. From \cite{An:2014twa, Slatyer:2016qyl, Liu:2020wqz}, we know that the constraints from the CMB and the late-time processes are comparable for the constant kinetic mixing model. Therefore, we label the constraint from the dark photon decay with ``CMB+Late''. In the plot, one can find that the freeze-in line is covered by the imposed constraints, which means that the constant kinetic mixing model is completely ruled out. Such exclusion of the dark photon dark matter freeze-in is discussed in detail in \cite{Pospelov:2008jk, Redondo:2008ec, An:2014twa}. } \label{fig:DP_bound} \end{figure} Let us begin with the dark photon freeze-in through the varying kinetic mixing. To simplify the discussion, we focus on the case where the dark photon is produced before the ultralight scalar's oscillation.
When $T \gtrsim T_\text{osc}$, $\phi$ is frozen with the nonzero displacement, such that the dark photon is produced through the $\gamma \rightarrow A'$ and $e^- e^+ \rightarrow A'$ channels. Our numerical calculation of the kinetic mixing required to reach today's dark matter relic abundance is shown as the yellow line labeled ``$\epsilon_\text{FI}$'' in \Fig{fig:DP_bound}. In the following text, let us briefly introduce how to estimate the dark photon abundance from the freeze-in process semianalytically. In the case where $m_{A'} < 2m_e$, we solve the Boltzmann equation\footnote{Even though essential in the indirect detection discussed later, the decay channels such as $A' \rightarrow e^- e^+$ and $A' \rightarrow 3 \gamma$ barely affect $A'$'s abundance. This is because, in the experimentally allowed region, the dark photon decay rate is much smaller than the universe's expansion rate, not to mention the suppressed late-time kinetic mixing compared with the mixing in the early universe.} \begin{eqnarray}\begin{aligned} \label{eq:Boltz_AToAp} \dot{n}_{A'} + 3H n_{A'} \simeq n_\gamma \langle \Gamma_{\gamma \rightarrow A'} \rangle, ~~ \text{where $\langle \Gamma_{\gamma \rightarrow A'} \rangle \sim \frac{\epsilon^2 m_{A'}^4}{T} \delta(m_\gamma^2 - m_{A'}^2)$}. \end{aligned}\end{eqnarray} In \Eq{eq:Boltz_AToAp}, $n_{\gamma}$ and $n_{A'}$ are the number densities of the photon and dark photon, respectively. $\langle \Gamma_{\gamma \rightarrow A'} \rangle$ is the thermal average of the photon-to-dark-photon transition rate, from which we know that the resonant conversion, i.e., $\gamma \rightarrow A'$, happens when $m_\gamma \simeq m_{A'}$. In the mass range $0.1 \text{MeV} \lesssim m_{A'}\lesssim 1 \text{MeV}$, when $\gamma \rightarrow A'$ happens, $e^\pm$ are relativistic. Therefore, the plasmon mass can be approximately written as $m_\gamma^2 \simeq 2 \pi \alpha_\text{em} T^2/3$.
Combining $m_{\gamma}^2$ and \Eq{eq:Boltz_AToAp}, we have \begin{eqnarray}\begin{aligned} \label{eq:T_Omega_m_Ap<2me} T_{\gamma \rightarrow A'} \sim 8 m_{A'}, \quad \, \Omega_{A', \, \gamma \rightarrow A'} \sim \epsilon^2 \alpha_\text{em}^{3/2} \,\frac{m_\text{pl}}{T_\text{eq}}, \end{aligned}\end{eqnarray} where $T_{\gamma \rightarrow A'}$ is the resonant temperature of $\gamma \rightarrow A'$, $T_\text{eq} \sim \text{eV}$ is the temperature of the matter-radiation equality, and $\Omega_{A', \, \gamma \rightarrow A'}$ is the related dark photon relic abundance. In \Eq{eq:T_Omega_m_Ap<2me}, $\Omega_{A'}$'s dependence on $m_{A'}$ approximately cancels out. Therefore, the freeze-in line in the region where $m_{A'}<2 m_e$ is approximately a constant. Since $\Omega_{A'}h^2\simeq 0.12$, from \Eq{eq:T_Omega_m_Ap<2me} we have \begin{eqnarray}\begin{aligned} \label{eq:eps_FI} \epsilon_\text{FI} \sim 10^{-12}, \end{aligned}\end{eqnarray} which explains the behavior of the yellow line in \Fig{fig:DP_bound} in the mass range $ 0.1 \text{MeV} \lesssim m_{A'} \lesssim 1 \text{MeV}$. In the mass range $m_{A'} \lesssim 0.1 \text{MeV}$, the resonant conversion happens when $e^{\pm}$ are non-relativistic. To be more quantitative, $T_\text{res} \sim 0.1 m_e$, which is insensitive to $m_{A'}$. In this case, we have $\Omega_{A'} \propto \epsilon^2 m_{A'}^3$, which explains the slope of the freeze-in line in \Fig{fig:DP_bound} in this mass range. In the discussion above, we focus on the transverse dark photon because the produced longitudinal dark photon is subdominant, which we have checked numerically and which is consistent with \cite{Redondo:2013lna}. In the case where $m_{A'} \gtrsim 2 m_e$, the inverse decay channel, i.e., $e^- e^+ \rightarrow A'$, opens up and dominates over $\gamma \rightarrow A'$.
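As a rough numerical cross-check of \Eq{eq:eps_FI} (an order-of-magnitude sketch; the reduced Planck mass convention and the benchmark numbers are our assumptions, not taken from the full numerical calculation):

```python
alpha_em = 1 / 137.0
m_pl = 2.4e27     # reduced Planck mass in eV (assumed convention)
T_eq = 1.0        # matter-radiation equality temperature, ~ eV
Omega_dm = 0.26   # Omega h^2 ~ 0.12 with h ~ 0.68

# Invert Omega_{A'} ~ eps^2 * alpha_em^(3/2) * m_pl / T_eq for the mixing
eps_fi = (Omega_dm * T_eq / (alpha_em ** 1.5 * m_pl)) ** 0.5
# eps_fi ~ 4e-13, consistent with eps_FI ~ 1e-12 at the order-of-magnitude level
```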
Now the Boltzmann equation can be written as \begin{eqnarray}\begin{aligned} \label{eq:Boltz_EEToAp} \dot{n}_{A'} + 3 H n_{A'} \simeq n_{e^-} n_{e^+} \langle \sigma_{e^- e^+ \rightarrow A'} \rangle, \quad \, \text{where $n_{e^-} n_{e^+} \langle \sigma_{e^- e^+ \rightarrow A'} \rangle \sim \epsilon^2 \alpha_\text{em} m_{A'}^{5/2} T^{3/2} e^{-m_{A'}/T}$}. \end{aligned}\end{eqnarray} Inside the collision term of \Eq{eq:Boltz_EEToAp}, there is an exponential suppression factor $e^{-m_{A'}/T}$ cutting off the dark photon production when the universe cools down. From \Eq{eq:Boltz_EEToAp}, we estimate the cut-off temperature of $e^- e^+ \rightarrow A'$, near which most of the dark photons are produced, and the related dark photon relic abundance as \begin{eqnarray}\begin{aligned} \label{eq:T_Omega_m_Ap>2me} T_{e^- e^+ \rightarrow A'} \sim m_{A'}, \quad \, \quad \Omega_{A', \,e^- e^+ \rightarrow A'} \sim \epsilon^2 \alpha_\text{em} \frac{m_\text{pl}}{T_\text{eq}}, \end{aligned}\end{eqnarray} respectively. In \Eq{eq:T_Omega_m_Ap>2me}, one can find that $\Omega_{A', \,e^- e^+ \rightarrow A'}$ is also almost independent of the dark photon mass, which is similar to the result in \Eq{eq:T_Omega_m_Ap<2me} but differs by an $\alpha_\text{em}^{1/2}$ factor. For this reason, in the mass range $m_{A'} \gtrsim 2m_e$, the kinetic mixing needed to reach today's relic abundance is slightly smaller than the one in the range $m_{A'} < 2m_e$ by an $\mathcal{O}(1)$ factor, which explains the slight lowering of the freeze-in lines in \Fig{fig:DP_bound} and \Fig{fig:DP_bound_vary}. Unlike the model with constant kinetic mixing, our model has extra dark photon production channels where $\phi$ enters as a particle rather than as a VEV, such as $\gamma \rightarrow A' \phi$ and $e^- e^+ \rightarrow A' \phi$. Nevertheless, one can see from the discussion below that these channels are subdominant.
By estimating the ratio of the $A'$ abundance produced through these channels over the $A'$ abundance produced through the varying kinetic mixing, we have \begin{eqnarray}\begin{aligned} \label{eq:NonVEV_vs_VEV} \frac{\Omega_{\gamma \rightarrow A' \phi}}{\Omega_{\gamma \rightarrow A'}} \sim \frac{m_{A'} T_\text{rh}}{\abs{\phi}_\text{osc}^2} \ll 1, \end{aligned}\end{eqnarray} where $\Omega_{\gamma \rightarrow A'}$ represents the dark photon abundance from the channels in which $\phi$ serves as the VEV, and $\Omega_{\gamma \rightarrow A' \phi}$ is the dark photon abundance from the channels where $\phi$ enters as a particle. From \Eq{eq:NonVEV_vs_VEV}, one can find that the majority of the dark photons come from the channels with $\phi$ as the VEV, because the channels with $\phi$ as a particle are highly suppressed by the large $\Lambda_\text{KM}$, while the channels with $\phi$ as the VEV are compensated by the large $\abs{\phi}_\text{osc}$. \subsection{Signatures and Constraints} In our model, the experiments can be categorized into two classes: 1.~The detections of the dark photon dark matter. These rely on the portal between the visible and dark sectors, i.e., the varying kinetic mixing, through which the energy and momentum flow. Today's local kinetic mixing is set by the scalar mass $m_0$. 2.~The detections of the ultralight scalar $\phi$, i.e., the handle controlling the portal. Such detections are based upon the scalar-photon couplings, which vary the fine-structure constant or violate the equivalence principle. The targets for the experiments detecting the ultralight CP-even scalar are set by the dark photon dark matter freeze-in.
\subsubsection{Detection of the Dark Photon Dark Matter} \begin{figure}[t] \centering \includegraphics[width=0.49\columnwidth]{DPDM_mAp_eps0_leq1em25eV.pdf} \hfill \includegraphics[width=0.49\columnwidth]{DPDM_mAp_eps0_geq1em25eV.pdf} \caption{The parameter space of the dark photon dark matter~($\Omega_{A'} h^2 \simeq 0.12$) produced from the freeze-in mechanism through the varying kinetic mixing. Here, $m_{A'}$ is the dark photon mass, and $\epsilon_0$ is today's local kinetic mixing near the earth. One should note that the CMB~(light blue) and BBN~(green) bounds are imposed on the early universe with larger kinetic mixing. Based on the evolution of $\phi$, we recast these constraints to the $m_{A'}-\epsilon_0$ plane. {\bf Left}: $m_0 \ll 10^{-25}\text{eV}$. In the plot, we choose $m_0 \simeq 10^{-30}, 10^{-28}, 10^{-26}\,\text{eV}$ as the benchmark values. Here, the strongest constraints come from the stellar energy loss~(purple), $A'$ decay during the CMB~(light blue) and the late time~(dark blue), and the direct detection experiments~(red). The dashed red line denotes the projection of LUX-ZEPLIN~(LZ). The yellow line labeled by ``$\epsilon_\text{FI}$'' is the kinetic mixing required to reach the dark matter relic abundance during the freeze-in. If $m_0 \lesssim 10^{-28} \text{eV}$, one has $T_\text{osc} \lesssim T_\text{CMB}$, so $\epsilon_\text{CMB} \simeq \epsilon_\text{FI}$. For this reason, the dark photon heavier than $0.1 \text{MeV}$ is excluded, while the lighter one is not. {\bf Right}: $m_0 \gtrsim 10^{-25}\text{eV}$. Here we choose $m_0 \simeq 10^{-25}, 10^{-21}, 10^{-17}, 10^{-13}\,\text{eV}$ as the benchmark values. The light gray region denotes the parameter space where our calculation breaks down because the oscillation begins before the freeze-in process, i.e., $T_\text{osc} \gtrsim T_\text{FI}$.
Here, the most relevant constraints come from the $A'$ decay during the CMB~(light blue) and the late time~(dark blue), where $A' \rightarrow 3 \gamma$ dominates when $m_{A'} < 2m_e$, and $A' \rightarrow e^- e^+$ dominates when $m_{A'} \gtrsim 2 m_e$. Since the kinetic mixing is large during the BBN, the dark photon decay is also constrained by the BBN~(green). One should notice that the BBN bound is cut off at $m_{A'} \simeq 5\,\text{MeV}$, below which the $A'$ decay cannot disintegrate the relevant light elements~\cite{Forestell:2018txr, Fong:2022cmq}. Since the oscillation temperatures for the type-A and the type-B models have some slight quantitative differences in the mass range $m_{0} \gtrsim 10^{-18}\text{eV}$, here we mainly use the type-A model as an example. } \label{fig:DP_bound_vary} \end{figure} Let us first investigate the phenomenology of the dark photon dark matter built on the nonzero kinetic mixing. Before starting the discussion, one needs to recall that in our model, the kinetic mixing varies during the universe's evolution. Hence, the experimental detectability based on some specific process depends on the universe's epoch when this process happens. To understand this, we start by reviewing the ultralight scalar's evolution. As we know, when $T\gtrsim T_\text{osc}$, $\phi$ is at rest with the nonzero field displacement. Afterward, when $T \lesssim T_\text{osc}$, $\phi$ begins the damped oscillation scaled as $\abs{\phi} \propto T^{3/2}$. Given that the dark photon is the main component of dark matter, we have $\epsilon_\text{FI} \sim 10^{-12}$, based on which today's local kinetic mixing, denoted as $\epsilon_0$, is set by $T_\text{osc}$.
From Sec.~\ref{sec:cos_his}, we write $\epsilon_0$ as \begin{eqnarray}\begin{aligned} \label{eq:eps0_today+local} \epsilon_0 \sim \epsilon_{\text{FI}} \left(\frac{T_0}{T_\text{osc}}\right)^{3/2} \times \left\{ \begin{aligned} & 1 ~ & (m_0 \ll 10^{-25}\text{eV})\\ & \mathcal{E} & (m_0 \gtrsim 10^{-25}\text{eV}) \end{aligned}, \right. \end{aligned}\end{eqnarray} where the universe's temperature today is $T_0 \sim 10^{-3}\text{eV}$, the enhancement factor from the structure formation is $\mathcal{E} \sim 600$, as shown in \Eq{eq:Enhance_Factor}, and the kinetic mixing during the freeze-in is $\epsilon_\text{FI}\sim 10^{-12}$, as shown in \Eq{eq:eps_FI}. Based upon \Eq{eq:eps0_today+local}, we divide the discussion into two parts, as shown in \Fig{fig:DP_bound_vary}: the left panel denotes the case $m_0 \ll 10^{-25}\text{eV}$, where $\rho_{\phi,\,\text{local}} \simeq \mathcal{F} \rho_{\text{average}}$, and the right panel the case $m_0 \gtrsim 10^{-25}\text{eV}$, where $\rho_{\phi,\,\text{local}} \simeq \mathcal{F} \rho_{\text{local}}$. Let us begin with the $m_0 \ll 10^{-25}\text{eV}$ case, with the dark photon parameter space shown in the left panel of \Fig{fig:DP_bound_vary}. One can see that the most relevant constraints come from the dark matter direct detection~(red), the stellar energy loss~(purple), and the dark photon decay during the CMB~(light blue) and the late time~(blue). In the plot, the yellow line represents the kinetic mixing required during the dark photon freeze-in to reach the relic abundance $\Omega_{A'} h^2 \simeq 0.12$, from which one can easily find that the constant kinetic mixing scenario is thoroughly excluded. In contrast, the varying kinetic mixing model opens the parameter space and provides the benchmark values determined by $m_0$. Here, we choose $m_0 \simeq 10^{-30}, 10^{-28}, 10^{-26}\,\text{eV}$ to plot the freeze-in lines on the $m_{A'}-\epsilon_0$ plane.
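\Eq{eq:eps0_today+local} can be evaluated for the benchmark masses with a short sketch (illustrative only; the numerical conventions, e.g., the reduced Planck mass and $T_0 \sim 10^{-3}\,\text{eV}$, are rounded assumptions):

```python
def eps0(m0, eps_fi=1e-12, T0=1e-3, m_pl=2.4e27, enhance=600.0):
    """Today's local kinetic mixing; all masses and temperatures in eV.
    Assumes T_osc ~ (m0 * m_pl)**(1/2), i.e., the oscillation is driven
    by the bare potential (type-A-like)."""
    T_osc = (m0 * m_pl) ** 0.5
    eps = eps_fi * (T0 / T_osc) ** 1.5
    if m0 >= 1e-25:   # local density enhanced by structure formation
        eps *= enhance
    return eps

# Benchmarks of the left panel: a larger m0 means an earlier oscillation
# and hence a smaller eps0
bench = {m0: eps0(m0) for m0 in (1e-30, 1e-28, 1e-26)}
```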
Since $T_\text{osc} \sim \left( m_0 m_\text{pl} \right)^{1/2}$, a larger $m_0$ leads to an earlier start of the oscillation, such that $\epsilon_0$ is smaller. Let us briefly introduce the constraints appearing in the left panel of \Fig{fig:DP_bound_vary} in the following paragraphs. In the mass range $m_{A'} \lesssim 0.1 \text{MeV}$, the most relevant constraints come from the dark matter direct detection~(red) and the stellar energy loss~(purple). The direct detection experiments look for scintillation photons from the electron recoil inside the noble liquids or semiconductors due to the absorption of the dark photon~\cite{An:2014twa, Bloch:2016sjj, XENON:2018voc, XMASS:2018pvs, XENON:2019gfn, XENON:2020rca, XENONCollaboration:2022kmb}. Up to now, the most stringent constraint within the $\text{keV}$ to $\text{MeV}$ mass range comes from the XENONnT experiment~\cite{XENONCollaboration:2022kmb}. The LUX-ZEPLIN~(LZ) experiment, represented by the dashed red line in the plot, can probe a kinetic mixing smaller by half an order to one order of magnitude~\cite{LZ:2021xov}. Furthermore, we can expect that future dark matter direct detection experiments, such as DarkSide-20k~\cite{DarkSide-20k:2017zyg} and DARWIN~\cite{DARWIN:2016hyl}, with larger detectors and lower backgrounds, will have better detection capability. In this mass range, our model is also constrained by the stellar energy loss of the sun, horizontal branch, and red giant via the channel $\gamma \rightarrow A'$, which modifies the stellar evolution~\cite{Redondo:2008aa, Pospelov:2008jk, Redondo:2008ec, Redondo:2013lna, An:2013yfc, An:2014twa, Hardy:2016kme}. The direct detection of the non-relativistic dark photons produced by the sun, known as the solar basin, can also impose comparable constraints~\cite{Lasenby:2020goo}. In the mass range $m_{A'} \gtrsim 0.1 \text{MeV}$, our model is constrained by the decay of the dark photon dark matter.
When $m_{A'} < 2 m_e$, since the two-photon channel is forbidden by the Landau-Yang theorem~\cite{Landau:1948kw, Yang:1950rg}, the dark photon decays through $A' \rightarrow 3 \gamma$ induced by the electron loop. When $m_{A'} \gtrsim 2 m_e$, the dominant channel is $A' \rightarrow e^- e^+$. These two channels are constrained by the observations of the CMB and the late-time photon background, which give comparable constraints in the constant kinetic mixing scenario. From \cite{Redondo:2008ec, An:2014twa, Essig:2013goa, Slatyer:2016qyl}, we know that $\Gamma_{A' \rightarrow 3 \gamma} \lesssim 10^{-9} H_0$ and $\Gamma_{A' \rightarrow e^- e^+} \lesssim 10^{-7} H_0$ for $m_{A'} \sim \text{MeV}$. However, in our model, one should realize that these two physical processes happen in different stages of the universe. To be more specific, the stage of the CMB is at $T_\text{CMB} \sim \text{eV}$, while the galactic photon is emitted in today's universe. From the discussion of $\phi$'s evolution in Sec.~\ref{sec:cos_his}, we know that the kinetic mixing during the CMB is \begin{eqnarray}\begin{aligned} \label{eq:eps_CMB_m0_leq10em25} \epsilon_\text{CMB} \sim \epsilon_0 \times \left[\frac{\min(T_\text{CMB}, T_\text{osc})}{T_0} \right]^{3/2}, \end{aligned}\end{eqnarray} where $T_\text{osc} \sim (m_0 m_\text{pl})^{1/2}$. Based on \Eq{eq:eps_CMB_m0_leq10em25}, we can recast the constraint on the kinetic mixing from the dark photon dark matter decay during the CMB stage to the $m_{A'}-\epsilon_0$ diagram. When $m_0 \lesssim 10^{-28}\text{eV}$, one knows $T_\text{osc} \lesssim T_\text{CMB}$, so $\epsilon_\text{CMB} \simeq \epsilon_\text{FI}$. Given this, one can figure out that the dark photon mass region $m_{A'} \gtrsim 0.1\text{MeV}$ is excluded by the CMB, while the smaller dark photon mass region is not. This explains the CMB bound's cutoff at $m_{A'} \simeq 0.1 \text{MeV}$ in the left panel of \Fig{fig:DP_bound_vary}.
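The recast in \Eq{eq:eps_CMB_m0_leq10em25} can be sketched numerically as follows (illustrative only; conventions and rounded values assumed as above):

```python
def eps_cmb(eps_today, m0, T0=1e-3, T_cmb=1.0, m_pl=2.4e27):
    """Kinetic mixing at the CMB epoch; masses and temperatures in eV.
    The T**(3/2) rescaling stops at T_osc, above which phi is frozen."""
    T_osc = (m0 * m_pl) ** 0.5
    return eps_today * (min(T_cmb, T_osc) / T0) ** 1.5

# m0 ~ 1e-30 eV: T_osc ~ 0.05 eV < T_CMB, so the rescaling saturates at T_osc
# (i.e., eps_CMB ~ eps_FI); m0 ~ 1e-26 eV: T_osc > T_CMB, full rescaling applies
weak = eps_cmb(1.0, 1e-30)
strong = eps_cmb(1.0, 1e-26)
```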
When $m_0 \gtrsim 10^{-28}\text{eV}$, one has $T_\text{osc} \gtrsim T_\text{CMB}$, so the kinetic mixing during the CMB epoch is $\epsilon_\text{CMB} \sim \epsilon_0 \,(T_\text{CMB}/T_0)^{3/2}$, which explains the strengthening of the CMB bound in the range $m_{A'} \gtrsim 0.1 \text{MeV}$ compared with the constraints from the late-time photons. Now let us discuss the $m_0 \gtrsim 10^{-25}\text{eV}$ case appearing in the right panel of \Fig{fig:DP_bound_vary}. Here, the most relevant constraints come from the decay of the dark photon dark matter during the CMB~(light blue) and BBN~(green) epochs. To recast the constraints from the early universe onto the $m_{A'}-\epsilon_0$ plane, we use the formulas \begin{eqnarray}\begin{aligned} \label{eq:eps_CMB_m0_geq10em25} \epsilon_{\text{CMB}} \sim \epsilon_0 \times \left[\frac{T_{\text{CMB}}}{T_0} \right]^{3/2} \frac{1}{\mathcal{E}}, \quad \quad \epsilon_{\text{BBN}} \sim \epsilon_0 \times \left[\frac{\min(T_{\text{BBN}}, T_\text{osc})}{T_0} \right]^{3/2} \frac{1}{\mathcal{E}}. \end{aligned}\end{eqnarray} From the discussion in Subsec.~\ref{subsec:eta>=1}, we know that the thermal effect may change $T_\text{osc}$ for the type-B model when the scalar is heavier than $10^{-16}/\log^2 \eta \, \text{eV}$. Although the type-B model does not lead to qualitative differences, to simplify the discussion we take the type-A model as an example, for which the oscillation temperature still satisfies $T_\text{osc} \sim (m_0 m_\text{pl})^{1/2}$ in the experimentally allowed region. Similar to the discussion above, we recast the CMB bound onto the $m_{A'}-\epsilon_0$ plot. During BBN, $\epsilon_\text{BBN}$ ranges from $10^{-12}$ to $10^{-14}$, depending on the value of $m_0$. In this range, the dark photon decay through $A' \rightarrow e^- e^+$ leads to the disintegration of the light elements, based on which the BBN constraint is imposed~\cite{Forestell:2018txr, Fong:2022cmq}.
However, as long as $m_{A'} \lesssim 5 \text{MeV}$, the dark photon decay cannot change the light element abundances, since the injected energy is smaller than the deuterium binding energy, which is the smallest among all the relevant light elements~(except ${}^{7}\text{Be}$, whose abundance does not affect the main BBN observables). In the right panel of \Fig{fig:DP_bound_vary}, there is a gray region in the lower left corner, which represents the parameter space where the calculation of the freeze-in lines breaks down because freeze-in happens after the start of the oscillation. At the end of this paragraph, we point out one nontrivial feature: through the varying kinetic mixing, dark photons heavier than $1\text{MeV}$ can be produced while evading the constraints from the decay $A' \rightarrow e^- e^+$, because the kinetic mixing portal closes quickly right after the dark photon production. Before ending this subsection, we want to point out that since the dark photon is produced through $\gamma \rightarrow A'$ with the initial momentum $p_{A'} \sim T$, it washes out the dark matter substructure in the late universe, which is a characteristic of warm dark matter~\cite{Irsic:2017ixq, Dvorkin:2020xga, DEramo:2020gpr, Zelko:2022tgf}. To avoid this, we need $m_{A'}\gtrsim \text{few} \times 10\,\text{keV}$. Since imposing such constraints requires detailed analyses based on the dark matter phase space distribution, which would take us away from the main point of this paper, we leave this to future work. \begin{figure}[t] \centering \includegraphics[width=0.499\columnwidth]{m0_de1_eps2em12.pdf} \hfill \includegraphics[width=0.492\columnwidth]{m0_de2_eps2em12.pdf} \caption{The parameter space of the scalar-photon couplings. The nonrelativistic scalar relic's fraction of the dark matter is $\mathcal{F} \simeq 10^{-3}$.
The kinetic mixing before the oscillation is $\epsilon_\text{FI} \simeq 2 \times 10^{-12}$, which is the benchmark value for the freeze-in production of the dark photon dark matter satisfying $\Omega_{A'} h^2 \simeq 0.12$. {\bf Left}: The type-A model~($y'=0$). We choose $e' = 1, 10^{-3}, 10^{-6}$ to demonstrate the experimental targets for the dark photon freeze-in through the varying kinetic mixing, which appear as the black lines. The thick dashed magenta line is the $\eta \simeq 1$ contour, above which the scalar's early displacement is set to $\pi f/2$ by the thermal misalignment. The gray region denotes the constraints from the equivalence principle tests~(gray: EP tests) from the MICROSCOPE satellite~\cite{Berge:2017ovy} and the torsion balance experiments~\cite{Smith:1999cr, Schlamminger:2007ht}. The blue region represents the current constraints from the clock comparisons~\cite{Arvanitaki:2014faa, VanTilburg:2015oza, Hees:2016gop, Kalaydzhyan:2017jtv, collaboration2021frequency}. For the type-A model, most of the parameter space~($e' \lesssim 1$) lies within the projections of the proposed experiments, such as Lyman-$\alpha$ UVES~(dashed purple)~\cite{Hamaide:2022rwi}, the clock comparisons~(dashed pink: optical-optical, dashed red: optical-nuclear)~\cite{Arvanitaki:2014faa}, the cold-atom interferometers~(dashed dark green: AEDGE, dashed green: AION-km, dashed light green: MAGIS-km)~\cite{Arvanitaki:2016fyj, AEDGE:2019nxb, Badurina:2019hst, MAGIS-100:2021etm}, and the resonant-mass oscillator~(dashed orange: DUAL)~\cite{Arvanitaki:2015iga}. {\bf Right}: The type-B model~($y' = -y$) beginning with the wrong vacuum~($\pi f/2 \lesssim \phi_\text{rh} \lesssim 3 \pi f/2$). We choose $e' = 10^{-6}, 10^{-8}, 10^{-10}$ as the benchmark values to plot the black lines. The thick magenta line is the $\eta \simeq 1$ contour. In the region beyond this magenta line, the scalar acquires the field displacement $\pi f$ via the thermal misalignment.
The thick dashed cyan line, i.e., the contour of $m_T|_{3H=m_0} \simeq m_0$, envelops the region where the thermal effect postpones the onset of the oscillation, i.e., $T_\text{osc} <T|_{3H=m_0}$. This is the region where the ultralight scalar undergoes the trapped misalignment, previously investigated in \cite{Nakagawa:2020zjr, DiLuzio:2021gos} for axion models. At fixed scalar abundance, since the trapped misalignment requires a smaller oscillation amplitude, as shown in \Eq{eq:typeB_phiosc_rescale}, the power law of the black lines changes within the upper right corner enveloped by the thick dashed cyan line. In the plot, one can find that the constraints from cosmology~(light orange: CMB, purple: BBN) are comparable with the current constraints from the clock comparison experiments~(blue)~\cite{Arvanitaki:2014faa, VanTilburg:2015oza, Hees:2016gop, Kalaydzhyan:2017jtv, Hees:2018fpg, collaboration2021frequency} and the equivalence principle tests~(gray: EP tests)~\cite{Smith:1999cr, Schlamminger:2007ht, Berge:2017ovy}. For the type-B model, one needs $e' \lesssim 10^{-6}$ for the dark photon dark matter freeze-in to be detected by future experiments. } \label{fig:scalar_plt} \end{figure} \subsubsection{Detection of the Ultralight Scalar} From the discussion in Sec.~\ref{sec:UV_model}, we know that the scalar-photon coupling appears along with the varying kinetic mixing from the UV physics, so our model can be tested from an entirely different perspective, i.e., the phenomenology of the ultralight CP-even scalar. Depending on whether the $\mathbb{Z}_2$ symmetry is exactly preserved or not, the scalar-photon coupling is $\phi F^2$~(Type-A) or $\phi^2 F^2$~(Type-B). From \Eq{d1_d2}, we know that the dimensionless coefficients $d_{\gamma,i} \,\,\, (i=1,2)$ contain the product of $e'$ and $\Lambda_\text{KM}$, and $\Lambda_\text{KM}$ can be written as $\Lambda_\text{KM} \simeq \abs{\phi}_\text{osc}/\epsilon_\text{FI}$.
We also know that in the freeze-in model of the dark photon dark matter, we have $\epsilon_\text{FI} \sim 10^{-12}$. Therefore, the targets of the scalar experiments are set by the freeze-in of the dark photon dark matter. To be more quantitative, we have \begin{equation} \label{eq:d1_d2_FI} d_{\gamma,i} \sim \left(\frac{m_\text{pl} \,\epsilon_\text{FI}}{e' \,\abs{\phi}_\text{osc} }\right)^i \propto \left(\frac{m_0 \,\epsilon_\text{FI}}{e' \, \mathcal{F}^{1/2} \, T_\text{osc}^{3/2} }\right)^i \,\,\,\,\,\,\,\,\,\,\,\,\,\, (i=1,2), \end{equation} where ``$i=1$'' and ``$i=2$'' denote the type-A and type-B models, respectively. From \Eq{eq:d1_d2_FI}, we know that with $\epsilon_\text{FI}$ determined, the smaller $e'$ is, the larger $d_{\gamma, i}\,\,\,(i=1,2)$ are, so our model is more easily detected. The reason is that, in the UV model of Sec.~\ref{sec:UV_model}, to maintain a given kinetic mixing, $y^{(\prime)}$ must be increased when $e'$ is decreased, which increases the coefficients in front of the scalar-photon couplings. Since the phenomenologies of the linear~(type-A) and quadratic~(type-B) scalar-photon couplings are quite different, we discuss these two models individually in the rest of this subsection. \paragraph{\bf Type-A Model} Let us first discuss the type-A model shown in the left panel of \Fig{fig:scalar_plt}. To illustrate how the experimental signals vary with the dark gauge coupling, we plot the $d_{\gamma,1}$ lines~(black) choosing $e' = 1, 10^{-3}, 10^{-6}$ as the benchmark values. From \Eq{eq:T_phi_eta<<1} we know that $T_\text{osc} \propto m_0^{2/3}$ when $m_0 \lesssim 10^{-28}\text{eV}$, and $T_\text{osc} \propto m_0^{1/2}$ when $m_0 \gtrsim 10^{-28}\text{eV}$.
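For completeness, let us sketch the intermediate ``$\propto$'' step in \Eq{eq:d1_d2_FI}, assuming the oscillating scalar redshifts like matter from $T_\text{osc}$ onward and dropping constant factors~($m_\text{pl}$, $T_0$, $\rho_\text{DM}$): \begin{eqnarray}\begin{aligned} \rho_\phi \sim m_0^2 \abs{\phi}_\text{osc}^2 \left(\frac{T_0}{T_\text{osc}}\right)^{3} \simeq \mathcal{F} \rho_\text{DM} \quad \Longrightarrow \quad \abs{\phi}_\text{osc} \propto \frac{\mathcal{F}^{1/2}\, T_\text{osc}^{3/2}}{m_0}, \end{aligned}\end{eqnarray} which, substituted into $d_{\gamma,i} \sim \left(m_\text{pl}\, \epsilon_\text{FI}/(e' \abs{\phi}_\text{osc})\right)^i$, yields the right-hand side of \Eq{eq:d1_d2_FI}.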
This explains the shapes of the black lines, which can be represented as \begin{eqnarray}\begin{aligned} \label{d1_line_TypeA} d_{\gamma,1} \propto \frac{1}{e' \mathcal{F}^{1/2} } \times \left\{ \begin{aligned} m_0^{1/4} & \,\,\,\quad (m_0 \gtrsim 10^{-28} \text{eV})\\ \text{const} & \,\,\,\quad (10^{-33} \text{eV} \lesssim m_0 \lesssim 10^{-28}\text{eV}) \end{aligned} \right. \end{aligned}\end{eqnarray} In \Fig{fig:scalar_plt}, the thick dashed magenta line is the contour of $\eta \simeq 1$. In the upper right corner beyond this contour, $\eta$ is larger but still of the order of one to ten, so the ``$\arctan$'' suppression in the type-A $\eta \gg 1$ case mentioned in Sec.~\ref{subsec:eta>=1} does not appear. According to our former discussion in Sec.~\ref{subsec:eta>=1}, for the type-A model with $\eta \sim 1$, the ultralight scalar's early field displacement is set to $\pi f/2$ by the thermal misalignment. Since $\abs{\phi}_\text{osc}/f \sim \mathcal{O}(1)$, from \Eq{eta_from_de} one finds that the power law of the thick dashed magenta line satisfies $d_{\gamma,1} \propto m_0^{-1/4}$. In the region below this thick dashed magenta line, we have $\eta \ll 1$, so the ultralight scalar undergoes the standard misalignment. One of the strongest constraints on the type-A model comes from the equivalence principle experiments testing the acceleration difference of two objects made of different materials attracted by the same heavy object. In the left panel of \Fig{fig:scalar_plt}, this constraint is shown as the shaded gray region. To date, the most stringent constraint is imposed by the MICROSCOPE space mission~\cite{Berge:2017ovy} and the E\"{o}t-Wash torsion balance experiments~\cite{Smith:1999cr, Schlamminger:2007ht}, which give $d_{\gamma,1} \lesssim 10^{-4}$.
Since this acceleration difference is caused by the Yukawa interaction mediated by the ultralight scalar, these constraints are independent of the nonrelativistic scalar relic's fraction of the dark matter, i.e., $\mathcal{F}$. The clock comparisons of Dy/Dy~\cite{VanTilburg:2015oza}, Rb/Cs~\cite{Hees:2016gop}, and $\text{Al}^+$/$\text{Hg}^+$~\cite{collaboration2021frequency} also give stringent constraints based on testing the time variation of $\alpha_\text{em}$. These constraints are shown as the shaded blue region. To go beyond the current constraints, several experiments have been proposed. For the proposed future clock comparison experiments~\cite{Arvanitaki:2014faa}, the projection of the improved optical-optical clock comparison is shown as the pink line, and the projection of the optical-nuclear clock comparison is shown as the red line. For these projections, the dashed parts denote the projection of the $\alpha_\text{em}$ oscillation testing, and the dotted parts denote the projection of the $\alpha_\text{em}$ drift testing. According to \cite{Arvanitaki:2014faa}, the projection makes the optimistic assumption that the measurement takes place when the scalar is sweeping through zero, such that $\dot{\alpha}_\text{em}$ is independent of $m_0$. Following this, we extrapolate the projections of the optical-optical and optical-nuclear experiments down to $10^{-32}\text{eV}$, considering the homogeneity of the ultralight scalar when the scalar's de Broglie wavelength is much larger than the size of the Milky Way halo. The cold-atom interferometer experiments such as AEDGE~\cite{AEDGE:2019nxb}, AION~\cite{Badurina:2019hst}, and MAGIS~\cite{MAGIS-100:2021etm} have strong detection capability in the mass range $10^{-19}\text{eV} \lesssim m_0 \lesssim 10^{-12}\text{eV}$. In the plot, the projections of AEDGE~(broadband, resonant mode), AION-km, and MAGIS-km are shown as the dashed dark green, dashed green, and dashed light green lines, respectively.
The region of the thermal misalignment located in the upper right corner of the left panel of \Fig{fig:scalar_plt} can be tested by the proposed resonant-mass detectors, such as DUAL, shown as the dashed orange line~\cite{Arvanitaki:2015iga}. The CP-even scalar in the mass range $10^{-32}\text{eV} \lesssim m_0 \lesssim 10^{-28}\text{eV}$ can be tested via the proposed Lyman-$\alpha$ UVES observation~\cite{Hamaide:2022rwi}, shown as the dashed purple line. Since all the constraints and projections except the EP tests rely on the $\alpha_\text{em}$ variation, and $\Delta \alpha_\text{em} \propto d_{\gamma,1} \abs{\phi}$, they all scale as $\mathcal{F}^{-1/2}$ for different ultralight scalar fractions of the dark matter. We know from \Eq{d1_line_TypeA} that, as $\mathcal{F}$ varies, the relative position between the experimental targets~(black lines) and the constraints/projections from the non-EP experiments remains the same. Knowing that $e'$ cannot be much larger than $\mathcal{O}(1)$~(otherwise, the Landau pole is shifted to low energies and the theory's perturbativity is lost), one can see from the left panel of \Fig{fig:scalar_plt} that most of the theory space is within the detection capabilities of the proposed experiments. \paragraph{\bf Type-B Model} Now let us discuss the type-B model, whose parameter space is shown in the right panel of \Fig{fig:scalar_plt}. Here, we take $\phi$ to start in the wrong vacuum~($\pi f/2 \lesssim \phi_\text{rh} \lesssim 3 \pi f/2$). In the plot, the thick dashed magenta line is the contour obeying $\eta \simeq 1$. In the region above~(below) this line, the ultralight scalar undergoes the thermal~(standard) misalignment. From \Eq{eta_from_de} we know that $\eta \sim (\alpha_\text{em} d_{\gamma,2} )^{1/2}$. Thus, the thick dashed magenta line approximately obeys $d_{\gamma, 2} \sim 10^2$.
As discussed in Subsec.~\ref{subsec:eta>=1}, the thermal effect makes $\phi$ converge to $\pi f$ in the early universe in the parameter space above the thick dashed magenta line. The thick dashed cyan line denotes the contour satisfying $T_\text{osc} \simeq T|_{3H=m_0}$, or equivalently, $m_T|_{3H=m_0}\simeq m_0$. In the region enveloped by the thick dashed cyan line, we have $T_\text{osc} < T|_{3H=m_0}$, meaning that the thermal effect postpones $\phi$'s oscillation, which corresponds to the phase of trapped misalignment. From the plot, one can find that the thick dashed cyan line is approximately flat when $m_0 \gtrsim 10^{-16}\text{eV}$. Such flatness appears because $m_T \simeq m_0$ happens when $e^\pm$ are relativistic, which makes $m_T/H \sim \eta$. As long as $\eta \gg 1$, the onset of the oscillation is postponed by the thermal effect\footnote{Here, one may wonder why the thick dashed cyan and thick dashed magenta lines do not overlap when $m_0 \gtrsim 10^{-16}\text{eV}$. To understand this splitting, one needs to recall the exact definition of $\eta$ in \Eq{eq:eta_def}, which is $\eta \coloneqq 2 m_T/H$~(here $\mathcal{S} \simeq 1$). When $3H = m_0$, to make $m_T > m_0$, one needs $\eta>6$. Since $\eta \propto d_{\gamma,2}^{1/2}$, the values of $d_{\gamma,2}$ for the thick dashed cyan line~($m_T|_{3H=m_0} = m_0$) and the thick dashed magenta line~($\eta = 1$) differ by a factor of $36$. }. One can also find that the left side of the thick dashed cyan line becomes vertical, because there $m_T \simeq m_0$ happens when $e^{\pm}$ are non-relativistic. In this case, the condition for $T_\text{osc} < T|_{3H=m_0}$ becomes $m_0 \gtrsim m^2_e/m_\text{pl}$. The experimental benchmarks are shown as the black lines in the $m_0-d_{\gamma,2}$ plane.
In the region of the standard misalignment~($\eta \ll 1$), the black lines obey \begin{eqnarray}\begin{aligned} \label{d1_line_TypeB} d_{\gamma,2} \propto \frac{1}{e'^2 \mathcal{F} } \times \left\{ \begin{aligned} m_0^{1/2} & \,\,\,\quad ( 10^{-28} \text{eV} \lesssim m_0 \lesssim m_*)\\ \text{const} & \,\,\,\quad (10^{-33} \text{eV} \lesssim m_0 \lesssim 10^{-28}\text{eV}) \end{aligned} \right. , \end{aligned}\end{eqnarray} where $m_*$ denotes the abscissa of the intersection of the black line and the thick dashed cyan line. To understand the power law of $d_{\gamma,2}$ in \Eq{d1_line_TypeB}, one can refer to \Eq{eq:T_phi_eta<<1}, which shows that $T_\text{osc} \propto m_0^{2/3}$ when $m_0 \lesssim 10^{-28}\text{eV}$, and $T_\text{osc} \propto m_0^{1/2}$ when $m_0 \gtrsim 10^{-28}\text{eV}$. Plugging $T_\text{osc}$ into \Eq{eq:d1_d2_FI} then yields \Eq{d1_line_TypeB}. Now let us discuss the behaviors of the black lines in the scalar mass range $m_0\gtrsim m_*$. If the black line intersects the thick dashed cyan line on its vertical side, i.e., $m_* \lesssim 10^{-16}\text{eV}$, one has $T_\text{osc} \sim m_e$. If the black line crosses the thick dashed cyan line on its horizontal side, i.e., $m_* \gtrsim 10^{-16}\text{eV}$, we have $T_\text{osc} \sim T|_{3H=m_0}/\eta^{1/2} \propto m_0^{1/2}/d_{\gamma,2}^{1/4}$. Substituting these expressions for $T_\text{osc}$ into \Eq{eq:d1_d2_FI}, we have \begin{eqnarray}\begin{aligned} \text{$d_{\gamma,2} \propto \frac{1}{e'^2 \mathcal{F}} \times m_0^2$ \quad if $m_* \lesssim 10^{-16}\text{eV}$,\, \quad \quad $d_{\gamma,2} \propto \frac{1}{e'^8 \mathcal{F}^4} \times m_0^2$ \quad if $m_* \gtrsim 10^{-16}\text{eV}$}\,\,\,\quad ( m_0 \gtrsim m_*), \end{aligned}\end{eqnarray} which explains why the black lines tilt up when entering the region of the trapped misalignment, i.e., the area enclosed by the thick dashed cyan line.
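These exponents can be cross-checked by solving the proportionality $d_{\gamma,2} \propto \left(m_0/(e'\, T_\text{osc}^{3/2})\right)^2$ self-consistently for each $T_\text{osc}$ scaling; a minimal sketch in exact rational arithmetic (the scalar fraction $\mathcal{F}$ is held fixed and suppressed):

```python
from fractions import Fraction as Fr

def d2_exponents(p, q):
    """Given T_osc ∝ m0^p * d^q, solve d ∝ (m0 / (e' T_osc^(3/2)))^2
    for d ∝ m0^a * e'^b (eps_FI and the scalar fraction held fixed)."""
    # d = m0^(2-3p) e'^(-2) d^(-3q)  =>  d^(1+3q) = m0^(2-3p) e'^(-2)
    a = (Fr(2) - 3*p) / (1 + 3*q)
    b = Fr(-2) / (1 + 3*q)
    return a, b

# standard misalignment, m0 <~ 1e-28 eV:  T_osc ∝ m0^(2/3)  ->  d const in m0
assert d2_exponents(Fr(2, 3), Fr(0)) == (0, -2)
# standard misalignment, m0 >~ 1e-28 eV:  T_osc ∝ m0^(1/2)  ->  d ∝ m0^(1/2)/e'^2
assert d2_exponents(Fr(1, 2), Fr(0)) == (Fr(1, 2), -2)
# trapped, vertical side:  T_osc ~ m_e (no m0, d dependence)  ->  d ∝ m0^2/e'^2
assert d2_exponents(Fr(0), Fr(0)) == (2, -2)
# trapped, horizontal side:  T_osc ∝ m0^(1/2) d^(-1/4)  ->  d ∝ m0^2/e'^8
assert d2_exponents(Fr(1, 2), Fr(-1, 4)) == (2, -8)
print("all type-B power laws consistent")
```

The same bookkeeping with the overall exponent $i=1$ instead of $2$ reproduces the type-A slopes ($m_0^{1/4}$ and constant) in \Eq{d1_line_TypeA}.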
This signal enhancement comes from the postponement of the oscillation, which is the major feature of the trapped misalignment. Similar to the type-A model, the type-B model can be tested through terrestrial experiments. In the plot, the current constraints based on the clock comparisons~\cite{VanTilburg:2015oza, Hees:2016gop, Hees:2018fpg, collaboration2021frequency, Banerjee:2022sqg} are shown as the shaded blue region, and the constraints from the equivalence principle tests~\cite{Berge:2017ovy, Hees:2018fpg, Banerjee:2022sqg} are shown as the shaded gray region. The projections of the proposed optical-optical and optical-nuclear clock comparisons are shown in orange and red, respectively~\cite{Arvanitaki:2014faa, Banerjee:2022sqg}. Here, the dashed and dotted parts of the projections denote the detection capabilities for the $\alpha_\text{em}$ oscillation and the $\alpha_\text{em}$ drift, respectively. In addition, our model can also be tested by the cold-atom interferometers. The projections of AEDGE~\cite{AEDGE:2019nxb} and AION-km~\cite{Badurina:2019hst} are shown as the dashed dark green and dashed green lines, respectively. One may notice that the constraints and projections from the ground-based and low-altitude experiments get weakened or cut off in the strong coupling region. This is caused by the scalar's matter effect sourced by the earth, one of the characteristics of the quadratic scalar-SM coupling.
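As a rough numerical orientation for where this matter effect sets in, the critical coupling $d_{\gamma,2,\text{crit}} = m_\text{pl}^2 R_{\bigoplus}/(3 M_{\bigoplus} Q_{\gamma, \bigoplus})$ quoted below can be evaluated directly; a sketch, assuming $m_\text{pl}$ in this formula is the non-reduced Planck mass~($1.22\times 10^{19}\,\text{GeV}$, a convention assumed here) and $Q_{\gamma,\bigoplus} \sim 10^{-3}$:

```python
# Order-of-magnitude evaluation of the critical quadratic coupling for the
# earth-sourced matter effect, d_crit = m_pl^2 R_E / (3 M_E Q_E).
GEV_PER_KG = 5.61e26        # 1 kg in GeV
INV_GEV_PER_M = 5.07e15     # 1 m in GeV^-1

m_pl = 1.22e19                        # GeV; non-reduced Planck mass (assumed)
R_E = 6.37e6 * INV_GEV_PER_M          # earth's radius in GeV^-1
M_E = 5.97e24 * GEV_PER_KG            # earth's mass in GeV
Q_E = 1e-3                            # earth's dilaton charge, order of magnitude

d_crit = m_pl**2 * R_E / (3 * M_E * Q_E)
print(f"d_crit ~ {d_crit:.1e}")       # a few times 10^11
```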
Following~\cite{Hees:2018fpg, Banerjee:2022sqg}, we have \begin{eqnarray}\begin{aligned} \label{eq:alphaem_typeB_screen} d_{\gamma,2,\text{crit}} = \frac{m_\text{pl}^2 R_{\bigoplus}}{3 M_{\bigoplus} Q_{\gamma, \bigoplus}} \sim \text{few}\times 10^{11}, \end{aligned}\end{eqnarray} where $d_{\gamma,2,\text{crit}}$ is the critical coupling above which the matter effect appears, $R_{\bigoplus} \sim 6 \times 10^3 \text{km}$ is the earth's radius, $M_{\bigoplus} \sim 6 \times 10^{24} \text{kg}$ is the earth's mass, and $Q_{\gamma, \bigoplus} \sim 10^{-3}$ is the earth's dilaton charge. When $d_{\gamma,2} \gtrsim d_{\gamma,2,\text{crit}}$, the scalar's Compton wavelength is smaller than the earth's radius. In this situation, the scalar easily overcomes the spatial gradient and is pulled toward the origin, so its near-ground and underground oscillation amplitudes are highly suppressed. Even so, if the experiments are carried out in space at altitudes comparable with the earth's radius~(for example, AEDGE), the screening effect sourced by the earth can be largely alleviated. Unlike the type-A model, the type-B model receives strong constraints from the early universe processes, such as CMB and BBN, which are comparable with the terrestrial constraints. The reason for this is that, tracing back to the early universe with $\phi$'s abundance fixed, $\Delta \alpha_\text{em}$ increases much more for the type-B model than for the type-A model. For earlier analyses of the CMB and BBN bounds on the quadratic scalar-photon coupling, one can refer to \cite{Stadnik:2015kia, Sibiryakov:2020eir, Bouley:2022eer}\footnote{The results in \cite{Sibiryakov:2020eir, Bouley:2022eer} cannot be recast to our model directly, because these works constrain the coupling $\phi^2 F^2$, which can be classified into the case where $\phi$ starts in the range $- \pi f/2\lesssim \phi_\text{rh} \lesssim \pi f/2$.
We are discussing the initial wrong vacuum case where $\pi f/2\lesssim \phi_\text{rh} \lesssim 3 \pi f/2$. }. Considering the thermal effect in Subsec.~\ref{subsec:eta>=1}, we impose the constraints on the type-B scalar's parameter space using the current cosmological constraints on $\Delta \alpha_\text{em}/\alpha_\text{em}$, which are~\cite{Stadnik:2015kia,Hart:2019dxi}\footnote{To double-check our results, we reproduce the CMB and BBN constraints in \cite{Stadnik:2015kia} by using the earlier CMB bound $\Delta \alpha_\text{em}/\alpha_\text{em} \lesssim 10^{-2}$ used in \cite{Stadnik:2015kia} and by only including the bare potential effect during BBN, respectively.} \begin{eqnarray}\begin{aligned} (\Delta \alpha_\text{em}/\alpha_\text{em})_\text{CMB} \lesssim 2\times 10^{-3}, \quad \quad (\Delta \alpha_\text{em}/\alpha_\text{em})_\text{BBN} \lesssim 6\times 10^{-3} \,. \end{aligned}\end{eqnarray} In the right panel of \Fig{fig:scalar_plt}, the CMB constraint is shown as the shaded orange region, and the BBN constraint is shown as the shaded purple region. Based on the right panel of \Fig{fig:scalar_plt}, let us discuss the detectability of the dark photon dark matter freeze-in via the varying kinetic mixing. Given the scalar fraction $\mathcal{F} \simeq 10^{-3}$, for the model to be tested by the proposed experiments but not excluded by CMB/BBN, the dark gauge coupling needs to be within the range $ 10^{-10} \lesssim e' \lesssim 10^{-6}$. When $\mathcal{F}$ varies, the relative position between the black lines and the non-EP experiments does not change qualitatively, because all these constraints essentially come from $\Delta \alpha_\text{em} \propto d_{\gamma,2} {\abs{\phi}^2}$. In this case, their positions are rescaled by $\mathcal{F}^{-1}$ in the weak coupling region.
In contrast, the equivalence principle experiments test the acceleration difference of two objects A and B, which obeys $\abs{a_A-a_B}/\abs{a_A+a_B} \propto d_{\gamma,2}^2 \, \phi \abs{\nabla{\phi}}$ according to \cite{Hees:2018fpg, Banerjee:2022sqg}. From this, we know that the constraints on $d_{\gamma,2}$ from the equivalence principle tests should be rescaled by $\mathcal{F}^{-1/2}$ in the weak coupling region. When $\mathcal{F} \lesssim 10^{-8}$, the equivalence principle constraints are stronger than the current CMB, BBN, and clock constraints. We leave detailed discussions of the $\mathcal{F}$-dependence of the constraints and theoretical benchmarks to future work. At the end of this section, we comment on the constraints imposed by the black hole superradiance. For the $\mathcal{F} \simeq 1$ case, the current constraints from the supermassive black holes exclude the region $10^{-21}\text{eV} \lesssim m_0 \lesssim 10^{-17}\text{eV}$~\cite{Arvanitaki:2014wva, Stott:2018opm, Davoudiasl:2019nlo, Unal:2020jiy, Du:2022trq}, and the constraints from the solar mass black holes exclude the region $10^{-13}\text{eV} \lesssim m_0 \lesssim 10^{-11}\text{eV}$~\cite{Baryakhtar:2020gao}. However, since $\phi$'s self-interaction is $\lambda_\phi \sim m_0^2/f^2$, a smaller $\mathcal{F}$ leads to a smaller $f$, which increases $\lambda_\phi$. From \cite{Baryakhtar:2020gao, Unal:2020jiy} we know that, for the scalar as a subfraction of the dark matter, the superradiance constraints are alleviated by the scalar's large attractive self-interaction. \section{$\mathbb{Z}_N$-Protected Scalar Naturalness} \label{sec:zn_varykm} Since the Yukawa interaction in \Eq{eq:simpuv} breaks $\Phi$'s global $U(1)$ symmetry, the ultralight CP-even scalar $\phi$ suffers from large quantum corrections, which is a common problem in many ultralight scalar models.
Taking the type-B model in Sec.~\ref{sec:UV_model} as an example, we have the mass correction $\Delta m_\phi\sim yM$, with the benchmark value \begin{eqnarray}\begin{aligned} \Delta m_{\phi} & \sim 10^{-11} \text{eV}\left(\frac{\epsilon_\text{FI} }{10^{-12}}\right)\left(\frac{M}{10\text{TeV}}\right)^2\left(\frac{10^{17}\text{GeV}}{ \abs{\phi}_\text{osc} }\right)\left(\frac{1}{e'} \right), \label{Min_Cancel_phimass} \end{aligned}\end{eqnarray} which is much larger than the scalar's bare mass. For the type-A model, the Coleman-Weinberg potential is $V_\text{cw} \sim y M^3 f \sin(\phi/f)$. Given that $r\ll 1$, the type-A model's quantum mass correction $\Delta m_\phi \sim yM/r^{1/2}$ is even larger. Because the minimum of the type-A model's $V_\text{cw}$ is not at zero, one needs to introduce an additional tadpole counterterm to prevent $\phi$ from acquiring a nonzero late-time displacement. Although such a naturalness problem exists in the minimal model, by imposing an extra $\mathbb{Z}_N$ symmetry~\cite{Hook:2018jle, Frieman:1995pm}, the global $U(1)$ symmetry can be approximately restored, so the quantum correction is exponentially suppressed. In such a setup, for the type-A and type-B models, the mass fine-tuning problem is exponentially alleviated. In addition, for the type-A model, the one-loop tadpole is also negligible. To realize this, we introduce $N$ copies of the worlds containing the standard model sector ($\text{SM}_k$), the dark sector ($\text{DS}_k$), and the portal~(${\mathcal{O}_P}_k$), where $k=0,1,\dots,N-1$.
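The benchmark in \Eq{Min_Cancel_phimass} can be reproduced at the order-of-magnitude level; a numerical sketch, assuming the type-B one-loop relation $\epsilon_\text{FI} \sim 2\sqrt{2}\, e e' y f \sin(\phi/f)/(6\pi^2 M)$~(cf. \Eq{eq:znir_eps} with $y'=-y$) and replacing $f\sin(\phi/f) \rightarrow \abs{\phi}_\text{osc}$:

```python
from math import pi, sqrt

# Invert eps_FI ~ 2*sqrt(2)*e*e'*y*|phi|_osc/(6*pi^2*M) for the Yukawa y
# (an assumed simplification), then estimate Delta m_phi ~ y*M.
eps_FI = 1e-12          # kinetic mixing before the oscillation
M = 1e4                 # GeV (messenger mass, 10 TeV)
phi_osc = 1e17          # GeV (scalar oscillation amplitude)
e = sqrt(4*pi/137)      # electromagnetic coupling
e_dark = 1.0            # dark gauge coupling e'

y = 6*pi**2 * eps_FI * M / (2*sqrt(2) * e * e_dark * phi_osc)
dm_phi_eV = y * M * 1e9  # GeV -> eV
print(f"Delta m_phi ~ {dm_phi_eV:.0e} eV")  # ~1e-11 eV level, up to O(1) factors
```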
Because of the $\mathbb{Z}_N$ symmetry, the system is invariant under the transformation \begin{eqnarray}\begin{aligned} \Phi\rightarrow\Phi\exp\left(i\frac{2\pi}{N}\right)\;,\;(\text{SM}+\mathcal{O}_P+ \text{DS})_k\rightarrow (\text{SM}+ \mathcal{O}_P +\text{DS})_{k+1}\;, \end{aligned}\end{eqnarray} so the lowest order effective operator is \begin{equation}\label{eq:u1break} \mathcal{L}_{\cancel{U(1)}}=\text{const} \times \frac{\Phi^N}{\Lambda^{N-4}_1}+\text{h.c.} \end{equation} which is invariant under the $\mathbb{Z}_N$ transformation but not under the $U(1)$ transformation. Such a dimension-$N$ operator shows that even though the global $U(1)$ symmetry is broken, as long as the symmetry under the discrete subgroup $\mathbb{Z}_N$ survives, the quantum correction $\Delta m_\phi^2 \propto f^{N-2}/\Lambda^{N-4}_1$ is exponentially suppressed by $N$. If $N$ is an odd number and the exact $\mathbb{Z}_2$ symmetry under $\mathcal{C}_d$~($\Phi \rightarrow - \Phi^\dagger$) is imposed, the lowest order effective operator is $\Phi^{2N}/\Lambda^{2N-4}_2$, so the quantum mass correction is $f^{2N-2}/\Lambda^{2N-4}_2$\footnote{Here we attach the subscript ``$l$'' to the effective energy scale $\Lambda_l$ in the $\mathbb{Z}_N$ invariant operator $\mathcal{L} \supset \text{const} \times \Phi^{lN}/\Lambda_l^{lN-4}+\text{h.c.}$, because these scales are hierarchically different for different ``$l$''s when the Yukawas are small; this reveals the necessity of a more detailed calculation to go beyond a purely qualitative understanding of how the $\mathbb{Z}_N$ symmetry protects the CP-even scalar's naturalness. From \Eq{eq:cwphi} later in the text, we will find that $\Lambda_l \sim M/y^{lN/(lN-4)} \sim \Lambda_{\gamma,i}/y^{4/(lN-4)}$.}. As long as $\Delta m_\phi^2 \lesssim m_0^2$, the scalar's bare mass $m_0$, which softly breaks the $\mathbb{Z}_N$ symmetry, can be naturally small.
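The exponential suppression can be illustrated numerically with a toy version of the one-loop potential: summing a single-world potential of the representative shape $(1-r\cos\theta)^4\log(1-r\cos\theta)$ over the $N$ worlds leaves only Fourier modes at multiples of $N$, so the residual $\theta$-dependence shrinks roughly like $r^N$. A sketch (the shape and $r=0.3$ are illustrative choices, not the full calculation):

```python
from math import cos, log, pi

def v_single(theta, r=0.3):
    """Toy single-world CW-type potential shape, (1 - r cos t)^4 log(1 - r cos t)."""
    x = 1 - r*cos(theta)
    return x**4 * log(x)

def zn_ripple(N, r=0.3, samples=720):
    """Peak-to-peak theta-dependence of the Z_N-summed potential."""
    thetas = [2*pi*j/samples for j in range(samples)]
    V = [sum(v_single(t + 2*pi*k/N, r) for k in range(N)) for t in thetas]
    return max(V) - min(V)

# The residual theta-dependence drops steeply (~r^N) as N grows:
for N in (3, 5, 7):
    print(N, zn_ripple(N))
```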
\begin{figure}[t] \centering \includegraphics[width=0.85\columnwidth]{ZN_Z2_Small.pdf} \caption{The schematic diagram of the $\mathbb{Z}_N$ invariant UV model. The messenger particles $\Psi^{(\prime)}_k \, (k=0,1,\cdots,N-1)$ are all coupled to the complex scalar $\Phi$, where ``$k$'' labels the $k$th world and $k=0$ is our world, which experienced the reheating. Because of the suppression from $y^{(\prime)}$, the $k$th and $j$th universes~($k \neq j$) do not communicate with each other. Here, the $\mathbb{Z}_N$ rotation $\Phi \rightarrow \Phi\exp(i ~2 \pi/N), \Psi^{(\prime)}_k \rightarrow \Psi^{(\prime)}_{k+1}$ is labeled by the gray arrowed circles. The $\mathbb{Z}_2$ symmetry between $\Psi_k$ and $\Psi_k^{\prime}$, which is exact for the type-B model ($y'=-y$) and approximate for the type-A model ($y\neq0, y'=0$), eliminates the time-independent part of the kinetic mixing.} \label{fig:ZN_Z2} \end{figure} \subsection{$\mathbb{Z}_N$-Protected Model} To build a concrete varying kinetic mixing model without the naturalness problem, we embed the simple model described by \Eq{eq:simpuv} into the Lagrangian \begin{eqnarray}\begin{aligned} \label{eq:znuv} \mathcal{L}_\text{UV} \supset \sum_{k=0}^{N-1}\left( y e^{i \frac{2\pi k}{N}} \Phi \bar{\Psi}_k \Psi_k + y' e^{i \frac{2 \pi k}{N}} \Phi \bar{\Psi}'_k \Psi'_k + \text{h.c.} \right) - M \sum_{k=0}^{N-1}(\bar{\Psi}_k \Psi_k + \bar{\Psi}_k' \Psi_k') - \lambda \left( \left|\Phi\right|^2 - \frac{f^2}{2}\right)^2 \end{aligned}\end{eqnarray} which is invariant under the $\mathbb{Z}_N$ transformation $\Phi \rightarrow \Phi \exp(i\frac{2 \pi}{N}), \, \Psi^{(\prime)}_k \rightarrow \Psi^{(\prime)}_{k+1}, \, A^{(\prime)}_k \rightarrow A^{(\prime)}_{k+1}$. Here, $\Psi_k$ and $\Psi_k'$, two doubly charged fermions in the $k$th universe, carry the same $k$th-hypercharge but the opposite $k$th-dark hypercharge.
Following the phase choice in Sec.~\ref{sec:UV_model}, we choose $c=\pi/2$, $\Phi=\frac{i f}{\sqrt{2}} e^{i\phi/f}$ and introduce the $\mathbb{Z}_2$-invariant bare potential in \Eq{eq:V0} to ensure the late-time closing of the kinetic mixing portal. When the ultralight scalar $\phi$ is nonzero, $\Psi_k^{(\prime)}$'s effective masses become\footnote{$r$ and $r^\prime$, representing the Yukawa couplings in \Eq{eq:M_k_phi}, are already defined in \Eq{eq:M_phi}. However, we still list them here as a reminder, because $r$ and $r^\prime$ appear in the suppression factors of the quantum correction in the $\mathbb{Z}_N$-naturalness. } \begin{eqnarray}\begin{aligned} \label{eq:M_k_phi} M_k^{(\prime)}(\phi) = M\left[1+r^{(\prime)}\sin\left(\frac{\phi}{f} + \frac{2 \pi k}{N}\right)\right], \,\,\,\, \text{where $r^{(\prime)} = \frac{\sqrt{2}\, y^{(\prime)} f}{M}$.} \end{aligned}\end{eqnarray} To gain more intuition about the setup of the $\mathbb{Z}_N$-invariant model, the reader can refer to \Fig{fig:ZN_Z2}, the schematic diagram of the $\mathbb{Z}_N$ UV Lagrangian described by \Eq{eq:znuv}. In this figure, $\Phi$ is the axle attached to all $N$ copies of the universes, and ``$\mathbb{Z}_2$'' refers to $\Psi_k^{(\prime)}$'s mass degeneracy, which is exact when $y'=-y$ (Type-B) or approximate when $y' \neq -y$ (as in the type-A model, where $y'=0$).
In the low energy limit, the heavy fermions $\Psi_k^{(\prime)}$ are integrated out, so the IR Lagrangian becomes \begin{eqnarray}\begin{aligned}\label{eq:znir} \mathcal{L}_\text{IR} \supset \sum_{k=0}^{N-1}\left[\frac{1}{2} \epsilon_k {F_k}_{\mu \nu} F'^{\mu \nu}_k + \frac{1}{4} \left(\frac{\Delta \alpha_\text{em}}{\alpha_\text{em}}\right)_k {F_k}_{\mu \nu} F_k^{\mu \nu}\right], \end{aligned}\end{eqnarray} where \begin{eqnarray}\begin{aligned}\label{eq:znir_eps} \epsilon_k= \frac{\sqrt{2}e e' (y-y')f }{6 \pi^2 M}\sin \left(\frac{\phi}{f} + \frac{2 \pi k}{N}\right) \end{aligned}\end{eqnarray} and \begin{eqnarray}\begin{aligned}\label{eq:znir_alpha} \left(\frac{\Delta \alpha_\text{em}}{\alpha_\text{em}}\right)_k = \frac{e^2}{6 \pi^2} \left[-\frac{\sqrt{2}(y+y')f}{M} \sin\left( \frac{\phi}{f} + \frac{2 \pi k}{N} \right) + \frac{(y^2+y'^2)f^2}{M^2}\sin^2\left( \frac{\phi}{f} + \frac{2 \pi k}{N} \right) \right]. \end{aligned}\end{eqnarray} From \Eq{eq:znir}, \Eq{eq:znir_eps} and \Eq{eq:znir_alpha}, one can easily find that the IR Lagrangian is invariant under the $\mathbb{Z}_N$ transformation $\phi\rightarrow \phi + \frac{2 \pi f}{N}, \, A^{(\prime)}_{k} \rightarrow A^{(\prime)}_{k+1}$. Here, $(\text{SM} + \mathcal{O}_P + \text{DS})_0$ is our universe, which experiences the reheating, but the other universes are not reheated at all. When $H \lesssim m_0$, $\phi$ begins the damped oscillation, so the kinetic mixing $\epsilon_0$ between $\text{SM}_0$ and $\text{DS}_0$ gradually decreases towards zero as described in Sec.~\ref{sec:UV_model} and Sec.~\ref{sec:cos_his}.
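As a quick numerical sanity check of this $\mathbb{Z}_N$ invariance, one can verify that shifting $\phi$ by $2\pi f/N$ maps the kinetic mixing of the $k$th world in \Eq{eq:znir_eps} onto that of the $(k+1)$th world. All coupling values below are arbitrary placeholders; only the functional form of $\epsilon_k$ matters.

```python
import math

# Arbitrary placeholder couplings; only the functional form of eps_k matters.
e, ep, y, yp, f, M, N = 0.3, 0.5, 0.1, -0.1, 1.0, 10.0, 7

def eps(k, phi):
    # Kinetic mixing of the k-th world, cf. the expression for eps_k above.
    return (math.sqrt(2) * e * ep * (y - yp) * f / (6 * math.pi**2 * M)
            * math.sin(phi / f + 2 * math.pi * k / N))

# Shifting phi by 2*pi*f/N maps the k-th world onto the (k+1)-th one.
phi0 = 0.37
for k in range(N):
    assert abs(eps(k, phi0 + 2 * math.pi * f / N) - eps((k + 1) % N, phi0)) < 1e-12
```

The same check goes through for $(\Delta\alpha_\text{em}/\alpha_\text{em})_k$, since both terms depend on $\phi$ and $k$ only through $\sin(\phi/f + 2\pi k/N)$.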
\subsection{Quantum Correction of $\phi$} For the ultralight scalar $\phi$ in \Eq{eq:znuv}, the leading order quantum correction can be described by the one-loop Coleman-Weinberg potential \begin{eqnarray}\begin{aligned} \label{eq:cwphifield} V_{\text{cw}}(\phi)=-\frac{1}{16\pi^2 }\sum_{k=0}^{N-1}M_{k}^4(\phi)\left[\log\left(\frac{M_{k}^2(\phi)}{\mu^2}\right)-\frac{3}{2}\right]+ \left(M_{k}\rightarrow M_{k}'\right). \end{aligned}\end{eqnarray} As discussed in \cite{DiLuzio:2021pxd}, the Fourier series of any scalar potential respecting the $\mathbb{Z}_N$ symmetry only receives contributions to the $lN$th modes~($l$ is a positive integer), so that the Coleman-Weinberg potential can be written as \begin{equation}\label{eq:fourierV} V_\text{cw}(\theta)=N\sum_{l =1}^\infty\widetilde{V}_\text{cw} \left(l N\right)\cos\left(l N\theta\right), \quad \text{where $\theta = \frac{\phi}{f}+\frac{\pi}{2}$.} \end{equation} In \Eq{eq:fourierV}, $\widetilde{V}_\text{cw}\left(l N\right)$ denotes the Fourier coefficient of the single-world Coleman-Weinberg potential, the prefactor $N$ comes from the $N$ worlds, which contribute equally to the Fourier coefficient, and $\cos(l N \theta)$ comes from the effective operator $\mathcal{L} \supset \text{const} \times (\Phi^{lN}+{\Phi^\dagger}^{lN})/\Lambda^{lN-4}_l$. Now, let us calculate the Fourier coefficient in (\ref{eq:fourierV}) through the equation \begin{eqnarray}\begin{aligned} \label{eq:cwfourier} \widetilde{V}_\text{cw}\left(lN\right)=-\frac{M^4}{4\pi^3}\int_0^{\pi}\cos(lN\theta) (1-r\cos\theta)^4\log(1-r\cos\theta) d\theta +(r\rightarrow r')\;.
\end{aligned}\end{eqnarray} Expanding the polynomial $(1-r\cos \theta)^4$ and carrying out the integration over $\theta$, the Fourier coefficient when $N>4$ can be exactly written as\footnote{In \cite{gradshteyn2014table} we find the integral \begin{equation} \int_0^\pi\log[1-r\cos(\theta)]\cos(n\theta)d\theta=-\frac{\pi}{n}[X(r)]^n\;,\quad \text{where $X(r)=\frac{1-\sqrt{1-r^2}}{r}$.} \end{equation}} \begin{eqnarray}\begin{aligned}\label{eq:cwfourier_exact} \widetilde{V}_\text{cw}(lN) = \frac{M^4}{4 \pi^2} \sum_{j=0}^4 \sum_{m=0}^j \begin{pmatrix}4\\4-j,j-m, m\end{pmatrix} \frac{(-1)^j}{lN - j + 2m} \left( \frac{r}{2} \right)^{lN+2m} \left(\frac{ 1 - \sqrt{1 - r^2}}{r^2/2}\right)^{lN - j + 2m} + (r \rightarrow r'). \end{aligned}\end{eqnarray} In \App{appx:zn_vcw}, we list two ways to derive a compact formula for the $\mathbb{Z}_N$ Coleman-Weinberg potential precisely expanded to all orders of $r$, which, as far as we know, is obtained here for the first time. After omitting $r$'s higher order terms in the exact Fourier coefficient (\ref{eq:cwfourier_exact}), we express the Coleman-Weinberg potential as \begin{eqnarray}\begin{aligned}\label{eq:cwphi} V_{\text{cw}}(\phi) & \simeq\frac{M^4 N}{8\pi^2}\Bigg\{\left(r^N+r'^N + \cdots \right)G(N)\cos\left[N\left(\frac{\phi}{f} + \frac{\pi}{2} \right)\right]\\ & \quad \quad \quad \quad + \left(r^{2N}+r'^{2N} + \cdots \right)G(2N)\cos\left[2N\left(\frac{\phi}{f} + \frac{\pi}{2} \right)\right]+\cdots\Bigg\}, \end{aligned}\end{eqnarray} where the ``$\cdots$''s in the brackets of (\ref{eq:cwphi}) denote $r^{(\prime)}$'s higher order terms with the same parity as the lowest order terms. In (\ref{eq:cwphi}), the function $G(n)$ is \begin{eqnarray}\begin{aligned} \label{eq:Gn} G(n) \coloneqq \frac{1}{2^{n-1}} \sum_{j=0}^{4} \binom{4}{j} \frac{(-1)^j }{n-j} =\frac{3 \, (n-5)!}{2^{n-4} \, n \,(n-1)!}.
\end{aligned}\end{eqnarray} Alternatively, \Eq{eq:cwphi} can be derived from \Eq{eq:cwphifield} by applying the cosine function sum rules \begin{equation}\label{cos_sum_rule_brief} \sum_{k=1}^N\cos^m\left(\theta+\frac{2\pi k}{N}\right)=\sum_{l=0}^{[m/N]} \mathcal{C}_{lmN} \cos\left(lN\theta\right)+\mathcal{D}_m, \,\,\, \text{where $\mathcal{C}_{l m N}\big|_{m=lN} = \frac{1}{2^{lN-1}}$.} \end{equation} According to \Eq{eq:M_k_phi}, $M_k$ is a cosine function of $\theta$, so we can expand the Coleman-Weinberg potential in \Eq{eq:cwphifield} as a polynomial function of $\cos(\theta+2\pi k/N)$. Afterward, by employing these cosine sum rules, we can also write the Coleman-Weinberg potential in the form shown in \Eq{eq:cwphi}. The concrete formulas of the constants $\mathcal{C}_{lmN}$ and $\mathcal{D}_m$ can be found in \App{appx:zn_vcw}, \Eq{cos_sum_rule}. \Eq{cos_sum_rule_brief} shows that non-constant terms emerge only when the power of the cosine function in the effective potential is at least $N$. Since the cosine function appearing in the potential is always accompanied by $r$, based on the cosine sum rules, one can find that the lowest order $\mathbb{Z}_N$ potential is proportional to $r^N$ if there is no further cancellation coming from the exact $\mathbb{Z}_2$ symmetry. Given that $r \ll 1$, the factor $r^N G(N)$ in \Eq{eq:cwphi} indicates that the quantum correction of $m_\phi^2$ is suppressed by the $(r/2)^{N-5/2}$ factor in the type-A model, and the $(r/2)^{N-2}$ factor in the type-B model. For the freeze-in of dark photon dark matter, $r$'s benchmark value is \begin{eqnarray}\begin{aligned} r \sim 10^{-10} \left(\frac{\epsilon_\text{FI}}{10^{-12}}\right) \left( \frac{1}{e'} \right) \left( \frac{f}{\abs{\phi}_\text{osc}} \right). \end{aligned}\end{eqnarray} In such a case, for $e' \sim \mathcal{O}(1)$, as long as $N\gtrsim 7$, $\phi$'s naturalness problem can be solved entirely.
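Both the closed form of $G(n)$ in \Eq{eq:Gn} and the mode-filtering property of \Eq{cos_sum_rule_brief} lend themselves to quick numerical verification. The sketch below checks the $G(n)$ identity with exact rational arithmetic, and confirms that in $\sum_k \cos^m(\theta + 2\pi k/N)$ all Fourier modes that are not multiples of $N$ cancel (the normalization of $\mathcal{C}_{lmN}$ is left to the appendix and not tested here):

```python
from fractions import Fraction
from math import comb, factorial, cos, pi

def G_sum(n):
    # Left-hand side of the definition of G(n): 2^{-(n-1)} sum_j C(4,j) (-1)^j/(n-j)
    return Fraction(1, 2**(n - 1)) * sum(
        Fraction(comb(4, j) * (-1)**j, n - j) for j in range(5))

def G_closed(n):
    # Claimed closed form: 3 (n-5)! / (2^{n-4} n (n-1)!)
    return Fraction(3 * factorial(n - 5), 2**(n - 4) * n * factorial(n - 1))

assert all(G_sum(n) == G_closed(n) for n in range(5, 40))

# Mode filtering: for F(theta) = sum_{k=1..N} cos^m(theta + 2 pi k/N), the Fourier
# coefficients a_n = (1/pi) * int_0^{2pi} F(t) cos(n t) dt vanish unless N divides n.
N, m = 3, 6

def a(n, samples=4000):
    h = 2 * pi / samples
    F = lambda t: sum(cos(t + 2 * pi * k / N)**m for k in range(1, N + 1))
    return sum(F(i * h) * cos(n * i * h) for i in range(samples)) * h / pi

assert all(abs(a(n)) < 1e-9 for n in range(1, m + 1) if n % N != 0)
assert abs(a(m)) > 1e-2  # the m = 2N mode survives the sum over worlds
```

The rectangle rule is exact here up to rounding, since the integrand is a trigonometric polynomial of low degree.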
For smaller $e'$, $r$ is larger, so $N$ needs to be larger to solve the naturalness problem. However, as long as $e' \gg 10^{-10}$, which is in the permitted region of the current constraints, $r \ll 1$, so the $\mathbb{Z}_N$ scenario protecting the naturalness of the pseudo Goldstone boson $\phi$ always works. Since $V_{\text{cw}}$ receives contributions from both $r$ and $r'$, $N$'s parity matters. For even $N$, the first term in \Eq{eq:cwphi} shows that both $r$ and $r'$ provide a positive contribution, so the leading order correction starts from $r^N$. In contrast, if $N$ is odd, $r^N$ and $r'^N$ have opposite signs, so the quantum correction from $r^N + r'^N$ is reduced. When $r=-r'$ \,(Type-B model), the $(r^N+r'^N) \cos[ N (\phi/f+\pi/2)]$ term cancels exactly, so the leading order correction starts from the $(r^{2N}+r'^{2N})\cos[2N(\phi/f+\pi/2)]$ term. We want to remind our readers that such cancellation happens at all orders because the prefactors of the higher order $\cos[N(\phi/f+\pi/2)]$ terms in \Eq{eq:cwfourier_exact} or \Eq{eq:cwphifield_allorders} are $r^{(\prime)\, N+2j}$, which have the same parity as $r^{(\prime)N}$. This is consistent with the EFT analysis at the beginning of this section. Before closing the discussion in this section, we want to make some extra comments beyond solving the ultralight CP-even scalar's naturalness problem in our concrete situation: Our model, i.e., the $\mathbb{Z}_N$-protected varying kinetic mixing, provides a systematic approach to solving the scalar's naturalness problem for both the linear~(Type-A) and quadratic~(Type-B) scalar-photon couplings. This extends and unifies \cite{Brzeminski:2020uhm} and \cite{Banerjee:2022sqg}, which focus on the linear and quadratic scalar-photon coupling, respectively.
Furthermore, armed with two different methods, i.e., the Fourier transformation and the cosine sum rules discussed in this section and in \App{appx:zn_vcw}, for the first time, one can completely and precisely calculate the pseudo-Goldstone boson's $\mathbb{Z}_N$-invariant Coleman-Weinberg potential previously appearing in \cite{Frieman:1995pm, Hook:2018jle, Brzeminski:2020uhm} to arbitrary orders. We wish such calculations could shed light on the formalism of the $\mathbb{Z}_N$-naturalness itself. \section{Varying Kinetic Mixing From Dirac Gaugino}\label{sec:dirac_gaugino} Motivated by stabilizing the hierarchy between the light scalars and the heavy fermions, we discuss one of the possible supersymmetric extensions of the varying kinetic mixing in the Dirac gaugino model~\cite{Fox:2002bu, Alves:2015kia} with the superpotential \begin{equation}\label{eq:dgugino} \mathcal{W}=\frac{\sqrt{2} \, W_\alpha' W_j^\alpha A_j}{\Lambda_{\text{D}, j}}.\; \end{equation} In \Eq{eq:dgugino}, $W_j$ is the gauge field strength of the SM gauge group $G_{\text{SM}, j}$, where the label $j=1,2,3$ denotes the SM gauge groups $U(1)_Y$, $SU(2)_L$ and $SU(3)_c$, respectively, $W'$ is the gauge field strength of $U(1)_d$, and $A_j$ is the chiral multiplet. The operator in \Eq{eq:dgugino} in hidden sector models was first introduced in~\cite{Polchinski:1982an} and later understood in~\cite{Fox:2002bu} as the \textit{supersoft} operator, in that it provides the Dirac gaugino masses and does not give logarithmically divergent radiative contributions to other soft parameters. Writing $A_j$, $W_j$ and $W'$ in terms of the Taylor expansion in the Grassmann variable $\theta$, we have $A_j\supset \left(S_j+iP_j\right)/\sqrt{2}+\sqrt{2}\theta\tilde{a}_j$, $W_j^\alpha\supset \lambda_j^\alpha+F_j^{\mu\nu}(\sigma_{\mu\nu}\theta)^\alpha + D \,\theta^\alpha$, and $W'_\alpha \supset F'_{\mu\nu}(\sigma^{\mu\nu}\theta)_\alpha+\langle D'\rangle \theta_\alpha$.
Plugging these expanded fields into \Eq{eq:dgugino}, first integrating over $\theta^2$ and then integrating out the auxiliary fields, we have the effective Lagrangian \begin{eqnarray}\begin{aligned}\label{eq:superW} \mathcal{L} \supset -\frac{1}{\Lambda_{\text{D}, j}} \tr\left(S_j F_j^{\mu\nu}\right) F_{\mu\nu}'-\frac{1}{\Lambda_{\text{D},j}} \tr\left(P_j F_j^{\mu\nu}\right) \widetilde{F}_{\mu\nu}'-m_{D,j}\tilde{a}_j\lambda_j-2m_{D,j}^2S_j^2+\cdots\;, \end{aligned}\end{eqnarray} where $S_j$~($P_j$) is the CP-even~(CP-odd) scalar, $\lambda_j$ is the gaugino, and $m_{D,j}=\langle D'\rangle/\Lambda_{\text{D}, j}$ is the gaugino's Dirac mass given by the scale of SUSY breaking. One should note that in \Eq{eq:superW}, $S_j$'s mass term $\mathcal{L} \supset -2 m_{D,j}^2 S_j^2$ comes from integrating out the D-term~(on the contrary, $P_j$ has no extra mass contribution). It is supersymmetry that correlates the mass of $S_j$ with the mass of the gaugino $\lambda_j$. Consequently, $S_j$ is pushed to the heavier mass range given that the gaugino mass is highly constrained: According to \cite{DELPHI:2003uqw,ATLAS:2021yqv,CMS:2020bfa,ATLAS:2020syg}, the LHC has already excluded electroweakino and gluino masses below $\mathcal{O}(100\,\text{GeV})$ and $\mathcal{O}(\text{TeV})$, respectively. Unlike in the previous discussion, in the Dirac gaugino model $S_j$ cannot be the ultralight scalar for which the misalignment mechanism provides a natural way to open the portal at early times but gradually close it at late times. Even so, it is still possible to realize a temporary period of $\langle S_j \rangle\neq 0$ in the early universe through a two-step phase transition, also referred to as the VEV Flip-Flop in some specific dark matter models~\cite{Baker:2016xzo,Baker:2018vos,Baker:2017zwx}. The concrete model building and the phenomenology are beyond the scope of this paper.
In the $j=1$ case, the first term in \Eq{eq:superW} containing the CP-even scalar $S_1$ corresponds to the kinetic mixing portal between $U(1)_Y$ and $U(1)_d$, which is determined by $S_1$'s VEV. The second term in \Eq{eq:superW} containing $P_1$~(axion) leads to the dark axion portal, which is investigated in~\cite{Kaneta:2016wvf, Kaneta:2017wfh, Choi:2018mvk, Choi:2019jwx, Hook:2019hdk, Hook:2021ous, Arias:2020tzl, Deniverville:2020rbv, Ge:2021cjz, Domcke:2021yuz, Gutierrez:2021gol}. There are several major differences between the kinetic mixing portal $\mathcal{L} \supset -S_1 F^{\mu \nu} F'_{\mu \nu}/\Lambda_{\text{D}, 1}$ and the dark axion portal $\mathcal{L} \supset -P_1 F^{\mu \nu} \widetilde{F}'_{\mu \nu}/\Lambda_{\text{D},1}$: 1.~In the Dirac gaugino model, $S_1$'s mass is correlated with the $\lambda_1^\alpha$ mass, which is pushed to the $\mathcal{O}(100\,\text{GeV})$ scale by the LHC, while $P_1$'s shift symmetry protects its arbitrarily small bare mass. 2.~The VEV of $P_1$ does not play a direct physical role because it only contributes to a total derivative term in the Lagrangian~($P_1$'s time or spatial derivative still has nontrivial physical effects nonetheless). In the $j=2,3$ cases, if $\langle S_j^a \rangle \neq 0$, the first term in \Eq{eq:superW} would mix the non-Abelian gauge field with the dark photon such that the non-Abelian gauge symmetry is broken. Referred to as non-Abelian kinetic mixing~\cite{Arkani-Hamed:2008hhe, Arkani-Hamed:2008kxc, Chen:2009ab, Barello:2015bhq, Arguelles:2016ney, Fuyuto:2019vfe, Gherghetta:2019coi}, such constant mixing models are highly constrained by collider experiments. However, in the high-temperature environment of the early Universe, a large non-Abelian kinetic mixing can possibly be realized for non-Abelian vector dark matter production and other intriguing phenomena. We leave the detailed discussion to future work.
\section{Other Cosmologically Varying Portals}\label{sec:other_portal} Let us begin with the general form of the cosmologically varying portals through which the dark and the visible sectors are connected. To be more generic, we write them as \begin{equation}\label{eq:genportal} \mathcal{L} \supset \frac{\phi}{\Lambda^{d-4}} \,\mathcal{O}_\text{DS} \mathcal{O}_\text{SM} , \,\,\text{where $d = d_\text{SM} + d_\text{DS} + 1$}. \end{equation} In \Eq{eq:genportal}, $\mathcal{O}_\text{SM}$ and $\mathcal{O}_\text{DS}$ are the operators of the visible sector and the dark sector, respectively, $d_\text{SM}$ and $d_\text{DS}$ denote the dimensions of these two operators, and $d$ is the dimension of the time-varying portal. To simplify the notation of \Eq{eq:genportal}, we drop the (spacetime, spin, flavor, \ldots)~indices of $\mathcal{O}_\text{SM}$ and $\mathcal{O}_\text{DS}$, whose contraction makes the varying portal a singlet. For simplicity, we only keep the linear form of the CP-even scalar $\phi$, even though in the UV theory, non-linearity may appear, as we have seen in \Eq{eq:Min_eps}. Based on the EFT, we know that when the effective operator \Eq{eq:genportal} is introduced, operators containing only $\phi$ and $\mathcal{O}_\text{SM}$ also appear because the symmetry does not forbid them. The co-appearance of the effective operator shown in \Eq{eq:genportal} and the scalar-SM coupling provides an excellent chance to test these kinds of models with the experiments detecting the portal itself and the ones measuring the scalar-SM coupling. In the rest of this section, we will give some specific examples of the varying portals and show how these minimal extensions illuminate the dark matter model building. Let us briefly review the varying kinetic mixing portal in the EFT language.
After choosing \begin{eqnarray}\begin{aligned} \text{$\mathcal{O}_{\text{SM},\mu \nu} = F_{\mu \nu}$ \, and \, $\mathcal{O}_{\text{DS}, \mu \nu} = F'_{\mu \nu}$,} \end{aligned}\end{eqnarray} \Eq{eq:genportal} goes back to the operator $\mathcal{L} \supset \phi \, F_{\mu \nu} F'^{\mu \nu}/\Lambda$ discussed before. Through this operator, the dark photon dark matter can be produced without violating the stringent constraints as shown in Sec.~\ref{sec:dpdm_fi}. Since the spacetime indices of $F_{\mu \nu}$ need to be contracted, the lowest order operator of the scalar-SM coupling is $\phi \,F_{\mu \nu} F^{\mu \nu}$. If there is an exact $\mathbb{Z}_2$ symmetry invariant under the dark charge conjugation $\phi \rightarrow -\phi$, $F'_{\mu \nu} \rightarrow - F'_{\mu \nu}$, $\phi F_{\mu \nu} F^{\mu \nu}$ is forbidden, so the lowest order operator of the scalar-SM coupling becomes $\phi^2 \,F_{\mu \nu} F^{\mu \nu}$. The table-top experiments testing the $\alpha_\text{em}$ variation and the equivalence principle violation can be used to test this model because of the existence of $\phi F_{\mu \nu} F^{\mu \nu}$ or $\phi^2 F_{\mu \nu} F^{\mu \nu}$. In other situations where $\mathcal{O}_\text{SM}$ is invariant under arbitrary global and gauge transformations, there are no more indices to contract, so the lowest order operator of the scalar-SM coupling is $\phi \,\mathcal{O}_\text{SM}$ or $\phi^2 \,\mathcal{O}_\text{SM}$ depending on whether the exact $\mathbb{Z}_2$ symmetry, i.e., the invariance under $\phi \rightarrow -\phi$, $\mathcal{O}_\text{DS} \rightarrow -\mathcal{O}_\text{DS}$ transformation, exists or not. One typical example is that \begin{eqnarray}\begin{aligned} \text{$\mathcal{O}_\text{SM} = \abs{H}^2$ \,\, and \,\, $\mathcal{O}_\text{DS} = s^2$}, \end{aligned}\end{eqnarray} where $s$ is a scalar singlet in the dark sector. 
Here, $\mathcal{L} \supset \lambda_s \mathcal{O}_\text{DS} \mathcal{O}_\text{SM} = \lambda_s s^2 \abs{H}^2$ is well-known as the singlet-scalar Higgs portal~(SHP), through which the dark matter $s$ reaches today's relic abundance~(the dominant channels are $s s \rightarrow f^- f^+, W^- W^+, ZZ, h h, \cdots$, where $f$ refers to the SM fermions)~\cite{Silveira:1985rk, McDonald:1993ex,Burgess:2000yq}. Besides, because this Lagrangian is invariant under the $\mathbb{Z}_2$ transformation $s \rightarrow -s$, $s$ is stable. Although the SHP provides such a simple and effective way to generate the scalar dark matter in accordance with the CMB measurement~($\Omega_s h^2 \simeq 0.12$), in the mass range $m_s \lesssim 1\,\text{TeV}$, most of its parameter space except the narrow window of the resonance~($m_s \simeq m_h/2$) is excluded by the Higgs invisible decay $h \rightarrow s s$~\cite{ATLAS:2015gvj, CMS:2016dhk}, the dark matter direct detection experiments~(LUX, XENON1T, PandaX-II) and the indirect detections~(AMS, Fermi)~\cite{Escudero:2016gzx, Casas:2017jjg, Hardy:2018bph, Curtin:2021alk}. By introducing the time-varying SHP \begin{equation} \mathcal{L} \supset \frac{\phi}{\Lambda}s^2|H|^2 \end{equation} as the minimal extension, the parameter space is widely extended. There, $\phi$'s misalignment supports $s$'s freezeout in the early universe; $\phi$ then starts the damped oscillation such that $\langle \sigma v\rangle_{ss} \propto (T/T_\text{osc})^3$ when $T \lesssim T_\text{osc}$. For this model, there are two types of experiments: 1.~The future direct detection experiments~(XENONnT, LUX-ZEPLIN, DarkSide-20k, DARWIN), which rely on the partially opened SHP today. 2.~The table-top experiments and the astrophysical observations testing the $\phi \abs{H}^2$ or $\phi^2 \abs{H}^2$ operator~(see \cite{Piazza:2010ye, Graham:2015ifn, Arvanitaki:2016fyj, Batell:2022qvr} for the detections of the $\phi \abs{H}^2$ operator).
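The $(T/T_\text{osc})^3$ scaling follows from the oscillation envelope alone: the amplitude of $\phi$ redshifts as $a^{-3/2} \propto T^{3/2}$ during the damped oscillation (ignoring $g_*$ factors), and the annihilation rate scales as the square of the effective coupling $\phi/\Lambda$. A minimal sketch, with all dimensionful values as arbitrary placeholders:

```python
# Arbitrary units; only the scaling with temperature matters here.
T_osc, phi_osc, Lam = 1.0, 1.0, 100.0

def phi_envelope(T):
    # Amplitude of the damped oscillation: frozen above T_osc, ~ T^{3/2} below.
    return phi_osc * (T / T_osc)**1.5 if T < T_osc else phi_osc

def sigma_v_rel(T):
    # <sigma v> proportional to the squared effective SHP coupling phi(T)/Lam,
    # normalized to its value at T_osc.
    return (phi_envelope(T) / Lam)**2 / (phi_envelope(T_osc) / Lam)**2

for T in (0.5, 0.1, 0.01):
    assert abs(sigma_v_rel(T) - (T / T_osc)**3) < 1e-12
```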
Another example of the singlet $\mathcal{O}_\text{SM}$ is \begin{eqnarray}\begin{aligned} \text{$\mathcal{O}_\text{SM} = \bar{Q}H q_R \,\,\,\text{or}\,\,\, \bar{L}H e_R$ \,\, and \,\, $\mathcal{O}_\text{DS} = \tilde{s}$}. \end{aligned}\end{eqnarray} In this model, $\tilde{s}$ is the scalar mediator, which interacts with the dark matter particle $\chi$ via the CP-odd coupling $\mathcal{L} \supset i y_{\chi} \tilde{s} \, {\overline{\chi}} \gamma^5 \chi$~(here we denote the scalar mediator by ``$\sim$'' to distinguish it from the scalar dark matter discussed in the previous paragraph). Given the constant portal $\mathcal{L} \supset \mathcal{O}_\text{DS} \mathcal{O}_\text{SM}/\Lambda = y_{f} \tilde{s} \bar{f}f$ where $y_{f} = v_h/\sqrt{2} \Lambda$, $\chi$ reaches today's relic abundance through the freezeout channel $\chi^-\chi^+ \rightarrow f^- f^+$, whose cross section is $\langle \sigma v \rangle_{\chi^- \chi^+} \sim y_{f}^2 y_{\chi}^2 m_\chi^2/m^4_{\tilde{s}}$. Here, $v_h$ is the Higgs VEV, and $\Lambda$ is the effective scale of the constant portal. Since $\chi^-\chi^+ \rightarrow f^- f^+$ is s-wave~(the leading order of $\langle \sigma v \rangle_{\chi^- \chi^+}$ is independent of the dark matter relative velocity), the region $m_\chi \lesssim 10\,\text{GeV}$ is excluded by the measurement of the CMB anisotropy~\cite{Planck:2018vyg}. To produce lighter but CMB-friendly dark matter, we introduce the operator \begin{eqnarray}\begin{aligned} \mathcal{L} \supset \frac{\phi}{\Lambda^2} \tilde{s} \bar{L} H e_R \,\,\, \text{or} \,\,\,\, \frac{\phi}{\Lambda^2} \tilde{s} \bar{Q} H q_R \end{aligned}\end{eqnarray} as the minimal extension.
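As a bookkeeping check, the power of $\Lambda$ accompanying each varying portal in this section follows from $d = d_\text{SM} + d_\text{DS} + 1$ in \Eq{eq:genportal}, with the coefficient carrying $\Lambda^{4-d}$:

```python
from fractions import Fraction

half = Fraction(1, 2)
# (portal, d_SM, d_DS, power of 1/Lambda written in the text)
portals = [
    ("phi F F'",            2,                        2, 1),  # varying kinetic mixing
    ("phi s^2 |H|^2",       2,                        2, 1),  # varying singlet-scalar Higgs portal
    ("phi s~ (Lbar H e_R)", 3 * half + 1 + 3 * half,  1, 2),  # varying fermion portal
]

for name, d_sm, d_ds, power in portals:
    d = d_sm + d_ds + 1            # dimension of the full varying portal operator
    assert d - 4 == power, name    # matches phi O_DS O_SM / Lambda^{d-4}
```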
Through the $\phi$-dependent Yukawa coupling $\mathcal{L} \supset y_{f}(\phi) \tilde{s} \bar{f} f$ where $y_{f}(\phi) = \phi v_h /\sqrt{2} \Lambda^2$ and the aforementioned CP-odd Yukawa coupling $\mathcal{L} \supset i y_{\chi} \tilde{s}\, {\overline{\chi}} \gamma^5 \chi$, the dark matter lighter than $10\,\text{GeV}$ can reach today's relic abundance without violating the CMB annihilation bound as long as $\phi$'s damped oscillation begins earlier than the CMB stage~($T_\text{CMB} \sim \text{eV}$). For this model, there are two kinds of experiments: 1.~The next-generation CMB experiments~(such as CMB-S4~\cite{Madhavacheril:2013cna, Wu:2014hta, Green:2018pmd, Abazajian:2019eic, Dvorkin:2022bsc}) and the future direct and indirect detection experiments. 2.~The table-top experiments and astrophysical observations testing the variation of the SM fermion masses~($\phi \bar{f} f$ or $\phi^2 \bar{f} f$ operator)~\cite{Arvanitaki:2014faa, Arvanitaki:2016fyj, Arvanitaki:2015iga, Hees:2016gop, Kalaydzhyan:2017jtv, Hees:2018fpg, Banerjee:2022sqg, Kaplan:2022lmz}. \newpage \section{Conclusion} \label{sec:conclusion} In this work, we study the scalar-controlled kinetic mixing varying with the ultralight CP-even scalar's evolution during the cosmological history. Via such a dynamical portal, $\text{keV}-\text{MeV}$ dark photon dark matter is frozen-in, free from the late-universe exclusions, and has the kinetic mixing set by the scalar mass. More generally, this model can serve as the minimal solution to the tension between the portal dark matter's early-time production and late-time constraints. Furthermore, the scalar-photon coupling inevitably emerges from the UV physics, modifying the universe's thermal history, varying the fine-structure constant, and violating the equivalence principle. These phenomena provide excellent targets to test our model via the ultralight scalar experiments in the mass range $10^{-33}\text{eV} \lesssim m_0 \ll \text{eV}$.
In the meantime, the scalar mass also determines the target values of the kinetic mixing for $\text{keV}-\text{MeV}$ dark photon dark matter experiments. In the rest of the paper, we study the $\mathbb{Z}_N$-protected naturalness of the ultralight CP-even scalar, the varying kinetic mixing from the Dirac gaugino model, and the dark matter via other cosmologically varying portals. We begin our study by constructing the minimal model of the scalar-controlled kinetic mixing arising from the heavy doubly charged messengers. In our model, the $\mathbb{Z}_2$ symmetry under the dark charge conjugation is imposed to eliminate the constant kinetic mixing but preserve the varying kinetic mixing. From the perspective of symmetry, the early-time opening and the late-time closing of the kinetic mixing portal result from the system's inverse $\mathbb{Z}_2$-breaking. Importantly, in such a setup, the scalar-photon coupling also arises from the top-down theory. To classify the models, we name the theory with the approximate $\mathbb{Z}_2$ symmetry the type-A model and the theory with the exact $\mathbb{Z}_2$ symmetry the type-B model. Given this, the type-A model has a linear scalar-photon coupling, and the type-B model has a quadratic scalar-photon coupling. Then we systematically categorize the scalar's evolution under the combined effects of the thermal correction, the bare potential, and the expansion of the universe. For the type-A and type-B models, there exist experimentally allowed regions for the thermal misalignment, where the scalar acquires a nonzero displacement insensitive to the initial condition. We also find the thermal postponement of the onset of oscillation in the type-B model when the scalar initially stays in the wrong vacuum, which is the major feature of the trapped misalignment. Such a delay of the oscillation enhances the experimental signals.
Afterward, we study the $\text{keV}-\text{MeV}$ dark photon dark matter production via the varying kinetic mixing, where the kinetic mixing is $\epsilon_\text{FI} \sim 10^{-12}$ for the dark photon freeze-in when $T\sim m_{A'}$ and decreases obeying the power law $T^{3/2}$ when the scalar undergoes the late-universe damped oscillation. Therefore, the late-time kinetic mixing set by the scalar mass is free from the constraints from stellar energy loss, dark matter direct detection, and the dark photon dark matter decay. In addition, the kinetic mixing during freeze-in provides perfect experimental benchmarks for the ultralight scalar experiments in the mass range $10^{-33}\text{eV}\lesssim m_0 \ll \text{eV}$, including the tests of the fine-structure constant variation and the equivalence principle violation. Since a smaller dark gauge coupling leads to stronger signals, most of the $e'$ space in the type-A model and the region $e' \lesssim 10^{-6}$ in the type-B model can be tested by proposed experiments such as clock comparisons and cold-atom interferometry. For a smaller scalar relic fraction among dark matter, the constraints from the fine-structure constant variation remain the same, while the constraints from the equivalence principle experiments become more stringent. We also study in detail the $\mathbb{Z}_N$-protection of the ultralight CP-even scalar's naturalness in the varying kinetic mixing model. There, we embed the aforementioned minimal model into a model containing $N$ worlds, such that the broken $U(1)$ shift symmetry is approximately restored. Generally, the lowest order potential in this setup comes from the $\Phi^N/\Lambda^{N-4}$ operator, which exponentially suppresses the scalar's quantum correction from the heavy messengers. As long as $N \sim 10$, the quantum correction of the scalar mass can be much smaller than $10^{-33}\text{eV}$.
For the type-B model with odd $N$, the lowest order operator is $\Phi^{2N}/\Lambda^{2N-4}$ because of the cancellation from the exact $\mathbb{Z}_2$. Moreover, we expand the $\mathbb{Z}_N$ Coleman-Weinberg potential to all orders utilizing the Fourier transformation and the cosine sum rules. In the paper's final part, we discuss the Dirac gaugino model simultaneously inducing the varying kinetic mixing and the dark axion portal. We also briefly discuss the dark matter models via other cosmologically varying portals. More specifically, we discuss the freeze-out of the singlet scalar dark matter lighter than $1\,\text{TeV}$ via the varying Higgs portal and the CMB-friendly fermionic dark matter lighter than $10\,\text{GeV}$ via the varying fermion portal. \section*{Acknowledgments} We want to thank Cédric Delaunay, Joshua T. Ruderman, Hyungjin Kim, Raffaele Tito D'Agnolo, Pablo Quílez, Peizhi Du, Huangyu Xiao for their helpful discussions and comments on the draft. We also want to thank Neal Weiner, John March-Russell, Ken Van Tilburg, Hongwan Liu, Asher Berlin, Isabel Garcia Garcia, Junwu Huang, Gustavo Marques Tavares, Andrea Mitridate, Erwin H. Tanin, Xuheng Luo for useful discussions. DL acknowledges funding from the French Programme d'investissements d'avenir through the Enigmass Labex. XG is supported by James Arthur Graduate Associate~(JAGA) Fellowship.
\section{Introduction} Given a metric space $(M,d)$, we consider a finite subset $X$ of $M$. We denote by $l_{\operatorname{opt}}(X)$ the minimal length of a path which visits all points of $X$. Now assume that $T$ is a total order on $M$. For a finite subset $X \subset M$ we consider the restriction of the order $T$ to $X$, and enumerate the points of $X$ accordingly: $$ x_1 \leq_T x_2 \leq_T x_3 \leq_T \dots \leq_T x_{k} $$ where $k = \#X$. Here and in the sequel $\#X$ denotes the cardinality of the set $X$. We denote by $l_T(X)$ the length of the corresponding path $$ l_T(X) := d(x_1, x_2) + d(x_2, x_3) + \dots + d(x_{k-1}, x_k). $$ Given an ordered metric space $\operatorname{M}=(M, d,T)$ containing at least two points and $k\ge 1$, we define the {\it order ratio function} $$ \operatorname{OR}_{M,T}(k) := \sup_{X\subset M | 2 \le \# X \le k+1} \frac{l_T(X)}{l_{\operatorname{opt}}(X)}. $$ See Definition \ref{def:OR} and Section \ref{sec:firstexamples} for more on this definition. We say that a metric space $M$ is {\it uniformly discrete} if there exists $\delta>0$ such that for all pairs of points $x \ne y$ the distance between $x$ and $y$ is at least $\delta$. The travelling salesman problem aims to construct a cycle of minimal total length that visits each of $k$ given points. Bartholdi and Platzman introduced the idea to order all points of a metric space and then, given a $k$-point subset, visit its points in the corresponding order \cite{bartholdiplatzman82}, \cite{bartholdiplatzman89}. Such an approach is called the {\it universal travelling salesman problem}. (One of the motivations of Bartholdi and Platzman was that this approach works fast for subsets of the two-dimensional plane.) Their argument implies a logarithmic upper bound for the function $\operatorname{OR}_{\mathbb{R}^2}(k)$. For some metric spaces the function $\operatorname{OR}$ is even better.
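For small sets, both $l_T(X)$ and $l_{\operatorname{opt}}(X)$ can be computed by brute force, which is convenient for experimenting with orders (a sketch; the search for $l_{\operatorname{opt}}$ is exponential in $\#X$):

```python
from itertools import permutations

def path_len(pts, d):
    # Length of the path visiting pts in the given order
    return sum(d(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def l_opt(X, d):
    # Minimal length of an (open) path visiting all points of X
    return min(path_len(list(p), d) for p in permutations(X))

def l_T(X, d, key):
    # Length of the path visiting X in the order T induced by `key`
    return path_len(sorted(X, key=key), d)

dist = lambda x, y: abs(x - y)
X = [0, 3, 1, 2]
# On the real line the natural order is optimal, so the order ratio is 1.
assert l_T(X, dist, key=lambda x: x) == l_opt(X, dist) == 3
```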
Our result in \cite{ErschlerMitrofanov1} (Thm B) shows that the best possible situation, when $\operatorname{OR}(k)$ is bounded from above by a constant, holds true for uniformly discrete $\delta$-hyperbolic spaces. While in the original paper of Bartholdi and Platzman it was suggested that such efficient behavior (with bounded ratio) holds for $\mathbb{Z}^d$, it is known that this is not the case (unless $d=1$). The initial argument of Bartholdi and Platzman shows the existence of an order with $\operatorname{OR}(k) \le {\rm Const} \ln k$. Orders with at most logarithmic $\operatorname{OR}$ also exist on spaces with doubling, by the result of Jia et al.\ \cite{jiadoubling}. It seems natural to conjecture that this upper bound is optimal for $\mathbb{Z}^d$, $d\ge 2$. A question of existence of an order on $\mathbb{Z}^2$ that violates a logarithmic lower bound is asked in \cite{Christodoulou}. Under the additional assumption that the order is hierarchical, this logarithmic bound (for a unit square, and hence for $\mathbb{Z}^2$) is proven in \cite{eades2} and \cite{eades}, Thm.~1, Section 3.3. Observe that among finitely generated groups only those that are virtually nilpotent satisfy the doubling property (see Section \ref{subsection:doubling} for definitions and background). We show that many groups of exponential growth also admit a logarithmic upper bound for an appropriate choice of the order, as we explain below. The notion of Assouad-Nagata dimension goes back to \cite{Nagata58}, and the term {\it Nagata dimension} is used in \cite{Assouad82}. This notion provides a control of both local and global properties of the space. If the space is uniformly discrete, only large scales (and thus global properties) matter, and this notion coincides with {\it linearly controlled metric dimension} (also called linearly controlled asymptotic dimension). For uniformly discrete spaces $AN$-dimension is a quasi-isometric invariant.
We recall the definition of Assouad-Nagata dimension in Section \ref{section:finitedimension}. Here we mention that the following spaces have finite $AN$-dimension: spaces with the doubling property \cite{LangSchlichenmaier}, wreath products of groups of linear growth with finite ones \cite{BrodskiyDydakLang}, polycyclic groups \cite{HigesPeng}, trees (see e.g. \cite{roe}), hyperbolic groups (and $\delta$-hyperbolic spaces that are ``doubling in the small'' \cite{LangSchlichenmaier}), and more generally, graphs and groups admitting a quasi-isometric imbedding into finite products of trees (for the imbedding of hyperbolic groups see \cite{BuyaloDranishnikovSchroeder}), such as Coxeter groups \cite{DranishnikovJanuszkiewicz} (and more generally virtually special groups, see \cite{HaglundWise08}). Groups relatively hyperbolic with respect to groups of finite Assouad-Nagata dimension also have finite Assouad-Nagata dimension \cite{Hume17}. Not necessarily finitely presented $C'(1/6)$ small cancellation groups have $AN$-dimension at most $2$ \cite{Sledd1}. In fact, the argument of \cite{jiadoubling} for $\operatorname{OR}(k)$ in the case of doubling spaces can be adapted to prove a logarithmic upper bound more generally for spaces of finite Assouad-Nagata dimension. In \cite{jiadoubling} the authors study the notion of $(\sigma, I)$-partitioning schemes, the existence of which can be shown to be equivalent to the finiteness of Assouad-Nagata dimension. In the proof of the theorem below we argue directly in terms of $AN$-dimension. The second claim of the theorem below provides not only the asymptotic bound on $\operatorname{OR}$, but also discusses the value of the order breakpoint $\operatorname{Br}(M,T)$, which is defined as the smallest integer $s$ such that $\operatorname{OR}_{M,T}(s) < s$ (Definition \ref{def:mins}). For some basic examples of $\operatorname{Br}$, see also \cite{ErschlerMitrofanov1}, where this notion was introduced.
\begin{namedthm*}{Theorem I}(=Thm \ref{thm:nagata}) If $M$ is a metric space of finite Assouad-Nagata dimension $m$ with $m$-dimension control function at most $Kr$, then there exists an order $T$ such that for all $k\ge 2$ \begin{enumerate} \item $\operatorname{OR}_{M,T}(k) \le C \ln k$, where a positive constant $C$ can be chosen depending on $m$ and $K$ only. \item $\operatorname{Br}(M,T) \leqslant 2m + 2$. Moreover, the elongations of snakes on $2m+3$ points are bounded by some constant depending on $m$ and $K$ only. \end{enumerate} \end{namedthm*} This theorem gives upper estimates for the invariants of $M$: $\operatorname{OR}_M(k) = O(\log k)$ and $\operatorname{Br}(M) \leqslant 2m+2$. ``Snakes'' mentioned in the second claim of the theorem are order-increasing sequences of points which oscillate between neighborhoods of two points (we discuss the notion of snakes in more detail in Section \ref{sec:firstexamples}). To prove Theorem \ref{thm:nagata} above, we can assume that $M$ is uniformly discrete. We choose an appropriate constant $\lambda$, determined by the linearity constant $K$ of the control function (a possible choice is $\lambda= 4K$), and consider coverings from the definition of Assouad-Nagata dimension with $r= \lambda^n$, $n \in \mathbb{N}$. In Lemma \ref{lem:finitedimensionimpliesANfiltration} we modify this family of coverings to enforce a certain hierarchical structure on the sets of these coverings. This hierarchical structure guarantees that the sets of the coverings satisfy the assumption of Lemma \ref{le:convex}, which provides a sufficient condition for the existence of an order such that given sets in a space are ``convex'' with respect to this order. We discuss this notion at the end of Section \ref{sec:firstexamples}. In Lemma \ref{lem:ANfiltrationCorollary} we show that such orders satisfy the upper bound for the order ratio function in the claim of the theorem.
\bigskip The worst possible case for solving the universal travelling salesman problem is given by spaces with linear $\operatorname{OR}(k)$. An example of a sequence of finite graphs with linear $\operatorname{OR}(k)$ is constructed in Gorodezky et al \cite{gorodezkyetal}; see also Bhalgat et al \cite{bhalgatetal}, who show that a sequence of Ramanujan graphs of large girth and of bounded diameter-by-girth ratio has this property. The above mentioned paper considered both the dependence on the number of required points $k$ and the dependence on the cardinality $n$ of a finite graph. As we have mentioned, in this paper we consider the dependence on $k$, a question that makes sense both for finite and infinite spaces. Since a result of Osajda \cite{osajda} allows one to imbed subsequences of graphs with large girth into Cayley graphs, combining his result with that of \cite{gorodezkyetal} one can conclude that there exist groups with linear $\operatorname{OR}(k)$. While the above mentioned argument uses both a remarkable construction of Ramanujan graphs of large girth and a recent graphical small cancellation technique, we prove that there is a large class of metric spaces and sequences of metric spaces with infinite order breakpoint (thus showing that $\operatorname{OR}(k)=k$, and not only that this function is linear). Easy examples of groups of this kind can be obtained from Thm III. Before stating this theorem, we formulate Thm II, the first claim of which gives a sufficient spectral condition for infinite $\operatorname{Br}$. Informally speaking, infinite $\operatorname{Br}$ means that whatever orders we choose on the vertices of our graphs, there are arbitrarily large subsets of vertices on which this order is extremely far from optimal for the travelling salesman problem. The second claim of Thm II provides additional information about snakes for a sequence of expander graphs.
\begin{namedthm*}{Theorem II}(=(1) and (3) of Thm \ref{thm:expanders}) Let $\Gamma_i$ be a sequence of finite graphs of degree $d_i\ge 3$ on $n_i$ vertices. Let $T_i$ be an order on $\Gamma_i$. \begin{enumerate} \item Assume that the normalized spectral gap $\delta_i= (\lambda^{\Gamma_i}_1 - \lambda^{\Gamma_i}_2 ) /d_i$ satisfies $$ 1/\delta_i = o \left( \frac{\log_{d_i}n_i}{\ln \log_{d_i}n_i} \right). $$ Then the order breakpoint of the sequence $(\Gamma_i, T_i)$ is infinite. \item For a sequence of bounded degree expander graphs the following holds. If $d_i=d\ge 3$ and $\delta=\inf_i \delta_i >0$, then for each $k$ the graphs $(\Gamma_i, T_i)$ admit snakes on $k$ points of bounded width $C_k$ and of length at least $\log_{d-1}n_i -C'_k$, for some $C_k, C'_k>0$. \end{enumerate} \end{namedthm*} In Theorem II above we have formulated claims (1) and (3) of Thm \ref{thm:expanders}. In claim (2) of Thm \ref{thm:expanders} we will also give an estimate on the possible length and width of a snake, in terms of the spectral gap of a graph. In general, if the decay of $\delta_i$ is quick, one cannot determine whether $\operatorname{Br}$ is infinite or not, given a sequence of graphs and knowing their cardinality $n_i$, degree $d_i$ and spectral gap $\delta_i$; see Remark \ref{rem:nospectral}. For Claim (2) of the Theorem, the assumption of expansion cannot be weakened if we want a criterion in terms of $n_i$, $d_i$ and $\delta_i$, see Remark \ref{rem:closetoexpandersnobs}. One can ask whether the condition $1/\delta_i = o\left((\log_{d_i}n_i)^2\right)$ (which is sufficient for infinite $AN$-dimension of bounded degree graphs by \cite{humeetal}, see Remark \ref{rem:ANpoincare}) is sufficient for the infiniteness of $\operatorname{Br}$. If $1/\delta_i \sim (\log_{d_i}{n_i})^2$ then $\operatorname{Br}$ can be finite (see Remark \ref{rem:wreath}).
Since a sequence of expander graphs can have diameter close to $\log_{d-1}n_i$, the lower bound on the length of the snake in claim (2) cannot be significantly improved (see also Remark \ref{rem:sardari}). In Section \ref{sec:infinitegirth} we provide another sufficient condition, of a different nature, for infinite $\operatorname{Br}$. It can be deduced from the Lusternik-Schnirelmann theorem that any order on an $\varepsilon$-net of a sphere $S^k$ admits snakes on $k+2$ points, alternating between $\varepsilon$-neighborhoods of antipodal points (see Lemma \ref{lem:sphere}). Combining it with the control of the order ratio function for weak imbeddings of cubes we get \begin{namedthm*}{Theorem III}(= Corollary \ref{cor:cubes}) If a metric space $M$ weakly contains arbitrarily large cubes of dimension $d$, then for any order $T$ on $M$ it holds $$ \operatorname{OR}_{M,T}(d) = d. $$ \noindent In particular, if a metric space $M$ weakly contains a sequence of arbitrarily large cubes, then for any order $T$ on $M$ the order breakpoint of $(M, T)$ is infinite. \end{namedthm*} We recall again that the informal meaning of infinite order breakpoint is that whatever order we choose on $M$, this order behaves extremely badly (for the travelling salesman problem) on some arbitrarily large subsets. In Section \ref{sec:infinitegirth} we will define for metric spaces the property of containing a sequence of arbitrarily large cubes. Here we mention that the class of such spaces includes spaces admitting uniform imbeddings of $\mathbb{Z}^d$ (or $\mathbb{Z}_+^d$) for all $d$. In particular, this condition holds for any finitely generated group $G$ that contains the direct sum $\mathbb{Z}^\infty$ as a subgroup. These include many classes of solvable groups and many other classes of amenable groups. Here we mention that Grigorchuk groups (and moreover all known constructions of groups of intermediate growth) admit uniform imbeddings of $\mathbb{Z}_+^d$, for all $d$.
These also include many examples of non-amenable groups, as well as some groups such as the Thompson group (for which the famous amenability question remains open). Further examples of spaces that weakly contain sequences of arbitrarily large cubes are $\mathbb{Z}^2\wr \mathbb{Z}/2\mathbb{Z}$ and, more generally, $B \wr A$, where $B$ is an infinite group of non-linear growth and $A$ is any finite or infinite group containing at least two elements (this statement is inspired by the argument in \cite{BrodskiyDydakLang}; see Lemma \ref{lem:wcubeswr}, where we study weak imbeddings of cubes into wreath products). We also mention that imbeddings of cubes appear naturally as lower estimates for $AN$-dimension in various classes of groups and spaces, see \cites{higes, Sledd}. In view of Theorems I and III above we ask \begin{question*} Let $M$ be a metric space of infinite Assouad-Nagata dimension. Is it true that the order breakpoint of $M$ is infinite? \end{question*} We recall that there are various known examples of groups of finite asymptotic dimension which have infinite $AN$-dimension (see Nowak \cite{nowak}, Brodskiy, Dydak and Lang \cite{BrodskiyDydakLang}). As we have already mentioned, some of them satisfy the assumption of our Theorem III. In view of Thm I, if the answer to the question is positive, this would provide an equivalent characterization of spaces of finite $AN$-dimension. Taking into account the above mentioned examples, for such a possible characterization it is essential to speak about $AN$-dimension and not about asymptotic dimension. Observe also that if the answer to the above mentioned question is positive, in view of Theorem \ref{thm:nagata} this would provide a positive answer to the following \begin{question*}[Gap problem for existence of orders] Let $M$ be a metric space.
Is it true that either for any order $T$ on $M$ and all $k\ge 1$ it holds $\operatorname{OR}_{M,T}(k) = k$, or there exists an order $T$ such that for all $k\ge 2$ it holds $ \operatorname{OR}_{M,T}(k) \le {\rm Const} \ln k$? \end{question*} Given a metric space, one can formulate a stronger Gap problem, which describes the behavior of all orders (rather than searching for an order on the space). Our next result below solves this problem for spaces with the doubling property. As we have already mentioned, the argument of Bartholdi and Platzman for the Euclidean plane (and the generalizations of their argument to spaces with doubling) provides orders with a logarithmic upper bound for the function $\operatorname{OR}(k)$. This is in contrast with the lexicographic order on $\mathbb{R}^2$ ($(x_1, y_1) < (x_2, y_2)$ if $x_1 <x_2$, or $x_1=x_2$ and $y_1<y_2$), for which it is easy to see that $\operatorname{OR}(s)=s$ for all $s$. Our theorem below shows that any order on a space with the doubling condition (in particular, any order on $\mathbb{R}^d$) satisfies the same dichotomy: \begin{namedthm*}{Theorem IV}[Gap for order ratio functions on spaces with doubling] (=Thm \ref{thm:gap}) Let $M$ be a metric space with doubling and $T$ be an order on $M$. Then either for all $s$ it holds $$ \operatorname{OR}_{M,T}(s)= s, $$ or there exists $C$ (depending only on the doubling constant of $M$, on $s$ and on $\varepsilon$ such that $\operatorname{OR}_{M,T}(s)\le s- \varepsilon$) such that for all $k \ge 2$ $$ \operatorname{OR}_{M,T}(k) \le C \ln k. $$ \end{namedthm*} {\bf Acknowledgements. } We would like to thank David Hume for discussions about $AN$-dimension, Tatiana Nagnibeda for explanations and references on spectra of finite graphs, and Karim Adiprasito for helpful conversations. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No.725773).
The work of the second named author is also supported by the Russian Science Foundation (grant no.17-11-01377). \section{Preliminaries and basic properties}\label{sec:firstexamples} Below we recall some basic properties of the order ratio function and of the order breakpoint, discussed in \cite{ErschlerMitrofanov1}. \begin{definition}\label{def:OR} [Order ratio function] Given an ordered metric space $\operatorname{M}=(M, d,T)$ containing at least two points and $k\ge 1$, we define the {\it order ratio function} $$ \operatorname{OR}_{M,T}(k) := \sup_{X\subset M | 2 \le \# X \le k+1} \frac{l_T(X)}{l_{\operatorname{opt}}(X)}. $$ If $M$ consists of a single point, then the supremum in the definition above is taken over an empty set. We use in this case the convention that $\operatorname{OR}_{M,T}(k)=1$ for all $k\ge 1$. Given an (unordered) metric space $(M,d)$, we also define the {\it order ratio function} as $$ \operatorname{OR}_{M}(k) := \inf_{T} \operatorname{OR}_{M,T}(k). $$ \end{definition} Denote by $L$ the diameter of a finite subset $X \subset M$. It is clear that $L \leqslant l_{\operatorname{opt}}(X) \leqslant l_T(X)$ and $l_T(X) \leqslant L(\#X - 1)$. Hence $1 \leq \operatorname{OR}_{M,T}(k) \leq k$ for any $M$, $T$ and $k \ge 1$. \begin{definition} \label{def:mins} [Order breakpoint] Let $M$ be a metric space, containing at least two points, and let $T$ be an order on $M$. We say that the order breakpoint $\operatorname{Br}(M,T) = s$ if $s$ is the smallest integer such that $\operatorname{OR}_{M,T}(s) < s$. If no such $s$ exists, we say that $\operatorname{Br}(M,T)=\infty$. A possible convention for a one-point space $M$ is to define $\operatorname{Br}(M,T)=1$. Given an (unordered) metric space $M$, we define $\operatorname{Br}(M)$ as the minimum over all orders $T$ on $M$: $$ \operatorname{Br}(M) = \min_{T} \operatorname{Br}(M,T).
$$ \end{definition} \noindent The property $\operatorname{Br}(M,T) \geq k$ means that on some $k$-point subsets of $M$ the order $T$ behaves extremely non-optimally as a universal order for the universal travelling salesman problem. We will now describe such non-optimal subsets. Consider a sequence of $s$ points $x_1 <_T x_2 <_T \dots <_T x_s$, let $a$ be the diameter of this set and let $b$ be the maximal distance $d(x_i, x_j)$ where $i$ and $j$ are of the same parity. Then $a$ is called the {\it length} of the {\it snake} $(x_i)$, $b$ is called its {\it width} and the ratio $a/b$ is the {\it elongation} of the snake $(x_i)$. We say that the elongation is $\infty$ if $b = 0$. If a set $X$ consists of $k+1$ points and the value of $l_T(X)$ is close to $kl_{\operatorname{opt}}(X)$, then it can be shown that any two points with indices of the same parity are relatively close to each other, compared with the diameter of $X$. Hence $\operatorname{OR}_{M,T}(s) = s$ if and only if in $(M,T)$ there exist snakes of arbitrarily large elongation on $s+1$ points (\cite{ErschlerMitrofanov1}, Lemma 2.11). If we allow a metric to assume the value $\infty$, then a natural way to define $\operatorname{OR}$ is to consider the ratio $l_T(X)/l_{\operatorname{opt}}(X)$ for subsets $X$ of finite diameter. In this setting we can speak about $\operatorname{OR}$ and $\operatorname{Br}$ of disjoint unions of metric spaces. \begin{wrapfigure}{l}{0.2\textwidth} \includegraphics[width=0.2\textwidth]{pictures/orderedcircle.png} \end{wrapfigure} We mention a simple example (which we discussed already in \cite{ErschlerMitrofanov1}, Lemma 3.1), when the metric space $M$ is a circle $S^1$ with its inner metric. It is not difficult to show that $\operatorname{OR}_{S^1}(k)=2$ for all $k \geqslant 2$.
A natural order (a clockwise order) provides the upper estimate $\operatorname{OR}_{S^1}(k)\leq 2$, and the lower bound follows from \begin{lemma}\label{le:examplecircle} For any order $T$ on $S^1$ and any $\varepsilon > 0$ there exist two antipodal points $x,y$ of the circle and a snake $z_1<_T z_2 <_T z_3$ on $3$ points such that $z_1$ and $z_3$ belong to the $\varepsilon$-neighborhood of $x$ and $z_2$ is in the $\varepsilon$-neighborhood of $y$. \end{lemma} \begin{proof} We call a point $x\in S^1$ {\it $T$-small} if $x<_T \Bar{x}$, where $\Bar{x}$ denotes the antipodal point to $x$. Otherwise we call it {\it $T$-large}. Note that $x$ is $T$-small if and only if $\Bar{x}$ is $T$-large, so $(S^1,T)$ contains both $T$-small and $T$-large points, and we can find two points $s_1,s_2$ of different types that are $\varepsilon$-close to each other. Assume $s_1 <_T \Bar{s_1}$ and $s_2 >_T \Bar{s_2}$. If $\Bar{s_1}<_T s_2$, we take the snake $s_1 <_T \Bar{s_1} <_T s_2$. Otherwise, we take the snake $\Bar{s_2} <_T s_2 <_T \Bar{s_1}$. \end{proof} We recall the notion of quasi-isometric imbedding. Given metric spaces $N$ and $M$, a map $\alpha$ from $N$ to $M$ is a {\it quasi-isometric imbedding} if there exist $C_1$, $C_2$ such that for any $x_1, x_2 \in N$ it holds \[ \frac{1}{C_1}(d_N(x_1, x_2)) -C_2 \le d_M(\alpha(x_1), \alpha(x_2)) \le C_1(d_N(x_1, x_2)) +C_2. \] If $M$ is at bounded distance from $\alpha(N)$, this map is called a {\it quasi-isometry}, and the spaces $N$ and $M$ are called {\it quasi-isometric.} If $\alpha$ is bijective and $C_2=0$, then the spaces are said to be {\it bi-Lipschitz equivalent}. We also recall the weaker notion of uniform imbeddings.
Given metric spaces $N$ and $M$, a map $\alpha$ from $N$ to $M$ is a {\it uniform imbedding} (also called {\it coarse imbedding}) if there exist two non-decreasing functions $\rho_1$, $\rho_2: [0, +\infty) \to [0, + \infty)$, with $\lim_{r\to \infty} \rho_1(r) = \infty$, such that \[ \rho_1 (d_N(x_1, x_2)) \leqslant d_M(\alpha(x_1), \alpha(x_2)) \le \rho_2 (d_N(x_1, x_2)). \] Given $\varepsilon, \delta > 0$, a subset $U$ of a metric space $M$ is said to be an $(\varepsilon, \delta)${\it -net} if any point of $M$ is at distance at most $\varepsilon$ from $U$ and the distance between any two distinct points of $U$ is at least $\delta$. It is easy to show that for any metric space $M$ and any $\varepsilon > 0$ there exists an $(\varepsilon, \varepsilon)$-net of $M$. \begin{definition} We say that two functions $f_1, f_2 :\mathbb{N} \to \mathbb{R}$ are equivalent up to a multiplicative constant if there exist $K_1,K_2>0$ such that $f_1(r) \le K_1 f_2(r)$ and $f_2(r) \le K_2 f_1(r)$ for all $r\ge 1$. \end{definition} It is easy to observe (\cite{ErschlerMitrofanov1}, Lemma 2.8) that if two metric spaces $M$ and $N$ are quasi-isometric, $M'$ is an $(\varepsilon, \delta)$-net of $M$ and $N'$ is an $(\varepsilon, \delta)$-net of $N$, then the order ratio functions $\operatorname{OR}_{M'}$ and $\operatorname{OR}_{N'}$ are equivalent up to a multiplicative constant. In particular, the asymptotic class of $\operatorname{OR}$ is a quasi-isometric invariant of uniformly discrete metric spaces. Moreover, if a metric space $N$ is quasi-isometrically imbedded into a space $M$, then $\operatorname{OR}_M$ is asymptotically larger than or equal to $\operatorname{OR}_N$. We mention in this context that given a (not necessarily injective) map $\phi: N \to M$ and an order $T_M$ on $M$, one can construct an order $T_N$ on $N$ such that $\phi(x)<_{T_M}\phi(y)$ always implies $x<_{T_N}y$. We call any such order a {\it pullback of} $T_M$.
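A pullback order is easy to realize concretely when the order $T_M$ is given by a sort key: one sorts the points of $N$ by the keys of their images, breaking ties arbitrarily. The sketch below is our own illustration (the map and the key are hypothetical examples, not from the paper); any tie-breaking rule yields a valid pullback.

```python
def pullback_order(points_N, phi, key_M):
    # Return the points of N listed in a pullback of the order T_M:
    # phi(x) <_{T_M} phi(y) implies that x is listed before y.
    # Python's sort is stable, so points with equal images keep their
    # input order; any other tie-break would be equally valid.
    return sorted(points_N, key=lambda x: key_M(phi(x)))

# Example: pull back the natural order of the half-line along phi(x) = |x|.
order = pullback_order([-2, -1, 0, 1, 2], phi=abs, key_M=lambda m: m)
```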
While the asymptotic class of the order ratio function is invariant under quasi-isometries, specific values of $\operatorname{OR}$ can change under quasi-isometries, as can be easily seen already from examples of finite spaces. But for any $s$ the equality $\operatorname{OR}(s) = s$ is preserved under quasi-isometries of uniformly discrete spaces (\cite{ErschlerMitrofanov1}, Lemma 2.12). To show this, one can consider a pullback and argue that snakes of large elongation map under quasi-isometries to snakes of large elongation. Moreover, a uniformly discrete metric space cannot be quasi-isometrically imbedded into a metric space with smaller $\operatorname{Br}$. From the definition it is clear that if $M$ has at least two points then $\operatorname{Br}(M,T) \ge 2$ for any order $T$. A result of M.~Kapovich \cite{kapovich} about $(R,\varepsilon)$-tripods can be used to characterise uniformly discrete metric spaces with $\operatorname{Br} \le 2$ (see Lemma 4.1 in \cite{ErschlerMitrofanov1}): such spaces are either bounded or quasi-isometric to a ray or to a line. It is not difficult to see that virtually free groups have $\operatorname{Br} \le 3$, and Theorem $A$ of the above mentioned paper shows the converse: a finitely generated group $G$ admits an order $T$ with $\operatorname{Br}(G,T) \le 3$ if and only if $G$ is virtually free. While the property $\operatorname{OR}_M(s) = s$ is not preserved under uniform imbeddings of uniformly discrete spaces, there is a similar property which is inherited by such imbeddings. Below we recall this property, studied in \cite{ErschlerMitrofanov1}. We say that an ordered metric space $(M, T)$ admits {\it a sequence of snakes of bounded width} on $s$ points if there is a sequence of snakes on $s$ points of uniformly bounded width and with diameters tending to infinity.
It is not difficult to see that if $N$ admits a uniform imbedding into a metric space $M$, and if $N$ admits a sequence of snakes of bounded width on $s$ points for any order on this space, then the same happens for $M$; in particular, $\operatorname{OR}_M(s-1) = s-1$ (\cite{ErschlerMitrofanov1}, Lemma 2.14). Using a compactness argument one obtains the following. \begin{lemma}\label{rem:goedel}[\cite{ErschlerMitrofanov1}, Lemma 2.17] Let $M$ be a metric space. Consider a function $F:\mathbb{Z}_+ \to \mathbb{R}_+$ and assume that for any finite subset $M' \subset M$ there exists an order $T'$ satisfying $\operatorname{OR}_{M', T'} (k) \le F(k)$ for all $k\ge 1$. Then there exists an order $T$ on $M$ satisfying $\operatorname{OR}_{M, T} (k) \le F(k)$ for all $k\ge 1$. \end{lemma} Using an induction on the cardinality of $\mathcal{A}$ in the case when this family is finite, and a compactness argument for the general case, one also obtains \begin{lemma} \label{le:convex}[\cite{ErschlerMitrofanov1}, Lemma 2.17] Let $\mathcal{A}$ be a family of subsets of $M$ such that for any two sets $A_1, A_2 \in \mathcal{A}$ either their intersection is empty, or one of the sets is contained in the other. Then there exists an order $T$ on $M$ such that for any set $V\in \mathcal{A}$ and any three points $x,y,z\in M$ such that $x <_T y <_T z$, the condition $x, z\in V$ implies $y\in V$. \end{lemma} We say that a subset $V$ of $M$ is {\it convex with respect to the order} $T$ if $V$ and $T$ satisfy the property described in Lemma \ref{le:convex}. This elementary lemma can be used to construct orders with given properties in various spaces. We will use it several times, in the proof of Theorem I about spaces of finite $AN$-dimension and in that of Theorem IV about spaces with doubling. \section{Gap for the order ratio functions.
Nilpotent groups and spaces with doubling.} \label{sec:doubling} We have mentioned that a crucial observation for using orders for the travelling salesman problem is the result of Bartholdi and Platzman, who showed the existence of orders (related to space-filling curves) such that for any finite set $X$ the order provides a tour of length at most ${\rm Const} \ln \#X$ times the length of an optimal one. Consider two orders on the square $K = [0;1]^2$. The first order $T_{BP}$ is the one considered by Bartholdi and Platzman in \cite{bartholdiplatzman82}, and their estimate (combined with the lower bound obtained later in \cite{bertsimasgrigni}) implies that $\operatorname{OR}_{K,T_{BP}}(k)\sim \log k$. The second order $T_{lex}$ is the lexicographical order. We define this order by saying $(x_1,y_1) <_{T_{lex}} (x_2, y_2)$ if $x_1 < x_2$, or $x_1 = x_2$ and $y_1 < y_2$. It is clear that $(K,T_{lex})$ contains snakes of diameter $1$ and arbitrarily small width, and hence $\operatorname{OR}_{K,T_{lex}}(k) = k$ for all $k$.
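The snakes exhibited by $T_{lex}$ can be checked numerically. The sketch below is our own illustration (the number of points and the spacing are arbitrary): it builds five points alternating between the bottom and the top side of the square, which the lexicographic order lists left to right, and computes the diameter, the width (maximal distance between points in positions of the same parity) and the elongation.

```python
from itertools import combinations
from math import dist

def snake_stats(pts):
    # pts: points listed in increasing order T.
    diam = max(dist(p, q) for p, q in combinations(pts, 2))
    # Width: maximal distance between two points whose positions
    # in the order have the same parity.
    width = max(dist(pts[i], pts[j])
                for i in range(len(pts))
                for j in range(i + 2, len(pts), 2))
    return diam, width, diam / width  # diameter, width, elongation

eps = 1e-3
# The lexicographic order lists these points left to right, i.e. they
# alternate between the bottom side (y = 0) and the top side (y = 1).
snake = sorted((i * eps, i % 2) for i in range(5))
diam, width, elong = snake_stats(snake)
```

Shrinking `eps` makes the width arbitrarily small while the diameter stays close to $1$, so the elongations are unbounded; by the criterion recalled in Section 2 this gives $\operatorname{OR}_{K,T_{lex}}(k) = k$.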
\medskip \newcommand{\bpone}[3] {\begin{scope}[shift={(#1,#2)},rotate=#3] \draw[ thick, rounded corners, ->] (0.3,0.1)--(0.9,0.1)-- (0.9,0.9) -- (1.1,0.9) -- (1.1,0.1) -- (1.7,0.1); \end{scope}} \newcommand{\bptwo}[3] {\begin{scope}[shift={(#1,#2)},rotate=#3] \bpone{0}{0}{0} \bpone{2}{0}{90} \bpone{2}{2}{270} \bpone{2}{0}{0} \draw[ thick, rounded corners](1.7,0.1) -- (1.9, 0.1) -- (1.9, 0.3); \draw[ thick, rounded corners](2.3,0.1) -- (2.1, 0.1) -- (2.1, 0.3); \draw[ thick, rounded corners](1.9,1.7) -- (1.9, 1.9) -- (2.1, 1.9) -- (2.1, 1.7); \end{scope}} \newcommand{\bpthree}[3] {\begin{scope}[shift={(#1,#2)},rotate=#3] \bptwo{0}{0}{0} \bptwo{4}{0}{90} \bptwo{4}{4}{270} \bptwo{4}{0}{0} \draw[ thick, rounded corners](3.7,0.1) -- (3.9, 0.1) -- (3.9, 0.3); \draw[ thick, rounded corners](4.3,0.1) -- (4.1, 0.1) -- (4.1, 0.3); \draw[ thick, rounded corners](3.9,3.7) -- (3.9, 3.9) -- (4.1, 3.9) -- (4.1, 3.7); \end{scope}} \resizebox{13cm}{6cm}{ \begin{tikzpicture}[scale = 1.5] \bpthree{0}{0}{0} \bpthree{8}{0}{90} \bpthree{8}{8}{180} \bpthree{0}{8}{-90} \draw[ thick, rounded corners](7.7,0.1) -- (7.9, 0.1) -- (7.9, 0.3); \draw[ thick, rounded corners](7.9,7.7) -- (7.9, 7.9) -- (7.7, 7.9); \draw[ thick, rounded corners](0.3,7.9) -- (0.1, 7.9) -- (0.1, 7.3); \begin{scope}[shift = {(10,0)}] \foreach \x in {0,...,16}{ \draw[thick, ->] (\x/2,4) -- (\x/2,8) -- (\x/2+0.5,0) -- (\x/2 + 0.5, 4); } \end{scope} \end{tikzpicture} } \medskip Our result below shows that neither on $K$ nor on any other space with the doubling property is it possible to construct an order $T$ such that $\operatorname{OR}_{K,T}(k)$ is sub-linear but grows faster than a logarithmic function of $k$. \subsection{Doubling spaces}\label{subsection:doubling} \begin{definition} A metric space $(M, d)$ is said to be {\it doubling} if there exists a doubling constant $D>0$ such that for any $x\in M$ and $r>0$ the ball $B(x,2r)$ of radius $2r$ can be covered by at most $D$ balls of radius $r$.
\end{definition} \noindent In Subsection \ref{subs:gap_proof} we will prove the following \begin{thm}[Gap for order ratio functions on doubling spaces] \label{thm:gap} Let $X$ be a doubling metric space and $T$ be an order on $X$. Then either for all $k$ it holds $$ \operatorname{OR}_{X,T}(k)= k, $$ or there exists $C$, depending only on the doubling constant of $X$, on the order breakpoint $s$ of $(X,T)$ and on $\varepsilon$ such that $\operatorname{OR}_{X,T}(s)\le s- \varepsilon$, such that for all $k\ge 2$ $$ \operatorname{OR}_{X,T}(k) \le C \ln k. $$ \end{thm} \noindent The spaces $\mathbb{R}^d$ and their subsets are examples of doubling spaces. It is easy to see that any group of polynomial growth has the doubling property for infinitely many $r$. In fact, it is known that any group of polynomial growth has the doubling property for all $r$, but the known proofs use the fact that these groups are virtually nilpotent (by the Polynomial Growth theorem \cite{gromovpolynomial}). Below we recall the definition and state these remarks more precisely. \begin{definition} The {\it growth function} of a finitely generated group $G$ with respect to a finite generating set $S$ is the number of elements of $G$ of word length at most $n$: $$ v_{G,S}(n) = \# B_{G,S}(n) = \# \{ g: l_{G,S} (g) \le n\}. $$ \end{definition} \noindent Suppose that a group $G$ has polynomial growth, that is, $v_{G,S} (n) \le C_S n^d$ for some (and hence for every) finite generating set $S$. It is clear directly from the definition that there exist infinitely many $n$ such that $v_{G,S} (2 n ) \le C v_{G,S}(n)$ for some constant $C$. Using the Polynomial Growth Theorem, we can claim that $G$ is virtually nilpotent, and in this case it holds that $ C_1 n^d \le v_{G,S}(n) \le C_2 n^d $ for some integer $d$ (and some positive constants $C_1$ and $C_2$, depending on $S$; for this basic fact about growth see e.g. Chapter 4 of \cite{MannHowgroupsgrow}).
In particular, there exists a positive constant $C$, depending on $S$, such that $$ v_{G,S} (2 n ) \le C v_{G,S}(n) $$ \noindent for all $n$. \begin{rem} Let $G$ be a finitely generated group, $R>0$. Let $U$ be an $(R,R)$-net of the ball of radius $2R$ in $G$. Consider the balls of radius $R$ centered at the points of the net. The family of these balls covers the ball of radius $2R$, and the number of the balls is at most $v(2.5 R)/v(R/2)$. In particular, if $v(2.5R)/v(R/2) \le v(2^3 R/2)/ v(R/2) \le C^3$, then $G$ has the doubling property at scale $R$ with constant $C^3$. \end{rem} \subsection{Proof of Theorem \ref{thm:gap}}\label{subs:gap_proof} We assume that there exists $s_0$ such that $\operatorname{OR}_{M,T}(s_0 ) = s_0 - \varepsilon$ for some $\varepsilon > 0$. Observe that there exists a number $\lambda$, depending only on $s_0$ and $\varepsilon$, such that any snake on $s_0+1$ points in $(M,T)$ has elongation (the ratio of its diameter and its width) at most $\lambda$. \begin{lemma}\label{le:gap1} Let $M$ be a doubling space with doubling constant $D$, let $T$ be an order on $M$ and let $\lambda>1$ be such that all snakes on $s_0+1$ points in $(M,T)$ have elongations at most $\lambda$. Then there exists a constant $N(D, s_0, \lambda)$ such that whatever $x\in M$ and $R > 0$ we choose, there are no sequences $x_1 <_T x_2 <_T \dots <_T x_{2N}$ satisfying \begin{enumerate} \item $x_i \in B(x,4R)$ for all $i$, \item $d(x_{2i-1},x_{2i}) \geqslant R$ for all $i$. \end{enumerate} \end{lemma} \begin{proof} Arguing by induction on $k$ and using the definition of doubling spaces, we observe that for any $k$, any $x$ and any $R>0$ the ball $B(x,4R)$ can be covered by $D^{k+3}$ balls of radius at most $2^{-k-1}R$, and thus of diameter not greater than $2^{-k}R$. Choose $k$ such that $2^k > \lambda$. Put $N= s_0D^{2(k+3)}+1$.
Assume that there exist points $x_1 <_T x_2 <_T \dots <_T x_{2N}$ satisfying properties (1) and (2) in the formulation of the Lemma. Cover the ball $B(x,4R)$ by $m \leqslant D^{k+3}$ balls $B_1, \dots, B_m$ of diameters at most $4R/2^{k+2} < \frac{R}{\lambda}$. We have $N$ pairs $(x_{2i-1},x_{2i})$ of points at distance at least $R$. Each pair belongs to a union $B_j \cup B_l$ for some $j$ and $l$. For given $j$ and $l$, there are at most $s_0$ such pairs in $B_j \cup B_l$, since otherwise we would find a snake on $s_0+1$ points with diameter at least $R$ and width smaller than $R/\lambda$, in contradiction with the assumption of the lemma. Hence the total number of pairs is at most $s_0 m^2$, which contradicts the choice of $N$. \end{proof} \begin{claim}\label{le:dividepath} Let $M$ be a metric space and let $(x_0,x_1,\dots,x_n)$ be a sequence of points such that \[\sum_{i=0}^{n-1} d(x_i, x_{i+1}) = L.\] Let $L = a_1 + a_2 + \dots + a_m$, with all $a_i > 0$. Then the set $\{x_i\}$ can be partitioned into $m$ subsets $A_1,\dots, A_m$ such that $\operatorname{diam}(A_j) \leqslant a_j$ for all $j$. \end{claim} \begin{proof} Induction on $m$. The base of induction: if $m=1$, then the claim is straightforward. Induction step: find the maximal $l\ge 0$ such that $\sum_{i=0}^{l-1} d(x_i, x_{i+1}) \leqslant a_1$ and put $A_1:= \{x_0, x_1,\dots,x_l\}$. Then $\operatorname{diam}(A_1) \leqslant a_1$, while the maximality of $l$ implies $\sum_{i=l+1}^{n-1} d(x_i, x_{i+1}) \leqslant a_2 + \dots + a_m$, so the induction hypothesis applies to the sequence $(x_{l+1},\dots,x_n)$. \end{proof} \begin{lemma}\label{le:gap3} Let $M$ be a doubling space with doubling constant $D$ and let $T$ be an order on $M$ such that all snakes on $s_0+1$ points in $(M,T)$ have elongation at most $\lambda$. Then there exists a constant $N(D, s_0, \lambda)$ such that the following holds. Take a sequence of points $(x_i)$ in $M$, $1 \le i \le m$, satisfying $$ x_1 <_T x_2 <_T \dots <_T x_m. $$ Assume that the minimal length of a path visiting these points is $L$.
Then for all $k\ge 0$ we have \[ \# \left\{ i \in \{1,\dots, m-1\} \,\Big|\, \frac{L}{2^{k+1}} \le d(x_i, x_{i+1}) \le \frac{L}{2^{k}}\right\} \leqslant N 2^k. \] \end{lemma} \begin{proof} Applying Claim \ref{le:dividepath} for $m=2^k$ and $a_1=a_2 = \dots =a_{m}=L/2^k$ we conclude that there exist $2^{k}$ points $y_1,\dots, y_{2^{k}}$ such that \[\{x_i\}_{i=1}^m \subset B\left(y_1, \frac{L}{2^{k}}\right) \bigcup \dots \bigcup B\left(y_{2^{k}}, \frac{L}{2^{k}}\right). \] Then for each $i$ such that $d(x_i,x_{i+1}) \leqslant \frac{L}{2^{k}}$ we can find $y_j$ such that \[ x_i, x_{i+1} \in B\left(y_j, \frac{L}{2^{k-1}}\right). \] Put $R:=2^{-k-1}L$ and apply Lemma \ref{le:gap1} for each $y_j$ and this $R$. We can take the same $N(D, s_0, \lambda)$ as in Lemma \ref{le:gap1}. \end{proof} Now we are ready to complete the proof of Theorem \ref{thm:gap}. Consider a finite subset $X\subset M$ of cardinality $n$ and enumerate its points $x_1 <_T \dots <_T x_n$. Let $2^{k-1} < n \leqslant 2^k$. As we have already mentioned, there exists $\lambda$, depending on $s_0$ and the $\varepsilon>0$ satisfying $\operatorname{OR}(s_0)= s_0 -\varepsilon$, such that all snakes on $s_0+1$ points in $(M,T)$ have elongation at most $\lambda$. Consider $N=N(D, s_0,\lambda)$ from the claim of Lemma \ref{le:gap3}. Let us show that \[ l_{T}(X) = d(x_1,x_2) + \dots + d(x_{n-1}, x_n) \le N l_{\operatorname{opt}}(X)(k+1). \] Denote $l_{\operatorname{opt}}(X)$ by $L$. Enumerate the distances $d(x_i,x_{i+1})$ in a non-increasing order: $d_1 \geqslant d_2 \geqslant \dots \geqslant d_{n-1}$. For any $i$, $i = 0,\dots, k-1$, let $D_i$ be the number of distances between $L/2^{i+1}$ and $L/2^i$, and denote by $D_k$ the number of distances that are smaller than $2^{-k}L$. Observe that $D_i \leq N2^i$ for any $i$. For $D_k$ this is clear because $D_k \le n \leq N2^k$, and for all other $D_i$ it follows from Lemma \ref{le:gap3}.
Hence, the sum of the distances in each group is not greater than $NL$, and the total sum $l_T(X)$ is not greater than $NL(k+1) = Nl_{\operatorname{opt}}(X)(\lceil \log_2(n) \rceil+1)$. \subsection{Examples of orders with finite order breakpoint.} \label{subsec:euclid} Consider some order $T$ on $X = \mathbb{R}^d$ and suppose that we want to prove that $\operatorname{OR}_{X,T}(k) = O(\log(k))$. From Theorem \ref{thm:gap} it follows that it is enough to find $s$ such that $\operatorname{OR}(s) < s$, i.e.\ such that the elongations of snakes on $s+1$ points in $(X,T)$ are bounded. Here we give an example of a simple order with this property; together with Theorem \ref{thm:gap} this gives an alternative proof that $\mathbb{R}^d$ admits an order with $\operatorname{OR}(k) = O(\log k)$. From Lemma \ref{rem:goedel} it follows that it is enough to provide an order on the unit cube $K = [0;1)^d$. For each point $x = (x_1,\dots,x_d)\in K$ we construct an infinite binary sequence $a_0a_1\dots$ as follows: for any $i\in \{1,2,\dots,d\}$ and $j\in \{0,1,\dots\}$ we put \[ a_{jd + i}= \text{the $j$-th digit in the binary representation of $x_i$.} \] Let $T$ be the lexicographical order on the corresponding sequences. \begin{lemma}\label{lem:expmins} In $(K, T)$ there are no snakes on $2^{d+1}+1$ points with elongation greater than $8\sqrt{d}$. \end{lemma} \begin{proof} For any $j$ the cube $K$ is divided into $2^{jd}$ cubes of size $2^{-j}\times\dots\times2^{-j}$; we call these cubes the {\it base cubes of level $j$}. Note that each base cube is convex with respect to $T$. Suppose $s>2^{d+1}$ and there is a snake $x_1<_T\dots <_T x_s$ of width $b$ and diameter $a > 8\sqrt{d}\, b$. Find $k$ such that $2^{-k-1} \leqslant b < 2^{-k}$. Points of the snake with odd indices can be covered by no more than $2^d$ base cubes of level $k$, and the same holds for the points with even indices.
\begin{tikzpicture} \draw[help lines,step = 0.5] (-0.6, -0.6) grid (3.6, 2.6); \draw[thick] (-0.6, -0.6) grid (3.6, 2.6); \draw[pattern=north west lines] (0,1) rectangle (1,2); \draw[pattern=north west lines] (2.5,-0.5) rectangle (3.5,0.5); \draw[thick, ->] (0.4,1.3) -- (3.25,0.2) -- (0.75, 1.4)--(3.35, -0.4) -- (0.75,1.75); \end{tikzpicture} By the pigeonhole principle there are two points of the snake with odd indices in one base cube $A$ of level $k$. Since $A$ is convex with respect to $T$, the cube $A$ must then also contain a point of the snake with an even index lying between these two points in the order $T$. But since $a - 2b > \sqrt{d}\, 2^{-k}$, none of the cubes of level $k$ can contain both odd-indexed and even-indexed points of the snake, a contradiction. \end{proof} We will see later, in the second claim of Lemma \ref{lem:ANfiltrationCorollary}, that the constant in Lemma \ref{lem:expmins} above is far from optimal: under the assumptions of Lemma \ref{lem:expmins}, the number of points of snakes of large elongation is in fact at most linear in $d$. \section{Spaces of finite Assouad-Nagata dimension} \label{section:finitedimension} We have already mentioned that doubling spaces admit orders with at most logarithmic order ratio function $\operatorname{OR}_{M,T}(k)$, and that among finitely generated groups only groups of polynomial growth (virtually nilpotent ones) satisfy the doubling property. In this section we are going to show that any space of finite Assouad-Nagata dimension admits an order with $\operatorname{OR}(k) \le C \ln k$ and provide an upper bound for the order breakpoint. We have mentioned that any group of polynomial growth and many groups of exponential growth have finite Assouad-Nagata dimension (wreath products with a base group of linear growth \cite{BrodskiyDydakLang}, Coxeter groups \cite{DranishnikovJanuszkiewicz}, relatively hyperbolic groups \cite{Hume17}, polycyclic groups \cite{HigesPeng}; see also further classes of groups mentioned in the introduction).
Assouad-Nagata dimension can be viewed as a stronger (linearly controlled) version of the notion of asymptotic dimension, which was introduced later by Gromov in \cite{gromovasymptotic}. \begin{definition} \label{def:nagata} The Assouad-Nagata dimension of a metric space $M$ is defined as follows. Let $M$ be a metric space, let $m$ be a positive integer and let $K>0$. Suppose that for all $r>0$ there is a family $U_r$ of subsets whose union is equal to $M$ and such that the following holds: \begin{enumerate} \item The diameter of any set $A \in U_r$ is at most $Kr$. \item Any ball of radius $\le r$ is contained in some set of $U_r$. \item Any point of $M$ belongs to at most $m+1$ sets of $U_r$. \end{enumerate} We say in this case that the Assouad-Nagata dimension of $M$ is at most $m$. To shorten the notation, we will also call Assouad-Nagata dimension $AN$-dimension. \end{definition} If we weaken our assumption on the covering and, instead of Property $(1)$, require only that there exists some bound (not necessarily linear) for the diameters of the sets $A\in U_r$ in terms of $r$, we obtain the definition of spaces of finite {\it asymptotic dimension}. Such an upper bound for the diameters of $A \in U_r$ (in the definition of $AN$-dimension, this bound is $\le Kr$) is called an {\it $m$-dimension control function}. \begin{definition}[Equivalent definition of $AN$-dimension] \label{rem:def2AssouadNagata} Let $M$ be a metric space. Let us say that $M$ has $AN$-dimension at most $m$ if the following holds. There exists $K > 0$ such that for any $r$ there exists a partition $W_r$ of the space $M$ such that all sets from $W_r$ have diameters at most $Kr$, and any ball of radius $r$, centered at some point $x\in M$, intersects at most $m+1$ sets from $W_r$. \end{definition} For the convenience of the reader we explain why the definitions are equivalent (with an appropriate choice of $K$ for each of them). \begin{proof} Suppose that $M$ has $AN$-dimension at most $m$ with respect to the first definition.
To deduce the second definition from the first, consider a covering $U_r$ as in the first definition of Assouad-Nagata dimension. For any subset $A$ from $U_r$, replace this subset by the set $A'$, which we obtain from $A$ by removing the (open) $r$-neighborhood of its complement. To prove that the union of the sets $A'$ is equal to $M$, observe the following. For any point $x$ choose and fix a subset $A$ from $U_r$ containing the ball of radius $r$ centered at $x$, and consider the corresponding $A'$; then $x$ belongs to $A'$. Since every $x$ lies in some such $A'$, the sets $A'$ cover $M$. Now observe that if a ball of radius $r$ centered at $y$ intersects some $A'$, then $y$ belongs to $A$. This implies that any ball of radius $r$ intersects at most $m+1$ among the subsets $A'$. Finally, observe that we can make the sets disjoint by replacing them with appropriate subsets. Conversely, if $M$ admits a partition satisfying the claim of Definition \ref{rem:def2AssouadNagata}, then we can consider the open $r$-neighborhoods of the sets of the partition in this definition. Let $K$ be a constant from Definition \ref{rem:def2AssouadNagata}. It is clear that the obtained sets satisfy the assumptions of Definition \ref{def:nagata} for the same $r$, with the constant of the $m$-dimension control function in the sense of this definition equal to $K'=K+1$. \end{proof} As an example of a space of finite Assouad-Nagata dimension, recall the example of \cite{BrodskiyDydakLang}, which shows that the Assouad-Nagata dimension of $\mathbb{Z} \wr \mathbb{Z}/2\mathbb{Z}$ is one. We recall that the wreath product $A \wr B$ of groups $A$ and $B$ is the semi-direct product of $A$ and $\sum_A B$, where $A$ acts on $\sum_A B$ by shifts. Elements of the wreath product are pairs $(a,f)$, where $a\in A$ and $f:A\to B$ is a function with $f(x)=e_B$ for all but finitely many $x\in A$.
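To make the group law concrete, here is a minimal sketch in Python (our own illustration, not from the paper): an element $(a,f)$ of $\mathbb{Z} \wr \mathbb{Z}/2\mathbb{Z}$ is stored as a pair (shift, finite support of $f$), and the product convention $(a,f)(b,g) = (a+b,\ x \mapsto f(x)+g(x-a))$ is one standard choice and is assumed here.

```python
# Sketch of Z wr Z/2Z. Assumption: an element (a, f) is stored as
# (a, frozenset of positions where f is nontrivial), with the product
# convention (a, f)(b, g) = (a + b, x -> f(x) + g(x - a) mod 2).

def wr_mul(u, v):
    """Multiply two elements of Z wr Z/2Z."""
    a, s = u
    b, t = v
    shifted_t = frozenset(x + a for x in t)  # g translated by the shift a
    return (a + b, s ^ shifted_t)            # sum mod 2 = symmetric difference

e = (0, frozenset())          # identity
gen_a = (1, frozenset())      # generator of the base group Z
gen_b = (0, frozenset({0}))   # the lamp switch at the origin

# Conjugating the switch by gen_a moves the lamp to position 1:
x = wr_mul(wr_mul(gen_a, gen_b), (-1, frozenset()))
```

Conjugation translates the support by the shift, which is exactly the action of $A$ on $\sum_A B$ described above.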
We consider the standard generating set of $A\wr B$, which corresponds to the union of $S_A$ and $S_B$, where $S_A \subset A \subset A\wr B$ and $S_B\subset B$ is embedded into $\sum_A B \subset A\wr B$ by identifying $B$ with the copy of $B$ in $\sum_A B$ indexed by $e_A$. In the example below we consider the word metric on the Cayley graph with respect to this standard generating set (edges are not included). The argument of \cite{BrodskiyDydakLang} uses the fact that the kernel of the map to $\mathbb{Z}$ is zero-dimensional, and then applies a Hurewicz-type theorem for $AN$-dimension of group extensions. In the example below we describe explicitly partitions of the infinite wreath product, as well as of a sequence of finite wreath products. These partitions uniformly satisfy the assumption of Definition \ref{rem:def2AssouadNagata} and show that the spaces have $AN$-dimension equal to one. \begin{example}[Partitions of wreath products]\label{ex:anwreath} \begin{enumerate} \item For each $r>0$ there exists a partition $\mathcal{A}_r$ of $G=\mathbb{Z}\wr \mathbb{Z}/2\mathbb{Z}$ such that the diameter of every set $A \in \mathcal{A}_r$ is at most $6r$ and any ball of radius $r/2$ intersects at most $2$ sets. \item For each $r>0$ there exists a partition $\mathcal{A}^i_r$ of $G=\mathbb{Z}/i\mathbb{Z}\wr \mathbb{Z}/ 2\mathbb{Z}$ such that the diameter of every set $A \in \mathcal{A}^i_r$ is at most $9r$ and any ball of radius $r/2$ intersects at most $2$ sets. \end{enumerate} \end{example} \begin{proof} (1) The proof is reminiscent of a possible argument showing that the asymptotic dimension, as well as the $AN$-dimension, of a free group is $1$ (see e.g. Proposition 9.8 in \cite{roe}). We partition $\mathbb{Z}$ into disjoint intervals of length $r$ and call the corresponding subset $\{(x,f)\mid kr \le x < (k+1)r\}$ the {\it layer} $L_k$. It is clear that if $(x,f)\in L_k$ and $(y,g) \in L_m$ with $|m-k|\ge 2$, then the distance between $(x,f)$ and $(y,g)$ is $\ge r$.
Now we subdivide each layer $L_k$ into sets, saying that $(x,f)$ and $(y,g)$ ($kr \le x,y <(k+1)r$) are in the same set if $f(z)= g(z)$ for any $z \notin [kr-r/2, (k+1)r+r/2]$. Observe that if $(x,f)$ and $(y,g)$ are in the same layer but not in the same set, then the distance between them is $\ge r$, since, starting from a point inside $[kr, (k+1)r]$, one needs to switch the value of the function at (at least) one point of $\mathbb{Z}$ which is either $\le kr -r/2$ or $\ge (k+1)r+r/2$. Observe also that the diameter of any set is at most $6r$, since to go from $(x,f)$ to any point $(y,g)$ in the same set it is sufficient to start at $x$, end at $y$, visit all points of $[kr -r/2, (k+1)r+r/2]$ (since the length of this interval is $2r$, $4r$ steps suffice) and make at most $2r$ switches. \medskip \noindent (2) Similarly one can construct a partition of the finite wreath product $\mathbb{Z}/i\mathbb{Z}\wr \mathbb{Z}/2\mathbb{Z}$. If $r\ge i$, we consider the partition consisting of one set. Now we assume that $r< i$. We subdivide $\mathbb{Z}/i\mathbb{Z}$ into several intervals of length $r$ and possibly one interval of length $\ge r$ and $< 2r$. Each layer $L_k$ corresponding to an interval $[I_k,J_k]$ is subdivided into sets, saying that $(x,f)$ and $(y,g)$ are in the same set if $f$ and $g$ coincide on the complement of the interval $[I_k-r/2, J_k+r/2]$. As before, distinct sets of the same layer are at distance at least $r$, and the sets of the layers $L_k$ and $L_m$, for $|k-m|\ge 2$, are at distance at least $r$. The diameter of each set is at most $3r+6r=9r$. \end{proof} We will return to (2) of the example above in the next section, when we discuss the relation between infinite $\operatorname{Br}$ and spectral properties of a sequence of graphs. In that context we will discuss an order, provided by Theorem \ref{thm:nagata}, for a disjoint union of graphs from (2) of Example \ref{ex:anwreath}.
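The partition from part (1) can be computed explicitly. The following sketch is our own (the helper name `partition_key` and the encoding of $f$ by its finite support are assumptions, and $r$ is taken to be an even positive integer for simplicity): two elements of $\mathbb{Z}\wr \mathbb{Z}/2\mathbb{Z}$ lie in the same set of $\mathcal{A}_r$ exactly when their keys coincide.

```python
def partition_key(x, support, r):
    """Key of the set of the partition A_r containing (x, f).

    x: position in Z; support: finite set of positions where f is nontrivial;
    r: scale (assumed to be an even positive integer).  Same key <=> same
    layer L_k and f, g agree outside the window [k*r - r/2, (k+1)*r + r/2].
    """
    k = x // r                                     # index of the layer L_k
    lo, hi = k * r - r // 2, (k + 1) * r + r // 2  # window around the layer
    return (k, frozenset(z for z in support if z < lo or z > hi))

# (3, {2, 5}) and (1, {1}) differ only inside the window of L_0, so they
# land in the same set for r = 4:
same = partition_key(3, {2, 5}, 4) == partition_key(1, {1}, 4)
```

A ball of radius $r/2$ meets at most two layers and cannot change lamp values outside a single window, which is the covering property claimed in the example.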
\begin{thm} \label{thm:nagata} If $M$ is a metric space of finite Assouad-Nagata dimension $m$ with $m$-dimension control function at most $Kr$, then there exists an order $T$ on $M$ such that \begin{enumerate} \item For all $k\ge 2$ it holds that $\operatorname{OR}_{M,T}(k) \le C \ln k$, where the positive constant $C$ can be chosen depending on $m$ and $K$ only. \item $\operatorname{Br}(M,T) \leqslant 2m + 2$. Moreover, the elongations of snakes in $(M,T)$ on $2m+3$ points are bounded by some constant depending on $m$ and $K$ only. \end{enumerate} \end{thm} In view of Lemma \ref{rem:goedel} it is sufficient to prove the statement of Theorem \ref{thm:nagata} for finite metric spaces. In the lemmas below we will make the more general assumption that $M$ is uniformly discrete. We recall that this means that there exists $c>0$ such that for all $x\ne y$ it holds that $d(x,y) \ge c$. \begin{definition}[$AN$-filtrations] Given a metric space $M$, we say that a sequence of its partitions $\mathcal{V}_j$, $j\in \mathbb{Z}$, is an {\it $AN$-filtration} with coefficients $m$, $\lambda$, $D$, $\delta$, if \begin{enumerate} \item The diameter of any set $A \in \mathcal{V}_j$ is at most $\lambda^{j}D$. \item For any $x\in M$ the ball $B(x, \lambda^{j}\delta)$ can be covered by a union of $m+1$ sets from $\mathcal{V}_j$. \item If $A \in \mathcal{V}_j$ and $B \in \mathcal{V}_{j'}$, $j<j'$, then either the intersection $A \cap B$ is empty, or $A \subseteq B$. \end{enumerate} In the definition above we assume that $m\in \mathbb{Z}_{\geq 0}$, $\lambda, D, \delta \in \mathbb{R}_+$ and $\lambda>1$. \end{definition} In the definition above $j$ takes all integer values. Observe that if the metric space $M$ is uniformly discrete, then for all sufficiently large $j$ the sets in $\mathcal{V}_{-j}$ are one-point sets. \begin{lemma} \label{lem:finitedimensionimpliesANfiltration} Let $M$ be a uniformly discrete metric space.
Assume that the Assouad-Nagata dimension of $M$ is at most $m$ and that $M$ satisfies Definition \ref{rem:def2AssouadNagata} with linear $m$-dimension control function $Kr$. Then $M$ admits an $AN$-filtration with parameters $m$, $\lambda = 4K$, $\delta = \frac{1}{2}$, $D=2K$. \end{lemma} \begin{proof} Since our space is uniformly discrete, there exists $c >0$ such that for any $x_1$, $x_2$ in $M$ it holds that $d(x_1, x_2) \ge c$. Observe that the constant $K$ in the definition of Assouad-Nagata dimension satisfies $K\ge 1$, and in particular $\lambda=4K \geq 4 > 1$. Take the maximal $k_0 \in \mathbb{Z}$ such that $2K \lambda^{k_0} < c$. For all integers $j\le k_0$ define $\mathcal{V}_j$ to be the partition of $M$ into one-point sets. Observe that properties $(1), (2), (3)$ in the definition of $AN$-filtration are verified for $j,j' \le k_0$. Now suppose that we have already constructed $\mathcal{V}_j$ for all $j\le k$, $k\in \mathbb{Z}$, such that the properties of $AN$-filtration are verified for all $j, j'\le k$. We explain how to construct $\mathcal{V}_{k+1}$ so that the properties of $AN$-filtration are verified for all $j, j' \le k+1$. Each set from $\mathcal{V}_{k+1}$ will be a union of some sets from $\mathcal{V}_{k}$. By our assumption on $M$, we know that for any $r$ there exists a partition $W_r$ of $M$ (into disjoint sets) such that 1. The diameters of the sets from $W_r$ are at most $Kr$. 2. Any ball of radius $r$, centered at some point $x\in M$, intersects at most $m+1$ sets from $W_r$. \noindent Consider $r=\lambda^{k+1}$ and the partition $W_{r}$. We will group sets from $\mathcal{V}_{k}$ together using this partition. For any set from $W_r$ there will be one corresponding set in $\mathcal{V}_{k+1}$. For any set $A$ in $\mathcal{V}_k$ we choose one of the subsets of $W_{r}$ with a non-empty intersection with $A$; we denote this subset by $f(A)$.
To give an idea of the size of the sets $A \in \mathcal{V}_k$, we mention that by the induction hypothesis the diameter of $A$ is at most $2K (4K)^k = r/2$. Now define the sets of $\mathcal{V}_{k+1}$ as the unions of those sets $A$ of $\mathcal{V}_k$ that share the same associated subset $f(A)$ (of $W_r$). Since the sets of $\mathcal{V}_k$ are disjoint, it is clear that the sets of $\mathcal{V}_{k+1}$ are also disjoint. It is also clear that for any subset $A \in \mathcal{V}_{k+1}$ and any subset $B \in \mathcal{V}_j$, with $j\le k$, either $B$ is contained in $A$ or they have empty intersection. Now we also observe an upper bound on the diameters of all subsets $A\in\mathcal{V}_{k+1}$, which guarantees that Property (1) in the definition of $AN$-filtration is verified with parameters $D=2K$, $\lambda = 4K$: \[ \sup_{A\in\mathcal{V}_{k+1}}\operatorname{diam}(A) \leqslant \sup_{A\in W_{\lambda^{k+1}}}\operatorname{diam}(A) + 2\sup_{A\in\mathcal{V}_{k}}\operatorname{diam}(A) \leqslant K\lambda^{k+1} + 4K\lambda^{k} \leqslant 2K\lambda^{k+1}. \] Finally, observe that any ball of radius $\lambda^{k+1}$ intersects at most $m+1$ subsets of $W_r$, and hence any ball of radius $\lambda^{k+1} - \sup_{A\in\mathcal{V}_{k}}\operatorname{diam}(A)$ intersects at most $m+1$ subsets of $\mathcal{V}_{k+1}$. Since \[ \lambda^{k+1} - \sup_{A\in\mathcal{V}_{k}}\operatorname{diam}(A) \geqslant \lambda^{k+1} - 2K\lambda^{k} = \frac{1}{2}\lambda^{k+1}, \] we can conclude that any ball of radius $\frac{1}{2}\lambda^{k+1}$ intersects at most $m+1$ subsets of $\mathcal{V}_{k+1}$. Hence Property (2) in the definition of $AN$-filtration is satisfied with parameters $\lambda = 4K$ and $\delta =1/2$. \end{proof} \begin{lemma} \label{lem:ANfiltrationCorollary} Suppose that a metric space $M$ admits an $AN$-filtration $(\mathcal{V}_j)$ with coefficients $m$, $\lambda$, $D$, $\delta$, and that for some order $T$ all the subsets of this $AN$-filtration are convex with respect to $T$.
Then \begin{enumerate} \item $\operatorname{OR}_{M,T}(k) \le C \ln k,$ where $C$ is a constant depending on $m$, $\lambda$, $D$ and $\delta$. \item Elongations of snakes on $2m+3$ points are bounded by some constant $C'$, depending on $m$, $\lambda$, $D$ and $\delta$. \end{enumerate} \end{lemma} \begin{proof} Consider a finite subset $X \subset M$ of cardinality $N+1$ and enumerate its points \[ x_1 <_T \dots <_T x_{N+1}. \] Denote by $L$ the length $l_{\operatorname{opt}}(X)$ of a shortest path visiting all points of $X$. Consider some $j\in \mathbb{Z}$. Observe that the set $X$ can be partitioned into at most $\frac{\lambda^{-j} L}{\delta}+1$ subsets (``parts'') of diameter at most $\lambda^{j}\delta$. In view of Property $(2)$ in the definition of $AN$-filtration, each of these parts can be covered by a union of some $m+1$ sets from $\mathcal{V}_j$. We can therefore conclude that $X$ can be covered by at most $(m+1)(\frac{\lambda^{-j} L}{\delta}+1)$ sets from $\mathcal{V}_j$. \noindent Observe that if $A\in\mathcal{V}_j$, $x_i\in A$, and $d(x_i, x_{i+1}) > \lambda^{j}D$, then $x_{i+1} \notin A$. Since $A$ is convex with respect to $T$, all points $x_{i'}$, $i'>i$, do not belong to $A$. Therefore the number of indices $i$ such that $d(x_i, x_{i+1}) > \lambda^{j}D$ is not greater than the total number of covering sets, that is, not greater than $(m+1)(\frac{\lambda^{-j} L}{\delta}+1)$. \noindent Consider $n$ and $l$ such that $\lambda^{n-1} < N + 1 \leqslant \lambda^n$ and $\lambda^{l-1} < L \leqslant \lambda^l$. Let $D_0$ be the number of indices $i$ such that $\lambda^{-1}L <d(x_i, x_{i+1}) \leq L$. Let $D_1$ be the number of indices $i$ such that $\lambda^{-2}L <d(x_i, x_{i+1}) \leq {\lambda}^{-1}L$, and so on until $D_{n-1}$. Finally, we define $D_n$ as the number of indices $i$ for which $d(x_i, x_{i+1}) \leq \lambda^{-n}L$.
The total length of the path with respect to our order $T$ can be estimated as \[l_{T}(X) = \sum_{i=1}^{N}d(x_{i}, x_{i+1}) \leq L\cdot\sum_{k=0}^nD_k \lambda^{-k}. \] For any $k < n$, $D_k$ is not greater than the number of indices $i$ such that $d(x_i, x_{i+1}) > L\lambda^{-k-1}$. We have $L\lambda^{-k-1} > \lambda^{l - k - 2} > \lambda^{l-k-[\log_{\lambda}D]-3}D$. Hence, we can estimate $D_k$ as \[ D_k \leq (m+1)\left(\frac{\lambda^{-(l-k-[\log_{\lambda}D]-3)}L}{\delta} + 1\right) \leq (m+1)\left(\frac{\lambda^{k+[\log_{\lambda}D]+3}}{\delta} + 1\right); \] \[ D_k\lambda^{-k} \leq (m+1)(\lambda^{[\log_{\lambda}D]+3}/\delta + 1). \] \noindent Note that this bound does not depend on $k$. We also estimate $D_n\lambda^{-n} \le (N+1)\lambda^{-n} \leq 1$. \noindent Finally, \[ l_{T}(X) \leq L(n+1) (m+1)(\lambda^{[\log_{\lambda}D]+3}/\delta + 1). \] \noindent The first claim of the lemma follows with $C = 2(\frac{1}{\ln{\lambda}}+1)(m+1)(\lambda^{[\log_{\lambda}D]+3}/\delta +1)$. \medskip To prove the second claim, consider in $(M,T)$ a snake with diameter $a$ and width $b$. Let $k$ be such that $\delta\lambda^{k-1} < b \leq \delta\lambda^k$. Consider all points of this snake with odd indices. Since they form a set of diameter at most $\delta\lambda^k$, by the definition of $AN$-filtration this set can be covered by $m+1$ subsets of $\mathcal{V}_k$. By the same argument, the analogous fact holds for all even-indexed points of our snake. Since $b> \delta\lambda^{k-1}$, we know the following. If the elongation $a/b$ of the snake is larger than $\lambda \frac{D}{\delta} + 2$, then $a - 2b >D\lambda^k$ and no two points of the snake of different parity belong to the same set from $\mathcal{V}_k$. Since the sets are convex with respect to $T$, a set containing two points of the snake of the same parity would also contain a point of the opposite parity between them; hence each of the $2m+2$ covering sets from $\mathcal{V}_k$ contains at most $1$ point of the snake. Hence, elongations of snakes on $2m+3$ points are bounded by $\lambda(\frac{D}{\delta} + 2)$. \end{proof} Now we have all the ingredients to prove Theorem \ref{thm:nagata}.
As we mentioned above, we can assume that the metric space $M$ is uniformly discrete. From Lemma \ref{lem:finitedimensionimpliesANfiltration} it follows that $M$ admits an $AN$-filtration with parameters depending only on the $m$-dimension control function. Lemma \ref{le:convex} implies that for some order $T$ all the sets from this family are convex with respect to $T$, and Lemma \ref{lem:ANfiltrationCorollary} gives us the needed bounds on $\operatorname{OR}_{M,T}$ and on the elongations of snakes on $2m+3$ points in $(M,T)$. \subsection{Product of two binary trees} In this subsection we will illustrate Theorem \ref{thm:nagata} by giving an explicit example of an order on a product of two trees. We encode the vertices of an infinite binary tree $M$ by finite binary words $u_i\in \{0,1\}^*$; the empty word is denoted by $\varepsilon$. Points of the product $P$ of two binary trees $M_1$ and $M_2$ can be coded by pairs of finite binary words $(u_1, u_2)$. The distance between points $(u_1,u_2)$ and $(u_3,u_4)$ in $P$ is defined as $\max(d_{M_1}(u_1, u_3), d_{M_2}(u_2, u_4))$. For given $r$ consider the following partition $W_r$ of $M$. Points $u$ and $v$ of $M$ belong to the same set $A\in W_r$ if and only if for some non-negative integer $k$ it holds that $kr \leq |u|, |v| < (k+1)r$ and the largest common prefix of $u$ and $v$ has length $\geq (k-1/2)r$. Here by $|x|$ we denote the length of a word $x$, i.e.\ the distance to the root in $M$. This is a well-known construction of a partition of a metric tree with $1$-dimension control function $6r$, in the sense of Definition \ref{rem:def2AssouadNagata} (we already referred to this result in the proof of Example \ref{ex:anwreath}). If we take the partitions $W_r$ for all integer powers of $2$, we obtain an $AN$-filtration $\mathcal{V}_k$ of $M$ with parameters $\lambda = 2$, $m=1$, $D=3$ and $\delta = 1/2$.
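For concreteness, the set of $W_r$ containing a given vertex can be computed directly from the word coding. The sketch below is our own, under the reading that two vertices of one level band lie in the same set iff they share the prefix of length $\lceil (k-1/2)r\rceil$, which recovers the "same set" relation just stated:

```python
import math

def tree_partition_key(u, r):
    """Key of the set of the partition W_r containing the vertex coded by u.

    u: a binary word (string); r: a positive integer.  Vertices are in one
    set iff they lie in the same level band [k*r, (k+1)*r) and share a
    prefix of length at least (k - 1/2) * r.
    """
    k = len(u) // r                       # level band of the vertex
    t = max(math.ceil((k - 0.5) * r), 0)  # required common-prefix length
    return (k, u[:t])

# For r = 2: "0100" and "0101" share the prefix "010" of length 3, so they
# lie in one set; "0110" does not.
```

Taking $r = 2^j$ and refining as $j$ decreases gives exactly the nested family $\mathcal{V}_k$ described above.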
We obtain an $AN$-filtration on $P$ with parameters $\lambda = 2$, $m = 3$, $D = 3$, $\delta = 1/2$ if for each $k$ we take the partition $\mathcal{W}_k = \{A\times B \mid A, B\in \mathcal{V}_k\}$. (Note that the $AN$-dimension of $P$ is $2$, hence by Lemma \ref{lem:finitedimensionimpliesANfiltration} there exists an $AN$-filtration of $P$ with $m=2$, but an explicit description of the sets of the partitions for such a filtration would require extra work.) Now we can describe an order $T$ on $P$ for which all sets of $\mathcal{W}_k$ are convex, and consequently $\operatorname{OR}_{P,T}(k)$ is logarithmic in $k$ by Lemma \ref{lem:ANfiltrationCorollary}. For integers $l\geq 0$ and $k>0$ we use the notation $f(l,k) = \max ([\frac{l}{2^{k+1}}]2^{k+1}-2^k, 0)$, and we also set $f(l,0) = l$. For a point $x = (u_1, u_2)\in P$ we construct the following sequence of finite words $(v_i(x))$. For any $k$, let $v_{2k}(x)$ be the prefix of $u_1$ of length $f(|u_1|, k)$ and let $v_{2k+1}(x)$ be the prefix of $u_2$ of length $f(|u_2|, k)$. Here $|u|$ denotes the length of the word $u$. It is clear that for $x=(u_1, u_2)$ we have $v_0(x) = u_1$, $v_1(x) = u_2$, and $v_k(x) = \varepsilon$ for all large enough $k$. Hence for any two points $x_1, x_2 \in P$, $x_1 \ne x_2$, the sequences $(v_i(x_1))$ and $(v_i(x_2))$ are distinct but coincide at all but finitely many positions. Finally, we put $x_1 <_T x_2$ if $v_i(x_1)<_{\rm lex} v_i(x_2)$, where $i$ is the largest integer such that $v_i(x_1) \neq v_i(x_2)$. \section{Weak expansion and infinite order breakpoint} Given a sequence of spaces $(M_\alpha, T_{\alpha})$, $\alpha \in \mathcal{A}$, we can speak about the order ratio function of this sequence by defining $$ \operatorname{OR}(k) = \sup_\alpha \operatorname{OR}_{M_\alpha, T_\alpha} (k). $$ Then we can also speak about the order breakpoint of a sequence, considering the minimal $k$ such that $\operatorname{OR}(k) < k$.
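On small finite examples these quantities can be computed by brute force. The sketch below is our own illustration; it assumes the definition from earlier in the paper, namely that $\operatorname{OR}_{M,T}(k)$ is the supremum of $l_T(X)/l_{\operatorname{opt}}(X)$ over subsets $X$ of at most $k+1$ points, where $l_{\operatorname{opt}}(X)$ is the length of a shortest path visiting $X$.

```python
from itertools import combinations, permutations

def order_ratio(points, dist, order_index, k):
    """Brute-force OR(k): sup of l_T(X)/l_opt(X) over subsets X, |X| <= k+1.

    Sketch under the assumption stated above; order_index gives the position
    of each point in the order T, dist is the metric.  Exponential time, for
    tiny examples only.
    """
    def path_len(seq):
        return sum(dist(a, b) for a, b in zip(seq, seq[1:]))

    best = 0.0
    for size in range(2, k + 2):
        for subset in combinations(points, size):
            ordered = sorted(subset, key=order_index)          # traversal by T
            opt = min(path_len(p) for p in permutations(subset))
            if opt > 0:
                best = max(best, path_len(ordered) / opt)
    return best

# Four collinear points with the "bad" order 0 <_T 2 <_T 1 <_T 3:
pts = [0, 1, 2, 3]
ratio = order_ratio(pts, lambda a, b: abs(a - b), {0: 0, 2: 1, 1: 2, 3: 3}.get, 3)
```

For this order the worst subset is the full set, traversed as $0,2,1,3$ with length $5$ against the optimal length $3$.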
Another way to formulate the definitions above is to consider the disjoint union $M = \bigcup M_\alpha$. We have already mentioned in section \ref{sec:firstexamples} that one can define the function $\operatorname{OR}$ for metric spaces with infinite distance allowed by considering only subsets with finite pairwise distances. In this case $\operatorname{OR}$ and $\operatorname{Br}$ of the sequence are equal to $\operatorname{OR}_M$ and $\operatorname{Br}(M)$. Given a finite $d$-regular graph $\Gamma$ on $n$ vertices, we consider its adjacency matrix $A$ and its eigenvalues $$ d=\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n. $$ It is clear that $\tilde{\lambda}_i= \lambda_i/d$ are the eigenvalues of the normalized adjacency matrix, which is the matrix of transition probabilities of the simple random walk on $\Gamma$. We recall that a connected $d$-regular graph is a Ramanujan graph if $\max_{i: |\lambda_i|<d }|\lambda_i| \le 2\sqrt{d-1}$. Gorodezky et al.\ show in \cite{gorodezkyetal} (see also Theorem $2$ in Bhalgat, Chakrabarty and Khanna \cite{bhalgatetal}) the existence of a sequence of bounded degree graphs $\Gamma_i$ for which the competitive ratio of the universal travelling salesman problem admits a lower bound linear in the cardinality of subsets. In our terminology, they have provided a linear lower bound for the order ratio function of the sequence $\Gamma_i$. Both \cite{gorodezkyetal} and \cite{bhalgatetal} use a remarkable construction of Ramanujan graphs (constructed by Lubotzky, Phillips and Sarnak \cite{lubotzkyetal}, see also \cite{margulis}). Now we prove that for linearity of the competitive ratio in $k$, and moreover for the a priori stronger claim that the order breakpoint is infinite, a milder assumption than the Ramanujan condition is sufficient. Since we consider a sequence of graphs in the theorem below, we use the name of the graph as an upper index for $\lambda$, denoting by $\lambda^{\Gamma_i}_j$ the eigenvalue $\lambda_j$ of $\Gamma_i$.
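The normalized spectral gap $\delta = (\lambda_1-\lambda_2)/d$ appearing in the theorem below can be seen concretely on a non-expanding family: for the cycle $C_n$ (so $d=2$) one has $\lambda_2 = 2\cos(2\pi/n)$, hence $\delta\to 0$ as $n\to\infty$. A short self-contained check (our own sketch, not from the paper):

```python
import math

def cycle_second_eigenvalue(n):
    """lambda_2 of the adjacency matrix of the n-cycle.

    Verified against the eigenvector equation A v = lambda v, where
    v_j = cos(2*pi*j/n) and (A v)_j = v_{j-1} + v_{j+1} on the cycle.
    """
    lam = 2 * math.cos(2 * math.pi / n)
    v = [math.cos(2 * math.pi * j / n) for j in range(n)]
    for j in range(n):
        av = v[(j - 1) % n] + v[(j + 1) % n]
        assert abs(av - lam * v[j]) < 1e-9
    return lam

# Normalized spectral gap delta = (lambda_1 - lambda_2)/d for C_8, d = 2:
delta = (2 - cycle_second_eigenvalue(8)) / 2
```

Since $\delta \approx 2\pi^2/n^2$ here, cycles fail the hypothesis $1/\delta_i = o(\log_{d_i} n_i / \ln\log_{d_i} n_i)$, in contrast with expander families, for which $\delta$ is bounded away from $0$.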
\begin{thm}\label{thm:expanders} Let $\Gamma_i$ be a sequence of finite graphs of degree $d_i\ge 3$ on $n_i$ vertices. Let $T_i$ be an order on $\Gamma_i$. \begin{enumerate} \item Let $\delta_i= (\lambda^{\Gamma_i}_1 - \lambda^{\Gamma_i}_2 ) /d_i$. Assume that $$ 1/\delta_i = o \left( \frac{\log_{d_i}n_i}{\ln \log_{d_i}n_i} \right). $$ Then the order breakpoint of the sequence $(\Gamma_i, T_i)$ is infinite. \item Moreover, for any $k\ge 1$ any ordered graph $(\Gamma_i, T_i)$ admits snakes on $k$ points of width at most $$ t_i = \left\lceil\frac{24k(\ln k + \ln(2/ \delta_i))} {\delta_i} \right\rceil $$ and length at least $$ L_i = \left[\frac{\ln n_i - 4 -2k\ln(2k) - 2k \ln (2/\delta_i)}{\ln (d_i-1)}\right] -1. $$ \item In particular, for a sequence of bounded degree expander graphs the following holds. If $d_i=d\ge 3$ and $\delta=\inf_i \delta_i >0$, then for each $k$ the graphs $(\Gamma_i, T_i)$ admit snakes on $k$ points of bounded width $C_k$ and of length at least $\log_{d-1}n_i -C'_k$, for some $C_k, C'_k>0$. \end{enumerate} \end{thm} \begin{proof} First observe that we can easily modify our graphs to ensure that the spectrum is non-negative. Let $\Gamma^{+}_i$ be the graph obtained from $\Gamma_i$ by adding $d_i$ loops at each vertex of $\Gamma_i$. Then $\Gamma^{+}_i$ is a regular graph of degree $2d_i$ (we use the convention that each loop contributes one edge to its vertex). Observe that it is sufficient to prove that the order breakpoint of the sequence $\Gamma^{+}_i$ is infinite. We will also prove the other claims of the theorem under the assumption that the spectrum is non-negative, and then the general case will follow. Let $\lambda^{\Gamma^+_i}_1 = 2d_i \ge \lambda^{\Gamma^+_i}_2 \ge \dots \ge \lambda^{\Gamma^+_i}_{n_i}$ be the eigenvalues of the adjacency matrix of $\Gamma^+_i$. It is clear that $\lambda^{\Gamma^+_i}_j = \lambda_j^{\Gamma_i}+d_i \ge 0$ for all $j$. In particular, $\max (\lambda^{\Gamma^+_i}_2, |\lambda^{\Gamma^+_i}_{n_i}|) =\lambda^{\Gamma^+_i}_2$.
Observe also that $\lambda_2^{\Gamma^+_i}/{(2d_i)} = (d_i +\lambda_2^{\Gamma_i})/{(2d_i)}$ and that the spectral gap $ ( \lambda_1^{\Gamma^+_i}- \lambda_2^{\Gamma^+_i} ) /{(2d_i)} $ is equal to $\delta_i/2$. Below we therefore assume that the graph $\Gamma_i$ has non-negative spectrum. Given $i$, consider $\Gamma_i$ and choose $m_i$ starting points in $\Gamma_i$ independently with respect to the uniform distribution on the vertices of $\Gamma_i$. Consider $m_i$ trajectories of independent random walks with $t_i$ steps, starting at the chosen starting points. The values $m_i$ and $t_i$ will be specified later in the text. We will use the following observation. \begin{claim}\label{claim:snakes} Let $T$ be an order on a graph $G$. Suppose that we have a partition of $G$ into several sets convex with respect to $T$ (we will call them intervals). Consider several $t$-step trajectories of a simple random walk on $G$, and assume that the distance between the starting points of any two trajectories is at least $L$. If there are at least $k$ intervals, each having non-empty intersection with the same two among our trajectories, then $(G,T)$ admits a snake on $k+1$ points, of width at most $t$ and length at least $L-2t$. \end{claim} \begin{proof} First observe that the distance between any two trajectories is clearly at least $L-2t$. In each of the $k$ intervals choose a point $A_i$ from the first trajectory and a point $B_i$ from the second. Let $A_1 <_T \dots <_T A_k$ and $B_1<_T \dots <_T B_k$ be these points. Without loss of generality, $A_1 <_T B_1$. Then we can choose a snake $(A_1 <_T B_1<_T A_2<_T B_2 <_T \dots)$ on $k+1$ points that satisfies the claim. \end{proof} We will use Claim \ref{claim:snakes} above for a fixed number $k$, $G=\Gamma_i$ and numbers $t=t_i$ and $L=L_i$, for appropriately chosen sequences $t_i$ and $L_i$. We will explain the proof of (1) and (2) of the theorem, where for the proof of (1) we take any $k\ge 1$ and fix it for the argument below.
In (2), $k$ is the number from the formulation of that claim. We consider the partition of the elements of $\Gamma_i$ into $N_i$ sets of equal cardinality (almost equal in case $N_i$ does not divide $n_i$), convex with respect to the order $T_i$. The numbers $N_i$ will be specified later. In other words, we number the points of $\Gamma_i$ with respect to $T_i$: $g_1 \le_{T_i} g_2 \le_{T_i} \dots \le_{T_i} g_{n_i}$; put $r_i=[n_i/N_i]$ and consider the sets $\Omega_1^i =\{g_1, g_2, \dots g_{r_i} \}$, $\Omega_2^i =\{g_{r_i+1}, g_{r_i+2}, \dots g_{2r_i}\}$, $\Omega_3^i =\{g_{2r_i+1}, g_{2r_i+2}, \dots, g_{3r_i}\}$ etc. Before we apply Claim \ref{claim:snakes} and specify a sequence $m_i$ of numbers of trajectories, we start with a straightforward observation. If we choose independently $m$ points, with respect to the uniform distribution on the vertices of a graph of cardinality $n$, of degree $d$, then the probability that at least two among the chosen points are at distance $\le L$ is at most \begin{equation} \label{eq:preneravenstvo2} \frac{m^2d(d-1)^L}{(d-2)n}. \end{equation} Indeed, the number of points at distance at most $L$ from a given point is at most $1+d+d(d-1)+\dots + d(d-1)^{L-1} \le d(d-1)^L/(d-2)$, and there are at most $m^2$ possible pairs of points chosen among $m$ points. We want to choose $m_i$ and $L_i$ in such a way that \begin{equation} \label{eq:neravenstvo2} \frac{m_i^2d_i(d_i-1)^{L_i}}{(d_i-2)n_i} \end{equation} is small enough. In view of the upper bound (\ref{eq:preneravenstvo2}) on the probability that some pair among $m$ points chosen at random in a graph of degree $d$ is at distance at most $L$, under this assumption we will have a lower bound for the probability that the minimum of the pairwise distances between the $m_i$ starting points of the trajectories (in $\Gamma_i$) is at least $L_i$. In order to use Claim \ref{claim:snakes} to find snakes of arbitrarily large elongation (and thus to prove (1) of the theorem) we need $$ \frac{L_i-2t_i}{t_i} \to \infty. 
$$ We will need another condition, to estimate the number of intervals a trajectory intersects. We put \begin{equation} m_i = C_{N_i}^k+1. \end{equation} Since $m_i>C_{N_i}^k$, if each of the trajectories intersects at least $k$ convex intervals, then there exist two among our $m_i$ trajectories and $k$ among our $N_i$ intervals with non-empty intersection with both of these trajectories. The probability $P_i$ that one of our $m_i$ trajectories intersects fewer than $k$ intervals satisfies \begin{equation} \label{eq:corollaryThm36} P_i \le m_i C_{N_i}^k (\beta_i+\alpha_i)^{t_i}, \end{equation} for $\beta_i = k/N_i $ being the density of a set which is the union of $k$ convex intervals and $\alpha_i =\lambda_{2}^{\Gamma_i}/d_i= 1-\delta_i$ (since we assume that the spectrum is non-negative). Indeed, for each of our $m_i$ trajectories (with starting points chosen according to the uniform distribution on the vertices of $\Gamma_i$) and fixed $k$ among our convex intervals, the probability to stay inside the union of these intervals is at most \begin{equation}\label{eq:HLWimplies} (\beta_i+\alpha_i)^{t_i}, \end{equation} as follows from \cite{HLW}[Thm 3.6]. This result, which goes back to \cite{AKS}, states the following. Let $G$ be a $d$-regular graph on $n$ vertices such that the spectral values of the normalized adjacency matrix satisfy $|\tilde{\lambda}_2|, |\tilde{\lambda}_n| \le \alpha$. Let $B$ be a subset of vertices of $G$ of cardinality $\le \beta n$. Then the probability that a $t$-step trajectory of a random walk, starting at a point chosen with respect to the uniform distribution on $G$, stays inside $B$ is at most $(\alpha+\beta)^t$. As we have already mentioned, for the proof of the theorem we can assume that the spectrum of $\Gamma_i$ is non-negative, so that in this case $\max (|\tilde{\lambda}^{\Gamma_i}_2|, |\tilde{\lambda}^{\Gamma_i}_{n_i}|)=\lambda^{\Gamma_i}_2/d_i = 1-\delta_i$. We return to the proof of the theorem. 
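The stay-in-a-subset estimate quoted above can be tested exactly on a toy example. In the sketch below (our illustration, not part of the proof), the probability that a $t$-step walk on the complete graph $K_n$, started at a uniform vertex, never leaves a subset $B$ is computed exactly via the substochastic matrix of the walk restricted to $B$, and compared with the bound $(\alpha+\beta)^t$.

```python
import numpy as np

# Complete graph K_n: d = n - 1, normalized eigenvalues are
# 1 and -1/(n-1), so alpha = 1/(n-1) bounds |lambda_2|, |lambda_n|.
n = 12
d = n - 1
alpha = 1.0 / d

# B = a subset of beta*n vertices; P_B is the substochastic
# transition matrix of the walk restricted to B.
B = 6
beta = B / n
P_B = (np.ones((B, B)) - np.eye(B)) / d

# Exact probability that a t-step walk started at a uniformly
# random vertex never leaves B: (1/n) * 1^T P_B^t 1.
t = 5
stay = np.ones(B) @ np.linalg.matrix_power(P_B, t) @ np.ones(B) / n

# The exact staying probability respects the (alpha + beta)^t bound.
assert 0 < stay <= (alpha + beta) ** t
```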
To get the desired bound, we choose $\beta_i$ of the same order as $1-\alpha_i$, putting $\beta_i= k/N_i = \delta_i/2$. More precisely, to ensure that $N_i$ is an integer, we choose \begin{equation}\label{eq:ravenstvo6} N_i= \lceil 2k/\delta_i \rceil. \end{equation} Then, since we can assume that $k\ge 2$ and hence $N_i \ge 4$, it holds that $$ P_i \le (C_{N_i}^k + 1) C_{N_i}^k (\beta_i+\alpha_i)^{t_i} \le 2 (C_{[2k/\delta_i]}^k )^2 (1-\delta_i/2)^{t_i} \le 2 ((2k/\delta_i)^k)^2 \exp(-\delta_i t_i/2)= $$ \begin{equation}\label{eq:preuravnenie1} = 2 (2k/\delta_i)^{2k} \exp(-\delta_i t_i /2), \end{equation} since $(1-1/x)^x \le e^{-1}$ for any $x>1$. Take the logarithm of the previous expression. We want to choose $t_i$ in such a way that \begin{equation}\label{eq:inequality1} \delta_i t_i/2 - 2k (\ln k + \ln ( 1/\delta_i)+\ln 2)-\ln2 \ge 1; \end{equation} this guarantees that the probability $P_i$ that one of the trajectories intersects fewer than $k$ intervals is at most $\exp(-1)$. In particular, we can take \begin{equation}\label{eq:ti} t_i =\lceil \frac{12k(\ln k + \ln(1/ \delta_i))}{\delta_i} \rceil. \end{equation} We rewrite the upper bound in the formula (\ref{eq:preneravenstvo2}) for the probability that two among our $m_i$ trajectories start at distance smaller than $L_i$. We take into account the choice of $m_i$ in the formula (\ref{eq:neravenstvo2}), the choice of $N_i$ in the formula (\ref{eq:ravenstvo6}) and the estimate $m_i^2 \leq 2(2k/\delta_i)^{2k}$. We also want to assume that the obtained upper bound satisfies \begin{equation}\label{eq:qi} Q_i=2(2k/\delta_i)^{2k} \frac{d_i(d_i-1)^{L_i}}{(d_i-2)n_i} \le e^{-1}. \end{equation} Since $2e^{-1}<1$, the inequality (\ref{eq:qi}) on $Q_i$ combined with our assumption on $P_i$ guarantees that our argument, with probability $\ge 1-2e^{-1}$, provides us with snakes of width at most $t_i$ and of diameter at least $L_i$ in the ordered space $(\Gamma_i, T_i)$. 
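As a numerical sanity check (ours, not part of the proof), one can verify that the choice (\ref{eq:ti}) indeed satisfies (\ref{eq:inequality1}) across a range of parameters $k\ge 2$ and $0<\delta_i\le 1$:

```python
import math

def t_choice(k, delta):
    # t_i = ceil(12 k (ln k + ln(1/delta)) / delta), as in the text.
    return math.ceil(12 * k * (math.log(k) + math.log(1 / delta)) / delta)

def inequality_holds(k, delta):
    # Check: delta*t/2 - 2k(ln k + ln(1/delta) + ln 2) - ln 2 >= 1,
    # which forces the probability P_i to be at most exp(-1).
    t = t_choice(k, delta)
    lhs = (delta * t / 2
           - 2 * k * (math.log(k) + math.log(1 / delta) + math.log(2))
           - math.log(2))
    return lhs >= 1

# The inequality holds comfortably on a grid of parameters.
for k in range(2, 8):
    for delta in (0.5, 0.1, 0.01, 0.001):
        assert inequality_holds(k, delta)
```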
We take the logarithm of the expression in the formula (\ref{eq:qi}) and use that $\ln (d_i/(d_i-2)) \le \ln 3 \le 2$. We introduce the notation $w_i=L_i/t_i$, and we want therefore that $t_i$ and $w_i$ satisfy \begin{equation} 2k\left(\ln(2k) + \ln (1/\delta_i)\right) + t_i w_i\ln (d_i-1) \leq \ln n_i - 4. \end{equation} In order to have the inequality above, to assure that $L_i$ is an integer and that we have snakes of length $L_i$, we can take $w_i$ such that \begin{equation}\label{eq:wi} w_i =\frac{1}{t_i}\left[\frac{\ln n_i - 4 -2k\ln(2k) - 2k \ln (1/\delta_i)}{\ln (d_i - 1)}\right]. \end{equation} Thus for ordered graphs with non-negative spectrum we have shown the existence of snakes of width $$ t_i = \left\lceil\frac{12k(\ln k + \ln(1/ \delta_i))} {\delta_i} \right\rceil $$ and of length at least $$ L_i = \left[\frac{\ln n_i - 4 -2k\ln(2k) - 2k \ln (1/\delta_i)}{\ln (d_i-1)}\right] -1. $$ As we have mentioned, it is easy to modify the graph to ensure that the spectrum is non-negative. Since by this modification $\delta_i$ is replaced by $\delta_i/2$, $d_i$ by $2d_i$, and the cardinality of the balls in $\Gamma_i$ does not change, this implies Claim (2) of the theorem for a general graph. In particular, if we have a sequence of expander graphs of fixed degree, then $1/\delta_i$ is bounded from above, hence there exist snakes of width at most $C$ and of length greater than or equal to $\log_{d-1} n_i -C'$, where the constants $C$ and $C'$ depend on $k$ and the spectral gap $\delta>0$, $\delta= \inf_i \delta_i$. We have proved the third claim. To prove Claim (1) of the theorem, we want to guarantee that the elongation of snakes tends to $\infty$, and thus we need to ensure that $w_i \to \infty$ as $i \to \infty$. This will follow from the conditions \begin{equation}\label{eq:13} t_i \ln (d_i-1) = o (\ln n_i) \end{equation} and \begin{equation}\label{eq:14} 1 + k\ln(1/\delta_i) = o(\ln n_i). 
\end{equation} \noindent The assumption of Claim (1) of the theorem implies that $1/\delta_i = o(\ln n_i)$ and hence (\ref{eq:14}) holds. Since we can take $t_i$ as in (\ref{eq:ti}), to ensure (\ref{eq:13}) it is enough to show that \begin{equation}\label{eq:dvanerva} \frac{\ln(1/\delta_i)}{\delta_i} =o \left( \frac{\ln n_i}{\ln (d_i-1)} \right). \end{equation} The assumption of (1) of the theorem implies that $\log_{d_i} n_i$ tends to infinity and that \begin{equation} 1/\delta_i = o\left( \frac{\log_{d_i}n_i}{\ln \log_{d_i}n_i} \right)= o\left( \frac{\log_{d_i-1}n_i}{\ln \log_{d_i-1}n_i} \right), \end{equation} hence \begin{equation} (1/\delta_i) \ln(1/\delta_i) = o \left( \frac{\log_{d_i-1}n_i}{\ln \log_{d_i-1}n_i} \right) \left(\ln \log_{d_i-1}n_i - \ln \ln \log_{d_i-1}n_i \right)= o(\log_{d_i-1}n_i), \end{equation} \noindent and hence (\ref{eq:dvanerva}) holds, and we have proved the first claim of the theorem. \end{proof} Trajectories of random walks, for obtaining lower bounds for the universal travelling salesman problem, appear in \cite{gorodezkyetal} (one trajectory) and in \cite{bhalgatetal} (two trajectories). In the proof of Theorem \ref{thm:expanders} we consider several trajectories, and an essential point of our argument is an appropriate subdivision of $\Gamma_i$ into convex intervals. \begin{rem} Recall that a discrete version of Cheeger's inequality (see e.g. \cite{HLW}[Thm 2.4]) relates the expansion coefficient $h(\Gamma)$ of a $d$-regular finite graph with its spectral gap: $$ \frac{d-\lambda_2}{2} \le h(\Gamma) \le \sqrt{2d(d-\lambda_2)}. $$ In particular, the assumption of Thm \ref{thm:expanders} is satisfied for a sequence of graphs of fixed degree satisfying $$ \frac{1}{h(\Gamma_i)^2} = o \left( \frac{\ln n_i}{\ln \ln n_i} \right). 
$$ \end{rem} \begin{rem}\label{rem:ANpoincare} Proposition $9.5$ of Hume, MacKay, Tessera \cite{humeetal} implies that for a union $X$ of bounded degree graphs $\Gamma_i$ of finite $AN$-dimension, the Poincar\'{e} profile of $X$ satisfies, for some $C$, $$ \Lambda_X^2(|\Gamma_i|) \le \frac{C |\Gamma_i|}{\log |\Gamma_i|} + C. $$ Here Proposition 9.5 is applied to $X$ being a disjoint union of graphs, $\delta=1$, and one uses the estimate $\gamma_n(t) \le d^{K t}$, which holds for graphs of degree $d$ and $n$-dimensional control function $Kt$. From Proposition $1.2$ for $p=2$ in Bourdon \cite{bourdon} it follows that $\Lambda^2_X(|\Gamma_i|) \geqslant |\Gamma_i|h^2(\Gamma_i)\sim |\Gamma_i|\lambda_{1,2}^{\frac{1}{2}}(\Gamma_i)$. In the notation of Theorem \ref{thm:expanders} this means that if a sequence of bounded degree graphs has finite $AN$-dimension, then $\delta_i \le {\rm Const} (1/\ln n_i)^2$ for all $i$ and some constant ${\rm Const}$. In other words, if $1/\delta_i = o((\ln n_i)^2)$, then the $AN$-dimension of this sequence is infinite. We are grateful to David Hume for explanations on \cite{humeetal} and \cite{bourdon}. \end{rem} \begin{rem}[Wreath products, lamplighter on cyclic groups]\label{rem:wreath} Let $\Gamma_i$ be a Cayley graph of $\mathbb{Z}/i\mathbb{Z} \wr \mathbb{Z}/2\mathbb{Z}$, with respect to standard generators of this group. Then $\delta_i$ is asymptotically equivalent to $1/i^2$, and thus to $1/(\ln n_i)^2$ (in fact, in this particular example all spectral values and the spectral measure for the simple random walk are calculated: see Grigorchuk, Leemann, Nagnibeda \cite{gln}, Theorem 5.1, for the calculation of the spectrum and spectral measure of de Bruijn graphs, and Theorem 6.1.3 for the explanation that lamplighter graphs are a particular case of de Bruijn graphs). Thus $1/\delta_i$ is equivalent to $(\ln n_i)^2$. 
Recall that the $AN$-dimension of the disjoint union of $\mathbb{Z}/i\mathbb{Z} \wr \mathbb{Z}/2\mathbb{Z}$ is finite (see Example \ref{ex:anwreath} in the previous section). Thus by Theorem \ref{thm:nagata} we know that $\operatorname{Br}$ of this union is finite, and the conclusion of Claim (1) of Thm \ref{thm:expanders} does not hold for this sequence. \end{rem} More generally, any non-trivial sequence of finite lamplighter graphs (and also many other sequences of wreath products) violates the assumption of Thm \ref{thm:expanders}, as we discuss in the following remark. However, in contrast with the graphs from the previous remark, many such sequences satisfy the conclusion of Claim (1) of the theorem. We will see this in the next section, as an application of the criterion of weak imbeddings of cubes. \begin{rem} There exists $C>0$ such that for any finite group $A$, of cardinality at least $2$, the spectral gap of $A \wr \mathbb{Z}/2 \mathbb{Z}$ satisfies \begin{equation}\label{eq:largerelaxtime} 1/\delta \ge C \ln n, \end{equation} where $n$ is the cardinality of $A \wr \mathbb{Z}/2\mathbb{Z}$. (It is clear that $n= \# A \cdot 2^{\# A}$.) This follows from a result of Peres and Revelle (\cite{peresrevelle}). Their result, which holds under a more general assumption on the Markov chain on the wreath product (including simple random walks), states that $$ t_{\rm rel} (A \wr \mathbb{Z}/2\mathbb{Z}) \ge \frac{1}{8\ln 2} t_{\rm hit} (A), $$ where the {\it relaxation time} $$ t_{\rm rel} = \frac{1}{\min_{j:\tilde{\lambda}_j <1}(1-|\tilde{\lambda}_j|)} $$ (in particular, for random walks with non-negative spectrum $t_{\rm rel} = 1/(1-\lambda_2/d)$) and $t_{\rm hit}(A)$ is {\it the maximal hitting time} of a random walk on $A$, that is, the maximum, over $x,y$ in the graph, of the expected time to hit $y$ starting from $x$. Recall that for a simple random walk on a finite graph $\Gamma$ the maximal hitting time is $\ge \# \Gamma/2$. 
Indeed, if we fix $x\in \Gamma$ and let $T_y$ be the time of the first visit of $y$, then for any trajectory starting at $x$ we have $\sum_{y \in \Gamma}T_{y} \geq \#\Gamma(\#\Gamma - 1)/2$, hence $\sum_{y\in \Gamma\setminus \{x\}} \mathbb{E}_xT_y \geq \#\Gamma(\#\Gamma - 1)/2$. (Another result of \cite{peresrevelle} implies that for fixed $d\ge 3$ and $A=(\mathbb{Z}/i\mathbb{Z})^d$ there is a matching upper bound $1/\delta \le {\rm Const} \ln n$.) Moreover, for many wreath products $A\wr B$ the inequality (\ref{eq:largerelaxtime}) holds, as can be deduced from \cite{komjathyperes}[Thm 1.3]. \end{rem} \begin{rem}\label{rem:sardari} In Claim (3) of Theorem \ref{thm:expanders} we proved a logarithmic lower bound $\log_{d-1} n_i -C'_k$ for the length of some snakes on $k$ points in $\Gamma_i$. These estimates can not be significantly improved for some graphs, since this length can not be greater than the diameter: recall that for any $d\ge 3$ and any $\varepsilon>0$ the diameter of a.e. random $d$-regular graph on $n$ vertices is at most $D$, where $D$ is the smallest integer satisfying $(d-1)^D \ge (2+\varepsilon) d n \ln n$ (see \cite{BollobasdelaVega}); in particular, the diameter is close to the straightforward lower bound for the diameter (in terms of $n$ and $d$), $D =\log_{d-1} n \,(1+o(1))$. It is known that random $d$-regular graphs form a sequence of expander graphs, close to being Ramanujan \cite{Friedman}. For possible Cayley graph examples with diameter close to $\log_{d-1} n$ see \cite{rivinsardari}[Section 4.1], which provides numerical evidence that the diameter of ${\rm SL}(2, \mathbb{Z}/p \mathbb{Z})$, with respect to a random generating set on $d$ generators, is close to $\log_{d-1} n$, as $p\to \infty$. \end{rem} Given a graph $\Gamma$ and an integer $l\ge 1$, denote by ${\rm Sc}(\Gamma, l)$ the graph obtained from $\Gamma$ by replacing each edge by a chain of $l$ edges. 
We will be interested in the metric on the vertices of such graphs, and therefore, to make the resulting graph regular, we can add loops at all new vertices, so that if $\Gamma$ is a regular graph of degree $d$, then the scaled graph ${\rm Sc}(\Gamma, l)$ is also regular of degree $d$. \begin{rem}\label{rem:nospectral} Let $l_i \ge 1$ be a sequence of integer numbers. Observe that if $\operatorname{Br}$ is infinite for a sequence of graphs $\Gamma_i$, then it is also infinite for the sequence ${\rm Sc}(\Gamma_i, l_i)$. In particular, it is impossible to obtain a necessary and sufficient condition for $\operatorname{Br}=\infty$ in terms of $\delta_i$, $n_i$ and $d_i$. Indeed, consider a sequence of expander graphs $\Gamma_i$ and choose $l_i$ to grow rapidly. We can obtain a sequence of graphs ${\rm Sc}(\Gamma_i, l_i)$ with arbitrarily quick decay of the normalized spectral gap $\delta_i$, but with $\operatorname{Br} =\infty$. \end{rem} \begin{rem}\label{rem:closetoexpandersnobs} On the other hand, take any sequence of graphs $\Gamma_i$ and choose $l_i$ to grow very slowly. We obtain a sequence of graphs $\Gamma'_i={\rm Sc}(\Gamma_i, l_i)$ with $\delta_i$ tending to zero arbitrarily slowly. It is not difficult to check that, whatever $\Gamma_i$ we take and whatever sequence $l_i$ tending to $\infty$ we choose, the obtained graphs $\Gamma'_i$ admit orders $T_i$ (a version of ${\rm Star}$ orders) such that the sequence $(\Gamma'_i, T_i)$ does not contain snakes of bounded width. Thus, the assumption in (3) of the Theorem can not be weakened: no other condition on the decay of $\delta_i$ (unless $\delta_i$ is bounded away from zero) can guarantee the existence of a sequence of snakes of bounded width. \end{rem} \section{Weak imbeddings of cubes and infinite order breakpoint}\label{sec:infinitegirth} In the previous section we have seen a spectral condition for a sequence of graphs that guarantees that the order breakpoint is infinite. 
In this section we prove another sufficient condition (for spaces or sequences of spaces) in terms of weak imbeddings of cubes. The following lemma generalizes the claim of Lemma \ref{le:examplecircle} about circles. We denote by $S^d$ the unit $d$-sphere in $\mathbb{R}^{d+1}$; the metric on $S^d$ is the Euclidean metric induced from $\mathbb{R}^{d+1}$. \begin{lemma}[Snakes in the spheres] \label{lem:sphere} Let $\varepsilon>0$ and let $X$ be an $\varepsilon$-net of the Euclidean sphere $S^d$. Let $T$ be any order on $X$. Then there exist two antipodal points $x$ and $\dot{x}$ and a snake $(x_1<_T\dots <_T x_{d+2})$, $x_i\in X$ for $1 \le i \le d+2$, such that the following holds: $d(x_i,x) \leq \varepsilon$ if $i$ is odd, $d(x_i,\dot{x}) \leq \varepsilon$ if $i$ is even. In particular, the diameter of this snake is at least $2-2\varepsilon$ and its width is $ \le 2\varepsilon$, and $$ \operatorname{OR}_{S^d} (d+1) = d+1. $$ \end{lemma} \begin{proof} Let $x$ be a point of the sphere, $s$ be a positive integer and $r>0$. Let us say that $x$ is {\it a tail point} with parameters $(s,r)$ if there exists a snake $x_1 <_T \dots <_T x_s$ in $X$ such that all points with odd indexes are at distance at most $r$ from $x$, and the points with even indexes are at distance at most $r$ from the point $\dot{x}$ antipodal to $x$. The set of all tail points with parameters $(s,r)$ we denote by ${\rm Tail}_{s,r}$. \begin{figure}[!htb] \centering \includegraphics[scale=.43]{pictures/zmejkasfera2.png} \caption{A tail point of a snake is presented by a black point. } \label{pic:zmejkasfera} \end{figure} \noindent Fix some positive $\delta$ much smaller than $\varepsilon$ and define open subsets $U_0,\dots,U_d$ of $S^d$ as follows. - Let $U_0$ be the set of all points at distance greater than $\delta/2$ from ${\rm Tail}_{2, \varepsilon}$. 
- Let $U_1$ be the set of points at distance smaller than $\delta$ from ${\rm Tail}_{2, \varepsilon}$ but at distance greater than $\delta/2$ from ${\rm Tail}_{3, \varepsilon + \delta}$. - Let $U_2$ be the set of points at distance smaller than $\delta$ from ${\rm Tail}_{3, \varepsilon + \delta}$, but at distance greater than $\delta/2$ from ${\rm Tail}_{4, \varepsilon + 2 \delta}$, \dots - Let $U_d$ be the set of points at distance less than $\delta$ from ${\rm Tail}_{d+1, \varepsilon + (d-1)\delta}$ and at distance greater than $\delta/2$ from ${\rm Tail}_{d+2, \varepsilon + d \delta}$. If some of the sets discussed above are empty, we use the convention that the distance to an empty set is $\infty$. It is clear that all the subsets $U_i$ are open. First suppose that $U_0$, $U_1$, \dots, $U_d$ do not cover the sphere $S^d$. Consider a point $x$ not belonging to their union. We observe that the sets ${\rm Tail}_{2, \varepsilon}$, ${\rm Tail}_{3, \varepsilon + \delta}$, \dots, ${\rm Tail}_{d+2, \varepsilon + d\delta}$ are all at distance no more than $\delta/2$ from this point $x$. In particular, the set ${\rm Tail}_{d+2, \varepsilon + d\delta}$ is not empty. If this happens for arbitrarily small $\delta$, then by a compactness argument there exists a point $x\in S^d$ and a snake on $d+2$ points with all the odd points at distance at most $\varepsilon$ from $x$ and all the even points at distance at most $\varepsilon$ from $\dot{x}$. And the claim of the lemma follows. For the proof of the lemma it is therefore sufficient to prove that (whatever $\delta$ we choose) the sets $U_0$, \dots, $U_d$ can not cover the sphere. Suppose that it is not the case: the union of the $U_i$ is equal to $S^d$. We recall that by the Generalized Theorem of Lusternik-Schnirelmann (see e.g. 
\cite{greene}) we know that \vspace{2pt} {\it If the union of $d+1$ sets, each of which is either open or closed, covers $S^d$, then for one of these sets there exists a point $x$ of $S^d$ such that both $x$ and the antipodal point $\dot{x}$ belong to this set}. \vspace{2pt} We apply this theorem to our open sets $U_i$. We deduce that if our sets $U_0$, $U_1$, \dots, $U_d$ cover the sphere, then there exists $k$ such that $U_k$ contains a pair of antipodal points. First assume that $k=0$. Consider $x$ such that both $x$ and $\dot{x}$ belong to $U_0$. Since $X$ is an $\varepsilon$-net of the sphere, there exists a point $a\in X$ at distance at most $\varepsilon$ from $x$, and there exists a point $b \in X$ at distance at most $\varepsilon$ from $\dot{x}$. Observe that if $a<_Tb$, then $x\in {\rm Tail}_{2, \varepsilon}$, and if $a>_Tb$, then $\dot{x}\in {\rm Tail}_{2, \varepsilon}$. We get a contradiction with the definition of the set $U_0$. Now suppose that for some $k>0$ the set $U_k$ contains both $x$ and $\dot{x}$. Since the distance from $x$ to ${\rm Tail}_{k+1, \varepsilon + (k-1)\delta}$ is at most $\delta$, we know that $x \in {\rm Tail}_{k+1, \varepsilon + k\delta}$. We also know that $\dot{x} \in {\rm Tail}_{k+1, \varepsilon + k\delta}$. Therefore, there exist snakes $a_1 <_T \dots <_T a_{k + 1}$ and $b_1 <_T \dots <_T b_{k+1}$ in $X$ such that all $a_i$ with even indexes $i$ and all $b_j$ with odd indexes $j$ are at distance $< \varepsilon + k\delta$ from $\dot{x}$, and all $a_j$ with odd indexes $j$ and all $b_i$ with even indexes $i$ are at distance $<\varepsilon + k\delta$ from $x$. If $a_{k+1}<_T b_{k+1}$, then $a_1 <_T \dots <_T a_{k + 1} <_T b_{k+1}$ is a snake and we have $x\in {\rm Tail}_{k+2, \varepsilon + k\delta}$. If $a_{k+1} >_T b_{k+1}$, then $\dot{x}\in {\rm Tail}_{k+2, \varepsilon + k\delta}$. In both cases we have obtained a contradiction with the definition of $U_k$, and we have therefore completed the proof of the lemma. 
\end{proof} A more combinatorial version of Lemma \ref{lem:sphere} is given in Lemma \ref{lem:triangulationsphere} below. We recall that the {\it octahedral triangulation} of a $d$-dimensional Euclidean sphere (centered at $0$) $S^d \subset \mathbb{R}^{d+1}$ is obtained by cutting the sphere by the $d+1$ coordinate subspaces of dimension $d$. \begin{lemma}\label{lem:triangulationsphere} Consider a centrally symmetric triangulation $K$ of the Euclidean sphere $S^d$, and assume that $K$ is a subdivision of the octahedral triangulation. Let $T$ be an order on the vertices of $K$. Then there exist two antipodal simplices $\Delta_1$ and $\Delta_2$ of this triangulation and a snake on $d+2$ points, with odd indexed points being vertices of $\Delta_1$ and even indexed points being vertices of $\Delta_2$. \end{lemma} \begin{proof} Consider the triangulation $K'$ which is the barycentric subdivision of $K$. By definition, there is a one-to-one correspondence between vertices of $K'$ and simplices of $K$; two vertices of $K'$ are adjacent if and only if for the two corresponding simplices of $K$ one is a subset of the other. Observe that the triangulation $K'$ is also symmetric and is also a subdivision of the octahedral triangulation of $S^d$. We write an integer at each vertex of $K'$ in the following way. Let $x$ and $x'$ be two antipodal vertices of $K'$. Let $\Delta$ and $\Delta'$ be the corresponding (antipodal) simplices of $K$. Consider the maximal $s$ such that there exists a snake on $s$ points, with all odd indexed vertices in $\Delta$ and all even indexed vertices in $\Delta'$, or vice versa. Choose one of these snakes. If its first vertex is in $\Delta$, assign the number $s-1$ to the point $x$, and $-(s-1)$ to the point $x'$. Analogously, if its first vertex is in $\Delta'$, we assign $-(s-1)$ to $x$ and $s-1$ to $x'$. Note that two such snakes with the maximal number of points can not start at opposite simplices. 
Indeed, otherwise we have snakes $x_1<_T\dots <_T x_s$ and $y_1<_T \dots <_T y_s$ such that the points $x_s$ and $y_s$ are in opposite simplices $\Delta$ and $\Delta'$. Since $s$ is the maximal length of snakes, $x_s >_T y_s$ (otherwise we can add $y_s$ to the first snake). In the same way we show that $x_s <_T y_s$ and obtain a contradiction. If there is a point with assigned number $d+1$ or larger, the claim of the lemma holds. If not, observe that we can apply Tucker's Lemma to a half of the sphere. We recall that this lemma (see e.g. \cite{matucek}[Thm 2.3.1]) claims that if the vertices of a triangulation of the $n$-ball which is antipodally symmetric on the boundary are labeled with the set $\{\pm 1,\pm 2,\dots,\pm n\}$ such that antipodal vertices on the boundary receive labels which sum to zero, then some edge has labels which sum to zero. Applying this lemma to a hemisphere (of $S^d$), the triangulation $K'$ and the labelling as above, we conclude that there exist two adjacent vertices of $K'$ with two opposite numbers (that is, $t$ and $-t$) written in them. Note that this means that there are snakes on the same number of points with starting points in two antipodal simplices of $K$. Then at least one of them can be extended to a snake on a larger number of points. This contradiction completes the proof of the lemma. \end{proof} \begin{rem} A statement similar to Lemma \ref{lem:triangulationsphere} can be obtained as a particular case of the Zig-Zag theorem from Simonyi, Tardos \cite{zigzag}[Sec. 3.3]. Given a topologically $t$-chromatic graph and an ordered coloring of its vertices, this theorem provides a sufficient condition for the existence of a complete bipartite subgraph with colors that appear alternately on the two sides of the bipartite subgraph (we do not recall the definition of a topologically $t$-chromatic graph and refer to \cite{zigzag} for the exact formulation of this result). 
To study triangulations of spheres one can consider graphs with edges connecting almost antipodal vertices of the triangulation and obtain a version of Lemma \ref{lem:triangulationsphere}. \end{rem} \begin{rem} Similar constructions appear in lower bounds for universal ordered protocols (and in other problems); see for example Theorem $9$ in \cite{Christodoulou1}. \end{rem} In the following definition we consider the $l_1$ metric on cubes. \begin{definition}\label{def:largescubes} Let us say that a metric space $(M, d_M)$ {\it weakly contains arbitrarily large cubes} of dimension $d$ if there exists a sequence $n_i$, tending to $\infty$, and a sequence of mappings of discrete cubes $f_i:[-n_i, n_i]^d\to M$ (that is, a sequence of mappings of the integer points of $[-n_i, n_i]^d$ to $M$) such that $$ \lim_{i\to \infty} \frac{{\rm minop}(f_i)}{{\rm maxne}(f_i)} = \infty. $$ Here ${\rm minop}(f_i)$ is the minimal distance $d_M(f_i(x), f_i(\dot{x}))$, where $x, \dot{x}$ are two antipodal points on the boundary of $[-n_i,n_i]^d$, and ${\rm maxne}(f_i)$ is the maximal distance $d_M(f_i(x_1), f_i(x_2))$, where $x_1$ and $x_2$ are two neighboring points (points with integer coordinates at $l_1$-distance one) in the cube $[-n_i, n_i]^d$. \end{definition} We also say that a metric space {\it weakly contains a sequence of arbitrarily large cubes} if for all $d\ge 1$ this space weakly contains arbitrarily large cubes of dimension $d$. Now we explain a sufficient condition for $\operatorname{OR}(k) = k$. \begin{corol}\label{cor:cubes} If a metric space $M$ weakly contains arbitrarily large cubes of dimension $d$, then for any order $T$ on $M$ it holds that $$ \operatorname{OR}_{M,T}(d) = d. $$ \noindent In particular, if a metric space $M$ weakly contains a sequence of arbitrarily large cubes, then for any order $T$ on $M$ the order breakpoint of $(M, T)$ is infinite. \end{corol} \begin{proof} Let $T$ be an order on $M$. 
Assume that there exists a sequence $n_i$, tending to $\infty$, and (for each $n_i$) a mapping $f_i$ of the integer points of the boundary of the cube $[-n_i, n_i]^d$ to $M$, satisfying $\frac{{\rm minop}(f_i)}{{\rm maxne}(f_i)} \to \infty. $ We want to prove that $(M,T)$ admits snakes on $d+1$ points of arbitrarily large elongation. Take a sufficiently large integer $n_i$ and denote the set of integer points of $[-n_i,n_i]^d$ by $K_i$. The cube $K_i$ contains $(2n_i+1)^d$ integer points. The boundary $S_i$ of $K_i$ is homeomorphic to a $(d-1)$-dimensional sphere, and $(2n_i+1)^d-(2n_i-1)^d$ integer points of $K_i$ belong to $S_i$. Observe that $S_i$ is subdivided into $2d(2n_i)^{d-1}$ unit cubes of dimension $d-1$, and this is a subdivision of the (image by homeomorphism of the) octahedral partition of the sphere. Each of the $(d-1)$-dimensional unit cubes can be divided into $(d-1)!$ simplices. Hence there exists a centrally symmetric triangulation of $S_i$ (consisting of the simplices above) such that the vertices of this triangulation belong to integer points of $K_i$ and all the simplices have diameter at most $d-1$. Consider the pullback $T'$ of the order $T$ with respect to the mapping $f_i$. By definition, $T'$ is an order on the vertices of $S_i$ such that $x<_{T'}y \Longleftrightarrow f_i(x) <_T f_i(y)$. By Lemma \ref{lem:triangulationsphere} we know that there exists a pair of antipodal simplices $\Delta_1$, $\Delta_2$ and a snake $(x_1,\dots,x_{d+1})$ on $d+1$ points oscillating between them. Consider the image in $M$ (under $f_i$) of this snake. Observe that $d_M(f_i(x_k),f_i(x_l)) \leqslant (d-1) {\rm maxne}(f_i) $ for any $k,l$ of the same parity. We assume that $n_i$ is large enough, so that $$ {\rm minop}(f_i)- 2 (d-1) {\rm maxne}(f_i) \ge 1. $$ Observe that this assumption guarantees that the images of the points of the snake are distinct points in $M$. 
Indeed, for the consecutive points $f_i(x_k), f_i(x_{k+1})$ the assumption above implies that $d_M(f_i(x_k), f_i(x_{k+1})) \geqslant 1.$ By the definition of the pullback, since $f_i(x_k) \ne f_i(x_{k+1})$ and $x_k<_{T'} x_{k+1}$, we know that $f_i(x_k)<_{T} f_i(x_{k+1})$. We conclude therefore that $$ f_i(x_1)<_{T} f_i(x_{2}) <_T \dots <_{T} f_i(x_{d})<_{T} f_i(x_{d+1}). $$ Hence, $(f_i(x_1),\dots, f_i(x_{d+1}))$ is indeed a snake, its width is no more than $(d-1){\rm maxne}(f_i)$ and its diameter is at least ${\rm minop}(f_i)- 2 (d-1) {\rm maxne}(f_i)$. Its elongation is at least $$ \frac{ {\rm minop}(f_i) - 2 (d-1) {\rm maxne}(f_i)}{{\rm maxne}(f_i)}. $$ From the definition of weak imbeddings of cubes we conclude that this elongation tends to infinity, and this completes the proof of the corollary. \end{proof} A particular case of Corollary \ref{cor:cubes} is when $G$ is such that for all $d$ there exists a uniform imbedding of $\mathbb{Z}^d$ in $G$. This condition holds in particular for any group $G$ that contains $\mathbb{Z}^\infty$ as a subgroup, for example $G = \mathbb{Z} \wr \mathbb{Z}$. Then for any order $T$ on $G$ it holds that $$ \operatorname{OR}_{G,T}(k) = k $$ for all $k$. We recall that some known examples of groups of infinite Assouad-Nagata dimension do not admit uniform imbeddings of $\mathbb{Z}^d$. For example, if $G=\mathbb{Z}^2\wr A$, where $A$ is a finite group of cardinality $\ge 2$, then the Assouad-Nagata dimension of $G$ is infinite, see \cite{nowak}, who also studied other amenable wreath products; for the general case of any base group of superlinear growth see \cite{BrodskiyDydakLang}[Cor 5.2]. On the other hand, the asymptotic dimension of $G$ is $2$, see \cite{nowak}, see also \cite{BrodskiyDydakLang}[Thm 4.5] for upper bounds on the dimension control function. In particular, $G$ can not contain a uniformly imbedded copy of $\mathbb{Z}^3$. The wreath product examples mentioned above are discussed in the following subsections. 
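To make the simplest example concrete: for $M=\mathbb{Z}^2$ with the $l_1$ metric and $f_i$ the identity map of the discrete cube $[-n,n]^2$, one has ${\rm minop}(f_i)=2n$ and ${\rm maxne}(f_i)=1$, so the ratio in Definition \ref{def:largescubes} grows linearly. The brute-force sketch below (our illustration; the function names mirror the definition) confirms this for small $n$.

```python
from itertools import product

def l1(x, y):
    # The l_1 distance on Z^2.
    return sum(abs(a - b) for a, b in zip(x, y))

def minop_maxne(n):
    # Identity imbedding of the discrete cube [-n, n]^2 into Z^2.
    cube = list(product(range(-n, n + 1), repeat=2))
    boundary = [p for p in cube if max(abs(p[0]), abs(p[1])) == n]
    # minop: minimal distance between images of antipodal boundary points.
    minop = min(l1(p, tuple(-c for c in p)) for p in boundary)
    # maxne: maximal distance between images of l1-neighbouring points.
    maxne = max(l1(p, q) for p in cube for q in cube if l1(p, q) == 1)
    return minop, maxne

# For the identity map, minop = 2n and maxne = 1, so the ratio is 2n.
for n in (1, 2, 5, 10):
    minop, maxne = minop_maxne(n)
    assert minop == 2 * n and maxne == 1
```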
\subsection{Wreath products} \begin{lemma}[Sequences of arbitrarily large cubes in wreath products]\label{lem:wcubeswr} Let $G =A \wr B$, where $A$ is a finitely generated group of super-linear growth and $B$ is a group of cardinality at least two. Then $G$ contains weakly a sequence of arbitrarily large cubes. \end{lemma} The argument we explain below works for all $B$, but we point out that our main interest is the case when $B$ is finite. (If $B$ is infinite, observe that for all $d\ge 1$ $G$ contains uniformly $\mathbb{Z}_+^d$, and the claim of the lemma follows). The proof below is reminiscent of the argument of Theorem 4.1 in \cite{BrodskiyDydakLang}. \begin{proof} Observe that the claim of the lemma does not depend on the generating set in the wreath product. Fix a generating set $S_A$ of $A$ and a generating set $S_B$ of $B$, and consider a standard generating set $S$ of $G$, which one can identify with the union of $S_A$ and $S_B$. Fix $d\ge 1$. We are going to prove that $G$ contains weakly arbitrarily large cubes of dimension $d$. We first observe that for infinitely many $n$ there exist $d$ disjoint subsets $\Omega^n_1$, $\Omega^n_2$, \dots, $\Omega^n_d$ inside the ball $B_{A,S_A}(e,n)$ of radius $n$ such that the following holds. The cardinality of each set $\Omega^n_i$ is $n$ and for each $i$, $1 \le i \le d$, the length of any path visiting all points of $\Omega^n_i$ is $\ge \frac{1}{6} (n-1) \sqrt{n}$. We have already mentioned an elementary case of the Polynomial Growth theorem, due to \cite{justin71}: if $A$ is not virtually cyclic, then the growth function of $A$ satisfies $v_{A,S_A}(n) \ge n(n+1)/2$. In this case there exist infinitely many $n$ such that $v_{A,S_A}(n+\sqrt n)/v_{A,S_A}(\sqrt{n}/6) \ge n$. Take such an $n$, put $\varepsilon = \sqrt{n}/2$, and choose an $(\varepsilon, \varepsilon)$-net of the ball $B_{A,S_A}(e,n)$ in $A$. Place a ball of radius $\varepsilon/3$ around each point of the net.
In each ball choose $d$ distinct elements. If $n$ is sufficiently large, then $v_{A,S_A}(\sqrt{n}/6)$ is greater than $d$, and therefore such a choice is possible. Observe also that the number of the points in such a net is at least $v_{A,S_A}(n+\sqrt n)/v_{A,S_A}(\sqrt{n}/6)\geq n$. In each ball take one of the chosen points and denote the union of these points by $\Omega^n_1$. Take one more chosen point in each ball and denote the union of these points by $\Omega^n_2$, and so on. It is clear that the distance between any two points of $\Omega_i^n$ is at least $\sqrt{n}/6$, and hence the length of any path visiting all points of $\Omega^n_i$ is at least $\frac{1}{6} (n-1) \sqrt{n}$. \begin{figure}[!htb] \centering \includegraphics[scale=.85]{pictures/largecubes2.png} \caption{The sets $\Omega^n_i$, here shown for $n=12$, $d=3$, $i=1, 2, 3$. The points of the same colour correspond to the same set. The image of $(9,3,5)$ under $\rho$ is shown; this is a configuration taking value $1$ in the first $9$ yellow points, in the first $3$ red points and in the first $5$ green points.} \label{pic:lc} \end{figure} Enumerate the points of the $(\varepsilon, \varepsilon)$-net in an arbitrary order and consider the restriction of this order to $\Omega_i^n$. Fix a non-identity element $b$ in the already fixed generating set of $B$. Consider a map $\rho: \{0,1,\dots,n\}^d \to A \wr B$. A point with coordinates $(z_1, z_2, \dots, z_d)$ is sent to $(e_A,f)$ where $f$ is the configuration which takes value $b$ in the first $z_i$ elements of $\Omega^n_i$, $1\le i \le d$, and takes value $e_B$ elsewhere. See Figure \ref{pic:lc}. The figure shows a possible choice for $A=\mathbb{Z}^2$, $B=\mathbb{Z}/2\mathbb{Z}$. We use additive notation in $B$, and the non-identity element $b$ is denoted by $1$. If $u, v$ are points in the cube at distance $1$ in the $l_1$-metric of $\mathbb{Z}^d$, then the distance between $\rho(u)$ and $\rho(v)$ in the word metric of the wreath product is at most $2n+1$.
Indeed, observe that in this case the configurations of $\rho(u)$ and $\rho(v)$ differ at one point, which we denote by $x$. To make this change, it is sufficient to go from the identity to $x$, make a switch at $x$, and go back to the identity. Observe that if we take two antipodal points $w, \dot{w}$ in the boundary of the cube, then the distance between $\rho(w)$ and $\rho(\dot{w})$ is at least $\frac{1}{6} \sqrt{n} (n-1)$. Indeed, observe that if we have a pair of antipodal points on the boundary of the cube, then there exists $i$ such that the $i$-th coordinate of one of them is $n$ and of the other is $0$. Therefore, to move between the images of these antipodal points, we need to visit all points of $\Omega^n_i$, and hence the length of such a path is at least $\sqrt{n} (n-1)/6$. This completes the proof of the lemma. \end{proof} \begin{corol}[Order ratio function for wreath products] Let $G=A \wr B$ be a wreath product of $A$ and $B$, where $\#A = \infty, \#B > 1$. Then either we have a logarithmic upper bound for the order ratio function (and this happens if and only if $G$ has finite $AN$-dimension), or the order ratio function is linear (moreover, the order breakpoint is infinite). \end{corol} \begin{proof} Indeed, by a result of \cite[Thm 5.1 and Cor 5.2]{BrodskiyDydakLang} the $AN$-dimension of $A\wr B$ is finite if and only if $A$ is of linear growth and $B$ is finite. The cited theorem deals with the case when $B$ is finite, and it is straightforward that the $AN$-dimension is infinite when $A$ and $B$ are both infinite, and the dimension is finite when $A$ and $B$ are both finite. By Thm \ref{thm:nagata} we know that if the $AN$-dimension is finite, then there is a logarithmic bound for the order ratio function.
On the other hand, if $A$ is of super-linear growth or if $A$ and $B$ are infinite, we know by Lemma \ref{lem:wcubeswr} that $A\wr B$ contains weakly arbitrarily large cubes (of arbitrarily large dimension), and hence by Corollary \ref{cor:cubes} the order breakpoint is infinite. \end{proof} \subsection{Product of tripods} In Thm \ref{thm:nagata} we proved that if a metric space $M$ has $AN$-dimension $d$ then $\operatorname{Br}(M) \leq 2d+2$. Now we will show that this estimate is close to optimal. \begin{prop} Let $M$ be a Cartesian product of $d$ tripods. Then for any order $T$ on $M$ it holds that $\operatorname{OR}_{M,T}(2d) = 2d$; in other words, $\operatorname{Br}(M) \geq 2d+1$. \end{prop} \begin{proof} There exists a continuous map $f:S^{2d-1} \to M$ such that any two opposite points in $S^{2d-1}$ map to different points. Indeed, $S^{2d-1}$ is homeomorphic to the boundary of the product $D^d$ of $d$ $2$-dimensional unit disks. There is a continuous mapping $h$ from the unit disk $D$ to the tripod such that $h$ maps any two antipodal points on the boundary of $D$ to points at distance $1$. \begin{tikzpicture}[scale = 0.5, every node/.style={scale=0.8}] \draw[pattern=north west lines] (0,1) circle(3); \foreach \x in {0,...,5}{ \node at ({3.2*cos(60 * \x + 30)}, {3.2*sin(60 * \x + 30)+1}) {$A_{\x}$}; } \draw[->][thick] (6.5,1.1) -- (8.5,1.1); \begin{scope}[shift = {(13,0)}] \draw[ultra thick] (0,0) -- (0,3); \draw[ultra thick] (0,0) -- (2.5,-1); \draw[ultra thick] (0,0) -- (-2.5,-1); \draw (-0.2,3.2) node[above]{$A_1$} -- (-0.2, 0.2) node[above left]{$A_2$} -- (-2.6, -0.8) to[out = -130, in = 190] (-2.4, -1.2) node[below left] {$A_3$} -- (0, -0.25)node[below] {$A_4$} -- (2.4, -1.2) node[below right] {$A_5$} to [out = -20, in = -10] (2.6, -0.8) --(0.2, 0.2) node[above right] {$A_0$} -- (0.2, 3.2) to [out = 90, in = 90] cycle; \end{scope} \end{tikzpicture} Denote the mapping $h^d:D^d \to M$ by $f$.
If $x$ and $x'$ are two opposite points in the boundary of $D^d$ then their projections to one of the disks are opposite points of the boundary of this disk, and the projections of $f(x)$ and $f(x')$ to the corresponding tripod are two points at distance 1. Let $T$ be an order on $M$ and let $T'$ be a pullback of $T$ to $S^{2d-1}$. By Lemma \ref{lem:sphere} for any $\varepsilon$ there are two opposite points $x$ and $x'$ of $S^{2d-1}$ and a snake $x_1 <_{T'} \dots <_{T'} x_{2d+1}$ such that points with odd indices are in $B(x,\varepsilon)$ and points with even indices are in $B(x',\varepsilon)$. It is clear that the points $f(x_i)$ form a snake of large elongation in $(M,T)$, so $\operatorname{OR}_{M,T}(2d+1) = 2d$. \end{proof}
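As an aside, the separation and covering properties of the $(\varepsilon,\varepsilon)$-net used in the proof of Lemma \ref{lem:wcubeswr} can be illustrated by a toy computation. The sketch below is our own illustration, not part of the proofs: it takes $A = \mathbb{Z}^2$ with the $l_1$ word metric and hypothetical parameters $n = 36$, $\varepsilon = 6$, and builds a net greedily.

```python
# Greedy (eps, eps)-net in the l1-ball of radius n in Z^2 (toy parameters):
# by construction, net points are pairwise >= eps apart (separation), and
# every point of the ball is within eps of some net point (covering).
def l1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def greedy_net(n, eps):
    ball = [(x, y) for x in range(-n, n + 1)
                   for y in range(-n, n + 1) if abs(x) + abs(y) <= n]
    net = []
    for p in ball:
        # keep p only if it is eps-separated from all points chosen so far
        if all(l1(p, q) >= eps for q in net):
            net.append(p)
    return ball, net

ball, net = greedy_net(36, 6)
sep = all(l1(p, q) >= 6 for i, p in enumerate(net) for q in net[i + 1:])
cov = all(any(l1(p, q) <= 6 for q in net) for p in ball)
print(len(net), sep, cov)   # separation and covering both hold
```

In the proof itself the parameters are tied to the growth function ($\varepsilon = \sqrt{n}/2$, with points of the $\Omega^n_i$ chosen in $\varepsilon/3$-balls around the net), which is what forces the long-path property; the toy code only demonstrates the net axioms.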
\section{Introduction} The presence of a characteristic quantum stress term in the fluid formulation of the Schr\"odinger equation has been known since the formulation was introduced by Madelung in 1926, immediately after the appearance of the equation \cite{Madelung-1926}. Ignoring the new term of quantum origin, which plays a role only on small scales (smaller than the quantum Jeans scale), the fluid is the same as a zero-pressure medium, thus behaving as the cold dark matter. The role of the stress term with extremely small mass was proposed as the fuzzy dark matter in \cite{Hu-Barkana-Gruzinov-2000} and is now an active field of research in the dark matter study with interesting observational consequences, see \cite{FDM-review} for reviews. In relativistic perturbation theory, the fluid equation with the same quantum stress is derived from the Klein-Gordon equation of a massive scalar field combined with the Einstein equation only assuming the Compton wavelength smaller than the horizon scale \cite{Hwang-Noh-2009, Hwang-Noh-2021}. The result is {\it the same} as the non-relativistic one; scales larger than the Compton wavelength are demanded for consistency in the relativistic calculation based on the Einstein-Klein-Gordon system, whereas such a condition is not needed in the non-relativistic analysis based on the Schr\"odinger-Poisson system, see below Eq.\ (\ref{relation-ZSG-2}) and \cite{Hwang-Noh-2021}. Derivation of the density perturbation equation known in the non-relativistic limit is possible {\it only} in certain gauge conditions: the comoving gauge, the zero-shear gauge and the uniform-curvature gauge \cite{Hwang-Noh-2021}. Although the simple non-relativistic equation for density perturbation is derived in three different gauge conditions, {\it only} in the comoving gauge does the equation coincide with that of the zero-pressure fluid system like the cold dark matter.
In the other two gauge conditions, the relativistic equations of the zero-pressure fluid are more complicated, and {\it only} on the sub-horizon scale do the equations coincide \cite{Hwang-Noh-2021}. A new quantum effect of the massive scalar field (ultralight axion or fuzzy dark matter) with an astronomically observable consequence was proposed by Khmelnitsky and Rubakov in \cite{Khmelnitsky-Rubakov-2014}. While we ignored the oscillatory terms in deriving the quantum stress by taking the time-average, the new effect is related to the oscillatory gravitational potential caused by the oscillating pressure perturbation with the Compton frequency. This is a purely relativistic effect caused by the relativistic pressure, without a non-relativistic counterpart. Here, our aim is to derive the oscillating potential equation in the context of relativistic cosmological perturbation theory, see Sec.\ \ref{sec:QO}. The situation differs from \cite{Khmelnitsky-Rubakov-2014}, which concerns {\it nonlinear} density inhomogeneity in a non-expanding medium. We clarify that the analysis in \cite{Khmelnitsky-Rubakov-2014} is incomplete in not determining the phase of the oscillation and in ignoring the equation of motion. Here, our study is confined to the linear perturbation theory in cosmology, but considers the full equations in both the field and fluid formulations. Determination of the phase leads to a new oscillation term which could be more relevant in cosmological observation. The equation of motion also leads to a condition that the analysis is valid on scales larger than the Compton wavelength. Comparison with \cite{Khmelnitsky-Rubakov-2014} will be made in Sec.\ \ref{sec:discussion}. In Sec.\ \ref{sec:comparison} we compare equations for the perturbed density, velocity and gravitational potential for three cases: the axion fluid and the zero-pressure fluid, both in the zero-shear gauge (ZSG), and the Newtonian fluid.
The zero-pressure fluid in the ZSG exactly coincides with the Newtonian fluid for the perturbed velocity and the gravitational potential, but only in the sub-horizon limit for the perturbed density. In contrast, ignoring the quantum stress term, the axion density perturbation in the ZSG exactly {\it coincides} with the Newtonian one, but both the perturbed velocity and gravitational potential {\it differ} from the Newtonian ones. Therefore, for the axion, it is convenient to express the result using the density perturbation variable, see Eq.\ (\ref{varphi_chi-eq-axion}). In Sec.\ \ref{sec:equations} we set up our notation with a summary of the basic equations. In Sec.\ \ref{sec:field-pert} we derive the density and potential perturbation equations with the quantum stress and the quantum oscillation, respectively: these are Eqs.\ (\ref{delta_chi-eq-axion}) and (\ref{varphi_chi-eq-axion}). Section \ref{sec:discussion} is a discussion including a comparison with \cite{Khmelnitsky-Rubakov-2014}. \section{Basic equations} \label{sec:equations} We consider a {\it flat} Friedmann cosmology with linear order scalar perturbations. Our metric convention is \cite{Bardeen-1988, Noh-Hwang-2004, Hwang-Noh-2013} \bea & & g_{00} = - a^2 \left( 1 + 2 \alpha \right), \quad g_{0i} = - a \chi_{,i}, \nonumber \\ & & g_{ij} = a^2 \left( 1 + 2 \varphi \right) \delta_{ij}, \label{metric} \eea where $x^0 = \eta$ with $c dt \equiv a d \eta$, and $a(t)$ is the cosmic scale factor. We imposed a spatial gauge condition which completely removes the spatial gauge degree of freedom; in this sense all perturbation variables can be regarded as spatially gauge-invariant \cite{Bardeen-1988}. The fluid quantities are identified based on the time-like four-vector $u_a$ with $u^a u_a \equiv -1$. In the energy frame, setting the flux four-vector $q_a \equiv 0$, and ignoring the anisotropic stress, we have the energy-momentum tensor \cite{Ellis-1971} \bea & & T_{ab} = \mu u_a u_b + p \left( g_{ab} + u_a u_b \right).
\label{Tab-identification} \eea For a massive scalar field, we have \bea & & T_{ab} = \phi_{,a} \phi_{,b} - {1 \over 2} \left( \phi^{;c} \phi_{,c} + {m^2 c^2 \over \hbar^2} \phi^2 \right) g_{ab}. \label{Tab-MSF} \eea To linear order in perturbations, we set $\mu \rightarrow \mu + \delta \mu$, $p \rightarrow p + \delta p$, $\phi \rightarrow \phi + \delta \phi$, $u_i \equiv a v_i/c$, and $v_i \equiv - v_{,i}$. To the background order, we have \bea & & H^2 = {8 \pi G \over 3 c^2} \mu + {\Lambda c^2 \over 3}, \quad \dot \mu = - 3 H \left( \mu + p \right), \label{BG-eqs} \eea where $H \equiv \dot a/a$ and $\Lambda$ is the cosmological constant. A complete set of perturbation equations is \cite{Bardeen-1988, Noh-Hwang-2004, Hwang-Noh-2013} \bea & & \kappa \equiv 3 H \alpha - 3 \dot \varphi - c {\Delta \over a^{2}} \chi, \label{eq1} \\ & & {4 \pi G \over c^2} \delta \mu + H \kappa + c^2 {\Delta \over a^{2}} \varphi = 0, \label{eq2} \\ & & \kappa + c {\Delta \over a^{2}} \chi - {12 \pi G \over c^4} a \left( \mu + p \right) v = 0, \label{eq3} \\ & & \dot \kappa + 2 H \kappa + \left( 3 \dot H + c^2 {\Delta \over a^{2}} \right) \alpha = {4 \pi G \over c^2} \left( \delta \mu + 3 \delta p \right), \label{eq4} \\ & & \varphi + \alpha - {1 \over c} \left( \dot \chi + H \chi \right) = 0, \label{eq5} \\ & & \delta \dot \mu + 3 H \left( \delta \mu + \delta p \right) = \left( \mu + p \right) \left( \kappa - 3 H \alpha + {\Delta \over a} v \right), \label{eq6} \\ & & {1 \over a^{4}} \left[ a^4 \left( \mu + p \right) v \right]^{\displaystyle\cdot} = {c^2 \over a}\left[ \delta p + \left( \mu + p \right) \alpha \right].
\label{eq7} \eea For a massive scalar field, the fluid quantities can be identified, using Eqs.\ (\ref{Tab-identification}) and (\ref{Tab-MSF}), as \cite{Hwang-Noh-2021} \bea & & \mu = {1 \over 2 c^2} \left( \dot \phi^2 + \omega_c^2 \phi^2 \right), \quad p = {1 \over 2 c^2} \left( \dot \phi^2 - \omega_c^2 \phi^2 \right), \label{fluid-MSF-BG} \\ & & \delta \mu = {1 \over c^2} \left( \dot \phi \delta \dot \phi - \dot \phi^2 \alpha + \omega_c^2 \phi \delta \phi \right), \quad \left( \mu + p \right) v = {1 \over a} \dot \phi \delta \phi, \nonumber \\ & & \delta p = {1 \over c^2} \left( \dot \phi \delta \dot \phi - \dot \phi^2 \alpha - \omega_c^2 \phi \delta \phi \right), \label{fluid-MSF-pert} \eea where $\omega_c \equiv m c^2/\hbar = c/\lambdabar_c$ is the Compton frequency. The equation of motion gives \bea & & \ddot \phi + 3 H \dot \phi + \omega_c^2 \phi = 0, \label{EOM-BG} \\ & & \delta \ddot \phi + 3 H \delta \dot \phi - c^2 {\Delta \over a^2} \delta \phi + \omega_c^2 \delta \phi \nonumber \\ & & \qquad = \dot \phi \left( \kappa + \dot \alpha \right) + \left( 2 \ddot \phi + 3 H \dot \phi \right) \alpha. \label{EOM-pert} \eea The above set of perturbation equations is presented without imposing the temporal gauge (hypersurface or slicing) condition, and all variables used are spatially gauge invariant \cite{Bardeen-1988}. We will mainly consider the zero-shear gauge (ZSG) which sets $\chi \equiv 0$ as the gauge condition \cite{Bardeen-1980, Bardeen-1988}. Under this gauge condition the gauge degrees of freedom are completely fixed, and each variable in this gauge condition has a unique gauge-invariant combination of variables. 
We will use explicitly gauge-invariant notations \bea \varphi_\chi \equiv \varphi - {1 \over c} H \chi, \quad v_\chi \equiv v - {c \over a} \chi = - {c \over a} \chi_v, \quad {\it etc.}, \label{GI} \eea where $\varphi_\chi$ is a unique gauge-invariant combination which is the same as $\varphi$ in the ZSG setting $\chi \equiv 0$; for gauge transformation properties, see Eq.\ (252) in \cite{Noh-Hwang-2004}. In the case of the field, the fluid quantities constructed from the axion (an oscillating scalar field) qualitatively differ from those of an ordinary fluid with a conventional equation of state. From Eq.\ (\ref{eq4}), using Eqs.\ (\ref{eq1}), (\ref{eq2}) and (\ref{eq5}), we have \bea & & {1 \over a^3} \left[ a^2 \left( a \varphi_{\chi} \right)^{\displaystyle{\cdot}} \right]^{\displaystyle{\cdot}} = - {4 \pi G \over c^2} \left[ \delta p_{\chi} + (\mu + p) \alpha_{\chi} \right]. \label{varphi_chi-p_chi-eq} \eea For the axion, the right-hand side is purely oscillatory with a quantum nature, see Eq.\ (\ref{varphi_chi-eq-axion}). On the other hand, from Eq.\ (\ref{eq1}), using Eqs.\ (\ref{eq2}) and (\ref{eq5}), we have \bea & & 3 H {1 \over a} \left( a \varphi_{\chi} \right)^{\displaystyle{\cdot}} - c^2 {\Delta \over a^2} \varphi_{\chi} = 4 \pi G \delta \varrho_\chi, \label{varphi_chi-eq-density} \eea which is the $\varphi_{\chi}$ equation supported by the density perturbation; in the {\it sub-horizon} limit, the first term is negligible and we have the Poisson equation in the ZSG. Now, our task is to derive $\delta p_\chi + (\mu + p) \alpha_\chi$ in Eq.\ (\ref{varphi_chi-p_chi-eq}) in the case of the axion as fuzzy dark matter. We will also show that, for the axion, $\delta \varrho_\chi$ satisfies the non-relativistic density perturbation equation known for the fuzzy dark matter. We will show that the relativistic analysis is valid on {\it all} scales larger than the Compton wavelength \cite{Hwang-Noh-2021}.
\section{Axion perturbation} \label{sec:field-pert} Following \cite{Khmelnitsky-Rubakov-2014}, we take an {\it ansatz} \bea & & \phi (t) + \delta \phi ({\bf x}, t) = a^{-3/2} [ A + \delta A ({\bf x}, t) ] \nonumber \\ & & \qquad \times \cos{[\omega_c t + \theta + \delta \theta ({\bf x}, t) ]}, \label{ansatz} \eea with $A$ and $\theta$ constants in time and space. This is {\it the same} ansatz used in \cite{Hwang-Noh-2009, Hwang-Noh-2021}, and also {\it the same} as the Klein transformation used to derive the Schr\"odinger equation from the Klein-Gordon equation \cite{Klein-1927, Chavanis-Matos-2017}, see Eqs.\ (38) and (54) in \cite{Hwang-Noh-2021}. To the linear order in perturbation, we have \bea & & \phi = a^{-3/2} A \cos{(\omega_c t + \theta )}, \\ & & \delta \phi = a^{-3/2} \left[ \delta A \cos{(\omega_c t + \theta)} - A \delta \theta \sin{(\omega_c t + \theta)} \right]. \eea {\it Assuming} $H/\omega_c = \hbar H/(m c^2) = \lambdabar_c / \lambda_H \ll 1$, with $\lambda_H \equiv c/H$ the horizon scale and $\lambdabar_c \equiv \hbar/(mc)$ the Compton wavelength, the fluid quantities in Eqs.\ (\ref{fluid-MSF-BG}) and (\ref{fluid-MSF-pert}) give \bea & & \mu = {\omega_c^2 \over 2 c^2 a^3} A^2, \quad p = - \mu \cos{(2 \omega_c t + 2 \theta)}, \label{fluid-axion-BG}\\ & & {\delta \mu \over \mu} + (1 + {\rm w}) \alpha = 2 {\delta A \over A}, \nonumber \\ & & {\delta p \over \mu} + (1 + {\rm w}) \alpha = - 2 {\delta A \over A} \cos{(2 \omega_c t + 2 \theta)} \nonumber \\ & & \qquad + 2 \delta \theta \sin{(2 \omega_c t + 2 \theta)}, \nonumber \\ & & v = {c^2 \over a \omega_c} \left( \delta \theta - { \sin{(2 \omega_c t + 2 \theta)} \over 1 - \cos{(2 \omega_c t + 2 \theta)} } {\delta A \over A} \right), \label{fluid-axion-pert} \eea where we set ${\rm w} \equiv p/\mu$. 
The equation of motion in Eq.\ (\ref{EOM-pert}) gives \bea 2 {\delta \dot A \over A} - {c^2 \over \omega_c} {\Delta \over a^2} \delta \theta = \kappa, \quad 2 \delta \dot \theta + {c^2 \over \omega_c} {\Delta \over a^2} {\delta A \over A} = 2 \omega_c \alpha, \label{EOM-axion} \eea where we used $\dot \alpha \sim H \alpha \ll \omega_c \alpha$. With $A$ and $\theta$ constant in the ansatz, the background equation of motion in Eq.\ (\ref{EOM-BG}) is valid to $(H/\omega_c)^2$ order. Now we have the {\it complete} set of equations for axion perturbation without imposing the gauge condition: these are Eqs.\ (\ref{eq1})-(\ref{eq7}), (\ref{fluid-axion-pert}) and (\ref{EOM-axion}). Before proceeding, we examine the $\delta p$-relation in Eq.\ (\ref{fluid-axion-pert}). Without imposing the gauge condition, and without taking the time-average of $\delta p$, $v$, and $p$, Eq.\ (\ref{fluid-axion-pert}) gives \bea & & {\delta p \over \mu} + (1 + {\rm w}) \alpha = \delta + ( 1 + {\rm w} ) \alpha \nonumber \\ & & \qquad + 2 {\omega_c \over c^2} a v \sin{(2 \omega_c t + 2 \theta)}. \eea This is not wrong, but the expression is misleading given the spirit of our ansatz in Eq.\ (\ref{ansatz}), where $\delta A$ and $\delta \theta$ are supposed to be slowly varying compared with the Compton oscillation. Thus, the right-hand side of this expression should be purely oscillatory; indeed $v$ contains the oscillatory part canceling the first two non-oscillatory terms. Thus, for a more proper expression, we may take the time-average (thus ignore the oscillatory part) of the $v$-relation in Eq.\ (\ref{fluid-axion-pert}). In this way, we have $\delta \theta = (\omega_c / c^2) a v$, and \bea & & {\delta p \over \mu} + (1 + {\rm w}) \alpha = - 2 {\delta A \over A} \cos{(2 \omega_c t + 2 \theta)} \nonumber \\ & & \qquad + 2 {\omega_c \over c^2} a v \sin{(2 \omega_c t + 2 \theta)}. \label{delta-p} \eea In the following, we {\it take} the time-average for $v$ and for the background pressure, thus $p = 0$.
\subsection{Zero-pressure fluid vs. axion fluid} \label{sec:comparison} Before proceeding to determine the coefficients of Eq.\ (\ref{delta-p}), thus making Eq.\ (\ref{varphi_chi-p_chi-eq}) a closed form, here we compare the axion fluid in the ZSG with the zero-pressure fluid in the ZSG (thus, relativistic) and the one in Newtonian context. There are subtle differences among these three formulations. \subsubsection{Axion fluid} We further {\it take} the time-average for $\delta p$, thus $\delta p/\mu = - \alpha$. For this axion, Eqs.\ (\ref{eq1})-(\ref{eq7}) become \bea & & \kappa \equiv 3 H \alpha - 3 \dot \varphi - c {\Delta \over a^{2}} \chi, \label{eq1-axion} \\ & & {4 \pi G \over c^2} \delta \mu + H \kappa + c^2 {\Delta \over a^{2}} \varphi = 0, \label{eq2-axion} \\ & & \kappa + c {\Delta \over a^{2}} \chi = {12 \pi G \over c^4} a \mu v, \label{eq3-axion} \\ & & \dot \kappa + 2 H \kappa + c^2 {\Delta \over a^{2}} \alpha = {4 \pi G \over c^2} \delta \mu, \label{eq4-axion} \\ & & \varphi + \alpha = {1 \over c} \left( \dot \chi + H \chi \right), \label{eq5-axion} \\ & & \dot \delta = \kappa + {\Delta \over a} v, \label{eq6-axion} \\ & & {1 \over a} \left( a v \right)^{\displaystyle\cdot} = 0. \label{eq7-axion} \eea The structure of the equations is quite different compared with the zero-pressure fluid case in Eqs.\ (\ref{eq1})-(\ref{eq7}). In the ZSG, from Eq.\ (\ref{eq5-axion}) we have $\alpha_\chi = - \varphi_\chi$. 
From Eqs.\ (\ref{eq2-axion}) and (\ref{eq3-axion}), Eq.\ (\ref{eq7-axion}), Eqs.\ (\ref{eq1-axion}) and (\ref{eq3-axion}), and Eqs.\ (\ref{eq3-axion}) and (\ref{eq6-axion}), respectively, we have \bea & & c^2 {\Delta \over a^2} \varphi_\chi + 4 \pi G \varrho \delta_\chi = - {12 \pi G \varrho \over c^2} H a v_\chi, \label{Poisson-eq-ZSG-axion} \\ & & \dot v_\chi + H v_\chi = 0, \label{dot-v-eq-ZSG-axion} \\ & & \dot \varphi_\chi + H \varphi_\chi = - {4 \pi G \varrho \over c^2} a v_\chi, \label{dot-varphi_chi-eq-ZSG-azion} \\ & & \dot \delta_\chi = {\Delta \over a} v_\chi + {12 \pi G \varrho \over c^2} a v_\chi. \label{dot-delta-eq-ZSG-axion} \eea From these we have \bea & & {1 \over a^3} \left[ a^2 \left( a \varphi_{\chi} \right)^{\displaystyle{\cdot}} \right]^{\displaystyle{\cdot}} = 0, \label{ddot-varphi_chi-eq-axion} \\ & & \ddot \delta_\chi + 2 H \dot \delta_\chi - 4 \pi G \varrho \delta_\chi = - c^2 {\Delta \over a^2} \alpha_\chi. \label{ddot-delta_chi-eq-axion} \eea Equations (\ref{dot-v-eq-ZSG-axion}) and (\ref{ddot-varphi_chi-eq-axion}) are closed form equations for $v_\chi$ and $\varphi_\chi$, and $\alpha_\chi$ in Eq.\ (\ref{ddot-delta_chi-eq-axion}) gives the quantum stress (thus, pure quantum effect) and will be determined in terms of $\delta_\chi$ later, see Eq.\ (\ref{delta_chi-eq-axion}). 
\subsubsection{Relativistic zero-pressure fluid} Now, in the case of zero-pressure fluid, we still have $\alpha_\chi = - \varphi_\chi$, and from Eqs.\ (\ref{eq2}) and (\ref{eq3}), Eq.\ (\ref{eq7}), Eqs.\ (\ref{eq1}) and (\ref{eq3}), and Eqs.\ (\ref{eq3}) and (\ref{eq6}), respectively, we have \bea & & c^2 {\Delta \over a^2} \varphi_\chi + 4 \pi G \varrho \delta_\chi = - {12 \pi G \varrho \over c^2} H a v_\chi, \\ & & \dot v_\chi + H v_\chi = - {c^2 \over a} \varphi_\chi, \label{dot-v-eq-ZSG-fluid} \\ & & \dot \varphi_\chi + H \varphi_\chi = - {4 \pi G \varrho \over c^2} a v_\chi, \\ & & \dot \delta_\chi = {\Delta \over a} v_\chi + {12 \pi G \varrho \over c^2} a v_\chi + 3 H \varphi_\chi. \label{dot-delta-eq-ZSG-fluid} \eea Thus, Eqs.\ (\ref{dot-v-eq-ZSG-fluid}) and (\ref{dot-delta-eq-ZSG-fluid}) differ from Eqs. (\ref{dot-v-eq-ZSG-axion}) and (\ref{dot-delta-eq-ZSG-axion}). From these we have \bea & & {1 \over a^2} \left[ a \left( a v_{\chi} \right)^{\displaystyle{\cdot}} \right]^{\displaystyle{\cdot}} = 4 \pi G \varrho v_\chi, \label{ddot-v_chi-eq-fluid} \\ & & {1 \over a^3} \left[ a^2 \left( a \varphi_{\chi} \right)^{\displaystyle{\cdot}} \right]^{\displaystyle{\cdot}} = 4 \pi G \varrho \varphi_\chi, \label{ddot-varphi_chi-eq-fluid} \\ & & \ddot \delta_{\chi} + 2 H \dot \delta_{\chi} - 8 \pi G \varrho \delta_{\chi} + {3 H^2 + 6 \dot H + {c^2 \Delta \over a^2} \over 3 H^2 - \left( 1 + {c^2 \Delta \over 12 \pi G \varrho a^2} \right) {c^2 \Delta \over a^2}} \nonumber \\ & & \qquad \times \left[ H \dot \delta_{\chi} + \left( 1 + {c^2 \Delta \over 12 \pi G \varrho a^2} \right) 4 \pi G \varrho \delta_{\chi} \right] = 0. \label{ddot-delta_chi-eq-fluid} \eea These are closed form equations for the perturbed velocity, potential and density, respectively. {\it All} of these {\it differ} from the corresponding ones for the axion in Eqs.\ (\ref{dot-v-eq-ZSG-axion}), (\ref{ddot-varphi_chi-eq-axion}) and (\ref{ddot-delta_chi-eq-axion}). 
Only in the sub-horizon limit does Eq.\ (\ref{ddot-delta_chi-eq-fluid}) reproduce Eq.\ (\ref{ddot-delta_chi-eq-axion}) with vanishing $\alpha_\chi$. \subsubsection{Newtonian fluid} To linear order in the cosmological context, the Newtonian conservation equations and the Poisson equation give \cite{Hwang-Noh-2021} \bea & & \dot \delta + {1 \over a} \nabla \cdot {\bf v} = 0, \\ & & \dot {\bf v} + H {\bf v} = - {1 \over a} \nabla \Phi, \\ & & {\Delta \over a^2} \Phi = 4 \pi G \varrho \delta. \eea We can derive closed form equations as \bea & & {1 \over a^2} \left[ a \left( a \nabla \cdot {\bf v} \right)^{\displaystyle{\cdot}} \right]^{\displaystyle{\cdot}} = 4 \pi G \varrho \nabla \cdot {\bf v}, \\ & & {1 \over a^3} \left[ a^2 \left( a \Phi \right)^{\displaystyle{\cdot}} \right]^{\displaystyle{\cdot}} = 4 \pi G \varrho \Phi, \\ & & \ddot \delta + 2 H \dot \delta - 4 \pi G \varrho \delta = 0. \label{ddot-delta-Newtonian} \eea Comparison of the closed form equations in the three formulations reveals that, for a fluid, $\delta_\chi$ in the sub-horizon limit, and $v_\chi$ and $\varphi_\chi$ on all scales, have Newtonian correspondence \cite{Hwang-Noh-1999}. In the case of the axion, however, ignoring the quantum stress, $\delta_\chi$ on {\it all} scales coincides with the Newtonian $\delta$, whereas both $v_\chi$ and $\varphi_\chi$ {\it differ} from the corresponding Newtonian counterparts on all scales. This implies that, for a clear understanding of the axion case, it is better to express results using $\delta_\chi$, which has an exact Newtonian correspondence. Now, we derive the quantum stress and quantum oscillation terms in Eqs.\ (\ref{ddot-delta_chi-eq-axion}) and (\ref{varphi_chi-p_chi-eq}) using the complete set of equations for the axion in Eqs.\ (\ref{fluid-axion-pert}), (\ref{EOM-axion}) and (\ref{eq1-axion})-(\ref{eq7-axion}).
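As a quick numerical sanity check of Eq.\ (\ref{ddot-delta-Newtonian}), one can integrate it in an Einstein-de Sitter background ($a \propto t^{2/3}$, $H = 2/(3t)$, $4\pi G \varrho = 2/(3t^2)$, in dimensionless code units) and recover the growing mode $\delta \propto a \propto t^{2/3}$. This sketch is our own illustration, not part of the paper's derivation:

```python
# Integrate Eq. (ddot-delta-Newtonian):
#   ddot(delta) + 2 H dot(delta) - 4 pi G rho delta = 0
# in an Einstein-de Sitter background.  Growing-mode initial data
# delta(1) = 1, dot(delta)(1) = 2/3 select the exact solution delta = t^(2/3).
from scipy.integrate import solve_ivp

def rhs(t, y):
    delta, ddelta = y
    H = 2.0 / (3.0 * t)
    grav = 2.0 / (3.0 * t**2)    # 4 pi G rho in these units
    return [ddelta, -2.0 * H * ddelta + grav * delta]

sol = solve_ivp(rhs, (1.0, 100.0), [1.0, 2.0 / 3.0], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])              # ≈ 100**(2/3) ≈ 21.54
```

The decaying mode $\delta \propto t^{-1}$ can be checked the same way with initial data $\delta(1) = 1$, $\dot\delta(1) = -1$.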
\subsection{Quantum stress} \label{sec:QS} The density perturbation equation for axion fluid was derived for three different gauges (the comoving gauge, the ZSG, and the uniform-curvature gauge) in unified form in \cite{Hwang-Noh-2021}. Here, we present the case in the ZSG, using our different form of ansatz in Eq.\ (\ref{ansatz}). We have the density perturbation equation in Eq.\ (\ref{ddot-delta_chi-eq-axion}), and to close the equation we need to determine $\alpha_\chi$ in terms of $\delta_\chi$. For that purpose, we can use the fluid quantities in Eq.\ (\ref{fluid-axion-pert}) and the equation of motion in Eq.\ (\ref{EOM-axion}). We take time-average for all fluid variables, and Eq.\ (\ref{fluid-axion-pert}) gives \bea \delta_\chi = 2 {\delta A_\chi \over A} - \alpha_\chi, \quad {\delta p_\chi \over \mu} = - \alpha_\chi, \quad \delta \theta_\chi = {\omega_c \over c^2} a v_\chi. \label{relation-ZSG-1} \eea The third relation, together with Eq.\ (\ref{eq7-axion}), implies $\delta \dot \theta_\chi = 0$. Equation (\ref{EOM-axion}) gives \bea & & 2 {\delta \dot A_\chi \over A} - {\Delta \over a} v_\chi = \kappa_\chi, \quad \alpha_\chi = {c^2 \over 2 \omega_c^2} {\Delta \over a^2} {\delta A_\chi \over A}. \label{relation-ZSG-2} \eea Equation (\ref{eq6-axion}) gives $\dot \delta_\chi = \kappa_\chi + \Delta v_\chi/a$, and consistency with a combination of the first relations of Eqs.\ (\ref{relation-ZSG-1}) and (\ref{relation-ZSG-2}) {\it demands} $\alpha_\chi \ll \delta_\chi$, thus $(\lambda_c/\lambda)^2 \ll 1$. Thus, we have \bea & & \delta_\chi = 2 {\delta A_\chi \over A}, \quad \alpha_\chi = {c^2 \over 4 \omega_c^2} {\Delta \over a^2} \delta_\chi. \label{relation-ZSG-3} \eea Therefore, Eq. 
(\ref{ddot-delta_chi-eq-axion}) gives \bea & & \ddot \delta_\chi + 2 H \dot \delta_\chi - 4 \pi G \varrho \delta_\chi = - {\hbar^2 \Delta^2 \over 4 m^2 a^4} \delta_\chi, \label{delta_chi-eq-axion} \eea where the right-hand side is the quantum stress term; the left-hand side is the well known perturbation equation for a pressureless medium in the comoving gauge and the synchronous gauge \cite{Lifshitz-1946}, and in Newtonian context \cite{Bonnor-1957}. The competition between gravity and quantum stress terms in Eq.\ (\ref{delta_chi-eq-axion}) gives the quantum Jeans scale \bea & & \lambda_J = {2 \pi a \over k_J} = \pi \left( {\hbar^2 \over \pi G \varrho m^2} \right)^{1/4} = \sqrt{2 \pi \lambda_c \lambda_H \over \sqrt{6 \Omega_m}} \nonumber \\ & & \qquad = {55.6 {\rm kpc} \over \sqrt{ m_{22} \sqrt{\Omega_{m0}}h_{100}}} \left( {\varrho_0 \over \varrho} \right)^{1/4}, \label{Jeans-scale} \eea where $m_{22} \equiv m c^2/(10^{-22} {\rm eV})$, $\lambda_c = 0.40 {\rm pc}/m_{22}$, $H_0 = 100 h_{100} {\rm km}/({\rm sec} \cdot {\rm Mpc})$, and $\Omega_m = 8 \pi G \varrho / (3 H^2)$ the density parameter of (axion) dark matter component; subindex $0$ indicates the present epoch. \subsection{Quantum oscillation} \label{sec:QO} In Eq.\ (\ref{varphi_chi-p_chi-eq}), to close the equation, we need $\delta p_{\chi} + (\mu + p) \alpha_{\chi}$ expressed in terms of $\varphi_{\chi}$. To derive the quantum stress we took time-average of $\delta p$ in Eq.\ (\ref{fluid-axion-pert}). Now, by {\it not} taking the time-average of $\delta p_\chi$, we can derive a closed equation for Eq.\ (\ref{varphi_chi-p_chi-eq}). In the ZSG, Eq.\ (\ref{delta-p}) gives \bea & & {\delta p_\chi \over \mu} + \alpha_\chi = - 2 {\delta A_\chi \over A} \cos{(2 \omega_c t + 2 \theta)} \nonumber \\ & & \qquad + 2 \delta \theta_{\chi} \sin{(2 \omega_c t + 2 \theta)}. 
\label{delta-p-ZSG} \eea Our task is to express $\delta A_\chi$ and $\delta \theta_\chi$ in terms of perturbation variables like $\varphi_\chi$, $v_\chi$ or $\delta_\chi$. As these coefficient variables vary slowly compared with the Compton oscillation, we can freely take the time-average of the pressure perturbation to evaluate them. The relations were already determined in Eqs.\ (\ref{relation-ZSG-1})-(\ref{relation-ZSG-3}). Together with Eqs.\ (\ref{Poisson-eq-ZSG-axion}) and (\ref{dot-varphi_chi-eq-ZSG-azion}), we have \bea & & - 2 {\delta A_\chi \over A} = - \delta_\chi = {1 \over 4 \pi G \varrho} \left[ c^2 {\Delta \over a^2} \varphi_\chi - 3 H {1 \over a} \left( a \varphi_\chi \right)^{\displaystyle{\cdot}} \right], \nonumber \\ & & 2 \delta \theta_\chi = 2 {\omega_c \over c^2} a v_\chi = - {\omega_c \over 2 \pi G \varrho} {1 \over a} \left( a \varphi_\chi \right)^{\displaystyle{\cdot}} = {2 \omega_c a^2 \over c^2 \Delta} \dot \delta_\chi, \label{expressions-ZSG} \eea where we used the sub-horizon limit in the last step. Therefore, we have the relation we are looking for: \bea & & {\delta p_\chi \over \mu} + \alpha_\chi = {1 \over 4 \pi G \varrho} c^2 {\Delta \over a^2} \varphi_\chi \cos{(2 \omega_c t + 2 \theta)} \nonumber \\ & & \qquad - {\omega_c \over 2 \pi G \varrho} {1 \over a} \left( a \varphi_\chi \right)^{\displaystyle{\cdot}} \sin{(2 \omega_c t + 2 \theta)}, \label{QO} \eea where we ignored the $H/\omega_c$ order term. This closes Eq.\ (\ref{varphi_chi-p_chi-eq}). The perturbation variable $\varphi_\chi$ can be expressed in terms of $\delta_\chi$ and $v_\chi$ using Eq.\ (\ref{expressions-ZSG}). In the case of the axion, as {\it only} $\delta_\chi$ has a Newtonian correspondence on sub-horizon scales [see below Eq.\ (\ref{ddot-delta-Newtonian})], we use $\delta_\chi$.
Equation (\ref{varphi_chi-p_chi-eq}) gives \bea & & {1 \over a^3} \left[ a^2 \left( a \varphi_{\chi} \right)^{\displaystyle{\cdot}} \right]^{\displaystyle{\cdot}} = 4 \pi G \varrho \Big[ \delta_\chi \cos{(2 \omega_c t + 2 \theta)} \nonumber \\ & & \qquad + 2 {\omega_c a^2 \over c^2 \Delta} \dot \delta_\chi \sin{(2 \omega_c t + 2 \theta)} \Big], \label{varphi_chi-eq-axion} \eea where we {\it assumed} the sub-horizon limit. Now, we evaluate the oscillation strength. {\it Assuming} a nearly stationary medium [i.e., ignoring the time dependence of $\varrho$, $a$ and $\delta_\chi$, but with $\dot \delta_\chi \sim \delta_\chi/t_g \sim \sqrt{G \varrho} \delta_\chi$], we have the solution \bea & & \varphi_\chi \sim - {4 \pi G \varrho a^2 \over c^2 \Delta} \delta_\chi \bigg[ 1 - {\lambda_c^2 \over 4 \lambda^2} \cos{(2 \omega_c t + 2 \theta)} \nonumber \\ & & \qquad - {1 \over 8 \sqrt{\pi}} {\lambda_c^2 \over \lambda_J^2} \sin{(2 \omega_c t + 2 \theta)} \bigg], \label{varphi_chi-solution} \eea where the non-oscillatory solution is the one supported by the density perturbation. The two oscillatory solutions compete, with the quantum Jeans scale as the criterion: the sine term dominates over the cosine term for $\lambda > \lambda_J$.
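As a quick numerical cross-check of the normalization quoted in Eq.\ (\ref{Jeans-scale}), the following sketch (Python, with rounded SI constants; the choices $m_{22}=\Omega_{m0}=h_{100}=1$ and the present epoch $\varrho=\varrho_0$ are illustrative) evaluates $\lambda_J = \pi(\hbar^2/\pi G\varrho m^2)^{1/4}$:

```python
import math

# Rounded SI constants; m22 = Om0 = h100 = 1 are illustrative choices.
hbar = 1.054571817e-34         # J s
G    = 6.674e-11               # m^3 kg^-1 s^-2
c    = 2.99792458e8            # m / s
eV   = 1.602176634e-19         # J
kpc  = 3.0857e19               # m

m22, Om0, h100 = 1.0, 1.0, 1.0

m   = 1e-22 * m22 * eV / c**2                  # axion mass in kg
H0  = 100.0 * h100 * 1.0e3 / 3.0857e22         # H_0 in 1/s
rho = Om0 * 3.0 * H0**2 / (8.0 * math.pi * G)  # present mean density

# lambda_J = pi * (hbar^2 / (pi G rho m^2))^{1/4}
lam_J = math.pi * (hbar**2 / (math.pi * G * rho * m**2)) ** 0.25
print(lam_J / kpc)   # ~ 55.6 kpc
```

This reproduces the $55.6\,{\rm kpc}$ prefactor of Eq.\ (\ref{Jeans-scale}) to within rounding of the input constants.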
We may set Eq.\ (\ref{varphi_chi-solution}) as \cite{Khmelnitsky-Rubakov-2014} \bea & & \Psi ({\bf x}, t) \equiv - c^2 \varphi_\chi = \Psi_d ({\bf x}) + \Psi_c ({\bf x}) \cos{(2 \omega_c t + 2 \theta)} \nonumber \\ & & \qquad + \Psi_s ({\bf x}) \sin{(2 \omega_c t + 2 \theta)}, \label{Psi-solution} \eea where \bea & & \Psi_d \equiv - c^2 \varphi_\chi = {4 \pi G \varrho a^2 \over \Delta} \delta_\chi = - {1 \over \pi} G \varrho \delta_\chi \lambda^2, \nonumber \\ & & \Psi_c \equiv - \Psi_d {\lambda_c^2 \over 4 \lambda^2} = {1 \over 4 \pi} G \varrho \delta_\chi \lambda_c^2, \nonumber \\ & & \Psi_s \sim - {\Psi_d \over 8 \sqrt{\pi}} {\lambda_c^2 \over \lambda_J^2} \sim {\Psi_c \over 2 \sqrt{\pi}} {\lambda^2 \over \lambda_J^2} \sim {1 \over 8 \pi \sqrt{\pi}} G \varrho \delta_\chi {\lambda^2 \lambda_c^2 \over \lambda_J^2}. \eea {\it Assuming} a nearly scale-invariant $\Psi_d$ on cosmic scales, $\Psi_c$ increases as the size becomes smaller, with $\Psi_c \propto 1/\lambda^2$, whereas $\Psi_s$ stays constant independent of the size; $\Psi_s$ is dominant over $\Psi_c$ for $\lambda > \lambda_J$. Amplitudes of the oscillatory terms and the quantum Jeans scale increase as the mass decreases, with $\Psi_c \propto 1/m^2$, $\Psi_s \propto 1/m$ and $\lambda_J \propto 1/\sqrt{m}$. The oscillatory terms have a frequency and dimensionless amplitudes \bea & & f = 2 \nu_c = 4.8 \times 10^{-8} m_{22} {\rm Hz}, \nonumber \\ & & {\Psi_c \over c^2} \sim 4.0 \times 10^{-17} \left( {1 \over m_{22}} {100{\rm kpc} \over \lambda_0} {a_0 \over a} \right)^2, \nonumber \\ & & {\Psi_s \over c^2} \sim 3.7 \times 10^{-17} {\sqrt{\Omega_{m0}} h_{100} \over m_{22}} \sqrt{\varrho \over \varrho_0}, \eea where we used $\Psi_d /c^2 \sim 10^{-5}$. The transition from $\Psi_c$ to $\Psi_s$ occurs near the quantum Jeans scale in Eq.\ (\ref{Jeans-scale}).
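The quoted oscillation frequency, together with the Compton wavelength $\lambda_c = 0.40\,{\rm pc}/m_{22}$ given below Eq.\ (\ref{Jeans-scale}), can be cross-checked directly; the sketch below (Python, rounded constants) assumes $f = 2 m c^2/h$ and $\lambda_c = h/(mc)$, consistent with the numbers in the text:

```python
# Rounded constants; checks f = 2 m c^2 / h and lambda_c = h / (m c).
h  = 6.62607015e-34    # J s
c  = 2.99792458e8      # m / s
eV = 1.602176634e-19   # J
pc = 3.0857e16         # m

m22 = 1.0                      # m c^2 in units of 1e-22 eV (illustrative)
E   = 1e-22 * m22 * eV         # rest energy in J

nu_c  = E / h                  # Compton frequency
f     = 2.0 * nu_c             # oscillation frequency of the potential
lam_c = h * c / E              # Compton wavelength

print(f)           # ~ 4.8e-8 Hz
print(lam_c / pc)  # ~ 0.40 pc
```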
Whether this oscillation in the gravitational potential has any future observational prospect besides the pulsar timing array signals proposed in \cite{Khmelnitsky-Rubakov-2014} remains to be unraveled. Considering the linear nature of our study and the uncertainty of $\Psi_d$ on scales smaller than the quantum Jeans scale, the $\Psi_s$ term, which is scale-invariant like $\Psi_d$, might be more relevant in cosmology. \section{Discussion} \label{sec:discussion} Our main result is the derivation of the gravitational potential equation with quantum oscillation in Eq.\ (\ref{varphi_chi-eq-axion}). We also derive the density perturbation equation with the quantum stress in Eq.\ (\ref{delta_chi-eq-axion}), both in the ZSG. Compared with previous studies in \cite{Hwang-Noh-2009, Hwang-Noh-2021, Khmelnitsky-Rubakov-2014}, the quantum stress equation is derived using a different notation in Eq.\ (\ref{ansatz}), and the quantum oscillation equation is rigorously derived in the cosmological context. The quantum oscillation of the gravitational potential was previously discovered in a Minkowski background \cite{Khmelnitsky-Rubakov-2014}. We derive an additional oscillation term that dominates on scales larger than the quantum Jeans scale. Here, we compare our result with \cite{Khmelnitsky-Rubakov-2014}, which motivated our work. The authors considered a Minkowski background. They took an ansatz $\widetilde \phi ({\bf x}, t) = \widetilde A ({\bf x}) \cos{( \omega_c t + \widetilde \theta ({\bf x}) )}$ and did not separate the background and perturbation in fluid quantities; tildes indicate quantities including both the background and perturbation, like $\widetilde \phi ({\bf x}, t) = \phi (t) + \delta \phi ({\bf x}, t)$, etc.
In the ZSG, they presented \bea & & \widetilde \mu = {\omega_c^2 \over 2 c^2} \widetilde A^2, \quad \widetilde p = - \widetilde \mu \cos{( 2 \omega_c t + 2 \widetilde \theta )}, \label{KR1} \\ & & \Delta \Psi = 4 \pi G \widetilde \varrho, \quad \ddot \Psi = - 4 \pi G \widetilde \mu \cos{( 2 \omega_c t + 2 \widetilde \theta )}. \label{KR2} \eea These can be compared with our Eqs.\ (\ref{relation-ZSG-3}), (\ref{delta-p-ZSG}), (\ref{varphi_chi-eq-density}) and (\ref{varphi_chi-eq-axion}), respectively. The comparison reveals some correspondence, but the metric perturbation (the $\alpha$ term) is ignored in \cite{Khmelnitsky-Rubakov-2014}. Including the background fluid quantities in Eq.\ (\ref{KR2}) is not correct, as the gravitational potential (metric variable) in Eq.\ (\ref{metric}) is purely perturbed order, but it can be justified {\it if} the background density is negligible compared with the perturbed part, as in the galactic halo considered in \cite{Khmelnitsky-Rubakov-2014}. The sine-part oscillation ($\Psi_s$ term) missing in \cite{Khmelnitsky-Rubakov-2014} can be absorbed into $\delta \theta$, but the authors did not evaluate it. If one stops here as in \cite{Khmelnitsky-Rubakov-2014}, without analyzing the equation of motion and without specifying the perturbed phase, the ansatz with $\widetilde A({\bf x})$ in \cite{Khmelnitsky-Rubakov-2014} is fine. However, to determine the perturbed part of the phase, $\widetilde \theta ({\bf x})$, thus recovering the sine-part of the oscillation in Eq.\ (\ref{varphi_chi-eq-axion}), we have to determine the remaining fluid quantity $v_i$, and to proceed further we need to consider the equation of motion; in the process, the assumption of $A({\bf x})$ may not be guaranteed, and decomposing the perturbation and background would be convenient, see our Eqs.\ (\ref{relation-ZSG-1}) and (\ref{relation-ZSG-2}).
Despite these potential shortcomings, as a major difference, \cite{Khmelnitsky-Rubakov-2014} considered a dark matter halo as the medium, where the density enhancement is huge compared with the background, and {\it claimed} validity of Eqs.\ (\ref{KR1}) and (\ref{KR2}) in that highly nonlinear situation in the matter and field. In a galactic halo, the density enhancement of the dark matter is estimated to be more than $10^4$ times the background density \cite{Khmelnitsky-Rubakov-2014}, and even higher in the soliton core predicted in fuzzy dark matter simulations based on the Schr\"odinger-Poisson system \cite{Schive-etal-2014}, reaching up to $10^9$ \cite{deMartino-etal-2017}. The subject is {\it beyond} the scope of our present linear perturbation analysis, where we consistently consider linear order perturbations in both the metric and the matter (i.e., the axion field). To handle the nonlinear situation properly, either the post-Newtonian approximation or the weak gravity limit approximation would be needed. In both cases, analysis of the scalar field equation of motion together with its fluid counterpart is necessary for a fully consistent treatment. This will be pursued in the near future. \section*{Acknowledgments} We thank Donghui Jeong for useful discussion. H.N.\ was supported by the National Research Foundation (NRF) of Korea funded by the Korean Government (No.\ 2018R1A2B6002466 and No.\ 2021R1F1A1045515). J.H.\ was supported by IBS under the project code, IBS-R018-D1, and by the NRF of Korea (No.\ NRF-2019R1A2C1003031).
\section{Introduction} \label{intr} Kondo physics is one of the fundamental issues in exploring many-body correlations\cite{Kondo1964, Hewson1993}; it originates from the screening of a local moment by conduction electrons in a metal. The theoretical prediction \cite{Ng1988, Glazman1988} and physical realization \cite{Goldhaber-Gordon1998, Cronenwett1998, Schmid1998} of the Kondo effect in a single-electron transistor (SET) provide a great opportunity to investigate intriguing features of Kondo physics by tuning various controllable parameters such as the gate voltage, the source-drain voltage, the magnetic field, and so on. The existence of the Kondo resonance near the Fermi level in the SET dramatically affects the transport of the device, and a unitary conductance in the Coulomb-blockade regime can be reached\cite{Cronenwett1998}. In addition, this set-up is also an ideal platform to study non-equilibrium many-body correlations when these parameters are varied in time, as explored extensively in experiments \cite{Elzerman2000, Kogan2004, Delbecq2011} and theories \cite{Meir1993, Ng1993, Hettler1994,Schiller1996,Goldin1998,Lopez1998,Kaminski1999,Nordlander2000,Nguyen2012}. When the SET is irradiated with microwaves, photon-induced Kondo satellites have been observed experimentally when an appropriate microwave frequency and amplitude were applied\cite{Kogan2004}. Very recently, Hemingway \textit{et al.} \cite{Hemingway2013} measured the time-averaged differential conductance of the SET and observed some novel transport behaviors when a magnetic field and microwaves are applied at the same time.
They found that i) at zero magnetic field the applied microwave suppresses the Kondo effect when the photon energy is comparable to or greater than the Kondo temperature, in agreement with the available theoretical prediction \cite{Kaminski1999}; ii) at finite magnetic field the Kondo conductance changes non-monotonically with magnetic field for a given microwave frequency, and the conductance shows a very sharp peak as a function of magnetic field; this non-monotonic behavior is absent at low frequencies; and iii) the microwave frequency is about one order of magnitude larger than the corresponding Zeeman energy when the anomalous non-monotonic behavior is observed. Since the first observation is well understood by the available theory \cite{Kaminski1999} while the latter two features are novel but cannot currently be understood, below we focus on the case of finite magnetic field and clarify the latter two features. First of all, we present our basic picture. Fig. \ref{fig1} shows a schematic energy level diagram of the SET. In the absence of microwaves [Fig.\ref{fig1}(a)], the dot level splits due to the Zeeman energy $\Delta \varepsilon = g\mu_B B$, where $g$ is the Land\'e factor, $\mu_B$ the Bohr magneton, and $B$ the applied magnetic field. When the SET is irradiated with microwaves, some parameters, for example, the bias voltage and the dot-leads tunneling in the SET, may vary in time, which leads to many interesting features such as the satellites\cite{Kaminski1999,Kogan2004,Nguyen2012}. However, in the experiment \cite{Hemingway2013} these satellites were not observed, and therefore here we will focus on another possible effect, namely, the spin-flip transition induced by the microwave field, which has been extensively studied in double quantum dot systems\cite{Oosterkamp1998,van der Wiel2002, Petta2004,Nowack2007,Laird2007,Pioro-Ladriere2008,Petersson2009,Schreiber2011} but has not been explored in the present set-up.
Here, whether or not the photon energy matches the Zeeman energy leads to quite different physics. If they match, the microwave can induce a spin-flip transition, as shown in Fig. \ref{fig1}(b); as a consequence of this transition the dot levels are renormalized [solid lines in Fig. \ref{fig1}(c)]. Two effects are observed: the dot levels for both spins shift down and, at the same time, the Zeeman splitting effectively increases (as inferred from the experimental observation, discussed later). If they do not match, the spin-flip transition cannot happen. As a result, the dot levels remain almost unchanged, as shown in Fig. \ref{fig1}(d,e). As shown below, for a given microwave frequency in the experiment, the existence of a sharp peak of the Kondo conductance as a function of the applied magnetic field can be simply attributed to the resonance between the frequency and the magnetic field. This is because in this case the dot level is renormalized to a deeper level and thus the Kondo effect is effectively strengthened. Within this picture we clarify qualitatively the anomalous features observed experimentally. The result can further stimulate experimental study of the dynamic response of the Kondo effect in non-equilibrium devices. \begin{figure}[tbp] \includegraphics[width=0.9\columnwidth]{fig1.jpg} \caption {Schematic dot levels in the SET when both a magnetic field and microwaves are applied. (a) The Zeeman splitting of the dot level without microwave irradiation. $\Delta \varepsilon$ and $U$ are the Zeeman energy and the Coulomb repulsion, respectively. (b) $\&$ (c) The photon energy matches the (renormalized) Zeeman energy. A spin-flip transition in the SET is induced and two renormalization effects occur, namely, the downshift of the dot levels and the increase of the effective Zeeman energy (as inferred from the experimental observation here). The dashed lines are the unrenormalized levels. $\Delta\tilde\varepsilon$ and $\tilde U$ are the renormalized Zeeman energy and Coulomb repulsion, respectively.
(d) $\&$ (e) The photon energy does not match the Zeeman energy; thus the spin-flip transition is prohibited. The dot levels remain unchanged. }\label{fig1} \end{figure} \section{Model and renormalized dot levels} In the absence of microwave irradiation, the SET can be described by the following Hamiltonian \begin{eqnarray} && H_{\text{SET}} = \sum_{k\sigma,\alpha = L,R} \varepsilon_{k\alpha\sigma}c_{k\alpha\sigma}^{\dag}c_{k\alpha\sigma} + \sum_{\sigma}\varepsilon_{d\sigma} d^\dagger_\sigma d_\sigma + U n_{d\uparrow}n_{d\downarrow} \nonumber\\ &&\hspace{2cm} + \sum_{k\alpha\sigma}\left( V_{\alpha} d_{\sigma}^{\dagger}c_{k\alpha\sigma} + h.c.\right), \label{eq1} \end{eqnarray} where the first term represents the left (L) and right (R) leads, the next two terms denote the dot Hamiltonian, and the last one is the hybridization between the dot and the leads. $\varepsilon_{d\sigma} = \varepsilon_d + \frac{\sigma}{2}g\mu_B B$ and $U$ are the dot levels with spin $\sigma = \pm(\uparrow,\downarrow)$ and the on-site Coulomb repulsion, respectively. In the presence of microwave irradiation with frequency $f$, the device is in principle out of equilibrium and many parameters may vary in time. As mentioned above, here we only focus on the spin-flip transition induced by the microwave field, which can be captured by the following effective Hamiltonian \begin{equation} H_{\text{p-d}} = \omega_{p}a^{\dagger}a + \lambda(a^{\dagger} + a)\sum_{\sigma}d_{\sigma}^{\dagger}d_{\bar{\sigma}},\label{eq2} \end{equation} where $\omega_p = hf$ ($h$: Planck constant) and $\lambda$ is the coupling strength between the dot and the photons, which depends on the match between the photon energy and the Zeeman energy, as discussed below. Thus the total Hamiltonian is given by \begin{equation} H = H_{\text{SET}} + H_{\text{p-d}}.
\label{eq3} \end{equation} To derive the renormalized dot levels, we first apply the canonical transformation $H\rightarrow \tilde H = e^S H e^{-S}$ with $S = \frac{\lambda}{\omega_p}(a^\dagger - a)\sum_\sigma d^\dagger_\sigma d_{\bar\sigma}$; one then obtains the following effective Hamiltonian (see Sec. I in the Supplemental Materials\cite{note}) \begin{eqnarray} && \tilde H \approx \sum_{k\sigma,\alpha = L,R} \varepsilon_{k\alpha\sigma}c_{k\alpha\sigma}^{\dag}c_{k\alpha\sigma} + \sum_{\sigma}\tilde{\varepsilon}_{d\sigma} d^\dagger_\sigma d_\sigma + \tilde U n_{d\uparrow}n_{d\downarrow} \nonumber\\ &&\hspace{2cm} + \sum_{k\alpha\sigma}\left( \tilde V_{\alpha} d_{\sigma}^{\dagger}c_{k\alpha\sigma} + h.c.\right), \label{eq1-a} \end{eqnarray} where \begin{equation} \tilde\varepsilon_{d\sigma} = \varepsilon_d - \frac{\lambda^2}{\omega_p} +\frac{\sigma}{2}\Omega_{\lambda}\Delta\varepsilon,\label{eq4} \end{equation} and $\tilde U = U + \frac{2\lambda^2}{\omega_p}$, $\tilde V_{\alpha} = \Omega'_{\lambda} V_\alpha$, $\Delta\varepsilon=g\mu_B B$. Here $\Omega_{\lambda}$ and $\Omega'_{\lambda}$ are renormalized parameters given in Sec. I of the Supplemental Materials \cite{note}. Equations (\ref{eq1-a}) and (\ref{eq4}) are the central results of the present work. In comparison to Eq. (\ref{eq3}), the influence of the photons is renormalized into the quantum dot parameters, as shown in $\tilde \varepsilon_{d\sigma}, \tilde U$, and $\tilde V_\alpha$. When $\lambda = 0$, it is easy to show that all renormalized parameters reduce to the original ones and Eq. (\ref{eq1}) is recovered. From Eq. (\ref{eq4}), the coupling with the photons produces two important consequences. One is that the dot levels for both spins are shifted down by $\lambda^2/\omega_p$, which strengthens the Kondo effect. The other is the renormalization of the Zeeman energy, which cannot be evaluated exactly here but can be fixed phenomenologically from the experiment (see below).
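To display the trend implied by Eq.\ (\ref{eq4}), a minimal sketch is given below; all numerical inputs (the bare level, repulsion, coupling and splitting, in arbitrary units) are hypothetical and chosen only for illustration:

```python
# Hypothetical numbers, chosen only to display the trend of Eq. (4).
def renormalized_levels(eps_d, U, lam, omega_p, Omega_lam, delta_eps):
    """Return (eps_up, eps_down, U_tilde) after the photon renormalization."""
    shift = lam**2 / omega_p                                # common downshift
    eps_up   = eps_d - shift + 0.5 * Omega_lam * delta_eps  # sigma = +
    eps_down = eps_d - shift - 0.5 * Omega_lam * delta_eps  # sigma = -
    U_tilde  = U + 2.0 * shift                              # enhanced repulsion
    return eps_up, eps_down, U_tilde

# lam = 0 recovers the bare Coulomb repulsion
assert renormalized_levels(-4.1, 10.0, 0.0, 1.0, 7.0, 0.02)[2] == 10.0

eps_up, eps_down, U_t = renormalized_levels(-4.1, 10.0, 0.3, 1.0, 7.0, 0.02)
print(eps_up, eps_down, U_t)   # both levels shift down; splitting = 7 * 0.02
```

Both spin levels move down by the same amount $\lambda^2/\omega_p$, while the effective splitting is stretched by the factor $\Omega_\lambda$.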
It should be emphasized that the above renormalization happens only under the resonance condition, namely, when the microwave frequency matches the renormalized Zeeman energy: $\lambda = \lambda_0\delta(\omega_p - \Omega_\lambda\Delta\varepsilon)$, where $\lambda_0$ is the coupling strength, dependent on the applied microwave frequency. In the calculation, this can be expressed by a Lorentzian function as follows, in a dimensionless form \begin{equation} \lambda = \frac{\lambda_0}{\pi}\frac{\beta}{(\omega_p - \Omega_\lambda\Delta\varepsilon)^2 + \beta^2}, \label{eq5} \end{equation} where $\beta\rightarrow 0^{+}$ is the width of the resonance. As argued in Sec. I of the Supplemental Materials, here we take phenomenologically $\Omega_\lambda\sim 7$, which is determined by the experimental observation. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.9\columnwidth]{fig2.jpg} \caption{(Color online) (a) The coupling strength $\lambda$ as a function of the applied magnetic field for a given frequency $f = 34.1$GHz. The dots represent the coupling strengths corresponding to the magnetic fields taken in the experiment [same in Fig. \ref{fig2}(b)]. (b) The renormalized spin-independent dot level $\tilde\varepsilon_d$ and the dot occupancy as functions of magnetic field. (c) $\&$ (d) Theoretical Kondo conductance for different magnetic fields at zero temperature. (e) $\&$ (f) Experimental Kondo conductance for different magnetic fields (reproduced from Ref. \cite{Hemingway2013}). The parameters used are $\varepsilon_d = -4.1\Gamma$, $\lambda_0 = 3\times 10^{-4} \Gamma$, $\beta=2\times 10^{-3} \Gamma$, the half-bandwidth $D = 100\Gamma$ and the renormalized broadening $\Gamma = 10$meV.
The Land\'e factor $g = 0.207$ and $\mu_B = 58\mu eV/T$.}\label{fig2} \end{center} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=0.9\columnwidth]{fig3.jpg} \caption{(Color online) (a) Theoretical and (b) experimental differential conductance at the Fermi level as a function of the applied magnetic field for three different microwave frequencies. The parameters used are $\lambda_0 = (2.8, 2.2, 0.2) \times 10^{-4} \Gamma$ for $f=(27, 24, 5)$ GHz, in order to fit the heights of the conductance for the different frequencies. The other parameters used are the same as those in Fig. \ref{fig2}.}\label{fig3} \end{center} \end{figure} \section{Comparison with experiment} Below we address the experiment in the following two aspects. One is to start from the effective Hamiltonian, Eq. (\ref{eq4}), in which the effect of the microwave irradiation is already included in the renormalized parameters. The other is to fit the experimental data directly with the Kondo resonance and to extract the Kondo temperature as a function of the applied magnetic field. To calculate the differential conductance, we use the Keldysh formalism\cite{Haug2008} and the slave-boson mean-field method (see Secs. II and III in the Supplemental Materials\cite{note}, respectively). Near the Fermi level, the slave-boson mean-field approximation is sufficient to capture the essential physics of the Kondo effect. These calculations reproduce the experimental observations qualitatively, as discussed below. For a given microwave frequency $f = 34.1$ GHz (obtained from $hf = \Omega_\lambda g\mu_B B$, $\Omega_\lambda \sim 7$, $g=0.207$, and $B = 1.68$T\cite{Hemingway2013}), when the magnetic field is varied, the coupling strength changes according to whether the frequency and the applied magnetic field match or not, as shown in Fig. \ref{fig2}(a). For different magnetic fields [denoted by dots in Fig.\ref{fig2}(a)] the differential conductance is presented in Fig. \ref{fig2}(c) and (d) at zero temperature.
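The resonance condition $hf = \Omega_\lambda g\mu_B B$ can be checked against the quoted numbers directly; a minimal sketch (Python, with the Planck constant in eV$\cdot$s and the experimental values cited in the text) is:

```python
# h f vs Omega_lambda * g * mu_B * B, with the values quoted in the text.
h_eV      = 4.135667696e-15   # Planck constant, eV s
mu_B      = 58e-6             # Bohr magneton, eV/T (value quoted in the text)
g         = 0.207
Omega_lam = 7.0

f = 34.1e9    # Hz
B = 1.68      # T

photon = h_eV * f                   # photon energy, eV
zeeman = Omega_lam * g * mu_B * B   # renormalized Zeeman energy, eV
print(photon * 1e6, zeeman * 1e6)   # both ~ 141 micro-eV
```

The two energies agree at the per-cent level, consistent with the renormalization factor $\Omega_\lambda \sim 7$ inferred from the experiment.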
With increasing magnetic field, one notes that the height of the conductance peak increases and reaches a maximal value at $B = 1.68$T, where the (renormalized) Zeeman energy matches the microwave frequency and the dot occupancy also reaches its maximum due to the downshift of the dot level, as shown in Fig. \ref{fig2}(b). Upon further increasing the magnetic field, the height of the peak decreases. Thus the height of the Kondo conductance is non-monotonic as a function of magnetic field, which is qualitatively consistent with the experimental observations, as shown in Fig. \ref{fig2}(e,f). One also notes that the width of the peaks in Fig. \ref{fig2}(c) and (d) is much broader than that in the experiment, which is due to the approximate slave-boson mean-field method. In the above discussion the Zeeman splitting is invisible due to the small magnetic field. To further confirm this observation, in Fig. \ref{fig3}(a) we present the differential conductance at the Fermi level for different microwave frequencies and compare it directly to the experimental results. The agreement with the experiment is qualitatively good. Thus one can conclude that the essential physics of the non-monotonic field dependence of the Kondo conductance is that the dot level is shifted down by the renormalization effect when the photon energy matches the renormalized Zeeman energy, as mentioned above. In addition, the experiment also indicates that for high frequencies, besides a very sharp peak, there are obvious peaks and/or shoulder structures at low magnetic fields of about one half and one quarter of the resonance value, which is further explained qualitatively in Sec. IV of the Supplemental Materials\cite{note}. Though the above discussion captures the essential physics of the experimental observation in the SET at finite magnetic field, there still exist quantitative differences between the calculation and the experiment due to some unknown parameters, such as the dot level and the tunneling matrix element, and the approximate method used.
In the following we focus on the experimental data. Since the SET is tuned to be in the Kondo regime, the feature near the Fermi level is nothing but the Kondo resonance, which can be simplified as \begin{equation} T_d(\omega) \approx \frac{1}{\omega - \varepsilon_K + i T_K},\label{eq6} \end{equation} where $\varepsilon_K$ is the location of the Kondo resonance with half-width $T_K$, namely, the Kondo temperature. Here we still neglect the Zeeman splitting due to the small magnetic field. This is justified by the experiment, in which no sizable Zeeman splitting is observed except for $B = 8.8$T \cite{Hemingway2013}. Phenomenologically, we use the following Fano formula to fit the experimental data, \begin{equation} G = G_0 - \rho_0[(q^2 - 1) \text{Im}T_d(\omega) - 2 q \text{Re} T_d(\omega)], \end{equation} where $q$ is the Fano asymmetry factor \cite{Fano1961, Luo2004} describing the asymmetry of the differential conductance, and $G_0$, $\rho_0$ are background parameters. The result is presented in Fig. \ref{fig4}(a), which shows a good fit (the fitting parameters are given in Sec. V of the Supplemental Materials\cite{note}). Fig. \ref{fig4}(b) shows the Kondo temperature extracted from the fitting for different magnetic fields. On a logarithmic scale, it presents the field dependence of the dot level up to a constant. This follows from the Kondo temperature expression $T_K \propto \exp[\pi(\varepsilon_d - \varepsilon_F)/2\Delta]$\cite{Haldane1978}, where $\Delta$ is the tunneling matrix element and $\varepsilon_F$ is the Fermi level. For convenience of comparison, in Fig. \ref{fig4}(c) we again present the renormalized dot level as a function of magnetic field for three different microwave frequencies. The comparison further supports our theoretical picture. \begin{figure}[tbp] \includegraphics[width=0.9\columnwidth]{fig4.jpg} \caption {(Color online) (a) Experimental (scatters) differential conductance as a function of magnetic field and the theoretical fitting (solid lines).
(b) The logarithm of the Kondo temperature extracted from the fitting, which is proportional to the dot level. (c) The renormalized dot levels as a function of magnetic field for three different microwave frequencies.}\label{fig4} \end{figure} \section{Discussion and outlook} In the Introduction, we mentioned that non-equilibrium Kondo physics has been intensively investigated in previous theoretical works \cite{Meir1993, Ng1993, Hettler1994,Schiller1996,Goldin1998,Lopez1998,Kaminski1999,Nordlander2000,Nguyen2012}, most of which focused on time-varying bias voltages. In this case a striking predicted phenomenon is the satellite peaks in the dependence of the differential conductance on the dc bias voltage, which was confirmed experimentally in 2004 by Kastner's group \cite{Kogan2004}. In particular, very recently, Nguyen \cite{Nguyen2012} studied in detail the dynamic response of an SET in a magnetic field irradiated with microwaves. That work explored a twofold effect of the microwave: one is the oscillation in voltage with microwave frequency $\Omega$, and the other is the oscillation in the coupling parameter with frequency $\Omega/p$ ($p\in N$). While the former had been well studied in previous works, the latter leads to the central result of that work, namely, the satellite peak splitting \cite{Nguyen2012}. Observing the satellite peaks is a delicate matter, which requires properly chosen microwave frequency and amplitude\cite{Kogan2004}. However, in the recent experiment carried out by Kogan's group \cite{Hemingway2013}, neither the satellite peaks nor the satellite peak splitting have been observed. Instead, the differential conductance shows a non-monotonic field dependence for a given microwave frequency greater than the Kondo temperature. This result indicates that the physics behind the experiment does not fall into the framework studied in the previous works \cite{Nguyen2012}.
Several spin-flip transition mechanisms in quantum dots have been explored by Khaetskii and Nazarov \cite{Khaetskii2001}, but they are irrelevant to the microwave excitation mechanism discussed here. Through the renormalization effect on the Zeeman splitting one can understand why the microwave frequency disagrees quantitatively with the previous theoretical prediction\cite{Schiller1996} $\Delta\varepsilon = hf$. However, we are unable to calculate the renormalization factor exactly due to the coupling between the electrons and the photons. Therefore, we take this renormalization factor from the experiment. A further problem left unanswered is that the renormalization factor observed in the experiment is almost constant, about $7$, for both SETs\cite{Hemingway2013}. Irrespective of the details, the factor should be related to the photon population, which can be crudely estimated at temperature $T$ by $n_p = 1/\left(\text{e}^{hf/k_BT}-1\right)$. It is noted that in the experiment the base electron temperature in both SETs is about $T = 70$mK, which is kept fixed. It would be instructive to change the temperature in the cavity so that one can check whether the renormalization factor changes or not. This should be tested by experiment. On the other hand, it is also interesting to explore the possible influence of the interplay between the microwave excitation and the possible spin-flip transition mechanisms \cite{Khaetskii2001} on the non-equilibrium transport of the SET. In summary, we have proposed a phenomenological mechanism to explain the non-monotonic magnetic field dependence of the Kondo conductance observed in a recent experiment, which cannot be understood within the current theory. The essential physics is the renormalization effects attributed to the spin-flip transition induced by the microwave when the photon energy matches the renormalized Zeeman energy. The result sheds light on the transport behavior of related devices. \section{Acknowledgments} The authors thank S. Y.
Cho and T.-F. Fang for valuable discussions. The work is partly supported by programs of the NSFC, PCSIRT (Grant No. IRT1251), the national program for basic research and the Fundamental Research Funds for the Central Universities of China. \section{DERIVATION OF THE EFFECTIVE HAMILTONIAN} Similar to the scheme extensively adopted in studying the Anderson-Holstein (AH) model\cite{David2002, Mahan2000}, we apply a generalized Lang-Firsov transformation to each operator $O$ presented in the original Hamiltonian (1-3) in the main text as \begin{equation} e^{S}Oe^{-S}=O+\left[ S,O\right] +\frac{1}{2!}\left[ S,\left[ S,O\right] \right] +\frac{1}{3!}\left[ S,\left[ S,\left[ S,O\right] \right] \right] +... , \end{equation} where $S=\frac{\lambda}{\omega_{p}}\left( a^{+}-a\right) \sum_{\sigma}d_{\sigma}^{\dagger}d_{\bar{\sigma}}$. It is not difficult to obtain \begin{eqnarray} && e^{S}d_{\sigma}e^{-S} = X d_{\sigma}-Y d_{\bar{\sigma}},\\ && e^{S}d_{\sigma}^{\dagger}e^{-S} = X d_{\sigma}^{\dagger}+Y d_{\bar{\sigma}}^{\dagger}. \end{eqnarray} Here $X$ and $Y$ are given by \begin{eqnarray} &&X =\frac{1}{2}\left[F\left(\frac{\lambda}{\omega_{p}}\right)+F\left(-\frac{\lambda}{\omega_{p}}\right)\right], \\ &&Y=\frac{1}{2}\left[F\left(\frac{\lambda}{\omega_{p}}\right)-F\left(-\frac{\lambda}{\omega_{p}}\right)\right], \end{eqnarray} where $F(\frac{\lambda}{\omega_{p}})=e^{\frac{\lambda}{\omega_{p}}\left( a^{\dagger}-a\right) }$. It is easy to show that $X$ and $Y$ are Hermitian and anti-Hermitian operators, respectively, which satisfy $X^{\dag}=X$ and $Y^{\dag}=-Y$.
Thus we have \begin{eqnarray} &&X^{2}=\frac{1}{4}\left[ F\left(\frac{2\lambda}{\omega_{p}}\right)+F\left(-\frac{2\lambda}{\omega_{p}}\right)\right]+\frac{1}{2},\\ &&Y^{2}=\frac{1}{4}\left[ F\left(\frac{2\lambda}{\omega_{p}}\right)+F\left(-\frac{2\lambda}{\omega_{p}}\right)\right]-\frac{1}{2},\\ &&XY=\frac{1}{4}\left[ F\left(\frac{2\lambda}{\omega_{p}}\right)-F\left(-\frac{2\lambda}{\omega_{p}}\right)\right],\\ &&X^{2}-Y^{2}=1. \end{eqnarray} For the creation and annihilation operators of the photon, we have \begin{eqnarray} e^{S}ae^{-S} & =a-\frac{\lambda}{\omega_{p}}\sum_{\sigma}d_{\sigma}^{\dagger}d_{\bar{\sigma}},\\ e^{S}a^{\dagger}e^{-S} & =a^{\dagger}-\frac{\lambda}{\omega_{p}}\sum_{\sigma}d_{\sigma}^{\dagger}d_{\bar{\sigma}}. \end{eqnarray} Using the relation $e^{-S}e^{S}=1$, we have $e^{S}MNe^{-S}=e^{S}Me^{-S}e^{S}Ne^{-S}$. We can readily obtain the transformed forms of each ingredient of the original Hamiltonian as follows: \begin{eqnarray} \hspace{-0.2cm}e^{S}\sum_{\sigma}\varepsilon_{d\sigma}n_{d\sigma}e^{-S} & =\sum_{\sigma}\left[ {\varepsilon}_{d}+\frac{\sigma}{2}\left( X^{2}+Y^{2}\right) \Delta\varepsilon\right] n_{d\sigma} \notag\\ &-XY\Delta\varepsilon\sum_{\sigma}\sigma d_{\sigma}^{\dagger}d_{\bar{\sigma}}, \end{eqnarray} \begin{eqnarray} e^{S}Un_{d\uparrow}n_{d\downarrow}e^{-S}=Un_{d\uparrow}n_{d\downarrow},\\ e^{S}\sum_{\sigma}d_{\sigma}^{\dagger}d_{\bar{\sigma}}e^{-S}=\sum_{\sigma}d_{\sigma}^{\dagger}d_{\bar{\sigma}}, \end{eqnarray} \begin{eqnarray} & e^{S}\lambda\left( a^{\dagger}+a\right) \sum_{\sigma}d_{\sigma}^{\dagger}d_{\bar{\sigma}}e^{-S}= \lambda\left( a^{\dagger}+a\right) \sum_{\sigma}d_{\sigma}^{\dagger}d_{\bar{\sigma}} \notag\\ & -\frac{2\lambda^2}{\omega_p}\sum_{\sigma}n_{d\sigma}+\frac{2\lambda^2}{\omega_p}\sum_{\sigma}n_{d\sigma}n_{d\bar{\sigma}}, \end{eqnarray} \begin{eqnarray} &e^{S}\omega_{p}a^{\dagger}ae^{-S}=\omega_{p}a^{\dagger}a-\lambda\left( a^{\dagger}+a\right) \sum_{\sigma}d_{\sigma}^{\dagger}d_{\bar{\sigma}} \notag\\
&+\frac{\lambda^2}{\omega_p}\sum_{\sigma}n_{d\sigma}-\frac{\lambda^2}{\omega_p}\sum_{\sigma}n_{d\sigma}n_{d\bar{\sigma}}. \end{eqnarray} Summing the above terms leads to a completely equivalent Hamiltonian \begin{eqnarray} &&\tilde{H}=\sum_{k\sigma,\alpha}\varepsilon_{k\sigma\alpha}c_{k\sigma\alpha}^{\dagger}c_{k\sigma\alpha}+\sum_{\sigma}\tilde{\varepsilon}_{d\sigma}n_{d\sigma} +\tilde U n_{d\uparrow}n_{d\downarrow} +\omega_{p}a^{\dagger}a \notag\\ &&-XY\Delta\varepsilon\sum_{\sigma}\sigma d_{\sigma}^{\dagger}d_{\bar{\sigma}} +\sum_{k\sigma\alpha}\left[ V_{\alpha}\left(X d_{\sigma}^{\dagger}+Y d_{\bar{\sigma}}^{\dagger}\right)c_{k\alpha\sigma}+h.c.\right], \label{16} \end{eqnarray} where $\tilde{\varepsilon}_{d\sigma} = {\varepsilon}_{d}-\frac{\lambda^2}{\omega_p}+\frac{\sigma}{2}\left( X^{2}+Y^{2}\right)\Delta\varepsilon$ and $\tilde U = U+\frac{2\lambda^2}{\omega_p} $. This Hamiltonian implies a series of renormalization effects on the model parameters, such as the dot level, the Zeeman energy, the Coulomb repulsion and the coupling strength between the dot and the conducting leads. Formally, the photon Hamiltonian is decoupled from the electron Hamiltonian. However, this is not true due to the presence of the operators $X$ and $Y$, which contain the photon operators. At the mean-field level, one can treat $\Omega_\lambda = \langle X^2 + Y^2\rangle$ as the renormalization factor of the Zeeman energy, where $\langle \cdot \rangle = \langle \cdot \rangle_{\tilde H}$. However, due to the coupling between the photons and the electrons, it is difficult to evaluate this expectation value exactly for the Hamiltonian $\tilde H$. Here we do not pursue the analytical evaluation of this renormalization factor but follow the experimental observation and phenomenologically take $\Omega_\lambda \sim 7$ in our calculation.
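To illustrate why this factor cannot be obtained from the free photon field alone, the sketch below evaluates $\langle X^2+Y^2\rangle$ over a purely thermal photon state, i.e. with $\langle\cdot\rangle_{H_p}$ in place of $\langle\cdot\rangle_{\tilde H}$, where it reduces to $e^{-2(\lambda/\omega_p)^2(2n_p+1)}\le 1$; the coupling, temperature, and Fock-space truncation are arbitrary choices.

```python
# <X^2 + Y^2> over a *free* thermal photon state (not the full interacting
# average <.>_Htilde used in the text); all parameter values hypothetical.
import numpy as np
from scipy.linalg import expm

nmax = 60
x = 0.3                                   # x = lambda/omega_p
beta_wp = 2.0                             # omega_p/(k_B T)

n = np.arange(nmax)
a = np.diag(np.sqrt(n[1:]), k=1)          # photon annihilation operator
F = lambda y: expm(y * (a.T - a))
X = 0.5 * (F(x) + F(-x))
Y = 0.5 * (F(x) - F(-x))

w = np.exp(-beta_wp * n)                  # Boltzmann weights for H_p
rho_th = np.diag(w / w.sum())
omega_free = np.trace(rho_th @ (X @ X + Y @ Y)).real

n_p = 1.0 / (np.exp(beta_wp) - 1.0)       # Bose occupation at this temperature
closed_form = np.exp(-2.0 * x**2 * (2.0 * n_p + 1.0))
# omega_free equals the closed form and is <= 1, so a factor ~7 cannot come
# from the free-photon average: the electron-photon coupling is essential.
```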
In this case, the following Hamiltonian is taken as our starting point \begin{eqnarray} && \tilde{H}=\sum_{k\sigma,\alpha}\varepsilon_{k\sigma\alpha}c_{k\sigma\alpha}^{\dagger}c_{k\sigma\alpha}+\sum_{\sigma}\tilde{\varepsilon}_{d\sigma}n_{d\sigma} +\tilde U n_{d\uparrow}n_{d\downarrow} \nonumber\\ && \hspace{2cm} +\sum_{k\sigma\alpha}\left[\tilde{V}_{\alpha} d_{\sigma}^{\dagger}c_{k\sigma\alpha}+h.c.\right], \label{16b} \end{eqnarray} where $\tilde{\varepsilon}_{d\sigma} = \varepsilon_d - \frac{\lambda^2}{\omega_p} +\frac{\sigma}{2}\Omega_\lambda \Delta \varepsilon$, $\tilde U = U + \frac{2\lambda^2}{\omega_p}$, and $\tilde{V}_{\alpha} = \Omega'_\lambda V_\alpha$ with $\Omega'_\lambda = \langle X \rangle_{\tilde H}$. Here we neglect $\langle Y\rangle_{\tilde H}$ and $\langle X Y\rangle_{\tilde H}$, which are believed to be vanishingly small given the definition of $Y$. Indeed, one can show $\langle F(\frac{\lambda}{\omega_p})\rangle_{H_{p}}=\langle F(-\frac{\lambda}{\omega_p})\rangle_{H_{p}}$, where $H_{p} = \omega_{p}a^{\dagger}a$ is the Hamiltonian of the photons. \section{BRIEF INTRODUCTION TO KELDYSH FORMALISM} In order to study the transport properties of the model Eq. (\ref{16b}), we use the usual Keldysh formalism\cite{Haug2008}. According to Ref.
\cite{Meir1992}, the electronic current formula is given by \begin{equation} I=\frac{2e}{\pi\hbar}\frac{\Gamma_{L}\Gamma_{R}}{\Gamma}\int d\omega\left[ f_{R}\left( \omega\right) -f_{L}\left( \omega\right)\right] \sum_{\sigma}\operatorname{Im}G_{\sigma}^{r}\left(\omega\right), \label{17} \end{equation} where $\Gamma=\sum_{\alpha}\Gamma_{\alpha}$ is the broadening of the dot level $\varepsilon_{d\sigma}$ induced by the hybridization matrix elements $V_{\alpha}$, with $\Gamma_{\alpha}=\pi\rho\left\vert V_{\alpha}\right\vert^{2}$, and \begin{equation} G_{\sigma}^{r}\left( \omega\right) =-i\int dte^{i\omega t}\theta(t)\left\langle \left\{ d_{\sigma}\left(t\right), d_{\sigma}^{\dagger}(0)\right\} \right\rangle \label{18} \end{equation} is the retarded Green's function (GF) of the dot electron. Here $\theta\left(t\right)$ is the Heaviside step function, and $f_{\alpha}\left( \omega\right)=f(\omega-eV_\alpha)$ represents the Fermi distribution function in lead $\alpha$, with $V_\alpha$ the bias voltage of lead $\alpha$. In our practical calculation we adopt a symmetric bias. The differential conductance is defined as $G(V)=dI/dV$. Due to the coupling between the photons and the electrons, the GF in Eq. (\ref{18}) is dressed by photons, which leads to a sideband structure\cite{Mahan2000}. However, in the experiment such a sideband structure has not been observed within the applied bias window. Thus we neglect the sideband effect and consider only the central band; this differs from the measured differential conductance by an overall factor, which does not affect the qualitative behavior of the differential conductance as a function of magnetic field. \section{SLAVE-BOSON MEAN-FIELD THEORY} To study the Kondo effect of the effective Hamiltonian obtained above, we express it in terms of the slave-boson operators\cite{Fang2013,Aguado2002}.
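A minimal numerical sketch of the current formula (\ref{17}), with a hypothetical Lorentzian $\operatorname{Im}G^{r}$ standing in for the dressed dot GF and units $e=\hbar=1$:

```python
# Current and differential conductance from eq. (17) with a hypothetical
# Lorentzian Im G^r for the dot; units e = hbar = 1, symmetric bias.
import numpy as np

GL = GR = 0.5                       # Gamma_L, Gamma_R (hypothetical values)
Gt = GL + GR                        # total broadening Gamma
eps, T = 0.0, 0.01                  # dot level, temperature

w = np.linspace(-10.0, 10.0, 4001)
dw = w[1] - w[0]
im_gr = -0.5 * Gt / ((w - eps) ** 2 + (0.5 * Gt) ** 2)  # Im G^r (Lorentzian)

def fermi(e):
    return 0.5 * (1.0 - np.tanh(e / (2.0 * T)))         # overflow-safe Fermi fn

def current(V):
    df = fermi(w + V / 2.0) - fermi(w - V / 2.0)        # f_R - f_L, bias +/- V/2
    return (2.0 / np.pi) * (GL * GR / Gt) * np.sum(df * im_gr) * dw

V = np.linspace(-0.5, 0.5, 201)
I = np.array([current(v) for v in V])
dIdV = np.gradient(I, V)            # differential conductance G(V) = dI/dV
```

In the full calculation the Lorentzian is replaced by the dressed dot GF for each spin.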
In this way the fermionic operator of the dot is written as a combination of a pseudofermion and a boson operator: $d_\sigma=b^\dag f_\sigma$, where $f_\sigma$ is the pseudofermion which annihilates one ``occupied state" in the dot and $b^\dag$ is a boson operator which creates an ``empty state" in the dot. We consider the limit $\tilde U\rightarrow\infty$ so that the double occupancy of the dot is forbidden. Accordingly, a Lagrange multiplier $\gamma$ is introduced to enforce the constraint, $b^\dag b+\sum_\sigma f^\dag_\sigma f_\sigma=1$. Thus the Hamiltonian in the slave-boson language reads \begin{eqnarray} &&\tilde{H}_{SB}=\sum_{k\alpha\sigma}\varepsilon_{k\alpha\sigma}c_{k\alpha\sigma}^{\dag}c_{k\alpha\sigma}+\sum_{\sigma} \tilde\varepsilon_{d\sigma} f^\dag_\sigma f_\sigma \notag\\ &&+\sum_{k\alpha\sigma}\tilde{V}_{\alpha}\left[ b f_{\sigma}^{\dag}c_{k\alpha\sigma}+h.c.\right]+\gamma\left( b^\dag b+\sum_\sigma f^\dag_\sigma f_\sigma-1\right).\label{eqC1} \end{eqnarray} We solve Eq. (\ref{eqC1}) within the mean-field approach, which is the leading order in a $1/N$ expansion. This approach sets the boson operator $b^\dag(b)$ to a classical, nonfluctuating value $r$, thereby neglecting charge fluctuations.
The slave-boson mean-field Hamiltonian is then given by \begin{eqnarray} &&\tilde{H}_{SBMF}=\sum_{k\alpha\sigma}\varepsilon_{k\alpha\sigma}c_{k\alpha\sigma}^{\dag}c_{k\alpha\sigma}+\sum_{\sigma}\left( \tilde\varepsilon_{d\sigma}+\gamma\right) f^\dag_\sigma f_\sigma \notag\\ &&\hspace{1.2cm}\hspace{-0.7cm}+\sum_{k\alpha\sigma}r\tilde{V}_{\alpha}\left[ f_{\sigma}^{\dag}c_{k\alpha\sigma}+h.c.\right]+\gamma\left( r^{2}-1\right), \label{eqB2} \end{eqnarray} where the two mean-field parameters $r$ and $\gamma$ have to be determined through their saddle-point equations, which minimize the free energy $F =-k_BT \ln[\textrm{Tr}(e^{-\tilde H_{SBMF}/(k_BT )})]$, i.e., \begin{eqnarray} \frac{\partial F}{\partial\gamma}=0, \qquad \frac{\partial F}{\partial r} =0. \label{eqC3} \end{eqnarray} After some calculations, Eqs. (\ref{eqC3}) yield \begin{eqnarray}\label{eqB3} && 1-r^{2}-\sum_{\sigma}\left\langle n_{f\sigma}\right\rangle =0,\\ && \sum_{k\alpha\sigma}r\tilde{V}_{\alpha}\left\langle f_{\sigma}^{\dag}c_{k\alpha\sigma}\right\rangle +\gamma r^{2} =0.
\end{eqnarray} The thermal equilibrium averages above are related to the corresponding retarded GFs through the spectral theorem as follows: \begin{equation} \langle BA\rangle=-\frac{1}{\pi}\int_{-\infty}^{\infty}d\omega f(\omega)\operatorname{Im}\langle\langle A,B\rangle\rangle. \end{equation} Using the equation-of-motion method we obtain \begin{eqnarray} &&\langle\langle c_{k\alpha\sigma},f_{\sigma}^{\dag}\rangle\rangle =\frac{r\tilde{V}_{\alpha}}{\omega-\varepsilon_{k\alpha\sigma}}\left\langle \left\langle f_{\sigma},f_{\sigma}^{\dag}\right\rangle \right\rangle , \label{eqC5}\\ &&\langle\langle f_{\sigma},f_{\sigma}^{\dag}\rangle\rangle =\frac{1}{\omega-\tilde\varepsilon_{d\sigma}-\gamma-r^{2}\sum_{k\alpha} \frac{\tilde{V}_{\alpha}^{2}}{\omega-\varepsilon_{k\alpha\sigma}}} \notag\\ &&\hspace{2cm} =\frac{1}{\omega-\tilde\varepsilon_{d\sigma}-\gamma+ir^{2}\tilde{\Gamma}}. \label{eqC6} \end{eqnarray} By using the spectral theorem we thus obtain \begin{eqnarray} && \sum_{\sigma}\langle n_{f\sigma}\rangle =-\frac{1}{\pi}\sum_{\sigma}\int d\omega f(\omega) \operatorname{Im}\langle\langle f_{\sigma},f_{\sigma}^{\dag}\rangle\rangle,\label{eqC7}\\ &&\sum_{k\sigma\alpha}r\tilde{V}_{\alpha}\langle f_{\sigma}^{\dag}c_{k\sigma\alpha}\rangle =\frac{r^{2}\tilde{\Gamma}}{\pi}\sum_{\sigma}\int d\omega f(\omega) \operatorname{Re}\langle\langle f_{\sigma},f_{\sigma}^{\dag}\rangle\rangle.\label{eqC8} \end{eqnarray} Here we have made the wide-band approximation $-i\tilde{\Gamma}=\sum_{k\alpha}\frac{\tilde{V}_{\alpha}^{2}}{\omega-\varepsilon_{k\sigma\alpha}}$. For simplicity, we replace the symbol $\tilde \Gamma$ by $\Gamma$ in the main text.
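At zero temperature and for a flat band of half-width $D$, the frequency integrals in eqs. (\ref{eqC7}) and (\ref{eqC8}) can be done analytically and the saddle-point equations solved numerically; the sketch below does this for hypothetical parameter values (flat band, symmetric leads, $T=0$).

```python
# Zero-temperature solution of the saddle-point equations for r^2 and
# gamma with a flat band of half-width D; all parameter values hypothetical.
import numpy as np
from scipy.optimize import fsolve

D, Gamma = 50.0, 1.0                 # half bandwidth, broadening tilde-Gamma
eps_d, dE = -4.0, 0.0                # renormalized dot level, Zeeman splitting

def saddle(p):
    z, gam = p                       # z = r^2, gam = Lagrange multiplier
    n_tot, re_sum = 0.0, 0.0
    for s in (+1.0, -1.0):
        e = eps_d + 0.5 * s * dE + gam   # pole position of <<f, f^dag>>
        eta = z * Gamma                  # effective width r^2 * Gamma
        # occupation: -(1/pi) Int_{-D}^{0} Im<<f,f^dag>> dw
        n_tot += (np.arctan(-e / eta) - np.arctan((-D - e) / eta)) / np.pi
        # (1/pi) Int_{-D}^{0} Re<<f,f^dag>> dw
        re_sum += 0.5 * np.log((e**2 + eta**2) / ((D + e)**2 + eta**2)) / np.pi
    return (1.0 - z - n_tot, Gamma * re_sum + gam)

z, gam = fsolve(saddle, x0=(0.1, -eps_d))
# r^2 * Gamma sets the width of the Kondo resonance of <<d, d^dag>>
```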
Substituting (\ref{eqC5}) and (\ref{eqC6}) into (\ref{eqC7}) and (\ref{eqC8}) we have \begin{eqnarray} &&1-r^{2}+\frac{1}{\pi}\sum_{\sigma}\int d\omega f(\omega) \operatorname{Im}\langle\langle f_{\sigma},f_{\sigma}^{\dag}\rangle\rangle = 0, \label{eqB9}\\ &&\frac{\tilde{\Gamma}}{\pi}\sum_{\sigma}\int d\omega f(\omega)\operatorname{Re}\langle\langle f_{\sigma},f_{\sigma}^{\dag}\rangle\rangle +\gamma = 0. \label{eqB10} \end{eqnarray} After solving for $r$ and $\gamma$ we can calculate \begin{equation}\label{eqB11} \langle\langle d_{\sigma},d_{\sigma}^{\dag}\rangle\rangle=r^{2}\langle\langle f_{\sigma},f_{\sigma}^{\dag}\rangle\rangle =\frac{r^{2}}{\omega-\tilde\varepsilon_{d\sigma}-\gamma+ir^{2}\tilde\Gamma}. \end{equation} Now recalling Eq. (\ref{17}) we can turn to calculating the electronic current and then the differential conductance. \begin{figure}[h] \begin{center} \includegraphics[width=0.9\columnwidth]{fig5.jpg} \caption{(Color online) The differential conductance as a function of magnetic field for different microwave frequencies. The resonance match is given by Eq. (\ref{eqd1}) and the parameters used are the same as those in Fig.
3(a) in the main text.}\label{fig5} \end{center} \end{figure} \begin{table}[h] \begin{tabular}{lcccccc} \hline \hline $B(T)$ &0.98 &1.47 &1.68 &1.76 &1.86 &1.96\\ \hline $G_0$ &0.3614 &0.3648 &0.3558 &0.3573 &0.3600 &0.3712\\ \hline $\rho_0$ &2.9450 &2.4264 &1.7424 &2.7515 &3.4200 &4.8905\\ \hline $q$ &4.4958 &4.8379 &5.7301 &4.5688 &4.1269 &3.4676\\ \hline $T_K(\mu eV)$ &157.856 &134.992 &112.161 &130.910 &149.935 &189.478\\ \hline $\varepsilon_K(\mu eV)$ &-15.609 &-13.164 &-8.310 &-20.434 &-25.566 &-33.557\\ \hline \hline \end{tabular} \caption{Fitting parameters for different applied magnetic fields $B$.}\label{tab1} \end{table} \section{SUBPEAK AND/OR SHOULDER STRUCTURES AT HALF AND ONE QUARTER} As can be seen from the experimental results, there exist fine subpeak and/or shoulder structures of the differential conductance at the Fermi level as a function of magnetic field, which are located at about half and one quarter of the resonance magnetic field (see Fig. 4(d) of Ref. \cite{Hemingway2013}). This feature was not pointed out in ref. \cite{Hemingway2013}. To reproduce these features in our match picture, one can consider phenomenologically the following resonance condition \begin{equation} \lambda = \sum_{i=0}^{2}\lambda_{i} \delta(\omega_p - \Omega_{\lambda,i}\Delta\varepsilon),\label{eqd1} \end{equation} where $\lambda_i = 2^{-i}\lambda_0$ is assumed and $\lambda_0$ is considered to be dependent on the microwave frequency. In addition, the renormalization factor $\Omega_{\lambda,i} = 2^i\Omega_\lambda$ is also assumed. We do not know the essential physics behind these subpeaks and/or shoulders. Nevertheless, these features can be captured phenomenologically by our match picture, as shown in Fig. \ref{fig5}. \section{FITTING PARAMETERS OF FANO FORMULA} In Table \ref{tab1} we present the fitting parameters introduced in Eqs. (7) and (8) in the main text for the experimental data given in Fig. 4(a) and (b) for different magnetic fields.
The fitting results are shown in Fig. 4(a) in the main text. The Kondo temperatures obtained are in reasonable agreement with the experimental data, namely about $1\,$K.
\section{Introduction} The $O(N)$-symmetric scalar theories have served for decades as the testing ground of techniques developed for the investigation of the critical behaviour of field theories and statistical models. It comes, therefore, as a surprise that a recent study \cite{yabunaka} has found that their phase structure may be much more complicated than what had been found previously. In particular, it is suggested that, in dimensions $2<d<4$, several nonperturbative fixed points exist, which had not been identified until now. The large-$N$ limit \cite{sphericallargen1,sphericallargen2,sphericallargen3,sphericallargen4,sphericallargen5,zinnjustin} offers the possibility to identify such fixed points analytically, without resorting to perturbation theory. We shall consider the theory in this limit through the Wilsonian approach to the renormalization group (RG) \cite{wilson}. Its various realizations \cite{wegner,polchinski,wetterich,chvvednic,berges} give consistent descriptions of the fixed-point structure of the three-dimensional theory \cite{largenmorris}, in agreement with known results for the Wilson-Fisher (WF) fixed point \cite{4meps} and the Bardeen-Moshe-Bander (BMB) endpoint of the line of tricritical fixed points \cite{bmb1,bmb2,bmb3}. We shall employ the formalism of ref. \cite{wetterich}, leading to the exact Wetterich equation for the functional RG flow of the action. For $N\rightarrow \infty$ the anomalous dimension of the field vanishes and higher-derivative terms in the action are expected to play a minor role. This implies that the derivative expansion of the action \cite{derivexp1,derivexp2,derivexp3} can be truncated at the lowest order, resulting in the local potential approximation (LPA) \cite{wegner,berges,largenmorris,aoki}. The resulting evolution equation for the potential is exact in the sense explained in ref. \cite{largenmorris}. It has been analysed in refs. \cite{tet,marchais} in three dimensions.
In this work, we extend the analysis over the range $2<d<4$, in an attempt to identify new fixed points. \section{Evolution equation for the potential} We consider the theory of an $N$-component scalar field $\phi^a$ with $O(N)$ symmetry in $d$ dimensions. We are interested in the functional RG evolution of the action as a function of a sharp infrared cutoff $k$. We work within the LPA, neglecting the anomalous dimension of the field and higher-derivative terms in the action. We define $\rho=\frac{1}{2}\phi^a \phi_a $, $a=1,\dots,N$, as well as the rescaled field $\tilde{\rho}=k^{2-d} \rho$. We denote derivatives with respect to $\tilde{\rho}$ with primes. We focus on the potential $U_k(\rho)$ and its dimensionless version $u_k(\tilde{\rho})=k^{-d}U_k(\rho)$. In the large-$N$ limit and for a sharp cutoff, the evolution equation for the potential can be written as \cite{tet} \begin{equation} \frac{\partial u'}{\partial t}=-2u'+(d-2)\tilde{\rho} \frac{\partial u'}{\partial \tilde{\rho}} -\frac{NC_d}{1+u'}\frac{\partial u'}{\partial \tilde{\rho}}, \label{evpot} \end{equation} with $t=\ln(k/\Lambda)$ and $C_d^{-1}=2^d\pi^{d/2}\Gamma(d/2)$. This equation can be considered as exact, as explained in ref. \cite{largenmorris}. The crucial assumption is that, for $N\rightarrow \infty$, the contribution from the radial mode is negligible compared to the contribution from the $N$ Goldstone modes. The most general solution of eq. (\ref{evpot}) can be derived with the method of characteristics, generalizing the results of ref. \cite{tet}. It is given by the implicit relation \begin{eqnarray} && \tilde{\rho} -\frac{N C_d}{d-2} ~_2F_1\left( 1,1-\frac{d}{2},2-\frac{d}{2},-u' \right) \nonumber \\ &&~~~~ = e^{(2-d)t}\, G\left(u' e^{2t}\right) \, -\frac{N C_d}{d-2} e^{(2-d)t} ~_2F_1\left( 1,1-\frac{d}{2},2-\frac{d}{2},-u' e^{2t} \right), \label{sol} \end{eqnarray} with $_2F_1\left( a,b,c,z \right)$ a hypergeometric function.
The function $G$ is determined by the initial condition, which is given by the form of the potential at the microscopic scale $k=\Lambda$, i.e. $u'_\Lambda(\tilde{\rho})=\Lambda^{-2} U_\Lambda'(\rho)$. It is obtained by inverting this relation and solving for $\tilde{\rho}$ in terms of $u'$, so that $G(u')=\tilde{\rho}(u')|_{t=0}$. The effective action is determined by the evolution from $k=\Lambda$ to $k=0$. We are interested in determining possible fixed points arising in the context of the general solution (\ref{sol}). Infrared fixed points are approached for $k\rightarrow 0$ or $t\rightarrow -\infty$. For finite $u'$, the last argument of the hypergeometric function in the rhs of eq. (\ref{sol}) vanishes in this limit. Using the expansion \begin{equation} _2F_1\left( 1,1-\frac{d}{2},2-\frac{d}{2},-z \right)=1+\frac{d-2}{4-d}z -\frac{d-2}{6-d}z^2+{\cal O}(z^3) \label{expan0} \end{equation} we obtain \begin{equation} \tilde{\rho} -\frac{N C_d}{d-2} ~_2F_1\left( 1,1-\frac{d}{2},2-\frac{d}{2},-u' \right) = e^{(2-d)t} \left( G\left( u' e^{2t} \right) \, -\frac{N C_d}{d-2} \right). \label{wff} \end{equation} The $t$-dependence in the rhs must be eliminated for a fixed-point solution to exist. This can be achieved for appropriate functions $G$. For example, we may assume that the initial condition for the potential at $k=\Lambda$ or $t=0$ is $u_\Lambda(\tilde{\rho})=\lambda_\Lambda (\tilde{\rho}-\kappa_\Lambda)^2/2$, so that $G(z)=\kappa_\Lambda+z/\lambda_\Lambda$. Through the unique fine tuning $\kappa_\Lambda=N C_d/(d-2)$ the rhs vanishes for $t\rightarrow -\infty$. The scale-independent solution, given by the implicit relation \begin{equation} \tilde{\rho} -\frac{N C_d}{d-2} ~_2F_1\left( 1,1-\frac{d}{2},2-\frac{d}{2},-u'_* \right) = 0, \label{wf} \end{equation} describes the Wilson-Fisher fixed point.
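The implicit relation (\ref{wf}) is easy to handle numerically; the sketch below inverts it with a root finder for $d=3$, setting $N C_d = 1$ (the bracketing interval is an ad hoc choice):

```python
# Inverting the fixed-point condition (wf) numerically for d = 3,
# with N*C_d set to 1; the root bracket is an ad hoc choice.
import numpy as np
from scipy.special import hyp2f1
from scipy.optimize import brentq

d = 3.0
kappa = 1.0 / (d - 2.0)              # kappa_* = N C_d / (d - 2), with N C_d = 1

def rho_of_up(up):
    # lhs of eq. (wf) solved for rho as a function of u'_*
    return kappa * hyp2f1(1.0, 1.0 - d / 2.0, 2.0 - d / 2.0, -up)

def up_star(rho):
    return brentq(lambda up: rho_of_up(up) - rho, -0.999, 1.0e6)

assert abs(up_star(kappa)) < 1e-9    # the minimum sits at rho = kappa_*
h = 1e-6                             # numerical u''_* at kappa_*
upp = (up_star(kappa + h) - up_star(kappa - h)) / (2.0 * h)
assert abs(upp - (4.0 - d)) < 1e-3   # expected (4 - d)/(N C_d)
```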
Near the minimum of the potential, where $u_*'\simeq 0$, we have \begin{equation} \tilde{\rho} -\frac{N C_d}{d-2} -\frac{N C_d}{4-d}u_*'+\frac{NC_d}{6-d} (u_*')^2+{\cal O}\left((u_*')^3\right) =0. \label{wfexp} \end{equation} From this relation we can deduce that the minimum is located at $\tilde{\rho}=NC_d/(d-2)\equiv \kappa_*$, while the lowest derivatives of the potential at this point are $u''_*(\kappa_*)=(4-d)\,(NC_d)^{-1}$, $u'''_*(\kappa_*)=2(4-d)^3/(6-d)\, (NC_d)^{-2}$. For large $u_*'$, we can use the expansion \begin{equation} _2F_1\left( 1,1-\frac{d}{2},2-\frac{d}{2},-z \right)= \Gamma\left(2-\frac{d}{2}\right)\Gamma\left(\frac{d}{2} \right)z^{\frac{d}{2}-1} +{\cal O}\left( z^{\frac{d}{2}-2} \right) \label{expaninf} \end{equation} in order to obtain the asymptotic form of the potential: $u_*(\tilde{\rho}) \sim \tilde{\rho}^{d/(d-2)}$. This result is consistent with the expected critical exponent $\delta=(d+2)/(d-2)$ for vanishing anomalous dimension. Finally, we note that the hypergeometric function is singular when its argument equals 1, i.e. at $u'=-1$. This implies that, in the regions of negative $u'$, the unrescaled potential $U_k(\rho)$ becomes flat, with its curvature scaling as $-k^{2}$ for $k\rightarrow 0$ \cite{convex}. We are interested in the existence of additional fixed points. In $d=3$ it is known that, apart from the Wilson-Fisher fixed point, a line of tricritical fixed points exists, terminating at the BMB fixed point \cite{bmb1,bmb2,bmb3}. In the following section we describe the flows between these fixed points in terms of the potential, in order to obtain useful intuition for the investigation of the case of general $d$. Our analysis extends the picture of refs. \cite{tet,marchais} away from the fixed points. \section{$d=3$} \begin{figure} \includegraphics[width=0.7\textwidth]{plot3.pdf} \caption{The evolution of the potential in $d=3$ on the critical surface.
The continuous lines depict the potential during its initial approach to the tricritical fixed point, while the dashed lines depict its subsequent evolution towards the Wilson-Fisher fixed point. } \label{fig1} \end{figure} For $d=3$, the solution (\ref{sol}) reproduces the one presented in ref. \cite{tet}, through use of the identity \begin{eqnarray} &~_2F_1\left( 1,-\frac{1}{2},\frac{1}{2},-z \right)=\sqrt{z}\,\arctan(\sqrt{z})+1 & ~~~~~z>0 \nonumber \\ &~_2F_1\left(1,-\frac{1}{2},\frac{1}{2},-z \right)=\frac{1}{2}\sqrt{-z}\ln\left(\frac{1-\sqrt{-z}}{1+\sqrt{-z}}\right)+1 & ~~~~~z<0. \label{hyperd3} \end{eqnarray} In order to deduce the phase diagram of the three-dimensional theory, we consider a bare potential of the form \begin{equation} u'_\Lambda(\tilde{\rho})=\lambda_\Lambda(\tilde{\rho}-\kappa_\Lambda )+\nu_\Lambda(\tilde{\rho}-\kappa_\Lambda )^2. \label{bare3d} \end{equation} The solution (\ref{sol}) can be written as \begin{equation} \tilde{\rho}-\kappa_* ~_2F_1\left( 1,-\frac{1}{2},\frac{1}{2},-u' \right) = e^{-t}\left[ G\left(u' e^{2t}\right) \, -\kappa_* ~_2F_1\left( 1,-\frac{1}{2},\frac{1}{2},-u' e^{2t} \right)\right], \label{sol3d} \end{equation} with $\kappa_*=N/(4\pi^2)$. The function $G(z)$ is obtained by solving eq. (\ref{bare3d}) for $\tilde{\rho}$ as a function of $u'_\Lambda$. It is given by \begin{eqnarray} G(z)&=&\kappa_\Lambda+\frac{1}{2\nu_\Lambda} \left(- \lambda_\Lambda \pm \sqrt{\lambda^2_\Lambda+4\nu_\Lambda\, z}\right), \end{eqnarray} with the two branches covering different ranges of $\tilde{\rho}$. Let us impose the fine tuning $\kappa_\Lambda=\kappa_*$, which puts the theory on the critical surface. For $\lambda_\Lambda\not= 0$, we have $G(u' e^{2t})\simeq \kappa_*+u' e^{2t}/\lambda_\Lambda$ for $t\rightarrow -\infty$. We also have $_2F_1\left( 1,-{1}/{2},{1}/{2},-u' e^{2t} \right)\simeq 1+u' e^{2t}$. As a result, the rhs of eq. (\ref{sol3d}) vanishes in this limit.
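The identity (\ref{hyperd3}) can be checked numerically on both branches, e.g.:

```python
# Numerical check of the d = 3 identity (hyperd3).
import numpy as np
from scipy.special import hyp2f1

for z in np.linspace(0.01, 5.0, 8):              # z > 0 branch
    lhs = hyp2f1(1.0, -0.5, 0.5, -z)
    rhs = np.sqrt(z) * np.arctan(np.sqrt(z)) + 1.0
    assert abs(lhs - rhs) < 1e-10

for z in np.linspace(-0.99, -0.01, 8):           # z < 0 branch
    s = np.sqrt(-z)
    lhs = hyp2f1(1.0, -0.5, 0.5, -z)
    rhs = 0.5 * s * np.log((1.0 - s) / (1.0 + s)) + 1.0
    assert abs(lhs - rhs) < 1e-10
```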
The evolution leads to the Wilson-Fisher fixed point discussed in the previous section. The additional fine tuning $\lambda_\Lambda= 0$ results in a different situation. For $t\rightarrow -\infty$ the rhs of eq. (\ref{sol3d}) becomes $t$-independent and we obtain \begin{equation} \tilde{\rho}-\kappa_* ~_2F_1\left( 1,-\frac{1}{2},\frac{1}{2},-u_*' \right) =\pm \frac{1}{\sqrt{\nu_\Lambda}}\sqrt{u_*'}. \label{sol3dtr} \end{equation} A whole line of tricritical fixed points can be approached, parametrized by $\nu_\Lambda$ \cite{marchais}. Each of them is expected to be unstable towards the Wilson-Fisher fixed point. The relative stability of the fixed points can be checked explicitly by considering the full solution (\ref{sol}). In fig. \ref{fig1} we depict the evolution of the potential, as predicted by this expression, for $\lambda_\Lambda=10^{-7}$ and $\nu_\Lambda=0.3$. We have set $N C_3=1$ through a redefinition of $\tilde{\rho}$ and $u'$. We have indicated by UV the initial form of the potential at $k=\Lambda$ and by IR its form for $k\rightarrow 0$. The continuous lines depict the potential at various values of $t$, with step equal to $-1$, during its initial approach to the tricritical fixed point (TP). The dashed lines depict its subsequent evolution towards the Wilson-Fisher fixed point (WF). We shall not analyse in detail the tricritical line, as this has been done elsewhere \cite{bmb1,bmb2,bmb3,marchais}. We note that it connects the Gaussian fixed point, for $\nu_\Lambda=0$, with a point approached for a value of $\nu_\Lambda$ for which the solution of eq. (\ref{sol3dtr}) diverges at the origin. This endpoint of the tricritical line is the BMB fixed point \cite{bmb1}. The corresponding value of $\nu_\Lambda$ can be derived by using the expansion (\ref{expaninf}) of the hypergeometric function near the origin, where the fixed-point potential diverges. It is given by $\nu_\Lambda=2/(\pi \kappa_*)$.
Taking into account our definition of $\rho$, it can be checked that this value is consistent with the result of refs. \cite{bmb1,bmb2,bmb3}. The theory also displays first-order phase transitions if the potential develops two minima. It was shown in ref. \cite{tet} that, for a bare potential of the form (\ref{bare3d}), the surface $\kappa_\Lambda=\kappa_*+\lambda_\Lambda/\nu_\Lambda$ corresponds to first-order phase transitions. This surface intersects the surface $\kappa_\Lambda=\kappa_*$ of second-order phase transitions on the tricritical line $\lambda_\Lambda=0$. \section{$2<d<4$} \begin{figure}[!t] \centering $$ \includegraphics[width=0.5\textwidth]{plot1.pdf}\qquad \includegraphics[width=0.5\textwidth]{plot2.pdf}\qquad $$ \caption{\em The absolute value and the argument of the complex potential at the multicritical point, in dimensions $d=3.98,3.5,3,2.85,8/3$.} \label{fig2} \end{figure} We next turn to the search for new infrared fixed points with more than one relevant direction. The presence of the Gaussian fixed point, with $u'=0$, is obvious from eq. (\ref{evpot}). For any nontrivial fixed point, the rhs of eq. (\ref{sol}) must become independent of $t$ in the limit $t\rightarrow -\infty$. For $d>2$ the hypergeometric function can be approximated through the expansion (\ref{expan0}) in this limit. An expression independent of $t$ requires an appropriate choice of the function $G$, determined through the initial condition. More precisely, we must have $G(z)- \kappa_* \sim z^{\frac{d-2}{2}}$ for $z \rightarrow 0$. This can be achieved through an initial condition of the form \begin{equation} u_\Lambda(\tilde{\rho}) =\frac{d-2}{d}\nu_\Lambda (\tilde{\rho}-\kappa_\Lambda)^{\frac{d}{d-2}}, \label{initcondi} \end{equation} where the parametrization of the multiplicative constant has been introduced for later convenience.
We obtain \begin{equation} G(z)=\kappa_\Lambda+\nu_\Lambda^{\frac{2-d}{2}} z^{\frac{d-2}{2}}. \label{gzz} \end{equation} The tuning $\kappa_\Lambda= \kappa_*$ results in a fixed-point potential given by the solution of \begin{equation} \tilde{\rho} -\frac{N C_d}{d-2} ~_2F_1\left( 1,1-\frac{d}{2},2-\frac{d}{2},-u'_* \right) =\nu_\Lambda^{\frac{2-d}{2}} (u'_*)^{\frac{d-2}{2}}, \label{multi} \end{equation} where we have taken the limit $t\rightarrow -\infty$ with finite $u'_*$. The fixed-point potential has $u'_*=0$ at $\tilde{\rho}=\kappa_*$, similarly to the bare potential $u_\Lambda$. It is apparent from eqs. (\ref{initcondi}), (\ref{multi}) that a nonsingular real potential for all values of $\tilde{\rho}\geq 0$ can be obtained only if $d/(d-2)$ takes positive integer values $n$, i.e. at dimensions $d=2n/(n-1)$. If we require $2<d<4$, we have $n \geq 3$. Approaching a fixed point requires, apart from the tuning of $\kappa_\Lambda$, the absence of all terms $(\tilde{\rho}-\kappa_\Lambda)^m$ with $1<m<n$ in the bare potential. (The absence of the term with $m=1$ is equivalent to the tuning of $\kappa_\Lambda$.) This means that the fixed point at a given dimension $d=2n/(n-1)$ has $n-1$ relevant directions and can be characterized as a multicritical point. For $\tilde{\rho} < \kappa_*$ the form of the fixed-point potential depends on $n$. For $n$ odd we have $u'_*(\tilde{\rho})>0$ for $\tilde{\rho}<\kappa_*$, while for $n$ even we have $u'_*(\tilde{\rho})<0$. In the second case, the potential at the origin is constrained by the singularity of the hypergeometric function to satisfy $u'_*(0)>-1$. For $d\not= 2n/(n-1)$, the initial condition (\ref{initcondi}) and the solution (\ref{multi}) develop certain pathologies. For $\tilde{\rho} > \kappa_*$, the potential is real. For $\tilde{\rho}\gg \kappa_*$ we have $u'_*\gg 1$, so that the hypergeometric function in eq. (\ref{multi}) has the expansion (\ref{expaninf}).
As we discussed above, we obtain the asymptotic form of the potential $u_*(\tilde{\rho}) \sim \tilde{\rho}^{d/(d-2)}$ and a critical exponent $\delta=(d+2)/(d-2)$. However, divergences in the higher derivatives of both the bare and fixed-point potentials appear as one approaches the point $\tilde{\rho}=\kappa_*$, at which $u'_\Lambda=u'_*=0$. The situation is more problematic for $\tilde{\rho}<\kappa_*$, where eqs. (\ref{initcondi}), (\ref{multi}) indicate that the potential must become complex. This leads to the conclusion that a continuous range of real fixed-point solutions as a function of $d$ does not exist in the large-$N$ limit. It must be pointed out that a real solution can be constructed through an initial condition of the form \begin{equation} u_\Lambda(\tilde{\rho}) =\pm \frac{d-2}{d}\nu_\Lambda |\tilde{\rho}-\kappa_\Lambda|^{\frac{d}{d-2}}, \label{initcondib} \end{equation} where the positive sign is used for $\tilde{\rho}>\kappa_*$, while the choice is ambiguous for $\tilde{\rho}<\kappa_*$. Both signs lead to real potentials, but for both choices the potentials are nonanalytic at $\tilde{\rho}=\kappa_*$. It cannot be excluded that the nonanalyticity has a physical origin. On the other hand, it is not possible to have a continuous dependence of the fixed-point potentials on $d$. The real and continuous solutions at $d = 2n/(n-1)$ result from initial conditions given by (\ref{initcondib}) with one of the two signs, but switch from one sign to the other as $n$ is increased. The only way to preserve a notion of analyticity and a continuous dependence on $d$ seems to be to consider a continuation of the potential in the complex plane. Even though we cannot offer a physical interpretation of the potential, such a construction is interesting because it may be linked to the picture presented in ref. \cite{yabunaka}.
There, it is found that fixed-point solutions that exist for a continuous range of increasing values of $N$ collide with each other at some critical value $N_c(d)$ and disappear, consistently with what has been seen through the $\epsilon$-expansion \cite{stergiou}. The collision of two fixed points is expected to cause them to move into the complex plane \cite{kaplan}. In this sense, the presence of complex fixed-point solutions for the full potential at $N\rightarrow \infty$ would be consistent with the findings of ref. \cite{yabunaka}. In fig. \ref{fig2} we present complex solutions of the fixed-point equation (\ref{multi}) for $\nu_\Lambda=0.3$. We have set $N C_d=1$ through a redefinition of $\tilde{\rho}$ and $u'$. The left plot depicts the absolute value and the right plot the argument of the complex potential at the multicritical point. For $\tilde{\rho}>\kappa_*=1$ the solution is real and the argument vanishes. For $\tilde{\rho}<1$ the solution is real and continuous only at $d=3$ and $8/3$, and in general at $d=2n/(n-1)$, as discussed earlier. For any other value of $d$, the bare and fixed-point potentials have branch cuts along the negative real axis. In fig. \ref{fig2} we depict the argument of the potential as the negative real axis is approached from above. The argument has the opposite sign when the negative real axis is approached from below. The potential is discontinuous as the negative real axis is crossed, except at $d=2n/(n-1)$. On the other hand, the dependence of the potential on $d$ is continuous. In particular, for $\tilde{\rho} < \kappa_\Lambda$, the potential switches automatically from solutions with $u'>0$ to ones with $u'<0$ and back, as $n$ is increased. \section{Conclusions} Our analysis aimed at examining the presence of nonperturbative fixed-point solutions of the $O(N)$-symmetric scalar theory in dimensions $2<d<4$ for $N\rightarrow \infty$. The motivation arose through the findings of ref.
\cite{yabunaka}, which indicate the presence of previously unknown fixed-point solutions for finite $N$. Some of the new solutions collide with each other at some critical value $N_c(d)$ and disappear. One expects the presence of complex solutions beyond this critical value \cite{kaplan}. However, some novel real solutions are expected to persist in the limit $N\rightarrow \infty$ \cite{priv}. Our aim was to identify them through an analytical treatment of the RG equation. In this respect, a crucial point is our assumption about what constitutes the leading contribution for large $N$. For vanishing anomalous dimension, and under the assumption that higher-derivative terms in the action can be neglected, the exact Wetterich equation is reduced to a partial differential equation for the potential \cite{wetterich}. The renormalization of the potential is induced by a term proportional to $N$, arising from the contributions of the Goldstone modes, and a term arising from the contribution of the unique radial mode. Our large-$N$ approximation consists in neglecting the second term. We presented the exact solution (\ref{sol}) of the large-$N$ equation (\ref{evpot}) for the evolution of the potential towards the infrared, starting from an initial condition at an ultraviolet energy scale. The presence of critical points in dimensions $2<d<4$, the necessary fine tunings of the initial condition in order to approach them during the evolution, as well as their relative stability, can be deduced from eq. (\ref{sol}) by specifying the function $G(z)$. Our analysis of the previous two sections reproduced the known critical and multicritical points, including the Wilson-Fisher fixed point and the BMB fixed point. However, it did not reveal any new analytic solutions. Even though we used a sharp cutoff for our analysis, we expect similar results for other cutoff functions. 
For example, the three-dimensional fixed-point structure that we identified is the same as the one found in ref. \cite{marchais} with a different cutoff. By continuing the potential in the complex plane, we obtained a class of solutions with a branch-cut discontinuity along the negative real axis and a continuous dependence on $d$. These solutions become real at specific values of $d$, thus reproducing the known multicritical points. The presence of complex fixed points is consistent with the finding of ref. \cite{yabunaka} that fixed-point solutions that exist for finite $N$ collide with each other at some critical value $N_c(d)$ and disappear. On the other hand, it is expected that some of the real solutions presented in ref. \cite{yabunaka} survive for $N\rightarrow \infty$ \cite{priv}. No such solutions were found through our analysis. The only new real solutions we found display discontinuities or singularities in the higher derivatives of the potential at its minimum. They can be obtained from an initial condition given by eq. (\ref{initcondib}), for both signs, as discussed in the previous section. A natural question is whether some of the numerical solutions presented in ref. \cite{yabunaka} display similar discontinuities or singularities, so that they can be identified with our solutions. Another possibility is that our assumption that the radial mode gives a contribution subleading in $1/N$ is violated by the novel solutions \cite{yabunaka,priv}. \subsubsection*{Acknowledgments} N.T. would like to thank B. Delamotte, M. Moshe, A. Stergiou, S. Yabunaka for useful discussions. A large part of this work was carried out while N.T. was visiting the Theoretical Physics Department of CERN.
\section{Introduction} Recent years have seen a great expansion of topological quantum materials beyond time-reversal-invariant topological insulators \cite{kane,zhang}, driven by the search for symmetry-protected topological (SPT) states of matter that are distinct from trivial states only in the presence of a certain symmetry. This underlying symmetry can be associated with conservation of internal quantum numbers such as charge and spin \cite{ludwig, wen,senthil}, or with spatial operations such as rotation and reflection~\cite{andofu}. Since spatial symmetry is a common property of all crystals, a wide array of topological band insulators protected by various crystal symmetries, commonly referred to as topological crystalline insulators (TCIs) \cite{fu}, has been theorized. The hallmark of a TCI is the existence of topologically protected gapless excitations on surfaces that preserve the relevant crystal symmetry. A notable class of TCIs protected by reflection symmetry was predicted and observed in the IV-VI semiconductors Sn$_{1-x}$Pb$_x$(Te,Se) \cite{hsieh,ando,story,hasan}, and the symmetry protection of the topological surface states has been demonstrated \cite{madhavan1, madhavan2, arpes}. More recently, TCIs have been generalized to band insulators with magnetic point group symmetries \cite{Sato, Fang}, nonsymmorphic symmetries~\cite{Shiozaki, VanLeeuwen,Sid,Fang,Haruki,Sato2}, and with both glide reflection and time-reversal symmetry \cite{Sato2, hourglass, Aris}. In addition, topological insulators protected by translation \cite{FuKaneMele,Stern} and magnetic translation symmetry \cite{Moore} were studied in early works. The interplay between topology and crystallography is continuing to knit together abstract mathematics and real materials. Recently, a new type of electronic TCI protected by reflection symmetry has been theoretically constructed \cite{Hermele}, which is enabled by electron interactions and does not exist in free fermion systems. 
In a broader context, interaction-enabled topological crystalline phases have also been found in fermion superconductors \cite{Hughes} and boson insulators \cite{Oshikawa, Chen, Cirac, Xu, Kimchi, Ying, Yoshida, HermeleChen}. Such phases are now attracting wide attention, and it is of great interest to find their material realizations and experimental signatures. In this work, we find a new class of interaction-enabled topological crystalline insulators in two and three dimensions, which are protected by time-reversal ($\mathcal{T}$) and reflection/rotation symmetry ($\mathcal{R}$), or simply the combined symmetry $\mathcal{R} \mathcal{T}$. This phase exists in systems of spin-$\frac{1}{2}$ electrons with spin-orbit interaction, and cannot be adiabatically connected to any Slater insulator in the presence of $\mathcal{R} \mathcal{T}$ symmetry. Instead, this phase admits a natural description in terms of a magnetic system of interacting spins, and is hence termed ``topological crystalline magnets'' (TCMs). A distinctive feature of TCMs is the presence of gapless spin excitations on the edge parallel to the axis of reflection. These edge states exhibit a strongly anisotropic response to magnetic fields in directions parallel and perpendicular to the edge. Our model for two- and three-dimensional TCMs is adiabatically connected to an array of {\it decoupled} one-dimensional symmetry-protected topological (SPT) states, on which the $\mathcal{R} \mathcal{T}$ symmetry acts as an internal anti-unitary $\mathbb{Z}_2$ symmetry. This stacking approach provides a unifying description of all previously known topological crystalline insulators \cite{Hermele}, both with \cite{IsobeFu, Furusaki} and without \cite{Fulga, Ezawa} interactions. The one-dimensional SPT state serving as the building block of our higher-dimensional TCMs looks superficially similar to, but, in fact, is remarkably different from the Affleck, Kennedy, Lieb, and Tasaki (AKLT) state~\cite{aklt1,aklt2}. 
The AKLT state belongs to the Haldane phase, which is a \emph{bosonic} SPT phase protected, for example, by the dihedral ($\mathbb{Z}_2\times\mathbb{Z}_2$) symmetry or the time-reversal symmetry~\cite{Oshikawa}. However, the Haldane phase is not a \emph{fermionic} SPT phase and is hence trivial as an electronic phase~\cite{White,Scalapino,Rosch}. Namely, when we decompose the $S=1$ spins of the AKLT model into \emph{mobile} electrons with spin-$1/2$, the ground state is adiabatically deformable into a trivial band insulator~\cite{White,Scalapino,Rosch} while keeping the dihedral and the time-reversal symmetry. In contrast, our 1D TCM state is a robust fermionic SPT phase protected by $\mathcal{R} \mathcal{T}$, as we shall now see. \begin{figure} \includegraphics[width=0.8\linewidth]{aklt.pdf} \caption{The 1D model. Each gray bond represents a singlet pair formed by two neighboring $\hat{\vec{\Gamma}}_{\vec{x}}^\mu$'s and orange dots illustrate the edge degrees of freedom. A gapless edge state appears on each edge of a finite-size system. The edge degrees of freedom satisfy $(\hat{\mathcal{R}}\hat{\mathcal{T}})^2=-\hat{I}$, which is distinct from physical electrons or edge states of noninteracting topological insulators. \label{fig}} \end{figure} \section{1D model} Our 1D model (Fig.~\ref{fig}) is built on a four-dimensional Hilbert space $\mathcal{H}_x$ on each site, arising from the spin and orbital degrees of freedom of an {\it even} number of spin-$\frac{1}{2}$ electrons. The time-reversal operator $\hat{\mathcal{T}}$ thus satisfies $\hat{\mathcal{T}}^2=(-\hat{I})^{2n}=+\hat{I}$ on $\mathcal{H}_x$. As the simplest realization of such an anti-unitary symmetry we take the complex conjugation $\mathcal{T}=\mathcal{K}$. 
We also assume that states in $\mathcal{H}_x$ are all even or all odd under a spatial symmetry $\mathcal{R}$, which is either the reflection about the $xz$ plane $(x,y,z) \rightarrow (x,-y,z)$ or the $\pi$-rotation about $x$-axis $(x,y,z) \rightarrow (x,-y,-z)$. The operator $\hat{\mathcal{R}}$ is hence represented by the identity operator $\hat{\mathcal{R}}=\hat{I}$ on $\mathcal{H}_x$. In one dimension $\mathcal{R}$ is essentially an internal symmetry, but will become a true spatial symmetry in higher dimensional cases to be studied later. As an explicit example, $\mathcal{H}_x$ can be identified as a subset of the states of two spin-$\frac{1}{2}$ electrons occupying two orbitals. Assuming each orbital is invariant under reflection or rotation, the operator $\hat{\mathcal{R}}$ only acts on the spin part of the two-electron wavefunction. There are in total six two-electron states, consisting of spin-singlet states formed by two electrons on the same orbital, as well as spin-singlet and spin-triplet states formed by two electrons on different orbitals. We denote the electron operators associated with these two orbitals by $\hat{c}_{x,s}^\dagger$ and $\hat{d}_{x,s}^\dagger$ respectively, where $s={\uparrow},{\downarrow}$ is the spin projection along the $z$ axis. 
Then, out of the six two-electron states, the following four satisfy $\hat{\mathcal{R}}|n\rangle_x=(+1)|n\rangle_x$ and $\hat{\mathcal{T}}|n\rangle_x=(+1)|n\rangle_x$ ($n=1,2,3,4$) and span the desired Hilbert space $\mathcal{H}_x$: \begin{eqnarray} |1\rangle_x&\equiv&\hat{c}_{x\uparrow}^\dagger \hat{c}_{x\downarrow}^\dagger |0\rangle,\\ |2\rangle_x&\equiv&\hat{d}_{x\uparrow}^\dagger \hat{d}_{x\downarrow}^\dagger |0\rangle,\\ |3\rangle_x&\equiv&\frac{1}{\sqrt{2}}(\hat{c}_{x\uparrow}^\dagger \hat{d}_{x\downarrow}^\dagger- \hat{c}_{x\downarrow}^\dagger \hat{d}_{x\uparrow}^\dagger) |0\rangle,\\ |4\rangle_x&\equiv&\frac{1}{\sqrt{2}}(\hat{c}_{x\uparrow}^\dagger \hat{d}_{x\uparrow}^\dagger+ \hat{c}_{x\downarrow}^\dagger \hat{d}_{x\downarrow}^\dagger) |0\rangle. \end{eqnarray} The remaining two states can also be included in the following discussion, but as long as their energy level is set much higher than these four states, they will not affect the topological property of our ground state. The 1D Hamiltonian for a finite chain $1\leq x\leq L$ reads \begin{equation} \hat{H}_{\text{1D}}=J\sum_{x=1}^{L-1}\hat{\vec{\Gamma}}^1_x \cdot \hat{\vec{\Gamma}}^2_{x+1},\label{1D} \end{equation} where both $\hat{\vec{\Gamma}}^1$ and $\hat{\vec{\Gamma}}^2$ are a set of three Hermitian operators that generate the $SU(2)$ algebra and mutually commute, i.e., \begin{eqnarray} [\hat{\Gamma}^{\mu a},\hat{\Gamma}^{\mu b}]&=&i \epsilon^{abc}\hat{\Gamma}^{\mu c}, \;\; [\hat{\Gamma}^{1 a}, \hat{\Gamma}^{2 b}] =0 \end{eqnarray} with $a,b=x,y,z$ and $\mu=1,2$. The components of these $\Gamma$ operators are explicitly given by the following $4\times4$ matrices in the basis of $|n\rangle$ \begin{eqnarray} \vec{\Gamma}^1 &\equiv& \frac{1}{2}(-\sigma^z\otimes\sigma^y,-\sigma^y\otimes\sigma^0,-\sigma^x\otimes\sigma^y),\\ \vec{\Gamma}^2&\equiv& \frac{1}{2}(-\sigma^0\otimes\sigma^y,\sigma^y\otimes\sigma^z,-\sigma^y\otimes\sigma^x). 
\end{eqnarray} Note that $\vec{\Gamma}^{1,2}$ are pure imaginary and are hence odd under time-reversal symmetry $\mathcal{T}$. The Hamiltonian \eqref{1D} consists of bilinears of $\Gamma$'s and is therefore time-reversal invariant. It is also invariant under $\mathcal{R}$ since $\mathcal{R}$ does not transform $\Gamma$ at all. To analyze the topological nature of the ground state of $\hat{H}_{\text{1D}}$, it is more convenient to switch the basis of $\mathcal{H}_x$ from $\{|n\rangle_x \}_{n=1,2,3,4}$ to $\{|s\rangle_x^1\otimes|s'\rangle_x^2\}_{s,s'=\pm}$ by the local linear transformation $|s\rangle_x^1\otimes|s'\rangle_x^2=\sum_{n}|n\rangle_x U_{n,ss'}$: \begin{eqnarray} |+\rangle_x^1\otimes|+\rangle_x^2&\equiv&\frac{1}{\sqrt{2}}(|1\rangle_x-i|4\rangle_x),\label{trans1}\\ |+\rangle_x^1\otimes|-\rangle_x^2&\equiv&\frac{1}{\sqrt{2}}(|3\rangle_x-i|2\rangle_x),\\ |-\rangle_x^1\otimes|+\rangle_x^2&\equiv&-\frac{1}{\sqrt{2}}(|3\rangle_x+i|2\rangle_x),\\ |-\rangle_x^1\otimes|-\rangle_x^2&\equiv&\frac{1}{\sqrt{2}}(|1\rangle_x+i|4\rangle_x).\label{trans4} \end{eqnarray} In this new basis, $\hat{\vec{\Gamma}}_x^\mu$ is nothing but the spin operator acting on $\{|s\rangle_x^\mu\}_{s=\pm}$, \begin{equation} U^\dagger\vec{\Gamma}^1U=\frac{1}{2}\vec{\sigma}\otimes\sigma^0,\quad U^\dagger\vec{\Gamma}^2U=\frac{1}{2}\sigma^0\otimes\vec{\sigma}.\label{spin} \end{equation} For example, the usual spin algebras such as $\hat{\Gamma}_x^{\mu z}|\pm\rangle_x^\mu=\pm\frac{1}{2}|\pm\rangle_x^{\mu}$ and $(\hat{\Gamma}_x^{\mu x}\pm i\hat{\Gamma}_x^{\mu y})|\mp\rangle_x^{\mu}=|\pm\rangle_x^\mu$ hold. Therefore, $\hat{H}_{\text{1D}}$ in Eq.~\eqref{1D} is just an antiferromagnetic spin chain whose exchange coupling is nonzero in every other bond. 
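The operator algebra above is straightforward to verify numerically. The following sketch (our illustration, not part of the original paper; it assumes the basis ordering $|{+}{+}\rangle,|{+}{-}\rangle,|{-}{+}\rangle,|{-}{-}\rangle$ for the columns of $U$ as read off from eqs.~\eqref{trans1}--\eqref{trans4}) checks the $SU(2)$ commutation relations, the fact that the $\Gamma$'s are purely imaginary, and the basis change of Eq.~\eqref{spin}:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

# Gamma operators in the |n> basis, as defined in the text
G1 = [-0.5 * kron(sz, sy), -0.5 * kron(sy, s0), -0.5 * kron(sx, sy)]
G2 = [-0.5 * kron(s0, sy), +0.5 * kron(sy, sz), -0.5 * kron(sy, sx)]

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

def comm(A, B):
    return A @ B - B @ A

# SU(2) algebra within each set, and commutation between the two sets
for G in (G1, G2):
    for a in range(3):
        for b in range(3):
            target = sum(1j * eps[a, b, c] * G[c] for c in range(3))
            assert np.allclose(comm(G[a], G[b]), target)
for a in range(3):
    for b in range(3):
        assert np.allclose(comm(G1[a], G2[b]), 0)

# All components are purely imaginary, hence odd under T = K
for G in (G1, G2):
    for Ga in G:
        assert np.allclose(Ga.real, 0)

# Basis change of eqs. (trans1)-(trans4); columns: |++>, |+->, |-+>, |-->
U = np.array([[1, 0, 0, 1],
              [0, -1j, -1j, 0],
              [0, 1, -1, 0],
              [-1j, 0, 0, 1j]], dtype=complex) / np.sqrt(2)
assert np.allclose(U.conj().T @ U, np.eye(4))  # U is unitary

# Eq. (spin): the Gammas become two independent spin-1/2 operators
spin = [sx / 2, sy / 2, sz / 2]
for a in range(3):
    assert np.allclose(U.conj().T @ G1[a] @ U, kron(spin[a], s0))
    assert np.allclose(U.conj().T @ G2[a] @ U, kron(s0, spin[a]))
print("all Gamma-algebra checks passed")
```

Running the script confirms that $\hat{H}_{\text{1D}}$ is indeed a Heisenberg coupling between auxiliary spin-$\frac{1}{2}$'s on every other bond.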
The ground state is the valence-bond solid (VBS) state: \begin{eqnarray} |\Psi(s,s')\rangle&\equiv&|s\rangle_1^1\otimes\left(\Pi_{x=1}^{L-1} \otimes|\phi_0\rangle_{x,x+1}\right)\otimes|s'\rangle_L^2,\label{VBS}\\ |\phi_0\rangle_{x,x+1}&\equiv&\frac{1}{\sqrt{2}}(|+\rangle_x^2|-\rangle_{x+1}^1-|-\rangle_x^2|+\rangle_{x+1}^1). \end{eqnarray} In a finite-size system, the ground state is four-fold degenerate due to the edge degrees of freedom $|s\rangle_1^1$ and $|s'\rangle_L^2$ ($s,s'=\pm$). The nontrivial topology of the model is encoded in the symmetry property of the edge states. Although the auxiliary field $|s\rangle_x^\mu$ superficially behaves like an electronic spin, its transformation under $\hat{\mathcal{R}}\hat{\mathcal{T}}$ is in fact quite distinct from that of the physical spin. In the $\{|s\rangle_x^1\otimes|s'\rangle_x^2\}$ basis, $\hat{\mathcal{T}}$ and $\hat{\mathcal{R}}$ are represented by $U^\dagger \mathcal{T} U=U^\dagger U^*\mathcal{K}=(i\sigma^y)\otimes(i\sigma^y)\mathcal{K}$ and $U^\dagger \mathcal{R} U=U^\dagger IU=\sigma^0\otimes\sigma^0$, respectively. Namely, $|s\rangle_x^\mu$ transforms under $\mathcal{T}$ in the same way as the physical spins, while it does not change under $\mathcal{R}$ ($\hat{\mathcal{R}}|\pm\rangle_x^\mu=|\pm\rangle_x^\mu$), unlike electrons. This peculiar transformation property of the auxiliary field $|s\rangle_x^\mu$ can be summarized as \begin{equation} \hat{\mathcal{R}}=+\hat{I},\quad \hat{\mathcal{T}}^2=-\hat{I}, \quad (\hat{\mathcal{R}}\hat{\mathcal{T}})^2=-\hat{I} \label{RT2} \end{equation} on the two-dimensional Hilbert space spanned by $\{|s\rangle_x^\mu\}_{s=\pm}$. Equation~\eqref{RT2} must be compared to $\hat{\mathcal{T}}^2=\hat{\mathcal{R}}^2=-\hat{I}$ and hence $(\hat{\mathcal{R}}\hat{\mathcal{T}})^2=+\hat{I}$ for a physical spin-$\frac{1}{2}$ electron. 
One may think one can redefine $\hat{\mathcal{R}}'\equiv i \hat{\mathcal{R}}$ to get $(\hat{\mathcal{R}}')^2=+\hat{I}$, but even after that $(\hat{\mathcal{R}}'\hat{\mathcal{T}})^2$ remains unchanged since $\hat{\mathcal{T}}$ is anti-unitary. Although the Hamiltonian $\hat{H}_{\text{1D}}$ is invariant under $\hat{\mathcal{T}}$ and $\hat{\mathcal{R}}$ separately, we can add arbitrary symmetry-breaking perturbations keeping only the combined symmetry $\hat{\mathcal{R}}\hat{\mathcal{T}}$ and the bulk gap. Since $\hat{\mathcal{R}}\hat{\mathcal{T}}$ is an anti-unitary symmetry that squares into $-1$, it protects the Kramers degeneracy on each edge. The fact that the value of $(\hat{\mathcal{R}}\hat{\mathcal{T}})^2$ of our edge state is different from that of physical electrons has two important implications. (i) The edge state of any (noninteracting) topological insulator satisfies $(\hat{\mathcal{R}}\hat{\mathcal{T}})^2=+\hat{I}$. Therefore, the VBS state in Eq.~\eqref{VBS} cannot be adiabatically connected to electronic topological insulators. In other words, the VBS state is an interaction-enabled topological phase protected by $\hat{\mathcal{R}}\hat{\mathcal{T}}$. (ii) The edge state of the VBS state is robust against the perturbation of attaching physical spin-$\frac{1}{2}$ electrons to the edge. In the case of the standard AKLT model, for example, the edge spin-$\frac{1}{2}$ can be gapped by attaching an electron, since both of them fall into the same class of projective representations $\hat{\mathcal{R}}^2=\hat{\mathcal{T}}^2=-\hat{I}$. On the other hand, the edge state of our model cannot be gapped this way, since even after attaching an electron, the anti-unitary symmetry $\hat{\mathcal{R}}\hat{\mathcal{T}}$ remains $(\hat{\mathcal{R}}\hat{\mathcal{T}})^2=-\hat{I}$. 
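The projective-representation argument above can be made concrete with a short numerical check (our illustration; the representation $\hat{\mathcal{M}}_y = i\sigma^y$ for the electron reflection is one standard phase convention and is an assumption here). An antiunitary operator $M\mathcal{K}$ is stored through its matrix part $M$, so that $(M_1\mathcal{K})(M_2\mathcal{K}) = M_1 \bar{M}_2$:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

def sq_antiunitary(M):
    """Square of an antiunitary operator M*K:  (M K)^2 = M conj(M)."""
    return M @ M.conj()

# Physical spin-1/2 electron: R = i sigma^y (assumed convention), T = i sigma^y K
R_el, T_el = 1j * sy, 1j * sy
assert np.allclose(sq_antiunitary(R_el @ T_el), +I2)    # (RT)^2 = +1

# Auxiliary edge spin of the VBS state: R = identity, T = i sigma^y K
R_aux, T_aux = I2, 1j * sy
assert np.allclose(sq_antiunitary(R_aux @ T_aux), -I2)  # (RT)^2 = -1

# Redefining R' = iR changes R'^2, but leaves (R'T)^2 invariant, because
# the factor of i is complex-conjugated when pulled through the antiunitary T
for R, T in [(R_el, T_el), (R_aux, T_aux)]:
    assert np.allclose(sq_antiunitary((1j * R) @ T), sq_antiunitary(R @ T))
print("projective-representation checks passed")
```

The last loop verifies numerically the statement in the text that $(\hat{\mathcal{R}}'\hat{\mathcal{T}})^2$ is unchanged under $\hat{\mathcal{R}}' \equiv i\hat{\mathcal{R}}$.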
To summarize, we have presented a simple 1D model of interacting electrons that realizes an interaction-enabled topological phase protected by the combined symmetry $\hat{\mathcal{R}}\hat{\mathcal{T}}$. The edge degrees of freedom satisfy $(\hat{\mathcal{R}}\hat{\mathcal{T}})^2=-\hat{I}$ and are stable against attaching additional electrons to the edge. \begin{figure} \includegraphics[width=0.9\linewidth]{aklt2D.pdf} \caption{Two 2D models: the stacked 1D chains (a) and a more intrinsically 2D model (b). The reflection/rotation symmetry must be site-centered, not bond-centered. A weak perturbation realizing an A-B sublattice structure (gray shadow) can be added to break the bond-centered mirror. \label{fig2}} \end{figure} \section{2D models} Now we move on to 2D TCM models. This time the reflection/rotation symmetry is truly a spatial symmetry, and the 2D TCM phases are hence protected purely by non-local symmetries. We will discuss two models. The first one is the array of stacked 1D chains shown in Fig.~\ref{fig2}(a). The Hamiltonian is \begin{equation} \hat{H}_{\text{2D}}=J\sum_{x=1}^{L_x-1}\sum_{y=-\infty}^{+\infty}\hat{\vec{\Gamma}}_{(x,y)}^2\cdot\hat{\vec{\Gamma}}_{(x+1,y)}^1,\label{2D1} \end{equation} where $\vec{\Gamma}^1\equiv\frac{1}{2}\vec{\sigma}\otimes\sigma^0$ and $\vec{\Gamma}^2\equiv\frac{1}{2}\sigma^0\otimes\vec{\sigma}$ in the basis of $\{|s_1\rangle_{(x,y)}^1\otimes|s_2\rangle_{(x,y)}^2\}_{s_1,s_2=\pm}$. The second one is a square-lattice model depicted in Fig.~\ref{fig2}(b). 
\begin{eqnarray} \hat{H}_{\text{2D}}'=&&J\sum_{x=1}^{L_x-1}\sum_{y=-\infty}^{+\infty}\hat{\vec{\Gamma}}_{(x,y)}^2\cdot\hat{\vec{\Gamma}}_{(x+1,y)}^1\notag\\ &&+J\sum_{x=1}^{L_x}\sum_{y=-\infty}^{+\infty}\hat{\vec{\Gamma}}_{(x,y)}^4\cdot\hat{\vec{\Gamma}}_{(x,y+1)}^3,\label{2D2} \end{eqnarray} where $\vec{\Gamma}^1\equiv\frac{1}{2}\vec{\sigma}\otimes\sigma^0\otimes\sigma^0\otimes\sigma^0$, $\vec{\Gamma}^2\equiv\frac{1}{2}\sigma^0\otimes\vec{\sigma}\otimes\sigma^0\otimes\sigma^0$, $\vec{\Gamma}^3\equiv\frac{1}{2}\sigma^0\otimes\sigma^0\otimes\vec{\sigma}\otimes\sigma^0$, and $\vec{\Gamma}^4\equiv\frac{1}{2}\sigma^0\otimes\sigma^0\otimes\sigma^0\otimes\vec{\sigma}$ in the basis of $\{|s_1\rangle_{(x,y)}^1\otimes|s_2\rangle_{(x,y)}^2\otimes|s_3\rangle_{(x,y)}^3\otimes|s_4\rangle_{(x,y)}^4\}_{s_1,s_2,s_3,s_4=\pm}$. For both models, each auxiliary field $|s\rangle_{(x,y)}^\mu$ ($s=\pm$) transforms as \begin{equation} \hat{\mathcal{R}}|s\rangle_{(x,y)}^\mu=|s\rangle_{(x,-y)}^\mu,\quad \hat{\mathcal{T}}|s\rangle_{(x,y)}^\mu=s\,|-s\rangle_{(x,y)}^\mu \label{RT2D} \end{equation} so that $\hat{\vec{\Gamma}}_{(x,y)}^\mu$ satisfies \begin{equation} \hat{\mathcal{R}}\hat{\vec{\Gamma}}_{(x,y)}^\mu\hat{\mathcal{R}}^{-1}=\hat{\vec{\Gamma}}_{(x,-y)}^\mu,\,\,\hat{\mathcal{T}}\hat{\vec{\Gamma}}_{(x,y)}^\mu\hat{\mathcal{T}}^{-1}=-\hat{\vec{\Gamma}}_{(x,y)}^\mu. \label{Gamma2D} \end{equation} The first transformation in Eq.~\eqref{RT2D} is again distinct from that of spin-$\frac{1}{2}$ electrons. As a consequence, $|s\rangle^{\mu}_{(x,y)}$ satisfies $(\hat{\mathcal{R}}\hat{\mathcal{T}})^2=-\hat{I}$ unlike electrons as before. Although both $\hat{H}_{\text{2D}}$ and $\hat{H}_{\text{2D}}'$ themselves are invariant under $\hat{\mathcal{R}}$ and $\hat{\mathcal{T}}$ separately, arbitrary perturbations can be added to these Hamiltonians as long as the combined symmetry $\hat{\mathcal{R}}\hat{\mathcal{T}}$ is respected and the bulk gap is not closed. 
Note that the reflection/rotation symmetry $\mathcal{R}$ here needs to be site-centered [$\mathcal{R}:(x,y,z)\mapsto(x,-y,\pm z)$] and cannot be bond-centered [$\tilde{\mathcal{R}}:(x,y,z)\mapsto(x,1-y,\pm z)$]. The bond-centered one does not protect gapless edge states, as we discuss below. To break the bond-centered symmetry without affecting the site-centered one, one can introduce an A-B sublattice structure [gray shadows in Fig.~\ref{fig2}(b)] by weakly perturbing the spin Hamiltonian. \begin{figure} \includegraphics[width=\linewidth]{edge.pdf} \caption{The 1D edge state along $x=1$ of the 2D models in Fig.~\ref{fig2}. The color represents the nonuniform perturbation $\hat{H}'=\sum_{y} \vec{H}(y)\cdot\hat{\vec{\Gamma}}_{(x=1,y)}^1$ with $\vec{H}(y)=h\,\text{tanh}(y) \hat{z}$, for example, which respects the $\hat{\mathcal{R}}\hat{\mathcal{T}}$ symmetry. The edge state in the regions $y\gg0$ and $y\ll0$ opens a gap proportional to $h$, while there will be a residual gapless edge state protected by $(\hat{\mathcal{R}}\hat{\mathcal{T}})^2=-\hat{I}$ on the domain wall. \label{fig3}} \end{figure} The ground state of these 2D Hamiltonians is the VBS state illustrated in Fig.~\ref{fig2}, analogous to Eq.~\eqref{VBS}. There is a 1D edge state formed by $\{|s\rangle_{(x=1,y)}^1\}_{y\in[-L_y,L_y]}$ along the line $x=1$, and another 1D edge state formed by $\{|s\rangle_{(x=L_x,y)}^2\}_{y\in[-L_y,L_y]}$ along $x=L_x$. To see the gaplessness of the edge states, we add a $\hat{\mathcal{R}}\hat{\mathcal{T}}$-symmetric perturbation $\hat{H}'=\sum_{y} \vec{H}(y)\cdot\hat{\vec{\Gamma}}_{(x=1,y)}^1$ along the line $x=1$ as shown in Fig.~\ref{fig3}, where $\vec{H}(y)$ is an odd function of $y$ that approaches a constant $\vec{H}(y)\rightarrow\vec{h}$ for $y\gg1$. Note that $\vec{H}(y)$ must flip sign at $y=0$ to be consistent with the $\hat{\mathcal{R}}\hat{\mathcal{T}}$ symmetry, forming a domain wall around $y=0$. 
All $\hat{\vec{\Gamma}}$'s along the edge away from the domain wall open a gap proportional to $h$. However, the edge state at the domain wall $y=0$ must remain gapless. This is protected, again, by the anti-unitary symmetry $\hat{\mathcal{R}}\hat{\mathcal{T}}$ with $(\hat{\mathcal{R}}\hat{\mathcal{T}})^2=-\hat{I}$. This unavoidable gaplessness of the edge state signals the topological nature of our 2D models. Essentially, the $\vec{\Gamma}_{\vec{x}}^\mu$'s on the $y=z=0$ line play the role of the 1D spin chain discussed above. In contrast, when $\mathcal{R}$ is bond-centered, there will be an even number of $|s\rangle$'s at the domain wall, so that $(\hat{\mathcal{R}}\hat{\mathcal{T}})^2=(-\hat{I})^{2n}=+\hat{I}$ and the edge may be completely gapped. \section{Anisotropic response to a magnetic field} An experimental signature of TCMs is the anisotropic response of the edge state to an external magnetic field $\vec{B}$. We start with the case where $\mathcal{R}$ is the reflection $\mathcal{M}_y$ about the $xz$ plane. Recall that $(B_x, B_y, B_z)\rightarrow (-B_x, B_y, -B_z)$ under $\mathcal{M}_y$, while $\hat{\vec{\Gamma}}$ is invariant under $\mathcal{M}_y$. Both $\vec{B}$ and $\hat{\vec{\Gamma}}$ flip sign under $\mathcal{T}$. The familiar form of the coupling to the external field, $\vec{B}\cdot\hat{\vec{\Gamma}}_{\vec{x}}^{\mu}$, is thus not allowed by the symmetry $\mathcal{M}_y\mathcal{T}$. Instead, an arbitrary linear coupling to $B_y$, i.e., $B_yc_{\mu a}\hat{\Gamma}_{\vec{x}}^{\mu a}$, is allowed. When $B_y$ is set to a constant value, this term breaks the $\mathcal{M}_y\mathcal{T}$ symmetry; the edge states will then be gapped, with a gap proportional to $|B_y|$. On the other hand, $B_{x}$ and $B_{z}$ do not couple linearly to $\hat{\Gamma}_{\vec{x}}^{\mu}$. We therefore expect an anisotropic response of the edge state to the external magnetic field. 
When $\mathcal{R}$ is the $\pi$-rotation $\mathcal{R}_{\pi,x}$ around the $x$ axis, the magnetic field $(B_x, B_y, B_z)$ changes to $(B_x, -B_y, -B_z)$ under $\mathcal{R}_{\pi,x}$. Thus, an arbitrary linear coupling between $B_x$ and $\Gamma_{\vec{x}}^{\mu a}$ is allowed. A constant $B_x$ can hence induce a gap on the edge, while $B_y$ and $B_z$ cannot. We thus expect a similar anisotropic response in this case too. \begin{figure}[b] \includegraphics[width=0.6\linewidth]{aklt3D.pdf} \caption{The 3D model, which is a 2D array of the 1D chains. \label{fig4}} \end{figure} \section{3D model} One can readily construct a 3D TCM model in the same way as we did for the 2D models. The 3D model is a 2D array of the 1D TCM chains, illustrated in Fig.~\ref{fig4}. For this 3D model, $\mathcal{R}$ must be the site-centered $\pi$-rotation about the $x$-axis. Namely, the rotation axis must coincide with one of the 1D chains. The gapless 2D surfaces at $x=1$ and $x=L_x$ are protected by the combined symmetry $\mathcal{R}\mathcal{T}$. To see this, let us again add a $\mathcal{R}\mathcal{T}$-symmetric perturbation $\hat{H}'=\sum_{y,z} \vec{H}(y,z)\cdot\hat{\vec{\Gamma}}_{(x=1,y,z)}^1$. To be consistent with the $\mathcal{R}\mathcal{T}$ symmetry, $\vec{H}(y,z)$ should satisfy $\vec{H}(y,z)=-\vec{H}(-y,-z)$, implying that $\vec{H}(0,0)=0$. Therefore, there will be a residual zero mode at the ``vortex core'' of the perturbed surface, protected by $(\hat{\mathcal{R}}\hat{\mathcal{T}})^2=-\hat{I}$. \section{Conclusion} In this paper we introduced TCM phases protected by the non-local symmetry $\mathcal{R}\mathcal{T}$ in two and three dimensions. They are interaction-enabled and are robust against attaching physical electrons to the edge. They can be detected experimentally through the anisotropic response of their edge states to external magnetic fields. \begin{acknowledgements} We thank Yang Qi and Yohei Fuji for insightful discussions. 
LF is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award No. DE-SC0010526. \end{acknowledgements}
\section{Introduction} \label{sec:introduction} Turbulent motions underpin the entire evolution of a protoplanetary disk. Foremost, turbulence determines the bulk gas viscosity and hence regulates the angular momentum transport and accretion in disks \citep{Shakura_ea_1973,Pringle81}. Secondly, turbulence is a key factor for dust evolution and transport in disks \citep{Testi_ea_2014,Henning_Meeus_2011}. However, until recently, observational constraints on the level of disk turbulence were extremely challenging to obtain and hence scarce. With the advent of the Atacama Large Millimeter / submillimeter Array (ALMA), we have access for the first time to the high sensitivity, spectral resolution and angular resolution observations which are needed to directly measure turbulent velocities in disks. \FigureSpectra Accurate determination of the turbulent velocity dispersion from line broadening requires a good understanding of the other components which contribute to the line width, namely bulk motions of the gas, thermal broadening and, in the case of a highly optically thick line, broadening due to the line opacity. All previous measurements of $v_{\rm turb}${} have revolved around the fitting of a parametric model in order to extract a disk-averaged turbulent broadening value. The derived values ranged from very low values of $\la 10-100$~m\,s$^{-1}$ ($\la 0.02 - 0.2$~$c_s$) derived for the TW~Hya and HD~163296 disks, to higher velocities of $\la 100 - 200$~m\,s$^{-1}$ ($\la 0.3-0.5$~$c_s$) for the disks of DM~Tau, MWC~480 and LkCa~15 \citep{Dartois_ea_2003,Pietu_ea_2007,Hughes_ea_2011,Rosenfeld_ea_2012,Flaherty_ea_2015}. 
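To put the quoted velocity dispersions in context, the conversion between m\,s$^{-1}$ and units of the local sound speed $c_s$ can be sketched as follows (our illustration; the mean molecular weight $\mu = 2.37$ and the 20~K gas temperature are assumed, typical outer-disk values, not quantities derived in this paper):

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
m_H = 1.6735575e-27  # mass of the hydrogen atom, kg

def sound_speed(T, mu=2.37):
    """Isothermal sound speed of the bulk gas; mu = mean molecular weight."""
    return np.sqrt(k_B * T / (mu * m_H))

def thermal_width(T, m_mol):
    """1/e Doppler width of a tracer of molecular mass m_mol (in units of m_H)."""
    return np.sqrt(2.0 * k_B * T / (m_mol * m_H))

T = 20.0  # K, illustrative outer-disk gas temperature
print(f"c_s({T:.0f} K)        ~ {sound_speed(T):.0f} m/s")
print(f"CO thermal width ~ {thermal_width(T, 28):.0f} m/s")
# so 0.02-0.2 c_s corresponds to roughly 5-50 m/s at this temperature
```

This makes clear why resolving turbulent broadening at the $0.02$--$0.2\,c_s$ level requires very high spectral resolution: the signal is tens of m\,s$^{-1}$ on top of a thermal width of order 100 m\,s$^{-1}$.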
With the exception of TW~Hya and HD~163296 \citep{Hughes_ea_2011,Flaherty_ea_2015}, the spectral resolution of the data used to determine these values, of the order $\sim$~\vel{200}, is too coarse to resolve the small expected contribution from turbulent broadening, although \citet{Guilloteau_ea_2012} did correct for this effect when using CS to measure turbulence in DM Tau. High-quality ALMA Cycle~2 observations of TW~Hya allow us for {\em the first time} to get a direct measure of the line widths and thus spatially resolved turbulent velocity structure. With a near face-on inclination of only $i \approx 7\degr$ \citep{Qi_ea_2004} and as the nearest protoplanetary disk at $d \approx 54$~pc, TW~Hya provides the best opportunity to directly detect turbulent broadening as the impact of Keplerian shear for such face-on disks is minimized compared to more inclined systems. We present here the first direct measurements of $v_{\rm turb}${} in a protoplanetary disk using the line emission of CO, CN and CS. In Sect.~\ref{sec:observations} we describe our ALMA observations and the data reduction. Section~\ref{sec:vturb} describes the methods we used to extract $v_{\rm turb}${}: two direct methods relying on a measure of the line widths and a more commonly used fit of a parametric model. Discussion in Sect.~\ref{sec:discussion} follows. \section{Observations} \label{sec:observations} The observations were performed using ALMA on May 13, 2015 under excellent weather conditions (Cycle 2, 2013.1.00387.S). The receivers were tuned to cover CO J=(2-1), CS J=(5-4) and all strong hyperfine components of CN N=(2-1) simultaneously. The correlator was configured to deliver very high spectral resolution, with a channel spacing of 15~kHz (and an effective velocity resolution of \vel{40}) for the CO J=(2-1) and CS J=(5-4) lines, and 30~kHz (\vel{80}) for the CN N=(2-1) transition. 
Data were calibrated using the standard ALMA calibration script in the \texttt{CASA} software package\footnote{\url{http://casa.nrao.edu/}}. The calibrated data were regridded in velocity to the LSR frame, and exported through the UVFITS format to the \texttt{GILDAS}\footnote{\url{http://www.iram.fr/IRAMFR/GILDAS}} package for imaging and data analysis. Self-calibration was performed on the continuum data, and the phase solution was applied to all spectral line data. With robust weighting, the $uv$ coverage provided by the $\sim 34$ antennas, with baselines between 21 and 550~m, yields a beam size of $0.50\arcsec \times 0.42\arcsec$ at a position angle of $80\degr$. The absolute flux calibration was referred to Ganymede. The derived flux for our amplitude and phase calibrator, J1037-2934, was 0.72 Jy at 228 GHz at the time of the observations, with a spectral index $\alpha = -0.54$, while the ALMA flux archive indicated a flux of $0.72 \pm 0.05$ Jy between April 14$^{\rm th}$ and April 25$^{\rm th}$. We hence estimate that the calibration uncertainty is about 7\%. After deconvolution and primary beam correction, the data cubes were imported into \texttt{CLASS} for further analysis, in particular line profile fits including the hyperfine structure for the CN lines. For the azimuthal average, each spectrum was shifted in velocity by its local projected Keplerian velocity before averaging. For this we used the best-fit Keplerian model, assuming a stellar mass of $0.69~M_{\sun}$ and $i = 7\degr$; see Section~\ref{sec:method3}. All three emission lines show azimuthal symmetry within the noise, justifying our choice to azimuthally average the data. The CO emission is identical to that in previous studies \citep[for example][]{Qi_ea_2013}, while integrated intensity plots for CN, including all hyperfine components, and CS are shown in Appendix~\ref{sec:appobservations}. 
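The per-spectrum velocity correction used for the azimuthal averaging can be sketched as follows (our illustration with the adopted $M_\star = 0.69~M_{\sun}$ and $i = 7\degr$; the 100~au radius and the azimuth convention are arbitrary examples, not part of the reduction script):

```python
import numpy as np

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
AU = 1.496e11       # astronomical unit, m

def v_los_keplerian(r_au, theta, M_star=0.69, incl_deg=7.0):
    """Line-of-sight projected Keplerian velocity (m/s) at disk radius r_au
    (au) and azimuth theta (rad, measured from the red-shifted major axis)."""
    v_kep = np.sqrt(G * M_star * M_sun / (r_au * AU))
    return v_kep * np.sin(np.radians(incl_deg)) * np.cos(theta)

# Each spectrum is shifted by -v_los before averaging; at r = 100 au the
# maximum projected velocity (on the major axis) is only a few 100 m/s:
shift = v_los_keplerian(100.0, 0.0)
print(f"max projected velocity at 100 au: {shift:.0f} m/s")
```

The small projection factor $\sin 7\degr \approx 0.12$ illustrates why the near face-on geometry of TW~Hya minimizes Keplerian broadening across the beam.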
Sample spectra illustrating the very high signal-to-noise ratio obtained in CO and CN, and the noisier CS data, are given in Fig.~\ref{fig:example_spectra}, and a gallery of azimuthally averaged spectra at different radial locations can be found in Appendix~\ref{sec:appobservations}. Finally, examples of the full complement of CN hyperfine components are found in Fig.~\ref{fig:aaallspectra}. \section{Disentangling Turbulent Velocity Dispersions} \label{sec:vturb} \FigureLinewidths Turbulent motions within a gas manifest themselves as a velocity dispersion along the line of sight, broadening the width of the emission (or absorption) line. This broadening term acts in tandem with thermal broadening, a contribution typically an order of magnitude larger than the turbulent width. Additionally, the Keplerian shear across the beam will broaden the observed emission lines. This effect is most pronounced in the inner disk and for highly inclined disks, making TW~Hya an ideal source as this effect is minimized. In the following section we discuss three methods to extract $v_{\rm turb}${}, the turbulent velocity dispersion: two `direct' approaches and one `parametric' approach, and apply each to TW~Hya. \subsection{Line width measurements} \label{sec:linewidthmeasurements} Physical parameters were extracted from the line profiles at each pixel in the image and for an azimuthal average. CO is highly optically thick and displays a saturated core, meaning the line profile deviates strongly from an optically thin Gaussian (see left panel of Fig.~\ref{fig:example_spectra}).
By fitting a line profile of the form, \begin{equation} I_v = \Big(J_{\nu}(T_{\rm ex}) - J_{\nu}(T_{\rm bg}) \Big) \cdot \left( 1 - \exp \left[ -\tau \exp \left\{ -\frac{(v - v_0)^2}{\Delta V^2}\right\} \right] \right), \end{equation} \noindent \new{where $T_{\rm bg} = 2.75$~K,} we are able to obtain the line full width at half maximum (FWHM), the line center $v_0$ and, if the line is sufficiently optically thick, $T_{\rm ex}${} and $\tau$ (otherwise only their product is constrained). Under the assumption that all hyperfine components arise from the same region in the disk and that the main component is optically thick, the relative intensities of the CN hyperfine components yield an optical depth and $T_{\rm ex}${}. Using the `hyperfine' mode in \texttt{CLASS}, the hyperfine components were simultaneously fit with Gaussian profiles. It was found that the recommended spacings of the hyperfine components were systematically biased across the disk, suggesting that the tabulated offset values were incorrect. Fitting for the relative positions of each component allowed for a better determination of their spacing, to $\approx$~\vel{1}. The adopted frequencies are given in Table~\ref{tab:cn21}. Finally, the CS emission was well fit by an optically thin Gaussian, from which the linewidth and line center were accurately extracted. However, with only a single transition the degeneracy between $T_{\rm ex}${} and $\tau$ could not be broken, so we remain ignorant of the local temperature. The linewidths are sufficiently well sampled with spectral resolutions of \vel{40} for CO and CN and \vel{37} for CS, such that sampling effects are negligible for our data. Assuming square channels and Gaussian line profiles, we estimate that the bias on the measured $\Delta V$ would be $\approx 2~\%$ for CO and CN and $\approx 3.5~\%$ for CS. Figure~\ref{fig:resolution} shows the impact of the resolution on the determination of $\Delta V$.
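As an illustration, the opacity-broadened profile above can be evaluated numerically. The following sketch (plain Python; the CO J=(2-1) rest frequency and all parameter values are illustrative assumptions, not fitted results) shows how the peak saturates at $J_{\nu}(T_{\rm ex}) - J_{\nu}(T_{\rm bg})$ once $\tau \gg 1$:

```python
import math

H = 6.626e-34        # Planck constant [J s]
K = 1.381e-23        # Boltzmann constant [J / K]
NU_CO21 = 230.538e9  # CO J=(2-1) rest frequency [Hz]

def j_nu(temp, nu=NU_CO21):
    """Radiation temperature J_nu(T) = (h nu / k) / (exp(h nu / k T) - 1)."""
    x = H * nu / K
    return x / (math.exp(x / temp) - 1.0)

def line_profile(v, v0, dv, tau, t_ex, t_bg=2.75, nu=NU_CO21):
    """Opacity-broadened Gaussian profile of Eq. (1), in kelvin.
    dv is the 1/e half-width Delta V, so FWHM = 2 sqrt(ln 2) dv."""
    opacity = tau * math.exp(-((v - v0) / dv) ** 2)
    return (j_nu(t_ex, nu) - j_nu(t_bg, nu)) * (1.0 - math.exp(-opacity))

# A very optically thick line saturates at J(T_ex) - J(T_bg) ...
saturation = j_nu(30.0) - j_nu(2.75)
peak_thick = line_profile(0.0, 0.0, 300.0, 50.0, 30.0)   # tau = 50
# ... while a thin line is simply a Gaussian scaled by tau
peak_thin = line_profile(0.0, 0.0, 300.0, 0.01, 30.0)    # tau = 0.01
```

In the saturated core only the peak constrains $T_{\rm ex}${}, while the wings retain the width information; this is why $\tau$ and $T_{\rm ex}${} can only be separated when the line is sufficiently optically thick.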
These biases have been included in the following analysis. \subsection{Keplerian shear correction} \label{sec:beamsmear} In the following `direct' methods we only consider the disk outside 40~au. Within this radius the spectra start to deviate strongly from the assumed Gaussian (in opacity) line profiles, because parts of the disk rotating in opposite directions are smeared in the beam. Given the flux calibration, there is an intrinsic 7\% uncertainty on the peak values of the spectra, thus the $T_{\rm ex}${} values derived for CO and CN have uncertainties of at least 7\%. The impact of this is discussed in Sec.~\ref{sec:discussion}. To estimate the impact of the artificial broadening due to the beam smear, the physical model of TW~Hya from \citet{Gorti_ea_2011} was used. The model was run through the \texttt{LIME} radiative transfer code \citep{Brinch_ea_2010} for a range of inclinations, $i = \{0\degr,\,5\degr,\,6\degr,\,7\degr,\,8\degr,\,9\degr\}$, assuming no turbulent broadening. We note that the projected velocity is a product of both stellar mass and inclination. Thus, by varying only the inclination we are able to consider uncertainties in both quantities\footnote{The relative error $\delta i / i \approx 0.29$ considered is equivalent to assuming $\delta M_{\star} / M_{\star} = 0.58$. Alternatively, this could be considered as $M_{\star} = 0.6 \pm 0.15~M_{\sun}$ and $i = 7 \pm 1.9\degr$, well representative of TW~Hya.}. Following \cite{Rosenfeld_ea_2013}, we account for the height above the midplane in the calculation of the velocity field. Both the CO J=(2-1) and C$^{18}$O (2-1) lines were modelled, allowing us to sample both an optically thick and an optically thin case. Using \texttt{CASA}, the model observations were converted to synthetic observations with the same array configuration as the true observations. Differences in the resulting line width at each pixel between an inclined disk and a face-on disk were attributed to Keplerian broadening.
At our linear resolution ($\sim 25$~au), the radial distribution of differences in linewidths was well fit by a power law outside of 40~au, \begin{equation} \Delta V_{\rm Kep} = \Big(2.6 \pm 0.5\Big) \times \left(\frac{r}{100} \right)^{\,-3.2 \pm 0.1} \,\, {\rm m\,s^{-1}}, \label{eq:v_kep} \end{equation} \noindent with $r$ the radial distance in au. Quoted uncertainties are $1\sigma$ and are dominated by an uncertainty in inclination of $\pm 2\degr$. The differences between the $^{12}$CO and C$^{18}$O cases were smaller than these quoted uncertainties. This component was subtracted from all linewidths prior to further analysis. Figure~\ref{fig:linewidths} shows the measured linewidths (black lines), and the linewidths after the correction for Keplerian shear (blue lines). \FigureKineticTemps \subsection{Single Molecule Approach} \label{sec:method1} After correcting for the Keplerian shear, we assume the linewidth is only a combination of thermal and turbulent broadening. Hence the remaining linewidth can be described as, \begin{equation} \Delta V = \sqrt{v_{\rm turb}^2 + \frac{2kT_{\rm kin}}{\mu m_{\rm H}}}, \label{eq:linewidth} \end{equation} \noindent where $\mu$ is the molecular mass of the tracer molecule, $m_{\rm H}$ the mass of a hydrogen atom, $T_{\rm kin}${} the kinetic temperature, and $\Delta V = {\rm FWHM} \,/\, \sqrt{4 \ln 2}$ the linewidth. For both CO and CN, the line profiles provided $T_{\rm ex}${}, so a conversion to $T_{\rm kin}${} must be made. Guided by the particle densities in the model of \citet{Gorti_ea_2011} in the region of expected emission for CO and CN, $\ga 10^6-10^7$~cm$^{-3}$, we make the assumption that both the CO and CN lines are thermalised, so that $T_{\rm ex}${} = $T_{\rm kin}${} $ = T$. The validity of this assumption is discussed in Sec.~\ref{sec:discussion}. Derived $T_{\rm kin}${} values for CO and CN are shown by the blue lines in the left two panels of Fig.~\ref{fig:kinetic_temps}.
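In practice, Eqs.~\ref{eq:v_kep} and \ref{eq:linewidth} amount to removing the shear term and then subtracting, in quadrature, the thermal width from the corrected linewidth. A minimal sketch in plain Python (the 150~m\,s$^{-1}$ linewidth and 30~K temperature are illustrative assumptions, not measured values):

```python
import math

K_B = 1.381e-23   # Boltzmann constant [J / K]
M_H = 1.673e-27   # hydrogen-atom mass [kg]

def keplerian_smear(r_au):
    """Best-fit beam-smearing term of Eq. (2) in m/s, valid outside 40 au."""
    return 2.6 * (r_au / 100.0) ** -3.2

def turbulent_width(delta_v, t_kin, mu):
    """Invert Eq. (3): v_turb left once thermal broadening is removed.
    delta_v in m/s, t_kin in K, mu in atomic mass units."""
    thermal_sq = 2.0 * K_B * t_kin / (mu * M_H)
    return math.sqrt(delta_v ** 2 - thermal_sq)

# Illustrative CO (mu = 28) example at r = 100 au, with an assumed
# shear-corrected linewidth of 150 m/s and T_kin = 30 K:
dv = 150.0 - keplerian_smear(100.0)      # shear term is tiny at 100 au
v_turb = turbulent_width(dv, 30.0, 28.0)
```

Because the thermal width of CO at 30~K is already $\sim$\vel{130}, the quadrature subtraction makes the derived $v_{\rm turb}${} very sensitive to the assumed temperature, which motivates the error analysis later in the paper.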
The black lines show $T^\mathrm{max}_\mathrm{kin}$, the maximum kinetic temperature in the absence of any turbulence: \begin{equation} T^\mathrm{max}_\mathrm{kin} = \frac{\mu m_{\rm H}}{2 k} \,\Big(\Delta V\Big)^2. \end{equation} \noindent In essence, the residual between these two lines must be accounted for either by turbulent broadening, or by sub-thermal excitation, i.e. $T_{\rm kin}${} $> T_{\rm ex}$. Outside of $r \sim 140$~au, CN shows signs of non-LTE effects as the derived $T_{\rm ex}${} is considerably higher than $T_{\rm kin}^{\rm max}$, indicating the presence of weak pumping of the line (see Fig.~\ref{fig:kinetic_temps}). These `supra-thermal' regions are neglected in the remainder of the analysis. The (small) impact of unresolved turbulence and/or temperature gradients within the finite beamsize will be discussed in Section~\ref{sec:discussion}. With a known $T_{\rm kin}${}, a simple subtraction of the thermal broadening component leaves $v_{\rm turb}${}. The left two columns of Fig.~\ref{fig:direct_turbulent} show the derived $v_{\rm turb}${} in units of \vel{} in the top panel and as a function of the local soundspeed $c_s$ in the bottom panel for CO and CN respectively. Fig.~\ref{fig:turbulence_2D} shows the spatial distribution of $v_{\rm turb}${} (we neglected here the primary beam correction, which only reaches 7\% at the map edge). For the case of CS, the line is essentially optically thin, and we cannot derive an excitation temperature. \subsection{Co-Spatial Approach} \label{sec:method2} Instead of relying on the temperature derived from a single molecule, we can take advantage of molecules with different molecular weights to separate the thermal and turbulent broadening, assuming the lines from these molecules emit from the same location in the disk. Under this assumption the total linewidths would trace the same $v_{\rm turb}${} and $T_{\rm kin}${}.
Solving Equation~\ref{eq:linewidth} simultaneously for two molecules, $A$ and $B$, with respective molecular masses $\mu_{\rm A}$ and $\mu_{\rm B}$, where $\mu_{\rm A} < \mu_{\rm B}$, and total linewidths $\Delta V_{\rm A}$ and $\Delta V_{\rm B}$, we find, \begin{align} T_{\rm kin} &= \frac{m_{\rm H}}{2k} \frac{\mu_{\rm A} \, \mu_{\rm B}}{\mu_{\rm B} - \mu_{\rm A}} \, \Big( \Delta V_{\rm A}^2 - \Delta V_{\rm B}^2 \Big), \label{eq:simtkin}\\ v_{\rm turb} &= \sqrt{\frac{\mu_{\rm B} \Delta V_{\rm B}^2 - \mu_{\rm A} \Delta V_{\rm A}^2}{\mu_{\rm B} - \mu_{\rm A}}} \label{eq:simvturb}. \end{align} \noindent This method does not make any assumption about the excitation temperature of the observed transitions, but relies only on the measured linewidths and the co-spatiality of the emitting regions. Among the observed molecules, CO may only trace a narrow layer because of its high optical depth. However, one would expect the optically thin CN and CS to trace a larger vertical region. Both CN and CS should freeze out at a similar temperature, so the bottoms of their respective molecular layers should be relatively coincident, and they thus potentially trace the same region of the disk. Hence we choose to apply this method to the two lines of CN and CS. The rightmost panel of Fig.~\ref{fig:kinetic_temps} shows the $T_{\rm kin}${} (blue line) derived from CN and CS, in comparison to $T^\mathrm{max}_\mathrm{kin}$, the maximum $T_{\rm kin}${} derived from the CS linewidth (black). Radial profiles of $v_{\rm turb}${} derived from CN and CS are shown in the right column of Fig.~\ref{fig:direct_turbulent}, in \vel{} (top) and as a function of $c_s$ (bottom). Gaps in $T_{\rm kin}${} and $v_{\rm turb}${} correspond to where the $\mu$-scaled linewidth of CS is less than the $\mu$-scaled linewidth of CN (see Fig.~\ref{fig:scaledlinewidths}). In this situation there is no solution to Eqs.~\ref{eq:simtkin} and \ref{eq:simvturb}, and thus the assumption of CN and CS being co-spatial fails.
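Equations~\ref{eq:simtkin} and \ref{eq:simvturb} can be checked with a quick round trip: build two synthetic linewidths from a common $(T_{\rm kin}, v_{\rm turb})$ pair via Eq.~\ref{eq:linewidth}, then recover the pair. A sketch in plain Python (the 20~K temperature and \vel{80} dispersion are arbitrary test values):

```python
import math

K_B = 1.381e-23   # Boltzmann constant [J / K]
M_H = 1.673e-27   # hydrogen-atom mass [kg]

def total_width(t_kin, v_turb, mu):
    """Total linewidth of Eq. (3) for a tracer of molecular mass mu."""
    return math.sqrt(v_turb ** 2 + 2.0 * K_B * t_kin / (mu * M_H))

def co_spatial(dv_a, dv_b, mu_a, mu_b):
    """Solve Eqs. (5)-(6) for (T_kin, v_turb); requires mu_a < mu_b and a
    mu-scaled width of the heavier tracer at least that of the lighter."""
    t_kin = (M_H / (2.0 * K_B)) * (mu_a * mu_b / (mu_b - mu_a)) \
        * (dv_a ** 2 - dv_b ** 2)
    v_turb = math.sqrt((mu_b * dv_b ** 2 - mu_a * dv_a ** 2)
                       / (mu_b - mu_a))
    return t_kin, v_turb

# Round trip with CN (mu = 26) and CS (mu = 44) at T = 20 K, v_turb = 80 m/s
dv_cn = total_width(20.0, 80.0, 26.0)
dv_cs = total_width(20.0, 80.0, 44.0)
t_rec, v_rec = co_spatial(dv_cn, dv_cs, 26.0, 44.0)
```

When the $\sqrt{\mu}$-scaled CS width drops below that of CN, the argument of the square root in Eq.~\ref{eq:simvturb} turns negative: this is exactly the `no solution' gap described above.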
\subsection{Parametric Model Fitting} \label{sec:method3} \DiskFitTable The above direct methods require a proper correction of the Keplerian shear, which scales as $\sqrt{M_*} \sin(i)$. For edge-on disks, or when the angular resolution is insufficient to remove the Keplerian shear, our direct technique is inapplicable, and the only available method is to use a parametric model that assumes forms for $T_{\rm kin}${} and the total local linewidth $\Delta V$. A parametric model fit can recover $\Delta V$ with high accuracy independently of the absolute (flux) calibration error. However, the fraction of this width which is due to turbulence depends on the absolute calibration, since the thermal line width scales as the square root of the kinetic temperature. In the following we give a brief description of the parametric model, but refer the reader to \citet{Dartois_ea_2003} and \citet{Pietu_ea_2007} for a thorough model description and fitting methodology. The model assumes a disk physical structure which is described by an ensemble of power laws: \begin{equation} A_{\rm r} = A_{100} \times \left( \frac{r}{100}\right)^{-e_{\rm A}}, \end{equation} \noindent for some physical parameter $A$ and cylindrical distance $r$ in au. A positive $e_{\rm A}$ means that the parameter $A$ \emph{decreases} with radius. The molecule densities follow a Gaussian distribution in $z$, whose scale height $H$ is used as a free parameter (this is equivalent to a uniform abundance in a vertically isothermal disk). This method allows us to correct, to first order, for the geometric effects in the projected rotation velocities due to the disk thickness. CO was found to sample a much higher layer (larger $H$) than CN or CS, which yielded similar values. With this method we fit two models: first, one used previously in the literature where $v_{\rm turb}${} is described as a radial power-law, and second, one where we fit for the total linewidth, $\Delta V$, and then calculate the value of $v_{\rm turb}${} from Equation~\ref{eq:linewidth}.
Note that fitting for $\Delta V$ results in a non-power-law description of $v_{\rm turb}${}. An inclination, position angle and systemic velocity were found that were comparable to literature values: $i \approx 6\degr$, ${\rm PA} \approx 240\degr$ and $V_{\rm LSR} \approx 2.82$~km\,s$^{-1}$. Physical parameters relevant to $v_{\rm turb}${} are found in Table~\ref{tab:disk_params} along with their formal errors. All three molecules yielded a velocity exponent of $e_v \approx 0.53$, steeper than that of a Keplerian profile. Such a change in projected velocity could either be a projection effect, such as a warp in the disk \citep{Roberge_ea_2005,Rosenfeld_ea_2012}, or gas pressure resulting in non-Keplerian rotational velocities for the gas \citep{Rosenfeld_ea_2013}. To account for such an exponent with a warp, $i$ needs to change by $\approx 1\degr$ between 40 and 180~au. Thus, while this non-Keplerian bulk motion was not considered explicitly in the removal of the Keplerian shear, the range of inclinations considered, $7 \pm 2\degr$, sufficiently accounts for such a deviation. Further analysis of this is beyond the scope of this paper. As with the two direct methods, it was assumed that all lines were fully thermalised, so that the excitation temperature recovered the full thermal width of the line. A comparison of the total linewidths, temperature profiles and turbulent components is shown as yellow solid lines in Figs.~\ref{fig:linewidths},~\ref{fig:kinetic_temps} and \ref{fig:direct_turbulent} respectively. \section{Results and Discussion} \label{sec:discussion} In the previous section we described three approaches used to measure $v_{\rm turb}${} in TW~Hya. In the following section we compare the methods and discuss their limitations with a view to improving them.
\subsection{Temperature Structure} Thermal and turbulent broadening are highly degenerate, so a precise determination of the temperature structure is a prerequisite to deriving the level of turbulent broadening. Both direct and parametric methods yield comparable temperatures for CO and CN, as shown in Fig.~\ref{fig:kinetic_temps}, yet they find largely different values of $v_{\rm turb}${}, demonstrating the sensitivity of $v_{\rm turb}${} to the assumed temperature structure. Excitation temperatures derived from the parametric modelling approach yielded warmer temperatures for CO than CN, in turn warmer than CS, with $T_{100}$ = $35.4 \pm 0.2$~K, $25.3 \pm 0.2$~K and $12.2 \pm 0.1$~K respectively when fitting for a total linewidth (see Table~\ref{tab:disk_params}), a trend that was also seen in the direct methods. These values suggest that the emission from each molecule arises from a different height above the midplane in the disk and therefore could be used to trace the vertical structure of $v_{\rm turb}${}. In the single molecule analysis, either direct or parametric, it was assumed that for both CO and CN, $T_{\rm ex}${} = $T_{\rm kin}${}, that is, they are both in local thermodynamic equilibrium (LTE). This assumption was guided by the model of \citet{Gorti_ea_2011}, which has particle densities of $\ga 10^6-10^7$~cm$^{-3}$ where we believe the molecular emission of CO and CN arises. This is sufficient to thermalise the CO line. Given that $T_{\rm kin}${} $\geq$ $T_{\rm ex}${}, except in the extremely rare case of supra-thermal excitation, the above analysis yielded a lower limit to $T_{\rm kin}${}, and therefore an upper limit to $v_{\rm turb}${}. However, for CN, we have clear evidence for supra-thermal excitation beyond 130 au. A detailed discussion of this issue is beyond the scope of this article.
In the future, with multiple transitions, it will be possible to use the relative intensities of the transitions to guide modeling of the excitation conditions traced by the molecule, thereby yielding a more accurate scaling of $T_{\rm ex}${} to $T_{\rm kin}${}. The co-spatial assumption for CN and CS clearly fails in certain regions of the disk where there is no solution to Eqs.~\ref{eq:simtkin} and \ref{eq:simvturb}. Indeed, the parametric modeling yields considerably different temperatures for CN and CS (see Table~\ref{tab:disk_params}), suggesting that this co-spatial assumption fails across the entire disk. Chemical models suggest that CN is present mostly in the photon-dominated layer, higher above the disk plane than CS (although S-bearing molecules are poorly predicted by chemical models; see \citealt{Dutrey_ea_2011}). The non-thermalization of the CN N=2-1 line that we observe beyond 130~au also supports the presence of CN relatively high above the disk plane. The accuracy of this assumption, as well as the search for other co-spatial molecular tracers, can be tested with observations of edge-on disks, where the `molecular layers' can be spatially resolved. Measurements of temperature will be sensitive to temperature gradients along the line of sight, both vertically and radially. Radial gradients will prove more of an issue than vertical ones, as the molecular emission will arise predominantly from a relatively thin vertical region, so we expect only a small vertical dispersion in temperature. With the temperature profiles discussed in Sect.~\ref{sec:method3}, we estimate that the average radial dispersion across the beam is $\delta T_{\rm beam} \lesssim 5~\%$ outside 40~au for all three lines, with a maximum of $\sim 10~\%$ for the very inner regions. \FigureTemperatureBias To understand the impact of this on the subsequent derivation of $v_{\rm turb}${}, we consider a two-zone model.
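Such a two-zone exercise is straightforward to reproduce numerically. The sketch below is a minimal plain-Python version with simplifying assumptions that are ours, not the paper's: equal-weight optically thin Gaussian components, CO as the tracer ($\mu = 28$), and a mean molecular weight of 2.37 for the sound speed. Two components at $30 \pm 3$~K sharing $\mathcal{M}_{\rm true} = 0.3$ are summed, the width of the blend is measured, and a Mach number is re-derived at the mean temperature:

```python
import math

K_B = 1.381e-23    # Boltzmann constant [J / K]
M_H = 1.673e-27    # hydrogen-atom mass [kg]
MU_GAS = 2.37      # assumed mean molecular weight for the sound speed
MU_CO = 28.0       # tracer molecular mass

def sound_speed(t):
    return math.sqrt(K_B * t / (MU_GAS * M_H))

def line_width(t, mach):
    """1/e half-width: thermal width plus turbulence at a fixed Mach number."""
    v_turb = mach * math.sqrt(2.0) * sound_speed(t)
    v_th_sq = 2.0 * K_B * t / (MU_CO * M_H)
    return math.sqrt(v_turb ** 2 + v_th_sq)

def blended_width(t1, t2, mach, step=0.05):
    """FWHM of an equal-weight two-component Gaussian blend, found
    numerically and converted back to a 1/e half-width."""
    w1, w2 = line_width(t1, mach), line_width(t2, mach)
    prof = lambda v: math.exp(-(v / w1) ** 2) + math.exp(-(v / w2) ** 2)
    half_peak = prof(0.0) / 2.0
    v = 0.0
    while prof(v) > half_peak:
        v += step
    return v / math.sqrt(math.log(2.0))

# Two zones at 27 K and 33 K, both at Mach 0.3, analysed at the 30 K mean
dv_obs = blended_width(27.0, 33.0, 0.3)
v_turb_obs = math.sqrt(dv_obs ** 2 - 2.0 * K_B * 30.0 / (MU_CO * M_H))
mach_obs = v_turb_obs / (math.sqrt(2.0) * sound_speed(30.0))
```

In this toy version a $\pm 10\%$ temperature spread changes the recovered Mach number by well under 1\%, consistent with the small biases quoted in the text.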
We take two regions of differing temperature, but sharing the same turbulent velocity, described by a Mach number $\mathcal{M}_{\rm true} = v_{\rm turb} \, / \, (\sqrt{2} \, c_s)$, and the same optical depth. We measure a temperature and linewidth by fitting a Gaussian line profile to the resulting combined line profile and derive a Mach number, $\mathcal{M}_{\rm obs}$. With this method we are able to explore how accurately $\mathcal{M}_{\rm obs}$ can recover $\mathcal{M}_{\rm true}$ for a given temperature dispersion. Figure~\ref{fig:temperaturebias} shows the relative error on $\mathcal{M}${}, $\delta \mathcal{M}$, as a function of $\mathcal{M}_{\rm true}$ and temperature dispersion $\delta T$, assuming a mean temperature of 30~K. Taking the temperature dispersion across the beam of 10~\%, we find an uncertainty of $\lesssim 1~\%$ on $\mathcal{M}${}. This suggests that our determination of $v_{\rm turb}${} is not biased by the expected line-of-sight gradients in temperature and turbulent width. \subsection{Turbulent Velocity Dispersions} With an assumed thermal structure, the turbulent broadening component was taken to be the residual linewidth not accounted for by thermal broadening or beam smear. The resulting values of $v_{\rm turb}${} are compared in Figs.~\ref{fig:direct_turbulent} and \ref{fig:turbulence_2D}. All three methods yielded values of $v_{\rm turb}${} ranging from $\sim 50 - 150 \, {\rm m\,s^{-1}}$, corresponding to the range $\sim 0.2 - 0.4~c_s$, but exhibit different radial profiles. The azimuthal structure seen near the centre of the disk in all panels of Fig.~\ref{fig:turbulence_2D} is due to the azimuthally independent subtraction of beam smearing used in Section~\ref{sec:linewidthmeasurements}. \FigureDirectTurbulence \subsubsection{Single Molecule Approach} CO and CN emission allowed for a `single molecule approach' as described in Section~\ref{sec:method1}.
CO yielded values of $v_{\rm turb}${} for $40 \lesssim r \lesssim 190$~au, while CN was limited to $40 < r \lesssim 130$~au because of the potential non-LTE effects described in the previous section. Both molecules displayed a $v_{\rm turb}${} decreasing with radius, although CO shows a slight increase at the outer edge. As a fraction of $c_s$, both molecules ranged within $\sim 0.2 - 0.4~c_s$; however, for CO this fraction was found to increase with radius while for CN it decreased. \subsubsection{Co-Spatial Approach} Assuming CN and CS are co-spatial, we find values of $v_{\rm turb}${}~$\leq$~\vel{100}, or $v_{\rm turb}${}~$\leq~0.4~c_s$, comparable to the range found for CO and CN individually. This method, however, is limited by the validity of the assumption that CN and CS are co-spatial. Indeed, the assumption fails entirely between $100 \lesssim r \lesssim 180$~au, where the linewidth measurements do not allow for a solution of Eqs.~\ref{eq:simtkin} and \ref{eq:simvturb} to be found. This is more clearly seen in Fig.~\ref{fig:scaledlinewidths}, which shows the linewidths of CN and CS scaled by $\sqrt{\mu}$, where $\mu = 26$ for CN and $\mu = 44$ for CS. In the region where no solution is found, the scaled linewidth for CS is less than that of CN. Despite this limitation, the approach demonstrates the utility of another method of determining both $T_{\rm kin}${} and $v_{\rm turb}${}. \subsubsection{Parametric Model Fitting} All previous measurements of $v_{\rm turb}${} have relied on fitting a power-law model of a disk to the observations \citep{Dartois_ea_2003,Pietu_ea_2007,Hughes_ea_2011,Guilloteau_ea_2012,Rosenfeld_ea_2012,Flaherty_ea_2015}, so this allows for a direct comparison to previous results in the literature. In addition, for data with reduced spatial and spectral resolution the `direct' methods will not be possible, so it is important to validate the parametric modelling approach.
We have described two models which were fit to the data, with the results shown in Table~\ref{tab:disk_params}. Both include the excitation temperature as a radial power-law; however, for one we assume the total linewidth is a power-law, while for the other we assume $v_{\rm turb}${} is a power-law. Accordingly, the parameter not fit for is \emph{not} a power law, but rather derived through Eqn.~\ref{eq:linewidth}. Typically, with high spectral and spatial resolution, the data only allow for the second method. A comparison between the models is shown in Fig.~\ref{fig:direct_turbulent}, where the yellow solid line shows the case where $\Delta V$, the total linewidth, was assumed to be a power-law, and the dashed gray lines show the case where $v_{\rm turb}${} was assumed to be a power-law. All three molecules display ranges of $v_{\rm turb}${}, $\sim 50 - 150 \, {\rm m\,s^{-1}}$ ($\sim 0.1 - 0.4~c_s$), similar to those of the direct methods. For both CO and CS the two parametric models yield similar results; however, the second, where $v_{\rm turb}${} is fit for, has larger uncertainties. Both molecules have a slightly increasing $v_{\rm turb}${} with radius, with $e_{v_{\rm turb}} \approx -0.22$ and $-0.1$ respectively, around \vel{60}. CN, on the other hand, shows a distinct dichotomy between the two models due to the different temperature profiles derived for the two methods (see Table~\ref{tab:disk_params}). As mentioned in the previous section, CN displays non-LTE effects which the LTE parametric model may struggle to fit. A limiting feature of such parametric model fitting is showcased by the results for CO (left column of Fig.~\ref{fig:direct_turbulent}). If the physical properties of the disk vary from a power-law description, the model will fail to capture this and may be driven to the best `average' description.
For example, while the power-law method recovers $v_{\rm turb}${} for CO for $r \gtrsim 100$~au, inside of this radius the two derived $v_{\rm turb}${} values, one obtained directly and one from model fitting, can deviate by up to a factor of 2. \subsection{Limits on the Detectability of $v_{\rm turb}${}} \FigureMaps \FigureScaledLinewidths The single molecule methods, either direct or parametric, are limited by our ability to recover the kinetic temperature with precision. Uncertainty on the kinetic temperature comes from several origins: thermal noise, incomplete thermalization of the observed spectral lines, absolute calibration accuracy, and, in the parametric model, inadequacy of the model. Thermal noise can be overcome with sufficient integration time. Incomplete thermalization is a complex issue, and will in general require multiple transitions to be evaluated. However, in the case of CO, the critical densities are low, and we expect the CO lines to be very close to thermalization. Absolute calibration will place an ultimate limit on our ability to measure the turbulence. We derive in Appendix~\ref{sec:derivation} the impact of the uncertainty on the kinetic temperature on the derived turbulence, \begin{equation} \frac{\delta v_{\rm turb}}{v_{\rm turb}} = \frac{\mu_{\rm H}}{2 \mu \mathcal{M}^2} \, \frac{\delta T}{T}, \label{eq:errordv} \end{equation} \noindent where $\mathcal{M}${} is the Mach number of the turbulent broadening. The left panel of Fig.~\ref{fig:limits} shows, in the absence of any error in the measurement of the linewidth, the relative error in $v_{\rm turb}${} as a function of the relative error in $T_{\rm kin}${} for CO (assuming $\mu = 28$). Note that as errors in $\Delta V$ have been neglected, Fig.~\ref{fig:limits}a underestimates the precision in $T_{\rm kin}${} necessary to detect $v_{\rm turb}${}.
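Equation~\ref{eq:errordv} is easy to apply, and can be checked against a direct perturbation of Eq.~\ref{eq:linewidth}. A sketch in plain Python (assuming, as an illustration, $\mu_{\rm H} = 2.37$ for the mean gas weight entering the sound speed, and CO with $\mu = 28$):

```python
import math

K_B = 1.381e-23   # Boltzmann constant [J / K]
M_H = 1.673e-27   # hydrogen-atom mass [kg]
MU_H = 2.37       # assumed mean molecular weight of the gas
MU_CO = 28.0      # tracer molecular mass

def frac_error_vturb(mach, frac_error_t, mu=MU_CO):
    """Eq. (10): relative error on v_turb from a relative error on T."""
    return MU_H / (2.0 * mu * mach ** 2) * frac_error_t

# Cross-check by perturbing T in Eq. (3) at T = 30 K, Mach 0.2
t, mach, eps = 30.0, 0.2, 1e-6
c_s = math.sqrt(K_B * t / (MU_H * M_H))
v_t = mach * math.sqrt(2.0) * c_s
dv_sq = v_t ** 2 + 2.0 * K_B * t / (MU_CO * M_H)      # fixed observable
v_t_pert = math.sqrt(dv_sq - 2.0 * K_B * t * (1 + eps) / (MU_CO * M_H))
numeric = abs(v_t_pert - v_t) / (v_t * eps)           # |d ln v / d ln T|
analytic = frac_error_vturb(mach, 1.0)
```

The $1/\mathcal{M}^2$ scaling is what makes weak turbulence so expensive to detect: under these assumptions, at $\mathcal{M} = 0.1$ even a 3\% temperature error already maps to a $\sim 13\%$ error on $v_{\rm turb}${}.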
Previous measurements from the Plateau de Bure Interferometer (PdBI) and Sub-Millimetre Array (SMA) have typical flux calibration uncertainties of $\sim$~10\% and $\sim$~20\% respectively \citep{Hughes_ea_2011,Guilloteau_ea_2012}, so we estimate that these can only directly detect $v_{\rm turb}${} at $3 \sigma$ when $v_{\rm turb}${} $\gtrsim 0.16~c_s$ and $\gtrsim 0.26~c_s$ respectively. Our current ALMA experiment has a 7-10\% calibration accuracy, and is thus only sensitive to $v_{\rm turb}${}~$\gtrsim~0.2~c_s$ if the turbulence is to be inconsistent with \vel{0} at the $5~\sigma$ level. Ultimately, ALMA is expected to reach a flux calibration of $\approx$~3\%, which will translate to a limit of $v_{\rm turb}${} $\gtrsim 0.07~c_s$ for a $\geq 3\sigma$ detection. However, the flux calibration does not affect the precision to which linewidths can be measured. The resulting errors on the turbulence and temperature derived in the co-spatial method are given by, \begin{equation} \frac{\delta v_{\rm turb}}{v_{\rm turb}} = \frac{1}{\mu_{\rm B} - \mu_{\rm A}} \, \frac{\delta \Delta V}{\Delta V} \sqrt{\,\left(\mu_{\rm A} + \frac{\mu_{\rm H}}{\mathcal{M}^2} \right)^2 + \frac{1}{x^2} \, \left( \mu_{\rm B} + \frac{\mu_{\rm H}}{\mathcal{M}^2} \right)^2}, \end{equation} \begin{equation} \frac{\delta T}{T} = \frac{2 \mu_{\rm A} \mu_{\rm B}}{\mu_{\rm B} - \mu_{\rm A}} \frac{\delta \Delta V}{\Delta V} \sqrt{\,\left( \frac{\mathcal{M}^2}{\mu_{\rm H}} + \frac{1}{\mu_{\rm A}} \right)^2 + \frac{1}{x^2} \, \left( \frac{\mathcal{M}^2}{\mu_{\rm H}} + \frac{1}{\mu_{\rm B}}\right)^2}, \end{equation} \FigurePrecision \noindent where $x$ is a scaling factor between the relative errors on the two linewidths, \begin{equation} \frac{\delta \Delta V_{\rm A}}{\Delta V_{\rm A}} = x \cdot \frac{\delta \Delta V_{\rm B}}{\Delta V_{\rm B}} = \frac{\delta \Delta V}{\Delta V}. \end{equation} \noindent See Appendix~\ref{sec:derivation} for the complete derivation.
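The $\delta v_{\rm turb}/v_{\rm turb}$ formula above can likewise be verified numerically against finite differences of Eq.~\ref{eq:simvturb}. A sketch in plain Python for the $x = 1$ case (again assuming $\mu_{\rm H} = 2.37$, with the CN/CS masses of 26 and 44 from the text; the 20~K and Mach~0.3 inputs are arbitrary test values):

```python
import math

K_B = 1.381e-23           # Boltzmann constant [J / K]
M_H = 1.673e-27           # hydrogen-atom mass [kg]
MU_H = 2.37               # assumed mean molecular weight of the gas
MU_A, MU_B = 26.0, 44.0   # CN and CS molecular masses

def frac_error_vturb(mach, frac_dv, x=1.0):
    """Co-spatial error formula for delta v_turb / v_turb."""
    term_a = MU_A + MU_H / mach ** 2
    term_b = MU_B + MU_H / mach ** 2
    return frac_dv / (MU_B - MU_A) * math.sqrt(term_a ** 2
                                               + (term_b / x) ** 2)

def widths(t, v_turb):
    """Total linewidths of the two tracers from Eq. (3)."""
    dv = lambda mu: math.sqrt(v_turb ** 2 + 2.0 * K_B * t / (mu * M_H))
    return dv(MU_A), dv(MU_B)

def vturb_from(dv_a, dv_b):
    """Eq. (6)."""
    return math.sqrt((MU_B * dv_b ** 2 - MU_A * dv_a ** 2) / (MU_B - MU_A))

# Finite-difference check at T = 20 K, Mach 0.3, equal relative errors
t = 20.0
c_s = math.sqrt(K_B * t / (MU_H * M_H))
v_t = 0.3 * math.sqrt(2.0) * c_s
dv_a, dv_b = widths(t, v_t)
eps = 1e-7
d_a = abs(vturb_from(dv_a * (1 + eps), dv_b) - v_t) / (v_t * eps)
d_b = abs(vturb_from(dv_a, dv_b * (1 + eps)) - v_t) / (v_t * eps)
numeric = math.hypot(d_a, d_b)   # quadrature sum of the two partials
analytic = frac_error_vturb(0.3, 1.0, x=1.0)
```

The large prefactor (here a factor of several) shows why the co-spatial method demands very high signal-to-noise linewidth measurements.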
Figure \ref{fig:limits}b shows the relative error on $v_{\rm turb}${} assuming the molecular masses of CN and CS (26 and 44 respectively) and that the relative errors on both lines are the same, $x=1$. Figure~\ref{fig:limits}c shows the limits of this method in determining $T_{\rm kin}${}. For the observations presented in this paper, we have a precision on the measurement of the linewidth of $\approx 0.3~\%$ for both CO and CN, and $\approx 1~\%$ for CS (hence $x \approx 0.33$). Parametric models typically return much lower formal errors on $v_{\rm turb}${} than the direct methods (for example, we find relative errors in Sect.~\ref{sec:method3} on the order of 5\%). However, this is only a result of the imposed prior on the shape of the radial dependency of the temperature and turbulent width, which can lead to a significant bias that is not accounted for in the analysis. In any case, these parametric models suffer from the same fundamental limits due to thermalization and absolute calibration as the single-molecule direct method. \subsection{Comparison with Other Observations, Disks and Simulations} Turbulence in TW~Hya was previously modelled by \citet{Hughes_ea_2011} using \vel{40} resolution SMA observations of CO (3-2). Using a model fitting approach, the authors found an upper limit of $v_{\rm turb}${} $\la$ \vel{40}, corresponding to $\la 0.1 c_s$, considerably lower than the values plotted in Fig.~\ref{fig:direct_turbulent}. The temperature profile assumed for their parametric model was warmer than found in this work, with the authors quoting $T_{100} = 40$~K and $e_T = 0.4$, compared to our values of $T_{100} = 34.5 \pm 0.1$~K and $e_T = 0.492 \pm 0.002$ (see Fig.~\ref{fig:temperature_comparisons}). This warmer profile is sufficient to account for any difference in the resulting $v_{\rm turb}${}.
In any case, both measurements are fundamentally limited by the absolute calibration uncertainty, and only imply $v_{\rm turb}${}~$< 0.23 c_s$ (SMA data) or $< 0.16 c_s$ (our ALMA data). Other disks have also been the subject of investigations of $v_{\rm turb}${}. DM~Tau, MWC~480 and LkCa~15 have yielded higher velocities of $\la 100 - 200$~m\,s$^{-1}$ ($\la 0.3-0.5$~$c_s$) \citep{Dartois_ea_2003,Pietu_ea_2007}, which are sufficiently large to be detected by the PdBI. However, the velocity resolution of those observations was on the order of \vel{200}, resulting in a poorly constrained total linewidth which may lead to an overestimate of $v_{\rm turb}${}. The impact of the spectral resolution was accounted for in the more recent measurement of DM Tau by \citet{Guilloteau_ea_2012} using the heavier molecule CS, who found $v_{\rm turb}${}~$\simeq 0.3 - 0.4~c_s$. More recently, \citet{Flaherty_ea_2015} used parametric modeling of multiple CO isotopologue transitions to infer $v_{\rm turb}${} $\la 0.04~c_s$ in HD~163296. We must, however, consider the impact of flux calibration on all methods involving a single line measurement. Every method will constrain the local linewidth using some combination of diagnostics, such as the broadening of channel images or the peak-to-trough ratio of the integrated spectra. Each method will recover this linewidth to its own precision (depending particularly on the functional form imposed for the spatial dependency of this linewidth). However, once the uncertainty on the local linewidth is known, Equation~\ref{eq:errordv} can be applied to propagate the error due to this uncertainty and to the absolute calibration precision to the turbulent component of the line width. Application to the results of \citet{Hughes_ea_2011} and \citet{Flaherty_ea_2015} yields upper limits of $v_{\rm turb}${}~$< 0.23~c_s$ and $< 0.16~c_s$ respectively, more similar to what we measure here.
The $v_{\rm turb}${} value found for DM~Tau is considerably larger than the limits imposed by the flux calibration ($\approx 10\%$). Taken together with the observed limits, this suggests that the disk of DM~Tau is more turbulent than those of TW~Hya and HD~163296. Comparisons with numerical simulations also provide a chance to distinguish between turbulent mechanisms. \citet{Simon_ea_2015} used an ensemble of shearing-box MHD simulations coupled with radiative transfer modeling to predict the velocity dispersion traced by CO emission in a proto-typical T-Tauri disk pervaded by MRI. The authors found that molecular emission would trace a transition region between the dead zone and the turbulent atmosphere, showing velocity dispersions of between 0.1 and 0.3~$c_s$, almost identical to the range found in TW~Hya. \citet{Flock_ea_2015} ran similar, but global, models of an MRI active disk, finding velocity dispersions of $v_{\rm turb}${} $\approx$ 40 -- \vel{60} near the midplane, rising to 80 -- \vel{120} higher above the midplane, again consistent with the values found in TW~Hya. A comparison with $\alpha$ viscosity models is more complex, as the relation between $v_{\rm turb}${} and $\alpha$ depends on the nature of the viscosity, with $v_{\rm turb}${} ranging between a few $\alpha c_s$ and $\sqrt{\alpha} c_s$ \citep{Cuzzi_ea_2001}. A vertical dependence of $v_{\rm turb}${}, as found in \citet{Flock_ea_2015}, is a typical feature of MRI driven turbulence and may provide a discriminant between different models of turbulent mixing. In addition to the parametric model finding different temperatures for all three molecules, CO and CN yielded different $T_{\rm ex}${} values from the line profile fitting, and the simultaneous method failed under the assumption that CN and CS are co-spatial. These pieces of evidence suggest CO, CN and CS each trace distinct vertical regions in the disk, potentially offering the possibility of tracing a vertical gradient in $v_{\rm turb}${}.
With the current uncertainties on the temperatures for the three molecules we are unable to distinguish any difference in $v_{\rm turb}${} with height above the midplane. \citet{Cleeves_ea_2015} have modelled the ionization structure of TW~Hya using observations of the key molecular ions HCO$^+$ and N$_2$H$^+$, concluding that the disk may have a large MRI-dead zone extending to $\sim 50-65$~au. An observable feature of such a dead zone would be a sharp decrease in the velocity dispersion at this radius. Our data lack the spatial resolution and sensitivity to reliably trace the gas turbulent motions in the inner $\sim 40$~au where this feature may be more prominent. However, the power-law analysis indicates $v_{\rm turb}${} values actually increase with radius (exponent $e_{\delta\mathrm{v}} < 0$), in contrast with the direct measurements. This difference may be due to the impact of such a less turbulent inner region, which is ignored in the direct method but must be fitted in the power-law analysis. Future observations will refine the above analysis: a well-constrained thermal structure is crucial to the accuracy of the $v_{\rm turb}${} determination with this direct method. This can be attained with observations of multiple transitions of the same molecule. Furthermore, for more inclined systems, a better understanding of the impact of beam smearing on the velocity dispersion is paramount. This can be combated with smaller beamsizes, resolving a smaller shear component. Among observed species, CS currently provides the best opportunity to probe velocity dispersions closer to the midplane, while we have demonstrated that the ensemble of CO, CN and CS can additionally allow for the determination of the vertical dependence of $v_{\rm turb}${}. Despite all these improvements, direct measures of turbulence will ultimately be limited by the flux calibration of the interferometers, with a sensitivity of $\approx 0.1 c_s$ for ALMA's quoted 3\% accuracy.
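The comparison with $\alpha$ viscosity models quoted earlier (between a few $\alpha c_s$ and $\sqrt{\alpha}c_s$) can be turned into a rough bracket on $\alpha$ for a given measured $v_{\rm turb}/c_s$. This is our own sketch; the order-unity prefactor in the linear scaling is an assumption.

```python
def alpha_bracket(vturb_over_cs, prefactor=3.0):
    """Bracket the viscosity parameter alpha implied by a measured
    v_turb / c_s under the two limiting scalings quoted in the text:
    v_turb ~ sqrt(alpha) * c_s  and  v_turb ~ (a few) * alpha * c_s."""
    alpha_sqrt = vturb_over_cs ** 2            # v_turb = sqrt(alpha) c_s
    alpha_linear = vturb_over_cs / prefactor   # v_turb = prefactor * alpha * c_s
    return sorted((alpha_sqrt, alpha_linear))
```

For the $v_{\rm turb} \sim 0.2\,c_s$ upper limits discussed here, both scalings place $\alpha$ at a few times $10^{-2}$.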
\section{Conclusion} \label{sec:conclusion} \FigureTemperatureComparison We have discussed several methods of obtaining the turbulent velocity dispersion in the disk of TW~Hya using CO, CN and CS rotational emission with a view to complementing the commonly used parametric modeling approach. Guided by previous models of TW~Hya, the direct method yields $v_{\rm turb}${} values which depend strongly on the radius of the disk, reaching $\approx$~\vel{150} at 40~au and dropping to a near constant $\approx$~\vel{50} outside 100~au for all three tracers. As a function of the local soundspeed, CO and CN displayed a near constant $v_{\rm turb}${} $\sim 0.2~c_s$. However, the analysis of the possible sources of errors shows that these numbers should most likely be interpreted as upper limits. Direct or parametric methods using a single molecule are limited by a poor knowledge of the thermal structure of the disk. Additional transition lines will provide a more accurate determination of the temperature, although this is ultimately limited by the flux calibration of ALMA. With an expected minimum of 3\% error on flux calibration, we estimate that a firm detection of turbulent broadening is only possible if $v_{\rm turb}${} $/ c_s \ga 0.1$ via this direct method. The co-spatial method can potentially overcome this absolute calibration issue, but requires two co-spatial tracers of sufficient abundance to have strong emission. Tracing $v_{\rm turb}${} close to the midplane will be considerably more challenging, requiring a strong detection of o-H$_2$D$^+$ and another molecule residing in the midplane, such as N$_2$D$^+$. \begin{acknowledgements} We thank the referee, whose helpful comments have improved this manuscript. R.T. is a member of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg, Germany. D. S. and T. B.
acknowledge support by the Deutsche Forschungsgemeinschaft through SPP 1385: "The first ten million years of the solar system -- a planetary materials approach" (SE 1962/1-3) and SPP 1833 "Building a Habitable Earth" (KL 1469/13-1), respectively. This research made use of System. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2013.1.00387.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This work was supported by the National Programs PCMI and PNPS from INSU-CNRS. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \subsection{} In the beginning, in Stasheff's seminal papers \cite{Sta63}, $A_\infty$-spaces (algebras) had points (units) in what was subsequently termed the zero arity of the operad $\Ass$.\footnote{Therefore, in our notation, we should write it as $u\Ass$, or $\Ass_+$: the \emph{unitary} associative operad.} Stasheff called them \emph{degenerations}. They were still present in \cite{May72} and \cite{BoVo73}, for instance, but after that, points or units generally disappeared and for a while people working with operads assumed as a starting point $P(0) = \emptyset$, in the topological setting, or $P(0) = 0$ in the algebraic one: see for instance \cite{GK94}. This may have been caused by the problems posed by those points (units), including \begin{itemize} \item[(1)] \cite{Hin03} had to correct his paper \cite{Hin97} about the existence of a model structure in the category of operads of complexes over an arbitrary commutative ring, excluding the arity zero of the operads---or considering just the case of characteristic zero. \item[(2)] \cite{Bur18} explains how the bar construction of a dg associative algebra with unit is homotopy equivalent to the trivial coalgebra, thus destroying the usual bar-cobar construction through which one usually builds minimal models for operads in Koszul duality theory. \item[(3)] \cite{Mar96} (see also \cite{MSS02}) constructs minimal models for operads of chain complexes over a field of zero characteristic, carefully excluding operads with non-trivial arity zero, which allows him to implicitly replace the somewhat \lq\lq wild" general free operad $\Gamma(M)$ with the tamer one that we denote by $\Gamma_{01}(M)$. \end{itemize} More recently, the situation changed and people have turned their efforts to problems involving non-trivial arity zero. In the topological context, we have the works \cite{MT14}, or \cite{FTW18}, for instance.
In the algebraic context we can mention \cite{FOOO09a}, \cite{FOOO09b}, \cite{Pos11}, \cite{Lyu11}, \cite{HM12}, \cite{Bur18}\dots And coping with both, \cite{Mur16}, or \cite{Fre17a} and \cite{Fre17b}. In introducing points (units) back into the theory of up-to-homotopy things, there are two main possibilities: either you consider \emph{strict} ones, as in Stasheff's original papers \cite{Sta63}, or in \cite{May72}, \cite{Fre17a}, \cite{Fre17b}, \cite{FTW18}, \cite{Bur18}, or you consider \emph{up-to-homotopy} ones, or other relaxed versions of them: \cite{BoVo73}, \cite{FOOO09a}, \cite{FOOO09b}, \cite{Pos11}, \cite{Lyu11}, \cite{HM12}, \cite{MT14}\dots Or you can do both: \cite{KS09}. In this paper, we work in the algebraic and strict part of the subject. The contribution we add to the present panorama is to prove the existence of minimal models à la Sullivan $P_\infty$ for operads $P$ on cochain complexes over a characteristic zero field $\mk$, with non-trivial arity zero in cohomology, $HP(0) = \mk$. In doing so, we extend the works of Markl \cite{Mar96}, \cite{Mar04} (see also \cite{MSS02}), which proved the existence of such models for non-unitary operads, $P(0) = 0$. Our models include the one of \cite{Bur18} for the unitary associative operad $\Ass_+ = u\Ass$. More precisely, our main result says: \newtheorem*{I1}{\normalfont\bfseries Theorem $\textbf{\ref{existencia+}}$} \begin{I1} Every cohomologically connected operad with unitary multiplication $P\in \Ass_+ \backslash \Op $, $HP(1) = \mk$, and which is cohomologically unitary, $HP(0) = \mk$, has a Sullivan minimal model $P_\infty \longrightarrow P$. This minimal model is connected, $P_\infty (1) = \mk$, and unitary, $P_\infty (0) = \mk$. \end{I1} The restriction condition \emph{operad with unitary multiplication} just means that $P$ is an operad together with a morphism $\varphi : \Ass_+ \longrightarrow P$.
This means that the unit $1 \in P(0)$ in fact acts as a unit; in other words, there is at least one operation $m_2 \in P(2)$ such that $1$ is a unit for the operation, $m_2 \circ_i 1 = \id, i=1,2$. Therefore, the hypothesis of being (cohomologically) unitary for $P$ is not actually an empty condition. In the non-unitary case, the importance of such minimal models is well known. For instance, they provide a \emph{strictification} of up-to-homotopy algebras, in that for an operad $P$ (with mild hypotheses), up-to-homotopy $P$-algebras are the same as strict, regular $P_\infty$-algebras. We show how $A_\infty$-algebras with strict units are exactly $(\Ass_+)_\infty = su\Ass_\infty$-algebras. As an application too, we offer another proof of the formality of the \emph{unitary} $n$-little disks operad $\D_{n+}$ over the rationals. This fills the gap in our paper \cite{GNPR05} noticed by Willwacher in his speech at the 2018 Rio International Congress of Mathematicians \cite{Wil18}. \subsection{} Markl's mimicking of the Sullivan's original algorithm for dg commutative algebras to non-unitary operads relies on the fact that, when restricted to operads which are non-unitary $P(0) = 0$\footnote{In fact, we show how there is only the need to assume \emph{cohomologically} non-unitary operads, $HP(0) = 0$, in his case.} and cohomologically connected $HP(1) = \mk$, their minimal model is a free graded operad $P_\infty = \Gamma (M)$ over a $\Sigma$-module $M$ which is trivial in arities $0$ and $1$, $M(0) = M(1) = 0$. In this situation, the free graded operad $\Gamma (M)$ has tamer behavior than the \lq\lq wild" general one. We call it $\Gamma_{01} (M)$ and we prove for it lemma \ref{lamadredelcordero}, which allows the Sullivan algorithm to work inductively on the arity of the operads. 
The precise statement, also containing our unitary case, is the following: \newtheorem*{I2}{\normalfont\bfseries Lemma $\textbf{\ref{lamadredelcordero}}$} \begin{I2} For every module $M$ with $M(0) = M(1) = 0$, and every homogeneous module $E$ of arity $p > 1$, $\Gamma_{01}$ and $\Gamma_{+1}$ verify: \begin{itemize} \item[(a)] $$ \Gamma_{01}(M)(l) = \begin{cases} 0 , & \mbox{if } l= 0, \\ \mk , & \mbox{if } l = 1, \\ \Gamma (M) (l) , & \mbox{if } l \neq 0,1 \end{cases} \qquad \text{and} \qquad \Gamma_{+1}(M) (l) = \begin{cases} \mk, & \mbox{if } l= 0, \\ \mk , & \mbox{if } l = 1, \\ \Gamma (M) (l) , &\mbox{if } l \neq 0,1 \ . \end{cases} $$ \item[(b)] For $\Gamma = \Gamma_{01}, \Gamma_{+1}$, $$ \Gamma (M \oplus E) (l) = \begin{cases} \Gamma (M) (l), & \mbox{if } l < p, \\ \Gamma (M) (p) \oplus E , & \mbox{if } l = p. \end{cases} $$ \end{itemize} \end{I2} Part $(a)$ of the lemma just says that the minimal models $P_\infty$ we are going to construct for cohomologically non-unitary $HP(0) = 0$ (resp., unitary, $HP(0) = \mk$), and cohomologically connected $HP(1) = \mk$ will be non unitary $P_\infty (0) =0$ (resp., unitary, $P_\infty(0) = \mk$) and connected, $P_\infty (1) = \mk$. The possibility of doing the Sullivan algorithm arity-wise relies on this part $(b)$, which shows that, under these restrictions, the new generators $E$ you add in arity $p$ don't produce anything other than themselves in the arity $p$ of the free operad $\Gamma (M\oplus E)$ and don't change what you had in previous arities. In case $M(0)$ or $M(1)$ were non-trivial, the situation would be much more involved. This was clearly the situation in Markl's case. Now the point is that, if we want to construct the minimal model à la Sullivan for cohomologically unitary and cohomologically connected operads $HP(0) = HP(1) = \mk$, \emph{keeping the units strict}, we can also assume that the generating module $M$ also has trivial arities $0$ and $1$. 
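As a small illustration of part $(b)$ (our example, not part of the lemma): take $M$ concentrated in arity $2$ and $E$ concentrated in arity $p = 3$. A tree decorated by $k$ binary generators from $M$ has arity $k+1$, while a tree using $j \geq 1$ ternary generators from $E$ and $k$ binary ones has arity $k + 2j + 1 \geq 3$, with equality exactly for the corolla decorated by a single element of $E$. Hence
$$
\Gamma_{01}(M \oplus E)(l) = \Gamma_{01}(M)(l) \ \text{ for } l < 3, \qquad \Gamma_{01}(M \oplus E)(3) = \Gamma_{01}(M)(3) \oplus E \ ,
$$
so the new generators appear in arity $3$ only as themselves, exactly as part $(b)$ asserts.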
This possibility has been recently made feasible thanks to Fresse's $\Lambda$-modules and $\Lambda$-operads, \cite{Fre17a}. We recall the definitions of $\Lambda$-modules and $\Lambda$-operads in section $2$, but to put it succinctly, we strip out of the operad all the structure carried by the elements of $P(0)$ and add it to the underlying category of $\Sigma$-modules. For instance, the action of a unit $1 \in \mk = P(0)$ on an arbitrary element $\omega \in P(m)$, $\omega \mapsto \omega \circ_i 1 \in P(m-1)$ becomes part of the structure of the underlying module as a \emph{restriction operation} $\delta_i: P(m) \longrightarrow P(m-1)$. The enhanced category of $\Sigma$-modules with these operations is the category of $\Lambda$-modules, and the free operad $\Gamma_{+1}$ of our lemma \ref{lamadredelcordero} is the left adjoint of the forgetful functor from operads to $\Lambda$-modules. Notice that, as a consequence, the $\Lambda$-structure, or which is the same, the action of the units, becomes fixed and is inherited by the free operad $\Gamma_{+1}$. So the units of our minimal models and their algebras are strict: up-to-homotopy units are excluded in this treatment of the subject. \subsection{} As with our paper \cite{CR19}, a comparison with the minimal models of operads obtained thanks to the \emph{curved} Koszul duality \cite{Bur18}, \cite{HM12} might be in order. Of course, since both share the property of being minimal, they must give isomorphic models when applied to the same operads. Nevertheless, let us point out a slight advantage of our approach: in order to construct the minimal model of an operad $P$ through the Sullivan algorithm, $P$ does \emph{not} need to fulfill any Koszul duality, curved or otherwise; not even to be quadratic. You just need the simpler conditions on its cohomology $HP(0) \in \{0, \mk \}$ and $HP(1) = \mk$. \subsection{} The contents of the paper are as follows. 
In section two, we recall some general definitions and facts about $\Sigma$- and $\Lambda$-modules and operads. Section three does the same with trees, free operads and the two particular instances of them we use in the present paper. Here we prove lemma \ref{lamadredelcordero}, which allows the Sullivan algorithm to work arity-wise in both cases that are studied in this paper, the non-unitary and unitary ones. Section four contains the basic homotopy theory of operads we need: extensions and their cofibrant properties, and homotopies between morphisms of operads. The results are well known, at least in the non-unitary case (see \cite{MSS02}). Here we check that everything works also in the unitary case. Section five is devoted to the proof of our main results: the existence and uniqueness of minimal models for dg operads in the non-unitary and unitary cases. We show how, once we choose the right free operad, the proof is formally the same in both cases. In section six we prove the aforementioned formality result and check different issues raised by our main results, namely, the relationships between: $(1)$ the minimal model of a unitary operad $P_+$ and the one of its non-unitary truncation $P$; $(2)$ the minimal model of a unitary operad $P_+$ and up-to-homotopy $P_+$-algebras with strict units and $(3)$ the minimal models of the unitary associative operad $u\Ass$ \emph{with up-to-homotopy units} $hu\Ass_\infty$ and ours \emph{with strict units} $su\Ass_\infty$, giving greater accuracy to a remark in \cite{HM12} about the latter not being cofibrant. The appendix contains one main technical device which allows us to transfer the constructions from the non-unitary case to the unitary one. Namely, a simplicial structure for operads with unitary multiplication and a \emph{Kan-like condition} that elements appearing in those constructions verify.
The use of this Kan-like condition imposes a restriction on the operads to which our main result applies which might, at first sight, seem artificial: that of being operads with a \emph{unitary multiplication}. But in fact, all this Kan-like structure asks of our operads \emph{with unit} $1 \in P(0)$ is that their unit should not be an \lq\lq idle" one, but must necessarily go hand in hand with at least one operation $m_2 \in P(2)$ for which $1$ is \emph{actually} a unit; that is, $m_2 \circ_1 1 = \id = m_2 \circ_2 1$. \section{Notations and conventions} \subsection{} Throughout this paper, $\mk$ denotes a field of zero characteristic. Except for a brief appearance of the little disks operad at the end of the paper, all of our operads live in two categories: $\C = \dgVect $, or $\C = \gVect$, the categories of \emph{dg vector spaces} (also, cochain complexes, differential of degree $+1$) and \emph{graded vector spaces}, over $\mk$. If necessary, we will use the notation $\sMod^\C , \Op^\C \dots$ for the categories of $\Sigma$-modules and operads with coefficients in $\C$; otherwise, we will omit $\C$ everywhere. Alternatively, we will call their objects \emph{dg operads} and \emph{graded operads}, respectively. We denote by $0$ the initial object of $\C$ and also by $\mk$ the unit object of the standard tensor product. $1 \in \mk$ denotes the unit of the field $\mk$. $\id$ denotes the identity of an object in any category and also the image of $1 \in \mk$ by the unit morphism of the operad $\eta : \mk \longrightarrow P(1)$. \subsection{} Let $C \in \C$ be a dg vector space or a graded vector space. If $c\in C^n$, we will say that $c$ has \emph{degree} $n$ and note it as $|c| = n$. A morphism of complexes $\varphi: C \longrightarrow D$ is a \emph{quasi-isomorphism}, \emph{quis} for short, if it induces an isomorphism in cohomology $\varphi_* = H\varphi : HC \longrightarrow HD$.
Given a morphism $\varphi: C \longrightarrow D$ of complexes, we denote by $C\varphi$ the \textit{cone of $\varphi$}. This is the cochain complex given by $C\varphi^n=C^{n+1}\oplus D^n$ with differential $$ \begin{pmatrix} -\partial_C & 0 \\ -\varphi & \partial_D \end{pmatrix} \ . $$ We will also denote by $ZC\varphi$, $BC\varphi$ and $HC\varphi = H(C,D)$ the graded vector spaces of the \emph{relative cocycles}, \emph{relative coboundaries} and \emph{relative cohomology}, respectively. The morphism $\varphi$ is a quasi-isomorphism if and only if $HC\varphi =0$. \subsection{$\Sigma$-modules} Let us recall some definitions and notations about operads (see \cite{KrMay95}, \cite{MSS02}, \cite{Fre17a}). Let $\Sigma$ be the {\it symmetric groupoid\/}, that is, the category whose objects are the sets $ \under{n} = \{1,\dots , n \} $ for $n\ge 1$. For $n=0$, we put $\under{0} = \emptyset$, the empty set. As for the morphisms, $$ \Sigma (\under{m},\under{n}) = \begin{cases} \Sigma_n, & \mbox{if } m = n, \\ \emptyset, & \mbox{otherwise}, \end{cases} $$ where $\Sigma_n = \Aut\{1,\dots , n \} $ are the symmetric groups. In the case $n = 0,1$, we have $\Sigma_0 = \Sigma_1 = *$, the one-point set. We will also need to consider its full subcategories $\Sigma_{>1} \subset \Sigma_{>0} \subset \Sigma$, without the $\under{0}, \under{1}$ objects, and the $\under{0}$ object, respectively. The category of contravariant functors from $\Sigma$ to $\mathcal{C}$ is called the category of $\Sigma$-{\it modules\/} ($\Sigma$-sequences in \cite{Fre17a}) and it is denoted by $\sMod$. We identify its objects with sequences of objects in $\mathcal{C}$, $M=\left(M(l)\right)_{l\geq 0} = (M(0), M(1), \dots , M(l), \dots)$, with a right $\Sigma_l$-action on each $M(l)$. So, every $M(l)$ is a $\mk [\Sigma_l]$-module, or $\Sigma_l$-module for short. If $\omega$ is an element of $M(l)$, $l$ is called the {\it arity\/} of $\omega$. We will write $\arity (\omega) = l$, in this case. 
Also, we will say that a $\Sigma$-module $E$ is of \emph{homogeneous arity} $p$ if $E(l) = 0$ for $l \neq p$. If $\omega \in M(l)^p$, we will say that $\omega$ has \emph{arity-degree} $(l,p)$. If $M$ and $N$ are $\Sigma$-modules, a {\it morphism of $\Sigma$-modules} $f:M \longrightarrow N$ is a sequence of $\Sigma_l$-{equivariant\/} morphisms $f(l): M(l) \longrightarrow N(l), \ l\geq 0$. Such a morphism is called a \emph{quasi-isomorphism} if every $f(l) : M(l) \longrightarrow N(l)$ is a quasi-isomorphism of complexes for all $l\geq 0$. If $M$ is a $\Sigma$-module, it's clear that cocycles, coboundaries and cohomology, $ZM, BM, HM$, inherit a natural $\Sigma$-module structure too. In the same vein, if $\varphi : M \longrightarrow N$ is a morphism of $\Sigma$-modules, also its cone $C\varphi$ and the relative cocycles, coboundaries and cohomology, $ZC\varphi, BC\varphi, HC\varphi = H(M,N)$ inherit natural $\Sigma$-module structures. Finally, it's equally clear that projections from cocycles to cohomology $\pi : Z \longrightarrow H$ are morphisms of $\Sigma$-modules, with these inherited structures. We will also use the categories $\sModc$ and $\sModu$ of contravariant functors from $\Sigma_{>0}$ and $\Sigma_{>1}$ to $\C$. We can also consider $\sModu$ and $\sModc$ as the full subcategories of $\sMod$ of those $\Sigma$-modules $M$ such that $M(0) = M(1) = 0$ and $M(0) = 0$, respectively. \begin{remark}\label{projectius} We are going to resort quite frequently to the fact that over the group algebras $\mk [\Sigma_n] $ all modules are projective. So, for any $\Sigma$-module $M$ and any $n$, $M(n)$ is a projective $\Sigma_n$-module. This is a consequence of Maschke's theorem. \end{remark} \subsection{$\Sigma$-operads} The category of $\Sigma$-operads is denoted by $\Op$. 
Operads can be described as $\Sigma$-modules together with either \emph{structure morphisms} \cite{MSS02} (also called \emph{full composition products} \cite{Fre17a}), $$ \gamma_{l;m_1,\dots ,m_l} : P(l)\otimes P(m_1) \otimes \dots \otimes P(m_l) \longrightarrow P(m) \ , $$ or, equivalently, \emph{composition operations} \cite{MSS02} (also called \emph{partial composition products} \cite{Fre17a}), $$ \circ_i : P(l) \otimes P(m) \longrightarrow P(l+m-1) \ , $$ and a \emph{unit} $\eta : \mk \longrightarrow P(1)$, satisfying equivariance, associativity and unit axioms (see \cite{KrMay95}, \cite{MSS02}, \cite{Fre17a}). If $P$ and $Q$ are operads, a \emph{morphism of operads} $\varphi : P \longrightarrow Q$ is a morphism of $\Sigma$-modules which respects composition products and units. A morphism of operads is called a \emph{quasi-isomorphism} if it is so by forgetting the operad structure. We say that an operad $P \in \Op$ is: \begin{itemize} \item[(a)] \emph{Non-unitary} if $P(0) = 0 $, and we denote by $\Op_0$ the subcategory of \emph{non-unitary operads}. \item[(b)] \emph{Unitary} if $P(0) = \mk$, and we denote by $\Op_+$ the subcategory of \emph{unitary operads}. \item[(c)] \emph{Connected} if $P(1) = \mk$, and we denote by $\Op_{01}$ and $\Op_{+1}$ the subcategories of $\Op$ of \emph{non-unitary and connected} operads and \emph{unitary and connected operads}, respectively. \item[(d)] \emph{With unitary multiplication} if there is a morphism of operads $\Ass_+ \longrightarrow P$, and we denote its category by $\Ass_+ \backslash \Op$. Notice that this condition entails the existence of an associative operation $m_2 \in P$, $m_2 \circ_1 m_2 = m_2 \circ_2 m_2$ with unit $m_2 \circ_1 1 = \id = m_2 \circ_2 1$. \end{itemize} Two basic operations we perform on our operads, when possible, are the following: \begin{itemize} \item[(a)] Let $P$ be a connected operad. Denote by $\overline{P}$ its \emph{augmentation ideal}. 
It is the $\Sigma$-module $$ \overline{P}(l) = \begin{cases} 0 , & \mbox{if } l =0,1, \\ P(l), & \mbox{otherwise}. \end{cases} $$ \item[(b)] We say that a non-unitary operad $P$ admits a \emph{unitary extension} when we have a unitary operad $P_+$ which agrees with $P$ in arity $l> 0$ and composition operations extend the composition operations of $P$. In this case, the canonical imbedding $i_+ : P \longrightarrow P_+$ is a morphism in the category of operads. \end{itemize} Later on, we will recall when a non-unitary operad admits such a unitary extension. \subsection{$\Lambda$-modules} Following \cite{Fre17a}, in order to produce minimal models for our unitary operads, we split the units in $P(0)$ out of them. But we don't want to forget about this arity zero term, so we \lq\lq include" the data of the units in the $\Sigma$-module structure as follows: Let $\Lambda$ denote the category with the same objects as $\Sigma$, but with morphisms $$ \Lambda (\under{m}, \under{n}) = \left\{ \text{injective maps}\ \under{m} \longrightarrow \under{n} \right\} \ . $$ We will also consider its subcategories $\Lambda_{>1} \subset \Lambda_{>0}\subset \Lambda$, defined in the same way as the ones of $\Sigma$. So, a $\Lambda$-structure on $M \in \sMod$ is just a contravariant functorial morphism $$ u^* : M(n) \longrightarrow M(m) $$ for every injective map $u: \underline{m} \longrightarrow \underline{n}$. If $M$ is the $\Lambda$-module associated with a unitary operad $P_+$, then, for every $\omega\in P_+(n)$, we have $$ u^*(\omega) = \omega (1, \dots, 1, \id, 1 , \dots , 1, \id , 1, \dots, 1) \ , $$ with $\id$ placed at the $u(i)$-th variables, for $i = 1, \dots , m$. All these $u^*$ can be written as compositions of the \emph{restriction operations} that come from some particular injective maps $u = \delta^i: \underline{n-1} \longrightarrow \underline{n}$: $$ \delta^i(x) = \begin{cases} x, & \mbox{if } x= 1, \dots , i-1 \\ x+1, & \mbox{if } x= i, \dots , n-1 \ . 
\end{cases} $$ Again, if $M = P_+$, this means that $$ \delta_i (\omega) = \omega (\id, \dots, \id, 1 , \id , \dots , \id) \ , $$ with $1$ in the $i$-th variable. For instance, an augmentation $\varepsilon : M(n) \longrightarrow \mk$, $\varepsilon (\omega) = \omega (1, \dots, 1)$, can be written as a composition like $\varepsilon = \delta_1 \circ \delta_1 \circ \dots $. So whenever we want to define a $\Lambda$-structure on $M$, we can restrict ourselves to defining those $\delta_i : M(n) \longrightarrow M(n-1), i = 1, \dots , n, n\geq 1$, subjected to the natural contravariant functorial constraints $(\delta^i\delta^j)^* = \delta_j\delta_i$, equivariance relations..., that we can find in \cite{Fre17a}, p. 71. The category of contravariant functors from $\Lambda$ to $\C$ is called the category of $\Lambda$-modules ($\Lambda$-sequences in \cite{Fre17a}) and it is denoted by $\lMod$. We still have the obvious notions of arity, morphisms and the full subcategories $\lModu$ and $\lModc$ for $\Lambda$-modules. Maschke's theorem also applies to $\Lambda_n$-modules because $\mk [\Lambda_n] = \mk [\Sigma_n] $. Since restriction operations commute with differentials, the cocycles, coboundaries and cohomology, $ZM$, $BM$, $HM$, inherit natural structures of $\Lambda$-modules from a $\Lambda$-module. The same is true if we have a morphism $\varphi : M \longrightarrow N$ of $\Lambda$-modules: its cone $C\varphi$ and the relative cocycles, coboundaries and cohomology, $ZC\varphi, BC\varphi, HC\varphi = H(M, N)$, inherit natural $\Lambda$-structures. It's enough to exhibit the one on $C\varphi$ and to recall that the morphisms defining the $\Lambda$-structure commute with differentials, by definition. So the structure on $C\varphi$ is the following: $$ \begin{pmatrix} \delta_i^M & 0 \\ 0 & \delta_i^N \end{pmatrix} : M(n)^{+1} \oplus N(n) \longrightarrow M(n-1)^{+1} \oplus N(n-1) \ .
$$ And again it's clear that the projections from cocycles to cohomology $\pi : Z \longrightarrow H$ are $\Lambda$-morphisms. \subsection{$\Lambda$-operads} Let $P_+$ be a unitary operad, $P_+(0) = \mk$. We can associate to $P_+$ a non-unitary one $P = \tau P_+$, its truncation, $$ P (l) = \begin{cases} 0, & \mbox{if } l = 0, \\ P_+(l), & \mbox{otherwise}, \end{cases} $$ together with the following data: \begin{itemize} \item[(1)] The composition operations $\circ_i : P_+(m)\otimes P_+(n) \longrightarrow P_+(m+n-1)$ of $P_+$, for $m, n >0$. \item[(2)] The restriction operations $u^*: P_+(n) \longrightarrow P_+(m)$, for every $u\in \Lambda (\under{m},\under{n})$, for $m,n >0$. These restrictions are defined as $u^*(\omega) = \omega (1, \dots, 1, \id, 1 , \dots , 1, \id , 1, \dots, 1)$, with $\id$ placed at the $u(i)$-th variables, for $i = 1, \dots , m$. \item[(3)] The augmentations $\varepsilon : P_+(m) \longrightarrow \mk = P_+(0)$, $\varepsilon (\omega) = \omega (1, \dots , 1)$, for $m>0$. \end{itemize} A non-unitary operad $P$ together with the structures $(1), (2), (3)$ is called a $\Lambda$-operad. \begin{remark} Truncation makes sense also for operads having arbitrary zero arity $P(0)$. See \ref{twominimalmodels}. \end{remark} According to \cite{Fre17a}, p. 58, every unitary operad $P_+$ can be recovered from its non-unitary truncation $P$ with the help of these data, which define the category of $\Lambda$-operads, $\Lambda \Op_0$, and its corresponding variants (see \cite{Fre17a}, page 71). This can be written as isomorphisms of categories $$ \tau : \Op_{+} = \mathbf{\Lambda}\Op_0 / \Com: (\ )_+ \ , \qquad \text{and} \qquad \tau : \Op_{+1} = \mathbf{\Lambda}\Op_{01} / \Com: (\ )_+ \ . $$ Here, $(\ )_+$ denotes the \emph{unitary extension} associated with any non-unitary and augmented $\Lambda$-operad (see \cite{Fre17a}, p. 81).
Namely, if $P \in \mathbf{\Lambda}\Op_0 / \Com$, its unitary extension $P_+$ is the $\Sigma$-operad defined by $$ P_+ (l) = \begin{cases} \mk, & \mbox{if } l = 0, \\ P(l), & \mbox{otherwise}. \end{cases} $$ And the unitary operad structure is recovered as follows: \begin{itemize} \item[(1)] Composition operations $\circ_i : P_+(m)\otimes P_+(n) \longrightarrow P_+(m+n-1)$ for $m, n >0$ are those of $P$. \item[(2)] For $n>1$, the \emph{restriction operation} $u^* = \delta_i: P(n) \longrightarrow P(n-1)$ gives us the partial composition operation $\_\circ_i 1: P_+(n)\otimes P_+(0) \longrightarrow P_+(n-1), \ i=1,\dots , n$. \item[(3)] The augmentation $\varepsilon : P(1) \longrightarrow \mk$ gives the unique partial composition product $P_+(1)\otimes P_+(0) \longrightarrow P_+(0)$. \end{itemize} Let us end this section with a couple of easy remarks. \begin{lemma}\label{extensionfunctor} The unitary extension functor $(\ )_+$ commutes with cohomology and colimits. That is, $$ H(P_+) = (HP)_+ \qquad \text{and} \qquad \dirlim_n (P_{n})_+ = (\dirlim_n P_n)_+ \ . $$ \end{lemma} \begin{proof} Commutation with cohomology is obvious. Commutation with colimits is a consequence of $(\ )_+$ having a right adjoint, namely the truncation functor $\tau$. \end{proof} As a consequence, $(\ )_+$ is an exact functor. \begin{remark} The initial object of the category of general operads $\Op$ is the operad $I$ $$ I(l) = \begin{cases} \mk , & \mbox{if } l = 1, \\ 0, & \mbox{otherwise} \ , \end{cases} $$ and the obvious operad structure. It's also the initial object of the subcategory of non-unitary connected operads $\Op_{01}$. We shall denote it also by $I_0$. 
Its unitary extension $I_+$ $$ I_+(l) = \begin{cases} \mk , & \mbox{if } l = 0, 1, \\ 0, & \mbox{otherwise} \ , \end{cases} $$ with the only possible non-zero partial composition operation being the identity, is the initial object of the subcategory of unitary operads $\Op_{+}$ and its subcategory of unitary and connected ones, $\Op_{+1}$. \end{remark} \section{Free operads} We recall in this section the definition of the general free operad and of two of its particular instances we are going to use. We start with a review of trees. Trees are useful to represent elements (operations) of operads and their composition products, and to produce an accurate description of the free operad. \subsection{Trees} When we speak of trees, we adhere to the definitions and conventions of \cite{Fre17a}, appendix I. We include a summary here, for the reader's convenience. \begin{definition} An $r$-\emph{tree} $T$ consists of: \begin{itemize} \item[(a)] A finite set of \emph{inputs}, $\underline{r} = \left\{i_1, \dots , i_r \right\}$, and an \emph{output} $0$. \item[(b)] A set of \emph{vertices} $v \in V(T)$. \item[(c)] A set of \emph{edges} $e\in E(T)$, oriented from the \emph{source} $s(e) \in V(T) \sqcup \underline{r}$ towards the \emph{target} $t(e) \in V(T) \sqcup \left\{0 \right\}$. \end{itemize} These items are subject to the following conditions: \begin{itemize} \item[(1)] There is one and only one edge $e_0 \in E(T)$, the \emph{outgoing edge of the tree}, such that $t(e_0) = 0$. \item[(2)] For each $i \in \underline{r}$, there is one and only one edge $e_i \in E(T)$, the \emph{ingoing edge of the tree indexed by $i$}, such that $s(e_i) = i$. \item[(3)] For each vertex $v \in V(T)$, there is one and only one edge $e_v \in E(T)$, \emph{the outgoing edge of the vertex $v$}, such that $s(e_v) = v$.
\item[(4)] Each $v \in V(T)$ is connected to the output $0$ by a chain of edges $e_v, e_{v_{n-1}}, \dots , e_{v_1}, e_{v_0}$ such that $v = s(e_v), t(e_v)= s(e_{v_{n-1}}), t(e_{v_{n-1}}) = s(e_{v_{n-2}}), \dots , t(e_{v_2}) = s(e_{v_1}), t(e_{v_1}) = s(e_{v_0})$ and $t(e_{v_0}) = 0$. \end{itemize} \end{definition} Some fundamental examples of trees: \begin{itemize} \item[(1)] The $r$-\emph{corolla}: the only tree having just one vertex, $r$ inputs and one output. We will denote it by $Y_r \ $. \item[(2)] The \emph{unit tree}: the only tree without vertices; just one input and one output. We will denote it by $|\ $. \item[(3)] \emph{Corks}, also called \emph{units}, are trees without inputs, just one output and just one vertex. We will denote them by \begin{tikzpicture}[xscale=0.3,yscale=0.3]\draw[thick] (1,1.5)--(1,0); \draw[fill] (1,1.5) circle [radius = 0.25];\end{tikzpicture}\ . \end{itemize} \begin{tikzpicture}[xscale=1,yscale=1] \draw[thick] (0,3)--(1.3359,1.6414); \draw[thick] (3,3)--(1.6414,1.6414); \draw[thick] (1.5,1.3)--(1.5,0); \draw[thick] (1.5,1.5) circle [radius=0.2]; \node at (1.5,1.5){$\omega$}; \node at (0,3.2) {$i_1$}; \node at (3,3.2) {$i_r$}; \node at (1.5,2.5) {$\dots$}; \node at (1.5,-0.2) {$0$}; \draw[thick] (1,3)--(1.4368,1.6897); \draw[thick] (2,3)--(1.5632, 1.6897); \node[left] at (1,2) {$e_{i_1}$}; \node[right] at (2,2) {$e_{i_r}$}; \node[align=center, below] at (1.5,-0.5) {An $r$-corolla}; \draw[thick] (5,1.5)--(5,0); \node[above] at (5,1.5) {$1$}; \node[below] at (5,0) {$0$}; \node[align=center, below] at (5,-0.5) {The unit tree}; \draw[thick] (8,1.5)--(8,0); \draw[fill] (8,1.5) circle [radius = 0.1]; \node[below] at (8,0) {$0$}; \node[align=center, below] at (8,-0.5) {A cork}; \node at (10,1.5) {Fig. 1}; \end{tikzpicture} An operation of $r$ variables (an element of arity $r$) $w\in P(r)$ can be depicted as a tree of $r$ inputs. The unit tree represents the identity $\id \in P(1)$ and a cork can be thought of as a unit $1 \in P(0)$.
\begin{tikzpicture}[xscale=1,yscale=1] \draw[thick] (0,9)--(1,8); \node[below left] at (0.5,8.5) {$e_{i_1}$}; \node[above] at (0,9) {$i_1$}; \draw[thick] (2,9)--(1,8); \node[above] at (2,9) {$i_2$}; \draw[fill] (1,8) circle [radius=0.1]; \node[below left] at (1,8) {$v_1$}; \draw[thick] (1,8)--(2,7); \draw[thick] (4,9)--(2,7); \draw[fill] (4,9) circle [radius=0.1]; \node[below right] at (4,9) {$v_2$}; \draw[fill] (2,7) circle [radius=0.1]; \node[below left] at (2,7) {$v_3$}; \draw[thick] (2,7)--(3,6); \draw[fill] (3,6) circle [radius=0.1]; \node[below left] at (3,6) {$v_4$}; \draw[thick] (3,6)--(4,5); \draw[fill] (4,5) circle [radius=0.1]; \node[below right] at (4,5) {$v_0$}; \node[right] at (4,4) {$e_0$}; \draw[thick] (9,9)--(7,7); \node[above] at (9,9) {$i_5$}; \draw[fill] (7,7) circle [radius=0.1]; \node[below right] at (7,7) {$v_5$}; \draw[thick] (5,9)--(7,7); \node[above] at (5,9) {$i_3$}; \draw[thick] (7,9)--(7,7); \node[above] at (7,9) {$i_4$}; \draw[thick] (7,7)--(4,5); \node[below right] at (5.5,6) {$e_{v_5}$}; \draw[thick] (4,5)--(4,3.5); \node[below] at (4,3.5) {$0$}; \node at (2,8.5) {$e_{i_2}$}; \node at (3.5,8) {$e_{v_2}$}; \node at (5.5,8) {$e_{i_3}$}; \node at (6.5,8.5) {$e_{i_4}$}; \node at (8.5,8) {$e_{i_5}$}; \node[below left] at (1.5,7.5) {$e_{v_1}$}; \node[below left] at (2.5,6.5) {$e_{v_3}$}; \node[below left] at (3.5,5.5) {$e_{v_4}$}; \node[align=center, below] at (5,3) {A tree $T$ with five inputs $\underline{r} = \left\{i_1, \dots , i_5 \right\}$, \\ six vertices $V(T) = \left\{v_1, \dots , v_5, v_0 \right\}$, \\ and eleven edges $E(T) = \left\{e_{i_1}, \dots , e_{v_5}, e_0 \right\}$. }; \node at (10,5) {Fig. 2}; \end{tikzpicture} Composition operations can be represented as \emph{grafting} of trees. 
\begin{tikzpicture}[xscale=1,yscale=1] \draw[thick](0,3)--(1.3359,1.6414); \draw[thick](3,3)--(1.6414,1.6414); \draw[thick] (1.5,1.3)--(1.5,0); \draw[thick] (1.5,1.5) circle [radius=0.2]; \node at (1.5,1.5) {$\omega$}; \node at (0,3.2) {$i_1$}; \node at (3,3.2) {$i_2$}; \node at (1.5,-0.2) {$0$}; \node[left] at (1,2) {$e_{i_1}$}; \node[right] at (2,2) {$e_{i_2}$}; \node at (3.5,1.5) {$\circ_1$}; \draw[thick] (5,1.5)--(5,0); \draw[fill] (5,1.5) circle [radius = 0.1]; \node[below] at (5,0) {$0$}; \node at (6,1.5) {$=$}; \draw[thick] (7,3)--(8.3359,1.6414); \draw[fill] (7,3) circle [radius = 0.1]; \draw[thick] (10,3)--(8.6414,1.6414); \draw[thick] (8.5,1.3)--(8.5,0); \draw[thick] (8.5,1.5) circle [radius=0.2]; \node at (8.5,1.5) {$\omega$}; \node at (10,3.2) {$i_2$}; \node at (8.5,-0.2) {$0$}; \node[left] at (8,2) {$e_{i_1}$}; \node[right] at (9,2) {$e_{i_2}$}; \node[align=center, below] at (5,-0.5) {The action of a unit (cork) on an arity two operation, $\omega \mapsto \delta^1(\omega) = \omega\circ_1 1$}; \node at (12,1.5) {Fig. 3}; \end{tikzpicture} However, we are going to consider only trees that fulfill the following additional property: \begin{itemize} \item[(5)] For each vertex $v\in V(T)$, we have at least one edge $e\in E(T)$ such that $t(e) = v$. \end{itemize} In other words, except for the unit $1 \in \mk = P(0)$, our trees won't have real \emph{corks}: whenever a composition operation involves grafting a cork onto a tree, as in Fig. 3 above, we apply the reduction process described in \cite{Fre17a}, A.1.11, which removes the input onto which the cork is grafted and re-indexes the remaining ones. To a vertex $v$ we also associate a set of \emph{ingoing edges}: those edges whose target is $v$. Let us denote its cardinality by $$ r_v = \sharp \left\{ e\in E(T) \ | \ t(e) = v\right\} \ . $$ The extra condition $(5)$ is equivalent to the requirement that $r_v \geq 1$, for every $v\in V(T)$.
In fact, in the constructions of our two particular instances of the free operad, we are going to find only vertices satisfying $r_v \geq 2$. A tree for which every vertex satisfies this extra condition is called \emph{reduced}. \begin{example} So for the tree in Fig. 2, we have: $$ r_{v_2}= 0\ , \quad r_{v_4} = 1\ , \quad r_{v_1} = r_{v_3}= r_{v_0} = 2 \ , \quad r_{v_5} = 3 \ . $$ Hence, this is \emph{not} a reduced tree, because of vertices $v_2$ and $v_4$. \end{example} Let us denote by $\tree (r)$ the category whose objects are $r$-trees and whose morphisms are just isomorphisms. $\widetilde{\tree (r)}$ will denote the full subcategory of \emph{reduced} trees. For $r\geq 2$, $Y_r \in\widetilde{\tree (r)} $. \subsection{The general free operad} The forgetful functor $U : \Op \longrightarrow \sMod$ has a left adjoint, the \emph{free operad functor}, $\Gamma : \sMod \longrightarrow \Op$. Arity-wise it can be computed as $$ \Gamma (M)(l) = \dirlim_{T \in \tree(l)} M(T) \ . $$ Here, $M(T)$ denotes the \emph{treewise tensor product} of the $\Sigma$-module $M$ over a tree $T$. It is the tensor product $$ M(T) = \bigotimes_{v\in V(T)} M(r_v) \ . $$ Of course, this free operad has the well-known universal property of free objects; that is, every morphism of operads $\phi : \Gamma (M) \longrightarrow P$ is uniquely determined by its restriction $\phi_{| M}$. (See \cite{Fre17a}, prop. 1.2.2, for instance.) \subsection{Two particular instances of the free operad} We will need two particular, smaller instances of the free operad. First, because of \cite{Fre17a}, the restriction of the general free operad $\Gamma$ to $\Sigma$-modules satisfying $M(0) = M(1) = 0$ is a non-unitary and connected operad $\Gamma (M) \in \Op_{01}$. So the general free operad functor restricts to a smaller one, which we denote by $\Gamma_{01}$. It is the left adjoint of the obvious forgetful functor: $$ \xymatrix{ {\Op_{01}} \ar@<.5ex>[r]^-{U} & \sModu \ar@<.5ex>[l]^-{\Gamma_{01}} } \ .
$$ This is the free operad used by Markl in constructing his minimal models à la Sullivan of non-unitary and cohomologically connected operads (see \cite{Mar96} and \cite{MSS02}). Second, if $M \in \lModu/\overline{\Com}$, then the general free operad $\Gamma (M)$ inherits the additional structure of an augmented, connected and unitary $\Lambda$-operad (\cite{Fre17a}, prop. A.3.12). Hence, because of the isomorphism of categories between $\Lambda$-operads and unitary $\Sigma$-operads, it has a unitary extension. Let's denote it by $\Gamma_ {+1}(M) = \Gamma (M)_+$ . It is the left adjoint of the forgetful functor $\overline{U}$ which sends each operad $P$ to its augmentation ideal $\overline{P}$: $$ \xymatrix{ {\Op_{+1} = \mathbf{\Lambda}\Op_{01} / \Com} \ar@<.5ex>[r]^-{\overline{U}} & \lModu/\overline{\Com} \ar@<.5ex>[l]^-{\Gamma_{+1}} } \ . $$ Here is a little road map for these categories and functors: $$ \xymatrix{ {\sMod}\ar@<.5ex>[rr]^{\Gamma} & & {\Op} \ar@<.5ex>[ll]^{U} & \\ {\sModc} \ar@<.5ex>[r]^-{\Gamma_0} \ar[u]^{\iota} & {\Op_0} \ar[ur]^{\iota} \ar@<.5ex>[l]^-{U} & & {\Op_+} \ar[ul]^{\iota} \ar[ll]^{\tau} \ar@{=}[r] & {\mathbf{\Lambda}\Op_0/\Com} \ar@<.5ex>[r]^{U} & {\lModc/\Com} \ar@<.5ex>[l]^{\Gamma_{+}} \\ {\sModu} \ar[u]^{\iota} \ar@<.5ex>[r]^-{\Gamma_{01}} & {\Op_{01}} \ar[u]^{\iota} \ar@<.5ex>[l]^-{U} & & {\Op_{+1}} \ar[u]^{\iota} \ar[ll]^{\tau} \ar@{=}[r] & {\mathbf{\Lambda}\Op_{01}/\Com} \ar@<.5ex>[r]^{\overline{U}} & {\lModu/\overline{\Com} \ .} \ar@<.5ex>[l]^{\Gamma_{+1}} \ar[u]^{\iota} } $$ Here, $\iota$ denotes the natural inclusions. We are mainly interested in the bottom row. 
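\begin{example}
As a small illustration of the treewise tensor products (this computation is ours, not taken from \cite{Fre17a}), let $M$ be a $\Sigma$-module concentrated in arity $2$, so that $M(0) = M(1) = 0$. The reduced $3$-trees are the corolla $Y_3$ and the three binary trees obtained by grouping first one of the pairs $\left\{1,2\right\}$, $\left\{1,3\right\}$ or $\left\{2,3\right\}$, each contributing $M(2) \otimes M(2)$. Anticipating that the colimit reduces to a direct sum over reduced trees (\cite{Fre17a}, proposition A.3.14), we get
$$
\Gamma_{01} (M)(3) = M(3) \oplus \bigoplus_{T \ \text{binary}} M(2)\otimes M(2) = \left( M(2)\otimes M(2) \right)^{\oplus 3} \ .
$$
For instance, for $M(2) = \mk [\Sigma_2]$ this gives $\dim \Gamma_{01} (M)(3) = 3\cdot 2\cdot 2 = 12$, the familiar arity $3$ dimension of the free operad on one binary operation without symmetry.
\end{example}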
The key point that underlies the possibility of constructing minimal models for operads in both cases we are studying, the cohomologically non-unitary and the unitary one, is the following: since $M(0) = M(1) = 0$, there are no trees of arity zero or one in the colimit defining the free operad and, since in this case the only morphisms in the subcategory of \emph{reduced} trees $\widetilde{\tree(l)}$ are trivial isomorphisms, this colimit reduces to a direct sum (\cite{Fre17a}, proposition A.3.14). Hence, for $\Gamma = \Gamma_{01}, \Gamma_{+1}$, $$ \Gamma (M)(l) = \bigoplus_{T \in \widetilde{\tree(l)}} M(T) \ . $$ All this leads to the following \begin{lemma}\label{lamadredelcordero} For every module $M$ with $M(0) = M(1) = 0$, and every homogeneous module $E$ of arity $p > 1$, $\Gamma_{01}$ and $\Gamma_{+1}$ satisfy: \begin{itemize} \item[(a)] $$ \Gamma_{01}(M)(l) = \begin{cases} 0 , & \mbox{if } l= 0, \\ \mk , & \mbox{if } l = 1, \\ \Gamma (M) (l) , & \mbox{if } l \neq 0,1 \end{cases} \qquad \text{and} \qquad \Gamma_{+1}(M) (l) = \begin{cases} \mk, & \mbox{if } l= 0, \\ \mk , & \mbox{if } l = 1, \\ \Gamma (M) (l) , &\mbox{if } l \neq 0,1 \ . \end{cases} $$ \item[(b)] For $\Gamma = \Gamma_{01}, \Gamma_{+1}$, $$ \Gamma (M \oplus E) (l) = \begin{cases} \Gamma (M) (l), & \mbox{if } l < p, \\ \Gamma (M) (p) \oplus E , & \mbox{if } l = p. \end{cases} $$ \end{itemize} \end{lemma} \begin{proof} Let's compute: \begin{eqnarray*} (M\oplus E)(T) &=& \bigotimes_{v\in V(T)}(M\oplus E)(r_v) = \bigotimes_{v\in V(T)} (M(r_v)\oplus E(r_v) ) \\ &=& \left( \bigotimes_{ \substack{ v\in V(T) \\ r_v \neq p} } M(r_v) \right) \otimes \left( \bigotimes_{ \substack{ v\in V(T) \\ r_v = p } } \left( M(r_v) \oplus E \right) \right) \ .
\end{eqnarray*} Hence, \begin{eqnarray*} \Gamma (M\oplus E)(m) &=& \bigoplus_{T \in \widetilde{\tree (m)}} (M\oplus E)(T) \\ &=& \bigoplus_{T \in \widetilde{\tree (m)}} \left[ \left( \bigotimes_{ \substack{ v\in V(T) \\ r_v \neq p} } M(r_v) \right) \otimes \left( \bigotimes_{ \substack{ v\in V(T) \\ r_v = p } } \left( M(r_v) \oplus E \right) \right) \right] \ . \end{eqnarray*} If $m<p$, $ \bigotimes_{ \substack{ v\in V(T) \\ r_v = p } } \left( M(r_v) \oplus E \right) = \mk $, since there is no tree with $m < p$ inputs having a vertex with $p$ ingoing edges (there are no corks, since $M(0) = 0$). Hence, in this case, we simply have $$ \Gamma (M\oplus E)(m) = \bigoplus_{T \in \widetilde{\tree (m)}} \left( \bigotimes_{ \substack{ v\in V(T) \\ r_v \neq p} } M(r_v) \right) = \Gamma (M) (m) \ . $$ For $ m = p$, we split our sum over all trees into two terms: one for the corolla $Y_p$ and another one for the rest: \begin{multline*} \Gamma (M\oplus E)(p) = \left[ \left( \bigotimes_{ \substack{ v\in V(Y_p) \\ r_v \neq p} } M(r_v) \right) \otimes \left( \bigotimes_{ \substack{ v\in V(Y_p) \\ r_v = p } } \left( M(r_v) \oplus E \right) \right) \right] \oplus \\ \bigoplus_{ \substack{ T \in \widetilde{\tree (p)} \\ T \neq Y_p } } \left[ \left( \bigotimes_{ \substack{ v\in V(T) \\ r_v \neq p} } M(r_v) \right) \otimes \left( \bigotimes_{ \substack{ v\in V(T) \\ r_v = p } } \left( M(r_v) \oplus E \right) \right) \right] \end{multline*} And we have: \begin{itemize} \item[(1)] The set of vertices $ v\in V(Y_p) \ , r_v \neq p $ is empty. So for the first tensor product we get: $ \bigotimes_{ \substack{ v\in V(Y_p) \\ r_v \neq p} } M(r_v) = \mk $. \item[(2)] There is just one vertex $v\in V(Y_p)$, with $ r_v = p$. Hence, $\bigotimes_{ \substack{ v\in V(Y_p) \\ r_v = p } } (M(r_v) \oplus E) = M(p)\oplus E $. \item[(3)] We leave $\bigotimes_{ \substack{ v\in V(T) \\ r_v \neq p} } M(r_v) $ as it is.
\item[(4)] As for $ \bigotimes_{ \substack{ v\in V(T) \\ r_v = p } } \left( M(r_v) \oplus E \right) $: since every vertex of a reduced tree satisfies $r_v \geq 2$, a reduced tree with $p$ inputs having a vertex with $r_v = p$ is necessarily the corolla $Y_p$. So this set of vertices is empty for every tree $T \neq Y_p$, and the empty tensor product is just $\mk$. \end{itemize} All in all, $$ \Gamma (M\oplus E)(p) = \left( M(p) \oplus E \right)\ \oplus \bigoplus_{ \substack{ T \in \widetilde{\tree (p)} \\ T \neq Y_p } } \left[ \bigotimes_{ \substack{ v\in V(T) \\ r_v \neq p} } M(r_v) \right] = \Gamma (M)(p) \oplus E \ . $$ \end{proof} \begin{remark} So, for $M(0) = M(1) = 0$, and forgetting the $\Lambda$-structure if necessary, it's clear that both $\Gamma_{01}(M)$ and $\Gamma_{+1}(M)$ agree with the general free operad $\Gamma (M)$, outside arities $0$ and $1$. By definition, also $\Gamma_{+1}(M) = \Gamma_{01} (M)_+$ when $M$ has a $\Lambda$-module structure. \end{remark} \section{Basic operad homotopy theory} We develop here the basic, standard homotopy theory for operads. This basic homotopy theory can be formalized under the name of \emph{Cartan-Eilenberg}, or \emph{Sullivan} categories (see \cite{GNPR10}) and emphasizes just three elements: weak equivalences, or \emph{quis}, homotopy and cofibrant (minimal) objects. Cofibrant, \emph{Sullivan}, operads are built by the known procedure of \lq\lq attaching cells\rq\rq. We have to distinguish between the non-unitary case and the unitary one from the very definition stage of this attachment procedure because in the second case we also need to deal with the restriction operations; that is, the action of the units. Our strategy will be the following. First, we treat the non-unitary case, whose constructions and results may be well known, but will nevertheless be made explicit here, for the reader's convenience and because we will need to manipulate them once we add the data of the restriction operations.
\subsection{Lifting properties: the non-unitary case} For the sake of lightening the notation, in this section $\Gamma$ refers to its restriction to non-unitary, connected operads, $\Gamma_{01}$. \begin{definition}\label{OpdefKS} (See \cite{MSS02}, cf \cite{GNPR05}) Let $n\geq 2$ be an integer. Let $P \in \Op_{01}$ be free as a graded operad, $P= \Gamma (M)$, where $M $ is a graded $\Sigma$-module, with $M(0)=M(1)=0$. An \textit{arity $n$ principal extension} of $P$ is the free graded operad $$ P \sqcup_d \Gamma (E) :=\Gamma (M \oplus E) \ , $$ where $E$ is an arity-homogeneous $\Sigma_n$-module with zero differential and $d:E\longrightarrow ZP(n)^{+1}$ a map of $\Sigma_n$-modules of degree $+1$. The differential $\partial$ on $P\sqcup_d \Gamma (E)$ is built upon the differential of $P$, the map $d$, and the Leibniz rule. \end{definition} \begin{remark} In the context of commutative dg algebras, the analogous construction is called a \emph{Hirsch extension} \cite{GM13}, or a \emph{KS-extension} \cite{Hal83}. \end{remark} \begin{lemma}\label{diferencialHirsch} $P \sqcup_d \Gamma (E) $ is a dg operad and the natural inclusion $\iota : P \longrightarrow P \sqcup_d \Gamma (E) $ is a morphism of dg operads. \end{lemma} \begin{proof} This is clear. \end{proof} \begin{lemma}[Universal property of principal extensions]\label{propuniversalHirsch} Let $P \sqcup_d \Gamma (E)$ be a principal extension of a free graded operad $P = \Gamma (M)$, and let $\varphi : P \to Q$ be a morphism of operads. A morphism $\psi : P\sqcup_d \Gamma (E)\lra Q$ extending $\varphi$ is uniquely determined by a morphism of $\Sigma_n$-modules $f : E\to Q(n)$ satisfying $\partial f = \varphi d$. \end{lemma} \begin{proof} This is clear.
\end{proof} \begin{lemma}\label{liftingthroughextensions} Let $\iota: P \longrightarrow P \sqcup_d \Gamma (E)$ be an arity $n$ principal extension and $$ \xymatrix{ {P} \ar[r]^{\varphi} \ar[d]_{\iota} & Q \ar@{>>}[d]^{\rho}_{\wr} \\ {P\sqcup_d \Gamma(E)} \ar[r]^{\psi}\ar@{-->}[ur]^{\psi'} & R } $$ a solid commutative diagram of operad morphisms, where $\rho$ is a surjective quasi-isomorphism. Then, there is an operad morphism $\psi'$ making both triangles commute. \end{lemma} \begin{proof} Consider the solid diagram of $\mk [\Sigma_n]$-modules $$ \xymatrix{ & {ZC\id_{Q(n)}} \ar[d]^{\id \oplus \rho (n)} \\ {E} \ar[r]^-{\lambda} \ar@{-->}[ur]^{\mu} & {ZC\rho (n) \ .} } $$ The given commutative square implies that the linear map $\lambda = (\varphi d \ \ \psi_{|E} )^t$ has its image included in the relative cocycles of the morphism $\rho$: $$ \begin{pmatrix} -\partial_{Q(n)} & 0 \\ -\rho(n) & \partial_{Q(n)} \end{pmatrix} \begin{pmatrix} \varphi d \\ \psi_{|E} \end{pmatrix} = \begin{pmatrix} -\partial \varphi d \\ -\rho \varphi d + \partial \psi_{|E} \end{pmatrix} = \begin{pmatrix} - \varphi \partial d \\ - \psi \partial_{|E} + \partial \psi_{|E} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \ , $$ since $d$ takes values in cocycles, $\rho\varphi = \psi\iota$ and $\psi$ commutes with the differentials. Also, $$ \id_{Q(n)^{+1}}\oplus \rho(n) = \begin{pmatrix} 1 & 0 \\ 0 & \rho \end{pmatrix} : Q(n)^{+1} \oplus Q(n) \longrightarrow Q(n)^{+1} \oplus R(n) $$ restricts to a linear map between the relative cocycles of $\id_{Q(n)^{+1}}$ and those of $\rho (n)$ because it commutes with the differentials of the respective cones: $$ \begin{pmatrix} -\partial & 0 \\ -\rho & \partial \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & \rho \end{pmatrix} = \begin{pmatrix} -\partial & 0 \\ -\rho & \partial \rho \end{pmatrix} = \begin{pmatrix} -\partial & 0 \\ -\rho & \rho \partial \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & \rho \end{pmatrix} \begin{pmatrix} -\partial & 0 \\ -1 & \partial \end{pmatrix} \ . $$ Here and in the rest of this proof we will frequently drop the arity index $n$.
To find our sought extension $\psi'$ we just need to find a $\mk[\Sigma_n]$-linear map $\mu$ making the triangle commute: if $\mu = (\alpha \ f )$, then we would take as $\psi'$ the morphism induced by $\varphi$ and $f : E \longrightarrow Q(n)$. And this is because: \begin{itemize} \item[(a)] According to the universal property of principal extensions of Lemma $\ref{propuniversalHirsch}$, in order to see that this defines a morphism of operads, all we have to check is that we get $\partial f = \varphi d$. And we would have it because, if the image of $\mu$ is included in the relative cocycles of $\id_{Q(n)}$ and $\mu$ makes the triangle commutative, we would have $$ \begin{pmatrix} 0 \\ 0 \end{pmatrix} = \begin{pmatrix} -\partial & 0 \\ -1 & \partial \end{pmatrix} \begin{pmatrix} \alpha \\ f \end{pmatrix} = \begin{pmatrix} -\partial \alpha \\ -\alpha + \partial f \end{pmatrix} \qquad \Longleftrightarrow \qquad \begin{cases} \partial \alpha &= 0 \\ \alpha &= \partial f \end{cases} \ , $$ \noindent and $$ \begin{pmatrix} 1 & 0 \\ 0 & \rho \end{pmatrix} \begin{pmatrix} \alpha \\ f \end{pmatrix} = \begin{pmatrix} \varphi d \\ \psi_{|E} \end{pmatrix} \qquad \Longleftrightarrow \qquad \begin{cases} \alpha &= \varphi d \\ \rho f &= \psi_{|E} \end{cases} \ ; $$ \noindent hence, $\partial f = \alpha = \varphi d$. \item[(b)] Also, such a $\psi'$ would make the top triangle commute: $\psi'\iota = \psi'_{|P} = \varphi$, by definition of $\psi'$. \item[(c)] Also the lower triangle would commute: according to the universal property of KS-extensions, $\rho \psi' = \psi$ boils down to $\rho \psi'_{| P} = \psi_{| P}$ and $\rho \psi'_{| E} = \psi_{| E}$. The first equality is true because $\rho \psi'_{| P} = \rho\varphi = \psi_{|P}$. The second one because $\rho \psi'_{| E} = \rho f = \psi_{| E} $. \end{itemize} So we only need to prove that the $\mk [\Sigma_n]$-linear map $\mu$ exists.
For this, it is enough to see that $\id \oplus \rho$ is an epimorphism between the spaces of cocycles: let $(q,r) \in ZC\rho \subset Q(n)^{+1} \oplus R(n)$. We need to produce $(x,y) \in ZC\id_Q \subset Q(n)^{+1}\oplus Q(n)$ such that $$ x = q \ , \ \rho y = r \qquad \text{and} \qquad \partial x = 0 \ , \ x = \partial y \ . $$ So, there is no choice but to make $x = q$. As for $y$, since $\rho$ is a \emph{quis}, $HC\rho = 0$. So, we have $(q',r') \in Q(n)^{+1}\oplus R(n)$ such that $q = -\partial q'$ and $r = -\rho q' + \partial r'$. We compute: $\rho (-q') = r - \partial r'$. But $\rho$ is an epimorphism, so we can find some $q'' \in Q(n)$ such that $r' = \rho q''$. So, finally, $r = \rho (-q' + \partial q'')$ and we take $y = -q' + \partial q''$; indeed, $\partial y = -\partial q' = q$. \end{proof} \subsection{Lifting properties: the unitary case} For the sake of lightening the notation, in this section $\Gamma$ refers to its restriction to unitary, connected operads, $\Gamma_{+1}$. \begin{definition}\label{OpdefKS+} (See \cite{MSS02}, cf \cite{GNPR05}) Let $n\geq 2$ be an integer. Let $P \in \Op_{+1}$ be free as a graded operad, $P= \Gamma (M)$, where $M $ is a graded $\Sigma$-module, with $M(0)=M(1)= 0$. A \textit{unitary arity $n$ principal extension} of $P$ is the free graded operad $$ P \sqcup_d^\delta \Gamma (E) :=\Gamma (M \oplus E) \ , $$ where $E$ is an arity-homogeneous $\Sigma_n$-module with zero differential and: \begin{enumerate} \item[(a)] $d:E\longrightarrow ZP(n)^{+1}$ is a map of $\Sigma_n$-modules of degree $+1$. The differential $\partial$ on $P\sqcup_d^\delta \Gamma (E)$ is built upon the differential of $P$, the map $d$, and the Leibniz rule.
\item[(b)] $\delta_i : E \longrightarrow P(n-1), i = 1, \dots , n$ are morphisms of $\mk [\Sigma_n]$-modules, compatible with $d$ and the differential of $P$, in the sense that, for all $i=1, \dots , n$, we have commutative diagrams $$ \xymatrix{ {E} \ar[r]^{d} \ar[d]^{\delta_i} & {ZP(n)^{+1}} \ar[d]^{\delta_i} \\ {P(n-1)} \ar[r]^{\partial} & {P(n-1)^{+1}} \ . } $$ \noindent They also have to be compatible with the $\Lambda$-structure of $P$, from arity $n-1$ downwards. \end{enumerate} \end{definition} \begin{lemma}\label{diferencialHirsch+} $P \sqcup_d^\delta \Gamma (E) $ is a unitary dg operad and the natural inclusion $\iota : P \longrightarrow P \sqcup_d^\delta \Gamma (E) $ is a morphism of unitary dg operads. \end{lemma} \begin{proof} This is clear. \end{proof} \begin{lemma}[Universal property of unitary principal extensions]\label{propuniversalHirsch+} Let $P \sqcup_d^\delta \Gamma (E)$ be a unitary principal extension of a free graded unitary operad $P = \Gamma (M)$, and let $\varphi : P \to Q$ be a morphism of unitary operads. A morphism $\psi : P\sqcup_d^\delta \Gamma (E)\lra Q$ extending $\varphi$ is uniquely determined by a morphism of $\Sigma_n$-modules $f : E\to Q(n)$ satisfying $\partial f = \varphi d$ and making commutative the diagrams $$ \xymatrix{ {E} \ar[r]^{f} \ar[d]^{\delta_i} & {Q(n)} \ar[d]^{\delta_i} \\ {P(n-1)} \ar[r]^{\varphi} & {Q(n-1)} } $$ for all $i=1, \dots , n$. \end{lemma} \begin{proof} This is clear. \end{proof} \begin{lemma}\label{liftingthroughextensions+} Let $\iota: P \longrightarrow P \sqcup_d^\delta \Gamma (E)$ be a unitary arity $n$ principal extension and $$ \xymatrix{ {P} \ar[r]^{\varphi} \ar[d]_{\iota} & Q \ar@{>>}[d]^{\rho}_{\wr} \\ {P\sqcup_d^\delta \Gamma(E)} \ar[r]^{\psi}\ar@{-->}[ur]^{\psi'} & R } $$ a solid commutative diagram of morphisms of operads, where $\rho$ is a surjective quasi-isomorphism between operads with unitary multiplication. Then, there is an operad morphism $\psi'$ making both triangles commute.
\end{lemma} \begin{proof} In the non-unitary case, we built $\psi'$, lifting $\psi$ and extending $\varphi$, out of a map $f: E \longrightarrow Q(n)$ verifying $\rho f = \psi_{| E}$ and $\partial f = \varphi d$. We now need to check the compatibility of $f$ with the restriction operations. That is, we would like to have $$ \delta_i (fe) = \varphi (\delta_i e) \ , \quad \text{for all} \ i = 1, \dots , n \ . $$ This is not necessarily true, so for every $i=1, \dots, n$, we consider the differences $\omega_i = \delta_i fe - \varphi \delta_i e$ and check the following: \begin{itemize} \item[(i)] For every $i$, $\omega_i$ is a cocycle: $\partial \omega_i = \partial \delta_ife - \partial \varphi\delta_i e = \delta_i\partial fe -\delta_i\partial fe = 0$. \item[(ii)] For every $i$, $\omega_i$ belongs to $\ker\rho$: $\rho \omega_i = \rho \delta_i fe - \rho\varphi\delta_i e = \delta_i \rho fe -\psi\iota\delta_i e = \delta_i\psi e - \psi\delta_i e = \psi\delta_i e - \psi\delta_i e = 0$. \item[(iii)] The family $\left\{ \omega_i \right\}_{i=1,\dots n}$ satisfies the Kan-like condition, \ref{Kan-likecondition}: $\delta_i\omega_j = \delta_i\delta_j fe -\delta_i\varphi\delta_j e = \delta_i\delta_jfe -\delta_i\delta_j\varphi e = \delta_{j-1}\delta_i fe - \delta_{j-1}\delta_i \varphi e = \delta_{j-1} \omega_i$, for $i<j$. \end{itemize} Here we have used that $\varphi, \rho, \psi$ are already unitary, that is, $\Lambda$-morphisms. Hence, because of lemma \ref{enhancedKan-likeresult}, we conclude that there is $\omega \in ZQ(n) \cap \ker\rho$ such that $\delta_i \omega = \omega_i $ for all $i=1,\dots , n$. So, we change our original $fe$ into $$ f'e = fe - \omega \ .
$$ We immediately see that we haven't lost what we had obtained in the non-unitary case, namely $\rho f'e = \psi e$ and $\partial f' e = \varphi de$, and we have added the commutativity with the restriction operations: $$ \delta_i f'e = \delta_i fe - \delta_i\omega = \delta_i fe - \omega_i = \delta_ife -(\delta_i fe - \varphi\delta_i e) = \varphi\delta_i e \ . $$ Final problem: this $\omega$ should depend $\mk[\Sigma_n]$-linearly on $e \in E(n)$ in order to have a map $f' : E(n) \longrightarrow Q(n)$: $e \mapsto \{ \omega_1(e), \dots , \omega_n(e)\} \mapsto \omega(e)$. $\mk$-linearity is clear at both steps, just looking at the definitions of the $\omega_i$ and the algorithm producing $\omega$ in the proof of lemma \ref{Kan-likeresult}. Yet, $\Sigma_n$-equivariance is unclear. First, the $\delta_i$'s appearing in the definition of the $\omega_i$'s are \emph{not} $\Sigma_n$-equivariant. Nor are the $s_i$'s used in the construction of $\omega$ from the $\omega_i$'s. Indeed, for $\sigma \in \Sigma_n$ and $\nu$ an arbitrary element, we have the following relations, which are easy to verify $$ \delta_i (\sigma\cdot \nu) = \delta_{\sigma^{-1}(i)} (\nu) \qquad \text{and} \qquad s_i (\sigma \cdot \nu) = s_{\sigma^{-1}(i)} (\nu) \ , $$ and these pass to the $\omega_i$'s: $$ \omega_i(\sigma\cdot e) = \omega_{\sigma^{-1}(i)} (e) \ . $$ Fortunately, we have the standard \emph{averaging} procedure to get a $\Sigma_n$-equivariant morphism from $\omega$: $$ \widetilde{\omega} (e) = \frac{1}{n!} \sum_{\sigma \in \Sigma_n} \sigma \cdot \omega (\sigma^{-1} \cdot e) \ . $$ Clearly, $\widetilde{\omega} (e)$ is still a cocycle and belongs to $\ker \rho$.
And we still have what we were looking for from the very beginning: \begin{eqnarray*} \delta_i\widetilde{\omega} (e) &=& \delta_i\left( \frac{1}{n!} \sum_{\sigma \in \Sigma_n} \sigma \cdot \omega (\sigma^{-1} \cdot e) \right) \\ &=& \frac{1}{n!} \sum_{\sigma \in \Sigma_n} \delta_i \left(\sigma \cdot \omega (\sigma^{-1} \cdot e) \right)\\ &=& \frac{1}{n!} \sum_{\sigma \in \Sigma_n} \delta_{\sigma^{-1}(i)} \left( \omega (\sigma^{-1} \cdot e) \right) \\ &=& \frac{1}{n!} \sum_{\sigma \in \Sigma_n} \omega_{\sigma^{-1}(i)} (\sigma^{-1}\cdot e)\\ &=& \frac{1}{n!} \sum_{\sigma \in \Sigma_n} \omega_i(e) = \omega_i(e) \ . \end{eqnarray*} Therefore, we take this $\widetilde{\omega}$ instead of our first $\omega$, and we are done. \end{proof} \subsection{Lifting properties: conclusions} \begin{definition} A \textit{Sullivan operad} is the colimit of a sequence of principal extensions of arities $l_n \geq 2$, starting from the initial operad: $$ I_\alpha \longrightarrow P_1 = \Gamma (E(l_1)) \longrightarrow \cdots \longrightarrow P_{n} = P_{n-1} \sqcup \Gamma (E(l_n)) \longrightarrow \dots \longrightarrow \dirlim_n P_n = P_\infty \ . $$ \end{definition} This definition stands for both the non-unitary and unitary cases, as long as we read it with the following dictionary, where the first option is for the non-unitary case, and the second for the unitary one. \begin{itemize}\label{diccionari} \item[(1)] $\alpha = 0, +$. \item[(2)] $P = P, P_+$. \item[(3)] $\Gamma = \Gamma_{01}, \Gamma_{+1}$. \item[(4)] $\sqcup = \sqcup_d, \sqcup_d^\delta$. \end{itemize} The next result says that Sullivan operads are cofibrant objects in the Hinich model structure of the category of operads, \cite{Hin97}. \begin{proposition}\label{strict_lifting} Let $S$ be a Sullivan operad.
For every solid diagram of operads (resp., of operads with unitary multiplication) $$ \xymatrix{ &P\ar@{->>}[d]^{\rho}_{\wr}\\ \ar@{.>}^{\varphi'}[ur]S\ar[r]^\varphi&Q& } $$ in which $\rho$ is a surjective quasi-isomorphism, there exists a morphism of operads (resp., of operads with unitary multiplication) $\varphi'$ making the diagram commute. \end{proposition} \begin{proof} By induction, using Lemma \ref{liftingthroughextensions} or Lemma \ref{liftingthroughextensions+}, depending on whether we are dealing with the non-unitary or the unitary case. \end{proof} \subsection{Homotopy} Similarly to the setting of commutative algebras, there is a notion of homotopy between morphisms of operads, defined via a functorial path (see Section 3.10 of \cite{MSS02}, cf. \cite{CR19}), based on the following remark. \begin{remark} Let $P$ be a dg operad and $K$ a commutative dg algebra. Then $P \otimes K = \left\{ P(n)\otimes K\right\}_{n \geq 0}$ has a natural operad structure given by the partial composition products $$ (\omega \otimes a) \circ_i (\eta \otimes b) = (-1)^{|a||\eta|} (\omega\circ_i\eta)\otimes (ab) \ . $$ \end{remark} In particular, let $K = \mk [t,dt] = \Lambda (t,dt)$ be the free commutative dg algebra on two generators $|t| = 0$, $|dt| = 1$ and differential sending $t$ to $dt$. We have the unit $\iota $ and evaluations $\delta^0$ and $\delta^1$ at $t=0$ and $t=1$ respectively, which are morphisms of $\Com$-algebras satisfying $\delta^0 \circ \iota=\delta^1 \circ \iota=\id$. \[ \xymatrix{\mk\ar[r]^-{\iota}&\mk[t,dt] \ar@<.6ex>[r]^-{\delta^1} \ar@<-.6ex>[r]_-{\delta^0}&\mk}\,\,\,;\,\,\, \delta^k\circ \iota=\id \ . \] The following are standard consequences of Proposition $\ref{strict_lifting}$. The proofs are adaptations of the analogous results in the setting of $\Com$-algebras (see Section 11.3 of \cite{GM13}; see also \cite{CR19} in the context of operad algebras).
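\begin{remark}
Unpacking these definitions in each arity (a routine verification, included for convenience): an element of $(P \otimes \mk [t,dt])(n)$ is a finite sum $x = \sum_j p_j \otimes t^j + \sum_j q_j \otimes t^j dt$, with $p_j, q_j \in P(n)$, and
$$
\iota (p) = p \otimes 1 \ , \qquad \delta^0 (x) = p_0 \ , \qquad \delta^1 (x) = \sum_j p_j \ ,
$$
so indeed $\delta^0 \circ \iota = \delta^1 \circ \iota = \id$ also after tensoring with $P$. Moreover, since $H(\mk [t,dt]) = \mk$, the K\"unneth formula gives $H(P(n) \otimes \mk [t,dt]) \cong H(P(n))$ in each arity, so $\iota$, $\delta^0$ and $\delta^1$ are quasi-isomorphisms.
\end{remark}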
\begin{remark} We can skip treating the non-unitary and unitary cases separately in this section, once we notice that: \begin{itemize} \item[(a)] If $P_+$ is a (cohomologically) unitary operad, then so is $P_+[t,dt]$, thanks to $1 \in \mk [t,dt]$. \item[(b)] Whenever we state the existence of a morphism, we either use a universal property valid in both cases, or we use the appropriate version of proposition \ref{strict_lifting} above. \end{itemize} \end{remark} \begin{definition}\label{path} A \textit{functorial path} in the category of operads is the functor \[ -[t,dt]:\Op \longrightarrow \Op \] given on objects by $P[t,dt]=P\otimes \mk[t,dt]$ and on morphisms by $\varphi [t,dt]=\varphi\otimes\mk[t,dt]$, together with the natural transformations \[ \xymatrix{P\ar[r]^-{\iota}&P[t,dt] \ar@<.6ex>[r]^-{\delta^1} \ar@<-.6ex>[r]_-{\delta^0}&P}\,\,\,;\,\,\, \delta^k\circ \iota= \id \] given by $\delta^k=1\otimes \delta^k:P[t,dt]\longrightarrow P\otimes\mk=P$ and $\iota=1\otimes\iota:P=P\otimes\mk\to P[t,dt]$. \end{definition} The map $\iota$ is a quasi-isomorphism of operads, while the maps $\delta^0$ and $\delta^1$ are surjective quasi-isomorphisms of operads. The functorial path gives a natural notion of homotopy between morphisms of operads: \begin{definition} Let $\varphi, \psi :P \longrightarrow Q$ be two morphisms of operads. A \textit{homotopy from $\varphi$ to $\psi$} is a morphism of operads $H:P\longrightarrow Q[t,dt]$ such that $\delta^0\circ H=\varphi$ and $\delta^1\circ H=\psi$. We use the notation $H:\varphi\simeq \psi$. \end{definition} The homotopy relation defined by a functorial path is reflexive and compatible with composition. Furthermore, the symmetry of $\Com$-algebras $\mk[t,dt]\lra \mk[t,dt]$ given by $t\mapsto 1-t$ makes the homotopy relation symmetric. However, the homotopy relation is not transitive in general.
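To illustrate the first two properties: given $\varphi : P \longrightarrow Q$, the composite $H = \iota\circ\varphi : P \longrightarrow Q[t,dt]$ is a homotopy from $\varphi$ to itself, since $$ \delta^0 H = (\delta^0\circ\iota)\varphi = \varphi = (\delta^1\circ\iota)\varphi = \delta^1 H \ . $$ And, given $H : \varphi\simeq\psi$, composing $H$ with the morphism $Q[t,dt]\longrightarrow Q[t,dt]$ induced by $t\mapsto 1-t$, $dt\mapsto -dt$, interchanges $\delta^0$ and $\delta^1$, and so produces a homotopy $\psi\simeq\varphi$.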
As in the rational homotopy setting of $\Com$-algebras, we have: \begin{proposition}\label{transitive} The homotopy relation between morphisms of operads is an equivalence relation for those morphisms whose source is a Sullivan operad. \end{proposition} \begin{proof} It only remains to prove transitivity. Let $S$ be a Sullivan operad and consider morphisms $\varphi, \varphi', \varphi'': S\longrightarrow P$ together with homotopies $H:\varphi \simeq \varphi'$ and $H':\varphi'\simeq \varphi''$. Consider the pull-back diagram in the category of operads \[ \xymatrix{ P[t,dt,s,ds] \ar@/_/[ddr]_{\delta^0_t} \ar@/^/[drr]^{\delta^1_s} \ar@{.>}[dr]|-{\pi}\\ & \pb Q \ar[d]\ar[r] & P[t,dt] \ar[d]^{\delta^0_t}\\ & P[s,ds] \ar[r]_{\delta^1_s} & P } \] To see that the map $\pi$ is surjective, note that if $p(s,ds)$ and $q(t,dt)$ are polynomials such that $p(1,0)=q(0,0)$, representing an element in $Q$, then \[ \pi(p(s,ds)+q(st,dt)-q(0,0))=(p(s,ds), q(t,dt)) \ . \] Indeed, $\delta^0_t$ sends $t$ and $dt$ to zero, so it maps this element to $p(s,ds)+q(0,0)-q(0,0)=p(s,ds)$, while $\delta^1_s$ sends $s$ to $1$ and $ds$ to $0$, so it maps it to $p(1,0)+q(t,dt)-q(0,0)=q(t,dt)$. It is straightforward to see that all the operads in the above diagram are quasi-isomorphic and that $\pi$ is a quasi-isomorphism. Consider the solid diagram \[ \xymatrix{ &&P[t,dt,s,ds]\ar@{->>}[d]^{\pi}_{\wr}\\ S\ar@{.>}[rru]^{\psi}\ar[rr]_{(H,H')}&&Q&. } \] By Proposition $\ref{strict_lifting}$, there exists a dotted arrow $\psi$ such that $\pi \psi=(H,H')$. Let $H\widetilde{+}H':=\nabla \psi$, where $\nabla:P[t,dt,s,ds] \longrightarrow P[t,dt]$ is the map given by $t,s\mapsto t$. This gives the desired homotopy $H\widetilde{+}H': \varphi \simeq \varphi''$. \end{proof} Denote by $[S,P]$ the set of homotopy classes of morphisms of operads $\varphi : S\longrightarrow P$. \begin{proposition}\label{bijecciohomotopies}Let $S$ be a Sullivan operad. Any quasi-isomorphism $\varpi: P\longrightarrow Q$ of operads induces a bijection $\varpi_*:[S,P] \longrightarrow [S,Q]$. \end{proposition} \begin{proof} We first prove surjectivity: let $[\varphi] \in [S,Q]$.
Consider the mapping path of $\varpi$, given by the pull-back \[ \xymatrix{ \pb\ar[d]_{\pi_1}R(\varpi)\ar[r]^{\pi_2}&Q[t,dt]\ar[d]^{\delta^0}\\ P\ar[r]^{\varpi}&Q&. } \] Define maps $\psi:=\delta^1\pi_2:R(\varpi) \longrightarrow Q$ and $\chi:=(1,\iota \varpi):P\longrightarrow R(\varpi)$. We obtain a solid diagram \[ \xymatrix{ &P\ar@<.6ex>[d]^{\chi}\ar@/^2pc/[dd]^{\varpi}\\ &R(\varpi)\ar@<.6ex>[u]^{\pi_1}\ar@{->>}[d]^{\psi}_{\wr}\\ S\ar@{.>}[ur]^{\varphi'}\ar[r]^{\varphi}&Q&, } \] where $\psi$ is a surjective quasi-isomorphism and $\psi \chi=\varpi$. By Proposition $\ref{strict_lifting}$, there exists $\varphi'$ such that $\psi \varphi'=\varphi$. Let $\phi:=\pi_1 \varphi'$. Then $\varpi\phi=\psi\chi \pi_1 \varphi'\simeq \psi \varphi'=\varphi$. Therefore $[\varpi\phi]=[\varphi]$ and $\varpi_*$ is surjective. To prove injectivity, let $\varphi_0,\varphi_1:S\longrightarrow Q$ be such that $H : \varpi\varphi_0\simeq \varpi\varphi_1$. Consider the pull-back diagram \[ \xymatrix{ P[t,dt] \ar@/_/[ddr]_{(\delta^0,\delta^1)} \ar@/^/[drr]^{\varpi[t,dt]} \ar@{.>}[dr]|-{\overline{\varpi}}\\ & \pb R(\varpi,\varpi)\ar[d]\ar[r] & Q[t,dt] \ar[d]^{(\delta^0,\delta^1)}\\ & P\times P \ar[r]_{\varpi\times \varpi} & Q\times Q \ . } \] One may verify that $\overline{\varpi}$ is a surjective quasi-isomorphism. Let $\overline{H}=(\varphi_0,\varphi_1,H)$ and consider the solid diagram \[ \xymatrix{ &&P[t,dt]\ar[d]^{\overline{\varpi}}_{\wr}\\ S\ar@{.>}[urr]^{G}\ar[rr]^-{\overline{H}}&&R(\varpi,\varpi)&. } \] Since $\overline{\varpi}_*$ is surjective, there exists a dotted arrow $G$ such that $\overline{\varpi}G\simeq \overline{H}$. It follows that $\varphi_0\simeq \delta^0G\simeq \delta^1 G\simeq \varphi_1$. Therefore, $\varphi_0\simeq \varphi_1$ by Proposition $\ref{transitive}$. \end{proof} \section{Minimal models} Sullivan \emph{minimal} operads are Sullivan operads for which the process of adding new generators $E$ is done with strictly increasing arities.
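A guiding example of a Sullivan minimal operad, which will reappear below when we discuss $A_\infty$-algebras, is Markl's minimal model of the associative operad (cf. \cite{Mar96}): as a graded operad, $$ \Ass_\infty = \Gamma (E) \ , \qquad E(n) = \mk[\Sigma_n]\mu_n \ , \quad |\mu_n| = 2-n \ , \quad n \geq 2 \ , $$ with one free $\Sigma_n$-generator $\mu_n$ in each arity $n \geq 2$, added in strictly increasing arities, and with decomposable differential: $\partial\mu_n$ is a sum of compositions $\mu_m \circ_i \mu_q$ with $m, q < n$.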
In this section we prove the existence and uniqueness of Sullivan minimal models for operads in our two aforementioned cases, (cohomologically) non-unitary and unitary. \begin{definition}\label{defSullmin} A \textit{Sullivan minimal operad} $P_\infty$ is the colimit of a sequence of principal extensions starting from the initial operad, ordered by strictly increasing arities $$ I_\alpha \longrightarrow P_2 = \Gamma (E(2)) \longrightarrow \cdots \longrightarrow P_{n} = P_{n-1} \sqcup \Gamma (E(n)) \longrightarrow \dots \longrightarrow \dirlim_n P_n = P_\infty \ , $$ with $E(n)$ an arity $n$ homogeneous $\Sigma_n$-module with zero differential. A \emph{Sullivan minimal model} for an operad $P$ is a Sullivan minimal operad $P_\infty$ together with a quasi-isomorphism $\rho: P_\infty \stackrel{\sim}{\longrightarrow} P$. \end{definition} \begin{remarks} \begin{itemize} \item[(1)] The same conventions as in \ref{diccionari} apply here, so this definition works for both cases, the non-unitary and the unitary one. \item[(2)] In particular, a Sullivan minimal operad is a free graded operad $P_\infty = \Gamma (E)$, with $E = \bigoplus_n E(n)$, plus an extra condition on its differential $\partial$, usually called being \emph{decomposable}. The interested reader can check that both definitions, as a colimit of principal extensions or as a free graded operad plus a decomposable differential, agree by looking at \cite{GNPR05}, proposition 4.4.1. Even though this second characterization is useful in practice to recognise a Sullivan minimal operad, we are not going to use it in this paper. \end{itemize} \end{remarks} \subsection{Existence. The non-unitary case} \begin{theorem}\label{existencia} Every cohomologically connected, $HP(1) = \mk$, and cohomologically non-unitary, $HP(0) = 0$, operad $P\in \Op$ has a Sullivan minimal model $P_\infty \longrightarrow P$. This minimal model is connected, $P_\infty (1) = \mk$, and non-unitary, $P_\infty (0) = 0$.
\end{theorem} \begin{proof} Let $P$ be a cohomologically connected, cohomologically non-unitary operad. This is Markl's case \cite{MSS02}, with the slight improvement that we are just assuming $HP(0) =0$ instead of $P(0) = 0$. We first write down the proof for this case and then comment on the modifications needed for the cohomologically unitary case. Here, we use the free operad functor $\Gamma = \Gamma_{01}$ and start with $E = E(2) = HP(2)$. Take a $\mk[\Sigma_2]$-linear section $s_2 : HP(2) \longrightarrow ZP(2) \subset P(2)$ of the projection $\pi_2 : ZP(2) \longrightarrow HP(2)$, which exists because $\mk$ is a characteristic zero field, and define: $$ P_2 = \Gamma (E) \ , \quad {\partial_2}_{|E} = 0 \ , \qquad \text{and}\qquad \rho_2 : P_2 \longrightarrow P \ , \quad {\rho_2}_{|E} = s_2 \ . $$ It's clear that $P_2$ is a dg operad with differential $\partial_2 = 0$ and $\rho_2$ a morphism of dg operads. It is also a \emph{quis} in arities $\leq 2$ because: \begin{itemize} \item[(0)] $P_2(0) = \Gamma (E)(0) = 0 = HP(0)$, because of lemma \ref{lamadredelcordero} (a), \item[(1)] $P_2(1) = \Gamma (E)(1) = \mk = HP(1) $, because of lemma \ref{lamadredelcordero} (a), and \item[(2)] $P_2(2) = \Gamma (E)(2) = E(2) = HP(2)$, because of lemma \ref{lamadredelcordero} (b). \end{itemize} Assume we have constructed a morphism of dg operads $\rho_{n-1} : P_{n-1} \longrightarrow P$ in such a way that: \begin{enumerate} \item $P_{n-1}$ is a minimal operad, and \item $\rho_{n-1} : P_{n-1} \longrightarrow P$ is a {\it quis\/} in arities $\leq n-1$. \end{enumerate} To build the next step, consider the $\Sigma_n$-module of the relative cohomology of $\rho_{n-1} (n): P_{n-1}(n) \longrightarrow P(n)$ $$ E = E(n) = H(P_{n-1}(n),P(n)) \ . $$ Since we work in characteristic zero, every $\Sigma_n$-module is projective.
So we have a $\Sigma_n$-equivariant section $s_n = (d_n \ f_n)^t$ of the projection $$ P_{n-1}(n)^{+1} \oplus P(n) \supset Z(P_{n-1}(n),P(n))\longrightarrow H(P_{n-1}(n),P(n)) \ . $$ That is, $e = \pi_n s_n e$. Let $$ \begin{pmatrix} -\partial_{n-1} (n) & 0 \\ -\rho_{n-1}(n) & \partial (n) \end{pmatrix} $$ be the differential of the mapping cone $C\rho_{n-1}(n)$: the cocycle condition implies that $$ \partial_{n-1}(n)d_n = 0 \quad \text{and} \quad \rho_{n-1}(n)d_n = \partial(n)f_n \ . $$ That is, $d_n$ induces a differential $\partial_n$ on $P_n = P_{n-1} \sqcup_{d_n} \Gamma (E)$ and $f_n$ induces a morphism of operads $\rho_n : P_n \longrightarrow P$ such that ${\rho_n}_{|P_{n-1}} = \rho_{n-1}$ and ${\rho_n}_{|E} = f_n$, because of lemmas \ref{diferencialHirsch} and \ref{propuniversalHirsch}. Let us verify that $\rho_n$ induces an isomorphism in cohomology $$ {\rho_n}_* : H P_n(m) \longrightarrow HP(m) $$ in arities $m= 0,\dots , n$. First, if $m < n$, $$ \rho_n (m) = \rho_{n-1} (m) $$ by lemma \ref{lamadredelcordero} and so, by the induction hypothesis, we are done. Again by lemma \ref{lamadredelcordero} and its definition, in arity $n$, $\rho_n$ is $$ \rho_n (n) = (\rho_{n-1}(n) \quad f_n ) : P_{n-1}(n) \oplus E(n) \longrightarrow P(n) \ . $$ Let us see that $\rho_n(n)$ is a {\it quis\/}. $\bullet$ \emph{${\rho_n(n)}_*$ is a monomorphism.} Let $\omega + e \in P_{n-1}(n) \oplus E(n)$ be a cocycle such that $\rho_n(n)_* [\omega + e] = 0$. Note that being a cocycle means $$ \partial_{n-1}(n)\omega + d_ne = 0 $$ and the fact that $\rho_n(n)$ sends its cohomology class to zero means that we have $\nu \in P(n)$ such that $$ \partial(n)\nu =\rho_{n-1}(n)\omega + f_ne \ .
$$ Hence the differential of $\omega + \nu \in P_{n-1}(n)^{+1} \oplus P(n)$ in the mapping cone $C\rho_{n-1}(n)$ is $$ \begin{pmatrix} -\partial_{n-1}(n) & 0 \\ -\rho_{n-1}(n) & \partial(n) \end{pmatrix} \begin{pmatrix} \omega \\ \nu \end{pmatrix} = \begin{pmatrix} -\partial_{n-1}(n)\omega \\ -\rho_{n-1}(n)\omega + \partial (n)\nu \end{pmatrix} = \begin{pmatrix} d_ne \\ f_ne \end{pmatrix} = s_n(e) \ . $$ Therefore, $ e = \pi_ns_ne = [s_ne] = 0 $, and we are left with only $\omega$ in our cocycle, which means that $\omega $ must be a cocycle itself and $$ 0 = \rho_n(n)_*[\omega] = [\rho_{n-1}(n)\omega] \ . $$ So, there must be some $\nu' \in P(n)$ such that $$ \rho_{n-1}(n)\omega = \partial(n)\nu' \ . $$ This means that $\omega + \nu' \in P_{n-1}(n)^{+1} \oplus P(n)$ is a relative cocycle of $\rho_{n-1}(n)$. Let us call $$ e' = \pi_n(\omega + \nu') = [\omega + \nu'] \in H^*(P_{n-1}(n), P(n)) = E(n) $$ its cohomology class. By definition of $s_n$, $$ [\omega + \nu'] = [s_ne'] = [d_ne' + f_ne'] \ , $$ so both relative cocycles have to differ by a relative boundary: $$ \begin{pmatrix} d_ne' - \omega \\ f_ne' - \nu' \end{pmatrix} = \begin{pmatrix} -\partial_{n-1}(n) & 0 \\ -\rho_{n-1}(n) & \partial (n) \end{pmatrix} \begin{pmatrix} \omega' \\ \nu'' \end{pmatrix} \ . $$ In particular, this implies $$ \omega = \partial_{n-1}(n)\omega' + d_ne' = \partial_n(n)(\omega' + e') \ . $$ Thus $[\omega] = 0$ in $HP_n(n)$ and we are done. $\bullet$ \emph{${\rho_n(n)}_*$ is an epimorphism.} From any cocycle $\nu \in P(n)$ we can build a relative one: $$ 0 + \nu \in P_{n-1}(n)^{+1}\oplus P(n) \ . $$ Let us denote its cohomology class by $e = [0 + \nu] \in H(P_{n-1}(n), P(n)) = E(n)$. Then $s_ne = d_ne + f_ne$ and $0+\nu$ are cohomologous relative cocycles.
This means that there is a primitive $\omega + \nu' \in P_{n-1}(n)^{+1}\oplus P(n)$ such that: $$ \begin{pmatrix} d_ne \\ f_ne - \nu \end{pmatrix} = \begin{pmatrix} -\partial_{n-1}(n) & 0 \\ -\rho_{n-1}(n) & \partial(n) \end{pmatrix} \begin{pmatrix} \omega \\ \nu' \end{pmatrix} = \begin{pmatrix} -\partial_{n-1}(n)\omega \\ -\rho_{n-1}(n)\omega + \partial(n)\nu' \end{pmatrix} \ . $$ In particular, $$ \nu = f_ne + \rho_{n-1}(n)\omega - \partial(n)\nu' = \rho_n(n) (\omega +e ) + \partial(n) (-\nu') \ . $$ So $\rho_n(n)_*[\omega + e ] = [\nu]$ and we are done. \end{proof} \subsection{Existence. The unitary case} \begin{theorem}\label{existencia+} Every cohomologically connected operad with unitary multiplication $P\in \Ass_+ \ \Op $, $ HP(1) = \mk$, which is also cohomologically unitary, $HP(0) = \mk$, has a Sullivan minimal model $P_\infty \longrightarrow P$. This minimal model is connected, $P_\infty (1) = \mk$, and unitary, $P_\infty (0) = \mk$. \end{theorem} \begin{proof} Let $P$ be a cohomologically connected and cohomologically unitary operad. We make the most of the non-unitary construction we have already built, but now using $\Gamma = \Gamma_{+1}$. To that construction we have to add two things: \begin{enumerate} \item[(1)] A $\Lambda$-structure on the new generators $E$, and \item[(2)] The need to check the compatibility of the differentials $d_n$ and the successive extensions of our \emph{quis} $f_n$ with these $\Lambda$-structures. \end{enumerate} So, starting with $E = E(2) = HP(2)$, we have natural choices for the restriction operations: the ones induced in cohomology by those of $P$.
Next, we have to find a section $s'_2$ that makes the following diagram commute for $i=1,2$: $$ \xymatrix{ {E =HP(2)} \ar[d]^{\delta_i} \ar@{.>}@/^1pc/[r]^{s'_2} & {ZP(2)} \ar[d]^{\delta_i} \ar[l]^{\pi_2} \\ {I_+(1) = \mk = HP(1)} \ar@/^1pc/[r]^{s_1} & {ZP(1)} \ar[l]^-{\pi_1} } $$ Here, the section $s_1$ is the unique morphism from the initial operad $I_+$ and the $\Lambda$-structure on $E$ is the one induced by $\delta_i : P(2) \longrightarrow P(1)$ on cohomology. Notice that $s_1$ is necessarily a section of $\pi_1$. So we are seeking a section $s'_2$ that satisfies $$ \delta_i s'_2 = s_1 \delta_i \ , \quad \text{for} \ i=1,2 \ , $$ since the induced morphism $\rho_2 : P_2 = \Gamma (E) \longrightarrow P$ will thus be a unitary operad morphism. We work as before: given $e\in E$, and any section $s_2$ obtained from the non-unitary case, we check that the elements $\omega_i = \delta_i s_2 e - s_1\delta_i e, i = 1,2$ are cocycles satisfying the Kan condition \ref{Kan-likecondition}. Moreover, they are coboundaries: \begin{eqnarray*} \pi_1\omega_i &=& \pi_1 (\delta_is_2 e - s_1\delta_i e)\\ &=& \pi_1\delta_i s_2 e - \pi_1 s_1 \delta_i e \\ &=& \delta_i\pi_2s_2 e - \pi_1 s_1 \delta_i e \\ &=& \delta_i e - \delta_i e = 0 \ , \quad \text{for} \ i = 1,2 \ . \end{eqnarray*} Here we have used the fact that the projections onto the cohomology $\pi_1, \pi_2$ are $\Lambda$-morphisms and $s_1, s_2$ are sections of them. Hence, \ref{enhancedKan-likeresult} tells us that there is a coboundary $\partial\omega \in BP(2)$ such that $\delta_i \partial\omega = \omega_i, i=1,2$. So, we subtract this $\partial\omega$ from our previous, arbitrary section $s_2$: $$ s'_2 e = s_2 e - \partial\omega \ . $$ And we check that all this can again be done as before: the section can be averaged to make it equivariant, and we still get a \emph{quis}, etc. Assume we have already extended the $\Lambda$-structure with a compatible \emph{quis} up to arity $n-1, n>2$.
For the next step, we have to produce a $\Lambda$-structure on $E = E(n) = HC\rho_{n-1}(n) = H(P_{n-1}(n), P(n))$ and a compatible section $s'_n = (d'_n \ f'_n)^t$; that is, making the following diagram commute for $i=1,\dots , n$: $$ \xymatrix{ {E} \ar[d]^{\delta_i} \ar@{.>}@/^1pc/[r]^-{s'_n} & {Z(P_{n-1}(n), P(n))} \ar[d]^{\delta_i} \ar[l]^-{\pi_n} \\ P_{n-1}(n-1) \ar[r]^-{s_{n-1}} & {Z(P_{n-1}(n-1), P(n-1))} } $$ Here $s_{n-1} = (\partial (n-1) \ \rho(n-1))^t$. So again we need our section $s'_n$ to verify $$ \delta_i s'_n = s_{n-1}\delta_i \ , \quad \text{for} \ i=1,\dots , n \ , $$ since then the induced morphism $\rho_n : P_n = P_{n-1} \sqcup_d^\delta \Gamma (E) \longrightarrow P$ will be a unitary operad morphism. An easy and always available choice for the $\Lambda$-structure on $E$ is $\delta_i = 0$ for all $i=1, \dots , n$ (see remark \ref{unitsremainforever} below). Hence we have to produce a section $s'_n$ such that $\delta_i s'_n = 0, i = 1, \dots , n$: take the section $s_n$ we got from the non-unitary case and, for $e \in E$, compute: $$ \pi_{n-1}\delta_i s_n e = 0 \ . $$ Here, $\pi_{n-1} : Z(P_{n-1}(n-1), P(n-1)) \longrightarrow H(P_{n-1}(n-1), P(n-1)) = 0$, because $\rho(n-1)$ is a \emph{quis}, by the induction hypothesis. So we have $\omega_i$ such that $\delta_is_n e = \partial\omega_i$, for $i=1, \dots , n$. These $\partial \omega_i$ verify the Kan condition \ref{Kan-likecondition}, hence we get some $\partial \omega$ such that $\delta_i\partial\omega = \partial\omega_i, i =1, \dots , n$, because of \ref{enhancedKan-likeresult}. Therefore, we start with some arbitrary section $s_n$ obtained in the non-unitary case and rectify it with this $\partial\omega$: $$ s'_ne = s_ne - \partial\omega \ . $$ Then we perform the averaging procedure to get a $\Sigma_n$-equivariant $\omega$ and check that everything works as it is supposed to.
\end{proof} \begin{remark}\label{unitsremainforever} Notice that the action of the units on $(P_+)_\infty$ we have built $$ \delta_i = \_ \circ_i 1 : (P_+)_\infty(n) \otimes (P_+)_\infty(0) \longrightarrow (P_+)_\infty(n-1)\ , n \geq 1 \ , i=1, \dots , n $$ reduces to the following cases: \begin{itemize} \item[(a)] For $n=1$, it is the isomorphism $\mk \otimes \mk \longrightarrow \mk $. \item[(b)] For $n =2$, it is just the induced action from $P_+$: $HP_+(2) \otimes HP_+(0) \longrightarrow HP_+(1)$. \item[(c)] For $n>2$, it is the trivial action $\omega \mapsto \omega\circ_i 1 = 0$. \end{itemize} \end{remark} \subsection{Uniqueness. The non-unitary case} The following lemma will provide a proof of the uniqueness up to isomorphism of minimal models. It is inspired by \cite{HT90}, definition 8.3 and theorem 8.7. It also inspired a categorical definition of minimal objects: see \cite{Roi93} and \cite{Roi94c}, cf. \cite{GNPR10}. \begin{lemma}\label{seccio} Let $P_\infty$ be a Sullivan minimal operad and $\rho : Q \longrightarrow P_\infty$ a \emph{quis} of operads. Then there is a section $\sigma : P_\infty \longrightarrow Q$, $\rho\sigma = \id_{P_\infty} $. \end{lemma} \begin{proof} We are going to build the section $\sigma : P_\infty \longrightarrow Q$ inductively on the arity: $$ \xymatrix{ {I_0}\ar[r]\ar@{.>}[rrrrrd]_{\sigma_1} &{P_2}\ar[r]\ar@{.>}[rrrrd]^{\sigma_2} &{\dots}\ar[r] &{P_n}\ar[r]\ar@{.>}[rrd]^{\sigma_n} & {\dots} \ar[r] & {P_\infty} \ar@/^/@{.>}[d]^{\sigma} \\ & & & & & {Q} \ar[u]^{\rho} } $$ in such a way that: \begin{itemize}\label{secciosigma} \item[(1)] $\rho\sigma_n = \id_{P_n}$ (note that, because of the minimality, $\im \rho\sigma_n \subset P_n$), and \item[(2)] ${\sigma_n}_{| P_{n-1}} = \sigma_{n-1}$. \end{itemize} So, we start with the universal morphism $\sigma_1 : I_0 \longrightarrow Q$ from the initial operad $I_0$ to $Q$. It's clear that $\rho\sigma_1 = \id_{I_0}$.
Let us assume that we have already constructed $ \sigma_{n-1} : P_{n-1} \longrightarrow Q$ satisfying conditions (1) and (2) above, and let us define $\sigma_n : P_n \longrightarrow Q$ as follows: first, take the $\Sigma$-module $$ Q_{n-1} := \im (\sigma_{n-1}: P_{n-1} \longrightarrow Q) \ . $$ By induction hypothesis, $\sigma_{n-1}$ is a monomorphism, so $\sigma_{n-1}: P_{n-1} \longrightarrow Q_{n-1}$ is an isomorphism of $\Sigma$-modules and $\rho_{\vert Q_{n-1}}$ its inverse. Next, consider the following commutative diagram of $\Sigma_n$-modules, $$ \begin{CD} 0 @>>> Q_{n-1}(n) @>>> Q(n) @>>> Q(n)/Q_{n-1}(n) @>>> 0 \\ @. @VV\rho(n)_{| Q_{n-1}} V @VV\rho (n) V @VV\overline{\rho} (n)V @.\\ 0 @>>> P_{n-1}(n) @>>> P_\infty(n) @>>> P_\infty(n)/P_{n-1}(n) @>>> 0 \end{CD} \ . $$ in which the horizontal rows are exact. As we said, the first column is an isomorphism and the second a {\it quis\/}. So the third column is also a {\it quis\/}. By minimality and lemma \ref{lamadredelcordero}, $P_\infty(n) = P_n(n) = P_{n-1}(n) \oplus E(n) $. Hence, $P_\infty(n) / P_{n-1}(n) \cong E(n)$, with zero differential. So we have an epimorphism of $\Sigma_n$-modules, $$ Z(Q(n)/Q_{n-1}(n)) \longrightarrow H(Q(n)/Q_{n-1}(n)) \cong E(n) \ . $$ Take a section $s: E(n) \longrightarrow Z(Q(n)/Q_{n-1}(n))$ and consider the pull-back of $\Sigma_n$-modules $$ \xymatrix{ {Q(n)}\ar@/^2pc/[drr]^{\rho(n)}\ar@/_2pc/[ddr]_{\pi(n)}\ar@{.>}[dr]^{(\pi(n)\ \rho(n))} & & \\ & {(Q(n)/Q_{n-1}(n)) \times_{E(n)} P_\infty (n)} \ar[r]\ar[d] & {P_\infty(n)} \ar[d] \\ & {Q(n)/Q_{n-1}(n)} \ar[r]^{\overline{\rho}(n)} & {P_\infty(n)/P_{n-1}(n) \cong E(n)} } $$ and the induced morphism $(\pi (n) \ \rho (n) )$. This turns out to be an epimorphism: if $$ (\overline{\omega}, \nu) \in ( Q(n)/Q_{n-1}(n)) \times_{E(n)} P_\infty (n) \ , $$ it means that $ \overline{\rho}(n) \overline{\omega} = \overline{\nu}$. That is to say, $\rho (n) \omega - \nu \in P_{n-1} (n)$.
Then \begin{eqnarray*} (\pi (n) \ \rho (n) ) (\omega - \sigma_{n-1}(n)(\rho (n)\omega -\nu ) ) & = & (\overline{\omega} - 0, \rho (n)\omega - \rho (n)\sigma_{n-1}(n)(\rho (n)\omega -\nu ) ) \\ & = & (\overline{\omega},\nu ) \end{eqnarray*} by induction hypothesis. Let $i: E(n) \hookrightarrow P_\infty(n)$ denote the inclusion. We can lift $(s \ i )$ in the diagram $$ \xymatrix{ & {Q(n)}\ar[d]^{(\pi(n)\ \rho(n))} \\ E(n) \ar@{.>}[ur]^{f} \ar[r]_-{(s\ i)} & {(Q(n)/Q_{n-1}(n)) \times_{E(n)} P_\infty(n)} } $$ to a morphism $f : E(n) \longrightarrow Q(n)$ such that $(\pi (n) \ \rho (n) ) \circ f = (s \ i)$. Note that here we are using bare projectivity to lift morphisms, since we don't need them to commute with any differentials at this stage. Finally, define $\sigma_n : P_n \longrightarrow Q$ by $$ \sigma_{n \vert P_{n-1}} = \sigma_{n-1} \qquad \mathrm {and} \qquad \sigma_{n \vert E(n)} = f \ . $$ According to the universal property of principal extensions \ref{propuniversalHirsch}, in order to check that $\sigma_n$ is a morphism of operads, we only need to prove that $$ \sigma_{n-1}d_n e = \partial_{Q(n)} f e $$ for every $e \in E(n)$. Indeed: since $\pi (n)fe = se \in Z (Q(n)/ Q_{n-1}(n))$, we have $\overline{0} = \overline{\partial} se = \overline{\partial} \pi (n) fe = \pi (n) \partial fe$. Hence, $\partial fe \in Q_{n-1}(n) = \mathrm {im}\, \sigma_{n-1}(n)$. Let $\omega \in P_{n-1}(n)$ be such that $\partial fe = \sigma_{n-1} (n) \omega $. Then, apply $\rho (n)$ to both sides of this last equality: $$ \rho (n) \partial fe = \rho (n) \sigma_{n-1} (n) \omega = \omega \ , $$ by induction hypothesis, and also $$ \rho (n) \partial fe = \partial \rho (n) fe = d_n e \ , $$ because $f$ is a lifting of $( s \ i)$. So $d_n e = \omega $ and hence $\sigma_{n-1} d_n e = \sigma_{n-1}\omega = \partial fe$, as we wanted.
Finally, $\rho\sigma_n = {\mathrm id}_{P_n}$ because $\rho\sigma_{n \vert P_{n-1} } = \rho\sigma_{n-1} = {\mathrm id}_{P_{n-1}}$, by induction hypothesis, and $\rho fe = e$, because $f$ lifts $(s \ i )$. \end{proof} \subsection{Uniqueness. The unitary case} \begin{lemma}\label{seccio+} Let $P_\infty$ be a unitary Sullivan minimal operad and $\rho : Q \longrightarrow P_\infty$ a \emph{quis} of operads, where $Q$ has a unitary multiplication. Then, there is a unitary section $\sigma : P_\infty \longrightarrow Q$, $\rho\sigma = \id_{P_\infty} $. \end{lemma} \begin{proof} We follow the same strategy as in the previous results: starting with the section $f$ already built for the non-unitary case, \begin{equation}\label{eqseccio+} \xymatrix{ & {Q(n)}\ar[d]^{(\pi(n)\ \rho(n))} \\ E(n) \ar[ur]^{f} \ar[r]_-{(s\ i)} & {(Q(n)/Q_{n-1}(n)) \times_{E(n)} P_\infty(n) \ ,} } \end{equation} we will rectify it in order to make it compatible with the $\Lambda$-structures. Let's start with the case $n=2$. Diagram \ref{eqseccio+} reduces to $$ \xymatrix{ & {ZQ(2) \subset Q(2)}\ar[d]^{\id} \\ {E(2) \cong HQ(2)} \ar@{.>}[ur]^{f} \ar[r]_{s_2} & {ZQ(2) \subset Q(2) \ .} } $$ Hence $f=s_2$ and the problem boils down to proving that the section $s_2$ can be chosen to be compatible with the $\Lambda$-structures. Again, this means studying the differences $$ \omega_i = \delta_is_2e - s_1\delta_ie \ , \quad \text{for} \ i =1,2 \ , $$ for $e \in E(2)$. Here, $s_1 : HP_\infty (1) = \mk \longrightarrow ZQ(1)$ is the unique lifting of the isomorphism $\rho_*(1) : HP_\infty(1) \longrightarrow HQ(1)$ to $ZQ(1)$ that sends $\id \in HP_\infty (1)$ to $\id \in ZQ(1)$. $$ \xymatrix{ {E(2)} \ar[r]^{s_2}\ar[d]^{\delta_i} & {ZQ(2)} \ar[d]^{\delta_i} \\ {HP_\infty(1) = HQ(1)} \ar[r]^-{s_1} & {ZQ(1)\ .} } $$ Again, these $\omega_i$ don't need to be zero, but it's easy to verify that they are cocycles, satisfy the Kan-like condition \ref{Kan-likecondition} and, moreover, belong to $\ker\rho$.
Let's check this last statement: \begin{eqnarray*} \rho\omega_i &=& \rho\delta_is_2e - \rho s_1\delta_i e \\ &=& \delta_i\rho s_2 e - \rho s_1 \delta_i e \\ &=& \delta_i e - \delta_i e = 0 \ , \quad \text{for} \ i =1,2 \ . \end{eqnarray*} Here we have used the fact that $s_2$ and $s_1 $ are already sections of $\rho$, which we know from the non-unitary case, and that $\rho$ is a morphism of unitary operads. Hence, because of \ref{enhancedKan-likeresult} there is some $\omega \in ZQ(2) \cap \ker \rho$ such that $\delta_i \omega = \omega_i, i = 1,2$ and we use it to rectify our previous choice of a section: $$ s'_2 e = s_2 e - \omega \ . $$ Assume we have already built the section $\sigma$ up to the $(n-1)$-st stage, $n>2$, of the minimal operad $P_\infty$. Now the $\Lambda$-structure on $E(n)$ is the trivial one, $\delta_i = 0, i=1, \dots , n$. So we have to rectify our $f$ in \ref{eqseccio+} to verify $\delta_i f = 0, i = 1, \dots , n$. Again, for $e\in E(n)$ we don't necessarily have $\omega_i = \delta_i fe = 0, i =1, \dots , n$. Nevertheless, $$ \pi \omega_i = \delta_i \pi fe = \delta_i se = s\delta_i e = 0 \ , $$ because of the trivial $\Lambda$-structure on $E(n)$. Here we have used the fact that we could have previously rectified $s$ into a morphism of $\Lambda$-structures---a verification which we leave to the conscientious reader. Therefore, $\delta_ife \in Q_{n-1}(n-1), i =1, \dots , n$. Again these $\delta_ife$ verify the Kan condition \ref{Kan-likecondition}, and belong to $\ker\rho$. So, because of lemma \ref{enhancedKan-likeresult}, we get $\omega \in Q_{n-1}(n) \cap \ker\rho$ such that $\delta_i\omega = \omega_i$ and we redefine our $f$ from the non-unitary case as usual: $f'e = fe - \omega$. Again, we average to produce a $\Sigma_n$-equivariant $\omega$ and check that everything works as it's supposed to.
\end{proof} \subsection{Uniqueness: conclusions} From lemmas \ref{seccio} and \ref{seccio+}, uniqueness follows at once in both the non-unitary and unitary cases. \begin{proposition}\label{quisimplicaiso} Let $\rho : P_\infty \longrightarrow P'_\infty$ be a \emph{quis} between minimal Sullivan operads. Then, $\rho$ is an isomorphism. \end{proposition} \begin{proof} Because of the previous lemmas \ref{seccio} and \ref{seccio+}, $\rho$ has a section $\sigma$ which, by the two-out-of-three property, is also a \emph{quis}. So $\sigma$ also has a section; being both a monomorphism and an epimorphism, $\sigma$ is an isomorphism, and hence so is $\rho$. \end{proof} \begin{theorem}\label{unicitat} Let $\varphi :P_\infty \lra P$ and $\varphi': P'_\infty \lra P$ be two Sullivan minimal models of $P$. Then there is an isomorphism $\psi : P_\infty \lra P'_\infty$, unique up to homotopy, such that $\varphi' \psi\simeq \varphi$. \end{theorem} \begin{proof} The existence of $\psi$ follows from the up-to-homotopy lifting property of Proposition \ref{bijecciohomotopies}. It is a \emph{quis} because of the $2$-out-of-$3$ property and hence an isomorphism because of the previous proposition \ref{quisimplicaiso}. \end{proof} \section{Miscellanea} In this section we develop some corollaries relating the minimal models of $P_+$ and $P$, and establishing their relationship with up-to-homotopy algebras. Namely: \begin{itemize} \item[(6.1)] We compare the minimal models of a unitary operad $P_+$ and its non-unitary truncation $P$. \item[(6.2)] We relate the minimal model $(P_+)_\infty$ of a unitary operad and the up-to-homotopy $P_+$-algebras with strict units. \item[(6.3)] For the case of the unitary associative operad, we compare our minimal model $su\Ass_\infty = \Ass_{+\infty}$ with the one of up-to-homotopy algebras with up-to-homotopy units, $hu\Ass_\infty$. \item[(6.4)] Here we extend some results of our previous paper \cite{CR19}, which we could not address there for lack of minimal models for unitary operads.
\item[(6.5)] We complete the results of \cite{GNPR05} concerning the formality of operads, so as to include unitary operads. \end{itemize} \subsection{Minimal models of an operad and its unitary extension} Let $P$ be an operad admitting a unitary extension $P_+$. We clearly have a split exact sequence of $\Sigma$-modules $$ 0 \longrightarrow P \longrightarrow P_+ \longrightarrow \mk [1] \longrightarrow 0 \ . $$ Here, $P \longrightarrow P_+$ is the canonical embedding, $\mk [1] = \mk$ is the $\Sigma$-module which is just a $\mk$-vector space on one generator $1$ in arity-degree $(0,0)$ and zero elsewhere, and $P_+ \longrightarrow \mk [1]$ is the projection of $\Sigma$-modules that sends $P(l)$, $ l > 0$, to zero and is the identity on $P_+(0) = \mk$. We could also write $$ P_+ = P \oplus \mk [1] $$ as $\Sigma$-modules. \begin{proposition}\label{twominimalmodels} For every cohomologically unitary and cohomologically connected operad $P_+$ we have an isomorphism of operads $$ (P_+)_\infty = (P_\infty)_+ \ . $$ \end{proposition} In particular, we have an isomorphism of $\Sigma$-modules, $$ (P_+)_\infty = P_\infty \oplus \mk [1] \ . $$ \begin{proof} For every $P_+$, its truncation $P$ is a $\Lambda$-operad and this structure passes to its minimal model $P_\infty$ (see remark \ref{unitsremainforever}). So, we have a unitary extension $(P_\infty)_+$. Let us see that this unitary extension agrees with the minimal model of $P_+$. Indeed, $P_\infty$ is a colimit of principal extensions $$ I_0 \longrightarrow P_2 = \Gamma \left( E(2) \right) \longrightarrow \dots \longrightarrow P_n = \Gamma \left(\bigoplus_{m=2}^{n} E(m)\right)\longrightarrow \dots \longrightarrow \dirlim_n P_n = P_\infty $$ starting with the non-unitary initial operad $I_0$. For the same reasons we just remarked about $P_\infty$, all these operads $P_n$ have unitary extensions.
So we can take the unitary extension of the whole sequence $$ I_+ \longrightarrow (P_2)_+ = \Gamma (E(2))_+ \longrightarrow \dots \longrightarrow (P_n)_+ = \Gamma \left( \bigoplus_{m=2}^{n} E(m) \right)_+ \longrightarrow \dots \longrightarrow (\dirlim_n P_n)_+ = (P_\infty)_+ \ . $$ But, as we noticed in lemma \ref{extensionfunctor}, the functor $(\ )_+$ commutes with colimits, so $(P_\infty)_+ = (\dirlim_n P_n)_+ = \dirlim_n (P_{n+}) = (P_+)_\infty$. \end{proof} \subsection{Minimal models and up-to-homotopy algebras} In the non-unitary case, the importance of these minimal models $P_\infty$ is well known: they provide a \emph{strictification} of up-to-homotopy $P$-algebras. That is, up-to-homotopy $P$-algebras are the same as regular, \emph{strict} $P_\infty$-algebras. One way to prove it is the following: first, we have a commonly accepted definition of up-to-homotopy $P$-algebras, at least for \emph{Koszul} operads. Namely, the one in \cite{GK94}: \begin{definition}(\cite{GK94}, see also \cite{LoVa12}) Let $P$ be a Koszul operad. Then an \emph{up-to-homotopy} $P$-algebra is an algebra over the Koszul resolution (model) $\Omega P^{\text{!`}}$ of $P$. \end{definition} Then one proves that $\Omega P^{\text{!`}} \stackrel{\sim}{\longrightarrow} P$ is a minimal model of $P$, in the sense that it is unique up to isomorphism (\cite{LoVa12}, corollary 7.4.3). Since Markl's minimal model \emph{à la Sullivan} $P_\infty \stackrel{\sim}{\longrightarrow} P$ is also minimal and cofibrant, we necessarily have an isomorphism $P_\infty = \Omega P^{\text{!`}}$ (see \cite{Mar96}). Then one has to check that this definition as $\Omega P^{\text{!`}}$-algebras also agrees with the definitions through \lq\lq equations\rq\rq\ in the particular cases.
For instance, one has to check that $\Ass_\infty = \Omega\Ass^{\text{!`}}$-algebras are the same as $A_\infty$-algebras, defined as dg modules, together with a sequence of $n$-ary operations $$ \mu_n : A^{\otimes n} \longrightarrow A ,\ n \geq 2 ,\ |\mu_n| = 2 - n \ , $$ satisfying the equations $$ \partial (\mu_n) = \sum_{\substack{ p+q+r = n \\ p+1+r=m}} (-1)^{qr+p+1} \mu_m \circ_{p+1} \mu_q \ . $$ (See \cite{LoVa12}, lemma 9.2.1.) We would like to say that the same is true in the unitary case, in other words, to prove a theorem such as \begin{theorem} Up-to-homotopy $P_+$-algebras with strict unit are the same as $(P_+)_\infty$-algebras. \end{theorem} But for this, one important ingredient is missing: we lack a common, accepted definition for up-to-homotopy $P_+$-algebras with strict units. To the best of our knowledge, such a definition exists only for the operad $u\Ass = \Ass_+ $. For instance, the one in \cite{KS09}, definition 4.1.1 (cf. \cite{Lyu11}, \cite{Bur18}): \begin{definition} An $A_\infty$-algebra $A$ is said to have a \emph{strict unit} if there is an element $1\in A$ of degree zero such that $\mu_2(1,a) = a = \mu_2(a,1)$ and $\mu_n(a_1, \dots , 1, \dots, a_n) = 0$ for all $n\neq 2$ and $a, a_1, \dots , a_n \in A$. \end{definition} So we prove our theorem for the only case currently possible: $P_+ = \Ass_+$. \begin{theorem} $A_\infty$-algebras with strict unit are the same as $\Ass_{+\infty}$-algebras. \end{theorem} \begin{proof} We just have to prove that the unit $1 \in \Ass_{+\infty}(0)$ acts as described in the definition. 
Namely, $$ \mu_2 \circ_1 1 = \id = \mu_2 \circ_2 1 \ , \qquad \text{and} \qquad \mu_n \circ_i 1 = 0 \ , \quad \text{for all} \quad n > 2 \ , \quad \text{and} \quad i = 1, \dots , n \ . $$ As for the first equations, because of remark \ref{unitsremainforever}, partial composition products $$ \circ_i : \Ass_{+\infty} (2) \otimes \Ass_{+\infty} (0) \longrightarrow \Ass_{+\infty} (1) \ , \quad i = 1, 2 \ , $$ are induced by $$ \circ_i : \Ass_+(2) \otimes \Ass_+(0) \longrightarrow \Ass_+ (1) \ , \quad i = 1, 2 \ , $$ which satisfy these identities. As for the rest of the equations, for $n>2$, again because of remark \ref{unitsremainforever}, the partial composition products $\circ_i : \Ass_{+\infty} (n) \otimes \Ass_{+\infty} (0) \longrightarrow \Ass_{+\infty} (n-1)$ are trivial. \end{proof} \subsection{Strict units and up-to-homotopy units} Here we compare two minimal models of a unitary operad $P_+$: the one with \emph{strict units} that we developed, $(P_+)_\infty$, and the one with \emph{up-to-homotopy units} that we find for the case of the unitary associative operad in \cite{HM12} or \cite{Lyu11}. We will use the notations $su\Ass_\infty = \Ass_{+\infty}$ and $hu\Ass_\infty$, respectively.
As \cite{HM12} mentions, $su\Ass_\infty$ \emph{cannot} be cofibrant, nor \emph{minimal} and cofibrant, since if it were, we would have two \emph{quis} $su\Ass_\infty \stackrel{\sim}{\longrightarrow} u\Ass \stackrel{\sim}{\longleftarrow} hu\Ass_\infty $ and hence, by the up-to-homotopy lifting property and the fact that both are minimal, we would conclude that the operads $su\Ass_\infty$ and $hu\Ass_\infty$ are isomorphic, which they clearly are not, as one sees just by looking at their presentations: $$ hu\Ass_\infty = \Gamma ( \{\mu_n^S\}_{S,n\geq2}) \ , $$ (see \cite{HM12}, \cite{Lyu11}) and $$ su\Ass_\infty = \Gamma_{+1} (\{\mu_n\}_{n\geq 2}) = \dfrac{\Gamma (1, \{\mu_n\}_{n\geq 2})}{\langle \mu_2 \circ_1 1 - \id,\ \mu_2 \circ_2 1 - \id, \ \{\mu_n \circ_i 1\}_{n > 2, i=1,\dots, n}\rangle } \ . $$ Nevertheless, we have indeed proven that $su\Ass_\infty$ is a minimal and cofibrant operad. And so it is, of course, but \emph{as an operad with unitary multiplication}, that is, in $\Ass_+ \backslash \Op$, even though \emph{it is not as an operad} in $\Op$. Indeed, looking at its second presentation, with the free operad functor $\Gamma$, we see that it seems to lack the first condition of minimality; i.e., being free as a graded operad. Again, there is no contradiction at all: it is graded-free \emph{as a unitary operad}; that is, in $\Op_{+1}$, with the free operad functor $\Gamma_{+1}$. So, summing up: $su\Ass_\infty$ is an honest minimal, cofibrant and graded-free operad \emph{in} $\Ass_+\backslash \Op$, while it is none of the above in the category of all operads $\Op$. \begin{example}The \emph{free} $\C$-algebra $CX$ in \cite{May72}, construction 2.4, and lemma 2.9, or, more generally, the \emph{free reduced} $P_+$-algebra $\mathbb{S}_*(P_+, X)$ for a unitary operad $P_+$ in \cite{Fre17a}, p. 74, are also examples of free objects when you consider them in categories of unitary algebras, but they lose their \lq\lq freedom" in the categories of all algebras, unitary or otherwise.
\end{example} This difference between the same object being free in a subcategory and not being free in a larger category has as a consequence that minimal objects in a subcategory can lose their minimality in a larger category. This apparently paradoxical phenomenon is not new and has already been observed (see, for instance, \cite{Roi94c}, remark 4.8). Here we present another example of this phenomenon, but in the category of dg commutative algebras. \begin{example}\label{nounitexample} Let $\Cdga{\mathbb{Q}}$ denote the category of dg commutative algebras, \emph{without unit}. Let $\Cdga{\mathbb{Q}}_1$ denote the category of algebras \emph{with unit}. By forgetting the unit, we can consider $\Cdga{\mathbb{Q}}_1$ as a subcategory of $\Cdga{\mathbb{Q}}$. $\mathbb{Q}$, being the initial object in $\Cdga{\mathbb{Q}}_1$, is free, cofibrant and minimal in $\Cdga{\mathbb{Q}}_1$. Indeed, if we denote by $\Lambda_1$ the free graded commutative algebra \emph{with unit} functor, then $\Lambda_1(0) = \mathbb{Q}$: the free graded commutative algebra \emph{with} unit on the $\mathbb{Q}$-vector space $0$. However, it is \emph{neither} minimal, nor cofibrant, nor free as an object in the larger category $\Cdga{\mathbb{Q}}$. To see this, let us denote by $\Lambda$ the free graded-commutative algebra \emph{without} unit. As an algebra \emph{without} unit, $\mathbb{Q}$ has an extra relation. Namely, $1^2 = 1$. So, it is \emph{not} a free algebra in $\Cdga{\mathbb{Q}}$: $$ \mathbb{Q} = \Lambda_1 (0) = \dfrac{\Lambda (1)}{(1^2 - 1)} \ . $$ Next, consider the free graded-commutative algebra \emph{without} unit $\Lambda (t,x)$ on two generators $t,x$ in degrees $|t|=-1$ and $|x|=0$ and differential $dx = 0$ and $dt = x^2 - x$. Hence, as a graded vector space, $$ \Lambda (t,x)^{i} = \begin{cases} (x), & \mbox{if } i = 0, \\ [t, tx, tx^2, \dots , tx^n, \dots], & \mbox{if } i = -1, \\ 0 , & \mbox{otherwise}.
\end{cases} $$ where: \begin{itemize} \item[(1)] $(x)$ is the ideal generated by $x$ in the polynomial algebra $\mathbb{Q}[x]$. That is, the $\mathbb{Q}$-vector space $[x, x^2, \dots , x^n, \dots]$. \item[(2)] $[t, tx, tx^2, \dots , tx^n, \dots]$ means the $\mathbb{Q}$-vector space generated by those vectors. \end{itemize} Consider the morphism of algebras \emph{without} unit $$ \varphi : \Lambda (t,x) \longrightarrow \mathbb{Q} $$ defined by $\varphi(x) = 1, \varphi (t) = 0$. It is clear that $\varphi$ is a \emph{quis} and an epimorphism. So, if $\mathbb{Q}$ were a minimal and cofibrant algebra \emph{without unit}, we would have a section $\sigma : \mathbb{Q} \longrightarrow \Lambda (t,x)$, $\varphi \sigma = \id $. For degree reasons, we would then have $\sigma (1) = p(x)$, for some polynomial $p(x) \in (x)$; that is, a polynomial of degree $\geq 1$. But, since $\sigma(1)\sigma(1) = \sigma (1^2) = \sigma (1) $, we would get $p(x)^2 = p(x) $, which is impossible for a polynomial of degree $\geq 1$. Hence, $\mathbb{Q}$ is graded-free, cofibrant and minimal \emph{as an algebra with unit}, but it is none of those things \emph{as an algebra without unit}. In fact, we could argue that we have computed its minimal model $\Lambda (t,x)$ in $\Cdga{\mathbb{Q}}$, but this would lead us to develop the theory of minimal dg commutative algebras without unit, with possible generators in degree zero and elements of negative degree, which is beyond the scope of this paper. \end{example} \subsection{Minimal models of operad algebras for tame operads} In \cite{CR19} we proved the existence and uniqueness of Sullivan minimal models for operad algebras, for a wide class of operads we called \lq\lq tame", and for operad algebras satisfying just the usual connectivity hypotheses. Of particular importance was the fact that, if an operad $P$ is tame, then its minimal model $P_\infty$ is also tame: that is, $P_\infty$-algebras also have Sullivan minimal models (\cite{CR19}, proposition 4.10).
This provides minimal models for $\Ass_\infty, \Com_\infty$ and $\Lie_\infty$-algebras, for instance. Since at that time we were not aware of the possibility of building minimal models for unitary operads, there was a gap in our statements, and we had to formulate them only for non-unitary operads (there called \lq\lq reduced"). Now we can mend that gap. \begin{proposition} Let $P\in \Op$ be a cohomologically connected and cohomologically non-unitary, or unitary, $r$-tame operad. Then its minimal model is also $r$-tame. \end{proposition} \begin{proof} Indeed, the presence of a non-trivial arity zero $P(0)$ adds nothing to the condition of being tame or not. \end{proof} \begin{corollary} Every cohomologically connected $\Ass_{+\infty}$- or $\Com_{+\infty}$-algebra has a Sullivan minimal model. Also, every $1$-connected $\Ger_{+\infty}$-algebra has a Sullivan minimal model. \end{corollary} Then we went on to prove the same results for pairs $(P,A)$, where $P$ is a tame operad and $A$ a $P$-algebra, thus providing a global invariance for our minimal models in the form of a minimal model $(P_\infty, \M) \stackrel{\sim}{\longrightarrow} (P,A)$ in the category of such pairs, the category of operad algebras \emph{over variable operads}. We can now add unitary operads to that result too. \begin{theorem} Let $P$ be a cohomologically connected and cohomologically non-unitary, or unitary, $r$-tame operad and $A$ an $r$-connected $P$-algebra. Then $(P,A)$ has a Sullivan $r$-minimal model $(P_\infty, \M) \stackrel{\sim}{\longrightarrow} (P,A)$. \end{theorem} \subsection{Formality} In his talk at the 2018 International Congress of Mathematicians in Rio de Janeiro on the history of the formality of the little disks operads, \cite{Wil18}, Willwacher pointed out that our paper \cite{GNPR05} missed arity zero. Here we complete the results of that paper for the unitary case.
\begin{proposition}\label{formality+} Let $P_+$ be a unitary dg operad with $HP_+(0) = HP_+(1) = \mk$. Then $$ P_+ \ \text{is a formal operad} \qquad \Longleftrightarrow \qquad P \ \text{is a formal operad} \ . $$ \end{proposition} \begin{proof} Since the truncation functor is exact, the implication $\Longrightarrow$ is clear. In the opposite direction, because of the hypotheses, $P$ and $P_+$ have minimal models $P_\infty$ and $(P_\infty)_+$. Assume $P$ is formal. Then we have a pair of \emph{quis} $$ HP \stackrel{\sim}{\longleftarrow} P_\infty \stackrel{\sim}{\longrightarrow} P \ . $$ Applying the unitary extension functor to this diagram, and taking into account that it is an exact functor because of lemma \ref{extensionfunctor}, we get $$ (HP)_+ \stackrel{\sim}{\longleftarrow} (P_\infty)_+ \stackrel{\sim}{\longrightarrow} P_+ \ . $$ This is just $$ HP_+ \stackrel{\sim}{\longleftarrow} (P_+)_\infty \stackrel{\sim}{\longrightarrow} P_+ \ . $$ Hence, $P_+$ is also a formal operad. \end{proof} \begin{corollary} (cf. \cite{Kon99}, \cite{Tam03}, \cite{GNPR05}, \cite{LaVo14}, \cite{FW18}) The unitary $n$-little disks operad $\D_{n+}$ is formal over $\mathbb{Q}$. \end{corollary} \begin{proof} This follows from \cite{GNPR05}, corollary 6.3.3 and our previous proposition \ref{formality+}. \end{proof} We can also offer a unitary version of the main theorem 6.2.1 in \emph{op.cit.} about the independence of formality from the ground field. \begin{corollary} (cf. \cite{Sul77}, \cite{HS79}, \cite{Roi94b}, \cite{GNPR05}) Let $\mk$ be a field of characteristic zero, and let $\mk \subset \mathbf{K}$ be a field extension. If $P$ is a cohomologically connected and cohomologically unitary dg $\mk$-operad with finite type cohomology, then the following statements are equivalent: \begin{itemize} \item[(1)] $P$ is formal. \item[(2)] $P\otimes \mathbf{K}$ is formal.
\end{itemize} \end{corollary} \begin{proof} Because the statements only depend on the homotopy type of the operad, we can assume $P$ to be minimal, and hence connected and unitary: let's call it $P_+$. Then, $P_+$ is formal if and only if its truncation $P$ is so, because of the previous proposition \ref{formality+}. Because of \emph{op.cit.}, theorem 6.2.1, $P$ is formal if and only if $P\otimes \mathbf{K}$ is so. Because of the previous proposition \ref{formality+} again, this is true if and only if $(P\otimes \mathbf{K})_+ = P_+ \otimes \mathbf{K}$ is formal. \end{proof} The interested reader can easily check that the rest of the sections of \cite{GNPR05} concerning non-unitary operads admit similar extensions to unitary ones. This is true even for the finite type results, like theorem 4.6.3 in \emph{op.cit.}, on which the descent of formality hinges: \begin{theorem} Let $P$ be a cohomologically connected and cohomologically non-unitary, or unitary, operad. If the cohomology of $P$ is of finite type, then its minimal model $P_\infty$ is of finite type. \end{theorem} And this is so because, even in the unitary case, $P_\infty = \Gamma (E)$, with $E(0) = E(1) =0$. In particular, we have the celebrated Sullivan criterion of formality, based on the lifting of a \emph{grading automorphism}, also for unitary operads. \begin{definition} Let $\alpha \in \mk^*$ not be a root of unity and let $C$ be a complex of $\mk$-vector spaces. The \emph{grading automorphism} $\phi_\alpha$ of $HC$ is defined by $\phi_\alpha = \alpha^i \id_{HC^i}$ for all $i\in \mathbb{Z}$. A morphism of complexes $f : C \longrightarrow C$ is said to be a \emph{lifting} of the grading automorphism if $Hf = \phi_\alpha$. \end{definition} \begin{proposition}(cf. \cite{Sul77}, \cite{GNPR05}, \cite{Pet14}) Let $P$ be a cohomologically connected and cohomologically non-unitary or unitary operad with finite type cohomology. If for some non-root of unity $\alpha \in \mk^*$, $P$ has a lifting of $\phi_\alpha$, then $P$ is formal. \end{proposition}
\section{Introduction} Currently, most of the energy produced worldwide uses coal or natural gas. However, much of this energy is wasted. In the United States of America, approximately 58\% of energy produced is wasted \cite{batt_2013}. Furthermore, 40\% of this wasted energy is due to industrial and residential buildings. By reducing energy wastage in the electric power industry, we reduce damage to the environment and reduce the dependence on fossil fuels. Short-term load forecasting (STLF) (i.e., one hour to a few weeks) can assist since, by predicting load, one can do more precise planning, supply estimation and price determination. This leads to decreased operating costs, increased profits and a more reliable electricity supply for the customer. Over the past decades of research in STLF there have been numerous models proposed to solve this problem. These models have been classified into classical approaches like moving average \cite{Andrade_09} and regression models \cite{Hong_11}, as well as machine learning based techniques, regression trees \cite{Mori_01}, support vector machines \cite{Niu2006} and Artificial Neural Networks \cite{Lee_92}. In recent years, many deep learning methods have been shown to achieve state-of-the-art performance in various areas such as speech recognition \cite{Hinton_12}, computer vision \cite{Krizhevsky_2012} and natural language processing \cite{Collobert_2008}. This promise has not been demonstrated in other areas of computer science due to a lack of thorough research. Deep learning methods are representation-learning methods with multiple levels of representation obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level \cite{LeCun_15}. With the composition of enough such transformations, very complex functions can be learned. 
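As a schematic illustration of this idea of composing simple non-linear modules (this is not any of the specific models evaluated later; the layer sizes and random weights below are arbitrary placeholders), a deep feedforward network can be sketched in a few lines of numpy:

```python
import numpy as np

# Hypothetical sketch: a deep network as a composition of simple non-linear
# modules. Each layer applies an affine map followed by a non-linearity;
# stacking enough such transformations lets very complex functions be learned.
rng = np.random.default_rng(0)

def layer(x, w, b):
    # one simple non-linear module: affine map followed by tanh
    return np.tanh(x @ w + b)

def deep_net(x, params):
    # compose the modules, transforming the representation level by level
    for w, b in params:
        x = layer(x, w, b)
    return x

sizes = [18, 64, 64, 1]  # e.g. 18 input features down to a single forecast
params = [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]
y = deep_net(rng.standard_normal((5, 18)), params)  # 5 samples -> 5 outputs
```

Training such a stack (by backpropagation) is what the deep methods compared below do; the sketch only shows the forward composition of representations.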
In this paper, we compare deep learning and traditional methods when applied to our STLF problem and we also provide a comprehensive analysis of numerous deep learning models. We then show how these methods can be used to assist in the pricing of electricity which can lead to less energy wastage. To the best of our knowledge, there is little work in such comparisons for power usage in an electrical grid. The data we use is based on one year of smart meter data collected from residential customers. We apply each of the deep and traditional algorithms to the collected data while also noting the corresponding computational runtimes. Due to differences in electricity usage between the week and the weekend, we then split the data into two new datasets: weekends and weekly data. The algorithms are applied to these new datasets and the results are analyzed. The results show that the deep architectures are superior to the traditional methods by having the lowest error rate, but they do have the longest run-time. Due to space limitations we do not provide details of the traditional approaches but do provide references. \section{Analysis} \subsection{Data Description} Our dataset consists of 8592 samples of 18 features that were collected from several households. The dataset was broken into 3 parts for training, validation and testing of sizes 65\%, 15\%, 20\% respectively. The readings were recorded at hourly intervals throughout the year. Some of the features were electrical load readings for the previous hour, the previous two hours, the previous three hours, the previous day same hour, the previous day previous hour, the previous day previous two hours, the previous 2 days same hour, the previous 2 days previous hour, the previous 2 days previous two hours, the previous week same hour, the average of the past 24 hours and the average of the past 7 days. 
The rest of the features (which do not contain electrical load readings) are the day of the week, hour of the day, whether it is a weekend, whether it is a holiday, temperature and humidity. These features were selected as they are typically used for STLF. In addition, the total electrical load does not change significantly over the year, since the households are located in a tropical country where the temperature remains fairly constant. \begin{table}[!t] \caption{Baseline algorithms} \label{tab:1} \centering \begin{tabular}{rrrr} \toprule Algorithm & MAPE & MPE & Time (s)\\ \midrule WMA & 9.51 & -1.96 & 100\\ MLR & 24.25 & -1.47 & 1\\ MQR & 12.91 & -7.63& 7\\ RT & \textbf{7.23}& -1.71& 15\\ SVR & 13.65 & \text{ }3.16 &19\\ \bottomrule \end{tabular} \end{table} \subsection{Comparison Method} As a preprocessing step, the data is cleaned and scaled to zero mean and unit variance. All traditional methods use cross-validation to determine appropriate values for the hyper-parameters. A random grid search was used to determine the hyper-parameters for the deep learning methods. Several baseline algorithms were chosen. They include the Weighted Moving Average (WMA), where $y_{t+1} = \alpha y_{t} + \beta y_{t-167}$ with $\alpha = 0.05$ and $\beta = 0.95$, Multiple Linear Regression (MLR) and Multiple Quadratic Regression (MQR), Regression Tree (RT) with the minimum number of branch nodes being 8, Support Vector Regression (SVR) with a linear kernel and the Multilayer Perceptron (MLP) with 100 hidden neurons.
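For concreteness, the WMA baseline can be sketched as a minimal numpy implementation under the stated coefficients (the function name is ours, not from the paper):

```python
import numpy as np

def wma_forecast(load, alpha=0.05, beta=0.95):
    """One-step-ahead WMA forecasts for an hourly load series:
    y[t+1] = alpha * y[t] + beta * y[t-167], i.e. the prediction mixes the
    most recent reading with the reading from one week (168 hours) earlier.
    """
    load = np.asarray(load, dtype=float)
    # forecasts are only defined once a full week of history is available
    return np.array([alpha * load[t] + beta * load[t - 167]
                     for t in range(167, len(load) - 1)])
```

With $\alpha = 0.05$ and $\beta = 0.95$, the forecast leans heavily on the same hour one week earlier, which matches the strong weekly periodicity of residential load.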
For our Deep Neural Network methods we used the Deep Neural Network without pretraining (DNN-W), the DNN with pretraining using Stacked Autoencoders (DNN-SA) \cite{Hoo_11}, Recurrent Neural Networks (RNN) \cite{Hermans_2013}, RNNs with Long Short Term Memory (RNN-LSTM) \cite{Gers2001}, Convolutional Neural Networks (CNN) \cite{Siripurapu_15} and CNNs with Long Short Term Memory (CNN-LSTM) \cite{Sainath_15}. To evaluate the goodness of fit of these algorithms we use the Mean Absolute Percentage Error (MAPE) defined as: \begin{equation} \text{MAPE} = \frac{100}{n} \sum_{t=1}^{n} \frac{|y_t - \widehat{y}_t|}{y_t} \end{equation} where $n$ is the number of data points, $t$ is the particular time step, $y_t$ is the target or actual value and $\widehat{y}_t$ is the predicted value. In order to determine the cost of the prediction errors (i.e. whether the prediction is above or below the actual value) the Mean Percentage Error (MPE) is used, which is defined as: \begin{equation} \text{MPE} = \frac{100}{n} \sum_{t=1}^{n} \frac{y_t - \widehat{y}_t}{y_t} \end{equation} \subsection{Numerical Results} \begin{table}[!t] \caption{DNN algorithms (subscript denotes number of layers)} \label{tab:2} \centering \footnotesize\setlength{\tabcolsep}{3pt} \begin{tabular}{rrrrrrr} \toprule Algorithm & \multicolumn{3}{c}{200 Epochs} &\multicolumn{3}{c}{400 Epochs}\\ \cmidrule(l){2-7} & \text{MAPE} & \text{MPE} & \text{Time(s)} & \text{MAPE} & \text{MPE} & \text{Time(s)}\\ \midrule MLP & 5.62 & -5.62 & 14 & 4.55 & -4.54 & 25\\ $\text{DNN-W}_3$ & \textbf{2.64} &\text{ }1.61 & 30 & \text{ }2.50 & 1.98 & 56\\ $\text{DNN-W}_4$ & 5.71 & -5.36 & 37 & 5.48 & -5.32 & 72\\ $\text{DNN-W}_5$ & 4.40 & \text{ }1.79 & 38 & 5.98 & 5.45 & 69\\ $\text{DNN-SA}_3$ & 2.97 & \text{ }1.23 & 23 & 2.01 & \text{ }0.74 & 25\\ $\text{DNN-SA}_4$ & 2.88 & \text{ }0.23 & 29 & 2.37 & \text{ }0.79 & 42\\ $\text{DNN-SA}_5$ & 2.92 & \text{ }0.91 & 37 & \textbf{1.84} & \text{ }0.53 & 49\\ RNN & 5.23 & \text{ }0.89 & 174 & 5.13 & -0.37 &
359\\ RNN-LSTM & 5.36 & -1.26 & 880 & 5.27 & -1.17 & 1528\\ CNN-LSTM & 5.74 & -3.85 & 1029 & 6.43 & -5.96 & 1912\\ CNN & 3.15 & -3.53 & 799 & 4.60 & \text{ }4.23 & 1188\\ \bottomrule \end{tabular} \end{table} We first look at the baseline methods (with the exception of MLP) in Table \ref{tab:1}. From the table we see that MLR performs the worst, with a MAPE of 24.25\%, which would indicate that the problem is not linear (see Figure \ref{fig:week_and_weekend}). However, the RT algorithm outperforms the rest of the methods by a noticeable margin. This shows that the problem can be split into discrete segments that accurately forecast the load. This can be confirmed by looking at the load in Figure \ref{fig:week_and_weekend}, where it is clear that, depending on the time of day, there is significant overlap of the value of the load between days. Thus, having a node in the RT determining the time of the day would significantly improve accuracy. The run-time for these algorithms was quite short, with WMA taking the longest due to the cross-validation step, where we evaluated all possible coefficients in steps of 0.05. Due to the typically long running time of DNN architectures, the algorithms were restricted to 200 and 400 epochs. From Table \ref{tab:2}, there is a clear difference between the 200-epoch and the 400-epoch MAPE columns, as most of the algorithms have a lower MAPE after running for 400 epochs than for 200. This is especially true for $\text{DNN-SA}_3$, which saw a significant drop in MAPE. The MLP did not perform the worst at either epoch count, but it was always in the lower half in accuracy. This indicates that the shallow network might not be finding the patterns or structure of the data as quickly as the DNN architectures. However, it outperformed RT at both 200 and 400 epochs. This suggests that the hidden layer helps capture some of the underlying dynamics that an RT cannot.
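For reference, the two error metrics used throughout are straightforward to compute; a minimal numpy sketch (assuming all actual load values are non-zero, as is the case for residential load):

```python
import numpy as np

def mape(y, y_hat):
    # Mean Absolute Percentage Error, in percent
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return 100.0 * np.mean(np.abs(y - y_hat) / y)

def mpe(y, y_hat):
    # Mean Percentage Error, in percent: a positive value means the model
    # under-predicts on average, a negative value means it over-predicts
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return 100.0 * np.mean((y - y_hat) / y)
```

Note that a model can have an MPE near zero while still being inaccurate, since positive and negative errors cancel; this is why both metrics are reported together.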
\begin{table}[!t] \caption{Daily MAPE Values} \label{tab:3} \centering \footnotesize\setlength{\tabcolsep}{3pt} \begin{tabular}{rrrrrrrr} \toprule Algorithm & Sun & Mon & Tue & Wed & Thu & Fri& Sat\\ \midrule WMA & 5.71 & 10.05 & 8.87 & 10.24 & 10.74 & 10.37 & 10.67\\ MLR & 65.46 & 27.61 & 12.55 & 11.39 & 9.01 & 9.38 & 35.59\\ MQR & 1.17 & 11.92 & 9.88 & 14.24 & 14.11 & 17.11 & 13.24\\ RT & 7.45 & 5.99 & 7.63 & 7.37 & 5.98 & 7.26 & 8.87\\ SVR & 20.70 & 12.96 & 10.73 & 11.53 & 11.63 & 10.90 & 17.40\\ MLP & 5.18 & 4.62 & 4.43 & 4.27 & 4.31 & 4.70 & 4.34\\ $\text{DNN-W}_3$ & 2.95 & 1.88 & 2.12 & 2.49 & 2.54 & 2.46 & 3.12\\ $\text{DNN-W}_4$ & 6.67 & 5.45 & 5.25 & 4.88& 4.61 & 5.65 &5.83\\ $\text{DNN-W}_5$ & 7.23 &5.53 & 5.56 & 6.14 & 6.13 & 5.81 & 5.48\\ $\text{DNN-SA}_3$ & 2.29 & 1.84 & 1.76 &1.97& 1.87 & 2.03 & 2.35\\ $\text{DNN-SA}_4$ & 2.67 & 2.19 & 2.00 & 2.14 & 2.27 & 2.55 & 2.82\\ $\text{DNN-SA}_5$ & 2.28 & 1.47 & 1.63 & 1.93 & 1.60 & 1.76 & 2.22\\ RNN & 5.38 & 5.30 & 4.41 & 5.14 & 5.11 & 5.35 & 5.45\\ RNN-LSTM & 4.25 & 4.34 & 4.96 & 4.55 & 5.64 & 6.97 & 6.13\\ CNN-LSTM & 7.79 & 6.86 & 6.04 & 6.05 & 5.65 & 6.44 & 6.21\\ CNN & 6.39 & 4.20 & 4.27 & 3.32 & 3.87 & 4.18 & 5.03\\ \bottomrule \end{tabular} \end{table} Looking at the 200-epoch column, we see that $\text{DNN-W}_3$ performs the best with a MAPE of 2.64\%. On the other hand, the most stable architecture is the DNN-SA, with a MAPE consistently less than 3\%. This robustness is confirmed when the number of epochs is increased to 400, where the DNN-SA architecture outperforms all the other methods (both the baseline and deep methods). The pretraining certainly gave these methods a boost over the other methods, as it guides the learning towards basins of attraction of minima that support better generalization from the training data set \cite{Erhan_2010}. RNNs, and to an extent LSTMs, have an internal state which gives them the ability to exhibit dynamic temporal behavior.
However, they require a much longer time to compute, which is evident in Table \ref{tab:2}; these methods had trouble capturing the underlying dynamics of the data in such a small number of epochs. CNNs do not maintain internal state; however, with load forecasting data one can expect a fair amount of auto-correlation, which requires memory. This could explain their somewhat low but unstable MAPE at 200 and 400 epochs. Taking both tables into consideration, most of the DNN architectures vastly outperform the traditional approaches, but DNNs require significantly more time to run and thus there is a trade-off. For STLF, which is a very dynamic environment, one cannot wait for a new model to complete its training stage. Hence, this is another reason we limited the number of epochs to 200 and 400. Table \ref{tab:2} shows that limiting the number of epochs did not adversely affect many of the DNN architectures, as most were able to surpass the accuracy of the traditional methods (some by a wide margin). When selecting a model, one would have to decide whether the gain in accuracy justifies the longer runtime. \subsection{Daily Analysis} We know that people have different electrical usage patterns on weekdays when compared to weekends. This difference can be seen in Figure \ref{fig:week_and_weekend}, which illustrates usage for a sample home. This household uses more energy during the weekdays than on weekends. There are electrical profiles that may be the opposite, i.e., where the weekend electrical load is higher. Whatever the scenario, there are usually different profiles for weekdays and weekends.
\begin{figure}[!t] \subfloat[Weekday Electrical Usage]{\includegraphics[width=\columnwidth]{usage_week.jpg} \label{fig:week}} \\ \subfloat[Weekend Electrical Usage]{\includegraphics[width=\columnwidth]{weekend_usage.jpg} \label{fig:weekend}} \caption{Electrical Profiles} \label{fig:week_and_weekend} \end{figure} To see how our models handle weekdays and weekends, we calculated the average MAPE for each day of the week in the test set (the 400-epoch models were used for the DNN calculations). The average for each day of the week is tabulated in Table \ref{tab:3}. From the table, it is clear that most of the DNN algorithms have their lowest MAPE during the week. This indicates that weekday patterns are similar to one another and, as a result, there is more data for them. By having more data, DNNs are better able to capture the underlying structure of the data and thus are able to predict the electrical load with greater accuracy. Weekend predictions have a higher MAPE since DNNs require a lot of data to perform accurate predictions, and for weekends this data is limited. The WMA and MQR seem to have their best day on Sunday, but have a very poor MAPE for the rest of the days. This indicates that these models have an internal bias towards Sunday and as a result fail to accurately predict the values for other days. It is clear, again, that DNNs outperform the traditional methods. \subsection{Mean Percentage Error} In this particular domain, an electricity provider will also be interested in changes of electrical load, as opposed to absolute error, in order to adjust generation accordingly, mostly because starting up additional plants takes time. This is why the Mean Percentage Error (MPE) was used. The MPE tells us that a model with a positive value "under-predicts" the load while one with a negative value "over-predicts" the actual value, and the provider can then adjust its operations accordingly. Many of the traditional methods predicted more electrical load than the actual load, including MLP.
However, most of the DNNs under-predicted the load value. Looking at the best model in Table \ref{tab:2}, the DNN-SA MPE values (for 400 epochs) are all positive and under 1\%, which indicates that it slightly under-predicts the load. However, one should not use the MPE alone. An example is the RNN, which has a low positive MPE; however, its MAPE at both epoch counts is around 5\%. This indicates that the RNN's under-predictions slightly outweigh its over-predictions, but its overall accuracy is not as good as that of the other deep architectures. \subsection{Applications to Energy Efficiency} Using the results from STLF (MAPE and MPE), a company can now accurately predict upcoming load. This means that a power generating company can produce a much more precise amount of energy rather than producing excess energy that would be wasted. Since most of these companies use fossil fuels, which are non-renewable sources of energy, we would be conserving them as well as reducing the levels of carbon dioxide and other toxic byproducts of fossil fuels released into the atmosphere. Another benefit of accurate load forecasting is that of dynamic pricing. Many residential customers pay a fixed rate per kilowatt-hour. Dynamic pricing is an approach that allows the cost of electricity to be based on how expensive this electricity is to produce at a given time. The production cost is based on many factors, which in this paper are characterized by the algorithms for STLF. By having a precise forecast of electrical load, companies now have the ability to determine trends, especially at peak times. An example of this would be in the summer months, when many people may want to turn on their air conditioners; electricity then becomes expensive to produce, as the company may have to start up additional power generating plants to meet this load.
If the algorithms predict that there would be this increase in electrical load around the summer months, this would be reflected in the higher price that consumers would need to pay. As a result, most people would not want to keep their air conditioner on all the time (as usual) but would use it only when necessary. Taking this example and adding on washing machines, lights and other appliances, we can see the considerable decrease in energy usage that can be achieved on the consumer side. \section{Related Work} The area of short-term load forecasting (STLF) has been studied for many decades, but deep learning has only recently seen a surge of research into its applications. Significant research has been focused on Recurrent Neural Networks (RNNs). In the thesis by \cite{mishra_08}, RNNs were compared with other methods for STLF. These methods included modifications of the MLP by training with algorithms like Particle Swarm Optimization, Genetic Algorithms and Artificial Immune Systems. Two other notable papers that attempt to apply DNNs to STLF are \cite{Busseti_12} and \cite{Connor_92}. In \cite{Busseti_12}, they compare Deep Feedforward Neural Networks, RNNs and kernelized regression. In the paper by \cite{Connor_92}, an RNN is used for forecasting loads and the result is compared to a Feedforward Neural Network. However, a thorough comparison of various DNN architectures is lacking, and applications to dynamic pricing or energy efficiency are absent. \section{Conclusion} In this paper, we focused on energy wastage in the electrical grid. To achieve this, we first needed an accurate algorithm for STLF. With the advent of many deep learning algorithms, we compared the accuracy of a number of deep learning methods and traditional methods. The results indicate that most DNN architectures achieve greater accuracy than traditional methods, even when the data is split into weekdays and weekends. However, such algorithms have longer runtimes.
We also discussed how these algorithms can have a significant impact in conserving energy at both the producer and consumer levels.
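The complementary roles of the two error metrics used above can be sketched as follows. This is a minimal illustration with the standard definitions of MPE and MAPE and toy numbers of our own, not values from the experiments; with the sign convention used here, a positive MPE indicates net under-prediction, matching the discussion of the DNN results.

```python
import numpy as np

def mpe(actual, predicted):
    # Mean Percentage Error: signed, so under- and over-predictions cancel.
    # Positive values indicate net under-prediction (actual > predicted).
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean((actual - predicted) / actual)

def mape(actual, predicted):
    # Mean Absolute Percentage Error: unsigned measure of overall accuracy.
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs(actual - predicted) / actual)

# Alternating errors largely cancel in MPE but not in MAPE: this is why a
# model can show a low positive MPE yet a MAPE of several percent.
actual = [100.0, 100.0, 100.0, 100.0]
predicted = [95.0, 104.0, 95.0, 104.0]
print(mpe(actual, predicted))   # small positive: slight net under-prediction
print(mape(actual, predicted))  # the true average error magnitude
```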
\section{Introduction } The recognition of interstellar propagation effects in the long-term flux variations of pulsars (Sieber 1982) led to the discovery of a new class of propagation effects in radio astronomy, known as Refractive Interstellar Scintillation (RISS) (Rickett, Coles \& Bourgois 1984). RISS is thought to arise from propagation through electron density inhomogeneities of spatial scales much larger than the Fresnel scale. In contrast to Diffractive Interstellar Scintillation (DISS), applications of RISS extend beyond the field of pulsars, and it is thought to be the cause of low frequency variability (LFV) of compact extragalactic radio sources. Theoretical treatments of the observable consequences of refractive scintillation can be found in Cordes, Pidwerbetsky \& Lovelace (1986) and Romani, Narayan \& Blandford (1986) (also see Rickett 1990; Narayan 1992; Hewish 1992 for recent reviews). Pulsars are excellent objects for studying both DISS and RISS. In addition to the familiar long-term (days to weeks at metre wavelengths) flux variations, refractive scintillation effects are expected to give rise to slow modulations (time scales $ \sim $ days to weeks) of DISS observables such as decorrelation bandwidth (\mbox{$ \nu _d $ }) and scintillation time scale (\mbox{$ \tau _d $~}). Refraction due to large-scale density irregularities also gives rise to organized features such as sloping patterns in pulsar dynamic spectra (e.g. Smith \& Wright 1985; Roberts \& Ables 1982). RISS can also give rise to not so easily observable effects such as variations in the pulse arrival times and angular wandering of the apparent source positions. While DISS effects probe the density irregularities of spatial scales $ \sim $ $ 10^6 $ m to $ 10^8 $ m, RISS effects enable us to probe irregularities of much larger spatial scales ($ \sim $ $ 10^{10} $ m to $ 10^{12} $ m). 
Therefore, simultaneous measurements of DISS and RISS properties are a powerful method of determining the nature of the electron density spectrum over a much wider range than that has been possible by DISS alone. The density fluctuations can be characterized by their spatial wavenumber spectrum, for which there are various potential forms. The spectrum can be taken as an `extended power-law' with the following general form (e.g. Rickett 1990). \begin{equation} {\rm P _{\delta n_e} } (\kappa) ~ = ~ \mbox{${\rm C_n^2}$\,} ~ \left( \kappa ^2 + \mbox{$ {\rm \kappa _{out} ^2 }$\,} \right) ^{-(\alpha/2)} ~ {\rm exp} \left( - { \kappa ^2 \over \mbox{$ {\rm \kappa _{inn} ^2 }$\,} } \right) \end{equation} \noindent where $ \kappa $ is the spatial wavenumber, inversely related to the length scale $s$ (we use the relation $ \kappa = 1 / s $, in accordance with the convention of Armstrong, Rickett \& Spangler 1995), \mbox{$ {\rm \kappa _{inn} }$\,} and \mbox{$ {\rm \kappa _{out} }$\,} correspond to the inner and the outer cut-offs in scale sizes respectively. The amplitude of the spectrum, \mbox{${\rm C_n^2}$\,}, is also known as the strength of turbulence. The line of sight integral of equation (1) is a measure of the rms of electron density fluctuations, $ \delta n_e $. The spectrum can be represented by a simpler form $ {\rm P _{\delta n_e} } (\kappa) ~ = ~ \mbox{${\rm C_n^2}$\,} ~ \kappa ^{-\alpha} $ in the range $ \mbox{$ {\rm \kappa _{out} }$\,} \ll \kappa \ll \mbox{$ {\rm \kappa _{inn} }$\,} $. The possibility that is most commonly discussed in the literature is the density fluctuations describable by a Kolmogorov spectrum, in which case $ \alpha = \mbox{$ { 11 \over 3 } $ } $. We refer to this as hypothesis (I). 
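Equation (1) can be evaluated numerically as a quick sketch. The function name, default amplitude and units below are our own illustrative choices, not the paper's code; the point is that well inside the inertial range ($\mbox{$ {\rm \kappa _{out} }$\,} \ll \kappa \ll \mbox{$ {\rm \kappa _{inn} }$\,}$) the spectrum reduces to the simple power law $ \mbox{${\rm C_n^2}$\,} \kappa^{-\alpha} $.

```python
import numpy as np

def density_spectrum(kappa, cn2=1.0, alpha=11.0 / 3.0, k_out=0.0, k_inn=np.inf):
    # Extended power-law spectrum of eq. (1):
    # P(kappa) = Cn^2 (kappa^2 + k_out^2)^(-alpha/2) exp(-kappa^2 / k_inn^2)
    # Defaults (no cutoffs, Kolmogorov alpha = 11/3) give hypothesis (IA).
    return cn2 * (kappa**2 + k_out**2) ** (-alpha / 2.0) * np.exp(-((kappa / k_inn) ** 2))

# In the inertial range, doubling kappa scales P by 2^(-11/3) for the
# Kolmogorov index, i.e. the simple power-law form quoted in the text.
p1 = density_spectrum(1e-10)
p2 = density_spectrum(2e-10)
```

Setting a finite `k_out` flattens the spectrum at small wavenumbers, and a finite `k_inn` imposes the exponential inner-scale rolloff, which is how the type IB spectrum of \S 1 differs from type IA.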
Two possible subsets of this hypothesis that may be relevant are: (IA) the cutoffs are not important ($ie,$ an inner scale much smaller than the smallest scale that influences the scintillation and an outer scale much larger than the largest scale sampled by the observations), and (IB) the spectrum is truncated at a large inner scale (say, intermediate between diffractive and refractive scales). The second possibility is that the spectrum is a simple power-law (similar to IA), but with $ \alpha > \mbox{$ { 11 \over 3 } $ } $. We will call this hypothesis (II). The third possibility is that the density fluctuations are $ not $ describable by a simple power-law form over the entire range of spatial scales of interest: this can lead to a multi-component, ``piece-wise power-law'' form (hypothesis IIIA) or a single power-law with an additional ``bump'' (hypothesis IIIB) at wavenumbers corresponding to density structures that are not part of the power-law process. The fourth possibility is that the random medium (say, described by one of the aforementioned hypotheses) has deterministic structures superposed, in which case a power spectral description may not be adequate. We will call this hypothesis (IV). The exact form of the spectrum, especially the validity of a simple power-law description and the existence of inner and/or outer cutoff(s), is still a matter of research. In this paper, our main goal is to discriminate between the above hypotheses using the data from our long-term pulsar observations. Observable effects of RISS are considered to be powerful techniques for discriminating between different kinds of density spectra proposed for the ISM (e.g. Hewish, Wolszczan \& Graham 1985; Rickett 1990). Refractive modulation characteristics such as depths of modulations of DISS observables (\mbox{$ \nu _d $ } and \mbox{$ \tau _d $~}), flux and drift slope are all thought to be highly sensitive to the spectral form (e.g. Romani et al. 1986).
But not all possible models indicated by hypotheses (I)$-$(IV) have been analyzed in detail. The $pure$ Kolmogorov form (type IA) is the best analyzed of all, and there are well defined predictions for the magnitudes of observable effects. The type II spectra have also been analyzed to some extent (e.g. Blandford \& Narayan 1985; Romani et al. 1986). Three specific cases for which detailed analysis can be found in the literature are (i) $ \alpha = \mbox{$ { 11 \over 3 } $ } $, (ii) $ \alpha = 4 $ (`critical' spectrum), and (iii) $ \alpha = 4.3 $. The effects analyzed include depths of modulations and time scales of fluctuations for different observables, refractive angle perturbations, scaling laws and cross-correlation properties between the fluctuations. While models based on hypothesis (IA) predict small-amplitude fluctuations of DISS observables and flux, and small refractive perturbations ($ \mbox{${\rm \theta _{ref} }$\,} < \mbox{$ {\rm \theta _{diff} } $ } $), those based on hypothesis (II) can allow much larger fluctuations and large refractive angles ($ \mbox{${\rm \theta _{ref} }$\,} \ \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{>}{\sim}$\,}} \ \mbox{$ {\rm \theta _{diff} } $ } $) (Romani et al. 1986; Hewish et al. 1985). It has also been recognized that type IB spectrum, which has been suggested as an alternative to type II spectrum, can cause large-amplitude flux modulations (Coles et al. 1987; Goodman et al. 1987). Effects (particularly perturbations on DISS observables) due to spectra grouped under hypothesis (III) have not been formally analyzed. Observations such as periodicities seen in pulsar dynamic spectra and ``extreme scattering events'' (ESE) seen towards some quasars and the millisecond pulsar PSR B1937+21 suggest that models based on hypothesis (IV) are also relevant (e.g. Cordes \& Wolszczan 1986; Fiedler et al. 1987; Cognard et al. 1993). 
Observationally, there are a number of ways of investigating the nature of the density fluctuation spectrum. But there have been conflicting interpretations from different measurements. The results from methods such as frequency scaling of decorrelation bandwidth (Cordes, Weisberg \& Boriakoff 1985), frequency drifts in dynamic spectra (e.g. Smith \& Wright 1985) and VLBI observations (Gwinn et al. 1988a, 1988b), are consistent with the Kolmogorov spectrum. On the other hand, based on a study of dispersion measure (DM) variability, Phillips \& Wolszczan (1991) find the spectrum to be much steeper ($\mbox{$ {\rm \langle \alpha \rangle } $ } \approx 3.84 \pm 0.02$). Backer et al. (1993) studied pulsars with comparatively larger DMs, and argue that DM variations are caused by density fluctuations unrelated to those responsible for DISS. Contradictory results have come from flux monitoring observations of pulsars (Kaspi \& Stinebring 1992; Gupta, Rickett \& Coles 1993; LaBrecque, Rankin \& Cordes 1994). Observations of dynamic spectra at 408 MHz (for six pulsars) by Gupta, Rickett \& Lyne (1994) show the modulations of decorrelation bandwidth to be larger than the Kolmogorov expectations, but Cordes et al. (1990) find the modulations of PSR B1937+21 to be consistent with the Kolmogorov predictions. Further, observations of quasi-periodic patterns in dynamic spectra (e.g. Cordes \& Wolszczan 1986) and detections of ESEs towards some quasars (Fiedler et al. 1987, 1994) go against the Kolmogorov expectations. Attempts have also been made to construct a {\it composite} spectrum by combining a variety of scintillation measurements from different observations and for various pulsars and radio sources. The most recent study by Armstrong et al. (1995) finds the {\it average} spectral index to be $\approx$ 3.7 over the wavenumber range $ 10 ^{-13} - 10 ^{-8} \ {\rm m ^{-1}} $ for pulsars within 1 kpc. 
They also find that, when combined with non-ISS measurements, the spectrum has an {\it approximately} power-law form between $ 10 ^{-18} $ $ {\rm m ^{-1}} $ and $ 10 ^{-6} $ $ {\rm m ^{-1}} $. Though this result is interesting, there is enough evidence in the literature suggesting that the distribution of scattering material within a region of 1 kpc around the Sun is not homogeneous (Bhat, Gupta \& Rao 1997, 1998). Furthermore, evidence for inadequacies of such a model can be seen in the composite spectrum of Armstrong et al. (1995), in which the estimated power levels are discrepant from the power-law expectations at several places. It is also possible that the nature of the density spectrum varies with the direction ($l,b$) and the location in the Galaxy, but this aspect has not been systematically investigated so far. Observational data from a systematic long-term study of diffractive and refractive scintillations of 18 pulsars have been presented in our previous paper (Bhat, Rao \& Gupta 1998a, hereinafter referred to as Paper I). These observations have yielded fairly accurate estimates of properties of observables such as decorrelation bandwidth (\mbox{$ \nu _d $ }), scintillation time scale (\mbox{$ \tau _d $~}), flux density (F), and drift rate of patterns (\mbox{ $ d t / d \nu $\,}). Though our sample consists of mostly nearby pulsars (distance \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{<}{\sim}$\,}} 1 kpc), it provides reasonably uniform coverage in $(l,b)$, DM and distance, and forms a more or less unbiased sample. Furthermore, we have made simultaneous measurements of DISS and RISS properties from our data, thereby reducing the possibility of observational bias. In this paper, we study two important and easily observable effects accessible with our data, $viz,$ (i) modulations of DISS observables and flux density, and (ii) drifting of intensity patterns.
We examine the conformity of our data with the available quantitative predictions (for type IA and II spectra), and discuss the possible implications of our results for the form of the density spectrum. The remainder of this paper is organized as follows. In \S 2.1, we present our measurements of modulation indices (of \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F) and compare them with the theoretical predictions based on power-law models. Then we describe the estimation of statistical properties of diffractive and refractive angles with which we estimate a slope parameter, indicative of the relative power level enhancement at large scales (\S 2.2). This is followed by a discussion on persistent drifting features seen for some pulsars (\S 2.3), which suggest the presence of discrete structures, at least along some lines of sight. In \S 3, we discuss the significance of various results from our observations and several others from the published literature, and study the implications for the nature of the density spectrum. Our conclusions are presented in \S 4. \section{The Observational Data and Results} The observational data used here come from an extensive series of scintillation measurements of 18 pulsars in the DM range $3-35$ $ {\rm pc ~ cm ^ {-3} } $ made using the Ooty Radio Telescope (ORT) at 327 MHz over a three-year period from 1993 to 1995. Pulsars and the periods of observation are tabulated in columns (2) and (3) of Table 3. The dynamic scintillation spectra of these pulsars were obtained at $ \sim $ 10$-$90 epochs spanning periods ranging from $ \sim $ 100 days to $ \sim $ 1000 days. Columns (4) and (5) in Table 3 give the number of epochs of observation (\mbox{ $ {\rm N_{ep} } $ }) and the time span of observation (\mbox {$ {\rm T_{sp} } $ }) respectively. The observations and the analysis methods used have been described in Paper I. 
The observations were carried out in four well-separated sessions, each extending over a period of $ \sim $ 100 days, in which 6 to 8 pulsars were regularly monitored for their dynamic spectra at intervals of 1$-$2 days typically. Four pulsars $-$ PSRs B0823+26, B0834+06, B1133+16 and B1919+21 $-$ were followed up for multiple observing sessions: PSRs B0823+26 and B1919+21 for 2 sessions, PSR B1133+16 for 3 sessions, and PSR B0834+06 for all 4 sessions. The symbols I$-$IV, when attached to these pulsar names, indicate the data from a particular session (see Tables 1 and 2 of Paper I for more details). Most of the basic diffractive scintillation results have been presented in Paper I, including the time series of decorrelation bandwidth (\mbox{$ \nu _d $ }), scintillation time scale (\mbox{$ \tau _d $~}), drift slope of intensity patterns (\mbox{ $ d t / d \nu $\,}) and pulsar flux density (F) (Figs. 4(a)$-$(x) of Paper I). In this paper, we start with these time series and study their implications for the spectrum of the electron density fluctuations in the ISM. \subsection{Refractive Modulations of Diffractive Scintillation Observables and Pulsar Flux Density} \subsubsection{Predictions from Theoretical Models} Due to refractive scintillation effects, measurable quantities such as \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~}, \mbox{ $ d t / d \nu $\,} and F are expected to fluctuate with time. Several authors have addressed the theory of refractive effects in pulsar scintillation (e.g. Cordes et al. 1986; Romani et al. 1986), but only a few attempts have been made so far at measuring and verifying them. The observable effects of RISS are thought to be highly sensitive to the form of the density spectrum, more specifically the relative power level enhancement at large scales compared to that at small scales. Romani et al.
(1986) have worked out the theory for refractive effects due to simple power-law forms of density spectra with different spectral indices (covered under hypotheses IA and II of \S 1). No explicit predictions are available at present for other kinds of spectra, such as those covered under hypotheses (III) and (IV) of \S 1. Considering specific cases of $ \alpha $ = \mbox{$ { 11 \over 3 } $ } (Kolmogorov spectrum), $\alpha$ = 4 (`critical' spectrum), and $ \alpha $ = 4.3 (`steep' spectrum), Romani et al. (1986) find that the depth of modulation is the lowest for a Kolmogorov density spectrum, and increases for larger values of $ \alpha $. In the simplest scattering geometry of a thin screen located midway between pulsar and observer, the magnitude of fluctuations of the quantities \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F will depend on (i) the strength of scattering (\mbox{${\rm C_n^2}$\,}), (ii) observing wavelength (\mbox{${\rm \lambda _{obs}}$ }), and (iii) distance to the pulsar (D), when the distribution of density irregularities follows a Kolmogorov form of power spectrum (hypothesis IA). In contrast, the fluctuations are expected to be insensitive to these parameters for $ \alpha $ =4 and $ \alpha $ = 4.3 spectra (hypothesis II). Using the expressions given by Romani et al. 
(1986), for the $ \alpha = \mbox{$ { 11 \over 3 } $ } $ spectrum, the modulation indices of decorrelation bandwidth ($ m_b $), scintillation time scale ($ m_t $) and flux density ($ m_r $) are given by \begin{equation} m_b ~ \approx ~ {\rm 9.8 \times 10^{-2} \times \left( C_n^2 \right) ^{-0.2} ~ (\lambda _{obs,m}) ^{-0.6} ~ (D_{kpc}) ^{-0.4} } \end{equation} \begin{equation} m_t ~ \approx ~ {\rm 4.8 \times 10^{-2} \times \left( C_n^2 \right) ^{-0.2} ~ (\lambda _{obs,m}) ^{-0.6} ~ (D_{kpc}) ^{-0.37} } \end{equation} \begin{equation} m_r ~ \approx ~ {\rm 1.2 \times 10^{-1} \times ( C_n^2 ) ^{-0.12} ~ (\lambda _{obs,m}) ^{-0.57} ~ (D_{kpc}) ^{-0.37} } \end{equation} \noindent where $ C_n^2 $ is expressed in $ {\rm 10^{-4} ~ m ^{-20/3} } $. Using these expressions, we obtain the predicted estimates for $ m_b $, $ m_t $ and $ m_r $ for the pulsars in our data set. These are given in columns (4), (5) and (6) of Table 1. For \mbox{${\rm C_n^2}$\,} values, we use our results given in Paper I. For pulsars with multiple observing sessions, the average of \mbox{${\rm C_n^2}$\,} estimates from different sessions is used. Distance estimates used (given in column 3 of Table 1) are based on the model for electron density distribution given by Taylor \& Cordes (1993), except for PSR B0823+26, for which we use the independent distance estimate from parallax measurements (Gwinn et al. 1986). The approximate predictions for `steeper' spectra $-$ $ \alpha = 4 $ and $ \alpha = 4.3 $ $-$ are listed in Table 2 (see Romani et al. (1986) for details). \subsubsection{Results $-$ Modulation indices of \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F} From the time series presented in Figs. 4(a)$-$4(x) in Paper I, we estimate the modulation indices of \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F.
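The thin-screen Kolmogorov predictions of eqs. (2)$-$(4) can be sketched numerically as below. The helper name and argument conventions are our own; $ C_n^2 $ is in units of $ 10^{-4} ~ {\rm m^{-20/3}} $, the wavelength in metres, and the distance in kpc, as in the text. At the 327 MHz observing frequency, $ \lambda_{\rm obs} \approx 0.92 $ m.

```python
def predicted_modulation_indices(cn2, lam_m, d_kpc):
    # Thin-screen, alpha = 11/3 predictions (Romani et al. 1986):
    m_b = 9.8e-2 * cn2**-0.2 * lam_m**-0.6 * d_kpc**-0.4     # eq. (2)
    m_t = 4.8e-2 * cn2**-0.2 * lam_m**-0.6 * d_kpc**-0.37    # eq. (3)
    m_r = 1.2e-1 * cn2**-0.12 * lam_m**-0.57 * d_kpc**-0.37  # eq. (4)
    return m_b, m_t, m_r

# Nominal cn2 = 1 (i.e. 1e-4 m^(-20/3)) and D = 1 kpc at 327 MHz: the
# predicted indices are of order 0.05-0.13, i.e. the small modulations
# expected under hypothesis (IA).
m_b, m_t, m_r = predicted_modulation_indices(1.0, 0.92, 1.0)
```

Note the weak negative dependence on $ C_n^2 $, $ \lambda $ and D: stronger scattering and larger distances quench the predicted refractive modulations, which is why the Kolmogorov predictions in Table 1 are small.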
The rms fluctuation of any of these quantities (say \mbox{$ \nu _d $ }) is given by \begin{equation} \Delta \mbox{$ \nu _d $ } \approx \left[ { 1 \over \mbox{ $ {\rm N_{ep} } $ } } \sum _{i=1} ^{i=\mbox{ $ {\rm N_{ep} } $ }} \left( \mbox{ $ \nu _{d,i} $ } - \mbox{$ \langle \nu _d \rangle $\,} \right) ^2 \right] ^{0.5} \end{equation} \noindent where \mbox{ $ {\rm N_{ep} } $ } is the number of epochs of observations and \mbox{ $ \nu _{d,i} $ } denotes the measurement at the $i^{th}$ epoch. The modulation indices $ m_b $, $ m_t $ and $ m_r $, which are the fractional rms fluctuations of \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F respectively, are given by \begin{equation} m_b = { \Delta \mbox{$ \nu _d $ } \over \mbox{$ \langle \nu _d \rangle $\,} } \hspace{1.0cm} m_t = { \Delta \mbox{$ \tau _d $~} \over \mbox{ $ \langle \tau _d \rangle $\,} } \hspace{1.0cm} m_r = {\rm { \Delta F \over \mbox{ $ \langle \rm F \rangle $\,} } } \end{equation} \noindent where \mbox{$ \langle \nu _d \rangle $\,}, \mbox{ $ \langle \tau _d \rangle $\,} and \mbox{ $ \langle \rm F \rangle $\,} represent the average estimates. We use the values \mbox{ $ \nu _{d,g} $\,} and \mbox{ $ \tau _{d,g} $\,} obtained from the Global Auto-covariance Function (GACF) method described in Paper I as estimators for \mbox{$ \langle \nu _d \rangle $\,} and \mbox{ $ \langle \tau _d \rangle $\,}. The mean flux density, \mbox{ $ \langle \rm F \rangle $\,}, is computed from the time series directly. Before seriously interpreting our results in terms of refractive scintillation, one needs to examine (i) the statistical reliability of the data, and (ii) various possible reasons (other than RISS) for the fluctuations of the quantities. In addition to the number of measurements (\mbox{ $ {\rm N_{ep} } $ }), the number of refractive cycles spanned (\mbox{ $ {\rm N_{ref} } $ }) also determines the statistical quality of our data.
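The estimator of eqs. (5)$-$(6) amounts to a fractional rms, and can be sketched as follows (the function name is ours; the optional `mean` argument stands in for substituting a GACF-based average, as done in Paper I for \mbox{$ \nu _d $ } and \mbox{$ \tau _d $~}, in place of the plain sample mean):

```python
import numpy as np

def modulation_index(series, mean=None):
    # Fractional rms fluctuation of a scintillation time series.
    x = np.asarray(series, float)
    if mean is None:
        mean = x.mean()  # for flux density, the plain time-series mean
    rms = np.sqrt(np.mean((x - mean) ** 2))  # eq. (5), normalised by N_ep
    return rms / mean                        # eq. (6)
```

Note that eq. (5) divides by \mbox{ $ {\rm N_{ep} } $ } rather than $ \mbox{ $ {\rm N_{ep} } $ } - 1 $, i.e. it is the population rather than the sample rms; for the \mbox{ $ {\rm N_{ep} } $ } values here ($\sim$ 10$-$90) the difference is small.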
On the basis of \mbox{ $ {\rm N_{ep} } $ } and \mbox{ $ {\rm N_{ref} } $ }, we have divided our data into 3 broad categories, the details of which are given in Appendix A and the results are summarized in columns (8) and (9) of Table 3. The statistical reliability is best for data belonging to category ``A'' and reasonably good for those in category ``B''. The data which fall in either of the ``C'' categories are considered to be of poor statistical quality and are not taken seriously in our comparison with the predictions. The measured modulation indices range from 0.17 to 0.50 for \mbox{$ \nu _d $ }, 0.13 to 0.49 for \mbox{$ \tau _d $~}, and 0.21 to 0.69 for F. In general, values of $ m_t $ are comparatively smaller than those of $ m_b $ and $ m_r $, and this is in qualitative agreement with the predictions given in Tables 1 and 2. A visual examination of the time series $-$ Figs. 4(a)$-$(x) of Paper I $-$ shows that the observed fluctuations are generally random, but there are a few exceptions where some systematic trends can be seen over the time span of observation. A closer inspection of the time series of \mbox{$ \tau _d $~} measurements of PSR B2327$-$20 (Fig. 4(x) of Paper I) reveals a systematic downward trend where \mbox{$ \tau _d $~} changes from $ \sim $ 1000 sec to $ \sim $ 200 sec over a span of 65 days. This is responsible for a substantially large value of $ m_t $ ($ 0.49 \pm 0.03 $) for this pulsar compared to the rest. Excluding this outlier case, and also the data with poor statistical reliability, we find that $ m_t $ values range from 0.13 to 0.31. For PSRs B1604$-$00 and B2016+28, some systematic trends are evident in their flux density time series. The modulation indices are, however, not significantly higher than those of the rest. Excluding the data with poor statistical quality, we find $ m_r $ values ranging from 0.23 to 0.57. 
The global average modulation indices are 0.36 for \mbox{$ \nu _d $ }, 0.19 for \mbox{$ \tau _d $~} and 0.45 for F. There are various sources of errors and non-ISS effects that contribute to the observed modulation index. These include (a) measurement noise, \mbox{$\sigma _{meas}$\,} (applicable for all the 3 quantities), (b) effect of variable Faraday rotation on flux density modulations, and (c) effect of Earth's orbital motion on modulations of scintillation time scale. A detailed treatment of these noise sources is presented in Appendix B and the estimates of their contributions are summarized in Tables 4, 5 and 6. The modulation indices due to the measurement noise are typically 0.1 and hence their contribution to the measured modulation indices is only marginal for most of the data sets (see Table 4). The noise-corrected modulation indices of \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F are given in columns (6), (7) and (8) respectively of Table 4. Further, as can be seen from Tables 5 and 6, the effects of variable Faraday rotation and the Earth's orbital motion are significant only for a few pulsars. \subsubsection{Comparison with the Predictions} Taking into consideration various sources of errors and non-ISS effects discussed in Appendix B, and eliminating the data where such effects are found to be significant, we compare the measured and predicted values of modulation indices. Here we confine ourselves to the time series of 18 data sets where the statistical reliability is reasonable. We also exclude the \mbox{$ \tau _d $~} modulation indices of \mbox{$ {\rm PSR ~ B0823+26(II) } $ } (due to $ \mbox{${ m_{noise} }$\,} \approx m_t $), PSR B1604$-$00 (due to $ \mbox{ $ {\rm { \delta t _{vobs} } } $\,} \approx m_t $ and $ \mbox{ $ {\rm { \delta t _{virr} } } $\,} \approx m_t $) and PSR B2327$-$20 (due to the presence of a systematic trend in the time series) from the present comparison.
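A noise correction of the kind tabulated in Table 4 can be sketched as below. This assumes, as is conventional for independent fluctuations, that the measured and noise variances add in quadrature, so $ m_{\rm true}^2 = m_{\rm obs}^2 - m_{\rm noise}^2 $; the paper's actual treatment of the noise sources is the one given in its Appendix B, and this helper is only our illustration.

```python
import math

def noise_corrected_index(m_obs, m_noise):
    # Remove a noise contribution from a measured modulation index,
    # assuming independent fluctuations add in quadrature (our assumption;
    # see Appendix B of the paper for the full treatment).
    if m_noise >= m_obs:
        return 0.0  # noise alone can account for the measured fluctuation
    return math.sqrt(m_obs**2 - m_noise**2)
```

With a typical noise index of 0.1, a measured index of 0.36 is reduced only to about 0.35, which is why the measurement-noise contribution is described as marginal for most data sets.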
In general, we find most of the modulation indices (of \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F) to be considerably larger than the Kolmogorov predictions given in Table 1, but there are a few exceptions. The details are as follows: (i) Flux density: There is no pulsar for which the measured value of $ m_r $ is in agreement with the predicted value given in Table 1. While 12 of the measurements lie within the range between the predictions for $ \alpha = 4 $ and $ \alpha = 4.3 $ spectra (Table 2), the remaining 6 measurements are consistent with $ \mbox{$ { 11 \over 3 } $ } < \alpha < 4 $. (ii) Decorrelation bandwidth: There is only one measurement of $ m_b $ (\mbox{$ {\rm PSR ~ B1919+21(I) } $ }) which agrees with the prediction given in Table 1. Among the rest, 14 range between the predictions for $ \alpha $ = 4 and $ \alpha $ = 4.3 (Table 2), whilst 3 are between the predictions for $ \alpha = \mbox{$ { 11 \over 3 } $ } $ and $\alpha = 4$. (iii) Scintillation time scale: Only two measurements of $ m_t $ (\mbox{$ {\rm PSR ~ B0834+06(II) } $ } and \mbox{$ {\rm PSR ~ B0834+06(III) } $ }) agree with the predictions in Table 1. The rest of the values range between the predictions for $\alpha = \mbox{$ { 11 \over 3 } $ }$ and $ \alpha $ = 4.3. Though scattering from a thin screen is taken to be a good approximation in explaining diffractive scintillation phenomena, refractive effects may differ significantly depending on the scattering geometry considered. There exist only a few theoretical treatments investigating refractive effects under more complex scattering geometries such as one or more thick screens or an extended medium. Romani et al. (1986) find that such scenarios will give rise to larger flux modulations than those caused by a thin screen model.
According to the authors, if the scattering is uniformly distributed along the line-of-sight, the flux fluctuations will be larger by a factor of 2.3 compared to the thin screen, in the case of a Kolmogorov form of spectrum ($ie,$ hypothesis IA of \S 1). Coles et al. (1987) have also investigated the flux modulations for an extended medium, and their predicted modulation indices are comparable to those of Romani et al. (1986). Similar estimates, however, do not exist for the rest of the observables of interest. On comparing the observed flux modulation indices with the predictions, we find 9 of the measurements to be in reasonable agreement with their predicted values. For \mbox{$ {\rm PSR ~ B1133+16(II) } $ }, the measured value ($ m_r $ = 0.21) is much below the prediction (0.35). For the rest of the data (9 measurements), the observed values are substantially larger than the predictions. According to Romani et al. (1986), for a spectrum with $\alpha = 4$, the flux modulation index for an extended medium is expected to be $\sqrt{3}$ times larger than that for a thin screen model. This would mean $ m_r $ $\approx$ 0.66, much above any of the measured values. Thus, excluding one measurement, the observed flux modulations are consistent with $\mbox{$ { 11 \over 3 } $ } \le \alpha < 4 $, if one considers the scattering material to be uniformly distributed along the line of sight. Our analysis shows that the modulations due to various sources of errors and non-ISS effects (Appendices A and B) can be completely ignored for \mbox{$ {\rm PSR ~ B0834+06(IV) } $ } and PSR B0919+06. These data are characterized by reasonably good statistical reliability, both in terms of number of measurements and number of refractive cycles of fluctuations (Table 3). The contributions to the modulation indices (of \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F) due to the measurement noise are only marginal for these data.
Also, flux modulations due to Faraday rotation effects are negligible for \mbox{$ {\rm PSR ~ B0834+06(IV) } $ } ($ \mbox{${ m_{r,pol} }$\,} \approx 0.01 $) and marginal for PSR B0919+06 ($ \mbox{${ m_{r,pol} }$\,} \approx 0.14 $). Further, effects due to the Earth's orbital motion (\mbox{ $ {\rm { \delta t _{vobs} } } $\,}) and the motion of the density irregularities (\mbox{ $ {\rm { \delta t _{virr} } } $\,}) can be ignored for PSR B0919+06 and are only marginal for \mbox{$ {\rm PSR ~ B0834+06(IV) } $ } (a worst case reduction of 0.02 in $ m_t $ on accounting for both the effects). In addition, the data are free from effects such as persistent drift slopes and systematic trends in the time series. The results show that the modulation indices of all the 3 quantities are considerably above the Kolmogorov predictions. The measurements of $ m_b $ are consistent with $ \mbox{$ { 11 \over 3 } $ } < \alpha \le 4 $ and that of $ m_t $ with $ \alpha \approx 4 $, whereas those of $ m_r $ require $ 4 \le \alpha < 4.3 $, as far as the predictions of a thin screen model are concerned. If one considers an extended medium, there is agreement between the measured and predicted values of $ m_r $ for \mbox{$ {\rm PSR ~ B0834+06(IV) } $ } (0.33 versus 0.32), but for PSR B0919+06, the measured value is considerably larger than the prediction (0.46 versus 0.28). In summary, the measured modulation indices of \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F are found to be considerably larger than the predictions of a thin screen model with a Kolmogorov form of density spectrum (hypothesis IA of \S 1). Effects due to various sources of noise involved in the measurements are insignificant except for a few values. Further, modulations due to non-ISS effects such as variable Faraday rotation and the Earth's orbital motion can be ignored for most of the data. 
If we compare the flux modulation indices with the predictions of a uniform scattering medium along the line-of-sight, roughly half of the measurements agree with the predicted values. However, similar predictions are not available at present for the modulation indices of \mbox{$ \nu _d $ } and \mbox{$ \tau _d $~}. Clearly, theories based on a thin screen model and hypothesis (IA) are inadequate to account for the results from the present observations. \subsection{Drifting of Intensity Patterns in Dynamic Spectra} Another important observable effect of refractive scintillation which can be studied using our data is drifting bands in dynamic spectra. The problem was first addressed by Shishov (1973), and subsequently by Hewish (1980). The basic picture is that the diffraction by small-scale irregularities (typically $ 10^6 $ m to $ 10^8 $ m) results in an angular spectrum of scattered rays, and its mean direction of arrival is modified by the refraction through large-scale irregularities (typically $ 10^{10} $ m to $ 10^{12} $ m). The bending angle, \mbox{${\rm \theta _{ref} }$\,}, usually known as `refractive scattering angle', is determined by the phase gradient due to large-scale density irregularities, and given by \begin{equation} \mbox{${\rm \theta _{ref} }$\,} ~ = ~ \left( { { \lambda \over 2 \ \pi } \nabla \mbox{$ {\rm \phi _{ref} } $\,} } \right) \propto \left( { \partial \mbox{$ n_e $\,} \over \partial r } \right) \end{equation} \noindent where $ \lambda $ is the observing wavelength, $r$ is the transverse dimension, and \mbox{$ {\rm \phi _{ref} } $\,} is the slowly varying phase component (sometimes known as ``refractive phase'') and \mbox{$ n_e $\,} is the electron number density. The intensity patterns at the observing plane are displaced by X $ \sim $ Z\mbox{${\rm \theta _{ref} }$\,}, where Z is the distance to the phase screen. 
The frequency dependence of the refraction angle (\mbox{${\rm \theta _{ref} }$\,} $\propto$ $\nu ^{-2}$) results in varying magnitudes of displacements for patterns at different frequencies. In the presence of a relative motion between the pulsar and the observer, these displacements cause intensity peaks at different frequencies to arrive at progressively increasing delays and hence appear as sloping patterns in the dynamic spectra. Elaborating on the analytical treatments given by Shishov (1973), Hewish (1980) showed that the drift slope, \mbox{ $ d \nu / d t $ }, can be related to the refractive steering angle, $ \mbox{${\rm \theta _{ref} }$\,} $, through the following expression \begin{equation} { d t \over d \nu } ~ = ~ { {\rm D} \ \mbox{${\rm \theta _{ref} }$\,} \over \mbox{ $ V_{iss} $ } \mbox{$ f_{obs} $\,} } \end{equation} \noindent where \mbox{ $ V_{iss} $ } is the speed of scintillation patterns, \mbox{$ f_{obs} $\,} is the frequency of observation, and D is the separation between the source and the observer. The above expression is for a thin screen geometry, with the screen located midway between the source and the observer (D=2Z). In characterizing the drifting features in our data, we prefer \mbox{ $ d t / d \nu $\,} over \mbox{ $ d \nu / d t $ }. Justification for this choice and our definition of drift slope is described in Paper I. Drifting of intensity patterns is extensively seen in our data (e.g. Figs. 1(a)$-$(h) of Paper I). The property is highly pronounced for PSRs B0834+06, B1133+16, B1919+21 and B2045$-$16. The measured values of \mbox{ $ d t / d \nu $\,} (see Figs. 4(a)$-$(x) of Paper I) range from $ \sim $ 0.05 $ {\rm secs ~ kHz ^ {-1} } $ (e.g. PSRs B1133+16, B1237+25) to a few $ {\rm secs ~ kHz ^ {-1} } $ (e.g. PSRs B1604$-$00, B2327$-$20). Also, several pulsars show gradual and systematic variations of drift slopes, along with a number of sign reversals, during the time span of observations.
The data of PSRs B0823+26 and B0919+06 are good examples illustrating this property. In general, drift slopes are found to vary over time scales comparable to those of \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F, but our data are not sampled regularly enough to obtain robust estimates of these time scales. The drift slope averaged over all the measurements of a given data set, \mbox{ $ \langle d t / d \nu \rangle $\,}, and the rms fluctuation, \mbox{ $ \delta ( d t / d \nu ) $\,}, are computed for each data set, and are given in columns (3) and (4) of Table 7. For several pulsars, values of \mbox{ $ \langle d t / d \nu \rangle $\,} are found to be quite close to zero. Some pulsars, especially those characterized by few or no slope reversals, show significantly large values of average drift slopes. \subsubsection{Pattern Drifts and Decorrelation Bandwidth} Refractive effects, such as those which produce drifting intensity patterns, are expected to affect the estimation of the decorrelation bandwidth (e.g. Gupta et al. 1994; Cordes et al. 1986). Usually, the decorrelation bandwidth is measured as the half-width at half-maximum along the frequency lag axis of the 2-D ACF of the dynamic spectrum. In the absence of refractive bending, this method correctly estimates the true decorrelation bandwidth produced by DISS, as intensity patterns at different frequencies are aligned in time. However, in the presence of significant refractive bending, this method will always underestimate the decorrelation bandwidth as the drifting patterns are no longer aligned in time. Under such conditions, a better estimate can be obtained by measuring the half-power bandwidth along the direction of the drift slope rather than along the frequency lag axis. For the case where the direction of the shift of intensity patterns is aligned with the direction of the scintillation velocity (\mbox{ $ V_{iss} $ }), the new technique will result in complete correction of the \mbox{$ \nu _d $ } value. 
Clearly, the effectiveness of this technique decreases as the angle between the phase gradient and \mbox{ $ V_{iss} $ } increases from $0^o$ to $90^o$. Thus we define a new estimator for the decorrelation bandwidth, which we refer to as the ``drift-corrected decorrelation bandwidth'', \mbox{ $ \nu _{d_c} $\,}, which is the frequency lag corresponding to the point on the half-maximum contour of the ACF that is furthest from the time lag axis. In terms of the parameters \mbox{$ { {\rm C_1 } } $\,} , \mbox{$ { {\rm C_2 } } $\,} and \mbox{$ { {\rm C_3 } } $\,} describing the model Gaussian (eq. [2] of Paper I) fitted to the ACF, \mbox{ $ \nu _{d_c} $\,} can be expressed as \begin{equation} \mbox{ $ \nu _{d_c} $\,} ~ = ~ {\rm \left( ln ~ 2 \right) ^{0.5} ~ \left( C_1 ~ - ~ { C_2^2 \over 4 ~ C_3 } \right) ^{-0.5} } . \end{equation} \noindent The time series of corrected decorrelation bandwidths obtained in this manner are shown in Figs. 1(a)$-$(x) of this paper. These can be compared with the `instantaneous' or `apparent' decorrelation bandwidth, \mbox{$ \nu _d $ } $-$ shown in Fig. 4 of Paper I $-$ to obtain some idea about the reduction in decorrelation bandwidth due to refractive effects. We also compute the statistical properties of \mbox{ $ \nu _{d_c} $\,}, such as its average value, \mbox{$ \langle \nu _{d_c} \rangle $\,}, and fractional rms fluctuation, \mbox{${ m_{b,c} }$ }, which are given in columns (6) and (7) of Table 7. Columns (8) and (9) of Table 7 give the noise modulation indices and the noise corrected values of \mbox{${ m_{b,c} }$ }, analogous to columns (3) and (6) of Table 4. Here we briefly summarize the general characteristics of the corrected bandwidth. For most pulsars, its average value is larger than that of the traditional decorrelation bandwidth (i.e., $ \mbox{$ \langle \nu _{d_c} \rangle $\,} > \mbox{$ \langle \nu _d \rangle $\,} $). 
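A minimal sketch of the two estimators, assuming a Gaussian ACF model of the form $\rho(\nu,\tau) = \exp[-(C_1\nu^2 + C_2\nu\tau + C_3\tau^2)]$ (consistent with the expression for \mbox{ $ \nu _{d_c} $\,} above; the sample parameter values are hypothetical): the traditional \mbox{$ \nu _d $ } is the half-width at half-maximum at zero time lag, while \mbox{ $ \nu _{d_c} $\,} picks the point on the half-maximum contour furthest from the time-lag axis, so the corrected value is never smaller.

```python
import math

def nu_d(c1):
    """Traditional estimator: half-width at half-maximum of the ACF
    along the frequency lag axis (time lag = 0)."""
    return math.sqrt(math.log(2.0) / c1)

def nu_dc(c1, c2, c3):
    """Drift-corrected estimator: frequency lag of the point on the
    half-maximum contour that is furthest from the time lag axis."""
    return math.sqrt(math.log(2.0)) * (c1 - c2**2 / (4.0 * c3)) ** -0.5

# Hypothetical model parameters; 4*c1*c3 > c2**2 ensures a closed contour.
c1, c2, c3 = 1.0, 1.0, 1.0
print(nu_d(c1), nu_dc(c1, c2, c3))  # the corrected value is larger
```

When the patterns do not drift ($C_2 = 0$) the two estimators coincide; any non-zero drift term makes \mbox{ $ \nu _{d_c} $\,} exceed \mbox{$ \nu _d $ }.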
In addition, modulation indices of \mbox{ $ \nu _{d_c} $\,} are generally smaller than those of \mbox{$ \nu _d $ } (i.e., $ \mbox{${ m_{b,c} }$ } \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{<}{\sim}$\,}} m_b $). The only exceptions are PSRs B1237+25 and B1604$-$00, for which estimates of \mbox{${ m_{b,c} }$ } are substantially larger than those of $ m_b $. For PSR B1604$-$00, this is due to a few dominating measurements in the time series (Fig. 1.o). For PSRs B1540$-$06 and B2310+42, we see that corrected bandwidths appear to be more or less stable (\mbox{${ m_{b,c} }$ } \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{<}{\sim}$\,}} 0.1), which could be an artifact due to the limitation of our spectrometer resolution. Excluding these 4 outliers, we find the global average modulation index (\mbox{${ \langle m_{b,c} \rangle }$ }) to be $\approx 0.3$. Measurement noise is expected to cause a modulation index of 0.1 (see column (8) of Table 7), and hence its contribution to the measured value is only marginal. The only case where noise modulations are significant is \mbox{$ {\rm PSR ~ B1919+21(I) } $ }, where $ \mbox{${ m_{b,c} }$ } \sim \mbox{${ m_{noise} }$\,} $ and we get $ \mbox{${ m_{b,c(riss)} }$ } \approx 0.06 $. Comparing the estimates of \mbox{${ m_{b,c} }$ } with the theoretical predictions, we find them to be considerably larger than the Kolmogorov values. However, in contrast to $ m_b $, most measurements range between the predictions of $ \alpha = \mbox{$ { 11 \over 3 } $ } $ and $ \alpha = 4 $. The above analysis confirms that the traditional estimator for decorrelation bandwidth (\mbox{$ \nu _d $ }) is biased due to the presence of refractive drifts which are produced by the large-scale irregularities in the ISM. The new estimator for the decorrelation bandwidth (\mbox{ $ \nu _{d_c} $\,}) is less prone to bias due to such refractive effects.
Thus, \mbox{ $ \nu _{d_c} $\,} should be a better choice for estimating effects due to purely diffractive scintillation phenomena, $ie.,$ effects produced by the small-scale irregularities in the ISM. In the following section, where we attempt to estimate effects of small-scale and large-scale irregularities independently, we use \mbox{ $ \nu _{d_c} $\,} as the estimator for the decorrelation bandwidth. \subsubsection{Estimation of Diffractive and Refractive Scattering Angles} Diffractive and refractive scattering angles are two useful indicators of the magnitude of electron density fluctuations at small ($ \sim $ $ 10^6 $ m to $ 10^8 $ m) and large ($ \sim $ $ 10^{10} $ m to $ 10^{12} $ m) spatial scales respectively. From our data, we estimate these two angles, denoted as \mbox{$ {\rm \theta _{diff} } $ } and \mbox{${\rm \theta _{ref} }$\,} respectively, at each epoch of observation. The diffractive scattering angle at $ i^{th} $ epoch, \mbox{$ {\rm \theta _{diff,i} } $ }, is given by \begin{equation} \mbox{$ {\rm \theta _{diff,i} } $ } ~ = ~ \left( { c \over \pi ~ {\rm D} ~ \mbox{$\nu _{d_c,i}$\,} } \right) ^{0.5} \end{equation} \noindent where \mbox{$\nu _{d_c,i}$\,} denotes the measurement at $i^{th}$ epoch of observation and $c$ is the speed of light. We use the average diffractive angle (\mbox{${\rm \langle \theta _{diff} \rangle}$~}) over the entire time span of observation to characterize density fluctuations at small spatial scales. Our values of \mbox{${\rm \langle \theta _{diff} \rangle}$~} are given in column (5) of Table 8. 
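The diffractive-angle estimator can be evaluated as follows (a sketch with assumed inputs, D = 0.5 kpc and \mbox{ $ \nu _{d_c} $\,} = 1 MHz; at metre wavelengths it yields angles of order a milliarcsecond):

```python
import math

C_LIGHT = 2.99792458e8             # speed of light, m/s
KPC_M = 3.0857e19                  # metres per kiloparsec
RAD_TO_MAS = (180.0 * 3600.0 * 1000.0) / math.pi  # mas per radian

def theta_diff_mas(d_kpc, nu_dc_mhz):
    """theta_diff = sqrt(c / (pi * D * nu_dc)), returned in mas."""
    d = d_kpc * KPC_M
    nu = nu_dc_mhz * 1.0e6
    return math.sqrt(C_LIGHT / (math.pi * d * nu)) * RAD_TO_MAS

# Assumed illustrative values: D = 0.5 kpc, nu_dc = 1 MHz.
print(f"theta_diff ~ {theta_diff_mas(0.5, 1.0):.2f} mas")
```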
The refractive angle at $ i^{th} $ epoch of observation, \mbox{ $ {\rm \theta _{ref,i} } $ }, is obtained from the estimates of drift rate, $(\mbox{ $ d t / d \nu $\,})_i$, and scintillation pattern speed, \mbox{$ V_{iss,i} $ }, at that epoch, using the relation \begin{equation} \mbox{ $ {\rm \theta _{ref,i} } $ } = \left( { \mbox{$ V_{iss,i} $ } \mbox{$ f_{obs} $\,} \over {\rm D } } \right) \left( { d t \over d \nu } \right) _i \qquad . \end{equation} \noindent We note that when the gradient of the refractive wedge is not aligned with the pattern velocity, the true refractive angle is given by $ {\rm \mbox{ $ {\rm \theta _{ref,i} } $ } sec \psi _i } $, where $ \psi $ is the angle between the gradient and the velocity. For simplicity, we assume $ \psi $ = 0 in estimating \mbox{ $ {\rm \theta _{ref,i} } $ }, but address this issue later in this section. The pattern speed (\mbox{ $ V_{iss} $ }) is estimated from the measurements of decorrelation bandwidth and scintillation time scale, using the following expression \begin{equation} {\rm \mbox{$ V_{iss,i} $ } ~ = ~ A_V ~ \left( D_{[kpc]} ~ \mbox{$\nu _{d_c,i}$\,} _{[MHz]} \right) ^{0.5} ~ \left( \mbox{$ f_{obs[{\rm GHz}]} $\,} ~ \mbox{$\tau _{d,i}$\,} _{[sec]} \right) ^{-1} \hspace{1.0cm} $ {\rm km ~ secs ^ {-1} } $ } \end{equation} \noindent where \mbox{$\tau _{d,i}$\,} denotes the measurements at $ i^{th} $ epoch of observation, and we adopt $ {\rm A_V = 3.85 \times 10^4 } $ given by Gupta et al. (1994). Our measurements of refractive scattering angles obtained in the above manner are presented in Figs. 2(a)$-$2(x), in the form of the time series for each pulsar. According to the models which treat refraction effects as random phenomena due to the low wavenumber part of the underlying density power spectrum, \mbox{${\rm \theta _{ref} }$\,} is expected to vary randomly about a zero mean value over refractive time scales (Rickett 1990; Rickett et al. 1984, Romani et al. 1986). 
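The two-step procedure $-$ \mbox{ $ V_{iss} $ } from the DISS observables with $ {\rm A_V = 3.85 \times 10^4 } $, then \mbox{${\rm \theta _{ref} }$\,} from the measured drift rate $-$ can be sketched numerically. All input values below are assumed for illustration only:

```python
import math

KPC_M = 3.0857e19                  # metres per kiloparsec
RAD_TO_MAS = (180.0 * 3600.0 * 1000.0) / math.pi  # mas per radian
A_V = 3.85e4                       # Gupta et al. (1994)

def v_iss_kms(d_kpc, nu_dc_mhz, f_obs_ghz, tau_d_sec):
    """Pattern speed V_iss in km/s from the DISS observables (eq. [12])."""
    return A_V * math.sqrt(d_kpc * nu_dc_mhz) / (f_obs_ghz * tau_d_sec)

def theta_ref_mas(v_kms, f_obs_ghz, d_kpc, dt_dnu_s_per_khz):
    """Refractive angle (in mas) from V_iss and the measured drift rate."""
    v = v_kms * 1.0e3                   # m/s
    f = f_obs_ghz * 1.0e9               # Hz
    d = d_kpc * KPC_M                   # m
    slope = dt_dnu_s_per_khz * 1.0e-3   # s/Hz
    return (v * f / d) * slope * RAD_TO_MAS

# Assumed: D = 0.5 kpc, nu_dc = 1 MHz, f_obs = 0.327 GHz, tau_d = 300 s,
# and a measured drift rate of 0.5 secs/kHz.
v = v_iss_kms(0.5, 1.0, 0.327, 300.0)
t = theta_ref_mas(v, 0.327, 0.5, 0.5)
print(f"V_iss ~ {v:.0f} km/s, theta_ref ~ {t:.2f} mas")
```

For these inputs the inferred refractive angle is a few tenths of a mas, comparable to the magnitudes discussed below.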
But treatments which consider refraction by a separate large-scale component (e.g. Shishov 1973; Hewish 1980) may allow non-zero mean values for \mbox{${\rm \theta _{ref} }$\,}. The mean refractive angles (\mbox{$ {\rm \langle \theta _{ref} \rangle } $ }) computed from the time series in Fig. 2 are listed in column (3) of Table 8. For a number of pulsars, \mbox{$ {\rm \langle \theta _{ref} \rangle } $ } $ \approx $ 0 within the measurement uncertainties (e.g. PSRs B0919+06 and B0823+26), while for some, \mbox{$ {\rm \langle \theta _{ref} \rangle } $ } is found to be significantly different from zero, the best examples being PSRs B0834+06 and B1919+21. PSRs B1133+16 (data from sessions II and III), B1237+25, B1604$-$00, B1929+10 and B2045$-$16 also show statistically significant non-zero values for \mbox{$ {\rm \langle \theta _{ref} \rangle } $ }. For PSRs B1237+25 and B1929+10, such an effect may be attributed to poor statistics in terms of the limited number of measurements and/or the dominance of a few measurements over the rest (see Figs. 2.l and 2.s). PSRs B1133+16 and B2045$-$16 show non-zero \mbox{$ {\rm \langle \theta _{ref} \rangle } $ } despite sufficiently good statistics (\mbox{ $ {\rm N_{ep} } $ } = 25$-$35). A closer inspection of their time series of \mbox{${\rm \theta _{ref} }$\,} (Figs. 2.j, 2.k and 2.v) reveals that, though not highly pronounced, there are some signatures of persistent drifts lasting over typically several weeks. A similar trend can also be seen for PSR B1604$-$00 (Fig. 2.o), but here the sampling is rather coarse. Thus not all our data support the expectations of models based on random refraction. As mentioned earlier, in the case of two-dimensional refraction, the observer will measure a refractive angle ${\rm \mbox{${\rm \theta _{ref} }$\,} = \mbox{ $ {\rm \Theta _{ref} } $ } cos \psi }$, where $ \mbox{ $ {\rm \Theta _{ref} } $ } $ is the `true refractive angle' and $ \psi $ is the `alignment angle'.
This effect can potentially modify the statistical properties estimated for \mbox{${\rm \theta _{ref} }$\,}. In particular, it will result in an underestimation of the rms refractive angle, \mbox{$ {\rm \delta \theta _{ref} } $}. Though an exact correction for this is not practical, we attempt a first order correction by assuming $ \mbox{ $ {\rm \Theta _{ref} } $ } $ and $ \psi $ to be independent zero-mean random variables, and also that $\psi$ can range from $-\pi/2$ to $\pi/2$ with uniform probability. This gives $ \mbox{$ {\rm \langle \Theta _{ref} ^2 \rangle } $ } = 2 \ \mbox{$ {\rm \langle \theta _{ref} ^2 \rangle } $ } $, which implies $ \mbox{$ {\rm \delta \Theta _{ref} } $ } = \sqrt {2} \ \mbox{$ {\rm \delta \theta _{ref} } $ } $. This will yield somewhat better estimates of \mbox{$ {\rm \delta \theta _{ref} } $ }, at least for data with \mbox{$ {\rm \langle \theta _{ref} \rangle } $ } $\approx$ 0. The values of \mbox{$ {\rm \delta \theta _{ref} } $ } given in column (4) of Table 8 are the measured values scaled by $\sqrt{2}$. \subsubsection{Estimation of Slope of the Electron Density Spectrum} In this section, we use the measurements of \mbox{$ {\rm \theta _{diff} } $ } and \mbox{${\rm \theta _{ref} }$\,} to estimate the slope ($ \alpha $) needed for representing the underlying density fluctuations by a simple power-law form of spectrum, at least over the DISS and RISS scales of interest for our data. There are two possible ways of doing this. The first is to make use of the ratio \mbox{$ {\rm \delta \theta _{ref} } $ }/\mbox{${\rm \langle \theta _{diff} \rangle}$~} as a discriminator of $\alpha$. For $\alpha < 4$, the refractive scattering angles are expected to be smaller than the diffractive angles ($\mbox{$ {\rm \delta \theta _{ref} } $ } < \mbox{${\rm \langle \theta _{diff} \rangle}$~}$), but for `steep' spectra ($\alpha > 4$), $\mbox{$ {\rm \delta \theta _{ref} } $ } > \mbox{${\rm \langle \theta _{diff} \rangle}$~}$ can be expected (Hewish et al. 
1985; Romani et al. 1986; Rickett 1990). Earlier studies (Smith \& Wright 1985; Hewish et al. 1985) did not have good enough data to facilitate accurate determinations of \mbox{$ {\rm \delta \theta _{ref} } $ } and \mbox{${\rm \langle \theta _{diff} \rangle}$~}, and often employed \mbox{${\rm \theta _{ref} }$\,}/\mbox{$ {\rm \theta _{diff} } $ } from a given (or a few) epoch(s) of observations to discriminate between different kinds of density spectra. For the values of \mbox{$ {\rm \delta \theta _{ref} } $ } and \mbox{${\rm \langle \theta _{diff} \rangle}$~} given in Table 8, we find the ratio for most of the pulsars to be in the range 0.1$-$0.8, and there is no pulsar for which it is above unity. Thus, if the density spectra are to be represented by simple power-law forms, then the measurements of \mbox{$ {\rm \theta _{diff} } $ } and \mbox{${\rm \theta _{ref} }$\,} from our observations will preclude $ \alpha > 4 $ for these epochs. The second method is to estimate the power levels \mbox{$ {\rm P _{ref} }$~} and \mbox{$ {\rm P _{diff} }$~} at refractive and diffractive wavenumbers \mbox{$ {\rm \kappa _{ref} }$\,} and \mbox{$ {\rm \kappa _{diff} }$\,} respectively, from which the slope of the density spectrum can be estimated (e.g. Armstrong et al. 1995). The procedure is as follows. From the measurements of \mbox{$ {\rm \delta \theta _{ref} } $ }, one can obtain the structure function level at refractive length scale \mbox{${\rm s _{ref} }$ } (given by $1/\mbox{$ {\rm \kappa _{ref} }$\,}$), from which an estimate of the amplitude of the density spectrum (\mbox{${\rm C_N^2}$\,}) is obtained. The power level at refractive scale is then given by \mbox{$ {\rm P _{ref} }$~} = \mbox{${\rm C_N^2}$\,} \mbox{$ {\rm \kappa ^{-\alpha} _{ref} }$\,}. 
From the measurements of decorrelation bandwidths, one can estimate the amplitude of the spectrum (\mbox{${\rm C_n^2}$\,}) at small spatial scales, and the power level at diffractive wavenumber (given by $\mbox{$ {\rm \kappa _{diff} }$\,}=1/\mbox{${\rm s_o}$ }$, where \mbox{${\rm s_o}$ } is the `coherence scale') is then given by \mbox{$ {\rm P _{diff} }$~} = \mbox{${\rm C_n^2}$\,} \mbox{$ {\rm \kappa ^{-\alpha} _{diff} }$\,}. This method should give $\mbox{${\rm C_N^2}$\,} \approx \mbox{${\rm C_n^2}$\,}$ if the assumed value of $\alpha$ is correct. Alternatively, one can estimate a slope $\beta$ = log (\mbox{$ {\rm P _{ref} }$~}/\mbox{$ {\rm P _{diff} }$~})/log (\mbox{$ {\rm \kappa _{ref} }$\,}/\mbox{$ {\rm \kappa _{diff} }$\,}). Throughout this paper, we use $\alpha$ to denote the power-law index (as defined in eq. [1]) of the density spectrum, and $\beta$ to represent the slope estimated from our measurements. It is easy to show that the above two methods are not independent and also that for $\alpha < 4$, there is an exact correspondence between the ratio of scattering angles (\mbox{$ {\rm \delta \theta _{ref} } $ }/\mbox{${\rm \langle \theta _{diff} \rangle}$~}) and the slope estimate, $\beta$. For this, we first rewrite equation (3) of Armstrong et al. (1995) as \begin{equation} \mbox{$ C_n^2 $} ~ = ~ K_{\alpha} ~ \left[ \mbox{$ {\rm D _{\phi} }$ } \left( s \right) \right] ~ \left( 8 ~ \pi ~ r_e^2 ~ \mbox{$ {\rm \lambda ^2 _{obs} } $} ~ {\rm D } ~ s ^{\mbox{$ \alpha _1 $}} \right) ^{-1} \end{equation} \noindent where $\mbox{$ \alpha _1 $} = \alpha - 2$, $ K_{\alpha} = \left( 1 + \mbox{$ \alpha _1 $} \right) \left[ f \left( \mbox{$ \alpha _1 $} \right) \right] ^{-1} $, D is the propagation distance, and $r_e$ is the classical electron radius. 
The phase structure function, \mbox{$ {\rm D _{\phi} }$ }, at refractive scale, \mbox{${\rm s _{ref} }$ }, is given by \begin{equation} \mbox{$ {\rm D _{\phi} }$ }(\mbox{${\rm s _{ref} }$ }) ~ \approx ~ \left[ \left( { 2 \pi \over \mbox{${\rm \lambda _{obs}}$ } } \right) \mbox{$ {\rm \delta \theta _{ref} } $ } \mbox{${\rm s _{ref} }$ } \right] ^2 \end{equation} \noindent where \mbox{${\rm \lambda _{obs}}$ } is the observing wavelength. The refractive scale can be taken as the `scattering disk', given by $ \sim $ D\mbox{${\rm \langle \theta _{diff} \rangle}$~}. The structure function is unity at the coherence scale (\mbox{$ {\rm D _{\phi} }$ }(\mbox{${\rm s_o}$ }) = 1), which is also considered as the diffractive length scale, \mbox{${\rm s_{diff}}$ }. Thus, from equation (13), we can get the two estimates of the amplitude of the density spectrum $-$ \mbox{${\rm C_N^2}$\,} and \mbox{${\rm C_n^2}$\,}. Using these expressions, the slope estimate, $\beta$, simplifies to \begin{equation} \beta = 4 + \left( { {\rm log} \left( { \mbox{$ {\rm \delta \theta _{ref} } $ } / \mbox{${\rm \langle \theta _{diff} \rangle}$~} } \right) \over {\rm log} \left( u \right) } \right) \end{equation} \noindent where $u$ is a measure of the strength of scattering, defined as $\sqrt{\mbox{${\rm s _{ref} }$ }/ s_o}$ (Rickett 1990). The value of $u$ can be estimated from decorrelation bandwidth measurements (see Table 4 of Paper I for our estimates). We note that equation (15) is very similar to equation (2.9) given by Rickett (1990). We also note that the above result is valid only for $\alpha < 4$, as equation (13) is valid only for this regime. From this result, one can see that, for a given value of $u$, the ratio of scattering angles is related to the slope of the density spectrum. Further, one can see that as $\beta \longrightarrow 4$, $\mbox{$ {\rm \delta \theta _{ref} } $ } \longrightarrow \mbox{${\rm \langle \theta _{diff} \rangle}$~}$.
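Equation (15) and the corresponding Kolmogorov angle ratio can be sketched as follows (the strength-of-scattering values are assumed for illustration):

```python
import math

def beta_estimate(ratio, u):
    """Slope estimate beta = 4 + log(ratio) / log(u), where ratio is
    delta_theta_ref / <theta_diff>; valid for ratio < 1 (alpha < 4)."""
    return 4.0 + math.log10(ratio) / math.log10(u)

def kolmogorov_ratio(u):
    """Angle ratio implied by beta = 11/3: u**(11/3 - 4) = u**(-1/3)."""
    return u ** (-1.0 / 3.0)

# Assumed strength of scattering u ~ 10: an angle ratio of 0.5 then gives
# beta ~ 3.7, while ratio -> 1 drives beta to the critical value 4.
print(beta_estimate(0.5, 10.0), kolmogorov_ratio(10.0))
```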
Our estimates of $\beta$ are given in column (6) of Table 8, and they range from 3.3 to 3.9. Since all three quantities (\mbox{$ {\rm \delta \theta _{ref} } $ }, \mbox{${\rm \langle \theta _{diff} \rangle}$~} and $u$) have been determined fairly accurately, we are able to estimate the $\beta$ values with good accuracy (1-$\sigma$ uncertainty $ \sim $ 0.02$-$0.1). In some cases, the uncertainties are larger ($ \sim $ 0.1$-$0.2), probably due to sloping patterns being not well pronounced in the corresponding data. Further, none of our $\beta$ values are above the critical value 4, which gives an ex post facto justification of the assumption $\alpha < 4$ made in obtaining equation (15). Taking into consideration the uncertainties (at $\pm$2-$\sigma$ levels), we find 18 of the 25 measurements to be consistent with the Kolmogorov value 11/3. For 6 data sets, the measured values are significantly above 11/3. The highest value ($\beta = 3.91 \pm 0.03$) is measured for PSR B1929+10, the closest pulsar in our sample. The other cases are PSRs B1604$-$00 and B2327$-$20, and part of the data of PSRs B0834+06 (session II) and B1133+16 (sessions II and III), with $\beta \approx$ 3.77 to 3.8. We note that the values of $\beta$ that are significantly below \mbox{$ { 11 \over 3 } $ } ($ie, ~ \beta \approx 3.3 - 3.4 $) are also the ones that have large measurement errors (PSRs B0329+54, B1508+55, B1540$-$06 and B2310+42). Further, these are also the cases where sloping patterns are less pronounced, due to which one tends to underestimate the \mbox{ $ d t / d \nu $\,} values and consequently the $\beta$ values. For one data set (\mbox{$ {\rm PSR ~ B0834+06(I) } $ }), the estimated $\beta$ ($\approx 3.58 \pm 0.03$) is significantly below the Kolmogorov index. Thus, not all the measurements from our data are consistent with a Kolmogorov form of density spectrum (hypothesis IA of \S 1).
The larger values of $\beta$ (3.77 to 3.91) are seen for nearby pulsars ($ \sim $ 200 to 700 pc) and there is a weak trend for a decrease in $\beta$ with DM (up to $\sim$ 20 $ {\rm pc ~ cm ^ {-3} } $ ) and distance (up to $\sim $ 1 kpc) (see Figs. 3.a and 3.b). Clearly, a universal form of the density spectrum is not fully supported, and there are several directions along which the spectrum appears to be steeper than $\alpha = 11/3$. Before proceeding further, we comment on some of the underlying assumptions in our $\beta$ estimation. Firstly, our $\beta$ values are indicative of the true slope only if (i) the spectrum is a simple power-law, and (ii) $ \alpha < 4 $. Secondly, the method relies on the implicit assumption that the drifting features arise from density fluctuations on spatial scales $ \sim $ the scattering disk, D\mbox{$ {\rm \theta _{diff} } $ }. Further, the method assumes stationary statistics for RISS, which need not necessarily hold in practice, especially for data with non-zero \mbox{$ {\rm \langle \theta _{ref} \rangle } $ }. Hence \mbox{$ {\rm \delta \theta _{ref} } $ }, and consequently $\beta$, should be treated with caution for those cases. In the simplest model of a thin screen placed between the source and the observer, it is easy to see that the method is insensitive to the location of the screen. \subsection{Persistent Drifting Features in Dynamic Spectra} According to the theoretical models for RISS which consider the underlying density fluctuations to be a stochastic process, drift slopes of patterns are expected to vary randomly about a zero mean value over refractive time scales (cf. Rickett 1990). From the time series of measurements (see Fig. 4(a)$-$(x) of Paper I), we see that this broad picture is substantiated by a number of pulsars, of which PSRs B0823+26 and B0919+06 form good examples.
However, as mentioned earlier, our data also show several examples where the drifting features are `persistent' and do not show frequent sign reversals of slopes. A visual examination reveals that data from the first 3 observing sessions of PSR B0834+06 and from the 2 sessions of PSR B1919+21 form the best examples of such persistent slopes, as they are characterized by a complete absence or few epochs of slope reversals (e.g. \mbox{$ {\rm PSR ~ B0834+06(I) } $ }, \mbox{$ {\rm PSR ~ B1919+21(II) } $ }). A closer inspection of Fig. 4 of Paper I (and also Fig. 2 of this paper) reveals that there are several other pulsars which form comparatively weaker examples of such a property; PSRs B0329+54, B0823+26(II), B1540$-$06, B2020+28 and B2310+42 belong to this category. To make a quantitative distinction between the two cases, $viz.$ ``frequent drift reversals'' and ``persistent drift slopes'', we examine the distributions of the measured drift slopes for a clear ``skewness'' with respect to zero. Selected plots of such distributions are shown in Figs. 4(a)$-$(d) to illustrate this scheme. The idea here is to identify data for which the mean drift slope, \mbox{ $ \langle d t / d \nu \rangle $\,}, is substantially offset from zero in comparison to its observed fluctuations. Data for which mean slopes are smaller than the rms fluctuation (\mbox{ $ \langle d t / d \nu \rangle $\,} $ < $ \mbox{ $ \delta ( d t / d \nu ) $\,}) and are free from a skewness in the distribution (e.g. Fig. 4.b) are categorized as Class I, and those with mean slopes offset from zero by more than the rms (\mbox{ $ \langle d t / d \nu \rangle $\,} \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{>}{\sim}$\,}} \mbox{ $ \delta ( d t / d \nu ) $\,}) and have a skewed distribution (e.g. Figs. 4.a and 4.c) are labeled as Class II. 
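The essence of this two-class criterion can be sketched as a simple rule applied to a series of drift-slope measurements (the numbers below are synthetic; the additional skewness test and the ambiguous cases discussed below are omitted from this sketch):

```python
from statistics import mean, pstdev

def drift_class(slopes):
    """Class II if the mean drift slope is offset from zero by at least
    the rms fluctuation, otherwise Class I."""
    return "II" if abs(mean(slopes)) >= pstdev(slopes) else "I"

# Synthetic drift-slope series (secs/kHz): persistent one-signed slopes
# versus slopes with frequent sign reversals.
persistent = [0.8, 1.0, 1.2, 0.9, 1.1]
reversing = [0.5, -0.6, 0.4, -0.5, 0.1, -0.2]
print(drift_class(persistent), drift_class(reversing))  # II I
```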
The classification becomes ambiguous when the quantities \mbox{ $ \langle d t / d \nu \rangle $\,} and \mbox{ $ \delta ( d t / d \nu ) $\,} have substantial uncertainties, and the skewness is not well pronounced; these are treated as `NC' (non-classifiable). Column (5) of Table 7 shows the `drift class' decided in this manner. As per the above scheme, PSRs B0834+06(I), B0834+06(II) and B0834+06(III), B1919+21(I) and B1919+21(II) come under Class II. For PSRs B0329+54, B0823+26(II), B1133+16(III), B1237+25, B2045$-$16 and B2310+42, the estimates of \mbox{ $ \langle d t / d \nu \rangle $\,} and \mbox{ $ \delta ( d t / d \nu ) $\,} are comparable, but the uncertainties are large; hence, we treat them as NC. The rest of the data are free from such ambiguities and belong to Class I. A special pulsar in this context is PSR B0834+06, which shows Class I behaviour in the final observing session, much in contrast with what is seen in the first three sessions. Earlier studies of dynamic spectra (Gupta et al. 1994) reported a similar property for PSRs B0628$-$28 and B1642$-$03. For PSR B0628$-$28, no persistent drifting features are seen in our data. As mentioned before, our observations show PSR B0834+06 changing its behaviour from persistent drift slopes (January 1993$-$June 1994) to frequent slope reversals (April$-$July 1995). For PSR B1919+21, we do not have similar information, as it was not followed up for a third session. From these examples, it appears that persistent drifting features usually last over time intervals $ \sim $ several months to a few years. \subsubsection{Implications for the Density Spectrum} We now examine the implications of our observations of persistent drift slopes for the nature of the density irregularity spectrum.
The crucial question is whether the density structures responsible for such effects form part of the power-law spectrum determined by measurements of \mbox{$ {\rm \delta \theta _{ref} } $ } and \mbox{${\rm \langle \theta _{diff} \rangle}$~}, as described in \S 2.2.3. This can be answered by extrapolating the power-law spectrum to $ \kappa = 1 / S $ (where $S$ corresponds to the time span over which persistent drift slopes are observed), estimating the expected rms refractive angle that would be produced by the power at these scales, and comparing this with the measured values of \mbox{$ {\rm \langle \theta _{ref} \rangle } $ }. The expected rms refractive angle can be obtained simply by inverting equation (15), while using $U = \sqrt{S/s_o}$ in place of $u$. Since \mbox{$ {\rm \langle \beta \rangle } $ } $\approx$ 11/3 for both PSRs B0834+06 and B1919+21, this gives $\mbox{$ {\rm \delta \theta _{r,kol} } $ } = \mbox{${\rm \langle \theta _{diff} \rangle}$~} \ U ^ {-1/3}$. These values are given in column (7) of Table 9. Since the measured values of \mbox{$ {\rm \langle \theta _{ref} \rangle } $ } for individual sessions are $\sim$ 1.2$-$2.7 times larger than the \mbox{$ {\rm \delta \theta _{r,kol} } $ } values, the probability of the density structures being part of a Kolmogorov-like spectrum is {\it rather low} ($\sim$ 1$-$3\% for PSR B0834+06 and $\sim$ 15$-$20\% for PSR B1919+21). Using the larger values of $S$ for the combined data from multiple sessions increases the discrepancy with the measured \mbox{$ {\rm \delta \theta _{r,kol} } $ } (\mbox{$ {\rm \langle \theta _{ref} \rangle } $ } is 3.4 times larger for PSR B0834+06 and 1.8 times for PSR B1919+21) and further reduces the probability of the structures being part of the power-law spectrum determined by \mbox{$ {\rm \delta \theta _{ref} } $ } and \mbox{${\rm \langle \theta _{diff} \rangle}$~} (0.1\% for PSR B0834+06 and 6\% for PSR B1919+21). In Figs. 
5.a and 5.b, we have plotted the power levels at wavenumbers corresponding to diffractive and refractive scales (\mbox{${\rm s_{diff}}$ } and \mbox{${\rm s _{ref} }$ }), and at the larger spatial scales ($S$). The power levels at $ \kappa = 1 / S $ are significantly above the Kolmogorov expectations, which can be interpreted in different ways. One possibility is that a single power-law description is inadequate and the spectrum steepens at lower wavenumbers ($ 10^{-14} \ {\rm m^{-1}} \ \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{<}{\sim}$\,}} \ \kappa \ \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{<}{\sim}$\,}} \ 10^{-11} \ {\rm m^{-1}}$). This would correspond to the type IIIA spectrum of \S 1. From the estimated power levels, we find the average slope over this range (\mbox{$ {\rm \langle \beta _{steep} \rangle } $ }) to be much larger than \mbox{$ { 11 \over 3 } $ } ($\approx$ 4.9 for PSR B0834+06 and $\approx$ 4.5 for PSR B1919+21). Such a ``piece-wise power-law'' form of spectrum for representing the density fluctuations over a wide range of spatial scales (about 6 decades or more) is an interesting possibility. The other possibility is that a Kolmogorov-like distribution of irregularities is superposed with a separate large-scale component giving rise to a ``bump'' near $ \kappa \ \sim \ 10^{-13} - 10^{-12} \ {\rm m^{-1}} $ (type IIIB spectrum of \S 1). With the present observational data, it is difficult to discriminate between these two options. \subsubsection{Constraints on Discrete Plasma Structures} A possible alternative interpretation of the persistent drifting features is existence of large-scale deterministic density structures along the line-of-sight to the pulsar. Our observations allow us to put constraints on the characteristics of such structures, in particular their sizes and electron densities. 
If we consider a refractive wedge with thickness L and electron density \mbox{$ {\rm N_e } $\,}, the resulting refractive angle \mbox{ $ {\rm \Theta _{ref} } $ } is given by \begin{equation} \mbox{ $ {\rm \Theta _{ref} } $ } ~ = ~ { 1 \over k } ~ \left( { \partial \phi \over \partial r } \right) ~ = ~ \left( { r_e ~ \mbox{${\rm \lambda _{obs}}$ } \over k } \right) ~ \int _0 ^L { \partial \mbox{$ {\rm N_e } $\,} \over \partial r } dr \end{equation} \noindent Under the assumption that \mbox{$ {\rm N_e } $\,} is uniform within the wedge, the integral simplifies to $ {\rm \Delta \mbox{$ {\rm N_e } $\,} L } / S $, where $\Delta \mbox{$ {\rm N_e } $\,}$ is the deviation from the ambient mean density ($ie.,~\Delta \mbox{$ {\rm N_e } $\,} = \mbox{$ {\rm N_e } $\,} - \mbox{$ {\rm \langle n_e \rangle } $\,}$) and $S$ is the transverse extent of the wedge. In the simplest case of a spherical cloud, the `aspect ratio' is unity ($S$ $ \sim $ L), and assuming high electron densities ($ \mbox{$ {\rm N_e } $\,} \gg \mbox{$ {\rm \langle n_e \rangle } $\,} $), the above expression simplifies to \begin{equation} \mbox{ $ {\rm \Theta _{ref} } $ } ~ = ~ { r_e ~ \mbox{$ {\rm \lambda ^2 _{obs} } $} ~ \mbox{$ {\rm N_e } $\,} \over 2 ~ \pi } \end{equation} \noindent where $ r_e $ is the classical electron radius ($ 2.82 \times 10^{-15} $ m) and \mbox{${\rm \lambda _{obs}}$ } is the observing wavelength. We essentially estimate the electron density (\mbox{$ {\rm N_e } $\,}) required to produce the mean refractive angle, \mbox{$ {\rm \langle \theta _{ref} \rangle } $ }. 
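Inverting the last relation gives the electron density implied by a measured mean refractive angle. A sketch with assumed inputs (\mbox{$ {\rm \langle \theta _{ref} \rangle } $ } = 0.2 mas at \mbox{${\rm \lambda _{obs}}$ } $\approx$ 0.92 m); the result is of order a few $ {\rm cm ^ {-3} } $:

```python
import math

R_E = 2.82e-15                                 # classical electron radius, m
MAS_RAD = math.pi / (180.0 * 3600.0 * 1000.0)  # radians per milliarcsecond

def electron_density_cm3(theta_ref_mas, lambda_obs_m):
    """Invert Theta_ref = r_e * lambda**2 * N_e / (2*pi) for N_e (cm^-3)."""
    theta = theta_ref_mas * MAS_RAD
    n_e_m3 = 2.0 * math.pi * theta / (R_E * lambda_obs_m**2)
    return n_e_m3 * 1.0e-6   # m^-3 -> cm^-3

# Assumed: mean refractive angle 0.2 mas at lambda_obs = 0.92 m.
print(f"N_e ~ {electron_density_cm3(0.2, 0.92):.1f} cm^-3")
```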
The constraint on the size ($S$) is simply given by \begin{equation} S ~ \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{>}{\sim}$\,}} ~ \mbox{$ \langle V_{iss,c} \rangle $ } \mbox {$ {\rm T_{sp} } $ } \end{equation} \noindent where \mbox {$ {\rm T_{sp} } $ } is the time span of observation over which drifting features lasted (column 4 of Table 9), and \mbox{$ V_{iss,c} $ } ($ie,$ \mbox{ $ V_{iss} $ } computed using \mbox{ $ \nu _{d_c} $\,} and \mbox{$ \tau _d $~}) is given by equation (12) (column 3 of Table 9). The inferred values of \mbox{$ {\rm N_e } $\,} and $S$ are estimated for PSRs B0834+06 and B1919+21, and are listed in columns (5) and (6) of Table 9. The mean refractive angles required to produce the persistent drifting features seen in our data are moderate (0.1 to 0.3 mas). The important implication is that high electron densities (\mbox{$ {\rm N_e } $\,} $ \sim $ 2$-$4 $ {\rm cm ^ {-3} } $ ) need to persist over spatial scales much larger than the characteristic refractive scales ($S$ $ \gg $ \mbox{${\rm s _{ref} }$ }) in order to give rise to such effects. Constraints on the size of these structures from a single observing session (\mbox {$ {\rm T_{sp} } $ } $\sim$ 100 days) are $S$ $\sim$ 10 AU. Further, both these pulsars show similar persistent drifting features for more than one session. If we assume that the persistent drifts are sustained during the intervals between the successive sessions, then \mbox {$ {\rm T_{sp} } $ } is much longer ($\sim 300 - 500$ days), and the inferred sizes are $\sim$ 70 AU for PSR B0834+06 and $\sim$ 40 AU for PSR B1919+21 (see Table 9). \section{Discussion} We have studied the properties of DISS and RISS for a number of nearby pulsars in an attempt to constrain the power spectrum of plasma density fluctuations in the ISM. 
We have focused on the results from two important and easily observable effects due to refractive scintillation: (i) modulations of DISS observables (\mbox{$ \nu _d $ } and \mbox{$ \tau _d $~}) and flux density, and (ii) drifting bands in dynamic spectra. Our sample consists of mostly nearby pulsars (D $ \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{<}{\sim}$\,}} $ 1 kpc), and there is a reasonably uniform coverage in $(l,b)$, DM and D; hence the sample is more or less unbiased. Our data are sensitive to the density inhomogeneities in the spatial scale range $\sim$ $10^7 - 10^{13}$ m. Since all the basic measurements of DISS (\mbox{$ \nu _d $ } and \mbox{$ \tau _d $~}) and RISS (\mbox{ $ d t / d \nu $\,}, $ m_b $, $ m_t $ and $ m_r $) used in our analysis are from self-consistent data sets, the possibility of an observational bias is reduced. Furthermore, we have relied upon the more meaningful quantity, \mbox{$ {\rm \delta \theta _{ref} } $ }/\mbox{${\rm \langle \theta _{diff} \rangle}$~}, for discriminating between different kinds of density spectra and estimating $ \beta $ values, in contrast to earlier attempts, which often employed estimates of \mbox{${\rm \theta _{ref} }$\,}/\mbox{$ {\rm \theta _{diff} } $ } from one (or a few) epoch(s) of observations. As discussed in Paper I, it has also become possible from our observations to estimate the average scintillation properties more robustly than in previously published work. Therefore, we believe the implications of our results for the nature of the electron density spectrum need serious consideration. \subsection{Implications of the Main Results for the Density Spectrum} The main results from our data can be summarized as follows: \begin{enumerate} \item Our observations show large-amplitude modulations of decorrelation bandwidth (\mbox{$ \nu _d $ }), scintillation time scale (\mbox{$ \tau _d $~}) and flux density (F). 
The measured depths of modulations are found to be considerably larger than the predictions of a thin screen model with a simple Kolmogorov form of density spectrum (hypothesis IA of \S 1). Barring some cases, the measured modulation indices of \mbox{$ \nu _d $ } and F are consistent with $ 4 < \alpha < 4.3 $, and those of \mbox{$ \tau _d $~} with $ \mbox{$ { 11 \over 3 } $ } < \alpha < 4.3 $, as far as the predictions of a thin screen model are concerned. For the flux modulation indices, better agreement is seen with a spectrum of type IA if the scattering medium is taken to be uniformly distributed along the line of sight. Even then, roughly half the measurements are significantly larger than the predictions of a type IA spectrum. \item Measurements of refractive and diffractive angles are consistent with $ \alpha < 4 $, as the ratio \mbox{$ {\rm \delta \theta _{ref} } $ }/\mbox{${\rm \langle \theta _{diff} \rangle}$~} is found to be below unity for all pulsars. Further, our estimates of the density spectral slope ($\beta$) range from 3.3 to 3.9. While 18 of the 25 measurements are consistent with the Kolmogorov index (at $\pm$2-$\sigma$ levels), for 6 pulsars (D $ \sim $ 200 to 700 pc), $\beta$ is found to be significantly larger than 11/3. \item Persistent drifting bands lasting over many months are seen with PSRs B0834+06 and B1919+21, which imply excess power at spatial scales $ \sim $ 10$-$100 AU (much larger than the refractive scales) compared to the expectations from a type IA spectrum. This is possible if the spectrum (i) is a piece-wise power-law which steepens at $ \kappa < \mbox{$ {\rm \kappa _{ref} }$\,} $ or (ii) has a low wavenumber enhancement (hypotheses IIIA and IIIB respectively of \S 1). An alternative possibility is the existence of localized density structures of spatial scales $\sim$ 10$-$70 AU and \mbox{$ {\rm N_e } $\,} $\sim$ 2$-$4 $ {\rm cm ^ {-3} } $ (hypothesis IV of \S 1). 
\end{enumerate} \subsubsection{Refractive Modulations and Slopes of the Density Spectrum} It is difficult to reconcile the first two results with a simple power-law model for the electron density spectrum in the ISM. Hence we look at them more critically. Starting with the first result, we note that ours are not the first reported measurements of modulation indices larger than the Kolmogorov expectations. Flux monitoring observations have been made by a number of groups earlier (Stinebring \& Condon 1990; Kaspi \& Stinebring 1992; Gupta et al. 1993; LaBrecque et al. 1994; Gupta et al. 1994). In Table 10, we summarize the results from all these observations, for pulsars common with our observations. To compare the measurements made at different frequencies, we use equation (4) for scaling the predicted modulation indices. We find that our measured flux modulation indices (column (3)) are comparable to those from observations at nearby frequencies (e.g. columns (4), (5), (8) and (9)). Further, most of the modulation indices given in Table 10 are significantly larger than those expected from a Kolmogorov spectrum (type IA of \S 1). For modulation indices of \mbox{$ \nu _d $ } and \mbox{$ \tau _d $~}, there are very few observations reported in the literature that we are aware of. Gupta et al. (1994) have reported \mbox{$ \nu _d $ } modulation indices for 6 pulsars from their long-term scintillation study at 408 MHz (their Table 4). The values for the 5 pulsars common with our observations are comparable to ours, and larger than the values predicted for a type IA spectrum. As discussed in \S 2.1.3, if the scattering medium is assumed to be uniformly distributed along the line of sight, our results for flux modulation indices are in somewhat better agreement with a type IA spectrum. About half the values are then consistent with $\alpha = \mbox{$ { 11 \over 3 } $ }$ and the remaining half are consistent with $\mbox{$ { 11 \over 3 } $ } < \alpha < 4$. 
Due to the lack of relevant predictions, similar comparisons cannot be made for the modulations of \mbox{$ \nu _d $ } and \mbox{$ \tau _d $~}. Thus our results from modulation indices can be partially reconciled with our $\beta$ estimates. Turning now to the second result, we note that it is possible that our refractive angle measurements (and hence the $\beta$ values derived from them) underestimate their true values, despite the first order correction applied for the effect due to the alignment angle $\psi$ (\mbox{$ {\rm \delta \Theta _{ref} } $ } = $\sqrt{2}$ \mbox{$ {\rm \delta \theta _{ref} } $ }). The underlying assumption of random fluctuations of \mbox{ $ {\rm \Theta _{ref} } $ } and $\psi$ need not always hold, in which case the above correction may still be inadequate. Observations of persistent drifts (for 2 pulsars) and statistically significant non-zero values of \mbox{$ {\rm \langle \theta _{ref} \rangle } $ } (for 5 pulsars) in our data indicate that such situations exist in practice. In such cases, the true $\beta$ values can be larger than our estimates, thereby reducing the disagreement with the first result. Another aspect worth mentioning is the assumption of isotropic turbulence in the ISM. It is known that the presence of a strong magnetic field will make the turbulence highly anisotropic (Higdon 1984, 1986), a concept well supported by the recent observations of field-aligned anisotropic density structures in the inner solar wind (Armstrong et al. 1990; Anantharamaiah, Gothoskar \& Cornwell 1994). However, it is unclear at present how important anisotropy is in the ISM. As Coles et al. (1987) point out, there may be several important consequences for ISS in the case of anisotropic turbulence, but they have not been worked out analytically. 
The anisotropic turbulence will make \mbox{${\rm C_n^2}$\,} sensitive to the field geometry and can also affect the inner scale cutoff; hence, it can potentially influence the modulations of DISS observables and flux. Since the fluctuations in the large-scale Galactic magnetic fields have length scales $ \sim $ 100 pc (Simonetti, Cordes \& Spangler 1984), anisotropic ISS is probably more relevant for nearby pulsars. Furthermore, there is evidence for large-scale ($ \sim $ 100$-$500 pc) spatial inhomogeneities in \mbox{${\rm C_n^2}$\,} within the local (\mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{<}{\sim}$\,}} 1 kpc) ISM (Bhat et al. 1997, 1998). Therefore, anisotropic ISS may be relevant for some of our pulsars. \subsubsection{Persistent Drift Slopes and Multiple Imaging Events} In addition to persistent drifting bands, phenomena such as multiple imaging events (e.g. Wolszczan \& Cordes 1987) and extreme scattering events (e.g. Fiedler et al. 1987) are also thought to be caused by large-scale deterministic density structures in the ISM. While ESEs are mostly observed with compact extra-galactic radio (EGR) sources, the other two effects are seen in pulsar dynamic spectra. So far, multiple imaging events have been reported for 7 pulsars $-$ Hewish et al. (1985) for PSRs B1133+16 and B1642$-$03; Cordes \& Wolszczan (1986) for PSRs B0919+06, B1133+16 and B1919+21; Wolszczan \& Cordes (1987) for PSR B1237+25; Kuz'min (1992) for PSR B1919+21; Gupta et al. (1994) for PSR B2016+28; Rickett, Lyne \& Gupta (1997) for PSR B0834+06; Bhat et al. (1998a) for PSR B1133+16. Very few observations of persistent drifts have been reported so far. For PSRs B0834+06 and B1919+21, our data show persistent drift slopes lasting over $ \sim $ 300$-$500 days. A similar property is reported by Gupta et al. (1994) for PSRs B0628$-$28 and B1642$-$03, and Smith \& Wright (1985) see some signatures of it for PSRs B0823+26 and B1929+10. 
PSR B1937+21 is the only pulsar for which the occurrence of ESEs has been reported. To date, a total of 10 ESEs have been identified with EGR sources (see Fiedler et al. 1994 for a summary). Fiedler et al. (1987) infer a very high density cloud (\mbox{$ {\rm N_e } $\,} $\sim$ 1000 $ {\rm cm ^ {-3} } $ ) of $\sim$ 7 AU from the observations of ESEs in the light curves (at 2.7 and 8.1 GHz) of quasar 0954+658. This unusually large constraint on the density can, however, be relaxed to \mbox{$ {\rm N_e } $\,} $\sim$ 100 $ {\rm cm ^ {-3} } $ if one considers an edge-on geometry with an aspect ratio ($\eta$) of 10:1. Cognard et al. (1993) infer clouds of similar densities ( \mbox{$ {\rm N_e } $\,} $\sim$ 25$-$220 $ {\rm cm ^ {-3} } $ ) from the ESE observed for PSR B1937+21 at 1.4 GHz. However, the requirement on the size is much smaller ($\sim$ 0.05$-$0.1 AU) as the event spans only a short period of 15 days. Rickett et al. (1997) suggest a structure of $\sim$ 3 AU and \mbox{$ {\rm N_e } $\,} $\sim$ 40 $ {\rm cm ^ {-3} } $ (for $\eta = 1$) as one of the possible scenarios to explain the interstellar fringes observed at 408 MHz for PSR B0834+06. Compared to all these results, the structures inferred from our data are larger in size, but less dense. The similarity in the interpretations of the above three effects indicates a probable connection between them. Romani, Blandford \& Cordes (1987) point out that the periodicities in pulsar dynamic spectra, LFV of quasars and the ESE of 0954+658 (Fiedler et al. 1987) can be understood in terms of multiple imaging and focusing by large-scale refracting structures in the ISM. It is interesting to note that PSRs B0834+06 and B1919+21, which show persistent drifts in our observations, are known to have shown multiple imaging events (Cordes \& Wolszczan 1986; Kuz'min 1992; Rickett et al. 1997). Similarly, PSR B1133+16, which shows evidence for multiple imaging in our data (also see Hewish et al. 
1985; Cordes \& Wolszczan 1986), also shows some signatures of persistent drifts lasting over several weeks (see Figs. 2.j and 2.k). The large proper motion of this pulsar ($\approx $ 475 $ {\rm km ~ s ^ {-1} } $) would imply corresponding spatial scales of $ \sim $ 10 AU, quite comparable to the size of the density structures inferred from the persistent drifts of PSRs B0834+06 and B1919+21 (see Table 9). Thus, our observations of these three pulsars, along with detections of multiple imaging events in earlier observations, provide direct evidence in favour of the connection between the two effects, thereby supporting the view of Romani et al. (1987) on scattering effects due to localized high-density structures. It is not clear whether there is a large population of such structures in the Galaxy, but the observational data suggest that hypothesis (IV) is relevant at least for some lines of sight. \subsection{A Summary of Various Constraints on the Plasma Density Spectrum} A number of attempts have been made in the recent past towards determining the form of the density spectrum, and there are conflicting interpretations from various kinds of measurements. While several observations are consistent with a simple Kolmogorov form (hypothesis IA), there is a substantial amount of observational data that goes against it. Attempts have also been made to construct a composite spectrum extending over a wide range of spatial scales. Here we give an overview of the observational evidence accumulated so far $-$ from our data as well as from the published literature. Among the several possible alternatives to the {\it pure} Kolmogorov form (hypothesis IA), three specific cases $viz,$ (i) steeper spectra (hypothesis II), (ii) a Kolmogorov spectrum truncated at a large inner scale (hypothesis IB), and (iii) a Kolmogorov spectrum with a low wavenumber enhancement (hypothesis IIIB) have been more commonly discussed in the literature. 
Effects due to such {\it non-Kolmogorov} forms of spectra are not fully understood yet. Common to all three is that they are more refractive in nature than case IA. Most observational evidence against hypothesis IA can therefore be interpreted in terms of one or more of the remaining possibilities. In \S 3.3.1, we describe the observational evidence in support of a pure Kolmogorov form, and in \S 3.3.2, we summarize the observational data that go against it. In \S 3.3.3, we attempt to reconcile the various observational results and discuss the possible implications for the overall nature of the density spectrum. \subsubsection{Evidence in Favor of Kolmogorov ($ \alpha = \mbox{$ { 11 \over 3 } $ } $) Spectrum} Our observations show that the measurements of diffractive and refractive angles, and consequently the slope parameter ($\beta$) derived from them, are consistent with a Kolmogorov form of spectrum (hypothesis IA of \S 1) for a large number of pulsars (14 out of 18). A quite similar result is reported by Smith \& Wright (1985), where the measured scattering angles are found to be compatible with a Kolmogorov spectrum extending over a range of spatial scales from $\sim$ $ 10^9 $ m to $ \sim $ $ 10^{12} $ m. There are 14 pulsars common between our sample and that of Smith \& Wright (1985). While we find 5 pulsars with $\beta$ significantly larger than \mbox{$ { 11 \over 3 } $ } (see Table 8), the sample of Smith \& Wright (1985) has 3 pulsars favoring $ \mbox{$ { 11 \over 3 } $ } < \alpha < 4 $ (see their Table 1). PSR B1929+10 is an interesting case for which a steeper spectrum is suggested by both sets of observations ($ \beta \approx 3.91 \pm 0.03 $ from our data and $ \mbox{${\rm \theta _{ref} }$\,}/\mbox{$ {\rm \theta _{diff} } $ } \approx 0.61 $ from Smith \& Wright 1985). 
Among the remaining 4, PSR B0834+06 is a special case with $\beta$ larger than \mbox{$ { 11 \over 3 } $ } in session II (note that $\mbox{$ {\rm \langle \beta \rangle } $ } \approx 3.69 \pm 0.03$). For PSRs B1133+16 and B1604$-$00, no meaningful drift measurements were made by Smith \& Wright (1985). Thus, in general, measurements of drift slopes in dynamic spectra support an $\alpha \ \approx \ \mbox{$ { 11 \over 3 } $ }$ spectrum over the spatial scale range $\sim $ $ 10^7 - 10^{12} $ m. The scaling of scintillation parameters with frequency and/or distance is known to be sensitive to $\alpha$. Although there is ambiguity involved in relating the scaling exponent to $\alpha$, it can be resolved using other observational indicators. Cordes et al. (1985) find the frequency scaling of decorrelation bandwidths (for 5 pulsars) to be consistent with the index $ \alpha = 3.63 \pm 0.2 $. For the common pulsar PSR B0329+54, we find $\beta \approx 3.3 \pm 0.2$, consistent with the result of Cordes et al. (1985). Further support in this direction comes from the scaling of decorrelation bandwidth and scintillation time scale for PSR B1937+21 (Cordes et al. 1990), where the scaling implies $\alpha = 3.55 \pm 0.11$. Additional evidence in support of a Kolmogorov form comes from VLBI observations. Gwinn, Moran \& Reid (1988) studied the image wander of clusters of $ {\rm H_2 O } $ masers in W49 and Sgr B2 and showed $\alpha \approx 3.67$ up to length scales $ \sim $ $10^{11}$ m. Observations of the scattering disk of the pulsar PSR B1933+16 are found to be consistent with $\alpha = 3.52 \pm 0.13$ at length scales of $10^6$ to $10^7$ m (Gwinn et al. 1988a). \subsubsection{Evidence in Favor of non-Kolmogorov Forms of Spectra} Evidence against a simple Kolmogorov form comes from a number of observations. Many pulsars are known to show flux modulations well in excess of the Kolmogorov predictions (see \S 3.1.1 for a discussion and summary). 
Though an immediate interpretation is that the spectrum needs to be steeper (hypothesis II of \S 1), a Kolmogorov spectrum truncated at a sufficiently large inner scale (hypothesis IB of \S 1) can also explain large flux modulations (cf. Coles et al. 1987; Goodman et al. 1987). Similarly, measurements of modulation indices of \mbox{$ \nu _d $ } and \mbox{$ \tau _d $~} are found to be significantly larger than the Kolmogorov predictions for almost all pulsars (see \S 3.1.1), but are consistent with the predictions for $ \mbox{$ { 11 \over 3 } $ } < \alpha < 4.3 $ given by Romani et al. (1986). Further evidence favoring a {\it non-Kolmogorov} form of spectrum comes from measurements of angular broadening and studies of long-term DM variability. Using the first method, Spangler \& Cordes (1988) measure $\alpha = 3.79 \pm 0.05$ towards the compact source 2013+370, and Wilkinson, Spencer \& Nelson (1988) obtain $\alpha = 3.85 \pm 0.05$ towards Cygnus X-3. Phillips \& Wolszczan (1991) studied DM variations of PSRs B0823+26, B0834+06 and B0919+06 over a time span $\sim$ 2 yr, and their structure function analysis shows $\alpha$ to be larger than \mbox{$ { 11 \over 3 } $ } ($\mbox{$ {\rm \langle \alpha \rangle } $ } = 3.84 \pm 0.02$) over the spatial scale range $ \sim $ $ 10^7 - 10^{13} $ m. At first sight, this might appear contrary to our observations, since we find the $ \beta $ values for these pulsars to be consistent with \mbox{$ { 11 \over 3 } $ } within the errors (Table 8). It is important to note, however, that our $ \beta $ values are sensitive to fluctuations on spatial scales $\sim$ $ 10^7 - 10^{11} $ m, whereas the DM fluctuations probe $ \sim $ $ 10^{11} - 10^{13} $ m. As mentioned in \S 2.3.1, from persistent drifting features observed in our PSR B0834+06 data, we infer a similar enhancement in power level at spatial scales $ \sim $ $ 10^{12} - 10^{14} $ m compared to the Kolmogorov expectations (see Fig. 5.a). Backer et al. 
(1993) studied DM variations of 4 pulsars (PSRs B1821$-$24, B1855+09, B1937+21 and B1951+32) spanning the DM range 13$-$119 $ {\rm pc ~ cm ^ {-3} } $ . They find the \mbox{ $ {\rm \delta DM } $\,}$-$DM relation to be much flatter than that expected based on DISS, which they interpret as evidence against a direct link between the density fluctuations responsible for DM variations and those which cause DISS. They suggest a model comprising several wedge-like structures randomly distributed along the line of sight to account for the observed DM variations. It may be mentioned that Rickett et al. (1997) suggest a similar explanation for the fringing event seen in the PSR B0834+06 data at 408 MHz. In both cases, the models proposed for the density fluctuations are in accordance with hypothesis (IV) of \S 1. Observations of unusual scattering phenomena such as ESEs, multiple imaging events and persistent drifting features can be considered strong evidence in favour of non-Kolmogorov forms of spectra. Multiple imaging events are expected to be rare for a type IA spectrum, but can be more common if the spectrum is type II with $ \alpha \ \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{>}{\sim}$\,}} \ 4 $ (Hewish et al. 1985; Rickett 1990). But the existing observational data are not adequate to make a firm statement on the statistics of their occurrence. The more commonly favored explanation is in terms of refraction through discrete structures (e.g. Cordes \& Wolszczan 1986). Observations of ESEs are also explained in terms of large-scale refractors ($ \sim $ 10 AU) in the ISM (Romani et al. 1987; Fiedler et al. 1994; Clegg, Fey \& Lazio 1998). As discussed in \S 2.3.2, persistent drifting bands can also be understood in terms of discrete density structures in the ISM. The relatively rare occurrences of the three effects indicate that these density structures are localized. 
As stated in \S 3.1.2, there is some evidence for a connection between these effects; for example, three of our pulsars $-$ PSRs B0834+06, B1133+16 and B1919+21 $-$ are known to have shown both multiple imaging and persistent drifts. Thus the data accumulated so far clearly signify the importance of discrete structures in the ISM, thereby supporting hypothesis (IV), at least along some lines of sight. \subsubsection{Overall Nature of the Density Spectrum in the Local (1 kpc) ISM} Various methods discussed in \S 3.3.1 and \S 3.3.2 probe different parts of the density fluctuation spectrum, and it is interesting to see that the measurements based on a particular method give similar implications for the nature of the spectrum. It is worth examining to what extent these observational results can be reconciled and what they mean for the overall nature of the spectrum. The frequency scaling of \mbox{$ \nu _d $ } basically probes spatial scales in the range $ \sim $ $ 10^6 - 10^8 $ m, where $ \alpha $ is found to be consistent with $ \mbox{$ { 11 \over 3 } $ } $. VLBI angular broadening of PSR B1933+16 also probes a similar range ($ \sim $ $ 10^6 - 10^7 $ m), and the value of $ \alpha $ inferred from this is also consistent with $ \mbox{$ { 11 \over 3 } $ } $. The measurements of drift slopes probe spatial scales near $ \sim $ $ 10^{10} - 10^{11} $ m, and support an $ \alpha \approx \mbox{$ { 11 \over 3 } $ } $ spectrum towards a number of lines of sight. Further, observations of image wander of $ {\rm H_2 O } $ masers, which also probe spatial scales of a similar range ($ \sim $ $ 10^{11} $ m), provide further independent evidence for an $ \alpha = \mbox{$ { 11 \over 3 } $ } $ spectrum. Thus the density spectrum seems to be a power-law with the Kolmogorov index (hypothesis IA) in the range $ \sim $ $ 10^6 - 10^{11} $ m. 
Long-term DM variability, as reported by Phillips \& Wolszczan (1991), probes much larger spatial scales ($ \sim $ $ 10^{11} - 10^{13} $ m) and the results indicate that the strength of density fluctuations at these scales is significantly larger than the Kolmogorov expectations. As discussed in \S 2.3.1, persistent drift slopes observed in our data are also suggestive of excess power at $ \sim $ $ 10^{12} - 10^{13} $ m. These, combined with the results for smaller spatial scales ($ \sim $ $ 10^6 $ m to $ \sim $ $ 10^{11} $ m), would warrant a ``multi-component'' spectrum ($ie,$ hypothesis III of \S 1) in the range $ \sim $ $ 10^6 - 10^{13} $ m, with either a break near $ \kappa \ \sim \ 10^{-11} \ {\rm m ^{-1} } $ (type IIIA) or a ``bump'' at $ \kappa \ \sim \ 10^{-12} - 10^{-13} \ {\rm m ^{-1} } $ (type IIIB). The modulations of DISS observables (\mbox{$ \nu _d $ } and \mbox{$ \tau _d $~}) and flux, which are indicative of the strength of density fluctuations near refractive scales ($ \sim $ $ 10^{10} - 10^{11} $ m), are not in agreement with the Kolmogorov predictions (\S 2.1, \S 3.1). However, the implications for the density spectrum here are not unambiguous, since the observed discrepancies can, at least partly, be attributed to the inadequacies of the thin-screen models. The thin screen is not likely to be a valid approximation towards most pulsars within 1 kpc, and the theoretical treatments need to be refined to analyze the perturbations of DISS parameters due to extended and/or inhomogeneous media. Hence the modulation indices of observables, though discrepant with the predictions of a type IA spectrum, do not put stringent constraints on the form of the spectrum. The results from angular broadening measurements (towards Cyg X-3 and 2013+370) and DM variations of Backer et al. (1993) are also not in direct contradiction with the implications from other measurements. 
The lines of sight of Cyg X-3 and 2013+370 can be treated as atypical, as they are characterized by exceedingly large strengths of scattering, predominantly from the region beyond $ \sim $ 1 kpc. The conclusion of Backer et al. (1993) $-$ wedge-like discrete structures responsible for DM variations $-$ is largely based on observations of distant (D \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{>}{\sim}$\,}} 1 kpc) pulsars. Observations such as ESEs, multiple imaging and persistent drifts can be interpreted in terms of discrete density structures in the ISM (hypothesis IV of \S 1). Multiple imaging events have been reported only for 7 pulsars, and 4 pulsars are known to have shown persistent drift slopes (\S 3.1.2). To date, ESEs have been identified in the data of 9 radio sources and one pulsar. The existing observational data therefore suggest such effects are relatively rare phenomena, which means discrete structures may {\it not} be {\it very} common. Hence hypothesis IV ($ie,$ a density spectrum of irregularities with superposed deterministic structures) seems to be relevant only for a limited number of lines of sight. The overall picture that emerges from the above discussion is that the underlying density fluctuations can, in general, be described by hypothesis (III) ($ie,$ a Kolmogorov-like spectrum which either steepens or exhibits a ``bump'' in the low wavenumber range), while hypothesis (IV) applies to some specific lines of sight. \subsection{Implications for the Theoretical Models} The simplest scenario usually considered by the theoretical models is of a thin screen scattering geometry and a density irregularity spectrum of type IA ($ie, \ \mbox{${\rm s_{inn}}$ } \ll \mbox{${\rm s_{diff}}$ }, \ \mbox{${\rm s_{out}}$ } \gg \mbox{${\rm s _{ref} }$ } $). 
Though the estimates of the density spectral slope ($\beta$) from our data are consistent with $\mbox{$ { 11 \over 3 } $ } \ \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{<}{\sim}$\,}} \ \alpha < 4$, the corresponding theoretical predictions for the modulation indices of \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F do not match the observations. This inconsistency gives a clear indication of the inadequacy of theoretical models based on such a simple scenario. Not many investigations have been made so far to understand refractive scintillation effects due to more complex, but realistic, scenarios such as an extended scattering medium. Nevertheless, a partial reconciliation of our results is possible if we consider a geometry with the scattering material uniformly distributed along the line of sight. Such a scenario in combination with $\mbox{$ { 11 \over 3 } $ } \le \alpha < 4$ will suffice to account for the measured flux modulation indices from our data. If such models can also give rise to $ \sim $ 2 times larger modulations of \mbox{$ \nu _d $ } and \mbox{$ \tau _d $~} compared to the thin screen, then it is possible to reconcile the observed modulations of all three quantities and the estimates of slope with $\mbox{$ { 11 \over 3 } $ } \le \alpha < 4$ ($ie,$ type IA and II spectra). At present, theories for extended media $-$ homogeneous and/or inhomogeneous $-$ are not fully developed, but detailed treatments have become necessary in the light of new results from our observations. Alternatives such as models based on hypothesis IB (say, with $ \mbox{${\rm s _{ref} }$ } > \mbox{${\rm s_{inn}}$ } > \mbox{${\rm s_{diff}}$ } $) deserve some consideration here, as they can give rise to some observable effects akin to those produced by type II spectra. For example, such models can give rise to large modulations of flux and periodic patterns in dynamic spectra (cf. Goodman et al. 1987; Coles et al. 1987). 
However, at present, there is no compelling observational evidence suggesting a large inner scale. Furthermore, it is unclear whether such models can also give rise to large modulations of \mbox{$ \nu _d $ } and \mbox{$ \tau _d $~}, and still be consistent with $\mbox{$ { 11 \over 3 } $ } \le \beta < 4$. In this context, models based on hypotheses (IIIA) and (IIIB) also need to be examined critically. In addition to general phenomena such as modulations of DISS observables and flux density, and drifting bands in dynamic spectra, a satisfactory model also needs to account for relatively rare phenomena such as persistent drifts, multiple imaging events and extreme scattering events. All three effects can be understood in terms of refractive effects due to large-scale deterministic structures ($ie,$ hypothesis IV of \S 1), suggesting that an interconnection between them is likely. Despite the progress made so far, we still lack quantitative interpretations of these phenomena, and further investigations are needed. \section{Conclusions} We have attempted to constrain the power spectrum of plasma density fluctuations in the ISM by studying refractive effects in pulsar scintillation. We have used the data from a long-term scintillation study of 18 pulsars. Reliable and accurate estimates of diffractive and refractive scintillation properties were obtained by monitoring the dynamic scintillation spectra at a number of epochs spanning several months. We studied two important and easily observable effects due to refractive scintillation: (i) modulations of scintillation observables and flux density, and (ii) drifting bands in dynamic spectra, which provide two independent means of constraining the form of the density irregularity spectrum. We have considered a set of hypotheses to describe the potential forms of the density spectrum and tested them using our data. 
The relevant hypotheses are: (IA) $ \alpha = \mbox{$ { 11 \over 3 } $ } $ (Kolmogorov spectrum); (IB) $ \alpha = \mbox{$ { 11 \over 3 } $ } $ with a large inner scale; (II) $ \alpha > \mbox{$ { 11 \over 3 } $ } $ (`steep' spectrum); (IIIA) `piece-wise' power-law; (IIIB) power-law with low wavenumber enhancement; and (IV) power-law with superposed discrete structures. At present, quantitative predictions are available only for the cases covered under (IA) and (II). On comparing the observed modulation indices of diffractive scintillation observables $-$ decorrelation bandwidth (\mbox{$ \nu _d $ }) and scintillation time scale (\mbox{$ \tau _d $~}) $-$ and pulsar flux density (F) with the predictions, we find that the measured values are considerably larger than the predicted values for a thin-screen model with a density spectrum of type IA. The measured modulation indices are spread over a wide range of values, and are consistent with the predictions for power-law spectra with $\mbox{$ { 11 \over 3 } $ } < \alpha < 4.3$ (hypothesis II). The flux density modulations would also be consistent with a smaller range $\mbox{$ { 11 \over 3 } $ } \le \alpha < 4$, if an extended scattering geometry with uniformly distributed scattering material along the line-of-sight is considered. Predictions are not available for the modulations of \mbox{$ \nu _d $ } and \mbox{$ \tau _d $~} for such a medium. Estimates of the density spectral slope ($\beta$) are obtained from our measurements of diffractive and refractive scattering angles, and are found to be reasonably close to 11/3 (within the measurement uncertainties) for a number of pulsars (14 out of 18). For several nearby pulsars (distance $ \sim $ 200 to 700 pc), $ \beta $ is found to be significantly larger than 11/3 (but less than 4). Thus, there are conflicting interpretations, and the results from the two methods are {\it not fully} reconcilable within the framework of theoretical models based on hypotheses (IA) and (II). 
Further, the observations of persistent drifting bands lasting over many months seen in our data (e.g. PSRs B0834+06 and B1919+21) indicate that there is excess power at spatial scales $ \sim $ 10$-$100 AU, much larger than the refractive scales. This would mean the spectrum either needs to be piece-wise power-law or has a bump in the low wavenumber range (hypotheses IIIA and IIIB respectively). An alternative interpretation is the existence of large-scale ($\sim$ 40$-$70 AU) high density (\mbox{$ {\rm N_e } $\,} $\sim$ 2$-$4 $ {\rm cm ^ {-3} } $ ) clouds along some lines of sight (hypothesis IV). A careful consideration of all available results from the literature and our current work leads us to the picture of a Kolmogorov-like spectrum ($ \alpha \ \approx \ \mbox{$ { 11 \over 3 } $ } $) in the wavenumber range $ {\rm \sim 10^{-6} \ m^{-1} \ to \ \sim 10^{-11} \ m^{-1} } $, that either steepens or has a bump of enhanced power at low wavenumbers ($ \kappa \ \sim \ 10^{-12} - 10^{-13} \ {\rm m ^{-1} } $). In addition, observations of relatively rare phenomena such as persistent drift slopes, ESEs and multiple imaging events suggest the existence of localized high density structures along some lines of sight. Thus the observational data indicate that the electron density fluctuations in the ISM can, in general, be described by hypothesis (III), and hypothesis (IV) applies to some specific lines of sight. Unlike the case with hypotheses (I) and (II), refractive scintillation effects due to scattering media described by hypotheses (III) and (IV) have not been fully developed. We hope the present work will stimulate detailed theoretical works necessary towards an improved understanding of refractive scintillation effects in pulsar signals and the power spectrum of plasma density fluctuations in the ISM. {\it Acknowledgments:} The authors wish to thank J. Chengalur and M. Vivekanand for reading an earlier version of this manuscript and giving useful comments. 
We thank an anonymous referee for an illuminating review, which stimulated several lively discussions among us and also helped us improve both the clarity and the contents of the paper. \clearpage \begin{appendix} \section{Statistical Reliability of the Data} The statistical quality of our data largely depends on the number of independent measurements (\mbox{ $ {\rm N_{ep} } $ }) and the number of refractive cycles (\mbox{ $ {\rm N_{ref} } $ }) spanned during the time span of observation. For the latter, we need to know the time scale of fluctuations of our observables. Expectations based on simple models are that the fluctuations of all the 3 quantities $-$ decorrelation bandwidth (\mbox{$ \nu _d $ }), scintillation time scale (\mbox{$ \tau _d $~}) and flux density (F) $-$ occur over refractive time scales (\mbox{ ${\rm \tau _{ref}}$ }), which are expected to be days to weeks at our observing frequency. A structure function analysis was attempted for determining the time scales, but did not yield meaningful results owing to the limited number of measurements. However, since our measurements of decorrelation bandwidth and scintillation time scale are fairly accurate (with typical uncertainties $ \sim $ $5-10$\%), first order estimates of refractive time scales can be obtained using the expression (Rickett 1990) \begin{equation} \mbox{ ${\rm \tau _{ref}}$ } \approx \left( { 2 ~ \mbox{$ f_{obs} $\,} \over \mbox{$ \nu _d $ } } \right) ~ \mbox{$ \tau _d $~} \end{equation} \noindent where \mbox{$ f_{obs} $\,} is the observing frequency. The above relation is based on simple models, i.e., scattering due to a thin screen with a power-law form of density spectrum. We use the estimates of \mbox{ $ \nu _{d,g} $\,} and \mbox{ $ \tau _{d,g} $\,} obtained from the Global ACF analysis (see Paper I) to estimate \mbox{ ${\rm \tau _{ref}}$ }. 
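As a rough illustration of the expression above: with an assumed observing frequency of 327 MHz (a typical meter-wavelength value, used here only for illustration) and hypothetical values \mbox{$ \nu _d $ } = 1 MHz and \mbox{$ \tau _d $~} = 100 s, the refractive time scale indeed comes out at the day scale. The function name and inputs below are illustrative, not measurements from our data.

```python
def refractive_time_scale(f_obs_hz, nu_d_hz, tau_d_s):
    """tau_ref ~ (2 f_obs / nu_d) * tau_d for a thin screen with a
    power-law density spectrum (first-order estimate only)."""
    return (2.0 * f_obs_hz / nu_d_hz) * tau_d_s

f_obs = 327e6    # assumed observing frequency in Hz
nu_d = 1.0e6     # hypothetical decorrelation bandwidth in Hz
tau_d = 100.0    # hypothetical diffractive scintillation time scale in s

t_ref = refractive_time_scale(f_obs, nu_d, tau_d)
print(f"tau_ref ~ {t_ref:.0f} s ~ {t_ref / 86400.0:.2f} days")
```

With these inputs, $\tau_{\rm ref} \approx 6.5 \times 10^4$ s, i.e., somewhat under a day, consistent with the days-to-weeks range quoted above.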
Our values of \mbox{ ${\rm \tau _{ref}}$ } and \mbox{ $ {\rm N_{ref} } $ } (given by \mbox {$ {\rm T_{sp} } $ }/\mbox{ ${\rm \tau _{ref}}$ }, where \mbox {$ {\rm T_{sp} } $ } is the time span of observation) are listed in columns (6) and (7) of Table 3. We do not have any pulsar for which the expected time scale of fluctuation is larger than the time span of observation. On the basis of the estimates of \mbox{ $ {\rm N_{ref} } $ }, the data are divided into 3 broad categories: A ($ \mbox{ $ {\rm N_{ref} } $ } \ge 10 $), B ($ \mbox{ $ {\rm N_{ref} } $ } \sim 5-10 $) and C ($ \mbox{ $ {\rm N_{ref} } $ } < 5 $). In a similar way, a categorization is made based on the number of measurements: A ($ \mbox{ $ {\rm N_{ep} } $ } \ge 20 $), B ($ \mbox{ $ {\rm N_{ep} } $ } \sim 10-20 $) and C ($ \mbox{ $ {\rm N_{ep} } $ } < 10 $). These categories are listed in columns (8) and (9), respectively, of Table 3. The data which have `C' for either of the two categories are considered to be of poor statistical quality. These include PSRs 1540$-$06, 2016+28 and 2310+42, which have only a few cycles of fluctuations (mainly due to their low space velocities), data from the initial session of PSR B1133+16 (\mbox{ $ {\rm N_{ep} } $ } = 6) and PSRs 1237+25, 1508+55 and 1929+10. From Table 3, we find 7 data sets in category A (both in terms of \mbox{ $ {\rm N_{ep} } $ } and \mbox{ $ {\rm N_{ref} } $ }), and 11 with reasonably good statistical reliability. These 18 data sets are used while comparing our results with the predictions. \newpage \section{Non-ISS Contributions to the Modulation Indices} {\bf (a) Measurement Noise:} First of all, we consider the effect due to various sources of noise involved in the measurement process. 
These include (i) errors due to the Gaussian fitting done to the ACF (\mbox{ $ \sigma _{mod} $}) $-$ relevant for \mbox{$ \nu _d $ } and \mbox{$ \tau _d $~}, (ii) ``DISS noise'' or estimation errors due to the finite number of scintles in the dynamic spectrum ($\mbox{ $ \sigma _{est} $}$) $-$ relevant for \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F, and (iii) errors due to the flux calibration (\mbox{ $ \sigma _{cal} $}) $-$ relevant for F. The techniques for estimation of these quantities are discussed in detail in Paper I. The time series of \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F (see Figs. 4(a)$-$(x) of Paper I) give some idea about these noise sources, where the uncertainties in \mbox{$ \nu _d $ } and \mbox{$ \tau _d $~} are given by $ \sqrt { \mbox{ $ \sigma ^2 _{mod} $} + \mbox{ $ \sigma ^2 _{est} $} } $ and that in F is given by $ \sqrt{ \mbox{ $ \sigma ^2 _{cal} $} + \mbox{ $ \sigma ^2 _{est} $} } $. The effect of these noise sources is an apparent increase in the modulations, and therefore the measured modulation indices ($ m_b $, $ m_t $ and $ m_r $) need to be corrected for this. We estimate the noise modulation indices (\mbox{${ m_{noise} }$\,}) as the typical fractional uncertainties of these quantities and these are given in columns (3), (4) and (5) of Table 4. The measured modulation index (\mbox{${ m_{meas} }$\,}) is then given by \begin{equation} (m _{meas}) ^2 ~ = ~ (m _{riss}) ^2 ~ + ~ (m _{noise}) ^2 \end{equation} \noindent The RISS-induced modulations (\mbox{${ m_{riss} }$ }) of \mbox{$ \nu _d $ }, \mbox{$ \tau _d $~} and F obtained in this manner are given in columns (6), (7) and (8) of Table 4. Since noise modulation indices are typically 0.1 for our data, their contributions to the measured modulation indices are usually marginal. 
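The correction above amounts to subtracting the noise contribution in quadrature. A minimal sketch (the function name is hypothetical, and the index values are illustrative, of the order quoted in the text, not entries from Table 4):

```python
import math

def riss_index(m_meas, m_noise):
    """RISS-induced modulation index from m_meas^2 = m_riss^2 + m_noise^2."""
    if m_noise >= m_meas:
        return 0.0   # noise accounts for the entire measured modulation
    return math.sqrt(m_meas**2 - m_noise**2)

# With a typical noise index of ~0.1 the correction is marginal for large
# indices, and more visible for small ones:
print(riss_index(0.44, 0.10))   # ~0.43
print(riss_index(0.17, 0.10))   # ~0.14
```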
The only exceptions are the \mbox{$ \tau _d $~} modulations of \mbox{$ {\rm PSR ~ B0823+26(II) } $ } and PSR B1929+10, for which \mbox{${ m_{noise} }$\,} is slightly larger than the estimate of $ m_t $, and the flux modulation of \mbox{$ {\rm PSR ~ B1133+16(I) } $ } with $ \mbox{${ m_{noise} }$\,} \approx m_r $. Note that the last two data sets have poor statistics due to the limited number of measurements ($ \mbox{ $ {\rm N_{ep} } $ } < 10 $) as described earlier. Further, for part of the data of PSR B0834+06 (II and III) and PSR B2310+42, corrected estimates of $ m_t $ are significantly lower ($ < 0.1 $) compared to their uncorrected values. Excluding these 6 measurements and the data with poor statistical reliability, global averages of modulation indices are: \mbox{${ \langle m_b \rangle }$ } = 0.36, \mbox{${ \langle m_t \rangle }$ } = 0.17 and \mbox{${ \langle m_r \rangle }$ } = 0.44, very close to those obtained from the direct measurements. {\bf (b) Effect of variable Faraday rotation on flux density modulations:} Since the ORT is sensitive only to linearly polarized radiation with the electric field in the North-South plane, we need to consider the apparent flux modulation index due to epoch-to-epoch variations of Faraday rotation (due to the Earth's ionosphere). A significant fraction of the radiation from most pulsars is known to be linearly polarized, and our sample consists of pulsars with fractional linear polarization (at 400 MHz) ranging from 0.1 to 0.8. In Table 5, column (3) gives the fraction of linearly polarized radiation (\mbox{${ m_{lin} }$\,}) at 400 MHz, and the position angle (PA) swing across the pulse profile is given in column (4). These are measurements reported in the literature (Gould 1994; Manchester \& Taylor 1977; Hamilton et al. 1977). Rotation measures (RM) of our pulsars are listed in column (5) of Table 5. Adopting the method described in Gupta et al. 
(1993), we have estimated the apparent flux modulation index (\mbox{${ m_{r,pol} }$\,}) for each pulsar. This method takes into account the differential Faraday rotations across the observing band (\mbox{$ {\rm B_{obs} } $\,}) and across the pulse profile (\mbox{$ {\rm \tau _{pulse} } $\,}), and estimates the worst case values of \mbox{${ m_{r,pol} }$\,} (rotation angle variations across \mbox{$ {\rm B_{obs} } $\,} and \mbox{$ {\rm \tau _{pulse} } $\,} are treated as approximately linear, and the ionospheric contribution to RM is assumed to be $ \sim $ 1 $ {\rm rad ~ m ^ {-2} } $ ). The values of \mbox{${ m_{r,pol} }$\,} are listed in column (6) of Table 5. The flux modulation indices (\mbox{${ m_{r,riss} }$ }) given in column (8) of Table 4, which are already corrected for the contribution due to noise modulations, are further corrected for the contribution due to \mbox{${ m_{r,pol} }$\,}, and the new values of \mbox{${ m_{r,riss} }$ } are given in column (7) of Table 5. For 10 pulsars, \mbox{${ m_{r,pol} }$\,} \mbox{\raisebox{-0.1ex}{$\scriptscriptstyle \stackrel{<}{\sim}$\,}} 0.05, and therefore this effect can be ignored. For 6 pulsars, $ \mbox{${ m_{r,pol} }$\,} \sim 0.1 $ to 0.2, but this is much smaller than their observed flux modulation indices and hence the effect is only marginal. For PSRs 1237+25 and 1929+10, for which a substantial fraction of the radiation is linearly polarized (with fractional linear polarizations of 0.56 and 0.79 respectively at 408 MHz), \mbox{${ m_{r,pol} }$\,} is estimated to be very large ($ \sim 0.4-0.5 $). We also note that for these pulsars, the measured values of $ m_r $ are substantially larger than those of the rest, substantiating the effect of Faraday rotation. On applying the correction, we get $ m_r \approx 0.42 $ for PSR B1929+10, and $ m_r \approx 0.55 $ for PSR B1237+25. 
{\bf (c) Effect of the Earth's orbital motion on modulations of scintillation time scale:} The scintillation pattern speed (\mbox{ $ V_{iss} $ }), which determines the scintillation time scale (\mbox{$ \tau _d $~}), is predominantly due to pulsar's proper motion. However, in the exceptional cases of pulsars with low proper motions, contributions due to the Earth's orbital motion (\mbox{$ V_{obs} $ }) around the Sun and the bulk flow of the density irregularities (\mbox{$ V_{irr} $ }) may also turn out to be significant, which will modify the `intrinsic' fluctuations of scintillation time scale caused by RISS. In order to identify data where these effects may be significant, we quantify the effect of Earth's motion as the expected fractional variation in \mbox{ $ V_{iss} $ }, which is computed as the ratio of change in the transverse component of Earth's motion (\mbox{ $ \Delta V_{obs _{\bot }} $}) over the observing time span (\mbox {$ {\rm T_{sp} } $ }) to the scintillation speed (\mbox{ $ V_{iss} $ }) computed from average values of \mbox{$ \nu _d $ } and \mbox{$ \tau _d $~}. The values of \mbox{ $ \Delta V_{obs _{\bot }} $} and \mbox{ $ {\rm { \delta t _{vobs} } } $\,} = \mbox{ $ \Delta V_{obs _{\bot }} $} / \mbox{ $ V_{iss} $ } are given in columns (3) and (4) of Table 6. Estimates of \mbox{ $ {\rm { \delta t _{vobs} } } $\,} range from 0.01 to 0.24, but for most pulsars it can be ignored in comparison to $ m_t $. Only exceptions are PSRs B1540$-$06 and B1604$-$00, for which $ m_t $ values are comparable to their \mbox{ $ {\rm { \delta t _{vobs} } } $\,}, and therefore modulations of their \mbox{$ \tau _d $~} measurements are not very reliable. It is not possible to get a similar estimate for the effect due to motion of the medium, but it is known to be significantly lower in comparison to the Earth's motion and pulsar's proper motion. Bondi et al. 
(1994), based on their one-year flux modulation studies of low frequency variables, argue that \mbox{$ V_{irr} $ } $ < $ 10 $ {\rm km ~ secs ^ {-1} } $. Therefore we assume \mbox{$ V_{irr} $ } $ \sim $ 10 $ {\rm km ~ secs ^ {-1} } $ and estimate the expected modulation in \mbox{$ \tau _d $~} due to it as \mbox{ $ {\rm { \delta t _{virr} } } $\,} = \mbox{$ V_{irr} $ }/\mbox{ $ V_{iss} $ }. Our estimates of \mbox{ $ {\rm { \delta t _{virr} } } $\,} are given in column (5) of Table 6, from which one can see that the effect is significant only for PSR B1604$-$00, for which the values of $ m_t $ and \mbox{ $ {\rm { \delta t _{virr} } } $\,} are comparable. For PSRs B1540$-$06, B2016+28, B2310+42 and B2327$-$20, though \mbox{ $ V_{iss} $ } $ < $ 100 $ {\rm km ~ secs ^ {-1} } $, the values of \mbox{ $ {\rm { \delta t _{virr} } } $\,} are considerably lower than the measurements of $ m_t $ and hence can cause only a marginal increase in the RISS-induced \mbox{$ \tau _d $~} modulations. Thus modulations of \mbox{$ \tau _d $~} due to the Earth's orbital motion and/or the motion of the density irregularities are significant only for two pulsars. Nevertheless, neither of the effects is reflected as a large value of $ m_t $ for these pulsars. {\bf (d) Effect of intrinsic flux variations on the flux modulation index:} It is generally believed that pulsar flux variations seen at time scales of days to weeks are due to RISS. But if there are some intrinsic flux variations occurring over similar time scales, then the measured values of $ m_r $ will be overestimates of flux modulations due to RISS. Observations so far have not conclusively established the occurrence of such intrinsic flux variations. Another possibility is variations over time scales intermediate between our typical durations of observation (2$-$3 hours) and the interval between successive measurements (1$-$2 days). However, we do not find any compelling reason to consider such an effect. 
Recent studies of flux monitoring suggest that pulsar fluxes are stable over time scales larger than the refractive time scales (e.g. Kaspi \& Stinebring 1992). Though the present observations show some evidence for flux variations over time scales longer than our typical time spans of observations (see Paper I for a discussion on long-term stability of flux densities), such effects can be ignored in the present context. While we do not totally rule out any hitherto unrecognized form of intrinsic flux variations, in the absence of any other information, we assume that the observed flux modulations are largely due to RISS. {\bf (e) Modulations of decorrelation bandwidth:} Unlike the case with the scintillation time scale (\mbox{$ \tau _d $~}) and the flux density (F), there are no non-ISS effects in our data that can cause modulations of decorrelation bandwidth (\mbox{$ \nu _d $ }). Our data are in general free from various kinds of man-made radio noise (mainly due to the geographical location of the ORT). The fraction of data corrupted by different kinds of RFI (radio frequency interference) seldom exceeds a few percent, and such data are excluded from the analysis. Sample data presented in Figs. 1(a)$-$(h) and 3(a)$-$(m) of Paper I give some idea about the typical quality of our data. The modulations of \mbox{$ \nu _d $ } can result from phase gradient and/or curvature effects, whereas only the latter is relevant for \mbox{$ \tau _d $~} and F. But the theoretical treatments incorporate both effects in predicting the modulation index of \mbox{$ \nu _d $ }. It is possible to correct the measured \mbox{$ \nu _d $ } at a given epoch for the refraction due to the gradient effects, as seen from the drifting features in dynamic spectra. This issue is discussed in \S 2.2.1. 
Our analysis shows that modulation indices of corrected \mbox{$ \nu _d $ }, though somewhat lower than those of measured \mbox{$ \nu _d $ }, are considerably larger than the Kolmogorov predictions given in Table 1. Further, part of our data are associated with ``persistent drifts'' or non-zero values of mean refractive angles (this aspect is discussed in \S 2.3), which go against the expectations based on simple models. PSRs B0834+06 (excluding data from session IV), B1919+21, B1133+16, B1604$-$00 and B2045$-$16 belong to this category. It is unclear, however, whether in such cases the measured modulation indices of \mbox{$ \nu _d $ } signify their true values. But we do not consider this to be a strong enough reason to exclude them from the present discussion. \end{appendix}
\section{Introduction} The purpose of \cite{speck-strain} was to construct solutions to the special relativistic Boltzmann equation as corrections of local relativistic Maxwellians, where the thermodynamic fields evolve in time according to the relativistic Euler fluid equations. To close the relativistic Euler system we add an equation of state. It turns out that in the context of \cite{speck-strain} the right choice is obtained by extrapolating the relations that hold between the thermodynamic quantities of the global equilibria of the relativistic Boltzmann equation (known as J\"uttner equilibria). We may call this the kinetic equation of state. It is given by \begin{equation} \label{state1} p=k_B n \theta = m_0 c^2 \frac{n}{\beta}, \end{equation} \begin{equation} \label{state2} \rho= m_0 c^2 n \frac{K_1(\beta)}{K_2(\beta)}+3 p, \end{equation} \begin{equation} \label{state3} n = 4 \pi e^4 m_0^3 c^3 h^{-3}\exp \left( \frac{-\eta}{k_B}\right) \frac{K_2(\beta)}{\beta}\exp \left( \beta \frac{K_1(\beta)}{K_2(\beta)}\right). \end{equation} Here $n$ is the proper number density, $\theta$ is the temperature, $\beta= m_0 c^2/(k_B\theta)$ is a dimensionless quantity proportional to the inverse temperature --where $c$ is the speed of light, $m_0$ is the mass of gas particles and $k_B$ is Boltzmann's constant--, $p$ stands for the pressure, $\rho$ stands for proper energy density and $\eta$ stands for the entropy per particle, while $h$ is Planck's constant and $K_1,\ K_2$ are modified Bessel functions (which we define below). For the related computations see \cite{CercignaniKremer,deGroot,Synge} for instance. In fact, it can be checked that the same equation of state arises in the formal hydrodynamic limit of a kinetic model having J\"uttner distributions as equilibria. This is the case for BGK-type models; see \cite{BGK,AW,CercignaniKremer,Majorana, Marle2} for example. 
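As a quick numerical sanity check of \eqref{state1}--\eqref{state2}, the ratio $K_1/K_2$ can be evaluated directly from an integral representation of the modified Bessel functions. The sketch below is illustrative only: it works in units where $m_0c^2=k_B=1$, and the helper names and quadrature parameters are ad hoc assumptions, not part of the analysis.

```python
import math

def bessel_k(j, beta, upper=20.0, steps=40000):
    """K_j(beta) via K_j(b) = int_0^oo cosh(j r) exp(-b cosh r) dr,
    approximated with the trapezoidal rule on [0, upper]."""
    h = upper / steps
    total = 0.5 * (math.exp(-beta)
                   + math.cosh(j * upper) * math.exp(-beta * math.cosh(upper)))
    for i in range(1, steps):
        r = i * h
        total += math.cosh(j * r) * math.exp(-beta * math.cosh(r))
    return h * total

def pressure(n, beta):
    # Equation of state (first relation) with m0 c^2 = k_B = 1:  p = n / beta
    return n / beta

def energy_density(n, beta):
    # Second relation:  rho = n K_1(beta)/K_2(beta) + 3 p
    return n * bessel_k(1, beta) / bessel_k(2, beta) + 3.0 * pressure(n, beta)

# illustrative values: unit number density at beta = 1
print(pressure(1.0, 1.0), energy_density(1.0, 1.0))
```

For instance, at $\beta=1$ and unit number density this gives $p=1$ and $\rho = K_1(1)/K_2(1) + 3 \approx 3.37$.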
During their analysis, the authors of \cite{speck-strain} ran into some difficulties as several properties of the kinetic equation of state (local solvability for any of the thermodynamic variables in terms of any other two of them, hyperbolicity and causality of the relativistic Euler system under the kinetic equation of state) were required but no rigorous proof for them seemed to be available in the literature. These properties were proven in \cite{speck-strain} to hold true in certain temperature regimes and conjectured to be true in general. Then a pair of statements were raised in \cite{speck-strain}, which can be quoted as follows: \begin{Conjecture} The map $(n,\beta) \leftrightarrow (\eta,\rho)$ given implicitly by the kinetic equation of state (\ref{state1})--(\ref{state3}) is an auto-diffeomorphism of $]0,\infty[\times]0,\infty[$. \end{Conjecture} \begin{Conjecture} Under the kinetic equation of state (\ref{state1})--(\ref{state3}), there holds that: \begin{itemize} \item There exists a smooth function $f_{\rm kinetic}$ such that $p$ can be expressed in terms of $\eta$ and $\rho$ as\footnote{Let us mention that we use the term ``kinetic equation of state'' in a broader sense than what is done in \cite{speck-strain}, where this term is used to refer to $f_{\rm kinetic}$, not to \eqref{state1}--\eqref{state3}.} $p=f_{\rm kinetic}(\eta,\rho)$. \item The relativistic Euler system is hyperbolic. \item The relativistic Euler system is causal (the speed of sound $c_S:=c\sqrt{\frac{\partial p}{\partial \rho}|_{\eta}}$ is real and less than the speed of light). Furthermore, $ 0<c_S<c/\sqrt{3}$. 
\end{itemize} \end{Conjecture} It was also shown in \cite{speck-strain} that these conjectures can be phrased in terms of relations between modified Bessel functions as \begin{equation} \label{conjetura1} 3\frac{K_1(\beta)}{K_2(\beta)}+\beta \left(\frac{K_1(\beta)}{K_2(\beta)} \right)^2-\beta-\frac{4}{\beta}<0,\quad \forall \beta>0 \end{equation} for the first conjecture and \begin{equation} \label{conjetura2} 3<3+\beta\frac{K_1(\beta)}{K_2(\beta)}+\frac{4\frac{K_1(\beta)}{K_2(\beta)} + \beta\left(\frac{K_1(\beta)}{K_2(\beta)} \right)^2- \beta}{3\frac{K_1(\beta)}{K_2(\beta)} + \beta\left(\frac{K_1(\beta)}{K_2(\beta)} \right)^2- \beta-\frac{4}{\beta}}<+\infty ,\quad \forall \beta>0 \end{equation} for the second conjecture. The purpose of this paper is to show that the previous inequalities \eqref{conjetura1}--\eqref{conjetura2} indeed hold true, thus Conjecture 1 and Conjecture 2 are then shown to be valid. This implies in particular that the results in \cite{speck-strain} concerning the hydrodynamic limit of the relativistic Boltzmann equation hold true for any range of positive temperatures. The bound on the sound speed may also be important in cosmology: some results suggest that $c/\sqrt{3}$ may be a boundary value separating unstable fluid regimes from stable fluid regimes for the case of nearly-uniform solutions to the fluid equations in rapidly expanding spacetimes \cite{Rendall04, Speck2012}. Specifically, an equation of state of the form $p=c_S^2\rho$ was addressed in these papers. On one hand, evidence was found pointing that solutions may be unstable in the regime given by $c_S>c/\sqrt{3}$. On the other hand, global future stability for small data solutions when $0\le c_S \le c/\sqrt{3}$ and the spacetime is expanding at a sufficiently rapid rate has been shown in \cite{Speck2012}. 
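Before turning to the proofs, the inequalities \eqref{conjetura1}--\eqref{conjetura2} are easy to probe numerically on a sample of $\beta$ values. The sketch below (quadrature-based Bessel evaluation; helper names are ad hoc) is a plausibility check only, not a substitute for the arguments of the next section:

```python
import math

def bessel_k(j, beta, upper=20.0, steps=40000):
    """K_j(beta) from its integral representation, by the trapezoidal rule."""
    h = upper / steps
    total = 0.5 * (math.exp(-beta)
                   + math.cosh(j * upper) * math.exp(-beta * math.cosh(upper)))
    for i in range(1, steps):
        r = i * h
        total += math.cosh(j * r) * math.exp(-beta * math.cosh(r))
    return h * total

def ratio(beta):
    return bessel_k(1, beta) / bessel_k(2, beta)

def lhs_first(beta):
    # left-hand side of the first conjectured inequality; must be < 0
    r = ratio(beta)
    return 3.0 * r + beta * r**2 - beta - 4.0 / beta

def middle_second(beta):
    # middle expression of the second conjectured inequality; must exceed 3
    r = ratio(beta)
    return 3.0 + beta * r + (4.0 * r + beta * r**2 - beta) / lhs_first(beta)

for b in (0.25, 0.5, 1.0, 2.0, 5.0, 10.0):
    assert lhs_first(b) < 0.0 and middle_second(b) > 3.0
print("both inequalities hold on the sampled grid")
```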
\section{Proofs of the conjectures} Modified Bessel functions can be defined as \cite{librotablas,Synge} \begin{equation} \label{ch-representation} K_j(\beta) = \int_0^\infty \cosh (jr) e^{-\beta \cosh(r)}\ dr \ge 0, \quad j \in \mathbb{N}, \ \beta>0. \end{equation} We recall that these functions obey the following recurrence relation: \begin{equation} \label{recurrence} K_{j+1}(\beta)= \frac{2j}{\beta}K_j(\beta) + K_{j-1}(\beta), \quad j \in \mathbb{N}, \ \beta>0. \end{equation} We are now ready to state our first result. \begin{Theorem} \label{uno} Inequality \eqref{conjetura1} holds true for any $\beta>0$. As a consequence, Conjecture 1 holds true. \end{Theorem} This result follows at once from the following non-trivial fact (see \cite{Kunik2004}, Lemma 2.2 for instance): \begin{Proposition} \label{parte1} Let $K_1$ and $K_2$ be the modified Bessel functions defined by \eqref{ch-representation}. Then \begin{equation} \label{repform} \left(\frac{K_1(\beta)}{K_2(\beta)} \right)'=\left(\frac{K_1(\beta)}{K_2(\beta)} \right)^2 +\frac{3}{\beta}\frac{K_1(\beta)}{K_2(\beta)} -1 \end{equation} and the following inequality holds true: \begin{equation} \label{kunik} \left(\frac{K_1(\beta)}{K_2(\beta)} \right)'<\frac{3}{\beta^2}, \quad \forall \beta>0. \end{equation} \end{Proposition} A proof for Proposition \ref{parte1} can be found in \cite{Kunik2004}. See also \cite{Synge}, p. 89. Inequality \eqref{kunik} shows that the specific energy of a gas in equilibrium (defined as $\psi(\beta)=\frac{3}{\beta}+\frac{K_1(\beta)}{K_2(\beta)}= \frac{\rho}{n}$) is an increasing function of the temperature. The rest of this section is devoted to proving the following statement: \begin{Theorem} \label{dos} Inequality \eqref{conjetura2} holds true for any $\beta>0$. As a consequence, Conjecture 2 holds true. \end{Theorem} More precisely, the middle term in \eqref{conjetura2} equals $(\frac{\partial p}{\partial \rho}|_{\eta})^{-1}$. 
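Numerically, the identity \eqref{repform} and the bound \eqref{kunik} can be cross-checked by comparing a central finite difference of $K_1/K_2$ with the closed-form right-hand side (illustrative sketch; step sizes and helper names are ad hoc):

```python
import math

def bessel_k(j, beta, upper=20.0, steps=40000):
    """K_j(beta) from its integral representation, by the trapezoidal rule."""
    h = upper / steps
    total = 0.5 * (math.exp(-beta)
                   + math.cosh(j * upper) * math.exp(-beta * math.cosh(upper)))
    for i in range(1, steps):
        r = i * h
        total += math.cosh(j * r) * math.exp(-beta * math.cosh(r))
    return h * total

def ratio(beta):
    return bessel_k(1, beta) / bessel_k(2, beta)

def ratio_prime_identity(beta):
    # closed-form expression for (K_1/K_2)' from the derivative identity
    r = ratio(beta)
    return r * r + 3.0 * r / beta - 1.0

for b in (0.5, 1.0, 2.0, 5.0):
    fd = (ratio(b + 1e-4) - ratio(b - 1e-4)) / 2e-4   # central difference
    assert abs(fd - ratio_prime_identity(b)) < 1e-5   # derivative identity
    assert ratio_prime_identity(b) < 3.0 / b**2       # upper bound
print("derivative identity and upper bound verified on the grid")
```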
Then when the second inequality in \eqref{conjetura2} is valid we get the existence of $f_{\rm kinetic}$, the hyperbolicity of the relativistic Euler system and the fact that $c_S>0$ all at once. When the first inequality in \eqref{conjetura2} holds we get the bound $c_S<c/\sqrt{3}$. See \cite{speck-strain} for details. Prior to the proof of Theorem \ref{dos}, let us stress that the second inequality in (\ref{conjetura2}) is trivially satisfied once we have proved Theorem \ref{uno}, as the denominator in \eqref{conjetura2} never vanishes. Thus, it suffices to focus on the first inequality. Then, the first thing we do to prove the second conjecture is to rephrase inequality \eqref{conjetura2} as an equivalent inequality which is more amenable to the use of certain bounds on the ratio $K_1/K_2$. We state and prove this reformulation in the following result. \begin{Lemma} Showing that the first inequality in \eqref{conjetura2} is satisfied for every $\beta>0$ is equivalent to showing that \begin{equation} \label{reformulation} \left( \frac{K_1(\beta)}{K_2(\beta)}\right)^2 < 1- \frac{3}{4 + \beta \frac{K_1(\beta)}{K_2(\beta)}} \end{equation} holds for every $\beta>0$. \end{Lemma} \begin{proof} We start by using \eqref{repform} to write \eqref{conjetura2} as \begin{equation} \label{primera} 0< \beta\frac{K_1(\beta)}{K_2(\beta)} + \frac{\frac{K_1(\beta)}{K_2(\beta)}+\beta\left( \frac{K_1(\beta)}{K_2(\beta)}\right)'}{\beta\left( \frac{K_1(\beta)}{K_2(\beta)}\right)'-\frac{4}{\beta}}\quad \forall \beta>0. \end{equation} Note that thanks to \eqref{kunik} we can ensure that the denominator of the second term on the rhs is strictly negative. 
Due to this fact, \eqref{primera} is equivalent to \begin{equation} \label{segunda} \frac{K_1(\beta)}{K_2(\beta)} + \beta\left( \frac{K_1(\beta)}{K_2(\beta)}\right)'<-\beta^2 \frac{K_1(\beta)}{K_2(\beta)} \left( \frac{K_1(\beta)}{K_2(\beta)}\right)' + 4 \frac{K_1(\beta)}{K_2(\beta)} \end{equation} We now substitute the value of the derivative of the ratio $K_1/K_2$ given by \eqref{repform} in \eqref{segunda}, obtaining \begin{equation} \label{tercera} \left( \beta + \beta^2 \frac{K_1(\beta)}{K_2(\beta)} \right) \left\{ \left( \frac{K_1(\beta)}{K_2(\beta)}\right)^2 +\frac{3}{\beta}\frac{K_1(\beta)}{K_2(\beta)}-1\right\} < 3 \frac{K_1(\beta)}{K_2(\beta)}. \end{equation} Expanding the product and rearranging a bit, inequality \eqref{tercera} can be recast as \begin{equation} \label{cuarta} \beta \left( \frac{K_1(\beta)}{K_2(\beta)}\right)^2 \left(4 + \beta \frac{K_1(\beta)}{K_2(\beta)} \right)<\beta+\beta^2\frac{K_1(\beta)}{K_2(\beta)}. \end{equation} We divide by $\beta$ and by the third factor on the lhs of \eqref{cuarta} to arrive at \eqref{reformulation}. \end{proof} To begin with, we can show that \eqref{reformulation} holds for $0<\beta<1$. This is a consequence of the following estimate. \begin{Lemma} \label{ldos} The inequality $ 0\le K_1/K_2\le \beta/2 $ holds for all $\beta>0$. \end{Lemma} \begin{proof} Using \eqref{recurrence} for $j=1$ and the fact that $K_0\ge 0$ we deduce that $K_2(\beta)\ge 2 K_1(\beta)/\beta$ for all $\beta>0$. As $K_j(\beta)>0$ for all $\beta>0$ and $j \in \mathbb{N}$ the result follows. \end{proof} Now we bound the lhs of \eqref{reformulation} from above by $\beta^2/4$ using Lemma \ref{ldos}, and we bound its rhs from below by $1-3/4=1/4$ using $K_1/K_2\ge 0$; since $\beta^2/4<1/4$ for $0<\beta<1$, our claim follows. To proceed further we use again the recurrence relation \eqref{recurrence} to obtain more estimates on $K_1/K_2$. 
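Before doing so, note that the bound of Lemma \ref{ldos} becomes tight as $\beta \to 0$; a quick numerical probe (illustrative sketch, with the Bessel functions evaluated by quadrature from their integral representation):

```python
import math

def bessel_k(j, beta, upper=20.0, steps=40000):
    """K_j(beta) from its integral representation, by the trapezoidal rule."""
    h = upper / steps
    total = 0.5 * (math.exp(-beta)
                   + math.cosh(j * upper) * math.exp(-beta * math.cosh(upper)))
    for i in range(1, steps):
        r = i * h
        total += math.cosh(j * r) * math.exp(-beta * math.cosh(r))
    return h * total

for b in (0.05, 0.1, 0.5, 1.0, 3.0):
    r = bessel_k(1, b) / bessel_k(2, b)
    assert 0.0 < r < b / 2.0          # Lemma: 0 <= K_1/K_2 <= beta/2
print("0 < K_1/K_2 < beta/2 on the sampled grid")
```

At $\beta = 0.1$ the ratio is already within about $1\%$ of the bound $\beta/2$.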
We display a procedure which is highly reminiscent of what is done in \cite{Synge} to obtain estimates on the functions $K_j(\beta)$ by means of their asymptotic expansions for $\beta \gg 1$ (see also \cite{librotablas} p. 378). \begin{Lemma} \label{cuatro} The following estimates hold for $\beta \ge 1/2$: \begin{equation} \label{septima} \frac{128 \beta^3+48 \beta^2-15 \beta}{128 \beta^3 +240 \beta^2 +105 \beta -30} \le \frac{K_1(\beta)}{K_2(\beta)}\le \frac{4 \beta^2+3 \beta/2}{4 \beta^2 +15 \beta/2+3} \quad \forall \beta\ge 1/2. \end{equation} \end{Lemma} \begin{proof} We shall rewrite formula \eqref{ch-representation} by means of the change of variables $\frac{z^2}{4\beta}=\sinh^2(r/2)$. In this way, \begin{equation} \label{cacero} K_0(\beta)=\frac{e^{-\beta}}{\sqrt{\beta}} \int_0^\infty \frac{1}{\sqrt{1+\frac{z^2}{4\beta}}} e^{-z^2/2}\ dz \end{equation} and \begin{equation} \label{casuma} K_0(\beta)+K_1(\beta)=2 \frac{e^{-\beta}}{\sqrt{\beta}} \int_0^\infty \sqrt{1+\frac{z^2}{4\beta}} \ e^{-z^2/2}\ dz. \end{equation} Now we estimate the integrands. A full binomial expansion for the square roots in \eqref{cacero}--\eqref{casuma} would yield asymptotic expansions for $K_0$ and $K_0+K_1$. Here we will content ourselves with keeping just a few terms, which will be enough to prove Theorem \ref{dos}. Following \cite{Synge}, we recall that Taylor's theorem yields the following representation: \begin{equation} \label{binomial} (1+x)^{n-\frac{1}{2}} = 1+\sum_{i=1}^{p-1} {n-\frac{1}{2} \choose i} x^i+ p {n-\frac{1}{2} \choose p} I_p(x), \quad p=1,2,3,\ldots \end{equation} where \begin{equation} \label{remains} I_p(x)= \int_0^1 (1-t)^{p-1} (1+t x)^{n-\frac{1}{2}-p} x^p\ dt. \end{equation} We notice that $I_p(x)>0$ for $x>0$ and then the remainder term has the same sign as ${n-1/2 \choose p} x^p$ does, which is the first term of the binomial series that we have discarded. 
If we cut the expansion in a negative term we get an estimate from below, while we get an estimate from above if we cut the expansion in a positive term. Thus, if we substitute $x$ by $x^2$ in \eqref{binomial}--\eqref{remains} we obtain that \begin{equation} \label{Taylorbounds} 1-\frac{x^2}{2} \le \frac{1}{\sqrt{1+x^2}}\le 1-\frac{x^2}{2}+\frac{3x^4}{8},\ \forall x>0, \end{equation} \begin{equation} \label{Taylorbounds2} 1+\frac{x^2}{2}-\frac{x^4}{8} \le \sqrt{1+x^2}\le 1+\frac{x^2}{2},\ \forall x>0. \end{equation} We plug \eqref{Taylorbounds}--\eqref{Taylorbounds2} into \eqref{cacero}--\eqref{casuma} to obtain for all $\beta >0$ the following bounds: \begin{equation} \label{quinta} \sqrt{\frac{\pi}{2}} \frac{e^{-\beta}}{\sqrt{\beta}} \left( 1-\frac{1}{8 \beta}\right) \le K_0(\beta)\le \sqrt{\frac{\pi}{2}} \frac{e^{-\beta}}{\sqrt{\beta}} \left( 1-\frac{1}{8 \beta}+\frac{9}{128 \beta^2}\right) \end{equation} and \begin{equation} \label{sexta} 2\sqrt{\frac{\pi}{2}} \frac{e^{-\beta}}{\sqrt{\beta}} \left(1+\frac{1}{8 \beta}- \frac{3}{128 \beta^2}\right) \le K_0(\beta)+K_1(\beta)\le 2\sqrt{\frac{\pi}{2}} \frac{e^{-\beta}}{\sqrt{\beta}} \left(1+\frac{1}{8 \beta}\right). \end{equation} Note that the lower estimates that we get in such a way are positive for (say) $\beta \ge 1/2$. To conclude, we note that thanks to \eqref{recurrence} we have $K_2/K_1=2/\beta + K_0/K_1$; we rewrite $K_1/K_0$ as $(K_0 + K_1)/K_0 -1$ and use \eqref{quinta}--\eqref{sexta} to obtain \eqref{septima}. \end{proof} Substitution of the bounds given by \eqref{septima} into \eqref{reformulation} shows that, if \begin{equation} \label{septima2} \left( \frac{4 \beta^2+3 \beta/2}{4 \beta^2 +15 \beta/2+3}\right)^2 < 1- \frac{3}{4+\beta \frac{128 \beta^3+48 \beta^2-15 \beta}{128 \beta^3 +240 \beta^2 +105 \beta -30}} \end{equation} holds true for $\beta\ge 1/2$, then the inequality \eqref{reformulation} holds for $\beta \ge 1/2$ and we will be done. 
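The sandwich bounds \eqref{septima} and the substituted inequality \eqref{septima2} can be spot-checked numerically on a grid of $\beta \ge 1/2$ (illustrative sketch; helper names are ad hoc, and the Bessel functions are evaluated by quadrature from their integral representation):

```python
import math

def bessel_k(j, beta, upper=20.0, steps=40000):
    """K_j(beta) from its integral representation, by the trapezoidal rule."""
    h = upper / steps
    total = 0.5 * (math.exp(-beta)
                   + math.cosh(j * upper) * math.exp(-beta * math.cosh(upper)))
    for i in range(1, steps):
        r = i * h
        total += math.cosh(j * r) * math.exp(-beta * math.cosh(r))
    return h * total

def lower_bound(b):   # left-hand estimate of the lemma
    return (128*b**3 + 48*b**2 - 15*b) / (128*b**3 + 240*b**2 + 105*b - 30)

def upper_bound(b):   # right-hand estimate of the lemma
    return (4*b**2 + 1.5*b) / (4*b**2 + 7.5*b + 3)

for b in (0.5, 1.0, 2.0, 5.0, 10.0):
    r = bessel_k(1, b) / bessel_k(2, b)
    assert lower_bound(b) <= r <= upper_bound(b)
    # substituting the bounds into the reformulated inequality
    assert upper_bound(b)**2 < 1.0 - 3.0 / (4.0 + b * lower_bound(b))
print("sandwich bounds and reduced inequality hold on the grid")
```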
After some computations we see that \eqref{septima2} is equivalent to \begin{equation} \label{octava} \frac{16 \beta^4+12 \beta^3+ 9 \beta^2/4}{ 16 \beta^4 + 60 \beta^3 + 321 \beta^2/4+45 \beta + 9} <\frac{128 \beta^4 +176 \beta^3+225 \beta^2+105 \beta-30}{128 \beta^4+560 \beta^3+945 \beta^2 + 420 \beta -120}. \end{equation} Both numerators and denominators in \eqref{octava} are strictly positive in the range $\beta\ge 1/2$, so we can multiply through by the denominators in \eqref{octava}. Thus \eqref{octava} is finally equivalent to \begin{equation} \label{novena} \frac{3}{4}\left(3072 \beta^6 + 20992 \beta^5 +36936 \beta^4 + 25107 \beta^3 +6150 \beta^2 -540 \beta -360 \right) >0. \end{equation} Then \eqref{novena} is easily seen to be true for $\beta \ge 1/2$; in fact, it suffices to keep the last three terms, since $6150 \beta^2 -540 \beta -360$ is already positive for $\beta \ge 1/2$. Thus we have proved Theorem \ref{dos}. \section*{Acknowledgments} I would like to thank Jared Speck and Robert M. Strain for their support and their most useful comments on a first draft of this document. I also thank the referees of Commun. Pure Appl. Anal. for their most valuable comments and suggestions. I am grateful to Juan Soler for useful discussions about the contents of this paper.
\section{Acknowledgments} \begin{acknowledgments} We would like to thank Alexander Abanov for fruitful discussions and earlier collaborations. GMM was supported by the National Science Foundation under Grant OMA1936351. VPN was supported in part by NSF Grants Nos. PHY-2112729 and PHY-1820271. SG is supported by NSF CAREER Grant No. DMR-1944967 (SG). \end{acknowledgments} \bibliographystyle{my-refs}
\section{Introduction} More and more configuration parameters are provided to users, as systems in the cloud aim to support a wide variety of use cases~\cite{knobs}. For example, Hadoop~\cite{hadoop}, a popular big data processing system in the cloud, has more than 180 configuration parameters. The large number of configuration parameters leads to an ever-increasing complexity of configuration issues that overwhelms users, developers and administrators. This complexity can result in configuration errors~\cite{gmailconfig,amazonconfig,msconfig,fbconfig}. It can also result in unsatisfactory performance under atypical application workloads~\cite{ituned,RLweb,smarthillclimbing,starfish}. In fact, configuration settings have a strong impact on system performance~\cite{tuneSpark,tuneHadoop,tuneCassandra,tuneMysql}. To tap the performance potential of a system, system users need to find an appropriate configuration setting through configuration tuning. \begin{figure*} \centering \subfigure[MySQL under uniform reads]{ \label{fig:mysql} \includegraphics[width=0.3\textwidth,height=103pt]{mysql.eps}}% \subfigure[Tomcat under webpage navigation workload]{ \label{fig:tomcat} \includegraphics[width=0.34\textwidth,height=105pt]{tomcatSurf.eps}} \subfigure[Spark under HiBench-KMeans workload]{ \label{fig:spark} \includegraphics[width=0.32\textwidth,height=105pt]{sparkKMeansLine.eps}} \vspace{-12pt}\caption{\underline{Diverging} performance surfaces of \emph{MySQL}, \emph{Tomcat} and \emph{Spark}. (Best view in color)}\vspace{-9pt} \label{fig:perfFunc} \end{figure*} A good configuration setting can greatly improve the system performance. For instance, changing the \emph{query\_cache\_type} parameter of MySQL from zero to one can result in a more than \textbf{11-fold} performance gain for an application workload, as shown in Figure~\ref{fig:mysql}.
This performance gain can be significant if the workload recurs on a daily basis---this is very likely, especially for systems like databases or web servers. Nevertheless, configuration tuning for general systems is difficult due to the following three challenges.\vspace{6pt}\\ \textbf{Variety}. Systems for configuration tuning can be data analytic systems like Hadoop~\cite{hadoop} and Spark~\cite{spark}, database systems like MySQL~\cite{mysql}, or web servers like Tomcat~\cite{tomcat}. Various deployments for a system are possible in the cloud. Performance goals concerning users can be throughput, latency, running time, etc. Among the variety of performance goals, some need to be maximized, while others minimized. The configuration tuning process must also take the application workload into account, and there are a variety of possible workloads. Furthermore, various combinations of systems, performance goals and workloads are possible.\\ \textbf{Complexity}. Given different performance goals and different applied workloads, a deployed system has different performance surfaces for a given set of configuration parameters. Different systems can have highly diverse and complex performance surfaces. Take Figures~\ref{fig:mysql}, \ref{fig:tomcat} and \ref{fig:spark} for example: MySQL has no performance surface but only two lines, while Tomcat has a bumpy performance surface and Spark has a relatively smooth one. Previously, an unexpected performance surface was reported for PostgreSQL~\cite{ituned}, which cost system developers months of effort to reason about the underlying interactions.\\ \textbf{Overhead}. Configuration tuning involves solving a problem with a high-dimensional parameter space, thus a large sample set is commonly needed to find a solution~\cite{paraTuning}. However, collecting a large set of performance-configuration samples is impractical for configuration tuning.
As no performance simulator exists for general systems, the samples can only be generated through real tests on the deployed system. Hence, configuration tuning must restrain the overhead of sample collection. Besides, the time overhead of the optimization process must also be considered. Existing solutions do not fully address all the above challenges. Though sporadic proposals exist for automatically suggesting configuration settings for Web servers~\cite{smarthillclimbing,rrs,RLweb}, databases~\cite{ituned} and Hadoop~\cite{starfish,aloja} respectively, these solutions are not generally applicable to the variety of systems in the cloud. A few statistical or machine learning models are proposed for distributed systems~\cite{tkdeConfTune,resSurf}, but these models are not applicable to the complicated cases shown in Figures~\ref{fig:mysql} to \ref{fig:spark}. Configuration tuning is related to the problem of optimizing the performance of systems with high-dimensional parameters~\cite{paraTuningBook}, but previous research typically studies the problem based on simulations~\cite{paraTuning}; the overhead aspect is rarely considered to the extent required by configuration tuning for general systems. In this paper, we present BestConfig---an automatic configuration tuning system that can optimize performance goals for general systems in the cloud by adjusting configuration parameters and that can recommend the best configuration setting found within a given resource limit. A typical resource limit is the number of tests allowed for configuration tuning. To address the resource limit challenge, BestConfig adopts an effective sampling method with wide space coverage, and this coverage improves as more resources are provided. With the variety of systems and workloads, as well as the complexity of their interactions, it is impossible to build a useful performance model on a limited number of samples.
Hence, BestConfig adopts a search-based optimization algorithm and exploits the general properties of performance models. To facilitate the usage with the variety of deployed systems and workloads, we design for BestConfig a software architecture that has loosely coupled but extensible components and that adopts a sample-test-optimize process in closed loop. In the evaluation with extensive experiments, BestConfig can improve the throughput of Tomcat by 75\%, that of Cassandra~\cite{cassandra} by 63\%, that of MySQL by 430\%, and reduce the running time of the Hive-over-Hadoop~\cite{hive} join job by about 50\% and that of the Spark join job by about 80\%, as compared to the default configuration setting, simply by adjusting configuration settings. In sum, this paper makes the following contributions.\vspace{-3pt} \begin{itemize} \item To the best of our knowledge, we are the first to propose and the first to implement an automatic configuration tuning system for general systems. Our system successfully automates the configuration tuning for six systems widely deployed in the cloud. \item We propose an architecture ($\S$\ref{sec:arch}) that can be easily plugged in with general systems and any known system tests. It also enables the easy testing of other configuration tuning algorithms. \item We propose the divide-and-diverge sampling method ($\S$\ref{sec:DDS}) and the recursive-bound-and-search method ($\S$\ref{sec:RBS}) to enable configuration tuning for general systems within a resource limit. \item We demonstrate the feasibility and the benefits of BestConfig through extensive experiments ($\S$\ref{sec:eval}), while refuting the possibility of using common model-based methods such as linear or smooth prediction models for general systems ($\S$\ref{sec:model}).
\item We have applied BestConfig to a real use case ($\S$\ref{sec:case}) showing that, even when a cloud deployment of Tomcat has a full resource consumption rate, BestConfig can still improve the system performance solely by configuration tuning. \end{itemize} \section{Background and Motivation}% \label{sec:problem} In this section, we describe the background and the motivation of automatic configuration tuning for general systems. We also analyze the challenges in solving this problem. \subsection{Background} Configuration tuning is crucial to obtaining a good performance from a deployed system. A configuration setting that leads to the best performance of the system under one application workload might not be optimal under another application workload. Take Figure~\ref{fig:mysql} for example. Under the \emph{uniform-read} workload, the value of \emph{query\_cache\_type} is key to a good performance; but, as shown in Figure~\ref{fig:mysqlrw}, the \emph{query\_cache\_type} value has no obvious relation to the system performance for a \emph{Zipfian read-write} workload. In fact, the default setting generally cannot achieve the best performance of a system under all workloads. Configuration tuning is highly time-consuming and laborious. It requires users: (1) to find the heuristics for tuning; (2) to manually change the system configuration settings and run workload tests; and, (3) to iteratively go through the second step many times till a satisfactory performance is achieved. Sometimes, the heuristics in the first step might be misleading, as some heuristics are correct for one workload but not others; then, the latter two steps are in vain. In our experience of tuning MySQL, it once took five junior employees about half a year to find an appropriate configuration setting for our cloud application workloads. Configuration tuning is not easy even for experienced developers.
For example, it has been shown that, although PostgreSQL's performance under the workload of a TPC-H query is a smooth surface with regard to the configuration parameters of cache size and buffer size~\cite{ituned}, the cache size interacts with the buffer size in a way that took even the system developers great effort to reason about. In fact, the system performance models can be highly irregular and complicated, as demonstrated by Figures~\ref{fig:mysql} to \ref{fig:spark} and Figure~\ref{fig:mysqlrw}. How the configuration settings influence the system performance can hardly be anticipated by general users or expressed by simple models. {\large \textbf{Benefits}}. Automatic configuration tuning can greatly benefit system users. First, a good configuration setting will improve the system performance by a large margin, and automatic configuration tuning can help users find such a setting. Second, a good configuration setting is even more important for repetitive workloads, and recurring workloads are in fact a common phenomenon~\cite{repeatWork1,repeatWork2}. Third, automatic configuration tuning enables fairer and more useful benchmarking results, if the system under test is automatically tuned for a best configuration setting before benchmarking---because the system performance is related to both the workload and the configuration setting. \begin{figure}[!t] \centering \includegraphics[width=0.32\textwidth,height=105pt]{mysqlZipf.eps}\vspace{-6pt} \caption{The performance surface for MySQL under the \textbf{Zipfian} read-write workload. (Best view in color)}\vspace{-12pt} \label{fig:mysqlrw} \end{figure} \subsection{Challenges} Several challenges exist for automatic configuration tuning for general systems. These challenges must be addressed \emph{simultaneously}. \textbf{Variety of performance goals:} Users can have different performance goals for configuration tuning.
For a data analytical job on Spark, the performance goal is normally to reduce the running time, while for a data-access workload on MySQL, it would be to increase the throughput. Sometimes, users can have multiple performance goals, e.g., increasing the throughput and decreasing the average latency of individual operations for MySQL. Some users also require improving one performance goal, such as throughput, without worsening other metrics, such as memory usage. Besides, some performance goals need to be maximized, while some need to be minimized. \textbf{Variety of systems and workloads:} To tune the variety of systems and workloads, we cannot build or have users build performance models for the tuning purpose as previously done for Hadoop~\cite{starfish}. Some deployed systems are distributed, e.g., Spark and Hadoop, while some are standalone, e.g., Tomcat or one-node Hadoop. A system's performance model can be strongly influenced by the hardware and software settings of the deployment environment~\cite{acts}. Hence, the automatic configuration tuning system must handle the variety of deployment environments. It must enable an easy usage with the deployed systems and workloads. The heterogeneity of deployed systems and workloads gives rise to various performance models for tuning, leading to different best configuration settings and invalidating the reuse of samples across different deployments~\cite{acts,ottertune}. \textbf{High-dimensional parameter space:} As mentioned previously, many systems in the cloud now have a large number of configuration parameters, i.e., a high-dimensional parameter space for configuration tuning. On the one hand, it is impossible to get the complete image of the performance-configuration relations without samples covering the whole parameter space. On the other hand, collecting too many samples is too costly. The typical solutions to the optimization problem over high-dimensional spaces generally assume the abundance of samples.
For example, some solve the optimization problem with around \emph{10} parameters using about \emph{2000} samples~\cite{rrs,paraTuning}. Except through simulations, it is too costly to collect such an amount of samples in practice; thus, such solutions are not applicable to the configuration tuning problem of real systems. \textbf{Limited samples:} It is impractical to collect a large number of performance-configuration samples in practice. Besides, it is impossible to build a performance simulator for every system in the cloud, thus making simulation-based sample collection infeasible. We have to collect samples through real tests against the deployed systems. Thus, methods used for configuration tuning cannot rely on a large sample set. Rather, they should produce results even on a limited number of samples. And, as the number of samples is increased, the results should improve. \begin{figure*}[!t] \centering \includegraphics[width=0.95\textwidth]{processS.eps}\vspace{-6pt} \caption{The automatic configuration tuning process and the major components of BestConfig.}\vspace{-3pt} \label{fig:process} \end{figure*} \section{BestConfig Design} \label{sec:main} BestConfig is designed to automatically find, within a given resource limit, a configuration setting that can optimize the performance of a deployed system under a specific application workload. We call the process of adjusting configuration settings configuration tuning (or just tuning), and the system to be tuned the SUT (System Under Tune). \subsection{Design Overview} To satisfy users' various needs on performance optimization and simultaneously simplify the optimization problem, we adopt the utility function approach to amalgamating multiple performance optimization needs into a single maximization goal. BestConfig exposes an interface for users to express their performance optimization goals ($\S$\ref{sec:goal}).
To handle the variety of deployed systems and workloads, BestConfig is designed with a flexible architecture that has loosely coupled components, as sketched in Figure~\ref{fig:process}. These components are connected through data flows. The system manipulator component is an interface to interact with an SUT deployed in the target environment, while the workload generator component allows the easy plug-in of any target workload. With limited samples, the tuning process must collect samples by careful choices. Different combinations of deployed systems and workloads can require different sampling choices. Thus, we design the BestConfig architecture with a sampler component that interacts with the system manipulator on sample collection. Besides, the performance optimization process can introduce more information on which configuration settings to sample; thus, the performance optimizer component of BestConfig is designed to interact with the sampler to pass on such knowledge. The resulting architecture of BestConfig is detailed in Section~\ref{sec:arch}. To address the configuration tuning problem, we must solve the two subproblems of \textbf{sampling} and \textbf{performance optimization (PO)} simultaneously. Due to the challenges of high-dimensional parameter space and limited samples, the sampling subproblem differs from the random sampling in related works. It must be solved with additional conditions as detailed in Section~\ref{sec:subproblems}. The PO subproblem also faces similar conditions ($\S$\ref{sec:subproblems}). A feasible solution to automatic configuration tuning for general systems must address all the conditions for the two subproblems. BestConfig exploits the sampling information when solving the PO subproblem, and vice versa. In contrast, sampling and performance optimization are generally addressed \textbf{separately} in related works. 
We combine the sampling method DDS (Divide and Diverge Sampling) with the optimization algorithm RBS (Recursive Bound and Search) as a complete solution. DDS will sample for later RBS rounds in subspaces that are not considered in early RBS rounds. In this way, the requirement on wide space coverage for sampling is better satisfied. Furthermore, RBS exploits DDS in the bounded local search to reduce the randomness and increase the effectiveness of the search. In comparison, sampling was rarely considered and exploited for the local search step in related works. We detail DDS and RBS in Section~\ref{sec:ddsrbs}. In the following, we first describe the key steps for automatic configuration tuning ($\S$\ref{sec:steps}). \subsection{Key Steps for Configuration Tuning}% \label{sec:steps} Figure~\ref{fig:process} sketches the automatic configuration tuning process of BestConfig. The tuning process is in closed loop. It can run in as many loops as allowed by the resource limit. The resource limit is typically the number of tests that are allowed to run in the tuning process. It is provided as an input to the tuning process. Other inputs include the configuration parameter set and their lower/upper bounds (denoted as \emph{configuration constraints}). The output of the process is a configuration setting with the optimal performance found within a given resource limit. Given \emph{configuration constraints}, the configuration sampler generates a number of configuration settings as allowed by the resource limit. The configuration settings are then used to update the configuration setting of the SUT. For each configuration setting, a test is run against the SUT; and, the corresponding performance results are collected. The performance results are then transformed into a scalar performance metric through the utility function. All the sample pairs of the performance metric and the corresponding configuration setting are used by the performance optimization (PO) algorithm. 
The PO algorithm finds a configuration setting with the best performance. If the resource limit permits more tests and samples, the PO algorithm will record the found configuration setting and output a new set of configuration constraints for the next tuning loop. Otherwise, the tuning process ends and BestConfig outputs the configuration setting with the best performance found so far.\vspace{-9pt} \subsection{Performance Metric by {\large Utility Function}}% \label{sec:goal} BestConfig optimizes towards a scalar performance metric, i.e., one with a single value. The scalar performance metric is defined by a \emph{utility function}, with user-concerned performance goals as inputs. If only one performance goal is concerned, e.g., the throughput or the latency, the utility function is the identity function, i.e., $f(x)=x$, where $x$ is the performance goal. If multiple performance goals are concerned simultaneously, we can define the utility function as a combination of the goals. For example, if a user wants to increase the throughput and decrease the latency, the utility function can be defined as $f(x_t,x_l)=x_t/x_l$, where $x_t$ is the throughput and $x_l$ the latency. In case the throughput must be increased and the memory usage must not exceed the threshold $c_m$, an example utility function is $f(x_t,x_m)=x_t\times S(c_m-x_m-5)$, where $x_m$ is the memory usage and $S(x)$ is the sigmoid function $S(x)=\frac{1}{1+e^{-x}}$. BestConfig allows users to define and implement their own utility functions through a \texttt{Performance} interface~\cite{bestconf}. During the configuration tuning process, BestConfig maximizes the performance metric defined by the utility function.
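The utility functions just described can be sketched in a few lines. The Python below is our own illustration; the function names are ours, not part of BestConfig's \texttt{Performance} API.

```python
# Our own illustrative sketch of the utility functions described in the text;
# function names are ours, not BestConfig's Performance API.
import math

def sigmoid(x):
    # S(x) = 1 / (1 + e^{-x})
    return 1.0 / (1.0 + math.exp(-x))

def utility_single(throughput):
    # Single goal: the identity function f(x) = x.
    return throughput

def utility_ratio(throughput, latency):
    # Increase throughput and decrease latency: f(x_t, x_l) = x_t / x_l.
    return throughput / latency

def utility_threshold(throughput, mem_usage, mem_limit):
    # Increase throughput while memory must not exceed mem_limit:
    # f(x_t, x_m) = x_t * S(c_m - x_m - 5); the sigmoid factor drops
    # toward zero as memory usage approaches the threshold.
    return throughput * sigmoid(mem_limit - mem_usage - 5)
```

A goal to be minimized can be fed to the same machinery as its inverse, matching the transformation described in the text.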
Although users can have performance goals that need to be minimized, we can easily transform the minimization problem into a maximization one, e.g., taking the inverse of the performance metric.\vspace{-3pt} \subsection{Highly-Extensible Architecture}% \label{sec:arch} BestConfig has a highly flexible and extensible architecture. The architecture implements the flexible configuration tuning process in closed loop. It allows BestConfig to be easily used with different deployed systems and workloads, requiring only minor changes. The main components in BestConfig's architecture include \emph{Configuration Sampler}, \emph{Performance Optimizer}, \emph{System Manipulator} and \emph{Workload Generator}. \emph{Configuration Sampler} implements the scalable sampling methods. \emph{Performance Optimizer} implements the scalable optimization algorithms. \emph{System Manipulator} is responsible for updating the SUT's configuration setting, monitoring states of the SUT and tests, manipulating the SUT, etc. \emph{Workload Generator} generates application workloads. It can be a benchmark system like YCSB~\cite{ycsb} or BigOP~\cite{ycsb} running a benchmarking workload; or, it can be a user-provided testing system regenerating the real application workloads. The system manipulator and the workload generator are the only two components interacting with the SUT. For extensibility, the components in the architecture are loosely coupled. They only interact with each other through the data flow of configuration constraints, configuration settings and performance metrics. The configuration sampler inputs the system manipulator with sets of configuration settings to be sampled. The system manipulator inputs the performance optimizer with the samples of performance-configuration pairs. The performance optimizer adaptively inputs new configuration constraints to the configuration sampler. 
With such a design, BestConfig's architecture allows different scalable sampling methods and scalable PO algorithms to be plugged into the configuration tuning process. To cope with different SUTs or workloads, only the system manipulator and the workload generator need to be adapted. With the extensible architecture, BestConfig can even optimize systems emerging in the future, with only slight changes to the system manipulator and the workload generator.\vspace{-3pt} \subsection{An Example of Extending BestConfig} With the current Java implementation of BestConfig~\cite{bestconf}, one can define a new sampling method by implementing the {\tt ConfigSampler} interface. The {\tt ConfigSampler} interface accepts the sample set size limit and a list of configuration constraints as inputs, and returns a list of configuration settings. Similarly, to plug in a new PO algorithm, one can implement the {\tt Optimization} interface. The implementation of the {\tt Optimization} interface can accept a list of configuration settings and their corresponding performance metrics from the system manipulator. It must decide whether to continue the automatic configuration process or not, based on the given resource limit, e.g., the number of tests allowed. The best configuration setting can be output to a file, while the new configuration constraints are directly passed to the configuration sampler. At present, extending the system manipulator requires only changing a few shell scripts that interact with the SUT and the workload generator. The workload generator is loosely coupled with the other system components. Thus, it is highly convenient to integrate user-provided workload generation systems, e.g., the YCSB and HiBench bundled in the BestConfig source~\cite{bestconf}. Thanks to the highly extensible architecture, we have already applied BestConfig to six systems as listed in Table~\ref{tbl:paraNums}.
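To make the plug-in contract concrete, here is a hypothetical Python mirror of the two interfaces described above. BestConfig's actual interfaces are Java; the signatures and names below are our illustrative assumptions, not the real API.

```python
# Hypothetical Python mirror of the plug-in contract described in the text.
# BestConfig's real interfaces are Java; these signatures are illustrative only.
from abc import ABC, abstractmethod

class ConfigSampler(ABC):
    @abstractmethod
    def sample(self, set_size_limit, constraints):
        """Return a list of configuration settings (dicts mapping parameter
        name to value), given a sample-set size limit and a list of
        (name, low, high) configuration constraints."""

class Optimization(ABC):
    @abstractmethod
    def optimize(self, samples, tests_left):
        """samples: list of (setting, performance_metric) pairs. Return the
        best setting found so far and, if tests_left permits, a new list of
        configuration constraints for the next tuning loop (else None)."""

class MidpointSampler(ConfigSampler):
    # Trivial example plug-in: one setting at the midpoint of every range.
    def sample(self, set_size_limit, constraints):
        return [{name: (lo + hi) / 2 for (name, lo, hi) in constraints}][:set_size_limit]
```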
The corresponding shell scripts for these systems are provided along with the BestConfig source.\vspace{-6pt} \subsection{Subproblems: Sampling and PO}% \label{sec:subproblems} \textbf{The subproblem of sampling} must handle all types of parameters, including boolean, enumeration and numeric. The resulting samples must have a wide coverage of the parameter space. To guarantee resource scalability, the sampling method must also guarantee a better coverage of the whole parameter space if users allow more tuning tests to be run. Thus, the sampling method must produce sample sets satisfying the following three conditions: (1) the set has a wide coverage over the high-dimensional space of configuration parameters; (2) the set is small enough to meet the resource limit and to reduce test costs; and, (3) the set can be scaled to have a wider coverage, if the resource limit is expanded. \textbf{The subproblem of performance optimization (PO)} is to maximize the performance metric based on the given number of samples. The output configuration setting must improve the system performance over a given configuration setting, which can be the default one or one manually tuned by users. To optimize the output of a function/system, the PO algorithm must satisfy the following conditions: (1) it can find an answer even with a limited set of samples; (2) it can find a better answer if a larger set of samples is provided; and, (3) it will not get stuck in local sub-optimal areas and has the possibility of finding the global optimum, given enough resources. Two categories of PO algorithms exist, i.e., model-based~\cite{starfish,tkdeConfTune,ottertune} and search-based~\cite{rrs,smarthillclimbing,paraTuning}. In the design of BestConfig, we exploit the search-based methods. We do not consider model-based PO methods for the following reasons.
First, with the large number of configuration parameters, model-based methods would require a large number of samples to construct a useful model, thus violating the first condition of the PO subproblem. Second, model-based methods require the user to have a priori knowledge about the model, e.g., whether the model should be linear or quadratic, but it is practically impossible for general users to provide such a priori information for each combination of SUT, deployment setting and workload. Third, model-based methods have hyper-parameters, which strongly impact how the model works; but setting these hyper-parameters is as hard as tuning the configuration setting of the SUT. Without enough samples, precise a priori knowledge or carefully-tuned hyper-parameters, model-based methods will not work. In Section~\ref{sec:model}, we experiment with two model-based methods using limited samples. We demonstrate that these model-based methods hardly work in the configuration tuning problem with a resource limit.\vspace{-6pt} \section{DDS \& RBS in Cooperation}% \label{sec:ddsrbs} To address the two subproblems of automatic configuration tuning, we propose the divide-and-diverge sampling (DDS) method and the recursive bound-and-search (RBS) algorithm. Although the sampling and PO methods can work separately, DDS and RBS in cooperation enable the effective tuning process of BestConfig. \subsection{DDS: Divide \& Diverge Sampling}% \label{sec:DDS} \textbf{Parameter space coverage}. To guarantee a wide coverage over the high-dimensional parameter space, we \textbf{divide} the space into subspaces. Then we can randomly select one point from each subspace. Thus, each subspace is represented by one sample. With plain random sampling, i.e., without subspace division, it is very likely that some subspaces are not represented, especially when the dimension of the space is high.
Given $n$ parameters, we can divide the range of each parameter into $k$ intervals and collect combinations of the intervals. There are $k^n$ combinations, thus $k^n$ subspaces and samples. This way of sampling is called \emph{gridding} or \emph{stratified sampling}. Thanks to subspace division, gridding guarantees a complete coverage of the whole parameter space. But it also results in a sample set whose cardinality grows exponentially with the number of parameter dimensions. Hence, it violates the second requirement of the sampling subproblem. \textbf{Resource limit}. To meet the second requirement, we reduce the number of subspaces to be sampled. We observe that \textbf{the impact of an influential parameter's values on the performance can be demonstrated through comparisons of performances, regardless of other parameters' values}. For example, consider the performance model of MySQL as plotted in Figure~\ref{fig:mysql}. If the value of a parameter has great impacts on the performance, like $query\_cache\_type$, we actually do not need to examine all combinations of the parameter's values with every other parameter's values. Instead, we need only examine each potentially outstanding value of the parameter once and compare the resulting performance with other samples. Thus, given a limited resource, we consider each interval of a parameter once, rather than making full combinations of all intervals. After dividing parameter ranges into $k$ intervals, we do not make a full combination of all intervals. Rather, we take a permutation of intervals for each parameter; then, we align the interval permutations across parameters and get $k$ samples. For example, with two parameters $X$ and $Y$ divided into $6$ range intervals respectively, we can take $6$ samples as demonstrated in Figure~\ref{fig:dsprocess}. Each range interval of $X$ is represented exactly once by the sample set. So is that of $Y$.
For a given sample-set size, we \textbf{diverge} the set of sample points the most by representing each interval of each parameter exactly once. \textbf{Scalability}. The third requirement for sampling is to be scalable with regard to the resource limit, e.g., the number of tests allowed, while meeting the previous two requirements. In fact, the above \textbf{divide-and-diverge sampling} (DDS) method directly meets the third requirement. The value of $k$ is set according to the resource limit, e.g., $k$ being equal to the number of tests allowed. Increasing the number of allowed tests, the number of samples will increase equally; thus, the parameter space will be divided more finely and the space coverage will be increased. Furthermore, as the configuration tuning process is in closed loop, multiple rounds of sampling can be run. For the sake of scalability and coverage, DDS does not completely restart the sampling process by redividing the whole space. Rather, on a request of resampling, DDS reuses its initial division of the whole parameter space and samples in subspaces not considered previously, while diverging the sample points as much as possible. \textbf{Heterogeneity of parameters}. Although DDS assumes a continuous range for each parameter, it can be applied to parameters of boolean or categorical types by transforming them into parameters with continuous numeric ranges. Take the boolean type for example. We can first represent the parameter values of \emph{true} and \emph{false} by $1$ and $0$ respectively. Then, we let the values be taken from the range of $[0,2)$, to which the DDS method can be directly applied. We can map a sampled value within the ranges of $[0,1)$ and $[1,2)$ to the values of $0$ or $1$ respectively, i.e., to \emph{false} and \emph{true}. Similar mappings can be carried out for categorical or enumerative parameters as well.
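The divide-and-diverge procedure and the boolean mapping above can be sketched as follows. This is a minimal illustration in Python; the function names and the uniform draw within each chosen interval are our assumptions, not BestConfig's actual implementation.

```python
import random

def dds_sample(ranges, k, rng=random):
    """Divide-and-diverge sampling (sketch).

    ranges: list of (low, high) numeric ranges, one per parameter.
    k: number of intervals per parameter, which is also the sample count.
    Each of a parameter's k intervals is represented exactly once.
    """
    samples = [[None] * len(ranges) for _ in range(k)]
    for j, (lo, hi) in enumerate(ranges):
        width = (hi - lo) / k
        intervals = list(range(k))
        rng.shuffle(intervals)  # diverge: an independent permutation per parameter
        for i, itv in enumerate(intervals):
            # draw one point uniformly from the chosen interval
            samples[i][j] = lo + (itv + rng.random()) * width
    return samples

def to_bool(x):
    """Map a sample drawn from [0, 2) back to a boolean parameter value."""
    return x >= 1.0
```

With two parameters ranging over $[0,6)$ and $k=6$, \texttt{dds\_sample} returns six points in which every range interval of each parameter appears exactly once, mirroring the example of Figure~\ref{fig:dsprocess}.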
\begin{figure}[!t] \centering \includegraphics[width=0.33\textwidth]{algoS.eps}\vspace{-6pt} \caption{An example of running DDS and RBS for a 2D space.}\vspace{-12pt} \label{fig:dsprocess} \end{figure} \subsection{RBS: Recursive Bound \& Search}% \label{sec:RBS} Consider the performance surfaces in Figure~\ref{fig:tomcat} and Figure~\ref{fig:spark}. These performance plots have parameters with numeric values and continuous ranges; thus, the performance surfaces are continuous. \textbf{Given a continuous surface, there is a high possibility that we find other points with similar or better performances around the point with the best performance in the sample set.} Even if the performance surface is not smooth, e.g., that in Figure~\ref{fig:tomcat}, or if the performance surface is continuous only when projected to certain dimensions, e.g., that in Figure~\ref{fig:mysql} when constrained to specific subspaces, the above observation still applies. Based on this observation, we design the RBS (Recursive Bound and Search) optimization algorithm. \textbf{Bound step}. Given an initial sample set, RBS finds the point $C_0$ with the best performance. Then, it asks for another set of points sampled in the \emph{bounded space} around $C_0$. Based on the above observation, there is a high possibility that we will find another point (say $C_1$) with a better performance. We can again sample in a bounded space around $C_1$. We can recursively carry out this bound-and-sample step until we find no point with a better performance in a sample set. Here, there is the question of how large the \emph{bounded space} should be. According to the observation in Section~\ref{sec:DDS}, if a parameter value has an influential and positive impact on the performance, it should lead to a high performance in the sample set.
For the initial sample set, parameter values other than those represented by $C_0$ do not have impacts on the performance as influential and positive as those of $C_0$, so we should not consider them again given the limited resource. In other words, the \emph{bounded space} around $C_0$ shall not include parameter values represented by any other points in the sample set. In addition, a high performance might be achieved by any parameter value around those of $C_0$ but unrepresented in the sample set. RBS fixes the bounds of the \emph{bounded space} as follows. For each parameter $p_i$, RBS finds the largest value $p_i^f$ that is represented in the sample set and that is smaller than that of $C_0$. It also finds the smallest value $p_i^c$ that is represented in the sample set and that is larger than that of $C_0$. For the dimension represented by the parameter $p_i$, the bounded space has the bounds of $(p_i^f,p_i^c)$. Figure~\ref{fig:dsprocess} demonstrates this bounding mechanism of RBS. The same bounding mechanism can be carried out for every $C_j,j=0,1,...$ in each bound-and-sample step. By now, RBS has addressed the first two requirements for the PO subproblem. It finds an answer even with a limited set of samples by recursively taking the bound-and-sample step around the point $C_j$, which is the point with the best performance in a sample set. Let each bound-and-sample step be called a \emph{round}. RBS can adjust the size of the sample set and the number of rounds to meet the resource limit requirement. For example, given a limit of $nr$ tests, RBS can run in $r$ rounds with each sample set sized $n$. Given more resources, i.e., a larger number of allowed tests, RBS can carry out more bound-and-sample steps to search more finely in promising bounded subspaces. \textbf{Recursion step}.
To address the third requirement and avoid being stuck in a sub-optimal bounded subspace, RBS restarts from the beginning of the search by having the sampler sample in the complete parameter space, if no point with a better performance can be found in a bound-and-sample step. This measure also enables RBS to find a better answer if a larger set of samples is provided. This is made possible through searching around more promising points scattered in the huge high-dimensional parameter space.\vspace{3pt} \subsection{Why Combining DDS with RBS Works}% \label{sec:reason} In this section, we discuss why combining DDS with RBS works in the configuration tuning problem. The performance of a system can be represented by a measurable objective function $f(x)$ on a parameter space $D$. In DDS, $D$ is divided into orthogonal subspaces $D_{i}$. We define the distribution function of objective function values as: \begin{equation} \phi_{D_{i}}(y_0) = \frac{m(\{x\in D_{i}| f(x)\leq y_0\})}{m(D_{i})} \end{equation} where $y_0$ is the performance for the default configuration setting $P_0$ and $m(\cdot)$ denotes the \emph{Lebesgue measure}, a measure of the size of a set. For example, the \emph{Lebesgue measure} is area for a set of 2-dimensional points, volume for a set of 3-dimensional points, and so on. The above equation thus represents the portion of points in the subspace whose performance values are no greater than $y_0$. The values of $\phi_{D_{i}}(y_0)$ fall within the range of $[0,1]$. If a subspace has no points with performance values greater than $y_0$, it will have $\phi(y_0)$ evaluated to one. When all points in a subspace have performances greater than $y_0$, the subspace will have $\phi(y_0)$ evaluated to zero. DDS divides the whole high-dimensional space into subspaces and then samples in each subspace. A sample's performance can be either greater or no greater than the default performance $y_0$.
Assume that, within subspace $D_{i}$, all points with performances no greater than $y_0$ form the set $s_{i0}$ and those with greater ones form the set $s_{i1}$; then $m(D_{i})=m(s_{i0})+m(s_{i1})$. Given $\phi_{D_{i}}(y_0)$ for subspace $D_{i}$, randomly sampling according to the \emph{uniform distribution} will result in a $\phi_{D_{i}}(y_0)$ probability of getting points with no better performances and a $1-\phi_{D_{i}}(y_0)$ probability of getting points with better performances. Randomly sampling according to the uniform distribution, DDS will output samples with greater performances after around $n=1/(1-\phi_{D_{i}}(y_0))$ samples for subspace $D_{i}$. Although the exact value of $n$ is not known, the principle underlying uniform-random number generation guarantees that more samples will finally lead to the answer. In other words, given enough resources (i.e., samples), DDS will get a point with a greater performance than $P_0$. RBS bounds and samples around the point with the best performance in a sample set. This key step works because, if a point $C_j$ in a subspace $D_{i}$ is found with the best performance, it is highly probable that the subspace $D_{i}$ has a larger value of $1-\phi_{D_{i}}(y_0)$ than the other subspaces, as all subspaces are sampled the same number of times in all rounds of RBS. According to the definition of $\phi_{D_{i}}(y_0)$, subspaces with larger values of $1-\phi_{D_{i}}(y_0)$ have more points that lead to performances greater than $y_0$, as compared to subspaces with smaller values of $1-\phi_{D_{i}}(y_0)$. Thus, RBS can scale down locally around $C_j$ to search again for points with better performances. As a result, the bound step of RBS, recursively used with DDS, leads to a high probability of finding the point with the optimal performance. If the small-probability event occurs that the bound step runs in a subspace with a relatively small value of $1-\phi_{D_{i}}(y_0)$, the search can be trapped in a locally sub-optimal area.
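The bound step analyzed above can be written down concretely. The following is an illustrative Python sketch; the function name, the strict comparisons, and falling back to the full range when $C_j$ has no sampled neighbour on one side are our assumptions rather than BestConfig's exact implementation.

```python
def bound_around(best, samples, ranges):
    """Bounded space around the best-performing point (sketch).

    For each parameter i, the bound is (p_i^f, p_i^c): the largest sampled
    value below best[i] and the smallest sampled value above it.
    """
    bounds = []
    for i, (lo, hi) in enumerate(ranges):
        below = [s[i] for s in samples if s[i] < best[i]]
        above = [s[i] for s in samples if s[i] > best[i]]
        # assumption: fall back to the range edge when no neighbour exists
        bounds.append((max(below) if below else lo,
                       min(above) if above else hi))
    return bounds
```

Sampling within these bounds scales the search down around $C_j$ while excluding the parameter values represented by the other samples.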
The recursion step of RBS is designed to handle this situation by sampling in the whole parameter space again.\vspace{3pt} \section{Evaluation}% \label{sec:eval} We evaluate BestConfig on six widely deployed systems, namely Hadoop~\cite{hadoop}, Hive~\cite{hive}, Spark~\cite{spark}, Cassandra~\cite{cassandra}, Tomcat~\cite{tomcat}, and MySQL~\cite{mysql}. These systems are deployed for Huawei's applications named Cloud+ and BI. To generate workloads towards systems under tune, we embed widely adopted benchmark tools in the workload generator. We use HiBench~\cite{hibench} for Hive+Hadoop and Spark, YCSB~\cite{ycsb} for Cassandra, SysBench~\cite{sysbench} for MySQL and JMeter~\cite{jmeter} for Tomcat. Table~\ref{tbl:paraNums} summarizes the evaluated systems along with the corresponding numbers of tuned parameters respectively. The detailed lists of the tuned parameters, as well as the detailed descriptions of the SUTs and the evaluated workloads, are accessible on the Web~\cite{bestconf}. \begin{table}[!b] \centering \vspace{-6pt} \caption{The evaluated systems and parameters.}\vspace{-6pt}% \label{tbl:paraNums}% \begin{tabular}{lllc} \toprule[1.2pt] \multirow{2}*{\small \textbf{Software}} & \multirow{2}*{\small \textbf{Description}} & \multirow{2}*{\small \textbf{Language}} & \textbf{\small \# Parameters}\\ & & & \textbf{\small Tuned}\\ \midrule[0.8pt] \textbf{Spark} & Distributed computing & Scala & \textbf{30}\\ \midrule[0.2pt] \textbf{Hadoop} & Distributed computing & Java & \multirow{2}*{\textbf{109}}\\ \textbf{Hive} & Data analytics & Java & {\small (in all)}\\ \midrule[0.2pt] \textbf{Cassandra} & NoSQL database & Java & \textbf{28}\\ \midrule[0.2pt] \textbf{MySQL} & Database server & C++ & \textbf{11}\\ \midrule[0.2pt] \textbf{Tomcat} & Web server & Java & \textbf{13} \\ \bottomrule[1.2pt] \end{tabular} \end{table} Our experimental setup involves multiple local clusters of servers to deploy the six systems. 
If not specifically mentioned, the server is equipped with two 1.6GHz processors, each with two physical cores, and 32GB memory, running CentOS 6.0 and Java 1.7.0\_55. To avoid interference and comply with the actual deployment, we run the system under tune, the workload generator and other components of BestConfig on different servers. Further details can be found on the Web~\cite{bestconf}. In the evaluation, we answer five questions:\vspace{-2pt} \begin{enumerate} \item Why the configuration tuning problem with a resource limit is nontrivial ($\S$\ref{sec:model});\vspace{2pt} \item How well BestConfig can optimize the performance of SUTs ($\S$\ref{sec:evalTune});\vspace{2pt} \item How effective the cooperation of DDS and RBS is ($\S$\ref{sec:evalDDS});\vspace{2pt} \item How the sample set size and the number of rounds affect the tuning process ($\S$\ref{sec:evalSizeRound});\vspace{2pt} \item Whether the configuration setting found by BestConfig will maintain its advantage over the given setting in tests outside the tuning process ($\S$\ref{sec:evalWorkloads}). \end{enumerate} \begin{figure*}[t] \centering \begin{minipage}{0.43\textwidth} \centering \includegraphics[width=\textwidth]{qualityThroughput.eps}\vspace{-6pt} \label{fig:qualityThroughput} \end{minipage}\vspace{-6pt} \begin{minipage}{0.43\textwidth} \centering \includegraphics[width=\textwidth]{qualityDuration.eps}\vspace{-6pt} \label{fig:qualityDuration} \end{minipage}\vspace{-6pt} \caption{BestConfig's optimization capability with regard to the default configuration setting.}\vspace{-12pt} \label{fig:result} \end{figure*} \subsection{Infeasibility of Model-based Methods}% \label{sec:model} The difficulty of the configuration tuning problem can be demonstrated by the infeasibility of model-based methods. Common machine learning methods are model-based methods. They were previously used in optimization problems with only a limited number of parameters.
Based on the highly extensible architecture of BestConfig, we implement two PO algorithms adopting the machine learning approach. One is based on the COMT (Co-Training Model Tree) method~\cite{comt}, which assumes a linear relation between parameters and the performance. COMT divides the parameter space into subspaces and builds a linear model for each subspace. Therefore, many common linear models can be taken as special cases of COMT. Besides, COMT is a semi-supervised machine learning method, which is designed and expected to work with a limited number of samples. We train the COMT model using training sets of 100, 200 and 300 samples respectively. Each training set is randomly selected from a pool of 4000 samples, which are generated in a BestConfig tuning experiment over the Tomcat deployment described above. According to the COMT algorithm, the training not only exploits the training set, but also another set of unsampled points to reduce generalization errors. We validate the three learned models on the testing set with all samples in the sample pool. We summarize the prediction errors in Table~\ref{tbl:comt}, where \emph{Avg. err. rate} is the average error rate and \emph{Max. err. rate} the maximum error rate. Here, the \emph{error rate} is computed as the difference between the predicted and the actual performance divided by the actual performance. \begin{table}[!h] \vspace{-2pt} \centering \small \caption{Linear-model based performance predictions.}\vspace{-6pt}% \label{tbl:comt}% \begin{tabular}{ccc} \toprule[1.1pt] { \textbf{Sample set size}} & { \textbf{Avg. err. rate}} & { \textbf{Max. err. rate}}\\ \midrule[0.8pt] 100 & 14\% & 240\% \\ \midrule[0.2pt] 200& 15\% & 1498\% \\ \midrule[0.2pt] 300& 138\% & 271510\% \\ \bottomrule[1.1pt] \end{tabular} \end{table} From Table~\ref{tbl:comt}, we can see that the predictions are in fact highly inaccurate.
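For concreteness, the error-rate metric above amounts to the following one-liner (our restatement, taking the absolute difference):

```python
def error_rate(predicted, actual):
    """Relative prediction error: |predicted - actual| / actual."""
    return abs(predicted - actual) / actual
```

A prediction of 114 against an actual performance of 100 thus has a 14\% error rate.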
Although the first two average error rates look small, the corresponding models can make highly deviated predictions. The reason that more samples lead to worse predictions is twofold. One is model overfitting, and the other is the highly irregular performance surface of the SUT. The other machine learning model we have tried is the GPR (Gaussian Process Regression) method~\cite{ituned}, which assumes a differentiable performance function on parameters. It is the state-of-the-art model-based method adopted in a recent work on database tuning~\cite{ottertune}. GPR does not predict the performance for a given point. Rather, it constructs a model based on the covariances between sample points and outputs the points that are most likely to increase the performance the most, i.e., to achieve the best performance. We experiment with GPR using training sets of 100, 200 and 300 samples respectively. These sample sets are also collected from BestConfig tuning experiments over the Tomcat deployment described above. Among all the provided samples, GPR makes a guess on which point would lead to the best performance (\emph{best guess}). We then run a test on the best-guess point to get the actual performance. We compare the actual performance for GPR's best guess with that for the default configuration setting (\emph{default}). Besides, we compare GPR's best guess with the real best point that has the optimal performance among all the provided samples, denoted as \emph{real best}. The results are given in Table~\ref{tbl:gpr}. We can see that, although the predictions improve as the number of samples increases, GPR's predictions about best points are hardly accurate. \begin{table}[!h] \vspace{-6pt} \centering \small \caption{GPR-based predictions on best points.}\vspace{-6pt}% \label{tbl:gpr}% \begin{tabular}{ccc} \toprule[1.1pt] { \textbf{Sample set size}} & { \textbf{Bst. guess/dflt.}} & { \textbf{Bst. guess/rl.
bst.}}\\ \midrule[0.8pt] 100 & 93\% & 56\% \\ \midrule[0.2pt] 200& 104\% & 63\% \\ \midrule[0.2pt] 300& 121\% & 58\% \\ \bottomrule[1.1pt] \end{tabular}\vspace{-6pt} \end{table} Because of the complexity of performance models, common model-based optimization methods, e.g., COMT and GPR, do not work well in the configuration tuning problem. In essence, the assumptions of such algorithms do not hold for the SUTs. As a result, methods like COMT and GPR cannot output competitive configuration settings. Moreover, their results are not guaranteed to improve as the number of samples increases, i.e., they do not scale with the resource limit; instead, their results might worsen because of overfitting, violating the conditions of the PO subproblem. \subsection{Automatic Configuration Tuning Results}% \label{sec:evalTune} Figure~\ref{fig:result} presents the automatic configuration tuning results for Cassandra, MySQL, Tomcat, Spark and Hive+Hadoop using BestConfig. For the latter three systems, we ran two tuning experiments with different benchmark workloads on each system. In all experiments, we set the sample set size to be 100 and the round number to be one. As demonstrated by the results, BestConfig improves the system performances in all experiments. Even though the Hive+Hadoop system has 109 parameters to tune, BestConfig can still make a performance gain. In comparison to the other SUTs, the performance gain for Hive+Hadoop is relatively small. The underlying reason is that this SUT has almost 10 times as many configuration parameters as the other SUTs. However, \textbf{BestConfig can improve the tuning result as the size of the sample set is increased}. Setting the sample set size to be 500, we carry out another experiment on Hive+Hadoop under the HiBench Join workload. The result is demonstrated in Figure~\ref{fig:hiveSurf}. The BestConfig setting reduces the running time of the Join job by 50\%.
\begin{figure}[!b] \centering \vspace{-12pt} \includegraphics[width=0.48\textwidth]{hiveSurf.eps}\vspace{-6pt} \caption{BestConfig reduces the running time of HiBench-Join on Hive+Hadoop by 50\% within 500 tests.} \label{fig:hiveSurf} \end{figure} To sum up the results, BestConfig has improved the throughput of Tomcat by about 75\%, that of Cassandra by about 25\%, that of MySQL by about 430\%, and reduced the running time of the Hive join job by about 50\% and that of the Spark join job by about 80\%, solely by configuration adjustments. \textbf{Invalidating manual tuning guidelines}. The results produced by BestConfig have invalidated some manual tuning rules. For example, one guideline for manually tuning MySQL says that the value of \emph{thread\_cache\_size} should never be larger than 200. However, according to BestConfig's results demonstrated in Figure~\ref{fig:mysqlScatter}, we can set the parameter to the large value of $11987$, yet get a much better performance than by following the guideline. \begin{figure}[!b] \centering \vspace{-12pt} \includegraphics[width=0.45\textwidth]{mysqlScatter.eps}\vspace{-6pt} \caption{Throughputs for varied \emph{thread\_cache\_size} of MySQL, invalidating the manual tuning guideline.} \label{fig:mysqlScatter} \end{figure} \subsection{Cooperation of DDS and RBS}% \label{sec:evalDDS} We have evaluated the DDS (divide-and-diverge sampling) method of BestConfig in comparison with uniform random sampling and gridding sampling. We carry out the comparisons on Tomcat by tuning two configuration parameters. In the experiments, we use all sampling methods with RBS. We set the initial sample set size to be 100 and the round number to be 2. Thus, after sampling in the first round, each sampling method samples around a promising point in the second round, denoted as \emph{bound and sample}. The results are plotted in Figure~\ref{fig:dds}.
In the initial round, the three sampling methods sample points with similar best performances. The effectiveness and advantage of DDS are demonstrated in the bound-and-sample round. As shown in Figure~\ref{fig:dds}, DDS has sampled points with a best performance as much as three times that of the gridding and the uniform sampling. The advantage of DDS over the gridding is due to its diverging step, while that over the uniform sampling is due to the complete coverage of the sampling space. DDS considers 100 diversities for each parameter, while the gridding considers only 10. And, there is a possibility that the uniform sampling takes samples located in some restricted area of the space, while DDS is guaranteed to scatter samples across the space and with divergence. \begin{figure}[!t] \centering \includegraphics[width=0.435\textwidth]{dds.eps}\vspace{-12pt} \caption{Sampling method comparisons: DDS outperforms gridding and uniform in the latter round.}\vspace{-15pt} \label{fig:dds} \end{figure} \begin{figure}[!b] \centering \vspace{-12pt} \includegraphics[width=0.39\textwidth]{ddsvlhs.eps}\vspace{-12pt} \caption{DDS+RBS makes progress in earlier rounds than LHS+RBS.} \label{fig:ddsvlhs} \end{figure} We also compare DDS with LHS (Latin Hypercube Sampling)~\cite{lhs}. LHS can produce the same sample sets as DDS in one-time sampling. However, DDS differs from LHS in that DDS remembers previously sampled subspaces and resamples towards a wider coverage of the whole parameter space. This difference gives the DDS method its advantage of coverage and scalability over LHS. This advantage is demonstrated in Figure~\ref{fig:ddsvlhs} through a configuration tuning process for Tomcat. In this tuning process, we set the sample set size for each round to 100 (according to the experimental results of Table~\ref{tbl:sizeRound}). We can see that DDS in cooperation with RBS makes progress in earlier rounds than LHS with RBS.
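The memory that distinguishes DDS from LHS can be sketched as a sampler that records which intervals of each parameter were already represented and draws later rounds from the unused ones. This is an illustration under our own naming and simplifications; BestConfig's implementation details may differ.

```python
import random

class DDSSampler:
    """DDS resampling with memory (sketch): the parameter space is divided
    once into k intervals per parameter; each resampling round draws from
    intervals not represented before, widening coverage across rounds."""

    def __init__(self, ranges, k, rng=random):
        self.ranges, self.k, self.rng = ranges, k, rng
        self.unused = [list(range(k)) for _ in ranges]  # per-parameter memory

    def sample(self, batch):
        cols = []
        for j, (lo, hi) in enumerate(self.ranges):
            if len(self.unused[j]) < batch:           # memory exhausted:
                self.unused[j] = list(range(self.k))  # start a fresh pass
            self.rng.shuffle(self.unused[j])          # diverge the choice
            chosen = [self.unused[j].pop() for _ in range(batch)]
            width = (hi - lo) / self.k
            cols.append([lo + (c + self.rng.random()) * width for c in chosen])
        # align one chosen interval per parameter into each sample point
        return [list(point) for point in zip(*cols)]
```

Two rounds of three samples over six intervals jointly cover all six intervals of every parameter, whereas a memoryless sampler might revisit intervals it has already tested.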
\begin{figure}[!t] \centering \includegraphics[width=0.42\textwidth]{rbsVrrs.eps}\vspace{-12pt} \caption{RBS+DDS vs. RRS+LHS.}\vspace{-15pt} \label{fig:rbsVrrs} \end{figure} Furthermore, we try replacing RBS with RRS (Recursive Random Search)~\cite{rrs}. RRS is a search-based optimization method with exploitation and exploration steps, similar to the bound and recursion steps of RBS. However, RBS is designed with space coverage and scalability in mind, while RRS has no such preferred properties. Hence, when we compare RBS+DDS with RRS+LHS in experiments on a Tomcat deployment, the former outperforms the latter given the same resource limit. The results are demonstrated in Figure~\ref{fig:rbsVrrs}. \subsection{Varied Sample-Set Sizes \& Rounds}% \label{sec:evalSizeRound} To understand how the sample set size and the number of rounds affect the optimization process, we limit the number of tests to 100 and carry out five sets of experiments with varied sample-set sizes and rounds. We vary the sample-set sizes from 5 to 100 and the number of rounds from 20 to 1 accordingly. The experiments are run upon Tomcat using the webpage navigation workload, tuning 13 parameters. The first five rows of Table~\ref{tbl:sizeRound} summarize the results for each set of experiments, regarding the performance gains for the initial sampling-search round and the whole tuning process. As the size of the sample set increases, both performance gains increase. This fact implies that DDS is scalable. It also implies that, given a limited resource, we should set the sample-set size as large as possible before increasing the number of rounds. However, \textbf{a larger sample-set size for one round does not necessarily always indicate a better tuning result.} We have experimented with 500 samples for one round, tuning the Tomcat deployment.
We find that little performance gain is obtained over the tuning process with 100 samples for one round, as demonstrated by the last row of Table~\ref{tbl:sizeRound}. In comparison, the tuning process of Figure~\ref{fig:hiveSurf}, which also uses 500 samples for one round, makes much more performance gain than when using 100 samples for one round. The reason behind the difference lies in \textbf{the number of parameters}. In our experiments, Tomcat has only 13 parameters, while Hive+Hadoop has 109. The more parameters there are, the larger the sample-set size required. \begin{table}[!b] \centering \vspace{-12pt} \caption{Performance gains on varied sample-set sizes and rounds.}\vspace{-6pt}% \label{tbl:sizeRound}% \begin{tabular}{ccc} \toprule[1.2pt] { \textbf{Exps (SetSize*rounds)}} & { \textbf{Initial gain}} & { \textbf{Best gain}}\\ \midrule[0.8pt] 5*20& $\mathbf{0\%}$ &$\mathbf{5\%}$\\ \midrule[0.2pt] 10*10& $\mathbf{0\%}$ &$\mathbf{14\%}$\\ \midrule[0.2pt] 20*5& $\mathbf{26\%}$ &$\mathbf{29\%}$\\ \midrule[0.2pt] 50*2& $\mathbf{39\%}$ &$\mathbf{39\%}$\\ \midrule[0.2pt] 100*1& $\mathbf{42\%}$ &$\mathbf{42\%}$\\ \midrule[0.2pt] 500*1& $\mathbf{43\%}$ &$\mathbf{43\%}$\\ \bottomrule[1.2pt] \end{tabular} \end{table} \textbf{Rounds matter}. Although a large initial sample-set size is important, more rounds are necessary for better tuning results. Consider Figure~\ref{fig:ddsvlhs} again. Because of randomness, more rounds are not guaranteed to produce better results. For example, the second round of DDS+RBS does not actually produce a better result than the first round. However, more rounds can lead to better results, e.g., the third round and the fifth round of DDS+RBS in Figure~\ref{fig:ddsvlhs}. In fact, the third and the fifth rounds are executing the recursion step of RBS. This step is key to avoiding suboptimal results.
Thus, by searching in the whole parameter space again, the third and fifth rounds find configuration settings with higher performances. How much BestConfig can improve the performance of a deployed system depends on factors like the SUT, the deployment setting, the workload and the configuration parameter set. But BestConfig can usually tune a system better when given more resource and running more rounds than when given less resource and running fewer rounds. \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{workloadStatus.eps}\vspace{-9pt} \caption{A long-running test of the manually-tuned and BestConfig settings for Cassandra under Huawei's Cloud+ application workloads.}\vspace{-12pt} \label{fig:stable} \end{figure} \subsection{Stable Advantage of BestConfig Setting}% \label{sec:evalWorkloads} BestConfig generally adopts short tests in the tuning process. For long-running application workloads, it might seem that the iterative testing and configuration tuning process cannot work. We argue that, even though the automatic configuration tuning process for such workloads might be long, the performance gained from the long tuning process is still worthwhile. Besides, as the benchmarking community has proved theoretically and practically~\cite{JimAnon,tpchistory,tpc,spec,berkeleyview,bigop}, long-running application workloads can be represented by some short-running workloads. As demonstrated by our experience with an actual BestConfig deployment, short tests can represent long-running workloads, if the tests are specified and generated properly. We have deployed BestConfig to tune the Cassandra system for Huawei's Cloud+ applications. For confidentiality reasons, we simulated the application workloads using the YCSB benchmark, which is then integrated into the workload generator of BestConfig. In the automatic configuration tuning process, the simulated workload is run for about ten minutes.
We set the sample set size to be 60 and the round number to be one. As output by BestConfig, a configuration setting was found with about a $29\%$ performance gain over the setting tuned by Huawei engineers. Later, we applied the configuration setting found by BestConfig to Cassandra and ran the workload for about 40 minutes. A similar long-running test was carried out with the Huawei-tuned configuration setting as well. The resulting throughput is demonstrated in Figure~\ref{fig:stable}. As shown in Figure~\ref{fig:stable}, BestConfig's configuration setting keeps its advantage over the one set by Huawei engineers. In fact, BestConfig's setting has an average throughput of 9679 ops/sec, while Huawei's rule-based setting can only achieve one of 5933 ops/sec. Thus, BestConfig has actually made a $63\%$ performance gain. This performance improvement is made merely through configuration adjustments. In fact, BestConfig can usually tune a system to a better performance than manual tuning that follows common guidelines recommended for general system deployments. The reasons are twofold. First, such manual tuning might achieve a good performance for many system deployments under many workloads, but the tuned setting is usually not the best for a specific combination of SUT, workload and deployment environment. As demonstrated in Section~\ref{sec:evalTune}, some tuning guidelines might work for some situations but not others. Second, the number of configuration parameters is too large and the interactions within a deployed system are too complex to be comprehended by humans~\cite{asilomar}.\vspace{-3pt} \subsection{Discussion} Users ought to specify a resource limit in proportion to the number of parameters for tuning. Although BestConfig can improve the performance of a system based on a small number of samples, there is a minimum requirement on the number of samples. Consider Table~\ref{tbl:paraNums}.
If the user allows only 5 samples in a round for tuning 13 parameters, the user is not likely to get a good result. When the number of samples exceeds that of parameters, e.g., from the second row of Table~\ref{tbl:paraNums}, the performance of the system is improved noticeably. Similarly, for a system with more than 100 parameters to tune, BestConfig can only improve the system performance by about 5\% if only 100 samples are provided (Figure~\ref{fig:result}). However, when the sample set size is increased to 500, BestConfig can improve the system performance by about 50\% (Figure~\ref{fig:hiveSurf}). If we can reduce the number of parameters to tune, we can reduce the number of tuning tests and speed up the tuning process, since the number of parameters is related to the number of samples needed for tuning. A recent related work on configuration tuning has proposed to reduce the number of parameters through a popular linear-regression-based feature selection technique called Lasso~\cite{ottertune}. We consider integrating similar parameter reduction methods into BestConfig as future work. BestConfig can generally do a great job if given the whole set of parameters. Even if the set of parameters is not complete, BestConfig can generally improve the system performance as long as the set contains some parameters affecting the system performance. In case the set of parameters to tune is totally unrelated to an SUT's performance, BestConfig will not be able to improve the SUT's performance. Besides, BestConfig cannot improve an SUT's performance if (1) the SUT is co-deployed with other systems, which are not tuned by BestConfig and which involve a performance bottleneck affecting the SUT's performance; or, (2) the SUT with the default configuration setting is already at its optimal performance. However, if the above situations occur, it means that the SUT's performance cannot be improved merely through configuration adjustments.
Instead, other measures must be taken, such as adding influential parameters for tuning, removing bottleneck components or improving the system design.\vspace{-6pt} \section{Use Case: Tomcat for Cloud+}% \label{sec:case} BestConfig has been deployed to tune Tomcat servers for Huawei's Cloud+ applications. The Tomcat servers run on virtual machines, which run on physical machines equipped with ARM CPUs. Each virtual machine is configured to run with 8 cores, among which four are assigned to process the network communications. Under the default configuration setting, the four cores serving network communications are fully loaded, while the utilizations of the other four processing cores are about 80\%. Given such CPU behaviors, Huawei engineers had considered the current throughput of the system to be the upper bound, with no more improvement possible. Using BestConfig and setting the overall throughput as the performance metric for optimization, we then found a configuration setting that improves the performance of the deployment by 4\%, while the CPU utilizations remain the same. Later, stability tests demonstrated that the BestConfig setting sustains the performance improvement. The results of the stability tests are demonstrated in Table~\ref{tbl:tomcatOnArm}. We can observe improvements on every performance metric by using the BestConfig setting. Thus, BestConfig has made it possible to improve the performance of a fully loaded system by simply adjusting its configuration setting. It has expanded our understanding of the deployed systems through automatic configuration tuning.
\begin{table}[!t] \centering \caption{BestConfig improving performances of a fully-loaded Tomcat.}\vspace{-6pt}% \label{tbl:tomcatOnArm}% \begin{tabular}{lllr} \toprule[1.2pt] { \textbf{Metrics}} & { \textbf{Default}} & { \textbf{BestConfig}} & \textbf{Improvement}\\ \midrule[0.8pt] Txns/seconds& 978 & 1018 & $\mathbf{4.07\%\uparrow}$\\ \midrule[0.2pt] Hits/seconds& 3235 & 3620 & $\mathbf{11.91\%\uparrow}$\\ \midrule[0.2pt] Passed Txns& 3184598& 3381644& $\mathbf{6.19\%\uparrow}$\\ \midrule[0.2pt] Failed Txns& 165& 144& $\mathbf{12.73\%\downarrow}$\\ \midrule[0.2pt] Errors& 37& 34& $\mathbf{8.11\%\downarrow}$\\ \bottomrule[1.2pt] \end{tabular}\vspace{-12pt} \end{table} \section{Related Work}% \label{sec:related} The closest related works for BestConfig are the classic Latin Hypercube Sampling (LHS) method~\cite{lhs} and the recursive random search (RRS) algorithm~\cite{rrs}. DDS differs from LHS in that DDS remembers previously sampled subspaces and resamples towards a wider coverage of the whole parameter space. This difference leads to the DDS method's advantage of coverage and scalability over LHS. RBS differs from RRS in two aspects. First, RRS requires the users to set multiple hyper-parameters, which have strong impacts on the optimization results, but setting hyper-parameters is as hard as configuration tuning. Second, RRS searches a local subspace by examining one sample after another. Such a design is efficient only if the local search is limited to a small space, but this is generally not true for high-dimensional spaces. If the hyper-parameters are carefully set as for a narrow local search, then the space not examined would be too large. Such trade-off is difficult. Besides, searching a local space by taking one sample at a time involves too much randomness; in comparison, the local search of BestConfig takes advantage of the sampling method. One more crucial difference is that BestConfig exploits RBS and DDS together. 
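For reference, the classic LHS method against which DDS is compared can be sketched in a few lines of numpy. This is an illustrative implementation, not the one used in BestConfig, and the two parameter ranges are hypothetical:

```python
import numpy as np

def latin_hypercube(n, bounds, rng):
    """Classic LHS: one sample from each of n equal-width intervals per
    dimension, with the interval orderings shuffled independently."""
    d = len(bounds)
    # Stratified points in [0, 1)^d: per dimension, a shuffled stratum
    # index plus a uniform offset inside that stratum.
    u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
         + rng.random((n, d))) / n
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    return lo + u * (hi - lo)

rng = np.random.default_rng(42)
# Two hypothetical parameter ranges, e.g. a thread count and a cache size.
samples = latin_hypercube(6, [(1, 64), (16, 1024)], rng)
# LHS property: each dimension has exactly one sample per interval.
```

DDS extends this idea by remembering which subspaces earlier rounds have sampled, so that resampling widens coverage instead of repeating it.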
Quite a few past works have been devoted to automatic configuration tuning for Web systems. These works either choose a small number of parameters to tune, e.g., smart hill climbing~\cite{smarthillclimbing}, or require a huge number of initial tests~\cite{eurosysConf,paraTuning}, e.g., simulated annealing~\cite{sa} and genetic algorithms~\cite{ga}. Although constructing performance models might help find appropriate configuration settings~\cite{qog}, a large number of samples will be required for modeling in a large configuration parameter space. But collecting a large set of samples requires testing the SUT many times, which is a highly costly process. A related work uses reinforcement learning on the same tuning problem~\cite{RLweb}. It formulates the performance optimization process as a finite Markov decision process (MDP), which consists of a set of states and several actions for each state. The actions increase or decrease the values of individual parameters. As mentioned previously, the performance functions of deployed systems can be complicated, e.g., with many sudden ups and downs on the surface. A seemingly wise step with some performance gain might result in a bad final setting, due to the local-optimum problem. Works on automatic configuration tuning for database systems also exist. iTuned~\cite{ituned} assumes a smooth performance surface for the SUT so as to employ Gaussian process regression (GPR) for automatic configuration tuning. But the assumption can be inapplicable to other SUTs, e.g., Tomcat or MySQL, given some specific set of configuration parameters. The recent work of OtterTune~\cite{ottertune} also exploits GPR. It additionally introduces a feature-selection step to reduce the number of parameters, which reduces the complexity of the configuration tuning problem.
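The GPR approach underlying iTuned and OtterTune can be illustrated with a minimal numpy sketch of the textbook posterior-mean formula. This is not the implementation used in those works, and the sample data are synthetic:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel; this is where the smoothness
    assumption on the performance surface is encoded."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gpr_predict(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean of zero-mean GP regression: k_* K^{-1} y."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

# Hypothetical 1-D example: a performance metric measured at a few
# parameter values; sin() stands in for a smooth performance curve.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
mean = gpr_predict(x, y, np.array([1.5]))
# The GP interpolates smoothly between the observed samples.
```

When the real surface has sudden ups and downs, as observed for some SUTs, this smooth interpolation is exactly what breaks down.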
We are examining the possibility of integrating similar feature selection methods into BestConfig to reduce the number of configuration parameters before starting the tuning process. Automatic configuration tuning is also proposed for the Hadoop system. Starfish~\cite{starfish} is built upon a strong understanding of the Hadoop system and performance tuning. Thus, the method used in Starfish cannot be directly applied to other systems. Aloja~\cite{aloja} adopts common machine learning methods, exploiting a large database of samples. But samples are costly to obtain. As analyzed in Sections~\ref{sec:subproblems} and \ref{sec:model}, Aloja's approach is not applicable to the configuration tuning of general systems. \section{Conclusion}% \label{sec:conclude} We have presented the automatic configuration tuning system BestConfig. BestConfig can automatically find, within a given resource limit, a configuration setting that optimizes the performance of a deployed system under a specific application workload. It is designed with a highly flexible and extensible architecture, the scalable sampling method DDS and the scalable performance optimization algorithm RBS. As an open-source package, BestConfig is available for developers to use and extend in order to effectively tune cloud systems. We have used BestConfig to tune the configuration settings of six widely used systems and observed obvious performance improvements after tuning. Furthermore, tuning the Tomcat system on virtual machines in the Huawei cloud, BestConfig has actually made it possible to improve the performance of a fully loaded system by simply adjusting its configuration settings. These results highlight the importance of an automatic configuration tuning system for tapping the performance potential of systems. \begin{acks} We would like to thank our shepherd, Ennan Zhai, and the anonymous reviewers for their constructive comments and inputs to improve our paper.
We would like to thank the Huawei Cloud+ and Huawei BI teams for helping us verify BestConfig on their online applications. This work is in part supported by the National Natural Science Foundation of China (Grant No. 61303054 and No. 61420106013), the State Key Development Program for Basic Research of China (Grant No. 2014CB340402) and gifts from Huawei. \end{acks} \balance \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Two-dimensional (2D) carbon systems with mixed sp-sp$^2$ hybridization, i.e., graphyne and graphdiyne, have aroused great interest in the scientific community over the last thirty years as novel 2D carbon structures, \cite{Hirsch_NatMat_2010,baughman1987,Casari_Nanoscale_2016} paving the way for the ultimate goal of fabricating sp-hybridized carbon fragments, whose structural, optical and transport properties have been deeply explored \cite{Casari_Nanoscale_2016, Bonardi, Ravagnan, Zanolli}. These systems can form a variety of 2D crystals with different structure, sp/sp$^2$ ratio, density and porosity, and have been predicted to possess peculiar electronic properties, such as multiple Dirac cones in graphyne.\cite{Malko_PRL_2012} Synthetic chemistry attempts to develop sp-sp$^2$ carbon networks have relied on efficient methodologies based on a monomer-to-polymer approach. In this respect, one of the most used protocols has been the oxidative acetylenic coupling of macrocyclic carbon-rich precursors\cite{siemsen2000acetylenic}. However, direct coupling of the monomers resulted in cross-linking of the chains\cite{bunz1999polyethynylated}, which precludes the development of well-defined extended structures. To tackle this issue, Haley\cite{Haley_PureAppChem_2008} proposed the isolation of the reactive acetylene moieties and the assembly of the structure via an intramolecular cyclization approach, obtained through sequential Sonogashira cross-coupling reactions\cite{sonogashira2002development}. A graphdiyne film was first produced by G. Li et al.\cite{Li_2010_GDY} on a copper substrate through a cross-coupling reaction; several subsequent works reported fabrication in the form of flakes or powder, tested for potential applications in the fields of photocatalysis and nanoelectronics \cite{Yang_2013, Li_Graphyne_molecular_electronics2015, james_2018, Huang_2018, Hui_2019}.
The investigation of the structure of these systems is of fundamental importance, since graphdiyne has long been considered elusive and the poor stability of the sp carbon phase is still an important issue opposing the realization of extended and ordered 2D structures. In this context, Raman spectroscopy stands as the technique of choice to investigate the structure of carbon-based materials, providing access to the presence of sp carbon as well as to the structural properties and local bond order. The characteristic Raman signal of sp carbon and its structure-related behaviour allow a detailed investigation of sp and sp-sp$^2$ carbon systems.\cite{Milani_BeilsteinJ_2015,serafini2020raman} Despite the significant advancements in the synthesis and investigation of these materials, the fabrication of extended graphdiyne monolayers and the imaging of their atomic-scale structure are still a challenge. Scanning tunneling microscopy and spectroscopy (STM/STS), in combination with theoretical modelling, has the potential to unravel the atomic-scale structure and to provide deep insight into the surface electronic properties. Atomic-scale imaging by means of scanning probe techniques usually requires material growth in UHV conditions on atomically flat surfaces and in-situ characterization. In this framework, the rapidly growing on-surface synthesis technique has demonstrated tremendous potential for the high-precision bottom-up construction of low-dimensional carbon nanostructures and for atomic-scale imaging and characterization by surface science techniques \cite{zhang2012homo,zhang2015surface,doi:10.1021/ja510292b,doi:10.1002/smll.201603675,gao2013glaser,Brambilla17}. Its great advantage lies in fostering selective chemical reactions between molecular precursors on metal surfaces under ultra-high vacuum (UHV) conditions\cite{xing2019selective,kang2019surface}.
To this aim, molecular precursors are designed to favor adsorption and the subsequent on-surface homocoupling reaction, where the substrate acts as a catalytic template triggering the reaction under mild conditions. The targeted nanostructures are directly observed by surface-sensitive techniques, such as STM and atomic force microscopy (AFM)\cite{binning1986scanning}. An unambiguous demonstration of the power of this approach is the synthesis of atomically precise graphene nanoribbons\cite{cai2010atomically,Ruffieux2016489} (GNRs) with engineered properties\cite{tao2011spatially,dilullo2012molecular,koch2012voltage,llinas2017short,dos2017electric, sun2020massive}. This strategy has recently been developed to synthesize a broad range of novel carbon nanostructures based on sp-hybridization, such as carbon atom wires (CAWs) and 2D extended networks of mixed sp-sp$^2$ hybridization. While the fabrication of sp$^2$ carbon systems is achieved through the on-surface Ullmann coupling reaction of aryl halides, an efficient strategy for sp carbon nanostructures is represented by the on-surface dehalogenative/dehydrogenative homocoupling reaction of precursors functionalized with alkynyl halides\cite{shen2017frontiers,liu2017surface,klappenberger2015surface,Sun2018,Sun_AngewChem_2017,Shu_NatComm_2018,yang2018nanoscale}. However, accessing the nature of the molecular bonds and the hybridization state represents a key factor in comprehensively understanding the properties of these novel sp-sp$^2$ carbon structures. In a previous work we adopted Raman spectroscopy for the identification of sp-hybridization and the investigation of the on-surface formation mechanism of sp-sp$^2$ carbon atomic wires on the Au(111) substrate\cite{rabia2019scanning}.
Herein, we report on the nanoscale structure, electronic and vibrational properties of a carbon monolayer nanonetwork whose structure resembles $\gamma$- or $\alpha$-graphdiyne, while differing by the presence of aromatic rings with three-fold hydrogen-terminated bonds. A 2D carbon nanonetwork based on sp-sp$^2$ hybridization was grown on Au(111) under UHV conditions by exposure to an organic molecular precursor with three alkynyl bromide groups\cite{sun2016}. By a combination of high-resolution STM imaging, Raman spectroscopy and DFT simulations we unveil the structure at different stages of the formation, i.e., from the metal-organic nanostructure comprising Au adatoms to the pure sp-sp$^2$ carbon nanonetwork obtained after the release of Au atoms by thermal annealing. Insight into the 2D surface electronic structure is provided by first-principles calculations. Raman spectroscopy and DFT calculations reveal the C-C vibrational modes of this one-atom-thick 2D sp-sp$^2$ carbon nanonetwork on Au(111). The present work provides an example of the great potential of Raman spectroscopy in studying the atomic-scale structure of novel carbon nanostructures based on sp-sp$^2$ hybridization, e.g. graphyne and graphdiyne, complementing high-resolution STM imaging. The interaction between the 2D carbon nanonetwork and the underlying Au(111) surface involves a charge transfer and has a fundamental role in modifying both the electronic and vibrational properties of this system with respect to what is expected for the free-standing counterpart. The availability of a 2D carbon-based semiconductor on a metal may open new opportunities in the field of catalysis with novel non-precious nanomaterials, in photoconversion and photovoltaics, as well as in carbon-based nanoelectronic devices and sensors. \section{Results and discussion} The graphdiyne-like 2D sp-sp$^2$ carbon monolayer is obtained on Au(111) after UHV deposition at room temperature of the precursor molecules shown in Fig.~\ref{scheme_1}a.
The on-surface synthesis mechanism can be sketched in three steps \cite{Sun2018} (see Fig.~\ref{scheme_1}a), which take place simultaneously: i) adsorption of the molecules on the Au(111) surface, ii) cleavage of the C-Br bonds followed by the diffusion of radicals on the 2D template, iii) coupling of the sp carbon chains through gold adatoms. Following this scheme, after deposition the molecules self-assemble and form an organometallic 2D nanonetwork (i.e., the metalated system) based on sp-sp$^2$ carbon, catalyzed by the Au(111) surface. The substrate is subsequently annealed to remove the Au atoms and promote the C-C homocoupling reaction\cite{sun2016} that forms an all-carbon nanonetwork. The obtained nanostructure has three sp carbon diacetylenic chains connecting sp$^2$ six-membered aromatic rings in a three-fold organization instead of the six-fold structure of $\gamma$-graphdiyne (the remaining three-fold dangling bonds are saturated by H atoms). The resulting structure shows a large-scale hexagonal organization very similar to $\alpha$-graphdiyne, except for the presence of sp$^2$ aromatic rings in place of single C atoms. \begin{figure*}[ht] \centering \includegraphics[height=9cm]{scheme.png} \caption{ (a) Scheme showing the deposition of 1,3,5-tris(bromoethynyl)benzene (tBEP) by an organic molecular evaporator (OME) and reporting the different steps of the on-surface synthesis process on Au(111): adsorption (step 1), dehalogenation (step 2) and coupling through Au adatoms (step 3); (b) large-scale STM image showing the sp-sp$^2$ carbon nanonetwork formed after the deposition of the tBEP molecular precursor on Au(111) (1.3 V, 0.5 nA). } \label{scheme_1} \end{figure*} In the following sections we present the results of the STM and Raman investigation of this system.
DFT calculations of the electronic and vibrational properties allow us to discuss the STM images and Raman spectra and to unveil the fundamental role of the Au(111) surface, which affects the density of states and the Raman spectrum in a relevant way. \subsection{Structural and electronic properties}\label{sec:stm} The STM image in Fig.~\ref{scheme_1}b reports the formation of the sp-sp$^2$ carbon nanonetwork, which consists of a honeycomb structure extending on the gold surface. Investigating the coverage obtained on the sample, by acquiring large-scale STM images at different regions of the substrate, we observe an ordered sp-sp$^2$ carbon nanonetwork over most of the surface. However, there exist fractions of the network where the regularity is disrupted by unreacted or possibly degraded molecules deposited as a second layer (Fig.~\ref{scheme_1}b). Boundary regions between the sp-sp$^2$ nanonetwork and the uncovered Au(111) surface show a disordered morphology (Fig.~\ref{scheme_1}b), probably because of the random endcapping of highly reactive sp carbon chains with substrate atoms. We studied the effect of thermal annealing on the morphology and structure by acquiring \textit{in-situ} STM images at RT. To this end, we annealed the sample to temperatures up to 580~K (Fig.~\ref{morph}) in UHV. The STM image of the sample annealed at 370~K (Fig.~\ref{morph}a) reports a lower coverage than the as-deposited sample (Fig.~\ref{scheme_1}b), an indication of the detachment of the molecules interacting less strongly with the substrate. Then, after increasing the annealing temperature in the range 480 - 580~K (Fig.~\ref{morph}b-d), we observe the formation of an amorphous phase coexisting with the regular 2D nanonetwork. Such a patchy structure might be established by the diffusion and thermal activation of the reactive sp carbon atomic chains originating the onset of the disordered phase, as also revealed by the Raman spectra discussed in the next section.
The presence of the amorphous phase further increases when the sample is annealed at 580~K, as shown in Fig.~\ref{morph}d, making it difficult to image the surface. \begin{figure*}[ht] \centering \includegraphics[height=4.8 cm]{morph.png} \caption{ RT-STM images showing the morphology of the 2D nanonetwork based on sp-sp$^2$ carbon at different annealing temperatures: (a) a lower coverage observed after annealing the sample at 370 K (1 V, 0.3 nA), (b) the onset of the local disordered phase after annealing at 480 K (-0.5 V, 0.4 nA), (c) a progressive increase of the disordered phase after annealing at 505 K (-0.8 V, 0.3 nA), (d) a pronounced morphological disorder developing after annealing at 580 K (-1.2 V, 0.3 nA). } \label{morph} \end{figure*} In addition to the morphology, we studied the molecular structure of the sp-sp$^2$ carbon nanonetwork by high-resolution STM imaging and DFT calculations. The sp-sp$^2$ carbon nanonetwork has been obtained through on-surface coupling of tBEP molecules, where the substrate catalyzes the reaction by detaching the bromine atoms and substituting them with surface gold adatoms. The crucial role of the surface atoms is pointed out in the STM image acquired on the sample annealed at 370~K (Fig.~\ref{struct}a) where, notably, the color contrast enhances the round, bright protrusions associated to the gold adatoms. In Fig.~\ref{struct}a gold adatoms are circled in green and form a Kagomé-like structure.
\begin{figure}[th] \centering \includegraphics[height=11cm]{struct.png} \caption{ High-resolution STM images resolving the structure of the sp-sp$^2$ carbon nanonetwork: (a) image of the metalated nanonetwork taken after annealing the sample at 370 K, where Au adatoms are circled in green (0.5 V, 0.3 nA), (b) FFT of (a) showing the hexagonal symmetry, (c) image taken after annealing the sample at 480 K (0.5 V, 0.3 nA), where white arrows point at the missing Au sites, (d) line profile (red) on metalated sp carbon chains in (b) showing a peak-to-peak distance of 13.5 \AA, (e) line profile (blue) on C-C coupled chains in (b) showing a peak-to-peak length of 10 \AA. } \label{struct} \end{figure} Further support for the structural analysis of the metalated sp-sp$^2$ carbon nanonetwork is provided by the Fast Fourier Transform (FFT) (Fig.~\ref{struct}b), which reports a hexagonal symmetry and a periodicity of $\sim$ 21 \AA. As the annealing temperature rises to 480 K, the hexagons in Fig.~\ref{struct}c show protrusions associated to phenyl rings and intermediate gold atoms, while it is difficult to resolve the atomic structure of the short sp carbon atomic chains due to the limited resolution of the STM signal. Nevertheless, at 480 K we observe the disappearance of Au atoms in some sp carbon chains, which are highlighted by white arrows in Fig.~\ref{struct}c. Indeed, by extracting the topographic profile (Fig.~\ref{struct}d-e) of the metalated and C-C coupled atomic chains (red and blue dashed lines in Fig.~\ref{struct}c) we measure the lengths of the chains to be around 13.5 and 10 \AA, respectively. Figure \ref{struct-theo} reports the ball-and-stick model of the metalated and C-C coupled 2D nanonetworks on Au(111) adopted in the calculations.
The size of the unit cell was fixed to match as closely as possible the periodicity extracted from the experimental STM line profile and from the FFT, imposing a commensurate matching of the overlayer with the substrate (as also required in calculations with periodic boundary conditions). In particular, for the metalated case the experimental periodicity is $\sim$ 21 \AA~and fits a $7 \times 7$ unit cell of Au(111), corresponding to 20.85 \AA~(where the theoretical lattice constant $a_{th}=2.97$ \AA~has been used). With such values, the sp-sp$^2$ 2D nanonetwork on Au(111) is compressed by 0.5$\%$ compared to the theoretical periodicity calculated for the freestanding nanonetwork. In this model the sp carbon chains are aligned along the $\left[11\bar2\right]$ direction. For the C-C coupled system the periodicity extracted from STM is about 17 \AA. A commensurate structure with a similar lattice constant for the molecular overlayer on Au(111) can be obtained by rotating the overlayer by $8.9 ^\circ$ relative to the $\left[11\bar1\right]$ direction of the substrate. In this configuration, the periodicity of the hexagonal lattice amounts to 16.58 \AA~($\sim5.6~a_{th}$), accounting for an expansion of 0.7 $\%$ with respect to the freestanding 2D carbon nanonetwork. Upon geometrical relaxation the sp-sp$^2$ metalated 2D nanonetwork exhibits a mild bending of two sides of the hexagon (on the left in Fig.~\ref{struct-theo}), probably due to the small compression of the overlayer. The bond lengths of the adsorbed polymer are 125 pm and 199 pm for the C-C triple bond and C-Au bond, respectively. Moreover, the interaction between the Au adatom and the substrate induces a corrugation of the molecular nanonetwork, leading to a smaller distance between the Au atom in the chain and the substrate (2.46 \AA) compared to the average phenyl-substrate distance (2.70 \AA). These data are in agreement with previous theoretical findings for the analogous 1D system \cite{rabia2019scanning}.
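The commensuration figures quoted above can be checked with a short script. The $(5,1)$ indexing for the rotated cell is our own inference from the quoted periodicity and rotation angle, not stated explicitly in the text:

```python
import math

def hex_supercell(n, m, a=2.97):
    """Length (Angstrom) and rotation (degrees) of the (n, m) supercell
    vector on a hexagonal lattice of constant a, here the theoretical
    Au(111) surface lattice constant."""
    length = a * math.sqrt(n * n + n * m + m * m)
    angle = math.degrees(math.atan2(m * math.sqrt(3) / 2, n + m / 2))
    return length, angle

# Metalated network: a 7x7 cell aligned with the substrate.
print(hex_supercell(7, 0))   # ~ (20.8 A, 0 deg)
# C-C coupled network: a (5, 1) indexing reproduces the ~16.6 A
# periodicity and the ~8.9 deg rotation quoted above.
print(hex_supercell(5, 1))   # ~ (16.5 A, 8.9 deg)
```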
The C-C coupled system is decoupled from the substrate and displays a negligible out-of-plane and in-plane distortion and a larger distance from the substrate, amounting to 2.78 \AA. The C-C single and triple bond lengths are 136 pm and 125 pm, respectively. \begin{figure}[ht] \centering \includegraphics[height=10 cm]{new-fig-theo.eps-10654.pdf} \caption{ Ball-and-stick model of the metalated (a) and C-C coupled (b) sp-sp$^2$ carbon nanonetworks, where the red arrows represent the unit vectors of the hexagonal lattice. Au atoms are depicted as yellow-orange-brown spheres from the first to the third surface layer. Panels (c) and (d) report the simulated STM images for the two systems. } \label{struct-theo} \end{figure} The theoretical simulations of the STM images, reported in Fig.~\ref{struct-theo}, are in agreement with the experiments, showing bright protrusions in correspondence of the Au atoms in the chain for the metalated nanonetwork. The theory also predicts a slightly enhanced STM contrast on the phenyl groups with respect to the attached ``arms''. For the all-carbon case the model shows a bright contrast of the atomic carbon chains along the rotated $\left [11\bar2\right]$ direction and a less intense signal in proximity to one end of the four oblique chains. This effect can be related to the different alignment of the atomic carbon chains with the underlying substrate in the adopted structural model. The resulting modulation of the heights of the carbon atoms gives rise to a geometrical effect in the simulation. In order to investigate the variation of the electronic properties induced by the adsorption of the precursors, the formation of the 2D nanonetwork on the surface and the removal of the gold atoms upon annealing, we have analyzed the projected density of states (PDOS) integrated over the whole Brillouin zone for both 2D nanonetworks supported on the Au(111) surface, in comparison with the freestanding system.
Figure~\ref{PDOS} reports the integrated PDOS of the phenyl groups of the molecule and of the sp chain aligned with the $\left [11\bar2\right]$ direction. The latter is also resolved with respect to the magnetic quantum number ($m$) of the p orbitals. \begin{figure*}[ht] \centering \includegraphics[height=7cm]{PDOS-tot.pdf} \caption{Top panels: density of states projected on different groups and atoms for the metalated (a) and C-C coupled (b) 2D nanonetwork. The shaded area represents the PDOS of the bare Au(111) substrate. (c) and (d): PDOS of the freestanding metalated and C-C coupled systems, respectively. } \label{PDOS} \end{figure*} In the metalated 2D nanonetwork the states of the gold atom in the sp chain (green line in Fig.~\ref{PDOS}a) display a relevant superposition with the $d$-band of the Au(111) surface (gray shaded area). The chain-substrate interaction is also the driving force in the charge transfer process that determines an increase of electronic charge ($+0.68~e$) on the molecule with respect to the freestanding system. The energy separation related to the HOMO-LUMO gap is still visible, even though the hybridization with the states of the substrate induces a metallic behavior in the 2D overlayer. The analysis of the $m$-components of the PDOS allows us to identify the contribution of the p states of the sp chain along the different directions. In making the projection, we consider the average of the C atoms of a specific sp chain and align the $y$ axis along the chain direction. Indeed, considering all the sp chains would result in averaging in-plane. Hence, p$_\mathrm{x}$, p$_\mathrm{y}$, and p$_\mathrm{z}$ orbitals are aligned along the $\left[1\bar10\right]$, $\left [11\bar2\right]$, and [111] directions, respectively. In particular, the states closer to the Fermi level have p$_\mathrm{z}$ symmetry and form out-of-plane $\pi$-states.
Other $\pi$-states (orthogonal to the sp chain axis), now lying parallel to the surface, are formed by the p$_\mathrm{x}$ orbitals; these are more separated in energy, being located below $-2$~eV and above $2$~eV. Noticeably, p$_\mathrm{x}$ states are sharper in the PDOS than p$_\mathrm{z}$ states, due to a smaller hybridization with the Au(111) orbitals. Similar observations could be drawn for a linear sp-sp$^2$ system\cite{frat2018}. Conversely, p$_\mathrm{y}$ states sit farther from the Fermi level both in the occupied (valence band) and unoccupied (conduction band) parts, as expected from their $\sigma$ character and direction along the chain axis. Upon removal of the Au adatom, the 2D nanonetwork is decoupled from the substrate and maintains the energy gap around the Fermi level, although reduced with respect to the freestanding nanonetwork ($\approx 1.6$~eV versus $\approx 2.1$~eV). The lack of the metallic character related to the hybridization with the substrate is associated with a lower electronic charge on the carbon nanonetwork with respect to the metalated case, comparable to that of the freestanding layer ($-0.012~e$ on the supported 2D carbon nanonetwork). The ordering of the different p components is the same as observed for the metallic system. In particular, the occupied p states of the sp carbon chain lie in correspondence to the d band of Au(111). The theoretical calculations can also provide a simulation of STS images, as the DOS at the $\bar{\Gamma}$ point, which is representative of the states with the maximum interaction with the tip due to their slow decay in vacuum \cite{Donati2009} (see Supporting Information). DFT calculations reveal that the interaction between the 2D carbon nanonetwork and the underlying Au(111) surface consistently affects the electronic properties in comparison with the free-standing carbon nanostructure.
As detailed above and in the Supporting Information, the metalated nanostructure acquires a metallic character due to a consistent charge transfer, while the graphdiyne-like nanonetwork displays a modified gap with respect to the free-standing system. In these sp carbon based systems, peculiar structure-dependent conjugation effects and a relevant electron-phonon coupling lead to a strong relationship between electronic and vibrational properties. The effect of the interacting Au(111) surface on the vibrational properties of the 2D carbon system is addressed by comparing the experimental Raman spectrum with DFT calculations, as discussed in the next section. \subsection{Raman spectroscopy and vibrational properties}\label{sec:Raman} The Raman spectra of the precursor and of the graphdiyne-like 2D carbon nanonetwork are here discussed and interpreted in view of DFT calculations of the Raman response. To provide a reliable interpretation of the experimental trends we performed DFT calculations on the precursor molecule and on a molecular model describing the final 2D all-carbon nanostructure grown on the substrate. \begin{figure}[th] \centering \includegraphics[height=10cm]{Raman_precursor.png} \caption{DFT calculation (red) and experimental FT-Raman spectrum (black) of the molecular precursor. The DFT spectra have been rescaled to match the CC stretching mode of the phenyl ring at 1581 cm$^{-1}$. } \label{Raman_1} \end{figure} We start by discussing the spectrum of the molecular precursor (see Fig.~\ref{Raman_1}) measured in powder form, prior to sublimation. In particular, FT-Raman spectroscopy has been employed to acquire the Raman spectrum of the organic molecule and to avoid the strong luminescence background.
By comparing the FT-Raman spectrum (black) of the initial molecule in Fig.~\ref{Raman_1} with the simulated spectrum of the precursor (red), a very good agreement is found, demonstrating the accuracy of the adopted level of theory in providing an accurate interpretation of the Raman response. The experimental spectrum shows three main spectral features: i) the sp carbon stretching mode at 2203 cm$^{-1}$, ii) the peak associated to the sp$^{2}$ carbon in the phenyl rings at 1581 cm$^{-1}$ and iii) a few weak peaks in the low-wavenumber region (950-1350 cm$^{-1}$), which show a very good correspondence with the peaks predicted in the theoretical spectrum. On the other hand, the two broad bands observed at about 1500 cm$^{-1}$ and 1150 cm$^{-1}$ can be attributed to impurities, possibly also due to degradation processes. DFT calculations allow us to assign the peaks in detail, starting from the peak at 2236 cm$^{-1}$ (computed at an unscaled wavenumber of 2319~cm$^{-1}$), which is due to the CC stretching mode of the triple bond. The peak at 1581 cm$^{-1}$ (computed at an unscaled 1641 cm$^{-1}$) is attributed to the CC stretching mode localized on the phenyl groups, while the bands at lower wavenumber values (1370, 1201 and 1014 cm$^{-1}$, unscaled values) are associated, respectively, to CC stretching of the phenyl coupled to CC stretching on the three branches, and to different combinations of CC and CH bending modes on the phenyl, also coupled to the contribution of CBr stretching. Ex-situ Raman spectra of the 2D nanonetwork on Au(111) annealed at 480~K and 580~K have been acquired to follow the evolution as a function of temperature (see Fig.~\ref{Raman_2}). The experimental spectra of the 2D nanonetwork on Au(111) (Fig.~\ref{Raman_2}) appear significantly modified compared to the spectrum of the precursor molecule (Fig.~\ref{Raman_1}).
Indeed, the weak signals at lower frequency (950-1350 cm$^{-1}$) disappear while the peaks associated with sp carbon and sp$^{2}$ carbon in the phenyl ring become broader. By fitting the sp carbon band with two Gaussians (Fig.~\ref{Raman_2}c) (one centered at 2200 cm$^{-1}$ and the other at 2165 cm$^{-1}$), we observe that the contribution of the lower-frequency peak (2165 cm$^{-1}$) increases after annealing the sample at 580~K. Concerning the spectral region associated with the sp$^{2}$ carbon in the phenyl ring, we observe a broad band shifted to lower frequency (from 1581 cm$^{-1}$ to 1565 cm$^{-1}$) compared to the peak of the precursor molecule. The broadening of this band is further enhanced at 580~K, probably due to the increased disorder observed in STM images and discussed below. We computed the Raman spectrum of the all-carbon 2D nanonetwork on Au(111) by adopting the simplified model in which small Au$_4$ clusters interact with the sp domains of a fragment of the 2D nanonetwork, as shown in Fig.~\ref{Raman_2}b (see Supporting Information Fig.~S3 for an extended discussion). DFT calculations predict a double peak associated with the sp carbon chains in the high-frequency range, namely at 2145 cm$^{-1}$ and 2178 cm$^{-1}$. This doublet falls at lower wavenumber with respect to the sp carbon peak predicted for the precursor at 2236 cm$^{-1}$ and is now associated with collective stretching modes involving the different sp carbon domains, as described for sp carbon chains by the effective conjugation coordinate (ECC) model\cite{Milani_BeilsteinJ_2015,Casari_Nanoscale_2016}. The modulation of their frequency is due to the interactions with the gold clusters, which can affect the stretching force constants of the CC bonds most involved in the interaction with gold, as also found in similar systems\cite{rabia2019scanning}.
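The two-Gaussian deconvolution of the sp-carbon band described above can be reproduced with a standard least-squares fit; a minimal sketch on synthetic data (the peak positions 2200 and 2165 cm$^{-1}$ are from the text, while all amplitudes, widths and the noise level are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    """Sum of two Gaussian components."""
    g1 = a1 * np.exp(-((x - c1) ** 2) / (2 * w1 ** 2))
    g2 = a2 * np.exp(-((x - c2) ** 2) / (2 * w2 ** 2))
    return g1 + g2

# Synthetic sp-carbon band: centers from the text, amplitudes/widths assumed
x = np.linspace(2050, 2350, 600)
true_params = (1.0, 2200.0, 15.0, 0.6, 2165.0, 20.0)
rng = np.random.default_rng(0)
y = two_gaussians(x, *true_params) + rng.normal(0, 0.01, x.size)

# Initial guesses close to the expected band positions
p0 = (1.0, 2205.0, 10.0, 0.5, 2160.0, 15.0)
popt, _ = curve_fit(two_gaussians, x, y, p0=p0)

# Relative weight of the low-frequency (2165 cm^-1) component,
# with areas proportional to amplitude * width
areas = (popt[0] * popt[2], popt[3] * popt[5])
frac_low = areas[1] / (areas[0] + areas[1])
print(round(popt[1]), round(popt[4]), round(frac_low, 2))
```

Tracking `frac_low` across annealing temperatures quantifies the growth of the 2165 cm$^{-1}$ component discussed in the text.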
As a result, the whole collective CC stretching mode of the sp carbon chains is perturbed by the non-bonding interactions, and the peaks are shifted to lower wavenumber with respect to those predicted for the unperturbed precursor molecule. This result is in agreement with what we observe in the experimental spectra, where the sp carbon band can be fitted by two peaks at slightly lower wavenumber with respect to the precursor molecule. Therefore, DFT calculations allow us to give an interpretation of the Raman spectra indicating the formation of the 2D nanostructure on Au(111), as also observed by STM imaging. Moreover, DFT calculations predict two other weak peaks at 1939 and 1969 cm$^{-1}$. The normal modes associated with these peaks can be described as CC stretching vibrations mainly localized on those CC bonds which are more strongly interacting with the gold cluster. Due to their weak intensity, these peaks are difficult to observe in the experimental spectra; they could possibly correspond to the weak bump at 2025 cm$^{-1}$ barely observable in both spectra. \begin{figure}[th] \centering \includegraphics[height=12cm]{Fig7-new_1.png} \caption{(a) Ex-situ Raman spectra of the 2D nanonetwork on Au(111) annealed at different temperatures and DFT calculations carried out on the molecular model shown in (b). (b) Complex comprising a fragment of the final 2D nanonetwork and Au clusters to mimic the interaction of the 2D nanonetwork with the substrate. (c) Magnification of the spectral range relevant to sp carbon (1900-2400~cm$^{-1}$). (d) The calculated spectrum of the graphdiyne nanonetwork on Au(111) of panels (a) and (b) compared with the freestanding case. } \label{Raman_2} \end{figure} The comparison with DFT calculations allows us to identify the Raman spectrum of the 2D nanostructure on Au(111).
The two main peaks at slightly lower wavenumber (2165 and 2200 cm$^{-1}$) than in the precursor molecule provide a clear signature of the occurrence of homocoupling of the molecules on the substrate. Although STM images provide clear evidence of the metalated 2D nanostructure formed right after the deposition of the molecules on Au(111), we were not able to record an ex-situ Raman spectrum due to the low signal-to-noise ratio. Such behavior was also observed in the case of 1D nanostructures based on sp-sp$^{2}$ carbon studied in previous work\cite{rabia2019scanning}, and we think that annealing the system in UHV can improve the signal-to-noise ratio, possibly due to the higher stability of the whole nanonetwork after exposure to the atmosphere. However, from STM images we notice a wide temperature window for the homocoupling reaction: the metalated domains are still present after annealing at 480~K and, to a much lesser extent, at 580~K. The progressive transformation of the structure with the thermal annealing is reflected in the Raman spectra, where the peaks at 2165 and 2200 cm$^{-1}$ exhibit a change in intensity and width with the temperature (Fig.~\ref{Raman_2}c). Lastly, the peak at 1565 cm$^{-1}$, which shows broadening at 580~K with the onset of a pronounced shoulder, can be interpreted as a superposition of the signal coming from the aromatic rings and a combination of the D and G bands typical of the sp$^{2}$-carbon amorphous phase, in agreement with the disordered phase observed in the STM image in Fig.~\ref{morph}(d). The interaction with Au turns out to significantly modify the vibrational properties and the Raman spectrum. By comparing DFT calculations of the sp-sp$^{2}$ system on Au(111) with the freestanding case, three main effects can be seen: i) the ECC peak of the sp-carbon stretching mode redshifts; ii) the ECC peak splits into a doublet and iii) a new band at about 2000 cm$^{-1}$ is present, which is split into two peaks as well.
No substantial differences are seen in the peaks at about 1600 cm$^{-1}$ related to the sp$^{2}$-carbon stretching in the phenyl rings. Our previous work\cite{rabia2019scanning} reported the combined STM and Raman analysis of 1D sp-sp$^2$ polymeric carbon wires on Au(111), observing a similar softening of the ECC mode and the appearance of an additional peak at about 2000 cm$^{-1}$ due to the interaction of the carbon system with Au. These two systems have in common Raman peaks in the same three regions, i.e., the sp carbon fingerprint ($2150-2200$ cm$^{-1}$), the sp$^{2}$ carbon stretching in the phenyl ring (1550-1600 cm$^{-1}$) and a peculiar band at about 2000 cm$^{-1}$. However, the graphdiyne-like nanostructure shows a systematic splitting of the peaks in the above-mentioned regions compared to the 1D polymeric system. The similarity of the spectra could be expected since the precursor molecules are very similar (the present system has just one more bromoethynyl group) and the CC stretching modes are mainly localized in the $sp$-carbon chains and in the phenyl ring, respectively. Instead, the splitting of the peaks could be attributed to the interaction with gold, which decouples vibrational modes by breaking the local symmetry. Apart from the splitting found in the Raman spectra as a consequence of the interactions with the Au surface, it is also interesting to compare the DFT-computed Raman spectra shown in Fig.~\ref{Raman_2}d for the free-standing model with previous calculations on the free-standing 2D crystal of $\gamma$-graphdiyne, as reported by Zhang et al.\cite{Zhang_JPCC2016-graphyne} and Serafini et al.\cite{serafini2020raman}. In all cases, as expected, intense bands are predicted above 2000 cm$^{-1}$ and at 1581 cm$^{-1}$, consistent with the existence of sp carbon domains connected by phenyl units.
On the other hand, two distinct and intense bands are predicted for $\gamma$-graphdiyne (2142 and 2221 cm$^{-1}$ in Zhang et al.\cite{Zhang_JPCC2016-graphyne}, 2276 cm$^{-1}$ and 2335 cm$^{-1}$ in Serafini et al.\cite{serafini2020raman}) while only one is found here (2339 cm$^{-1}$, unscaled value). The present system, however, has a different structure, possessing a three-fold symmetry instead of the six-fold symmetry of $\gamma$-graphdiyne. This structural difference affects the vibrational spectra of the two systems, generating different topology-dependent spectral patterns. In the context of graphdiyne-like materials, these results show the relevance of Raman spectroscopy as a characterization technique suitable for investigating and discriminating between sp-sp$^{2}$ hybrid carbon systems characterized by different topologies. \subsection{Conclusions} We characterized the atomic-scale structure and the vibrational properties of a carbon sp-sp$^2$ 2D nanonetwork on Au(111) by means of STM and Raman spectroscopy. In particular, high-resolution STM imaging combined with Raman spectroscopy allowed us to identify the metalated nanostructure formed right after the deposition, and the subsequent formation of the sp-sp$^2$ 2D nanonetwork upon annealing in UHV. Raman spectra revealed the redshift and the splitting of the sp carbon fingerprint (ECC modes) and the activation of new Raman frequencies not originally present in the freestanding nanostructure. The overall redshift of the sp carbon modes and the new Raman peak observed at significantly lower frequencies are attributed to bond softening promoted by the interaction with gold atoms. The DFT calculations of the electronic properties of both the metalated and the C-C coupled nanonetwork on Au(111) show a strong contribution of the substrate states and a shifting of the molecular states.
Indeed, the metalated nanonetwork exhibits a broad peak at the Fermi level related to the charge transfer from the substrate to the 2D layer, while for the C-C coupled system the interaction with the substrate is less important but still modifies the gap and the vibrational properties. Although we obtained sp-sp$^2$ 2D nanonetworks with different domain sizes, we did not observe any variation of the properties with the size, since the sp conjugation is limited to the short sp carbon chains and the phenyl rings. A larger extension of the $\pi$-electron conjugation could be achieved through the control of the oligomeric unit by a proper design of the molecular precursor, finely tuning the properties of the sp-sp$^2$ 2D nanonetwork. Although different research groups have proposed novel graphdiyne-like nanostructures obtained by on-surface synthesis \cite{Sun2018, Fan20152484, klappenberger2015surface, klappenberger2018functionalized}, the characterization of their vibrational properties by Raman spectroscopy is lacking. Along the line opened by the present work, UHV-STM and Raman spectroscopy represent a powerful and non-destructive combination of techniques for the characterization of novel sp carbon systems, such as graphdiyne produced by means of on-surface synthesis on metal surfaces. These systems can display unique properties resulting from the interaction with the metal surface, whose presence is a key factor in catalysing the formation of the 2D layer itself. The control of such interaction through an accurate selection of the systems could open the way to the design of novel 2D carbon materials beyond graphene. The choice of the precursor molecule plays an important role in determining the final properties of the nanostructure \cite{Gao_2019}. For instance, the content and architecture of the linear sp carbon structures define the sp/sp$^2$ ratio and the pore size.
The chemical functionalization of the initial molecule, such as with halogens, tailors the interaction with the substrate, which catalyzes the on-surface synthesis and modifies the electronic and vibrational properties of the resulting system. Compared with the $\gamma$-graphdiyne structure, our carbon nanonetwork shows a three-fold connection of sp$^{2}$ six-membered aromatic rings through diacetylenic linkages, and the remaining three-fold dangling bonds are saturated by H atoms. Accordingly, it possesses a uniformly larger pore size (six times larger surface area) and a lower sp/sp$^2$ ratio. The latter suggests a higher band-gap energy compared to $\gamma$-graphdiyne, as confirmed by the prediction of about 1.2 eV for $\gamma$-graphdiyne\cite{serafini2020raman} with respect to 2.1 eV for the structure reported here. In addition, the electronic properties of this 2D graphdiyne-like carbon nanonetwork are strongly affected by the interaction with the metallic substrate, resulting in a decrease of the band gap to about 1.6 eV. The possibility to modulate the pore size and electronic properties of 2D graphdiyne-like nanomaterials offers great potential for future applications in catalysis \cite{Huang_2018, Zuo_2018} and nanoelectronics \cite{james_2018}. \section{Experimental and computational details} The experiments were carried out in two interconnected ultra-high vacuum chambers with a base pressure of less than $5\times10^{-11}$ mbar. The first experimental set-up was used for the STM measurements while the second setup was devoted to sample preparation. The Au(111) surface was prepared by repeated cycles of Ar$^+$ ion sputtering, followed by annealing at 720 K. The clean Au(111) surface is characterized by the \textit{herringbone reconstruction} and atomically flat terraces, as confirmed by STM measurements. The molecular precursor, i.e.,
1,3,5-tris(bromoethynyl)benzene (tBEP), was thoroughly degassed prior to the deposition onto the clean Au(111) surface kept at room temperature (RT). The tBEP precursor was thermally sublimated onto the Au(111) surface by means of an organic molecular evaporator (OME) at 304 K. STM measurements were performed by employing an Omicron variable-temperature scanning tunneling microscope. STM images were taken in constant-current mode at room temperature, with a chemically etched tungsten tip. The bias voltage is applied to the sample; typical voltages range from $-1.5$ to $+1.5$ V, with tunneling currents in the range 0.2-1.0 nA. Density functional theory calculations have been performed by adopting standard norm-conserving pseudopotentials and an atomic-orbital basis set which includes double-zeta and polarization orbitals, as implemented in the SIESTA package \cite{Sole02}. The exchange and correlation potential was treated with the generalized gradient approximation (GGA-PBE) \cite{PBE} and the molecule-surface van der Waals interaction was introduced via a DFT-D2 Grimme potential \cite{Grimme}. The mesh cutoff for the real-space grid was set to 450 Ry and $3\times3$ and $4\times4$ samplings of the Brillouin zone were adopted for the metalated and C-C coupled systems, respectively, corresponding to approximately a $21\times21$ grid in the primitive Brillouin zone of Au(111). A five times denser grid was used for the calculation of the density of states (DOS). The molecular layers were fully relaxed until the forces reached the tolerance value of 0.04 eV~\AA$^{-1}$. The substrate atoms were kept fixed to the coordinates of the unrelaxed ideal clean Au(111) surface, neglecting the $22\times\sqrt{3}$ reconstruction. Along the $z$-direction we consider six gold layers, with an interposed vacuum of 38~\AA. The STM simulations were performed in a Tersoff–Hamann approach, assuming a constant density of states for the tip.
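For orientation, the computational settings listed above would correspond to an input fragment along the following lines. This is a hypothetical sketch using standard SIESTA fdf keywords, not the authors' actual input file:

```text
# Hypothetical SIESTA input fragment (fdf format) reflecting the settings above
PAO.BasisSize        DZP        # double-zeta plus polarization orbitals
XC.functional        GGA
XC.authors           PBE
MeshCutoff           450. Ry    # real-space grid cutoff
MD.MaxForceTol       0.04 eV/Ang
%block kgrid_Monkhorst_Pack     # 3x3x1 sampling (metalated system)
  3  0  0  0.0
  0  3  0  0.0
  0  0  1  0.0
%endblock kgrid_Monkhorst_Pack
```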
We integrated the electronic density of the empty states in a 0.5 eV energy interval just above the Fermi level. STM images were simulated at a constant distance of 3~{\AA} from the surface, applying a 2~{\AA}-wide Gaussian spatial broadening to the electronic density to mimic the finite experimental resolution. Micro-Raman measurements were conducted \emph{ex situ} using a Renishaw InVia spectrometer coupled with an argon laser (514.5 nm). With the power set at 5 mW, we acquired 100 spectra for each sample to achieve an adequate signal-to-noise ratio. At this excitation wavelength there is a fluorescent background coming from the Au(111) surface, concurrent with the Raman signal of the sp-sp$^2$ carbon atomic sheet. The background has been removed by subtracting the signal acquired on the pristine clean Au(111) surface under the same experimental conditions. The Raman spectrum of the molecular precursor in powder form was obtained by FT-Raman (Nicolet NEXUS NXR 9650) with a 1064 nm excitation wavelength. The simulation of the Raman spectra of the precursor and of the carbon nanonetwork has been carried out by applying the approach successfully adopted in a previous paper\cite{rabia2019scanning} on a similar system, using finite-dimension molecular models which properly describe the real system. Modelling the present system is indeed far from trivial. An extended computational investigation has been carried out to this aim, as reported in detail in the Supporting Information. After the calculation on the molecular precursor, DFT calculations have been performed for an isolated fragment of the 2D crystal interacting with three Au clusters, in order to model the interaction expected between the Au surface and the sp domains \cite{rabia2019scanning}. This model is reported in section 3.2.
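The background-subtraction step described above amounts to a pointwise difference of two spectra acquired under identical conditions, followed by averaging over repeated acquisitions; a minimal sketch with synthetic arrays (all shapes, band parameters and noise levels are illustrative assumptions):

```python
import numpy as np

# Synthetic wavenumber axis and spectra (illustrative values only)
wn = np.linspace(1000, 2500, 1501)             # cm^-1
background = 50.0 + 0.01 * wn                  # smooth Au(111) fluorescence
signal = 5.0 * np.exp(-((wn - 2200.0) ** 2) / (2 * 15.0 ** 2))  # sp-carbon band

sample = signal + background                   # spectrum on the covered surface
reference = background.copy()                  # pristine clean Au(111)

# Subtract the reference acquired under the same experimental conditions
corrected = sample - reference

# Averaging many acquisitions (100 in the text) improves signal-to-noise:
# uncorrelated noise is reduced by a factor of ~1/sqrt(100)
acquisitions = np.stack(
    [corrected + np.random.default_rng(i).normal(0, 0.5, wn.size)
     for i in range(100)]
)
averaged = acquisitions.mean(axis=0)
```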
The Gaussian09 package \cite{g09} has been used to carry out these calculations, adopting the PBE0 functional and a cc-pVTZ basis set for C and H and the ECP60MDF pseudopotential together with a VTZ basis set for Au \cite{ECP_Au}. The computed vibrational frequencies have been rescaled by a factor of 0.963, determined by adjusting the phenyl stretching mode predicted by DFT at 1641 cm$^{-1}$ to the value of 1581 cm$^{-1}$ observed experimentally for the precursor. \begin{acknowledgement} The authors acknowledge A. Mele and C. Gambarotti of Politecnico di Milano for their support in checking the stability of the precursor molecules by gas chromatography and mass spectrometry. The authors also acknowledge CINECA under the ISCRA initiative (project HP10C3S9Z0) and Red Espa{\~n}ola de Supercomputacion (FI-2020-1-0014) for the availability of high performance computing resources and support. AR, FT, AM, VR, ALB, NB, AL, CSC acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program ERC-Consolidator Grant (ERC CoG 2016 EspLORE grant agreement No. 724610, website: www.esplore.polimi.it). \end{acknowledgement} \begin{suppinfo} \begin{itemize} \item Supporting Information: Theoretical simulation of STS spectra and a detailed discussion on methods and models adopted for the computation of Raman spectra of both the molecular precursor and the 2D system on Au(111). \end{itemize} \end{suppinfo}
\section{Introduction} In finite dimensions a classical-quantum or quantum-classical channel can always be represented as a quantum channel, by embedding the classical input or output into a quantum system. Then it makes sense to speak about the entanglement-assisted capacity $C_{ea}$ \cite{BSST}, \cite{E} of such a channel, in particular, to compare it with the unentangled classical capacity $C$. An interesting observation in \cite{BSST} was that entanglement-assisted communication may be advantageous even for \emph{entanglement-breaking} channels such as the depolarizing channel with sufficiently high error probability. In the paper \cite{H2} we considered the case of quantum-classical (measurement) channels, showing that generically $C<C_{ea}$ for such channels. For infinite-dimensional (in particular, continuous-variable) systems an embedding of the classical output into a quantum one is not always possible; however, entanglement-assisted transmission still makes sense \cite{H2}; in particular this is the case for Bosonic Gaussian q-c channels. The measurement channels demonstrate the gain of entanglement assistance in the most spectacular way. On the contrary, as shown in \cite{Sh-19}, finite-dimensional c-q channels (preparations) are \emph{essentially} characterized by the property of having no gain of entanglement assistance, in this sense being ``more classical'' than measurements. In the present paper we study Bosonic Gaussian c-q channels; we observe that the embedding of the classical input into a quantum one is always possible, and $C_{ea}$ under the input constraint is thus well defined. We prove a general property of entropy increase for the weak complementary channel, which implies the equality $C=C_{ea}$ for a certain class of c-q Gaussian channels under an appropriate energy-type constraint. On the other hand, we show by an explicit example that the inequality $C<C_{ea}$ is not unusual for \emph{constrained} c-q Gaussian channels.
\section{Bosonic Gaussian Systems} The main applications of infinite-dimensional quantum information theory are related to Bosonic systems, for a detailed description of which we refer to Ch. 12 in \cite{H-SSQT}. Let $\mathcal{H}_{A}$ be the representation space of the Canonical Commutation Relations (CCR) \begin{equation} W(z_{A})W(z_{A}^{\prime })=\exp \left( -\frac{i}{2}z_{A}^{t}\Delta _{A}z_{A}^{\prime }\right) W(z_{A}^{\prime }+z_{A}) \label{CCR} \end{equation} with a coordinate symplectic space $(Z_{A},\Delta _{A})$ and the Weyl system $W_{A}(z)=\exp (iR_{A}\cdot z_{A});\,z_{A}\in Z_{A}$. Here $R_{A}$ is the row-vector of the canonical variables in $\mathcal{H}_{A}$, and $\Delta _{A}$ is the canonical skew-symmetric commutation matrix of the components of $R_{A}$, \begin{equation} \Delta =\mathrm{diag}\left[ \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right] _{j=1,\dots ,s}. \label{cf} \end{equation} Let $(Z_{A},\Delta _{A}),(Z_{B},\Delta _{B})$ be the symplectic spaces of dimensions $2s_{A},2s_{B}$, which will describe the input and the output of the channel (here $\Delta _{A},\Delta _{B}$ have the canonical form (\ref{cf})), and let $W_{A}(z_{A}),W_{B}(z_{B})$ be the Weyl operators in the Hilbert spaces $\mathcal{H}_{A},\mathcal{H}_{B}$ of the corresponding Bosonic systems. A centered Gaussian channel $\Phi :\mathfrak{T}(\mathcal{H}_{A})\rightarrow \mathfrak{T}(\mathcal{H}_{B})$ is defined via the action of its dual $\Phi ^{\ast }$ on the Weyl operators: \begin{equation} \Phi ^{\ast }[W_{B}(z_{B})]=W_{A}(Kz_{B})\exp \left[ -\frac{1}{2}z_{B}^{t}\alpha z_{B}\right] , \label{gaus-ch} \end{equation} where $K$ is the matrix of a linear operator $Z_{B}\rightarrow Z_{A}$, and $\alpha $ is a real symmetric matrix satisfying \begin{equation} \alpha \geq \pm \frac{i}{2}\left( \Delta _{B}-K^{t}\Delta _{A}K\right), \label{nid} \end{equation} where $\Delta _{B}-K^{t}\Delta _{A}K \equiv \Delta _{K}$ is a real skew-symmetric matrix.
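The admissibility condition (\ref{nid}) can be checked numerically for a simple example; a sketch for a hypothetical one-mode attenuation channel with $K=\sqrt{\eta}\,I$ and candidate noise $\alpha=\frac{1-\eta}{2}I$ (an illustrative choice, not a channel considered in the text):

```python
import numpy as np

Delta = np.array([[0.0, 1.0], [-1.0, 0.0]])   # canonical symplectic form, one mode
eta = 0.6                                      # assumed transmissivity
K = np.sqrt(eta) * np.eye(2)                   # matrix of the linear map Z_B -> Z_A
alpha = 0.5 * (1 - eta) * np.eye(2)            # candidate (minimal) noise matrix

Delta_K = Delta - K.T @ Delta @ K              # equals (1 - eta) * Delta here

def is_admissible(alpha, Delta_K, tol=1e-10):
    """Check alpha >= +/- (i/2) Delta_K, i.e. both Hermitian matrices are PSD."""
    for sign in (+1.0, -1.0):
        herm = alpha - sign * 0.5j * Delta_K   # Hermitian since Delta_K is real skew
        if np.linalg.eigvalsh(herm).min() < -tol:
            return False
    return True

print(is_admissible(alpha, Delta_K))           # minimal noise: admissible
print(is_admissible(0.9 * alpha, Delta_K))     # below the minimal noise: violated
```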
We will make use of the unitary dilation of the channel $\Phi $ constructed in \cite{cegh1} (see also \cite{H-SSQT}). Consider the composite Bosonic system $AD=BE$ with the Hilbert space $\mathcal{H}_{A}\otimes \mathcal{H}_{D}\simeq \mathcal{H}_{B}\otimes \mathcal{H}_{E}$ corresponding to the symplectic space $Z=Z_{A}\oplus Z_{D}=Z_{B}\oplus Z_{E},$ where $(Z_{E},\Delta _{E})\simeq (Z_{A},\Delta _{A})$. Thus $[R_{A}\,R_{D}]=[R_{B}\,R_{E}]$ describe two different splits of the set of canonical observables for the composite system. Here $A$ and $B$ refer to input and output, while $D$ and $E$ to input and output environments. The channel $\Phi $ is then described by the linear input-output relation (preserving the commutators) \begin{equation} R_{B}^{\prime }=R_{A}K+R_{D}K_{D}, \label{ior} \end{equation} where the system $D$ is in a centered Gaussian state $\rho _{D}$ with the covariance matrix $\alpha _{D}$ such that \begin{equation*} \alpha =K_{D}^{t}\alpha _{D}K_{D} \end{equation*} (for simplicity of notation we write $R_{A},\dots $ instead of $R_{A}\otimes I_{D},\dots $). It is shown that the commutator-preserving relation (\ref{ior}) can be complemented to the full linear canonical transformation by putting \begin{equation} R_{E}^{\prime }=R_{A}L+R_{D}L_{D}, \label{iocomp} \end{equation} where the $\left( 2s_{A}\right) \times \left( 2s_{E}\right)$ matrix $L$ and the $\left( 2s_{D}\right) \times \left( 2s_{A}\right)$ matrix $L_{D}$ are such that the square $2\left( s_{A}+s_{D}\right) \times 2\left( s_{B}+s_{E}\right)$ matrix \begin{equation} T=\left[ \begin{array}{cc} K & L \\ K_{D} & L_{D} \end{array} \right] \label{bltr} \end{equation} is symplectic, i.e.
satisfies the relation \begin{equation*} T^{t}\left[ \begin{array}{cc} \Delta _{A} & 0 \\ 0 & \Delta _{D} \end{array} \right] T=\left[ \begin{array}{cc} \Delta _{B} & 0 \\ 0 & \Delta _{E} \end{array} \right] , \end{equation*} which is equivalent to \begin{eqnarray} \Delta _{B} &=&K^{t}\Delta _{A}K+K_{D}^{t}\Delta _{D}K_{D}, \label{com} \\ 0 &=&K^{t}\Delta _{A}L+K_{D}^{t}\Delta _{D}L_{D}, \label{com1} \\ \Delta _{E} &=&L^{t}\Delta _{A}L+L_{D}^{t}\Delta _{D}L_{D}. \label{com2} \end{eqnarray} Denote by $U_{T}$ the unitary operator in $\mathcal{H}_{A}\otimes \mathcal{H}_{D}\simeq \mathcal{H}_{B}\otimes \mathcal{H}_{E}$ implementing the symplectic transformation $T$, so that \begin{equation} \lbrack R_{B}^{\prime }\,R_{E}^{\prime }]=U_{T}^{\ast }[R_{B}\,R_{E}]U_{T}=[R_{A}\,R_{D}]T. \label{deistvo} \end{equation} Then we have the unitary dilation \begin{equation} \Phi ^{\ast }[W_{B}(z_{B})]=\mathrm{Tr}_{D}\left( I_{A}\otimes \rho _{D}\right) U_{T}^{\ast }\left( W_{B}(z_{B})\otimes I_{E}\right) U_{T}. \label{udi1} \end{equation} The \emph{weakly complementary} channel \cite{cegh1} is then \begin{equation*} \left( \tilde{\Phi}^{w}\right) ^{\ast }[W_{E}(z_{E})]=\mathrm{Tr}_{D}\left( I_{A}\otimes \rho _{D}\right) U_{T}^{\ast }\left( I_{B}\otimes W_{E}(z_{E})\right) U_{T}. \end{equation*} The equation (\ref{iocomp}) is nothing but the input-output relation for the weakly complementary channel, which thus acts as \begin{equation} \left( \tilde{\Phi}^{w}\right) ^{\ast }[W_{E}(z_{E})]=W_{A}(Lz_{E})\exp \left[ -\frac{1}{2}z_{E}^{t}L_{D}^{t}\alpha _{D}L_{D}z_{E}\right] . \label{Gc} \end{equation} In the case of a pure state $\rho _{D}=|\psi _{D}\rangle \langle \psi _{D}|$ the relation (\ref{udi1}) amounts to the Stinespring representation for the channel $\Phi $ with the isometry $V=U_{T}|\psi _{D}\rangle ,$ implying that $\tilde{\Phi}^{w}$ is the complementary channel $\tilde{\Phi}$ (see e.g. \cite{H-SSQT}).
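The block relations (\ref{com})--(\ref{com2}) can be verified numerically on any symplectic $T$; a sketch for a hypothetical one-mode example, a beam-splitter-type mixing of $A$ and $D$ chosen only for illustration:

```python
import numpy as np

Delta = np.array([[0.0, 1.0], [-1.0, 0.0]])   # one-mode symplectic form
c, s = np.cos(0.4), np.sin(0.4)                # assumed mixing angle
I2 = np.eye(2)

# Blocks of T = [[K, L], [K_D, L_D]] for a beam-splitter-type transformation
K, L = c * I2, s * I2
K_D, L_D = -s * I2, c * I2

T = np.block([[K, L], [K_D, L_D]])
J = np.block([[Delta, np.zeros((2, 2))], [np.zeros((2, 2)), Delta]])

# T is symplectic: T^t J T = J
assert np.allclose(T.T @ J @ T, J)

# Equivalent block relations (com)-(com2)
assert np.allclose(K.T @ Delta @ K + K_D.T @ Delta @ K_D, Delta)   # (com)
assert np.allclose(K.T @ Delta @ L + K_D.T @ Delta @ L_D, 0.0)     # (com1)
assert np.allclose(L.T @ Delta @ L + L_D.T @ Delta @ L_D, Delta)   # (com2)
```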
\section{A property of Gaussian classical-quantum channels} \label{2} Usually a classical-quantum (c-q) channel is understood as a mapping $x\rightarrow \rho _{x}$ of the classical alphabet $\mathcal{X}=\{x\}$ into density operators in a Hilbert space. In the case of a continuous alphabet there is no problem with embedding a c-q channel into a quantum channel (as distinct from a q-c channel, see \cite{H2}). Intuitively, let $\mathcal{X}$ be a continual domain with measure $dx$; then the required embedding is \begin{equation*} \Phi [\rho ]=\int_{\mathcal{X}}\langle x|\rho |x\rangle \rho _{x}dx, \end{equation*} where $\left\{ |x\rangle ;x\in \mathcal{X}\right\} $ is a Dirac system satisfying $\langle x|x^{\prime }\rangle =\delta (x-x^{\prime })$. Here $\Phi$ maps density operators into density operators. Notice that the range of the dual channel $\Phi ^{\ast }$ consists of bounded operators diagonal in the $x$-representation. In general, we call a quantum channel $\Phi $ \emph{classical-quantum} (c-q) if the range of $\Phi ^{\ast }$ consists of commuting operators. By using a structure theorem for Abelian algebras of operators in a Hilbert space, it is then not difficult to see that such a definition is essentially equivalent to the usual understanding. It follows from (\ref{CCR}) that the necessary and sufficient condition for a Bosonic Gaussian channel (\ref{gaus-ch}) to be c-q is \begin{equation} K^{t}\Delta _{A}K=0. \label{cl-qu} \end{equation} Thus $\Delta _{K}=\Delta _{B}$ and therefore $\det \Delta _{K}\neq 0.$ Under this condition it was shown in \cite{H3} that in the unitary dilation described above one can take $s_{E}=s_{A},\,s_{D}=s_{B}$ (and in fact $E=A$, $D=B$). We call such a dilation ``minimal'' as it is indeed such at least in the case of a pure state $\rho _{D}$, as follows from \cite{cegh1}.
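Condition (\ref{cl-qu}) says that the range of $K$ is an isotropic subspace of $(Z_{A},\Delta _{A})$; a numerical sketch for a hypothetical two-mode input ($s_{A}=2$, $s_{B}=1$), with the columns of $K$ spanning the isotropic subspace generated by the two position coordinates (an illustrative choice):

```python
import numpy as np

def symp(s):
    """Canonical symplectic form of s modes: block-diagonal [[0,1],[-1,0]]."""
    blk = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(s), blk)

Delta_A = symp(2)

# K: Z_B -> Z_A (4x2 matrix); columns q1, q2 span an isotropic subspace
K = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

# c-q condition (cl-qu): K^t Delta_A K = 0, i.e. the operators R_A K all commute
assert np.allclose(K.T @ Delta_A @ K, 0.0)

# A generic K (columns q1, p1 of the same mode) violates it
K_bad = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0],
                  [0.0, 0.0]])
assert not np.allclose(K_bad.T @ Delta_A @ K_bad, 0.0)
```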
The condition (\ref{nid}) then amounts to \begin{equation} \alpha \geq \pm \frac{i}{2}\Delta _{B}, \label{unc} \end{equation} saying that $\alpha $ is a covariance matrix of a centered Gaussian state $\rho _{D}$. We say that the channel has \emph{minimal noise} if $\rho _{D}$ is a pure state, which is equivalent to the fact that $\alpha $ is a minimal solution of the inequality (\ref{unc}). In quantum optics such channels are called quantum-limited. Let us explain how this notion of a c-q channel agrees with the usual one in the case of Bosonic Gaussian channels. The condition (\ref{cl-qu}) means that the components of the operator $R_{A}K$ all commute, hence their joint spectral measure is a sharp observable, and their probability distribution $\mu _{\rho }(d^{2n}z)$ can be arbitrarily sharply peaked around any point $z=\mathsf{E}_{\rho }(R_{A}K)^{t}=K^{t}m$ in the support $\mathcal{X}$ of this measure by an appropriate choice of the state $\rho $. Here $\mathsf{E}_{\rho }$ denotes expectation with respect to $\rho $ and $m=\mathsf{E}_{\rho }(R_{A})^{t}$, hence $\mathcal{X}=\mathbf{Ran}\,K^{t}\subseteq Z_{B}$. Thus in this case it is natural to identify $\Phi $ as the c-q channel determined by the family of states $z\rightarrow W(z)\rho _{B}W(z)^{\ast };z\in \mathcal{X}$. \textbf{Proposition 1.} \emph{Let }$\Phi $ \emph{be a Gaussian c-q channel; then the weak complementary} $\tilde{\Phi}^{w}$ \emph{in the minimal unitary dilation has nonnegative entropy gain:} \begin{equation*} S(\tilde{\Phi}^{w}[\rho ])-S(\rho )\geq 0\quad \text{for all }\rho .
\end{equation*} \emph{In particular if} $\Phi $ \emph{has minimal noise, then this holds for the complementary channel $\tilde{\Phi}$, implying} \begin{equation}\label{IleS} I(\rho ,\Phi )\leq S(\Phi \lbrack \rho ]), \end{equation} \emph{where} \begin{equation*} I(\rho ,\Phi )=S(\rho )+S(\Phi \lbrack \rho ])-S(\tilde{\Phi}[\rho ]) \end{equation*} \emph{is the quantum mutual information.} \emph{Proof.} Taking into account (\ref{cl-qu}), the relation (\ref{com}) becomes \begin{equation} \Delta _{B}=K_{D}^{t}\Delta _{D}K_{D}. \label{iskom2} \end{equation} We consider the minimal dilation, for which $\Delta _{D}=\Delta _{B}$, $\Delta _{E}=\Delta _{A}$, hence $K_{D}$ is a symplectic $2s_{B}\times 2s_{B}$ matrix. Then (\ref{com1}) implies \begin{equation*} L_{D}=-\left( K_{D}^{t}\Delta _{D}\right) ^{-1}K^{t}\Delta _{A}L. \end{equation*} Substituting into (\ref{com2}) gives $\Delta _{E}=L^{t}ML,$ where \begin{eqnarray*} M &=&\Delta _{A}+\Delta _{A}K\left( \Delta _{D}K_{D}\right) ^{-1}\Delta _{D}\left( K_{D}^{t}\Delta _{D}\right) ^{-1}K^{t}\Delta _{A} \\ &=&\Delta _{A}+\Delta _{A}KK_{D}^{-1}\Delta _{D}^{-1}\left( K_{D}^{t}\right) ^{-1}K^{t}\Delta _{A} \\ &=&\Delta _{A}+\Delta _{A}K\Delta _{B}^{-1}K^{t}\Delta _{A}. \end{eqnarray*} Therefore $1=\left( \det L\right) ^{2}\det M,$ where \begin{eqnarray*} \det M &=&\det \left( \Delta _{A}+\Delta _{A}K\Delta _{B}^{-1}K^{t}\Delta _{A}\right) \\ &=&\det \left( I_{2s_{A}\times 2s_{A}}+K\Delta _{B}^{-1}K^{t}\Delta _{A}\right) . \end{eqnarray*} Due to (\ref{cl-qu}) the matrix $N=K\Delta _{B}^{-1}K^{t}\Delta _{A}$ satisfies $N^{2}=0,$ hence it has only zero eigenvalues.
Therefore $I_{2s_{A}\times 2s_{A}}+N$ has only unit eigenvalues, implying $\det M=1$ and hence $\left\vert \det L\right\vert =1.$ By relation (\ref{Gc}), the channel $\tilde{\Phi}^{w}$ is the Gaussian channel with the operator $L$ playing the role of $K.$ By using a result of \cite{H1}, we have \begin{equation*} S(\tilde{\Phi}^{w}[\rho ])-S(\rho )\geq \log |\det L|=0.\qquad \square \end{equation*} \textbf{Proposition 2}. \emph{Let} $\Phi $ \emph{be a Gaussian c-q channel with minimal noise } $\alpha $, \emph{such that } $\mathbf{Ran}\,K^{t}=Z_{B}$, \emph{satisfying the input constraint\footnote{The trace here is understood in the sense of extended expectation, as in \cite{H1}.} } \begin{equation} \mathrm{Tr}\rho H\leq E, \label{constraint} \end{equation} \emph{where} $H=RK\epsilon K^{t}R^{t}$ \emph{and} $\epsilon $ \emph{is a real symmetric strictly positive definite matrix. } \emph{Then, denoting by }$C(E)$ (\emph{resp.} $C_{ea}(E)$) \emph{the classical (resp. entanglement-assisted) capacity of the channel under the constraint (\ref{constraint}), } \begin{equation} C(E)=C_{ea}(E)=\sup_{\rho :\mathrm{Tr}\rho H\leq E}S(\Phi \lbrack \rho ]). \label{main} \end{equation} An important condition here is $\mathbf{Ran}\,K^{t}=Z_{B}$, as we shall see in the next section. The form of the operator $H=RK\epsilon K^{t}R^{t}$ is such that the constraint is expressed only in terms of the input observables of the c-q channel. Without it one could hardly expect the equality (\ref{main}), although this requires further investigation. On the other hand, the assumption of minimality of the noise seems to be related to the method of the proof and probably could be relaxed, with the last expression in (\ref{main}) replaced by the supremum of the $\chi$-function.
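The key step in the proof of Proposition 1 — that $N=K\Delta _{B}^{-1}K^{t}\Delta _{A}$ is nilpotent under (\ref{cl-qu}), so that $\det (I+N)=1$ — is easy to verify numerically; a sketch reusing an isotropic $K$ with two input modes and one output mode (an illustrative choice):

```python
import numpy as np

blk = np.array([[0.0, 1.0], [-1.0, 0.0]])
Delta_A = np.kron(np.eye(2), blk)   # two input modes
Delta_B = blk                        # one output mode

# Columns of K span an isotropic subspace of Z_A, so K^t Delta_A K = 0
K = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(K.T @ Delta_A @ K, 0.0)

N = K @ np.linalg.inv(Delta_B) @ K.T @ Delta_A

# N^2 = K Delta_B^{-1} (K^t Delta_A K) Delta_B^{-1} K^t Delta_A = 0
assert np.allclose(N @ N, 0.0)

# Hence I + N has only unit eigenvalues and det(I + N) = 1
assert abs(np.linalg.det(np.eye(4) + N) - 1.0) < 1e-9
```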
\bigskip \textbf{Lemma.} \emph{Under the assumption }(\ref{cl-qu})\emph{\ there exists a sequence of real symmetric} $\left( 2s_{A}\right) \times \left( 2s_{A}\right) -$\emph{matrices} $\gamma _{n}$ \emph{satisfying the conditions:} \begin{enumerate} \item $\gamma _{n}\geq \pm \frac{i}{2}\Delta _{A};$ \item $K^{t}\gamma _{n}K\rightarrow 0.$ \end{enumerate} \emph{Proof.} The assumption (\ref{cl-qu}) means that the subspace $\mathcal{N}=\mathrm{Ran}K\subseteq Z_{A}$ is isotropic, i.e. such that $\Delta _{A}$ is degenerate on it. From linear algebra it is known that there is a symplectic basis in $Z_{A}$ of the form $\left\{ e_{1},\dots ,e_{k},h_{1},\dots ,h_{k},g_{1},\dots \right\} ,$ where $\left\{ e_{1},\dots ,e_{k}\right\} $ is a basis in $\mathcal{N}$, $\left\{ h_{1},\dots ,h_{k}\right\} $ span the isotropic subspace $\mathcal{N}^{\prime }$ and are such that $e_{i}^{t}\Delta _{A}h_{j}=\delta _{ij},$ and $\left\{ g_{1},\dots \right\} $ span the symplectic orthogonal complement of $\mathcal{N}+\mathcal{N}^{\prime }.$ Then $\Delta _{A}$ has the block matrix form in this basis \begin{equation*} \Delta _{A}=\left[ \begin{array}{ccc} 0 & I_{k} & 0 \\ -I_{k} & 0 & 0 \\ 0 & 0 & \Delta _{g} \end{array} \right] . \end{equation*} Let $\varepsilon _{n}$ be a sequence of positive numbers converging to zero, then \begin{equation*} \gamma _{n}=\left[ \begin{array}{ccc} \varepsilon _{n}I_{k} & 0 & 0 \\ 0 & \frac{1}{4\varepsilon _{n}}I_{k} & 0 \\ 0 & 0 & \gamma _{g} \end{array} \right] , \end{equation*} where $\gamma _{g}\geq \pm \frac{i}{2}\Delta _{g},$ satisfies the condition 1, and $K^{t}\gamma _{n}K=\varepsilon _{n}K^{t}K\rightarrow 0.\quad \square $ \emph{Proof of Proposition 2.} According to the general version of the finite-dimensional result of \cite{E} proven in \cite{H-Sh}, \begin{equation} C_{ea}(E)=\sup_{\rho :\mathrm{Tr}\rho H\leq E}I(\rho, \Phi ).
\end{equation} This version makes the only assumption that $H$ is a positive self-adjoint operator, allowing the constraint set to be non-compact, which is important for our considerations in Sec. \ref{3}. Due to (\ref{IleS}), it is then sufficient to show that \begin{equation*} C(E)\geq \sup_{\rho :\mathrm{Tr}\rho H\leq E}S(\Phi \lbrack \rho ]). \end{equation*} We first consider the supremum in the right-hand side. Since the constraint operator $H=RK\epsilon K^{t}R^{t}$ is quadratic in the canonical variables $R$, the supremum can be taken over (centered) Gaussian states. Since the entropy of a Gaussian state with covariance matrix $\alpha $ is equal to \begin{equation} \frac{1}{2}\mathrm{Sp}g\left( \mathrm{abs}\left( \Delta ^{-1}\alpha \right) -I/2\right) =\frac{1}{2}\sum_{j=1}^{2s}g(|\lambda _{j}|-\frac{1}{2}), \label{entropy} \end{equation} where $g(x)=(x+1)\log (x+1)-x\log x$, Sp denotes the trace of matrices as distinct from that of operators in $\mathcal{H}$, and $\lambda _{j}$ are the eigenvalues of $\Delta ^{-1}\alpha $ (see e.g. \cite{H-SSQT}, Sec. 12.3.4), we have \begin{eqnarray} \sup_{\rho :\mathrm{Tr}\rho H\leq E}S(\Phi \lbrack \rho ]) &=&\frac{1}{2}\sup_{\beta :\mathrm{Sp}K\epsilon K^{t}\beta \leq E}\mathrm{Sp}g\left( \mathrm{abs}\left( \Delta _{B}^{-1}\left( K^{t}\beta K+\alpha \right) \right) -I/2\right) \notag \\ &=&\frac{1}{2}\max_{\mu :\mathrm{Sp}\epsilon \mu \leq E}\mathrm{Sp}g\left( \mathrm{abs}\left( \Delta _{B}^{-1}\left( \mu +\alpha \right) \right) -I/2\right) . \label{max} \end{eqnarray} Here in the first equality we used the formula (\ref{entropy}) for the output state with the covariance matrix $K^{t}\beta K+\alpha ,$ and in the second we denoted $\mu =K^{t}\beta K$ and used the fact that for every $\mu $ such a $\beta $ exists due to the condition $\mathbf{Ran}K^{t}=Z_{B}$. In the second expression the supremum is attained on some $\mu _{0}$ due to nondegeneracy of $\epsilon$ (see \cite{H-SSQT}, Sec. 12.5).
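The entropy formula (\ref{entropy}) is straightforward to evaluate numerically from the eigenvalues of $\Delta ^{-1}\alpha $. A minimal sketch (natural logarithms, so entropies are in nats; the one-mode matrices $\Delta$ and $\alpha =(N+\frac{1}{2})I$, for which the entropy reduces to $g(N)$, are illustrative):

```python
import numpy as np

def g(x):
    # g(x) = (x+1) log(x+1) - x log x, with g(0) = 0 (natural log, nats)
    return (x + 1) * np.log(x + 1) - x * np.log(x) if x > 0 else 0.0

def gaussian_entropy(delta, alpha):
    # (1/2) Sp g(abs(delta^{-1} alpha) - I/2): the eigenvalues of
    # delta^{-1} alpha come in +/- imaginary pairs, so we sum g(|lambda| - 1/2)
    # over all of them and halve the result.
    lam = np.linalg.eigvals(np.linalg.inv(delta) @ alpha)
    return 0.5 * sum(g(abs(l) - 0.5) for l in lam)

# One mode: Delta = [[0,1],[-1,0]], alpha = (N + 1/2) I  =>  entropy g(N)
delta = np.array([[0.0, 1.0], [-1.0, 0.0]])
N = 1.0
alpha = (N + 0.5) * np.eye(2)
```

In particular the vacuum ($N=0$) gives entropy $0$, consistent with purity of the minimal-noise state.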
Denote by $\beta _{0}$ a solution of the equation $\mu _{0}=K^{t}\beta _{0}K.$ We construct a sequence of suboptimal ensembles as follows. Using the condition 1 of the Lemma, we let $\rho _{n}$ be a centered Gaussian state in $\mathcal{H}_{A} $ with the covariance matrix $\gamma _{n}$ and $\rho _{n}(z)=D(z)\rho _{n}D(z)^{\ast },z\in Z_{A},$ be the family of the displaced states, where $D(z)$ are the displacement operators obtained by re-parametrization of the Weyl operators $W(z)$. Define the Gaussian probability density $p_{n}(z)$ with zero mean and the covariance matrix $k_{n}\beta _{0},$ where $k_{n}=1-\mathrm{Sp}\gamma _{n}K\epsilon K^{t}/E>0$ for large enough $n$ by the condition 2. The average state of this ensemble is centered Gaussian with the covariance matrix $\gamma _{n}+k_{n}\beta _{0}.$ Taking into account that $S(\rho _{n}(z))=S(\rho _{n}),$ the $\chi$-quantity of this ensemble is equal to \begin{equation*} \chi _{n}=\frac{1}{2}\mathrm{Sp}\,g\left( \mathrm{abs}\left( \Delta _{B}^{-1}\left( K^{t}\gamma _{n}K+k_{n}K^{t}\beta _{0}K+\alpha \right) \right) -I/2\right) \end{equation*} \begin{equation*} -\frac{1}{2}\mathrm{Sp}\,g\left( \mathrm{abs}\left( \Delta _{B}^{-1}\left( K^{t}\gamma _{n}K+\alpha \right) \right) -I/2\right) . \end{equation*} By the condition 2 this converges to \begin{equation*} \frac{1}{2}\mathrm{Sp}\,g\left( \mathrm{abs}\left( \Delta _{B}^{-1}\left( K^{t}\beta _{0}K+\alpha \right) \right) -I/2\right) -\frac{1}{2}\mathrm{Sp}\,g\left( \mathrm{abs}\left( \Delta _{B}^{-1}\alpha \right) -I/2\right) . \end{equation*} By minimality of the noise the second term is the entropy of a pure state, equal to zero, and the first term is just the maximum in (\ref{max}).
Thus \begin{equation*} C(E)\geq \limsup_{n\rightarrow \infty }\chi _{n}=\sup_{\rho :\mathrm{Tr}\rho H\leq E}S(\Phi \lbrack \rho ]).\qquad \square \end{equation*} \section{One mode}\label{3} Let $q,p$ be a Bosonic mode, $W(z)=\exp i(xq+yp)$ the corresponding Weyl operator and $D(z)=\exp i(yq-xp)$ the displacement operator. We give two examples where the channel describes a classical signal with additive Gaussian (minimal) quantum noise, in the first case the signal being two-dimensional while in the second -- one-dimensional. As we have seen, a c-q channel can be described in two equivalent ways: as a mapping $m\rightarrow \rho _{m},$ where $m$ is the classical signal, and as an extended quantum channel satisfying (\ref{cl-qu}). 1. We first consider the minimal noise c-q channel with two-dimensional real signal and show the coincidence of the classical entanglement-assisted and unassisted capacities of this channel under an appropriate input constraint, by using the result of Sec. \ref{2}. Such a coincidence is generic for unconstrained finite-dimensional channels \cite{E}, but in infinite dimensions, as we will see in the second example, the situation is different. Some sufficient conditions for the equality $C=C_{ea}$ were given in \cite{Sh-19}, however they do not apply here. Let $m=(m_{q},m_{p})\in \mathbf{R}^{2}$ and consider the mapping $m\rightarrow \rho _{m}$, where $\rho _{m}$ is the state with the characteristic function \begin{equation} \mathrm{Tr}\rho _{m}W(z)=\exp \left[ i(m_{q}x+m_{p}y)-\frac{\left( N+\frac{1}{2}\right) }{2}(x^{2}+y^{2})\right] , \label{component} \end{equation} so that \begin{equation*} \rho _{m}=D(m)\rho _{0}D(m)^{\ast }. \end{equation*} The mapping $m\rightarrow \rho _{m}$ can be considered as transmission of the two-dimensional classical signal $m=(m_{q},m_{p})$ with the additive quantum Gaussian noise $q,p$ with the average number of quanta $N$. The minimal noise corresponds to $N=0$.
The classical capacity of this channel with the input constraint \begin{equation} \frac{1}{2}\int \left\Vert m\right\Vert ^{2}\,\,p(m)d^{2}m\leq E \label{3-5} \end{equation} is given by the expression (see e.g. \cite{H-SSQT}, Sec. 12.1.4) \begin{equation*} C(E)=g(N+E)-g(N), \end{equation*} with the optimal distribution \begin{equation} p(m)=\frac{1}{2\pi E}\,\exp \left( -\frac{\left\Vert m\right\Vert ^{2}}{2E}\right) \label{3-9} \end{equation} in the ensemble of coherent states $|m\rangle\langle m|$. In particular, for the minimal noise channel ($N=0$), \begin{equation} C(E)=g(E)=S(\bar{\rho}), \label{cge} \end{equation} where $\bar{\rho}$ is the Gaussian state with \begin{equation*} \mathrm{Tr}\bar{\rho}W(z)=\exp \left[ -\frac{\left( E+\frac{1}{2}\right) }{2}(x^{2}+y^{2})\right] . \end{equation*} Let us now embed this channel into a quantum Gaussian channel $\Phi $ in the spirit of the previous Section. Since the input $m=(m_{q},m_{p})$ is two-dimensional classical, one has to use two Bosonic input modes $q_{1},p_{1},q_{2},p_{2}$ to describe it quantum-mechanically, so that e.g. $m_{q}=q_{1},m_{p}=q_{2}.$ The environment is one mode $q,p$ in the Gaussian state $\rho _{0}$ so the output is given by the equations \begin{eqnarray} q^{\prime } &=&q+q_{1}=q+m_{q}; \label{eqcha} \\ p^{\prime } &=&p+q_{2}=p+m_{p}, \notag \end{eqnarray} and the channel $\Phi $ parameters are \begin{equation*} K=\left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ 0 & 0 \end{array} \right] ,\quad \alpha =\left( N+\frac{1}{2}\right) I_{2}. \end{equation*} The equations for the environment modes describing the weakly complementary channel $\tilde{\Phi}^{w}$ are \begin{eqnarray} q_{1}^{\prime } &=&q_{1}, \label{eqen} \\ p_{1}^{\prime } &=&p_{1}-p-q_{2}/2, \notag \\ q_{2}^{\prime } &=&q_{2}, \notag \\ p_{2}^{\prime } &=&p_{2}+q+q_{1}/2.
\notag \end{eqnarray} In fact, the set of equations (\ref{eqcha}), (\ref{eqen}) is the same as for the quantum channel with additive classical Gaussian noise (see \cite{H-SSQT}, Ex. 12.42), but in the latter case the input variables are $q,p$ while in the former -- $q_{1},p_{1},q_{2},p_{2}$ (in both cases the output is $q^{\prime },p^{\prime }$). If $N=0$ so that $\rho _{0}$ is pure, these equations describe the complementary channel $\tilde{\Phi}$. Having realized the c-q channel as a quantum one (i.e. a channel with quantum input and output), it makes sense to speak of its entanglement-assisted capacity. Under the same constraint it is given by the expression \begin{equation} C_{ea}(E)=\sup_{\rho _{12}\in \mathfrak{S}_{E}}I(\rho _{12},\Phi ), \label{cea} \end{equation} where \begin{equation*} \mathfrak{S}_{E}=\left\{ \rho _{12}:\mathrm{Tr}\rho _{12}\left( \frac{q_{1}^{2}+q_{2}^{2}}{2}\right) \leq E\right\} \end{equation*} corresponds to the constraint (\ref{3-5}). Notice that the constraint operator $H=\frac{q_{1}^{2}+q_{2}^{2}}{2}$ is unusual in that it is given by a \emph{degenerate} quadratic form in the input variables $q_{1},p_{1},q_{2},p_{2}$. In this case the set $\mathfrak{S}_{E}$ is not compact, the supremum in (\ref{cea}) is not attained and to obtain this formula we need to use a result from \cite{H-Sh}. Now assume the minimal noise $N=0$ and let us show that \begin{equation} C_{ea}(E)=C(E)=g(E). \label{ccea} \end{equation} Proposition 1 of Sec. \ref{2} implies \begin{equation*} C_{ea}(E)\leq \sup_{\rho _{12}\in \mathfrak{S}_{E}}S(\Phi \lbrack \rho _{12}]).
\end{equation*} But \begin{equation*} \Phi \lbrack \mathfrak{S}_{E}]=\left\{ \bar{\rho}_{p}:p\in \mathcal{P}_{E}\right\}, \end{equation*} where $\mathcal{P}_{E}$ is defined by (\ref{3-9}), as can be seen from the equations of the channel (\ref{eqcha}) and the identification of the probability density $p(m_{q}\,,m_{p})$ with that of the observables $q_{1},q_{2}$ in the state $\rho _{12}.$ Invoking (\ref{cge}) gives $\sup_{\rho _{12}\in \mathfrak{S}_{E}}S(\Phi \lbrack \rho _{12}])=g(E)$ and hence the equality (\ref{ccea}). This example is a special case of Proposition 2 in Sec. \ref{2}, all the conditions of which are fulfilled with $\mathbf{Ran}K^{t}=Z_{B}=\mathbf{R}^{2}$ and \begin{equation*} \gamma _{n}=\left[ \begin{array}{cccc} \varepsilon _{n} & 0 & 0 & 0 \\ 0 & \frac{1}{4\varepsilon _{n}} & 0 & 0 \\ 0 & 0 & \varepsilon _{n} & 0 \\ 0 & 0 & 0 & \frac{1}{4\varepsilon _{n}} \end{array} \right] . \end{equation*} 2. Now we give an example with $C(E)<C_{ea}(E).$ Let $m\in \mathbf{R}$ be a real one-dimensional signal and the channel is $m\rightarrow \rho _{m}$, where $\rho _{m}$ is the state with the characteristic function \begin{equation} \mathrm{Tr}\rho _{m}W(z)=\exp \left[ imx-\frac{1}{2}(\sigma ^{2}x^{2}+\frac{1}{4\sigma ^{2}}y^{2})\right] , \label{component1} \end{equation} so that \begin{equation*} \rho _{m}=D(x,0)\rho _{0}D(x,0)^{\ast }. \end{equation*} The mapping $m\rightarrow \rho _{m}$ can be considered as transmission of the classical signal $m$ with the additive noise arising from the $q$-component of the quantum Gaussian mode $q,p$ with the variances $\mathsf{D}q=\sigma ^{2},\mathsf{D}p=\frac{1}{4\sigma ^{2}}$ and zero covariance between $q$ and $p$. The state $\rho _{0}$ is pure (squeezed vacuum) corresponding to a minimal noise. The constraint on the input probability distribution $p(m)$ is defined as \begin{equation} \int m^{2}\,\,p(m)dm\leq E, \label{3-51} \end{equation} where $E$ is a positive constant.
As the component $p$ is not affected by the signal, from the information-theoretic point of view this channel is equivalent to the classical additive Gaussian noise channel $m\rightarrow m+q,$ and its capacity under the constraint (\ref{3-51}) is given by the Shannon formula \begin{equation} C(E)=\frac{1}{2}\log \left( 1+r\right) , \label{gacapa1} \end{equation} where $r=E/\sigma ^{2}$ is the \emph{signal-to-noise ratio}. A different way to describe this channel is to represent it as a quantum Gaussian channel $\Phi $. Introducing the input mode $q_{1},p_{1},$ so that $m=q_{1},$ with the environment mode $q,p$ in the state $\rho _{0}$, the output is given by the equations \begin{eqnarray} q_{1}^{\prime } &=&q_{1}+q; \label{eqcha1} \\ p_{1}^{\prime } &=&p, \notag \end{eqnarray} and the channel $\Phi $ parameters are \begin{equation*} K=\left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right] ,\quad \alpha =\left[ \begin{array}{cc} \sigma ^{2} & 0 \\ 0 & \frac{1}{4\sigma ^{2}} \end{array} \right] . \end{equation*} The equations for the environment mode describing the complementary channel $\tilde{\Phi}$ are (see \cite{H-SSQT}) \begin{eqnarray} q^{\prime } &=&q_{1}, \label{eqen1} \\ p^{\prime } &=&p_{1}-p, \notag \end{eqnarray} and the set of equations (\ref{eqcha1}), (\ref{eqen1}) describes the canonical transformation of the composite system (system + environment). The classical entanglement-assisted capacity of this channel under the same constraint is given by the expression \begin{equation} C_{ea}(E)=\sup_{\rho _{1}\in \mathfrak{S}_{E}^{(1)}}I(\rho _{1},\Phi ), \label{cea1} \end{equation} where $\mathfrak{S}_{E}^{(1)}=\left\{ \rho _{1}:\mathrm{Tr}\rho _{1}q_{1}^{2}\leq E\right\} .$ As in the first example, the constraint operator $q_{1}^{2}$ is given by a degenerate quadratic form in the input variables $q_{1},p_{1}$, the set $\mathfrak{S}_{E}^{(1)}$ is not compact and the supremum in (\ref{cea1}) is not attained. Let us compute the entanglement-assisted capacity.
For this consider the values of $I(\rho _{A},\Phi )$ for centered Gaussian states $\rho _{A}=\rho _{1}$ with covariance matrices \begin{equation*} \alpha _{1}=\left[ \begin{array}{cc} E & 0 \\ 0 & E_{1} \end{array} \right] , \end{equation*} satisfying the uncertainty relation $EE_{1}\geq \frac{1}{4}$ and belonging to the set $\mathfrak{S}_{E}^{(1)}$ with the equality. We use the formula (\ref{entropy}) implying \begin{equation*} S(\rho _{A})=g\left( \sqrt{EE_{1}}-\frac{1}{2}\right) . \end{equation*} According to (\ref{eqcha1}), the output state $\rho _{B}=\Phi \lbrack \rho _{A}]$ has the covariance matrix \begin{equation*} \alpha _{B}=\left[ \begin{array}{cc} E+\sigma ^{2} & 0 \\ 0 & \frac{1}{4\sigma ^{2}} \end{array} \right] , \end{equation*} with the entropy \begin{equation*} S(\rho _{B})=g\left( \sqrt{\frac{E}{4\sigma ^{2}}+\frac{1}{4}}-\frac{1}{2}\right) . \end{equation*} Similarly, according to (\ref{eqen1}) the state $\rho _{E}=\tilde{\Phi}[\rho _{A}]$ of the environment has the covariance matrix \begin{equation*} \alpha _{E}=\left[ \begin{array}{cc} E & 0 \\ 0 & E_{1}+\frac{1}{4\sigma ^{2}} \end{array} \right] , \end{equation*} with the entropy \begin{equation*} S(\rho _{E})=g\left( \sqrt{EE_{1}+\frac{E}{4\sigma ^{2}}}-\frac{1}{2}\right) . \end{equation*} Summing up, \begin{eqnarray*} I(\rho _{A},\Phi ) &=&S(\rho _{A})+S(\rho _{B})-S(\rho _{E}) \\ &=&g\left( \sqrt{\frac{E}{4\sigma ^{2}}+\frac{1}{4}}-\frac{1}{2}\right) -\delta _{1}(E_{1}), \end{eqnarray*} where \begin{equation*} \delta _{1}(E_{1})=g\left( \sqrt{EE_{1}+\frac{E}{4\sigma ^{2}}}-\frac{1}{2}\right) -g\left( \sqrt{EE_{1}}-\frac{1}{2}\right) \end{equation*} is a positive function on the range $[\frac{1}{4E},\infty ),$ decreasing from $g\left( \sqrt{\frac{E}{4\sigma ^{2}}+\frac{1}{4}}-\frac{1}{2}\right) $ to 0 for $E_{1}\rightarrow \infty $ (this follows from the asymptotic $g\left( x\right) =\log \left( x/\mathrm{e}\right) +o(1)$).
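These closed-form expressions are easy to probe numerically. A minimal sketch (natural logarithms, with illustrative values $E=\sigma ^{2}=1$, so $r=1$) checking that $\delta _{1}(E_{1})$ decreases to zero and that the resulting limit $g\left( \sqrt{E/4\sigma ^{2}+1/4}-1/2\right) $ exceeds the Shannon capacity $\frac{1}{2}\log (1+r)$:

```python
import math

def g(x):
    # g(x) = (x+1) log(x+1) - x log x, with g(0) = 0
    return (x + 1) * math.log(x + 1) - x * math.log(x) if x > 0 else 0.0

E, sigma2 = 1.0, 1.0  # illustrative values; r = E / sigma^2 = 1

def delta1(E1):
    return (g(math.sqrt(E * E1 + E / (4 * sigma2)) - 0.5)
            - g(math.sqrt(E * E1) - 0.5))

# delta_1 decreases towards 0 as E_1 grows ...
vals = [delta1(E1) for E1 in (0.25, 1.0, 10.0, 1000.0)]

# ... so sup I(rho_A, Phi) approaches g(sqrt(E/(4 sigma^2) + 1/4) - 1/2),
# which strictly exceeds the unassisted Shannon capacity (1/2) log(1 + r).
cea_limit = g(math.sqrt(E / (4 * sigma2) + 0.25) - 0.5)
c_shannon = 0.5 * math.log(1 + E / sigma2)
```
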
Thus \begin{equation*} C_{ea}(E)\geq g\left( \sqrt{\frac{E}{4\sigma ^{2}}+\frac{1}{4}}-\frac{1}{2}\right) . \end{equation*} Let us show that in fact there is equality here, by using the concavity of the quantum mutual information (see \cite{H-SSQT}, Sec. 12.5). For a given input state $\rho $ with finite second moments consider the state \begin{equation*} \tilde{\rho}=\frac{1}{2}\left( \rho +\rho ^{\top }\right) , \end{equation*} where the transposition $^{\top }$ corresponds to the antiunitary conjugation $q,p\rightarrow q,-p.$ The state $\tilde{\rho}$ has the same variances $\mathsf{D}q, \mathsf{D}p$ as $\rho$, and zero covariance between $q$ and $p$. The channel (\ref{eqcha1}) is covariant with respect to the transposition; by the aforementioned concavity, $I(\tilde{\rho},\Phi )\geq I(\rho ,\Phi ),$ moreover, $I(\tilde{\rho}_{G},\Phi )\geq I(\tilde{\rho},\Phi ),$ where $\tilde{\rho}_{G}$ is the Gaussian state with the same first and second moments as $\tilde{\rho}.$ Thus \begin{eqnarray*} C_{ea}(E) &=&g\left( \sqrt{\frac{E}{4\sigma ^{2}}+\frac{1}{4}}-\frac{1}{2}\right) =g\left( \frac{\sqrt{1+r}-1}{2}\right) \\ &=&\frac{\sqrt{1+r}+1}{2}\log \frac{\sqrt{1+r}+1}{2}-\frac{\sqrt{1+r}-1}{2}\log \frac{\sqrt{1+r}-1}{2}, \end{eqnarray*} where $r=E/\sigma ^{2}$ is the signal-to-noise ratio. Comparing this with (\ref{gacapa1}), one has $C_{ea}(E)>C(E)$ for $E>0$ (see Appendix), with the entanglement-assistance gain $C_{ea}(E)/C(E)\sim -\frac{1}{2}\log r$ as $r\rightarrow 0$ and $C_{ea}(E)/C(E)\rightarrow 1$ as $r\rightarrow \infty $ (see Figures). As is to be expected, Proposition 2 is not applicable, as $\mathrm{rank}\,K^{t}=1<\dim Z_{B}$ here, while \begin{equation*} \gamma _{n}=\left[ \begin{array}{cc} \varepsilon _{n} & 0 \\ 0 & \frac{1}{4\varepsilon _{n}} \end{array} \right] \end{equation*} still satisfies the conditions 1, 2 of the Lemma. \section{Appendix} 1. Consider the channel (\ref{eqcha}).
It is instructive to compare its unassisted classical capacity $C(E)$ given by (\ref{ccea}) with the values of $I(\rho _{12},\Phi )$ for centered Gaussian states $\rho _{12}=\rho _{A}$ with the covariance matrices \begin{equation*} \alpha _{12}=\left[ \begin{array}{cccc} E & 0 & 0 & 0 \\ 0 & E_{1} & 0 & 0 \\ 0 & 0 & E & 0 \\ 0 & 0 & 0 & E_{1} \end{array} \right] , \end{equation*} satisfying the uncertainty relation $EE_{1}\geq \frac{1}{4}$ and belonging to the set $\mathfrak{S}_{E}$ with the equality. We then find \begin{equation*} S(\rho _{12})=2g\left( \sqrt{EE_{1}}-\frac{1}{2}\right) . \end{equation*} According to (\ref{eqcha}), $\rho _{B}=\Phi \lbrack \rho _{A}]$ has the covariance matrix \begin{equation*} \alpha _{B}=\left[ \begin{array}{cc} E+\frac{1}{2} & 0 \\ 0 & E+\frac{1}{2} \end{array} \right] , \end{equation*} with the entropy $g(E),$ and according to (\ref{eqen}) the state $\rho _{E}$ of the environment has the covariance matrix \begin{equation*} \alpha _{E}=\left[ \begin{array}{cccc} E & 0 & 0 & E/2 \\ 0 & \tilde{E}_{1} & -E/2 & 0 \\ 0 & -E/2 & E & 0 \\ E/2 & 0 & 0 & \tilde{E}_{1} \end{array} \right] , \end{equation*} where $\tilde{E}_{1}=E_{1}+\frac{1}{2}+\frac{E}{4}.$ The eigenvalues of $\Delta _{E}^{-1}\alpha _{E}$ are $\sqrt{E}\left( \sqrt{\tilde{E}_{1}}\pm \frac{1}{2}\sqrt{E}\right) $ and have multiplicity 2. Thus \begin{equation*} S(\rho _{E})=S(\tilde{\Phi}[\rho _{12}])=g\left( \sqrt{E}\left( \sqrt{\tilde{E}_{1}}+\frac{1}{2}\sqrt{E}\right) -\frac{1}{2}\right) \end{equation*} \begin{equation*} +g\left( \sqrt{E}\left( \sqrt{\tilde{E}_{1}}-\frac{1}{2}\sqrt{E}\right) -\frac{1}{2}\right) .
\end{equation*} Summing up, \begin{equation*} I(\rho _{12},\Phi )=g(E)-\delta (E_{1}), \end{equation*} where \begin{eqnarray*} \delta (E_{1}) &=&g\left( \sqrt{E}\left( \sqrt{\tilde{E}_{1}}+\frac{1}{2}\sqrt{E}\right) -\frac{1}{2}\right) +g\left( \sqrt{E}\left( \sqrt{\tilde{E}_{1}}-\frac{1}{2}\sqrt{E}\right) -\frac{1}{2}\right) \\ &&-2g\left( \sqrt{EE_{1}}-\frac{1}{2}\right) \end{eqnarray*} is a positive function on the range $[\frac{1}{4E},\infty ),$ varying from $g(E)$ to 0. Hence the value (\ref{ccea}) is attained only asymptotically for the input states $\rho _{12}$ with momentum variance $E_{1}\rightarrow \infty .$ 2. Introducing the new variable $x=\sqrt{1+r}\geq 1,$ we have \begin{equation*} C(E)=\log x\equiv f_{1}(x),\quad C_{ea}(E)=\frac{x+1}{2}\log \frac{x+1}{2}-\frac{x-1}{2}\log \frac{x-1}{2}\equiv f_{2}(x). \end{equation*} Then $f_{1}(1)=f_{2}(1),f_{1}^{\prime }(\infty )=f_{2}^{\prime }(\infty )$ and $f_{1}^{\prime \prime }(x)>f_{2}^{\prime \prime }(x).$ It follows that $f_{1}(x)<f_{2}(x)$ for $x>1.$ \bigskip \textbf{Acknowledgments.} This work was partly supported by RFBR grant N 12-01-00319-a, Fundamental Research Programs of RAS and by the Cariplo Fellowship under the auspices of the Landau Network - Centro Volta. The author is grateful to G. M. D'Ariano for the hospitality at the QUIT group of the University of Pavia, and to A. Barchielli, L. Maccone, P. Perinotti, M.F. Sacchi and M.E. Shirokov for stimulating discussions. Special thanks are due to L. Maccone for the help with LaTeX graphics.
\section{Introduction} Music is one of the most widespread and prevalent expressions of human culture. It has accompanied the human experience throughout history, and the enjoyment of music is one of the most common human activities. As an activity, music listening sessions commonly span over a sequence of songs, rather than a single song in isolation. Importantly, it is well established that music is experienced in temporal context and in sequence \cite{davies1978psychology,kahnx1997patterns}. This phenomenon not only underlies the notion of structure in music (as in the canonical sonata form \cite{cook1994guide}), but also implies that the pleasure one derives from a complete song is directly affected by its relative position in a sequence. This notion also underlies the manner in which DJs construct playlists \cite{cliff2000hang}, and indeed, research on automated playlist construction has aimed to produce generally appealing playlists \cite{oliver2006papa,crampes2007automatic}. However, such works have not considered the construction of personalized playlists tailored to \emph{individual} users' preferences. In the field of recommender systems \cite{adomavicius2005toward}, music has been of particular interest, both academically \cite{o2005trust} and commercially \cite{barrington2009smarter}. Pandora, Jango, and Last.fm are some examples of popular contemporary commercial applications. To the best of our knowledge, however, research on \emph{personalized} music recommendations has focused mostly on predicting users' preferences over \emph{individual} songs, rather than song \emph{sequences}. Overall, there has been little effort to relate learning individual listener preferences with holistic playlist generation. In this paper, we aim to bridge this gap and present DJ-MC, a novel framework for \emph{adaptive, personalized} music playlist recommendation.
In this framework, we formulate the playlist recommendation problem as a sequential decision making task, and borrow tools from the reinforcement learning literature to learn preferences over both songs and song transitions on the fly. Our contributions are as follows. First, we formulate the problem of selecting which sequence of songs to play as a Markov Decision Process, and demonstrate the potential effectiveness of a reinforcement-learning based approach in a new practical domain. Second, we test the hypothesis that sequence does have a significant effect on listener experience through a user study. Third, we show empirically that DJ-MC's account for song order allows it to outperform recommendations based strictly on individual song preferences, implying such preferences can be learned efficiently with limited user information. In particular, we demonstrate that starting with no knowledge of a new user's preferences, DJ-MC is able to generate personalized song sequences within a single listening session of just 25--50 songs. The remainder of this paper is organized as follows. In Section $2$ we discuss our reformulation of playlist generation as a reinforcement learning task. In Section $3$ we describe how the DJ-MC agent models different aspects of the MDP for the purpose of learning. In Section $4$ we present the real-world data sources we used in this paper. In Section $5$ we present the full DJ-MC agent architecture. In Section $6$ we discuss the performance of DJ-MC in simulation, and in Section $7$ we present the results of applying DJ-MC in a user study with human participants. In Section $8$ we discuss related work and put our contributions in a broader context, and finally in Section $9$ we summarize and discuss our results. \section{Reinforcement Learning Framework} We consider the adaptive playlist generation problem formally as an episodic Markov Decision Process (MDP). 
An episodic MDP is a tuple $(S, A, P, R, T)$ where $S$ is the set of states; $A$ the set of actions, $P:S \times A \times S \rightarrow [0,1]$ is the state transition probability function where $P(s,a,s')=p$ denotes the probability $p$ of transitioning from state $s$ to state $s'$ when taking action $a$. $R:S \times A \rightarrow \mathbb{R}$ is the state-action reward function, where $R(s,a)=r$ means that taking action $a$ from state $s$ will yield reward $r$. $T$ is the set of terminal states, which end the episode. For the purposes of our specific application, consider a finite set of $n$ musical tracks (songs) $\cal{M}$ $= \{a_1, a_2, \ldots, a_n \} $ and assume that playlists are of length $k$. Our MDP formulation of the music playlist recommendation task is then as follows. \begin{itemize} \item To capture the complex dependency of listener experience on the entire sequence of songs heard, a Markov state must include an ordered list of all prior songs in the playlist. Thus, the state space $S$ is the entire ordered sequence of songs played, $S = \{(a_1, a_2, \ldots, a_i) | 1 \leq i \leq k; \forall j \leq i, a_j \in \cal{M}\}$. That is, a state $s \in S$ is an ordered tuple of songs ranging in length from 0 when choosing the first song of the playlist to $k$ when the playlist is complete. \item The set of actions $A$ is the selection of the next song to play, $a \in A$. This means that the action space is exactly the set of songs: $A = \cal{M}$. \item These definitions of $S$ and $A$ induce a deterministic transition function $P$. As such, we can use the shorthand notation $P(s,a) = s'$ to indicate that when taking action $a$ in state $s$, the probability of transitioning to $s'$ is 1, and to $s'' \neq s'$ is 0. Specifically, \linebreak $P((a_1, a_2, \ldots, a_i), a^*) = (a_1, a_2, \ldots, a_i, a^*)$. \item $R(s,a)$ is the utility (or pleasure) the current listener derives from hearing song $a$ when in state $s$.
Note that this formulation implies that each listener induces a unique reward function. A key challenge addressed in this paper is enabling efficient learning of $R$ for a new listener. \item $T = \{(a_1, a_2, \ldots, a_k)\}$: the set of playlists of length $k$. \end{itemize} \emph{Solving} an MDP typically refers to finding a policy $\pi: S \rightarrow A$ such that from any given state $s$, executing action $\pi(s)$ and then acting optimally (following the optimal policy $\pi^*$) thereafter, yields the highest (expected) sum of rewards over the length of the episode. In our case, since $P$ is deterministic, $\pi^*$ corresponds to the single sequence of songs that would be most pleasing to the listener.\footnote{We consider the problem as finding a single playlist in isolation, ignoring the fact that the same listener may not want to hear similar sequences repeatedly. In practice, the stochasticity of our approach makes it exceedingly unlikely that the same sequence would be presented to a given listener multiple times, as will become clear below.} However, we assume that the listener's reward function $R$ is initially unknown. We consider the fundamental challenge of playlist generation as being efficiently modeling $R$. In particular, in the reinforcement learning literature, there are two high-level approaches to approximating (learning) $\pi^*$: model-free and model-based. \emph{Model-free} approaches learn the value of taking an action $a$ from state $s$ directly. Typical approaches, such as $Q$-learning and SARSA \cite{Sutton1998} are computationally efficient and elegant, but require a lot of experiential data to learn. \emph{Model-based} approaches alternatively learn the transition and reward functions ($P$ and $R$) so as to be able to \emph{simulate} arbitrary amounts of experiential data in order to find an approximate solution to the MDP in an approach that can be thought of as \emph{planning} through forward lookahead search.
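The deterministic playlist MDP above can be sketched directly in code. A minimal sketch, not the paper's implementation: the `Song` type is a placeholder identifier, and `reward` stands in for the listener's unknown $R(s,a)$:

```python
from typing import Callable, Tuple

Song = str  # placeholder; DJ-MC represents songs by feature vectors over a corpus

def transition(state: Tuple[Song, ...], action: Song) -> Tuple[Song, ...]:
    # P is deterministic: playing song `action` appends it to the ordered history.
    return state + (action,)

def is_terminal(state: Tuple[Song, ...], k: int) -> bool:
    # Episodes end once the playlist reaches length k.
    return len(state) == k

def episode_return(playlist: Tuple[Song, ...],
                   reward: Callable[[Tuple[Song, ...], Song], float]) -> float:
    # The listener's utility for a playlist is the sum of per-step rewards R(s, a),
    # where s is the prefix of songs heard so far.
    return sum(reward(playlist[:i], playlist[i]) for i in range(len(playlist)))
```
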
Compared to model-free methods, most model-based algorithms are significantly more computationally expensive, especially if they re-solve the MDP whenever the model changes. However, in many applications, including playlist recommendation, where data is considerably more scarce than computation, this tradeoff of computational expense for data efficiency is a good one. We therefore adopt a model-based learning approach in this paper (see Sections \ref{sec:model} and \ref{sec:djmc} for details). In the MDP defined above, the transition function $P$ is trivially known. Therefore the only unknown element of the model necessary for model-based learning is $R$, the current listener's utility (enjoyment) function. Indeed modeling $R$ in such a way that generalizes aggressively and accurately across both songs and song transitions is the biggest technical challenge in this work. Consider that even for a moderately sized music corpus of $10^3$ songs, and for playlist horizons of $10$ songs, the size of the state space alone would be $10^{30}$. It is impractical for a learning agent to even explore any appreciable size of this state space, let alone learn the listener's utility for each possible state (indeed our objective is to learn a new user's preferences and generate a personalized song sequence within a single listening session of 25--50 songs). Therefore to learn efficiently, the agent must internally represent states and actions in such a way that enables generalization of the listener's preferences. Section $3$ presents how DJ-MC compactly represents $R$ by 1) generalizing across songs via a factored representation; and 2) separating $R$ into two distinct components, one dependent only on the current song ($a$), and one dependent on the transition from the past history of songs to the current song ($s$ to $a$). 
Recognizing that DJ-MC's specific representation of $R$ is just one of many possible options, we also evaluate the extent to which the representational choices made are effective for generalization and learning. \section{Modeling} \label{sec:model} As motivated in the previous section, learning a listener's preference function over a large set of songs and sequences requires a compact representation of songs that is still rich enough to capture meaningful differences in how they are perceived by listeners. To this end, we represent each song as a vector of song \emph{descriptors}. Specifically DJ-MC uses spectral auditory descriptors that include details about the spectral fingerprint of the song, its rhythmic characteristics, its overall loudness, and their change over time. We find that these descriptors enable a great deal of flexibility (for instance, in capturing similarities between songs from vastly different backgrounds, or the ability to model songs in unknown languages). Nonetheless, our framework is in principle robust to using any sufficiently expressive vector of song descriptors. Section $3.1$ specifies in detail the descriptors used by DJ-MC. In order to further speed up learning, we make a second key representational choice, namely that the reward function $R$ corresponding to a listener can be factored as the sum of two distinct components: 1) the listener's preference over \emph{songs} in isolation, $R_s: A \rightarrow \mathbb{R}$ and 2) his preference over \emph{transitions} from past songs to a new song, $R_t: S \times A \rightarrow \mathbb{R}$. That is, $R(s,a) = R_s(a) + R_t(s,a)$. Section $3.2$ describes DJ-MC's reward model in detail. Section $3.3$ then evaluates the extent to which the chosen descriptors are able to differentiate meaningfully between song sequences that are clearly good and clearly bad. 
\subsection{Modeling Songs} As motivated above, we assume each song can be represented as a vector of scalar descriptors that reflect details about the spectral fingerprint of the song, its rhythmic characteristics, its overall loudness, and their change over time. For the purpose of our experiments, we used the acoustic features in the Million Song Dataset representation \cite{bertin2011million} to extract $12$ meta-descriptors, $2$ of which are $12$-dimensional, resulting in a $34$-dimensional song descriptor vector. The complete set of descriptors is summarized in Table $1$. \begin{table} \begin{center} \begin{tabular}{|l|l|} \hline Descriptors & Descriptor Indices \\ \hline 10th and 90th percentiles of tempo & 1,2 \\ average and variance of tempo & 3,4 \\ 10th and 90th percentiles of loudness & 5,6 \\ average and variance of loudness & 7,8 \\ pitch dominance & 9--20 \\ variance of pitch dominance & 21 \\ average timbre weights & 22--33 \\ variance in timbre & 34 \\ \hline \end{tabular} \end{center} \caption{{\scriptsize Descriptors used for song representation. Tempo data was based on beat durations; thus the first descriptor is the 10th percentile of beat durations in the song. Loudness was straightforwardly obtained from amplitude. Pitch dominance weights each of the $12$ possible pitch classes based on their presence in the song, averaged over time. Timbre weights are the average weights of the $12$ basis functions used by the Echo Nest analysis tool to capture the spectro-temporal landscape of the song.}} \end{table} \subsection{Modeling The Listener Reward Function} Despite an abundance of literature on the psychology of human musical perception \cite{tan2010psychology}, there is no canonical model of the human listening experience. In this work we model listening as being dependent not only on preferences over the descriptors laid out above, but also on preferences over descriptor \emph{transitions}. 
This model is fairly consistent with many observed properties of human perception, such as the stochastic dependence on remembering earlier events, and evidence that working memory places greater emphasis on the present \cite{davies1978psychology,berz1995working,tan2010psychology}. We now proceed to specify the two components of $R$: $R_s$ and $R_t$. \subsubsection{Listener Reward Function over Songs $R_s$} To model $R_s$, we use a sparse encoding of the song descriptors to generate a binary feature vector. $R_s$ is then a linear function of this feature vector: that is, we assume that each feature contributes independently to the listener's utility for the song. Specifically, for each song descriptor, we collect statistics over the entire music database and quantize the descriptor into 10-percentile bins. Following standard reinforcement learning notation, we denote the feature vector for song $a$ as $\theta_s(a)$. It is a vector of size $\#bins \times \#descriptors = 10 \times 34 = 340$, with 1's at the coordinates corresponding to the bins that song $a$ populates and 0's elsewhere; thus $\theta_s(a)$ acts as an indicator vector with exactly $34$ nonzero entries, one per descriptor. For each feature, we assume the listener has a value representing the pleasure they obtain from songs with that feature active. These values are represented as a weight vector $\phi_s(u)$. Thus $R_s(a) = \phi_s(u) \cdot \theta_s(a)$. The parameters of $\phi_s(u)$ must be learned afresh for each new user. \subsubsection{Listener Reward Function over Transitions $R_t$} A main premise of this work is that in addition to the actual songs played, a listener's enjoyment depends on the \emph{sequence} in which they are played. 
To capture this dependence, we assume that $$E[R_t((a_1, \ldots, a_{t-1}), a_t)] = \sum\limits_{i=1}^{t-1} \frac{1}{i^2}r_t(a_{t-i},a_t)$$ where $r_t(a_i,a_j)$ represents the listener's utility for hearing song $a_j$ sometime after having heard $a_i$. The term $\frac{1}{i^2}$ represents the notion that a song played $i$ songs in the past has a probability of $\frac{1}{i}$ of affecting the transition reward (i.e.\ being ``remembered''), and when it does, its impact decays by a second factor of $\frac{1}{i}$. It remains only to specify the song-to-song transition reward function $r_t(a_i,a_j)$. Like $R_s$, we describe $r_t$ as a linear function of a sparse binary feature vector: $r_t(a_i,a_j) = \phi_t(u) \cdot \theta_t(a_i,a_j)$, where $\phi_t(u)$ is a user-dependent weight vector and $\theta_t$ is a binary feature vector. Were we to consider the transitions between all $340$ features of both $a_i$ and $a_j$, $\theta_t$ would need to be of length $340^2 > 100{,}000$. For the sake of learnability, we limit $\theta_t$ and $\phi_t$ to represent only transitions between 10-percentile bins of the same song descriptor. That is, for each of the $34$ song descriptors there are $100$ features, exactly one of which is $1$ and the other $99$ of which are $0$, indicating which pair of 10-percentile bins was occupied in songs $a_i$ and $a_j$. Therefore, overall, $\theta_t$ consists of $3{,}400$ binary features, $34$ of which are 1's. Clearly, this representation is limited in that it cannot capture the joint dependence of listener utility on transitions between multiple song descriptors; especially for the pitch class features, such dependence is likely to be relevant. We make this tradeoff in the interest of enabling learning from relatively few examples. Empirical results indicate that this representation captures enough of real listeners' transition preferences to make a difference in song recommendation quality. 
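The feature construction and the two reward terms defined above can be sketched as follows. This is an illustrative rendering under our own naming assumptions: `bin_edges[d]` stands in for the nine interior decile edges of descriptor $d$ collected over the corpus, and `pairwise_rt` stands in for the learned pairwise function $r_t$.

```python
import bisect

N_BINS = 10   # 10-percentile bins per descriptor
N_DESC = 34   # descriptors per song (Table 1)

def theta_s(song, bin_edges):
    """Binary song feature vector of length N_BINS * N_DESC (= 340).
    `song` is a list of N_DESC scalar descriptors. Exactly one bin fires
    per descriptor, so the vector has N_DESC ones."""
    vec = [0.0] * (N_BINS * N_DESC)
    for d, value in enumerate(song):
        b = bisect.bisect_right(bin_edges[d], value)  # bin index in 0..9
        vec[d * N_BINS + b] = 1.0
    return vec

def song_reward(song, phi_s, bin_edges):
    """R_s(a) = phi_s(u) . theta_s(a)."""
    return sum(w * f for w, f in zip(phi_s, theta_s(song, bin_edges)))

def expected_transition_reward(history, song, pairwise_rt):
    """E[R_t((a_1..a_{t-1}), a_t)] = sum_{i=1}^{t-1} (1/i^2) r_t(a_{t-i}, a_t),
    where i counts back from the most recently played song."""
    return sum(pairwise_rt(history[-i], song) / i**2
               for i in range(1, len(history) + 1))
```

Note how the $1/i^2$ weights make recent songs dominate the expected transition reward, matching the memory model described above.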
Like $\phi_s(u)$, the parameters of $\phi_t(u)$ must be learned afresh for each new user. Thus, all in all, there are $3{,}740$ weight parameters to learn for each listener. With that many parameters, a listener cannot experience songs and transitions that activate all of them in just 25 songs. However, DJ-MC is able to leverage knowledge of even a few transition examples to plan a future sequence of songs that is biased in favor of the positive ones and against the negative ones. \subsection{Expressiveness of the Listener Model} This representation of the listener's utility function as a $3{,}740$-dimensional sparse binary feature vector is just one of many possible representations. A necessary property of a useful representation is that its features are able to differentiate between commonly perceived ``good'' vs. ``bad'' sequences, and the DJ-MC agent internally relies on this property when modeling the listener reward function. To evaluate whether our features are expressive enough to allow this differentiation, we examine the transition profiles for two types of transitions, ``poor'' vs. ``fair'', both derived from the same population of songs. We generate ``fair'' transitions by sampling pairs of songs that appeared in an actual sequence. We generate ``poor'' transitions by interleaving songs so that each one is distinctly different in character from its predecessor (for instance, a fast, loud track followed by a soft piece). The difference between the two profiles can be seen in Figure \emph{1}. More definitive evidence in favor of the adequacy of our representation is provided by the successful empirical application of our framework, discussed in Section $7$. \begin{figure}[h!] \label{fig1} \centering \includegraphics[height=150pt,width=0.5\textwidth,natwidth=310,natheight=322]{Diffs_Improved22_l.png} \caption{{\scriptsize Example of fair vs. poor transition profiles, based on the same set of $20$ songs. The plot shows the average transition delta for each feature. 
Both the fair transitions and the poor ones are constructed from the same core set of $20$ songs taken from $5$ different albums. In the case of fair transitions, we maintain the original order. In the case of poor transitions, the albums are randomly interleaved. The results indicate that qualitatively different sequences are indeed distinguishable in our feature model. In this specific example, $19$ of the $34$ features are discriminative (confidence intervals do not overlap). We expect different features to be discriminative for different transition profiles. }} \end{figure} \section{Data} A significant component of this work involves extracting real-world data for both songs and playlists to rigorously test our approach. In this section we discuss the different data sources we used to model both songs and playlists. For songs, we relied on the Million Song Dataset \cite{bertin2011million}, a freely-available collection of audio features and metadata for a million contemporary popular music tracks. The dataset covers $44{,}745$ different artists and $10^6$ different tracks. All the features described in Table $1$ are derived from this representation. An example of the audio input for a single track is provided in Figure \emph{2}. It should be noted that our agent architecture (described in detail in Section $5$) is agnostic to the choice of a specific song corpus, and we could easily have used a different song archive. \begin{figure}[h!] \label{fig2} \centering \includegraphics[height=140pt,width=0.5\textwidth,natwidth=310,natheight=322]{song_visual_new.png} \caption{{\scriptsize Example of loudness, pitch and timbre data for an example track over time (time units are in beats). Loudness is in dB, pitch measurements are represented as a vector of $12$ values between $0$ and $1$ representing dominance per pitch class. Timbre is approximated as the decomposition of the spectro-temporal surface of the song according to $12$ pre-computed basis functions. 
}} \end{figure} To initially test our approach in simulation (a process described in detail in Section $6$), we also needed real playlists from which to extract song transition data. A good source of playlists needs to be sufficiently rich and diverse, but also reflect real playlists ``in the wild''. In this paper, we used two separate sources. The first, the Yes.com archive, is a corpus collected by Chen et al. \cite{chen2012playlist}. These playlists and related tag data were crawled from Yes.com and Last.fm, respectively. Chen et al. harvested data between December 2010 and May 2011, yielding $75{,}262$ songs and $2{,}840{,}553$ transitions. The second source is the Art of the Mix archive, collected by Berenzweig et al.~\cite{berenzweig2004large}. Berenzweig et al. gathered approximately $29{,}000$ playlists from The Art of the Mix (\url{www.artofthemix.org}), a repository and community center for playlist hobbyists. These playlists were (ostensibly) generated by real individual users, rather than by a commercial radio DJ or a recommendation system, making this corpus particularly appealing for listener modeling. \section{DJ-MC} \label{sec:djmc} In this section we introduce DJ-MC, a novel reinforcement learning approach to a playlist-oriented, personalized music recommendation system. The DJ-MC agent architecture contains two major components: learning the listener parameters ($\phi_s$ and $\phi_t$) and planning a sequence of songs. The learning part is itself divided into two stages: initialization and learning on the fly. Initialization is critical if we wish to engage listeners quickly without losing their interest before the agent has converged on a good enough model. Learning on the fly enables the system to continually improve until it converges on a reliable model for that listening session. In simulation, we assume the user is able to specify an initial list of songs that they like (this is similar to most initialization practices used by commercial music recommendation systems). 
However, in Section $7$ we show this step can be replaced with random exploration, while still reaching compelling results at the exploitation stage. The planning step enables the selection of the next appropriate song to play. As pointed out in Section $2$, given the sheer scope of the learning problem, even after various abstraction steps, solving the MDP exactly is intractable. For this reason we must approximate the solution. From a practical perspective, from any given state, the objective is to find a song that is ``good enough'' to play next. For this purpose we utilize Monte Carlo Tree Search. In Sections $5.1$ and $5.2$ we describe the initialization steps taken by DJ-MC. In Section $5.3$ we describe the core of the learning algorithm, which learns on the fly. In Section $5.4$ we describe the planning step. The full agent pseudocode is provided in Algorithm 5. \subsection{Learning Initial Song Preferences} To initialize the listener's song model, DJ-MC polls the listener for his $k_s$ favorite songs in the database and passes them as input to Algorithm 1. As a form of smoothing (or of maintaining a uniform prior), each element of $\phi_s(u)$ is initialized to $1/(k_s+\#bins)$, where $\#bins$ is the granularity of discretization of each song descriptor -- in our case 10 (line 2). Then for each favorite song $a$, $\phi_s(u)$ is incremented by $1/(k_s+\#bins) \cdot \theta_s(a)$ (line 5). At the end of this process, the weights in $\phi_s(u)$ corresponding to each song descriptor sum to 1. \begin{algorithm}[tb!] 
\caption{Initialize Song Preferences $R_s$} \label{alg:oneshot} \begin{algorithmic}[1] \STATE {\bfseries Input:} Song corpus, $\cal{M}$ \vskip 1pt Number of preferred songs to be provided by listener, $k_s$ \vskip 1pt \STATE initialize all coordinates of $\phi_s$ to $1/(k_s+\#bins)$ \STATE \emph{preferredSet} = $\{a_1, \ldots, a_{k_s} \}$ \emph{(chosen by the listener)} \FOR{$i=1$ {\bfseries to} $k_s$} \STATE {$\phi_s = \phi_s + \frac{1}{k_s+\#bins} \cdot \theta_s(a_i)$} \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Learning Initial Transition Preferences} In the second stage, the listener is queried for preferences regarding transitions, following the procedure in Algorithm 2. As in the case of initializing song preferences, the predicted value of a transition from bin $i$ to bin $j$ for each feature is initialized to $1/(k_t+\#bins)$, where $k_t$ is the number of transitions queried and $\#bins$ is the number of feature transition bins -- in our case 100 (line 2). We would not want to query transitions over too small a subset of preferred songs, because that would not necessarily reveal enough about the preferred transitions. For this reason we explore the preferences of the listener in a targeted fashion, by presenting them with different possible transitions that encapsulate the variety in the dataset, and directly asking which of a possible set of options the listener would prefer. On the other hand, we would also like to exclude regions of the search space where expected song rewards are low. To accomplish both ends, DJ-MC first chooses the subset $\cal{M}^*$ containing the $50\%$ of the song corpus $\cal{M}$ that, based on its song reward model, obtains the highest song reward $R_s$ (line 3). Then, DJ-MC queries transition preferences over this upper median of songs by eliciting user feedback. It does so by applying the $\delta$-medoids algorithm, a novel method for representative selection (line 5) \cite{repsel}. 
This algorithm returns a compact but close-fitting subset of representatives such that no sample in the dataset is more than a parameter $\delta$ away from a representative, thus providing a diverse sample of the upper median of songs. $\delta$ is initialized to the $10$th percentile of the histogram of distances between all pairs of songs in the database (line 4). We denote the representative subset $\cal{C}$. To model transitions, DJ-MC chooses songs from $\cal{C}$ and queries the listener as to which song $a_i \in \cal{C}$ they would like to listen to next (line 8).\footnote{If $\cal{C}$ is too large, it can be replaced at this step with a smaller subset, depending on the parameters of the system and the size of $\cal{C}$.} For modeling purposes, we assume the listener chooses the next song he would prefer by simulating the listening experience, including the non-deterministic history-dependent transition reward, and choosing the one with the maximal total reward. DJ-MC then proceeds to update the characteristics of this transition by increasing the weight of the corresponding transition features by $1/(k_t+\#bins)$ (line 9), similarly to how it updated the model for song preferences (so again, the weights for each individual descriptor sum to $1$). The full details of the algorithm are described in Algorithm 2. \begin{algorithm}[tb!] 
\caption{Initialize Transition Preferences $R_t$} \label{alg:oneshot} \begin{algorithmic}[1] \STATE {\bfseries Input:} Song corpus $\cal{M}$ \vskip 1pt Number of transitions to poll the listener, $k_t$ \vskip 1pt \STATE initialize all coordinates of $\phi_t$ to $1/(k_t+\#bins)$ \STATE Select upper median of $\cal{M}$, $\cal{M}^*$, based on $R_s$ \STATE $\delta = $ 10th percentile of all pairwise distances between songs in $\cal{M}$ \STATE representative set $\cal{C} = \delta$-medoids$(\cal{M}^*)$ \STATE $song_0 = $ choose a song randomly from $\cal{C}$ \FOR {$i=1$ {\bfseries to} $k_t$} \STATE $song_i \leftarrow $ \emph{ chosen by the listener from $\cal{C}$} \STATE $\phi_t = \phi_t + \frac{1}{k_t+\#bins} \cdot \theta_t(song_{i-1}, song_i)$ \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Learning on the fly} After initialization, DJ-MC begins playing songs for the listener, requesting feedback, and updating $\phi_s$ and $\phi_t$ accordingly. For ease of use, DJ-MC does not require separate ratings for songs and transitions. Rather, it can assign credit to each component individually from a single unified reward signal. It does so by computing the relative contributions of the song and transition rewards to the total reward as predicted by its model. This update procedure is presented in Algorithm 3. Specifically, let $r$ be the reward the user assigns after hearing song $a$ in state $s$, and $\bar{r}$ be the average reward assigned by this listener so far (line 4). We define $r_{incr} = \log(\frac{r}{\bar{r}})$ (line 5). This factor determines both the direction and the magnitude of the update (negative if $r < \bar{r}$, positive otherwise, and greater in magnitude the farther $r$ is from average). Let $R_s(a_i)$ and $R_t(a_{i-1}, a_i)$ be the expected song and transition rewards yielded by our model, respectively. DJ-MC uses the proportions of these values to set weights for credit assignment (this is essentially a maximum likelihood estimate). 
Concretely, we define the update weights for the song and transition to be $ w_s = \frac{R_s(a_i)}{R_s(a_i) + R_t(a_{i-1},a_i)}$ and $ w_t = \frac{R_t(a_{i-1},a_i)}{R_s(a_i) + R_t(a_{i-1},a_i)}$ respectively (lines 6-7). Finally, the agent uses the credit assignment values determined at the previous step to partition the given reward between song and transition weights, and updates their values (lines 8-9). Following this step, DJ-MC normalizes both the song and transition reward models so that the weights for each feature sum to $1$ (line 10). This update procedure as a whole can be perceived as a temporal-difference update with an attenuating learning rate, which balances how much the model ``trusts'' the previous history of observations compared to the newly obtained signal. It also guarantees convergence over time. \begin{algorithm}[tb!] \caption{Model Update} \label{alg:oneshot} \begin{algorithmic}[1] \STATE {\bfseries Input:} Song corpus, $\cal{M}$ \vskip 1pt Planned playlist duration, $K$ \vskip 1pt \FOR{ $i \in \{1, \ldots, K\}$} \STATE Use Algorithm 4 to select song $a_i$, obtaining reward $r_i$ \STATE let $\bar{r} = average(\{r_1, \ldots, r_{i-1}\})$ \STATE $r_{incr}$ = $\log(r_i/\bar{r})$ \vskip 1pt weight update: \STATE $ w_s = \frac{R_s(a_i)}{R_s(a_i) + R_t(a_{i-1},a_i)}$ \STATE $ w_t = \frac{R_t(a_{i-1},a_i)}{R_s(a_i) + R_t(a_{i-1},a_i)}$ \STATE $\phi_s = \frac{i}{i+1} \cdot \phi_s + \frac{1}{i+1} \cdot \theta_s \cdot w_s \cdot r_{incr}$ \STATE $\phi_t = \frac{i}{i+1} \cdot \phi_t + \frac{1}{i+1} \cdot \theta_t \cdot w_t \cdot r_{incr}$ \STATE Per $d \in \mbox{descriptors}$, normalize $\phi_s^d, \phi_t^d$ \vskip 1pt (where $\phi_x^d$ denotes the coordinates of $\phi_x$ corresponding to 10-percentile bins of descriptor $d$) \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Planning} Equipped with the listener's learned song and transition utility functions $R_s$ and $R_t$, which determine the MDP reward function $R(s,a)=R_s(a)+R_t(s,a)$, DJ-MC employs 
a tree-search heuristic for planning, similar to that used in \cite{AAMAS13-urieli}. As in the case of initializing the transition weights (Algorithm 2), DJ-MC chooses the $50\%$ of the songs in the database which, based on $R_s$, obtain the highest song reward (line 2). At each point, it simulates a trajectory of future songs selected at random from this ``high-yield'' subset (lines 7-11). The DJ-MC architecture then uses $R_s$ and $R_t$ to calculate the expected payoff of the song trajectory (line 12). It repeats this process as many times as possible, keeping the randomly generated trajectory which yields the highest expected payoff (lines 13-16). DJ-MC then selects the first song of this trajectory to be the next song played (line 19). It uses just the first song, and not the whole sequence, because as modeling noise accumulates, its estimates become progressively less accurate. Furthermore, as discussed in Subsection $5.3$, DJ-MC actively adjusts $\phi_s$ and $\phi_t$ online based on user feedback using Algorithm 3. As a result, replanning at every step is advisable. If the song space is too large or the search time is limited, it may be infeasible to sample trajectories starting with all possible songs. To mitigate this problem, DJ-MC exploits the structure of the song space by clustering songs according to song types (line 9).\footnote{ In principle, any clustering algorithm could work. For our experiments, we use the canonical k-means algorithm~\cite{macqueen1967some}.} It then plans over abstract song types rather than concrete songs, thus drastically reducing search complexity. Once it finds a promising trajectory, DJ-MC selects a concrete representative of the first song type in the trajectory to play (line 18). \begin{algorithm}[tb!] 
\caption{Plan via Tree Search} \label{alg:oneshot} \begin{algorithmic}[1] \STATE {\bfseries Input:} Song corpus $\cal{M}$, planning horizon $q$ \STATE Select upper median of $\cal{M}$, $\cal{M}^*$, based on $R_s$ \STATE \emph{BestTrajectory} $= null$ \STATE \emph{HighestExpectedPayoff} $= -\infty$ \WHILE {computational power not exhausted} \STATE {$trajectory = []$} \FOR {$i = 1 \ldots q$} \STATE $song \leftarrow$ selected randomly from $\cal{M}^*$ \vskip 1pt \emph{(avoiding repetitions)} \STATE optional: \vskip 1pt $song\_type \leftarrow$ selected randomly from $song\_types(\cal{M}^*)$ \vskip 1pt \emph{(avoiding repetitions, $song\_types(\cdot)$ reduces the set to clusters)} \STATE add $song$ to $trajectory$ \ENDFOR \STATE $\mbox{expectedPayoffForTrajectory} = R_s(song_1) + \sum\limits_{i=2}^{q} (R_t((song_1, \ldots, song_{i-1}), song_i) + R_s(song_i))$ \IF{$\mbox{expectedPayoffForTrajectory} > \mbox{HighestExpectedPayoff}$} \STATE $\mbox{HighestExpectedPayoff} = \mbox{expectedPayoffForTrajectory}$ \STATE \emph{BestTrajectory} $ = $ \emph{trajectory} \ENDIF \ENDWHILE \STATE optional: if planning over song types, replace \emph{BestTrajectory}$[0]$ with a concrete song. \STATE return \emph{BestTrajectory}$[0]$ \end{algorithmic} \end{algorithm} Combining initialization, learning on the fly, and planning, the full DJ-MC agent architecture is presented in Algorithm 5. \begin{algorithm}[tb!] \caption{Full DJ-MC Architecture} \label{alg:oneshot} \begin{algorithmic}[1] \STATE {\bfseries Input:} $\cal{M}$ - song corpus, $K$ - planned playlist duration, $k_s$ - number of steps for song preference initialization, $k_t$ - number of steps for transition preference initialization \vskip \algorithmicindent Initialization: \STATE Call Algorithm 1 with corpus $\cal{M}$ and parameter $k_s$ to initialize song weights $\phi_s$. \STATE Call Algorithm 2 with corpus $\cal{M}$ and parameter $k_t$ to initialize transition weights $\phi_t$. 
\vskip\algorithmicindent Planning and Model Update: \STATE Run Algorithm 3 with corpus $\cal{M}$ and parameter $K$ \vskip 1pt (Algorithm 3 iteratively selects the next song to play by calling Algorithm 4, and then updates $R_s$ and $R_t$. This is repeated for $K$ steps.) \end{algorithmic} \end{algorithm} \section{Evaluation in Simulation} Due to the time and difficulty of human testing, especially for listening sessions lasting hours, it is important to first validate DJ-MC in simulation. To this end, we tested DJ-MC on a large set of listener models built using real playlists made by individuals and included in The Art of the Mix archive. For each experiment, we sample a $1000$-song corpus from the Million Song Dataset. One of the issues in analyzing the performance of DJ-MC was the absence of suitable competing approaches to compare against. Possible alternatives are either commercial and proprietary, meaning their mechanics are unknown, or they do not fit the paradigm of online interaction with an unknown individual user. Still, we would like our evaluation to give convincing evidence that DJ-MC is capable of learning not only song preferences but also transition preferences to a reasonable degree, and that by taking transitions into account DJ-MC is able to provide listeners with a significantly more enjoyable experience (see Section $8$ for related work). In order to measure the improvement offered by our agent, we compare DJ-MC against two alternative baselines: an agent that chooses songs randomly, and a greedy agent that always plays the song with the highest song reward, as determined by Algorithm 1. As discussed in the introduction, we expect that the greedy agent will do quite well, since song reward is the primary factor for listeners. However, we find that by learning preferences over transitions, DJ-MC yields a significant improvement over the greedy approach. 
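For reference, the core of the planning step that separates DJ-MC from the greedy baseline can be sketched as follows. This is a simplified illustration under assumed names, not the paper's implementation; Algorithm 4 additionally restricts sampling to the upper median of songs by $R_s$ and can plan over clustered song types.

```python
import random

def plan_next_song(corpus, history, reward_song, reward_transition,
                   horizon=10, budget=100, rng=random):
    """Sample `budget` random trajectories of length `horizon`; return the
    first song of the trajectory with the highest modeled payoff."""
    best_payoff, best_first = float("-inf"), None
    for _ in range(budget):
        trajectory = rng.sample(corpus, horizon)   # avoid repetitions
        payoff, h = 0.0, list(history)
        for song in trajectory:
            payoff += reward_song(song) + reward_transition(h, song)
            h.append(song)
        if payoff > best_payoff:
            best_payoff, best_first = payoff, trajectory[0]
    return best_first

def greedy_next_song(corpus, reward_song):
    """The greedy baseline: ignore transitions entirely."""
    return max(corpus, key=reward_song)
```

Only the first song of the best trajectory is actually played; because the listener model is updated after every reward signal, the agent replans from scratch at each step.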
To represent different listener types, we generate $10$ different playlist clusters by using $k$-means clustering on the playlists (represented as artist frequency vectors). We generate $1000$ different listeners by first sampling a random cluster, second sampling $70\%$ of the song transition pairs in that cluster, and third inputting this data to Algorithms 1 and 2 to train the listener's song and transition weights. For the experiments reported here, we used a playlist length of 30 songs, a planning horizon of 10 songs ahead, a computational budget of 100 random trajectories for planning, and a query size of 10 songs for song reward modeling and 10 songs for transition rewards. As shown in Figure \emph{3}, DJ-MC performs significantly better than the baselines, most noticeably at the beginning of the session. \begin{figure}[h!] \label{fig3} \centering \includegraphics[height=200pt, width=0.425\textwidth,natwidth=310,natheight=322]{sim_res_redone_l.png} \caption{{\scriptsize Cumulative reward histograms for playlists of length $10$ (\emph{a}) and $30$ (\emph{b}), with listeners based on real playlist data. The DJ-MC agent outperforms both random and greedy agents, particularly for the first $10$ songs. Results are highly significant ($p \ll 0.01$).}} \end{figure} \vspace{-0.5cm} \section{Evaluation on Human Listeners} While using simulated listeners allows for extensive analysis, ultimately the true measure of DJ-MC is whether it succeeds when applied to real listeners. To test whether this is the case, we ran two rounds of lab experiments with \emph{47} human participants. The participant pool comprised graduate students at the McCombs School of Business at the University of Texas at Austin. \subsection{Experimental Setup} Each participant interacted with a playlist generator. 
As a song corpus we used songs with corresponding Million Song Dataset entries that also appeared in Rolling Stone Magazine's list of the 500 greatest albums of all time.\footnote{\url{http://www.rollingstone.com/music/lists/500-greatest-albums-of-all-time-20120531}} To keep the duration of the experiment reasonable, each song played for \emph{60} seconds before transitioning (with a cross-fade) to the next song. After each song, the participants were asked, via a graphic user interface, to specify whether they liked or disliked the played song, as well as the transition to it. This provided us with separate (albeit not independent) signals for song quality and song transition quality with which to test how well DJ-MC actually did. Since asking users for their selection of $10$ songs was impractical in this setting, in order to seed the learning, the agent explored randomly for \emph{25} songs, and then began exploiting the learned model (while continuing to learn) for \emph{25} songs. The participants were divided into \emph{2} groups: \emph{24} interacted with the greedy baseline, whereas \emph{23} interacted with DJ-MC. Though we expect the greedy agent to perform well based on song preferences alone, we test whether DJ-MC's attention to transition preferences improves performance. \subsection{Results} Since our sample of human participants is not large, and given the extremely noisy nature of the input signals and the complexity of the learning problem, it should come as no surprise that a straightforward analysis of the results can be difficult and inconclusive. To overcome this issue, we take advantage of bootstrap resampling, a standard tool in the statistics literature for estimating underlying distributions from small samples and performing significance tests. At each stage we treat a ``like'' signal for either the transition or the song as a $+1$ reward value vs. $0$ for a ``dislike''. 
We then reconstruct an approximate distribution of the aggregate reward for each condition by sampling subsets of \emph{8} participants with replacement $N=250{,}000$ times and estimating the average reward value for each subset. Figures \emph{4a} and \emph{4c} compare the reward distributions for the greedy and DJ-MC agents in terms of song reward and transition reward, respectively, during the first 25 episodes. Since both agents act identically (randomly) during those episodes, the distributions are very close (and indeed testing the hypothesis that the two distributions have means more than $0.25$ apart, by offsetting the distributions and running an appropriate t-test, does not show significance). During the exploitation stage (episodes 26-50), the agents behave differently. With regard to song reward, we see that the two algorithms are again comparable (and better in expectation than in the exploration stage, implying that some knowledge of preference has been learned), as seen in Figure \emph{4b}. In Figure \emph{4d}, however, we see that DJ-MC significantly outperforms the greedy algorithm in terms of transition reward, as expected, since the greedy algorithm does not learn transition preferences. The results are statistically significant using an unpaired t-test ($p \ll 0.01$), and are also significant when testing whether the difference is greater than $0.25$. Interestingly, the average transition reward for the greedy algorithm is higher at the exploitation stage than at the exploration stage (apparent from the higher average reward when comparing Figures \emph{4c} and \emph{4d}). From this result we can deduce either that people are more likely to enjoy a transition if they enjoy the song, or that focusing on known tastes immediately reduces the ``risk'' of poor transitions by limiting the song space. 
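The resampling scheme described above can be sketched as follows (illustrative code under our own naming assumptions; each entry of `rewards` would hold one participant's observed aggregate reward):

```python
import random
import statistics

def bootstrap_means(rewards, subset_size=8, n_resamples=250000, seed=0):
    """Approximate the sampling distribution of the mean reward by drawing
    `subset_size` participants with replacement, `n_resamples` times."""
    rng = random.Random(seed)
    return [statistics.fmean(rng.choices(rewards, k=subset_size))
            for _ in range(n_resamples)]
```

The resulting distributions of resampled means for the two conditions can then be compared directly, e.g.\ via a t-test, as done above.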
All in all, these findings, made through interaction with human listeners, corroborate our findings in simulation: reasoning about transition preferences gives DJ-MC a small but significant boost in performance compared to reasoning about song preferences alone. \begin{figure}[h!] \label{fig4} \centering \includegraphics[height=490pt,width=0.45\textwidth,natwidth=310,natheight=322]{bootstrap_results2_redone2_reformat2.jpg} \caption{{\scriptsize (a) Histogram of cumulative song rewards for the first 25 songs. (b) Histogram of cumulative song rewards for songs 26-50. (c) Histogram of cumulative transition rewards for the first 25 songs. (d) Histogram of cumulative transition rewards for songs 26-50. Histograms computed via bootstrap resampling of the original data 250,000 times.}} \end{figure} \section{Related Work} Though not much work has attempted to model playlists directly, there has been substantial research on modeling similarity between artists and between songs. Platt et al. \cite{platt2003fast} use semantic tags to learn a Gaussian process kernel function between pairs of songs. Weston et al. \cite{weston2011multi} learn an embedding in a shared space of social tags, acoustic features and artist entities by optimizing an evaluation metric for various music retrieval tasks. Aizenberg et al. \cite{aizenberg2012build} model radio stations as probability distributions over items to be played, embedded in an inner-product space, using real playlist histories for training. In the academic literature, several recent papers have tried to tackle the issue of playlist prediction. Maillet et al.~\cite{maillet2009steerable} approach the playlist prediction problem from a supervised binary classification perspective, with pairs of songs in sequence as positive examples and random pairs as negative ones. 
McFee and Lanckriet \cite{mcfee2011natural} consider playlists as a natural language induced over songs, training a bigram model for transitions and treating playlists as Markov chains. Chen et al.~\cite{chen2012playlist} take a similar Markov approach, treating playlists as Markov chains in some latent space, and learn a metric representation (or multiple representations) for each song in that space, without relying on audio data. In somewhat related work, Zheleva et al.~\cite{zheleva2010statistical} adapt a Latent Dirichlet Allocation model to capture music taste from listening activities across users, and identify both the groups of songs associated with a specific taste and the groups of listeners who share the same taste. In more recent related work, Natarajan et al.~\cite{natarajan2013app} generalize this approach to the problem of collaborative filtering for interactional context. Users are clustered based on one-step transition probabilities between items, and transition information is then generalized across clusters. Another recent work by Wang et al.~\cite{wang2013exploration} also borrows from the reinforcement learning literature, and considers the problem of song recommendation as a bandit problem. Applying this approach, the authors attempt to balance the tradeoff between exploration and exploitation in personalized song recommendation. The key difference between these approaches and our methodology is that, to the best of our knowledge, no one has attempted to model entire playlists \emph{adaptively}, while interacting with an individual human listener and learning their preferences over both individual songs and song transitions online. By explicitly modeling transitions and exploiting user reinforcement, our framework is able to learn preference models for playlists on the fly without any prior knowledge. 
\section{Summary and Discussion} In this work we present DJ-MC, a full DJ framework, meant to learn the preferences of an individual listener online, and generate suitable playlists adaptively. In the experimental sections we show that our approach offers significant improvement over a more standard approach, which only considers song rewards. DJ-MC, which focuses on the audio properties of songs, has the advantage of being able to generate pleasing playlists that are unexpected with respect to traditional classifications based on genre, period, etc. In future work, it would be of interest to combine intrinsic sonic features with varied sources of metadata (e.g. genre, period, tags, social data, artist co-occurrence rates, etc). It would also be of interest to test our framework on specific types of listeners and music corpora. This work shows promise for both creating better music recommendation systems, and demonstrating the effectiveness of a reinforcement-learning based approach in a new practical domain. \begin{scriptsize} \subsection*{Acknowledgements} This work has taken place in the Learning Agents Research Group (LARG) at the Artificial Intelligence Laboratory, The University of Texas at Austin. LARG research is supported in part by grants from the National Science Foundation (CNS-1330072, CNS-1305287), ONR (21C184-01), AFOSR (FA8750-14-1-0070, FA9550-14-1-0087), and Yujin Robot. \end{scriptsize} \label{sec:disc} \bibliographystyle{abbrv}
\chapter{The Initial Mass Function of Field OB Stars in the Small Magellanic Cloud} \section{Introduction} The standard model of star formation has been that most, if not all, stars form in clusters \citep[e.g.,][]{Lada03}, with the most massive stars aggregating in the dense cores of clusters. It is intuitive that massive O stars form preferentially from the plentiful gas reservoirs of giant molecular clouds (GMCs). However, another significant population of massive stars exists in an environment of the opposite extreme. These massive stars are far removed from dense clusters or OB associations and instead appear isolated within the sparse field population. The physical properties and origin of this field massive star population remain unclear, despite the fact that it accounts for 20--30\% of the massive stars in star-forming galaxies \citep{Oey04}. The existence of such stars in isolation poses a challenge for theories of massive star formation, which suggest that the necessary gas conditions are primarily or exclusively found in GMCs. Alternatively, rather than having formed in the field, these stars may have formed in clusters, and then been subsequently ejected from their birth locations as runaway stars. In either case, field massive stars are a unique, understudied subset of a galaxy's massive star population, probing both extremely sparse and extremely dense star-forming conditions. The observational evidence for in situ field massive star formation has grown in recent years. An optical and UV photometric census of candidate O-type stars in a portion of the LMC suggests that approximately half of these stars may be members of the field population \citep{Parker01}. Some strong, direct evidence of formation in the field is work by \citet{Testi97, Testi98}, who reported a sample of Herbig Ae/Be stars forming in isolation. 
At higher masses, \citet{Lamb10} detected sparse clusters associated with field OB stars in the Small Magellanic Cloud, and \citet{Bressert12} identified 15 O stars that are candidates for isolated formation near 30 Doradus, based on a variety of criteria. Additional individual candidates have been reported by \citet{Selier11} and \citet{Oskinova13}. \citet{Oey13} presented a sample of 14 field OB stars centered in circular HII regions, implying that these stars are unlikely to have transverse runaway velocities. Since these objects also have non-runaway radial velocities, they most likely formed in situ. This growing observational dataset of massive stars that appear to have formed in sparse clusters or in isolation, without any indication of being runaways, strongly suggests that some component of the field massive star population formed in situ. Even so, formation within clusters cannot be entirely ruled out for these stars. \citet{Gvarmadze12} point out that cluster dissolution, slow ejections, or multi-stage ejections could all potentially mask the signatures that these stars formed in clusters. This problem of the origin of field OB stars is central to some outstanding controversies. \citet{Weidner06} suggest a deterministic relation between cluster mass and the maximum stellar mass in the cluster. If massive stars can indeed form in sparse, low-mass clusters, the implied large dispersion in the relation between cluster mass and maximum stellar mass is inconsistent with such a deterministic scenario. Furthermore, it would imply that individual sparse clusters must have stellar initial mass functions (IMFs) that grossly deviate from standard values. It remains unclear whether such deviations are real or whether they arise from stochastic sampling, such that an aggregate population of sparse clusters would yield a Salpeter-like IMF, as suggested by \citet{Lamb10}. 
These issues are simply a consequence of the difficulties in understanding sparse massive star formation within the framework of current star formation models. Two primary theories for massive star formation are the competitive accretion model and the core accretion model. In the competitive accretion model, molecular clouds fragment into star-forming cores, which continue to accrete matter from a shared reservoir of gas. In this scenario, massive stars form in locations where the gas density is highest, typically in the centers of GMCs \citep{Zinnecker82}. Thus, it is implicit in the competitive accretion model that massive stars may only form along with a significant population of lower mass stars \citep{Bonnell04}. In contrast, core accretion models suggest that the gas available for accretion is controlled by the mass of the fragmented core itself \citep{Shu87}. Thus, in core accretion models it is possible, although difficult, to obtain gas conditions that would allow a massive star to form in isolation (e.g., \citealt{Krumholz09}). A less controversial component of the field is the runaway population. Observationally, isolated massive stars with large space velocities are well known to exist. The typical definition of a runaway star is a peculiar space velocity $> 30$ km s$^{-1}$. Using this definition, runaway fractions ranging from $10\%$ \citep{Blaauw61} to $50\%$ \citep{deWit05} have been observed for massive stars within the Galaxy. However, other studies use evidence from bow shocks, the likelihood of slow runaway ejections, and the possibility of exotic multi-stage ejection mechanisms to suggest that the true runaway fraction is much higher, up to $100\%$ of the field population \citep{Gvarmadze12}. In this scenario, the field population consists primarily of stars that formed in dense cluster cores, where the best conditions for massive star ejections exist. 
Thus, the field population is a vital probe of the massive star formation process at both the densest and least dense extremes. Other than the obvious kinematic signatures expected for runaway stars, it is not well known how the properties of massive stars formed in isolation, or ejected as runaways, would differ from those of stars in clusters. Observational studies do reveal a few trends: for example, a study by \citet{vandenBergh04} compares the distribution of spectral types between field and cluster O stars within the magnitude-limited Galactic O Star Catalog \citep{MaizApellaniz04}, finding that the spectral types of field stars are skewed toward later types than those of stars in clusters. Thus, field O stars are either older or less massive as a population than O stars in clusters. A similar result was found in the Magellanic Clouds, where \citet{Massey95} and \citet{Massey02} discovered that the field population has an extremely steep IMF in a few selected fields. The stellar IMF for stars in clusters is generally consistent with the classical Salpeter slope of $\Gamma = 1.35$ for a power law given by $dn/d\log m \propto m^{-\Gamma}$, where $n$ is the number of stars of mass $m$. However, \citet{Massey95} found a high-mass field IMF slope of $\Gamma \sim 4$ using a combination of spectra and photometry. This steep IMF also suggests that field massive stars are typically less massive than the massive stars in clusters. These findings represent the largest systematic departure from a Salpeter IMF based on direct star counts, and suggest that field massive stars as a population may originate in a fundamentally different way than those in clusters. Thus, there is a clear need for a systematic, statistically complete survey of field massive stars. Unlike stars in clusters, field massive stars in the Galaxy are distributed in all directions. 
Together with distance uncertainties and line-of-sight confusion caused by large and differential extinction, this causes great difficulty in identifying a complete, uniformly selected sample of Galactic field O stars. Sample size and stochasticity are also issues within the Galaxy, since we are limited to sampling only the nearby Galactic field. In order to mitigate these issues, we targeted the nearby Small Magellanic Cloud (SMC) to obtain a uniform, spectroscopic survey of its field massive star population, which we call the Runaways and Isolated O-Type Star Spectroscopic Survey of the SMC, or RIOTS4. Since the SMC is located at high Galactic latitude, it is relatively free from the line-of-sight issues that plague Galactic studies. Additionally, since all objects are at the SMC distance, RIOTS4 avoids issues associated with distance uncertainties. Thus, the most important benefit is that RIOTS4 targets a {\it spatially complete} and statistically significant sample of uniformly selected field massive stars. Here, we present an overview of the RIOTS4 survey and the results to date. \section{RIOTS4 Targets and Observations} RIOTS4 targets a spatially complete sample of 374 uniformly selected candidate field OB stars in the SMC. Our targets are identified by Oey et al. (2004; hereafter OKP04) according to the photometric criteria $B\leq 15.21$ and $Q_{UBR} \leq -0.84$, where the reddening-free parameter $Q_{UBR}$ is given by, \begin{eqnarray} Q_{UBR} &=& (U - R) - \frac {A_U - A_R} {A_B - A_R}(B - R) \nonumber \\ &=&(U - R) - 1.396(B - R)\ , \end{eqnarray} where the $A$ values correspond to extinction in the specified bands. In the calculation of $Q_{UBR}$, OKP04 adopted the ratio of total to selective extinction $R_V$ = 3.1 from \citet{Cardelli89}. 
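For concreteness, the reddening-free index and selection cuts above can be written directly in code. This is a minimal sketch; the example magnitudes are hypothetical, not actual catalog values.

```python
def q_ubr(U, B, R):
    """Reddening-free index Q_UBR = (U - R) - 1.396 (B - R),
    using the extinction-ratio coefficient adopted by OKP04."""
    return (U - R) - 1.396 * (B - R)

def is_field_ob_candidate(U, B, R):
    """OKP04 photometric criteria: B <= 15.21 and Q_UBR <= -0.84."""
    return B <= 15.21 and q_ubr(U, B, R) <= -0.84

# Example with hypothetical photometry:
print(is_field_ob_candidate(U=13.2, B=14.3, R=14.6))  # -> True
```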
These photometric criteria were designed to select stars with masses $\gtrsim 10\ M_\odot$, using the $B$ magnitude to eliminate less massive main sequence stars, and the $Q_{UBR}$ criterion to identify only the bluest stars; this corresponds to approximate spectral types of B0 V, B0.5 I, and earlier. OKP04 applied these criteria to the $UBVR$ photometric survey data for the SMC obtained by \citet{Massey02}, which was optimized to identify OB star candidates. This survey covered essentially the full star-forming expanse of the galaxy, which ensures uniform selection of a spatially complete sample of massive stars in the SMC. OKP04 further carried out a friends-of-friends analysis on this sample to identify clusters. In this algorithm, stars are considered cluster members if their projected distances to other cluster members are smaller than the given clustering length. The clustering length is the value that maximizes the number of identified clusters \citep{Battinelli91}, which is 28 pc for the SMC sample. Thus the field OB targets for the RIOTS4 survey correspond to all candidates from the OKP04 sample with no other candidates within a 28 pc radius. OKP04 also identified a sample of candidate field O stars in a smaller region, covering the SMC bar, using UV photometry from the {\it Ultraviolet Imaging Telescope (UIT)} \citep{Parker98}. These 91 field O star candidates were selected using reddening-free indices that include UV and $UBVR$ photometry, along with the same $B$ magnitude criterion as the main sample. Of these 91 stars, 23 were not identified by the optical photometric criteria above. We included these stars in our multi-object observations as described below. We observed the RIOTS4 survey targets over a five-year period from 2006 September to 2011 October using spectrographs on the Magellan telescopes at Las Campanas Observatory. 
The majority of our observations were obtained with the Inamori-Magellan Areal Camera and Spectrograph (IMACS) in the f/4 multi-slit mode on the Magellan Baade telescope \citep{Bigelow03}. With 49 slit masks, we observed 328 of the 374 candidate field OB stars, or over 7/8 of our total sample. We also observed the 23 objects unique to the UV-selected sample with this setup. We used the 1200 lines/mm grating and slit widths of either 0.7$\arcsec$ or 1.0$\arcsec$, yielding spectral resolutions of $R\sim$ 3700 and $R\sim$ 2600, respectively. Due to the varying placement of slits within the slit masks, our spectral coverage for each star varies; however, every spectrum includes coverage from 4000 -- 4700 \AA. We observed each field for a total of one hour in three exposures of 20 minutes each, which allows us to achieve an S/N $> 30$ for our fainter targets. All observations in our IMACS multi-object campaign occurred between 2006 September and 2010 December. During our initial observing run in 2006 September, one of our 49 fields was observed with the 600 lines/mm grating, resulting in a spectral resolution of $R\sim$ 1900. In designing the multi-object fields for maximum efficiency, we were unable to include 46 of our RIOTS4 targets in the IMACS slit masks. We therefore observed these targets individually or in pairs using long-slit observations. The majority of our remaining targets were observed using the Magellan Inamori Kyocera Echelle (MIKE) Spectrograph on the Magellan Clay telescope \citep{Bernstein03}. We also used MIKE to re-observe 29 targets in cases where important diagnostic absorption lines fell within the IMACS CCD chip gaps, or when a spectrum from a multi-object observation fell into the center gap of the IMACS CCD array. We observed a total of 48 targets with MIKE using a 1$\arcsec$ slit width for a spectral resolution of $R\sim 28000$. 
Exposure times for MIKE observations ranged from 15 -- 30 minutes depending on the brightness of the target, again with a goal of achieving S/N $> 30$. All MIKE observations occurred in 2010 November. With IMACS f/4 out of commission during our 2011 observations, we also operated IMACS in f/2 mode with a 300 lines/mm grism to observe a total of 27 objects. Depending on the seeing, we used either a $0.5\arcsec$ or $0.7\arcsec$ slit width, which yield spectral resolutions of $R\sim$ 1000 and $R\sim$ 1300, respectively. As in the primary IMACS campaign, we observed objects for a total of one hour, in three 20-minute exposures. Our IMACS f/2 observations occurred between 2011 July and 2011 October. We also took advantage of the IMACS multi-object setup to conduct time-domain monitoring of three of our most densely populated fields. As described below in \S \ref{binaryfraction}, our goal was to identify binary stars from radial velocity variations. We observed these fields at approximately nine epochs each, with time baselines ranging from $< 24$ hours to days, weeks, months, and years. Since these fields overlap in area, a few stars were observed up to twice as many times. Initial reduction of RIOTS4 IMACS multi-slit observations was completed with the Carnegie Observatories System for MultiObject Spectroscopy (COSMOS) data reduction package\footnote{COSMOS was written by A. Oemler, K. Clardy, D. Kelson, G. Walth, and E. Villanueva. See http://code.obs.carnegiescience.edu/cosmos.}. COSMOS was custom designed for use with the IMACS instrument and 8-CCD array setup. With COSMOS, we performed bias subtraction, flat-fielding, wavelength calibration, and extraction of 2-D spectra following the standard COSMOS pipeline. 
For single-star spectra from MIKE and IMACS, we used standard IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under cooperative agreement with the National Science Foundation (NSF).} procedures to do bias subtraction, flat fielding, and wavelength calibration. From the wavelength-calibrated 2-D spectra for both single star observations and multi-slit observations, we used the {\tt apextract} package in IRAF to find, trace, and extract 1-D spectra. We rectified the spectra using the {\tt continuum} procedure and eliminated remaining cosmic rays or bad pixel values with the {\tt lineclean} procedure, both of which belong to the {\tt onedspec} package in IRAF. \section{RIOTS4 Data Products} \subsection{Catalog of Spectral Types} The first observational data product from RIOTS4 is the catalog of spectral classifications for candidate field OB stars. The completeness of RIOTS4 allows a full characterization of the distribution of stellar spectral types in the field. We classify the stars based primarily on the atlas of OB spectra published by \citet{Walborn90}, and we also rely on \citet{Walborn09} and \citet{Sota11}, especially for identification of unique spectral features. However, these atlases present mostly Galactic stars at solar metallicity ($Z \sim 0.02$), which is much higher than the SMC's metallicity ($Z \sim 0.004$). To investigate and eliminate potential biases in spectral types due to metallicity effects, we also refer to \citet{Walborn95} and \citet{Walborn00} for their comparison of stellar spectral types at Galactic and SMC metallicity. To obtain spectral types of supergiant stars, we adopt the criteria established by \citet{Lennon97} for SMC metallicity. For an initial estimate, four of us (J. B. L., M. S. O., A. S. G., and J. B. G. M.) 
each independently estimated the spectral type of every star in the RIOTS4 survey using the above resources. We collated these spectral types and arrived at a consensus for each object. We finalized our catalog by plotting spectra sequentially from earliest to latest spectral types and iteratively rearranging stars within this sequence until we achieved a smooth transition between adjacent spectral sub-types according to diagnostic stellar absorption line ratios. The majority of our spectral classifications are accurate to within half a spectral type, so that, for example, an O8 star can reasonably be expected to have a spectral type between O7.5 and O8.5. However, for fainter objects and especially for spectral types later than B0 V, we sometimes list a range in spectral type due to the faintness or non-detection of metal lines caused by a combination of poor S/N and the low metallicity of the SMC. Additional difficulties in spectral typing arise due to confusion from binary systems or Oe/Be stars, which have emission in one or more Balmer or He lines due to the presence of a circumstellar disk. These issues are discussed in more detail below. \begin{figure*} \begin{center} \includegraphics[scale=1]{f1.eps} \caption{A sequence of spectral types from O4 V to O8 V stars from the RIOTS4 survey. We label the major spectral features in the range from $4000 - 4900$ \AA. The ratio of He {\footnotesize II} $\lambda$4542 to He {\footnotesize I} $\lambda$4471 is a primary spectral type diagnostic for O stars. } \label{first} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=1]{f2.eps} \caption{A sequence of spectral types from O9 V to B1.5 V from the RIOTS4 survey. We label the major spectral features in the range from $4000 - 4900$ \AA. 
With the transition from O to B type stars, the primary spectral diagnostic becomes the ratio of Si {\footnotesize IV} $\lambda$4089 to Si {\footnotesize III} $\lambda$4555, after He {\footnotesize II} disappears at spectral type B0.5 V. } \label{last} \end{center} \end{figure*} We plot a sequence of RIOTS4 spectra in Figures~\ref{first} -- \ref{last}, which cover spectral types from our earliest object, an O4 V star, to one of our latest objects, a B1.5 V star. For O stars, the diagnostic absorption line ratios are He {\footnotesize II} $\lambda$4542 to He {\footnotesize I} $\lambda$4471 and, as a secondary check, He {\footnotesize II} $\lambda$4200 to He {\footnotesize I(+II)} $\lambda$4026. For B stars, the primary diagnostic absorption line ratio is Si {\footnotesize IV} $\lambda$4089 to Si {\footnotesize III} $\lambda$4555. A further constraint for early B type stars is the presence of He {\footnotesize II} $\lambda$4686, which disappears at spectral types later than B0.2 V, B0.5 III, and B1 I. \begin{figure*} \begin{center} \includegraphics[scale=1]{f3.eps} \caption{A sequence of evolved stars from O6 to B1.5 from the RIOTS4 survey. We label the major spectral features in the range from $4000 - 4900$ \AA. Except for the N {\footnotesize III} emission in the early O stars, evolved luminosity classes are primarily identified by the strength of the Si {\footnotesize IV} for late O stars and Si {\footnotesize III} for early B stars. } \label{evolved} \end{center} \end{figure*} We determine luminosity classes using a combination of spectral data as the primary diagnostic, and photometric magnitudes as a secondary check. To identify evolved stars at spectral types earlier than $\sim$O8, we look for the presence of emission features such as N {\footnotesize III} $\lambda\lambda$4634-4640-4642 and anything from weak absorption to strong emission in He {\footnotesize II} $\lambda$4686. 
For later O stars, we use the increasing ratio of Si {\footnotesize IV} $\lambda$4089 to He {\footnotesize I} $\lambda$4026, which identifies increasingly evolved stars. In a similar manner, evolution in B stars is traced by the increasing ratio of Si {\footnotesize III} $\lambda$4555 to He {\footnotesize I} $\lambda$4471. These luminosity effects are all demonstrated in the sample of evolved spectra shown in Figure \ref{evolved}. As mentioned previously, the lower metallicity of the SMC causes our spectra to have absent, or much weaker, metal lines than the Galactic spectral type standards in \citet{Walborn90}. In practice, the metal absorption lines tend to be absent in dwarf O stars for our observational setup, with the exception of {\sc C iii} $\lambda$4650 in O9 -- O9.5 V stars, but they do appear in giant or supergiant luminosity classes. We use the spectral criteria for SMC supergiants in \citet{Lennon97} to finalize our spectral types and luminosity classes for evolved stars. For a final check on the luminosity class, we compare the expected magnitude for our established spectral type \citep{Schmidt-Kaler82} at the distance of the SMC (DM $= 18.9$; \citealt{Harries03}) with the observed magnitude from \citet{Massey02}. If the star is much brighter than expected for its luminosity class, then we revisit our luminosity classification and adjust it to a more evolved class in ambiguous cases. However, the existence of a binary companion would also increase the observed brightness of an object. Therefore, we carefully re-examine such stars for evidence of spectroscopic binary companions. Even so, a secondary may often go undetected without multi-epoch observations, or may be unresolvable due to a low inclination angle, small mass ratio, or long period. Thus, undetected binaries may be expected to bias our results slightly towards later spectral types and more evolved objects. 
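The photometric consistency check can be sketched as follows, using the SMC distance modulus quoted above. The absolute magnitude, extinction, and the 0.5 mag flagging threshold are illustrative assumptions, not values taken from the survey.

```python
# Sketch of the luminosity-class sanity check: compare the apparent
# magnitude expected for a spectral type (via calibrated M_V) with the
# observed magnitude, at the SMC distance modulus DM = 18.9.
DM_SMC = 18.9

def expected_apparent_mag(M_V, A_V=0.0):
    """Expected V magnitude at the SMC for absolute magnitude M_V."""
    return M_V + DM_SMC + A_V

def much_brighter_than_expected(V_obs, M_V, threshold=0.5):
    """Flag stars notably brighter than their class predicts;
    the 0.5 mag threshold is an illustrative choice."""
    return V_obs < expected_apparent_mag(M_V) - threshold
```

A flagged star would then be re-examined, either for a more evolved luminosity class or for an unresolved binary companion, as described above.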
In general, the magnitudes tend to indicate brighter luminosity classes than derived spectroscopically; this is related to the known effect that SMC OB stars are observed to lie above theoretical evolutionary tracks on the H-R diagram, as discussed by, e.g., \citet{Lamb13} and \citet{Massey02}. However, for Be stars, we find more extreme discrepancies in luminosity class, and we therefore omit luminosity classes from the spectral classifications of Be stars in our catalog. The fraction of our objects that are undetected binaries is likely to be significant; we obtain a lower limit to the binary fraction of $\sim$60\% in the RIOTS4 multi-epoch campaign (see \S \ref{binaryfraction}), which is similar to the frequency found in open clusters \citep[e.g.,][]{Sana08,Sana09,Sana11}. Thus, we want to quantify the potential effects that undetected binaries will have on our spectral catalog. Furthermore, we require a method to determine spectral types of identified double-lined spectroscopic binaries. To address both of these concerns, we create a sequence of synthetic binary stars, which we derive directly from the RIOTS4 spectral data. We begin by placing RIOTS4 stars with identical spectral classifications into separate groups. Any stars that have chip gaps affecting important diagnostic lines or that have poor S/N are removed from these groups. The remaining stars in each group are wavelength-shifted to a radial velocity of zero and then median combined to create a template spectrum for each spectral type. We ensure that each template is created from a combination of at least five stars, which limits us to spectral types ranging from O8 to B1. Using these template spectra, we combine each pair, weighted according to their expected magnitudes \citep{Schmidt-Kaler82}, to generate our synthetic binary spectra. We plot an example sequence of these synthetic binaries in Figure \ref{binarymodels}. 
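The template-and-combine procedure above can be sketched as follows. This is a minimal illustration under stated assumptions: the wavelength grids, fluxes, and magnitudes are placeholders, and the actual work operates on reduced RIOTS4 spectra rather than synthetic arrays.

```python
import numpy as np

C_KMS = 299_792.458  # speed of light, km/s

def shift_to_rest(wave, flux, rv_kms, grid):
    """Remove a radial-velocity shift and resample onto a common grid."""
    rest_wave = wave / (1.0 + rv_kms / C_KMS)
    return np.interp(grid, rest_wave, flux)

def make_template(spectra, grid):
    """Median-combine at least five RV-corrected spectra of one type.
    `spectra` is a list of (wave, flux, rv_kms) tuples."""
    assert len(spectra) >= 5, "templates require >= 5 stars"
    shifted = [shift_to_rest(w, f, rv, grid) for (w, f, rv) in spectra]
    return np.median(shifted, axis=0)

def synthetic_binary(t1, t2, V1, V2):
    """Flux-weighted sum of two templates, weighted by expected V mags."""
    f1, f2 = 10 ** (-0.4 * V1), 10 ** (-0.4 * V2)
    return (f1 * t1 + f2 * t2) / (f1 + f2)
```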
From this exercise, we find that the spectral type of the primary star in the system is rarely altered by more than a single subtype. However, we find that the secondary spectral type is poorly constrained. This is especially true for O+B binaries, where the diagnostic Si {\footnotesize III} $\lambda$4555 line of the B star, already weakened by the low metallicity, is further diluted by the continuum of the primary O star. Most binary systems with a B dwarf secondary are undetectable in RIOTS4 spectra due to the weak Si {\footnotesize III} lines. \begin{figure*} \begin{center} \includegraphics[scale=1]{f4.eps} \caption{A sample of synthetic binary spectra derived from actual RIOTS4 spectra. We label the major spectral features in the range from $4000 - 4900$ \AA. The top pair of spectra demonstrate the difficulty of identifying spectroscopic binaries that consist of an O star and a dwarf B star, due to the weakness of the Si {\footnotesize III} lines. However, the likely undetected companion does change the apparent spectral type of the spectrum from O8 to O8.5. In contrast, the bottom three spectra demonstrate the ease with which a primary O star can be identified with a giant B star companion, due to the clear presence of both He {\footnotesize II} and Si {\footnotesize III}. } \label{binarymodels} \end{center} \end{figure*} Another stellar population that creates issues for spectral typing is emission-line stars. In RIOTS4, this includes classical Oe/Be stars, supergiant B[e] (sgB[e]) stars \citep{Graus12}, and Wolf-Rayet (WR) stars. These stars are often partially or wholly enshrouded in circumstellar disks or envelopes whose emission is superimposed on the photospheric spectra. This results in weakened or absent absorption lines, which can drastically alter spectral types or make them impossible to determine. Classifications of Oe/Be stars, following \citet{Lesh68}, were carried out by J. B. G. M. 
In \S \ref{estars}, we summarize our analysis of the four sgB[e] stars from \citet{Graus12} and present the two WR stars, which are already known in the literature. While the number of sgB[e] stars and WR stars in our sample is small, classical Oe/Be stars account for 42\% of the RIOTS4 sample (\S \ref{estars}). Infill of the photospheric absorption lines often affects He {\sc i} lines in Oe stars \citep[e.g.,][]{Negueruela04}, and Si {\sc iii} $\lambda$4555 and Si {\sc iv} $\lambda$4089 in Be stars. \citet{GoldenMarx15} describe in more detail our approach to correcting for this effect in Oe stars. Some stars in the RIOTS4 survey are included in previous spectroscopic studies of the SMC, including the limited study of field stars in the Magellanic Clouds by \citet{Massey95} and the 2dF survey of the SMC by \citet{Evans04}. Our survey has a typical S/N $\sim 75$ and $R\sim 3000$, compared to S/N $\sim 75$ and $R\sim 1500$ for \citet{Massey95}, and S/N $\sim 45$ and $R\sim$ 1600 for \citet{Evans04}. A comparison of spectral types for stars in common with \citet{Massey95} shows agreement to within half a spectral type, consistent with our internal uncertainty. The stars in common between the RIOTS4 and 2dF surveys show similar agreement with spectral type. However, many stars that we classify as dwarfs in RIOTS4 are listed as giants in 2dF. This discrepancy is linked to the problematic relation between observations and theoretical models mentioned above, and appears to result from our different methods for determining luminosity classes. \citet{Evans04} rely more heavily on stellar magnitude and the equivalent width of H$\gamma$ to determine luminosity classes due to the relatively poor spectral resolution and S/N of their data. Coupling this with the expected high binary fraction and our careful treatment of binaries may explain the differences. 
\subsection{Stellar Radial Velocities and Multi-Epoch Observations} Another important RIOTS4 data product is the measurement and distribution of radial velocities for SMC field OB stars. Radial velocities are an important property of a stellar population, both for individual objects and as an ensemble. Since runaways are a well-known component of the field population, in principle, we can identify many such objects using their radial velocities. For the field stars as a whole, the velocity distribution and dispersion probe the kinematics of this population and, on a large scale, the bulk motions of the SMC. For multi-epoch observations, variability in the radial velocity is a strong indicator of a massive binary system. We measure the radial velocities of RIOTS4 targets using the {\tt rvidlines} package in IRAF. Velocities are obtained by fitting Gaussian profiles to a combination of H, He {\footnotesize I}, and He {\footnotesize II} absorption lines. We require a minimum of three lines to determine the radial velocity, to ensure that continuum fitting issues or odd line profiles do not affect our measurements. Lines with velocities that significantly deviate from all other lines for a single star are excluded from the radial velocity measurement. These spurious velocities are typically associated with lines close to the IMACS chip gaps, which can affect the continuum fitting and, therefore, the line profile. The uncertainties on our radial velocity measurements are $\sim 5$ km s$^{-1}$ for MIKE observations, $\sim 10$ km s$^{-1}$ for IMACS f/4 observations, and $\sim 25$ km s$^{-1}$ for IMACS f/2 observations. Since massive stars have a high binary frequency, it is likely that a large fraction of our radial velocity measurements are affected by variability. Thus, single-epoch radial velocity measurements may cause erroneous identification of binary systems as runaway stars. 
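The line-based velocity estimate described above (downstream of the Gaussian profile fits, whose centroids we take as given) can be sketched as follows; the 50 km s$^{-1}$ rejection threshold is an illustrative choice, not a value quoted in the text.

```python
import numpy as np

C_KMS = 299_792.458  # speed of light, km/s

def line_velocities(observed, rest):
    """Radial velocity of each line from its fitted centroid wavelength."""
    observed, rest = np.asarray(observed), np.asarray(rest)
    return C_KMS * (observed - rest) / rest

def stellar_rv(observed, rest, clip_kms=50.0):
    """Combine >= 3 line velocities, rejecting lines that deviate
    strongly from the median (e.g., lines near chip gaps)."""
    v = line_velocities(observed, rest)
    if len(v) < 3:
        raise ValueError("need at least three lines")
    keep = np.abs(v - np.median(v)) < clip_kms
    return float(v[keep].mean())
```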
This variability also adds scatter to the distribution of radial velocities for the full population. Our multi-epoch observations are meant to address the magnitude of these effects by measuring the scatter and estimating the field binary fraction for 8\% of the RIOTS4 sample (\S \ref{binaryfraction}). Our multi-epoch data are all obtained in IMACS f/4 mode, which gives us sensitivity to radial velocity variations of $\sim10$ km s$^{-1}$. \section{Results} \subsection{Stellar Catalog} Table \ref{catalog} presents the basic catalog of the 374 objects in the RIOTS4 survey. In columns 1 -- 3, we list the stellar ID numbers and $B,\ V$ magnitudes from \citet{Massey02}, respectively; column 4 contains the reddening-free $Q_{UBR}$ calculated by \citet{Oey04}. In column 5, we provide an extinction estimate using the SMC extinction maps from \citet{Zaritsky02}. Column 6 contains the spectral classification derived from the RIOTS4 data. Columns 7 and 8 list our measured radial velocity of the star and the radial velocity of the nearest (in velocity space) H{\footnotesize I} kinematic component with brightness temperature $> 20$ K (see \S \ref{S_Runaways}). We list the instrument setup used to obtain the spectrum in column 9 and the observation date in column 10. The \citet{Massey02} photometric errors are on average 0.01 at $V = 13.0$, and 0.04 at $V= 15.0$. Table~\ref{catalogO} provides the same data for the 23 additional stars we observed from the UV-selected sample. In what follows, we consider only the original, optically selected sample so that our analysis is applied strictly to a uniformly selected sample. However, given that there are 23 additional stars out of 91 identified with the alternate criteria, we can infer that our base sample is incomplete at least at the 25\% level for identifying all actual OB stars. 
\begin{deluxetable*}{cccccccccc} \tabletypesize{\small} \tablewidth{0pc} \tablecaption{RIOTS4 Catalog\tablenotemark{a}} \tablehead{ \colhead{ID\tablenotemark{b}} & \colhead{$B$\tablenotemark{b}} & \colhead{$V$\tablenotemark{b}} & \colhead{$Q_{UBR}$} & \colhead{$A_V$\tablenotemark{c}} & \colhead{Sp Type} & \colhead{RV$_{\rm star}$ }& \colhead{RV$_{\rm HI}$\tablenotemark{d}}& \colhead{Instrument} & \colhead{Observation Date} \\ & & & & & & \colhead{(km s$^{-1}$)} & \colhead{(km s$^{-1}$)} & & \colhead{ (YYMMDD)} } \startdata 107 & 14.96 & 15.00 & -0.95 & 0.82 & Be$_3$ & -- & -- & MIKE & 111024 \\ 298 & 15.18 & 15.12 & -0.91 & 1.03 & B1e$_{3+}$ & -- & -- & IMACS f/4 & 070920 \\ 1037 & 15.15 & 15.28 & -0.85 & 0.44 & B0.5 V & 110 & 110 & IMACS f/4 & 070920 \\ 1600 & 14.42 & 14.60 & -0.87 & 0.32 & O8.5 V & 93 & 103 & IMACS f/4 & 070920 \\ 1631 & 15.19 & 15.15 & -0.99 & 1.11 & B1e$_2$ & 120 & 120 & IMACS f/4 & 070920 \enddata \tablenotetext{a}{This table is published in its entirety in the electronic edition of the $Astrophysical$ $Journal$. 
A portion is shown here for guidance regarding its form and content.} \tablenotetext{b}{From \citet{Massey02}.} \tablenotetext{c}{From \citet{Zaritsky02}.} \tablenotetext{d}{Measured from \citet{Stanimirovic99}.} \label{catalog} \end{deluxetable*} \begin{deluxetable*}{cccccccccc} \tabletypesize{\small} \tablewidth{0pc} \tablecaption{Additional UV-Optically Selected Stars in the SMC Bar} \tablehead{ \colhead{ID\tablenotemark{a}} & \colhead{$B$\tablenotemark{a}} & \colhead{$V$\tablenotemark{a}} & \colhead{$Q_{UBR}$} & \colhead{$A_V$\tablenotemark{b}} & \colhead{Sp Type} & \colhead{RV$_{\rm star}$ }& \colhead{RV$_{\rm HI}$\tablenotemark{c}}& \colhead{Instrument} & \colhead{Observation Date} \\ & & & & & & \colhead{(km s$^{-1}$)} & \colhead{(km s$^{-1}$)} & & \colhead{ (YYMMDD)} } \startdata 5391 & 13.36 & 13.31 & -1.00 & 1.13 & O8.5 III & 44 & 98 & IMACS f/4 & 060913\tablenotemark{d} \\ 6946 & 14.60 & 14.69 & -0.87 & 0.57 & O9.5 V & 141 & 141 & IMACS f/4 & 060913\tablenotemark{d} \\ 8257 & 14.69 & 14.49 & -0.87 & 1.55 & B1.5 V & 96 & 107 & IMACS f/4 & 060913\tablenotemark{d} \\ 9534 & 13.63 & 13.76 & -0.84 & 0.44 & B0.2 III & - & - & IMACS f/4 & 090824 \\ 10129 & 13.87 & 14.01 & -0.87 & 0.45 & B0.2 V & 130 & 130 & IMACS f/4 & 060913\tablenotemark{d} \\ 14190 & 15.04 & 14.83 & -0.99 & 1.46 & B1.5 V & 149 & 149 & IMACS f/4 & 090824 \\ 15203 & 14.06 & 14.11 & -0.87 & 0.69 & O9.5 V + O9.7 V & 156 & 156 & IMACS f/4 & 060912 \\ 15440 & 14.97 & 14.77 & -0.90 & 1.49 & B1e$_3$ & - & - & IMACS f/4 & 090825\\ 15690 & 14.05 & 14.07 & -0.99 & 0.89 & O6 V((f)) & 80 & 120 & IMACS f/4 & 090824 \\ 17963 & 15.12 & 15.21 & -0.99 & 0.55 & B0.2 V & 115 & 120 & IMACS f/4 & 090824 \\ 18200 & 14.33 & 14.33 & -0.87 & 0.90 & B0e$_3$ & 111 & 120 & IMACS f/4 & 090824 \\ 24982 & 14.75 & 14.94 & -0.85 & 0.26 & O8 V & 110 & 110 & IMACS f/4 & 060913\tablenotemark{d} \\ 25912 & 14.19 & 14.39 & -0.88 & 0.26 & O5 V & 150 & 150 & IMACS f/4 & 060913\tablenotemark{d} \\ 27272 & 13.62 & 13.78 & -0.85 & 
0.35 & B0.7 III + B & 121 & 121 & IMACS f/4 & 060913\tablenotemark{d} \\ 28153 & 14.69 & 14.83 & -0.88 & 0.41 & O9.5 V & 169 & 169 & IMACS f/4 & 060912 \\ 36359 & 14.38 & 14.30 & -1.03 & 1.25 & B1e$_{4+}$ & - & - & IMACS f/4 & 060912 \\ 38302 & 14.64 & 14.81 & -0.84 & 0.29 & B1 V & 154 & 154 & IMACS f/4 & 090825 \\ 40341 & 13.77 & 13.98 & -0.92 & 0.24 & O8.5 III((f)) & - & - & IMACS f/4 & 090825 \\ 41095 & 14.84 & 14.85 & -0.92 & 0.92 & O9.5-B0 V + Be$_3$ & - & - & IMACS f/4 & 060911 \\ 44634 & 15.19 & 15.37 & -0.85 & 0.27 & O9.5-B0 V & 150 & 150 & IMACS f/4 & 090825 \\ 45677 & 13.52 & 13.66 & -0.92 & 0.47 & O9.5 III & 160 & 164 & IMACS f/4 & 090825 \\ 48672 & 14.34 & 14.52 & -0.93 & 0.36 & O7.5 V & - & - & IMACS f/4 & 090824 \\ 53373 & 14.08 & 14.20 & -0.84 & 0.51 & O9 V & 119 & 122 & IMACS f/4 & 090824 \enddata \tablenotetext{a}{From \citet{Massey02}.} \tablenotetext{b}{From \citet{Zaritsky02}.} \tablenotetext{c}{Measured from \citet{Stanimirovic99}.} \tablenotetext{d}{Observed multiple times for binary monitoring; see Table 3.} \label{catalogO} \end{deluxetable*} \subsection{Field IMF} Previous studies of the field massive star IMF in the Magellanic Clouds indicate a slope steeper than the traditional Salpeter slope of $\Gamma = 1.35$. The observed slopes range from $\Gamma$ = 1.80$\pm 0.09$ \citep{Parker98} to $\Gamma \sim$ 4.0$\pm 0.4$ \citep{Massey95, Massey02}. However, not all studies agree on this point, as observations of ``field'' stars in the LMC region surrounding 30 Dor suggest an IMF consistent with Salpeter \citep{Selman11}. Some of the uncertainty and variation in these results can be attributed to obtaining the IMF using only photometry or a combination of photometry and spectroscopy. As shown by, e.g., \citet{Massey11}, deriving accurate masses for massive stars can only be done with spectroscopy. 
If spectroscopically determined masses confirm the steep field IMF, then it would represent the largest deviation from the traditional Salpeter IMF obtained from direct star counts. RIOTS4 was designed for such observations, since it avoids the uncertainty of photometric masses, and our large sample minimizes stochastic effects at the highest masses. With RIOTS4, we definitively measure the field massive star IMF with our spatially complete sample of objects; full details on methodology and results are reported by \citet{Lamb13}. Briefly, for stars with spectroscopically derived masses $> 20\ M_\odot$, we follow \citet{Koen06} to derive the cumulative mass distribution for the SMC field and compare it with evolved present-day mass functions from Monte Carlo models with ages up to 10 Myr, the lifetime of 20 $M_\odot$ stars. Using this method, we estimate that the field massive star IMF slope is $\Gamma$=2.3$\pm 0.4$ for the highest-mass SMC stars. This slope is confirmed with OGLE~II photometry \citep{Udalski98} for $7 - 20\ M_\odot$ stars, using a stochastic approach that models the uncertainties in stellar positions on the H-R diagram. With further Monte Carlo modeling, we determine that undetected binaries or a unique star formation history are unable to explain this steep field IMF. Thus, we conclude that the steep observed IMF is a real property of the SMC field. In \S 5, we attribute this to a preponderance of tiny star-forming events. \subsection{In Situ Formation of Field O Stars} As outlined earlier, the origin of the field massive star population is an open question. In particular, it is unknown whether massive stars are capable of forming in isolation or within sparse clusters. Some theories of massive star formation, such as competitive accretion, suggest that the mass of the most massive star formed in a cluster depends on the cluster mass \citep{Bonnell04}. 
Other theories, such as those based on core accretion, allow for the formation of massive stars in sparse environments, or even in isolation \citep[e.g.,][]{Krumholz09}. The essential question is whether the formation of massive stars in sparse environments is merely improbable \citep[e.g.,][]{Elmegreen00} or actually impossible \citep[e.g.,][]{Weidner06}. Using RIOTS4 spectra, along with data from the {\sl Hubble Space Telescope (HST)} \citep{Lamb10} and OGLE photometry \citep{Udalski98}, we identify a sample of unusually strong candidates for in-situ, field OB star formation. \citet{Lamb10} discover three massive stars that formed in sparse clusters containing $\sim 10$ or fewer companion stars with mass $> 1M_\odot$ and another three candidates for truly isolated formation. \citet{Oey13} present a sample of 14 field OB stars that are centered on symmetric, dense {\sc H ii} regions, which minimizes the likelihood that these objects have transverse runaway velocities. In both studies, the RIOTS4 spectra eliminate line-of-sight runaways, leaving strong candidates for field massive stars that formed in situ. We set further constraints on the degree to which these stars are isolated by examining their immediate stellar environments with the {\sl HST} and OGLE imaging, allowing us to evaluate the relationship between the most massive stars in any sparse clusters and the cluster mass. Our results imply that these two quantities are independent, and thus they favor the core accretion models for massive star formation. \subsection{Radial Velocity Distribution} The distribution of radial velocities reveals information about the stellar population kinematics, as well as the bulk motion of the SMC. Our velocity distribution from RIOTS4 is generally consistent with earlier work; Figure \ref{veldistro} is qualitatively similar to that from the 2dF survey of OBA-type stars in the SMC by \citet{Evans08}. 
Both samples exhibit a gaussian-like velocity distribution with a FWHM of $\sim 30$ km s$^{-1}$ and a mean systemic velocity of $\sim 150$ km s$^{-1}$. As mentioned earlier, radial velocities for individual stars may be affected by binary motions, and so we can only make inferences based on aggregate trends. We do see evidence of a velocity gradient across the SMC, which we depict in Figure \ref{veldistroregions} by plotting velocity distributions of three regions in the SMC. The Bar 1 and Bar 2 regions have mean velocities of $140$ km s$^{-1}$ and $157$ km s$^{-1}$, respectively, with corresponding respective velocity dispersions of 32 km s$^{-1}$ and 39 km s$^{-1}$. Note that although we bisect the bar into two regions, it appears to have a relatively smooth velocity gradient. The SMC wing is more redshifted than the SMC bar, having a mean velocity of $177$ km s$^{-1}$ with velocity dispersion of 29 km s$^{-1}$, but it does not appear to have a significant internal velocity gradient. These observations of the large-scale motions in the SMC agree with results based on stars in the 2dF survey and on {\sc H i} gas from \citet{Stanimirovic04}. \begin{figure*} \begin{center} \includegraphics[scale=.45,angle=0]{f5.eps} \caption{The distribution of radial velocities from stars in the RIOTS4 survey.} \label{veldistro} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=.45,angle=0]{f6a.eps} \includegraphics[scale=.45,angle=0]{f6b.eps} \includegraphics[scale=.45,angle=0]{f6c.eps} \includegraphics[scale=.45,angle=0]{f6d.eps} \caption{We split the RIOTS4 sample into three regions of the SMC, as shown in the upper left panel. Stars in our binary fields are plotted with asterisks while all other stars are plotted as crosses. In the other three panels, we plot the radial velocity distribution for stars in each separate region. 
The clear velocity gradient of RIOTS4 stars across the SMC agrees qualitatively with the velocity gradient of H{\footnotesize I} gas from \citet{Stanimirovic04}.} \label{veldistroregions} \end{center} \end{figure*} \subsection{Runaway Stars} \label{S_Runaways} Runaway stars are a well-known component of the field population, yet their relative contribution to the field and ejection mechanisms from clusters remain poorly understood. Observational estimates for the runaway frequency range from 10\% \citep{Blaauw61} to 50\% \citep{deWit05}, while some authors argue that {\it all} field massive stars are runaways \citep{Gvarmadze12}. One trend that seems to have emerged is that O stars have a $2-8$ times higher runaway frequency than B stars \citep[e.g.,][]{Gies87,Stone91}. Runaways arise from one of two likely mechanisms: the binary supernova scenario \citep{Blaauw61}, or the dynamical ejection scenario \citep{Poveda67}. In the binary supernova scenario, the primary star in a massive binary explodes as a supernova, which reduces the gravitational binding of the system and may impart a kick to the secondary star. In contrast, dynamical ejections primarily arise from three- or four-body interactions between a massive binary and single star or massive binary pairs \citep[e.g.,][]{Leonard90}. These ejection mechanisms will imprint different quantitative properties on the runaway population, including velocities, binary parameters, and chemical composition. For example, the binary supernova scenario cannot produce runaway velocities above $\sim 200$ km s$^{-1}$, while dynamical ejections can attain higher velocities \citep[][and references therein]{Gvaramadze09}. Both ejection scenarios are predicted to include binary runaways; however, the types of binaries differ significantly. For the binary supernova scenario, the compact object remnant of the primary star sometimes remains bound to the secondary as a runaway binary system with an eccentric orbit \citep{McSwain07}. 
For dynamical ejections, tight binaries are sometimes ejected as a single system, thus representing the only mechanism that can form a runaway double-lined spectroscopic binary. Finally, while both mechanisms originate from binary systems, stars ejected from the binary supernova scenario may be He-rich due to contamination from the supernova explosion \citep{Hoogerwerf01}. \begin{figure*} \begin{center} \includegraphics[scale=.45,angle=0]{f7a.eps} \includegraphics[scale=.45,angle=0]{f7b.eps} \caption{Position-velocity diagrams for H I in the line of sight for two RIOTS4 stars, showing data from \citet{Stanimirovic04}. The solid, vertical line depicts the observed radial velocity of the RIOTS4 target, while the dashed line shows our brightness temperature threshold of 20 K. Stars 35491 (left) and 43724 (right) are examples for which the stellar and gas velocities are consistent and inconsistent, respectively. } \label{runaways} \end{center} \end{figure*} To estimate the fraction of runaway stars in the RIOTS4 sample, we compare the observed stellar radial velocities of our OB stars with the {\sc H i} gas velocity distribution along the line of sight, using data from the Australia Telescope Compact Array (ATCA) and Parkes telescopes compiled and mapped by \citet{Stanimirovic99}. We identify runaway candidates as those objects with radial velocities that are different by $> 30$ km s$^{-1}$ from those of the nearest {\sc H i} velocity components having a brightness temperature $> 20$ K in the same line of sight. A pair of examples are shown in Figure \ref{runaways}, with star 35491 depicting an object consistent with the line-of-sight {\sc H i} gas velocity, and star 43724 meeting our criteria for a runaway star. We find that only 11\% of the stars meet these runaway criteria, 27 out of 238 stars with good radial velocity determinations. 
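The runaway criterion described above reduces to a comparison of each stellar radial velocity against the significant {\sc H i} components along the same sight line. A minimal sketch follows, using hypothetical gas components for illustration rather than the actual \citet{Stanimirovic99} data cubes:

```python
# Hedged sketch of the runaway criterion described in the text: a star is a
# runaway candidate if its radial velocity differs by > 30 km/s from every
# "significant" HI component (brightness temperature > 20 K) along the same
# line of sight. Thresholds follow the text; the data below are illustrative.

RV_THRESHOLD = 30.0   # km/s, maximum allowed offset from nearest HI component
TB_THRESHOLD = 20.0   # K, brightness temperature cut for significant gas

def is_runaway_candidate(star_rv, hi_components):
    """hi_components: list of (velocity_km_s, brightness_temp_K) tuples."""
    significant = [v for v, tb in hi_components if tb > TB_THRESHOLD]
    if not significant:
        return False  # no significant gas along this sight line to compare
    nearest_offset = min(abs(star_rv - v) for v in significant)
    return nearest_offset > RV_THRESHOLD

# Illustrative line-of-sight HI spectra (velocity, T_B) -- not real data.
los_consistent = [(110.0, 45.0), (150.0, 60.0)]
los_discrepant = [(150.0, 55.0), (175.0, 30.0)]

print(is_runaway_candidate(115.0, los_consistent))  # False: within 30 km/s
print(is_runaway_candidate(44.0, los_discrepant))   # True: offset > 100 km/s
```

In practice the comparison uses the full position-velocity data as in Figure \ref{runaways}; the sketch only captures the thresholding logic.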
This frequency is likely to be overestimated due to false positives caused by binary motions, since the measured radial velocity may reflect the orbital motion for a binary star, rather than the systemic velocity. While such motions will also sometimes cause false negatives depending on the orbital configuration at the time of observation, false positives are more likely to be observed. A more significant effect is that radial velocities can only identify line-of-sight runaway motions. We estimate that our observations miss 50\% of runaways if the typical ejection velocity is $\sim 60$ km s$^{-1}$. Since only 8\% of our survey has multi-epoch observations, we are not yet able to correct for the effect of binaries on the stellar population kinematics. Therefore, we have initiated follow-up binary monitoring observations to further minimize these degeneracies. We do find one runaway, star 5391, that we identify as a binary star from our multi-epoch observations (\S~\ref{binaryfraction}). Its radial velocity of 44 km s$^{-1}$ is 55 km s$^{-1}$ removed from the nearest significant component of {\sc H i} gas. With a semi-amplitude of 108 km s$^{-1}$ in our observed variations, the secondary cannot be a degenerate star. Therefore, if this binary system is indeed ejected from a cluster, then it must be due to the dynamical ejection mechanism rather than the binary supernova mechanism. Since dynamical ejection frequently splits binaries, the existence of a non-degenerate, runaway binary suggests a major contribution by this process to the runaway population. Another interesting object also points to the importance of dynamical ejection: Star 49937 appears to be an extreme runaway that is unlikely to be the product of the binary supernova scenario. Its runaway velocity is $\sim 200$ km s$^{-1}$ removed from the nearest {\sc H i} velocity component. 
While it is possible that this star's runaway component is completely in the line of sight and/or fortuitously enhanced by binary motion, its high radial velocity, taken at face value, is near the maximum ejection speed possible from the binary supernova mechanism \citep{PortegiesZwart00}, as mentioned above. Thus, the existence of this star again suggests a significant role for dynamical ejection of runaways. \subsection{Binary Stars} \label{binaryfraction} Stellar multiplicity is a key parameter that probes the formation and dynamical evolution of a stellar population. For example, large protostellar disks may be disrupted in high-density environments, thereby suppressing the formation of massive binaries \citep{Kratter08, Kratter10}. Recent studies of Galactic clusters and OB associations find observed massive-star binary fractions ranging from $\sim 60\%$ to $\sim 80\%$ \citep[e.g.,][]{Sana08,Sana09,Sana11,Kiminki12,Kobulnicky14}. However, few studies have systematically investigated the multiplicity of massive stars in the field. Early studies found that field massive stars have roughly half the binary frequency of massive stars found in clusters \citep[e.g.,][]{Stone81,Gies87}. This general trend of a lower field binary frequency persists in later studies, such as \citet{Mason98,Mason09}, who use speckle interferometry of objects in the Galactic O Star Catalog \citep{MaizApellaniz04} to compare the frequency of multiplicity between cluster and field O stars. In this magnitude-limited sample, they find a $39\%$ binary fraction for field O stars, compared to a $66\%$ binary fraction for O stars in clusters. When combining their results with data from the literature on spectroscopically identified binaries, they obtain 51\% and 75\% binary fractions for field and cluster O stars, respectively. 
However, the spectroscopic data for these objects are non-uniform and therefore may not provide an accurate comparison of these statistics between cluster and field O stars. But it does suggest that the frequency of multiplicity for massive stars in the Galactic field is lower than in clusters. With the RIOTS4 survey, we performed repeat observations of three IMACS slit-mask fields over the 5-year survey period, totaling 29 objects, to obtain an initial evaluation of the binary fraction of field massive stars in the SMC. We note that some of these stars belong to the UV-selected sample (Table~\ref{catalogO}), rather than the default sample. We have $9-10$ epochs for each field, at intervals of days, weeks, months, and years apart; three stars appear in two of the three fields, yielding up to twice the number of observations for these objects. As with the larger survey, these fields have a high fraction of Oe/Be stars, and we focus here primarily on the 17 non-Oe/Be stars in these fields. We use three separate methods to identify potential binaries, which are described below. The first method identifies binaries using maximum observed radial velocity variations, the second method is based on a statistical F-test analysis following \citet{Duquennoy91}, and the third method uses the period power spectrum and searches for binary orbital solutions from the radial velocity data. Table~\ref{t_binaries} summarizes this binary monitoring sample: columns 1 and 2 give the star ID and spectral type, respectively; column 3 gives the number of observations, and columns 4 and 5 show the star's binary status determined from the second and third methods; we note that the first method yields the same results as the second. Column 6 gives the systemic velocity based on the orbital solution, if available, or the mean of the minimum and maximum measured radial velocities. 
Column 7 gives the largest velocity variation observed within a 14-day interval $\Delta v$, and column 8 provides the standard deviation $\sigma_{\rm obs}$ of the radial velocity measurements for each star. Column 9 lists the calculated $P (\chi^2)$, which is used to determine binary status in the statistical F-test (\S 4.6.2). Column 10 shows the observation dates for each object, coded as indicated. \begin{deluxetable*}{llcccccccc} \tabletypesize{\small} \tablewidth{0pc} \tablecaption{Stars in binary monitoring fields} \tablehead{ \colhead{ID} & \colhead{SpT} & \colhead{$N$} & \colhead{F-test} & \colhead{Power Spec} & $v_{\rm sys}$ (km s$^{-1}$) & $\Delta v$ (km s$^{-1}$) & $\sigma_{\rm obs}$ (km s$^{-1}$) & $P (\chi^2)$ & \colhead{Observation Dates\tablenotemark{a}} } \startdata \multicolumn{6}{c}{Normal OB Stars} \\ \hline 5391 & O8.5 III & 9 & Y & Y & 44 & 144 & 75 & $<0.01$ & ABCEFGIJK \\ 6908 & O9.5 -- B0 III & 9 & Y & N & 128 & 93 & 25 & $<0.01$ & ABCEFGIJK \\ 6946 & O9.5 V & 9 & N & N & 141 & 35 & 12 & 0.74 & ABCEFGIJK \\ 7437 & O6.5 I(f) & 9 & Y & Y & 151 & 29 & 33 & $<0.01$ & ABCEFGIJK \\ 7782 & O8 V & 9 & Y & Y & 127 & 65 & 33 & $<0.01$ & ABCEFGIJK \\ 8257 & B1.5 V & 9 & Y & Y & 96 & 61 & 21 & $<0.01$ & ABCEFGIJK \\ 8609 & B0 III & 9 & N & N & 128 & 21 & 11 & 0.97 & ABCEFGIJK \\ 10129 & B0.2 V & 9 & Y & Y & 130\tablenotemark{b} & 29 & 21 & $<0.01$ & ABCEFGIJK \\ 10671 & B0.5 V & 9 & Y & N & 122 & 108 & 33 & $<0.01$ & ABCEFGIJK \\ 21844 & O8 III((f)) & 9 & N & N & 151 & 36 & 13 & 0.09 & BCDEFGHIK \\ 24213 & B0 III & 16 & N & N & 126 & 6 & 9 & 0.99 & ABCDEFGHIJK\\ 24982 & O8 V & 8 & Y & Y & 110 & 59 & 32 & $<0.01$ & ADFGHIJK \\ 25912 & O5 V & 9 & Y & Y & 150 & 103 & 45 & $<0.01$ & ADEFGHIJK \\ 27272 & B0.7 III + B & 9 & Y & Y & 121\tablenotemark{c} & 223 & 105 & $<0.01$ & ADEFGHIJK \\ 27600 & B0.5 III & 10 & N & Y & 177\tablenotemark{b} & 16 & 13 & 0.64 & BCDEFGHIJK \\ 27712 & B1.5 V & 7 & N & N & 127 & 8 & 7 & 0.46 & ADFGHJK \\ 28841 & B1 III & 10 & N & 
Y & 141 & 22 & 15 & 0.02 & BCDEFGHIJK \\ \hline \\ \multicolumn{6}{c}{Classical Oe/Be Stars} \\ \hline 7254 & O9.5 IIIe$_2$ & 9 &\nodata & Y & 126 & 10 & \nodata & \nodata & ABCEFGIJK \\ 21933 & Be$_3$ & 4 &\nodata & N & 130 & 57 & \nodata & \nodata & AHIJ \\ 22321 & O9.5 IIIpe$_{4+}$ & 10 &\nodata & Y & 167\tablenotemark{b} & 28 & \nodata & \nodata & BCDEFGHIJK \\ 23710 & O9--B0 pe$_{3+}$ & 10 &\nodata & N & 168 & 48 & \nodata & \nodata & BCDEFGHIJK \\ 23954 & B1.5e$_{3+}$ & 7 &\nodata & N & 130 & 69 & \nodata & \nodata & ADFGHIJ \\ 24229 & B1e$_2$ & 7 &\nodata & N & 155 & 19 & \nodata & \nodata & ADFGHJK \\ 24914 & O9 III-Vpe$_1$ & 4 &\nodata & Y & 81 & 20 & \nodata & \nodata & AEHI \\ 25282 & B0e$_1$ & 17 &\nodata & N & 130 & 72 & \nodata & \nodata & ABCDEFGHIJK \\ 25337 & Be$_3$ & 9 &\nodata & Y & 124 & 55 & \nodata & \nodata & BCDEFGIJK \\ 27135 & B1e$_2$ & 18 &\nodata & N & 113 & 30 & \nodata & \nodata & BCDEFGHIJK \\ 27736 & B0e$_2$ & 6 &\nodata & Y & 153 & 39 & \nodata & \nodata & DEFGHJ \\ 27884 & O7-8.5 Vpe$_{4+}$ & 10 &\nodata & Y & 156 & 32 & \nodata & \nodata & BCDEFGHIJK \enddata \tablenotetext{a}{Dates of observation are coded as follows: (A) 2006 September 13, (B) 2007 September 19, (C) 2007 September 20, (D) 2008 September 24, (E) 2008 October 6, (F) 2008 October 7, (G) 2008 October 11, (H) 2008 November 21, (I) 2008 November 22, (J) 2009 August 24, (K) 2010 December 20.} \tablenotetext{b}{From orbital solution.} \tablenotetext{c}{Average of SB2 components A and B.} \label{t_binaries} \end{deluxetable*} \subsubsection{Maximum radial velocity variation and timescale} \begin{figure*} \begin{center} \includegraphics[scale=.45,angle=0]{f8.eps} \caption{The observed maximum short-term ($< 14$ days) radial velocity difference vs the largest radial velocity difference over any period. The dashed line depicts the identity relationship. Objects with the highest observed velocity difference happening over a $< 14$ day period will lie on this line. 
We expect real binary systems will exhibit velocity variations on both long and short timescales. Binaries identified by having radial velocity variations $> 30$ km s$^{-1}$ are plotted with a plus sign, while single stars are depicted as asterisks.} \label{ampvel} \end{center} \end{figure*} To identify binary star candidates, we first compare the amplitude of radial velocity variations with the timescale of the variations. Since the amplitude of radial velocity variations is inversely correlated with the period of a binary system, binaries with large-amplitude variation should display variability on short timescales, provided the eccentricity of the system is near zero. In Figure \ref{ampvel}, we plot the amplitude of the maximum observed radial velocity variation over short timescales ($ < 14$ days; Table~\ref{t_binaries}) versus the amplitude of the maximum radial velocity variation over any timescale. Note that in Figure \ref{ampvel} all objects must lie at or below the dashed identity line. In an ideal scenario, all short-period systems will lie along this locus; however, we cannot expect good sampling with $\lesssim 10$ epochs of data. Nonetheless, we still observe a large fraction of high-variation systems along the identity line, which suggests there are no systematic velocity offsets over time. Given the sampling of these fields and our systematic errors, we conservatively identify binaries as those objects with radial velocity variations $> 30$ km s$^{-1}$ including errors. This yields 10 probable binaries out of the 17 non-Oe/Be stars in our binary monitoring fields. \subsubsection{F-test: radial velocity variations relative to noise} \begin{figure*} \begin{center} \includegraphics[scale=.45,angle=0]{f9.eps} \caption{The distribution of $P (\chi^2)$ for non-Oe/Be stars in our binary fields. 
Binary systems that exhibit radial velocity variations significantly larger than expected from observational errors will have very low $P (\chi^2)$ values ($< 0.01$). } \label{ftest} \end{center} \end{figure*} We also use the approach of \citet{Duquennoy91}, who identified binary candidates in the solar neighborhood. This method compares the mean of the statistical measurement errors associated with each radial velocity measurement ($\sigma_{\rm ave}$) with the standard deviation in the measured radial velocities ($\sigma_{\rm obs}$; Table~\ref{t_binaries}) for each star. For single objects with properly estimated measurement errors, the ratio of $\sigma_{\rm obs}$/$\sigma_{\rm ave}$ should approximately equal unity. However, it is unclear where the cutoff ratio between single objects and binary stars should occur. Thus, \citet{Duquennoy91} use a statistical F-test to measure the probability $P (\chi^2)$ that the observed variations are due to statistical noise. Following their work, we calculate $\chi^2$, accounting for the number of observations $n$, with: \begin{equation} \chi^2 = (n-1)(\sigma_{\rm obs}/\sigma_{\rm ave})^2 \quad . \end{equation} Using the cumulative chi-square distribution given by \begin{equation} F_k(\chi^2) = G (k/2 , \chi^2/2) \end{equation} where $G$ is the regularized lower incomplete gamma function for $k = n-1$ degrees of freedom, we calculate $P (\chi^2) = 1-F_k(\chi^2)$, given in Table~\ref{t_binaries}. In the case that all objects are single, the distribution of $P(\chi^2)$ should be uniform between values of 0 and 1. Binary systems, on the other hand, should have very low values of $P(\chi^2)$, since their radial velocity variations are not due to statistical noise. Thus, we can identify binaries as those objects with $P (\chi^2) < 0.01$. We plot the distribution of $P (\chi^2)$ for the same 17 stars in Figure \ref{ftest}. Again, we find a high binary fraction with 10 out of 17 objects having $P (\chi^2) < 0.01$. 
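The $P(\chi^2)$ statistic above is straightforward to reproduce. A minimal sketch, assuming illustrative values for $n$, $\sigma_{\rm obs}$, and $\sigma_{\rm ave}$ rather than the measured RIOTS4 dispersions, and using SciPy's chi-square survival function (which equals $1-F_k(\chi^2)$):

```python
# Hedged sketch of the binary test above: chi^2 = (n-1)(sigma_obs/sigma_ave)^2,
# and P(chi^2) = 1 - F_k(chi^2) with k = n - 1 degrees of freedom.
# scipy.stats.chi2.sf is the survival function, i.e. exactly 1 - F_k.
from scipy.stats import chi2

def p_chi2(n, sigma_obs, sigma_ave):
    """Probability that the observed RV scatter is pure measurement noise."""
    stat = (n - 1) * (sigma_obs / sigma_ave) ** 2
    return chi2.sf(stat, df=n - 1)

# Illustrative values (not actual RIOTS4 measurements): 9 epochs with a
# typical 10 km/s IMACS f/4 measurement error.
print(p_chi2(9, sigma_obs=10.0, sigma_ave=10.0))  # well above 0.01: single
print(p_chi2(9, sigma_obs=33.0, sigma_ave=10.0))  # far below 0.01: binary
```

Since scatter at the level of the measurement error gives $P(\chi^2)$ of order a few tenths, the $P(\chi^2) < 0.01$ cutoff is a conservative binary criterion.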
\subsubsection{Period power spectrum} We used the radial velocities to search for credible orbital solutions for all the stars in the binary monitoring sample based on the method described by \citet{Kiminki12a}. In this approach, we generate the power spectrum of periods for each object, and identify the most likely values, if any, with an IDL program created by A. W. Fullerton, which uses the CLEAN deconvolution algorithm of \citet{Roberts87}. We then apply the \citet{gudehus01} code for determining orbital solutions, the Binary Star Combined Solution Package, using the candidate periods. We show the phase diagrams of the two best orbital solutions in Figure~\ref{f_orbits}. These are for stars 10129 and 27600, with periods around 4.8 and 3.3 days, respectively. Star 10129 appears to have a moderate eccentricity around $e = 0.2$, while 27600 is consistent with a purely circular orbit. This approach again yields 10 out of 17 probable binaries, although the identified candidate binaries are not the exact same ones found with the preceding methods (Table~\ref{t_binaries}). \begin{figure*} \plottwo{f10a.eps}{f10b.eps} \caption{Phase diagrams showing the solutions for two of the more securely identified binaries among the normal OB stars in the monitoring sample. \label{f_orbits} } \end{figure*} \subsubsection{Binary Fraction} All three binary identification methods suggest binarity in 10 out of 17 ($59\% \pm 12\%$) of the non-Oe/Be stars in our three binary monitoring fields. This frequency is consistent, within the uncertainty, with previous observations of multiplicity in the Galactic field, which are $\sim 40 - 50$\% as described above. However, the small number statistics generate large errors, and our binary frequency is actually closer to values observed in Galactic clusters and OB associations. 
Comparing these frequencies is further complicated by the observational biases inherent in the different binary detection methods and sample properties; our frequencies are lower limits, representing results only for spectroscopic binaries. \citet{Sota14} find a strong lower limit of 65\% for the combined spectroscopic and visual binaries in the southern component of their Galactic O star survey. Almost one quarter of these are identified exclusively by astrometric methods. We have started follow-up monitoring of additional RIOTS4 targets to confirm these results, and to obtain binary orbital parameters. We also applied the third binary identification method to the remaining 12 stars in the monitoring fields, which are classical Oe/Be stars. The radial velocities measured for these stars are more uncertain than for normal stars because of emission-line contamination in the H lines. We find that 6 out of the 12 Oe/Be stars appear to be probable binaries. \begin{figure*} \begin{center} \includegraphics[scale=1]{f11.eps} \caption{The multi-epoch RIOTS4 spectra of the double-lined spectroscopic binary 27272, with observation dates shown. } \label{27272} \end{center} \end{figure*} One of our binaries, star 27272, is a double-lined spectroscopic binary (SB2) with B0.7~III and B star components (Figure \ref{27272}). In our observations of this system, we find that the stronger absorption line appears blueshifted in all but 1--2 epochs. While this may be evidence of the Struve-Sahade (S-S) effect \citep{Struve37,Sahade59}, it is most likely caused by an unfortunate observing cadence, which impedes our ability to obtain a satisfactory orbital solution. \subsubsection{Systemic Velocities} Estimated systemic velocities $v_{\rm sys}$ for the 29 stars in the monitoring fields are given in Table~\ref{t_binaries}. These are generally given as the average of the minimum and maximum of the $N$ radial velocity measurements for each star. 
For three objects, more reliable values are available from fitted orbital parameters. The mean $v_{\rm sys}$ is 131 km s$^{-1}$, in good agreement with the value of 140 km s$^{-1}$ for the Bar 1 region, where these objects are located (Figure~\ref{veldistroregions}). Almost all of the stars in our monitoring sample have $v_{\rm sys}$ within $2\sigma$ of the mean. However, Star 5391 has $v_{\rm sys} = 44\ \rm km\ s^{-1}$, which is blueshifted by 87 km s$^{-1}$, more than 3$\sigma$ from the mean and thus potentially a runaway star. This O8.5~III star is also identified as a binary by our three methods (Table~\ref{t_binaries}). \subsection{Emission-line stars} \label{estars} A large fraction of our RIOTS4 stars turn out to be emission-line stars, mostly classical Oe/Be stars. We also identify four B supergiant stars that exhibit forbidden emission lines \citep{Graus12}. One of these, star 29267 (AzV 154; \citealt{Azzopardi75}) was a previously known sgB[e] star \citep{Zickgraf89}. The other three stars are 46398, 62661, and 83480 (R15, R38, and R48, respectively; \citealt{Feast60}). SgB[e] stars are normally defined as stars exhibiting forbidden emission lines along with strong IR dust excess. However, this strong dust emission is not present in the three RIOTS4 stars newly shown to be B[e] stars. In \citet{Graus12}, we discuss these objects in detail, demonstrating that they do show more modest, free-free near-IR emission. We propose that they represent a new, transition class of dust-poor sgB[e] stars. There are two Wolf-Rayet stars included in the RIOTS4 survey. They are stars 22409 and 30420, which are both identified as WN3 + abs stars by \citet{Massey01}. In our RIOTS4 spectra, we detect only H absorption lines for 22409, while 30420 also exhibits He {\footnotesize II} absorption (Figure \ref{WR-HMXB}). 
\citet{Massey01} identify He {\footnotesize II} absorption in both objects and use the lack of He {\footnotesize I} to estimate that the absorption components correspond to O3-O4 stars. \begin{figure*} \begin{center} \includegraphics[scale=1]{f12.eps} \caption{Spectra of the Wolf-Rayet and Be/X-ray binary stars in the RIOTS4 catalog. } \label{WR-HMXB} \end{center} \end{figure*} The rest of the emission-line stars are classical Oe/Be stars, comprising $\sim 25\%$ of the O stars \citep{GoldenMarx15} and $\sim 50\%$ of the B stars in the RIOTS4 survey. These objects exhibit emission in their Balmer lines due to `decretion disks' of material that are likely caused by rapid stellar rotation \citep[e.g.,][]{Porter03}. Oe/Be stars are more common at lower metallicities, with a Galactic Oe/O-star fraction of $0.03 \pm 0.01$ as measured from Galactic O Star Spectroscopic Survey \citep[GOSSS;][]{Sota11, Sota14} and a $0.24 \pm 0.09$ Oe/O-star fraction in SMC clusters \citep{Martayan10}. The denominators here represent all O stars, including Oe stars. Similarly, the Be/B frequency of $30-40\%$ in SMC clusters is about twice the Galactic frequency \citep{Maeder99,Wisniewski06}. This metallicity effect is consistent with the decretion disk scenario, since the metal-poor SMC stars have weak stellar winds, thereby impeding the loss of stellar angular momentum through the winds. The high rotation rates therefore promote the formation of decretion disks, leading to the Be phenomenon. Our RIOTS4 field Oe stars and their statistics are presented by \citet{GoldenMarx15}, yielding an Oe/O ratio of $0.27\pm0.04$. We also find that the Oe spectral type distribution extends to earlier types than in the Galaxy, both in terms of conventional classifications and hot Oe stars whose spectral types are uncertain but are apparently of extremely early type. 
One extreme star, 77616, has He {\sc ii} in emission from the disk, showing that even the hottest O stars can present the Oe/Be phenomenon; this supports theoretical models predicting that fast rotators can reach higher effective temperatures \citep{Brott11}. Our large sample of Oe stars in the SMC strongly supports the metallicity effects predicted by the decretion disk model and characterizes the properties of early Oe stars. Regarding the Be stars, the RIOTS4 Be/B fraction appears to be even higher than found in previous studies. This result should be treated with caution because our sample selection criteria may be biased to favor selection of Be stars. These objects emit strongly in H$\alpha$, which results in a brightening of their $R$ magnitude, thus lowering $Q_{UBR}$. Therefore, our sample selection criterion of $Q_{UBR} \leq -0.84$ is especially useful for selecting Be stars. Given our additional limiting $B$ magnitude criterion, it is unclear whether our completeness limit for Be stars extends to later spectral types than normal stars, or whether it provides more complete identification of B stars by including more Be stars down to the magnitude limit. A comprehensive treatment of the Be stars, including detailed investigation of the selection effects and estimates of the luminosity classes, will be presented in a future publication. For now, we include \citet{Lesh68} classifications (Table \ref{catalog}) for these stars, which are a measure of the magnitude of the Be phenomenon, and also indicate the presence of Fe II emission. In total, the Oe/Be stars account for 157 of the 374 stars (42\%) in the RIOTS4 sample. We also observed three previously known Be/X-ray binary systems within our survey, whose spectra are plotted in Figure \ref{WR-HMXB}. Object 52865 is reported to be a B0-0.5 III-Ve star in a binary system with a $967$-s pulsar \citep{Schurch07, Haberl08}, and our spectral type for 52865 agrees with this spectral classification. 
\citet{Coe12} report object 64194 to be a B0.5-1 Ve star in a binary system with a presumed neutron star, although no pulsar has been identified; we find a spectral type of B0e$_3$ for this star. Object 77458 is an eclipsing X-ray binary with a period of $\sim3.9$ days \citep{Schreier72}. \citet{Webster72} first identified 77458 as the optical counterpart of the X-ray source, a $0.72$-s pulsar \citep{Lucke76} with a period of 3.9 days. There is a variety of spectral classifications for 77458 in the literature, ranging from O9.5 II \citep{Lennon97} to B2 I \citep{Garmany87}, which suggest some real variation in this object's spectrum. The most recently published spectral type is O9.7 Ia+ from \citet{Evans04}; our spectral type for this object is slightly later, B0.2e$_1$. \section{Discussion} The RIOTS4 survey provides a first, quantitative characterization of the field massive star population based on a complete, uniformly selected sample of OB stars. It is also the first complete survey of field massive stars in an external galaxy. The resulting characterization of this population is necessarily sensitive to our definition of field stars, recalling that our criterion requires that members be at least 28 pc from other OB candidates, regardless of the presence of lower-mass stars. Thus, most of our objects can be expected to represent the ``tip of the iceberg'' on low-mass clusters. On the other hand, we note that our 28-pc requirement is a more stringent criterion for isolation than is often used in other studies. This clustering length is derived from the spatial distribution of the entire OB population and represents a characteristic value for the SMC as a galaxy \citep{Oey04}. In contrast, other studies often use more arbitrary definitions, for example, ``field'' OB stars in the vicinity of the 30 Doradus giant star cluster \citep{Bressert12} correspond to a different concept of field stars. 
\citet{Oey04} showed that the clustering law for SMC OB stars follows an $N_*^{-2}$ power law extending down to $N_*=1$, which corresponds to our individual RIOTS4 field OB stars, where $N_*$ is the number of OB stars per cluster. This basically confirms that most of our sample corresponds to the ``tip of the iceberg'' objects, as expected. However, as discussed in detail by \citet{Oey04}, the magnitude of the $N_*=1$ bin does suggest a slight, but difficult to quantify, enhancement above a simple extrapolation of the power law distribution. Conservatively, it is $< 0.3$ dex, implying that any excess ``deep field'' population is less than 50\% of the total, and perhaps much less. Based on Monte Carlo simulations, \citet{Lamb10} find that observations of sparse clusters and field O stars are consistent with full sampling of the IMF up to the universal stellar upper-mass limit $m_{\rm up}$. This is at odds with the steep upper IMF for the field stars found in \S 4.2. However, these results can be reconciled by the fact that \citet{Lamb10} also identified the existence of an effective lower limit to $N_*$ for normal clusters, $N_*\gtrsim 40$. This value corresponds to a mean cluster mass limit of $M_{\rm cl} \gtrsim 20\ M_\odot$ for a \citet{Kroupa01} IMF. Since typically $m_{\rm up} \gtrsim M_{\rm cl}$ here, it is apparent that in this regime it becomes physically impossible, {\it on average,} to fully sample the IMF up to $m_{\rm up}$. Therefore, the maximum stellar masses in the sparsest clusters must necessarily be lower, on average, than in normal clusters. {\it Since our RIOTS4 sample is dominated by such stars in sparse clusters, the steeper IMF is a natural consequence.} This effect also provides a natural explanation for the value of the steeper Salpeter IMF slope in clusters, compared with a simple --2 power law expected from simple Bondi-Hoyle accretion \citep{Oey11}. 
Our RIOTS4 field stars therefore consist of both ``tip of the iceberg'' stars that dominate small, but normal, clusters and ``deep field'' objects that are substantially more isolated. The former correspond to objects that are consistent with stochastic sampling of the IMF and clustering mass function, as described above, while the latter correspond to runaway stars and to objects that formed in greater isolation, with little or no associated clustering. As discussed in \S 4.6, \citet{Mason09} estimate, based on somewhat uncertain statistics, that Galactic field stars have a binary frequency of about 51\%, as compared with a cluster frequency of 75\%. If we assume that binaries actually form with the same frequency in the field and clusters (i.e., 75\%), then the lower observed field frequency can be attributed entirely to dilution by runaway stars, which increase the number of single stars. While dynamical ejection mechanisms do predict some binary runaways, these should be relatively insignificant for our purposes. These assumptions imply that runaways comprise 1/3 of all the massive field stars. If we further assume to first order that the tip-of-the-iceberg stars comprise another 50\% of the field, as described above, then the remaining 1/6 of the sample corresponds to objects that formed in extreme isolation. We stress that these estimates are subject to substantial, unknown uncertainties, and they represent only a first attempt at understanding the partition of the field between these components. For example, if the runaway frequency is less than 33\%, this implies that stars in the field actually form with a lower binary frequency than stars in clusters. Thus, the possibility remains that on the order of 1/6 of the field OB stars may constitute a population that formed in extreme, or even complete, isolation. As described in \S 4.3, \citet{Oey13} presented 14 candidate field stars that appear to have formed in situ, and 5 of these remain candidate members of this extremely isolated class.
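The dilution argument above can be checked with a few lines of arithmetic; a sketch under the stated simplifying assumptions (intrinsic binary frequency of 75\%, observed field frequency of 51\%, all runaways single, tip-of-the-iceberg share of 50\%):

```python
# Bookkeeping for the field-population partition described in the text.
# All input fractions come from the text; treating every runaway as a
# single star is the stated simplifying assumption.
f_cluster = 0.75   # assumed intrinsic (cluster) binary frequency
f_field   = 0.51   # observed Galactic field binary frequency (Mason et al.)

# Dilution by single runaways: f_field = f_cluster * (1 - r)
r_runaway = 1.0 - f_field / f_cluster
print(f"runaway fraction  ~ {r_runaway:.2f}")   # ~ 0.32, i.e. about 1/3

f_tip = 0.50                                    # tip-of-the-iceberg share
f_isolated = 1.0 - f_tip - r_runaway
print(f"isolated fraction ~ {f_isolated:.2f}")  # ~ 0.18, i.e. about 1/6
```

Rounding the 32\% runaway fraction to 1/3 reproduces the 1/6 residual fraction quoted for objects formed in extreme isolation.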
\citet{Lamb10} also presented 3 such isolated candidate objects. Thus we have at least 8 candidate in-situ, deep-field OB stars, which may be around 13\% of all such objects in our sample, based on the crude estimate above of their contribution to the RIOTS4 survey. As noted earlier, a number of other studies have also identified strong candidates for isolated OB star formation in the Magellanic Clouds and the Galaxy \citep[e.g.,][]{Selier11,Bressert12,Oskinova13}. In \S~\ref{S_Runaways}, we found a lower limit to the runaway frequency of $\sim 11$\%, which is consistent with our estimate of $\sim 33$\%, following our analysis above. Also, the steep upper IMF (\S 4.2) suggests that runaway stars do not dominate the field population. Since O stars have a higher observed runaway frequency than early B stars \citep{Gies87, Stone91}, the presence of runaways counteracts the IMF steepening discussed earlier. This is also consistent with the relatively high binary frequency ($0.59\pm 0.12$; \S 4.6.4) in our monitoring subsample. Thus, the picture of the field stellar population that emerges from RIOTS4 and its ancillary studies is one that is dominated by ``tip of the iceberg'' clusters, but with a significant fraction, on the order of one-third, of runaway stars. There is also evidence consistent with a significant contribution, perhaps $\sim$17\%, from stars formed in extreme isolation. Work is currently in progress to evaluate the relative contributions of these components in the RIOTS4 survey. At present, the evidence remains consistent with highly isolated OB star formation constituting a small fraction of the deep field. \section{Conclusions} The Runaways and Isolated O-Type Star Spectroscopic Survey of the SMC (RIOTS4) provides a spatially complete, spectroscopic dataset for the field massive stars in the Small Magellanic Cloud obtained from uniform criteria applied to the entire star-forming body of this galaxy. 
This survey sample is identified using photometric selection criteria combined with a friends-of-friends algorithm to identify the most isolated objects \citep{Oey04}. Over the course of five years, we obtained spectra for all targets using the IMACS and MIKE spectrographs on the Magellan Telescopes. From these spectra, we derive each star's spectral classification and radial velocity. Using RIOTS4, we derived physical parameters such as the stellar effective temperatures and masses, allowing us to investigate the shape of the field IMF above $20M_\odot$ \citep{Lamb13}. We find that the slope of the field massive star IMF is significantly steeper ($\Gamma$=2.3$\pm 0.4$) than the traditional Salpeter slope ($\Gamma$=1.35). This result is consistent with the $\Gamma$=1.8 IMF slope found by \citet{Parker98} and qualitatively corroborates previous observations of a steep field IMF slope of $\Gamma \sim 3-4$ in the Magellanic Clouds \citep{Massey95,Massey02}. Complete details are given by \citet{Lamb13}. We also use RIOTS4 data to probe limits of the most massive stars that can form in isolation or within sparsely populated clusters \citep{Lamb10,Oey13}. In conjunction with {\sl HST} and ground-based imaging, we identify sparse clusters associated with target OB stars in the RIOTS4 sample. With cluster mass estimates and RIOTS4 stellar masses, we examine the relationship between the most massive star in a cluster and the mass of the parent cluster. Our results are consistent with cluster mass being independent of the most massive member star. This applies unless the total cluster masses are so small that stars near the upper-mass limit cannot form. This suppression of the most massive stars in the smallest clusters explains the steep field IMF observed above. We also identify a compelling sample of candidate field OB stars that may have formed in situ, given their apparent lack of runaway velocities and central location within dense {\sc H ii} regions \citep{Oey13}. 
We use the radial velocities of RIOTS4 stars to examine the large-scale velocity structure of the SMC, and for an initial look at the kinematics of the field OB population and runaway frequency. We find that the kinematics mirror those of other surveys of massive stars \citep{Evans08} and gas \citep{Stanimirovic04}. We find the systemic velocity of the SMC is $\sim 150$ km s$^{-1}$, with a large velocity gradient as a function of position that roughly follows the gradient observed in {\sc H i} gas \citep{Stanimirovic04}. Given this large velocity gradient, we must consider the line-of-sight SMC systemic velocity as given by the gas kinematics when identifying runaway stars within our survey. Thus, we compare the stellar radial velocity for each RIOTS4 star with the local {\sc H i} gas velocity in the line of sight from \citet{Stanimirovic99}. Runaway candidates are defined to be those objects with a difference $> 30$ km s$^{-1}$ between stellar and {\sc H i} radial velocities. We find that 11\% of the sample meets this criterion, which is a lower bound due to our inability to detect transverse runaways. The identification of a binary runaway system and a candidate high-velocity (200 km s$^{-1}$) runaway star suggest that dynamical ejection is a significant and possibly dominant contributor to the runaway OB population. To identify binary stars within our sample, we look for stellar radial velocity variations using $9-16$ epochs of data for three IMACS multi-slit fields encompassing 29 stars. We use three methods to identify binary stars. First, binaries are likely to be those objects that exhibit large radial velocity variations whose amplitudes correlate with time interval. Second, we identify binary candidates using a statistical F-test, comparing the observed velocity variation with that expected from observational uncertainties \citep{Duquennoy91}. Third, we identify candidates using the periodicity power spectrum and then fitting for orbital solutions. 
All three methods find 10 out of 17 normal OB stars ($59\% \pm 12\%$) to be strong binary candidates. This can be compared with the binary fraction found in Galactic clusters and OB associations, which is $\sim 60-80\%$, and that for Galactic field stars, which is $\sim 40-50\%$. The RIOTS4 sample also includes a large number of emission-line stars, including two Wolf-Rayet stars and a newly identified population of dust-poor B[e] supergiant stars that may represent a transition class of objects \citep{Graus12}. The remainder of the emission-line stars are classical Oe/Be stars, which occur at a higher frequency in the SMC than in the Galaxy. The RIOTS4 data clearly extend this finding to early Oe stars and to field Oe/Be stars. Our Oe/O-star frequency of $0.27\pm0.04$ in the SMC is significantly greater than the Milky Way value, and the SMC spectral type distribution also extends to the hottest effective temperatures, in contrast to Milky Way objects \citep{GoldenMarx15}. These results support the decretion disk model for the Be phenomenon, since metal-poor stars rotate faster due to their inability to remove angular momentum via stellar winds. Similarly, our frequency of Be/B stars is higher than Galactic values, but this result may be biased by our photometric selection criteria. We will examine the RIOTS4 Be stars in a future work. Work is also underway to evaluate the fraction of deep field objects relative to ``tip of the iceberg'' stars, which will further clarify the statistics of OB star formation in the sparsest regime. In addition, we have initiated follow-up spectroscopic monitoring to obtain binary star properties, including systemic velocities. These observations will yield reliable statistics for runaway stars, data on $v\sin i$, and Oe/Be star variability. \acknowledgments Many individuals helped make this publication a reality, including the referee, who provided thoughtful comments. 
Thanks to Nidia Morrell and Phil Massey for advice on radial velocity measurements, and to Thomas Bensby, Tom Brink, and Jess Werk for advice on the data reduction pipelines. Thanks to Mario Mateo for help with scheduling the binary monitoring runs and observing advice. We thank Fred Adams, Rupali Chandar, Xinyi Chen, Oleg Gnedin, Lee Hartmann, Wen-hsin Hsu, Anne Jaskot, Mario Mateo, Eric Pellegrini, and Jordan Zastrow for helpful discussions. This work was supported by the National Science Foundation grants AST-0907758, AST-1514838; NASA grant NAG4-9248; and the University of Michigan, Rackham Graduate School.
\section{Introduction} In the central exclusive processes $pp\to p\oplus X \oplus p$, where $\oplus $ denotes the absence of hadronic activity, the central system X can be produced either via $\gamma\gamma$ fusion or in exclusive diffractive production. Detection of the forward scattered protons will allow for new and complementary studies at the LHC. Already at low luminosity, the two-photon dilepton production, precisely known from QED, can serve as a calibration process \cite{dis}. For a few fb$^{-1}$\ of integrated $pp$ luminosity, high energy photon physics opens up, giving access to precision studies of quartic gauge couplings, anomalous W or Z pair production, and, at higher luminosities, supersymmetric particle pair production in a very clean environment \cite{gg}. Starting from tens of fb$^{-1}$, the exclusive diffractive production of the Higgs boson becomes important \cite{Khoze:2001xm,Cox:2005if}. There are three main reasons why exclusive diffractive production is especially important for Higgs boson studies at the LHC: \begin{enumerate} \item Forward scattered protons tend to ``select'' the state of the central system X to be $J_z$ = 0, C and P even ($0^{++}$). Moreover, correlations between the outgoing proton momenta are sensitive to these quantum numbers. \item A mass resolution of about 2\% on the central system can be achieved from momentum measurements of the scattered protons alone, which would help to resolve a nearly degenerate supersymmetric Higgs sector, for example \cite{ellis}. \item An excellent signal-to-background ratio of about unity for SM Higgs production, and more than an order of magnitude larger for certain MSSM scenarios. \end{enumerate} The FP420 collaboration proposes \cite{FP420} to install proton detectors at some 420 m from the interaction point (IP) of CMS or ATLAS. The acceptance of such detectors matches very well the energy distributions of the forward protons in the exclusive production of the light Higgs boson.
As the cross section for diffractive production of the SM Higgs boson is expected to be small, 1--10 fb depending on the Higgs boson mass \cite{Cox:2005if}, it is imperative to measure it at high LHC luminosity with many interactions per beam crossing (event pile-up). The aim of a very precise time measurement using GASTOF \footnote{Or, using a complementary QUARTIC detector which is based on a quartz radiator.} is to determine the $\emph{z}$-coordinate of the event vertex using the $\emph{z}$-by-timing technique, and consequently to match it with the vertex measured by the central detectors. The $\emph{z}$-by-timing technique is based on the arrival time difference of the two protons detected on both sides of the IP. If the forward proton detectors are at a distance L, and the event vertex is displaced from the nominal IP at $\emph{z}=0$ to some $\emph{z}$ (the width of the longitudinal distribution of the interaction point is expected to be 50--70 mm), then \begin{equation} \Delta t = \frac{L+\emph{z}}{c} - \frac{L-\emph{z}}{c} = \frac{2 \emph{z}}{c} \label{eq:Delta_t} \end{equation} where $\Delta t$ is the arrival time difference of the two protons, to be measured with GASTOF detectors, and $c$ is the speed of light ($\beta\approx1$ for the forward protons). It follows from Eq.(\ref{eq:Delta_t}) that the precision of the $\emph{z}$ measurement is $\delta\emph{z}~=~c\delta t/\sqrt{2}$. Hence detector time resolutions of $\delta t$~=~20 or 10 ps would result in 4 mm or 2 mm resolutions in $\emph{z}$, respectively. For studies of exclusive processes at high LHC luminosity this will be essential to reduce accidental coincidences due to event pile-up, where the two forward protons and the central system X do not come from the same interaction. \section{GASTOF} GASTOF is a Cherenkov gas detector -- photons produced by high energy protons traversing the gas medium are reflected by a mirror onto a very fast photomultiplier.
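The $\emph{z}$-by-timing resolutions quoted after Eq.(\ref{eq:Delta_t}) follow directly from $\delta\emph{z}=c\delta t/\sqrt{2}$; a quick numerical check (the 20 ps and 10 ps detector resolutions are the values quoted in the text):

```python
# Vertex resolution from the z-by-timing relation dz = c * dt / sqrt(2).
from math import sqrt

c = 2.998e8  # speed of light, m/s
for dt in (20e-12, 10e-12):          # detector time resolution, s
    dz_mm = c * dt / sqrt(2) * 1e3   # vertex resolution, mm
    print(f"dt = {dt*1e12:.0f} ps  ->  dz = {dz_mm:.1f} mm")
# dt = 20 ps -> dz ~ 4.2 mm;  dt = 10 ps -> dz ~ 2.1 mm
```

These round to the 4 mm and 2 mm figures given above.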
In gases, thanks to the small refractive index, the Cherenkov photons are radiated at very small angles and propagate at a speed very close to $c$; therefore, a very good time resolution is expected. The detector is simple, very robust, and light -- the multiple scattering induced by GASTOF should be small and allow for placing it in front of or between the planes of the proton tracking detectors, without affecting the eventual resolutions. The FP420 forward detector sensitive areas will be heavily irradiated, in particular by single diffractive protons produced at huge rates at the LHC -- if needed, the gas can be flushed; therefore, GASTOF is expected to be radiation hard. In addition, thanks to its directionality and the relatively high energy threshold for incident particles, GASTOF will not be too sensitive to stray charged particles in the LHC tunnel. On the other hand, in gas the Cherenkov photon yield is not large and the radiator length is limited -- therefore, as the photon spectrum is peaked at short wavelengths, a lot of effort is put into providing good detection efficiency for UV photons. \begin{figure}[!ht] \begin{center} \epsfig{file=Plots/Gastof_simulation.eps,width=135mm} \end{center} \caption {Results of the simulation of the GASTOF prototype -- 31 cm long, filled with C$_4$F$_{10}$ at 1.2 atm, with refractive index n$\sim$1.0018. One expects that, on average, for one high energy proton hitting GASTOF centrally, about 198 Cherenkov photons hit the mirror and 13.4 photoelectrons are produced at the Burle MCP-PMT. The upper left plot shows the spatial distribution of the photons at the photocathode. The upper right plot shows the arrival time of these photons, where $t=0$ is set for a proton entering GASTOF. The lower plots show, respectively, the numbers of produced photons and photoelectrons as a function of the photon wavelength. Each plot shows the number of events per bin, hence the sums of the bin contents are equal to the total numbers of photons and photoelectrons, respectively.
} \label{SimulationPlot} \end{figure} As a first test of these expectations, two identical GASTOF prototypes have been built. A 31 cm long, 6 cm square tube is filled with C$_4$F$_{10}$ at 1.2 atm, with refractive index n$\sim$1.0018. A flat mirror at 45$^\circ$\ reflects the Cherenkov light onto a 2 inch square photocathode of the micro-channel plate photomultiplier (MCP-PMT) 85011-501 from Burle \cite{burle}. Special UV coated mirrors have been used, which have non-zero reflectivity for $\lambda >$ 160 nm and more than 75\% above 180 nm. The Burle MCP-PMTs have UV grade fused silica windows, a collection efficiency of 50\%, and multiple anodes in the form of an $8\times8$ anode matrix. They are characterized by a sharp rise time of 300 ps and a low transit time jitter of about 40 ps. A simple ray-tracing Monte Carlo simulation has been prepared, and its results for the prototype are shown in Fig.(\ref{SimulationPlot}). For high energy charged particles hitting GASTOF centrally, along its axis, almost 200 photons on average are radiated and hit the mirror. This results in about 13 photoelectrons produced on average at the photocathode. The light spot at the photocathode has a diameter of 4 cm. Finally, all the Cherenkov photons arrive at the photocathode within a 4 ps time window! These results indicate that GASTOF can indeed provide an efficient, extremely fast, and accurate timing signal at the LHC. \section{First Results} The GASTOF test stand has been prepared using a simple cosmic ray telescope, as sketched in Fig.(\ref{CosmicStand}). Two small plastic scintillator blocks, separated vertically and read out by Philips XP2020 PMTs, are used as a cosmic muon trigger. In each MCP-PMT the central $4\times4$ group of anodes was connected together by short wires of equal lengths, and the rest of the anodes were grounded. The signals from the Burle MCP-PMTs are sent via about 20 cm long SMA cables to very fast Hamamatsu C5594 amplifiers.
Two GASTOF prototypes were placed one after the other and tested simultaneously. The two GASTOF signals, as well as those from the trigger, are read using a fast 3 GHz LeCroy Wavepro 7300A scope with a digital resolution of 50 ps \cite{ref:LeCroy}. \begin{figure}[!ht] \begin{center} \epsfig{file=Plots/schem_of_cosmic_stand.eps,width=100mm} \end{center} \caption {Cosmic ray test stand for the GASTOF prototypes. Plastic scintillators (up and down), read out by Philips XP2020 photomultiplier tubes, are used as a trigger. The signals from the Burle MCP-PMTs are sent via short SMA cables to very fast Hamamatsu C5594 amplifiers. Signals from the GASTOF prototypes as well as from the trigger are read using a fast 3 GHz LeCroy Wavepro 7300A scope. } \label{CosmicStand} \end{figure} In Fig.(\ref{result}) an example of cosmic ray signals from the two GASTOF detectors is shown. More than a hundred such events were collected in a one-day run, allowing a first statistical analysis of the data. The signals were fitted using the Landau distribution function, and the crucial parameters were extracted -- the arrival and rise time of each pulse. The average rise time is similar for both detectors and is about 600 ps. This is two times worse than the single-anode rise time quoted by Burle; it is therefore believed to be worsened by the anode grouping. The time difference distribution was also measured -- it is of Gaussian shape with a width of about 100 ps. Assuming that this width is dominated by the time resolutions of the two prototypes, and that they are the same, an upper limit of 70 ps on the single-detector resolution can be set. This is about two times larger than the transit time jitter expected for the Burle MCP-PMT, but is consistent with the degradation of the Burle rise time, possibly due to the anode grouping. \begin{figure}[!ht] \begin{center} \epsfig{file=Plots/chosen_event.eps,width=100mm} \end{center} \caption {An example of cosmic ray signals from two GASTOF detectors.
The data were taken with the LeCroy Wavepro 7300A scope with a digital resolution of 50 ps. The Landau distribution function was used for the fits. } \label{result} \end{figure} Results from these first tests are encouraging, showing the great potential of GASTOF detectors in the domain of ultra-precise timing applications. Next steps will include the use of single-anode readout or an improved anode grouping scheme, as well as new Burle MCP-PMTs with even smaller transit time jitter.
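The 70 ps upper limit follows from adding two identical resolutions in quadrature; a minimal numerical sketch of that step:

```python
# If the measured 100 ps width of the time-difference distribution is
# dominated by two identical detector resolutions added in quadrature,
# sigma_diff^2 = 2 * sigma_single^2, so each prototype contributes
# sigma_single = sigma_diff / sqrt(2).  The 100 ps input is from the text.
from math import sqrt

sigma_diff = 100.0                     # ps, width of difference distribution
sigma_single = sigma_diff / sqrt(2)    # ps, per-detector upper limit
print(f"single-detector resolution <= {sigma_single:.0f} ps")
# -> about 71 ps, quoted as 70 ps in the text
```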
\section{Introduction} The modern technological progress in ultrafast optics makes it possible to produce few-cycle laser pulses \cite{few-cycle,Atto,THz}. Recently, a single-cycle pulse with a duration of 4.3 femtoseconds has been generated experimentally \cite{Single}. Furthermore, great effort for the generation of extremely short pulses via few-cycle laser pulses has been made \cite{Kalosha,soliton}, particularly, single-cycle gap solitons \cite{Macovei} and unipolar half-cycle optical pulses \cite{Song}, respectively, generated in the dense media with a subwavelength structure. If the pulse duration approaches the optical cycle, the strong-field-matter interaction enters into the extreme nonlinear optics \cite{eno}, and the standard approximations of the slowly varying envelope approximation (SVEA), and the rotating-wave approximation (RWA) are invalid \cite{breakdown,fdtd}. When the Rabi frequency of the few-cycle laser pulse becomes comparable to the light frequency, the electric field time-derivative effects will lead to carrier-wave Rabi flopping (CWRF) \cite{CWRF}, which was observed experimentally in the semiconductor GaAs sample \cite{GaAs}. In this extreme pumping regime, the simple two-level system can still serve as a reference point \cite{eno,ceo,exp}. For the few-cycle laser pulses, the absolute carrier-envelope phase (CEP) strongly affects the temporal variation of the electric field. These effects give rise to many CEP dependent dynamics, such as high-harmonic generation \cite{HHG1,HHG2,HHG3}, optical field ionization \cite{ion1,ion2}, atomic coherence and population transfer \cite{Wu, Scully}, etc. The CEP dependent strong interactions also provide routines to determine the CEP of few-cycle ultrashort laser pulses. In particular, the strong-field photoionization provides very efficient tools to measure the CEP of powerful few-cycle femtosecond laser pulses for the first time \cite{first}. 
Another promising approach to determining the CEP is based on the detection of THz emission generated by down-conversion of the few-cycle strong laser pulse \cite{THz-em}. Recently, the angular distribution of the photons emitted by an ultra-relativistic accelerated electron has also provided a direct way of determining the carrier-envelope phase of the driving laser field \cite{Piazza}. However, all these CEP measurements rely on light amplification in the strong-field regime. It is therefore very desirable to explore routes for determining the CEP of few-cycle laser pulses at relatively low intensities, without light amplification. Nonperturbative resonant effects of extreme nonlinear optics would be good candidates for measuring the CEP of few-cycle laser pulses with moderate intensities \cite{ceo,exp}. However, the period of these CEP-dependent effects is $\pi$ due to the inversion symmetry of light-matter interaction in two-level systems, so the sign of the few-cycle laser pulse still cannot be determined. To remove this $\pi$-shift phase ambiguity, the inversion symmetry must be broken \cite{metal}. In the presence of an electrical bias, the phase-dependent signal of ultrafast optical rectification in a direct-gap semiconductor film suggests a possible technique to extract the CEP \cite{Hughes}. Moreover, inversion-asymmetric media, such as polar molecules \cite{Yang} and asymmetric quantum wells \cite{cj}, could also be utilized to determine the CEP of few-cycle laser pulses. \begin{figure}[b] \centering \includegraphics[width=8.5cm]{Fig1.eps} \caption{\label{fig1} (color online) Schematic of the light-matter interaction scheme. The green curve illustrates the incident few-cycle laser pulse. The blue curve illustrates the transmitted laser field.
The symbol ``$\rightarrow$'' denotes the propagation direction.} \end{figure} In this paper, we include the counter-rotating terms (CRT) in the spontaneous emission damping and investigate their influence on the propagation dynamics of nonamplified single-cycle laser pulses in two-level media. The CRT should be considered for such ultrashort pulses interacting with a medium with strong relaxation processes, because the CRT notably suppress the broadening of the pulse envelope and the decrease of the group velocity arising from dispersion. Furthermore, when an incident single-cycle pulse with envelope area $\Theta=4\pi$ propagates through the two-level medium, it splits into two pulses. The stronger main pulse moves faster than the weaker generated soliton pulse, and the time delay between them shows a pronounced CEP dependence. Therefore, in the presence of a static electric field, we present a simple approach for measuring the CEP of few-cycle laser pulses by detecting the time delay of the generated soliton pulse. \section{Approach} \subsection{Maxwell Equations} We consider the propagation of a few-cycle laser pulse in a resonant two-level medium along the $z$ axis, as shown in Fig.~\ref{fig1}. The pulse initially moves in free space, then enters the medium through the input interface at $z=0$, propagates through the medium, and finally exits into free space through the output interface at $z=L$.
With the constitutive relation for the electric displacement for linear polarization along the $x$ axis, $D_x=\epsilon_0 E_x+P_x$, the full-wave Maxwell equations for the medium take the form: \begin{subequations} \label{max} \begin{align} \frac{\partial H_y}{\partial t}&=-\frac{1}{\mu_0} \frac{\partial E_x}{\partial z},\\ \frac{\partial E_x}{\partial t}&=-\frac{1}{\epsilon_0} \frac{\partial H_y}{\partial z} -\frac{1}{\epsilon_0} \frac{\partial P_x}{\partial t}, \end{align} \end{subequations} where $E_{x}$ and $H_{y}$ are the electric and magnetic fields, respectively, and $\mu_{0}$ and $\epsilon_{0}$ are the magnetic permeability and the electric permittivity of the vacuum, respectively. The macroscopic nonlinear polarization $P_{x} = -Nd_{12}u$ is connected with the off-diagonal density matrix element $\rho_{12} = \frac{1}{2}(u + iv)$ and the population inversion $w=\rho_{22}-\rho_{11}$, which are determined by the Bloch equations below. \subsection{Master Equation} The Hamiltonian of the two-level system we consider is \cite{kmek}: \begin{eqnarray} H&=&\sum_{k}\hbar \omega_{k}a^{\dagger}_{k}a_{k} + \hbar\omega_{0} S_{z} + \hbar\Omega(t)(S^{+} + S^{-})\nonumber\\ & + & i\sum_{k}(\vec g_{k}\cdot \vec d_{21}) \bigl \{a^{\dagger}_{k}(S^{+}+S^{-})-{\rm H.c.}\bigr\}, \label{HI} \end{eqnarray} where $\omega_{0}$ is the transition frequency, and $\vec d_{21}$ is the electric dipole moment of the transition between the upper state $|2\rangle$ and the lower state $|1\rangle$. $a^{\dagger}_{k}$ ($a_{k}$) is the creation (annihilation) operator for photons with momentum $\hbar k$ and energy $\hbar \omega_k$, while $\vec{g}_k=\sqrt{\frac{2\pi\hbar \omega_k}{V}}\vec{e}_\lambda$ describes the vacuum-atom coupling and $\vec{e}_\lambda$ represents the unit polarization vector with $\lambda\in{\lbrace 1, 2\rbrace}$.
$S^{+} = |2\rangle\langle1|$ ($S^{-} = |1\rangle\langle2|$) is the dipole raising (lowering) operator of the two-level system, $S_z=(|2\rangle\langle 2|-|1\rangle\langle 1|)/2$ is the inversion operator. $\Omega(t)=d_{12}E_x/\hbar$ is the Rabi frequency of the incident laser field. In the usual Born-Markov and mean-field approximation, but without the rotating-wave approximation, the master equation of the system is determined by \begin{eqnarray} \dot{\rho}(t)&+&i\left[\omega_0 S_{z} +\Omega(t)\left(S^{+}+S^{-}\right),\rho\right]\nonumber\\ &=& -\gamma \left[S^{+},(S^{+}+S^{-})\rho\right]+{\rm H.c.}, \label{ME} \end{eqnarray} where an overdot denotes differentiation with respect to time. Here, $[S^{+},S^{+}\rho(t)]$ and its hermitian conjugate term represent the counter-rotating terms (CRT) for the spontaneous emission damping, which are neglected under the rotating-wave approximation when the duration of the laser field pulse $\tau_p$ is much larger than $\omega_{0}^{-1}$. However, for the few-cycle pulses, even the single-cycle or sub-cycle pulse, the CRT become indispensable and cannot be neglected. In the following, we will investigate the effects of the CRT on the propagation dynamics of the single-cycle laser pulse in the two-level medium. \subsection{Bloch Equations} Based on the master equation (\ref{ME}), including the CRT in the spontaneous emission damping, the Bloch equations with CRT can be easily derived as follows: \begin{subequations} \label{Bloch-CRT} \begin{align} \dot{u}&=\omega_0 v,\\ \dot{v}&=-\omega_0 u +2\Omega(t) w-2\gamma_2 v,\\ \dot{w}&=-2\Omega(t) v-\gamma_1 (w+1), \end{align} \end{subequations} where $\gamma_{1}$ and $\gamma_{2}$ are the spontaneous decay rates of the population and polarization, respectively. 
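As a minimal sketch (not the full propagation solver used in this paper), the material part of Eqs.~(\ref{Bloch-CRT}) can be integrated on its own with a standard Runge-Kutta step for a prescribed Rabi frequency $\Omega(t)$. The function names, the integrator, and the field-free test case below are our own illustrative choices.

```python
import numpy as np

def bloch_crt_rhs(t, y, omega0, gamma1, gamma2, rabi):
    """Right-hand side of the Bloch equations with counter-rotating
    terms in the damping: note the absence of a gamma_2 damping term
    on u, in contrast to the conventional Bloch equations."""
    u, v, w = y
    du = omega0 * v
    dv = -omega0 * u + 2.0 * rabi(t) * w - 2.0 * gamma2 * v
    dw = -2.0 * rabi(t) * v - gamma1 * (w + 1.0)
    return np.array([du, dv, dw])

def rk4(rhs, y0, t0, t1, n, *args):
    """Classical fourth-order Runge-Kutta integration from t0 to t1."""
    h = (t1 - t0) / n
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(n):
        k1 = rhs(t, y, *args)
        k2 = rhs(t + 0.5 * h, y + 0.5 * h * k1, *args)
        k3 = rhs(t + 0.5 * h, y + 0.5 * h * k2, *args)
        k4 = rhs(t + h, y + h * k3, *args)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Field-free check: with Omega(t) = 0 and a fully inverted atom,
# the inversion relaxes as w(t) = -1 + 2 exp(-gamma_1 t).
# Units: fs and fs^-1; gamma_1 = 1e-3 fs^-1 corresponds to 1 ps.
u, v, w = rk4(bloch_crt_rhs, [0.0, 0.0, 1.0], 0.0, 1000.0, 20000,
              2.3, 1e-3, 2e-3, lambda t: 0.0)
```

In the full problem, $\Omega(t)$ is not prescribed but driven self-consistently by the field obtained from Eqs.~(\ref{max}); this fragment only isolates the material response.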
The Bloch equations with CRT [Eqs.~(\ref{Bloch-CRT})] are slightly different from the conventional Bloch equations (see for instance Refs.~\cite{fdtd,Kalosha}): \begin{subequations} \label{Bloch} \begin{align} \dot{u}&=\omega_0 v-\gamma_2 u,\\ \dot{v}&=-\omega_0 u +2\Omega(t) w-\gamma_2 v,\\ \dot{w}&=-2\Omega(t) v-\gamma_1 (w+1), \end{align} \end{subequations} in which the relaxation constants $\gamma_{1}$ and $\gamma_{2}$ are added phenomenologically. \begin{figure}[t] \includegraphics[width=8.0cm]{Fig2.eps} \caption{\label{fig2} (color online) (a) The Rabi frequency of the incident single-cycle pulse with envelope area $\Theta=2 \pi$. (b) and (c) are the time-dependent electric fields and the corresponding population inversions at the distance $z=90~\rm \mu m$, respectively. The length of the two-level medium is chosen as $L=110~\rm \mu m$. The blue solid curves are for the case of the Maxwell-Bloch equations with CRT, while the red dashed curves are for the case of the conventional Maxwell-Bloch equations. The black lines represent the pulse envelope.} \end{figure} \begin{figure}[b] \centering \includegraphics[width=7.5cm]{Fig3.eps} \caption{\label{fig3} The corresponding spectra of the electric field in Fig.~\ref{fig2}(b). The black dashed-dotted curve is the spectrum of the incident laser pulse. The blue curve depicts the case with CRT, and the red dashed curve the case without CRT. } \end{figure} \subsection{Numerical Method} The propagation properties of the few-cycle laser pulse in the two-level medium can be modeled by the full-wave Maxwell-Bloch equations beyond the SVEA and RWA, which can be solved by the iterative predictor-corrector finite-difference time-domain discretization scheme \cite{fdtd,fdtd-method}.
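To illustrate the leapfrog structure of such a finite-difference time-domain scheme, the vacuum limit of Eqs.~(\ref{max}) ($P_x=0$) can be advanced on staggered Yee grids in a few lines. This is only a schematic one-dimensional sketch in normalized units ($\mu_0=\epsilon_0=c=1$), not the predictor-corrector code used for the results below; the grid sizes and the Gaussian test pulse are our own.

```python
import numpy as np

# Minimal 1-D Yee (FDTD) sketch of the Maxwell curl equations in
# vacuum (P_x = 0), normalized units mu_0 = epsilon_0 = c = 1.
# E_x and H_y live on grids staggered by half a cell in space and
# half a step in time (leapfrog updates).
nz, nt, dz = 400, 100, 1.0
dt = dz                  # Courant number 1: exact advection in 1-D vacuum
z = np.arange(nz) * dz
pulse = lambda x: np.exp(-((x - 100.0) / 10.0) ** 2)
Ex = pulse(z)                        # right-moving Gaussian at t = 0
Hy = pulse(z + 0.5 * dz + 0.5 * dt)  # matching H_y sampled at t = -dt/2

for _ in range(nt):
    # dH_y/dt = -dE_x/dz, evaluated on the staggered half-grid
    Hy[:-1] -= dt / dz * (Ex[1:] - Ex[:-1])
    # dE_x/dt = -dH_y/dz (no polarization current in vacuum)
    Ex[1:] -= dt / dz * (Hy[1:] - Hy[:-1])

# After nt steps the pulse has translated by nt * dt to the right.
```

Adding the medium amounts to inserting the $-\partial P_x/\partial t$ source term in the $E_x$ update and stepping the Bloch variables alongside the field, which is where the predictor-corrector iteration of Refs.~\cite{fdtd,fdtd-method} comes in.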
For such an extremely short laser pulse, we define the vector potential at $z=0$ as in Refs.~\cite{liu, niu}: \begin{eqnarray} A_x(t)=A_0~{\rm sech} [1.76(t-t_0)/\tau_p]\sin{[\omega_p(t-t_0)+\phi]}, \label{vec} \end{eqnarray} where $A_{0}$ is the peak amplitude of the vector potential, $\omega_{p}$ is the photon energy, and $\phi$ is the CEP. $\tau_{p}$ is the full width at half maximum (FWHM) of the short pulse and $t_{0}$ is the delay. The electric field can be obtained from $E_x =-\partial A_x(t)/\partial t$. In what follows, we assume that the two-level medium is initialized in the ground state with $u=v=0$ and $w=-1$. The material parameters are chosen as in Ref.~\cite{Kalosha}: $\omega_0=2.3~\rm fs^{-1}$ ($\lambda=830~\rm nm$), $d_{12}=2~\times~10^{-29}~\rm Asm$, $\gamma_1^{-1}=1~\rm ps$, $\gamma_2^{-1}=0.5~\rm ps$, and the density $N=4.4~\times~10^{20}~\rm cm^{-3}$. The incident pulse has a single-optical-cycle FWHM of $\tau_p=2.8~\rm fs$ and the photon energy $\omega_p=\omega_0$. The Rabi frequency $\Omega_0=-A_0 \omega_p d_{12}/\hbar=1~\rm fs^{-1}$ corresponds to an electric field of $E_x=5~\times~10^9~\rm V/m$ or an intensity of $I=6.6~\times~10^{12}~\rm W/cm^2$, and the incident pulse area is defined as $\Theta=\int^{\infty}_{-\infty}\Omega(t)dt$. \section{Results and discussion} Now, we focus on the effects of CRT on the propagation dynamics of single-cycle laser pulses in the two-level medium by comparing the numerical results from the Maxwell-Bloch equations with CRT [Eqs.~(\ref{max}) and (\ref{Bloch-CRT})] and without CRT [Eqs.~(\ref{max}) and (\ref{Bloch})]. We use an incident single-cycle pulse with envelope area $\Theta=2\pi$ for these simulations, with a medium length of $L=110~\rm \mu m$. According to the standard area theorem, a pulse with area $\Theta=2\pi$ can propagate through the two-level medium transparently, without significant loss - the so-called self-induced transparency (SIT) \cite{sit}.
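The incident pulse of Eq.~(\ref{vec}) and the envelope-area convention can be checked numerically with a short script. The grid and variable names below are our own; the analytic value $\Theta=\pi\Omega_0\tau_p/1.76$ follows from $\int_{-\infty}^{\infty}{\rm sech}(at)\,dt=\pi/a$.

```python
import numpy as np

# Sketch of the incident pulse: A_x(t) = A0 sech[1.76 (t - t0)/tau_p]
# * sin[omega_p (t - t0) + phi], with E_x = -dA_x/dt and the envelope
# area Theta = Omega_0 * integral of sech[1.76 (t - t0)/tau_p] dt.
tau_p, omega_p = 2.8, 2.3      # fs, fs^-1: single-cycle pulse at resonance
Omega0 = 1.0                   # fs^-1, peak Rabi frequency
t0, phi = 50.0, 0.0            # fs delay and carrier-envelope phase

t = np.linspace(0.0, 100.0, 200001)
env = 1.0 / np.cosh(1.76 * (t - t0) / tau_p)
A_x = env * np.sin(omega_p * (t - t0) + phi)   # in units of A0
E_x = -np.gradient(A_x, t)                     # E_x = -dA_x/dt

# Envelope area: numerically (Riemann sum; tails are negligible on
# this window) and analytically via int sech(a t) dt = pi / a.
theta_num = Omega0 * np.sum(env) * (t[1] - t[0])
theta_ana = np.pi * Omega0 * tau_p / 1.76
```

For these parameters the envelope area is $\pi\Omega_0\tau_p/1.76\approx 5.0\approx 1.6\pi$; rescaling $A_0$ (and hence $\Omega_0$) sets the $\Theta=2\pi$ and $\Theta=4\pi$ pulses used in the simulations.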
However, when the laser pulse envelope contains only a few optical cycles, the standard area theorem breaks down because of the occurrence of CWRF \cite{CWRF}. From our numerical results, for short propagation distances the usual SIT regime is essentially recovered. However, at larger distances, the established conditions for SIT are destroyed by extreme nonlinear optical effects. Fig.~\ref{fig2}(b) and Fig.~\ref{fig2}(c) present the normalized electric-field pulses and the corresponding population inversions at the distance $z=90~\rm \mu m$ for the different approaches, namely, the blue solid curves depict the case obtained from the Maxwell-Bloch equations with CRT, while the red dashed curves are for the conventional approach without CRT. Compared with the incident single-cycle $2\pi$ pulse in Fig.~\ref{fig2}(a), the electric-field pulses in both cases are clearly broadened by dispersion and decreased in amplitude. Accordingly, the population inversions in both cases undergo an incomplete Rabi flopping with CWRF [Fig.~\ref{fig2}(c)]. \begin{figure}[t] \centering \includegraphics[width=8.5cm]{Fig4.eps} \caption{\label{fig4} The transmitted electric field of the incident pulses with envelope area $\Theta=4\pi$ for different initial CEPs: (a) $0$ and (b) $\pi/2$. The length of the two-level medium is $L=80~\rm \mu m$.} \end{figure} However, there are notable differences between the two approaches. The electric-field pulse from the approach with CRT [blue solid curves in Fig.~\ref{fig2}(b)] is evidently narrower than that in the case without CRT [red dashed curves in Fig.~\ref{fig2}(b)]. This can also be seen from the corresponding spectra shown in Fig.~\ref{fig3}. The spectrum of the case with CRT is clearly broader than that of the case without CRT, although both of them are narrower than the incident spectrum.
The envelope peak in the approach with CRT is larger than that in the case without CRT [Fig.~\ref{fig2}(b)]; hence, the former leads to more population inversion at the leading edge of the electric-field pulse [Fig.~\ref{fig2}(c)]. Moreover, there is a notable time delay of the electric-field pulses and the corresponding population inversions between the two approaches [Fig.~\ref{fig2}(b) and (c)]. This means that the group velocity of the propagating pulse in the conventional approach without CRT is clearly smaller than that in the case with CRT. This difference in group velocity is rooted in the different influence of dispersion in the two approaches. Comparing Eqs.~(\ref{Bloch-CRT}) with Eqs.~(\ref{Bloch}), there is no damping of the real part of the polarization $u$ in the Bloch equations with CRT, which indicates that the dispersion suffers no loss. That is to say, the presence of CRT evidently suppresses the strong dispersion effects that lead to the broadening of the pulse envelope and the decrease of the group velocity. In addition, we find that the influence of CRT on the propagation dynamics of single-cycle laser pulses is significantly enhanced with increasing spontaneous decay rates. Therefore, the CRT are important and indispensable for the study of the propagation properties of few-cycle laser pulses in media with strong relaxation processes. In the following, we use our established full-wave Maxwell-Bloch equations with CRT [Eqs.~(\ref{max}) and (\ref{Bloch-CRT})] to explore an approach for determining the CEP of a single-cycle laser pulse. In what follows, we simulate incident single-cycle pulses with a larger envelope area, i.e., $\Theta=4\pi$, propagating through the two-level medium with a length $L=80~\rm \mu m$. During the course of pulse propagation, the medium absorbs and emits photons and redistributes energy within the pulse.
The propagating pulse is altered in shape until it reaches a stable state after some propagation distance by splitting into two pulses, the strong main pulse and the SIT soliton pulse. The former moves faster than the latter, which is why the generated SIT soliton pulse breaks away from the main pulse. We show the transmitted pulses of the incident single-cycle pulse with envelope area $\Theta=4\pi$ for the CEPs $\phi=0$ and $\phi=\pi/2$ in Fig.~\ref{fig4}. It can be seen that both transmitted pulses split into two pulses. There is a time delay between the main pulse and the soliton pulse, defined as $t(\phi)$. The time delay for the incident pulse with CEP $\phi=\pi/2$ [$t(\phi=\pi/2)$] is evidently larger than that in the case with CEP $\phi=0$ [$t(\phi=0)$]. This demonstrates that the pulse time delay $t(\phi)$ is sensitive to the initial CEP of the incident pulse. For simplicity, we define the relative pulse time delay $\Delta t=t(\phi)-t(\phi=0)$ to indicate the CEP dependence. We present the relative pulse time delay $\Delta t$ as a function of the initial CEP of the incident pulse in Fig.~\ref{fig5} with blue circles. It is found that the relative pulse time delay $\Delta t$ depends on the CEP of the incident pulse in a nearly cosinelike manner. However, the time delay $t(\phi=\pi)$ is exactly the same as $t(\phi=0)$; hence, the period of the CEP-dependent pulse time delay is only $\pi$ because of the inversion symmetry of the light-matter interaction. This means that we cannot distinguish incident pulses under the shift $\phi~\rightarrow~\phi+\pi$. \begin{figure}[b] \centering \includegraphics[width=8.0cm]{Fig5.eps} \caption{\label{fig5} The relative pulse delay between the transmitted soliton pulses as a function of the CEP of the incident pulses for different strengths of the static electric field.
The solid lines are a guide for the eyes.} \end{figure} In order to remove the $\pi$-shift phase ambiguity, we add a static electric field to break the inversion symmetry of the light-matter interaction \cite{static}. As a result, the Rabi frequency in the Bloch equations [Eqs.~(\ref{Bloch-CRT})] changes as $\Omega(t)~\rightarrow~\Omega(t)+f$, where $f$ describes the strength of the static electric field. The presence of the static electric field enhances the CEP-dependent variation of the peak electric field of the single-cycle pulse, which in turn enhances the CEP dependence of the dynamics. The relative pulse time delay $\Delta t$ of the transmitted soliton pulses as a function of the initial CEP of the incident single-cycle pulses for different static electric fields is presented in Fig.~\ref{fig5}. Compared with the blue circles for $f=0$, the influence of the static electric field on the relative pulse time delay is significant. Taking $f=2\%~\Omega_0$ as an example [green squares in Fig.~\ref{fig5}], the relative time delay $\Delta t\neq 0$ at $\phi=\pi$, i.e., $t(\phi=\pi)$ is quite different from $t(\phi=0)$ in the presence of the static electric field. The variation with the CEP of the incident pulse also becomes much stronger. The period of the CEP-dependent pulse time delay becomes $2\pi$ because the inversion symmetry is broken by the static electric field. Moreover, with increasing static electric field, such as $f=3\%~\Omega_0$ and $f=4\%~\Omega_0$, the dependence of the relative pulse time delay on the initial CEP is further enhanced [red diamonds and black circles in Fig.~\ref{fig5}]. As a result, in the presence of the static electric field, if the relative time delay of the generated soliton pulses is calibrated, this effect suggests an approach for determining the CEP of the incident single-cycle laser pulse in both sign and amplitude.
In addition, it should be pointed out that the pulse time delay may be much easier to detect than other features of the soliton pulse, such as its intensity and pulse duration \cite{Yang}. Finally, in our discussion, the static electric field strengths, which are a few percent of the single-cycle laser pulse strength, exceed a few MV/cm. In order to achieve static electric fields of this strength in an experiment, one may proceed as suggested in Ref.~\cite{static}, where an additional field with a much lower frequency (such as a CO$_2$ laser field, terahertz pulses, or a midinfrared optical parametric amplifier pulse) is used instead of a static electric field. The ultrashort dynamics can prevent the system from being destroyed or ionized. \section{Summary} In summary, we investigated the propagation properties of single-cycle laser pulses in a two-level medium including the counter-rotating terms in the spontaneous emission damping. We found that the counter-rotating terms can efficiently suppress the broadening of the pulse envelope and the decrease of the group velocity. Thus, the counter-rotating terms are important and indispensable for the study of the propagation dynamics of few-cycle laser pulses, even for single-cycle and sub-cycle pulses. Furthermore, we explored the CEP dependence of the soliton pulse generated from a single-cycle laser pulse propagating through the two-level medium. The time delay of the generated soliton pulses depends sensitively on the CEP of the incident single-cycle laser pulse, and the presence of a static electric field enhances the CEP dependence of the relative pulse time delay, which has a potential application in determining the CEP of the incident single-cycle laser pulse.
\section{Introduction} Research on network analysis has made rapid progress in recent years. Network data are usually complex and therefore hard to process. To mine network data, one fundamental task is to learn a low-dimensional representation for each node, such that network properties are preserved in the vector space. As a result, various downstream applications, such as link prediction~\cite{perozzi2014deepwalk}, classification~\cite{wang2016linked}, and community detection\cite{he2015detecting}, can be conducted directly in such a vector space. For learning representations of networks, there are two main challenges that have not yet been fully resolved: \noindent \textbf{(1) Preservation of heterogeneous relationships between nodes.} There usually exist diverse relationships of different types between nodes, leading to the heterogeneity of edges. For example, in the Twitter network, four types of relationships may be observed in the interactions between two users; that is, one user may retweet, reply to, like, and mention another user's tweets. Thus it is reasonable to build four types of edges between the two users, with each type of edge corresponding to one type of relationship. Although all these edges reflect the similarity between the two users, we cannot ignore their slight differences at the "semantic" level. Therefore, taking the heterogeneity of edges into consideration when representing such networks is quite important. In the literature, several heterogeneous network embedding approaches~(e.g. PTE~\cite{tang2015pte}, Metapath2vec~\cite{dong2017metapath2vec}, and HIN2Vec\cite{fu2017hin2vec}) have been proposed to represent heterogeneous nodes or edges in the same semantic vector space. However, these methods only learn a final representation for all relationships jointly and ignore the different semantic meanings of edges.
Therefore, in order to explore the heterogeneity of edges, it is necessary to learn a relation-specific representation for each type of relationship. \noindent \textbf{(2) Preservation of high-order node proximities.} LINE~\cite{Tang2015LINE} defines two loss functions to preserve the 1-st and 2-nd order proximities together. However, it is also meaningful to further integrate the information of k-th-order neighbors to enhance the representation of nodes with small degrees. Moreover, most existing network embedding methods are equivalent to implicit matrix factorization~\cite{qiu2018network}, which is a shallow model that fails to capture high-order nonlinear proximities between nodes. GraRep~\cite{cao2015grarep} aims to capture the k-th-order proximity by factorizing the k-step~(k=1,2,$\cdots$,K) transition matrices. However, the matrix factorization technique is usually time-inefficient and struggles to learn nonlinear relationships between nodes. SDNE~\cite{wang2016structural} designs a deep auto-encoder framework to extract the nonlinear structural information of networks, but it still only considers 1-st and 2-nd order proximities without preserving even higher order proximities between nodes. Consequently, to preserve the complex network information, a better solution should leverage high-order nonlinear structural information to yield more robust network representations. Recently, multi-view learning has been applied successfully in a wide variety of applications, especially for mining heterogeneous data, such as clustering~\cite{kumar2011co}, computer vision~\cite{li2002statistical}, and information retrieval~\cite{pan2014click}. In this regard, we convert heterogeneous edges into multiple views of a network and solve a multi-view learning problem to learn representations for such networks.
To be more specific, we abstract each relationship as a view of the network, reflecting one type of proximity between nodes; thus the original network can be interpreted as a multi-view network. Finally, we formalize the task as a multi-view network embedding problem. Existing multi-view network embedding methods, such as MVE~\cite{qu2017attention} and MINEs~\cite{ma2018multi}, first learn a single-view network representation using a skip-gram model and then fuse them directly. Since their fusion strategies, i.e. averaging and adding, are both linear functions, they fail to capture complex nonlinear information, leading to sub-optimal results. Besides, there are some works~\cite{shi2018mvn2vec,xu2019multi,zhang2018scalable} that learn a unified representation and a view-specific representation for each view simultaneously, but they are shallow models that do not consider high-order proximities between nodes. \begin{figure}[t] \centering \subfigure[]{ \includegraphics[scale=0.3]{rgae.pdf} }\label{fig1a} \quad \subfigure[]{ \includegraphics[scale=0.1]{hot.pdf} }\label{fig1b} \vspace{-0.2cm} \caption{(a) Illustration of converting heterogeneous relationships to multiple views of the network. (b) Consistent and unique information carried by each pair of AMiner network views.} \label{fig1} \vspace{-0.5cm} \end{figure} Targeting the modeling of edge heterogeneity and the preservation of high-order node proximities for learning network representations, we propose a novel \textbf{R}egularized \textbf{G}raph \textbf{A}uto-\textbf{E}ncoders framework, namely RGAE. To better illustrate our motivation, we first introduce a case study on a multi-view AMiner network~(see details in Sec.~\ref{exp_set}). As shown in Fig.~\ref{fig1} (a), it contains two types of information, consistent information and unique information, as its edges are partially aligned as well as partially distinct between different views. Different views may share some consistent information.
At the same time, each view also carries some unique information that the others do not have. We further follow a similar method~\cite{shi2018mvn2vec} to perform a statistical analysis. Given a pair of views with edge sets $\mathcal{E}_1$ and $\mathcal{E}_2$, we treat the Jaccard coefficient between the two sets as the proportion of consistent information. As we can see in Fig.~\ref{fig1} (b), there is noticeable consistent information between the coauthor and text similarity views, while for the other pairs of views it is quite negligible. Thus we conclude that it is unreasonable to preserve only consistent or only unique information for multi-view network embedding. As a result, the RGAE model aims to preserve consistent and unique information simultaneously, as well as to capture high-order nonlinear proximities between nodes. The contributions of our model are threefold: \noindent \textbf{(1)}. In order to preserve the heterogeneous information of edges as much as possible, we design two kinds of graph auto-encoders to deal with consistent and unique information, respectively: one is shared across views and the other is private to each view. Through these deep and nonlinear graph auto-encoders, our RGAE model is able to represent complex high-order structural information. \noindent \textbf{(2)}. We further introduce two regularized loss functions, i.e. the similarity loss and the difference loss, to explicitly avoid information redundancy between the two types of graph auto-encoders. The similarity loss is used to extract consistent information from the shared graph auto-encoders. The difference loss encourages the independence between shared and private graph auto-encoders, so that the unique information can also be well preserved at the same time. \noindent \textbf{(3)}. To evaluate the performance of the RGAE model, we conduct extensive experiments on four real-world datasets.
The experimental results demonstrate that the proposed model is superior to existing state-of-the-art baseline approaches as well as examining the novelty of our model. \begin{table}[htbp] \centering \renewcommand\arraystretch{1.0} \topcaption{Summary of symbols}\label{tab1} \begin{tabular}{c|c|c|c} \hline \textbf{Symbol} & \textbf{Definition} & \textbf{Symbol} & \textbf{Definition} \\ \hline $\mathcal{U}$ & node set & $\mathcal{E}_{i}$ & edge set of view $i$ \\ \hline $|V|$ & number of views & $D$ & dimension of Y \\ \hline $N$ & number of nodes & $d$ & = ${\lfloor D/(|V|+1)\rfloor}$ \\ \hline $\textbf{A}_i\in \mathbb{R}^{N \times N}$ & adjacency matrix of view $i$ & $\alpha, \beta, \gamma$ & hyper-parameters \\ \hline ${\textbf{Y}_{i,p}}\in \mathbb{R}^d$ & private embedding of view $i$ & $\textbf{I}_N \in \mathbb{R}^{N \times N}$ & an identity matrix \\ \hline ${\textbf{Y}_{i,s}}\in \mathbb{R}^d$ & shared embedding of view $i$ & $\tilde{\textbf{A}}_i$ & = $\textbf{A}_i + I_N$ \\ \hline ${\textbf{Y}_{con}}\in \mathbb{R}^d$ & consistent embedding & $\tilde{\textbf{D}}_{i}(m,m)$ & = $\sum_{n} \tilde{\textbf{A}}_{i}(m,n)$ \\ \hline $\textbf{Y}\in \mathbb{R}^{D}$ & final network embedding & $\textbf{X}_1 \oplus \textbf{X}_2$ & concatenation in the last dimension \\ \hline \end{tabular} \vspace{-0.5cm} \end{table} \section{Problem Formulation and Notations} We first briefly define a multi-view network, multi-view network embedding and list the main notations used throughout this paper in Table~\ref{tab1}: \begin{definition}\textbf{Multi-View Network} A multi-view network is a network defined as $G = \left\{\mathcal{U}, \mathcal{E}_{1}, \mathcal{E}_{2}, \cdots, \mathcal{E}_{|V|} \right\}$, where $\mathcal{U}$ is a node set shared by all views, and $\mathcal{E}_{i}$ $\left ( 1\leq i\leq |V| \right )$ is the edge set of the $i$-th view, which reflects a specific type of relationship between nodes. 
\end{definition} \begin{problem}\textbf{Multi-View Network Embedding} \qquad Given a multi-view network $G =$ $ \left\{ \mathcal{U}, \mathcal{E}_{1}, \mathcal{E}_{2}, \cdots, \mathcal{E}_{|V|} \right\}$, the multi-view network embedding problem aims to learn a low-dimensional embedding representation $\textbf{Y} \in \mathbb{R}^D$~($D \ll N$). More specifically, an intermediate view-specific embedding representation $\textbf{Y}_{i,p} \in \mathbb{R}^{d}$ is learned to preserve the unique information of view $i$ and a shared embedding representation $\textbf{Y}_{con} \in \mathbb{R}^{d}$ is learned to preserve the consistent information among all views. The final embedding representation $\textbf{Y}$ is obtained from all view-specific embedding representations and the shared embedding representation by an aggregation function. \end{problem} \section{Method} \begin{figure}[htbp] \centering \vspace{-1cm} \includegraphics[scale=0.3]{model.pdf} \vspace{-0.2cm} \caption{The framework of RGAE. The illustration takes a network with three views as an example.} \label{model} \vspace{-0.5cm} \end{figure} In this section, we introduce our proposed \textbf{R}egularized \textbf{G}raph \textbf{A}uto-\textbf{E}ncoders framework, namely RGAE, for tackling the multi-view network embedding problem in detail. An illustrative example of the RGAE model is shown in Fig.~\ref{model}. \subsection{The Shared and Private Graph Auto-Encoders} \emph{Graph convolutional network}~(GCN)~\cite{kipf2016semi} is built on the idea of message passing, and convolves the representation of the central node with the representations of its neighbors to derive an updated representation of the central node. Our shared and private graph auto-encoders are both motivated as an extension of existing GCN that is able to learn valuable information for graphs. 
By stacking multiple GCN layers as an encoder and using a simple inner product operation as a decoder, the graph auto-encoders in the RGAE model are capable of extracting consistent and unique information in a multi-view network. Specifically, given a multi-view network denoted as $G = \left\{\mathcal{U}, \mathcal{E}_{1}, \mathcal{E}_{2}, \cdots, \mathcal{E}_{|V|} \right\}$, for a specific view $i$, the propagation rule of the $l$-th layer in the private encoder is formulated as: \begin{equation} \textbf{Y}_{i,p}^{(l+1)} = \sigma(\tilde{\textbf{D}_i}^{-\frac{1}{2}}\tilde{\textbf{A}}_i\tilde{\textbf{D}}_i^{-\frac{1}{2}}\textbf{Y}_{i,p}^{(l)}\textbf{W}_i^{(l)})\label{eq1} \end{equation} where $\sigma(\cdot)$ is the non-linear activation function. In this paper, we choose $relu$ as the activation function in all cases. $\textbf{W}_i^{(l)}$ is the weight matrix, and $\textbf{Y}_{i,p}^{(0)} = \textbf{X}_i$ is the feature matrix for view $i$. In particular, if node features are not available, $\textbf{X}_i$ is set to the identity matrix, as described in \cite{kipf2016semi}. The key point of the shared encoder is that the weight matrices in all layers are shared across different views~\footnote{Note the node set is shared across all views.}, which is clearly different from the private graph auto-encoder. In detail, the propagation rule of the $l$-th layer in the shared graph encoder is formulated as: \begin{equation} \textbf{Y}_{i,s}^{(l+1)} = \sigma(\tilde{\textbf{D}_i}^{-\frac{1}{2}}\tilde{\textbf{A}}_i\tilde{\textbf{D}}_i^{-\frac{1}{2}}\textbf{Y}_{i,s}^{(l)}\textbf{W}^{(l)})\label{eq2} \end{equation} Note that the weight matrix $\textbf{W}^{(l)}$ is shared view-wise rather than layer-wise. Through this shared architecture we can project all views into the same semantic space, so that the process of extracting the consistent information is more interpretable. We also allow different views to influence each other and collaborate implicitly.
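The propagation rule of Eqs.~(\ref{eq1})/(\ref{eq2}) and the inner-product decoder can be sketched in a few lines of numpy; this is an illustrative forward pass only (no training loop or weight learning), and the function names are our own.

```python
import numpy as np

def gcn_layer(A, Y, W):
    """One propagation step: Y' = relu(D~^{-1/2} A~ D~^{-1/2} Y W),
    with A~ = A + I_N (self-loops added, symmetric normalization)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ Y @ W, 0.0)     # relu activation

def decode(Y_s, Y_p):
    """Inner-product decoder: concatenate the shared and private
    embeddings, then reconstruct A_hat = sigmoid(Y Y^T)."""
    Y = np.concatenate([Y_s, Y_p], axis=1)    # Y = Y_s concat Y_p
    return 1.0 / (1.0 + np.exp(-(Y @ Y.T)))
```

A private encoder stacks `gcn_layer` calls with view-specific weights $\textbf{W}_i^{(l)}$, while the shared encoder applies the same $\textbf{W}^{(l)}$ to every view's adjacency matrix.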
The GCN layer is motivated by a first-order approximation of the localized spectral filters on graph-structured data~\cite{defferrard2016convolutional}. In this regard, it is possible to stack multiple GCN layers in both shared encoders and private encoders to capture the high-order proximity between nodes. The final outputs of these stacked shared encoders and private encoders are denoted as $\textbf{Y}_{i,s}$ and $\textbf{Y}_{i,p}$ for each view, respectively. During the forward pass, the graph decoder in view $i$ aims to calculate the reconstructed adjacency matrix $\hat{\bf{A}}_i$. In order to utilize the complete information to make a better reconstruction, we first concatenate the outputs of the shared encoder and private encoder for view $i$, then we apply the inner product operation to yield the reconstructed adjacency matrix, as described in \cite{kipf2016variational}, which is computed as follows: \begin{equation} \textbf{Y}_i = \textbf{Y}_{i,s} \oplus \textbf{Y}_{i,p}, \quad \hat{\textbf{A}}_i = sigmoid(\textbf{Y}_i {\textbf{Y}_i}^\mathrm{T})\label{eq3} \end{equation} Since the adjacency matrix preserves the topology information of the graph, it is crucial to minimize the reconstruction loss. It has been demonstrated that minimizing the reconstruction loss helps preserve the similarity between nodes~\cite{salakhutdinov2009semantic}. Due to the sparsity of networks, the adjacency matrix contains a great number of zero elements, and the numbers of zero and non-zero elements are extremely unbalanced. As a result, we minimize the reconstruction error by optimizing the balanced cross-entropy loss, which lets the model pay more attention to the non-zero elements and suppresses the redundant noise from the zero elements.
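The decoder of Eq.~\eqref{eq3} admits a very short sketch: concatenate the shared and private embeddings, take pairwise inner products, and squash them with a sigmoid. The NumPy helper below is illustrative only; the embedding values are random toy data.

```python
import numpy as np

def graph_decoder(Y_s, Y_p):
    """Sketch of Eq. (3): Y_i = Y_{i,s} (+) Y_{i,p}, A_hat = sigmoid(Y Y^T)."""
    Y = np.concatenate([Y_s, Y_p], axis=1)   # concatenation along feature axis
    logits = Y @ Y.T                         # inner product for every node pair
    return 1.0 / (1.0 + np.exp(-logits))     # element-wise sigmoid

rng = np.random.default_rng(0)
Y_s = rng.normal(size=(4, 3))                # toy shared embeddings for one view
Y_p = rng.normal(size=(4, 3))                # toy private embeddings for one view
A_hat = graph_decoder(Y_s, Y_p)
print(A_hat.shape)                           # (4, 4)
```

Because the logits matrix is symmetric, the reconstructed adjacency is symmetric as well, matching an undirected view.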
For view $i$, we compute the reconstruction loss as follows: \begin{equation} \mathcal{L}_i^{rec} = \sum_{\tiny{a_i^{(m,n)} \in \textbf{A}_{i}, \hat{a}_{i}^{(m,n)} \in \hat{\textbf{A}}_{i}}} [- \varsigma\, a_{i}^{(m,n)} \log (\hat{a}_{i}^{(m,n)}) - (1-a_{i}^{(m,n)})\log(1-\hat{a}_{i}^{(m,n)})] \label{eq4} \end{equation} where $\varsigma$ is a weighting factor that balances the importance of the non-zero elements, defined as $\frac{\#\text{zero elements}}{\#\text{non-zero elements}}$ in $\textbf{A}_{i}$. \subsection{Regularization} \subsubsection{Similarity Loss} Intuitively, the consistent information can be extracted from the outputs of the shared encoders. Since we have projected all these outputs into the same semantic space, it is natural to make them collaborate to vote for the consistent representation. In this process, we encourage the consistent representation $\textbf{Y}_{con}$ to be as similar as possible to the shared representation $\textbf{Y}_{i,s}$ of each view. As the importance of views may differ, we further allow the model to assign different weights to them. Taking all this into consideration, we introduce the following similarity loss to regularize the extraction process: \begin{equation} \mathcal{L}^{sim} = \sum_{i=1}^{|V|} \lambda_{i}^{\gamma} \Vert \textbf{Y}_{con} - \textbf{Y}_{i,s} \Vert_F^2, \quad \sum_{i=1}^{|V|} \lambda_{i} = 1, \lambda_{i} \geq 0 \label{eq5} \end{equation} where $\lambda_{i}$ is the weight for view $i$, and $\gamma$ moderates the weight distribution. By learning proper weights, the consistent representation can focus on the most informative views. Naturally, the consistent representation is calculated as the weighted combination of the outputs of the shared encoders, which reflects the collaboration between different views.
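The balancing in Eq.~\eqref{eq4} can be sketched as follows, with $\varsigma$ computed from the adjacency matrix itself. The helper name and the small `eps` guard against $\log 0$ are our own additions; the toy matrices are illustrative.

```python
import numpy as np

def balanced_bce(A, A_hat, eps=1e-9):
    """Sketch of the balanced cross-entropy loss of Eq. (4).

    Non-zero entries are up-weighted by varsigma = #zeros / #non-zeros,
    so the many zero entries of a sparse adjacency do not dominate.
    """
    n_nonzero = np.count_nonzero(A)
    varsigma = (A.size - n_nonzero) / n_nonzero   # weighting factor
    pos = -varsigma * A * np.log(A_hat + eps)
    neg = -(1.0 - A) * np.log(1.0 - A_hat + eps)
    return float(np.sum(pos + neg))

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])                      # sparse toy adjacency
good = np.where(A == 1, 0.9, 0.1)                 # confident, correct reconstruction
bad = np.where(A == 1, 0.1, 0.9)                  # confident, wrong reconstruction
print(balanced_bce(A, good) < balanced_bce(A, bad))  # True
```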
\subsubsection{Difference Loss} In order to preserve the unique information, the difference loss is introduced to encourage the isolation between consistent embeddings and unique embeddings. As the consistent information and unique information are essentially different, they should be distinguished clearly to avoid information redundancy. In other words, the shared embeddings and private embeddings should describe the information of multiple views from different perspectives, thus we define the difference loss via an orthogonality constraint between the private embedding and shared embedding in each view: \begin{equation} \mathcal{L}_i^{dif} = \Vert \textbf{Y}_{i,s} \odot \textbf{Y}_{i,p} \Vert_F^2 , \quad i = 1,2,\cdots,|V| \label{eq6} \end{equation} where $\odot$ denotes the row-wise inner product. Obviously, the difference loss drives the shared embeddings to be orthogonal to the private embeddings, so that they are as dissimilar as possible. In this way, the shared and private encoders are able to encode different aspects of the multi-view network. In this paper, we treat the output of the private graph encoder for each view as its private representation. \subsection{The Aggregation Process} As introduced above, our RGAE model includes three types of losses, i.e., the reconstruction loss, the similarity loss, and the difference loss. In order to train them jointly, the overall loss of our proposed model is summarized as follows: \begin{equation} \mathcal{L} = \sum_{i=1}^{|V|} \mathcal{L}_i^{rec} + \alpha \mathcal{L}^{sim} + \beta \sum_{i=1}^{|V|} \mathcal{L}_i^{dif} \label{eq7} \end{equation} where $\alpha$ and $\beta$ are hyper-parameters that control the importance of the similarity loss and the difference loss, respectively. Up to now, we have obtained the representations of consistent and unique information.
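A minimal sketch of the two regularizers in Eqs.~\eqref{eq5} and \eqref{eq6}, with helper names of our own choosing; the toy embeddings are chosen so that the orthogonality case is easy to verify by hand.

```python
import numpy as np

def similarity_loss(Y_con, shared_list, lambdas, gamma):
    """Eq. (5): weighted squared Frobenius distance between the consistent
    representation and each view's shared representation."""
    return sum((lam ** gamma) * float(np.sum((Y_con - Y_s) ** 2))
               for lam, Y_s in zip(lambdas, shared_list))

def difference_loss(Y_s, Y_p):
    """Eq. (6): squared row-wise inner products between shared and private
    embeddings, which pushes the two toward row-wise orthogonality."""
    row_dots = np.sum(Y_s * Y_p, axis=1)     # row-wise inner product
    return float(np.sum(row_dots ** 2))

Y_s = np.array([[1., 0.], [0., 1.]])
Y_p = np.array([[0., 1.], [1., 0.]])         # each row orthogonal to Y_s's row
print(difference_loss(Y_s, Y_p))             # 0.0 for orthogonal rows
print(similarity_loss(Y_s, [Y_s], [1.0], gamma=2.0))  # 0.0 when identical
```

The losses vanish exactly in their target configurations: the similarity loss when a view's shared output equals the consistent representation, and the difference loss when every shared row is orthogonal to the corresponding private row.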
Finally, we design an aggregation process to yield the final network representation, which can be illustrated as: \begin{equation} \textbf{Y} = Aggregator(\textbf{Y}_{con},\textbf{Y}_{1,p},\cdots,\textbf{Y}_{|V|,p})\label{eq8} \end{equation} The aggregator should integrate both the consistent and unique information effectively; it can be addition, averaging, pooling, or other designed functions. In this paper, we choose concatenation as the aggregation function since it has been proven to be useful and efficient in many existing network embedding methods~\cite{li2018multi,shi2018mvn2vec,Tang2015LINE}. As shown in Table~\ref{tab1}, the total dimension $D$ is assigned to each graph auto-encoder equally, so after the concatenation the final network embedding still satisfies $\textbf{Y}\in \mathbb{R}^{D}$. \subsection{Implementation} In practice, we utilize TensorFlow for an efficient GPU-based implementation of the RGAE model. The parameters of the RGAE model, except $\lambda_i$, can then be optimized efficiently and automatically with the back-propagation algorithm. To save space, we omit the details here. Owing to the sparsity of network data, we use sparse-dense matrix multiplication for Eqs.~\eqref{eq1} and \eqref{eq2}, as described in ~\cite{kipf2016semi}. Specially, for the view weight $\lambda_i$ in Eq.~\eqref{eq5}, we follow the method of \cite{cai2013multi} to update it. Denoting $\Vert \textbf{Y}_{con} - \textbf{Y}_{i,s} \Vert_F^2$ as $\bm{B}_i$, minimizing Eq.~\eqref{eq5} is equivalent to minimizing $\sum_{i=1}^{|V|} \lambda_{i}^{\gamma}\bm{B}_i - \xi (\sum_{i=1}^{|V|} \lambda_{i}-1)$, where $\xi$ is the Lagrange multiplier. By setting the derivative of this formula with respect to $\lambda_i$ to zero, we obtain the update rule of $\lambda_i$: $\lambda_i \leftarrow \frac{(\gamma\bm{B}_i)^{\frac{1}{1-\gamma}}}{\sum_{i=1}^{|V|}(\gamma\bm{B}_i)^{\frac{1}{1-\gamma}}}$.
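The closed-form view-weight update can be sketched in a few lines; the residual values $\bm{B}_i$ below are hypothetical numbers chosen for illustration, not measurements from the model.

```python
import numpy as np

def update_view_weights(B, gamma):
    """Closed-form update from the Lagrangian of Eq. (5):
    lambda_i proportional to (gamma * B_i)^(1 / (1 - gamma)), normalized."""
    scores = (gamma * np.asarray(B, dtype=float)) ** (1.0 / (1.0 - gamma))
    return scores / scores.sum()

# Toy per-view residuals B_i = ||Y_con - Y_{i,s}||_F^2 (hypothetical values).
B = [1.0, 2.0, 4.0]
lam = update_view_weights(B, gamma=5.0)
print(np.isclose(lam.sum(), 1.0))   # True: weights form a distribution
print(lam.argmax())                 # 0: the smallest B_i gets the largest weight
```

Since the exponent $1/(1-\gamma)$ is negative for $\gamma > 1$, views with smaller residuals receive larger weights, which matches the limiting behavior discussed next.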
It is efficient to use the single parameter $\gamma$ to control the distribution of view weights dynamically during the optimization process. According to the update rule, equal weights are assigned to all views as $\gamma$ approaches $\infty$. As $\gamma$ approaches 1, the weight of the view with the smallest $\bm{B}_i$ value approaches 1, while the other views are almost ignored since their weights are close to 0. The pseudo code is shown in Algorithm~\ref{algorithm}. \begin{algorithm}[htbp] \LinesNumbered \KwIn{$G = \left \{\mathcal{U}, \mathcal{E}_{1}, \mathcal{E}_{2}, ..., \mathcal{E}_{|V|} \right \}$,$D$,$K$,$\alpha$,$\beta$,$\gamma$ and hidden size for each encoder layer; } \KwOut{The network representation $\bm{Y}$ for the network $G$.} \For{i=1,2,$\cdots$,$|V|$}{ According to Eq.~\eqref{eq1}, construct private graph encoder for view $i$;\\ According to Eq.~\eqref{eq2}, construct shared graph encoder for view $i$;\\ According to Eqs.~\eqref{eq3} and \eqref{eq4}, construct graph decoder and calculate $\mathcal{L}_i^{rec}$;\\ } According to Eqs.~\eqref{eq5} and \eqref{eq6}, calculate $\mathcal{L}^{sim}$ and $\mathcal{L}_i^{dif}$;\\ \Repeat {\text{Convergence}} { Update parameters in RGAE model via optimizing the $\mathcal{L}$ in Eq.~\eqref{eq7};\\ } According to Eq.~\eqref{eq8}, obtain the final network representation $\bm{Y}$;\\ \Return Network representation $\bm{Y}$. \caption{The pseudo-code of the RGAE Model} \label{algorithm} \end{algorithm} \section{Experiments} \subsection{Experimental Setup} \label{exp_set} We select four multi-view network datasets from different fields. Their statistics are shown in Table~\ref{tab2}.
\begin{table}[ht] \centering \renewcommand\arraystretch{1.0} \setlength{\tabcolsep}{1.0mm} \topcaption{Overview of datasets}\label{tab2} \begin{tabular}{c|cccccc} \hline Task & Dataset & Views & Nodes & Edges & Labels & Type \\ \hline \hline \multirow{2}{*}{Multi-class Node Classification} & AMiner & 3 & 8,438 & 2,433,356 & 8 & Academic \\ & PPI & 6 & 4,328 & 1,661,756 & 50 & Biological \\ \hline \multirow{1}{*}{Multi-label Node Classification} & Flickr & 2 & 34,881 & 3,290,030 & 171 & Social \\ \hline \multirow{1}{*}{Link Prediction} & YouTube & 4 & 5,108 & 3,263,045 & - & Social \\\hline \end{tabular} \vspace{-0.5cm} \end{table} \squishlist \item AMiner~\cite{tang2008arnetminer}: The AMiner network is an academic network representing the relationships between authors. It consists of three views: author-citation, co-authorship, and text similarity. Text similarity between two authors is calculated by TF-IDF from the titles and abstracts of their papers. Each author is connected to his or her top ten most similar authors, and we only keep authors from eight research fields, following \cite{dong2017metapath2vec}. The research fields are treated as node labels. \item PPI~\cite{franceschini2012string}: The PPI network is a human protein-protein interaction network. Six views are constructed based on the co-expression, co-occurrence, database, experiment, fusion, and neighborhood information. Gene groups are treated as node labels. \item Flickr~\cite{tang2009relational}: It is a social network of online users on Flickr with two views. One view is the friendship network among bloggers. The other is a tag-proximity network in which a node connects with its top 10 most similar nodes according to their tags. We treat community memberships as node labels. \item YouTube~\cite{yang2015defining}: It is a social network consisting of four views: the friendship, the number of common friends, the number of common subscribers, and the number of common favorite videos between two users.
\squishend In order to evaluate the effectiveness of RGAE, we compare our model with three types of baselines. The \textbf{single-view} based baselines include: \squishlist \item Deepwalk~\cite{perozzi2014deepwalk}: It is a well-known baseline for network embedding. Following the recommendations of the original paper, we set the number of walks per node as 80 and the walk length as 40. The window size is set as 10. \item GraRep~\cite{cao2015grarep}: It aims to capture the $k$-order proximities by factorizing the $k$-step transition matrices. We set $k$ as 5. \item SDNE~\cite{wang2016structural}: It utilizes auto-encoders to preserve the neighbor structure of nodes. The first-order and second-order proximities are proposed to preserve the global and the local network structure. We set the number of layers as 3, and the hidden sizes as [800,400,128]. \item GAE~\cite{kipf2016variational}: It stacks GCN layers as an encoder and uses the inner product operation as a decoder. The reconstruction loss helps it capture structural information in an unsupervised manner. We set the number of layers and hidden sizes the same as SDNE. \squishend The \textbf{heterogeneous} network embedding methods include: \squishlist \item PTE~\cite{tang2015pte}: It is a heterogeneous network embedding method which can also be used to jointly train the embedding, because a multi-view network is a special type of heterogeneous network. We set the number of negative samples as 5. \item Metapath2vec~\cite{dong2017metapath2vec}: It utilizes meta-path guided random walks to generate node sequences and then uses the skip-gram model to learn the node representations. We set the number of walks, the walk length, and the window size the same as Deepwalk. We perform experiments using one of all possible meta-paths at a time, and report the best result.
\squishend The \textbf{multi-view} based baselines include: \squishlist \item Deepwalk-con: It applies Deepwalk to get a $d$-dimensional representation for each view and then concatenates the representations from all $K$ views to generate a unified representation with $K \ast d$ dimensions. \item MultiNMF~\cite{liu2013multi}: It is a multi-view matrix factorization algorithm, which extracts consistent information through a joint matrix factorization process. \item MVE~\cite{qu2017attention}: It combines single-view embeddings via weights learned from an attention mechanism to construct a multi-view network embedding. We set the parameters of the random walk and skip-gram model the same as Deepwalk, and the other parameters the same as in the original paper. \item MNE~\cite{zhang2018scalable}: It combines the information of multiple views by preserving a high-dimensional common embedding and a lower-dimensional embedding for each view. The dimensions of the additional vectors are set as 10. \item MTNE-C~\cite{xu2019multi}: It combines the common embedding and the node-specific embedding of each node into a complete embedding for the closeness measurement. We follow the default parameter setting of the original paper. \squishend For RGAE and all baselines except Deepwalk-con, the embedding dimension is set as 128. The number of graph auto-encoder layers is set as 3, and the two hidden layers' dimensions are set as 800 and 400, respectively. Both $\alpha$ and $\beta$ are selected from [0.1,0.3,0.5,0.7,1.0,1.5,2.0], and $\gamma$ is selected from [0.05,0.5,5,10,50,100,500]. The learning rate is selected from [0.001,0.01,0.1]. As node features are not available for our datasets, the feature matrix is set to an identity matrix. We treat the node embeddings learned by the various methods as features to train linear classifiers for multi-class classification, and train one-vs-rest classifiers for multi-label classification.
For link prediction, we use the cosine similarity between node pairs as features to train a logistic classifier to predict link existence. Following the setting in \cite{qu2017attention}, we use the other three views to train embeddings and predict link existence in the friendship view. To generate negative edges, we randomly sample an equal number of node pairs which have no edge connecting them. We report the best results among multiple views for the single-view based baselines. To guarantee a fair comparison, we repeat each method ten times and report the average metrics. \begin{table}[!hbt] \centering \renewcommand\arraystretch{1.0} \setlength{\tabcolsep}{1.0mm} \topcaption{ Node classification results w.r.t. Micro-F1(\%) and Macro-F1(\%) with different training ratios. '-' means out-of-memory error.}\label{tab3} \begin{tabular}{c|c|c|cc|cc|cc} \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Category} & \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{0.1} & \multicolumn{2}{c|}{0.3} & \multicolumn{2}{c}{0.5} \\ \cline{4-9} & & & Micro & Macro & Micro & Macro & Micro & Macro \\ \hline \hline \multirow{12}{*}{AMiner} & \multirow{4}{*}{Single-View} & Deepwalk & 69.9 & 68.4 & 74.3 & 73.3 & 75.1 & 74.3 \\ & & GraRep & 23.3 & 20.5 & 44.8 & 41.8 & 61.9 & 60.6 \\ & & SDNE & 64.8 & 62.5 & 70.3 & 68.0 & 70.8 & 69.4 \\ & & GAE & 60.4 & 54.8 & 62.5 & 57.7 & 63.6 & 59.3 \\ \cline{2-9} & \multirow{2}{*}{Heterogeneous} & PTE & 52.9 & 46.9 & 56.6 & 52.7 & 58.1 & 55.2 \\ & & Metapath2Vec & 70.6 & 70.1 & 75.3 & 73.5 & 76.2 & 74.9 \\ \cline{2-9} & \multirow{6}{*}{Multi-View} & Deepwalk-con & 61.4 & 59.0 & 74.2 & 72.6 & 76.1 & 74.9 \\ & & MultiNMF & 57.4 & 52.6 & 66.4 & 64.1 & 66.8 & 62.8 \\ & & MVE & 73.6 & 72.7 & 78.8 & 77.5 & 78.9 & 77.6 \\ & & MNE & 73.6 & 72.2 & 79.2 & 77.8 & 79.6 & 78.1 \\ & & MTNE-C & 54.5 & 48.8 & 57.2 & 53.9 & 58.6 & 55.2 \\ \cline{3-9} & & RGAE & \textbf{74.9}& \textbf{73.3} & \textbf{80.6} & \textbf{79.7} & \textbf{82.0} & \textbf{80.9} \\ \hline \hline
\multirow{12}{*}{PPI} & \multirow{4}{*}{Single-View} & Deepwalk & 8.9 & 4.2 & 10.9 & 6.1 & 12.1 & 7.3 \\ & & GraRep & 4.0 & 2.0 & 5.1 & 3.1 & 13.1 & 10.0 \\ & & SDNE & 11.8 & 10.7 & 14.7 & 13.4 & 17.6 & 15.0 \\ & & GAE & 9.5 & 4.3 & 12.3 & 8.0 & 13.7 & 9.1 \\ \cline{2-9} & \multirow{2}{*}{Heterogeneous} & PTE & 12.8 & 9.5 & 19.7 & 11.7 & 22.0 & 14.0 \\ & & Metapath2Vec & 13.4 & 10.0 & 20.2 & 12.8 & 22.3 & 15.7 \\ \cline{2-9} & \multirow{6}{*}{Multi-View} & Deepwalk-con & 9.9 & 6.2 & 11.9 & 8.5 & 13.6 & 9.9 \\ & & MultiNMF & 15.3 & 11.9 & 17.8 & 15.2 & 20.3 & 17.5 \\ & & MVE & 11.7 & 9.9 & 12.1 & 10.6 & 13.3 & 10.8 \\ & & MNE & 13.3 & 11.8 & 14.1 & 12.2 & 15.6 & 12.1 \\ & & MTNE-C & 3.4 & 1.6 & 4.0 & 2.0 & 6.2 & 3.5 \\ \cline{3-9} & & RGAE & \textbf{19.0} & \textbf{15.1} & \textbf{24.4} & \textbf{21.0} & \textbf{25.0} & \textbf{21.3} \\ \hline \hline \multirow{12}{*}{Flickr} & \multirow{4}{*}{Single-View} & Deepwalk & 51.7 & 32.1 & 51.9 & 27.6 & 53.2 & 27.8 \\ & & GraRep & 52.4 & 32.2 & 53.8 & \textbf{35.0} & 55.9 & 35.8 \\ & & SDNE & 47.6 & 32.1 & 48.2 & 32.6 & 49.6 & 30.5 \\ & & GAE & 34.5 & 9.1 & 37.0 & 10.4 & 38.4 & 11.1 \\ \cline{2-9} & \multirow{2}{*}{Heterogeneous} & PTE & 55.7 & 30.4 & 56.4 & 34.3 & 56.2 & 31.0 \\ & & Metapath2Vec & 55.7 & 30.8 & 56.6 & 33.9 & 56.7 & 32.2 \\ \cline{2-9} & \multirow{6}{*}{Multi-View} & Deepwalk-con & 51.9 & 32.6 & 52.5 & 28.2 & 53.7 & 28.3 \\ & & MultiNMF & - & - & - & - & - & - \\ & & MVE & 52.0 & 32.5 & 53.0 & 28.9 & 54.3 & 28.8 \\ & & MNE & 52.4 & \textbf{33.1} & 53.5 & 29.9 & 54.8 & 29.8 \\ & & MTNE-C & 23.9 & 5.2 & 23.3 & 4.8 & 22.9 & 4.6 \\ \cline{3-9} & & RGAE & \textbf{56.7} & 32.9 & \textbf{57.6} & 33.7 & \textbf{58.4} & \textbf{36.2} \\ \hline \end{tabular} \vspace{-0.5cm} \end{table} \subsection{Experimental Results} \subsubsection{Node Classification} We evaluate the performance of our method and three categories of baselines using the Micro-F1 and Macro-F1 scores. 
Table~\ref{tab3} shows the comparison on three datasets. As can be seen, our RGAE model outperforms all baselines except for Macro-F1 on the Flickr dataset. For example, on the AMiner dataset, it achieves a consistent performance gain of 1\%, 2\%, and 3\% as the percentage of training data increases. RGAE always outperforms GAE, which shows that by making good use of information from multiple views, we are indeed able to learn a robust representation for a multi-view network. The superiority of RGAE over SDNE further verifies that it is reasonable to model the heterogeneity of edges. Although GraRep captures high-order proximities between nodes, its matrix factorization process makes it hard to preserve non-linear network information, so it cannot compete with our model. One may also see that the existing multi-view network embedding approaches are not comparable to our RGAE model. The reason is that they either cannot consider the uniqueness of each view, like Metapath2Vec and MVE, or cannot capture high-order proximities between nodes, such as MTNE-C and MNE. All these observations show that the RGAE model can indeed capture more complete non-linear information from multiple views. \begin{table}[!hbt] \centering \renewcommand\arraystretch{1.0} \setlength{\tabcolsep}{1.5mm} \topcaption{Link Prediction results on YouTube dataset w.r.t.
ROC\_AUC Score(\%) and Average Precision Score (AP)(\%) with different training ratio}\label{tab4} \begin{tabular}{c|c|cc|cc|cc} \hline \multirow{2}{*}{Category} & \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{0.1} & \multicolumn{2}{c|}{0.3} & \multicolumn{2}{c}{0.5} \\ \cline{3-8} & & ROC\_AUC & AP & ROC\_AUC & AP & ROC\_AUC & AP \\ \hline \hline \multirow{4}{*}{Single-View} & Deepwalk & 74.4 & 73.6 & 74.7 & 74.0 & 78.4 & 77.2 \\ & GraRep & 80.2 & 79.6 & 80.3 & 79.8 & 80.7 & 80.0 \\ & SDNE & 81.8 & 82.7 & 82.3 & 83.0 & 85.0 & 85.3 \\ & GAE & 77.0 & 77.7 & 77.3 & 78.2 & 80.3 & 79.6 \\ \hline \multirow{2}{*}{Heterogeneous} & PTE & 69.5 & 63.8 & 70.1 & 64.8 & 69.1 & 64.7 \\ & Metapath2Vec & 78.5 & 73.8 & 80.6 & 75.8 & 81.9 & 79.7 \\ \hline \multirow{6}{*}{Multi-View} & Deepwalk-con & 78.9 & 78.0 & 79.8 & 78.9 & 84.7 & 83.1 \\ & MultiNMF & 80.3 & 80.2 & 81.9 & 82.3 & 82.2 & 82.8 \\ & MVE & 82.0 & 82.4 & 83.0 & 82.8 & 83.4 & 83.1 \\ & MNE & 82.3 & 82.7 & 83.3 & 83.5 & 84.1 & 84.6 \\ & MTNE-C & 52.4 & 53.0 & 62.3 & 62.9 & 66.1 & 65.8 \\ \cline{2-8} & RGAE & \textbf{82.7} & \textbf{83.2} & \textbf{85.5} & \textbf{85.2} & \textbf{86.3} & \textbf{85.9} \\ \hline \end{tabular} \vspace{-0.5cm} \end{table} \subsubsection{Link Prediction} We select the YouTube dataset to verify the performance of link prediction. Table~\ref{tab4} shows that the RGAE model significantly outperforms all baseline methods. The results verify again that RGAE indeed can preserve the abundant information in multi-view networks. It is noticeable that the SDNE even outperforms all multi-view and heterogeneous network embedding approaches. By designing two kinds of graph auto-encoders, RGAE utilizes both consistent and unique information from multiple views to describe the node proximity in a detailed way, which achieves better performance than SDNE. As a result, we conclude that RGAE is able to explore the structural properties of multi-view networks. 
\begin{figure*}[htbp] \centering \subfigure[Citation]{ \includegraphics[scale=0.115]{aminer_citation.pdf} } \subfigure[Coauthor]{ \includegraphics[scale=0.115]{aminer_coauthor.pdf} } \subfigure[Text-Similarity]{ \includegraphics[scale=0.115]{aminer_similarity.pdf} } \subfigure[RGAE]{ \includegraphics[scale=0.115]{aminer_rgae.pdf} } \vspace{-0.2cm} \caption{2d t-SNE Visualization for AMiner Dataset. Each point represents a node and colors represent labels. red : computational linguistics; blue : computer graphics; green : theoretical computer science} \vspace{-1cm} \label{vis} \end{figure*} \subsubsection{Network Visualization} We project the embeddings of the AMiner dataset onto 2d vectors with t-SNE~\cite{maaten2008visualizing}. Fig.~\ref{vis} shows the network visualization of the RGAE model as well as each view's visualization obtained by its shared and private encoders. The difference between the RGAE model and the single-view models is that a single-view model lacks not only the loss-function constraints that separate the consistent and unique information within a view, but also the cooperation and complementarity between different views. In order to make the visualization results clearer and more legible, we select three of the eight categories of nodes for visualization, and each color represents a research field. We can see that our multi-view based approach works better than learning each view individually. The citation view achieves a relatively good representation, but there are still a few nodes that have not been assigned to the correct cluster. Therefore, it still needs complementary information from other views to correct it and obtain a robust representation. The visualization of RGAE, by contrast, separates the three research fields clearly, which illustrates the necessity of modeling heterogeneous edges with consideration of all types of relationships.
\subsection{Influence of Loss Functions} The visualization results have proven the importance of both consistent and unique information. In this part, we study the effect of the loss functions. In our RGAE model, there exist two loss functions, i.e., the similarity loss and the difference loss, which regularize the processes of extracting the consistent and unique information, respectively. To evaluate their influences, we remove the similarity loss, the difference loss, and both of them respectively, and show the performance in Fig.~\ref{loss}. The histogram clearly shows the importance of the two loss functions for our RGAE model. When we remove the similarity loss, there is a slight decline in performance, because without it the quality of the consistent information is affected. However, since there is relatively little consistent information among the views and the common representation accounts for only a small proportion of the dimensions of the final representation, the decline is not severe. When there is no difference loss, there is a noticeable decrease in performance, because the isolation between different views' specific information becomes worse without the regularization of the difference loss. Moreover, if we remove the similarity loss and difference loss simultaneously, the performance of the RGAE model declines dramatically. All these observations demonstrate the necessity of the similarity loss and difference loss, although the degree of influence varies between the two.
\begin{figure}[ht] \centering \subfigure[Node classification on AMiner dataset w.r.t Micro-F1(\%)]{ \includegraphics[width=4cm,height=3cm]{loss1.pdf} } \qquad \qquad \subfigure[Node classification on AMiner dataset w.r.t Macro-F1(\%)]{ \includegraphics[width=4cm,height=3cm]{loss2.pdf} } \vspace{-0.2cm} \caption{The effectiveness of the similarity loss and difference loss in our RGAE model} \vspace{-1cm} \label{loss} \end{figure} \subsection{Parameter Sensitivity} With the results presented in Fig.~\ref{para}, we focus on the parameter sensitivity of the RGAE model, including the number of embedding dimensions, $\alpha$, $\beta$, and $\gamma$. We perform node classification on the AMiner dataset and link prediction on the YouTube dataset to evaluate the parameter sensitivity. To explore the contributions of these parameters, we fix the others and evaluate the effect of one parameter at a time on the experimental results. Overall, different datasets and tasks have different sensitivities to the embedding dimension. On the AMiner dataset, the performance increases with the dimension and then stabilizes once the dimension reaches 64. On the YouTube dataset, the model performs well when the dimension is 32, and the performance decreases slightly as the dimension continues to increase. Compared with the AMiner dataset, the YouTube dataset achieves good results in lower dimensions. When the proportion of training data is small, a large number of dimensions tends to cause overfitting. The curves of the experimental metrics with respect to $\alpha$ or $\beta$ are not monotonic: their overall trends both first rise and then fall, because when the proportions of the similarity loss and difference loss are too large, the proportion of the reconstruction loss is weakened, which affects the representation abilities of the graph auto-encoders. As for $\gamma$, we find that it clearly influences the results for both tasks.
As we can see, it is more suitable to set the value of $\gamma$ larger than 5. \vspace{-3mm} \begin{figure} \centering \subfigure[Parameter sensitivity on AMiner node classification]{ \includegraphics[scale=0.18]{paras_node.pdf} } \subfigure[Parameter sensitivity on YouTube link prediction]{ \includegraphics[scale=0.18]{paras_link.pdf} } \vspace{-2mm} \caption{Performance of the RGAE models under varying hyper-parameters} \label{para} \vspace{-3mm} \end{figure} \vspace{-8mm} \section{Related Work} \subsubsection{Network Embedding: } Network embedding is dedicated to mapping the nodes of a network into a low-dimensional vector space that preserves structural information. Earlier studies such as Deepwalk~\cite{perozzi2014deepwalk}, node2vec~\cite{grover2016node2vec}, and Struc2vec~\cite{ribeiro2017struc2vec} use the skip-gram model to preserve network structures through neighborhood sampling. Traditional deep neural networks have also received widespread attention because of their nonlinear underlying structure. SDNE~\cite{wang2016structural}, SiNE~\cite{wang2017signed}, and Deepcas~\cite{li2017deepcas} have a strong advantage in retaining the highly nonlinear structure of the network. More recent methods adopt graph neural networks to perform convolutional operations on graphs. GCN~\cite{kipf2016semi}, GATs~\cite{velivckovic2017graph}, and GraphSAGE~\cite{hamilton2017inductive} are all representative end-to-end approaches for network representation learning. These studies are directed at homogeneous networks. Heterogeneous network embedding has also attracted attention because of its practical significance. PTE~\cite{tang2015pte} is an extension of LINE to heterogeneous networks. Besides, Metapath2vec~\cite{dong2017metapath2vec}, HIN2vec~\cite{fu2017hin2vec}, and RHINE~\cite{lu2019auto} use meta-paths to capture the structural and semantic information in heterogeneous networks.
\vspace{-5mm} \subsubsection{Multi-view Learning: } Another line of related work is multi-view learning. Some traditional multi-view learning algorithms, such as co-training~\cite{kumar2011co}, co-clustering~\cite{yao2017revisiting}, and cross-domain fusion~\cite{franco2005fusion}, analyze multi-view networks for specific tasks. MVE~\cite{qu2017attention}, MINES~\cite{ma2018multi}, MVNE~\cite{sun2018multi}, and mvn2vec~\cite{shi2018mvn2vec} account for the first-order collaboration to align the representations of each node across views. In these studies, the models responsible for learning the network representation of each view are shallow, so they cannot capture the high-order non-linear network structure. With that in mind, we consider using deep neural networks to replace the shallow models as the basic components to embed the network. ACMVL~\cite{lu2019auto} uses multiple auto-encoders to learn the specific features of each view and maps all specific features to the same latent space, but it requires a supervised network to help the auto-encoders optimize their parameters. In contrast, our model solves the multi-view network embedding problem in a fully unsupervised manner. \vspace{-5mm} \section{Conclusion} \vspace{-2mm} In this paper, we explore how to model the heterogeneity of edges by solving a multi-view network embedding problem and propose a novel RGAE model. More specifically, our model makes use of two types of graph auto-encoders to extract the consistent and unique information of views respectively, and proposes two novel loss functions to distinguish these two types of information. Experimental results not only indicate the superiority of the proposed model but also verify the contributions of the two loss functions. In the future, we plan to apply the framework to more applications.
A meaningful direction is to use multi-view learning to represent general heterogeneous networks, that is, networks whose nodes and edges both have multiple types. \vspace{-3mm} \section*{Acknowledgement} \vspace{-0.2cm} The research was supported by the National Natural Science Foundation of China (No. 61802140) and the Hubei Provincial Natural Science Foundation (No. 2018CFB200). \vspace{-0.3cm} \bibliographystyle{splncs04}
\section{Introduction} \label{sec:intro} Style transfer aims to manipulate the colors and textures of a given source image to share a target image's ``look and feel''. The source and target images are often so-called ``content'' and ``style'' images, respectively, where style transfer methods aim to generate an image that has the content of the source image and the style of the target image. One of the first deep learning methods to produce images with artistic style was Deep Dream \cite{deepdream}. Deep Dream works by reversing a convolutional neural network (CNN) trained for image classification via image-based optimization. The process begins with a noise image, which is iteratively updated through an optimization process to make the CNN predict a certain output class. Inspired by Deep Dream, Gatys \emph{et al}\bmvaOneDot, ~\cite{gatys2015neural} proposed a neural style transfer (NST) method that minimizes the statistical differences of deep features of content and style images, extracted from intermediate layers of a pre-trained CNN (\emph{e.g}\bmvaOneDot, VGG net~\cite{simonyan2014very}). After the impressive results achieved by the NST work in \cite{gatys2015neural}, many methods have been proposed to perform style transfer leveraging the power of CNNs (\emph{e.g}\bmvaOneDot, \cite{gatys2016preserving, gupta2017characterizing, li2017universal, luan2017deep,snelgrove2017high, ruder2018artistic, shen2018neural, Tesfaldet2018, park2019arbitrary, yang2019controllable, yoo2019photorealistic, svoboda2020two, wang2020collaborative, wang2020diversified, yim2020filter, liu2021learning, afifi2021histogan, kotovenko2021rethinking}). The work presented in this paper extends the idea of image-optimization NST to achieve multi-style transfer through a color-aware optimization loss (see Figure \ref{fig:teaser}). We begin with a brief review of image-optimization NST in Section \ref{sec:background}, then elaborate on our method in Section \ref{sec:method}.
\section{Background} \label{sec:background} NST-based methods use similarity measures between CNN latent features at different layers to transfer the style statistics from the style image to the content image. In particular, the methods of~\cite{gatys2015neural, berger2016incorporating} utilize the feature space provided by the 16 convolutional and 5 pooling layers of the 19-layer VGG network~\cite{simonyan2014very}. The max-pooling layers of the original VGG were replaced by average pooling, as this has been found to be useful for the NST task. For a pre-trained VGG network with fixed weights and a given content image, the goal of NST is to optimize a generated image so that the difference of feature map responses between the generated and content images is minimized. Formally, let $\mathrm{I}_c$ and $\mathrm{I}_g$ be the content and generated images, respectively. Both $\mathrm{I}_c$ and $\mathrm{I}_g$ share the same image dimensions, and each pixel in $\mathrm{I}_g$ is initialized randomly. Let $F^l_c$ and $F^l_g$ be the feature map responses at VGG-layer $l$ for $\mathrm{I}_c$ and $\mathrm{I}_g$, respectively. Then $\mathrm{I}_g$ is optimized by minimizing the content loss $\mathcal{L}_{content}(\mathrm{I}_c,\mathrm{I}_g)$ as follows: \begin{equation}\label{eq:content_loss} \mathcal{L}_{content} = \frac{1}{2}\sum_l{\left \|F^l_c-F^l_g\right \|_2^2}, \end{equation} where $\left \| \cdot \right \|_2^2$ is the squared Frobenius norm. Gatys \emph{et al}\bmvaOneDot~\cite{gatys2015neural} leveraged the Gram matrix to calculate the correlations between the different filter responses to build a style representation of a given feature map response. 
The Gram matrix is computed by taking the inner product between the feature maps: \begin{equation}\label{eq:gram_matrix} G^l_{ij} = \frac{1}{m^l}\langle F^l_{(i:)},F^l_{(j:)}\rangle, \end{equation} where $F^l_{(i:)}$ and $F^l_{(j:)}$ are the $i^\text{th}$ and $j^\text{th}$ vectorized feature maps of VGG-layer $l$, $m^l$ is the number of elements in each map of that layer, and $\langle .,. \rangle$ denotes the inner product. To make the style of the generated $\mathrm{I}_g$ match the style of a given style image $\mathrm{I}_s$, the difference between the Gram matrices of $\mathrm{I}_g$ and $\mathrm{I}_s$ is minimized as follows: \begin{equation}\label{eq:style_loss} \mathcal{L}_{style} = \sum_l{w_l\left \|G^l_s-G^l_g\right \|_2^2}, \end{equation} where $\mathcal{L}_{style}$ is the style loss, $G^l_s$ and $G^l_g$ are the Gram matrices of $\mathrm{I}_s$ and $\mathrm{I}_g$, respectively, and $w_l$ is a scalar weighting parameter that determines the contribution of layer $l$ to $\mathcal{L}_{style}$. To generate an image $\mathrm{I}_g$ such that the general image content is preserved from $\mathrm{I}_c$ and the texture style statistics are transferred from $\mathrm{I}_s$, the optimization process jointly minimizes the final loss function: \begin{equation}\label{eq:final_loss} \mathcal{L} = \alpha \mathcal{L}_{content} + \beta \mathcal{L}_{style}, \end{equation} where $\alpha$ and $\beta$ are scale factors that control the strength of content reconstruction and style transfer, respectively. From Equation \ref{eq:gram_matrix}, it is clear that the Gram matrix measures the correlation of feature channels over \textit{the entire image}. As a result, the Gram-based loss in Equation \ref{eq:style_loss} transfers the \textit{averaged global} image style statistics to the generated image. 
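As a reference, the three losses above can be sketched in NumPy (a minimal illustration; in the actual method the features $F^l$ are produced by the VGG network, which is omitted here):

```python
import numpy as np

def content_loss(feats_c, feats_g):
    """Content loss: half the squared Frobenius distance between
    content and generated feature maps, summed over layers."""
    return 0.5 * sum(np.sum((fc - fg) ** 2) for fc, fg in zip(feats_c, feats_g))

def gram_matrix(F):
    """Gram matrix: F has shape (channels, m), each row a vectorized
    feature map; entries are inner products scaled by 1/m."""
    m = F.shape[1]
    return (F @ F.T) / m

def style_loss(feats_s, feats_g, weights):
    """Style loss: per-layer weighted squared Frobenius distance
    between the Gram matrices of the style and generated images."""
    return sum(w * np.sum((gram_matrix(fs) - gram_matrix(fg)) ** 2)
               for w, fs, fg in zip(weights, feats_s, feats_g))
```

For identical feature maps both losses are zero, so minimizing them pulls the generated image's features toward the content features and its Gram statistics toward the style statistics.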
That means if the style image has multiple styles, the traditional Gram-based optimization often fails to sufficiently convey all styles from the style image to the generated image; instead, it generates images with mixed styles. Figure \ref{fig:motivation} illustrates this limitation. As shown in Figure \ref{fig:motivation}-(A), the style image has more than a single style, which results in an unpleasing mixed style when transferred using traditional Gram-based NST optimization (Figure \ref{fig:motivation}-(B)). Our style transfer result is shown in Figure \ref{fig:motivation}-(C). \begin{figure}[t] \includegraphics[width=\textwidth]{motivation.pdf} \vspace{0.5mm} \caption{Traditional Gram matrix optimization does not consider the correlation between style image's colors and their styles. Therefore, if the style image has more than a single style, like the case shown in (A), this optimization often results in a mixed style as shown in (B). Our method, in contrast, considers the color-style correlation in both images, as shown in (C). Dashed lines in purple refer to the goal of the optimization process.} \label{fig:motivation} \end{figure} Most existing image-optimization NST methods introduce different variations of this main idea of transferring the averaged global image style statistics to the generated one \cite{gatys2015neural, gatys2016preserving, berger2016incorporating, li2017demystifying, gupta2017characterizing}. However, these methods are restricted to \textit{single} averaged style statistics per content-style image pair, and lack artistic controls. While the method of~\cite{risser2017stable} proposed a procedure to artistically control the style of the output image, it requires tedious human effort, asking users to annotate semantic segmentation masks and correspondences in both the style and content images. 
Unlike other existing methods, we introduce the first Color-Aware Multi-Style (CAMS) transfer method that enables style transfer \emph{locally} based on nearest colors, where multiple styles can be transferred from the style image to the generated one. Our proposed method extracts a color palette from both the content and style images, and \emph{automatically} constructs the region/color associations. The CAMS method performs style transfer in which the texture of a specific color in the style image is transferred to the region that has the nearest color in the content image. Figure \ref{fig:teaser} shows multiple examples of the generated images (bottom row) from a single input content image with different style images (top row). The regions highlighted in yellow and blue indicate two example styles that were transferred from the style image to regions in the generated image based on the nearest color in the content image. Our proposed framework allows multi-style transfer to be applied in a meaningful way. In particular, styles are transferred in association with colors. By correlating styles and colors, we offer another artistic dimension in preserving the content color statistics together with the transferred texture. To further allow artistic controls, we show how our method allows users to manually select the color associations between the reference style and content image for more transfer options. We believe that our proposed framework and the interactive tool are useful for the research community and enable more aesthetically pleasing outputs. Our source code will be publicly released upon acceptance. \section{Our Method} \label{sec:method} \begin{figure}[t] \includegraphics[width=\textwidth]{images/main.pdf} \vspace{0.5mm} \caption{Our method extracts a color palette from both the content and style images. This color palette is then used to generate color weighting masks. 
These masks are used to weight the extracted deep features of both style and input images. This color-aware separation results in multiple Gram matrices, which are then used to compute our color-aware style loss. This loss, along with the content loss, is used for optimization.} \label{fig:method} \end{figure} Figure \ref{fig:method} illustrates an overview of our method. As shown in the figure, we use color palettes to guide the optimization process. Given the two color palettes extracted from the content and style images, we merge them to generate a single input color palette, $\mathrm{C}$, which is then used to generate a set of color masks. We use these masks to weight deep features of both input and style images from different layers of a pre-trained CNN. This color-aware separation of deep features results in multiple Gram matrices used to compute our style loss during image optimization. In this section, we elaborate on each step of our algorithm. \subsection{Mask Generation} Given an image, $\mathrm{I} \in \mathbb{R}^{n\times m\times3}$, and a target palette, $\mathrm{C}$, our goal is to compute a color mask, $\mathrm{M} \in \mathbb{R}^{n\!\times\!m}$, for each color $\mathrm{t}$ in our color palette $\mathrm{C}$, such that the final mask reflects the similarity of each pixel in $\mathrm{I}$ to color $\mathrm{t}$. We generate $\mathrm{M}$ by computing a radial basis function (RBF) between each pixel in $\mathrm{I}$ and our target color $\mathrm{t}$ as follows: \begin{equation}\label{RBF} \mathrm{M}_j = \exp\left(-\left(\left \| \mathrm{I}_j - \mathrm{t} \right \|_2/\sigma\right)^2\right), \end{equation} where $\sigma$ is the RBF fall-off factor, $\mathrm{I}_j$ is the $j^{\text{th}}$ pixel in $\mathrm{I}$, and $\mathrm{t}$ is the target color. Next, we blur the generated mask $\mathrm{M}$ by applying a $15\!\times\!15$ Gaussian blur kernel with a standard deviation of $5$ pixels. 
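The mask computation can be sketched in NumPy as follows (a minimal illustration; the optional Gaussian blur step is omitted, and `sigma` is the RBF fall-off factor above):

```python
import numpy as np

def color_mask(img, t, sigma=0.3):
    """RBF color mask: img has shape (n, m, 3) with values in [0, 1],
    t is a length-3 palette color.  Each pixel's mask value decays
    with its color distance to t, controlled by the fall-off sigma."""
    d = np.linalg.norm(img - t, axis=-1)   # ||I_j - t||_2 per pixel
    return np.exp(-(d / sigma) ** 2)       # equals 1 at an exact color match
```

A pixel whose color equals $\mathrm{t}$ receives weight one, and the weight decays quickly toward zero as the color distance grows.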
This smoothing step is optional but was empirically found to improve the final results in most cases. For each of $\mathrm{I}_s$ and $\mathrm{I}_g$, we generate a set of masks, each of which is computed for a color in our color palette, $\mathrm{C}$. Here, $\mathrm{I}_g$ refers to the current value of the image we optimize, not the final generated image. Note that computing our color mask is performed through differentiable operations and, thus, can be easily integrated into our optimization. \subsection{Color-Aware Loss} \label{sec:method.our_loss} \begin{figure}[t] \includegraphics[width=\textwidth]{ours_vs_neural_style.pdf} \vspace{0.5mm} \caption{In many scenarios, the style image can include multiple styles. Traditional Gram matrix-based optimization (\emph{e.g}\bmvaOneDot, Gatys \emph{et al}\bmvaOneDot, \cite{gatys2015neural}) cannot capture these styles and, as a result, may produce noisy images. In contrast, our color-aware optimization produces more pleasing results while preserving the style-color matching.} \label{fig:ours_vs_neural_style} \end{figure} After computing the color mask sets $\{\mathrm{M}^{(\mathrm{t})}_s\}_{\mathrm{t} \in \mathrm{C}} $ and $\{\mathrm{M}^{(\mathrm{t})}_g\}_{\mathrm{t} \in \mathrm{C}} $ for $\mathrm{I}_s$ and $\mathrm{I}_g$, respectively, we compute two sets of weighted Gram matrices, one for $\mathrm{I}_s$ and one for $\mathrm{I}_g$. According to the given mask weights, each set of weighted Gram matrices captures the correlation between deep features (extracted from several layers of the network) of the pixels of interest in the image. This weighted Gram matrix helps our method focus on transferring styles only between corresponding pixels of interest in the style image and our generated image during optimization. 
For a color $\mathrm{t}$ in our color palette $\mathrm{C}$, this weighted Gram matrix, $G^{l(\mathrm{t})}$, is computed as follows: \begin{equation}\label{weighted_feature} \hat{F}^l = F^l \odot \mathrm{W}_\mathrm{t}, \end{equation} \begin{equation}\label{weighted_gram} G^{l(\mathrm{t})}_{ij} = \frac{1}{m^l}\langle \hat{F}^l_{(i:)}, \hat{F}^l_{(j:)}\rangle, \end{equation} \noindent where $\hat{F}^l_{(i:)}$ and $\hat{F}^l_{(j:)}$ are the $i^\text{th}$ and $j^\text{th}$ vectorized feature maps of the network layer $l$ after weighting, $m^l$ is the number of elements in each map of that layer, $\odot$ is the Hadamard product, and $\mathrm{W}_\mathrm{t}$ represents the computed mask for $\mathrm{t}$ after the following processing. First, we linearly interpolate the width and height of our computed mask, for color $\mathrm{t}$, to match the width and height of the original feature map, $F^l$, before vectorization. Second, we duplicate the computed mask to have the same number of channels as $F^l$. For each layer $l$ in the pre-trained classification network and based on Equation \ref{weighted_gram}, we compute $\{G^{l(\mathrm{t})}_s\}_{\mathrm{t} \in \mathrm{C}}$ and $\{G^{l(\mathrm{t})}_g\}_{\mathrm{t} \in \mathrm{C}}$ for our style and generated image, respectively. Finally, our color-aware style loss is computed as follows: \begin{equation}\label{eq:color_loss} \mathcal{L}_{CAMS} = \sum_l^{L_s}\sum_\mathrm{t}{\left \|G^{l(\mathrm{t})}_s - G^{l(\mathrm{t})}_g\right \|_2^2}.\\ \end{equation} By generating different weighted Gram matrices, our method is able to convey different styles present in the style image, which is not feasible using classic Gram matrix optimization. As shown in Figure \ref{fig:ours_vs_neural_style}, the style image includes different styles and textures. 
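The weighted Gram computation described above can be sketched in NumPy (a simplified illustration; the mask is assumed to have already been resized to the feature-map resolution):

```python
import numpy as np

def weighted_gram(F, mask):
    """Weighted Gram matrix: F has shape (channels, h, w) and mask has
    shape (h, w).  Each channel is weighted by the color mask (Hadamard
    product), then the scaled Gram matrix of the weighted maps is formed."""
    Fw = F * mask[None, :, :]        # broadcast the mask over channels
    Fv = Fw.reshape(F.shape[0], -1)  # vectorize each weighted map
    m = Fv.shape[1]                  # number of elements per map
    return (Fv @ Fv.T) / m
```

With a mask of all ones this reduces to the plain Gram matrix, while a spatially varying mask restricts the style statistics to the pixels associated with one palette color.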
NST using the Gram matrix (\emph{e.g}\bmvaOneDot, Gatys \emph{et al}\bmvaOneDot, \cite{gatys2015neural}) fails to capture these multiple styles in the reference style image and produces an unpleasing result, as shown in the third column in Figure \ref{fig:ours_vs_neural_style}. In contrast, our color-aware loss considers these styles and effectively transfers them to the generated image, as shown in the last column in Figure \ref{fig:ours_vs_neural_style}. For example, the text is transferred from the letter (white background) in the style image to the man's white beard in the generated image. \subsection{Optimization and Implementation Details} \label{sec:method.optimization} The flow of our method is shown in Algorithm \ref{algorithm} and proceeds as follows. First, we initialize each pixel in $\mathrm{I}_g$ with the corresponding one in $\mathrm{I}_c$. Afterward, we generate two color palettes for $\mathrm{I}_g$ and $\mathrm{I}_s$, respectively. We use the algorithm proposed in \cite{chang2015palette} to extract the color palette of each image. The number of colors per palette is a hyperparameter that can be changed to get different results. In our experiments, we extracted a color palette of five colors for each of our content and style images. Then, we merge them to generate the final color palette, $\mathrm{C}$. After merging, the final color palette has at most ten colors, as we exclude redundant colors. 
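The merging step can be sketched as below. This is a stand-in illustration: the palettes themselves are extracted with the method of Chang et al., and the distance threshold `tol` for dropping redundant colors is an assumed parameter, not taken from the paper:

```python
import numpy as np

def merge_palettes(palette_s, palette_g, tol=0.1):
    """Merge the style and content palettes into one palette C,
    excluding colors that duplicate (lie within tol of) a kept color."""
    merged = []
    for c in list(palette_s) + list(palette_g):
        c = np.asarray(c, dtype=float)
        if all(np.linalg.norm(c - kept) > tol for kept in merged):
            merged.append(c)
    return merged
```

Merging two five-color palettes therefore yields between five colors (identical palettes) and ten colors (fully distinct palettes).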
\begin{algorithm}[t] \scriptsize \SetAlgoLined \textbf{Input:} Style image $\mathrm{I}_s \in \mathbb{R}^{n\times m\times3}$, content image $\mathrm{I}_c \in \mathbb{R}^{n\times m\times3}$, a pre-trained network, $f$, for image classification, layer indices $L_s$ and $L_c$ for style features and content features, respectively, and loss term weighting factors, $\alpha$ and $\beta$\\ \KwResult{Generated image $\mathrm{I}_g \in \mathbb{R}^{n\times m\times3}$ that shares styles in $\mathrm{I}_s$ and content in $\mathrm{I}_c$.} $\mathrm{I}_g = \mathrm{I}_c$\\ $\mathrm{C}_s = \texttt{palette}\left(\mathrm{I}_s\right)$, $\mathrm{C}_g = \texttt{palette}\left(\mathrm{I}_g\right)$\\ $\mathrm{C} = \texttt{merge}\left(\mathrm{C}_s, \mathrm{C}_g\right)$\\ $\{\mathrm{M}^{(\mathrm{t})}_s\}_{\mathrm{t} \in \mathrm{C}} = \texttt{mask}\left(\mathrm{I}_s, \mathrm{C}\right)$, $\{\mathrm{M}^{(\mathrm{t})}_g\}_{\mathrm{t} \in \mathrm{C}} = \texttt{mask}\left(\mathrm{I}_g, \mathrm{C}\right)$\\ $\{F^{S(l)}_s\}_{l \in L_s} = f\left(\mathrm{I}_s, L_s\right), \{F^{C(l)}_c\}_{l \in L_c} = f\left(\mathrm{I}_c, L_c\right)$\\ $\{G^{l(\mathrm{t})}_s\}_{l \in L_s, \mathrm{t} \in \mathrm{C}} = \texttt{weighted\_Gram}\left(\{F^{S(l)}_s\}_{l \in L_s}, \{\mathrm{M}^{(\mathrm{t})}_s\}_{\mathrm{t} \in \mathrm{C}} \right)$\\ \While{not converged}{ $\{F^{C(l)}_g\}_{l \in L_c} = f\left(\mathrm{I}_g, L_c\right)$\\ $\mathcal{L}_{content} = \sum_l^{L_c}{\left \|F^{C(l)}_c-F^{C(l)}_g\right \|_2^2}$\\ $\{F^{S(l)}_g\}_{l \in L_s} = f\left(\mathrm{I}_g, L_s\right)$\\ $\{G^{l(\mathrm{t})}_g\}_{l \in L_s, \mathrm{t} \in \mathrm{C}} = \texttt{weighted\_Gram}\left(\{F^{S(l)}_g\}_{l \in L_s}, \{\mathrm{M}^{(\mathrm{t})}_g\}_{\mathrm{t} \in \mathrm{C}}\right)$\\ $\mathcal{L}_{CAMS} = \sum_l^{L_s}\sum_\mathrm{t}{\left \|G^{l(\mathrm{t})}_s - G^{l(\mathrm{t})}_g\right \|_2^2}$\\ $\mathrm{I}_g = \texttt{minimize}\left(\alpha \mathcal{L}_{content} + \beta \mathcal{L}_{CAMS}\right)$\\ 
$\{\mathrm{M}^{(\mathrm{t})}_g\}_{\mathrm{t} \in \mathrm{C}} = \texttt{mask}\left(\mathrm{I}_g, \mathrm{C}\right)$\\ } \caption{Color-aware optimization.\label{algorithm}} \end{algorithm} After constructing $\mathrm{C}$, we generate color masks $\{\mathrm{M}^{(\mathrm{t})}_s\}_{\mathrm{t} \in \mathrm{C}}$ and $\{\mathrm{M}^{(\mathrm{t})}_g\}_{\mathrm{t} \in \mathrm{C}}$ for $\mathrm{I}_s$ and $\mathrm{I}_g$, respectively. Then, we extract deep features from $\mathrm{I}_s$ and $\mathrm{I}_c$, which represent our target style latent representation and content latent representation, respectively. We adopted the VGG-19 net \cite{simonyan2014very} as our backbone to extract such deep features, where we used the $4^{\text{th}}$ and $5^{\text{th}}$ conv layers to extract deep features for the content loss, and the first 5 conv layers to extract deep features for our color-aware style loss. We then construct the weighted Gram matrices, as described in Section \ref{sec:method.our_loss}, using the deep features of the style and generated images. The weighted Gram matrices of both generated and style images, and the deep features of generated and content images, are used to compute our color-aware style loss (Equation \ref{eq:color_loss}) and the content loss (Equation \ref{eq:content_loss}), respectively. Then, the final loss is computed as: \begin{equation}\label{eq:final_loss_ours} \mathcal{L} = \alpha \mathcal{L}_{content} + \beta \mathcal{L}_{CAMS}, \end{equation} \noindent where $\alpha$ and $\beta$ are set to $1.0$ and $10^4$, respectively. After each iteration, we update the color masks of our generated image to track changes in $\mathrm{I}_g$ during optimization. To minimize Equation \ref{eq:final_loss_ours}, we adopted the L-BFGS algorithm \cite{liu1989limited} for 300 iterations with a learning rate of 0.5. 
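The structure of this optimization loop can be illustrated with the toy sketch below. It is a stand-in, not the actual implementation: the real method minimizes $\alpha \mathcal{L}_{content} + \beta \mathcal{L}_{CAMS}$ over VGG features with L-BFGS, whereas here a quadratic surrogate objective and plain gradient descent keep the sketch dependency-free:

```python
import numpy as np

def optimize(I_c, style_target, alpha=1.0, beta=1e4, iters=300, lr=0.5):
    """Skeleton of the loop: initialize I_g from the content image,
    then iteratively step toward minimizing a weighted sum of a content
    term and a style term (toy quadratic stand-ins for the VGG losses)."""
    I_g = I_c.copy()                                  # I_g = I_c
    for _ in range(iters):
        grad = alpha * (I_g - I_c) + beta * (I_g - style_target)
        I_g -= lr * grad / (alpha + beta)             # normalized step
        # the real method refreshes the color masks of I_g at this point
    return I_g
```

With the surrogate objective, the iterates converge to the weighted average $(\alpha \mathrm{I}_c + \beta \mathrm{T})/(\alpha + \beta)$, dominated by the style term since $\beta \gg \alpha$.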
To generate our masks, we have a hyperparameter, $\sigma$, that can be interactively used to control the RBF fall-off, which consequently affects the final result of the optimization. In our experiments, we found that $\sigma \in [0.25, 0.3]$ works well in most cases. A nice feature of our method is that it allows more transfer flexibility by enabling the user to determine color associations between our style and content images. To that end, we follow the same procedure explained in Algorithm \ref{algorithm} with one exception: we do not update the color masks of the generated image, $\mathrm{I}_g$, so that the user's selection keeps being respected. Figure \ref{fig:user_selection} shows two user cases that reflect the benefit of our manual user selection tool. As shown in the first case (left), the user associates the reddish color in the content image's color palette to different colors in the style image's color palette. Based on this selection, our generated image has transferred styles based on this color-style correlation. In particular, the change happened only to the style of pixels associated with reddish pixels in the face region. As can also be seen in the second case (right), the transferred style is constrained to those styles associated with selected colors in the style image's color palette. For the second user case in Figure \ref{fig:user_selection} (bottom row), the auto mode struggled to transfer multiple styles because the given style image has a limited range of colors (i.e., only gray-color variations). Such style images, which have limited style options to offer, may result in less appealing outputs. Nevertheless, our manual color-association tool gives the user the flexibility to modify the generated image for more aesthetically pleasing outputs by associating the colors and restricting the modified region, as shown in Figure \ref{fig:user_selection} (bottom row). 
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{user_selection.pdf} \vspace{0.1mm} \caption{Our method allows artistic controls, where the user can manually select color associations or discard some colors from the generated palettes. In this figure, we present our results of the auto and user-selection modes for two different reference style images.} \label{fig:user_selection} \end{figure} \begin{figure}[!b] \includegraphics[width=\textwidth]{qualitative_figure_paper.pdf} \vspace{-1mm} \caption{Qualitative comparisons between our method and other style transfer methods: neural style transfer (NST) \cite{gatys2015neural}, adaptive instance normalization (AdaIN) \cite{huang2017arbitrary}, avatar net \cite{sheng2018avatar}, linear style transfer (LST) \cite{li2018learning}, and relaxed optimal transport (ROT) \cite{kolkin2019style}.} \label{fig:qualitative_fig} \end{figure} \section{Evaluation} \label{sec:results} Evaluating NST techniques is a challenging problem facing the NST community, as indicated in \cite{li2017universal, jing2019neural}. With that said, user studies have been widely adopted in the literature to evaluate subjective results. For the sake of completeness, we conducted an online user study to evaluate our proposed color-aware NST compared with other relevant techniques\footnote{We acknowledge that an in-person user study within a controlled display environment is preferred; however, due to the covid-19 pandemic we are limited to an online study.}. For each image pair (style/content), we compared six different NST methods: ours and five others, namely neural style transfer (NST) by Gatys \emph{et al}\bmvaOneDot~\cite{gatys2015neural}, adaptive instance normalization (AdaIN) \cite{huang2017arbitrary}, avatar net \cite{sheng2018avatar}, linear style transfer (LST) \cite{li2018learning}, and relaxed optimal transport (ROT) \cite{kolkin2019style}. 
The results of these methods, including ours, were anonymously shown to each subject on a single online webpage, such that the result of each method occupies $\sim$25\% of the screen. For our method, we used the auto-style-transfer mode, where the color matching and optimization were performed automatically as described in Section \ref{sec:method}. We collected answers from 30 subjects: 18 female and 12 male. The subjects were asked to anonymously give a score from one to five for the result of each method---a higher score indicates a more appealing result. We evaluated these methods on eight different image pairs. The images were randomly collected from Flickr, such that each style image must include more than one style to evaluate our hypothesis of transferring multiple styles from the style image. The image pairs and results -- along with the color-aware loss (Equation \ref{eq:color_loss}), style loss (Equation \ref{eq:style_loss}), and content loss (Equation \ref{eq:content_loss}) -- are shown in Figure \ref{fig:qualitative_fig}. Though Figure \ref{fig:qualitative_fig} shows that our method does not always outperform other methods in all loss terms, our method has a consistent trade-off among the color-aware, style, and content losses in most cases, which is confirmed by the user study results (shown in Table \ref{Table:user_study}). As can be seen, 38\% of the total subjects rated the results of our method as highly appealing (a score of five). On the other hand, the second best method (i.e., NST~\cite{gatys2015neural}) received a score of five from only 16\% of the votes. The study emphasizes the superiority of our proposed method compared with other methods, especially in capturing multiple styles from the style images. Table \ref{Table:quantatitive_flickr} shows the mean value of the color-aware, style, and content losses on our Flickr test set. 
We also report the average processing time required by each method on a single NVIDIA GeForce GTX 1080 graphics card in Table \ref{Table:quantatitive_flickr}. In addition to the user-study evaluation, we collected another test set that includes 200 content and style pairs from real portrait face images (randomly selected from the FFHQ dataset \cite{karras2019style}) and painting portrait face images (randomly selected from the face set of the WikiArt dataset \cite{saleh2015large, afifi2021histogan}). We report the color-aware, style, and content losses in addition to the FID score \cite{heusel2017gans} achieved by our method and other NST methods \cite{li2017universal, sheng2018avatar, li2018learning, yoo2019photorealistic, park2019arbitrary, kotovenko2021rethinking} in Table \ref{Table:quantatitive_faces}. As shown in Table \ref{Table:quantatitive_faces}, our method achieves competitive results compared to other NST methods across all error metrics; see Figure \ref{fig:face_set} for qualitative comparisons. \begin{table}[!t] \begin{minipage}{.5\linewidth} \caption{Results of a user study conducted on 30 subjects to evaluate NST methods. 
A score of five represents the most aesthetically appealing result.\label{Table:user_study}} \centering \resizebox{0.9\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{5}{c|}{Rating} \\\cline{2-6} & 1 & 2 & 3 & 4 & 5 \\ \hline NST \cite{gatys2015neural} & 34\% & 13\% & 23\% & 14\% & 16\% \\ \hline AdaIN \cite{huang2017arbitrary} & 16\% & 27\% & 32\% & 19\% & 6\% \\ \hline Avatar net \cite{sheng2018avatar} & 50\% & 27\% & 17\% & 4\% & 2\% \\ \hline LST \cite{li2018learning} & 23\% & 35\% & 22\% & 15\% & 5\% \\ \hline ROT \cite{kolkin2019style} & 35\% & 23\% & 20\% & 16\% & 6\% \\ \hline CAMS (ours) & 10\% & 12\% & 24\% & 16\% & \textbf{38\%} \\ \hline \end{tabular}} \end{minipage}% \hspace{1mm} \begin{minipage}{.45\linewidth} \centering \caption{Color-aware, style, and content loss values of NST methods on our collected set from Flickr. The first and second best results are highlighted in yellow (bold) and green, respectively.\label{Table:quantatitive_flickr}} \resizebox{0.95\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \hline Method & Color-aware loss & Style loss & Content loss & Time (secs) \\ \hline NST \cite{gatys2015neural} & 0.280 & \cellcolor[HTML]{FFFFC7}\textbf{0.004} & 71.915 & 32.49 \\ \hline AdaIN \cite{huang2017arbitrary} & 0.532 & 0.013 & 62.844 & \cellcolor[HTML]{FFFFC7}\textbf{1.05} \\ \hline Avatar net \cite{sheng2018avatar} & 0.583 & 0.018 & \cellcolor[HTML]{9AFF99}57.963 & 8.82 \\ \hline LST \cite{li2018learning} & 0.783 & 0.024 & \cellcolor[HTML]{FFFFC7}\textbf{56.413} & \cellcolor[HTML]{9AFF99}3.72 \\ \hline ROT \cite{kolkin2019style} & 0.376 & 0.011 & 60.986 & 241.11 \\ \hline CAMS (ours) & \cellcolor[HTML]{FFFFC7}\textbf{0.180} & \cellcolor[HTML]{9AFF99}0.005 & 69.240 & 60.53 \\ \hline \end{tabular}} \end{minipage} \hspace{-2mm} \end{table} \begin{table}[t] \caption{Quantitative results (FID \cite{heusel2017gans}, color-aware, style, and content loss values) of NST methods on the FFHQ $\rightarrow$ WikiArt test 
set. The first and second best results are highlighted in yellow (bold) and green, respectively.\label{Table:quantatitive_faces}}\centering \resizebox{0.9\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Loss/Method & WCT \cite{li2017universal} & Avatar net \cite{sheng2018avatar} & LST \cite{li2018learning} & SANET \cite{park2019arbitrary} & WCT2 \cite{yoo2019photorealistic} & RST \cite{kotovenko2021rethinking} & CAMS (ours) \\ \hline Color-aware & 0.009 & 0.019 & 0.020 & \cellcolor[HTML]{9AFF99}0.005 & 0.039 & 0.012 & \cellcolor[HTML]{FFFFC7}\textbf{0.003} \\ \hline Style $\times 10^{-2}$ & 0.0300 & 0.0790 & 0.0804 & \cellcolor[HTML]{FFFFC7}\textbf{0.0186} & 0.1580 & 0.0570 & \cellcolor[HTML]{9AFF99}0.0218 \\ \hline Content & 41.839 & 32.445 & 30.871 & 29.085 & \cellcolor[HTML]{FFFFC7}\textbf{2.227} & 44.473 & \cellcolor[HTML]{9AFF99}23.617 \\ \hline FID & 154.920 & 147.77 & 126.56 & \cellcolor[HTML]{FFFFC7}\textbf{111.58} & 167.54 & 289.60 & \cellcolor[HTML]{9AFF99}119.93 \\ \hline \end{tabular}} \end{table} \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{face_set.pdf} \vspace{-1mm} \caption{Qualitative comparisons on the FFHQ $\rightarrow$ WikiArt test set. Shown are results of the following methods: whitening and coloring transforms (WCT) \cite{li2017universal}, adaptive instance normalization (AdaIN) \cite{huang2017arbitrary}, avatar net \cite{sheng2018avatar}, linear style transfer (LST) \cite{li2018learning}, arbitrary style transfer with style-attentional networks (SANET) \cite{park2019arbitrary}, style transfer via wavelet transforms (WCT2) \cite{yoo2019photorealistic}, and rethinking style transfer (RST) \cite{kotovenko2021rethinking}.} \label{fig:face_set} \end{figure} \section{Conclusion} \label{sec:conclusion} We have shown that Gram matrix-based optimization methods often fail to produce pleasing results when the target style image has multiple styles. 
To fix this limitation, we have presented a color-aware multi-style loss that captures correlations between different styles and colors in both style and generated images. Our method is efficient, simple, and easy to implement, achieving pleasing results while capturing the different styles in the given reference image. We have also illustrated how our method can be used interactively by enabling users to manually control how styles are transferred from the given style image. Finally, through a user study, we showed that our method achieves the most visually appealing results compared to alternative style transfer methods.
\section{Introduction} In this paper we consider finite graphs $G$ with vertex set $V(G)$ and edge set $E(G)$. A graph may contain multiple edges but no loops. A $k$-edge-coloring of a graph $G$ is a function $c\colon E(G)\to \{1,\dots,k\}$. A coloring $c$ is proper if $c(e_1)\ne c(e_2)$ for any two adjacent edges $e_1,e_2\in E(G)$. The chromatic index $\chi'(G)$ of $G$ is the minimum $k$ such that $G$ admits a proper $k$-edge-coloring, and it is known that $\chi'(G)\in \{\Delta(G),\dots,\Delta(G)+\mu(G)\}$, where $\mu(G)$ denotes the maximum multiplicity of an edge in $G$ \cite{Vizing}. A graph $G$ is class $1$ if $\chi'(G)=\Delta(G)$ and class $2$ otherwise. Let $r\ge2$ be a real number. A nowhere-zero $r$-flow in a graph $G$ is a pair $(D,f)$ where $D$ is an orientation of $G$ and $f\colon E(G)\to [1,r-1]$ is a function such that $\sum_{e\in E^+(v)} f(e) = \sum_{e\in E^-(v)} f(e)$ for each vertex $v\in V(G)$, where $E^+(v)$ and $E^-(v)$ denote the set of all outgoing and incoming arcs at $v$, respectively. The circular flow number of $G$ is $$\inf\{ r | G \mbox{ has a nowhere-zero $r$-flow} \},$$ and it is denoted by $\phi_c(G)$. It was proved in \cite{GTZ} that, for every bridgeless graph, $\phi_c(G)\in \Q$ and that the infimum is attained as a minimum. If $G$ has a bridge, then it does not admit any nowhere-zero flow. Tutte's $5$-Flow Conjecture \cite{Tutte} states that every bridgeless graph admits a nowhere-zero $5$-flow, and it is one of the most important conjectures in this field. It is well known that it is equivalent to its restriction to cubic graphs. A snark is a cyclically 4-edge-connected cubic graph with girth at least 5 which does not admit a nowhere-zero 4-flow. Tutte (see \cite{Tutte2} and \cite{Tutte}) proved that a cubic graph $G$ has a nowhere-zero 3-flow if and only if $G$ is bipartite and that $G$ has a nowhere-zero 4-flow if and only if $G$ is a class 1 graph. In \cite{Stef_circ_flow} it is shown that there is no cubic graph $H$ with $3 < \phi_c(H) < 4$. 
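To make the definition concrete, a candidate nowhere-zero $r$-flow can be verified mechanically; the sketch below checks the two defining conditions (all flow values in $[1, r-1]$ and conservation at every vertex):

```python
def is_nowhere_zero_r_flow(vertices, arcs, f, r, eps=1e-9):
    """Check a candidate nowhere-zero r-flow: arcs is a list of directed
    edges (u, v) giving the orientation D, and f maps each arc to its
    flow value.  All values must lie in [1, r-1], and at every vertex
    the outgoing flow must equal the incoming flow."""
    if any(not (1 - eps <= f[a] <= r - 1 + eps) for a in arcs):
        return False
    for v in vertices:
        out_f = sum(f[a] for a in arcs if a[0] == v)
        in_f = sum(f[a] for a in arcs if a[1] == v)
        if abs(out_f - in_f) > eps:
            return False
    return True
```

For instance, a consistently oriented cycle with flow value $1$ on every arc satisfies both conditions and is a nowhere-zero $2$-flow.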
In \cite{Stef} these results are generalized to $(2t+1)$-regular graphs. \begin{teo}[\cite{Stef}]\label{teo:bipartite} A $(2t+1)$-regular graph $G$ is bipartite if and only if $\phi_c(G)= 2+ \frac{1}{t}$. Furthermore, if $G$ is not bipartite, then $\phi_c(G)\ge 2+ \frac{2}{2t-1}$. \end{teo} \begin{teo}[\cite{Stef}]\label{teo:1-factor_bipartite} A $(2t+1)$-regular graph $G$ has a $1$-factor $M$ such that $G-M$ is bipartite if and only if $\phi_c(G)\le 2+\frac{2}{2t-1}$. \end{teo} In \cite{Stef} it is further shown that, in contrast to the cubic case, for every $t\ge 2$, there is no flow number that separates $(2t+1)$-regular class $1$ graphs from class $2$ ones. In particular, Theorem \ref{teo:1-factor_bipartite} implies that a $(2t+1)$-regular graph $G$ having $\phi_c(G)\le2+ \frac{2}{2t-1}$ is class $1$, and it was conjectured that this is the largest flow number $r$ such that every $(2t+1)$-regular graph $H$ with $\phi_c(H)\le r$ is class $1$. Let $ \Phi^{(2)}(2t+1) :=\inf \{ \phi_c(G)\colon G \text{ is a } (2t+1) \text{-regular class } 2 \text{ graph}\}$. \begin{conj}[\cite{Stef}]\label{conj:Stef_inf} For every integer $t\ge1$ $$ \Phi^{(2)}(2t+1) = 2+\frac{2}{2t-1}. $$ \end{conj} We prove Conjecture \ref{conj:Stef_inf} in Section \ref{PROOF}. Moreover, let us define $\ca G_{2t+1}:=\{G \colon G$ is a $(2t+1)$-regular class $1$ graph such that there is no perfect matching $M$ of $G$ such that $G-M$ is bipartite$ \}$. Consider the following parameter: $$ \Phi^{(1)}(2t+1):=\inf\{\phi_c(G)\colon G \in \ca G_{2t+1} \}. $$ In Section \ref{PROOF} we further prove that $\Phi^{(1)}(2t+1)=2+\frac{2}{2t-1}$ for all positive integers $t$. If a graph $G$ has a small odd edge cut, say of cardinality $2k+1$, then $\phi_c(G) \geq 2 + \frac{1}{k}$. Recall that an $r$-graph is an $r$-regular graph $G$ such that $|\partial_G(X)|\ge r $, for every $X\subseteq V(G)$ with $|X|$ odd, where $\partial_G(X)$ denotes the set of edges of $G$ with exactly one end in $X$. 
The circular flow number of the complete graph $K_{2t+2}$ on $2t+2$ vertices is $2 + \frac{2}{t}$ \cite{Stef_circ_flow}, and $K_{2t+2}$ is a class $1$ graph. \begin{conj}[\cite{Stef}]\label{conj:Stef_class1} Let $G$ be a $(2t+1)$-regular class $1$ graph. Then $\phi_c(G)\le 2 + \frac{2}{t}$. \end{conj} \begin{conj}[\cite{Stef}]\label{conj:Stef_r-graph} Let $G$ be a $(2t+1)$-graph. Then $\phi_c(G)\le 2 + \frac{2}{t}$. \end{conj} Since $(2t+1)$-regular class $1$ graphs are $(2t+1)$-graphs, Conjecture \ref{conj:Stef_r-graph} implies Conjecture \ref{conj:Stef_class1}. We show that both these conjectures are false by constructing $(2t+1)$-regular class $1$ graphs with circular flow number greater than $2+\frac{2}{t}$. The construction of the counterexamples relies on a family of counterexamples to Jaeger's Circular Flow Conjecture \cite{Jaeger} which was given by Han, Li, Wu, and Zhang in \cite{Zhang1}. \section{Circular flow number of $(2t+1)$-regular graphs.} \label{PROOF} Let $G$ and $H$ be two connected cubic graphs with at least $6$ vertices. Let $G'= G - \{v_1v_2, v_3v_4\}$, where $\{v_1v_2, v_3v_4\}$ is a matching of $G$. Let $u,w$ be two adjacent vertices of $H$ and let $H'=H-\{u,w\}$. Furthermore, let $u_1,u_2$ and $w_1,w_2$ be the neighbors of $u$ and $w$ in $H$, respectively, which are elements of $V(H')$. The dot product $G \cdot H$ is defined to be the graph with $V(G\cdot H) = V (G) \cup V (H')$, and $E(G \cdot H) = E(G') \cup E(H') \cup \{v_1u_1 , v_2u_2 , v_3w_1 , v_4w_2 \}.$ The following is a well-known result that goes back to Izbicki \cite{Izbicki_66}. \begin{lem}[Parity Lemma] Let $G$ be a $(2t+1)$-regular graph of class $1$ and $c\colon E(G)\to \{1,2,\dots,2t+1\}$ a proper edge-coloring of $G$. Then, for every edge-cut $C\subseteq E(G)$ and every color $i$, the following relation holds: $$ |C\cap c^{-1}(i)| \equiv |C| \mod 2. $$ \end{lem} Let $G$ be a graph and let $M\subseteq E(G)$. We denote by $G+M$ the graph obtained by adding a copy of $M$ to $G$.
Such a graph has vertex set $V(G+M)=V(G)$ and edge set $E(G+M)=E(G)\cup M'$, where $M'$ is a copy of $M$. Let $G_1$ and $G_2$ be cubic graphs having perfect matchings $M_1$ and $M_2$, respectively. Let $G$ be a dot product $G_1\cdot G_2$ where we remove from $G_1$ two non-adjacent edges $e_1,e_2 \in E(G_1- M_1)$ and from $G_2$ two adjacent vertices $x,y$ such that $xy\in M_2$. Then we say that $G$ is an $(M_1,M_2)$-dot-product of $G_1$ and $G_2$. Moreover, let $H$ be a cubic graph with a perfect matching $M_3$ such that, for all positive integers $t$, $H+(2t-2)M_3$ is a $(2t+1)$-regular class $1$ (resp. class $2$) graph. Then we say that $H$ has the $M_3$-class-$1$ (resp. $M_3$-class-$2$) property. \begin{lem}\label{lem:class 2} For $i=1,2$, let $G_i$ be a cubic graph having the $M_i$-class-$2$ property, where $M_i$ is a perfect matching of $G_i$. Moreover let $G$ be an $(M_1,M_2)$-dot-product of $G_1$ and $G_2$ and $x,y\in V(G_2)$ the two adjacent vertices that have been removed from $G_2$ when constructing $G$. Then $M=(M_1\cup M_2)\setminus \{xy\}$ is a perfect matching of $G$ and $G$ has the $M$-class-$2$ property. \end{lem} \begin{proof} Let $e_1 = v_1v_2$, $e_2=v_3v_4$ be the edges that have been removed from $G_1$ in order to obtain $G$. Define $H=G+(2t-2)M$ and $H_i=G_i+(2t-2)M_i$, $i \in \{1,2\}$, and let $a_1,a_2,a_3,a_4$ be the edges of the dot product incident to $v_1,v_2,v_3$ and $v_4$, respectively. Then $C = \{a_1,a_2,a_3,a_4\}$ is a $4$-edge-cut in $H$ which separates $H[V(G_2)-\{x,y\}]$ and $H[V(G_1)]$. Suppose to the contrary that $H$ is a class $1$ graph, and let $c$ be a proper $(2t+1)$-edge-coloring of $H$. By the Parity Lemma, either $C$ intersects only one color class, or it intersects two color classes in exactly two edges each. Moreover, if $c(a_1)=c(a_2)$, then $c(a_3)=c(a_4)$, and a $(2t+1)$-edge-coloring is defined naturally on $H_1$ by the coloring of $H$, in contradiction to the fact that $H_1$ is a class $2$ graph. Therefore, $c(a_1)\ne c(a_2)$, and so $\{c(a_3), c(a_4)\} = \{c(a_1),c(a_2)\}$.
In this case a $(2t+1)$-edge-coloring is naturally defined on $H_2$ by the coloring of $H$, leading to a contradiction again. \end{proof} For the following result we will need the concept of balanced valuations, see \cite{Bondy} and \cite{Jaeger_bal_val}. Let $G$ be a graph. A balanced valuation of $G$ is a function $\omega \colon V(G)\to \R$ such that $|\sum_{v\in X} \omega(v)| \le |\partial_G(X)|$, for every $X\subseteq V(G)$. Balanced valuations and nowhere-zero flows are equivalent concepts, as the following theorem shows (this formulation is given in \cite{Stef_circ_flow}). \begin{teo}[\cite{Jaeger_bal_val}] \label{bv} Let $G$ be a graph. Then $G$ has a nowhere-zero $r$-flow if and only if there is a balanced valuation $\omega\colon V(G)\to \R$ of $G$ such that, for every $v\in V(G)$, there is an integer $k_v$ such that $k_v \equiv \degree_G(v) \mod{2} $ and $\omega(v)=k_v\frac{r}{r-2}$. \end{teo} If $G$ has a nowhere-zero $r$-flow, then $G$ always has an orientation in which all flow values are positive. Thus, if $G$ is cubic, then each vertex has either one or two incoming edges. Hence, $V(G)$ is naturally partitioned into two subsets of equal cardinality, $V(G) = \ca B \cup \ca W$. Moreover, we say that $v$ is black (resp.~white) if $v\in\ca B$ (resp. $v\in\ca W$). The balanced valuation $\omega$ of $G$ corresponding to the all-positive nowhere-zero $r$-flow $(D,f)$ is defined as follows: $\omega(v)=\frac{r}{r-2}$ if $v$ is black and $\omega(v)=-\frac{r}{r-2}$ if $v$ is white. Finally, for $X\subseteq V(G)$ we define $b_X = |X\cap \ca B|$ and $w_X=|X\cap \ca W|$. We call the partition $(\ca B, \ca W)$ of $V(G)$ an $r$-bipartition of $G$; see for example \cite{Flows_bisections} for the study of such partitions in cubic graphs.
\begin{lem}\label{lem:asymptotic} \label{lemma} Let $i\in\{1,2\}$, and let $\{G_n : n\in \N \}$ be a family of cubic class $2$ graphs such that for each $n \ge 1$: \begin{itemize} \item $G_n$ has an $r_n$-bipartition $(\ca B_n, \ca W_n)$ with $r_n\in(4,5)$; \item $G_n$ has a perfect matching $M_n$ with the following properties: \begin{itemize} \item $G_n$ has the $M_n$-class-$i$-property; \item if $ab\in M_n$, then $a\in\ca B_n$ if and only if $b\in \ca W_n$. \end{itemize} \end{itemize} If $\lim_{n \to \infty} r_n = 4$, then $\Phi^{(i)}(2t+1)=2+\frac{2}{2t-1}\text{, for every integer }t\ge1$. \end{lem} \begin{proof} Fix an integer $t\ge1$ and let $H_n := G_n + (2t-2)M_n$. Since $G_n$ has the $M_n$-class-$i$ property, $\{H_n\colon n \in \N\}$ is an infinite family of $(2t+1)$-regular class $i$ graphs. By Theorem \ref{bv}, $|\partial_{G_n}(X)|\ge \frac{r_n}{r_n - 2}|b_X-w_X|$, for every $X\subseteq V(G_n)$. Let $Y\subseteq V(H_n)$. Since $M_n$ pairs black and white vertices of $H_n$, we have that $d := |M_n \cap \partial_{H_n}(Y)| \ge |b_Y-w_Y|$. Therefore, for every $Y\subseteq V(H_n)$, we get the following inequalities: $$ |\partial_{H_n} (Y)| \ge \frac{r_n}{r_n - 2}|b_Y-w_Y| + (2t-2)d \ge \left(\frac{r_n}{r_n - 2}+2t-2\right)|b_Y-w_Y|. $$ Hence, $H_n$ has a nowhere-zero $(2+ \frac{2(r_n-2)}{r_n+(2t-3)(r_n-2)})$-flow. Notice that, if $i=1$, then $H_n\in \ca G_{2t+1}$, for every $n$, because $G_n$ is a class $2$ cubic graph, and so it cannot have a $1$-factor whose removal gives rise to a bipartite graph. On the other hand, if $i=2$, then $H_n$ is class $2$. Therefore, since the sequence $\{r_n\}_{n\in\N}$ tends to $4$ from above, we have that $$ \Phi^{(i)}(2t+1) \le \lim_{n\to \infty} \Bigl(2+ \frac{2(r_n-2)}{r_n+(2t-3)(r_n-2)}\Bigr) = 2+\frac{2}{2t-1}, $$ and thus equality holds by Theorem \ref{teo:1-factor_bipartite}.
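For the reader's convenience, we make the derivation of this flow value explicit: by Theorem \ref{bv}, the displayed inequality says that assigning $\omega(v)=\pm\left(\frac{r_n}{r_n-2}+2t-2\right)$ according to the color of $v$ defines a balanced valuation of $H_n$ with $k_v=\pm1\equiv 2t+1 \pmod 2$, so $H_n$ admits a nowhere-zero $r$-flow for the value of $r$ determined by
$$ \frac{r}{r-2} = \frac{r_n}{r_n-2}+2t-2, \qquad\text{that is,}\qquad r = 2+\frac{2}{\frac{r_n}{r_n-2}+2t-3} = 2+\frac{2(r_n-2)}{r_n+(2t-3)(r_n-2)}. $$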
\end{proof} \subsection{A family of snarks fulfilling the hypothesis of Lemma \ref{lem:asymptotic}} \subsubsection*{Class $1$ regular graphs} Consider the family of Flower snarks $\{J_{2n+1}\}_{n\in \N}$, introduced in \cite{Isaacs}. The Flower snark $J_{2n+1}$ is the non-$3$-edge-colorable cubic graph having: \begin{itemize} \item vertex set $V(J_{2n+1})= \{a_i,b_i,c_i,d_i\colon i \in \Z_{2n+1}\}$ \item edge set $E(J_{2n+1})= \{ b_ia_i,b_ic_i,b_id_i,a_ia_{i+1},c_id_{i+1},c_{i+1}d_i\colon i \in \Z_{2n+1}\}$ \end{itemize} The following lemma holds true; since its proof requires some case analysis and lies outside the scope of this section, we omit it here and give it in the appendix. \begin{lem}\label{lem:Flower} Let $M$ be a $1$-factor of $J_{2n+1}$. Then $J_{2n+1}+M$ is a class $1$ $4$-regular graph. \end{lem} \begin{teo}\label{teo:prop_Flower} The graph $J_{2n+1}$ has a $(4 + \frac{1}{n})$-bipartition $(\ca B_n, \ca W_n)$ and a perfect matching $M_n$ such that: \begin{itemize} \item $J_{2n+1}$ has the $M_n$-class-$1$-property; \item for all $xy\in M_n$, $x\in \ca B_n$ if and only if $y\in \ca W_n$. \end{itemize} \end{teo} \begin{proof} We construct explicitly a nowhere-zero $(4+\frac{1}{n})$-flow $(D_n,f_n)$ in $J_{2n+1}$ as the sum of an integer $4$-flow $(D,f)$ with exactly one edge having flow value $0$ and $n$ flows $(D'_1,f_1'),\dots,(D_n',f_n')$ having value $\frac{1}{n}$ each on a different circuit.
Define $(D,f)$ on the directed edges of $J_{2n+1}$ as follows; when we write an edge connecting two vertices $u,v$ in the form $uv$, we assume it to be oriented from $u$ to $v$ in the orientation $D$: \begin{itemize} \item $f(a_0b_0)=0$ and $f(b_0c_0)=f(d_0b_0)=2$; \item $f(b_ia_i)=f(a_{i+1}b_{i+1})=1$, for all $i\in\{1,3,\dots,2n-1\}$; \item $f(b_ic_i)=2$ and $f(b_{i+1}c_{i+1})=3$, for all $i\in\{1,3,\dots,2n-1\}$; \item $f(d_ib_i)=3$ and $f(d_{i+1}b_{i+1})=2$, for all $i\in\{1,3,\dots,2n-1\}$; \item $f(a_ia_{i+1})=f(c_{i+1}d_i)=1$, for all $i\in \{0,2,\dots,2n\}$; \item $f(a_ia_{i+1})=f(c_{i+1}d_i)=2$, for all $i\in\{1,3,\dots,2n-1\}$; \item $f(c_id_{i+1})= 1$, for all $i\in \Z_{2n+1}$. \end{itemize} For $j \in \{1,\dots,n \}$, let $(D'_j,f'_j)$ be the flow on the directed circuit $C_j= a_0b_0c_0 d_1\dots d_l c_{l+1} d_{l+2}\dots d_{2j-1} b_{2j-1} a_{2j-1} a_{2j}a_{2j+1}\dots a_0$ (where $l<j$ and $l$ odd), with $f_j'(e)=\frac{1}{n}$ if $e\in C_j$ and $f_j'(e)=0$ otherwise. The sum $(D,f)+\sum_{i = 1}^{n}(D_i',f_i')$ gives a nowhere-zero $(4+\frac{1}{n})$-flow $(D_n,f_n)$ in $J_{2n+1}$. Let $(\ca B_n,\ca W_n)$ be the bipartition induced by such a flow and consider the $1$-factor $M_n=\{ a_ib_i, c_{i+1}d_i\colon i\in \Z_{2n+1} \}$. Notice that $D_n=D$, and for all $xy\in M_n$: $x\in\ca B_n$ if and only if $y\in\ca W_n$. By Lemma \ref{lem:Flower}, $J_{2n+1}$ has the $M_n$-class-$1$ property. \end{proof} \subsubsection*{Class $2$ regular graphs} Let $P$ denote the Petersen graph. We recall now the following result, which follows from Theorem 3.1 of \cite{GrundSteff}. \begin{lem}\label{lem:P_class2} Let $M_1,\dots,M_k$ be perfect matchings of $P$. Then $P+\sum_{i=1}^{k}M_i$ is a $(k+3)$-regular class $2$ graph. \end{lem} \subsubsection*{Construction of $\ca G =\{ G_n \colon n\in\N\}$} \label{construction} $G_1$: The graph $G_1$ is the Blanu\v sa snark, see Figure \ref{Fig:Blanusa}. $G_{n+1}= G_n \cdot G_1$: The dot product of these two graphs is carried out as follows.
If $v\in V(G_1)$, then the vertex $v^{i}\in V(G_n)$ corresponds to the vertex $v$ of the $i$-th copy of $G_1$ that has been added in order to construct $G_{n}$. Consider the bold circuit $C=x_0x_1\dots x_8\subseteq G_1$ as depicted in Figure \ref{Fig:Blanusa_cycles}. Delete the vertices $x_0,x_1$ of $G_1$ and the edges $x_4^{n}x_5^{n},x_7^{n}x_8^{n}$ from $G_n$. Perform the dot product $G_n\cdot G_1$ by adding the edges $x_4^nx_8,x_5^ny_0,x_7^ny_1$ and $x_8^nx_2$, where $y_0,y_1$ are vertices of $G_1$ which are not in $C$ and adjacent to $x_0,x_1$, respectively; see Figure \ref{Fig:Blanusa_cycles}. The snark $G_2$ is depicted in Figure \ref{Fig:ind_step}. \begin{teo}\label{teo:Properties_Gn} Let $n\in\N$ and $G_n\in \ca G$. The graph $G_n$ has a $(4 + \frac{1}{n+1})$-bipartition $(\ca B_n, \ca W_n)$ and a perfect matching $M_n$ such that: \begin{itemize} \item $G_n$ has the $M_n$-class-$2$-property; \item for all $xy\in M_n$, $x\in \ca B_n$ if and only if $y\in \ca W_n$. \end{itemize} \end{teo} \begin{proof} First we show that for every $n$ there is a nowhere-zero $(4 + \frac{1}{n+1})$-flow in $G_n$. We argue by induction on $n\in \N$. Fix on $G_1$ the $4$-flow $(D_1,f_1)$ as depicted in Figure \ref{Fig:Blanusa}. When we write $D_1^{-1}$ we will refer to the orientation constructed by reversing each edge in $D_1$; similarly, $D_1^1$ will denote the orientation $D_1$ itself. A nowhere-zero $(4 + \frac{1}{2})$-flow in $G_1$ can be constructed by adding $\frac{1}{2}$ along the two directed circuits $C_1,C_2$ depicted in Figure \ref{Fig:Blanusa_cycles}. Indeed, they have the following two properties: \begin{enumerate} [label=\textbf{P.\arabic*}] \item \label{P1} the unique edge having flow value $0$ belongs to all circuits; \item \label{P2} every edge with flow value $3$ belongs to at most one of the circuits. \end{enumerate} Notice also that $f_1(x_7x_8)=1$ and $f_1(x_4x_5)=2$.
Moreover, there is a unique circuit in $\{C_1,C_2\}$ containing the path $x_4\dots x_8$, and the other one does not intersect it. Now we proceed with the inductive step. By the inductive hypothesis, there is a $4$-flow $(D_n,f_n)$ in $G_n$ having a unique edge with flow value $0$ and $n+1$ directed circuits $\{C_1,\dots,C_{n+1}\}$ in $D_n$ satisfying properties \ref{P1} and \ref{P2}. It holds that $f_n(x_7^nx_8^n)=1$ and $f_n(x_4^nx_5^n)=2$. Furthermore, there is a unique circuit $C\in\{C_1,\dots,C_{n+1}\}$ containing the path $\tilde{P}=x_4^n\dots x_8^n$ and such that no other circuit intersects $\tilde{P}$. If $n$ is odd, then $\tilde{P}$ is a directed path in $D_n$; if $n$ is even, then $x_8^nx_7^n\dots x_4^n$ is a directed path in $D_n$. Let $H_n= G_n-\{x_4^nx_5^n,x_7^{n}x_8^{n}\}$ and $H'=G_1-\{x_0,x_1\}$. Then $G_{n+1}$ is constructed by adding the edges $x_4^nx_8,x_5^ny_0,x_7^ny_1$ and $x_8^nx_2$. Let $(D_{n+1},f_{n+1})$ be the unique $4$-flow in $G_{n+1}$ such that \begin{itemize} \item $D_{n+1}|_{H_n}=D_n|_{H_n}$ and $D_{n+1}|_{H'}=D_1^{(-1)^{n}}|_{H'}$; \item $f_{n+1}|_{H_n}=f_n|_{H_n}$ and $f_{n+1}|_{H'}=f_1|_{H'}$. \end{itemize} We show that there exists a set $\tilde{\ca C}$ of $n+2$ circuits satisfying properties \ref{P1} and \ref{P2}. In particular, we are going to construct two circuits out of $C$. First notice that there are exactly two paths $\tilde{P}_1=x_8^{n+1}w_1\dots w_tx_2^{n+1}$ and $\tilde{P}_2=x_8^{n+1}x_7^{n+1}x_6^{n+1}x_5^{n+1}x_4^{n+1}x_3^{n+1}x_2^{n+1}$ in $H'\subseteq G_{n+1}$, which are directed in $D_{n+1}|_{H'}$ and such that $\tilde{P}_1\cap \tilde{P}_2 = x_2^{n+1}x_3^{n+1}$, for some vertices $w_1,\dots,w_t \in V(G_{n+1})$ (see Figure \ref{Fig:ind_step} for an example in the case of $n=1$). In particular, if $n$ is odd, then $\tilde{P}_1$ and $\tilde{P}_2$ are both directed from $x_8^{n+1}$ to $x_2^{n+1}$, and vice versa if $n$ is even. We can suppose without loss of generality that $C=v_0v_1\dots v_k \tilde{P}=v_0\dots v_k x_4^n\dots x_8^n$ in $G_n$.
Define $\tilde{C}_i$ to be the circuit $v_0\dots v_kx_4^n\tilde{P}_ix_8^n$, $i \in \{1,2\}$. It follows by construction that $\tilde{C}_1$ and $\tilde{C}_2$ are both directed circuits in $D_{n+1}$. Therefore, the family $\tilde{\ca C} = (\{C_1,\dots,C_{n+1}\} \setminus \{C\}) \cup \{\tilde{C}_1,\tilde{C}_2\}$ consists of $n+2$ circuits satisfying properties \ref{P1} and \ref{P2}, and so a nowhere-zero $(4+\frac{1}{n+2})$-flow can be constructed in $G_{n+1}$. Now we show that for every $n$ there is a perfect matching $M_n$ of $G_n$ satisfying the statement. We argue again by induction. Choose as $M_1$ the perfect matching of $G_1$ indicated by bold edges in Figure \ref{Fig:Blanusa}. Consider two copies of the Petersen graph $P_1,P_2$ together with a perfect matching $N_i$ of $P_i$, $i \in \{1,2\}$. Recall that $G_1$ is constructed by performing an $(N_1,N_2)$-dot-product $P_1\cdot P_2$. In particular, we can choose $N_1$, $N_2$ and perform the dot product in such a way that $M_1 = (N_1\cup N_2) \setminus \{x'y'\}$, where $x',y'$ are the vertices we removed from $P_2$ in order to perform the dot product. Therefore, by Lemmas \ref{lem:class 2} and \ref{lem:P_class2}, it follows that $G_1$ has the $M_1$-class-$2$-property. Figure \ref{Fig:Blanusa} shows that the chosen $(4 + \frac{1}{2})$-flow in $G_1$ and the perfect matching $M_1$ are related in the following way: let $(\ca B_1, \ca W_1)$ be the $(4 + \frac{1}{2})$-bipartition of $V(G_1)$; then for all $xy\in M_1$, $x\in \ca B_1$ if and only if $y\in \ca W_1$. Therefore, the statement is true for $n=1$. Notice that $x_0x_1\in M_1$ and that $x_4x_5,x_7x_8\notin M_1$. For the inductive step we assume that $G_n$ has a perfect matching $M_n$ fulfilling the induction hypothesis and $x_4^nx_5^n,x_7^nx_8^n\notin M_n$. There is a unique perfect matching $M_{n+1}$ of $G_{n+1}$ such that $M_{n+1} \cap E(H_n) = M_n$ and $M_{n+1}\cap E(H')=M_1\setminus \{ x_0x_1 \}$.
Thus, by Lemma \ref{lem:class 2}, we get that $G_{n+1}$ has the $M_{n+1}$-class-$2$-property. Define $\ca B_{n+1} = \ca B_n \cup \ca B$ and $\ca W_{n+1} = \ca W_n \cup \ca W$, where $(\ca B, \ca W)$ is the bipartition induced by $(D_1^{(-1)^n},f_1)$ in $G_1$. The bipartition $(\ca B_{n+1}, \ca W_{n+1})$ of $V(G_{n+1})$ is a $(4 + \frac{1}{n+2})$-bipartition. Since both $M_n = M_{n+1} \cap E(H_n)$ and $M_1 \setminus \{ x_0x_1 \} = M_{n+1}\cap E(H')$ pair black and white vertices, it follows that $M_{n+1}$ pairs black and white vertices too. Notice that $x_4^{n+1}x_5^{n+1},x_7^{n+1}x_8^{n+1}\notin M_{n+1}$; this concludes the inductive step. \end{proof} From Lemma \ref{lem:asymptotic} and Theorems \ref{teo:prop_Flower} and \ref{teo:Properties_Gn} we deduce the following corollary. \begin{cor}\label{Cor:inf_class2} For every $t\ge 2$ and $i\in \{1,2\}\colon \Phi^{(i)}(2t+1) = 2 + \frac{2}{2t-1}. $ \end{cor} \begin{figure} \centering \includegraphics[scale=0.7]{Blanusa.eps} \caption{A $4$-flow in the Blanu\v sa snark $G_1$ having just one edge with flow value $0$. The perfect matching consisting of all bold edges pairs black vertices with white vertices.}\label{Fig:Blanusa} \end{figure} \begin{figure} \centering \includegraphics[scale=0.7]{Blanusa_cycles.eps} \caption{A nowhere-zero $(4+\frac{1}{2})$-flow in the Blanu\v sa snark $G_1$ can be constructed by adding $\frac{1}{2}$ along the bold and dotted circuits.}\label{Fig:Blanusa_cycles} \end{figure} \begin{figure} \centering \includegraphics[scale=0.7]{ind_step.eps} \caption{Construction of two more circuits (dotted and bold ones) in $G_2$.}\label{Fig:ind_step} \end{figure} \section{Counterexamples to Conjecture \ref{conj:Stef_class1}} In \cite{Zhang1} Jaeger's Circular Flow Conjecture \cite{Jaeger} is disproved. For each $p \geq 3$, the authors construct a $4p$-edge-connected graph which does not admit a nowhere-zero $(2 + \frac{1}{p})$-flow.
We will use this family of graphs to construct $(2k+1)$-regular class $1$ graphs which do not admit a nowhere-zero $(2 + \frac{2}{k})$-flow, for suitable values of $k$. We start by summarizing the construction of \cite{Zhang1}. \begin{constr} Let $p\ge 3$ be an integer and let $\{v_1,v_2,\dots,v_{4p}\}$ be the vertex set of the complete graph $K_{4p}$. \begin{itemize} \item[i.] Construct the graph $G_1$ by adding an additional set of edges $T$ such that $V(T)=\{v_1,v_2,\dots,v_{3(p-1)}\}$ and each component of the edge-induced subgraph $G_1[T]$ is a triangle. \item[ii.] Construct the graph $G_2$ from $G_1$ by adding two new vertices $z_1$ and $z_2$, adding one edge $z_1z_2$, adding $p-2$ parallel edges connecting $v_{4p}$ and $z_i$ for both $i \in \{1,2\}$, and adding one edge $v_iz_j$ for each $3p-2\le i \le 4p-1$ and $j \in \{1,2\}$. \item[iii.] Consider $4p+1$ copies $G_2^1,\dots, G_2^{4p+1}$ of $G_2$. If $v\in V(G_2)$, then we write $v^i$ to refer to the vertex $v$ of the $i$-th copy of $G_2$. Construct the graph $M_p$ in the following way. For every $i\in \{1, \dots, 4p+1\}$ identify $z_2^i$ with $z_1^{i+1}$ and call this new vertex $c_{i+1}$, where we take sums modulo $4p+1$. Finally add a new vertex $w$ and all edges of the form $wc_i$ for all $i\in \{1, \dots, 4p+1\}$. \end{itemize} \end{constr} For every $i \in \{1, \dots, 4p+1\}$ we have $d_{M_p}(v_{4p}^i) = 4p-1 + 2(p-2) = 6p - 5$, $d_{M_p}(c_i) = 2(p-2+ 4p-1 - (3p-2) + 1) + 3 = 4p+3$, and all other vertices have degree $4p + 1$. \begin{teo}[\cite{Zhang1}] For all $p \geq 3\colon \phi_c(M_p)>2+\frac{1}{p}.$ \end{teo} Let $k = 2p$. The graph $M_p$ defined above does not admit a nowhere-zero $(2+\frac{2}{k})$-flow. We will use $M_p$ in order to construct a $(2k+1)$-regular graph of class $1$ which does not admit a nowhere-zero $(2+\frac{2}{k})$-flow, thus disproving Conjecture \ref{conj:Stef_class1}. We consider odd integers $p\ge3$, say $p = 2t+1$.
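The degree counts above are easy to check by computer. The following Python sketch (ours, purely a sanity check and not part of the construction; the tuple vertex labels are an arbitrary encoding) rebuilds the degree sequence of $M_p$ and confirms the three values $6p-5$, $4p+3$ and $4p+1$:

```python
from collections import Counter

def build_Mp(p):
    """Degree sequence of the graph M_p described above (our reconstruction
    of the construction of [Zhang1]; multi-edges are simply counted twice)."""
    deg = Counter()
    def add_edge(u, v):
        deg[u] += 1
        deg[v] += 1
    N = 4 * p + 1                                   # number of copies of G_2
    for i in range(N):
        V = [('v', i, j) for j in range(1, 4 * p + 1)]
        for a in range(len(V)):                     # the complete graph K_{4p}
            for b in range(a + 1, len(V)):
                add_edge(V[a], V[b])
        for s in range(0, 3 * (p - 1), 3):          # extra triangles on v_1, ..., v_{3(p-1)}
            add_edge(V[s], V[s + 1])
            add_edge(V[s + 1], V[s + 2])
            add_edge(V[s], V[s + 2])
        # z_1^i and z_2^i after the identification z_2^i = z_1^{i+1} =: c_{i+1}
        z1, z2 = ('c', i), ('c', (i + 1) % N)
        add_edge(z1, z2)                            # the edge z_1 z_2
        for _ in range(p - 2):                      # p-2 parallel edges v_{4p} z_j, j = 1, 2
            add_edge(V[4 * p - 1], z1)
            add_edge(V[4 * p - 1], z2)
        for j in range(3 * p - 2, 4 * p):           # edges v_j z_1 and v_j z_2
            add_edge(V[j - 1], z1)
            add_edge(V[j - 1], z2)
    for i in range(N):                              # the hub vertex w
        add_edge('w', ('c', i))
    return deg
```

Running this for small odd $p$ reproduces the degrees $d(v_{4p}^i)=6p-5$, $d(c_i)=4p+3$ and $4p+1$ for all remaining vertices, matching the formulas above.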
An expansion of a vertex $v$ of a graph $G$ to a graph $K$ is the graph obtained by replacing $v$ with $K$ and adding $d_G(v)$ edges, each with one end in $V(G-v)$ and the other end in $V(K)$. \begin{prop} Let $H$ be a graph obtained by expanding a vertex of $G$. Then $\phi_{c}(H)\ge \phi_{c}(G).$ \end{prop} \subsection*{Construction of $M_p'$} The copy of $K_{4p}$ which is used in the construction of $G_2^i$ is denoted by $K_{4p}^i$. Construct the graph $M_p'$ by expanding each vertex $v_{4p}^i $ of $ K_{4p}^i$ in $M_p$ to a vertex $x^{i}$ of degree $4p+1$ and $p-3$ divalent vertices, where $x^{i}$ is adjacent to every vertex of $V(K_{4p}^i) \setminus \{v_{4p}^i\}$ and to $c_i$ and $c_{i+1}$, and each divalent vertex is adjacent to both $c_i$ and $c_{i+1}$. After that, suppress the divalent vertices. Note that the construction can also be seen as an edge splitting at $v_{4p}^i$. We have $d_{M_p'}(c_i) = 4p+3$ for all $i \in \{1, \dots, 4p+1\}$ and all other vertices of $M_p'$ have degree $4p+1$. Notice that $M'_p$ remains a bridgeless graph. \begin{lem}\label{lem:M_p'} The graph $M_p'$ admits a $(4p+1)$-edge-coloring $c$ such that for all $i \in \{0, \dots, 4p\}$ and all $v \in V(M_p'):$ $|c^{-1}(i)\cap \partial (v)|$ is odd. Furthermore, $\phi_c(M_p') > 2+\frac{1}{p}$. \end{lem} \begin{proof} None of the operations performed in order to construct $M_p'$ decreases the circular flow number. Thus $\phi_c(M_p')\ge \phi_c(M_p) >2+\frac{1}{p}$. Now we show that $M'_p$ can be colored using $8t+5 = 4p+1$ colors in such a way that every vertex sees each color an odd number of times. We say that a vertex $v$ sees a color $i$ if there is an edge $e$ incident to $v$ with $c(e) = i$.
Each copy $G_1^i$ can be constructed by considering the complete graph $K_{4p}$ with vertex set $\{0,1,2,\dots,4p-2,\infty\}$ and adding the edges of all the following triangles: \begin{itemize} \item $(t+2+j),(t+3+j),(t+4+j)$ for every $j\in \{0,3,6,9,\dots,3(t-1)\}$; \item $-(t+2+j),-(t+3+j),-(t+4+j)$ for every $j\in \{0,3,6,9,\dots,3(t-1)\}$. \end{itemize} Consider the following $1$-factorization of $K_{4p}$. Let the edges of color $0$ be all edges of the set $M_0 = \{ 0\infty \}\cup \{ -ii\colon i\in \Z_{8t+3} \}$ and the edges of color $j\in \{0,1,\dots, 8t+2\}$ be all edges of the set $M_j = M_0+j = \{j\infty\}\cup\{(-i+j)(i+j)\colon i\in \Z_{8t+3}\}$. In this way we can color all copies $K_{4p}^i$ of $K_{4p}$ inside $G_1^i$. Notice that we have used $8t+3=4p-1$ colors so far. \begin{figure} \includegraphics[scale=0.4]{triangles.eps} \caption{Color the added triangles using colors of the selected circuit. Color the selected circuit with two new colors. Colors are depicted in bold.}\label{Fig:triangles} \end{figure} \begin{figure} \includegraphics[scale=0.5]{even_cycle.eps} \caption{Assign to uncolored edges colors of the selected even circuit and assign to the edges of the selected circuit two new colors. Colors are depicted in bold.}\label{Fig:even_cycle} \end{figure} Consider the even circuits $ t+2+j,-(t+2+j),t+3+j,-(t+4+j),t+4+j,-(t+3+j),t+2+j$ for every $j\in\{0,3,6,9,\dots, 3(t-1)\}$ inside $G_1^i$. We perform the operation in Figure \ref{Fig:triangles} in order to color all triangles $(t+2+j),(t+3+j),(t+4+j)$ and $-(t+2+j),-(t+3+j),-(t+4+j)$ using two more colors $8t+3$ and $8t+4$. Consider the even circuit $C=0,1,2,\dots,t+1,\infty,-(t+1),-t,\dots,-2,-1$ inside $G_1^i$. Notice that these are all vertices of $K_{4p}^i$ in $G_1^i$ that are connected with both $c_i$ and $c_{i+1}$. Moreover, no two edges of $C$ belong to the same color class. First assign colors $8t+3$ and $8t+4$ to the edges of $C$ alternately.
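We note in passing that the $1$-factorization $M_j=M_0+j$ used above is the classical round-robin factorization of the complete graph on $\Z_m\cup\{\infty\}$ for odd $m$ (here $m=8t+3$). The following Python sketch (ours, purely a sanity check) verifies for small odd $m$ that the $m$ rotations of $M_0$ indeed form a $1$-factorization:

```python
from itertools import chain

INF = 'inf'   # plays the role of the vertex labelled infinity

def round_robin(m):
    """The m rotations M_j = M_0 + j of M_0 = {0-inf} u {(-i)(i)}, for odd m."""
    factors = []
    for j in range(m):
        M = [frozenset((j, INF))]
        M += [frozenset(((j - i) % m, (j + i) % m))
              for i in range(1, (m - 1) // 2 + 1)]
        factors.append(M)
    return factors

def is_one_factorization(factors, m):
    vertices = set(range(m)) | {INF}
    for M in factors:                       # each M_j must be a perfect matching
        covered = set()
        for e in M:
            if len(e) != 2 or e & covered:
                return False
            covered |= e
        if covered != vertices:
            return False
    edges = list(chain.from_iterable(factors))
    # every edge of the complete graph on m+1 vertices occurs exactly once
    return len(edges) == len(set(edges)) == (m + 1) * m // 2
```

(The uniqueness of the factor containing a given edge $ab$, $a,b\in\Z_m$, reflects the fact that $2$ is invertible modulo odd $m$, so $j$ is determined by $2j\equiv a+b$.)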
In this way we can assign the previous colors of the edges of $C$ to edges of the type $c_iv$ and $c_{i+1}v$, with $v\in V(C)$, in such a way that the colors seen by $c_i$ are pairwise different, and likewise for $c_{i+1}$ (notice that they see the very same set of colors); see Figure \ref{Fig:even_cycle}. Since the length of $C$ is $2t+4$, up to a permutation of colors, we can suppose that $c_i$ receives colors $i+\{1,3,5,7,\dots, 4t+7\} = i + \{2j+1\colon j=0,1,2,\dots,2t+3\}$ from the copy $G_1^i$, where now we are doing sums modulo $8t+5 = 4p+1$. At this point, every vertex not in $\{w\} \cup \{c_i : i \in \{1, \dots, 4p+1\}\}$ sees every color exactly once. For every $i\in \{1,\dots,8t+5\}$, color all $p-2$ parallel edges connecting $c_i$ with $c_{i+1}$ with colors $i+\{4t+8,4t+10,\dots,8t+2,8t+4\}$, where sums are taken modulo $8t+5$ (notice that these are exactly $2t-1\,(=p-2)$ colors). Finally, color $c_iw$ with color $i+4t+7$ modulo $8t+5$. The central vertex $w$ sees each color exactly once, whereas the vertex $c_i$ sees each color exactly once except for $i+4t+7$, which is seen three times. \end{proof} At this point we are going to further modify $M_p'$ in order to obtain a $(4p+1)$-regular graph of class $1$. \subsection*{Construction of $\tilde{M_p}$} Consider the graph $M_p'$. By Lemma \ref{lem:M_p'}, there is a $(4p+1)$-edge-coloring $c$ such that $|c^{-1}(i)\cap \partial (v)|$ is odd for every color $i \in \{0, \dots, 4p\}$ and every vertex $v$. Construct $\tilde{M_p}$ by expanding each $c_i$ into a vertex of degree $4p+1$ that receives all colors and a vertex of degree $2$ whose two incident edges receive the same color. Finally, suppress all such vertices of degree $2$. \begin{teo} $\tilde{M_p}$ is a $(4p+1)$-regular class $1$ graph such that $\phi_c(\tilde{M_p})> 2+\frac{1}{p}$. \end{teo} \begin{proof} Since expanding and suppressing vertices does not decrease the circular flow number, $\phi_c(\tilde{M_p})\ge \phi_c(M_p')> 2+\frac{1}{p}$.
Furthermore, a proper $(4p+1)$-edge-coloring is naturally induced on $\tilde{M_p}$ by the edge-coloring of $M_p'$, and hence $\tilde{M_p}$ is a class $1$ graph. \end{proof} \begin{cor} Let $p\ge3$ be any odd integer and let $k=2p$. Then there is a $(2k+1)$-regular class $1$ graph $G$ such that $\phi_c(G)>2+\frac{2}{k}$. \end{cor}
\section{Introduction} Relatively dominated representations were introduced in \cite{reldomreps} and provide a common generalization of geometric finiteness in rank-one semisimple Lie groups and the Anosov condition in more general semisimple Lie groups. They are related to earlier common generalizations studied by Kapovich and Leeb in \cite{KL}. These representations furnish a class of discrete relatively hyperbolic subgroups of semisimple Lie groups which are quasi-isometrically embedded modulo controlled distortion along their peripheral subgroups. The definition of these representations is given in terms of singular value gaps, which may be interpreted in terms of the geometry of the associated symmetric spaces as distances from singular flats of specified type. The corresponding characterization of Anosov representations was given first by Kapovich--Leeb--Porti in \cite{KLP} under the name of URU subgroups, and subsequently reformulated, in language more closely resembling that used here, by Bochi--Potrie--Sambarino in \cite{BPS}. The key defining condition asserts that the singular value gap $\frac{\sigma_1}{\sigma_2}(\rho(\gamma))$ grows uniformly exponentially in a notion of word-length $|\gamma|_c$ that has been modified to take into account the distortion along the peripheral subgroups. The definition also involves additional technical conditions to control the images of the peripheral subgroups. In the first part of this note, we remove one of those technical conditions by showing that it follows from the other parts of the definition. We refer the reader to \S\ref{sec:D+-EC}, and specifically Proposition \ref{prop:D+-_EC}, for the full statement and proof; here we present it slightly summarised as follows: \begin{introprop} \label{introprop:3.5} Let $\Gamma$ be a finitely-generated group, and fix $C_0 > 0$. 
Suppose we have a representation $\rho\colon \Gamma \to \SL(d,\mathbb{R})$ such that $$C_0^{-1} \log \frac{\sigma_1}{\sigma_2}(\rho(\gamma)) - C_0 \leq |\gamma|_c \leq C_0 \log \frac{\sigma_1}{\sigma_2}(\rho(\gamma)) + C_0 $$ for all $\gamma \in \Gamma$. Then, given constants $\ubar\upsilon, \bar\upsilon > 0$, there exist constants $C, \mu > 0$ such that for any bi-infinite sequence of elements $(\gamma_n)_{n \in \mathbb{Z}} \subset \Gamma$ satisfying \begin{enumerate}[(i)] \item $\gamma_0 = \id$, and \item $\ubar\upsilon^{-1}|n| -\ubar\upsilon \leq |\gamma_n|_c \leq \bar\upsilon|n| + \bar\upsilon$ for all $n$, \end{enumerate} and any $k \in \mathbb{Z}$, $$d \left( U_1(\rho(\gamma_{k-1} \cdots \gamma_{k-n})), U_1(\rho(\gamma_{k-1} \cdots \gamma_{k-n-1})) \right) < Ce^{-\mu n} $$ for all $n >0$. \end{introprop} Here $U_1(B)$ denotes the image under $B$ of the $1$-dimensional subspace of $\mathbb{R}^d$ that is most expanded by $B$. This proposition allows us to obtain uniform convergence towards limit points; the exponential convergence seen here is reminiscent of hyperbolic dynamics, and is straightforward to obtain in the non-relative case. In the proof of Proposition \ref{prop:D+-_EC} we will find it useful to adopt elements of the point of view of Kapovich--Leeb--Porti, which emphasizes the geometry of the symmetric space and the related geometry of its boundary and associated flag spaces. More recently, Kassel--Potrie \cite{KasselPotrie} have given a characterization of Anosov representations in terms of eigenvalue gaps $\frac{\lambda_i}{\lambda_{i+1}}$, which may be interpreted as asymptotic versions of singular value gaps $\frac{\sigma_i}{\sigma_{i+1}}$, i.e.\ distances to the Weyl chamber walls at infinity. In the second part of this note, we give an analogous characterization of relatively dominated representations: \begin{introthm}[Corollary \ref{cor:reldom_eiggap}] \label{thm:intro_reldom_eiggap} Let $\Gamma$ be hyperbolic relative to $\mathcal{P}$.
A representation $\rho\colon \Gamma \to \SL(d,\mathbb{R})$ is $P_1$-dominated relative to $\mathcal{P}$ if and only if the following four conditions hold: \begin{itemize} \item (${D}^\lambda_-$) there exist constants $\ubar{C}, \ubar\mu > 0$ such that $$\frac{\lambda_1}{\lambda_2}(\rho(\gamma)) \geq \ubar{C} e^{\ubar\mu |\gamma|_{c,\infty}}$$ for all $\gamma \in \Gamma$, \item (${D}^\lambda_+$) there exist constants $\bar{C}, \bar\mu > 0$ such that $$\frac{\lambda_1}{\lambda_d}(\rho(\gamma)) \leq \bar{C} e^{\bar\mu |\gamma|_{c,\infty}}$$ for all $\gamma \in \Gamma$, \item (unique limits) for each $P \in \mathcal{P}$, there exists $\xi_\rho(P) \in \mathbf{P}(\mathbb{R}^d)$ and $\xi^*_\rho(P) \in \mathrm{Gr}_{d-1}(\mathbb{R}^d)$ such that for every sequence $(\eta_n) \subset P$ with $\eta_n \to \infty$, we have $\lim_{n\to\infty} U_1(\rho(\eta_n)) = \xi_\rho(P)$ and $\lim_{n\to\infty} U_{d-1}(\rho(\eta_n)) = \xi^*_\rho(P)$. \item (uniform transversality) for every $P, P' \in \mathcal{P}$ and $\gamma \in \Gamma$, $\xi_\rho(P) \neq \xi_\rho(\gamma P'\gamma^{-1})$. Moreover, for every $\ubar\upsilon,\bar\upsilon>0$, there exists $\delta_0 > 0$ such that for all $P, P' \in \mathcal{P}$ and $g, h \in \Gamma$ such that there exists a bi-infinite $(\ubar\upsilon,\bar\upsilon)$-metric quasigeodesic path $\eta gh \eta'$ where $\eta'$ is in $P'$ and $\eta$ is in $P$, we have \[ \sin \angle (g^{-1} \xi_\rho(P), h\, \xi^*_\rho(P')) > \delta_0 .\] {\footnotesize (See Definition \ref{defn:metric_proj_qgeod} for the precise definition of a metric quasigeodesic path.)} \end{itemize} \end{introthm} Here $|\gamma|_{c,\infty}$ is a stable version of the modified word-length $|\gamma|_c$; we refer the reader to \S\ref{sub:stablen_grop} for the precise definitions. 
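To illustrate numerically the relation between eigenvalue gaps and their singular value counterparts in the simplest (non-relative, cyclic) situation: for a hyperbolic element $A$ of $\SL(2,\mathbb{R})$ one has $\frac{1}{n}\log\frac{\sigma_1}{\sigma_2}(A^n) \to \log\frac{\lambda_1}{\lambda_2}(A)$. The following Python sketch (ours, purely illustrative; the element $A$, with rows $(2,3)$ and $(1,2)$, is an arbitrary choice in $\SL(2,\mathbb{Z})$) checks this convergence:

```python
import math

def mat_mul(A, B):
    # 2x2 matrix product (exact integer arithmetic)
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(A, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = mat_mul(R, A)
    return R

def singular_gap(M):
    """log(sigma_1/sigma_2) of a 2x2 matrix, using
    ||M||_F^2 = s1^2 + s2^2 and s1 * s2 = |det M|."""
    fro2 = sum(M[i][j] ** 2 for i in range(2) for j in range(2))
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    s1 = math.sqrt((fro2 + math.sqrt(max(fro2 ** 2 - 4 * det ** 2, 0))) / 2)
    s2 = abs(det) / s1        # avoids cancellation for very unbalanced matrices
    return math.log(s1 / s2)

# a hyperbolic element of SL(2,Z): eigenvalues 2 + sqrt(3) and 2 - sqrt(3)
A = [[2, 3], [1, 2]]
expected = 2 * math.log(2 + math.sqrt(3))    # log(lambda_1 / lambda_2)
rates = [singular_gap(mat_pow(A, n)) / n for n in (5, 10, 20)]
```

The computed rates $\frac{1}{n}\log\frac{\sigma_1}{\sigma_2}(A^n)$ approach $\log\frac{\lambda_1}{\lambda_2}(A)=2\log(2+\sqrt{3})$ at speed $O(1/n)$, in line with the asymptotic interpretation above.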
The proof of this leverages a recent result of Tsouvalas \cite[Th.\,5.3]{Kostas_AnosovWEG} stating that groups admitting non-trivial Floyd boundaries have property U: this property, roughly speaking, allows us to control stable translation lengths in terms of word-length. Relatively hyperbolic groups admit non-trivial Floyd boundaries (\cite{GerFloyd}, see also Remark \ref{rmk:floyd_relhyp}), and here we establish a modified version of property U adapted to the relatively hyperbolic case. Finally, we present characterizations of relatively dominated representations which replace most of the additional conditions on the peripheral images with a condition about the existence of suitable limit maps. These are relative analogues of results due to Gu\'eritaud--Guichard--Kassel--Wienhard \cite{GGKW}. \begin{introthm} [Theorem \ref{thm:gaps+maps} and Corollary \ref{thm:eiggaps+maps}] \label{thm:intro_gaps+maps} Given $(\Gamma,\mathcal{P})$ a relatively hyperbolic group, a representation $\rho\colon \Gamma \to \SL(d,\mathbb{R})$ is $P_1$-dominated relative to $\mathcal{P}$ if and only if \begin{itemize} \item there exist continuous, $\rho(\Gamma)$-equivariant, transverse limit maps $\xi_\rho\colon \partial(\Gamma,\mathcal{P}) \to \mathbf{P}(\mathbb{R}^d)$ and $\xi_\rho^*\colon \partial(\Gamma,\mathcal{P}) \to \mathbf{P}(\mathbb{R}^{d*})$, \end{itemize} and one of the following (equivalent) sets of conditions holds: \begin{itemize} \item \emph{either} there exist constants $\ubar{C}, \ubar\mu > 0$ and $\bar{C},\bar\mu > 0$ such that \begin{itemize} \item[(D$-$)] $\frac{\sigma_1}{\sigma_{2}}(\rho(\gamma)) \geq \ubar{C} e^{\ubar\mu|\gamma|_c}$ for all $\gamma \in \Gamma$, and \item[(D+)] $\frac{\sigma_1}{\sigma_d}(\rho(\gamma)) \leq \bar{C} e^{\bar\mu|\gamma|_c}$ for all $\gamma \in \Gamma$; \end{itemize} \item \emph{or} there exist constants $\ubar{C}, \ubar\mu > 0$ and $\bar{C},\bar\mu > 0$ such that \begin{itemize} \item[(D${}^\lambda_-$)]
$\frac{\lambda_1}{\lambda_2}(\rho(\gamma)) \geq \ubar{C} e^{\ubar\mu |\gamma|_{c,\infty}}$ for all $\gamma \in \Gamma$, and \item[(D${}^\lambda_+$)] $\frac{\lambda_1}{\lambda_d}(\rho(\gamma)) \leq \bar{C} e^{\bar\mu |\gamma|_{c,\infty}}$ for all $\gamma \in \Gamma$. \end{itemize} \end{itemize} \end{introthm} As an application of this, we show that certain free groups which contain unipotent generators and which play weak ping-pong in projective space, including images of positive representations in the sense of \cite{FG}, are relatively $P_1$-dominated (Examples \ref{eg:proj_schottky} and \ref{eg:positive} below). \subsection*{Organization} \S\ref{sec:prelim} collects the various preliminaries needed. \S\ref{sec:D+-EC} gives the definition of a relatively dominated representation, with the simplification allowed by Proposition \ref{introprop:3.5} / \ref{prop:D+-_EC}. \S\ref{sec:reldom_eiggap} contains the proof of the eigenvalue gaps + peripheral conditions characterization described in Theorem \ref{thm:intro_reldom_eiggap}, and \S\ref{sec:gaps+maps} contains the proof of the gaps + limit maps characterization described in Theorem \ref{thm:intro_gaps+maps}, as well as its application to weak ping-pong groups. Note the only dependence of \S\ref{sec:reldom_eiggap} and \S\ref{sec:gaps+maps} on \S\ref{sec:D+-EC} is in the definition of relatively dominated representations. \subsection*{Acknowledgements} The author thanks Max Riestenberg for helpful conversations about the Kapovich--Leeb--Porti approach to Anosov representations, Kostas Tsouvalas for stimulating comments, Fran\c{c}ois Gu\'eritaud and Jean-Philippe Burelle for helpful discussions related to ping-pong and positive representations, and Fanny Kassel for comments on an earlier version. The author acknowledges support from ISF grant 871/17. This research was conducted during the covid-19 pandemic. 
The author extends his heartfelt gratitude to all those --- friends, family, mentors, funding agencies --- who have given him safe harbor in these tumultuous times. \section{Preliminaries} \label{sec:prelim} \subsection{Relatively hyperbolic groups and cusped spaces} Relative hyperbolicity is a group-theoretic notion of non-positive curvature inspired by the geometry of cusped hyperbolic manifolds and free products. Consider a finite-volume cusped hyperbolic manifold with an open neighborhood of each cusp removed: call the resulting truncated manifold $M$. The universal cover $\tilde{M}$ of such an $M$ is hyperbolic space with a countable set of horoballs removed. The universal cover $\tilde{M}$ is not Gromov-hyperbolic; distances along horospheres that bound removed horoballs are distorted. If we glue the removed horoballs back into the universal cover, however, the resulting space will again be hyperbolic space. Gromov generalized this in \cite[\S8.6]{Gromov} by defining a group $\Gamma$ as hyperbolic relative to a conjugation-invariant collection of subgroups $\mathcal{P}$ if $(\Gamma,\mathcal{P})$ admits a {\bf cusp-uniform action} on a hyperbolic metric space $X$, meaning there exists some system $(\mathcal{H}_P)_{P\in\mathcal{P}}$ of disjoint horoballs of $X$, each preserved by a subgroup $P \in \mathcal{P}$, such that $\Gamma$ acts on $X$ discretely and isometrically, and the $\Gamma$ action on $X \smallsetminus \bigcup_P \mathcal{H}_P$ is cocompact. The space $X$ is sometimes called a Gromov model for $(\Gamma,\mathcal{P})$. There is in general no canonical Gromov model for a given relatively hyperbolic group, but there are systematic constructions one can give, one of which we describe here. The description below, as well as the material in the next section \S2.2, is taken from \cite[\S2]{reldomreps} and is based on prior literature, in particular \cite{GrovesManning}; it is included here for completeness.
\begin{defn}[{\cite[Def.\,3.1]{GrovesManning}}] \label{defn:combhoroball} Given a subgraph $\Lambda$ of the Cayley graph $\Cay(\Gamma,S)$, the {\bf combinatorial horoball} based on $\Lambda$, denoted $\mathcal{H} = \mathcal{H}(\Lambda)$, is the 1-complex\footnote{Groves-Manning combinatorial horoballs are actually defined as 2-complexes; the definition here is really of a 1-skeleton of a Groves-Manning horoball. For metric purposes only the 1-skeleton matters.} formed as follows: \begin{itemize} \item the vertex set $\mathcal{H}^{(0)}$ is given by $\Lambda^{(0)} \times \mathbb{Z}_{\geq 0}$ \item the edge set $\mathcal{H}^{(1)}$ consists of the following two types of edges: \begin{enumerate}[(1)] \item if $k \geq 0$ and $v$ and $w \in \Lambda^{(0)}$ are such that $0 < d_\Lambda(v, w) \leq 2^k$, then there is a (``horizontal'') edge connecting $(v, k)$ to $(w, k)$; \item if $k \geq 0$ and $v \in \Lambda^{(0)}$, there is a (``vertical'') edge joining $(v, k)$ to $(v, k + 1)$. \end{enumerate} \end{itemize} $\mathcal{H}$ is metrized by assigning length 1 to all edges. \end{defn} Next let $\mathcal{P}$ be a finite collection of finitely-generated subgroups of $\Gamma$, and suppose $S$ is a {\bf compatible generating set}, i.e. for each $P \in \mathcal{P}$, $S \cap P$ generates $P$. \begin{defn}[{\cite[Def.\,3.12]{GrovesManning}}] \label{defn:cuspedspace} Given $\Gamma, \mathcal{P}, S$ as above, the {\bf cusped space} $X(\Gamma, \mathcal{P}, S)$ is the simplicial metric graph \[ \Cay(\Gamma,S) \cup \bigcup \mathcal{H}(\gamma P) \] where the union is taken over all left cosets of elements of $\mathcal{P}$, i.e. over $P \in \mathcal{P}$ and (for each $P$) $\gamma P$ in a collection of representatives for left cosets of $P$. Here the induced subgraph of $\mathcal{H}(\gamma P)$ on the $\gamma P \times \{0\}$ vertices is identified with (the induced subgraph of) $\gamma P \subset \Cay(\Gamma,S)$ in the natural way. 
\end{defn} \begin{defn} \label{defn:relhyp} $\Gamma$ is hyperbolic relative to $\mathcal{P}$ if and only if the cusped space $X(\Gamma,\mathcal{P},S)$ is $\delta$-hyperbolic (for any compatible generating set $S$; the hyperbolicity constant $\delta$ may depend on $S$.) We will also call $(\Gamma, \mathcal{P})$ a {\bf relatively hyperbolic structure}. \end{defn} We remark that for a fixed relatively hyperbolic structure $(\Gamma, \mathcal{P})$, any two cusped spaces, corresponding to different compatible generating sets $S$, are quasi-isometric \cite[Cor.\,6.7]{Groff}: in particular, the notion above is well-defined independent of the choice of generating set $S$. There is a natural action of $\Gamma$ on the cusped space $X = X(\Gamma,\mathcal{P},S)$; with respect to this action, the quasi-isometry between two cusped spaces $X(\Gamma,\mathcal{P},S_i)$ ($i=1,2$) is $\Gamma$-equivariant. In particular, this gives us a notion of a boundary associated to the data of a relatively hyperbolic group $\Gamma$ and its peripheral subgroups $\mathcal{P}$: \begin{defn} \label{defn:bowditch_bdy} For $\Gamma$ hyperbolic relative to $\mathcal{P}$, the {\bf Bowditch boundary} $\partial (\Gamma, \mathcal{P})$ is defined as the Gromov boundary $\partial_\infty X$ of any cusped space $X = X(\Gamma,\mathcal{P},S)$. \end{defn} By the remarks above, this is well-defined up to homeomorphism, independent of the choice of compatible generating set $S$ \cite[\S9]{Bowditch}. Below, with a fixed choice of $\Gamma$, $\mathcal{P}$ and $S$ as above, for $\gamma, \gamma' \in \Gamma$, $d(\gamma, \gamma')$ will denote the distance between $\gamma$ and $\gamma'$ in the Cayley graph with the word metric, and $|\gamma| := d(\id, \gamma)$ denotes word length in this metric. Similarly, $d_c(\gamma, \gamma')$ denotes distance in the corresponding cusped space and $|\gamma|_c := d_c(\id,\gamma)$ denotes cusped word-length. 
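The logarithmic distance distortion that combinatorial horoballs introduce can be checked by brute force. The following sketch (an illustration of Definition \ref{defn:combhoroball}, not taken from the paper; the function name and the truncation depth $K$ are ours) performs a breadth-first search on the horoball built over a path graph $\Lambda$ with $N$ vertices, so that $d_\Lambda(v,w) = |v-w|$: the endpoints of $\Lambda$, at distance $N-1$ in $\Lambda$ itself, end up roughly $2\log_2 N$ apart in the horoball.

```python
from collections import deque

def horoball_distance(N, K, src, dst):
    """BFS distance in the combinatorial horoball over a path graph
    Lambda with vertices 0..N-1. Horoball vertices are pairs (v, k)
    with 0 <= v < N and 0 <= k <= K."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v, k = queue.popleft()
        if (v, k) == dst:
            return dist[(v, k)]
        nbrs = []
        # vertical edges join (v, k) to (v, k +- 1)
        if k > 0:
            nbrs.append((v, k - 1))
        if k < K:
            nbrs.append((v, k + 1))
        # horizontal edges at level k join v, w with 0 < d_Lambda(v, w) <= 2^k
        for w in range(max(0, v - 2 ** k), min(N, v + 2 ** k + 1)):
            if w != v:
                nbrs.append((w, k))
        for u in nbrs:
            if u not in dist:
                dist[u] = dist[(v, k)] + 1
                queue.append(u)
    return None

N = 64
d = horoball_distance(N, 8, (0, 0), (N - 1, 0))
print(d)  # 12: logarithmic in N, versus N - 1 = 63 in Lambda itself
```

Consistent with Lemma \ref{lem:gm310}, a geodesic realizing this distance climbs about $\log_2 N$ levels, crosses a horizontal segment of bounded length, and descends again.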
\subsection{Geodesics in the cusped space} \label{sub:hat_unhat} Let $\Gamma$ be a finitely-generated group, $\mathcal{P}$ be a malnormal finite collection of finitely-generated subgroups, and let $S = S^{-1}$ be a compatible finite generating set as above. Let $X = X(\Gamma, \mathcal{P}, S)$ be the cusped space, and $\Cay(\Gamma) = \Cay(\Gamma,S)$ the Cayley graph. Here we collect some technical results about geodesics in these spaces that will be useful below. \begin{lem}[{\cite[Lem.\,3.10]{GrovesManning}}] \label{lem:gm310} Let $\mathcal{H} = \mathcal{H}(\Lambda)$ be a combinatorial horoball. Suppose that $x,y \in \mathcal{H}$ are distinct vertices. Then there is a geodesic $\gamma(x,y) = \gamma(y,x)$ between $x$ and $y$ which consists of at most two vertical segments and a single horizontal segment of length at most 3. \end{lem} We will call any such geodesic a {\bf preferred geodesic}. Given a path $\gamma\colon I \to \Cay(\Gamma)$ in the Cayley graph such that $\gamma(I \cap \mathbb{Z}) \subset \Gamma$, we can consider $\gamma$ as a {\bf relative path} $(\gamma, H)$, where $H$ is a subset of $I$ consisting of a disjoint union of finitely many subintervals $H_1, \dots, H_n$ occurring in this order along $I$, such that each $\eta_i := \gamma|_{H_i}$ is a maximal subpath lying in a closed combinatorial horoball $B_i$, and $\gamma|_{I \smallsetminus H}$ contains no edges of $\Cay(\Gamma)$ labelled by a peripheral generator. Similarly, a path $\hat\gamma\colon \hat{I} \to X$ in the cusped space with endpoints in $\Cay(\Gamma) \subset X$ may be considered as a relative path $(\hat\gamma, \hat{H})$, where $\hat{H} = \coprod_{i=1}^n \hat{H}_i$, $\hat{H}_1, \dots, \hat{H}_n$ occur in this order along $\hat{I}$, each $\hat\eta_i := \hat\gamma|_{\hat{H}_i}$ is a maximal subpath in a closed combinatorial horoball $B_i$, and $\hat\gamma|_{\hat{I} \smallsetminus \hat{H}}$ lies inside the Cayley graph.
Below, we will consider only geodesics and quasigeodesic paths $\hat\gamma\colon \hat{I} \to X$ where all of the $\hat\eta_i$ are preferred geodesics (in the sense of Lemma \ref{lem:gm310}.) We will refer to the $\eta_i$ and $\hat\eta_i$ as {\bf peripheral excursions}. We remark that the $\eta_i$, or any other subpath of $\gamma$ in the Cayley graph, may be considered as a word and hence a group element in $\Gamma$; this will be used without further comment below. Given a path $\hat\gamma\colon \hat{I} \to X$ whose peripheral excursions are all preferred geodesics, we may replace each excursion $\hat\eta_i = \hat\gamma|_{\hat{H}_i}$ into a combinatorial horoball with a geodesic path (or, more precisely, a path with geodesic image) $\eta_i = \pi \circ \hat\eta_i$ in the Cayley (sub)graph of the corresponding peripheral subgroup connecting the same endpoints, by omitting the vertical segments of the preferred geodesic $\hat\eta_i$ and replacing the horizontal segment with the corresponding segment at level 0, i.e. in the Cayley graph.\footnote{As a parametrized path this has constant image on the subintervals of $\hat{H}_i$ corresponding to the vertical segments, and travels along the projected horizontal segment at constant speed.} We call this the ``project'' operation, since it involves ``projecting'' paths inside combinatorial horoballs onto the boundaries of those horoballs. This produces a path $\gamma = \pi\circ\hat\gamma\colon \hat{I} \to \Cay(\Gamma)$. Given any path $\alpha$ in the Cayley graph with endpoints $g, h \in \Gamma$, we write $\ell(\alpha)$ to denote $d(g,h)$, i.e.\ distance measured according to the word metric in $\Cay(\Gamma)$. 
We have the following biLipschitz equivalence between cusped distances and suitably-modified distances in the Cayley graph: \begin{prop}[{\cite[Prop.\ 2.12]{reldomreps}}] \label{prop:unhat_distance} Given a geodesic $\hat\gamma\colon \hat{J} \to X$ with endpoints in $\Cay(\Gamma) \subset X$ and whose peripheral excursions are all preferred geodesics, let $\gamma = \pi \circ \hat\gamma\colon \hat{J} \to \Cay(\Gamma)$ be its projected image. Given any subinterval $[a,b] \subset \hat{J}$, consider the subpath $\gamma|_{[a,b]}$ as a relative path $(\gamma|_{[a,b]}, H)$ where $H = (H_1, \dots, H_n)$, and write $\eta_i := \gamma|_{H_i}$; then we have \[ \frac13 \leq \frac{d_c(\gamma(a), \gamma(b))}{\ell(\gamma|_{[a,b]}) - \sum_{i=1}^n \ell(\eta_i) + \sum_{i=1}^n \hat\ell(\eta_i)} \leq \frac{2}{\log 2} + 1 < 4 \] where $\hat\ell(\eta_i) := \max\{\log(\ell(\eta_i)), 1\}$. \end{prop} Below we will occasionally find it useful to consider paths in $\Cay(\Gamma)$ that ``behave metrically like quasi-geodesics in the relative Cayley graph'', in the following sense: \begin{defn} \label{defn:projgeod_depth} Given any path $\gamma\colon I \to \Cay(\Gamma)$ such that $I$ has integer endpoints and $\gamma(I \cap \mathbb{Z}) \subset \Gamma$, define the {\bf depth} $\delta(n) = \delta_\gamma(n)$ of a point $\gamma(n)$ (for any $n \in I \cap \mathbb{Z}$) as \begin{enumerate}[(a)] \item the smallest integer $d \geq 0$ such that at least one of $\gamma(n-d)$, $\gamma(n+d)$ is well-defined (i.e. $\{n-d, n+d\} \cap I \neq \varnothing$) and not in the same peripheral coset as $\gamma(n)$, {\bf or} \item if no such integer exists, $\min\{\sup I - n, n - \inf I\}$. 
\end{enumerate} \end{defn} \begin{defn} \label{defn:metric_proj_qgeod} Given constants $\ubar\upsilon, \bar\upsilon > 0$, an {\bf $(\ubar\upsilon,\bar\upsilon)$-metric quasigeodesic path} is a path $\gamma\colon I \to \Cay(\Gamma)$ with $\gamma(I \cap \mathbb{Z}) \subset \Gamma$ such that for all integers $m, n \in I$, \begin{enumerate}[(i)] \item $ |\gamma(n)^{-1}\gamma(m)|_c \geq \ubar\upsilon^{-1} |m-n| - \ubar\upsilon$, \item $ |\gamma(n)^{-1}\gamma(m)|_c \leq \bar\upsilon(|m-n| + \min\{\delta(m), \delta(n)\} ) + \bar\upsilon$, and \item if $\gamma(n)^{-1} \gamma(n+1) \in P$ for some $P \in \mathcal{P}$, we have $\gamma(n)^{-1} \gamma(n+1) = p_{n,1} \cdots p_{n,\ell(n)}$ where each $p_{n,i}$ is a peripheral generator of $P$, and $$2^{\delta(n)-1} \leq \ell(n) := |\gamma(n)^{-1}\gamma(n+1)| \leq 2^{\delta(n)+1}.$$ \end{enumerate} \end{defn} The terminology comes from the following fact: given a geodesic segment $\hat\gamma$ in the cusped space with endpoints in $\Cay(\Gamma)$, we can project the entire segment to the Cayley graph and reparametrize the projected image to be a metric quasigeodesic path --- the idea being that in such a reparametrization, the increments correspond, approximately, to linear increments in cusped distance: see the discussion in \cite[\S2.3]{reldomreps}, and in particular Prop.\ 2.16 there for more details. \subsection{Floyd boundaries} \label{sub:Floyd_bdy} Let $\Gamma$ be a finitely-generated group, and $S$ a finite generating set giving a word metric $|\cdot|$. A {\bf Floyd boundary} $\partial_f \Gamma$ for $\Gamma$ is a boundary for $\Gamma$ meant to generalize the ideal boundary of a Kleinian group. Its construction uses the auxiliary data of a {\bf Floyd function}, which is a function $f\colon \mathbb{N} \to \mathbb{R}_{>0}$ satisfying \begin{enumerate}[(i)] \item $\sum_{n=1}^\infty f(n) < \infty$, and \item there exists $m > 0$ such that $\frac1m \leq \frac{f(k+1)}{f(k)} \leq 1$ for all $k \in \mathbb{N}$. 
\end{enumerate} Given such a function, there exists a metric $d_f$ on $\Gamma$ defined by setting $d_f(g,h) = f(\max\{|g|, |h|\})$ if $g,h$ are adjacent vertices in $\Cay(\Gamma,S)$, and considering the resulting path metric. Then the Floyd boundary $\partial_f \Gamma$ with respect to $f$ is given by \[ \partial_f \Gamma := \bar\Gamma \smallsetminus \Gamma \] where $\bar\Gamma$ is the metric completion of $\Gamma$ with respect to the metric $d_f$. Below, the Floyd boundary, in particular the ability of the Floyd function to serve as a sort of ``distance to infinity'', will be useful as a tool in the proof of Theorem \ref{thm:reldom_eiggap}. It may be possible, with more work, to replace the role of the Floyd boundary in that proof with the Bowditch boundary. The Floyd boundary $\partial_f \Gamma$ is called {\bf non-trivial} if it has at least three points. Gerasimov and Potyagailo have studied Floyd boundaries of relatively hyperbolic groups: \begin{thm}[\cite{GerFloyd}, \cite{GP_FloydRH}] \label{thm:relhyp_floydbdy} Suppose we have a non-elementary relatively hyperbolic group $\Gamma$ which is hyperbolic relative to $\mathcal{P}$. Then there exists a Floyd function $f$ such that $\partial_f \Gamma$ is non-trivial, and moreover \begin{enumerate}[(a)] \item there exists a continuous equivariant map $F\colon \partial_f \Gamma \to \partial(\Gamma,\mathcal{P})$, such that \item for any parabolic point $p \in \partial(\Gamma, \mathcal{P})$, we have $F^{-1}(p) = \partial_f(\Stab_\Gamma p)$, and if there exist $a \neq b$ such that $F(a) = F(b) = p$, then $p$ is parabolic. \end{enumerate} \end{thm} \begin{rmk} \label{rmk:floyd_relhyp} It is an open question whether every group with a non-trivial Floyd boundary is relatively hyperbolic --- see e.g.\ \cite{Ivan_thickFloyd}. \end{rmk} For more details, including justifications for some of the assertions above, we refer the reader to \cite{Floyd} and \cite{Karlsson}.
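As a concrete illustration (ours, not from the paper): $f(n) = 2^{-n}$ is a Floyd function, satisfying (i) since it is summable and (ii) with $m = 2$. On $\Cay(\mathbb{Z}, \{\pm 1\})$, the edge between $k$ and $k+1$ (for $k \geq 0$) then gets $d_f$-length $2^{-(k+1)}$, so the sequence $(n)_{n \geq 0}$ is $d_f$-Cauchy and converges to a point of $\partial_f \mathbb{Z}$ at distance $1$ from the identity; this is the sense in which $f$ measures ``distance to infinity''. The following sketch computes these distances directly:

```python
def floyd_edge_length(g, h, f):
    # d_f-length assigned to the Cayley-graph edge between adjacent g, h
    return f(max(abs(g), abs(h)))

def floyd_distance(g, h, f):
    """d_f on Cay(Z, {+-1}): on a line the direct path realizes the
    path metric, since any backtracking only adds positive length."""
    lo, hi = min(g, h), max(g, h)
    return sum(floyd_edge_length(k, k + 1, f) for k in range(lo, hi))

f = lambda n: 2.0 ** (-n)  # a Floyd function: summable, ratios f(k+1)/f(k) = 1/2

# d_f(0, n) converges as n -> infinity: the points n >= 0 accumulate at a
# single Floyd boundary point, at d_f-distance sum_{j >= 1} 2^{-j} = 1 from 0.
print(floyd_distance(0, 30, f))   # close to 1
print(floyd_distance(20, 30, f))  # tiny: 20 and 30 are d_f-close
```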
\subsection{Gromov products and translation lengths in hyperbolic spaces} \label{sub:stablen_grop} We collect and state here, for the reader's convenience, assorted facts about Gromov products and translation lengths in Gromov-hyperbolic spaces that we use below, in particular in and around the statement and proof of Theorem \ref{thm:reldom_eiggap}. Given $X$ a proper geodesic metric space, $x_0 \in X$ a fixed basepoint, and $\gamma$ an isometry of $X$, we define the {\bf translation length} of $\gamma$ as \[ \ell_X(\gamma) := \inf_{x \in X} d_X(\gamma x, x) \] and the {\bf stable translation length} of $\gamma$ as \[ |\gamma|_{X,\infty} := \lim_{n\to\infty} \frac{d_X(\gamma^n x_0, x_0)}n .\] When $X$ is a $\delta$-hyperbolic space, these two quantities are coarsely equivalent: \begin{prop}[{\cite[Chap.\ 10, Prop.\ 6.4]{CDP}}] \label{prop:Xhyp_translen_stranslen} For $X$ a $\delta$-hyperbolic metric space, the quantities $\ell_X(\gamma)$ and $|\gamma|_{X,\infty}$ defined above satisfy \[ \ell_X(\gamma) - 16\delta \leq |\gamma|_{X,\infty} \leq \ell_X(\gamma) .\] \end{prop} The {\bf Gromov product} with respect to $x_0$ is the function $\langle \cdot, \cdot \rangle_{x_0}\colon X \times X \to \mathbb{R}$ defined by \[ \langle x, y \rangle_{x_0} := \frac12 \left( d_X(x,x_0) + d_X(y,x_0) - d_X(x,y) \right) .\] There is a relation between the Gromov product, the stable translation length $|\gamma|_{X,\infty}$, and the quantity $|\gamma|_X = d_X(\gamma x_0, x_0)$, given by \begin{lem} \label{lem:grop_len_stranlen} Given $X$ a proper geodesic metric space, $x_0 \in X$ a basepoint, and $\gamma$ an isometry of $X$, we can find a sequence of integers $(m_i)_{i\in\mathbb{N}}$ such that \[ 2 \lim_{i\to\infty} \langle \gamma^{m_i}, \gamma^{-1} \rangle_{x_0} \geq |\gamma|_X - |\gamma|_{X,\infty}.\] \begin{proof} By the definition of the stable translation length, we can find a sequence $(m_i)_{i\in\mathbb{N}}$ such that \[ \lim_{i\to\infty} \left(|\gamma^{m_i+1}|_X - |\gamma^{m_i}|_X \right)
\leq |\gamma|_{X,\infty} .\] By the definition of the Gromov product, \[ 2 \langle \gamma^{m_i}, \gamma^{-1} \rangle_{x_0} := |\gamma^{m_i}|_X + d_X(\gamma^{-1}x_0, x_0) - d_X(\gamma^{m_i} x_0,\gamma^{-1} x_0) .\] Since $\gamma$ acts isometrically on $X$, $d_X(\gamma^{m_i} x_0,\gamma^{-1} x_0) = |\gamma^{m_i+1}|_X$ and $d_X(\gamma^{-1}x_0, x_0) = |\gamma|_X$. Then, passing to the limit, we have \[ 2 \lim_{i\to\infty} \langle \gamma^{m_i}, \gamma^{-1} \rangle_{x_0} = |\gamma|_X - \lim_{i\to\infty} \left( |\gamma^{m_i+1}|_X - |\gamma^{m_i}|_X \right) \geq |\gamma|_X - |\gamma|_{X,\infty} \] as desired. \end{proof} \end{lem} \subsection{Singular value decompositions} \label{sub:SVD} We collect here facts about singular values and Cartan decomposition in $\SL(d,\mathbb{R})$. The defining conditions for our representations will be phrased, in the first instance, in terms of these, and more generally they will be helpful for understanding the geometry associated to our representations. Given a matrix $g \in \GL(d,\mathbb{R})$, let $\sigma_i(g)$ (for $1 \leq i \leq d$) denote its $i$\textsuperscript{th} singular value, and write $U_i(g)$ to denote the span of the $i$ largest axes in the image of the unit sphere in $\mathbb{R}^d$ under $g$, and $S_i(g) := U_i(g^{-1})$. Note $U_i(g)$ is well-defined if and only if we have a singular-value gap $\sigma_i(g) > \sigma_{i+1}(g)$. More algebraically, given $g \in \GL(d,\mathbb{R})$, we may write $g = KAL$, where $K$ and $L$ are orthogonal matrices and $A$ is a diagonal matrix with nonincreasing positive entries down the diagonal. $A$ is uniquely determined, and we may define $\sigma_i(g) = A_{ii}$; $U_i(g)$ is given by the span of the first $i$ columns of $K$.
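Numerically, $\sigma_i(g)$ and $U_1(g)$ can be read off from a standard SVD routine. The following sketch (ours, not from the paper; the test matrix is an arbitrary element of $\SL(2,\mathbb{R})$) also checks the fact, underlying the eigenvalue gap conditions appearing later, that eigenvalue moduli are asymptotic versions of singular values: $\lambda_1(g) = \lim_{n\to\infty} \sigma_1(g^n)^{1/n}$.

```python
import numpy as np

# An element of SL(2, R) with eigenvalues 2 and 1/2; it is not normal,
# so its singular values differ from its eigenvalue moduli.
g = np.array([[2.0, 1.0],
              [0.0, 0.5]])
assert np.isclose(np.linalg.det(g), 1.0)

K, sigma, Lt = np.linalg.svd(g)  # g = K @ diag(sigma) @ Lt, sigma nonincreasing
U1 = K[:, 0]                     # spans U_1(g), the most-expanded direction

print(sigma)                        # sigma_1 >= sigma_2, with sigma_1 * sigma_2 = |det g| = 1
print(np.log(sigma[0] / sigma[1]))  # the singular value gap log(sigma_1 / sigma_2)

# lambda_1(g) = lim_n sigma_1(g^n)^(1/n): the n-th root estimate approaches 2.
n = 60
sigma_n = np.linalg.svd(np.linalg.matrix_power(g, n), compute_uv=False)
print(sigma_n[0] ** (1.0 / n))  # approaches lambda_1(g) = 2
```

Here the $n$\textsuperscript{th}-root estimate converges at rate $O(1/n)$ in the exponent, which is why the stable (translation-length) quantities in the eigenvalue gap conditions are the natural asymptotic counterparts of the singular value conditions.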
For $g \in \SL(d,\mathbb{R})$, this singular-value decomposition is a concrete manifestation of a more general Lie-theoretic object, a (particular choice of) Cartan decomposition $\SL(d,\mathbb{R}) = \SO(d) \cdot \exp(\mathfrak{a}^+) \cdot \SO(d)$, where $\SO(d)$ is the maximal compact subgroup of $\SL(d,\mathbb{R})$, and $\mathfrak{a}^+$ is a positive Weyl chamber. We recall that there is an adjoint action $\Ad$ of $\SL(d,\mathbb{R})$ on $\mathfrak{sl}(d,\mathbb{R})$. We will occasionally write (given $g = KAL$ as above) \[ a(g) := (\log A_{11}, \dots, \log A_{dd}) = (\log \sigma_1(g), \dots, \log \sigma_d(g)) ;\] we note that the norm $\|a(g)\| = \sqrt{(\log \sigma_1(g))^2 + \dots + (\log \sigma_d(g))^2}$ is equal to the distance $d(o, g \cdot o)$ in the associated symmetric space $\SL(d,\mathbb{R})/\SO(d)$ (see e.g. formula (7.3) in \cite{BPS}.) \subsection{Regular ideal points and the projective space} \label{sub:regular_ideal} Finally, we collect here some remarks about a subset of the visual boundary which will be relevant to us, and its relation to the projective space as a flag space boundary. Given fixed constants $C_r, c_r > 0$, a matrix $g \in \SL(d,\mathbb{R})$ will be called {\bf $(P_1,C_r,c_r)$-regular} if it satisfies \begin{align} \log \frac{\sigma_1}{\sigma_2}(g) \geq C_r \log\frac{\sigma_1}{\sigma_d}(g) - c_r . \label{ineq:unif_reg} \end{align} Recall that the visual boundary of the symmetric space $\SL(d,\mathbb{R}) / \SO(d)$ consists of equivalence classes of geodesic rays, where two rays are equivalent if they remain bounded distance apart. For any complete non-positively curved Riemannian manifold $X$, such as our symmetric space, the visual boundary is homeomorphic to a sphere, and may be identified with the unit sphere around any basepoint $o$ by taking geodesic rays $\xi\colon [0,\infty) \to X$ based at $o$ and identifying $\xi(1)$ on the unit sphere with $\lim_{t\to\infty} \xi(t)$ in the visual boundary. 
The set of all points in this visual boundary which are accumulation points of sequences $(B_n \cdot o)$, where $o$ varies over all possible basepoints in the symmetric space and $(B_n)$ over all divergent sequences of matrices that are $(P_1,C_r,c_r)$-regular for some $c_r > 0$, will be called the {\bf $(P_1,C_r)$-regular ideal points}. For fixed $C_r$, the set of $(P_1,C_r)$-regular ideal points is compact; indeed it has the structure of a fiber bundle over the projective space $\mathbf{P}(\mathbb{R}^d)$ with compact fibers. There is a map $\pi$ (a fibration) from the set of $(P_1,C_r)$-regular ideal points to $\mathbf{P}(\mathbb{R}^d)$ given by taking $\lim_n g_n \cdot o$ to $\lim_{n\to\infty} U_1(g_n)$ (see \cite[\S\S2.5.1 \& 4.6]{KLP}, where this is stated in slightly different language, or \cite[Th.\,7.2]{reldomreps}). The map $\pi$ is Lipschitz, with Lipschitz constant depending only on the regularity constant $C_r$ and the choice of basepoint $o$ implicit in the measurement of the singular values \cite[\S4.4]{Max}. \section{Relatively dominated representations} \label{sec:D+-EC} \begin{defn}[{\cite[\S4]{reldomreps}}] \label{defn:reldomrep} Let $\Gamma$ be a finitely-generated torsion-free group which is hyperbolic relative to a collection $\mathcal{P}$ of proper infinite subgroups. Let $S$ be a compatible generating set, and let $X = X(\Gamma, \mathcal{P}, S)$ be the corresponding cusped space (see Definitions \ref{defn:combhoroball} and \ref{defn:cuspedspace} above.) As above, let $d_c$ denote the metric on $X$, and $|\cdot|_c := d_c(\id, \cdot)$ denote the cusped word-length.
A representation $\rho\colon\Gamma \to \GL(d,\mathbb{R})$ is {\bf $P_1$-dominated relative to $\mathcal{P}$}, with lower domination constants $\ubar{C}, \ubar{\mu} > 0$, if it satisfies \begin{itemize} \item (D$-$) for all $\gamma \in \Gamma$, $\frac{\sigma_1}{\sigma_{2}}(\rho(\gamma)) \geq \ubar{C} e^{\ubar\mu|\gamma|_c}$, \end{itemize} and the images of peripheral subgroups under $\rho$ are well-behaved, meaning that the following three conditions are satisfied: \begin{itemize} \item (D+) there exist constants $\bar{C}, \bar\mu > 0$ such that $\frac{\sigma_1}{\sigma_d}(\rho(\gamma)) \leq \bar{C} e^{\bar\mu|\gamma|_c}$ for every $\gamma \in \Gamma$; \item (unique limits) for each $P \in \mathcal{P}$, there exists $\xi_\rho(P) \in \mathbf{P}(\mathbb{R}^d)$ and $\xi^*_\rho(P) \in \mathrm{Gr}_{d-1}(\mathbb{R}^d)$ such that for every sequence $(\eta_n) \subset P$ with $\eta_n \to \infty$, we have $\lim_{n\to\infty} U_1(\rho(\eta_n)) = \xi_\rho(P)$ and $\lim_{n\to\infty} U_{d-1}(\rho(\eta_n)) = \xi^*_\rho(P)$; \item (uniform transversality) for every $P, P' \in \mathcal{P}$ and $\gamma \in \Gamma$, $\xi_\rho(P) \neq \xi_\rho(\gamma P'\gamma^{-1})$. Moreover, for every $\ubar\upsilon,\bar\upsilon>0$, there exists $\delta_0 > 0$ such that for all $P, P' \in \mathcal{P}$ and $g, h \in \Gamma$ such that there exists a bi-infinite $(\ubar\upsilon,\bar\upsilon)$-metric quasigeodesic path $\eta gh \eta'$ where $\eta'$ is in $P'$ and $\eta$ is in $P$, we have \[ \sin \angle (g^{-1} \xi_\rho(P), h\, \xi^*_\rho(P')) > \delta_0 .\] \end{itemize} \end{defn} \begin{rmk} Since $\Gamma$ is finitely-generated, so are its peripheral subgroups, by \cite[Prop.\,4.28 \& Cor.\,4.32]{DGO}.
\end{rmk} \begin{rmk} It is also possible to formulate the definition without assuming relative hyperbolicity, if one imposes additional hypotheses on the peripheral subgroups $\mathcal{P}$; it is then possible to show that any group admitting such a representation must be hyperbolic relative to $\mathcal{P}$: see \cite{reldomreps} for details. \end{rmk} The definition which originally appeared in \cite{reldomreps} also had an additional ``quadratic gaps'' hypothesis, as part of the definition of the peripheral subgroups having well-behaved images. The only input of this assumption into the subsequent results there was in \cite[Lem.\,5.4]{reldomreps}; the next proposition obtains the conclusion of that lemma from the other hypotheses (not including relative hyperbolicity), without using the quadratic gaps hypothesis. \begin{defn} Let $\alpha\colon\mathbb{Z} \to \Cay(\Gamma)$ be a bi-infinite path with $\alpha(\mathbb{Z}) \subset \Gamma$. We define the sequence \begin{align*} x_\alpha & = ( \dots A_{a-1}, \dots, A_{-1}, A_0, \dots, A_{b-1}, \dots) \\ & := \resizebox{.92\hsize}{!}{$(\dots, \rho(\alpha(a)^{-1} \alpha(a-1)), \dots, \rho(\alpha(0)^{-1} \alpha(-1)), \rho(\alpha(1)^{-1} \alpha(0)), \dots, \rho(\alpha(b)^{-1} \alpha(b-1)), \dots )$} \end{align*} and call this the {\bf matrix sequence associated to $\alpha$}. \end{defn} \begin{prop} \label{prop:D+-_EC} Given a representation $\rho\colon\Gamma \to \SL(d,\mathbb{R})$ satisfying (D$\pm$), and given $\ubar\upsilon, \bar\upsilon > 0$, there exist constants $C \geq 1$ and $\mu >0$, depending only on the representation $\rho$ and $\ubar\upsilon, \bar\upsilon$, such that for any matrix sequence $x = x_{\gamma}$ associated to a bi-infinite $(\ubar\upsilon,\bar\upsilon)$-metric quasigeodesic path $\gamma$ with $\gamma(0) = \id$, \begin{align*} d(U_1(A_{k-1} \cdots A_{k-n}), U_1(A_{k-1} \cdots A_{k-(n+1)})) & \leq C e^{-n\mu} \\ d(S_{d-1}(A_{k+n-1} \cdots A_k), S_{d-1}(A_{k+n} \cdots A_k)) & \leq C e^{-n\mu} .
\end{align*} \end{prop} \begin{proof} Given (D$\pm$), there exist $C_r, c_r > 0$ such that inequality (\ref{ineq:unif_reg}) is satisfied for all $\gamma \in \Gamma$. Specifically, we can take $C_r = {\ubar\mu}/{\bar\mu}$ and $c_r = ({\ubar\mu}/{\bar\mu}) \log \bar{C} - \log \ubar{C}$, where $\ubar{C},\ubar\mu,\bar{C},\bar\mu$ are the constants coming from the (D$\pm$) conditions. In the language of Kapovich--Leeb--Porti --- see \cite{KLP}, or \cite{KL} for the relative case; we adapt the relevant parts of this language and framework here --- $\rho(\Gamma)$ is a uniformly regular subgroup of $\SL(d,\mathbb{R})$. Hence $\rho(\gamma)$ is $(P_1,C_r,c_r)$-regular, in the sense of \S\ref{sub:regular_ideal}, for all $\gamma \in \Gamma$, and given a divergent sequence $(\gamma_n)_n$, the sequence $\rho(\gamma_n) \cdot o$ accumulates only on $(P_1,C_r)$-regular ideal points in the visual boundary. Roughly speaking, geodesics converging to $(P_1,C_r)$-regular ideal points have as many hyperbolic directions as possible in the symmetric space, and thus flag convergence along these geodesics should occur exponentially quickly, just as in the hyperbolic case. This intuition can be made precise with more work, as follows: Recall that we have a Lipschitz map $\pi$ from the set of $(P_1,C_r)$-regular ideal points to $\mathbf{P}(\mathbb{R}^d)$, with Lipschitz constant depending only on the regularity constant $C_r$ and the choice of basepoint $o$ implicit in the measurement of the singular values.
Moreover, since $\rho(\gamma)$ is $(P_1,C_r,c_r)$-regular for any $\gamma \in \Gamma$, given the Cartan decomposition $\rho(\gamma) = K_\gamma \cdot \exp(a(\rho(\gamma))) \cdot L_\gamma$, we have \[ \Xi_\rho(\gamma) = \pi \left( \lim_{n\to\infty} K_\gamma \cdot \exp(n a(\rho(\gamma))) \cdot L_\gamma \cdot o \right) .\] Thus, given any sequence $(\gamma_n) \subset \Gamma$, we have \[ d(\Xi_\rho(\gamma_n), \Xi_\rho(\gamma_m)) \leq C_{Lip} \cdot \sin \angle \left( \Ad(K_{\gamma_n})\cdot a(\rho(\gamma_n)), \Ad(K_{\gamma_m})\cdot a(\rho(\gamma_m)) \right) .\] Now, if $x = x_\gamma = (A_n)_{n\in\mathbb{N}}$ is a matrix sequence associated to a bi-infinite $(\ubar\upsilon,\bar\upsilon)$-metric quasigeodesic path $\gamma$ with $\gamma(0) = \id$, then $A_{k-1} \dots A_{k-n} = \rho(\gamma(k)^{-1} \gamma(k-n))$. We write $\rho(\gamma(k)^{-1} \gamma(k-n)) = K_{k,n} \cdot \exp(a(k,n)) \cdot L_{k,n}$ to denote the parts of the Cartan decomposition. By $(P_1,C_r,c_r)$-regularity and the higher-rank Morse lemma \cite[Th.\,1.1]{KLP_Morse}, the limit $$\lim_{n\to\infty} U_1(A_{k-1} \cdots A_{k-n}) = \lim_{n\to\infty} K_{k,n} \langle e_1 \rangle = \lim_{n\to\infty} \Ad(K_{k,n})\cdot a(k,n)$$ exists, and we have a bound $C_a$ on the distance\footnote{For readers more acquainted with the language of Kapovich--Leeb--Porti: this is the distance to the Weyl cone over the $C_r$-regular open star of $\lim_n K_{k,n} \langle e_1 \rangle$.} from $A_{k-1} \cdots A_{k-n} \cdot o$ to a nearest point on any $(P_1,C_r)$-regular ray $(g_n \cdot o)$ starting at $o$ such that $\lim_{n\to\infty} U_1(g_n) = \lim_n K_{k,n} \langle e_1 \rangle$ (below, we refer to any such point as $\pi_{\lim} A_{k-1} \cdots A_{k-n} \cdot o$), where $C_a$ depends only on $C_r, c_r$ and $\ubar\upsilon,\bar\upsilon$. 
Then, by \cite[Lem.\,4.8]{Max} applied with $p=o$ our basepoint, $\alpha_0 = C_r$, $\tau$ a model Weyl chamber corresponding to the first singular value gap, $q = A_{k-1} \cdots A_{k-n} \cdot o$, the point $r = \pi_{\lim}\, q$, the constant $2l = \|a(k,n)\| \geq \ubar\upsilon^{-1} n - \ubar\upsilon$ and $D = C_a$, we have \begin{align*} \sin \angle \left( \Ad(K_{k,n}) \, a(k,n), \lim_n K_{k,n} \langle e_1 \rangle \right) & = \sin \angle \left( \frac12 \Ad(K_{k,n})\, a(k,n), \lim_n K_{k,n} \langle e_1 \rangle \right) \\ & \leq \frac{d(q/2,\pi_{\lim}q/2)}{d(o,\pi_{\lim} q/2)} \\ & \leq \frac{2C_a e^{C_a / \sqrt{d} + \ubar\upsilon/2} e^{-(C_r / 2\ubar\upsilon) n}}{d(o,\pi_{\lim}\, q/2)} \\ & \leq 2C_a e^{C_a / \sqrt{d} + \ubar\upsilon/2} e^{-(C_r / 2\ubar\upsilon) n} \end{align*} once $n$ is sufficiently large, where ``sufficiently large'' depends only on the dimension $d$, our constants $C_r, C_a$ and choice of basepoint $o$; here $q/2$ denotes the midpoint of $oq$, which can be written as \[ K_{k,n} \cdot \exp\left(\frac12 a(k,n)\right) \cdot L_{k,n} \cdot o . \] Hence we can find $\hat{C} \geq 2C_a e^{C_a / \sqrt{d} + \ubar\upsilon/2}$ such that $$ \sin \angle \left( \Ad(K_{k,n}) \, a(k,n), \lim_n K_{k,n} \langle e_1 \rangle \right) \leq \hat{C} e^{-(C_r / 2\ubar\upsilon) n} $$ for all $n$, and so $d\left(U_1(A_{k-1} \cdots A_{k-n}), U_1(A_{k-1} \cdots A_{k-n-1}) \right)$ is bounded above by \begin{align*} & C_{Lip} \sin \angle \left( \Ad(K_{k,n}) \, a(k,n), \Ad(K_{k,n+1}) \, a(k,n+1) \right) \\ & \quad \leq C_{Lip} \hat{C} \left( 1+ e^{-C_r/2\ubar\upsilon} \right) e^{-(C_r/2\ubar\upsilon) n} . \end{align*} This gives us the desired bound with \begin{align*} \mu = \frac12 C_r \ubar\upsilon^{-1} = \frac12 \ubar\mu (\bar\mu \ubar\upsilon)^{-1} && C = C_{Lip} \hat{C} \left( 1+ e^{-\mu} \right) . 
\end{align*} The analogous bound for $d\left(S_{d-1}(A_{k+n-1} \cdots A_k), S_{d-1}(A_{k+n} \cdots A_k) \right)$ can be obtained by arguing similarly, or by working with the dual representation --- for the details of this part we refer the interested reader to the end of the proof of \cite[Lem.\ 5.4]{reldomreps}. \end{proof} \section{A characterisation using eigenvalue gaps} \label{sec:reldom_eiggap} Suppose $\Gamma$ is hyperbolic relative to $\mathcal{P}$. We have, as above, the cusped space $X = X(\Gamma,\mathcal{P})$, which is a $\delta$-hyperbolic space on which $\Gamma$ acts isometrically and properly. We define $|\cdot|_{c,\infty}$ to be the stable translation length on this space, i.e. \[ |\gamma|_{c,\infty} := \lim_{n\to\infty} \frac{|\gamma^n|_c}n\] where $|\cdot|_c := d_X(\id,\cdot)$ as above. We remark that by Proposition \ref{prop:Xhyp_translen_stranslen} the eigenvalue gap conditions below may be equivalently formulated in terms of the translation length $\ell_X(\gamma)$. Given $A \in \GL(d,\mathbb{R})$, let $\lambda_i(A)$ denote the magnitude of the $i$\textsuperscript{th} largest eigenvalue of $A$. We will prove \begin{thm} \label{thm:reldom_eiggap} Let $\Gamma$ be hyperbolic relative to $\mathcal{P}$. A semisimple representation $\rho\colon \Gamma \to \SL(d,\mathbb{R})$ satisfies (D$\pm$) (as in Definition \ref{defn:reldomrep}) if and only if it satisfies the following two conditions: \begin{itemize} \item ($\mathrm{D}^\lambda_-$) there exist constants $\ubar{C}, \ubar\mu > 0$ such that $$\frac{\lambda_1}{\lambda_2}(\rho(\gamma)) \geq \ubar{C} e^{\ubar\mu |\gamma|_{c,\infty}}$$ for all $\gamma \in \Gamma$, \item ($\mathrm{D}^\lambda_+$) there exist constants $\bar{C}, \bar\mu > 0$ such that $$\frac{\lambda_1}{\lambda_d}(\rho(\gamma)) \leq \bar{C} e^{\bar\mu |\gamma|_{c,\infty}}$$ for all $\gamma \in \Gamma$. \end{itemize} \end{thm} \begin{cor} \label{cor:reldom_eiggap} Let $\Gamma$ be hyperbolic relative to $\mathcal{P}$. 
A representation $\rho\colon \Gamma \to \SL(d,\mathbb{R})$ is $P_1$-dominated relative to $\mathcal{P}$ if and only if it satisfies the ($\mathrm{D}^\lambda_\pm$) conditions from Theorem \ref{thm:reldom_eiggap} and the unique limits and uniform transversality conditions from Definition \ref{defn:reldomrep}. \end{cor} \begin{rmk} We remark that the ($\mathrm{D}^\lambda_+$) condition really is equivalent to requiring that if $\eta$ is a peripheral element (so $|\eta|_{c,\infty} = 0$), then all the eigenvalues of $\rho(\eta)$ have absolute value 1. If $\gamma$ is such that $|\gamma|_{c,\infty} = |\gamma|_\infty$, then the condition always holds because $\Gamma$ is finitely-generated; more generally we have \begin{align*} \lambda_1(\rho(\gamma\eta)) & \leq \lambda_1(\rho(\gamma)) \lambda_1(\rho(\eta)),\mbox{ and} \\ \lambda_d(\rho(\gamma\eta)) &= \lambda_1(\rho(\eta^{-1} \gamma^{-1}) )^{-1} \geq \lambda_d(\rho(\gamma)) \lambda_d(\rho(\eta)), \end{align*} and we can use these to piece together the condition on non-peripheral and peripheral parts of the word for $\gamma$. \end{rmk} \begin{proof}[Proof of Theorem \ref{thm:reldom_eiggap}] We recall the identity $\log \lambda_i(A) = \lim_{n\to\infty} \frac{\log \sigma_i(A^n)}n$. Given (D$-$), we have \begin{align*} (\log \lambda_1 - \log \lambda_2) (\rho(\gamma)) & = \lim_{n\to\infty} \frac1n (\log \sigma_1 - \log \sigma_2) (\rho(\gamma^n)) \\ & \geq \lim_{n\to\infty} \frac1n (\log \ubar{C} + \ubar\mu |\gamma^n|_c) = \ubar\mu |\gamma|_{c,\infty} \end{align*} and so \begin{align*} \frac{\lambda_1}{\lambda_2}(\rho(\gamma)) & \geq e^{\ubar\mu |\gamma|_{c,\infty}}.
\end{align*} Given (D+), we have \begin{align*} (\log \lambda_1 - \log \lambda_d) (\rho(\gamma)) & = \lim_{n\to\infty} \frac1n (\log \sigma_1 - \log \sigma_d) (\rho(\gamma^n)) \\ & \leq \lim_{n\to\infty} \frac1n (\log \bar{C} + \bar\mu |\gamma^n|_c) = \bar\mu |\gamma|_{c,\infty} \end{align*} and so \begin{align*} \frac{\lambda_1}{\lambda_d}(\rho(\gamma)) & \leq e^{\bar\mu |\gamma|_{c,\infty}}. \end{align*} Hence (D$\pm$) implies (D${}^\lambda_\pm$). In the other direction, we remark that by \cite[Th.\,5.3]{Kostas_AnosovWEG} together with Theorem \ref{thm:relhyp_floydbdy}, $\Gamma$ satisfies property U, i.e. there exist a finite subset $F \subset \Gamma$ and a constant $L > 0$ such that for every $\gamma \in \Gamma$ there exists $f \in F$ with \begin{equation} \label{eq:propweakU} |f\gamma|_\infty \geq |f\gamma| - L . \end{equation} We observe that, since $|(f\gamma)^n| \geq n |f\gamma|_\infty$ for every $n$, this means that $|(f\gamma)^n| \geq n|f\gamma| - Ln$ for all $n$; in words, there is bounded cancellation between the start and end of $f\gamma$. We will now leverage this to obtain a relative version of the previous inequality, namely that $$\frac1n |(f\gamma)^n|_c \geq \frac1{12} |f\gamma|_c - L$$ for all $n$. To do so, we will impose some additional requirements on the finite set $F$ appearing above. To describe these requirements, and to prove our relative inequality, we will use the framework and terminology described in \S\ref{sub:hat_unhat}. Abusing notation slightly, write $f\gamma$ to denote a geodesic path from $\id$ to $f\gamma$ in the Cayley graph. Consider this $f\gamma$ as a relative path $(f\gamma,H)$ with $H = H_1 \cup \dots \cup H_k$, and write $\eta_i = f\gamma|_{H_i}$, so each $\eta_i$ is a peripheral excursion.
\begin{lem} \label{lem:adapt_T53} Given $\Gamma$ a non-elementary relatively hyperbolic group, there exist a finite subset $F \subset \Gamma$ and a constant $L > 0$ such that for every $\gamma \in \Gamma$ there exists $f \in F$ such that \[ |f\gamma|_\infty \geq |f\gamma| - L \] and the peripheral excursions of $(f\gamma)^n$ are precisely $n$ copies of the peripheral excursions of $f\gamma$. \end{lem} We defer the proof of this lemma and first complete the proof of the theorem assuming it. By Proposition \ref{prop:unhat_distance}, \[ |f\gamma|_c \leq 4 \left( \ell(f\gamma) - \sum_{i=1}^k\ell(\eta_i) + \sum_{i=1}^k \hat\ell(\eta_i) \right).\] By (\ref{eq:propweakU}), $\ell((f\gamma)^n) = |(f\gamma)^n| \geq n|f\gamma| - Ln$ for all $n$. Crucially, by our assumption on the peripheral excursions of $(f\gamma)^n$, the total length of peripheral excursions for $(f\gamma)^n$ remains $n \sum_{i=1}^k \ell(\eta_i)$, and the sum of the resulting $\hat\ell$ remains $n \sum_{i=1}^k \hat\ell(\eta_i)$. Now we may use Proposition \ref{prop:unhat_distance} to conclude that \[ |(f\gamma)^n|_c \geq \frac13 \left(n\ell(f\gamma) - n\sum_{i=1}^k \ell(\eta_i) + n \sum_{i=1}^k \hat\ell(\eta_i) - Ln \right) .\] But this implies \begin{align*} |f\gamma|_{c,\infty} & = \lim_{n\to\infty} \frac1n |(f\gamma)^n|_c \\ & \geq \frac13 \left( \ell(f\gamma) - \sum_{i=1}^k \ell(\eta_i) + \sum_{i=1}^k \hat\ell(\eta_i) - L \right) \\ & > \frac1{12} |f\gamma|_c - L, \end{align*} as desired. On the other hand it is clear that $|f\gamma|_{c,\infty} \leq |f\gamma|_c$.
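To make the constants in the last two steps explicit, write $X := \ell(f\gamma) - \sum_{i=1}^k \ell(\eta_i) + \sum_{i=1}^k \hat\ell(\eta_i)$; then the first application of Proposition \ref{prop:unhat_distance} gives $X \geq \frac14 |f\gamma|_c$, and the displayed chain amounts to

```latex
% Bookkeeping for the final inequality, with X as above:
\begin{align*}
|f\gamma|_{c,\infty}
 \;\geq\; \frac13\left( X - L \right)
 \;\geq\; \frac1{12}\,|f\gamma|_c - \frac{L}{3}
 \;>\; \frac1{12}\,|f\gamma|_c - L .
\end{align*}
```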
Now recall (for $\rho$ a semisimple representation), by \cite[Th.\,2.6]{Kostas_AnosovWEG}, there exist a finite $F' \subset \Gamma$ and $C > 0$ such that for every $\gamma \in \Gamma$ there exists $f'\in F'$ such that for every $i$, \[ |\log \lambda_i(\rho(\gamma f')) - \log \sigma_i(\rho(\gamma)) | \leq C .\] Now, given ($\mathrm{D}^\lambda_+$), we have \begin{align*} \frac{\sigma_1}{\sigma_d}(\rho(\gamma)) & \leq e^{2C} \cdot \frac{\lambda_1}{\lambda_d}(\rho(\gamma f')) \\ & \leq e^{2C} \bar{C} e^{\bar\mu |\gamma f'|_{c,\infty}} \leq e^{2C} \bar{C} e^{\bar\mu |\gamma f'|_c} \\ & \leq e^{2C} \bar{C} (C_{F'})^{\bar\mu} \cdot e^{\bar\mu|\gamma|_c} \end{align*} where $C_{F'} := \max_{f'\in F'} e^{|f'|_c}$ and so (D+) holds. Given ($\mathrm{D}^\lambda_-$), we have \begin{align*} \frac{\sigma_1}{\sigma_2}(\rho(\gamma)) & \geq e^{-2C} \cdot \frac{\lambda_1}{\lambda_2}(\rho(\gamma f')) \\ & \geq e^{-2C} \ubar{C} e^{\ubar\mu |\gamma f'|_{c,\infty}} \geq e^{-2C} \ubar{C} e^{-\ubar\mu L} e^{\frac1{12}\ubar\mu |f \gamma f'|_c} \\ & \geq e^{-2C} \ubar{C} e^{-\ubar\mu L} (C_F C_{F'})^{-\frac1{12}\ubar\mu} \cdot e^{\frac1{12}\ubar\mu|\gamma|_c} \end{align*} where $C_{F'}$ is as above and $C_F := \max_{f\in F} e^{|f|_c}$, and hence (D$-$) holds (with $\frac1{12}\ubar\mu$ in place of $\ubar\mu$). \end{proof} \begin{proof}[Proof of Lemma \ref{lem:adapt_T53}] We adapt the proof of \cite[Th.\,5.3]{Kostas_AnosovWEG} to show that we can choose $F$ to satisfy the additional requirements we have imposed here. Let $f$ be a Floyd function $f\colon \mathbb{N} \to \mathbb{R}^+$ for which the Floyd boundary $\partial_f \Gamma$ of $\Gamma$ is non-trivial. By Theorem \ref{thm:relhyp_floydbdy}, there is a map from $\partial_f\Gamma$ to the Bowditch boundary $\partial(\Gamma,\mathcal{P})$ which is injective on the set of conical limit points; hence, by \cite[Prop.\,5]{Karlsson}, we can find {\it non-peripheral} $f_1, f_2$ such that $\{f_1^+, f_1^-\} \cap \{f_2^+, f_2^-\} = \varnothing$.
We will use sufficiently high powers of these to form our set $F$; the north-south dynamics of the convergence group action of $\Gamma$ on $\partial_f \Gamma$ will do the rest. To specify what ``sufficiently high'' means it will be useful to define an auxiliary function $G\colon \mathbb{Z}_{>0} \to \mathbb{R}_{>0}$, which gives a measure of ``distance to infinity'' as measured by the Floyd function: concretely, take $G(x) := 10 \sum_{k= \lfloor x/2 \rfloor}^\infty f(k)$. Since $f$ is a Floyd function, $G(x) \searrow 0$ as $x \to \infty$. By \cite[Lem.\,1]{Karlsson}\footnote{By the monotonicity of $f$ and because $x \in \mathbb{Z}$, our choice of $G$ bounds from above the function $4xf(x) + 2 \sum_{k = x}^\infty f(k)$ appearing in Karlsson's proof.}, we have \begin{align} d_f(g,h) & \leq G\left( \langle g, h \rangle_e \right) & d_f(g,g^+) & \leq G\left( |g|/ 2 \right) \label{eq:karlsson_est} \end{align} for all $g, h \in \Gamma$. Let $\epsilon = \frac16 d_f(f_1^\pm, f_2^\pm)$. Fix $M>0$ such that $G(x) \geq \frac\epsilon{10}$ if and only if $x \leq M$, and $R>0$ such that $G(x) \leq \frac\epsilon{10}$ if and only if $x \geq R$, and $N$ such that $\min\{ |f_1^{N'}|, |f_2^{N'}|\} \geq 2(M+R)$ for all $N' \geq N$. \begin{claim} For every non-trivial $\gamma \in \Gamma$ such that $d_f(\gamma^+, \gamma^-) \leq \epsilon$, there exists $i \in \{1,2\}$ such that $d_f(f_i^{N'} \gamma^+, \gamma^-) \geq \epsilon$ for all $N' \geq N$. \begin{proof}[Proof of claim] By our choice of $\epsilon$, we can find $i \in \{1,2\}$ such that $d_f(\gamma^+, f_i^\pm) \geq 3\epsilon$: if $d_f(\gamma^+, f_1^\pm) < 3\epsilon$, then $d_f(\gamma^+, f_2^\pm) \geq d_f(f_2^\pm, f_1^\pm) - 3\epsilon = 3\epsilon$. Without loss of generality suppose $i=1$. There exists $n_0$ such that $G\left( \frac12 |\gamma^n|\right) < \epsilon$ for all $n \geq n_0$.
For $n \geq n_0$ and $N' \geq N$, by our choice of $N$, we have \begin{align*} d_f(\gamma^n, f_1^{-N'}) & \geq d_f(\gamma^+, f_1^-) - d_f\left(f_1^-, f_1^{-N'}\right) - d_f(\gamma^+, \gamma^n) \\ & \geq 3\epsilon - G \left( \frac12 |f_1^{N'}| \right) - G\left( \frac12|\gamma^n| \right) > \epsilon. \end{align*} Hence, for all $n \geq n_0$ and $N' \geq N$, we have $G(\langle \gamma^n, f_1^{-N'} \rangle_e) \geq d_f(\gamma^n, f_1^{-N'}) > \epsilon$, and $\langle \gamma^n, f_1^{-N'} \rangle_e \leq M$ by our choice of $M$. Now, given $N' \geq N$, choose a sequence $(k_i)_{i\in\mathbb{N}}$ such that $|f_1^{k_i-N'}| \leq |f_1^{k_i}|$ for all $i \in \mathbb{N}$. For $n \geq n_0$, we have, by the definition of the Gromov product and the inequalities above, \begin{align*} 2 \langle f_1^{N'}\gamma^n, f_1^{k_n} \rangle_e & = |f_1^{N'} \gamma^n| + |f_1^{k_n}| - |f_1^{N'-k_n} \gamma^n| \\ & = |\gamma^n| + |f_1^{N'}| - 2\langle \gamma^n, f_1^{-N'} \rangle_e + |f_1^{k_n}| - |f_1^{N'-k_n} \gamma^n| \\ & \geq |f_1^{N'}| - 2M + |f_1^{k_n}| - (|f_1^{N'-k_n} \gamma^n|-|\gamma^n|) \\ & \geq |f_1^{N'}| - 2M + |f_1^{k_n}| - |f_1^{N'-k_n}| \\ & \geq |f_1^{N'}| - 2M \geq 2R. \end{align*} Then by our choice of $R$ we have $G( \langle f_1^{N'}\gamma^n, f_1^{k_n} \rangle_e ) \leq \epsilon/10$ whenever $n \geq n_0$ and $N' \geq N$, and hence, letting $n \to \infty$, \[ d_f(f_1^{N'} \gamma^+, f_1^+) \leq \epsilon/10 ; \] thus \[ d_f(f_1^{N'}\gamma^+, \gamma^-) \geq d_f(\gamma^+,f_1^+) - d_f(f_1^{N'}\gamma^+, f_1^+) - d_f(\gamma^+,\gamma^-) \geq \epsilon \] whence the claim. \end{proof} \end{claim} Now, with $f_1, f_2$ and $N$ as above, fix $F_0 = \{f_1^N, f_1^{N+1}, f_2^N, f_2^{N+1}, e\}$. Then there exists $g \in F_0$ such that $d_f(g\gamma^+, \gamma^-) \geq \epsilon$: if $d_f(\gamma^+, \gamma^-) \geq \epsilon$, choose $g = e$. Otherwise, from the above argument, either $g=f_1^N$ or $g=f_2^N$ works, and then so does $g=f_1^{N+1}$ or $g=f_2^{N+1}$ (resp.).
Next fix $L = 2 \max_{g \in F_0} |g| + 2R + 1$; we will show that the desired result holds with $F := F_0 \cup S$ and this $L$. Without loss of generality suppose $|\gamma| > L-1$; otherwise $|\gamma| - |\gamma|_\infty \leq L$ and we have our desired inequality with $g=e$. Now choose $g \in F_0$ such that $d_f(g \gamma^+, \gamma^-) \geq \epsilon$. To use this to obtain an inequality between $|g\gamma|$ and $|g\gamma|_\infty$, we use Lemma \ref{lem:grop_len_stranlen} with $g\gamma$ in the place of $\gamma$, the Cayley graph in the place of $X$, and $x_0 = e$ to obtain a sequence $(m_i)_{i\in\mathbb{N}}$ such that \begin{align} 2 \lim_{i\to\infty} \langle (g\gamma)^{m_i}, (g\gamma)^{-1} \rangle_e \geq |g\gamma| - |g\gamma|_\infty , \label{eq:len_stablen_gropbd} \end{align} so it suffices to obtain an upper bound on the Gromov products $\langle (g\gamma)^{m_i}, (g\gamma)^{-1} \rangle_e$. To obtain this bound, we start by noting that $g\gamma^+ = (g\gamma g^{-1})^+$; using this, the triangle inequality, and the inequalities in (\ref{eq:karlsson_est}), we observe that \begin{align*} d_f\left( g\gamma^+, (g\gamma)^+ \right) & \leq d_f\left( g\gamma^+, g\gamma g^{-1}\right) + d_f\left(g\gamma g^{-1}, g\gamma\right) + d_f\left( (g\gamma)^+, g\gamma \right) \\ & \leq G\left( \frac12 |g\gamma g^{-1}| \right) + G\left( \langle g\gamma g^{-1}, g\gamma \rangle_e \right) + G \left( \frac12|g\gamma| \right) \end{align*} and using liberally the monotonicity of $G$ on the last right-hand side, we obtain the further upper bound \begin{align*} d_f\left( g\gamma^+, (g\gamma)^+ \right) & \leq 3 G\left( \frac12|\gamma| - |g| \right) \end{align*} which, finally, because $\frac12 |\gamma| - |g| \geq R$, is bounded above by $\frac{3\epsilon}{10}$.
Arguing similarly, we have \begin{align*} d_f\left( \gamma^-, \gamma^{-1} g^{-1} \right) & \leq d_f\left(\gamma^-, \gamma^{-1} \right) + d_f \left( \gamma^{-1}, \gamma^{-1} g^{-1} \right) \\ & \leq G \left( \frac12|\gamma| \right) + G\left( \langle \gamma^{-1}, \gamma^{-1}g^{-1} \rangle_e \right) \\ & \leq 2 G\left( \frac12|\gamma| - |g| \right) \leq \frac\epsilon5 \end{align*} and hence we have \begin{align*} d_f\left((g\gamma)^+, \gamma^{-1}g^{-1} \right) & \geq d_f\left( g\gamma^+, \gamma^- \right) - d_f\left( g\gamma^+, (g\gamma)^+ \right) - d_f\left( \gamma^-, \gamma^{-1}g^{-1} \right) \\ & \geq \epsilon - \frac{3\epsilon}{10} - \frac\epsilon 5 = \frac\epsilon2 . \end{align*} Thus we have $n_1 > 0$ such that $G \left( \langle (g\gamma)^n, (g\gamma)^{-1} \rangle_e \right) \geq d_f\left((g\gamma)^n, (g\gamma)^{-1} \right) \geq \frac\epsilon3$ and so $\langle (g\gamma)^n, (g\gamma)^{-1} \rangle_e \leq M$ for all $n \geq n_1$. This is the bound we feed into (\ref{eq:len_stablen_gropbd}) to obtain $|g\gamma| - |g\gamma|_\infty \leq 2M \leq 2R \leq L$, which was the inequality to be shown. Finally, we prove the statement about the peripheral excursions. We may also assume, without loss of generality, that $g\gamma$ contains at least one peripheral excursion, otherwise there is nothing left to prove. If we have a relation $\alpha\eta\beta = \id$ with $\eta \in P \smallsetminus \{\id\}$ peripheral and $\alpha, \beta \notin P$ (and $\alpha$ not ending in any letter of $P$ and $\beta$ not starting in any letter of $P$), then $\alpha\eta\alpha^{-1} = \beta^{-1} \eta \beta$, and by malnormality this implies $\alpha = \beta^{-1}$, which is not possible since $\eta \neq \id$. Since we are assuming $g\gamma$ has peripheral excursions, we may thus assume that in $(g\gamma)^n$ there is no cancellation across more than two copies of $g\gamma$, i.e.\ it suffices to look at cancellation between adjacent copies.
The peripheral excursions of $(g\gamma)^n$ are exactly $n$ copies of those of $g\gamma$ precisely when cancellation between adjacent copies of $g\gamma$ does not reach any of the peripheral excursions. Suppose now that this is not the case, i.e. cancellation between adjacent copies does reach the peripheral excursions. If $g = f_i^N$ (resp.\ $g=f_i^{N+1}$), then we may take $g = f_i^{N+1}$ (resp.\ $g=f_i^N$) instead; the desired inequalities still hold from the arguments above, and now cancellation between adjacent copies no longer reaches the peripheral excursions. Suppose instead $g = e$; then we may assume, from the argument above, that $|\gamma| \leq L-1$. We will instead take $g$ to be a non-peripheral generator $s$; then, while we had cancellation between adjacent copies before with $g=e$, we can no longer have it with $g=s$. Then $|s\gamma| \leq |\gamma| + 1 \leq L$, and we are done. \end{proof} \begin{rmk} Theorem \ref{thm:reldom_eiggap} still holds without explicitly assuming relative hyperbolicity, for $\Gamma$ a finitely-generated group and $\mathcal{P}$ a collection of finitely-generated subgroups satisfying additional hypotheses (RH) (\cite[Def.\,4.1]{reldomreps}; these hypotheses are always satisfied when $\Gamma$ is hyperbolic relative to $\mathcal{P}$) such that $(\Gamma,\mathcal{P})$ satisfies the modified property U stated in Lemma \ref{lem:adapt_T53}. In this case we can still build $X$, and make the definitions and argue as above. Thus it should suffice if $\Gamma$ is virtually torsion-free and admits a non-trivial Floyd boundary: the proof of Lemma \ref{lem:adapt_T53} could be modified to avoid any use of the Bowditch boundary, and malnormality of the peripheral subgroups is one of the (RH) hypotheses (cf. \cite[Th.\,5.3]{Kostas_AnosovWEG}.) We remark, though, that these assumptions may be equivalent to relative hyperbolicity --- see Remark \ref{rmk:floyd_relhyp}.
It would in any case follow from \cite[Th.\,6.1]{reldomreps} that if $(\Gamma,\mathcal{P})$ admits a representation satisfying ($\mathrm{D}^\lambda_-$), unique limits and uniform transversality, then $\Gamma$ is hyperbolic relative to $\mathcal{P}$. Without relative hyperbolicity, the eigenvalue gap conditions ($\mathrm{D}^\lambda_\pm$) as stated are in general weaker than the analogous conditions formulated with translation length instead of stable translation length: see \cite[Ex.\,4.8]{KasselPotrie} for an explicit example. \end{rmk} \section{Limit maps imply well-behaved peripherals} \label{sec:gaps+maps} If we assume that our group $\Gamma$ is hyperbolic relative to $\mathcal{P}$, then the additional conditions of unique limits and uniform transversality which appear in either of the definitions of relatively dominated representations so far may also be replaced by a condition stipulating the existence of suitable limit maps from the Bowditch boundary $\partial(\Gamma,\mathcal{P})$. As noted above, this gives us relative analogues of some of the characterizations of Anosov representations due to Gu\'eritaud--Guichard--Kassel--Wienhard \cite[Th.\,1.3 and 1.7 (1,3)]{GGKW}. \begin{thm} \label{thm:gaps+maps} Let $\Gamma$ be hyperbolic relative to $\mathcal{P}$.
A representation $\rho\colon \Gamma \to \SL(d,\mathbb{R})$ is $P_1$-dominated relative to $\mathcal{P}$ if and only if the following three conditions are satisfied: \begin{itemize} \item (D$-$) there exist $\ubar{C}, \ubar\mu > 0$ such that $\frac{\sigma_1}{\sigma_{2}}(\rho(\gamma)) \geq \ubar{C} e^{\ubar\mu|\gamma|_c}$ for all $\gamma \in \Gamma$, \item (D+) there exist $\bar{C}, \bar\mu > 0$ such that $\frac{\sigma_1}{\sigma_d}(\rho(\gamma)) \leq \bar{C} e^{\bar\mu|\gamma|_c}$ for all $\gamma\in\Gamma$, and \item there exist continuous, $\rho$-equivariant, transverse limit maps $\xi_\rho\colon \partial(\Gamma,\mathcal{P}) \to \mathbf{P}(\mathbb{R}^d)$ and $\xi_\rho^*\colon \partial(\Gamma,\mathcal{P}) \to \mathbf{P}(\mathbb{R}^{d*})$. \end{itemize} \begin{proof} If $\rho$ is $P_1$-dominated relative to $\mathcal{P}$, then it satisfies (D$\pm$), and it admits continuous, equivariant, transverse limit maps by \cite[Th.\,7.2]{reldomreps}. Conversely, it suffices to show that the unique limits and uniform transversality conditions must hold once we have continuous, equivariant, transverse limit maps. Unique limits follows from the limit maps being well-defined, since there is a single limit point in $\partial(\Gamma,\mathcal{P})$ for each peripheral subgroup. Transversality is immediate from the hypotheses, and the uniform version follows from a short argument, as done in \cite[Prop.\,8.5]{reldomreps}. \end{proof} \end{thm} \begin{cor} \label{thm:eiggaps+maps} Let $\Gamma$ be hyperbolic relative to $\mathcal{P}$.
A representation $\rho\colon \Gamma \to \SL(d,\mathbb{R})$ is $P_1$-dominated relative to $\mathcal{P}$ if and only if the following three conditions are satisfied: \begin{itemize} \item (${D}^\lambda_-$) there exist constants $\ubar{C}, \ubar\mu > 0$ such that $$\frac{\lambda_1}{\lambda_2}(\rho(\gamma)) \geq \ubar{C} e^{\ubar\mu |\gamma|_{c,\infty}}$$ for all $\gamma \in \Gamma$, \item (${D}^\lambda_+$) there exist constants $\bar{C}, \bar\mu > 0$ such that $$\frac{\lambda_1}{\lambda_d}(\rho(\gamma)) \leq \bar{C} e^{\bar\mu |\gamma|_{c,\infty}}$$ for all $\gamma \in \Gamma$, \item there exist continuous, $\rho(\Gamma)$-equivariant, transverse limit maps $\xi_\rho\colon \partial(\Gamma,\mathcal{P}) \to \mathbf{P}(\mathbb{R}^d)$ and $\xi_\rho^*\colon \partial(\Gamma,\mathcal{P}) \to \mathbf{P}(\mathbb{R}^{d*})$. \end{itemize} \begin{proof} This follows from Theorem \ref{thm:gaps+maps} and Theorem \ref{thm:reldom_eiggap}. \end{proof} \end{cor} As an application, we can show that certain groups that play weak ping-pong on flag spaces are relatively dominated. We remark that these examples have previously been claimed in \cite{KL}, and that a different proof that positive representations (Example \ref{eg:positive}) are relatively dominated can be found in \cite{CZZ}. \begin{eg} \label{eg:proj_schottky} Fix biproximal elements $T_1, \dots, T_k \in \PGL(d,\mathbb{R})$. Write $T_i^\pm$ to denote the attracting lines and $H_{T_i}^\pm$ to denote the repelling hyperplanes of $T_i^{\pm1}$. Assuming $T_i^+ \neq T_j^+$ for $i \neq j$ and $T_i^\pm \not\subset H_{T_j}^\mp$ for all $i, j$, and replacing the $T_i$ with sufficiently high powers if needed, we have disjoint open neighborhoods $A_i^\pm \subset \mathbf{P}(\mathbb{R}^d) =: X$ of $T_i^\pm$, and $B_i^\pm \subset X$ of $H_{T_i}^\pm$ such that $T_i^{\pm1} \left( X \smallsetminus B_i^\pm \right) \subset A_i^\pm$ for $i=1, \dots, k$. 
The group $\Gamma_0 := \langle T_1, \dots, T_k \rangle$ is a non-abelian free group by a ping-pong argument. There is a homeomorphism from the Gromov boundary $\partial_\infty \Gamma_0$ to the limit set $\Lambda_{\Gamma_0} \subset \mathbf{P}(\mathbb{R}^d)$ and the inclusion $\Gamma_0 \hookrightarrow \PGL(d,\mathbb{R})$ is $P_1$-Anosov (see e.g.\ \cite[Th.\,A.2]{CLSS}.) Now suppose we have, in addition, unipotent elements $S_1, \dots, S_\ell \in \PGL(d,\mathbb{R})$ which have well-defined attracting lines (equivalently, well-defined largest Jordan blocks). Write $S_j^+$ to denote the attracting line of $S_j$, and suppose (again passing to sufficiently high powers if need be) $S_1,\dots,S_\ell$ are such that there exist pairwise disjoint open neighborhoods $C_1^\pm, \dots, C_\ell^\pm$ whose closures contain $S_1^+,\dots,S_\ell^+$ resp., and whose closures are also disjoint from the closures of all of the $A_i^\pm$ and $B_i^\pm$, such that $S_j^{\pm 1}(X \smallsetminus C_j^\mp) \subset C_j^\pm$. Then, again by a ping-pong argument, the group $\Gamma := \langle T_1, \dots, T_k, S_1, \dots, S_\ell \rangle$ is isomorphic to a non-abelian free group $F_{k+\ell}$. Since we have finitely many generators, we can pick $\epsilon_0>0$ such that \begin{itemize} \item for all $i=1,\dots,k$ and for any $n > 0$ (resp.\ $n < 0$), $U_1(T_i^n)$ is within $\epsilon_0$ of $T_i^+$ (resp.\ $T_i^-$), \item for all $i=1,\dots,k$ and for any $n < 0$ (resp.\ $n > 0$), $U_{d-1}(T_i^n)$ is within $\epsilon_0$ of $H_{T_i}^+$ (resp.\ $H_{T_i}^-$), and \item for all $j=1,\dots,\ell$ and for any $n \neq 0$, $U_1(S_j^n)$ is within $\epsilon_0$ of $S_j^+$.
\end{itemize} By taking powers of the generators and slightly expanding the ping-pong neighborhoods if needed, we may assume that $\epsilon_0$ is sufficiently small so that the $A_i^\pm$ and $B_i^\pm$ contain the $2\epsilon_0$-neighborhoods of the $T_i^\pm$ and $H_{T_i}^\pm$ respectively, and the $\overline{C_j^+ \cup C_j^-}$ contain the $2\epsilon_0$-neighborhoods of the $S_j^+$. This slight strengthening of ping-pong will be useful for establishing the transversality of our limit maps below. Let $\Gamma' < \Gamma$ be the free subgroup generated by these powers. Let $\mathcal{P}$ be the set of all subgroups of $\Gamma'$ conjugate to one of the $\langle S_j \rangle = \Stab_{\Gamma'} S_j^+$. Then $\Gamma'$ is hyperbolic relative to $\mathcal{P}$ and there are continuous $\Gamma'$-equivariant homeomorphisms $\xi, \xi^*$ from the Bowditch boundary $\partial(\Gamma',\mathcal{P})$ to the limit set $\Lambda_{\Gamma'} \subset \mathbf{P}(\mathbb{R}^d)$ and the dual limit set $\Lambda_{\Gamma'}^* \subset \mathbf{P}(\mathbb{R}^d)^*$ given by $\lim_n \gamma_n \mapsto \lim_n U_1(\gamma_n)$ and $\lim_n \gamma_n \mapsto \lim_n U_{d-1}(\gamma_n)$ respectively. We claim that $\xi$ and $\xi^*$ are transverse: given two distinct points $x = \lim \gamma_n$ and $y = \lim \eta_n$ in $\partial(\Gamma',\mathcal{P})$, we have $\xi(x) \notin \xi^*(y)$ --- the latter considered as a projective hyperplane in $\mathbf{P}(\mathbb{R}^d)$ --- using ping-pong and the following \begin{lem}[{\cite[Lem.\,5.8]{GGKW}; \cite[Lem.\,A.5]{BPS}}] If $A, B \in \GL(d,\mathbb{R})$ are such that $\sigma_p(A)> \sigma_{p+1}(A)$ and $\sigma_p(AB)> \sigma_{p+1}(AB)$, then \[ d\left(B\cdot U_p(A), U_p(BA) \right) \leq \frac{\sigma_1}{\sigma_d}(B) \cdot \frac{\sigma_{p+1}}{\sigma_p}(A) .\] \end{lem} To establish the claim: write $\gamma_n = g_1 \cdots g_n$ and $\eta_n = h_1 \dots h_n$. Pick $n_0$ minimal such that $U_1(\gamma_{n_0})$ and $U_1(\eta_{n_0})$ are in different ping-pong sets. 
The lemma above implies that for any given $\epsilon >0$, there exists some $n_1$ so that for all $n \geq n_1$, $U_1(\gamma_n) = U_1(g_1 \cdots g_n)$ is $\epsilon$-close to $\gamma_{n_0} \cdot U_1(g_{n_0+1} \cdots g_n)$, and $U_{d-1}(\eta_n)$ is $\epsilon$-close to $\eta_{n_0} \cdot U_{d-1}(h_{n_0+1} \cdots h_n)$. By our ping-pong setup, for sufficiently small $\epsilon$ these are uniformly close to $U_1(\gamma_{n_0})$ and $U_{d-1}(\eta_{n_0})$ respectively, and in particular they are separated from each other. Moreover, the inclusion $\iota\colon \Gamma' \hookrightarrow \PGL(d,\mathbb{R})$ satisfies (D$\pm$); the proof of this claim will not require the strengthened version of ping-pong described above. (D+) is immediate from $\Gamma'$ being finitely-generated, the existence of a polynomial $p$ of degree $d-1$ such that $\frac{\sigma_1}{\sigma_d}(u) \leq p(|u|)$ for every unipotent element $u \in \Gamma'$, and the sub-multiplicativity of the first singular value $\sigma_1$. To obtain (D-), one can use the following \begin{lem}[{\cite[Lem.\,A.7]{BPS}}] \label{lem:BPSA7} If $A, B \in \GL(d,\mathbb{R})$ are such that $\sigma_p(A)> \sigma_{p+1}(A)$ and $\sigma_p(AB)> \sigma_{p+1}(AB)$, then \begin{align*} \sigma_p(AB) & \geq (\sin\alpha) \cdot \sigma_p(A)\, \sigma_p(B) \\ \sigma_{p+1}(AB) & \leq (\sin\alpha)^{-1} \sigma_{p+1}(A)\, \sigma_{p+1}(B) \end{align*} where $\alpha := \angle \left( U_p(B), S_{d-p}(A)\right)$. \end{lem} To use this here, we show that there exists a uniform constant $\alpha_0 > 0$ such that whenever $(\gamma_n = g_1 \cdots g_n)_{n\in\mathbb{N}} \subset \Gamma'$ is a sequence converging to a point in $\partial(\Gamma',\mathcal{P})$, where each $g_i$ is a power of a generator and $g_i$ and $g_j$ are not powers of a common generator whenever $|i-j|=1$, then $\angle \left(U_p(g_1 \cdots g_{i-1}), S_{d-p}(g_i) \right) \geq \alpha_0$ for $p\in\{1,d-1\}$ and for all $i$.
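Granting this uniform angle bound for the moment: the two estimates of Lemma \ref{lem:BPSA7} combine to give $\frac{\sigma_p}{\sigma_{p+1}}(AB) \geq (\sin\alpha)^2 \, \frac{\sigma_p}{\sigma_{p+1}}(A) \, \frac{\sigma_p}{\sigma_{p+1}}(B)$, and iterating this letter by letter along $\gamma_n = g_1 \cdots g_n$ (we suppress here the bookkeeping of which factor plays the role of $A$ at each step) yields, schematically,

```latex
% Compounding the gap estimate along the word (a sketch):
\begin{align*}
\frac{\sigma_p}{\sigma_{p+1}}(\rho(\gamma_n))
 \;\geq\; (\sin\alpha_0)^{2(n-1)}
   \prod_{i=1}^{n} \frac{\sigma_p}{\sigma_{p+1}}(\rho(g_i)) .
\end{align*}
```

Once each letter contributes a gap exceeding $(\sin\alpha_0)^{-2}$ by a definite factor (arranged by passing to suitable powers of the generators), the right-hand side grows exponentially. It remains to establish the uniform angle bound.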
Suppose this were not true, so that there exist \begin{itemize} \item a generator $s$, \item a divergent sequence $(k_n)$ of integers, and \item a divergent sequence $(w_n)$ of words in $\Gamma'$ not starting in $s^{\pm1}$, which without loss of generality --- passing to a subsequence if needed --- converge to some point in $\partial(\Gamma',\mathcal{P})$, \end{itemize} such that \[ \angle(U_1(\rho(w_n)), S_{d-1}(\rho(s^{k_n})) ) \leq 2^{-n} ;\] then, in the limit, we obtain \[ \angle \left( \lim_{n\to\infty} U_1(\rho(w_n)), \lim_{n\to\infty} S_{d-1}(\rho(s^{k_n})) \right) = 0\] but this contradicts transversality, since, by our hypothesis that none of the words $w_n$ starts with $s$, we must have $\lim w_n \neq \lim s^{k_n}$ as $n\to\infty$. Thus we do have a uniform lower bound $\alpha_0 \leq \alpha$ as desired, and then Lemma \ref{lem:BPSA7}, together with the explicit estimate \[ \frac{\sigma_1}{\sigma_2}(S_j^n) \geq q(n) \] for some proper polynomial $q$, tells us that $\log\frac{\sigma_p}{\sigma_{p+1}}$ grows at least linearly in $|\gamma_n|_c$, which gives us (D$-$). We then conclude, by Theorem \ref{thm:gaps+maps}, that $\iota\colon \Gamma' \hookrightarrow \PGL(d,\mathbb{R})$ is $P_1$-dominated relative to $\mathcal{P}$. \end{eg} \begin{eg} \label{eg:positive} Let $\Sigma$ be a surface with boundary and write $\Gamma = \pi_1 \Sigma$. Let $\rho\colon \Gamma \to \PSL(d,\mathbb{R})$ be a positive representation in the sense of Fock--Goncharov \cite{FG}, and let $\mathcal{P}$ be the collection of cyclic subgroups of $\Gamma$ corresponding to holonomies of boundary components with unipotent image under $\rho$. By \cite[Th.\,1.9]{FG}, any element of $\Gamma$ not in a conjugate of one of the subgroups in $\mathcal{P}$ is positive hyperbolic; as a consequence, $\Gamma$ is hyperbolic relative to $\mathcal{P}$.
Moreover, suppose we put a hyperbolic metric on $\Sigma$, such that the boundary components with unipotent holonomy are represented by punctures, and the ones with non-unipotent holonomy by geodesic boundary components. This gives us a map $\iota_\infty\colon \partial(\Gamma,\mathcal{P}) \hookrightarrow \partial_\infty\mathbb{H}^2 \cong \mathbb{R}\mathbf{P}^1$ identifying $\partial(\Gamma,\mathcal{P})$ with a subset of $\mathbb{R}\mathbf{P}^1$. By \cite[Th.\,4.9]{BT} (via collapsing all of the boundary components of $\Sigma$ to punctures), $\rho$ admits a Schottky presentation in $\mathcal{F}_+(\mathbb{R}^d)$, the space of oriented flags on $\mathbb{R}^d$, and hence, by \cite[Th.\,2.10]{BT}, we can find a left-continuous equivariant increasing boundary map $\xi\colon \mathbb{R}\mathbf{P}^1 \to \mathcal{F}_+(\mathbb{R}^d)$. We can compose this boundary map with the natural projections from $\mathcal{F}_+(\mathbb{R}^d)$ to $\mathbf{P}(\mathbb{R}^d)$ and $\mathbf{P}(\mathbb{R}^d)^*$ to obtain left-continuous equivariant limit maps from $\mathbb{R}\mathbf{P}^1$ to those respective Grassmannians. Moreover, by considering the opposite orientation on $\mathbb{R}\mathbf{P}^1$, we get right-continuous equivariant limit maps, and by construction these will agree on $\iota_\infty(\partial(\Gamma,\mathcal{P})) \subset \mathbb{R}\mathbf{P}^1$, thus giving us continuous equivariant limit maps from $\partial(\Gamma,\mathcal{P})$ to the respective Grassmannians. These maps are transverse by the increasing condition: if $x$ and $y$ are distinct points in $\mathbb{R}\mathbf{P}^1$, let $z$ be such that $x,z,y$ are in cyclic order; then $\xi(x), \xi(z), \xi(y)$ are oriented flags in cyclic order, meaning they form a 3-hyperconvex triple of oriented flags, in the sense of \cite{BT} (see Prop.\,3.7 and Def.\,3.1 there): in particular, the flags are, {\it a fortiori}, pairwise transverse. As in the previous example, we can obtain (D$\pm$) from suitable linear algebra and ping-pong.
Thus, again by Theorem \ref{thm:gaps+maps}, $\rho$ is $P_1$-dominated relative to $\mathcal{P}$. Indeed, such $\rho$ are $P_k$-dominated relative to $\mathcal{P}$ for all $1 \leq k \leq d-1$, as defined in \cite{reldomreps}, analogous to how purely hyperbolic Schottky groups are Borel--Anosov \cite[Th.\,1.3]{BT}. \end{eg} \printbibliography \end{document}
\section{Introduction} Star formation shapes the structure and evolution of a galaxy by consuming gas and injecting feedback into the interstellar medium (ISM). A significant amount of feedback comes from the momentum and energy of the winds of massive stars, but some fraction also comes from protostellar outflows. All accreting astronomical objects tend to have bipolar outflows or collimated jets, resulting from the interaction between the gravitational potential of the central rotating object and the magneto-centrifugal potential arising from the accretion disk \citep{krum, bally16}. Accreting neutron stars, quasars, active galactic nuclei, and young stellar objects (YSOs) all show bipolar outflows at some point in their lives. While the bipolarity, degree of collimation, and morphology of these outflows are similar regardless of their origin, some outflow properties depend on the central object. For example, for outflows generated by protostars, the ejecta velocities can vary from 1 to 100 km~s$^{-1}$, whereas neutron stars can produce outflow velocities at a significant fraction of the speed of light. Outflows set in as soon as the accretion disks around collapsing protostars are formed. The outflows associated with young stellar objects (protostars) provide useful information about the evolutionary stages of forming stars as well as the condition of the parent clouds, since the size, velocity, mass and momentum of the ejecta depend on the generating YSO (protostar) and the cloud environment \citep{bally16}. Most YSOs show two components: a high-speed, relatively low-mass collimated jet of atomic or ionized matter, and a wide-angled, slow-moving, massive molecular component. The bipolar atomic/ionized jet is emitted orthogonal to the plane of the accretion disk, reaching large distances. The molecular component appears more closely connected to the rotating core.
Outflows inject mass and momentum into the protostellar environment in opposite directions perpendicular to the plane of accretion, and the mass injection rate increases with the accretion rate \citep{eller13}. During the early stages of protostellar evolution, molecular outflows are the dominant sources of momentum and energy injection to the natal cloud. Additionally, the physical characteristics continuously evolve with the YSOs. In the early stages of class 0 YSOs, outflows are predominantly molecular and become progressively more atomic and ionized with increasing velocities as the YSOs evolve into class I. Because the outflowing material spans multiple ionization states, several tracers are required for revealing all the different features of outflows. The atomic and ionized components of the jet are observed with radio and X-ray continuum emission and the (semi-) forbidden line transitions of atomic species in the optical and UV. While the molecular component can be traced through the infrared lines of H$_2$, the low-$J$ rotational transition lines of CO molecules are the most commonly used tracers because of their brightness and their observability. The lines are bright because of the relatively high fractional abundance of CO and the high likelihood of collisions with H$_2$ and He that populate the low-$J$ states. The low-$J$ transitions can be observed in the millimetre/submillimetre regimes with ground-based facilities. In addition, high spectral resolution observations can measure the Doppler broadening of the spectral line profile, which reveals the characteristic line wing features in outflows. These line wings extend $10$ to $100$ km~s$^{-1}$ from the line centre. With molecular spectroscopy we can measure several properties of the outflows using the bipolar line wings. The standard properties inferred are size, morphology, mass, momentum, energy, and mechanical luminosity.
By comparing these quantities with the proto-stellar luminosity, we constrain the accretion time, the efficiency of outflow launching, and the momentum and energy injection rates into the ISM. For low-mass cores this feedback is conjectured to play a significant role in providing turbulence and maintaining virial balance against the core gravitational energy in collapsing clouds. For massive cores, outflow feedback can potentially disrupt and shred the cloud \citep{bally16}. However, the impact of outflow feedback and its coupling to the parent clouds remains uncertain. Some studies have argued that outflows provide a minimal contribution of feedback \citep[e.g.,][]{hansen12} and may not be effective at driving local cloud turbulence \citep{swift08,duartecabral12,drabek16}. However, these estimates rely on careful characterization of outflow properties over a large region within the host molecular cloud. The impact of outflows could be larger than previously considered. \citet{Dun} argued that outflow mass and momentum estimated from low-$J$ CO lines only provide lower limits on those quantities given the standard assumptions about opacity and line excitation. Most outflow studies have focused on the nearest star-forming regions, which are mostly relatively quiescent. The Orion molecular cloud is the nearest site of high-mass star formation and remains the case study for outflows and feedback from newly forming O and B stars \citep{bally16}. However, on Galactic scales, Orion is a relatively small molecular cloud, and more distant regions contain larger molecular clouds and a wealth of outflow activity. In this work, we study the Cygnus~X region, a massive molecular cloud complex associated with the local spiral arm. Cygnus~X is the most active star-forming region within 2~kpc and shows a range of outflow behaviour across the region.
This work is a follow-up of a survey of the Cygnus~X region in $^{12}\mathrm{CO}(3-2)$ emission made with the James Clerk Maxwell telescope (JCMT) by \citet[][hereafter \citetalias{gott12}]{gott12}. The \citetalias{gott12} work identified 47 molecular outflows in the $^{12}\mathrm{CO}$ emission. In this work, we present $^{13}\mathrm{CO}(3-2)$ and $\mathrm{C^{18}O}(3-2)$ observations of 13 of these outflows to measure their properties. This work extends the analysis presented in \citet[][hereafter \citetalias{deb18}]{deb18}, which studied one object in detail in the context of triggered star formation. Here, we use standard approaches to measure the outflow properties for the combined sample of 13 outflows. In addition, we develop a procedure to measure the properties of outflows using only the $^{12}\mathrm{CO}(3-2)$ emission line. This method is motivated because we have carried out a wide-area survey of the Cygnus X region in $^{12}\mathrm{CO}(3-2)$ emission using the JCMT that will be presented in forthcoming work (Deb et al. in preparation). As part of that survey, we have identified hundreds of protostellar outflows. Measuring the properties of those outflows using multiple CO tracers would require a heavy investment of telescope time. Hence, validating the methods for a single-tracer measurement of outflow properties is important for studying outflows in the context of feedback. Specifically, we detail our observational techniques and data extraction in Section \ref{obs}. In Section \ref{allco}, we discuss the properties of all three CO rotational lines and, assuming a constant excitation temperature among all species, we determine the optical depths and column densities of the optically thin lines as functions of position and velocity offsets from the line centre. Section \ref{wings} shows how we calculate mass, momentum, and energy of the molecular outflows using all three tracers following a similar approach as \citetalias{deb18}.
Finally, in Section \ref{12to13} we present a model for extracting outflow properties from the $^{12}$CO(3-2) line alone and compare the results to the three-line estimates. \section{Observations} \label{obs} Here, we use the rotational transition lines $^{13}$CO(3-2) and C$^{18}$O(3-2) observed in the bands centered at 330.58 and 329.33 GHz, respectively, with the JCMT at the summit of Mauna Kea in Hawai'i, using the Heterodyne Array Receiver Program (HARP) instrument and the Auto Correlation Spectral Imaging System (ACSIS) spectrometer (see also \citetalias{deb18}). In Table \ref{obs_compare}, we summarize some of the observational details of the 13 outflow sources, including project codes, weather bands and mean atmospheric opacity values at 225 GHz during the observational runs (March 2010 and July 2011) at the JCMT. Most sources were observed using ``jiggle'' mapping, but the largest source was observed using a raster map. We configured the receivers and ACSIS correlator to provide 61 kHz spectral resolution in simultaneous observations across the two spectral lines. The 13 outflows presented here were part of the larger sample of outflows identified in \citetalias{gott12}. For this project the brightest outflows from \citetalias{gott12} were selected. While we planned to observe more outflows, we only obtained data on these 13 targets based on constraints of observational feasibility (telescope scheduling and weather). Thus, our actual sample is not designed to statistically represent the parent outflow population. For data reduction, we used the observatory-maintained {\sc starlink} software package \citep{STARL} and the standard reduction and calibration recipes developed for the JCMT. The observatory provides calibrated data on the $T^{*}_\mathrm{A}$ scale (antenna temperature corrected for atmospheric opacity, but not for source-beam coupling).
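For reference, the 61 kHz channel width corresponds to the ${\sim}0.055$ km~s$^{-1}$ velocity resolution of the final cubes, via $\delta\varv = c\,\delta\nu/\nu$. A minimal sketch of this conversion, together with the $T^{*}_\mathrm{A} \to T_\mathrm{MB}$ scaling (the function names here are our own):

```python
# Channel width in velocity units, dv = c * dnu / nu, and the antenna-
# to main-beam temperature scaling T_MB = T_A* / eta_MB.
C_KMS = 299_792.458  # speed of light [km/s]

def channel_width_kms(dnu_hz, nu_hz):
    """Velocity width of a spectrometer channel of width dnu_hz at frequency nu_hz."""
    return C_KMS * dnu_hz / nu_hz

def main_beam_temp(t_a_star, eta_mb=0.64):
    """Convert corrected antenna temperature to main-beam temperature."""
    return t_a_star / eta_mb

# 61 kHz channels at the 13CO(3-2) frequency of 330.588 GHz
print(round(channel_width_kms(61e3, 330.588e9), 3))  # 0.055
```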
We convert the data to the main beam temperature scale by assuming a beam efficiency based on observatory recommendations of $\eta_\mathrm{MB}=0.64$\footnote{\url{https://www.eaobservatory.org/jcmt/instrumentation/heterodyne/harp/}} and setting $T_\mathrm{MB} = T_A^*/\eta_\mathrm{MB}$. We grid these data into a position-position-velocity spectral line data cube with a beam size of $14.6''$ (pixel size of $7.3''$) and a velocity resolution of 0.055 km~s$^{-1}$. The central spatial coordinates of each cube are shown in Table \ref{obs_compare} (refer to \citetalias{deb18} for more details). For each position, we defined emission-free baseline regions by eye and then subtracted a linear baseline. Additionally, we have archival $^{12}$CO(3-2) line data from \citetalias{gott12}, which we re-sampled and aligned to match the same coordinate grid as the $^{13}$CO(3-2) and C$^{18}$O~(3-2) data. The median values of RMS noise in the final $^{13}$CO(3-2) and C$^{18}$O(3-2) data cubes are 0.31 K and 0.38 K in 0.055 km s$^{-1}$ channels, respectively. The noise values for the archival $^{12}$CO(3-2) data at the same velocity resolution are larger (typically 0.4 to 0.8 K), but this line is always strongly detected. All three lines are detected at $>5\sigma$ at some position in each of the targets. The locations of the 13 observed outflows are shown in Figure \ref{cygx_8mu} with a background of 8 $\mu$m PAH emission, which highlights the regions of star formation in Cygnus X \citep{croc11,peet04}. The large cavities in the 8~$\mu$m emission surround regions where molecular gas was destroyed by newly formed stars, giving the popular ``Swiss cheese'' appearance \citep{Bal99}. The observed outflows are all located near the DR21 region, with eight in the active star-forming region and five outflows in satellite clouds, including the region studied in \citetalias{deb18}.
\begin{center} \begin{figure*} \includegraphics[width=\textwidth]{figures/cygx_8mu_gal_outflows_proto.pdf} \caption{\label{cygx_8mu} Dust thermal emission at 8 $\mu$m reveals molecular clouds in Cygnus X, with the major star-forming regions labeled (blue and cyan). Locations of the outflows discussed in this work are marked with red triangles. The yellow square denotes the location of the cometary feature discussed in \citet{deb18}.} \end{figure*} \end{center} \begin{figure} \includegraphics[width=0.49\textwidth]{figures/datacube.png} \caption{A position-velocity (PV) slice out of a data cube. The $x{-}y$ plane defines the plane of the sky. The third axis is frequency or, equivalently, velocity; the slice collects the spectral line profile at each spatial pixel along the chosen $x{-}y$ line, as shown. \label{pv}} \end{figure} \section{Results} \label{results} Here, we present the observations of the individual outflows and describe how we measure their physical properties. \subsection{Atlas of Outflows} Our primary data product is a multi-tracer atlas of these 13 outflows. In Figure \ref{of1} we show one of the molecular outflows, G79.886+2.552, from three complementary perspectives. We have included similar three-panel figures for the remaining 12 sources in a supplemental document that is available online. By eye, we extract a position-velocity (PV) slice from the data cube (Figure \ref{pv}) that is centred on the middle of the outflow and oriented so that the slice goes through the brightest part of the red- and blueshifted outflow lobes. We extract the outflow properties from this PV slice. The PV-slice is one beam ($14.6''$) in width and the emission is spatially averaged perpendicular to the slice direction. We experimented with changing the slice widths but found that the results were most stable for the chosen width, acting as a compromise between including all emission from the outflow and including background emission from the molecular cloud.
The first panel of the atlas (Panel a) displays the spectrum for each of the three CO isotopologues averaged over the red- and blue-shifted sides of the PV slice as the red and blue curves, respectively. The spectra show the contrast in line structure among the three species. The strong wing features are visible in the high-opacity $^{12}$CO(3-2) emission, but the optically thin C$^{18}$O(3-2) emission is symmetric and useful for determining the line centre. The shaded bands in blue and red mark the regions we identify, again by eye, as belonging to the blue- and redshifted wings. We give the values for these boundaries in Table \ref{obs_compare}. Panel (b) displays the integrated intensity maps of the $^{12}$CO(3-2) emission, which reveal the spatial distribution (size and morphology) of the outflowing molecular gas. Red and blue contours represent the red- and blueshifted wings of the outflow, plotted over the background of total emission (gray-scale). The gray-scale shows the integration over the entire spectral line, but the blue and red contour sets indicate emission over the velocity ranges indicated in panel (a). Yellow stars show the positions of protostellar sources in the region according to the catalogue of \citet{kry}, which was generated from the {\it Spitzer}-IRAC survey of the region. We have identified the infrared source that is driving the outflow, marked by a cyan star, by finding the protostar that best matches the position of the centre of the outflow. Finally, in panel (c) we display the PV slice for the outflow. This panel shows the spatially-averaged contour lines of $^{12}\mathrm{CO}(3-2)$ emission along the PV-slice against the background of spatially-averaged $^{13}\mathrm{CO}(3-2)$ emission. The distribution of velocity offsets with position offset indicates the strength of the bipolarity of the outflowing gas. \begin{table*} \centering \captionof{table}{\large{Observational Summary}. The Project ID is the designation from the JCMT.
The last five columns give ranges for the blue- and redshifted wings of the outflow and the line centre.} \begin{tabular}{cccccccccc} \hline Outflow & RA & Dec & Proj. & Atm. & Min.\ blue & Max.\ blue & Line centre & Min.\ red & Max.\ red \\ & (J2000) & (J2000) & ID & Opacity & Vel. & vel. & vel.& vel. & vel.\\ & & & & @225GHz & (km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$)&(km s$^{-1}$) &(km s$^{-1}$)\\ \hline \vspace{5mm} G79.886+2.552 & $20^{\rm h}24^{\rm m}31.6^{\rm s}$ &$+42^{\circ}04^{'}20.0^{''}$ & M10AC12 & 0.070 &-20 & 0 & 6.3 & 12 &+20\\ \vspace{5mm} G81.435+2.147 & $20^{\rm h}31^{\rm m}12.5^{\rm s}$ &$+43^{\circ}05^{'}42.0^{''}$ & M10AC12 & 0.069 &-16 & -5 & -2.8 & 12.5 &+15 \\ \vspace{5mm} G81.424+2.140 & $20^{\rm h}31^{\rm m}12.3^{\rm s}$ &$+43^{\circ}04^{'}53.0^{''}$ & M10AC12 & 0.069 &-14 & -6 & -3.1 & -1.5 & +10 \\ \vspace{5mm} G81.302+1.055 & $20^{\rm h}35^{\rm m}33.5^{\rm s}$ &$+42^{\circ}20^{'}17.0^{''}$ & M10AC12 & 0.083 & +8 & 14.5 &15.4 &17 &+24 \\ \vspace{5mm} G80.314+1.330 & $20^{\rm h}31^{\rm m}12.3^{\rm s}$ &$+41^{\circ}42^{'}30.0^{''}$ & M11AC10 & 0.062 &-40 & -33.5 &-32.2 &-29.5 &-27 \\ \vspace{5mm} G80.862+0.385 & $20^{\rm h}37^{\rm m}00.6^{\rm s}$ &$+41^{\circ}35^{'}00.0^{''}$ & M10AC12 & 0.060&-15 &-5 & -1.8 & 0 &+8\\ \vspace{5mm} G81.663+0.468 & $20^{\rm h}39^{\rm m}15.9^{\rm s}$ &$+42^{\circ}16^{'}15.0^{''}$ & M10AC12 & 0.075& +10 & 16.5& 19.3 & 23 &+44\\ \vspace{5mm} G81.551+0.098 & $20^{\rm h}40^{\rm m}28.7^{\rm s}$ &$+41^{\circ}57^{'}21.0^{''}$ & M10AC12 & 0.061&-14.5 & -8.5 &-6&-4.5 &+1.8 \\ \vspace{5mm} G81.582+0.104 &$20^{\rm h}40^{\rm m}33.3^{\rm s}$ &$+41^{\circ}59^{'}05.0^{''}$& M10AC12 & 0.082&-15 & -8.5 &-6.2& -4.5 & +2 \\ \vspace{5mm} G82.581+0.203 & $20^{\rm h}43^{\rm m}27.8^{\rm s}$ &$+42^{\circ}49^{'}58.0^{''}$ & M10AC12 & 0.120 &-10.5 & 6.5 &10.2 & 15.5 &+32 \\ \vspace{5mm} G82.571+0.194 & $20^{\rm h}43^{\rm m}27.9^{\rm s}$ &$+42^{\circ}49^{'}11.0^{''}$ & M10AC12 & 0.120 &-4 & 7.5 & 11.0 & 13.5 & +23 \\ \vspace{5mm} 
G80.158+2.727 & $20^{\rm h}24^{\rm m}35.7^{\rm s}$ &$+42^{\circ}23^{'}41.0^{''}$ & M11AC10 & 0.066 & -25 & 0.5 & 4.5 & 10 & +21 \\ \vspace{5mm} G80.149+2.710 & $20^{\rm h}24^{\rm m}38.6^{\rm s}$ &$+42^{\circ}22^{'}42.0^{''}$ & M11AC10 & 0.066 & -3 & 3 & 5.0 & 6 & +12 \\ \hline \end{tabular} \label{obs_compare} \end{table*} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/outflow1_bip_contour_pv.pdf} \caption{Outflow G79.886+2.552: (a) Average spectral intensity in the blue- and redshifted outflow regions is shown for the $^{12}$CO(3-2) (offset $+15$~K), $^{13}$CO(3-2) (offset $+5$~K), and C$^{18}$O(3-2) lines. The wing feature is present in the $^{12}$CO(3-2) line, which is self-absorbed at the line centre by the foreground Cygnus Rift. (b) Integrated intensity of $^{12}$CO(3-2) line emission highlights the spatial distribution of molecular gas. Red and blue contours represent the red- and blueshifted wings, plotted over the background of total emission (gray-scale). Blue and red contours are obtained by integrating over velocity ranges of $v=-20$ to $0\mathrm{~km~s}^{-1}$ and $v=12.5$ to $20\mathrm{~km~s}^{-1}$ respectively. Contour lines are drawn at levels (5, 10, 20, 30, 40) K~km~s$^{-1}$ and (4, 8, 15, 20, 25) K~km~s$^{-1}$ respectively. Yellow stars indicate protostars in the \citet{kry} catalogue, with the driving IR source marked in cyan. (c) Spatial and spectral distribution of outflowing gas along the PV-slice marked by the green arrow in panel (b). Contours are drawn at levels (3, 5, 7.5, 9.5, 11) K.} \label{of1} \end{figure*} \subsection{Distances}\label{sec:distances} We determine the distance to each outflow based on their mean line-of-sight velocity. Referring to \cite{rygl12} and \cite{gott12}, we associate the outflows here with four different major star-forming regions in Cygnus X. The range of local standard of rest velocities ($\varv_{\rm LSR}$) of the 13 outflows is included in Table \ref{obs_compare}.
Using water masers, \citet{rygl12} determined the average proper motion velocities of the two star-forming regions W 75N and DR 21 to be 9 km s$^{-1}$ and $-3$ km s$^{-1}$, along with their parallax distances. Hence we consider an outflow with a slightly positive velocity ($0 < \varv_{\rm LSR}/(\mathrm{km~s^{-1}})<8$) towards Cygnus X to be at the same distance as the Cygnus Rift, which is at a mean distance of 650 pc from the Sun \citep{gott12}, and one with a low negative velocity ($-10<\varv_{\rm LSR}/(\mathrm{km~s^{-1}})<0$) to be associated with DR 21, at $1.5$ kpc. We associate outflows with positive LSR velocities ($\varv_{\rm LSR}/(\mathrm{km~s^{-1}})>8$) with W 75N, at $1.3$ kpc. Outflows G81.435+2.147 and G81.424+2.140 are part of the cometary feature mentioned in \cite{deb18} and are being irradiated by Cygnus OB2; hence we assume a distance of 1.4 kpc for them. \subsection{CO line emission: Column Density}\label{allco} To measure the physical properties of the outflows, we extend the work of \citetalias{deb18} to determine the outflow column density as a function of velocity from the CO lines. The column density estimates are controlled by the opacity of the underlying tracer \citep{Oos}, so we measure the opacity of the spectral line as a function of position using the three molecular rotational transitions ($^{12}$CO(3-2), $^{13}$CO(3-2) and C$^{18}$O(3-2)). Using the radiative transfer equation and the emission model from \citet{mangum}, we express the spectral line emission in terms of radiation temperature as a function of optical depth $\tau_\nu$, $T_R=[J_{\nu}(T_{\rm b})-J_{\nu}(T_{\rm bg})](1-e^{-\tau_{\nu}})$, where $J_{\nu}(T)=\frac{c^2}{2k\nu^2}B_{\nu}(T)$ and $T_{\rm bg}$ is the constant background temperature, taken to be the cosmic microwave background ($T_\mathrm{bg} \approx 2.73$~K).
We assume local thermodynamic equilibrium (LTE) in the molecular gas, and use a constant molecular excitation temperature $T_{\rm ex}$ (corresponding to the rotational transition $J=3\rightarrow 2$) as the characteristic brightness temperature $T_{\rm b}$ associated with emission from all three species. We model the main beam temperature $T_{\rm MB}$ as \begin{equation}\label{Tr} T_{\rm MB}=f[J_{\nu}(T_{\mathrm {ex}})-J_{\nu}(T_{\mathrm{bg}})](1-e^{-\tau_{\nu}}). \end{equation} Here, $f$ is the beam-filling factor and is assumed to be 1. We assume the $^{12}$CO(3-2) line is optically thick, particularly near the line centre, so the excitation temperature can be approximated as \begin{equation}\label{Tex} T_{\text{ex}}=\frac{h\nu/k}{\ln\left[1+\frac{h\nu/k}{T_{\rm max}+J_\nu(T_{\rm bg})}\right]}, \end{equation} where $T_{\rm max}$ is the peak of the $^{12}$CO(3-2) spectral distribution along each line of sight. Following \citet{mangum}, we have the column density of the upper state of the transition for $^{13}$CO(3-2) and C$^{18}$O(3-2) expressed in terms of their optical depth integrated over the Doppler-broadened spectral profile for every position \citepalias[e.g.,][]{deb18}, \begin{equation}\label{Nu} N_u =\frac{8\pi \nu_0^3}{c^3 A_{ul}}\frac{1}{e^{\frac{h\nu_0}{kT_{\text{ex}}}}-1}\int \tau_\nu dv. \end{equation} Here, $\nu_0$ is the rest frequency of the transition and $A_{ul}$ is the Einstein coefficient for the $u=3$ to $l=2$ transition. We extrapolate the total column density of each species using the partition function $Q$, which is well approximated as \begin{equation} Q\approx \frac{kT}{hB_0}\exp\left(\frac{hB_0}{3kT}\right). \end{equation} With these assumptions, the total column density is \begin{eqnarray}\label{Ntot} N_{\text{tot}}&=&\frac{Q}{g_u}\exp\left(\frac{E_u}{kT_{\text{ex}}}\right) N_u \nonumber \\ &=& \frac{Q}{g_u}\exp\left(\frac{E_u}{kT_{\text{ex}}}\right) \frac{8\pi \nu_0^3}{c^3 A_{ul}}\frac{1}{e^{\frac{h\nu_0}{kT_{\text{ex}}}}-1}\int \tau_\nu dv.
\end{eqnarray} For the C$^{18}$O line, the Einstein coefficient $A_{ul}=6.011\times 10^{-7}$ s$^{-1}$, $\nu_0=329.331$ GHz and the rotational constant $B_0=54891.42$ MHz. These values are obtained from the LAMDA\footnote{\url{http://home.strw.leidenuniv.nl/~moldata/}} \citep{lamda} and NIST\footnote{\url{ https://physics.nist.gov/PhysRefData/MolSpec/}} databases. In star-forming clouds, C$^{18}$O has a low abundance relative to $^{12}$CO ($N_\mathrm{C^{18}O}/N_\mathrm{{}^{12}CO}\approx 1.5 \times 10^{-3}$) and $^{13}$CO ($N_\mathrm{C^{18}O}/N_\mathrm{{}^{13}CO}\sim 0.1$) \citep{wilson94}, so it is often reasonable to assume the C$^{18}$O emission is optically thin. However, the line can be optically thick in some regions of star formation, as some authors have suggested \citep{wh15}. In our case, we verify this assumption by following the approach outlined in \citet{wh15} and \citet{lad98} to estimate the line-of-sight maximum optical depth of the C$^{18}$O(3-2) emission. This approach compares the brightness ratio of $T_\mathrm{^{13}CO}/T_\mathrm{C^{18}O}$ to an assumed abundance ratio of $8$. Finding a brightness ratio significantly smaller than the abundance ratio would imply significant opacity in the C$^{18}$O line. We estimate the C$^{18}$O optical depth for the outflows for every pixel in the regions of significant emission. Then we compute medians of these values for each outflow. The median varies from 0.23 to 0.65 with corresponding standard deviations from 0.04 to 0.28. This justifies treating the C$^{18}$O(3-2) line as optically thin.
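The excitation-temperature relation above can be sketched numerically. The following is a minimal implementation of Equations \ref{Tr} and \ref{Tex} (assuming $f=1$, as in the text; the function names and the SI value of $h/k$ are our own), together with the opacity in the optically thin limit of Equation \ref{Tr}:

```python
import numpy as np

H_OVER_K = 4.799e-11  # h/k [K s], so h*nu/k = H_OVER_K * nu

def j_nu(temp_k, nu_hz):
    """Radiation temperature J_nu(T) = (h nu / k) / (exp(h nu / k T) - 1)."""
    t0 = H_OVER_K * nu_hz
    return t0 / np.expm1(t0 / temp_k)

def t_ex(t_max_k, nu_hz, t_bg=2.73):
    """Excitation temperature from the peak 12CO(3-2) brightness T_max."""
    t0 = H_OVER_K * nu_hz
    return t0 / np.log1p(t0 / (t_max_k + j_nu(t_bg, nu_hz)))

def tau_thin(t_mb, tex_k, nu_hz, t_bg=2.73):
    """Opacity in the optically thin limit, with beam-filling factor f = 1."""
    return t_mb / (j_nu(tex_k, nu_hz) - j_nu(t_bg, nu_hz))

nu = 345.796e9        # 12CO(3-2) rest frequency [Hz]
tex = t_ex(20.0, nu)  # excitation temperature for a 20 K line peak
```

By construction, an optically thick line peaking at $T_{\rm max}$ satisfies $J_\nu(T_{\rm ex}) - J_\nu(T_{\rm bg}) = T_{\rm max}$, so `j_nu(tex, nu) - j_nu(2.73, nu)` recovers the input 20 K, a convenient unit test of the implementation.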
In this case, the optical depths for the two species are derived from Equation \ref{Tr}, in terms of their main beam temperatures, \begin{eqnarray} \label{tau} {\rm C^{18}O}: \tau_{\nu} &=& \frac{ T_\text{MB}}{J_\nu (T_{\rm ex})-J_\nu(T_{\text{bg}})}\\ {\rm {}^{13}CO}: \tau_{\nu}&=& -\ln\left[1-\frac{T_{\rm MB}}{J_\nu(T_{\rm ex})-J_\nu(T_{\rm bg})}\right]. \end{eqnarray} \subsection{Physical properties of the outflows}\label{wings} We estimate the mass, momentum, and kinetic energy of each outflow given the CO column density measured as a function of line-of-sight velocity. We use the $^{13}$CO(3-2) spectral line as the primary tracer of column density, since the optically thick low-$J$ transition lines of $^{12}$CO are subject to self-absorption and will provide an underestimate of the mass near the line centre. The $^{13}$CO(3-2) line has a low signal-to-noise ratio in the outflow wings. At large velocity offsets, it can be too weak to extract any useful information. Hence, we implement an extrapolation technique, adopted from \citet{arce01}, for inferring $^{13}$CO(3-2) emission from the brighter $^{12}$CO(3-2) line. Using Equation \ref{Ntot}, we express the column density of $^{13}$CO(3-2) as a function of position offset (spatial pixel) and velocity along the spectral axis in a position-velocity (PV) slice \citepalias{deb18}, \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/ratio_spectral4upper.pdf} \caption{(a) Parabolic shape of $R_{12/13}(v)$ (red), plotted along with the $^{12}$CO(3-2) (green) and $^{13}$CO(3-2) (blue) emission. The local minimum occurs near the emission peak. (b) Spectral line profiles of $^{12}$CO(3-2) (blue, red), $^{13}$CO(3-2) (dotted), and C$^{18}$O(3-2) (solid) show relative brightness values around the line centre, which is best identified by C$^{18}$O. The bipolar outflow is best visible in the $^{12}$CO(3-2) line, where $^{13}$CO(3-2) emission is insignificant.
Vertical dotted lines in both panels denote the velocity centroid (black) and the 4$\sigma$ limits (cyan) of the fitted Gaussian. Both diagrams are constructed from the data associated with outflow G81.435+2.147.\label{ratio}} \end{figure*} \begin{equation}\label{N13} N_{^{13}\rm CO} (x,y,v)=\frac{8\pi \nu_0^3Q_{\rm rot}}{7c^3 A_{ul}}\frac{e^{\frac{E_u}{kT_{\text{ex}}}}}{e^{\frac{h\nu_0}{kT_{\text{ex}}}}-1}\tau_\nu (\mathbf{x},v)\, \delta v. \end{equation} For calculating outflow properties we separate the asymmetric blue- and red-shifted wings of the spectral profile from the symmetric central components. As a first step, we estimate the velocity centroid $\varv_0$ of the line, along with its line width $\sigma_{\varv}$, by fitting a Gaussian model to the C$^{18}$O(3-2) data. We use C$^{18}$O(3-2) because it is optically thin and has a symmetric profile that is bright only around the line centre (Figure \ref{ratio}). We repeat this fitting process for every spatial pixel along the PV-slice (Figure \ref{pv}). After fixing the line centre, we fit a quadratic function to the observed emission ratio of $^{12}$CO(3-2)/$^{13}$CO(3-2), denoted by $R_{12/13}(v)$, since the line ratio typically resembles a parabola around $\varv_0$: $$R_{12/13}(\varv) \hspace{1mm} \hat{=} \hspace{1mm} C_0 + C_2\hspace{0.5mm}(\varv - \varv_0)^2.$$ This is done separately for each of the blue- and red-shifted lobes. We also set an upper limit of 65 for the ratio, based on the relative abundance of the two molecular species in molecular clouds \citep{wilson94}. The fitted ratio ranges between this value and a minimum at the velocities where the $^{13}$CO(3-2) line is the brightest (Figure \ref{ratio}). Using the main beam temperature of $^{12}$CO(3-2) ($T_{12}$) and the fitted ratio $R_{12/13}$, we can infer the $^{13}$CO(3-2) main beam temperature ($\hat{T}_{13}$) where the signal is undetectable.
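The even-quadratic fit to the line ratio described above can be sketched as a minimal least-squares version (our own function, not the pipeline code), with the ceiling of 65 applied as in the text:

```python
import numpy as np

R_MAX = 65.0  # abundance-based ceiling on the 12CO/13CO ratio

def fit_ratio(v, r12_13, v0):
    """Least-squares fit of R(v) = C0 + C2 (v - v0)^2 to an observed line
    ratio; returns the fitted curve clipped at the abundance ceiling."""
    x = (v - v0) ** 2
    design = np.column_stack([np.ones_like(x), x])  # columns: [1, (v - v0)^2]
    (c0, c2), *_ = np.linalg.lstsq(design, r12_13, rcond=None)
    return np.clip(c0 + c2 * x, None, R_MAX)

# synthetic check: the fit recovers a known parabola
v = np.linspace(-10.0, 10.0, 41)
truth = 5.0 + 0.4 * (v - 1.0) ** 2
fitted = fit_ratio(v, truth, v0=1.0)
```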
Following a strategy adapted from \citet{arce01}, we estimate $\hat{T}_{13}(x,y,v)$ in three regimes based on the signal-to-noise ratio of the two emission lines: \[\hat{T}_{13}(x,y,v)= \begin{cases} T_{13} & \mathrm{if} \hspace{2mm} T_{13} \geqslant 5 \hspace{1 mm} \sigma_{13}\\ \frac{T_{12}}{R_{12/13}} & \mathrm{if} \hspace{2mm} T_{13} < 2 \hspace{1 mm}\sigma_{13},\hspace{1 mm} T_{12} \geqslant 2 \hspace{1 mm}\sigma_{12} \\ 0 & \mathrm{if} \hspace{2mm} T_{13} < 2 \hspace{1 mm} \sigma_{13},\hspace{1 mm}T_{12}<2 \hspace{1 mm}\sigma_{12}. \end{cases} \] Here, the noise levels of the two lines are given as $\sigma_{12}$ and $\sigma_{13}$. The last condition states that the $^{13}$CO(3-2) main beam temperature cannot be estimated when both emission lines are undetectable. Using the optical depth (Equation \ref{tau}) and the column density (Equation \ref{N13}), we determine the H$_2$ column density $N_{\rm H_2}(x,y,\varv)$ as $N_{\rm H_2}=N_{^{13}\rm CO}/X_{\mathrm{CO}}$ by assuming a fixed abundance ratio $X_{\mathrm{CO}}=10^{-6}$ of $^{13}$CO relative to H$_2$ \citep{wilson94}. For the total mass in the outflow, we integrate $N_{\rm H_2}(x,y,v)$ over blue- and red-shifted segments of the spectral axis at each position along the PV-slice and then sum over all such positions, \begin{equation} \begin{split} M_{\rm outflow} &=\mu_{\rm H_2} \int_{\mathbf{x}} \int_{\rm wing} N_{\rm H_2}(x,y,\varv)\, d\varv \, d\mathbf{x} \\ &\approx \mu_{\rm H_2} \sum_{\mathbf{x}, \rm wing} N_{\rm H_2}(x,y,\varv)\, \delta \varv\, \delta A_{\rm pix}. \end{split} \end{equation} Here, we adopt a mean molecular mass per H$_2$ molecule of $\mu_{\rm H_2} = 2.4\, m_{\rm H}$, where $m_{\rm H}$ is the mass of a hydrogen atom, assuming a 10\% atomic He abundance by number. We determine the physical pixel areas by projecting the angular size of each pixel to the assumed distance of the outflow (Section \ref{sec:distances}; Table \ref{prop_compare}).
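The three-regime substitution for $\hat{T}_{13}$ can be written compactly; in this sketch (our own, not the pipeline code), brightnesses in the band between $2\sigma_{13}$ and $5\sigma_{13}$, which the piecewise definition leaves unspecified, conservatively default to zero:

```python
import numpy as np

def t13_hat(t12, t13, ratio, sigma12, sigma13):
    """Hybrid 13CO(3-2) brightness: keep a detected 13CO signal, infer it
    from 12CO and the fitted ratio in the wings, and zero it where both
    lines are undetected (or in the unspecified 2-5 sigma band)."""
    t12 = np.asarray(t12, dtype=float)
    t13 = np.asarray(t13, dtype=float)
    out = np.zeros_like(t13)
    detected = t13 >= 5.0 * sigma13
    inferred = (t13 < 2.0 * sigma13) & (t12 >= 2.0 * sigma12)
    out[detected] = t13[detected]
    out[inferred] = (t12 / ratio)[inferred]
    return out
```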
We also estimate the projected outflow momentum and energy using \begin{eqnarray} p = p_0 \cos \theta & =&\sum M(x,y,\varv) |\varv-\varv_0|\\ E=E_0 \cos^2 \theta & = &\frac{1}{2}\sum M(x,y,\varv) (\varv-\varv_0)^2 \end{eqnarray} where $\theta$ is the unknown inclination angle with respect to the line of sight and $p_0$ and $E_0$ denote the unprojected momentum and energy. The results are summarized in Table \ref{prop_compare}. Table \ref{proto_association} includes the protostellar sources that generate the outflows, identified by searching the \citet{kry} catalogue, along with their infrared (IR) luminosity, except for the source NOMAD1 1323-0477179, for which we were unable to find the spectral index value and IR luminosity. We exclude the source G80.314+1.330 from further analysis, as we were unable to find an associated protostar in existing catalogues. \citet{gott12} identified this object as an outflow, which we have also confirmed using spectral distribution, contour, and PV plots (Figure \ref{of7}). However, the high negative velocity (Table \ref{obs_compare}) and weak emission suggest that this outflow is unlikely to be located in Cygnus X and instead lies further away along the line of sight, probably in the Perseus arm. Hence, the associated protostar may simply be too distant to be detected.
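The projected momentum and energy sums above can be written compactly; the sketch below uses hypothetical array names, with $M$ holding the channel-by-channel mass in solar masses and $v$ the channel velocities in km~s$^{-1}$:

```python
import numpy as np

def outflow_dynamics(M, v, v0):
    """Projected momentum and kinetic energy summed over the
    position-velocity mass distribution M(x, v), with velocities
    measured relative to the line centroid v0."""
    dv = np.abs(v - v0)            # channel offsets from line centre
    p = np.sum(M * dv)             # Msun km/s (projected: p0 cos(theta))
    E = 0.5 * np.sum(M * dv ** 2)  # Msun km^2/s^2 (projected: E0 cos^2(theta))
    return p, E
```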
\begin{table*} \centering \captionof{table}{Dynamical properties of the 12 outflows: mass, momentum, and energy columns for estimates from all three CO lines, discussed in Section \ref{wings}, along with corresponding estimates from $^{12}$CO alone, discussed in Section \ref{12to13}.} \label{prop_compare} \begin{tabular}{ccccccccc} \hline Outflow & Distance & $T_{\rm ex}$ & Mass & $^{12}$CO-only Mass& Momentum& $^{12}$CO-only Momentum & Energy &$^{12}$CO-only Energy \\ &(kpc) & (K) &(M$_\odot$)&(M$_\odot$)&(M$_\odot$~km~s$^{-1}$) &(M$_\odot$~km~s$^{-1}$) &($10^{45}$~erg)&($10^{45}$~erg) \\ \hline \vspace{2mm} G79.886+2.552 & $0.65\pm 0.15$ & 16 & $0.72\pm0.16$ & $0.42\pm0.33$ & $3.86\pm0.89$ & $1.82\pm1.10$ & $0.34\pm 0.08$ & $0.15\pm0.10$\\ \vspace{2mm} G81.435+2.147& $1.4\pm 0.08$& 25& $5.12\pm 0.29$ & $1.42\pm0.63$ & $27.01\pm1.55$ & $6.40\pm1.27$ & $1.95\pm0.11$ & $0.40\pm0.23$\\ \vspace{2mm} G81.424+2.140& $1.4\pm0.08$ & 22 & $1.36 \pm0.06$ & $0.70\pm0.31$ & $5.70\pm0.29$ & $2.31\pm0.40$ & $0.30\pm 0.02$ & $0.09\pm0.05$\\ \vspace{2mm} G81.302+1.055& $1.3\pm0.07$ & 36 & $9.68\pm0.52$ & $5.18\pm0.59$ & $30.20\pm1.62$ & $18.72\pm2.06$ & $1.15\pm0.06$ & $0.97\pm0.11$\\ \vspace{2mm} G80.862+0.385& $1.5\pm 0.08$ & 31 & $4.65\pm0.24$ & $4.70\pm0.54$ & $31.8\pm1.66$ & $22.31\pm2.51$ & $1.48\pm0.11$ & $1.34\pm0.13$\\ \vspace{2mm} G81.663+0.468& $1.3\pm 0.07$ & 19 & $2.90\pm0.17$ & $1.56\pm0.14$ & $21.30\pm1.15$ & $14.58\pm1.30$ & $2.06\pm0.16$ & $1.50\pm0.14$\\ \vspace{2mm} G81.551+0.098& $1.5\pm 0.08$ & 17 & $1.43\pm0.08$ & $0.33\pm0.04$ & $2.94\pm0.16$ & $0.80\pm0.09$ & $0.08\pm0.004$ & $0.05\pm0.002$\\ \vspace{2mm} G81.582+0.104& $1.5\pm 0.08$ & 24 & $3.68\pm0.20$ & $1.41\pm0.15$ & $7.50\pm0.40$ & $8.00\pm0.85$ & $1.69\pm0.01$ & $0.47\pm0.05$\\ \vspace{2mm} G82.581+0.203& $1.3\pm 0.07$ & 20 & $2.13\pm0.11$ & $1.27\pm0.14$ & $12.84\pm0.68$ & $8.21\pm0.94$ & $1.48\pm0.08$ & $0.74\pm0.08$ \\ \vspace{2mm} G82.571+0.194&$1.3\pm 0.07$ & 18 & $1.16\pm0.06$ & $0.33\pm0.04$
& $3.38\pm0.18$ & $1.25\pm0.14$ & $0.22\pm0.01 $ & $0.08\pm0.01$ \\ \vspace{2mm} G80.158+2.727 & $0.65\pm 0.07$ & 16 & $1.50\pm0.16$ & $0.32\pm0.07$ & $6.22\pm0.68$ & $1.21\pm0.23$ & $0.43\pm0.05$ & $0.06\pm0.01$\\ \vspace{2mm} G80.149+2.710 & $0.65\pm 0.07$ & 27 & $0.18\pm0.02$ & $0.36\pm0.08$ & $1.13\pm0.12$ & $1.10\pm0.23$ & $0.13\pm0.01$ & $0.08\pm0.01$\\ \hline \end{tabular} \end{table*} \begin{table*} \centering \captionof{table}{Protostellar sources associated with the 12 outflows, as identified in \citet{kry}.} \label{proto_association} \begin{tabular}{cccccc} \hline Outflow & Distance & IR Source & Angular & Spectral & $L_{\mathrm{IR}}$ \\ & (kpc) & & Separation & Index & log(L/L$_\odot$) \\ \hline \vspace{2mm} G79.886+2.552 & $0.65\pm 0.15$ & J202430.49+420409.19 & $16.42^{''}$ & 0.16 & 1.87\\ \vspace{2mm} G81.435+2.147& $1.4\pm 0.1$& J203111.82+430521.66 &$21.70^{''}$&2.12 &0.84 \\ \vspace{2mm} G81.424+2.140 & $1.4\pm 0.1$& J203112.70+430457.56 &$6.30^{''}$&0.91 &0.45 \\ \vspace{2mm} G81.302+1.055 & $1.3\pm 0.1$ & J203534.44+422006.80 &$14.58^{''}$&1.23 &1.95 \\ \vspace{2mm} G80.862+0.385& $1.5\pm 0.1$& J203702.60+413440.97 &$8.76^{''}$&1.34 &1.72\\ \vspace{2mm} G81.663+0.468& $1.3\pm 0.1$ &J20391672+4216090.00 &$10.94^{''}$&$-0.05$ &2.39 \\ \vspace{2mm} G81.551+0.098& $1.5\pm 0.1$ & J204028.48+415711.97 &$9.36^{''}$&1.64 &2.04 \\ \vspace{2mm} G81.582+0.104& $1.5\pm 0.1$ & J204033.48+415900.63&$4.81^{''}$&1.87 &0.95 \\ \vspace{2mm} G82.581+0.203& $1.3\pm 0.1$ & J204322.87+425022.76 &59.64$^{''}$& 0.93 & $-0.58$ \\ \vspace{2mm} G82.571+0.194&$1.3\pm 0.1$ & J204328.27+424900.09 &$11.64^{''}$& 0.82&2.28 \\ \vspace{2mm} G80.158+2.727 & $0.65\pm 0.15$ & J202434.18+422331.60 & $19.28^{''}$&0.84 &1.56\\ \vspace{2mm} G80.149+2.710 & $0.65\pm 0.15$ & NOMAD1 1323-0477179&17.51$^{''}$& $\cdots$ & $\cdots$ \\ \hline \end{tabular} \end{table*} \medskip \section{Estimation of outflow properties based on $^{12}$CO(3-2) data}\label{12to13} Outflows are ubiquitous in 
wide-area surveys of molecular emission \citep{gott12,drabek16}, and the feedback from outflows into the molecular ISM is best understood in the context of these large surveys. However, a full determination of outflow properties requires multiple isotopologues (Section \ref{allco}) and, ideally, multiple rotational transitions from those isotopologues to measure both opacity and excitation temperature \citep{Dun}. While ideal, observing all these transitions is expensive in terms of telescope time, so approximate methods are needed to interpret survey data. To analyze outflows in the wide-area survey of Cygnus X (Deb et al., in preparation), we need to estimate outflow mass and other dynamical properties without $^{13}$CO(3-2). A common approximation is to apply an optical depth correction factor, $\frac{\tau_{12}}{1-e^{-\tau_{12}}}$, to measure outflow mass from the $^{12}$CO(3-2) line alone \citep{Zh20,Plun, Dun, Gins11}. However, even with the correction factor, the mass estimate from $^{12}$CO(3-2) alone can still be an underestimate by 0.5 to 1 dex \citep{Gins11}, because the assumption that $^{13}$CO(3-2) is optically thin in the outflow wings may not be valid at lower velocity offsets from the line centre (see Section \ref{upe}). Here, we use our in-hand data on $^{13}$CO emission to calibrate empirical relationships between the observed $^{12}$CO(3-2) emission ($T_{12}$) and the outflow properties as characterized from the full analysis of the $^{13}$CO(3-2) data (Section \ref{allco}). Specifically, we empirically estimate the opacity that would be seen in the $^{13}\mathrm{CO}$ line, which we infer from the brightness of the $^{12}\mathrm{CO}$ emission. The empirical estimate avoids using the (unobserved) $T_\mathrm{MB}$ for $^{13}$CO and scales the $^{12}$CO brightness directly to the $^{13}$CO optical depth.
We also estimate the line centroid and width so we can define the velocity ranges that correspond to the wings of the outflow and the velocities relative to the line centre. \begin{figure} \includegraphics[width=\columnwidth]{figures/12to13log_fit.pdf} \caption{Scatter plot showing the association between $^{12}$CO(3-2) emission, in terms of position-averaged main beam temperature in K, and $^{13}$CO(3-2) optical depth. The raw data set is divided into detectable signal (in black) and noise ($<2\sigma_{12}$, in green). The straight line (in red) denotes the line of best fit. \label{curve_fit}} \end{figure} Our empirical relationship between $^{12}\mathrm{CO}$ brightness and $^{13}\mathrm{CO}$ optical depth is shown in Figure \ref{curve_fit}, where we fit a linear relationship between the logarithms of the two quantities. Since the $^{12}$CO(3-2) line observations were stored as a data cube (Figure \ref{pv}), we average the value $T_{12}(x,y,\varv)$ over the position coordinate. Similarly, our estimate of $\tau_{\nu,13}$ is from the full analysis in Section \ref{allco}, and we again average $\tau_{\nu,13} (x,y,v)$ over position coordinates. Figure \ref{curve_fit} shows the scatter plot of $\left(\tau_{\nu,13}, T_{12}\right)$ for all outflows included in Table \ref{prop_compare}. We perform a linear regression on the bivariate set, with an adjusted $R^2=0.8$ and an F-statistic of 3538 demonstrating a strong relationship. The best fit in log-space is given by, \begin{equation}\label{logmodel} {\rm log}_{10} \tau_{\nu,13} = -2.69\pm 0.02 + (2.07\pm 0.04) \times {\rm log}_{10} T_{12} .
\end{equation} \begin{figure} \includegraphics[width=0.9 \columnwidth]{figures/hwh2.png} \caption{A schematic view of the HWHM estimation technique for a spectral line profile. \label{hwhm}} \end{figure} Using this fitted equation, we estimate the $^{13}$CO column density as a function of position and velocity in a PV-slice, and from it the outflow mass, again assuming a mean molecular mass of $\mu_{\rm H_2}=2.4\, m_{\rm H}$ and using the distances of the outflows from the Sun. To estimate the wing mass, we also estimate the profile line centre $\varv_0$ and velocity width $\sigma_\varv$. Unlike in Section \ref{wings}, here we assume we do not have access to $^{13}$CO(3-2) and C$^{18}$O(3-2) data, so we approximate the $^{12}$CO(3-2) spectral line with a Gaussian profile. We then estimate the line centre by leaving $\varv_0$ as a free parameter and minimizing the outflow kinetic energy along each line of sight in the PV slice. Next, we calculate $\sigma_\varv$ by measuring the half width at half maximum (HWHM) of the line profile, where for a Gaussian, $\mathrm{HWHM}= \sqrt{2 \hspace{1mm}{\rm ln}\hspace{1mm} 2}\,\sigma_\varv$. Since the line profile is asymmetric, we measure the HWHM on both sides of $\varv_0$ and take the minimum width as the line width, as shown in the schematic Figure \ref{hwhm}. We measure the HWHM by finding the velocity channels $\varv^*$ corresponding to the brightness $\frac{1}{2}T_{\rm peak}^{12}$, where $T_{\rm peak}^{12}$ denotes the maximum of $T_{\rm MB}$ for a spectral profile. In that case, referring to Figure \ref{hwhm}, we can write, \[ \hspace{3cm}{\rm HWHM}=\min_{\varv^{*}} |\varv^{*} - \varv_0|.\] There is foreground absorption observed in the outflow spectra (Figure \ref{of1}), possibly caused by the foreground Cygnus Rift. This absorption feature, however, does not alter the estimation of $\varv_0$ and $\sigma_\varv$ because the outflow wings are unaffected by the absorption.
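The HWHM measurement above amounts to locating the half-power crossings on either side of $\varv_0$ and keeping the narrower side. A minimal sketch, with names of our own choosing and assuming a single-peaked profile:

```python
import numpy as np

def linewidth_from_hwhm(v, T, v0):
    """Estimate the Gaussian width sigma_v of an (asymmetric) line
    profile: find the channels brighter than half the peak, take the
    offset of the half-power edge on each side of v0, keep the
    smaller one, and convert via HWHM = sqrt(2 ln 2) * sigma_v."""
    half = 0.5 * T.max()
    bright = v[T >= half]          # channels above half maximum
    hwhm = min(v0 - bright.min(), bright.max() - v0)
    return hwhm / np.sqrt(2.0 * np.log(2.0))
```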
The inferred value of $\sigma_\varv$ can be up to a factor of two larger than the value measured directly from the observed $^{13}$CO(3-2) line. We define the outflow velocity wings as spectral regions with $|\varv - \varv_0| > 2\sigma_\varv$. The mass estimate is obtained by summing over such velocity channels and position offsets along the PV-slice. The estimated values of mass, projected momentum and projected energy are included in Table \ref{prop_compare}. As in Section \ref{wings}, the momentum and energy values estimated from $^{12}$CO alone contain an unknown projection angle with respect to the line of sight. In Figure \ref{corr}, we compare the property estimates from the $^{12}\mathrm{CO}$-only method with those derived from all three lines. Considering the small sample size, there is good correlation between the two sets of estimates, but with some measurable systematic differences. Table \ref{12-13} summarizes the typical differences. The mean mass from $^{12}$CO alone is typically 0.31 dex (a factor of 0.48) smaller than the estimates from all CO lines. The momentum and energy values are likewise factors of 0.47 and 0.53 smaller than the corresponding estimates from all CO lines. The consistent slight underestimation of outflow energetics is attributed to the larger $\sigma_\varv$ inferred from $^{12}$CO(3-2), mentioned above. Table \ref{12-13} also notes the width of the distribution, which is comparable to the offset that we measure. We do not apply any ad hoc scalings at this point to the $^{12}\mathrm{CO}$-only estimates to bring them into agreement with the full analysis, but we will consider the offsets and spread in Table \ref{12-13} as part of our error budget.
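The offsets in Table \ref{12-13} are simple statistics of the per-outflow log-ratios. For reference, a sketch of the computation (the function name is ours):

```python
import numpy as np

def dex_offset(est_12co_only, est_all_lines):
    """Mean and sample standard deviation (in dex) of the base-10 log
    ratio between 12CO-only and all-line property estimates."""
    r = np.log10(np.asarray(est_12co_only) / np.asarray(est_all_lines))
    return r.mean(), r.std(ddof=1)
```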
\begin{table} \begin{tabular}{ccc} \hline $\log_{10}$($^{12}$CO-only / All lines) & Mean & Standard deviation \\ \hline \vspace{1mm} Mass& $-0.31$ & 0.26\\ \vspace{1mm} Momentum & $-0.32$ & 0.23\\ \vspace{1mm} Energy & $-0.28$ & 0.31\\ \hline \end{tabular} \caption{Comparison between outflow properties from the approximations using the $^{12}$CO(3-2) line alone and those estimated from all three CO lines. On average, this approach systematically underestimates dynamical properties by $\sim 0.3$~dex, which should be included in an error budget.} \label{12-13} \end{table} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/12to13_v3.pdf} \caption{Scatter plots comparing outflow mass ($M$), projected momentum ($p$) and projected energy ($E$) estimated from the $^{13}$CO(3-2), $^{12}$CO(3-2) and C$^{18}$O(3-2) data (x-axis) with those estimated from $^{12}$CO(3-2) alone (y-axis). Blue dashed lines denote perfect correlation. Green dash-dotted lines denote the fitted relationship between the three-line estimates and the $^{12}$CO-only values. A comparison between the two sets of lines shows a consistent underestimation of the outflow properties. \label{corr}} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/Wu-12coIII.pdf} \caption{(left) Scatter plot of outflow mass against energy. Red triangles denote quantities estimated from only $^{12}$CO(3-2) data. (right) Infrared luminosity plotted against outflow energy. Colour scheme is the same as in (left). The Cygnus X outflows are consistent with the broader population irrespective of the method used for property estimation.\label{wu}} \end{figure*} \section{Discussion} \label{discuss} \subsection{Outflow Properties and Protostellar Sources} We have estimated several dynamical properties of 12 outflows and identified their infrared sources using the \citet{kry} catalogue.
Based on the spectral index ($\alpha$, defined by $F_\nu \propto \nu^{-\alpha}$), we categorize J202430.49+420409.19 and J20391672+4216090.00 as flat-spectrum protostars. All of the remaining protostars have spectral index values greater than 0.3, implying that they are in their early stages of evolution and belong to either class 0 or class I. The early evolutionary stage also implies that the bolometric luminosity is approximately the same as the infrared luminosity ($L_{\rm IR}$) included in Table \ref{proto_association}. We were unable to find the luminosity and spectral index value for NOMAD1 1323-0477179, which was identified as the IR source associated with the outflow G80.149+2.710 in \citet{gott12}. It is likely that the IR source of this outflow is a deeply embedded class 0 protostar in its early stages of evolution. As suggested by previous analyses \citep[][and references therein]{bally16}, we examined the correlation between mechanical luminosity $L_{\rm mech}$ or infrared luminosity $L_{\rm IR}$ and spectral index. As defined, a lower value of the spectral index indicates a more evolved protostar. Since outflow energy and the IR and mechanical luminosities nominally decrease as the protostar evolves, with the highest values achieved in the early stages, we would expect these outflow properties to correlate positively with spectral index. In our sample such a correlation is observed but is extremely weak, with large scatter. We attribute this to having a small, heterogeneous sample of outflows at various distances and to the narrow range of spectral index that is recovered. We will revisit these scalings in the context of the larger outflow survey (Deb et al., in preparation). For context, we compare our sample with the catalogue of \citet{wu}, which assembles a meta-analysis of outflow properties from the literature. The 12 outflows are broadly consistent with this population of outflows with respect to all their measured properties.
In particular, we find that the mechanical luminosity $L_{\rm mech}$ is, on average, $\sim 10^{-3} L_{\rm IR}$, where $L_{\rm IR}$ traces the accretion power, consistent with other sources. \subsection{Uncertainties in Parameter Estimates}\label{upe} We have used CO lines for estimating outflow mass, momentum and energy, which are subject to significant uncertainties based on our assumptions. Even so, CO molecules remain the best species for studying molecular outflows because of their high line intensity, low critical density, near-LTE excitation, and their relatively large abundance compared to other molecules. Our estimates of outflow properties from a single $^{12}$CO line are similar to other approaches put forward in the literature. Among early work involving CO lines, \citet{Bon} estimated the outflow momentum flux from $^{12}$CO~(2-1) emission using $p\propto \int_{\mathrm{wings}} T_{\mathrm R}^{12}(\varv)\, \varv^2\, d\varv \, A(r,dr)$, where $r$ denotes the radius of a projected annulus orthogonal to the outflow direction and $dr$ is the width of that annulus. This is comparable to the approach discussed here, with modifications, since those authors estimated momentum from $^{12}$CO emission in terms of radiation temperature, integrating over the spectral and spatial spread of the outflowing gas. Another common assumption found in the literature is that outflow wings are optically thick in the $^{12}$CO line \citep{BL2, arce01, Dun,rohlfs}. An optically thick tracer only reflects the conditions at the surface of the cloud, and thus results in an underestimation of mass, and subsequently of momentum and energy. We use the optically thinner $^{13}$CO(3-2) line for tracing the H$_2$ column density in the outflow wings, although we have not made any explicit assumption that $\tau_{13}\ll 1$. Instead, we rely on the assumption of a constant excitation temperature for all lines and for all species.
\cite{BL2} suggested a similar method for estimating the wing column density from the $^{13}$CO~(1-0) line. The authors used the observed $^{13}$CO emission when it was above the RMS noise level and extrapolated from $^{12}$CO~(1-0) using a second-order fitted polynomial ratio $R_{12/13}$ when $^{13}$CO was below the noise level. However, the authors used a different intrinsic abundance ratio, which provides a correspondingly different limit for the fitted brightness ratios ($R_{12/13} \leqslant 89$). Some authors have suggested estimating mass from the $^{12}$CO(3-2) brightness by using an opacity correction factor $\frac{\tau_{12}}{1-e^{-\tau_{12}}}$ \citep{Dun}. This is done by assuming $^{13}$CO(3-2) is optically thin, and then numerically solving for $\tau_{12}$ from the observed ratio $R_{12/13}$ using Equation \ref{Tr} under LTE, here with a $^{12}$CO/$^{13}$CO abundance ratio of 65 \citep{wilson94}, \begin{equation}\label{rat} R_{12/13}=\frac{T_{12}}{T_{13}} = \frac{1-e^{-\tau_{12}}}{1-e^{-\tau_{13}}}\approx 65 \frac{1-e^{-\tau_{12}}}{\tau_{12}}. \end{equation} The factor $\frac{\tau_{12}}{1-e^{-\tau_{12}}}$ compensates for $^{12}$CO(3-2) being optically thick in the line wings, \begin{equation}\label{T12} \hat{T}_{12} \sim 65\hspace{1mm} {T_{13}} = \frac{\tau_{12}}{1-e^{-\tau_{12}}}{T_{12}}, \end{equation} and approaches $\tau_{12}$ itself when the line is very optically thick. \cite{rohlfs} note that Equation \ref{T12} would overestimate the ratio $R_{12/13}$ in Equation \ref{rat} by an amount that scales with $\tau_{12}$, resulting in an underestimation of the opacity correction factor. This underestimate arises because the assumption that $^{13}$CO(3-2) is optically thin may not hold near the line centre. The opacity profile can vary from one outflow to another. This ambiguity motivated our empirical model for determining the gas column density in outflow wings using the conditional estimation technique described in Section \ref{wings}.
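The numerical inversion of the ratio equation above for $\tau_{12}$ can be sketched by bisection, since the right-hand side is monotonic in $\tau_{12}$. This is a minimal illustration (function names are ours) of the standard opacity-correction approach, not the empirical method adopted in this work:

```python
import math

def tau12_from_ratio(R, abundance=65.0):
    """Solve R = abundance * (1 - exp(-tau)) / tau for tau12 by
    bisection.  The right-hand side decreases monotonically from
    `abundance` (as tau -> 0) towards 0, so any observed ratio
    0 < R < abundance brackets exactly one root."""
    g = lambda tau: abundance * (1.0 - math.exp(-tau)) / tau
    lo, hi = 1e-9, 1e3
    for _ in range(200):          # ~1 bit of precision per iteration
        mid = 0.5 * (lo + hi)
        if g(mid) > R:            # ratio still too high: need more opacity
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def opacity_correction(R, abundance=65.0):
    """Correction factor tau12 / (1 - exp(-tau12)) applied to the
    optically thick 12CO(3-2) brightness."""
    tau = tau12_from_ratio(R, abundance)
    return tau / (1.0 - math.exp(-tau))
```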
Our primary assumption is that all three CO lines are in LTE and have the same excitation temperature $T_{\rm ex}$. \cite{Gins11} caution that while lower-$J$ transition lines of $^{12}$CO might be in LTE, $^{12}$CO(3-2) may not be, because of its high critical density ($27\times$ greater than that of the $J=1\rightarrow0$ line). In this case, the $^{12}$CO(3-2) line may be subthermally excited ($T_{\rm ex}<T_{\rm K}$), which, following the expression for the $^{13}$CO(3-2) optical depth and Equation \ref{N13}, implies the gas column density is underestimated. The mass and other dynamical properties would then also be lower limits. However, those authors note that their sample sizes were small and that their claim of $^{12}$CO(3-2) being a poor tracer of column density in outflows is more relevant for later stages of evolution with warmer gas. By comparing the rotational transition lines of $^{12}$CO, \citet{Gins11} showed that the $J=3\rightarrow2$ line produces lower estimates of column density than the $J=2\rightarrow1$ and $J=1\rightarrow0$ lines for gas at higher excitation temperatures ($T_{\rm ex}>20$ K). In contrast, \citet{Plun} measured mass and other dynamical properties by adopting specific fixed values of $T_{\rm ex}$ as well as a functional form of $T_{\rm ex}$ that varied from pixel to pixel. This may generally be better than our method of estimating the excitation temperature from the peak brightness $T_{\rm peak}$ of $^{12}$CO(3-2) emission in LTE, as described in Section \ref{wings}. However, the pixel-by-pixel $T_{\rm ex}$ profile does not produce significantly different values unless the gas is warm ($T_{\rm ex}>$50 K) \citep{Plun}. In our case, the estimated excitation temperature ranges from 16 K to 36 K, which, following the argument of \cite{Plun}, should produce results in good agreement with those from a more generalized temperature profile. We also developed a model for extrapolating the H$_2$ column density from the $^{12}$CO(3-2) line alone.
\cite{BL} estimated the outflow masses of 12 sources from the $^{12}$CO(3-2) and $^{13}$CO(3-2) lines, determining an assumed common excitation temperature while imposing a different 12-to-13 CO abundance ratio. The authors used a functional dependence of the column density $N_{13}(\varv)$ on $T_{\mathrm ex}$ and $\tau_{13}$. For the sources with missing $^{13}$CO(3-2) data, they constrained $\tau_{12}\ll 1$ and $T_{\mathrm ex}>10$ K to estimate the $^{12}$CO(3-2) column density, and used a fixed $^{12}$CO(3-2) to H$_2$ ratio. In contrast, we have not imposed restrictions on $\tau_{12}$ and $T_{\mathrm ex}$ for measuring the H$_2$ column density. Instead, we used a direct least-squares fit to establish a functional relation between $\tau_{13}$ and the $^{12}$CO(3-2) brightness. Since the two CO species have approximately the same abundance ratio in all star-forming clouds, and the $^{12}$CO(3-2) transition is ubiquitous in outflows of class 0 and I protostars, the advantage of our approach is that Equation \ref{logmodel} may be applicable in any outflow study that lacks $^{13}$CO(3-2) line data. This approach establishes a direct relationship between the two CO lines in the outflow wings with more generality. Figure \ref{corr} and Table \ref{12-13} summarize the small-sample correlation between the fitted $^{13}$CO model and the estimates based on all lines. There is a systematic underestimate of outflow properties, which may be caused by unaccounted-for opacity in the $^{12}$CO line. Since we have a small sample size of 12, we place our estimates in context by comparing them to the catalogue presented in \cite{wu}, which contains 391 high-velocity molecular outflows from various sources in different evolutionary stages, including both low- and high-mass protostars. We plot our estimated values along with the values calculated by \citet{wu} (Figure \ref{wu}).
Specifically, we compare with the \cite{wu} results for (a) outflow mass vs. energy and (b) IR luminosity of the central sources vs. outflow energy. Both plots show significant correlations, but this can be primarily attributed to all the axes scaling with $d^2$, where $d$ is the distance to the source. In comparing the \cite{wu} data with our two sets of results (i.e., estimates from all lines and those from $^{12}$CO alone), we see that both sets of estimates follow the general trends and scales of the population as a whole. Furthermore, the margin between the $^{12}\mathrm{CO}$-only estimates and the multi-line estimates (Table \ref{12-13}) is small compared to the spread of the broader population. The similarity of the distributions of both sets of estimated values to the larger population of outflows indicates that our estimates and regression model provide good estimates of outflow properties suitable for survey analysis. Overall, we estimate that the projected outflow properties have a 0.3~dex uncertainty, and the unknown inclination angle suggests a further factor of 2 underestimate for the momentum and a factor of 2 underestimate for the energy, assuming a uniform distribution of angles on the sky. \section{Conclusions} In this paper, we have studied 13 molecular outflows in the Cygnus X region identified by \citetalias{gott12}, using JCMT observations of the $^{12}$CO(3-2), $^{13}$CO(3-2), and C$^{18}$O(3-2) spectral lines. We have calculated various properties of the outflows, identified associated infrared sources, and evaluated a new method to estimate gas column density from the $^{12}$CO(3-2) line alone. \begin{enumerate} \item We present each of the 13 molecular outflows in an atlas, displaying the extent of bipolarity and the spatial and spectral extent of the outflowing gas, along with the velocity distribution in PV-slices. All outflows except G80.314+1.330 appear to be associated with clouds in the Cygnus X region.
The outflow G80.314+1.330 has a relatively large negative LSR velocity and is likely associated with the Perseus Arm. \item Assuming LTE and a uniform excitation temperature among the three CO lines, we estimate the mass, momentum, and energy of the remaining 12 outflows by following the method described in \citetalias{deb18}. The results are summarized in Table \ref{prop_compare}. Our estimated values are comparable with those of a larger population study of outflows \citep{wu}, as shown in Figure \ref{wu}. In particular, we find the mechanical luminosity of the outflows is $L_\mathrm{mech}\sim 10^{-3}L_{\mathrm{IR}}$. \item We also test a method of estimating outflow properties from the $^{12}$CO(3-2) line data alone. We compare our $^{12}$CO(3-2)-only estimates with the three-line estimates. A relatively small but consistent underestimation (0.3 dex) is present in all three properties (mass, momentum, and energy; Figure \ref{corr}) and is likely due to the linewidth inferred from $^{12}$CO(3-2) being larger than the observed $^{13}$CO(3-2) line width, so that less emission is included in the outflow wings. Since our sample is small, we compare the values with a compilation of properties from \citet{wu}. In this context, the outflow properties we measure are consistent with the general population, and the uncertainties are within the scatter of the broader population (Figure \ref{wu}). \end{enumerate} After comparing the projected and estimated outflow properties, we conclude that our $^{12}$CO-only optical depth model produces a fairly close correlation between estimated and projected values. Therefore, we can utilize this model in our next work, which will present a large survey of outflows in Cygnus X.
\section*{Acknowledgements} The James Clerk Maxwell Telescope has historically been operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the National Research Council of Canada and the Netherlands Organisation for Scientific Research. The authors wish to recognize and acknowledge the significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. The authors acknowledge support from the Natural Sciences and Engineering Research Council of Canada, funding reference numbers RGPIN-2017-03987 and RGPIN 418517. {\it Data Availability}: The data underlying this article are available in the Canadian Astronomy Data Centre, at https://dx.doi.org/10.11570/21.0001. \section{Atlas of Molecular Outflows in Cygnus X} \label{app:atlas} Figures \ref{of4u} to \ref{of23l} show maps of the remaining 12 outflows analysed in the main text. \begin{figure*}[h!] \centering \includegraphics[width=0.95\textwidth]{figures/outflow4upper_bip_contour_pv.pdf} \caption{Outflow G81.435+2.147: (a) Blue- and redshifted outflow regions are shown in $^{12}$CO(3-2) (offset $+15$~K), $^{13}$CO(3-2) (offset $+5$~K), and C$^{18}$O(3-2) lines (b) Blue and red contour lines are obtained by integrating over velocity ranges from $v=-16$ to $-5\mathrm{~km~s}^{-1}$ and $v=0$ to $13\mathrm{~km~s}^{-1}$, and drawn at levels (7, 13, 20, 30, 40, 50) K~km~s$^{-1}$ and (10, 20, 30, 40, 50) K~km~s$^{-1}$ respectively.
(c) Contours are drawn at levels (2, 5, 7.5, 10, 15, 20) K.} \label{of4u} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/outflow4lower_bip_contour_pv.pdf} \caption{Outflow G81.424+2.140: (a) Blue- and redshifted outflow regions are shown in $^{12}$CO(3-2) (offset $+15$~K), $^{13}$CO(3-2) (offset $+5$~K), and C$^{18}$O(3-2) lines (b) Blue and red contour lines are obtained by integrating over velocity ranges from $v=-14$ to $-6\mathrm{~km~s}^{-1}$ and $v=-1.5$ to $6\mathrm{~km~s}^{-1}$, and drawn at levels (6, 13, 22) K~km~s$^{-1}$ and (4, 10, 24) K~km~s$^{-1}$ respectively. (c) Contours are drawn at levels (4, 10, 16) K.} \label{of4l} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/outflow5_bip_contour_pv.pdf} \caption{Outflow G81.302+1.055: (a) Blue- and redshifted outflow regions are shown in $^{12}$CO(3-2) (offset $+15$~K), $^{13}$CO(3-2) (offset $+5$~K), and C$^{18}$O(3-2) lines (b) Blue and red contour lines are obtained by integrating over velocity ranges from $v=9$ to $14.5\mathrm{~km~s}^{-1}$ and $v=17$ to $24\mathrm{~km~s}^{-1}$, and drawn at levels (0.45, 1, 2.5, 5, 7.5, 10, 12.5, 15, 20, 25, 28) K~km~s$^{-1}$ and (1, 3, 5, 7, 10, 15, 23) K~km~s$^{-1}$ respectively. (c) Contours are drawn at levels (1.2, 2.5, 5, 7.5, 10, 15, 20, 25) K.} \label{of5} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/outflow7_bip_contour_pv.pdf} \caption{Outflow G80.314+1.330: (a) Blue- and redshifted outflow regions are shown in $^{12}$CO(3-2) (offset $+10$~K), $^{13}$CO(3-2) (offset $+5$~K), and C$^{18}$O(3-2) lines (b) Blue and red contour lines are obtained by integrating over velocity ranges from $v=-37$ to $-33.5\mathrm{~km~s}^{-1}$ and $v=-29.5$ to $-27\mathrm{~km~s}^{-1}$, and drawn at levels (3, 7, 11, 16, 18) K~km~s$^{-1}$ and (3, 5, 7) K~km~s$^{-1}$ respectively.
(c) Contours are drawn at levels (2, 2.6, 2.9, 3.3) K.} \label{of7} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/outflow10_bip_contour_pv.pdf} \caption{Outflow G80.862+0.385: (a) Blue- and redshifted outflow regions are shown in $^{12}$CO(3-2) (offset $+20$~K), $^{13}$CO(3-2) (offset $+5$~K), and C$^{18}$O(3-2) lines (b) Blue and red contour lines are obtained by integrating over velocity ranges from $v=-15$ to $-5\mathrm{~km~s}^{-1}$ and $v=0$ to $8\mathrm{~km~s}^{-1}$, and drawn at levels (7, 12, 16, 22, 30, 40, 50, 63) K~km~s$^{-1}$ and (15, 20, 30, 40, 50, 60, 70, 80, 90) K~km~s$^{-1}$ respectively. (c) Contours are drawn at levels (1.2, 3, 5, 7, 10) K.} \label{of10} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/outflow14_bip_contour_pv.pdf} \caption{Outflow G81.663+0.468: (a) Blue- and redshifted outflow regions are shown in $^{12}$CO(3-2) (offset $+15$~K), $^{13}$CO(3-2) (offset $+5$~K), and C$^{18}$O(3-2) lines (b) Blue and red contour lines are obtained by integrating over velocity ranges from $v=10$ to $16.5\mathrm{~km~s}^{-1}$ and $v=23$ to $44\mathrm{~km~s}^{-1}$, and drawn at levels (3, 7, 12, 20, 30, 40) K~km~s$^{-1}$ and (1.5, 5, 10, 25, 40, 55, 75) K~km~s$^{-1}$ respectively. (c) Contours are drawn at levels (1.2, 3, 5, 7, 10) K.} \label{of14} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/outflow16_bip_contour_pv.pdf} \caption{Outflow G81.551+0.098: (a) Blue- and redshifted outflow regions are shown in $^{12}$CO(3-2) (offset $+15$~K), $^{13}$CO(3-2) (offset $+5$~K), and C$^{18}$O(3-2) lines (b) Blue and red contour lines are obtained by integrating over velocity ranges from $v=-14.5$ to $-8.5\mathrm{~km~s}^{-1}$ and $v=-4.5$ to $1.8\mathrm{~km~s}^{-1}$, and drawn at levels (4, 12, 20, 28, 32, 40) K~km~s$^{-1}$ and (1, 8, 18, 25, 30, 35) K~km~s$^{-1}$ respectively. 
(c) Contours are drawn at levels (0.6, 2, 4, 6, 8, 10, 12, 13, 14) K.} \label{of16} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/outflow17_bip_contour_pv.pdf} \caption{Outflow G81.582+0.104: (a) Blue- and redshifted outflow regions are shown in $^{12}$CO(3-2) (offset $+7$~K), $^{13}$CO(3-2) (offset $+3$~K), and C$^{18}$O(3-2) lines. (b) Blue and red contour lines are obtained by integrating over velocity ranges from $v=-15$ to $-8.5\mathrm{~km~s}^{-1}$ and $v=-4.5$ to $2\mathrm{~km~s}^{-1}$, and drawn at levels (2.5, 4.5, 8, 16, 25, 38, 48, 53) K~km~s$^{-1}$ and (3, 5, 8, 12, 18, 25, 31, 35) K~km~s$^{-1}$ respectively. (c) Contours are drawn at levels (1.7, 3, 6, 10, 13, 17, 21) K.} \label{of17} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/outflow19upper_bip_contour_pv.pdf} \caption{Outflow G82.581+0.203: (a) Blue- and redshifted outflow regions are shown in $^{12}$CO(3-2) (offset $+15$~K), $^{13}$CO(3-2) (offset $+5$~K), and C$^{18}$O(3-2) lines. (b) Blue and red contour lines are obtained by integrating over velocity ranges from $v=-10.5$ to $6.5\mathrm{~km~s}^{-1}$ and $v=15.5$ to $32\mathrm{~km~s}^{-1}$, and drawn at levels (5, 10, 20, 30, 45, 68, 77) K~km~s$^{-1}$ and (5, 10, 20, 32, 42) K~km~s$^{-1}$ respectively. (c) Contours are drawn at levels (1.5, 3, 5, 8, 10, 11, 12) K.} \label{of19u} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/outflow19lower_bip_contour_pv.pdf} \caption{Outflow G82.571+0.194: (a) Blue- and redshifted outflow regions are shown in $^{12}$CO(3-2) (offset $+15$~K), $^{13}$CO(3-2) (offset $+5$~K), and C$^{18}$O(3-2) lines. (b) Blue and red contour lines are obtained by integrating over velocity ranges from $v=-4$ to $7.5\mathrm{~km~s}^{-1}$ and $v=13.5$ to $23\mathrm{~km~s}^{-1}$, and drawn at levels (4, 7, 12) K~km~s$^{-1}$ and (5, 10, 15, 22) K~km~s$^{-1}$ respectively.
(c) Contours are drawn at levels (1, 3, 5, 8, 10, 11, 12) K.} \label{of19l} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/outflow23upper_bip_contour_pv.pdf} \caption{Outflow G80.158+2.727: (a) Blue- and redshifted outflow regions are shown in $^{12}$CO(3-2) (offset $+15$~K), $^{13}$CO(3-2) (offset $+5$~K), and C$^{18}$O(3-2) lines. (b) Blue and red contour lines are obtained by integrating over velocity ranges from $v=-25$ to $0.5\mathrm{~km~s}^{-1}$ and $v=10$ to $21\mathrm{~km~s}^{-1}$, and drawn at levels (4, 9, 14, 19, 25, 30, 34) K~km~s$^{-1}$ and (5, 8, 10, 12, 15) K~km~s$^{-1}$ respectively. (c) Contours are drawn at levels (1.9, 4, 6, 8, 10) K.} \label{of23u} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/outflow23lower_bip_contour_pv.pdf} \caption{Outflow G80.149+2.710: (a) Blue- and redshifted outflow regions are shown in $^{12}$CO(3-2) (offset $+15$~K), $^{13}$CO(3-2) (offset $+5$~K), and C$^{18}$O(3-2) lines. (b) Blue and red contour lines are obtained by integrating over velocity ranges from $v=-3$ to $3\mathrm{~km~s}^{-1}$ and $v=6$ to $12\mathrm{~km~s}^{-1}$, and drawn at levels (4, 9, 14, 19, 22) K~km~s$^{-1}$ and (8, 11, 13, 15, 25) K~km~s$^{-1}$ respectively. (c) Contours are drawn at levels (2, 3.5, 6, 8, 10) K.} \label{of23l} \end{figure*} \end{document}
\section{\label{sec:intro} Introduction} Neutrino oscillation was firmly established in the late 1990's with the observation by the Super-Kamiokande (SK) experiment that muon neutrinos produced by cosmic ray interactions in our atmosphere changed their flavor~\cite{Ashie:2005ik}. Measurements from the Sudbury Neutrino Observatory a few years later, in combination with SK data, revealed that neutrino oscillation was responsible for the apparent deficit of electron neutrinos produced in the Sun~\cite{PhysRevLett.89.011301}. In the most recent major advance, the T2K experiment~\cite{Abe:2013xua,Abe:2013hdq} and reactor experiments~\cite{An:2012eh,An:2013zwz,Ahn:2012nd,PhysRevLett.108.131801} have established that all three neutrino mass states are mixtures of all three flavor states, which allows the possibility of CP violation in neutrino oscillation. This paper describes our most recent measurements of neutrino oscillation including our first results from analyses that combine measurements of muon neutrino disappearance and electron neutrino appearance. The Tokai to Kamioka (T2K) experiment~\cite{Abe:2011ks} was made possible by the construction of the J-PARC high-intensity proton accelerator at a site that is an appropriate distance from the SK detector for precision measurements of neutrino oscillation. Protons, extracted from the J-PARC main ring, strike a target to produce secondary hadrons, which are focused and subsequently decay in-flight to produce an intense neutrino beam, consisting mostly of muon neutrinos. The neutrino beam axis is directed 2.5~degrees away from the SK detector, in order to produce a narrow-band 600~MeV flux at the detector, the energy that maximizes muon neutrino oscillation at the 295~km baseline. Detectors located 280~m downstream of the production target measure the properties of the neutrino beam, both on-axis (INGRID detector) and off-axis in the direction of SK (ND280 detector). 
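The narrow-band character of the off-axis flux follows from the two-body kinematics of $\pi \to \mu \nu_{\mu}$ decay: at a fixed angle away from the parent pion direction, the neutrino energy depends only weakly on the pion energy. A minimal numerical sketch of this standard kinematic fact (illustrative values only, not part of the T2K beam simulation):

```python
import math

M_PI, M_MU = 0.13957, 0.10566  # charged pion and muon masses in GeV/c^2

def e_nu(e_pi, theta):
    """Neutrino energy from pi -> mu nu decay at lab angle theta (rad)."""
    p_pi = math.sqrt(e_pi**2 - M_PI**2)
    return (M_PI**2 - M_MU**2) / (2.0 * (e_pi - p_pi * math.cos(theta)))

theta_oa = math.radians(2.5)  # T2K off-axis angle
for e_pi in (2.0, 3.0, 5.0, 8.0):
    print(f"E_pi = {e_pi} GeV -> E_nu = {e_nu(e_pi, theta_oa):.2f} GeV "
          f"(on-axis: {e_nu(e_pi, 0.0):.2f} GeV)")
```

At 2.5 degrees the neutrino energy remains within roughly 0.45--0.7~GeV over a wide range of pion energies, whereas on-axis it grows steadily with the pion energy; folded with the parent pion spectrum, this is what produces the narrow flux peak near 600~MeV.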
T2K began operation in 2010 and was interrupted for one year by the Great East Japan Earthquake in 2011. The results reported in this paper use data collected through 2013, as summarized in Tab.~\ref{tbl:run1to4_pot}. With these data, almost 10\% of the total proposed for the experiment, T2K enters the era of precision neutrino oscillation measurements. In 2014, we began to collect our first data in which the current in the magnetic focusing horns is reversed, so as to produce a beam primarily of muon anti-neutrinos. Future publications will report on measurements using that beam configuration. We begin this paper by describing the neutrino beamline and how we model neutrino production and interactions. We then summarize the near detectors and explain how we use their data to improve model predictions of neutrino interactions at the far detector. This is followed by an overview of the far detector, how neutrino candidate events are selected, and how we model the detector response. Next, we describe the neutrino oscillation model, list the external inputs for the oscillation parameters, summarize the approaches used in the oscillation analyses, and characterize our main sources of systematic uncertainty. The final sections give detailed descriptions and results for the analysis of \ensuremath{\nu_\mu}\xspace\ disappearance alone~\cite{Abe:2014ugx} and for the joint analyses of \ensuremath{\nu_\mu}\xspace\ disappearance and \ensuremath{\nu_e}\xspace\ appearance. \begin{table}[h] \caption{ T2K data-taking periods and the protons on target (POT) used in the analyses presented in this paper. The maximum stable proton beam power achieved was 230~kW. } \label{tbl:run1to4_pot} \begin{tabular}{ l c c } \hline\hline Run Period & Dates & POT \\ \hline Run 1 & Jan. 2010-Jun. 2010 & \(0.32\times10^{20}\) \\ Run 2 & Nov. 2010-Mar. 2011 & \(1.11\times10^{20}\) \\ Run 3 & Mar. 2012-Jun. 2012 & \(1.58\times10^{20}\) \\ Run 4 & Oct. 
2012-May 2013& \(3.56\times10^{20}\) \\ \hline Total & Jan. 2010-May 2013 & \(6.57\times10^{20}\) \\ \hline \hline \end{tabular} \end{table} \section{\label{sec:beam} Neutrino Beamline} The T2K primary beamline transports and focuses the 30\,\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace proton beam extracted from the J-PARC Main Ring onto a 91.4-cm long graphite target. The secondary beamline consists of the target station, decay volume, and beam dump. The apparatus has been described in detail elsewhere~\cite{Abe:2011ks}. The upstream end of the target station contains a collimator to protect the three downstream focusing horns. The graphite target sits inside the first horn, and pions and other particles exiting the target are focused by these magnetic horns and are allowed to decay in the 96-m-long decay volume. Following the decay volume, protons and other particles that have not decayed are stopped in a beam dump consisting of 3.2~m of graphite and 2.4~m of iron, while muons above 5\,\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace pass through and are detected in a Muon Monitor, designed to monitor the beam stability. With further absorption by earth, a beam of only neutrinos (primarily \ensuremath{\nu_\mu}\xspace) continues to the near and far detectors. \subsection{Neutrino flux simulation} \label{sec:beam:fluxmc} The secondary beamline is simulated in order to estimate the nominal neutrino flux (in absence of neutrino oscillations) at the near and far detectors and the covariance arising from uncertainties in hadron production and the beamline configuration~\cite{PhysRevD.87.012001}. We use the FLUKA 2008 package~\cite{Ferrari:2005zk,Battistoni:2007zzb} to model the interactions of the primary beam protons and the subsequently-produced pions and kaons in the graphite target. As described below, we tune this simulation using external hadron production data. 
Particles exiting the target are tracked through the magnetic horns and decay volume in a GEANT3~\cite{GEANT3} simulation using the GCALOR~\cite{GCALOR} package to model the subsequent hadron decays. In order to precisely predict the neutrino flux, each beam pulse is measured in the primary neutrino beamline. The suite of proton beam monitors consists of five current transformers which measure the proton beam intensity, 21 electrostatic monitors which measure the proton beam position, and 19 segmented secondary emission monitors and an optical transition radiation monitor~\cite{Bhadra:2012st} which measure the proton beam profile. The proton beam properties have been stable throughout T2K operation, and their values and uncertainties for the most recent T2K run period, Run 4, are given in Tab.~\ref{tbl:pbeam_run4}. The values for other run periods have been published previously~\cite{PhysRevD.87.012001}. The neutrino beam position and width stability is also monitored by the INGRID detector, and the results are given in Sec.~\ref{sec:INGRID}. \begin{table}[tbp] \caption{Summary of the estimated proton beam properties and their systematic errors at the collimator for the T2K Run 4 period. 
Shown are the mean position (\(X, Y\)), angle (\(X', Y'\)), width (\(\sigma\)), emittance (\(\epsilon\)), and Twiss parameter (\(\alpha\))~\cite{McDonald:1989}.} \label{tbl:pbeam_run4} \begin{tabular}{ l c c c c } \hline\hline & \multicolumn{2}{c}{X Profile} & \multicolumn{2}{c}{Y Profile} \\ Parameter & Mean & Error & Mean & Error \\ \hline \(X,Y\) (mm) & 0.03 & 0.34 & -0.87 & 0.58 \\ \(X',Y'\) (mrad) & 0.04 & 0.07 & 0.18 & 0.28 \\ \(\sigma\) (mm) & 3.76 & 0.13 & 4.15 & 0.15 \\ \(\epsilon\) (\(\pi\) mm mrad) & 5.00 & 0.49 & 6.14 & 2.88 \\ \(\alpha\) & 0.15 & 0.10 & 0.19 & 0.35 \\ \hline \hline \end{tabular} \end{table} To improve the modeling of hadron interactions inside and outside the target, we use data from the NA61/SHINE experiment~\cite{Abgrall:2011ae,Abgrall:2011ts} collected at 31\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace and several other experiments~\cite{eichten,allaby,e910}. The hadron production data used for the oscillation analyses described here are equivalent to those used in our previous publications~\cite{PhysRevD.87.012001,Abe:2013xua}, including the statistics-limited NA61/SHINE dataset taken in 2007 on a thin carbon target. The NA61/SHINE data analyses of the 2009 thin-target and T2K-replica-target data are ongoing, and these additional data will be used in future T2K analyses. We incorporate the external hadron production data by weighting each simulated hadron interaction according to the measured multiplicities and particle production cross sections, using the true initial and final state hadron kinematics, as well as the material in which the interaction took place. The predicted flux at SK from the T2K beam is shown in Fig.~\ref{fig:flux_at_sk}. \subsection{Neutrino flux uncertainties} \label{sec:beam:fluxerrs} Uncertainty in the neutrino flux prediction arises from the hadron production model, proton beam profile, horn current, horn alignment, and other factors. 
For each source of uncertainty, we vary the underlying parameters to evaluate the effect on the flux prediction in bins of neutrino energy for each neutrino flavor~\cite{PhysRevD.87.012001}. Table~\ref{tbl:beam_errs} shows the breakdown for the \ensuremath{\nu_\mu}\xspace\ and \ensuremath{\nu_e}\xspace\ flux uncertainties for energy bins near the peak energy. \begin{table}[tbp] \caption{Contributions to the systematic uncertainties for the unoscillated \ensuremath{\nu_\mu}\xspace\ and \ensuremath{\nu_e}\xspace\ flux prediction at SK, near the peak energy and without the use of near detector data. The values are shown for the \ensuremath{\nu_\mu}\xspace\ (\ensuremath{\nu_e}\xspace) energy bin 0.6~GeV $<E_\nu<$ 0.7~GeV (0.5~GeV $<E_\nu<$ 0.7~GeV).} \label{tbl:beam_errs} \begin{tabular}{ l c c } \hline\hline Error source & \multicolumn{2}{c}{Uncertainty in SK flux near peak (\%)} \\ & \(\nu_\mu\) & \(\nu_e\) \\ \hline Beam current normalization & 2.6 & 2.6 \\ Proton beam properties & 0.3 & 0.2 \\ Off axis angle & 1.0 & 0.2 \\ Horn current & 1.0 & 0.1 \\ Horn field & 0.2 & 0.8 \\ Horn misalignment & 0.4 & 2.5 \\ Target misalignment & 0.0 & 2.0 \\ MC statistics & 0.1 & 0.5 \\ \hline Hadron production & & \\ \qquad Pion multiplicities & 5.5 & 4.7 \\ \qquad Kaon multiplicities & 0.5 & 3.2 \\ \qquad Secondary nucleon multiplicities & 6.9 & 7.6 \\ \qquad Hadronic interaction lengths & 6.7 & 6.9 \\ Total hadron production & 11.1 & 11.7 \\ \hline Total & 11.5 & 12.4 \\ \hline \hline \end{tabular} \end{table} The largest uncertainty from beam monitor calibrations arises in the beam current measurement using a current transformer, but its effect on the oscillation analyses is reduced through the use of near detector data. The remaining uncertainties due to the uncertain position and calibration of the other beam monitors are significantly smaller. 
As described in Sec.~\ref{sec:INGRID}, the neutrino beam direction is determined with the INGRID detector, and therefore the assigned uncertainty on the off-axis angle comes directly from the INGRID beam profile measurement. To account for drifts in the horn current measurement over time and a possible scale uncertainty, a conservative error of 5~kA is assigned to the horn current. In the flux simulation, the horn magnetic field is assumed to have a \(1/r\) dependence. Deviations from this field, measured using a Hall probe, are used to define the uncertainty of the horn field. Horn and target alignment uncertainties come from survey measurements. Systematic uncertainties in modeling particle multiplicities from hadronic interactions come from several sources: experimental uncertainties in the external data, the uncertain scaling to different incident particle momenta and target materials, and extrapolation to regions of particle production phase space not covered by external data~\cite{PhysRevD.87.012001}. The overall uncertainty is described by calculating the covariance of the pion, kaon, and secondary nucleon multiplicities and their interaction lengths. The systematic errors on the \(\nu_\mu\) flux at SK, without applying near detector data, are shown in bins of neutrino energy in Fig.~\ref{fig:beam_errs_breakdown}. The dominant source of uncertainty is from hadron production. \begin{figure}[tbp] \begin{center} \includegraphics[width=100mm]{fig01.pdf} \caption{The T2K unoscillated neutrino flux prediction at SK is shown with bands indicating the systematic uncertainty prior to applying near detector data. The flux in the range 8~GeV $< E_\nu <$ 30~GeV is simulated but not shown. The binning for the vector of systematic parameters, $\vec{b}$, for each neutrino component is shown by the four scales. The same binning is used for the ND280 and SK flux systematic parameters, $\vec{b}_{n}$ and $\vec{b}_{s}$.
} \label{fig:flux_at_sk} \end{center} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=0.8\textwidth]{fig02.pdf} \caption{Fractional systematic error on the \(\nu_\mu\) flux at SK arising from the beamline configuration and hadron production, prior to applying near detector data constraints.} \label{fig:beam_errs_breakdown} \end{center} \end{figure} For analyses of near and far detector data, the uncertainties arising from the beamline configuration and hadron production are propagated using a vector of systematic parameters, \(\vec{b}\), which scale the nominal flux in bins of neutrino energy, for each neutrino type (\(\nu_e\), \(\nu_\mu\), \(\bar{\nu}_e\), \(\bar{\nu}_\mu\)) at each detector (ND280 and SK). The energy binning for each neutrino type is shown in Fig.~\ref{fig:flux_at_sk}. The covariance for these parameters is calculated separately for each T2K run period given in Tab.\ \ref{tbl:run1to4_pot}, and the POT-weighted average is the flux covariance, \(V_b\), used by the near detector and oscillation analyses. We define $\vec{b}_{n}$ and $\vec{b}_{s}$ as the sub-vector elements of $\vec{b}$ for ND280 and SK. It is through the covariance between $\vec{b}_{n}$ and $\vec{b}_{s}$ that the near detector measurements of \ensuremath{\nu_\mu}\xspace\ events constrain the expected unoscillated far detector \ensuremath{\nu_\mu}\xspace\ and \ensuremath{\nu_e}\xspace\ event rates in the oscillation analyses. \section{\label{sec:nuint} Neutrino Interaction Model} Precision neutrino oscillation measurements rely on having an accurate neutrino interaction model. The model is used to evaluate the selection efficiencies of the different signal and background interactions as well as the estimate of the neutrino energy from the detected final state particles. Finally, the model forms the basis to account for differences in the predicted neutrino cross sections between different T2K detectors due to their different target nuclei compositions. 
All of these factors and their uncertainties are incorporated into the model for the T2K experiment through a set of systematic parameters $\vec{x}$ listed in Tab.~\ref{tbl:xsecpar}, and their covariance $V_x$. This section describes the interaction model in NEUT\xspace, the primary neutrino interaction generator used by T2K, explains how we use data from external experiments to provide initial constraints on the model before fitting to T2K data, discusses remaining uncertainties not constrained by external data sources, and assesses additional uncertainties based on differences between the NEUT\xspace model and those found in other interaction generators. \subsection{Neutrino Interaction Model} \label{subsec:nuintmodel} The interaction model used in this analysis is NEUT\xspace~\cite{Hayato:2009} version 5.1.4.2, which models neutrino interactions on various nuclear targets over a range of energies from $\sim$100\,MeV to $\sim$100\,TeV. NEUT\xspace simulates seven types of charged current (CC) and neutral current (NC\xspace) interactions: (quasi-)elastic scattering, single pion production, single photon production, single kaon production, single eta production, deep inelastic scattering (DIS), and coherent pion production. Interactions not modeled in this version of NEUT\xspace include, but are not limited to, multi-nucleon interactions in the nucleus~\cite{Nieves:2012,Martini:2010} and neutrino-electron scattering processes. The Llewellyn Smith model~\cite{LlewellynSmith:1972} is used as the basis to describe charged current quasi-elastic (CCQE) and neutral current elastic scattering (NCEL) interactions. To take into account the fact that the target nucleon is bound in a nucleus, the Relativistic Fermi Gas (RFG) model by Smith and Moniz~\cite{SmithMoniz:1972,SmithMonizErratum} is used. The model uses dipole axial form factors and the vector form factors derived from electron scattering experiments~\cite{Bradford:2006yz}.
The default quasi-elastic axial mass, \ensuremath{M_{A}^{QE}}\xspace, is 1.21\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace, and the default Fermi momenta for the two dominant target nuclei, carbon and oxygen, are 217\,\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace and 225\,\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace, respectively. Appropriate Fermi momenta, $p_F$, and binding energies, $E_B$, are assigned to the other target nuclei. The Rein and Sehgal model~\cite{ReinSehgal:1981} is used to simulate neutrino-induced single pion production. The model splits the interaction into two steps: $\nu + N \to \ell + N^{\star}$, $N^{\star} \to \pi + N'$, where $N$ and $N'$ are nucleons, $\ell$ is an outgoing neutrino or charged lepton, and $N^{\star}$ is the resonance. For the initial cross section calculation, the amplitude of each resonance production is multiplied by the branching fraction of the resonance into a pion and nucleon. Interference between 18 resonances with masses below 2\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace is included in the calculation. To avoid double counting processes that produce a single pion through either resonance production or DIS when calculating the total cross section, the invariant hadronic mass $W$ is restricted to be less than 2\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace. The model assigns a 20\% branching fraction to the additional delta decay channel that can occur in the nuclear medium, $\Delta + N \rightarrow N + N$, which we refer to as pion-less delta decay (PDD). Since the Rein and Sehgal model provides the amplitudes of the neutrino resonance production, we adjust the NEUT\xspace predictions for the cross sections of single photon, kaon, and eta production by changing the branching fractions of the various resonances. The coherent pion production model is described in~\cite{Rein:1982pf}.
The interaction is described as $\nu + A \to \ell + \pi + X$, where $A$ is the target nucleus, $\ell$ is the outgoing lepton, $\pi$ is the outgoing pion, and $X$ is the remaining nucleus. The CC component of the model takes into account the lepton mass correction provided by the same authors~\cite{Rein:2006di}. The DIS cross section is calculated over the range $W>1.3$\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace. The structure functions are taken from the GRV98 parton distribution function~\cite{Gluck:1998xa} with corrections proposed by Bodek and Yang~\cite{Bodek:2003wd} to improve agreement with experiments in the low-$Q^{2}$ region. To avoid double counting single pion production with the resonance production described above, in the region $W\le2$\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace the model includes only the probability to produce more than one pion. For $W>2$\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace, NEUT\xspace uses PYTHIA/JetSet~\cite{Sjostrand:1993yb} for hadronization, while for $W\le2$\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace it uses its own model. Hadrons generated in a neutrino-nucleus interaction can interact with the nucleus, and these final state interactions (FSI) can affect both the total number of particles observed in a detector and their kinematics. NEUT\xspace uses a cascade model for pions, kaons, etas, and nucleons. Though the details differ slightly among hadron species, the basic procedure is as follows. The starting point for the cascade model is the neutrino interaction point in the nucleus, based on a Woods-Saxon density distribution~\cite{Woods:1954zz}, except in DIS, where a formation zone is taken into account. The hadron is moved a small distance and the interaction probabilities for that step are calculated. The interaction types include charge exchange, inelastic scattering, particle production, and absorption.
If an interaction has occurred, then the kinematics of the particle are changed, as is the particle type if needed. The process is repeated until all particles are either absorbed or escape the nucleus. \subsection{Constraints From External Experiments} \label{subsec:externalconstraints} To establish prior values and errors for the neutrino-interaction systematic parameters $\vec{x}$, and to constrain the subset to which ND280 observables are insensitive, neutrino-nucleus scattering data from external experiments are used. The datasets external to T2K come from two basic sources: pion-nucleus and neutrino-nucleus scattering experiments. To constrain pion-nucleus cross section parameters in the NEUT\xspace FSI model, pion-nucleus scattering data on a range of nuclear targets are used. The most important external source of neutrino data for our interaction model parameter constraints is the MiniBooNE\xspace experiment~\cite{mb-nim}. The MiniBooNE\xspace flux~\cite{mb-flux} covers an energy range similar to that of T2K, and, as a 4$\pi$ detector like SK, MiniBooNE\xspace has a similar phase space acceptance; as a result, NEUT\xspace is tested over a broader range of $Q^{2}$ than in current ND280 analyses. \subsubsection{Constraints From Pion-Nucleus Scattering Experiments} \label{sub:piA} To evaluate the uncertainty in the pion transport model in the nucleus, we consider the effects of varying the pion-nucleus interaction probabilities via six scale factors. These scale factors affect the following processes in the cascade model: absorption ($x^{FSABS}$), low energy QE scattering including single charge exchange ($x^{FSQE}$) and low energy single charge exchange (SCX) ($x^{FSCX}$) in a nucleus, high energy QE scattering ($x^{FSQEH}$), high energy SCX ($x^{FSCXH}$), and pion production ($x^{FSINEL}$).
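To make the role of these scale factors concrete, the cascade just described can be caricatured as a toy Monte Carlo: a pion is stepped outward through the nucleus and, at each step, scaled interaction probabilities are sampled until the pion is absorbed or escapes. All numbers below (step size, nuclear radius, base probabilities) are invented for illustration; this is not the NEUT\xspace cascade, which computes position- and momentum-dependent probabilities:

```python
import random

# Invented per-step probabilities for a low-energy pion (illustration only)
BASE_PROB = {"qe": 0.020, "scx": 0.005, "inel": 0.002, "abs": 0.010}
# FSI scale factors in the spirit of x^{FSQE}, x^{FSCX}, x^{FSINEL}, x^{FSABS}
SCALE = {"qe": 1.0, "scx": 1.0, "inel": 1.0, "abs": 1.1}

def cascade(rng, radius=5.0, step=0.2):
    """Step a pion outward; return 'absorbed' or 'escape'."""
    r = 0.0
    while r < radius:
        for kind in BASE_PROB:
            if rng.random() < BASE_PROB[kind] * SCALE[kind]:
                if kind == "abs":
                    return "absorbed"
                # qe/scx/inel: the real model would update the kinematics
                # here (and, for charge exchange, the pion charge)
        r += step
    return "escape"

rng = random.Random(42)
n = 10000
outcomes = [cascade(rng) for _ in range(n)]
print("absorbed fraction:", outcomes.count("absorbed") / n)
```

Scaling $x^{FSABS}$ up directly increases the fraction of pions absorbed before exiting the nucleus, which is how these parameters propagate into the predicted particle multiplicities.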
The low (high) energy parameters are used for pions with momenta below (above) 500\,\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace; the high energy parameters are labeled explicitly, and the remaining parameters are all low energy. The simulation used to perform this study is similar to the one in~\cite{Salcedo:1988}. The model is fit to a range of energy-dependent cross sections spanning nuclear targets from carbon to lead~\cite{ashery:piscat,levenson:piscat,ingram:piscat,jones:piscat,giannelli:piscat,ransome:piscat,miller:piscat,nakai:piscat,navon:piscat,ashery:pioncx,rowntree:piscat,fujii:piscat,saunders:piscat,allardyce:piscat,cronin:piscat,crozon:piscat,binon:piscat,wilkin:piscat,clough:piscat,carroll:piscat,bowles:piscat,wood:piondcx,takahashi:piscat,gelderloos:piscat,grion:piscat,rahav:piscat,aoki:piscat}. The best-fit scale factors for these parameters are shown in Tab.~\ref{tab:fsi_parsets}, as well as the maximum and minimum values for each parameter taken from 16 points on the 1$\sigma$ surface of the 6-dimensional parameter space. The parameter sets are used for assessing systematic uncertainty in secondary hadronic interactions in the near and far detectors, as discussed in Secs.~\ref{sec:BANFF}B and~\ref{sec:SK}C, respectively. \begin{table}[tbp] \small \caption[NEUT\xspace FSI Parameter Sets]{NEUT\xspace FSI parameters, $\vec{x}^{FSI}$, that scale each interaction cross section.
Shown are the best-fit and the maximum and minimum scaling values from the 16 parameter sets taken from the 6-dimensional 1$\sigma$ surface.} \begin{tabular}{c c c c c c c} \hline\hline & \ \ $x^{FSQE}$ \ \ & \ \ $x^{FSQEH}$ \ \ & \ \ $x^{FSINEL}$ \ \ & \ \ $x^{FSABS}$ \ \ & \ \ $x^{FSCX}$ \ \ & $x^{FSCXH}$ \ \ \\ \hline Best Fit & 1.0 & 1.8 & 1.0 & 1.1 & 1.0 & 1.8 \\ \hline Maximum & 1.6 & 2.3 & 1.5 & 1.6 & 1.6 & 2.3 \\ Minimum & 0.6 & 1.1 & 0.5 & 0.6 & 0.4 & 1.3 \\ \hline \hline \end{tabular} \label{tab:fsi_parsets} \end{table} \subsubsection{Constraints From MiniBooNE\xspace CCQE\xspace Measurements} To constrain parameters related to the CCQE model and its overall normalization, we fit the 2D cross-section data from MiniBooNE\xspace~\cite{mb-ccqe}, binned in the outgoing muon kinetic energy, $T_{\mu}$, and angle with respect to the neutrino beam direction, $\theta_{\mu}$. The NEUT\xspace interactions selected for the fit are all true CCQE interactions. Our fit procedure follows that described by Juszczak {\it et al.}~\cite{mb-ccqe-wroclaw}, with the \ensuremath{\chi^2}\xspace defined as \begin{equation} \chi^{2}(\ensuremath{M_{A}^{QE}}\xspace,\lambda) = \sum\limits_{i=0}^n \Bigg\{\frac{p^{\textrm{d}}_{i} - p^{\textrm{p}}_{i}(\ensuremath{M_{A}^{QE}}\xspace,\lambda)}{\Delta p_{i}} \Bigg\}^{2}+\bigg( \frac{\lambda^{-1}-1}{\Delta\lambda}\bigg)^{2} \label{eq:ccqefit} \end{equation} where the index $i$ runs over the bins of the ($T_{\mu},\cos{\theta_{\mu}}$) distribution, $p^{\textrm{d(p)}}_{i}$ is the measured (predicted) differential cross section, $\Delta p_{i}$ is its uncertainty, $\lambda$ is the CCQE normalization, and $\Delta\lambda$ is the normalization uncertainty, set at 10.7\% by MiniBooNE measurements. The main difference from the procedure in~\cite{mb-ccqe-wroclaw} is that we include ($T_{\mu},\cos{\theta_{\mu}}$) bins where a large percentage of the events have 4-momentum transfers that are not allowed in the RFG model. 
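The fit of Eq.~(\ref{eq:ccqefit}) is a penalized least-squares minimization over $(\ensuremath{M_{A}^{QE}}\xspace,\lambda)$. The sketch below shows only that structure, per-bin residuals plus the normalization penalty, using an invented three-bin "dataset" and an invented linear toy model (not the MiniBooNE data or the NEUT\xspace prediction):

```python
# Toy penalized chi^2 in the spirit of Eq. (1); all inputs are invented.
p_d = [1.00, 0.80, 0.55]      # toy measured cross sections in 3 bins
dp  = [0.05, 0.05, 0.05]      # their uncertainties
d_lambda = 0.107              # normalization uncertainty (10.7%)

def prediction(ma, lam, shape=(1.0, 0.8, 0.6)):
    # toy model: bins scale with lambda and weakly with M_A around 1.2
    return [lam * s * (1.0 + 0.3 * (ma - 1.2)) for s in shape]

def chi2(ma, lam):
    pred = prediction(ma, lam)
    resid = sum(((d - p) / e) ** 2 for d, p, e in zip(p_d, pred, dp))
    return resid + ((1.0 / lam - 1.0) / d_lambda) ** 2  # penalty term

# coarse grid scan for the minimum over (M_A, lambda)
best = min((chi2(ma, lam), ma, lam)
           for ma in [1.00 + 0.01 * i for i in range(100)]
           for lam in [0.70 + 0.01 * j for j in range(60)])
chi2_min, ma_fit, lam_fit = best
print(f"chi2_min = {chi2_min:.2f} at M_A = {ma_fit:.2f}, lambda = {lam_fit:.2f}")
```

The penalty term pulls $\lambda$ toward unity with strength set by $\Delta\lambda$, exactly as in Eq.~(\ref{eq:ccqefit}); the real analysis minimizes over the full MiniBooNE\xspace $(T_{\mu},\cos\theta_{\mu})$ binning.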
We find $\ensuremath{M_{A}^{QE}}\xspace = 1.64 \pm 0.03$\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace and $\lambda$ = 0.88$\pm$0.02 with $\chi^{2}_{min}$/DOF = 26.9/135. It should be noted that MiniBooNE\xspace does not report correlations, and without this information assessing the goodness-of-fit is not possible. To take this into account, we assign the uncertainty to be the difference between the fit result and nominal plus the uncertainty on the fit result. The \ensuremath{M_{A}^{QE}}\xspace fit uncertainty is set to 0.45\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace, which covers (at 1 standard deviation) the point estimates from our fit to the MiniBooNE data, the K2K result~\cite{Gran:2006jn} and a world deuterium average, 1.03\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace~\cite{Bernard:2001rs}. The normalization uncertainty for neutrinos with $E_{\nu}<1.5$\,\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace, $x_{1}^{QE}$, is set to 11\%, the MiniBooNE\xspace flux normalization uncertainty, since most of the neutrinos from MiniBooNE\xspace are created in this energy range. \subsubsection{Constraints From MiniBooNE\xspace Inclusive $\pi$ Measurements} To constrain single pion production parameter errors, we use published MiniBooNE\xspace differential cross-section datasets for CC single \ensuremath{\pi^0}\xspace production (CC1$\pi^0$\xspace)~\cite{mb-cc1pi0}, CC single \ensuremath{\pi^+}\xspace production (CC1$\pi^+$\xspace)~\cite{mb-cc1pip}, and NC single \ensuremath{\pi^0}\xspace production (NC1$\pi^0$\xspace)~\cite{mb-nc1pi0}. Because the modes are described by a set of common parameters in NEUT\xspace, we perform a joint fit to all three data sets. The selection of NEUT\xspace simulated events follows the signal definition in each of the MiniBooNE\xspace measurements. 
For the (CC1$\pi^0$\xspace, CC1$\pi^+$\xspace, NC1$\pi^0$\xspace) selections, the signals are defined as (\ensuremath{\nu_\mu}\xspace, \ensuremath{\nu_\mu}\xspace, $\nu$) interactions with (1,1,0) \mun and exactly one (\ensuremath{\pi^0}\xspace,\ensuremath{\pi^+}\xspace,\ensuremath{\pi^0}\xspace) exiting the target nucleus, with no additional leptons or mesons exiting. In all cases, there is no constraint on the number of nucleons or photons exiting the nucleus. We consider a range of models by adjusting the 9 parameters shown in Tab.~\ref{tab:singlepi-fitparams}. \ensuremath{M_A^{RES}}\xspace is the axial vector mass for resonant interactions, which affects both the rate and $Q^2$ shape of interactions. The ``$W$ shape'' parameter is an empirical parameter that we introduce in order to improve agreement with NC1$\pi^0$\xspace \ensuremath{|\mathbf{p}_{\pi^0}|}\xspace data. The weighting function used is a Breit-Wigner function with a phase space term: \begin{equation} r(W; S) = \alpha \cdot \frac{S}{(W-W_0)^2 + S^2/4} \cdot P(W;m_\pi, m_N) \label{eq:wshape} \end{equation} where $S$ is the ``$W$ shape'' parameter, $W_0=1218$\,\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace, $P(W; m_\pi, m_N)$ is the phase space for a two-body decay of a particle with mass $W$ into particles with masses $m_\pi$ and $m_N$, and $\alpha$ is a normalization factor calculated to leave the total nucleon-level cross section unchanged as $S$ is varied. The nominal values of $S$ and $W_{0}$ come from averages of fits to two $W$ distributions of NEUT\xspace interactions, one with a resonance decaying to a neutron and $\pi^{+}$ and the other with it decaying to a proton and $\pi^{0}$. The ``CCOther shape'' parameter, $x^{CCOth}$, modifies the neutrino energy dependence of the cross section for a combination of CC modes, as described in Sec.~\ref{subsec:othermodelerrors}; the remaining parameters are normalizations applied to the NEUT\xspace interaction modes.
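As a numerical illustration of the Eq.~(\ref{eq:wshape}) reweighting, the sketch below applies it to a toy sample of true $W$ values. Two assumptions are made for the sketch (this is not the NEUT\xspace implementation): $P$ is taken to be the two-body decay momentum of the $\pi N$ system, and $\alpha$ is computed on the sample itself so that the summed event weight is preserved:

```python
import math

M_PI, M_N, W0 = 139.57, 938.92, 1218.0  # masses and W_0 in MeV/c^2

def p_cm(w, m1=M_PI, m2=M_N):
    """Two-body decay momentum of a state of mass w -> m1 + m2 (MeV/c)."""
    if w <= m1 + m2:
        return 0.0
    return math.sqrt((w**2 - (m1 + m2)**2) * (w**2 - (m1 - m2)**2)) / (2 * w)

def r_unnorm(w, s):
    """Breit-Wigner times phase-space factor, i.e. r(W; S) / alpha."""
    return s / ((w - W0) ** 2 + s ** 2 / 4.0) * p_cm(w)

def weights(w_sample, s, s_default=87.7):
    """Per-event weights moving the sample from S = s_default to S = s."""
    ratio = [r_unnorm(w, s) / r_unnorm(w, s_default) for w in w_sample]
    alpha = len(ratio) / sum(ratio)  # normalize: total rate unchanged
    return [alpha * r for r in ratio]

w_sample = [1120.0, 1180.0, 1218.0, 1260.0, 1350.0, 1500.0]  # toy W values
w = weights(w_sample, s=42.4)  # narrower best-fit "W shape" value
print([round(x, 2) for x in w])
```

Decreasing $S$ from its default narrows the distribution: events near $W_0$ are weighted up and events in the tails are weighted down, while $\alpha$ keeps the total rate fixed.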
Simulated events modified by $x^{CCOth}$ constitute a small fraction of the selected samples. As a result, the data have minimal power to constrain this parameter; the same holds for the NC1$\pi^{\pm}$, NC coherent pion, and NCOther normalization parameters, $x^{NC1\pi^{\pm}}$, $x^{NCcoh\pi}$, and $x^{NCOth}$, respectively. The T2K oscillation analyses are insensitive to these poorly determined parameters, and an arbitrary constraint is applied to stabilize the fits. In our external data analysis the NC coherent normalization cannot be constrained independently of the NC1$\pi^0$\xspace normalization, $x^{NC1\pi^{0}}$, because there is no difference in the \ensuremath{|\mathbf{p}_{\pi^0}|}\xspace spectrum between the two components. The errors given in Tab.~\ref{tab:singlepi-fitparams} also include the variance observed when refitting with the 16 FSI 1$\sigma$ parameter sets, and are scaled to account for the simultaneous fit to multiple datasets following the approach of Maltoni and Schwetz~\cite{maltonischwetz}. The ``$W$ shape'' nominal prior is kept at the default of 87.7\,\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace and, in the absence of reported correlations from MiniBooNE\xspace, the uncertainty is estimated as the difference between the best fit and default values. The correlations between \ensuremath{M_A^{RES}}\xspace, $x_{1}^{CC1\pi}$, and $x^{NC1\pi^{0}}$ are given in Tab.~\ref{tab:1pi_cov}. \begin{table}[tp] \centering \caption{Parameters used in the single pion fits and their results from fitting the MiniBooNE\xspace data. Those with an arbitrary constraint applied have their $1\sigma$ penalty term shown.
\ensuremath{M_A^{RES}}\xspace, $x_{1}^{CC1\pi}$, and $x^{NC1\pi^{0}}$ fit results and their covariance are used in subsequent analyses.} \begin{tabular}{cccccc} \hline\hline & \ \ units \ \ & \ \ Nominal value \ \ & \ \ Penalty \ \ & \ \ Best fit \ \ & \ \ Error \ \ \\ \hline \ensuremath{M_A^{RES}}\xspace & \ \ \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace \ \ & 1.21 & & 1.41 & 0.22 \\ $W$ shape & \ \ \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace \ \ & 87.7 & & 42.4 & 12 \\ $x^{CCcoh\pi}$ & & 1 & & 1.423 & 0.462 \\ $x_{1}^{CC1\pi}$ & & 1 & & 1.15 & 0.32 \\ $x^{CCOth}$ & & 0 & 0.4 & 0.360 & 0.386 \\ $x^{NCcoh\pi}$ & & 1 & 0.3 & 0.994 & 0.293 \\ $x^{NC1\pi^{0}}$ & & 1 & & 0.963 & 0.330 \\ $x^{NC1\pi^{\pm}}$ & & 1 & 0.3 & 0.965 & 0.297 \\ $x^{NCOth}$ & & 1 & 0.3 & 0.987 & 0.297 \\ \hline\hline \end{tabular} \label{tab:singlepi-fitparams} \end{table} \begin{table}[tp] \centering \caption{Correlation between \ensuremath{M_A^{RES}}\xspace, $x_{1}^{CC1\pi}$, and $x^{NC1\pi^{0}}$.} \begin{tabular}{lccc} \hline\hline & \ \ \ensuremath{M_A^{RES}}\xspace \ \ & \ \ $x_{1}^{CC1\pi}$ \ \ & \ \ $x^{NC1\pi^{0}}$ \ \ \\ \hline \ensuremath{M_A^{RES}}\xspace\ \ \ \ & 1 & $-$0.26 & $-$0.30 \\ $x_{1}^{CC1\pi}$ & $-$0.26 & 1 & 0.74 \\ $x^{NC1\pi^{0}}$ & $-$0.30 & 0.74 & 1 \\ \hline\hline \end{tabular} \label{tab:1pi_cov} \end{table} \subsection{Other NEUT\xspace Model Parameters} \label{subsec:othermodelerrors} The remaining uncertainties are in the modeling of the CC resonant, CCDIS, NC resonant charged pion, CC and NC coherent pion, anti-neutrino, as well as \ensuremath{\nu_e}\xspace CCQE interactions. An additional set of energy-dependent normalization parameters is added for CCQE and CC1$\pi$ interactions. Finally, a normalization parameter for the remaining NC interactions is included. The CCOther shape parameter, $x^{CCOth}$, accounts for model uncertainties for CCDIS and resonant interactions where the resonance decays to a nucleon and photon, kaon, or eta. 
The nominal interaction model for these interactions is not modified. The uncertainty of the MINOS cross section measurement at 4\,\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace~\cite{minostotalxsec}, which is dominated by CCDIS, is approximately 10\%. Using this as a reference point, the cross section is scaled by the factor $(1+x^{CCOth}/E_{\nu})$, where $E_{\nu}$ is the neutrino energy in \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace. The nominal value of $x^{CCOth}$ is 0, with a 1$\sigma$ constraint of 0.4. Normalization parameters are included for both CC and NC coherent pion interactions, $x^{CCcoh\pi}$ and $x^{NCcoh\pi}$, respectively. The CC coherent pion cross section is assigned an error of 100\%, since at the time of this analysis only 90\% confidence upper limits existed for sub-GeV neutrino energies. In addition, when included in the MiniBooNE\xspace pion production fits, the data are consistent with the nominal NEUT\xspace model at 1$\sigma$ and with zero cross section at 2$\sigma$. The NC coherent pion production data~\cite{PhysRevD.81.111102} differ from NEUT\xspace by 15\%, within the measurement uncertainty of 20\%. To account for the difference and the uncertainty, we conservatively assign a 30\% overall uncertainty to $x^{NCcoh\pi}$. The anti-neutrino/neutrino cross section ratios are assigned an uncertainty of 40\%. This conservative estimate is derived by doubling the maximum deviation, 20\%, between the energy-dependent MiniBooNE\xspace CCQE neutrino cross section and the RFG model assuming an axial mass of $\ensuremath{M_{A}^{QE}}\xspace = 1.03$\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace. For \ensuremath{\nu_e}\xspace CCQE interactions, there may be some effects that are not accounted for in the NEUT\xspace model, such as the existence of second class currents, as motivated in Ref.~\cite{Day:2012gb}.
The dominant source of uncertainty is the vector component, which may be as large as 3\% at the T2K beam peak, and thus is assigned as an additional error on \ensuremath{\nu_e}\xspace CCQE interactions relative to \ensuremath{\nu_\mu}\xspace CCQE interactions. Table~\ref{tbl:xsecpar} shows energy-dependent normalization parameters for CCQE and CC1$\pi$ interactions which are included to account for possible discrepancies in the model as suggested, for example, by the difference between the MiniBooNE\xspace and NOMAD~\cite{Lyubushkin:2008pe} results. As mentioned above, the uncertainties for $x_{1}^{QE}$ and $x_{1}^{CC1\pi}$ are assigned from our study of MiniBooNE\xspace data. The remaining CCQE energy regions are assigned a 30\% uncertainty to account for the aforementioned discrepancy while $x_{2}^{CC1\pi}$ has a 40\% uncertainty assigned since it is necessary to extrapolate from the MiniBooNE\xspace CC1$\pi^+$\xspace inclusive measurement at 2\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace. The NCOther category consists of NCEL, NC resonant production where the resonance decays to a nucleon and kaon, eta, or photon, and NCDIS interactions. For fits to the ND280 data and \ensuremath{\nu_e}\xspace analyses at SK, resonant production that produces a nucleon and charged pion is also included in the NCOther definition, though kept separate in other analyses. NCOther interactions have a 30\% normalization error assigned to them, which is given to the parameters $x^{NCOth}$ and $x^{NC1\pi^{\pm}}$. \subsection{Alternative Models} \label{subsec:othererrors} As mentioned above, NEUT\xspace's default model for CCQE assumes an RFG for the nuclear potential and momentum distribution of the nucleons. An alternative model, referred to as the ``spectral function'' (SF)~\cite{Benhar:1994af}, appears to be a better model when compared to electron scattering data. SF is a generic term for a function that describes the momentum and energy distributions of nucleons in a nucleus. 
In the model employed in \cite{Benhar:1994af}, the SF consists of a mean-field term for single particles and a term for correlated pairs of nucleons, which leads to long tails in the nucleon momentum and binding energy distributions. It also includes the nuclear shell structure of oxygen, the main target nucleus in the T2K far detector. The difference between the RFG and SF models is treated with an additional systematic parameter. At the time of this analysis, the SF model had not been implemented in NEUT\xspace, so the NuWro generator~\cite{Juszczak:2009qa} was used for generating SF interactions with the assumption that a NEUT\xspace implementation of SF would produce similar results. The SF and RFG distributions were produced by NuWro and NEUT\xspace, respectively, for \ensuremath{\nu_\mu}\xspace and \ensuremath{\nu_e}\xspace interactions on both carbon and oxygen, while using the same vector and axial form factors. The ratio of the SF and RFG cross sections in NuWro is the weight applied to each NEUT\xspace CCQE event, according to the true lepton momentum, angle, and neutrino energy of the interaction. Overall, this weighting would change the predicted total cross section by 10\%. Since we already include in the oscillation analysis an uncertainty on the total CCQE cross section, the NuWro cross section is scaled so that at $E_{\nu}=1$\,\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace it agrees with the NEUT\xspace CCQE cross section. A parameter $x_{SF}$ is included to allow the cross section model to be linearly adjusted between the extremes of the RFG ($x_{SF}=0$) and SF ($x_{SF}=1$) models. The nominal value for $x_{SF}$ is taken to be zero, and the prior distribution for $x_{SF}$ is assumed to be a standard Gaussian (mean zero and standard deviation one) truncated to the range $[0,1]$.
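The linear RFG-to-SF adjustment and its truncated prior can be sketched as follows. The helper names are hypothetical, and the per-event SF/RFG ratio argument is a stand-in for the NuWro-derived weight tabulated in lepton momentum, angle, and neutrino energy:

```python
import random

def sf_rfg_weight(x_sf, sf_over_rfg_ratio):
    """Per-event weight interpolating linearly between the RFG extreme
    (x_sf = 0, weight 1) and the SF extreme (x_sf = 1, weight equal to
    the NuWro SF/RFG cross-section ratio for this event)."""
    return 1.0 + x_sf * (sf_over_rfg_ratio - 1.0)

def sample_x_sf_prior(rng=random):
    """Standard Gaussian prior (mean 0, sigma 1) truncated to [0, 1],
    drawn by simple rejection sampling."""
    while True:
        x = rng.gauss(0.0, 1.0)
        if 0.0 <= x <= 1.0:
            return x
```

The rejection loop keeps only draws inside $[0,1]$, which is equivalent to renormalizing the Gaussian density on that interval.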
\subsection{Summary of cross section systematic parameters} \label{sec:xsecpriors} All the cross section parameters, $\vec{x}$, are summarized in Tab.~\ref{tbl:xsecpar}, including the errors prior to the analysis of near detector data. They are categorized as follows: \begin{enumerate} \item Common between ND280 and SK; constrained by ND280 data. The parameters that are common with SK and well measured by ND280 are \ensuremath{M_{A}^{QE}}\xspace, \ensuremath{M_A^{RES}}\xspace, and some normalization parameters. \item Independent between ND280 and SK, therefore unconstrained by ND280 data. The parameters $p_F$, $E_B$ and SF are target nuclei dependent and so are independent between ND280 ($^{12}$C) and SK ($^{16}$O). \item Common between ND280 and SK, but for which ND280 data have negligible sensitivity, so no constraint is taken from ND280 data. The remaining parameters in Tab.~\ref{tbl:xsecpar} are not expected to be measured well by ND280 and therefore are treated like independent parameters. \end{enumerate} We define $\vec{x}_n$ to be the set of cross section systematic parameters which are constrained by ND280 data (category 1), to distinguish them from the remaining parameters $\vec{x}_s$ (categories 2 and 3). \begin{table}[tbp] \centering \caption{Cross section parameters $\vec{x}$ for the ND280 constraint and for the SK oscillation fits, showing the applicable range of neutrino energy, nominal value, and prior error. The category of each parameter describes the relation between ND280 and SK and is defined in Sec.~\ref{sec:xsecpriors}.
Parameters marked with an asterisk are not included in the parametrization for the appearance analysis.} \begin{tabular}{cccccc}\hline\hline \ \ Parameter \ \ & \ \ $E_{\nu}$/\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace Range \ \ & \ \ units \ \ & \ \ Nominal \ \ & \ \ Error \ \ & \ \ Category\ \ \\ \hline \ensuremath{M_{A}^{QE}}\xspace & all & \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace & 1.21 & 0.45 & 1 \\ $x_{1}^{QE}$ & $0<E_{\nu}<1.5$ & & 1.0 & 0.11 & 1 \\ $x_{2}^{QE}$ & $1.5<E_{\nu}<3.5$ & & 1.0 & 0.30 & 1 \\ $x_{3}^{QE}$ & $E_{\nu}>3.5$ & & 1.0 & 0.30 & 1 \\ $p_F$ $^{12}$C & all & \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace & 217 & 30 & 2\\ $E_B$ $^{12}$C *& all & \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace & 25 & 9 & 2\\ $p_F\ ^{16}$O & all & \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace & 225 & 30 & 2\\ $E_B\ ^{16}$O *& all & \ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace & 27 & 9 & 2\\ $x_{SF}$ for C & all & & 0 (off) & 1 (on) & 2\\ $x_{SF}$ for O & all & & 0 (off) & 1 (on) & 2\\ \ensuremath{M_A^{RES}}\xspace & all & \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace & 1.41 & 0.22 & 1\\ $x_{1}^{CC1\pi}$ & $0<E_{\nu}<2.5$ & & 1.15 & 0.32 & 1\\ $x_{2}^{CC1\pi}$ & $E_{\nu}>2.5$ & & 1.0 & 0.40 & 1 \\ $x^{NC1\pi^{0}}$ & all & & 0.96 & 0.33 & 1\\ $x^{CCcoh\pi}$ & all & & 1.0 & 1.0 & 3\\ $x^{CCOth}$ & all & & 0.0 & 0.40 & 3\\ $x^{NC1\pi^{\pm}}$ & all & & 1.0 & 0.30 & 3\\ $x^{NCcoh\pi}$ & all & & 1.0 & 0.30 & 3\\ $x^{NCOth}$ & all & & 1.0 & 0.30 & 3\\ $W$ Shape & all & \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace & 87.7 & 45.3 & 3\\ $x^{PDD}$ & all & & 1.0 & 1.0 & 3\\ CC $\nu_e$ & all & & 1.0 & 0.03 & 3\\ $\nu$/\ensuremath{\overline{\nu}}\xspace & all & & 1.0 & 0.40 & 3\\ $\vec{x}^{FSI}$ & all & & \multicolumn{2}{c}{Section~\ref{sub:piA}} & 3 \\ \hline\hline \end{tabular} \label{tbl:xsecpar} \end{table} \subsection{\label{sec:INGRID} INGRID} \subsubsection{INGRID detector} The main purpose of INGRID is to 
monitor the neutrino beam rate, profile, and center. In order to sufficiently cover the neutrino beam profile, INGRID is designed to sample the beam in a transverse section of 10\,m$\times$10\,m, with 14 identical modules arranged in two groups along the horizontal and vertical axes, as shown in Fig.~\ref{ingrid_overview}. Each of the modules consists of nine iron target plates and eleven tracking scintillator planes, each made of two layers of scintillator bars (X and Y layers). They are surrounded by veto scintillator planes to reject charged particles coming from outside of the modules. Scintillation light from each bar is collected and transported to a photo-detector with a wavelength shifting fiber (WLS fiber) inserted in a hole through the center of the bar. The light is read out by a Multi-Pixel Photon Counter (MPPC)~\cite{Yokoyama:2010qa} attached to one end of the WLS fiber. A more detailed description can be found in Ref.~\cite{Abe2012}. \begin{figure}[tbp] \begin{center} \includegraphics[width=70mm]{fig03.pdf} \caption{Overview of the INGRID viewed from beam upstream. Two separate modules are placed at off-axis positions off the main cross to monitor the asymmetry of the beam.} \label{ingrid_overview} \end{center} \end{figure} \subsubsection{Event selection} Neutrino interactions within the INGRID modules are selected by first reconstructing tracks using the X and Y layers independently with an algorithm based on a cellular automaton. Pairs of tracks in the X and Y layers with the same Z coordinates at the track ends are matched to form 3D tracks. The upstream edges of the 3D tracks in an event are compared to form a vertex. Events are rejected if the vertex is outside the fiducial volumes, the time is more than 100~ns from a beam pulse, or if there is a signal in the veto plane at the upstream position extrapolated from a track.
This analysis~\cite{Abe2014_ingrid_ccincl_paper} significantly improves upon the original method established in 2010~\cite{Abe2012}. The new track reconstruction algorithm has a higher track reconstruction efficiency and is less susceptible to MPPC dark noise. Event pileup, defined as more than one neutrino interaction occurring in a module in the same beam pulse, occurs in as many as 1.9\% of events with interactions at the current beam intensity. The new algorithm handles pileup events correctly as long as the vertices are distinguishable. For the full dataset, $4.81\times10^6$ events are selected as candidate neutrino events in INGRID. The expected purity of the neutrino events in INGRID is 99.58\%. \subsubsection{Corrections} Corrections for individual iron target masses and the background are applied in the same way as the previous INGRID analysis~\cite{Abe2012}. In addition, we apply corrections for dead channels and event pileup which can cause events to be lost. There are 18 dead channels out of 8360 channels in the 14 standard modules and the correction factor for the dead channels is estimated from a Monte Carlo simulation. The correction factor for the event pileup is estimated as a linear function of the beam intensity, since the event-pileup effect is proportional to the beam intensity. The slope of the linear function is estimated from the beam data by combining events to simulate event pileup~\cite{Abe2014_ingrid_ccincl_paper}. The inefficiency due to pileup is less than 1\% for all running periods. \subsubsection{Systematic error} Simulation and control samples are used to study potential sources of systematic error and to assign systematic uncertainties. The sources include target mass, MPPC dark noise and efficiency, event pileup, beam-induced and cosmic background, and those associated with the event selection criteria. 
The total systematic error for the selection efficiency, calculated from the quadratic sum of all the systematic errors, is 0.91\%. It corresponds to about a quarter of the 3.73\% error from the previous analysis method~\cite{Abe2012}. The reduction of the systematic error results from the analysis being less sensitive to MPPC dark noise and event pileup, the improved track reconstruction efficiency, and more realistic evaluations of systematic errors which had been conservatively estimated in the previous analysis. \subsubsection{Results of the beam measurement} Figure~\ref{ingrid_evtrate} shows the daily rates of the neutrino events normalized by POT. When the horn current was reduced to 205\,kA due to a power supply problem, the on-axis neutrino flux decreased because the forward focusing of the charged pions by the horns was weaker. The event rate increased by 2\% between Run 1 and Run 2 and decreased by 1\% during Run 4. However, for all run periods with the horns operated at 250\,kA, the neutrino event rate is found to be stable within 2\% and the RMS/mean of the event rate is 0.7\%. A Monte Carlo (MC) simulation that implements the beamline and neutrino interaction models described earlier, along with the INGRID detector simulation, is used to predict the neutrino event rate with the horns operating at 250\,kA and 205\,kA. The ratios of observed to predicted event rates, using the nominal values for the beamline and neutrino interaction systematic parameters, are: \begin{eqnarray} \frac{N^{\mathrm{data}}_{\mathrm{250kA}}}{N^{\mathrm{MC}}_{\mathrm{250kA}}}&=&1.014\pm 0.001(\mathrm{stat})\pm 0.009(\mathrm{det\ syst}),\\ \frac{N^{\mathrm{data}}_{\mathrm{205kA}}}{N^{\mathrm{MC}}_{\mathrm{205kA}}}&=&1.026\pm 0.002(\mathrm{stat})\pm 0.009(\mathrm{det\ syst}). \end{eqnarray} The uncertainties from the neutrino flux prediction and the neutrino interaction model are not included in the systematic errors.
\begin{figure*}[tbp] \begin{center} \includegraphics[width=160mm]{fig04.pdf} \caption[Daily event rate of the neutrino events normalized by protons on target.]{Daily event rate of the neutrino events normalized by protons on target. The error bars show the statistical errors. The horn current was reduced to 205\,kA for part of Run 3.} \label{ingrid_evtrate} \end{center} \end{figure*} The profiles of the neutrino beam in the horizontal and vertical directions are measured using the number of neutrino events in the seven horizontal and seven vertical modules, respectively. The observed horizontal and vertical profiles are fitted with separate Gaussian functions and the profile center is defined as the fitted peak positions. Finally, the neutrino beam direction is reconstructed as the direction from the proton beam target position to the measured profile center at INGRID using the result of accurate surveys of the proton beam target and the INGRID detectors. Figure~\ref{beam_center} shows the history of the horizontal and vertical neutrino beam directions relative to the nominal directions as measured by INGRID and by the muon monitor. The measured neutrino beam directions are stable well within the physics requirement of 1 mrad. A 1 mrad change in angle changes the intensity and peak energy of an unoscillated neutrino beam at SK by 3\% and 13~MeV, respectively. Because a misalignment in the proton beamline was adjusted in November 2010, the subsequent beam centers in the vertical direction are slightly shifted toward the center. A conservative estimate of the systematic error of the profile center is calculated by assuming that the detector systematic uncertainties for the neutrino event rate are not correlated between different INGRID modules. 
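The profile-center fit and direction reconstruction described above can be sketched as follows. This is a simplified illustration: the module positions, the $\sim$280\,m baseline, and the use of a log-parabola fit (exact for Gaussian counts) in place of a direct Gaussian fit are our assumptions.

```python
import math
import numpy as np

# Hypothetical geometry: seven module x-positions (mm) on ~1.5 m spacing,
# and a ~280 m baseline from the production target to INGRID.
MODULE_X = np.array([-4500.0, -3000.0, -1500.0, 0.0, 1500.0, 3000.0, 4500.0])
BASELINE_MM = 280.0e3

def profile_center(counts):
    """Peak of a Gaussian fit to per-module event counts.  For Gaussian
    counts, log N is a parabola in x, so a quadratic fit gives the peak
    exactly: center = -b/(2a) for log N = a*x**2 + b*x + c."""
    a, b, _ = np.polyfit(MODULE_X, np.log(counts), 2)
    return -b / (2.0 * a)

def beam_direction_mrad(counts):
    """Beam direction relative to nominal, in mrad, reconstructed as the
    angle subtended by the profile-center offset over the baseline."""
    return math.atan2(profile_center(counts), BASELINE_MM) * 1.0e3
```

With this geometry, a 30\,mm offset of the profile center corresponds to roughly a 0.1\,mrad beam direction shift, illustrating why the 1\,mrad requirement is comfortably met.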
The average horizontal and vertical beam directions are measured as \begin{eqnarray} \bar{\theta}_X^{\mathrm{beam}}&=&0.030\pm 0.011(\mathrm{stat})\pm 0.095(\mathrm{det\ syst})\ \mathrm{mrad},\\ \bar{\theta}_Y^{\mathrm{beam}}&=&0.011\pm 0.012(\mathrm{stat})\pm 0.105(\mathrm{det\ syst})\ \mathrm{mrad}, \end{eqnarray} respectively. The neutrino flux uncertainty arising from possible incorrect modeling of the beam direction is evaluated from this result. This uncertainty, when evaluated without ND280 data, is significantly reduced compared to the previous analysis, as shown in Fig.~\ref{flux_err_oa}. \begin{figure*}[tbp] \begin{center} \includegraphics[width=160mm]{fig05.pdf} \caption[History of neutrino beam directions compared with the muon beam directions.]{History of neutrino beam directions for horizontal (left) and vertical (right) directions as measured by INGRID and by the muon monitor (MUMON). The zero points of the vertical axes correspond to the nominal directions. The error bars show the statistical errors.} \label{beam_center} \end{center} \end{figure*} \begin{figure}[tbp] \begin{center} \includegraphics[width=75mm]{fig06.pdf} \caption{Fractional uncertainties of the $\nu_\mu$ flux at SK due to the beam direction uncertainty evaluated from the previous and this INGRID beam analyses. These evaluations do not include constraints from ND280.} \label{flux_err_oa} \end{center} \end{figure} The horizontal and vertical beam width measurements are given by the standard deviations of the Gaussians fit to the observed profiles. Figure~\ref{beam_width} shows the history of the horizontal and vertical beam widths with the horns operating at 250\,kA which are found to be stable within the statistical errors. 
The ratios of observed to predicted widths, using nominal values for the systematic parameters, are: \begin{eqnarray} \frac{W_X^{\mathrm{data}}}{W_X^{\mathrm{MC}}}&=&1.015\pm 0.001(\mathrm{stat})\pm 0.010(\mathrm{det\ syst}),\\ \frac{W_Y^{\mathrm{data}}}{W_Y^{\mathrm{MC}}}&=&1.013\pm 0.001(\mathrm{stat})\pm 0.011(\mathrm{det\ syst}), \end{eqnarray} for the horizontal and vertical direction, respectively. \begin{figure*}[tbp] \begin{center} \includegraphics[width=160mm]{fig07.pdf} \caption[History of neutrino beam width.]{History of neutrino beam width for horizontal (left) and vertical (right) directions for the horn 250\,kA operation. The error bars show the statistical errors.} \label{beam_width} \end{center} \end{figure*} \subsection{\label{sec:ND280} ND280} In designing the experiment, it was recognized that detailed measurements of neutrino interactions near the production target and along the direction to the far detector would be necessary to reduce uncertainty in the models of the neutrino beam and of neutrino interactions. To achieve this, the T2K collaboration chose to use a combination of highly segmented scintillator targets and gaseous trackers in a magnetic spectrometer. Segmented active targets allow for the neutrino interaction to be localized and the trajectories of the charged particles to be reconstructed, and those passing through the gaseous trackers have their charge, momentum, and particle type measured. The targets and gaseous trackers are surrounded by a calorimeter to detect photons and assist in particle identification. The refurbished UA1/NOMAD magnet was acquired and its rectangular inner volume led to a design with rectangular sub-detectors. Spaces within the yoke allowed for the installation of outer muon detectors. The following sections describe the ND280 detector, its simulation, and the analyses used as input for the T2K oscillation analyses. 
\subsubsection{ND280 detector} \label{subsec:ND280_detector} The ND280 detector is illustrated in Fig.~\ref{fig:ND280detector}, where the coordinate convention is also indicated. The $x$ and $z$ axes are in the horizontal plane and the $y$ axis is vertical. The origin is at the center of the magnet and the 0.2~T magnetic field is along the $+x$ direction. The $z$ axis is the direction to the far detector projected onto the horizontal plane. \begin{figure}[tbp] \begin{center} \includegraphics[keepaspectratio=true,width=0.48\textwidth]{fig08.pdf} \caption{Sketch of the ND280 off-axis detector in an exploded view. A supporting basket holds the $\pi^0$ detector (P0D) as well as the Time Projection Chambers (TPCs) and Fine Grained Detectors (FGDs) that make up the ND280 Tracker. Surrounding the basket is a calorimeter (ECal) and within the magnet yoke is the Side Muon Range Detector (SMRD).} \label{fig:ND280detector} \end{center} \end{figure} The analyses presented in this paper use neutrino interactions within the ND280 tracker, composed of two fine-grained scintillator bar detectors (FGDs~\cite{Amaudruz:2012pe}), used as the neutrino interaction target, sandwiched between three gaseous time projection chambers (TPCs~\cite{Abgrall:2010hi}). The most upstream FGD (FGD1) primarily consists of polystyrene scintillator bars having a square cross section, 9.6\,mm on a side, with layers oriented alternately in the $x$ and $y$ directions allowing projective tracking of charged particles. Most of the interactions in the first FGD are on carbon nuclei. The downstream FGD (FGD2) has a similar structure but the polystyrene bars are interleaved with water layers to allow for the measurement of neutrino interactions on water. The FGDs are thin enough that most of the penetrating particles produced in neutrino interactions, especially muons, pass through to the TPCs. 
Short-ranged particles such as recoil protons can be reconstructed in the FGDs, which have fine granularity so that individual particle tracks can be resolved and their directions measured. Each TPC consists of a field cage filled with Ar:CF$_4$:iC$_4$H$_{10}$ (95:3:2) inside a box filled with CO$_2$. The $+x$ and $-x$ walls of the field cages are each instrumented with 12 MicroMEGAS modules arranged in two columns. The 336\,mm $\times$ 353\,mm active area for each MicroMEGAS is segmented into 1728 rectangular pads arranged in 48 rows and 36 columns, providing 3D reconstruction of charged particles that pass through the TPCs. The curvature due to the magnetic field provides measurements of particle momenta and charges and, when combined with ionization measurements, allows for particle identification (PID). The tracker is downstream of a $\pi^0$ detector (P0D~\cite{Assylbekov201248}) and all of these detectors are surrounded by electromagnetic calorimeters (ECals~\cite{Allan:2013ofa}) and side muon range detectors (SMRDs~\cite{Aoki:2012mf}). Data quality is assessed weekly. Over the entire running period, the ND280 data taking efficiency is 98.5\%. For the analyses presented here, only data recorded with all detectors having good status are used, giving an overall efficiency of 91.5\%. \subsubsection{ND280 simulation} \label{subsec:ND280_simulation} A detailed simulation is used to interpret the data recorded by ND280. The neutrino flux model described in Sec.~\ref{sec:beam:fluxmc} is combined with the NEUT neutrino interaction model described in Sec.~\ref{subsec:nuintmodel} and a detailed material and geometrical description of the ND280 detector including the magnet, to produce a simulated sample of neutrino interactions distributed throughout the ND280 detector with the beam time structure. 
For studies of particles originating outside of the ND280 detector, separate samples are produced using a description of the concrete that forms the near detector hall and the surrounding sand. The passage of particles through materials and the ND280 detector response are modeled using the GEANT4 toolkit~\cite{Agostinelli2003250}. To simulate the scintillator detectors, including the FGDs, we use custom models of the scintillator photon yield, photon propagation including reflections and attenuation, and electronics response and noise~\cite{Vacheret:2011zza}. The gaseous TPC detector simulation includes the gas ionization, transverse and longitudinal diffusion of the electrons, transport of the electrons to the readout plane through the magnetic and electric field, gas amplification, and a parametrization of the electronics response. Imperfections in the detector response simulation can cause the model to match the detector performance poorly, potentially generating a systematic bias in parameter estimates. After describing the methods to select neutrino interactions in the following section, we quantify the systematic uncertainty due to such effects with data/simulation comparisons in Sec.~\ref{subsec:ND280_systematics}. \subsubsection{ND280 $\nu_\mu$ Tracker analysis} \label{subsec:ND280_numu} We select an inclusive sample of $\nu_\mu$ CC interactions in the ND280 detector in order to constrain parameters in our flux and cross section model. Our earlier oscillation analyses divided the inclusive sample into two: CCQE-like and the remainder. New to this analysis is the division of the inclusive sample into three sub-samples, defined by the number of final state pions: zero (CC0$\pi$-like), one positive pion (CC$1\pi^{+}$-like), and any other combination of number and charge (CCOther-like). 
This division enhances the ability to constrain the CCQE and resonant single pion cross section parameters, which, in turn, decreases the uncertainty they contribute to the oscillation analyses. The CC-inclusive selection uses the highest momentum negatively charged particle in an event as the $\mu^-$ candidate, which is required to start inside the FGD1 fiducial volume (FV) and enter the middle TPC (TPC2). The FV begins 58~mm inward from the boundaries of the FGD1 active volume in $x$ and $y$ and 21~mm inward from the upstream boundary of the FGD1 active volume in $z$, thereby excluding the first two upstream layers. The TPC requirement produces a sample of predominantly forward-going $\mu^{-}$. Additional requirements are included to reduce background in which the start of the $\mu^-$ candidate is incorrectly assigned inside the FGD1 FV, due to a failure to correctly reconstruct a particle passing through the FGD1 (through-going veto). The $\mu^-$ candidate is required to be consistent with a muon (muon PID requirement) based on a truncated mean of measurements of energy loss in the TPC gas~\cite{Abgrall:2010hi}. A similar PID has been developed for the FGD, which is not used for the muon selection, but is used in secondary particle identification~\cite{Amaudruz:2012pe}. Events passing this selection comprise the CC-inclusive sample which is then divided into three exclusive sub-samples on the basis of secondary tracks from the event vertex. The names for these samples have the ``-like'' suffix to distinguish them from the corresponding topologies that are based on truth information. Those events with no additional TPC tracks consistent with being a pion or electron and with no additional FGD tracks consistent with being a pion, nor any time-delayed signal in the FGD which is consistent with a Michel electron, comprise the CC0$\pi$-like sample.
Those events with one positive pion candidate in a TPC and no additional negative pions, electrons or positrons comprise the CC1$\pi^+$-like sample. The CCOther-like sample contains all other CC-inclusive events not in the CC0$\pi$-like or CC1$\pi^+$-like samples. In the simulation we find that the CC-inclusive sample is composed of 90.7$\%$ true $\nu_\mu$ CC interactions within the FGD fiducial volume, and 89.8$\%$ of the muon candidates are muons (the rest are mainly mis-identified negative pions). Table~\ref{tab:numberEvents_byCut} shows the number of events after each cut for data and simulation scaled to data POT, with systematic parameters set to their nominal values. \begin{table*}[tbp] \begin{center} \caption{Number of events at each cut step, for data and for simulation (scaled to data POT) for the CC-inclusive sample.} \begin{tabular}{l c c} \hline\hline Requirement \ \ \ & Data & \ \ Simulation \ \ \\ \hline $\mu^-$ candidate starts within FGD1 FV and enters TPC2 \ \ & 48731 & 47752 \\ passes through-going veto & 34804 & 36833 \\ passes muon PID requirement & 25917 & 27082 \\ \hline\hline \end{tabular} \label{tab:numberEvents_byCut} \end{center} \end{table*} Table~\ref{tab:purity_CCreaction} shows that the CC0$\pi$-like sample is significantly enhanced in CCQE interactions, the CC1$\pi^{+}$-like sample in CC resonant pion interactions, and the CCOther-like sample in CC deep inelastic scattering (DIS) interactions. This division improves the constraints on several neutrino interaction model parameters. As shown in Tab.~\ref{tab:purity_CCtopology}, the CC1$\pi^+$ true topology is the most difficult to isolate. Most of the contamination in the CC1$\pi^+$-like sample comes from deep inelastic scattering events for which only one pion is detected and any other hadrons have escaped or have been lost to interactions in the surrounding material. 
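The sub-sample logic described above can be summarized schematically. This is a deliberately simplified sketch with hypothetical inputs; in particular, treating a delayed Michel-electron tag as evidence of a below-threshold $\pi^+$ candidate is our assumption, and the real selection applies further reconstruction-level requirements.

```python
def classify_cc_event(n_pos_pion, n_neg_pion, n_electron, n_positron, has_michel):
    """Assign a CC-inclusive event to one of the three exclusive
    sub-samples based on counts of secondary-particle candidates.
    Hypothetical helper: inputs are candidate counts, not real
    reconstruction objects."""
    # Assumption for this sketch: a delayed Michel electron with no
    # tracked pi+ is taken as a pi+ below tracking threshold.
    if has_michel and n_pos_pion == 0:
        n_pos_pion = 1
    if n_pos_pion == 0 and n_neg_pion == 0 and n_electron == 0 and n_positron == 0:
        return "CC0pi-like"
    if n_pos_pion == 1 and n_neg_pion == 0 and n_electron == 0 and n_positron == 0:
        return "CC1pi+-like"
    return "CCOther-like"
```

The three branches mirror the sample definitions: no pion or electron candidates, exactly one positive pion candidate, and everything else.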
Figures~\ref{fig:ND280_mu_CC}, \ref{fig:ND280_mu_CC0pi}, \ref{fig:ND280_mu_CC1pi}, and \ref{fig:ND280_mu_CCNpi} show the distributions of the muon momentum $p_\mu$ and angle $\theta_\mu$ (with respect to the $z$-axis) for the CC-inclusive sample and each sub-sample. These are compared to the nominal simulation, broken down by true reaction type. \begin{table*}[tbp] \begin{center} \caption{Composition for the selected samples (CC-inclusive, CC0$\pi$-like, CC1$\pi^+$-like, CCOther-like) according to the reaction types.} \begin{tabular}{l c c c c} \hline\hline True Reaction & \ \ CC-inclusive \ \ & \ \ CC0$\pi$-like \ \ & \ \ CC1$\pi^+$-like \ \ & \ \ CCOther-like \ \ \\ \hline CCQE & 44.6\% & 63.3\% & 5.3\% & 3.9\% \\ Resonant pion production \ \ & 22.4\% & 20.3\% & 39.4\% & 14.2\% \\ Deep inelastic scattering & 20.6\% & 7.5\% & 31.3\% & 67.7\% \\ Coherent pion production \ \ & 2.9\% & 1.4\% & 10.6\% & 1.4\% \\ NC & 3.1\% & 1.9\% & 4.7\% & 6.8\% \\ $\overline{\nu}_{\mu}$ & 0.5\% & 0.2\% & 1.7\% & 0.9\% \\ $\ensuremath{\nu_e}\xspace$ & 0.3\% & 0.2\% & 0.4\% & 0.9\% \\ Out of FGD1 FV & 5.4\% & 5.2\% & 6.6\% & 4.1\% \\ Other & 0.05\% & 0.03\% & 0.04\% & 0.2\% \\ \hline\hline \end{tabular} \label{tab:purity_CCreaction} \end{center} \end{table*} \begin{table*}[tbp] \begin{center} \caption{Composition of the selected samples (CC-inclusive, CC0$\pi$-like, CC1$\pi^+$-like, CCOther-like) divided into the true topology types. 
The non-$\nu_\mu$ CC topology includes $\nu_e$, $\bar{\nu}_\mu$ and NC interactions.} \begin{tabular}{l c c c c} \hline\hline True Topology & \ \ CC-inclusive \ \ & \ \ CC0$\pi$-like \ \ & \ \ CC1$\pi^+$-like \ \ & \ \ CCOther-like \ \ \\ \hline CC0$\pi$ & 51.5\% & 72.4\% & 6.4\% & 5.8\% \\ CC1$\pi^+$ & 15.0\% & 8.6\% & 49.2\% & 7.8\% \\ CCOther & 24.2\% & 11.5\% & 31.0\% & 73.6\% \\ non-$\nu_\mu$ CC & 4.1\% & 2.3\% & 6.8\% & 8.7\% \\ Out of FGD1 FV \ \ & 5.2\% & 5.2\% & 6.6\% & 4.1\% \\ \hline\hline \end{tabular} \label{tab:purity_CCtopology} \end{center} \end{table*} \begin{figure*}[tbp] \centering \ifx\figstylebw \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig09a.pdf} \\ \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig09b.pdf} \else \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig09a_col.pdf} \\ \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig09b_col.pdf} \fi \caption{Muon momentum and angle distribution for the CC-inclusive sample. These are compared to the simulation, broken down into the different reaction types shown in Tab.~\ref{tab:purity_CCreaction} and where non $\nu_\mu$ CC refers to NC, $\bar{\nu}_\mu$, and $\nu_e$ interactions. All systematic parameters are set to their nominal values.} \label{fig:ND280_mu_CC} \end{figure*} \begin{figure*}[tbp] \centering \ifx\figstylebw \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig10a.pdf} \\ \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig10b.pdf} \else \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig10a_col.pdf} \\ \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig10b_col.pdf} \fi \caption{Muon momentum and angle distribution for the CC0$\pi$-like sample. 
These are compared to the simulation, broken down into the different reaction types, with all systematic parameters set to their nominal values.} \label{fig:ND280_mu_CC0pi} \end{figure*} \begin{figure*}[tbp] \centering \ifx\figstylebw \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig11a.pdf} \\ \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig11b.pdf} \else \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig11a_col.pdf} \\ \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig11b_col.pdf} \fi \caption{Muon momentum and angle distribution for the CC1$\pi^+$-like sample. These are compared to the simulation, broken down into the different reaction types, with all systematic parameters set to their nominal values.} \label{fig:ND280_mu_CC1pi} \end{figure*} \begin{figure*}[tbp] \centering \ifx\figstylebw \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig12a.pdf} \\ \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig12b.pdf} \else \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig12a_col.pdf} \\ \includegraphics[keepaspectratio=true,width=.5\textwidth]{fig12b_col.pdf} \fi \caption{Muon momentum and angle distribution for the CCOther-like sample. These are compared to the simulation, broken down into the different reaction types, with all systematic parameters set to their nominal values.} \label{fig:ND280_mu_CCNpi} \end{figure*} \subsubsection{ND280 detector systematics} \label{subsec:ND280_systematics} In this section we explain how we use control samples to assess uncertainty in the modeling of FGD and TPC response and of neutrino interactions outside of the fiducial volume of the FGD. TPC systematic uncertainties are divided into three classes: selection efficiency, momentum resolution and PID. 
The efficiency systematic uncertainty arises in the modeling of the ionization, cluster finding (where a cluster is defined as a set of contiguous pads in a row or column with charge above threshold), track finding, and charge assignment. This is assessed by looking for missed track components in control samples with particles that pass through all three TPCs. The single track-finding efficiency is determined to be (99.8$^{+0.2}_{-0.4}\%$) for data and simulation for all angles, momenta and track lengths, and shows no dependence on the number of clusters for tracks with 16 clusters or more. The inefficiency due to the overlap from a second nearly collinear track is found to be negligible for both data and simulation, so this systematic uncertainty can be ignored. The same control samples are used to evaluate the charge mis-identification systematic uncertainty. This systematic uncertainty is evaluated by comparing data and simulation of the charge mis-identification probability as a function of momentum. This is found to be less than 1$\%$ for momenta less than 5\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace. The momentum resolution is studied using particles crossing at least one FGD and two TPCs by evaluating the effect on the reconstructed momenta when the information from one of the TPCs is removed from the analysis. The inverse momentum resolution is found to be better in simulations than in data, typically by 30\%, and this difference is not fully understood. A scaling of the difference between true and reconstructed inverse momentum is applied to the simulated data to account for this. Uncertainty in the overall magnetic field strength leads to an uncertainty on the momentum scale of 0.6$\%$, which is confirmed using the range of cosmic ray particles that stop in the FGD. The TPC measurement of energy loss for PID is evaluated by studying high-purity control samples of electrons, muons and protons. 
The muon control sample has the highest statistics and is composed of particles from neutrino interactions outside the ND280 detector that pass through the entire tracker. For muons with momenta below 1\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace, the agreement between data and simulation is good, while above 1\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace the resolution is better in simulation than in data. Correction factors are applied to the simulation to take into account this effect. The performance for track finding in the FGD is studied separately for tracks which are connected to TPC tracks and tracks which are isolated in the FGD. The TPC-FGD matching efficiency is estimated from the fraction of through-going muons, in which the presence of a track in the TPC upstream and downstream of the FGD implies that a track should be seen there. The efficiency is found to be 99.9\% for momentum above 200\,\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace for both simulation and data. The FGD-only track efficiency is computed as a function of the direction of the track using a sample of stopping protons going from TPC1 to FGD1. This efficiency is found to be slightly better for data than simulation when $\cos \theta_\mu< 0.9$. A correction is applied to the simulation to account for this and the correction uncertainty is included in the overall detector uncertainty. The FGD PID performance is evaluated by comparing the energy deposited along the track with the expected energy deposit for a given particle type and reconstructed range in the FGD. We use control samples of muons and protons tagged by TPC1 and stopping in FGD1. The pull distributions (residual divided by standard error) for specific particle hypotheses (proton, muon or pion) for data and simulation are fitted with Gaussian distributions. 
To account for the differences in the means and widths of the distributions between data and simulation, corrections are applied to simulation and the correction uncertainty is included in the overall detector uncertainty. The Michel electron tagging efficiency is studied using a sample of cosmic rays that stop in FGD1 for which the delayed electron is detected. The Michel electron tagging efficiency is found to be $(61.1 \pm 1.9)\%$ for simulation and $(58.6 \pm 0.4)\%$ for data. A correction is applied to simulation and the correction uncertainty is included in the overall detector uncertainty. The uncertainty on the mass of the FGD, computed using the uncertainties in the size and density of the individual components, is 0.67\%~\cite{Amaudruz:2012pe}. There is systematic uncertainty in the modeling of interactions of pions traveling through the FGD. This is evaluated from differences between external pion interaction data~\cite{ashery:piscat,levenson:piscat,ingram:piscat,jones:piscat,giannelli:piscat,ransome:piscat,miller:piscat,nakai:piscat,navon:piscat,ashery:pioncx,rowntree:piscat,fujii:piscat} and the underlying GEANT4 simulation. The external data do not cover the whole momentum range of T2K, so some extrapolation is necessary. Incorrect modeling can migrate events between the three sub-samples, and for some ranges of momentum this produces the largest detector systematic uncertainty. An out-of-fiducial volume (OOFV) systematic is calculated by studying nine different categories of events that contribute to this background. Examples of these categories are: a high energy neutron that creates a $\pi^-$ inside the FGD that is mis-identified as a muon, a backwards-going $\pi^+$ from the barrel-ECal that is mis-reconstructed as a forward-going muon, and a through-going muon that passes completely through the FGD but for which the TPC-FGD matching fails in a way that mimics an FV event. 
Each of these categories is assigned a rate uncertainty (of 0 or 20\%) and a reconstruction-related uncertainty. The reconstruction-related uncertainty is below 40\% for all categories but one: we assign a reconstruction-related uncertainty of 150\% to the high-angle tracks category, in which matching sometimes fails to include some hits that are outside the FGD FV. An analysis of the events originating from neutrino interactions outside the ND280 detector (pit walls and surrounding sand) is performed using a dedicated simulation (sand muon simulation). The data/simulation discrepancy is about 10\% and is included as a systematic uncertainty on the predicted number of sand muon events in the CC-inclusive sample. Pileup corrections are applied to account for the inefficiency due to sand muons crossing the tracker volume in coincidence with an FV event. The correction is evaluated for each dataset separately and is always below 1.3\%; the systematic uncertainty arising from this correction is always below 0.16\%. Table~\ref{tab:syst_model} shows the full list of base detector systematic effects considered and the way each one is treated within the simulated samples to propagate the uncertainty. Normalization systematics are treated by a single weight applied to all events. Efficiency systematics are treated by applying a weight that depends on one or more observables. Finally, several systematics are treated by adjusting the observables and re-applying the selection. \begin{table*}[tbp] \begin{center} \caption{List of base detector systematic effects and the way each one is treated within the simulated samples to propagate the uncertainty. Normalization systematics are treated with a single weight applied to all events. Efficiency systematics are treated by applying a weight that depends on one or more observables. Observable variation systematics are treated by adjusting the observables and re-applying the selection. 
} \begin{tabular}{l l} \hline\hline Systematic effect & treatment \\ \hline TPC tracking efficiency & efficiency \\ TPC charge misassignment & efficiency \\ TPC momentum resolution & observable variation \\ TPC momentum scale & observable variation \\ B Field distortion & observable variation \\ TPC PID & observable variation \\ TPC-FGD matching efficiency\ \ & efficiency \\ FGD tracking efficiency & efficiency \\ FGD PID & observable variation \\ Michel electron efficiency & efficiency \\ FGD mass & normalization \\ Pion secondary int. & efficiency \\ Out of Fiducial Volume & efficiency \\ Sand muon & efficiency \\ Pileup & normalization \\ TPC track quality requirements \ \ & efficiency \\ \hline\hline \end{tabular} \label{tab:syst_model} \end{center} \end{table*} The base detector systematic effects are propagated using a vector of systematic parameters $\vec{d}$ that scale the nominal expected numbers of events in bins of $p_\mu$-$\cos\theta_\mu$ for the three selections, with the binning illustrated in Fig.~\ref{fig:ptbins}. When a base systematic parameter is adjusted, $d_i$ is the ratio of the modified to nominal expected number of events in bin $i$. The covariance of $\vec{d}$ due to the variation of each base systematic parameter is evaluated and the full covariance of $\vec{d}$, $V_d$, is found by adding the individual covariances together. This covariance, and the observed number of events in the three samples in bins of $p_\mu$-$\cos\theta_\mu$, shown in Fig.~\ref{fig:ptbins}, are used by the subsequent analyses in order to constrain neutrino flux and interaction systematic parameters. 
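The construction of $V_d$ can be summarized in a few lines. The following is a minimal numerical sketch with toy bin counts and throw sizes, not the actual propagation code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 6                          # toy number of p_mu-cos(theta_mu) bins
nominal = np.full(n_bins, 100.0)    # toy nominal expected events per bin

# One covariance per base systematic effect, estimated from throws of that
# effect; the Gaussian throw sizes below are illustrative only.
V_d = np.zeros((n_bins, n_bins))
for scale in (0.01, 0.02, 0.005):
    modified = nominal * (1.0 + scale * rng.standard_normal((500, n_bins)))
    d = modified / nominal          # d_i: modified / nominal events in bin i
    V_d += np.cov(d, rowvar=False)  # add this effect's covariance to V_d
```

The full covariance is the sum of the individual covariances, one per base systematic effect, exactly as in the procedure described above.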
\begin{figure}[tbp] \begin{center} \includegraphics[keepaspectratio=true,width=0.45\textwidth]{fig13a.pdf} \includegraphics[keepaspectratio=true,width=0.45\textwidth]{fig13b.pdf} \caption{The $p_\mu$-$\cos\theta_\mu$ binning for the systematic parameters $\vec{d}$ that propagate the base detector systematic effects is shown in the left figure for the three event selections. The binning for the observed number of events is shown in the right figure. For the CC1$\pi^+$-like sample, the bin division at $p_\mu=3.0$\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace is not used.} \label{fig:ptbins} \end{center} \end{figure} \section{\label{sec:ND} Near Detectors} Precision neutrino oscillation measurements require good understanding of the neutrino beam properties and of neutrino interactions. The two previous sections describe how we model these aspects for the T2K experiment and how we use external data to reduce model uncertainty. However, if only external data were used, the resulting systematic uncertainty would limit the precision for oscillation analyses. In order to reduce systematic uncertainty below the statistical uncertainty for the experiment, an underground hall was constructed 280~m downstream of the production target for near detectors to directly measure the neutrino beam properties and neutrino interactions. The hall contains the on-axis INGRID detector, a set of modules with sufficient target mass and transverse extent to continuously monitor the interaction rate, beam direction, and profile, and the off-axis ND280 detector, a sophisticated set of sub-detectors that measure neutrino interaction products in detail. This section describes the INGRID and ND280 detectors and the methods used to select high purity samples of neutrino interactions. The observed neutrino interaction rates and distributions are compared to the predictions using the beamline and interaction models, with nominal values for the systematic parameters. 
Section~\ref{sec:BANFF} describes how ND280 data are used to improve the systematic parameter estimates and compares the adjusted model predictions with the ND280 measurements. \input{04a-INGRID.tex} \input{04b-ND280.tex} \section{\label{sec:BANFF} Near Detector Analysis} In this section we explain how we use the large and detailed samples from ND280 in conjunction with models for the beam, neutrino interactions, and the ND280 detector to improve our predictions of the flux at SK and some cross section parameters. The systematic parameters for the beam model ($\vec{b}$), binned in energy as shown in Fig.~\ref{fig:flux_at_sk}, the cross section model ($\vec{x}$), listed in Tab.~\ref{tbl:xsecpar}, and detector model ($\vec{d}$), illustrated in Fig.~\ref{fig:ptbins}, are used to describe the systematic uncertainties in the analysis. We use the three \ensuremath{\nu_\mu}\xspace CC samples described in Sec.~\ref{sec:ND280} and external data discussed in Sec.~\ref{subsec:externalconstraints} and summarize our knowledge of the neutrino cross section parameters and unoscillated neutrino flux parameters with a covariance matrix, assuming that a multivariate Gaussian is an appropriate description. 
\subsection{ND280 Likelihood} The three \ensuremath{\nu_\mu}\xspace CC samples are binned in the kinematic variables $p_{\mu}$ and $\cos\theta_{\mu}$, as shown in Fig.~\ref{fig:ptbins}, and the observed and predicted number of events in the bins are used to define the likelihood, \begin{equation} \begin{split} \mathcal{L}(\vec{b},\vec{x},\vec{d}) =& \prod_i^{N_{bins}} p\left(N^d_{i}|N^{p}_{i}(\vec{b},\vec{x},\vec{d})\right)\\ =& \ c \prod_i^{N_{bins}} \left(N^{p}_{i}(\vec{b},\vec{x},\vec{d})\right)^{N^d_i} e^{-N^{p}_{i}(\vec{b},\vec{x},\vec{d})}\\ \label{eq:Lratio} \end{split} \end{equation} where $N^p_{i}$ is the number of unoscillated MC predicted events and $N^d_i$ is the number of data events in the $i$th bin of the CC samples, the second line assumes the Poisson distribution, and $c$ is a constant. The number of MC predicted events, $N^p_{i}(\vec{b},\vec{x},\vec{d})$, is a function of the underlying beam flux $\vec{b}$, cross section $\vec{x}$, and detector $\vec{d}$ parameters, and these parameters are constrained by external data as described in the previous sections. We model these constraints as multivariate Gaussian likelihood functions and use the product of the above defined likelihood and the constraining likelihood functions as the total likelihood for the near detector analysis. This total likelihood is maximized to estimate the systematic parameters and evaluate their covariance. In practice, the quantity $-2\ln\mathcal{L}_{total}$ is minimized. 
Explicitly, this quantity is: \begin{equation} \begin{split} & -2\ln\mathcal{L}_{total} = {\mathrm{constant\ }} +\\ & 2 \sum_{i=1}^{N_{bins}}\left[N^{p}_{i}(\vec{b},\vec{x},\vec{d})-N^{d}_{i}\ln N^{p}_{i}(\vec{b},\vec{x},\vec{d})\right] \\ & +\sum_{i=1}^{N_{b}}\sum_{j=1}^{N_{b}}(b^0_{i}-b_{i})(V_{b}^{-1})_{i,j}(b^0_{j}-b_{j}) \\ & +\sum_{i=1}^{N_{x}}\sum_{j=1}^{N_{x}}(x^{0}_{i}-x_{i})(V^{-1}_{x})_{i,j}(x^{0}_j-x_{j}) \\ & +\sum_{i=1}^{N_{d}}\sum_{j=1}^{N_{d}}(d^0_{i}-d_{i})(V^{-1}_{d})_{i,j}(d^0_{j}-d_{j}) \\ \end{split} \label{eq:Ltotal} \end{equation} where $\vec{b}^0$, $\vec{x}^0$, and $\vec{d}^0$ are the nominal values (best estimates prior to the ND280 analysis) and $V_{b}$, $V_{x}$, and $V_{d}$ are the covariance matrices of the beam, cross section, and detector systematic parameters. \subsection{Fitting methods} A reference Monte Carlo sample of ND280 events is generated using the models described in the previous sections and the nominal values for the systematic parameters. Predicted distributions for adjusted values of the systematic parameters are calculated by weighting each event of the Monte Carlo sample individually. For the flux parameters, the true energy and flavor of each MC event determine the normalization weight appropriate for that event. For the detector parameters, the reconstructed momentum and angle of the muon candidate are used. For cross section scaling parameters (e.g., $x^{QE}_1$), weights are applied according to the true interaction mode and true energy. For other cross section parameters (e.g., $M_{A}^{QE}$), including the FSI parameters, the ratio of the adjusted cross section to the nominal cross section (calculated as a function of the true energy, interaction type, and lepton kinematics) is used to weight the event. The FSI parameters are constrained by a covariance matrix constructed by using representative points on the 1-$\sigma$ surface for the parameters in Table~\ref{tab:fsi_parsets}. 
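The quantity $-2\ln\mathcal{L}_{total}$ translates directly into code; the following is a minimal sketch with toy inputs (in the analysis itself the prediction is recomputed from the parameters at every evaluation, and the minimization is done with MINUIT rather than this standalone function):

```python
import numpy as np

def neg2_log_L_total(N_pred, N_data, params, priors, inv_covs):
    """-2 ln L_total up to a constant: a Poisson term over the analysis
    bins plus one Gaussian penalty term per parameter block (b, x, d)."""
    total = 2.0 * np.sum(N_pred - N_data * np.log(N_pred))
    for p, p0, V_inv in zip(params, priors, inv_covs):
        diff = p0 - p
        total += diff @ V_inv @ diff   # (p0 - p)^T V^-1 (p0 - p)
    return total
```

Shifting any parameter block away from its prior, or the prediction away from the data, increases this quantity, which is why its minimum provides the point estimates of the systematic parameters.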
The fit is performed by minimizing $-2\ln\mathcal{L}_{total}$ using MINUIT~\cite{James:1975dr}. Parameters not of interest to the oscillation analyses (e.g. ND280 detector systematic uncertainties) are treated as nuisance parameters. \subsection{Results} The result of this analysis is a set of point estimates ($\vec{g}$) and covariance ($V_g$) for the systematic scaling factors for the unoscillated neutrino flux at SK in bins of energy and flavor ($\vec{b}_{s}$) and the cross section parameters which are constrained by ND280 data ($\vec{x}_{n}$). Figures~\ref{fig:BANFFCC0pi},~\ref{fig:BANFFCC1pi}, and~\ref{fig:BANFFCCOth} show the projected kinematic variable distributions of the three ND280 samples used in this analysis, comparing the data to the MC prediction for the two cases of using nominal values of the systematic parameters and using the best-fit values of the parameters. The MC distributions show better agreement with the data when using the best-fit values for the parameters, especially decreasing the prediction near the momentum peak and in the forward direction ($\cos\theta_{\mu}$ close to 1). 
\begin{figure}[tbp] \centering \includegraphics[width=0.5\textwidth]{fig14.pdf} \caption{Comparison of the data and Monte Carlo distributions for muon momentum (top) and angle (bottom) in the CC0$\pi$-like sample, using the nominal and fitted values for the systematic parameters.} \label{fig:BANFFCC0pi} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.5\textwidth]{fig15.pdf} \caption{Comparison of the data and Monte Carlo distributions for muon momentum (top) and angle (bottom) in the CC1$\pi^+$-like sample, using the nominal and fitted values for the systematic parameters.} \label{fig:BANFFCC1pi} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.5\textwidth]{fig16.pdf} \caption{Comparison of the data and Monte Carlo distributions for muon momentum (top) and angle (bottom) in the CCOther-like sample, using the nominal and fitted values for the systematic parameters.} \label{fig:BANFFCCOth} \end{figure} Figure~\ref{fig:BANFFuncertainties} shows the values of the \ensuremath{\nu_\mu}\xspace flux and cross section parameters that are constrained by the near detector analysis for the oscillation analyses; Table~\ref{tab:propagatedparameters} lists the flux parameters and Table~\ref{tab:propagatedparametersxsec} lists the values of the cross section parameters. These tables contain all of the point estimates in $\vec{g}$ as well as the errors calculated as the square root of the diagonal of the covariance $V_g$. One of the interesting features of the best-fit parameters is the dip in the flux parameters just below 1~GeV, which is near the peak of the T2K beam flux. This is particularly important, as this is the region of interest for oscillation analyses, and an incorrect prediction of the flux in this region can bias estimates of oscillation parameters. Another interesting point is the value of $M_{A}^{RES}$, which is pulled to a much lower value than the external data constraint used in the fit. 
This highlights both the power of the ND280 data and the importance of the CC1$\pi^{+}$-like sample, which is dominant in determining this parameter. This selection is new to the ND280 analysis for the set of oscillation analyses reported in this paper, and provides an improved ability to use T2K data to constrain resonant interaction parameters. The predicted event rate at SK is given by the product of the flux, cross section, and detector efficiency, and the typical uncertainties of the flux and cross section parameters constrained by ND280 are 7--10\%. The estimators of these flux and cross section parameters have a strong negative correlation, however, because they use the rate measurements in the near detector. As a result, their contribution to the SK event rate uncertainty is less than 3\%, significantly smaller than the individual flux and cross section parameter uncertainties. A cross-check of this analysis, performed by studying a selection of electron neutrino interactions in ND280~\cite{intrinsicnumeasurement2014}, finds that the relative rate of selected electron neutrino events to that predicted by MC using the best-fit parameter values from this analysis is $R(\ensuremath{\nu_e}\xspace) = 1.01\pm0.10$. \begin{figure*}[tbp] \begin{center} \includegraphics[width=0.6\textwidth]{fig17a.pdf} \includegraphics[width=0.6\textwidth]{fig17b.pdf} \caption{Prior and fitted values and uncertainties for the SK $\nu_{\mu}$ flux parameters (upper figure) and cross section parameters (lower figure) constrained by the near detector analysis for the oscillation analyses. Uncertainties are calculated as the square root of the diagonal of the relevant covariance matrix. The values of $M_{A}^{QE}$ and $M_{A}^{RES}$ are given in units of \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace, and all other parameters are multiplicative corrections. 
} \label{fig:BANFFuncertainties} \end{center} \end{figure*} \begin{table}[tbp] \centering \caption{Prior and fitted values and uncertainties for the near-detector-constrained SK flux parameters. All parameters are multiplicative corrections, and the uncertainties are calculated as the square root of the diagonal of the covariance matrix.} \begin{tabular}{l c c} \hline\hline Parameter & \ \ Prior Value \ \ & \ \ Fitted Value\ \ \\ \hline \ensuremath{\nu_\mu}\xspace 0.0--0.4 GeV\ \ & 1.00$\pm$0.12&1.03$\pm$0.09\\ \ensuremath{\nu_\mu}\xspace 0.4--0.5 GeV&1.00$\pm$0.13&1.02$\pm$0.09\\ \ensuremath{\nu_\mu}\xspace 0.5--0.6 GeV&1.00$\pm$0.12&0.99$\pm$0.08\\ \ensuremath{\nu_\mu}\xspace 0.6--0.7 GeV&1.00$\pm$0.11&0.97$\pm$0.08\\ \ensuremath{\nu_\mu}\xspace 0.7--1.0 GeV&1.00$\pm$0.13&0.93$\pm$0.08\\ \ensuremath{\nu_\mu}\xspace 1.0--1.5 GeV&1.00$\pm$0.12&0.99$\pm$0.08\\ \ensuremath{\nu_\mu}\xspace 1.5--2.5 GeV&1.00$\pm$0.10&1.04$\pm$0.07\\ \ensuremath{\nu_\mu}\xspace 2.5--3.5 GeV&1.00$\pm$0.09&1.05$\pm$0.06\\ \ensuremath{\nu_\mu}\xspace 3.5--5.0 GeV&1.00$\pm$0.11&1.03$\pm$0.07\\ \ensuremath{\nu_\mu}\xspace 5.0--7.0 GeV&1.00$\pm$0.15&0.98$\pm$0.07\\ \ensuremath{\nu_\mu}\xspace $>$7.0 GeV&1.00$\pm$0.19&0.94$\pm$0.08\\ \hline \ensuremath{\nub_\mu}\xspace 0.0--0.7 GeV&1.00$\pm$0.13&1.03$\pm$0.10\\ \ensuremath{\nub_\mu}\xspace 0.7--1.0 GeV&1.00$\pm$0.12&1.01$\pm$0.09\\ \ensuremath{\nub_\mu}\xspace 1.0--1.5 GeV&1.00$\pm$0.12&1.01$\pm$0.09\\ \ensuremath{\nub_\mu}\xspace 1.5--2.5 GeV&1.00$\pm$0.12&1.03$\pm$0.10\\ \ensuremath{\nub_\mu}\xspace $>$2.5 GeV&1.00$\pm$0.12&1.01$\pm$0.11\\ \hline \ensuremath{\nu_e}\xspace 0.0--0.5 GeV&1.00$\pm$0.13&1.03$\pm$0.10\\ \ensuremath{\nu_e}\xspace 0.5--0.7 GeV&1.00$\pm$0.13&1.01$\pm$0.09\\ \ensuremath{\nu_e}\xspace 0.7--0.8 GeV&1.00$\pm$0.14&0.98$\pm$0.11\\ \ensuremath{\nu_e}\xspace 0.8--1.5 GeV&1.00$\pm$0.11&1.00$\pm$0.07\\ \ensuremath{\nu_e}\xspace 1.5--2.5 GeV&1.00$\pm$0.10&1.02$\pm$0.07\\ \ensuremath{\nu_e}\xspace 2.5--4.0 
GeV&1.00$\pm$0.12&1.00$\pm$0.07\\ \ensuremath{\nu_e}\xspace $>$4.0 GeV&1.00$\pm$0.17&0.95$\pm$0.08\\ \hline \ensuremath{\nub_e}\xspace 0.0--2.5 GeV&1.00$\pm$0.19&1.01$\pm$0.18\\ \ensuremath{\nub_e}\xspace $>$2.5 GeV&1.00$\pm$0.14&0.96$\pm$0.08\\ \hline\hline \end{tabular} \label{tab:propagatedparameters} \end{table} \begin{table}[tbp] \centering \caption{Prior and fitted values and uncertainties for the near-detector-constrained cross section model parameters. The value of $M_{A}^{QE}$ and $M_{A}^{RES}$ are given in units of \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace and all other parameters are multiplicative corrections. The uncertainties are calculated as the square root of the diagonal of the covariance matrix.} \begin{tabular}{l c c c} \hline\hline Parameter & \ \ units \ \ & \ \ Prior Value \ \ & \ \ Fitted Value \ \ \\ \hline $M_{A}^{QE}$ & \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace &1.21$\pm$0.45&1.24$\pm$0.07\\ $M_{A}^{RES}$ & \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace & 1.41$\pm$0.22&0.96$\pm$0.07\\ $x^{QE}_1$ & & 1.00$\pm$0.11&0.97$\pm$0.08\\ $ x^{QE}_2$ & &1.00$\pm$0.30&0.93$\pm$0.10\\ $x^{QE}_3$ & &1.00$\pm$0.30&0.85$\pm$0.11\\ $x^{CC1\pi}_1$ & &1.15$\pm$0.32&1.26$\pm$0.16\\ $x^{CC1\pi}_2$ & &1.00$\pm$0.40&1.12$\pm$0.17\\ $x^{NC\pi^0}$ & &0.96$\pm$0.33&1.14$\pm$0.25\\ \hline\hline \end{tabular} \label{tab:propagatedparametersxsec} \end{table} \section{\label{sec:SK}Far Detector} Precision measurements of neutrino oscillation by T2K rely on the capabilities of the far detector, most notably, its large target volume and acceptance and efficient discrimination between the primary leptons produced in $\nu_{\mu}$ and $\nu_{e}$ CC interactions. Additionally, since CCQE scattering interactions are expected to dominate at the energies below 1 GeV, accurate reconstruction of the parent neutrino energy is reliant upon accurate estimation of the lepton kinematics. 
Finally, the suppression of backgrounds, particularly those from NC and single-pion production processes, is needed. Here we discuss the performance of SK in this context, focusing on the event selections and the estimation of systematic uncertainties in the modeling of SK. Super-Kamiokande is a 50 kton water Cherenkov detector located in the Kamioka Observatory, Gifu, Japan. It is divided into two concentric cylinders, an inner detector (ID) with 11,129 inward-facing 20-inch photomultiplier tubes (PMTs) and an outer detector (OD), used primarily as a veto, which has 1885 outward-facing eight-inch PMTs. The ID PMTs view a 32 kton target volume and the OD collects light within a 2-m wide cylindrical shell surrounding the ID. The photocathode coverage of the ID is 40\% and the space between PMTs is covered with a black plastic sheet to reduce reflection. To overcome its reduced photocathode coverage, reflective Tyvek$^\circledR$ lines the inner and outer surfaces of the OD and each PMT is coupled to a $60 \times 60 \mbox{\ cm}^{2}$ wavelength-shifting plate to improve light collection. Cherenkov radiation from charged particles traversing the detector produces ring patterns recorded by the ID PMTs and is the primary tool for particle identification (PID). Due to their relatively large mass, muons passing through the detector are often unscattered and thereby produce clear ring patterns. Electrons, in contrast, scatter and produce electromagnetic showers, resulting in a diffuse ring edge. These differences in conjunction with estimation of the Cherenkov opening angle enable efficient discrimination between leptons. The probabilities to misidentify a single electron as a muon or a single muon as an electron are 0.7\% and 0.8\%, respectively, for typical lepton energies in T2K events. Since the recoil proton from CC interactions at T2K is usually below Cherenkov threshold, a single lepton is the dominant topology for beam-induced events at SK. 
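Because a T2K beam event at SK typically consists of the lepton alone, the neutrino energy is reconstructed from the lepton kinematics under a quasi-elastic hypothesis. A sketch of the standard two-body formula follows; the binding-energy value used here is an illustrative assumption, not the value used in the analysis:

```python
import math

M_P, M_N, M_MU = 938.272, 939.565, 105.658  # masses in MeV/c^2
E_B = 27.0  # effective nucleon binding energy in MeV (illustrative value)

def e_nu_ccqe(p_mu, cos_theta):
    """Reconstructed neutrino energy in MeV from the muon momentum (MeV/c)
    and scattering angle, assuming CCQE scattering on a bound neutron at
    rest."""
    e_mu = math.hypot(p_mu, M_MU)   # muon total energy
    m_eff = M_N - E_B               # effective target nucleon mass
    num = M_P**2 - m_eff**2 - M_MU**2 + 2.0 * m_eff * e_mu
    den = 2.0 * (m_eff - e_mu + p_mu * cos_theta)
    return num / den
```

This is why accurate estimation of the lepton momentum and angle, discussed above, directly controls the accuracy of the reconstructed neutrino energy.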
For such isolated electrons (muons) the momentum and angular resolutions are estimated to be $0.6\% + 2.6\%/\sqrt{P[{\mathrm{GeV/c}}]}$ $(1.7\% + 0.7\%/\sqrt{P[{\mathrm{GeV/c}}]})$ and $3.0^\circ$ $(1.8^\circ)$, respectively. Since the start of T2K, SK has operated with upgraded electronics which provide lossless acquisition of all PMT hits above threshold. As a result the efficiency for tagging electrons from muon decays within the ID is $89.1\%$, an essential element of removing backgrounds containing sub-threshold muons or charged pions. Further details of the detector and its calibration may be found in~\cite{Fukuda:2002uc,Abe:2011ks,Abe:2013gga}. Due to its large size, SK observes roughly ten atmospheric neutrino interactions per day within its fiducial volume. These neutrinos serve as control samples for the estimation of systematic errors. Similarly, although the detector is located at a depth of 2700 meters water equivalent, cosmic ray muons traverse the detector at approximately $3\mbox{\ Hz}$ and together with their decay electrons provide an additional sample for systematic error evaluation. Details of these and other control samples are presented in the following subsections. \subsection{Event Selection and Data Quality} \label{sec:SK_event_selection} \input{06a-SK_event_selection.tex} \subsection{$\pi^0$ Rejection with the New Event Reconstruction Algorithm} \label{sec:SK_fiTQun} \input{06b-SK_fiTQun.tex} \subsection{Systematic uncertainty} \label{sec:SK_detector_errors} \input{06c-SK_detector_errors.tex} \section{\label{sec:OA} Oscillation model and parameter estimation} The previous sections have described the T2K experiment and the way we model all elements of the experiment and neutrino interactions which are necessary to interpret our data, and how we use internal and external data to improve our models. In this section, we turn our attention to general aspects of estimating neutrino oscillation parameters from our data. 
The oscillation model is given along with the predictions for the probability for muon neutrino disappearance and electron neutrino appearance, the key observables for our experiment. We explain how we use external data for some of the oscillation parameters and the general approaches we use to estimate the remaining parameters. Finally, we characterize the importance of the different sources of systematic uncertainty. Sections~\ref{sec:numu}--\ref{sec:jointbayes} describe the individual analyses and their results in detail.

\subsection{\label{sec:OA:oscmodel}Oscillation model}

The Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, $U$, defines the mixture of the mass eigenstates ($\nu_1$, $\nu_2$, and $\nu_3$) that make up each flavor state:
\begin{equation}
\left( \begin{array}{c} \nu_{e} \\ \nu_{\mu} \\ \nu_{\tau} \end{array} \right)
= {U}
\left( \begin{array}{c} \nu_{1} \\ \nu_{2} \\ \nu_{3} \end{array} \right)
\end{equation}
and it has become standard to parametrize this matrix, ignoring the Majorana phases, as:
\begin{equation}
\begin{split}
&U =
\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & c_{23} & s_{23} \\ 0 & -s_{23} & c_{23} \end{array} \right)
\left( \begin{array}{ccc} c_{13} & 0 & s_{13}e^{-i\delta} \\ 0 & 1 & 0 \\ -s_{13}e^{i\delta} & 0 & c_{13} \end{array} \right)
\left( \begin{array}{ccc} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{array} \right)
\label{eq:PMNSmatrix}
\end{split}
\end{equation}
where $s_{ij} = \sin\theta_{ij}$, $c_{ij} = \cos\theta_{ij}$, and $\delta = \ensuremath{\delta_{CP}}$ is the CP-violating phase.
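As an illustrative numerical cross-check of this parametrization (not part of the analysis code), the matrix product in Eq.~(\ref{eq:PMNSmatrix}) can be evaluated directly; the angles used below are arbitrary:

```python
import numpy as np

def pmns_matrix(th12, th13, th23, delta):
    """Product of the three rotation matrices of the PMNS
    parametrization (Majorana phases ignored)."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    R23 = np.array([[1, 0, 0],
                    [0, c23, s23],
                    [0, -s23, c23]], dtype=complex)
    R13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta), 0, c13]], dtype=complex)
    R12 = np.array([[c12, s12, 0],
                    [-s12, c12, 0],
                    [0, 0, 1]], dtype=complex)
    return R23 @ R13 @ R12

# arbitrary illustrative angles (radians); U must be unitary
U = pmns_matrix(0.59, 0.16, 0.79, 1.0)
```

Unitarity of the product ($UU^\dagger = I$) is a quick sanity check that the three factors were entered consistently.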
The \ensuremath{\nu_\mu}\xspace-survival probability for a neutrino with energy $E$ traveling a distance $L$ is: \begin{equation} \begin{array}{l} P(\nu_{\mu}\rightarrow\nu_{\mu}) = \\ 1 -4\left( s_{12}^{2} c_{23}^{2} + s_{13}^{2} s_{23}^{2} c_{12}^{2} + 2 s_{12}s_{13}s_{23} c_{12}c_{23}\cos\delta\right)s_{23}^{2}c_{13}^{2}\sin^{2}\phi_{31}\\ - 4\left( c_{12}^{2} c_{23}^{2} + s_{13}^{2} s_{23}^{2} s_{12}^{2} - 2 s_{12}s_{13}s_{23} c_{12}c_{23}\cos\delta\right)s_{23}^{2}c_{13}^{2}\sin^{2}\phi_{32}\\ -4\left( s_{12}^{2} c_{23}^{2} + s_{13}^{2} s_{23}^{2} c_{12}^{2} + 2 s_{12}s_{13}s_{23} c_{12}c_{23}\cos\delta\right)\left( c_{12}^{2} c_{23}^{2} + s_{13}^{2} s_{23}^{2} s_{12}^{2} - 2 s_{12}s_{13}s_{23} c_{12}c_{23}\cos\delta\right) \sin^{2}\phi_{21} \end{array} \label{eq:numusurv3f} \end{equation} where \begin{equation} \phi_{ij} = \frac{\Delta m_{ij}^{2} L}{4E} \label{eq:phiDef} \end{equation} in natural units and $\Delta m_{ij}^{2} = m_i^2 - m_j^2$ is the difference in the squares of masses of eigenstates. 
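A minimal numerical sketch of Eq.~(\ref{eq:numusurv3f}), assuming the T2K baseline of 295~km and the standard conversion $\phi_{ij} \simeq 1.267\,\Delta m_{ij}^2[\mathrm{eV^2}]\,L[\mathrm{km}]/E[\mathrm{GeV}]$; the parameter values below are illustrative only:

```python
import math

def phi(dm2, L_km, E_GeV):
    # phi_ij = Dm2_ij L / 4E; 1.267 converts eV^2 * km / GeV to radians
    return 1.267 * dm2 * L_km / E_GeV

def p_numu_survival(th12, th13, th23, delta, dm2_21, dm2_32, L_km, E_GeV):
    """Three-flavor nu_mu survival probability in vacuum, term by term
    as in the expression above."""
    s12, c12 = math.sin(th12), math.cos(th12)
    s13, c13 = math.sin(th13), math.cos(th13)
    s23, c23 = math.sin(th23), math.cos(th23)
    dm2_31 = dm2_32 + dm2_21
    cosd = math.cos(delta)
    A = s12**2 * c23**2 + s13**2 * s23**2 * c12**2 + 2 * s12 * s13 * s23 * c12 * c23 * cosd
    B = c12**2 * c23**2 + s13**2 * s23**2 * s12**2 - 2 * s12 * s13 * s23 * c12 * c23 * cosd
    return (1.0
            - 4 * A * s23**2 * c13**2 * math.sin(phi(dm2_31, L_km, E_GeV))**2
            - 4 * B * s23**2 * c13**2 * math.sin(phi(dm2_32, L_km, E_GeV))**2
            - 4 * A * B * math.sin(phi(dm2_21, L_km, E_GeV))**2)

# near maximal mixing, the survival probability at the first oscillation
# maximum (E ~ 0.6 GeV for L = 295 km) is strongly suppressed
p = p_numu_survival(math.asin(math.sqrt(0.306)), math.asin(math.sqrt(0.0243)),
                    math.pi / 4, 0.0, 7.5e-5, 2.4e-3, 295.0, 0.6)
```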
The $\ensuremath{\nu_e}\xspace$-appearance probability, to first order in the matter effect, can be written as:
\begin{equation}
\begin{array}{l}
P(\nu_{\mu}\rightarrow\nu_{e}) = \\
4c^{2}_{13}s^{2}_{13}s^{2}_{23}\sin^{2}\phi_{31}\left(1+\frac{2a}{\Delta m^{2}_{31}}(1-2s^2_{13})\right)\\
+8c^{2}_{13}s_{12}s_{13}s_{23}\left(c_{12}c_{23}\cos\delta-s_{12}s_{13}s_{23}\right)\cos\phi_{32}\sin\phi_{31}\sin\phi_{21}\\
-8c^{2}_{13}c_{12}c_{23}s_{12}s_{13}s_{23}\sin\delta\sin\phi_{32}\sin\phi_{31}\sin\phi_{21}\\
+4s^{2}_{12}c^{2}_{13}\left(c^{2}_{12}c^{2}_{23}+s^{2}_{12}s^{2}_{23}s^{2}_{13}-2c_{12}c_{23}s_{12}s_{23}s_{13}\cos\delta\right)\sin^{2}\phi_{21}\\
-8c^{2}_{13}s^{2}_{13}s^{2}_{23}\left(1-2s^{2}_{13}\right)\frac{aL}{4E_{\nu}}\cos\phi_{32}\sin\phi_{31}
\end{array}
\end{equation}
The effect on the oscillation probability of the density, $\rho$, of matter through which the neutrinos travel enters through the terms containing $a$, where $a[\ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace]=7.56\times10^{-5}\, \rho[$g/cm$^3]\, E_{\nu}[$GeV$]$. The corresponding $\ensuremath{\nub_e}\xspace$-appearance probability is calculated by changing the sign of $a$ and $\ensuremath{\delta_{CP}}$. Our analyses use the complete formulas, without approximating matter effects, to compute the oscillation probabilities. Since the neutrino mass hierarchy (MH) is not yet known, we parametrize the large mass splitting by $\ensuremath{|\Delta m^{2}|\xspace}=\ensuremath{\Delta m^{2}_{32}\xspace}$ for normal hierarchy (NH, where $m_3$ is the largest mass) and $\ensuremath{|\Delta m^{2}|\xspace}=\ensuremath{\Delta m^{2}_{13}\xspace}$ for inverted hierarchy (IH, where $m_3$ is the smallest mass). It is not possible to estimate all of the oscillation parameters using only our measurements of $\ensuremath{\nu_\mu}\xspace$-disappearance and $\ensuremath{\nu_e}\xspace$-appearance.
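For orientation, the size of the appearance signal is set largely by the first (atmospheric) term; a sketch evaluating only that term, in vacuum and with illustrative parameter values, gives a probability of a few percent at the T2K baseline:

```python
import math

def p_nue_appearance_leading(th13, th23, dm2_31, L_km, E_GeV):
    """Leading term 4 c13^2 s13^2 s23^2 sin^2(phi31) only; matter,
    solar, and CP-interference terms are neglected here."""
    s13, c13 = math.sin(th13), math.cos(th13)
    s23 = math.sin(th23)
    phi31 = 1.267 * dm2_31 * L_km / E_GeV
    return 4 * c13**2 * s13**2 * s23**2 * math.sin(phi31)**2

# illustrative values: sin^2(theta13) = 0.0243, maximal theta23
p = p_nue_appearance_leading(math.asin(math.sqrt(0.0243)), math.pi / 4,
                             2.5e-3, 295.0, 0.6)
```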
Instead, we estimate the four oscillation parameters, \ensuremath{|\Delta m^{2}|\xspace}, \stt, \sot, \ensuremath{\delta_{CP}}, and the mass hierarchy, and use external measurements for the solar oscillation parameters, \ensuremath{\textrm{sin}^2\theta_{12}}\xspace and \dmsqso, as we have negligible sensitivity to those. Figure~\ref{fig:oscprob} illustrates how our key observables depend on the two parameters, \stt\ and \ensuremath{\delta_{CP}}, for the two mass hierarchies. In this figure the neutrino energy is at the oscillation maximum (0.6~GeV), and the other oscillation parameters are fixed (solar parameters as established in Sec.~\ref{sec:OA:osc} and $\sot = 0.0243$). To a good approximation, with our current dataset, $\ensuremath{\nu_\mu}\xspace$-disappearance can be treated on its own to estimate $\theta_{23}$. The dependence of the $\ensuremath{\nu_e}\xspace$-appearance probability on the oscillation parameters cannot be factorized in this way, however. In order to estimate the full set of oscillation parameters and properly account for all uncertainties, it is necessary to do a joint analysis of $\ensuremath{\nu_\mu}\xspace$-disappearance and $\ensuremath{\nu_e}\xspace$-appearance.

\begin{figure}[tbp] \begin{center} \includegraphics[width=0.75\textwidth]{fig25.pdf} \caption{The $P(\nu_{\mu}\rightarrow\nu_{\mu})$ survival probability and $P(\nu_{\mu}\rightarrow\nu_{e})$ appearance probability for different values of $\stt$ and for $\ensuremath{\delta_{CP}}$ in the interval $[-\pi,\pi]$ for normal (solid) and inverted (dashed) mass hierarchy. The highlighted dot on each ellipse is the point for $\ensuremath{\delta_{CP}}=0$ and $\ensuremath{\delta_{CP}}$ increases clockwise (anti-clockwise) for normal (inverted) mass hierarchy. The other oscillation parameter values are fixed (solar parameters as established in Sec.~\ref{sec:OA:osc} and $\sot = 0.0243$) and the neutrino energy is fixed to 0.6~GeV.
} \label{fig:oscprob} \end{center} \end{figure} \subsection{\label{sec:OA:osc} External input for oscillation parameters} Since our experiment is insensitive to the solar oscillation parameters, we fix them to the values $\ensuremath{\textrm{sin}^2\theta_{12}}\xspace = 0.306$ and $\dmsqso = 7.5\times 10^{-5}$\ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace from~\cite{PDG2012}. As a check, the Bayesian analysis presented in Sec.~\ref{sec:jointbayes} applies Gaussian priors with standard deviations (0.017 and $0.2\times10^{-5}$\ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace) and finds that the uncertainties in these parameters do not affect the intervals of the other oscillation parameters. When combining the results for the T2K joint oscillation analyses in Secs.~\ref{sec:jointfreq} and~\ref{sec:jointbayes} with the results from the reactor experiments, we use the weighted average of the results from the three reactor experiments Daya Bay, RENO, and Double Chooz which is: $(\ensuremath{\textrm{sin}^22\theta_{13}})_{reactor} = 0.095\pm0.01$~\cite{PDG2013}. In terms of the parametrization that we use in this paper, $(\sot)_{reactor} = 0.0243\pm0.0026$. \subsection{\label{sec:OA:fit} Oscillation parameter estimation} Sections~\ref{sec:numu}--\ref{sec:jointbayes} describe analyses which use T2K and external data to estimate oscillation parameters and provide frequentist confidence intervals or Bayesian credible intervals. Using the disappearance channel alone, the atmospheric oscillation parameters are studied using frequentist approaches. The disappearance and appearance channels are used in combination to study a larger set of oscillation parameters, using frequentist and Bayesian approaches. This section describes general methods that are applied in these analyses. 
The oscillation analyses compare the event rate and distribution of the reconstructed neutrino energies for the observed $\nu_\mu$ CC and $\nu_e$ CC candidate events recorded by the far detector, selected as described in Sec.~\ref{sec:SK_event_selection}, with model predictions. The overall number of predicted events, both without oscillations and for typical oscillation parameter values, is shown in Tab.~\ref{tab:jointfreq:prediction:all_extra_templates}.

\begin{table}[tbp] \center \caption{Predicted number of $\nu_\mu$ CC and $\nu_e$ CC candidates for an exposure of 6.57 $\times 10^{20}$ POT, without oscillations and with oscillations using the typical parameter values: $\sin^{2}\theta_{12} = 0.306$, $\Delta m_{21}^{2} = 7.5 \times 10^{-5}$\ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace, $\sin^{2}\theta_{23} = 0.5$, $\Delta m_{32}^{2}= 2.4 \times 10^{-3}$\ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace, $\sin^{2}\theta_{13} = 0.0243$, $\delta_{CP} = 0$ and normal mass hierarchy. The total numbers are broken down into the intrinsic beam components (those without an arrow) and oscillated components. } \begin{tabular} {l c c c c c} \toprule & \multicolumn{2}{c}{$\nu_\mu$ CC} & \ \ \ & \multicolumn{2}{c}{$\nu_{e}$ CC} \\ & Osc. & No osc.
\\ \hline $\nu_{\mu}$ & 116.46 & 431.77 & & 0.94 & 1.38\\ $\nu_{e} \rightarrow \nu_{\mu}$ & 0.16 & 0 & & 0.00 & 0\\ $\bar{\nu}_{\mu}$ & 7.81 & 13.92 & & 0.05 & 0.06\\ $\nu_{e}$ & 0.26 & 0.27 & & 3.13 & 3.38\\ $\nu_{\mu} \rightarrow \nu_{e}$ & 0.26 & 0 & & 16.55 & 0\\ $\bar{\nu}_{e}$ & 0.02 & 0.02 & & 0.15 & 0.16\\ $\bar{\nu}_{\mu} \rightarrow \bar{\nu}_{e}$ & 0.00 & 0 & & 0.22 & 0\\ \hline Total & 124.98 & 445.98 & & 21.06 & 4.97 \\ \botrule \end{tabular} \label{tab:jointfreq:prediction:all_extra_templates} \end{table}

Point estimates for the oscillation parameters are those that maximize a likelihood function (or the posterior probability density for Bayesian analyses) that accounts for T2K-SK data, as well as internal control samples and external data. The observed numbers of events in SK are treated as outcomes of Poisson distributions. Systematic uncertainties are encapsulated by the systematic parameters and their covariance matrices, defined in Secs.~\ref{sec:beam}--\ref{sec:SK}. These provide a convenient mechanism to connect the separate analyses of the neutrino beamline, neutrino interactions, near detectors, and far detector to the full oscillation analyses. The analyses use different approaches to deal with the large number of oscillation and nuisance parameters, and report intervals based on either frequentist or Bayesian methods. With the large number of oscillation and nuisance parameters involved, it is not possible to calculate confidence intervals for a subset of the parameters with a method that guarantees frequentist coverage\footnote{Coverage demands that in an ensemble of repeated experiments, $\alpha$\% of the $\alpha$\% confidence intervals contain the true parameter(s). Coverage in the presence of systematic uncertainty is difficult to define, in part due to the definition of an appropriate ensemble.} for any possible values of the remaining parameters.
Instead, a pragmatic approach is followed by reducing the high dimensionality of the likelihood functions through either profiling or marginalization. The profile likelihood, a function of only the subset of parameters of interest, is the likelihood maximized over the remaining parameters. The marginal likelihood is found by integrating the product of the likelihood function and priors over all parameters, except those of interest. In the case of linear parameter dependence and where the nuisance parameters appear in a Gaussian form, the profile and marginal likelihood functions will be identical and can be used to produce intervals with correct frequentist coverage. For the neutrino oscillation analysis, the parameter dependence is non-linear and as a result the profile and marginal likelihoods differ, and frequentist coverage is not guaranteed. When practical, we use the Neyman approach of constructing $\alpha$\% confidence intervals whereby, for any value of the parameter(s) of interest, $\alpha$\% of possible data outcomes are accepted on the basis of a statistic. In our analyses, they are accepted if the likelihood ratio is larger than a critical value. The confidence interval is the set of all values for the parameter(s) for which the data are accepted. When physical boundaries or non-linearities appear in the parametrization, as in the case for the oscillation parameters, they can cause confidence intervals to be empty or misleadingly small. In order to reduce the chance of producing such confidence intervals, we use the likelihood ratio recommended by Feldman and Cousins~\cite{PhysRevD.57.3873} to form the interval. When producing joint intervals for two oscillation parameters, this approach is not always computationally practical, and instead approximate intervals are shown using contours of the likelihood ratio, sometimes referred to as the constant $\Delta\chi^2$ method. 
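To make the Feldman-Cousins construction concrete, the following sketch applies the likelihood-ratio ordering to the textbook case of a Poisson signal $\mu$ with known background $b$ and the physical boundary $\mu \ge 0$; all numbers are illustrative and unrelated to the T2K analysis:

```python
import math

def pois(n, lam):
    """Poisson probability mass function."""
    return math.exp(-lam) * lam**n / math.factorial(n)

def fc_interval(n_obs, b, mus, alpha=0.90, n_max=60):
    """Feldman-Cousins interval: for each mu on a grid, rank outcomes n
    by R = P(n | mu + b) / P(n | mu_best + b), where mu_best = max(0, n - b)
    respects the physical boundary, and accept outcomes in decreasing R
    until their summed probability reaches alpha."""
    accepted = []
    for mu in mus:
        ranked = []
        for n in range(n_max):
            p = pois(n, mu + b)
            mu_best = max(0.0, n - b)
            ranked.append((p / pois(n, mu_best + b), n, p))
        ranked.sort(reverse=True)
        covered, acc = 0.0, set()
        for _, n, p in ranked:
            acc.add(n)
            covered += p
            if covered >= alpha:
                break
        if n_obs in acc:
            accepted.append(mu)
    return min(accepted), max(accepted)

# observing n = 4 with expected background b = 3 yields a 90% CL interval
# whose lower edge sits at the physical boundary instead of being empty
lo, hi = fc_interval(n_obs=4, b=3.0, mus=[0.05 * i for i in range(301)])
```

The same ordering is what prevents empty or misleadingly small intervals near the boundaries of the oscillation parameter space.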
To construct Bayesian credible intervals, the posterior probability density function of the oscillation and nuisance parameters is calculated as the product of the likelihood function for the SK data with prior probability functions for the parameters. The Markov-Chain Monte Carlo (MCMC) method using the Metropolis-Hastings algorithm~\cite{hastings1970monte} is used to efficiently produce a set of points that populate the full parameter space proportional to the posterior probability density function. The chain is the set of accepted stepping points in a random walk through parameter space, in which a proposed step from point $A$ to a point $B$ with lower density is accepted with a probability equal to the ratio of the densities $f(B)/f(A)$, and is always accepted when the density increases. When a step is not accepted, the last point in the chain is repeated, and another random step from that point is proposed. With the chain, consisting typically of millions of points, $\alpha$\% highest-posterior-density (HPD) credible intervals~\cite{chen1999monte} are constructed by selecting the region of highest density that contains $\alpha$\% of all the points. HPD intervals are constructed such that no point in parameter space outside the interval has a higher probability density than any point inside the interval. This is done for one or two parameters of interest, and the values of the remaining parameters are ignored in the process, equivalent to producing a set of points distributed according to the marginalized posterior probability density function. Unlike the frequentist approaches used, for which coverage is approximate, no approximations are necessary to produce the credible intervals.
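The random walk and the HPD construction described above can be sketched for a one-dimensional toy posterior (a unit Gaussian); the step size and chain length below are arbitrary choices:

```python
import math
import random

random.seed(1)

def metropolis(logpost, x0, step, n_steps):
    """Random-walk Metropolis: a step A -> B is accepted with probability
    min(1, f(B)/f(A)); on rejection the current point is repeated."""
    chain, x, lp = [], x0, logpost(x0)
    for _ in range(n_steps):
        xp = x + random.gauss(0.0, step)
        lpp = logpost(xp)
        # tiny offset guards against log(0) from random() returning 0.0
        if math.log(random.random() + 1e-300) < lpp - lp:
            x, lp = xp, lpp
        chain.append(x)
    return chain

def hpd_interval(chain, alpha=0.68):
    """Shortest interval containing a fraction alpha of the points; for a
    unimodal marginal this is the highest-posterior-density interval."""
    s = sorted(chain)
    k = int(alpha * len(s))
    i = min(range(len(s) - k), key=lambda j: s[j + k] - s[j])
    return s[i], s[i + k]

# toy posterior exp(-x^2/2): the 68% HPD interval should approach [-1, 1]
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 200_000)
lo, hi = hpd_interval(chain[10_000:], 0.68)
```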
The prior probability densities are, by default, uniform for the oscillation parameters over a large bounded region in the standard oscillation parametrization (\ensuremath{|\Delta m^{2}|\xspace}, \stt, \sot, \ensuremath{\delta_{CP}}), multidimensional Gaussians for the nuisance parameters, and the prior probabilities for the two mass hierarchies are set to 0.5. As a result, the posterior probability density is proportional to the likelihood functions used for the frequentist analyses. Checks are made for alternative priors which are uniform in the oscillation angles, and the resulting interval boundaries are not strongly affected. \subsection{\label{sec:OA:syst} Characterizing systematic uncertainty} The systematic parameters considered for the oscillation analyses can be grouped into three different categories: i) SK flux parameters and cross section parameters in common with ND280, ii) independent cross section parameters and iii) SK efficiencies, final state and secondary interactions (FSI+SI) and photo-nuclear (PN) parameters. The first category includes the systematic uncertainties related to the neutrino flux at SK and some cross sections, which are constrained by the near detector data as explained in Sec.~\ref{sec:BANFF}. The values and uncertainties of these parameters used in the oscillation analyses are summarized in Tabs.~\ref{tab:propagatedparameters} and~\ref{tab:propagatedparametersxsec}. The independent cross section parameters, described in Sec.~\ref{sec:nuint}, are related to the nuclear model, therefore independent between the near and far detector as they contain different nuclei, or those which are common between the near and far detector but for which the near detector is insensitive. Table~\ref{tbl:xsecpar} in Sec.~\ref{sec:nuint} summarizes the values and uncertainties of the independent cross section parameters used for the SK oscillation analyses. 
Finally, the far detector efficiencies and uncertainties on final state, secondary and photo-nuclear interactions are described in Sec.~\ref{sec:SK}. A covariance matrix is computed for the uncertainties in this group; however, the uncertainty on the SK reconstructed energy scale, estimated to be 2.4\%, is not included in the calculation of the covariance matrix, but considered as an independent systematic parameter. The effects of the systematic uncertainties on the predicted event rate are summarized in Tab.~\ref{tab:systematics:nsk_table_summary} for the typical values of the oscillation parameters. In this table, the effects are presented as percentage uncertainties computed by throwing $10^{6}$ toy experiments, varying only the systematics in the selected category (fixing the rest to their nominal values) and finding the RMS/mean of the distribution of number of events. \begin{table}[tbp] \caption{ Relative uncertainty (1$\sigma$) on the predicted rate of $\nu_\mu$ CC and $\nu_e$ CC candidate events.} \begin{tabular}{lcc} \toprule { Source of uncertainty } & \ \ $\nu_\mu$ CC \ \ & \ \ $\nu_e$ CC \ \ \\ \hline Flux and common cross sections & & \\ (w/o ND280 constraint) & 21.7\% & 26.0\% \\ (w ND280 constraint) & 2.7\% & 3.2\% \\ \hline Independent cross sections & 5.0\% & 4.7\% \\ \hline SK & 4.0\% & 2.7\% \\ FSI+SI(+PN) & 3.0\% & 2.5\% \\ \hline {{Total}} & & \\ {{(w/o ND280 constraint) }} & {{23.5\%}} & {{26.8\%}} \\ {{(w ND280 constraint) }} & {{7.7\%}} & {{6.8\%}} \\ \botrule \end{tabular} \label{tab:systematics:nsk_table_summary} \end{table} Figure~\ref{fig:systematics:error_envelope} shows the total error envelope combining all systematic uncertainties, calculated as the RMS from $10^6$ toy MC experiments generated with randomized systematic parameters, taking into account all correlations between them, with and without the constraint from the ND280 data, showing a clear reduction of the error envelope when the constraint is applied. 
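The RMS/mean procedure amounts to propagating correlated parameter throws through the event-rate prediction; the toy sketch below uses two made-up systematic parameters with 5\% and 3\% linearized effects on the rate, here uncorrelated:

```python
import math
import random

random.seed(7)

def fractional_uncertainty(nominal_rate, chol, weights, n_toys=10_000):
    """Throw systematic parameters from a multivariate Gaussian (via the
    Cholesky factor of their covariance), recompute the rate with a
    linearized response, and return RMS/mean of the toy rates."""
    dim = len(weights)
    rates = []
    for _ in range(n_toys):
        z = [random.gauss(0.0, 1.0) for _ in range(dim)]
        # correlated throw: delta = L z
        delta = [sum(chol[i][j] * z[j] for j in range(dim)) for i in range(dim)]
        rates.append(nominal_rate * (1.0 + sum(w * d for w, d in zip(weights, delta))))
    mean = sum(rates) / len(rates)
    rms = math.sqrt(sum((r - mean) ** 2 for r in rates) / len(rates))
    return rms / mean

# identity Cholesky factor = two uncorrelated unit-Gaussian parameters
frac = fractional_uncertainty(100.0, [[1.0, 0.0], [0.0, 1.0]], [0.05, 0.03])
```

Adding the two effects in quadrature gives $\sqrt{0.05^2+0.03^2}\approx 5.8\%$, which the toy RMS/mean reproduces to within statistical fluctuations.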
\begin{figure}[tbp] \begin{center} \includegraphics[width=0.47\textwidth]{fig26a.pdf} \includegraphics[width=0.47\textwidth]{fig26b.pdf} \caption { Total error envelopes for the reconstructed energy distributions of $\nu_\mu$ CC (left) and $\nu_e$ CC (right) candidate events, using typical oscillation parameter values, with and without the ND280 constraint applied.} \label{fig:systematics:error_envelope} \end{center} \end{figure} \section{\label{sec:numu} $\ensuremath{\nu_\mu}\xspace \rightarrow \ensuremath{\nu_\mu}\xspace$ Analysis} T2K has published several measurements of muon neutrino disappearance~\cite{PhysRevD.85.031103,abe2013measurement,PhysRevLett.112.181801}. These measurements were performed within the framework of the PMNS oscillation model described in Section~\ref{sec:OA:oscmodel} and provided best-fit estimates and frequentist confidence intervals for the values of the mixing parameter \stt and the mass-squared splitting \ensuremath{\Delta m^{2}_{32}\xspace}\ (\ensuremath{\Delta m^{2}_{13}\xspace}) in the case of the normal (inverted) hierarchy. Each successive measurement analyzed a larger dataset, and the most recent measurement provides the world's strongest constraint on \stt~\cite{Abe:2014ugx}. This section gives a more detailed description of that analysis and the study of multi-nucleon effects. Reducing the uncertainty on the values of these two parameters is important for measuring CP violation in neutrino oscillations by T2K and other current and future experiments. Furthermore, precise measurements of \stt\ could constrain models of neutrino mass generation~\cite{King:2013iu,Albright:2010kl,Altarelli:2010jl,Ishimori:2010uu,Albright:2006hr,Mohapatra:940213}. 
\subsection{Method} \label{sec:numu:method} The \ensuremath{\nu_\mu}\xspace-disappearance analysis is performed by comparing the rate and spectrum of reconstructed neutrino energies, Eq.~(\ref{eq:SK_Erec}), in the $\nu_\mu$ CC candidate event sample with predictions calculated from Monte Carlo simulation. The predicted spectrum is calculated by applying the survival probability in Eq.~(\ref{eq:numusurv3f}) to a prediction for the unoscillated rate and spectrum. These predictions are derived from our models of the total expected neutrino flux at the detector (explained in Sec.~\ref{sec:beam}) and the cross section predictions for neutrino-nucleus interactions on water (described in Sec.~\ref{sec:nuint}), which are constrained by near detector data (described in Sec.~\ref{sec:BANFF}), and a GEANT3 model of particle interactions and transport in the SK\xspace detector. The models of the flux, interaction physics, and detector include systematic parameters, whose uncertainties are accounted for in the analysis by using their corresponding covariance matrices. The oscillation parameters are estimated using two independent maximum likelihood fits to the reconstructed energy spectrum. The fits use different likelihoods and software in order to serve as cross-checks to each other. One analysis uses an extended unbinned likelihood (M1), while the other uses a binned likelihood (M2). 
The log-likelihood definitions, ignoring constant terms, are: \begin{itemize} \item {\it M1 likelihood}, \begin{equation} \begin{split} -2\ln\mathcal{L}(\vec{\theta},\vec{g},\vec{x}_s,\vec{s}) = & -2\sum^{N^d}_{i=1}\ln f(E_{\nu,i}^{rec}|\vec{\theta},\vec{g},\vec{x}_s,\vec{s}) \\ & + 2\left(N^{p}(\vec{\theta},\vec{g},\vec{x}_s,\vec{s}) -N^d \ln N^{p}(\vec{\theta},\vec{g},\vec{x}_s,\vec{s})\right) \\ & + \Delta\vec{g}^T V_g^{-1} \Delta\vec{g} + \Delta\vec{x}_s^T V_{xs}^{-1} \Delta\vec{x}_s + \Delta\vec{s}^T V_s^{-1} \Delta\vec{s} \ , \end{split} \label{eq:numu_M1_likelihood} \end{equation} and \item {\it M2 likelihood}, \begin{equation} \begin{split} -2\ln\mathcal{L}(\vec{\theta},\vec{g},\vec{x}_s,\vec{s}) = & - 2\sum_{j=1}^{N_{bins}} N^d_j \ln N^p_j(\vec{\theta},\vec{g},\vec{x}_s,\vec{s}) \\ & + 2N^p(\vec{\theta},\vec{g},\vec{x}_s,\vec{s}) \\ & + \Delta\vec{g}^T V_g^{-1} \Delta\vec{g} + \Delta\vec{x}_s^T V_{xs}^{-1} \Delta\vec{x}_s + \Delta\vec{s}^T V_s^{-1} \Delta\vec{s} \ . \end{split} \label{eq:numu_M2_likelihood} \end{equation} \end{itemize} In both definitions, $N^d$ and $N^p$ are the total number of data and predicted events respectively; $\vec{\theta}$ represents a vector of the PMNS oscillation parameters (Sec.~\ref{sec:OA:oscmodel}); $\vec{g}$ is a vector containing the values of the systematic parameters constrained by the near detector (Tabs.~\ref{tab:propagatedparameters},\ref{tab:propagatedparametersxsec}), $\vec{x}_s$ are the cross section parameters not constrained by the near detector (Tab.~\ref{tbl:xsecpar}), and $\vec{s}$ are the SK\xspace\ detector systematic parameters (Sec.~\ref{sec:SK_detector_errors}). $\Delta$ designates the difference between the systematic parameters and their nominal values, and $V$ designates the covariance for the systematic parameters. 
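A minimal sketch of the M2-style binned likelihood, with the constant terms dropped as above; the bin contents and the single penalized nuisance parameter are invented for illustration:

```python
import math

def neg2_log_likelihood(n_data, n_pred, deltas, cov_inv):
    """-2 ln L: binned Poisson terms (2 N^p - 2 N^d ln N^p per bin,
    constants dropped) plus the Gaussian penalty Delta^T V^-1 Delta on the
    shifts of the systematic parameters from their nominal values."""
    n2ll = 0.0
    for nd, npred in zip(n_data, n_pred):
        n2ll += 2.0 * npred
        if nd > 0:
            n2ll -= 2.0 * nd * math.log(npred)
    dim = len(deltas)
    for i in range(dim):
        for j in range(dim):
            n2ll += deltas[i] * cov_inv[i][j] * deltas[j]
    return n2ll

# the Poisson part is minimized when each prediction equals the data,
# so any shifted prediction can only increase -2 ln L
base = neg2_log_likelihood([10, 5], [10.0, 5.0], [0.0], [[1.0]])
shifted = neg2_log_likelihood([10, 5], [11.0, 5.5], [0.0], [[1.0]])
```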
For the M1 likelihood, $f(E_{\nu,i}^{rec}|\vec{\theta},\vec{g},\vec{x}_s,\vec{s})$ is the probability density of observing an event with reconstructed energy, $E_{\nu,i}^{rec}$, given values for the oscillation and systematic parameters. The value of $f(E_{\nu,i}^{rec}|\vec{\theta},\vec{g},\vec{x}_s,\vec{s})$ is calculated with a linear interpolation between the bins of a histogram of the normalized energy spectrum. For the M2 likelihood, the number of data and predicted events in the $j^{th}$ reconstructed energy bin, $N^d_j$ and $N^p_j$ respectively, are used instead. Both \ensuremath{\nu_\mu}\xspace-disappearance fits consider a total of 48 parameters: 6 oscillation parameters, 16 flux parameters, 20 neutrino interaction parameters and 6 parameters related to the response of SK\xspace. In order to find the best-fit values and confidence intervals for \stt\ and \ensuremath{|\Delta m^{2}|\xspace}, the profiled likelihood is maximized. Separate fits are performed for the different neutrino mass hierarchy assumptions. \subsection{Determining Confidence Intervals} As explained in Sec.~\ref{sec:OA:fit}, the Neyman method with the approach recommended by Feldman and Cousins (FC) was used to calculate confidence intervals for the two oscillation parameters, \stt and \ensuremath{\Delta m^{2}_{32}\xspace} (\ensuremath{\Delta m^{2}_{13}\xspace}), for the normal (inverted) hierarchy. The constant-$\Delta\chi^{2}$ method does not provide correct coverage due to the physical boundary near \sttt$=1$ and because of the non-linear parametrization. Critical values of the FC statistic were determined on a fine grid of the two oscillation parameters of interest using 10,000 toy datasets at each point. Each toy dataset had a set of values of the systematic parameters sampled from a multi-dimensional Gaussian having means at the nominal values, and covariances $V$. 
Each oscillation parameter, \ensuremath{\textrm{sin}^2\theta_{12}}\xspace, \dmsqso, and \sot, is sampled from a Gaussian with mean and sigma values listed in Sec.~\ref{sec:OA:osc}. The values of \ensuremath{\delta_{CP}}\ are sampled uniformly between $-\pi$ and $+\pi$. The systematic parameters and these additional oscillation parameters are removed from the likelihood function by profiling. In order to calculate an interval of just one oscillation parameter (\stt\ or $\Delta m^2$), we determine the critical values by marginalizing over the second oscillation parameter. The marginalization assumes that the probability is proportional to the likelihood using T2K data. \subsection{Results} Both the M1 and M2 analyses find the point estimates $\stt=0.514$ and $\ensuremath{\Delta m^{2}_{32}\xspace}=2.51\times10^{-3}$\,\ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace when assuming the normal mass hierarchy and $\stt=0.511$ and $\ensuremath{\Delta m^{2}_{13}\xspace}=2.48\times10^{-3}$\,\ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace when assuming the inverted mass hierarchy. Table~\ref{tab:numu:summary} summarizes these results from the M1 and M2 analyses. Likewise, the confidence intervals produced by M1 and M2 are similar. Since the M1 and M2 analyses are consistent with each other, only results from M1 are given below. Figure~\ref{fig:numu_contour_nh_wsens} shows the best-fit values of the oscillation parameters, the 2D confidence intervals calculated using the Feldman and Cousins method, assuming normal and inverted hierarchy, and the sensitivity at the current exposure. The size of the confidence interval found by the fit to the data is smaller than the sensitivity. This arises because the best-fit point is at the physical boundary corresponding to maximum disappearance probability. The amount by which the region is smaller is not unusual in an ensemble of toy MC experiments produced under the assumption of maximal disappearance. 
The best-fit spectrum from the normal hierarchy fit is compared to the observed spectrum in Figure~\ref{fig:numu_spectrum_result}, together with the ratio of the number of observed events to the number predicted with \stt$=0$. The observed oscillation dip is significant and well described by the fit. The calculated 1D Feldman and Cousins confidence intervals are given in Table~\ref{tab:numu:confidence_intervals}. Figure~\ref{fig:fc_1d_s23_normal} shows the $-2\Delta\ln\mathcal{L}$ distributions for \stt\ and \ensuremath{|\Delta m^{2}|\xspace}\ from the data, along with the 90\% CL critical values.

\begin{figure}[tbp] \includegraphics[width=0.65\textwidth]{fig27.pdf} \caption{The 68\% (dashed) and 90\% (solid) CL intervals for the M1 \ensuremath{\nu_\mu}\xspace-disappearance analysis assuming normal and inverted mass hierarchies. The 90\% CL sensitivity contour for the normal hierarchy is overlaid for comparison. } \label{fig:numu_contour_nh_wsens} \end{figure}

\begin{figure}[tbp] \centering \includegraphics[width=0.7\textwidth]{fig28.pdf} \caption{Top: Reconstructed neutrino energy spectrum for data, best-fit prediction, and unoscillated prediction. Bottom: Ratio of oscillated to unoscillated events as a function of neutrino energy for the data and the best-fit spectrum.} \label{fig:numu_spectrum_result} \end{figure}

\begin{table}[tbp] \centering \caption{ Summary of the point estimates from the two independent 3-flavor muon neutrino disappearance oscillation frequentist analyses.
} \begin{tabular}{ c c c c c } \hline\hline \ \ Analysis \ \ & \ \ MH \ \ & \ \ \ensuremath{\Delta m^{2}_{32}\xspace} or \ensuremath{\Delta m^{2}_{13}\xspace} \ \ & \ \ \stt \ \ & \ \ N$_{exp}^{1R\mu}$ \ \ \\ & & $(10^{-3} \ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace)$ & & \\ \hline M1 & NH & $2.51$ & $0.514$ & 121.4 \\ M1 & IH & $2.48$ & $0.511$ & 121.4 \\ \hline M2 & NH & $2.51$ & $0.514$ & 121.5 \\ M2 & IH & $2.48$ & $0.511$ & 121.4 \\ \hline\hline \end{tabular} \label{tab:numu:summary} \end{table} \begin{table}[tbp] \centering \caption{68\% and 90\% confidence level intervals for the \ensuremath{\nu_\mu}\xspace-disappearance analysis.} \begin{tabular}{ c c c c } \hline\hline & MH & 68\% CL & 90\% CL \\ \hline \stt & NH & [0.458, 0.568] & [0.428, 0.598] \\ \stt & IH & [0.456, 0.566] & [0.427, 0.596] \\ \hline \ \ \ensuremath{\Delta m^{2}_{32}\xspace} ($10^{-3}$\ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace) \ \ & \ \ NH \ \ & \ \ [2.41, 2.61] \ \ & \ \ [2.34, 2.68] \ \ \\ \ensuremath{\Delta m^{2}_{13}\xspace} ($10^{-3}$\ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace) & IH & [2.38, 2.58] & [2.31, 2.64] \\ \hline\hline \end{tabular} \label{tab:numu:confidence_intervals} \end{table} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig29a.pdf} \includegraphics[width=0.48\textwidth]{fig29b.pdf} \caption{Profiled -2$\Delta\ln$L as a function of \stt\ (left) and \ensuremath{|\Delta m^{2}|\xspace}\ (right), for the normal and inverted mass hierarchy assumptions. The 90\% CL critical values are indicated by the lines with points. 
} \label{fig:fc_1d_s23_normal} \end{figure} \subsection{Multi-Nucleon Effects Study} Recently, experimental~\cite{mb-ccqe, AguilarArevalo:2013hm, Fields:2013zhk, Fiorentini:2013ezn} and theoretical~\cite{Marteau:1999kt, Martini:2009, Carlson:2001mp, Shen:2012xz,Bodek:2011, Martini:2010, Martini:2013sha, Nieves:2005rq, Benhar:1994hw,Gran:2013kda, Nieves:2012yz, Lalakulich3, Martini2, Martini3, Meloni, Nieves:2012} results have suggested that the charged-current neutrino-nucleus scattering cross section at T2K energies could contain a significant multi-nucleon component. Such processes are known to be important in describing electron-nucleus scattering (for a review, see~\cite{RevModPhys.80.189}), but have not yet been included in the model of neutrino-nucleus interactions in our muon neutrino disappearance analyses. If such multi-nucleon effects are important, their omission could introduce a bias in the oscillation analyses. Since low energy nucleons are not detected in SK\xspace, such events can be selected in the QE sample and assigned incorrect neutrino energies. A Monte Carlo study was performed in order to explore the sensitivity of the analysis to multi-nucleon effects. The nominal interaction model includes pion-less delta decay (PDD), which can be considered to be a multi-nucleon effect. As an alternative, we turn off PDD and use a model by Nieves~\cite{Nieves:2012} to simulate multi-nucleon interactions for neutrino energies below 1.5~GeV. Pairs of toy Monte Carlo experiments including both near and far detector data were generated, one with the nominal and one with the alternative model. Each dataset in a pair was produced by using the same distribution of interacting neutrinos, in order to reduce statistical fluctuations in the comparison. Each pair of experiments used a different distribution of interacting neutrinos and a different set of systematic parameters sampled from multivariate Gaussian distributions. 
The complete analysis with near and far detector data is performed, assuming the nominal model in all cases. In so doing, the study properly accounts for the reduction in sensitivity to mis-modeling of neutrino interactions when using near detector data to constrain flux and cross section parameters. The differences in the point estimates for the oscillation parameters for the two samples in each pair are shown in Figure~\ref{fig:nieves_nom}. The overall bias for both parameters is negligible compared to the precision with which they are measured. However, the additional variation in \stt\ is about 3\%, comparable to the size of other systematic uncertainties. The bias was evaluated at $\stt=0.45$ to avoid the physical boundary at maximal disappearance, which could reduce the size of the apparent bias. For the present exposure, the effect can be ignored, but future analyses will need to incorporate multi-nucleon effects in their model of neutrino-nucleus interactions. \begin{figure} \includegraphics[width=0.45\textwidth]{fig30a.pdf} \includegraphics[width=0.45\textwidth]{fig30b.pdf} \caption{Difference in the point estimates of \stt (left) and \ensuremath{|\Delta m^{2}|\xspace} (right) between pairs of toy MC datasets with and without including multi-nucleon effects.} \label{fig:nieves_nom} \end{figure} \section{\label{sec:jointfreq} Joint \ensuremath{\nu_\mu}\xspace disappearance and \ensuremath{\nu_e}\xspace appearance analysis using a frequentist approach.} This section describes the joint 3-flavor oscillation analysis performed by combining the \ensuremath{\nu_\mu}\xspace disappearance and \ensuremath{\nu_e}\xspace appearance channels using a frequentist approach. The oscillation parameters, $\vec{\theta}=\ensuremath{|\Delta m^{2}|\xspace}$, \stt, \sot, and \ensuremath{\delta_{CP}}, described in Sec.~\ref{sec:OA:oscmodel}, are simultaneously determined. 
This is done by comparing the reconstructed energy spectra of the $\nu_\mu$ CC and $\nu_e$ CC candidate events observed at SK, selected as described in Sec.~\ref{sec:SK}, with the predicted reconstructed energy spectra. Point estimates of the oscillation parameters are found by minimizing the negative log-likelihood \begin{equation} \begin{split} \displaystyle \chi^2 = -2 \, \ln\mathcal{L} (\vec{\theta},\vec{g},\vec{x}_s,\vec{s}) = \ & 2N^p_\mu(\vec{\theta},\vec{g},\vec{x}_s,\vec{s}) - 2\sum_{i=1}^{N_{\mu\ \mathrm{bins}}} N^d_{\mu,i} \ln N^p_{\mu,i}(\vec{\theta},\vec{g},\vec{x}_s,\vec{s}) \\ + \ & 2N^p_e(\vec{\theta},\vec{g},\vec{x}_s,\vec{s}) - 2\sum_{i=1}^{N_{e\ \mathrm{bins}}} N^d_{e,i} \ln N^p_{e,i}(\vec{\theta},\vec{g},\vec{x}_s,\vec{s}) \\ + \ & \Delta\vec{g}^T V_g^{-1} \Delta\vec{g} + \Delta\vec{x}_s^T V_{xs}^{-1} \Delta\vec{x}_s + \Delta\vec{s}^T V_s^{-1} \Delta\vec{s} \ , \label{eq:jointfreq:chisquare} \end{split} \end{equation} where $N^d_{\mu,i}$ ($N^d_{e,i}$) is the observed number of $\nu_\mu$ CC ($\nu_e$ CC) candidate events in the $i^{th}$ reconstructed energy bin, and $N^p_{\mu,i}$ ($N^p_{e,i}$) is the corresponding predicted number of events, calculated as a function of the oscillation parameters $\vec{\theta}$ and the vectors of systematic parameters, $\vec{g},\vec{x}_s,\vec{s}$, as described for Eq.~(\ref{eq:numu_M2_likelihood}). The negative log-likelihood function is minimized using MINUIT. As explained in Sec.~\ref{sec:OA}, the solar oscillation parameters are kept fixed for this analysis. To combine our measurement with the reactor measurements, we add the term, \begin{equation} \chi^2_{reactor} = \Bigg( \frac{\sot - (\sot)_{reactor} }{\sigma_{reactor}} \Bigg)^2 \ , \label{eq:jointfreq:reactor} \end{equation} where (\sot)$_{reactor}$ and $\sigma_{reactor}$ are given in Sec.~\ref{sec:OA:osc}. 
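The structure of Eq.~(\ref{eq:jointfreq:chisquare}), binned Poisson terms plus multivariate Gaussian penalties, can be written down in a few lines. This is a hedged illustration, not the analysis code: in the real fit the predicted counts are recomputed from the full flux, cross-section, and detector model at every evaluation, whereas here they are passed in as plain arrays, and the toy numbers in the demo are arbitrary.

```python
import numpy as np

def neg2_lnL(n_pred, n_obs, deltas=(), covs=()):
    """Binned Poisson -2 ln L (up to a constant) plus multivariate
    Gaussian penalty terms of the form d^T V^-1 d, one per block of
    systematic parameters."""
    n_pred = np.asarray(n_pred, dtype=float)
    n_obs = np.asarray(n_obs, dtype=float)
    # Poisson part: 2*sum(pred) - 2*sum(obs * ln(pred))
    chi2 = 2.0 * n_pred.sum() - 2.0 * np.sum(n_obs * np.log(n_pred))
    # Gaussian penalties for each systematic block
    for d, V in zip(deltas, covs):
        d = np.asarray(d, dtype=float)
        chi2 += float(d @ np.linalg.solve(np.asarray(V, dtype=float), d))
    return chi2

def reactor_term(s22th13, s22th13_reactor, sigma_reactor):
    """Gaussian reactor constraint on sin^2(2 theta_13)."""
    return ((s22th13 - s22th13_reactor) / sigma_reactor) ** 2

# toy check: one bin with prediction 2.0 and no observed events,
# plus one 2-parameter penalty block with a unit offset
demo = neg2_lnL([2.0], [0.0], deltas=[[1.0, 0.0]],
                covs=[[[1.0, 0.0], [0.0, 1.0]]])
```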
When maximizing the likelihood, the systematic parameters are allowed to vary in a wide range $[-5\sigma, +5\sigma]$ (where $\sigma$ is the square root of the corresponding diagonal element in the covariance matrix), with the exception of the spectral function parameter, which is constrained to lie between 0 (RFG) and 1 (SF). A total of 64 systematic parameters, representing uncertainties in the far detector efficiencies, the reconstructed neutrino energy scale, final state and secondary interactions, the flux prediction, and the relevant neutrino interaction models, are considered. As with the disappearance analyses, the fit to the ND280 near detector data described in Sec.~\ref{sec:BANFF} is applied as a multivariate Gaussian penalty term to constrain the flux uncertainties and cross sections common to the near and far detectors. The 1-dimensional limits and 2-dimensional confidence regions reported in this analysis are constructed using the constant \ensuremath{\Delta \chi^2}\xspace method~\cite{PDG2012} with respect to a 4-dimensional best-fit point obtained by minimizing Eq.~(\ref{eq:jointfreq:chisquare}). An exception is the (\sot, \ensuremath{\delta_{CP}}) space without the reactor measurement, as that analysis has little power to constrain \ensuremath{\delta_{CP}}. For that case, a best-fit value of \sot\ is found for fixed values of \ensuremath{\delta_{CP}}\ in the interval [-$\pi$, $\pi$] (divided into 51 bins), resulting in 1-dimensional confidence regions for different values of \ensuremath{\delta_{CP}}\ with respect to a line of best-fit points. For the T2K data fit combined with the reactor constraint, described in Sec.~\ref{sec:jointfreq:results_reactor}, the Feldman and Cousins method~\cite{PhysRevD.57.3873} is used to produce confidence intervals by finding critical values of \ensuremath{\Delta \chi^2}\xspace\ as a function of \ensuremath{\delta_{CP}}; the corresponding excluded regions for \ensuremath{\delta_{CP}}\ are reported. 
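The constant-\ensuremath{\Delta \chi^2}\xspace\ construction can be sketched as follows. This is an illustrative one-dimensional version on a parameter grid, assuming the profiled $\chi^2$ values have already been computed; the critical values 1.00 and 2.71 are the standard one-parameter 68\% and 90\% CL values, and the parabolic toy curve below is made up for the demonstration.

```python
import numpy as np

def constant_dchi2_interval(grid, chi2, crit):
    """Return the smallest and largest grid points whose
    Delta chi^2 = chi^2 - min(chi^2) lies below the critical value."""
    grid = np.asarray(grid, dtype=float)
    dchi2 = np.asarray(chi2, dtype=float) - np.min(chi2)
    inside = grid[dchi2 < crit]
    return float(inside.min()), float(inside.max())

# toy profiled curve: parabola centred at 0.5 with sigma = 0.1
grid = np.linspace(0.0, 1.0, 1001)
chi2 = ((grid - 0.5) / 0.1) ** 2
lo68, hi68 = constant_dchi2_interval(grid, chi2, 1.00)   # ~[0.40, 0.60]
lo90, hi90 = constant_dchi2_interval(grid, chi2, 2.71)   # ~[0.34, 0.66]
```

For a Gaussian likelihood this reproduces the familiar $\pm 1\sigma$ and $\pm 1.64\sigma$ intervals; for the non-parabolic curves in the actual analysis the same thresholding is applied to the profiled \ensuremath{\Delta \chi^2}\xspace\ surface.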
\subsection{\label{sec:jointfreq:results} Results} Point estimates for the oscillation parameters and the expected number of events are summarized in Tab.~\ref{tab:jointfreq:bestfit}. Notably, the value obtained for \sot\ by T2K is larger than the value found by the reactor experiments, the best-fit value of \stt\ is consistent with maximal disappearance, and the difference in \ensuremath{\Delta \chi^2}\xspace\ between the solutions for each mass hierarchy is negligible. \begin{table}[tbp] \centering \caption{ Point estimates of the oscillation parameters for the joint 3-flavor oscillation frequentist analysis. } \begin{tabular}{ c c c c c c c c } \toprule \ \ MH \ \ & \ \ \ensuremath{\Delta m^{2}_{32}\xspace}\ or \ensuremath{\Delta m^{2}_{13}\xspace} \ \ & \ \ \stt \ \ & \ \ \sot \ \ & \ \ \ensuremath{\delta_{CP}} \ \ & \ \ N$_{exp}^{1R\mu}$ \ \ & \ \ N$_{exp}^{1Re}$ \ \ & \ \ \ensuremath{\Delta \chi^2}\xspace \ \ \\ & $(10^{-3} \ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace)$ & & & & & & \\ \hline NH & 2.51 & 0.524 & 0.0422 & 1.91 & 119.9 & 28.00 & 0.01\\ IH & 2.49 & 0.523 & 0.0491 & 1.01 & 119.9 & 28.00 & 0.00\\ \botrule \end{tabular} \label{tab:jointfreq:bestfit} \end{table} The profiled \ensuremath{\Delta \chi^2}\xspace of each oscillation parameter was obtained by minimizing the negative log-likelihood with respect to the systematic parameters and the other three oscillation parameters using MINUIT. Figure~\ref{fig:jointfreq:1D} presents the profiled \ensuremath{\Delta \chi^2}\xspace of each oscillation parameter, comparing the results for the normal and the inverted mass hierarchy. 
From these figures, the 1$\sigma$ intervals estimated using the $\ensuremath{\Delta \chi^2}\xspace=1$ criterion are: \begin{center} $\stt = 0.524^{+0.057}_{-0.059}$ (NH) \ \ $\stt = 0.523^{+0.055}_{-0.065}$ (IH) \\ $\sot = 0.042^{+0.013}_{-0.021}$ (NH) \ \ $\sot = 0.049^{+0.015}_{-0.021}$ (IH) \\ $\ensuremath{\Delta m^{2}_{32}\xspace} = 2.51^{+0.11}_{-0.12}$ ($10^{-3}\ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace$, NH) \ \ $\ensuremath{\Delta m^{2}_{13}\xspace} = 2.49^{+0.12}_{-0.12}$ ($10^{-3}\ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace$, IH). \end{center} \begin{figure}[tbp] \begin{center} \includegraphics[width=0.9\textwidth]{fig31.pdf} \caption { Profiled \ensuremath{\Delta \chi^2}\xspace for the joint 3-flavor oscillation analysis without using reactor data. The parameter $\ensuremath{|\Delta m^{2}|\xspace}$ represents \ensuremath{\Delta m^{2}_{32}\xspace}\ or \ensuremath{\Delta m^{2}_{13}\xspace}\ for normal and inverted mass hierarchy assumptions respectively. The horizontal lines show the critical \ensuremath{\Delta \chi^2}\xspace values for one dimensional fits at the 68\% and 90\% CL (\ensuremath{\Delta \chi^2}\xspace = 1.00 and 2.71 respectively). } \label{fig:jointfreq:1D} \end{center} \end{figure} Figure~\ref{fig:jointfreq:2D} presents the 68\% and 90\% CL regions for the two mass hierarchy assumptions in the four 2-dimensional oscillation parameter spaces (\stt, \ensuremath{\Delta m^{2}_{32}\xspace}), (\sot, \ensuremath{\Delta m^{2}_{13}\xspace}), (\sot, \ensuremath{\delta_{CP}}), and (\stt, \sot), constructed using constant \ensuremath{\Delta \chi^2}\xspace\ with respect to the inverted hierarchy best-fit point. 
\begin{figure*} \centering \includegraphics[width=0.5\textwidth]{fig32a.pdf}\includegraphics[width=0.5\textwidth]{fig32b.pdf} \includegraphics[width=0.5\textwidth]{fig32c.pdf}\includegraphics[width=0.5\textwidth]{fig32d.pdf} \caption { 68\% (dashed) and 90\% (solid) CL regions, from the analysis without using reactor data, with different mass hierarchy assumptions using \ensuremath{\Delta \chi^2}\xspace\ with respect to the best-fit point -- that from the inverted hierarchy. The parameter $\ensuremath{|\Delta m^{2}|\xspace}$ represents \ensuremath{\Delta m^{2}_{32}\xspace}\ or \ensuremath{\Delta m^{2}_{13}\xspace}\ for normal and inverted mass hierarchy assumptions respectively. The lower left plot shows 1D confidence intervals in \sot\ for different values of \ensuremath{\delta_{CP}}. } \label{fig:jointfreq:2D} \end{figure*} \subsection{\label{sec:jointfreq:results_reactor} Results for T2K combined with the reactor experiment result} The point estimates for the oscillation parameters and the predicted number of events, when the reactor measurements are included in the likelihood function, are given in Tab.~\ref{tab:jointfreq:bestfit_reactor}. The estimate for \sot\ is smaller than the result obtained with T2K data only, shown in Tab.~\ref{tab:jointfreq:bestfit}. The likelihood is maximum for normal mass hierarchy and for $\ensuremath{\delta_{CP}}=-\pi/2$, where the appearance probability is largest, as shown in Fig.~\ref{fig:oscprob}. \begin{table}[tbp] \centering \caption{ Point estimates of the oscillation parameters for the joint 3-flavor oscillation frequentist analysis combined with the results from reactor experiments. 
} \begin{tabular}{c c c c c c c c } \toprule \ \ MH \ \ & \ \ \ensuremath{\Delta m^{2}_{32}\xspace}\ or \ensuremath{\Delta m^{2}_{13}\xspace} \ \ & \ \ \stt \ \ & \ \ \sot \ \ & \ \ \ensuremath{\delta_{CP}} \ \ & \ \ N$_{exp}^{1R\mu}$ \ \ & \ \ N$_{exp}^{1Re}$ \ \ & \ \ \ensuremath{\Delta \chi^2}\xspace \ \ \\ & $(10^{-3} \ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace)$ & & & & & & \\ \hline NH & 2.51 & 0.527 & 0.0248 & -1.55 & 120.4 & 25.87 & 0.00\\ IH & 2.48 & 0.533 & 0.0252 & -1.56 & 121.2 & 23.57 & 0.86 \\ \botrule \end{tabular} \label{tab:jointfreq:bestfit_reactor} \end{table} The profiled \ensuremath{\Delta \chi^2}\xspace as a function of each oscillation parameter is presented in Fig.~\ref{fig:jointfreq:1D_reactor}, and the 68\% and 90\% CL regions for the two mass hierarchies, constructed using \ensuremath{\Delta \chi^2}\xspace\ with respect to the best-fit point (the one for the normal hierarchy), are presented in Figs.~\ref{fig:jointfreq:2D_reactor} and \ref{fig:jointfreq:2D_reactor2}. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.9\textwidth]{fig33.pdf} \caption { Profiled \ensuremath{\Delta \chi^2}\xspace for the joint 3-flavor oscillation analysis combined with the results from reactor experiments. The parameter $\ensuremath{|\Delta m^{2}|\xspace}$ represents \ensuremath{\Delta m^{2}_{32}\xspace}\ or \ensuremath{\Delta m^{2}_{13}\xspace}\ for normal and inverted mass hierarchy assumptions respectively. The horizontal lines show the critical \ensuremath{\Delta \chi^2}\xspace\ values for one dimensional fits at the 68\% and 90\% CL (\ensuremath{\Delta \chi^2}\xspace = 1.00 and 2.71 respectively). 
} \label{fig:jointfreq:1D_reactor} \end{center} \end{figure} \begin{figure*}[tbp] \centering \includegraphics[width=0.5\textwidth]{fig34a.pdf}\includegraphics[width=0.5\textwidth]{fig34b.pdf} \includegraphics[width=0.5\textwidth]{fig34c.pdf}\includegraphics[width=0.5\textwidth]{fig34d.pdf} \caption { 68\% (dashed) and 90\% (solid) CL regions from the analysis that includes results from reactor experiments with different mass hierarchy assumptions using \ensuremath{\Delta \chi^2}\xspace\ with respect to the best-fit point, the one from the fit with normal hierarchy. The parameter $\ensuremath{|\Delta m^{2}|\xspace}$ represents \ensuremath{\Delta m^{2}_{32}\xspace}\ or \ensuremath{\Delta m^{2}_{13}\xspace}\ for normal and inverted mass hierarchy assumptions respectively. } \label{fig:jointfreq:2D_reactor} \end{figure*} \begin{figure}[tbp] \begin{center} \includegraphics[width=0.5\textwidth]{fig35a.pdf}\includegraphics[width=0.5\textwidth]{fig35b.pdf} \caption { Comparison of 68\% (dashed) and 90\% (solid) CL regions combined with the results from reactor experiments with different mass hierarchy assumptions using \ensuremath{\Delta \chi^2}\xspace\ with respect to the best-fit point, the one from the fit with normal hierarchy. The parameter $\ensuremath{|\Delta m^{2}|\xspace}$ represents \ensuremath{\Delta m^{2}_{32}\xspace}\ or \ensuremath{\Delta m^{2}_{13}\xspace}\ for normal and inverted mass hierarchy assumptions respectively. } \label{fig:jointfreq:2D_reactor2} \end{center} \end{figure} The confidence regions obtained in the (\stt, \ensuremath{|\Delta m^{2}|\xspace}) space are compared with the results from Super-Kamiokande~\cite{Himmel:2013jva} and the MINOS~\cite{Adamson:2014vgd} experiments in Fig.~\ref{fig:jointfreq:SKMINOS}. The results from T2K and MINOS used the latest value of \sot\ from~\cite{PDG2013} to fit this parameter whereas the result from SK has \sot\ fixed to the previous reactor value in~\cite{PDG2012}. 
In all three analyses, \ensuremath{\delta_{CP}}\ was removed by profiling. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.6\textwidth]{fig36.pdf} \caption { 68\% (dashed) and 90\% (solid) CL regions for normal (top) and inverted (bottom) mass hierarchy combined with the results from reactor experiments in the (\stt, \ensuremath{\Delta m^{2}_{32}\xspace}) space compared to the results from the Super-Kamiokande~\cite{Himmel:2013jva} and MINOS~\cite{Adamson:2014vgd} experiments. } \label{fig:jointfreq:SKMINOS} \end{center} \end{figure} An analysis using the Feldman and Cousins method was performed for the measurement of \ensuremath{\delta_{CP}}\ including a reactor constraint by creating 4000 toy MC experiments at fixed values of \ensuremath{\delta_{CP}}\ in the interval [-$\pi$, $\pi$] (divided into 51 bins), taking into account statistical fluctuations and systematic variations. The other three oscillation parameters are removed by profiling following the 3-dimensional \ensuremath{\Delta \chi^2}\xspace surface obtained as a result of the joint fit with the reactor constraint. The values of the critical \ensuremath{\Delta \chi^2}\xspace calculated using these toy experiments are overlaid with the curve of \ensuremath{\Delta \chi^2}\xspace as a function of \ensuremath{\delta_{CP}}\ in Fig.~\ref{fig:jointfreq:FC}, and give the following excluded regions for \ensuremath{\delta_{CP}}\ at the 90\% C.L.: \ensuremath{\delta_{CP}} = [0.15,0.83]$\pi$ for normal hierarchy and \ensuremath{\delta_{CP}} = [$-$0.08,1.09]$\pi$ for inverted hierarchy. 
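The Feldman-Cousins critical values can be illustrated with a minimal toy. This hedged sketch replaces the full oscillation fit with a single unit-variance Gaussian measurement of one parameter, so the resulting critical \ensuremath{\Delta \chi^2}\xspace\ simply approaches the $\chi^2$ quantile for one degree of freedom ($\approx 2.71$ at 90\%); in the actual analysis each toy experiment includes statistical fluctuations, systematic variations, and profiling over the other three oscillation parameters, which is what makes the critical values deviate from the asymptotic ones.

```python
import numpy as np

rng = np.random.default_rng(7)

def fc_critical_dchi2(true_value, n_toys=4000, cl=0.90):
    """For a fixed true parameter value, generate toy 'experiments'
    (here: one unit-sigma Gaussian measurement each), compute
    Delta chi^2 = chi^2(true) - chi^2(best fit) for every toy, and
    return the CL-quantile of that distribution as the critical value."""
    measured = rng.normal(true_value, 1.0, n_toys)
    # the unconstrained best fit is the measurement itself, so chi2_min = 0
    dchi2 = (measured - true_value) ** 2
    return float(np.quantile(dchi2, cl))

crit_90 = fc_critical_dchi2(0.0)   # close to the chi^2(1 dof) value 2.71
```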
\begin{figure}[tbp] \begin{center} \includegraphics[width=0.8\textwidth]{fig37.pdf} \caption { Profiled \ensuremath{\Delta \chi^2}\xspace as a function of \ensuremath{\delta_{CP}}\ with the results of the critical \ensuremath{\Delta \chi^2}\xspace values for the normal and inverted hierarchies for the joint fit with reactor constraint, with the excluded regions found overlaid.} \label{fig:jointfreq:FC} \end{center} \end{figure} In order to thoroughly cross-check the analysis described above, an alternate frequentist joint fit analysis was performed which differs in the treatment of the systematic errors. This originated as part of an effort to simplify and reduce the computing power needed for the analysis and to perform a study of the future sensitivity of the experiment~\cite{Abe:2014tzr}. A new set of systematic parameters is used; they multiply the nominal expected number of \ensuremath{\nu_\mu}\xspace\ or \ensuremath{\nu_e}\xspace\ events, with one parameter for each reconstructed energy bin. Results from the alternate analysis agree with the results presented in Secs.~\ref{sec:jointfreq:results} and~\ref{sec:jointfreq:results_reactor}. \section{\label{sec:jointbayes} Joint $\ensuremath{\nu_\mu}\xspace \rightarrow \ensuremath{\nu_\mu}\xspace$ and $\ensuremath{\nu_\mu}\xspace \rightarrow \ensuremath{\nu_e}\xspace$ Bayesian Analysis} This section describes a complementary approach to the analysis detailed in Sec.~\ref{sec:jointfreq}, which uses Bayesian techniques to extract most probable values of oscillation parameters and their uncertainties. Bayesian inference analysis methods construct posterior probabilities of a hypothesis given the data observed by combining prior information with the likelihood function. This technique allows one to naturally include prior information about systematic parameters and external experimental data in the interpretation of the results of the experiment. 
Another distinguishing feature of this analysis is that full marginalization over the systematic parameters is achieved intrinsically, without the assumption that the observables are linear functions of the systematic parameters; the actual dependencies on the nuisance parameters are taken into account. The posterior distribution, produced using Bayes' theorem, is too difficult to compute analytically. We use two numerical methods to perform the high-dimensional integral necessary when computing the posterior distribution: a Markov Chain Monte Carlo (MCMC) in Sec.~\ref{sec:mcmc_analysis} and a sampling method in Sec.~\ref{sec:crosscheck_analysis}, which is used as a cross-check. \subsection{Joint Near-Far Markov Chain Monte Carlo Analysis} \label{sec:mcmc_analysis} \subsubsection{Point estimates} To extract the point estimates of the oscillation parameters from the posterior distribution generated by the MCMC, the density of points in 4-dimensional space was estimated using a kernel density estimator (KDE)~\cite{mills2011efficient,cranmer2001kernel}. A KDE estimates a PDF by smearing the discrete points of an MCMC in the 4 dimensions of interest. The Gaussian width of the smearing was set to be variable, and inversely proportional to the local density of MCMC points; this technique counters potential under-smoothing in low density regions and potential over-smoothing in high density regions. The PDF produced by the KDE was then maximized using MINUIT to find the most probable value. In the case of using only T2K data, there is little sensitivity to the $\delta_{CP}$ parameter, and so a line of most probable values was created by finding the 3-dimensional density of the MCMC at a series of values of $\delta_{CP}$. \subsubsection{Samples} Unlike the frequentist analyses described above, the joint near-far analysis does not use the covariance matrix produced by the ND280 analysis described in Sec.~\ref{sec:BANFF}. 
Instead, this analysis is performed simultaneously on the three ND280 \ensuremath{\nu_\mu}\xspace CC samples and on the SK $\nu_{\mu}$ CC and $\nu_e$ CC samples. By fitting all samples simultaneously, this analysis avoids any error that would come from neglecting non-linear dependencies of the systematic parameters constrained by the ND280 analysis on the oscillation parameters. The systematic uncertainties used for the ND280 samples are nearly identical to those in Sec.~\ref{sec:BANFF}, with the following exceptions: the uncertainties on the cross section ratios $\sigma_{\ensuremath{\nu_e}\xspace}/\sigma_{\ensuremath{\nu_\mu}\xspace}$ and $\sigma_{\bar{\nu}}/\sigma_{\nu}$ are applied, and the NC normalization uncertainties are divided into NC1$\pi^0$, NC$1\pi^{\pm}$, NC coherent, and NCOther for all samples. Additionally, the number of bins in the ND280 detector systematic covariance matrix is reduced to 105, in order to reduce the total number of parameters. There are no differences in the systematic uncertainties for the SK samples. 
Ignoring constant terms, the negative log of the posterior probability is given by \begin{equation} \begin{split} -\ln(P) = & \sum_{i}^{N_{\mathrm{ND280\ bins}}}\left[N^{p}_{i}(\vec{b},\vec{x},\vec{d}) -N^{d}_{i} \ln N^{p}_{i}(\vec{b},\vec{x},\vec{d})\right] \\ & + \sum_{i}^{N_{\mu\ \mathrm{bins}}}\left[N^{p}_{\mu,i}(\vec{\theta}, \vec{b},\vec{x},\vec{s}) -N^{d}_{\mu,i} \ln N^{p}_{\mu,i}(\vec{\theta}, \vec{b},\vec{x},\vec{s})\right] \\ & + \sum_{i}^{N_{e\ \mathrm{bins}}}\left[N^{p}_{e,i}(\vec{\theta},\vec{b},\vec{x},\vec{s}) -N^{d}_{e,i} \ln N^{p}_{e,i}(\vec{\theta}, \vec{b},\vec{x},\vec{s})\right] \\ & + {\textstyle \frac{1}{2}}\, \Delta\vec{b}^T V_b^{-1} \Delta\vec{b} + {\textstyle \frac{1}{2}}\, \Delta\vec{x}^T V_x^{-1} \Delta\vec{x} + {\textstyle \frac{1}{2}}\, \Delta\vec{d}^T V_d^{-1} \Delta\vec{d} \\ & + {\textstyle \frac{1}{2}}\, \Delta\vec{s}^T V_s^{-1} \Delta\vec{s} + {\textstyle \frac{1}{2}}\, \Delta\vec{\theta}_{sr}^T V_{\theta sr}^{-1} \Delta\vec{\theta}_{sr} \ \ .\\ \end{split} \label{eq:likelihood} \end{equation} The vector $\vec{\theta}_{sr}$ contains the solar oscillation parameters and, for combined fits with reactor data, $\sin^2 2\theta_{13}$, with priors described in Sec.~\ref{sec:OA:osc}. The priors on the other oscillation parameters of interest are uniform in $\sin^2 \theta_{13}$ between 0 and 1, $\sin^2 \theta_{23}$ between 0 and 1, $|\Delta m^2_{32}|$ between 0.001 and 0.005 \ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace, and $\delta_{CP}$ between $-\pi$ and $\pi$. Additionally, the prior probabilities of the normal and inverted hierarchies are each 0.5. Priors for the systematic parameters are the multivariate Gaussian terms shown, with the exception of the cross section spectral function parameters, which are given a uniform prior between 0 and 1. In this analysis, both ND280 and SK MC sample events are weighted individually for all parameters in the analysis. This means that each PDF is rebuilt from the MC at every iteration of the MCMC. 
This has the advantage of retaining shape information within each bin of the PDF, which is especially desirable for the oscillation parameters, and also allows a more natural treatment of certain parameters, such as the SK energy scale uncertainty, which may cause events to migrate between bins. The increase in computational load was offset by performing certain calculations on GPUs, including the event-by-event calculation of oscillation probability~\cite{calland2014accelerated}. \subsubsection{Results} The MCMC was run for $5.6\times10^7$ steps using only T2K data, and for $1.4\times10^8$ steps for T2K data combined with reactor experiment results. The most probable values for the oscillation parameters for both analyses are shown in Table~\ref{tab:data_results}. For the T2K-only analysis, the values are shown for $\ensuremath{\delta_{CP}}=0$, as the analysis has little sensitivity to the value of \ensuremath{\delta_{CP}}. The 68\% 1D credible intervals, marginalized over all other parameters, including mass hierarchy, for each of the parameters except $\ensuremath{\delta_{CP}}$ are shown in Table~\ref{tab:data_CI}. 
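For a single parameter, a credible interval can be read directly off the chain: each MCMC step supplies one value of that parameter, so marginalization over everything else, including the mass hierarchy, happens automatically. The sketch below uses a central (equal-tail) interval on a hypothetical Gaussian stand-in posterior; it illustrates the mechanics only and is not the exact interval construction used in the analysis.

```python
import numpy as np

def central_credible_interval(samples, cl=0.68):
    """Equal-tail credible interval from 1D posterior samples.
    Marginalization over the other parameters comes for free,
    because each MCMC step contributes a single value."""
    tail = (1.0 - cl) / 2.0
    return (float(np.quantile(samples, tail)),
            float(np.quantile(samples, 1.0 - tail)))

# stand-in posterior for sin^2(theta_23): Gaussian, mean 0.52, sigma 0.05
rng = np.random.default_rng(0)
samples = rng.normal(0.52, 0.05, 100_000)
lo, hi = central_credible_interval(samples)
```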
\begin{table}[tbp] \caption{Most probable values for oscillation parameters from Bayesian analysis.} \begin{tabular}{llcccc} \toprule & Hierarchy \ \ & $|\Delta m^{2}_{32} |$ & \ \ $\sin^{2}\theta_{23}$ \ \ & \ \ $\sin^{2}\theta_{13}$ \ \ & \ \ $\ensuremath{\delta_{CP}}$ \ \ \\ Analysis & & \ \ $10^{-3}$ \ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace \ \ & & & \\ \hline T2K-only & Inverted & 2.571 & 0.520 & 0.0454 & \ \ 0 (fixed) \ \ \\ T2K+reactor \ \ & Normal & 2.509 & 0.528 & 0.0250 & -1.601 \\ \botrule \end{tabular} \label{tab:data_results} \end{table} \begin{table}[tbp] \caption{68\% Bayesian credible intervals for oscillation parameters.} \begin{tabular}{lcccc} \toprule & $|\Delta m^{2}_{32} |$ & $\sin^{2}\theta_{23}$ & $\sin^{2}\theta_{13}$ \\ Analysis \ \ & \ \ $10^{-3}$ \ensuremath{{\mathrm{\,e\kern -0.1em V^2\!/}c^4}}\xspace \ \ & & \\ \hline T2K-only & [2.46, 2.68] & \ \ [0.470, 0.565] \ \ & \ \ [0.0314, 0.0664] \ \ \\ T2K+reactor & [2.40, 2.62] & [0.490, 0.583] & [0.0224, 0.0276] \\ \botrule \end{tabular} \label{tab:data_CI} \end{table} Figures~\ref{fig:th13_dcp_reactor} and~\ref{fig:th23_dm32_both} show the \ensuremath{\delta_{CP}}\ versus $\sin^2\theta_{13}$ and $\Delta m^2_{32}$ versus $\sin^2\theta_{23}$ credible regions for the T2K-only and T2K+reactor analyses. Note that the contours in Fig.~\ref{fig:th13_dcp_reactor} are marginalized over the mass hierarchy; in particular, the most probable value line appears to be offset from the center of the credible region. This is because the most probable value line is for the preferred inverted hierarchy, and the credible intervals are marginalized over hierarchy. Figure~\ref{fig:dcp_marg_MCMC} shows the posterior probability for \ensuremath{\delta_{CP}}\ with 68\% and 90\% credible intervals for the T2K+reactor combined analysis. 
Figure~\ref{fig:bayes_overlay} shows comparisons of SK $\nu_\mu$ CC and $\nu_e$ CC candidate events with the best-fit spectra produced from the T2K-only and T2K+reactor combined analyses. Each best-fit spectrum is formed by calculating the most probable value for the predicted number of events in each energy bin, using all of the MCMC points from the corresponding analysis. The fit spectrum for $\nu_{\mu}$ CC events does not change appreciably when the reactor prior is included, but the $\nu_e$ CC fit spectrum shows a noticeable reduction in the number of events. \begin{figure} \includegraphics[width=0.7\textwidth]{fig38.pdf} \caption{Credible regions for $\sin^2\theta_{13}$ and $\delta_{CP}$ for T2K-only and T2K+reactor combined analyses. These are constructed by marginalizing over both mass hierarchies. For the T2K-only analysis, the best fit line is shown instead of the best fit point because the analysis has little sensitivity to \ensuremath{\delta_{CP}}.} \label{fig:th13_dcp_reactor} \end{figure} \begin{figure} \includegraphics[width=0.7\textwidth]{fig39.pdf} \caption{Credible regions for $\sin^2\theta_{23}$ and $\Delta m^2_{32}$ for T2K-only and T2K+reactor combined analyses. The normal hierarchy corresponds to positive values of $\Delta m^2_{32}$ and the inverted hierarchy to negative values. } \label{fig:th23_dm32_both} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{fig40.pdf} \caption{The posterior probability for \ensuremath{\delta_{CP}}, marginalized over all other parameters, including mass hierarchy, for the T2K+reactor combined analysis. 
} \label{fig:dcp_marg_MCMC} \end{figure} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{fig41.pdf} \caption{T2K-only and T2K+reactor prior best-fit spectra overlaid with SK $\nu_\mu$ CC and $\nu_e$ CC candidate samples.} \label{fig:bayes_overlay} \end{figure*} Figures~\ref{fig:osc_tri_t2konly} and~\ref{fig:osc_tri_reactor} show the posterior PDFs for the oscillation parameters both singly and pairwise, using MCMC points from the inverted and normal hierarchy respectively, which reflect the most probable mass hierarchy for the T2K-only and T2K+reactor analysis respectively. The plots along the diagonal show the posterior PDFs for each of the four oscillation parameters of interest, marginalized over all other parameters, except for the mass hierarchy. The off-diagonal elements show the pairwise posterior PDFs. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{fig42.pdf} \caption{Distributions of posterior probability between the oscillation parameters of interest for the T2K-only analysis. These posteriors use only MCMC points that are in the inverted hierarchy.} \label{fig:osc_tri_t2konly} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{fig43.pdf} \caption{Distributions of posterior probability between the oscillation parameters of interest for the T2K+reactor analysis. These posteriors use only MCMC points that are in the normal hierarchy. In comparing to Fig.~\ref{fig:osc_tri_t2konly}, note the change in scales for some parameters.} \label{fig:osc_tri_reactor} \end{figure*} Another interesting feature of this analysis is that it provides a natural way to study the preference of the data for normal versus inverted hierarchy and lower versus upper octant in $\theta_{23}$. This is done simply by comparing the total probability (that is, the number of MCMC steps) in the region of interest. Table~\ref{tab:dm_model_comp_t2konly} shows the probability for the various cases for the T2K-only analysis. 
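The step-counting procedure behind Table~\ref{tab:dm_model_comp_t2konly} is straightforward and can be sketched in a few lines. The chain below is a tiny made-up example, not T2K output; the hierarchy is read off the sign of $\Delta m^2_{32}$ and the octant from whether $\sin^2\theta_{23}$ exceeds 0.5.

```python
import numpy as np

def model_probabilities(s2th23, dm2_32):
    """Posterior model probabilities from an MCMC chain by counting
    the fraction of steps in each (hierarchy, octant) region."""
    s2th23 = np.asarray(s2th23)
    dm2_32 = np.asarray(dm2_32)
    return {
        ("NH", "lower"): float(np.mean((dm2_32 > 0) & (s2th23 <= 0.5))),
        ("NH", "upper"): float(np.mean((dm2_32 > 0) & (s2th23 > 0.5))),
        ("IH", "lower"): float(np.mean((dm2_32 < 0) & (s2th23 <= 0.5))),
        ("IH", "upper"): float(np.mean((dm2_32 < 0) & (s2th23 > 0.5))),
    }

# tiny made-up chain: three NH upper-octant steps, one IH lower-octant step
probs = model_probabilities([0.55, 0.60, 0.52, 0.45],
                            [2.5e-3, 2.4e-3, 2.6e-3, -2.5e-3])
# with equal prior odds, the posterior odds ratio is the ratio of fractions
posterior_odds_nh = (probs[("NH", "lower")] + probs[("NH", "upper")]) / \
                    (probs[("IH", "lower")] + probs[("IH", "upper")])
```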
Note that the inverted hierarchy is preferred in this analysis, but the posterior odds ratio\footnote{With the prior odds assumed to be 1, the posterior odds ratio is equivalent to the Bayes factor.} is only 1.2. Table~\ref{tab:dm_model_comp} shows the same for the T2K+reactor combined analysis. In this analysis, the normal hierarchy is preferred, but with a posterior odds ratio of 2.2, the inverted hierarchy is not significantly excluded by the present analysis. To evaluate the dependence of this analysis on the form of the priors on the oscillation parameters, the analysis was repeated with a uniform prior in $\theta_{13}$ and $\theta_{23}$. The credible intervals and model comparison probabilities do not change appreciably with these alternative priors. \begin{table}[tbp] \centering \caption{Model comparison probabilities for normal and inverted mass hierarchies, as well as upper and lower octants, without including reactor data.} \begin{tabular}{c c c c} \hline\hline & NH & IH & Sum\\ \hline $\sin^2\theta_{23}\leq0.5$\ \ &\ \ 0.165\ \ &\ \ 0.200\ \ &\ \ 0.365\ \ \\ $\sin^2\theta_{23}>0.5$&0.288&0.347&0.635\\ \hline Sum & 0.453& 0.547&1.0\\ \hline\hline \end{tabular} \label{tab:dm_model_comp_t2konly} \end{table} \begin{table}[tbp] \centering \caption{Model comparison probabilities for normal and inverted mass hierarchies, as well as upper and lower octants, including reactor data.} \begin{tabular}{c c c c} \hline\hline & NH & IH & Sum\\ \hline $\sin^2\theta_{23}\leq0.5$\ \ &\ \ 0.179\ \ &\ \ 0.078\ \ &\ \ 0.257\ \ \\ $\sin^2\theta_{23}>0.5$&0.505&0.238&0.743\\ \hline Sum & 0.684& 0.316&1.0\\ \hline\hline \end{tabular} \label{tab:dm_model_comp} \end{table} \subsection{Cross-check analysis} \label{sec:crosscheck_analysis} A second Bayesian joint analysis (JB2) is used to cross-check the results from the analysis described above (JB1). 
Like the frequentist analyses, JB2 uses the output from the ND280 analysis described in Sec.~\ref{sec:BANFF} to constrain some of the systematic uncertainties, by applying them as prior probability densities. Also, JB2 does not use (by default) the reconstructed energy spectrum for $\ensuremath{\nu_e}\xspace$ candidate events, but instead the 2D distribution of the momentum and angle with respect to beam direction $(p_{e},\theta_{e})$ of the particle reconstructed as an electron in those events. This is similar to what was used in the previously reported electron neutrino appearance observation~\cite{Abe:2013hdq}. JB2 can also use the shape of the reconstructed energy spectrum for $\ensuremath{\nu_e}\xspace$ candidate events, so that the results of the two analyses can be compared in both cases. On a technical level, MCMC is not used in this second analysis to marginalize over the nuisance parameters; the integration is done numerically by averaging the posterior probability over 10,000 throws of those parameters following their prior distribution. Finally, a second technical difference is that in JB2 the weighting is not done event by event but by $(p_{e},\theta_{e})$ bin. \subsection{Comparison of analyses} \subsubsection{Comparison of Bayesian joint analyses} The results obtained with the two joint Bayesian analyses are very similar, both in terms of posterior probabilities for the different models and credible intervals for the oscillation parameters. The comparison in the case of the posterior probability for $\delta_{CP}$ is shown in Fig.~\ref{fig:CompPosterior}: the posterior probabilities obtained by the two analyses are similar, and most of the difference comes from JB2 using the $(p_{e},\theta_{e})$ spectrum shape for $\ensuremath{\nu_e}\xspace$ candidate events instead of the reconstructed energy spectrum shape as JB1 does. 
This also shows that at the current statistics, fitting the near and far detector samples at the same time and using the output of the near detector analysis described in Sec.~\ref{sec:BANFF} are equivalent. \begin{figure} \includegraphics[width=0.6\textwidth]{fig44.pdf} \caption{Posterior probabilities for $\delta_{CP}$ obtained by the two joint Bayesian analyses using the reactor experiments prior for $\sin^{2}\theta_{13}$.} \label{fig:CompPosterior} \end{figure} \subsubsection{Treatment of the systematic uncertainties} We also compare, using JB2, the marginalization and profiling approaches described in Sec.~\ref{sec:OA}C to reduce the dimensionality of the likelihood. In the case of $\delta_{CP}$, the marginal (obtained by integrating the product of the likelihood and priors over the nuisance parameters) and profile (obtained by maximizing the likelihood with respect to those parameters) likelihoods are visibly different, as can be seen in Fig.~\ref{fig:CompTreatment}. Such differences are expected, as some of the nuisance parameters appear in a non-Gaussian form and enter the likelihood non-linearly. Within the Bayesian framework, only marginalization is well motivated. \begin{figure} \includegraphics[width=0.6\textwidth]{fig45.pdf} \caption{Marginal and profile likelihoods of the T2K data with reactor constraint assuming normal hierarchy.} \label{fig:CompTreatment} \end{figure} \section{\label{sec:conclusions} Conclusions} With the data collected between 2010 and 2013 we have analyzed the \ensuremath{\nu_\mu}\xspace -disappearance to estimate the two oscillation parameters, \ensuremath{|\Delta m^{2}|\xspace}\ and \stt. For the first time, we have used a combined analysis of \ensuremath{\nu_\mu}\xspace -disappearance and \ensuremath{\nu_e}\xspace -appearance, to advance our knowledge of the oscillation parameters \ensuremath{|\Delta m^{2}|\xspace}, \stt, \sot, \ensuremath{\delta_{CP}}, and the mass hierarchy.
Systematic uncertainties have been carefully assessed in these analyses, and their effect is small compared to the statistical errors. Our understanding of neutrino oscillation will continue to improve as we collect more data in the coming years, in both neutrino and anti-neutrino mode~\cite{Abe:2014tzr}. The general approach followed in this paper, which couples the separate analyses of the beamline, neutrino interactions, near detectors, and far detector through sets of systematic parameters and their covariances, will be extended to deal with additional information from anti-neutrino data and from additional selections with the near detector data.
\section{Introduction.}\label{SectionIntro} In 2003, Bondal and Van den Bergh \cite[Theorem 2.2]{BonVanB} defined what it means for an object to (strongly) generate a triangulated category. The definitions were inspired by the close relation between certain types of triangulated categories having a strong generator and being saturated, i.e., such that every contravariant cohomological functor of finite type to vector spaces is representable. Categories that admit a strong generator were then called \textit{regular}. Moreover, in the same article Bondal and Van den Bergh showed that whenever $X$ is a smooth variety, $\textbf{D}_{\text{perf}}(X)$ is regular if, and only if, $X$ can be covered by open affine subschemes $Spec(R_i)$ with each $R_i$ of finite global dimension. Bondal and Van den Bergh then asked whether this characterization could be extended to quasicompact, separated schemes. Over the next decade, several steps followed in this direction. First, the case where $X$ is regular and of finite type over a field $k$ was proved by both Orlov \cite[Theorem 3.27]{Orlov} and Rouquier \cite[Theorem 7.38]{Rouquier}. Rouquier's paper is also responsible for the generality of the following important theorem. \begin{theorem}[Rouquier] Let $R$ be a noetherian, commutative ring. Let $\mathcal{T}$ be a regular triangulated category proper over $R$, and suppose that $\mathcal{T}$ is idempotent complete. Then an $R$-linear functor $H: \mathcal{T} \rightarrow R-Mod$ is representable if and only if \begin{itemize} \item[i)] $H$ is homological, and \item[ii)] for any object $X \in \mathcal{T}$, the direct sum $\oplus^{\infty}_{i=-\infty} H(\Sigma^i X)$ is a finite $R$-module. \end{itemize} \end{theorem} This motivates finding examples of regular, idempotent complete triangulated categories proper over a noetherian ring $R$.
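As orientation, consider the affine case $\mathcal{T} = \textbf{D}_{\text{perf}}(Spec(k))$ for $k$ a field (a standard example, included here as an aside). Every object is quasi-isomorphic to its cohomology,
\begin{align*}
X \simeq \bigoplus_{i \in \Z} \Sigma^{-i} H^i(X),
\end{align*}
which is a finite coproduct of suspensions of $k$, so $G = k$ is a strong generator and $\textbf{D}_{\text{perf}}(Spec(k))$ is regular.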
In particular, it is a well-known fact that the category $\textbf{D}_{\text{perf}}(X)$, for $X$ a quasicompact, quasiseparated scheme, is idempotent complete. In 2017, Neeman \cite{Amnon} proved the conjecture of Bondal and Van den Bergh: \begin{theorem}[Neeman] \label{IntroTheo1} Let $X$ be a quasicompact, separated scheme. Then $\textbf{D}_{\text{perf}}(X)$ is regular if, and only if, $X$ can be covered by open affine subschemes $Spec(R_i)$, with each $R_i$ of finite global dimension. \end{theorem} \begin{remark}\label{rmk1.3} One direction of Theorem \ref{IntroTheo1} has been proved in full generality. If $\textbf{D}_{\text{perf}}(X)$ is regular, one may show that if $U = Spec(R)$ is any open affine subscheme of $X$, then $R$ is of finite global dimension. This claim follows from Thomason and Trobaugh \cite{ThTro}, who show that the restriction functor $j^*: \textbf{D}_{\text{perf}}(X) \to \textbf{D}_{\text{perf}}(U)$ is the idempotent completion of the Verdier quotient map. If $G \in \textbf{D}_{\text{perf}}(X)$ is a strong generator, then $j^* G \in \textbf{D}_{\text{perf}}(U)$ is also a strong generator. By \cite[Theorem 7.25]{Rouquier}, this implies that $R$ must be of finite global dimension. \end{remark} One might ask if the separated condition could be weakened to quasiseparated. As shown above, one of the main applications involves idempotent complete triangulated categories, and $\textbf{D}_{\text{perf}}(X)$ is an idempotent complete triangulated category for $X$ a quasicompact, quasiseparated scheme. This paper gives one step in this direction, extending Theorem \ref{IntroTheo1}: we show that for quasicompact, quasiseparated schemes that admit a \textit{separator}, the theorem holds. \begin{theorem} \label{main} Let $X$ be a quasicompact, quasiseparated scheme that admits a separator. Then $\textbf{D}_{\text{perf}}(X)$ is regular if, and only if, $X$ can be covered by open affine subschemes $Spec(R_i)$ with each $R_i$ of finite global dimension.
\end{theorem} A separator is a morphism, with a certain universal property, from a quasicompact, quasiseparated scheme to a particular quasicompact separated scheme, introduced by Ferrand and Kahn in \cite{Separator}. One direction of the proof of Theorem \ref{main} is identical to Remark \ref{rmk1.3}, so it remains to show that $\textbf{D}_{\text{perf}}(X)$ is regular if $X$ can be covered by affines of finite global dimension. Under the assumption that a \textit{separator} exists, the main idea is to pull back the strong generator from the separated scheme and to show that it is again a strong generator in $\textbf{D}_{\textbf{qc}}(X)$. Not all quasiseparated schemes admit a separator, but several examples may be found in \cite{Separator}. \section{Preliminaries.}\label{SecBackground} \subsection{Local Isomorphism}\label{SubSecLocIso} This section follows \cite[Appendix A]{Separator}. Another main source, as usual, is \cite[§4.4, §5]{EGA I}. We introduce this property because the separator is a local isomorphism, a fact which simplifies the proof of the main Theorem \ref{thm3.5} in Section 3. \begin{defi} [\textbf{Local Isomorphism}] A morphism of schemes $f: X \to Y$ is a \textit{local isomorphism} if every point of $X$ is contained in an open $U \subset X$ such that $f$ induces an open immersion $U \to Y$. \end{defi} Local isomorphisms capture the idea of gluing open sets or covering spaces. If $U, V$ are opens of $X$ on which $f$ induces open immersions, the image $f(U \cup V)$ is obtained by gluing $f(U)$ and $f(V)$ --- which are isomorphic to $U$ and $V$ respectively --- along the open set $f(U) \cap f(V)$, which contains $f(U \cap V)$. A local isomorphism is necessarily open, flat and locally of finite presentation \cite[6.2.1]{EGA I}. From the local property of the morphism it also follows that for any point $x \in X$, the induced morphism $\theta_x \colon \mathcal{O}_{Y,f(x)} \to \mathcal{O}_{X,x}$ is an isomorphism.
The reverse direction also holds: if $f$ is locally of finite presentation, and $\theta_x$ is an isomorphism for all $x \in X$, then $f$ is a local isomorphism \cite[6.6.4]{EGA I}. \begin{prop} \label{LocIso1} \cite[Proposition A.3.1]{Separator} Let $f: X \to Y$ be a separated local isomorphism. If $f$ induces an injection on all maximal points of $X$, then $f$ is an open immersion. \end{prop} \subsection{Separator.}\label{SubSecSep} A separator of a morphism $f: T \to S$ is another morphism $h$ which is universal among morphisms from $T$ to separated $S$-schemes $E$. This section follows \cite{Separator}, which contains in-depth explanations and further properties of separators and local isomorphisms. \begin{defi} \label{Def2.1} Let $f:T \to S$ be a morphism of schemes. A \textit{separator} of $f$, or a \textit{separator} through $f$, is a morphism of $S$-schemes $h: T \to E$, with $E$ separated over $S$, such that the following properties are satisfied: \begin{itemize} \item[i)] $h$ is a quasicompact, quasiseparated, surjective local isomorphism, and \item[ii)] the diagonal morphism $\Delta_h$ is schematic dominant. \end{itemize} \end{defi} If $S = Spec(\Z)$, we call $h$ a separator of $T$. We observe that \begin{itemize} \item[i)] A morphism $f \colon Y \rightarrow X$ is \textit{schematic dominant} if $\mathcal{O}_X \rightarrow f_*(\mathcal{O}_Y)$ is injective. \item[ii)] A morphism $f : T \to S$ that admits a separator is quasiseparated. This follows from the fact that the diagonal $\Delta_f$ factors as \begin{align*} T \xrightarrow{\Delta_h} T \times_E T \xrightarrow{u} T \times_S T \end{align*} where $u$ is the morphism induced by the base change. Since $\Delta_h$ is quasicompact (because $h$ is quasiseparated) and $u$ is a closed immersion, the composition is quasicompact. \item[iii)] If $T$ is integral, property $i)$ of Definition \ref{Def2.1} implies property $ii)$ of Definition \ref{Def2.1}.
\end{itemize} The separator has several desirable properties, all of which are explained in depth in \cite{Separator}. We are only interested in the following: \begin{prop} \label{hopenimm} Let $f: T \to S$ be a morphism and $T \xrightarrow{h} E \xrightarrow{g} S$ a separator of $f$. \begin{itemize} \item[i)] Let $U$ be an open set of $T$ that is separated over $S$. Then the restriction of $h$ induces an isomorphism of $U$ onto $h(U)$. In particular, $h(U)$ is open and, if $T$ is already separated, $h$ is an isomorphism. \item[ii)](Universal Property) For all $S$-morphisms $h': T \to E'$ with $E'$ separated over $S$, there exists a unique $S$-morphism $u : E \to E'$ such that $h'=uh$. \end{itemize} \end{prop} \begin{proof} \ \begin{itemize} \item[i)] Let $U$ be an open set of $T$ that is separated over $S$. First, notice that the morphism $U \to E$ induced from $h$ is a separated morphism, since both $U$ and $E$ are separated over $S$ and $h$ is quasi-separated. Second, by the definition of a separator, $\Delta_h$ is schematic dominant. By \cite[2.2.1]{Separator}, this implies that the restriction of $h$ to the maximal points of $T$ is injective. Hence, by Proposition \ref{LocIso1}, the restriction of $h$ to $U$ is an open immersion, and so it induces an isomorphism of $U$ onto $h(U)$. \item[ii)] Let $h':T \to E'$ be an $S$-morphism with $E'$ separated over $S$. Then, there exists a commutative diagram $$ \begin{tikzcd}[column sep=5pc] T \arrow{d}[swap]{\Delta_{h'}} \arrow{r}{\Delta_h} & T \times_E T \arrow{d}{\phi} \arrow[dashrightarrow]{dl}{\exists w} \\ T \times_{E'} T \arrow{r}[swap]{{\phi'}} & T \times_S T \end{tikzcd} $$ where the morphisms $\phi, \phi'$ are closed immersions, since both $E$ and $E'$ are separated over $S$. Since $\Delta_h$ is schematic dominant by assumption and both $\phi$ and $\phi'$ are closed immersions, the requirements for the existence of $w$ are met by the uniqueness of the schematic closure \cite[A.5.3]{Separator}.
Hence the diagram $$ \begin{tikzcd}[column sep=5pc] T \times_E T \arrow{d}[swap]{w} \arrow[r, shift left] \arrow[r, shift right] & T \arrow[d,equal] \arrow{r}{h} & E \arrow[dashrightarrow]{d}{u} \\ T \times_{E'} T \arrow[r, shift left] \arrow[r, shift right] & T \arrow{r}[swap]{h'} & E' \end{tikzcd} $$ commutes, and the result follows. \end{itemize} \end{proof} Finally, it is important to understand when a separator exists. The following theorem gives a workable criterion. \begin{theorem}\label{Thm2.5.1} Let $f:T \to S$ be a quasi-separated morphism, and let $T_1 \subset T \times_S T$ be the schematic closure of the diagonal morphism $\Delta_f : T \to T \times_S T$. Then $f$ admits a separator $h$ if, and only if, every irreducible component of $T$ is locally finite over $S$, i.e., every point has an open neighborhood which is disjoint from all but finitely many irreducible components of $T$, and both composite morphisms induced by the projections \begin{align*} T_1 \rightarrow & T \times_S T \rightrightarrows T \end{align*} are flat and of finite type. \end{theorem} The proof may be found in \cite[Theorem 5.1.1]{Separator}. \begin{cor} \label{Cor2.5.3} Let $T$ be a quasiseparated $S$-scheme whose irreducible components are locally finite. Then $T$ admits a separator $h: T \to E$ if, and only if, for all affine opens $U,V$ of $T$, the scheme $U \cup V$ admits a separator. \end{cor} \begin{proof} It suffices to show that $T$ has a cover by affine opens $U_{\lambda}$ such that the union $U_{\lambda} \cup U_{\mu}$ of any two opens in the cover admits a separator. First, let $U,V \subset T$ be affine opens. Since $T$ is quasi-separated, the intersection of any affine open with $U \cup V$ is quasi-compact. Recall that a subset $Z$ of a topological space $X$ is said to be \textit{retrocompact} if $Z \cap U$ is quasi-compact for every quasi-compact open subset $U$ of $X$. So $U \cup V$ is retrocompact in $T$.
Hence, it suffices to show that for every retrocompact open $U \subset T$, $h(U)$ is open and the morphism $U \to h(U)$ is a separator of $U$. That $h(U)$ is open in $E$ follows from the fact that $U$, by hypothesis, is retrocompact. It remains to show that $h(U)$ is separated over $S$. Since $h$ is a local isomorphism, we have the induced isomorphism $h': U \to h(U)$, which gives rise to the commutative diagram $$ \begin{tikzcd}[column sep=5pc] U \arrow{d}[swap]{i} \arrow{r}{\Delta_{h'}} & U \times_{h(U)} U \arrow{r}{u} & U \times_E U \arrow{d}{i \times i} \\ T \arrow{r}[swap]{\Delta_h} & T \times_E T \arrow[r, equal, shift left] & T \times_E T \end{tikzcd} $$ where $u$ is an isomorphism, since $h(U) \to E$ is an immersion. Since $i \times i$ is an open immersion and $\Delta_h$ is quasi-compact and schematic dominant, $\Delta_{h'}$ is also schematic dominant. Finally, for $h'$ to be a separator, it remains to show that it is quasi-compact. But $h'$ can be expressed as the composition of two quasi-compact morphisms, i.e., \begin{align*} U \xrightarrow{i'} h^{-1}(h(U)) \xrightarrow{h} h(U) \end{align*} where $i'$ is the open immersion induced by the inclusion $i$. Therefore the condition is necessary. Next, notice that $(U \cup V) \times (U \cup V) \subset T \times T$ is the union of four canonical opens, namely, $U \times U, V \times V, U \times V, V \times U$. Let $T_1$ be the schematic closure of the diagonal in $T \times T$. Then both $U \times U$ and $V \times V$ are isomorphic via the projection to $U$ and $V$ respectively in $T$, hence flat and of finite type. It suffices to work with $U \times V$. Let $W = T_1 \cap (U \times V)$. By Theorem \ref{Thm2.5.1}, both projections $d_1 : W \to U$ and $d_0 : W \to V$ are flat and of finite type. Since $T$ is quasi-separated, the open immersions $U \to T$ and $V \to T$ are (flat and) of finite type.
The open sets $U \times V$, with $U$ and $V$ affine, cover $T \times T$, so the two projections of $T_1$ to $T$ are flat and of finite type and, again by Theorem \ref{Thm2.5.1}, the corollary follows. \end{proof} In \cite{Separator}, Ferrand and Kahn exhibit some schemes that admit a separator and several others that do not. We end this section with some examples: \begin{itemize} \item[i)] Every regular locally noetherian scheme of dimension 1 admits a separator; for instance, if $T$ is a noetherian Dedekind scheme over $Spec(\Z)$. \item[ii)] If $f: T \to S$ is étale of finite presentation and $S$ is normal, then $f$ admits a separator. \item[iii)] Any normal scheme of finite type over a noetherian ring admits an open subscheme containing all points of codimension 1, and this subscheme has a separator. \end{itemize} \subsection{Strong Generators of $\textbf{D}_{\textbf{qc}}(\Xs)$}\label{SubSecAmnons} \subsubsection{Strong Generators of a Triangulated Category} We begin with some definitions, terminology and key properties of a strongly generated category. Most of what is written here follows the first few chapters of \cite{Amnon}. \begin{defi} Let $\mathcal{T}$ be a triangulated category and $G \in \mathcal{T}$ an object. The full subcategory $\Gn_n \subset \mathcal{T}$ is defined inductively as follows: \begin{itemize} \item[i)] $\Gn_1$ is the full subcategory consisting of all direct summands of finite coproducts of suspensions of $G$. \item[ii)] For $n>1$, $\Gn_n$ is the full subcategory consisting of all objects that are a direct summand of an object $y$, where $y$ fits into a triangle $x \to y \to z$, with $x \in \Gn_1$ and $z \in \Gn_{n-1}$. \end{itemize} \end{defi} \begin{defi} Let $G$ be an object in a triangulated category $\mathcal{T}$. Then $G$ is said to be a \textit{classical generator} if $\mathcal{T} = \cup_{n=1}^{\infty} \Gn_n$ and a \textit{strong generator} if there exists an $n \in \Z_{\geq 1}$ with $\mathcal{T} = \Gn_n$.
\end{defi} \begin{defi} A triangulated category $\mathcal{T}$ is called \textit{regular} or \textit{strongly generated} if a strong generator exists. \end{defi} \begin{remark} \ \begin{itemize} \item One might also say that a regular category $\mathcal{T}$ is built from $G$ in finitely many steps. \item In \cite{Amnon}, there is a general discussion of further properties of triangulated categories, such as being \textit{proper} or \textit{idempotent complete}. It also gives insight into the importance of studying such objects. \end{itemize} \end{remark} \begin{defi} \label{deficoprod} Let $\mathcal{T}$ be a triangulated category with coproducts, $G \in \mathcal{T}$ an object and $A < B$ integers. Then $\overline{\Gn}_{n}^{[A,B]} \subset \mathcal{T}$ is the full subcategory defined inductively as follows: \begin{itemize} \item[i)] $\overline{\Gn}_{1}^{[A,B]}$ is the full subcategory consisting of all direct summands of arbitrary coproducts of objects in the set $\{ \Sigma^{-i} G, A \leq i \leq B \}$. \item[ii)] $\overline{\Gn}_{n}^{[A,B]}$ is the full subcategory consisting of all objects that are a direct summand of an object $y$, where $y$ fits into a triangle $x \to y \to z$, with $x \in \overline{\Gn}_{1}^{[A,B]}$ and $z \in \overline{\Gn}_{n-1}^{[A,B]}$. \end{itemize} \end{defi} The difference between the categories $\Gn_n$ and $\overline{\Gn}_{n}^{[A,B]}$ is that $\overline{\Gn}_{n}^{[A,B]}$ allows arbitrary coproducts, but restricts the allowed suspensions to a fixed range from $A$ to $B$. \subsubsection{Operations between subcategories} There are several ways to create a new subcategory from others. Some of them will be defined in this section, which follows \cite{Amnon}. \begin{defi} Let $\mathcal{T}$ be a triangulated category with $\mathcal{A}$ and $\mathcal{B}$ two subcategories of $\mathcal{T}$.
Then: \begin{itemize} \item[i)] $\mathcal{A} \star \mathcal{B}$ is the full subcategory of all objects $y$ for which there exists a triangle $x \to y \to z$ with $x \in \mathcal{A}$ and $z \in \mathcal{B}$. \item[ii)] $add(\mathcal{A})$ is the full subcategory containing all finite coproducts of objects in $\mathcal{A}$. \item[iii)] If $\mathcal{T}$ is closed under coproducts, then $Add(\mathcal{A})$ is the full subcategory containing all (set-indexed) coproducts of objects in $\mathcal{A}$. \item[iv)] If $\mathcal{A}$ is a full subcategory, then $smd(\mathcal{A})$ is the full subcategory of all direct summands of objects in $\mathcal{A}$. \end{itemize} \end{defi} Note that the empty coproduct is $0$, hence $0 \in add(\mathcal{A}) \subset Add(\mathcal{A})$ for any $\mathcal{A}$. \begin{defi} Let $\mathcal{T}$ be a triangulated category and $\mathcal{A}$ a subcategory. Define: \begin{align*} \text{i)} \ coprod_1(\mathcal{A}) &:= add(\mathcal{A}); & coprod_{n+1}(\mathcal{A}) &:= coprod_1(\mathcal{A}) \star coprod_n(\mathcal{A}). \\ \text{ii)} \ Coprod_1(\mathcal{A}) &:= Add(\mathcal{A}); & Coprod_{n+1}(\mathcal{A}) &:= Coprod_1(\mathcal{A}) \star Coprod_n(\mathcal{A}).\\ \text{iii)} \ coprod(\mathcal{A}) &:= \cup^{\infty}_{n=1} coprod_n(\mathcal{A}).\\ \end{align*} $\text{iv) } Coprod(\mathcal{A}) \text{ is the smallest strictly full subcategory of } \mathcal{T}$ (assumed to have coproducts) containing $\mathcal{A}$ and satisfying \begin{align*} Add(Coprod(\mathcal{A})) \subset Coprod(\mathcal{A}) & \ & \text{and} & \ & Coprod(\mathcal{A}) \star Coprod(\mathcal{A}) \subset Coprod(\mathcal{A}). \end{align*} \end{defi} \begin{remark} The diagram $$ \begin{tikzcd}% coprod_n(\mathcal{A}) \arrow[hookrightarrow]{r} \arrow[hookrightarrow]{d} & coprod(\mathcal{A}) \arrow[hookrightarrow]{d} \\ Coprod_n(\mathcal{A}) \arrow[hookrightarrow]{r} & Coprod(\mathcal{A}) \end{tikzcd}% $$ commutes.
Moreover, the associativity of the $\star$ operation gives that \begin{align*} coprod_m(\mathcal{A}) \star coprod_n(\mathcal{A}) = & \ coprod_{m+n}(\mathcal{A}), \\ Coprod_m(\mathcal{A}) \star Coprod_n(\mathcal{A}) = & \ Coprod_{m+n}(\mathcal{A}). \end{align*} It can also be shown that $Coprod_1 (Coprod_n (\mathcal{A})) = Add (Coprod_n (\mathcal{A})) = Coprod_n(\mathcal{A})$. Hence $Coprod_n (Coprod_m (\mathcal{A})) \subset Coprod_{nm}(\mathcal{A})$. \end{remark} The following lemma may be found in \cite[Lemma 1.7]{Amnon} and will be used once, to prove the next corollary. \begin{lemma} \label{Lemma1.7} Let $\mathcal{T}$ be a triangulated category with coproducts, $\mathcal{T}^c$ be the subcategory of compact objects in $\mathcal{T}$, and let $\mathcal{B}$ be a subcategory of $\mathcal{T}^c$. Then \begin{itemize} \item[(i)] For $x \in Coprod_n(\mathcal{B})$ and $s \in \mathcal{T}^c$, any map $s \to x$ factors as $s \to b \to x$ with $b \in coprod_n(\mathcal{B})$. \item[(ii)] For $x \in Coprod(\mathcal{B})$ and $s \in \mathcal{T}^c$, any map $s \to x$ factors as $s \to b \to x$ with $b \in coprod(\mathcal{B})$. \end{itemize} \end{lemma} \begin{proof} \cite[Lemma 1.7]{Amnon} \end{proof} \begin{cor} \label{lemma1.8} Let $\mathcal{T}$ be a triangulated category with coproducts, $\mathcal{T}^c$ be the subcategory of compact objects in $\mathcal{T}$, and let $\mathcal{B}$ be a subcategory of $\mathcal{T}^c$. Then \begin{itemize} \item[(i)] Any compact object in $Coprod_n(\mathcal{B})$ belongs to $smd(coprod_n(\mathcal{B}))$. \item[(ii)] Any compact object in $Coprod(\mathcal{B})$ belongs to $smd(coprod(\mathcal{B}))$. \end{itemize} \end{cor} \begin{proof} Let $x$ be a compact object in $Coprod_n(\mathcal{B})$. The identity map $1 \colon x \to x$ is a morphism from the compact object $x$ to $x \in Coprod_n(\mathcal{B})$. By Lemma \ref{Lemma1.7}, the morphism factors through an object $b \in coprod_n(\mathcal{B})$. Thus $x$ is a direct summand of $b$ and the result follows.
The same proof, with the subscript $n$ removed, proves item (ii). \end{proof} The next three results follow from these definitions; proofs may be found in the background section of \cite{Amnon}. \begin{lemma} \label{Amnlemma1.8} Let $\mathcal{T}$ be a triangulated category with coproducts, and let $\mathcal{B}$ be an arbitrary subcategory. Then \begin{align*} Coprod_n(\mathcal{B}) \subset smd(Coprod_n(\mathcal{B})) \subset Coprod_{2n} (\mathcal{B} \cup \Sigma \mathcal{B}). \end{align*} \end{lemma} \begin{remark} \label{rmkcoprod} Let $\mathcal{T}$ be a triangulated category with coproducts, and let $\mathcal{B} \subset \mathcal{T}$ be a subcategory. For any pair of integers $m \leq n$ define \begin{align*} \mathcal{B}[m,n] = \bigcup^{-m}_{i=-n} \Sigma^i\mathcal{B}. \end{align*} \end{remark} \begin{cor} \label{corcoprod} For integers $N >0, A\leq B$ the identity $\overline{\Gn}_N^{[A,B]} = smd(Coprod_N(G[A,B]))$ always holds. Furthermore, one has the inclusions: \begin{align*} Coprod_N(G[A,B]) \subset \ \overline{\Gn}_N^{[A,B]} \subset Coprod_{2N} (G[A-1,B]). \end{align*} \end{cor} From these two results, one concludes that, as far as finiteness conditions are concerned, there is no loss of generality in working with $Coprod_N(G[A,B])$ instead of $\overline{\Gn}_N^{[A,B]}$, and that $smd$ does not change the finiteness of the category generated by $G[A,B]$. So we may work with $Coprod_n(G[A,B])$, which behaves well with respect to $smd$ and the $\star$ operations. We end this section with a brief discussion of the subcategory $Coprod_n(G[A,B])$. As stated in Definition \ref{deficoprod}, $Coprod_n(G[A,B])$ is a full subcategory. Together with Remark \ref{rmkcoprod}, we may leave the suspensions unrestricted by considering $Coprod_n(G[-\infty, \infty])$. This means that any object $x \in Coprod_n(G[-\infty, \infty])$ factors through $Coprod_n(G[A, B])$ for some integers $A$ and $B$. As usual, it is possible that $\mathcal{T} = Coprod_n(G[-\infty, \infty])$, which motivates the following definition.
\begin{defi} Let $\mathcal{T}$ be a triangulated category with coproducts and $G \in \mathcal{T}$ an object of $\mathcal{T}$. Then $\mathcal{T}$ is said to be \textit{fast generated by $G$} if $\mathcal{T} = Coprod_n(G[-\infty, \infty])$ for some $n \in \Z_{\geq 1}$. \end{defi} When $G$ is a compact object, Corollary \ref{corcoprod} tells us that if $\mathcal{T} = \overline{\Gn}_{n}^{[-\infty,\infty]}$, then $\mathcal{T}$ is fast generated. In this paper, we will always consider the case when $G$ is a compact generator. \subsection{The $\textbf{D}_{\textbf{qc}}(\Xs)$ case} Let $\Xs$ be a quasicompact separated scheme. One may consider the category $\textbf{D}_{\textbf{qc}}(\Xs)$, the unbounded derived category of cochain complexes of sheaves of $\mathcal{O}_{\Xs}$-modules with quasicoherent cohomology, and let $\textbf{D}_{\textbf{perf}}(X)$ be the subcategory of compact objects. Although the main result is about $\textbf{D}_{\textbf{perf}}(X)$, the next result is the reason why we may work in the bigger triangulated category $\textbf{D}_{\textbf{qc}}(X)$, which has coproducts for $X$ quasicompact and quasiseparated and is, moreover, compactly generated. \begin{prop} \label{propreduc} Let $X$ be a quasicompact, quasiseparated scheme and $G \in \textbf{D}_{\textbf{perf}}(X)$ be a compact generator of $\textbf{D}_{\textbf{qc}}(X)$. If $\textbf{D}_{\textbf{qc}}(X)$ is fast generated by $G$, then $G$ strongly generates $\textbf{D}_{\textbf{perf}}(X).$ \end{prop} \begin{proof} Consider $\mathcal{B} = \{ \Sigma^i G, i \in \Z \}$. Then Corollary \ref{lemma1.8} gives that $\textbf{D}_{\textbf{perf}}(X) = smd(coprod_n(\mathcal{B}))$, which implies that $G$ strongly generates $\textbf{D}_{\textbf{perf}}(X)$. \end{proof} The strategy should now be clear: under the conditions of Theorem \ref{main}, if we show that $\textbf{D}_{\textbf{qc}}(X)$ is fast generated by a compact generator, then the main result follows from Proposition \ref{propreduc}.
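For instance (a standard example, included as an aside), if $X = Spec(k)$ for $k$ a field, then every complex of $k$-vector spaces splits, so every object of $\textbf{D}_{\textbf{qc}}(X)$ is a coproduct of suspensions of $G = \mathcal{O}_X$; hence
\begin{align*}
\textbf{D}_{\textbf{qc}}(X) = Coprod_1(G[-\infty, \infty]),
\end{align*}
i.e., $\textbf{D}_{\textbf{qc}}(X)$ is fast generated by $G$, and Proposition \ref{propreduc} recovers the fact that $G$ strongly generates $\textbf{D}_{\textbf{perf}}(X)$.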
We finish this section with two more results from \cite{Amnon}, stated without proof. \begin{theorem}[Neeman] \label{Amn6.2} Let $j: V \to \Xs$ be an open immersion of quasicompact, separated schemes, and let $G$ be a compact generator for $\textbf{D}_{\textbf{qc}}(\Xs)$. If $H$ is any compact object of $\textbf{D}_{\textbf{qc}}(V)$, and we are given integers $n, a \leq b$, then there exist integers $N, A \leq B$ so that $\text{Coprod}_n (\textbf{R}j_*H[a,b]) \subset \text{Coprod}_N(G[A,B])$. \end{theorem} \begin{proof} \cite[Theorem 6.2]{Amnon} \end{proof} \begin{theorem}[Neeman] \label{Amnthm} Let $\Xs$ be a quasicompact separated scheme. If $\Xs$ can be covered by affine subschemes $Spec(R_i)$ with each $R_i$ of finite global dimension, then there exists a compact generator $G$ that fast generates $\textbf{D}_{\textbf{qc}}(\Xs)$, i.e., $\textbf{D}_{\textbf{qc}}(\Xs) = Coprod_n(G[-\infty, \infty])$. \end{theorem} \begin{proof} \cite[Theorem 2.1]{Amnon} \end{proof} \section{Schemes with Separator} Throughout this section, assume $X$ to be a quasicompact, quasiseparated scheme with separator $f: X \to \Xs$. Without loss of generality, $X$ may be written as $X = U \cup V$ with $U$ and $V$ quasicompact open subschemes of $X$. Take $V$ to be affine and let $Z$ be the closed complement of $U$ in $X$, i.e., $Z = X \backslash U$. Then $Z \subset V$ and we have the commuting diagram $$ \begin{tikzcd}[column sep=5pc] Z \arrow{d}{c} \\ V \arrow{d}{i} \arrow{rd}{j} & { } \\ X \arrow{r}{f} & \Xs \end{tikzcd} $$ where $c: Z \to V$ is a closed immersion and $i: V \to X$, $j: V \to \Xs$ are open immersions. \begin{remark} Throughout this section, the index $[A,B]$ is omitted, as the precise range is not relevant for almost all of the proofs; only its finiteness matters. That means that $\text{Coprod}_N(G[A,B])$, for some integers $A < B$, is written as $\text{Coprod}_N(G)$.
Unless otherwise specified, $G$ will be the compact strong generator of $\textbf{D}_{\textbf{qc}}(\Xs)$, which exists by Theorem \ref{Amnthm}. \end{remark} \begin{lemma} \label{lem3.1} Assume $\Xs$ to be a quasicompact, separated scheme and $V \subset \Xs$ an open subscheme. For $P \in \textbf{D}_{\textbf{perf}}(V)$, let $j: V \to \Xs$ be the open immersion and $G$ the compact strong generator of $\textbf{D}_{\textbf{qc}}(\Xs)$. Then the pushforward $j_*P$ is in $\text{Coprod}_N(G)$. \end{lemma} \begin{proof} Notice that $V$ and $\Xs$ are separated and that, since $G$ is a global generator, $P$ lies in $\text{Coprod}_M(j^*G)$ for some $M$; it then follows from Theorem \ref{Amn6.2} that $j_*(P) \in \text{Coprod}_N(G)$. \end{proof} \begin{prop} \label{prop3.2} Let $P \in \textbf{D}_{\textbf{qc}}(V \ \textbf{on} \ Z)$, and let $i: V \to X$ and $j: V \to \Xs$ be the open immersions. Then $i_*P$ is a retract of $f^*j_*P$. \end{prop} \begin{proof} First, we show that $f^*j_*P$ is supported on $Z \coprod W$, where $Z = X \backslash U$ and $W$ is some closed subset of $U \subset X$. It suffices to show that the pullback of $f^*j_*P$ to the intersection $U \cap V$ is zero: this implies that the closed subset $X \backslash V \subset U$, say $W$, contains the remainder (if any) of the support of the pullback of $j_*P$ via $f$ that is not in $Z$. Since the diagram $$ \begin{tikzcd}[column sep=5pc] {} & V \arrow{d}{i} \arrow{rd}{j} & { } \\ U \cap V \arrow{ru}{k} \arrow{r}{l} & X \arrow{r}{f} & \Xs \end{tikzcd} $$ commutes (every map except $f$ is an open immersion) and $Z \cap (U \cap V) = \emptyset$, the counit of the adjunction gives \begin{align*} l^*f^*j_*P &\cong k^*j^*j_*P \\ &\cong k^*P \\ &\cong 0. \end{align*} Hence, $f^*j_*P \simeq R \oplus S$, with $S$ supported on $W \subset U$ and $R$ supported on $Z \subset V$.
Finally, since $V$ is separated and $f$ is a local isomorphism on separated open subschemes, the restriction of $f^*j_*P$ to $V$ is $i_*P$, i.e., $i_*P \simeq R$. Therefore $f^*j_*P \simeq i_*P \oplus S$ and the result follows.\end{proof} The next goal is to show that the pullback of a compact generator via a separator is again a compact generator. This will be done in several steps. First, notice that the separator's property of being an isomorphism over separated opens induces, locally, the notion of ``\textit{non-separated points}''. These are the points in an open affine at which the separator fails to be an isomorphism. \begin{defi}Let $f: X \to Y$ be the separator and $V \subset X$ be an open affine. Define $Z_V$ as the closure of the set $\{x \in V; f^{-1}f(x) \neq \{x\}\}$ (the \textit{non-separated points of $V$}). \end{defi} $Z_V$ is then a closed subscheme of $V$. Recall that the localization sequence for $\textbf{D}_{\textbf{qc}}(X)$ holds for $X$ quasicompact and quasiseparated, i.e., for $U \subset X$ quasicompact open and $Z$ the closed complement, one has \begin{align*} \textbf{D}_{\textbf{qc}}(X \ \textbf{on} \ Z) \rightarrow \textbf{D}_{\textbf{qc}}(X) \rightarrow \textbf{D}_{\textbf{qc}}(U) \end{align*} Moreover, $G \in \textbf{D}_{\textbf{qc}}(X)$ is a compact generator if, and only if, for any $F \in \textbf{D}_{\textbf{qc}}(X)$, $Hom_{\textbf{D}_{\textbf{qc}}(X)} ( G, F[n] ) = 0$ for all $n \in \Z$ implies $F = 0$. With that in mind, it is possible to prove that the pullback via a separator of a compact generator is a compact generator. \begin{prop} \label{lem3.3} Let $f: X \to Y$ be the separator. If $G \in \textbf{D}_{\textbf{perf}} (Y)$ is a compact generator, then $f^*G \in \textbf{D}_{\textbf{perf}} (X)$ is a compact generator. \end{prop} \begin{proof} Since $X$ is quasicompact and quasiseparated, it suffices to show that the restriction of $f^*G$ is a generator over any affine open and, inductively, over any quasicompact open subscheme of $X$.
For the affine case consider the commutative diagram \[ \begin{tikzcd} & X \arrow{dr}{f} & \\ V \arrow[hookrightarrow]{ur}{i} \arrow{r}{Id} & V \arrow[hookrightarrow]{r}[swap]{\restr{f}{V}}& Y \end{tikzcd} \] where $V \hookrightarrow X$ is an open affine. The restriction to $V$ is indeed a generator, since $\restr{f}{V}$ is an isomorphism onto $f(V)$, i.e., \begin{align*} i^* f^*(G) = (f \circ i)^* (G) = (\restr{f}{V})^*G = G_{\textbf{D}_{\textbf{qc}}(V)} \end{align*} where $G_{\textbf{D}_{\textbf{qc}}(V)}$ is the restriction of $G$ to $\textbf{D}_{\textbf{qc}}(V)$. Notice this remains true if we replace $V$ by any separated open subscheme of $V$, a fact that will be used soon. Next, one proceeds with the case of some quasicompact, quasiseparated open subscheme, say $U \subset X$, by induction on the number of affines covering $U$. The case where $U$ can be covered by only one affine is exactly the case above. So we may assume that $U$ is covered by $n$ affines and that the property holds for any quasicompact, quasiseparated subscheme covered by up to $n-1$ affines. Let $V$ be some open affine from the cover of $U$ and let $W$ be the union of the other $n-1$ affines, i.e., $U = V \cup W$. Let $Z_{V} \subset V$ be the closed subscheme of non-separated points of $V$. Let $L = V \cap W$. We have three morphisms $i: V \hookrightarrow U$, $j: W \hookrightarrow U$ and $k: L \hookrightarrow U$. Let $F \in \textbf{D}_{\textbf{qc}}(U)$ and consider the square \[ \begin{tikzcd} Hom_{\textbf{D}_{\textbf{qc}}(U)}(f^*G, F) \arrow{r} \arrow{d} & Hom_{\textbf{D}_{\textbf{qc}}(W)}(j^*f^*G, j^*(F)) \arrow{d} \\ Hom_{\textbf{D}_{\textbf{qc}}(V)}(i^*f^*G, i^*(F)) \arrow{r} & Hom_{\textbf{D}_{\textbf{qc}}(L)}(k^*f^*G,k^*(F)) \end{tikzcd} \] in the derived category, where, by abuse of notation, $f$ denotes $\restr{f}{U}$. The goal is to show that the restriction of $f^*G$ to each category is a compact generator.
By the induction hypothesis, the restrictions of $f^*G$ to $W$ and $V$ are generators in each respective derived category. For the purpose of the proof, one may assume that $W \cap Z_V = \emptyset$: consider the complement $(Z_V)^c = U \backslash Z_V$ and define $\hat{W} = W \cap (Z_V)^c$. Indeed, with $\hat{j}:\hat{W} \hookrightarrow W$ the open immersion, it suffices to show that the further restriction $\hat{j}^*j^*f^*G$ is again a generator. Notice that for any $H \in \textbf{D}_{\textbf{qc}}(\hat{W})$ such that $\hat{j}_* H = 0$, one has that $H = 0$, as $(\hat{j}_*H)(U') = H(U' \cap \hat{W})$ for every open $U'$. So $ 0 = Hom( \hat{j}^*f^*G, H) = Hom( f^*G, \hat{j}_* H)$ implies that $\hat{j}_*H = 0$, which in turn implies that $H=0$. Hence, even though $\hat{W}$ may be covered by more than $n-1$ affines, one may replace $W$ by $\hat{W}$ and still get the same square of $Hom$s as before, with the restriction of $f^*G$ being a compact generator for $\hat{W}$. Therefore, without loss of generality, assume that $U = V \cup W$ with $W \cap Z_V = \emptyset$. In particular, $L \cap Z_V = \emptyset$. Now $L$ is an open separated subscheme of the affine $V$, hence the restriction of $f^*G$ to $L$ is again a generator. Assume that $Hom_{\textbf{D}_{\textbf{qc}}(U)}(f^*G, F) = 0$. The proof will follow once we show that $F = 0$. By adjunction, one has that $f_*F[n] = 0$ for all $n \in \Z$. Consider the localization sequence \begin{align*} \textbf{D}_{\textbf{qc}}(U \ \textbf{on} \ Z) \rightarrow \textbf{D}_{\textbf{qc}}(U) \rightarrow \textbf{D}_{\textbf{qc}}(L) \end{align*} in the derived category, with respect to the open immersion $k : L \hookrightarrow U$ and $Z$ the closed complement of $L$, which induces the triangle \begin{align*} M \rightarrow F \rightarrow k_* k^* F. \end{align*} Applying $f_*$, one obtains the triangle \begin{align*} f_*M \rightarrow f_*F \rightarrow f_* k_* k^* F. \end{align*} By the hypothesis, the middle term is zero, and hence $f_*M[1] \simeq f_* k_* k^* F$. The claim is that $k^*F = 0$.
If that were not the case, the support of $k^*F$ would not be empty, which would imply the existence of a point $p \in L$ such that $p \in Supph(k^*F)$. Let $U_p \subset L$ be some neighborhood of $p$ that satisfies the condition for $p$ in $Supph(k^*F)$. Consider the diagram \[ \begin{tikzcd} U_p \arrow[hookrightarrow]{r} \arrow{d}{\simeq}[swap]{\restr{f}{U_p}} & L \arrow[hookrightarrow]{r}{k} \arrow{d}{\simeq}[swap]{\restr{f}{L}} & U \arrow{d}[swap]{f} \\ \overline{U_p} \arrow[hookrightarrow]{r} & \overline{L} \arrow[hookrightarrow]{r}{\overline{k}} & U_{\textit{sep}} \end{tikzcd} \] After a diagram chase, one sees that \begin{align*} f_* k_* k^* F &= \overline{k}_* (\restr{f}{L})_* k^* F \\ &= (\overline{k} \circ \restr{f}{L})_* k^* F. \end{align*} By the definition of the derived pushforward functor, the cochain complex evaluated at $\overline{U_p} \subset U_{\textit{sep}}$ agrees with $k^*F$, i.e., \begin{align*} (f_* k_* k^* F)(\overline{U_p}) &= ((\overline{k} \circ \restr{f}{L})_* k^* F )(\overline{U_p}) \\ &= (k^* F) ((\overline{k} \circ \restr{f}{L})^{-1} (\overline{U_p})) \\ &= k^*F({U_p}) \end{align*} So $(f_* M[1])(\overline{U_p}) = k^*F(U_p)$, which is absurd, since $U_p \simeq f^{-1}(\overline{U_p})$ would imply that $p \notin Z$ is in the support of $M[1] \in \textbf{D}_{\textbf{qc}}(U \textbf{\ on \ } Z)$. Therefore $k^* F = 0$. Going back to the square of morphisms, the top left term is $0$ by hypothesis and the bottom right is also zero, since $k^* F = 0$. Hence the whole square is zero. That means that each restriction of $F$ is zero, i.e., $j^*(F) = i^*(F) = k^*(F) = 0$. Using another square, now for $\textbf{D}_{\textbf{qc}}(U)$, one may glue each restriction back to $F$. Hence $F = 0$ as desired. \end{proof} To prove the main theorem, a standard induction argument over the covering of $X$ will be used. The following proposition will provide the induction hypothesis needed.
The notation of subschemes and morphisms will follow the diagram shown at the beginning of this section. \begin{prop} \label{prop3.4} Let $X$ be a quasicompact and quasiseparated scheme that admits a separator $f : X \to \Xs$. Assume $X$ can be covered by affine subschemes $Spec(R_i)$ with each $R_i$ of finite global dimension. Moreover, let $X = U \cup V$ with $U$ and $V$ open subschemes of $X$ and assume $V$ to be affine. Consider the diagram $$ \begin{tikzcd}[column sep=5pc] U \arrow{d}{u} \arrow{rd}{o} & { } \\ X \arrow{r}{f} & \Xs \end{tikzcd} $$ where $u: U \to X$ is the open immersion and $o: U \to \Xs$ the induced map. Let $G$ be the strong generator from $\textbf{D}_{\textbf{qc}}(\Xs)$. Then $u_*o^*G$ can be built in finitely many steps from $f^*G$. \end{prop} \begin{proof} First, by Proposition \ref{hopenimm}, $X$ being covered by affine subschemes of finite global dimension implies that $\Xs$ can also be covered by affines with the same properties. Hence, by Theorem \ref{Amnthm}, there exists a fast generator $G$ of $\textbf{D}_{\textbf{qc}} (\Xs)$. One can fit $f^*G$ into a triangle $$ \begin{tikzcd}% Q \arrow[r] & f^*G \arrow[r] & u_*u^*f^*G \simeq u_*o^*G \end{tikzcd}% ,$$ so it suffices to show that $Q \in \text{Coprod} _N(f^*G)$ for some $N \in \Z$. Since $Q$ vanishes on $U$, and $V$ is assumed to be affine, by Thomason-Trobaugh there exists a closed subscheme $Z \subset V$ and $P \in \textbf{D}_{\textbf{qc}}( V )$ such that $Q \simeq i_*P$. By Theorem \ref{Amnthm} there exists a fast generator $G' \in \textbf{D}_{\textbf{qc}}( V)$. Hence, there exists $M \in \Z$ such that $ P \in \text{Coprod} _M(G')$. So $Q \simeq i_*P$ is in $i_* \text{Coprod} _M(G') \subseteq \text{Coprod} _M(i_*G')$. But, by Proposition \ref{prop3.2}, $i_*G'$ is a retract of $f^*j_*G'$, which again by Theorem \ref{Amn6.2} is in $\text{Coprod} _N(f^*G)$ for some $N$.
Hence $Q \in \text{Coprod} _N(f^*G)$, proving that $u_*o^*G$ is indeed generated by $f^*G$. \end{proof} Now we move to the main Theorem \ref{main}. The goal is to show that one may pull back a fast generator via a separator and obtain a fast generator. By Proposition \ref{propreduc}, this implies Theorem \ref{main}. \begin{theorem} \label{thm3.5} Let $X$ be a quasicompact and quasiseparated scheme that admits a separator $f : X \to \Xs$ and let $G$ be a compact fast generator of $\textbf{D}_{\textbf{qc}}(\Xs)$. Assume $X$ can be covered by affine subschemes $Spec(R_i)$ with each $R_i$ of finite global dimension. Then there exists an object $H$ in $\textbf{D}_{\textbf{perf}} (X)$ that fast generates $\textbf{D}_{\textbf{qc}}(X)$. \end{theorem} \begin{proof} First, we notice that by Proposition \ref{lem3.3}, $f^*G$ is already a compact generator. Hence, it suffices to show that it is a fast generator. We proceed by induction on the number of affines in the cover of $X$ to show that $f^*G$ is indeed a fast generator. The case $n=1$ means that $X$ is affine, hence separated. Therefore, the separator $f$ is an isomorphism and $f^*G = G$. Assume the theorem holds for any scheme which admits a cover by up to $n$ affines $Spec(R_i)$, each $R_i$ of finite global dimension. Suppose that $X$ can be covered by $n+1$ affines $U_i = Spec(R_i)$, each $R_i$ of finite global dimension, i.e., $X = \bigcup^{n+1}_{i=1} U_i $. Let $U = \bigcup^{n}_{i=1} U_i$ and $V = U_{n+1}$, so $X = U \cup V$. Assume we are in the same situation as the previous diagrams. Let $G \in \textbf{D}_{\textbf{qc}} (\Xs)$ be a fast generator. Since the restriction of $f$ to $U$ is also a separator, by Proposition \ref{lem3.3}, the restriction of $f^*G$ to $U$, i.e. $u^*f^*(G) = o^*(G)$, is a compact generator. By the induction hypothesis, there exists $G'$ that fast generates $\textbf{D}_{\textbf{qc}} (U)$.
Since $G'$ is compact and is in the subcategory generated by coproducts of $o^*(G)$, without loss of generality we may take $o^*G$ to be the fast generator of $\textbf{D}_{\textbf{qc}} (U)$. Now, by Proposition \ref{prop3.4}, there exists $N$ such that $u_*o^*G \in \text{Coprod} _N(f^*G)$. In a similar fashion, since $V$ is an affine open from $X$, we may take $j^*G$ as a fast generator of $\textbf{D}_{\textbf{qc}} (V)$. Using another localization sequence, one obtains the triangle $$ \begin{tikzcd}% H \arrow[r] & f^*G \arrow[r] & i_*i^*f^*G = i_*j^*G \end{tikzcd}% $$ in $\textbf{D}_{\textbf{qc}} (X)$, where $H$ is not supported on $V$. That implies that there exists some $P \in \textbf{D}_{\textbf{qc}} (U)$ such that $H = u_* P$. By the previous paragraph, $\textbf{D}_{\textbf{qc}} (U)$ is fast generated by $o^*(G)$, which implies that $H \in \text{Coprod}_L ( u_*o^*G)$ for some $L >0$. Since $u_*o^*G \in \text{Coprod}_N(f^*G)$, one obtains that $H \in \text{Coprod}_{LN}(f^*G)$. Therefore, there exists $M > LN > 0$ such that $i_*j^*G \in \text{Coprod}_M(f^*G)$. Let $T = U \cap V$ and $t: T \to U$ be the inclusion. Then, any object $F \in \textbf{D}_{\textbf{qc}} (X)$ fits in the triangle $$ \begin{tikzcd}% u_*[t_*t^*u^* \Sigma^{-1} F ] \arrow[r] & F \arrow[r] & u_*[u^*F] \oplus i_*[i^*F]. \end{tikzcd}% $$ Thus, $F$ belongs to $[u_* \textbf{D}_{\textbf{qc}} (U)] \star [u_* \textbf{D}_{\textbf{qc}} (U) \oplus i_* \textbf{D}_{\textbf{qc}}(V)] $, which is contained in \begin{align*} \text{Coprod} _{MN}(f^*G) \star \text{Coprod} _{MN}(f^*G) = \text{Coprod} _{2MN}(f^*G). \end{align*} Therefore $\textbf{D}_{\textbf{qc}}(X)$ is fast generated and the result follows. \end{proof} Theorem \ref{thm3.5} together with Remark \ref{rmk1.3} proves the main Theorem \ref{main}, restated below. \begin{theorem*} Let $X$ be a quasicompact, quasiseparated scheme that admits a separator. Then $\textbf{D}_{\text{perf}}(X)$ is regular if, and only if, $X$ can be covered by open affine subschemes $Spec(R_i)$ with each $R_i$ of finite global dimension.
\end{theorem*} \newpage
\section{Introduction} Galaxy clusters are the largest non-thermal sources in the universe. Radio \cite{giovannini00}, \cite{feretti04} and hard X-ray \cite{rephaeli02}, \cite{fusco04} observations show the presence of accelerated electrons in these systems. It is understood that hadronic cosmic rays accelerated within the cluster volume will be confined there (with energies of up to 10$^{15}$ eV) for timescales longer than the Hubble time \cite{voelk96}, \cite{berezinsky97}. Hence clusters of galaxies act as storehouses for such particles, and therefore a large component of cosmic rays is expected in these systems. Several sources of cosmic rays can be found in galaxy clusters. Accretion and merger shocks driven by large-scale structure formation have the ability to accelerate cosmic rays \cite{colafrancesco00}, \cite{loeb00}, \cite{ryu03}. Supernova remnant shocks and galactic winds can also produce high-energy particles \cite{voelk96}. Additionally AGN outbursts can distribute non-thermal particles in the cluster volume \cite{ensslin97}, \cite{aharonian02}, \cite{hinton07}. Due to the expected large component of non-thermal particles, galaxy clusters are potential sources for gamma-ray emission (see \cite{blasi07} for a recent review). Various processes can lead to the production of gamma-ray radiation in these objects. Inelastic collisions between cosmic ray protons and thermal nuclei from the intra-cluster medium (ICM) will lead to gamma-ray emission through $\pi^0$-decay \cite{dennison80}, \cite{voelk96}. Electrons with sufficiently high energies can up-scatter cosmic microwave background (CMB) photons to the gamma-ray range in inverse Compton processes \cite{atoyan00}, \cite{gabici03}, \cite{gabici04}. Despite the arguments for potential gamma-ray emission given above, no galaxy cluster has firmly been established as a source of high-energy and very high-energy electromagnetic radiation \cite{reimer03}, \cite{perkins06}. \section{The H.E.S.S. 
experiment} The H.E.S.S. experiment is an array of imaging atmospheric Cherenkov telescopes located in the Khomas highlands, Namibia \cite{hinton04}. It observes in the VHE gamma-ray regime and has a field of view of $\sim$5$^\circ$. Due to the large field of view it is possible to detect extended sources such as supernova remnants \cite{aharonian04}, \cite{aharonian07}. Galaxy clusters are expected to feature extended VHE gamma-ray emission. The H.E.S.S. experiment is well suited to search for such a signal (see e.g. \cite{aharonian04}). \section{Targets} \subsection{Abell 496} Abell 496 is a nearby (z = 0.033), relaxed cluster of galaxies with a mean temperature of 4.7 keV. It features a cooling core at its center \cite{markevitch99}. It is located in the Southern Hemisphere \cite{boehringer04} and is therefore well suited for observations with H.E.S.S. Data taking was performed during moonless nights in the time period from October to December 2005, and in October 2006. In total 23.4 hours of data were taken, with 15.9 hours passing standard data-quality selection (live time 14.6 hours). The mean zenith angle is 27.6$^\circ$ which results in an energy threshold of 0.31 TeV for standard cuts and 0.57 TeV for hard cuts. H.E.S.S. standard data analysis (described in \cite{benbow05}) was performed using different geometrical size cuts to account for the extended nature of the target. No significant excess of VHE gamma-ray emission is found at the position of Abell 496 (see Fig. \ref{abell}). Upper limits for this object for two different size cuts are derived. All upper limits are obtained following \cite{feldman98} assuming a power law spectral index of -2.1 and are given at the 99.9\% confidence level. The first radial size cut, 0.1$^\circ$, is applied to test gamma-ray emission associated with the high density core region of the cluster. 
This is of particular interest for a hadronic scenario, since the gamma-ray emission should be enhanced in regions with a higher density of target material. In this region an upper limit of F$_\mathrm{UL}$($>$0.31 TeV) = $1.0 \times 10^{-12}$ ph cm$^{-2}$ s$^{-1}$ (0.8\% Crab flux) is determined. A radial size cut of 0.6$^\circ$ is also applied, which covers the entire cluster \cite{reiprich02}. For this extended region an upper limit of F$_\mathrm{UL}$($>$0.57 TeV) = $2.4 \times 10^{-12}$ ph cm$^{-2}$ s$^{-1}$ (4.5\% Crab flux) is found. It should be noted that the H.E.S.S. upper limits scale approximately with $r/r_0$, with $r$ and $r_0$ being geometrical size cuts, and this relation can be used to convert the presented upper limits to other sizes. \begin{figure*} \begin{center} \includegraphics [width=0.99\textwidth]{abell496_std_hard.eps} \end{center} \caption{Significance map of the cluster Abell 496 seen by H.E.S.S. with standard cuts (left panel) and hard cuts (right panel). No signal is detected from this region of the sky. The dashed circles show the two size cuts and the white contours correspond to \textit{ROSAT} X-ray contours \cite{durret00}.}\label{abell} \end{figure*} \subsection{Coma cluster} The Coma cluster is a prominent hot (T = 8.25 keV, \cite{arnaud01}), nearby (z = 0.023) galaxy cluster which shows a merger signature in the X-ray gas \cite{neumann03}. It features a hard X-ray excess (\cite{rephaeli02}, \cite{fusco04}; but see \cite{rossetti04} for a different interpretation) and a radio halo \cite{giovannini93}. The Coma cluster is often considered a ``standard cluster'' and, due to the wealth of data on this object, it is very important for theoretical interpretations. It is located in the Northern Hemisphere, which makes it less accessible for H.E.S.S. This cluster was observed during moonless nights in April and May 2006. 7.9 hours of good data were obtained, resulting in 7.3 hours live time.
The mean zenith angle of these observations is 53.5$^\circ$, which results in an energy threshold of 1.0 TeV for standard cuts and 2.0 TeV for hard cuts. No significant signal is found in these observations using various geometrical size cuts (see Fig. \ref{coma}). Upper limits on the VHE gamma-ray emission of the Coma cluster for the core region and for the entire cluster are derived. Applying a radial size cut of 0.2$^\circ$ (core region), an upper limit of F$_\mathrm{UL}$($>$1.0 TeV) = $8.3 \times 10^{-13}$ ph cm$^{-2}$ s$^{-1}$ (3.7\% Crab flux) is found. For the entire cluster, with a radial size cut of 1.4$^\circ$, an upper limit of F$_\mathrm {UL}$($>$2.0 TeV) = $4.8 \times 10^{-12}$ ph cm$^{-2}$ s$^{-1}$ (65.6\% Crab flux) is obtained. For the latter, very extended analysis, only data with a live time of 6.4 hours are used due to an insufficient number of OFF-source runs at such a large zenith angle for the background estimation. \begin{figure*} \begin{center} \includegraphics [width=0.99\textwidth]{coma_std_hard.eps} \end{center} \caption{Significance map with standard cuts (left) and hard cuts (right) of the Coma cluster. No signal is found in the data. The two dashed circles correspond to the two size cuts.}\label{coma} \end{figure*} \section{Summary \& outlook} Clusters of galaxies are the most massive gravitationally bound structures in the universe and as such they are believed to be representative of the universe as a whole. Therefore they are important tools for cosmology. The detection of gamma-ray emission from these objects will give important information about structure formation and supernova activity over the entire history of the universe. No gamma-ray excess has been found with H.E.S.S. from any cluster, with observation times in the range of 10--20 hours. As a next step, one promising galaxy cluster will be given a very deep H.E.S.S. exposure of at least 50 hours.
\section{Acknowledgments} The support of the Namibian authorities and of the University of Namibia in facilitating the construction and operation of H.E.S.S. is gratefully acknowledged, as is the support by the German Ministry for Education and Research (BMBF), the Max Planck Society, the French Ministry for Research, the CNRS-IN2P3 and the Astroparticle Interdisciplinary Programme of the CNRS, the U.K. Science and Technology Facilities Council (STFC), the IPNP of the Charles University, the Polish Ministry of Science and Higher Education, the South African Department of Science and Technology and National Research Foundation, and by the University of Namibia. We appreciate the excellent work of the technical support staff in Berlin, Durham, Hamburg, Heidelberg, Palaiseau, Paris, Saclay, and in Namibia in the construction and operation of the equipment.
\section*{Acknowledgment} We would like to thank Rushabh Sheth for providing us the required funding and resources. We are also grateful to the Documso annotators for manually labeling the dataset. \bibliographystyle{IEEEtran} \urlstyle{tt} \section{Introduction} Every business needs to process many documents, such as invoices, bills, statements, and forms, saved in unstructured formats such as PDFs or scanned images, into their accounting software. The larger businesses have to process many thousands of documents per month. There are a few ways to do this currently: (a) manual data entry and processing, (b) a template-based extraction model, or (c) a template-less machine learning approach. Manual data entry is not only time-consuming and expensive but very error-prone as well. The template-based approach requires an initial setup of hard-coded rules for every template, but it still fails badly when an unseen template is encountered \cite{str-template}. The template-less machine learning method tries to learn generic features of the fields to extract so that it works well across various templates, but it needs to be trained with a large number of annotated documents to perform well. In this paper, we propose a novel one-shot template matching algorithm that brings the best of both worlds---the template-based engine and template-less machine learning. Our algorithm doesn't require any initial template setup, nor does it need a very large amount of data to achieve high accuracy. Once provided with one annotated document, future documents in the same format are processed automatically with 90\% accuracy.
We exploit the fact that, for a specific vendor and document type, the document format is very similar, i.e., the positions of annotated values and the neighboring keywords don't change much. Moreover, if the algorithm extracts a field incorrectly, the user can correct it very easily using our convenient review tool, and subsequent documents in that format will learn the corrections as well. Our algorithm saves the contextual features of every annotated value, which include both visual and textual features of not just the actual value but the surrounding keywords as well; this is explained in detail in Section~\ref{sec:method}. Our algorithm also automatically determines whether a new document belongs to any of the previously saved formats. To match a new document with a saved template, we use a combination of image similarity \cite{svd} and textual similarity \cite{levenshtein} metrics. The rest of the paper is organized into four sections. In Section~\ref{sec:related}, we revisit various previous approaches to solving similar problems and also mention how our approach stands out. In Section~\ref{sec:method}, we explain our algorithm in detail. We discuss the experiments and their results in Section~\ref{sec:result}. Finally, in Section~\ref{sec:conclusion}, we provide concluding remarks and possible future work. \section{Methodology}\label{sec:method} Fig.~\ref{fig:architecture} shows the high-level architecture of our model. There are three major steps: template matching, region proposal, and final area selection. These are explained in detail shortly. Our model maintains a database of unique annotated templates, takes a new document as input, and predicts the annotation for the new document if such a template exists in our database.
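Before detailing each stage, the end-to-end flow just described can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: matching is reduced to comparing a format identifier (standing in for the similarity-based template matching below), and extraction is reduced to a lookup of the word stored at each annotated box (standing in for region proposal and final area selection). All names are our own.

```python
# Hypothetical sketch of the three-stage pipeline: match a template,
# locate each field, return {field: value}. Data structures are toy
# stand-ins, not the paper's actual representation.

def process_document(doc, template_db):
    """Return {field: value} if a saved template matches, else None."""
    # Stage 1: template matching (the real system combines SVD image
    # similarity with fuzzy text similarity).
    template = next((t for t in template_db
                     if t["format_id"] == doc["format_id"]), None)
    if template is None:
        return None  # unseen format: annotate manually, add to the DB
    # Stages 2-3: region proposal + final area selection (the real system
    # uses image correlation and keyword-offset projection); here we read
    # the word stored at each annotated position directly.
    return {field: doc["words"].get(tuple(box))
            for field, (box, _value) in template["annotation"].items()}

template_db = [{"format_id": "acme-invoice",
                "annotation": {"total": ([48, 553, 419, 577], "1,234.56")}}]
doc = {"format_id": "acme-invoice",
       "words": {(48, 553, 419, 577): "2,468.00"}}
print(process_document(doc, template_db))  # {'total': '2,468.00'}
```

An unseen format returns `None` and is routed to manual annotation, after which it becomes a new template in the database.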
\begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{fig/bd} \caption{Model architecture (placeholder image)} \label{fig:architecture} \end{figure} \subsection{Optical Character Recognition (OCR)} A new document may be an image, without any text embedded. In that case, we can get the text present in the document by OCR. For this research, OCR can be thought of as a black box that takes an image as input and outputs a list of words and their positions (bounding boxes). There are many high-quality commercial OCR engines, such as Google Vision\cite{gvision}, Microsoft Azure Vision\cite{azure}, and Amazon Textract\cite{aws-textract}. We used Amazon Textract for this research because it produced the best results, having been trained exclusively on document images. \subsection{Template Matching} In this step, a matching template from the database is chosen for the input document. If there is no match, the algorithm halts and the document is sent for manual annotation. We use a combination of visual and textual similarity measures to find a match. For image similarity, we compute the SVD of the images and measure the cosine similarity between the $\Sigma$ diagonal matrices\cite{svd}. Equation \eqref{eq:svd} shows the SVD of a matrix $I$. However, before the SVD, we perform some image preprocessing to adapt this metric to document images and make it invariant to image dimensions and lighting conditions. Let $I$ = preprocessed input document image, and $T$ = preprocessed template document image.
\begin{align} (U_I, \Sigma_I, V_I) &= \text{SVD}(I) \label{eq:svd}\\ (U_T, \Sigma_T, V_T) &= \text{SVD}(T) \nonumber \end{align} Now, the visual similarity $\text{Sim}_{\textit{visual}}$ is given by: \begin{align} \text{Sim}_{\textit{visual}} = \cos\theta = \frac{\Sigma_I\cdot\Sigma_T}{\abs{\Sigma_I}\abs{\Sigma_T}} \in [-1,1] \end{align} For text similarity, we compute the fuzzy match\cite{fuzzywuzzy} (based on Levenshtein distance\cite{levenshtein}) of the top-$n$ and bottom-$n$ lines of text in the documents. Again, before the fuzzy matching, we perform some preprocessing steps to normalize the text. \begin{align} \text{Sim}_{\textit{text}} = \textsc{Fuzzy-Match}(t_I, t_T) \in [0,1] \end{align} where, $t_I$ is the concatenation of the preprocessed top-$n$ and bottom-$n$ lines of text in the input document, and $t_T$ is the same for the template document. The combined similarity is simply the sum of visual and textual similarities: \begin{align} \text{Sim}_{\textit{combined}} = \text{Sim}_{\textit{text}} + \text{Sim}_{\textit{visual}} \end{align} The template having the highest $\text{Sim}_{\textit{combined}}$, with $\text{Sim}_{\textit{text}} \geq C$, is selected as the final template for the input image, where $C$ is a manually set threshold. If $\text{Sim}_{\textit{text}} < C$ for all templates in the database, then the document is manually annotated and added to the database as a new template. \subsection{Region Proposal} Once the template document is selected, we use the template image and annotation to predict the approximate regions of all the fields in the input document. The annotation object is a JSON with field names as keys and the associated values and positions (top-left and bottom-right coordinates of the values) as the values. An example of an annotation object is given below: \begin{lstlisting}[language=Python,caption={An annotation sample. 
The coordinates are calculated from the top-left corner of the document.},captionpos=b] {"invoice_no": {"position": [53, 671, 452, 702], "value": "INV1234"}, "date": {"position": [50, 635, 312, 666], "value": "2019-08-24"}, "seller": {"position": [259, 27, 464, 58], "value": "ABC Pvt. Ltd."}, "buyer": {"position": [821, 445, 1153, 468], "value": "Zinc Enterprises"}, "total": {"position": [48, 553, 419, 577], "value": "1,234.56"}} \end{lstlisting} In this illustration, the annotation is a JSON object where the keys are the fields to be captured and the values have the position information of the text as well as the actual text for the field value. The ``position'' parameter contains the top-left $(x_{min},y_{min})$ and bottom-right $(x_{max},y_{max})$ coordinates of the rectangle surrounding the text. Once we have the annotation of the matching template image, the following algorithm is used to get approximate region proposal for each field. We use the correlation coefficient $R$ between the input document image and template image \cite{opencv_library} to obtain the approximate region-proposal. \begin{equation} R(x,y)= \sum _{x',y'} \left(T'(x',y') \cdot I'(x+x',y+y')\right) \label{eq_corr} \end{equation} where, \begin{align*} T'(x',y')&=T(x',y') - \frac{\sum _{x'',y''} T(x'',y'')}{w \cdot h} \end{align*} \begin{multline*} I'(x+x',y+y') = I(x+x',y+y') \\ - \frac{\sum _{x'',y''} I(x+x'',y+y'')}{w \cdot h} \end{multline*} \begin{algorithm} \caption{Region Proposal}\label{alg:region-proposal} \textbf{Input:} \begin{itemize} \item Preprocessed input document image ($I$). \item Preprocessed template document image ($T$). \item Template document annotation ($A_{T}$). 
\end{itemize} \textbf{Output:} Region proposals for all the fields in the input document.\\ \textbf{Procedure:} \begin{algorithmic}[1] \label{region_prop} \STATE $w_T$ $\leftarrow$ \textsc{Width}($T$) \COMMENT{Template Image Width} \STATE $h_T \leftarrow$ $1.414*w_T$\footnotemark{} \COMMENT{Template Image Height} \FOR{each field in $A_{T}$} \STATE Get rectangular area of the field.\\ $(x_{min},y_{min},x_{max},y_{max})$ \STATE Increase the area of the rectangle slightly in all directions.\footnotemark{}\\ \STATE Crop out the new rectangular area in the template image. \STATE Find the area in the input image where the cropped area from template image is most likely to match using \eqref{eq_corr}. \ENDFOR \end{algorithmic} \end{algorithm} \addtocounter{footnote}{-2} \stepcounter{footnote}\footnotetext{This is done to make the regions have the same aspect ratio to handle documents with different aspect ratios.} \stepcounter{footnote}\footnotetext{We expand the area in order to include some keywords common in both the template and the input image. Those keywords will help us accurately pinpoint the location of field values in the input regions proposed.} Algorithm~\ref{alg:region-proposal} presents the pseudocode of our region proposal algorithm. \subsection{Final Area Selection} Next, we pinpoint the location of the annotation values in the input document by finding common words---present in both the input region and the template region of the field---and projecting the distance and displacement from the template region to the input region. This method was first devised in \cite{str-template}, but they looked for common words in whole document. We only look for common keywords inside the proposed regions for computational efficiency. Algorithm~\ref{alg:fin-area} shows the pseudocode for this algorithm. 
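The keyword-offset projection at the heart of the final area selection (step 5--8 of the algorithm below) can be sketched concretely. The box format $[x_{min}, y_{min}, x_{max}, y_{max}]$ follows the annotation sample above; the function and parameter names are our own assumptions, not the paper's code:

```python
# Sketch of the keyword-offset projection: place the value box in the input
# document at the same dimension-normalized offset from a shared keyword
# that it had in the template. Names are illustrative assumptions.

def center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def project_value_box(kw_box_tpl, val_box_tpl, kw_box_in, tpl_dims, in_dims):
    """Predict the value's bounding box in the input document."""
    (kx_t, ky_t), (vx_t, vy_t) = center(kw_box_tpl), center(val_box_tpl)
    tw, th = tpl_dims
    iw, ih = in_dims
    # Keyword-to-value offset, normalized by the template size and
    # de-normalized by the input size.
    dx = (vx_t - kx_t) / tw * iw
    dy = (vy_t - ky_t) / th * ih
    kx_i, ky_i = center(kw_box_in)
    cx, cy = kx_i + dx, ky_i + dy  # predicted center of the value
    # Reuse the template value-box dimensions, scaled to the input document.
    w = (val_box_tpl[2] - val_box_tpl[0]) / tw * iw
    h = (val_box_tpl[3] - val_box_tpl[1]) / th * ih
    return [cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2]
```

For example, with equal-sized documents, a keyword shifted by some amount in the input shifts the predicted value box by exactly the same amount, preserving the keyword-to-value offset observed in the template.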
\begin{algorithm} \caption{Final Area Selection}\label{alg:fin-area} \textbf{Input:} \begin{itemize} \item Input document OCR \item Template document OCR \item Region Proposals $RP_I$ in input document (from Algorithm~\ref{alg:region-proposal}) \item Input document and Template document dimensions. \end{itemize} \textbf{Output:} Final bounding-boxes for all fields in the input document.\\ \textbf{Procedure:} \begin{algorithmic}[1] \FOR{each field in $RP_I$} \STATE Find the texts that are common in this proposed area in the input image and corresponding area in the template image. \IF{matches $>$ 0} \STATE Find the text that is closest to the actual value for the field in the template image. \STATE Get the vector from the center of the closest text and the actual value for the field in template image. \STATE Normalize the vector with the dimensions of template image. De-normalize using the input image dimensions. \STATE Using the vector in the input image, predict the center where the value of the field is present. \STATE Using this center coordinates and the dimensions of rectangle surrounding the value of the field in the template image, obtain the rectangle in the input image using appropriate scaling. \ELSE \STATE Using the dimensions of rectangle surrounding the value of the field in the template image, obtain a rectangle at the center of the approximate area. \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Text extraction} Finally, once we have the final bounding box of the value, we can extract the text from the OCR data. We extract all the words in the OCR data whose area overlaps with the proposed bounding box by more than a preset threshold and then combine them in natural reading order to get the final text for each of the fields. \section{Related Work}\label{sec:related} Deciding whether two documents are of the same format requires a combination of image similarity and text similarity metrics. 
A number of perceptual hashing methods\cite{phash-overview,min-hash} have been used to detect near-duplicate images. Zeng et al. used eigenvalue matrix computed by Singular Value Decomposition (SVD) of image as features and computed similarity by comparing the angle between the eigenvalue matrices mapped into vector space \cite{svd}. Similarly, the most common approach for measuring textual similarity is the Levenshtein edit distance \cite{levenshtein}. Flexible template-based extraction systems \cite{str-template,informys,apriori,intellix} locate the required text in the document by using the distance and direction from important surrounding keywords such as field labels. Cesarini et al.\cite{informys} only look at the nearest keyword whereas d'Andecy et al.\cite{apriori} computes distances and angles from every other word in the document and predicts the final location by averaging over the distances and angles from all words weighted by their \emph{itf-df} scores. The first serious attempt at solving the automatic data capture problem using machine learning was made by Rossum\cite{rossum,rossum-table}. Trying to mimic human brain, they process documents in three stages: skim-reading, data localization, and precise reading. Holt et al.\cite{sypht} and Palm et al.\cite{cloudscan} used a content-based template-less machine learning approach that can classify any text block into one of predefined labels thereby claiming to work in unseen document formats as well. In \cite{sypht}, the authors reported 92.8\% percent accuracy after training the model with 300,000 documents. Completely vision object-detection models such as Faster-RCNN\cite{frcnn} and YOLO\cite{yolo,yolov3}, being trained on natural scenes, produce mixed results on document images. All these approaches require a large volume of annotated documents to train well. Our method automatically utilizes template-features without needing any template-based rules. 
Since existing methods are either rigidly template-dependent or template-less, we cannot compare our work directly with any of them. \section{Experiment and Result}\label{sec:result} \subsection{Dataset} There are no publicly available dataset of modern business documents such as invoices, bank statements or employee forms, which is understandable given their strict confidentiality. Therefore, for this research, we acquired a dataset of 595 annotated invoices from a large international invoice financing company. All of them were in the English language and there were about 35 unique formats or templates. For fields to extract, we considered the most common ones: (a) invoice number, (b) date, (c) seller, (d) buyer, and (e) total due. We used one sample each of every template as the training set and the rest 560 documents as the test set. Again, due to confidential reasons, we cannot make the dataset public. \subsection{Evaluation Metrics} The ground truth values don't have positional values, so we can't compute the quality of output bounding boxes. Therefore, we evaluate our model by comparing the output values of the extracted fields. Since text output may have few erroneous characters, mostly due to OCR error, we define two metrics for evaluation---\textsc{Mean-Fuzzy-Match} and \textsc{Accuracy}---as follows: \begin{align} \textsc{Mean-Fuzzy-Match} &= \frac{ \sum_{i=1}^{N} \textsc{Fuzzy-Match}(\hat{y_i}, y_i) }{N} \end{align} \begin{align} \textsc{Accuracy} &= \frac{ \text{no. of samples where } \hat{y}_i = y_i }{N} \end{align} where, $\hat{y}$ and $y$ are the output and ground truth values respectively both with length $N$, and \textsc{Fuzzy-Match} $\in [0, 1]$ is the fuzzy text matching function based on Levenshtein distance. In our implementation, we used the \texttt{fuzzywuzzy} library\cite{fuzzywuzzy} for this. 
The \textsc{Accuracy}, which checks for exact match between the predicted value and the ground-truth, is affected by minor OCR errors (such as recognizing ``0'' as ``O''). We include the \textsc{Mean-Fuzzy-Match} metric to see how our model would perform in cases where exact match isn't required. \subsection{Result} The results of our model on the 560 test invoices are shown in Table~\ref{tab:result}. We can see that \textsc{Mean-Fuzzy-Match} is significantly greater than \textsc{Accuracy}, implying that our model can leverage better accuracy if the minor OCR errors are corrected by post-processing. For instance, the buyer and seller names can be matched with a lookup table. Similarly, dates and amounts can be canonicalized to eliminate format discrepancies. Other texts can be normalized by trimming whitespaces, converting to lowercase, and so on. \begin{table}[ht] \caption{The performance of our model.} \label{tab:result} \centering \begin{tabular}{lcc} \toprule \textbf{Field} & \textbf{\textsc{Accuracy}} & \textbf{\textsc{Mean-Fuzzy-Match}}\\ \midrule Invoice number~~~~~~~~ & 79.2 & 80.7\\ Date & 86.4 & 89.4 \\ Seller & 91.5 & 93.8 \\ Buyer & 90.2 & 94.1 \\ Total due & 84.7 & 88.2 \\ \midrule \textbf{Overall} & \textbf{86.4} & \textbf{89.2}\\ \bottomrule \end{tabular} \end{table} Considering the fact that our model doesn't require per-template rules and requires very few training samples, combined with our easy review tool, getting over 86\% accuracy can result in a significant reduction in time, cost, and effort it takes for businesses to process documents. \section{Conclusion and Future Work}\label{sec:conclusion} In this paper, we presented a new way of solving the problem of automatic data capture from documents. Requiring only one example per template, our method is very effective for dataset with recurring templates. This research has many areas for improvement though. First of all, it can't handle multi-page documents. 
\section{Introduction} Every business needs to process a large number of documents such as invoices, bills, statements, and forms, saved in unstructured formats such as PDF or scanned images, into their accounting software. The larger ones have to process many thousands of documents per month. There are currently a few ways to do this: (a) manual data entry and processing, (b) a template-based extraction model, or (c) a template-less machine learning approach. Manual data entry is not only time-consuming and expensive but also very error-prone. The template-based approach requires an initial setup of hard-coded rules for every template, and it still fails badly when an unseen template is encountered \cite{str-template}. Template-less machine learning methods try to learn generic features of the various fields to extract so that they work well across templates, but they need to be trained with a large number of annotated documents to perform well. In this paper, we propose a novel one-shot template matching algorithm that brings the best of both worlds---the template-based engine and the template-less machine learning approach. Our algorithm doesn't require any initial template setup, nor does it need a very large amount of data to reach high accuracy. Once provided with one annotated document, future documents in the same format are processed automatically with 90\% accuracy. We exploit the fact that for a specific vendor and document type, the document format is very similar, i.e., the position of annotated values and the neighboring keywords don't change much.
Moreover, if a field is extracted incorrectly, the user can correct it easily using our convenient review tool, and subsequent documents in that format will incorporate the correction as well. Our algorithm saves the contextual features of every annotated value, covering both the visual and the textual features of not just the actual value but also the surrounding keywords, as explained in detail in Section~\ref{sec:method}. Our algorithm also automatically determines whether a new document belongs to any of the previously saved formats. To match a new document with a saved template, we use a combination of image similarity \cite{svd} and textual similarity \cite{levenshtein} metrics. The rest of the paper is organized into four sections. In Section~\ref{sec:related}, we revisit previous approaches to similar problems and point out how our approach differs. In Section~\ref{sec:method}, we explain our algorithm in detail. We discuss the experiments and their results in Section~\ref{sec:result}. Finally, in Section~\ref{sec:conclusion}, we provide concluding remarks and possible future work. \section{Methodology}\label{sec:method} Fig.~\ref{fig:architecture} shows the high-level architecture of our model. There are three major steps: template matching, region proposal, and final area selection, explained in detail below. Our model maintains a database of unique annotated templates, takes a new document as input, and predicts the annotation for the new document if a matching template exists in the database. \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{fig/bd} \caption{Model architecture (placeholder image)} \label{fig:architecture} \end{figure} \subsection{Optical Character Recognition (OCR)} A new document may be an image without any embedded text. In that case, we obtain the text in the document via OCR.
For this research, OCR can be thought of as a black box that takes an image as input and outputs a list of words and their positions (bounding boxes). There are many high-quality commercial OCR engines, such as Google Vision\cite{gvision}, Microsoft Azure Vision\cite{azure}, and Amazon Textract\cite{aws-textract}. We used Amazon Textract for this research because it produced the best results, having been trained specifically on document images. \subsection{Template Matching} In this step, a matching template from the database is chosen for the input document. If there is no match, the algorithm halts and the document is sent for manual annotation. We use a combination of visual and textual similarity measures to find a match. For image similarity, we compute the SVD of the images and measure the cosine similarity between the $\Sigma$ diagonal matrices\cite{svd}. Equation \eqref{eq:svd} shows the SVD of a matrix $I$. However, before the SVD, we perform some image preprocessing to adapt this metric to document images and make it invariant to image dimensions and lighting conditions. Let $I$ = preprocessed input document image, and $T$ = preprocessed template document image. \begin{align} (U_I, \Sigma_I, V_I) &= \text{SVD}(I) \label{eq:svd}\\ (U_T, \Sigma_T, V_T) &= \text{SVD}(T) \nonumber \end{align} Now, the visual similarity $\text{Sim}_{\textit{visual}}$ is given by: \begin{align} \text{Sim}_{\textit{visual}} = \cos\theta = \frac{\Sigma_I\cdot\Sigma_T}{\abs{\Sigma_I}\abs{\Sigma_T}} \in [-1,1] \end{align} For text similarity, we compute the fuzzy match\cite{fuzzywuzzy} (based on Levenshtein distance\cite{levenshtein}) of the top-$n$ and bottom-$n$ lines of text in the documents. Again, before the fuzzy matching, we perform some preprocessing steps to normalize the text.
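The visual-similarity measure above can be sketched in a few lines of NumPy; this is a minimal illustration assuming both images have already been preprocessed into grayscale arrays of the same shape (only the singular values are needed, so the full decomposition is skipped):

```python
import numpy as np

def visual_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Cosine similarity between the singular-value spectra of two
    preprocessed grayscale images of identical shape."""
    s_a = np.linalg.svd(img_a, compute_uv=False)  # Sigma of image A
    s_b = np.linalg.svd(img_b, compute_uv=False)  # Sigma of image B
    return float(np.dot(s_a, s_b) /
                 (np.linalg.norm(s_a) * np.linalg.norm(s_b)))
```

Since singular values are nonnegative, the score effectively falls in $[0,1]$, with identical images scoring $1$.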
\begin{align} \text{Sim}_{\textit{text}} = \textsc{Fuzzy-Match}(t_I, t_T) \in [0,1] \end{align} where $t_I$ is the concatenation of the preprocessed top-$n$ and bottom-$n$ lines of text in the input document, and $t_T$ is the same for the template document. The combined similarity is simply the sum of the visual and textual similarities: \begin{align} \text{Sim}_{\textit{combined}} = \text{Sim}_{\textit{text}} + \text{Sim}_{\textit{visual}} \end{align} The template with the highest $\text{Sim}_{\textit{combined}}$, subject to $\text{Sim}_{\textit{text}} \geq C$, is selected as the final template for the input image, where $C$ is a manually set threshold. If $\text{Sim}_{\textit{text}} < C$ for all templates in the database, the document is manually annotated and added to the database as a new template. \subsection{Region Proposal} Once the template document is selected, we use the template image and annotation to predict the approximate regions of all the fields in the input document. The annotation object is a JSON object with field names as keys and the associated values and positions (top-left and bottom-right coordinates of the values) as the values. An example of an annotation object is given below:
\begin{lstlisting}[language=Python,caption={An annotation sample. The coordinates are calculated from the top-left corner of the document.},captionpos=b]
{"invoice_no": {"position": [53, 671, 452, 702],
                "value": "INV1234"},
 "date":   {"position": [50, 635, 312, 666],
            "value": "2019-08-24"},
 "seller": {"position": [259, 27, 464, 58],
            "value": "ABC Pvt. Ltd."},
 "buyer":  {"position": [821, 445, 1153, 468],
            "value": "Zinc Enterprises"},
 "total":  {"position": [48, 553, 419, 577],
            "value": "1,234.56"}}
\end{lstlisting}
The keys are the fields to be captured, and each value holds the position of the text as well as the actual text for the field value.
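The template-selection rule above (highest $\text{Sim}_{\textit{combined}}$ subject to $\text{Sim}_{\textit{text}} \geq C$) can be sketched as follows; this is a minimal illustration in which \texttt{SequenceMatcher} stands in for the \texttt{fuzzywuzzy} score, the visual similarities are assumed precomputed, and the threshold value is illustrative only:

```python
from difflib import SequenceMatcher

def fuzzy_match(a: str, b: str) -> float:
    """Stand-in for the fuzzywuzzy score, normalized to [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def select_template(input_text, templates, C=0.8):
    """Pick the stored template maximizing Sim_combined, requiring
    Sim_text >= C.  `templates` maps a template id to a pair
    (template_text, sim_visual).  Returns None when no template
    clears the threshold, i.e. the document goes to manual annotation."""
    best_id, best_score = None, float("-inf")
    for tid, (t_text, sim_visual) in templates.items():
        sim_text = fuzzy_match(input_text, t_text)
        if sim_text < C:
            continue  # text similarity below threshold: cannot match
        score = sim_text + sim_visual  # Sim_combined
        if score > best_score:
            best_id, best_score = tid, score
    return best_id
```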
The ``position'' parameter contains the top-left $(x_{min},y_{min})$ and bottom-right $(x_{max},y_{max})$ coordinates of the rectangle surrounding the text. Once we have the annotation of the matching template image, the following algorithm is used to get an approximate region proposal for each field. We use the correlation coefficient $R$ between the input document image and the template image \cite{opencv_library} to obtain the approximate region proposal. \begin{equation} R(x,y)= \sum _{x',y'} \left(T'(x',y') \cdot I'(x+x',y+y')\right) \label{eq_corr} \end{equation} where \begin{align*} T'(x',y')&=T(x',y') - \frac{\sum _{x'',y''} T(x'',y'')}{w \cdot h} \end{align*} \begin{multline*} I'(x+x',y+y') = I(x+x',y+y') \\ - \frac{\sum _{x'',y''} I(x+x'',y+y'')}{w \cdot h} \end{multline*} \begin{algorithm} \caption{Region Proposal}\label{alg:region-proposal} \textbf{Input:} \begin{itemize} \item Preprocessed input document image ($I$). \item Preprocessed template document image ($T$). \item Template document annotation ($A_{T}$). \end{itemize} \textbf{Output:} Region proposals for all the fields in the input document.\\ \textbf{Procedure:} \begin{algorithmic}[1] \label{region_prop} \STATE $w_T$ $\leftarrow$ \textsc{Width}($T$) \COMMENT{Template Image Width} \STATE $h_T \leftarrow$ $1.414*w_T$\footnotemark{} \COMMENT{Template Image Height} \FOR{each field in $A_{T}$} \STATE Get the rectangular area of the field:\\ $(x_{min},y_{min},x_{max},y_{max})$ \STATE Increase the area of the rectangle slightly in all directions.\footnotemark{}\\ \STATE Crop out the new rectangular area in the template image. \STATE Find the area in the input image where the cropped area from the template image is most likely to match, using \eqref{eq_corr}.
\ENDFOR \end{algorithmic} \end{algorithm} \addtocounter{footnote}{-2} \stepcounter{footnote}\footnotetext{This is done to make the regions have the same aspect ratio, to handle documents with different aspect ratios.} \stepcounter{footnote}\footnotetext{We expand the area in order to include some keywords common to both the template and the input image. Those keywords help us accurately pinpoint the location of field values in the proposed input regions.} Algorithm~\ref{alg:region-proposal} presents the pseudocode of our region proposal algorithm. \subsection{Final Area Selection} Next, we pinpoint the location of the annotation values in the input document by finding common words---present in both the input region and the template region of the field---and projecting the distance and displacement from the template region to the input region. This method was first devised in \cite{str-template}, but they looked for common words in the whole document; we only look for common keywords inside the proposed regions, for computational efficiency. Algorithm~\ref{alg:fin-area} shows the pseudocode for this step. \begin{algorithm} \caption{Final Area Selection}\label{alg:fin-area} \textbf{Input:} \begin{itemize} \item Input document OCR \item Template document OCR \item Region Proposals $RP_I$ in the input document (from Algorithm~\ref{alg:region-proposal}) \item Input document and template document dimensions. \end{itemize} \textbf{Output:} Final bounding boxes for all fields in the input document.\\ \textbf{Procedure:} \begin{algorithmic}[1] \FOR{each field in $RP_I$} \STATE Find the texts that are common to this proposed area in the input image and the corresponding area in the template image. \IF{matches $>$ 0} \STATE Find the common text that is closest to the actual value for the field in the template image. \STATE Get the vector from the center of this closest text to the actual value for the field in the template image.
\STATE Normalize the vector with the dimensions of the template image; de-normalize using the input image dimensions. \STATE Using the vector in the input image, predict the center where the value of the field is present. \STATE Using these center coordinates and the dimensions of the rectangle surrounding the value of the field in the template image, obtain the rectangle in the input image using appropriate scaling. \ELSE \STATE Using the dimensions of the rectangle surrounding the value of the field in the template image, obtain a rectangle at the center of the approximate area. \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Text Extraction} Finally, once we have the final bounding box of the value, we can extract the text from the OCR data. We extract all the words in the OCR data whose area overlaps with the proposed bounding box by more than a preset threshold and then combine them in natural reading order to get the final text for each of the fields. \section{Related Work}\label{sec:related} Deciding whether two documents are of the same format requires a combination of image similarity and text similarity metrics.
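Before surveying prior work in detail, we note that the text-extraction step above can be sketched as follows; the overlap threshold and the word-record format are illustrative assumptions rather than the exact implementation:

```python
def overlap_fraction(word_box, field_box):
    """Fraction of the word's area lying inside the field box.
    Boxes are (x_min, y_min, x_max, y_max)."""
    ix = max(0, min(word_box[2], field_box[2]) - max(word_box[0], field_box[0]))
    iy = max(0, min(word_box[3], field_box[3]) - max(word_box[1], field_box[1]))
    area = (word_box[2] - word_box[0]) * (word_box[3] - word_box[1])
    return (ix * iy) / area if area > 0 else 0.0

def extract_text(ocr_words, field_box, threshold=0.5):
    """ocr_words: list of (text, box) pairs from the OCR engine.
    Keep words overlapping the field box by more than `threshold`,
    then join them in natural reading order (top-to-bottom, left-to-right)."""
    kept = [(t, b) for t, b in ocr_words
            if overlap_fraction(b, field_box) > threshold]
    kept.sort(key=lambda wb: (wb[1][1], wb[1][0]))  # sort by y, then x
    return " ".join(t for t, _ in kept)
```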
Cesarini et al.\cite{informys} only look at the nearest keyword, whereas d'Andecy et al.\cite{apriori} compute distances and angles from every other word in the document and predict the final location by averaging over the distances and angles from all words, weighted by their \emph{itf-df} scores. The first serious attempt at solving the automatic data capture problem using machine learning was made by Rossum\cite{rossum,rossum-table}. Trying to mimic the human brain, they process documents in three stages: skim-reading, data localization, and precise reading. Holt et al.\cite{sypht} and Palm et al.\cite{cloudscan} used a content-based, template-less machine learning approach that can classify any text block into one of a set of predefined labels, thereby claiming to work on unseen document formats as well. In \cite{sypht}, the authors reported 92.8\% accuracy after training the model with 300,000 documents. Purely vision-based object-detection models such as Faster-RCNN\cite{frcnn} and YOLO\cite{yolo,yolov3}, being trained on natural scenes, produce mixed results on document images. All these approaches require a large volume of annotated documents to train well. Our method automatically utilizes template features without needing any template-based rules. Since existing methods are either rigidly template-dependent or template-less, we cannot compare our work directly with any of them. \section{Experiment and Result}\label{sec:result} \subsection{Dataset} There are no publicly available datasets of modern business documents such as invoices, bank statements, or employee forms, which is understandable given their strict confidentiality. Therefore, for this research, we acquired a dataset of 595 annotated invoices from a large international invoice financing company. All of them were in English, and there were about 35 unique formats or templates.
For the fields to extract, we considered the most common ones: (a) invoice number, (b) date, (c) seller, (d) buyer, and (e) total due. We used one sample of every template as the training set and the remaining 560 documents as the test set. Again, for confidentiality reasons, we cannot make the dataset public. \subsection{Evaluation Metrics} The ground-truth values don't include positions, so we can't assess the quality of the output bounding boxes. Therefore, we evaluate our model by comparing the output values of the extracted fields. Since the text output may contain a few erroneous characters, mostly due to OCR errors, we define two metrics for evaluation---\textsc{Mean-Fuzzy-Match} and \textsc{Accuracy}---as follows: \begin{align} \textsc{Mean-Fuzzy-Match} &= \frac{ \sum_{i=1}^{N} \textsc{Fuzzy-Match}(\hat{y}_i, y_i) }{N} \end{align} \begin{align} \textsc{Accuracy} &= \frac{ \text{no. of samples where } \hat{y}_i = y_i }{N} \end{align} where $\hat{y}$ and $y$ are the output and ground-truth values, respectively, both of length $N$, and \textsc{Fuzzy-Match} $\in [0, 1]$ is the fuzzy text matching function based on the Levenshtein distance. In our implementation, we used the \texttt{fuzzywuzzy} library\cite{fuzzywuzzy} for this. \textsc{Accuracy}, which checks for an exact match between the predicted value and the ground truth, is affected by minor OCR errors (such as recognizing ``0'' as ``O''). We include the \textsc{Mean-Fuzzy-Match} metric to see how our model would perform in cases where an exact match isn't required. \subsection{Result} The results of our model on the 560 test invoices are shown in Table~\ref{tab:result}. We can see that \textsc{Mean-Fuzzy-Match} is significantly greater than \textsc{Accuracy}, implying that our model can achieve higher accuracy if the minor OCR errors are corrected by post-processing. For instance, the buyer and seller names can be matched against a lookup table.
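The two evaluation metrics can be written compactly in code; \texttt{SequenceMatcher} is used here as a stand-in (an assumption of this sketch) for the \texttt{fuzzywuzzy} score:

```python
from difflib import SequenceMatcher

def fuzzy_match(pred: str, truth: str) -> float:
    """Stand-in for the fuzzywuzzy score, in [0, 1]."""
    return SequenceMatcher(None, pred, truth).ratio()

def evaluate(preds, truths):
    """Return (Mean-Fuzzy-Match, Accuracy) over paired predictions."""
    n = len(truths)
    mfm = sum(fuzzy_match(p, t) for p, t in zip(preds, truths)) / n
    acc = sum(p == t for p, t in zip(preds, truths)) / n
    return mfm, acc
```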
Similarly, dates and amounts can be canonicalized to eliminate format discrepancies, and other texts can be normalized by trimming whitespace, converting to lowercase, and so on. \begin{table}[ht] \caption{The performance of our model.} \label{tab:result} \centering \begin{tabular}{lcc} \toprule \textbf{Field} & \textbf{\textsc{Accuracy}} & \textbf{\textsc{Mean-Fuzzy-Match}}\\ \midrule Invoice number~~~~~~~~ & 79.2 & 80.7\\ Date & 86.4 & 89.4 \\ Seller & 91.5 & 93.8 \\ Buyer & 90.2 & 94.1 \\ Total due & 84.7 & 88.2 \\ \midrule \textbf{Overall} & \textbf{86.4} & \textbf{89.2}\\ \bottomrule \end{tabular} \end{table} Given that our model doesn't require per-template rules and needs very few training samples, and combined with our easy review tool, over 86\% accuracy can significantly reduce the time, cost, and effort it takes businesses to process documents. \section{Conclusion and Future Work}\label{sec:conclusion} In this paper, we presented a new way of solving the problem of automatic data capture from documents. Requiring only one example per template, our method is very effective for datasets with recurring templates. This research still has several areas for improvement. First of all, it can't handle multi-page documents; future work can attempt to tackle this. In addition, our model, which currently looks at only one saved sample to predict the outputs, could be made to predict based on all saved samples of a given template to generalize better and improve overall accuracy. Further research could also make it work with recurring fields such as table line items.
\section{Introduction}\label{sec:introduction} The Mott insulator (MI) to superfluid (SF) quantum phase transition in the generic Bose-Hubbard model\cite{PhysRevB.40.546} has attracted a lot of attention in recent years due to the progress in experiments on cold atomic gases in optical lattices.\cite{Gr.Ma.Es.Ha.Bl.02} More recently, there have also been significant advances in the coherent coupling of single atoms and cold atomic gases to cavity radiation (cavity quantum electrodynamics).\cite{BiBoMiBoNoKi05,BrDoRiBoKoEs07} A clean realization of the Jaynes-Cummings Hamiltonian has been achieved by coupling a superconducting qubit to a microwave cavity.\cite{Fi.Go.Ba.Bi.Le.Bl.Wa.08} On the theory side, multi-component Bose gases coupled to light have \eg been shown to support a superradiant Mott insulator phase with polariton condensation.\cite{BhHoSiSi08} In parallel, several theoretical proposals have shown the possibility of having a state of strongly correlated photons or polaritons in solid-state systems of coupled cavity arrays (also referred to as polariton models or Jaynes-Cummings-Hubbard models),\cite{GrTaCoHo06,HaBrPl06} and a review of work along these lines has been given.\cite{Ha.Br.Pl.08} The possibility of preparing a system of photons in a Mott state with one photon per site is a promising starting point for quantum information processing. An important feature shared with cold atomic gases coupled to light is the composite nature of the polaritons. Particularly attractive properties of cavity arrays would include accessibility of local properties in measurements and scalability in terms of size. 
Perhaps the most likely candidate for setting up such a model experimentally is based on extending the work on superconducting qubits to arrays.\cite{Ko.LH.09,Fi.Go.Ba.Bi.Le.Bl.Wa.08} In contrast to cold atomic gases, where the interaction and/or hopping strength can be varied, the phase transition may be observed by changing the detuning between the two-level system and the resonator. Analysis of coupled cavity models is fruitful in its own right, as a detailed understanding of the corresponding models offers insight into strongly correlated polariton systems. An important aspect of such studies is the extent to which such systems resemble the familiar Bose-Hubbard physics. From the above examples and many more in the literature, it is apparent that interacting boson systems on a lattice are of great interest for the progress of both theory and experiment. Compared to Bose fluids, the lattice changes the physics in several respects. Although long-range phase coherence still gives rise to phonon excitations---despite the breaking of translational symmetry---the quenching of the kinetic energy makes the system much more strongly correlated.\cite{Zw03} Besides, the lattice allows the formation of incompressible MI states with the same integer particle number at each site. A large amount of work has been devoted to detailed studies of the Bose-Hubbard model, leading to a wealth of knowledge with and without additional complications such as trapping potentials or disorder. However, the dynamical properties, and in particular the excitations of the SF phase in the vicinity of the quantum phase transition, are still not completely understood.
A number of authors have addressed the dynamics of the Bose-Hubbard model in different dimensions, \cite{ 0295-5075-22-4-004, PhysRevB.59.12184, Ku.Wh.Mo.00, Ro.Bu.04, Ba.As.Sc.De.05, Se.Du.05, KoDu06, Hu.Al.Bu.Bl.07, CaSa.GuSo.Pr.Sv.07, capogrosso-sansone:134302, Oh.Pe.08, Me.Tr.08} with results providing valuable information about the underlying physics, while corresponding work on coupled cavity models has just begun.\cite{Ai.Ho.Ta.Li.08,Sc.Bl.09} The two most important dynamic observables are the dynamic structure factor and the single-particle spectral function, which are also at the heart of theoretical and experimental works on Bose fluids.\cite{Griffin93} Experimentally, the dynamic structure factor may be measured by Bragg spectroscopy or lattice modulation (in cold atomic gases) as well as by neutron scattering (in liquid helium), and single-particle excitations of optical solid-state systems are accessible by means of photoluminescence measurements. Whereas the standard Bose-Hubbard model only supports MI and SF phases, the physics of the polariton models is slightly richer. Owing to the composite nature of the conserved particles (polaritons), these phases can either be of polaritonic, excitonic or photonic character\cite{Ai.Ho.Ta.Li.08,irish_polaritonic_2008,Le.Li.08,Ir.09} with distinct dynamic properties. Which of the cases is realized depends on the value of the detuning between the cavity mode and the transition frequency of the atoms that mediate polariton repulsion. Very recently it has been proposed that the fractional quantum Hall effect may also be realized in coupled cavity arrays.\cite{Ch.An.Bo.08} In general, accurate and unbiased results are very hard to obtain. Most existing work on spectral properties in the Mott phase is based on mean-field and/or strong-coupling approximations, in which fluctuations of the particle numbers are more or less restricted. 
Results of extensive strong coupling expansions for the phase diagram \cite{Fr.Mo.96,PhysRevB.59.12184} do, however, agree very well with precise density-matrix renormalization group (DMRG)\cite{Ku.Wh.Mo.00} and quantum Monte Carlo (QMC) results.\cite{capogrosso-sansone:134302,CaSa.GuSo.Pr.Sv.07} Bogoliubov type descriptions have been found to accurately describe the SF phase only in the limit of weak interaction, and fail to account for the transition to a MI and correlation features in the SF close to the transition. Hence the most interesting (and most difficult) regime is that near the quantum phase transition, where quantum fluctuations and correlation effects cannot be neglected. In one dimension (1D), quantum fluctuation effects are particularly pronounced and mean-field methods are in general insufficient. Notable exceptions include situations where coupling to additional degrees of freedom provides an effective long-range interaction.\cite{BhHoSiSi08} An interesting aspect of 1D is that for strong (repulsive) interaction, fermions and bosons behave in a very similar way, and that the low-energy, long-wavelength physics is described by the Luttinger liquid model.\cite{PhysRevLett.93.210401} In the present paper we employ the directed loop quantum Monte Carlo method,\cite{SySa02} which is exact and therefore yields unbiased results also in difficult parameter regimes. Importantly, our simulations preserve the full quantum dynamics. Few nonperturbative results are available for the spectra in the Bose-Hubbard model, namely for the dynamical conductivity,\cite{Ku.Wh.Mo.00} for the dynamic structure factor $S(k,\omega)$\cite{Ro.Bu.04,Ba.As.Sc.De.05} on small systems, and for the single-boson spectral function $A(k,\om)$ in the Mott phase deduced from small systems,\cite{KoDu06} all in 1D. 
For the polariton model considered here, only $A(k,\om)$ in the Mott phase has been calculated.\cite{Ai.Ho.Ta.Li.08} The focus of our work is therefore on the calculation of excitation spectra for both the Bose-Hubbard model and the polariton model within and around the first Mott lobe (\ie, the lobe with density one), for which comparison to recent analytical and numerical results is made. Other issues addressed include the sound velocity in the SF phase, particle and hole masses, as well as temperature and detuning effects for the case of the polariton model. Our simulations are performed at low but finite temperatures. On one hand, this complicates the analysis of the results, but on the other hand it matches the experimental situation.\cite{Griffin93,Griffin98} The paper is organized as follows. In Sec.~\ref{sec:model} we introduce the two models considered. Section~\ref{sec:method} contains some details about the method. Results are discussed in Sec.~\ref{sec:results}, and in Sec.~\ref{sec:conclusions} we present our conclusions. \section{Models}\label{sec:model} The polariton model we consider is the simplest among several recent proposals.\cite{GrTaCoHo06,HaBrPl06,AnSaBo07,hartmann:070602,Ha.Br.Pl.08} It describes an array of $L$ optical microcavities, each of which contains a single two-level atom with states $\ket{\DO}$, $\ket{\UP}$ separated by energy $\epsilon$. Within the rotating wave approximation one such cavity is represented by the Jaynes-Cummings Hamiltonian\cite{Ja.Cu.63} ($\hbar=1$) \begin{eqnarray}\label{eq:JC}\nonumber \hat{H}^{\text{JC}}_i &=& \epsilon \ket{\UP_i}\bra{\UP_i} + \omega_0 a^\dag_i a^{\phantom{\dag}}_i \\ &&+ g (\ket{\UP_i}\bra{\DO_i} a^{\phantom{\dag}}_i + \ket{\DO_i}\bra{\UP_i} a^\dag_i) \,. \end{eqnarray} Here $\omega_0$ is the cavity photon energy, and $\Delta=\epsilon-\omega_0$ defines the detuning. 
The atom-photon coupling $g$ ($a^\dag_i$, $a^{\phantom{\dag}}_i$ are photon creation and annihilation operators) gives rise to the formation of polaritons (combined atom-photon or exciton-photon excitations). Allowing for nearest-neighbor photon hopping between cavities with amplitude $t$ leads to the lattice Hamiltonian \begin{eqnarray}\label{eq:ham_PM} \hat{H}^{\text{PM}} &=& -t\sum_{\las i,j\ras} a^\dag_i a^{\phantom{\dag}}_j + \sum_i \hat{H}^{\text{JC}}_i - \mu \hat{N}_\text{p} \,. \end{eqnarray} The conserved polariton number $\hat{N}_\text{p}=\sum_i \hat{n}_{\text{p},i}$, with $\hat{n}_{\text{p},i}= a^\dag_i a^{\phantom{\dag}}_i + \ket{\UP_i}\bra{\UP_i}$, is determined by the chemical potential $\mu$.\cite{Ma.Co.Ta.Ho.Gr.07} Polaritons experience an effective repulsion $U_\text{eff}(n_\mathrm{p})$ [see Eq.~(\ref{eq:Ueff})] due to the nonlinear dependence of the single-site energy on the local occupation number $n_\mathrm{p}$. We use $g$ as the unit of energy and set $\omega_0/g$, $k_\text{B}$ and the lattice constant equal to unity. The rotating wave approximation becomes unjustified for $g$ comparable to $\en$. The motivation for setting $g=\en$ is direct comparison to previous work. The Hamiltonian~(\ref{eq:ham_PM}) has been studied in [\onlinecite{GrTaCoHo06,Ma.Co.Ta.Ho.Gr.07,AnSaBo07,Ro.Fa.07,Ai.Ho.Ta.Li.08,rossini_photon_2008,Ha.Br.Pl.08,Le.Li.08,irish_polaritonic_2008,Ir.09,Sc.Bl.09,Ko.LH.09}]. We also consider the Bose-Hubbard Hamiltonian \begin{equation}\label{eq:ham_BHM} \hat{H}^{\text{BHM}} = -t\sum_{\las i,j\ras} b^\dag_i b^{\phantom{\dag}}_j + \frac{U}{2}\sum_i n_i (n_i-1) - \mu \hat{N} \,, \end{equation} describing soft-core bosons with repulsion $U$ and hopping $t$. Here $\hat{N}=\sum_i \hat{n}_i=\sum_i b^\dag_i b^{\phantom{\dag}}_i$ is the total number of bosons, and we use $U$ as the unit of energy.
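As a purely illustrative cross-check of Eq.~(\ref{eq:ham_BHM}), the following numpy sketch builds the Bose-Hubbard Hamiltonian for a tiny periodic chain in the occupation-number basis and verifies that it is Hermitian and commutes with the total boson number $\hat{N}$. The system size, boson cutoff and parameter values are arbitrary choices for the example, far below those used in the simulations.

```python
import itertools
import numpy as np

def bose_hubbard(L, n_max, t, U, mu):
    """Dense Bose-Hubbard Hamiltonian for a periodic chain of L sites with at
    most n_max bosons per site (illustrative sketch only)."""
    basis = list(itertools.product(range(n_max + 1), repeat=L))
    index = {b: i for i, b in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for i, b in enumerate(basis):
        # Diagonal part: (U/2) n(n-1) - mu n on every site.
        H[i, i] = sum(0.5 * U * n * (n - 1) - mu * n for n in b)
        # Hopping -t (b_j^dag b_{j+1} + h.c.) with periodic boundaries.
        for j in range(L):
            k = (j + 1) % L
            for src, dst in ((j, k), (k, j)):
                if b[src] > 0 and b[dst] < n_max:
                    c = list(b)
                    c[src] -= 1
                    c[dst] += 1
                    H[index[tuple(c)], i] += -t * np.sqrt(b[src] * (b[dst] + 1))
    return H, basis

H, basis = bose_hubbard(L=3, n_max=2, t=0.1, U=1.0, mu=0.5)
N = np.diag([sum(b) for b in basis])   # total boson number operator
assert np.allclose(H, H.T)             # Hermitian
assert np.allclose(H @ N, N @ H)       # [H, N] = 0: N is conserved
```

The same construction with $\hat{H}^{\text{JC}}_i$ on the diagonal would yield the polariton Hamiltonian~(\ref{eq:ham_PM}).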
As an alternative to the spin language used here, the polariton model~(\ref{eq:ham_PM}) can be written as a two-band Bose-Hubbard Hamiltonian;\cite{koch:042319} one boson species is itinerant, whereas the other is immobile (corresponding to localized excitons) with a hard-core constraint. This correspondence provides a direct connection to recent work on cold atomic gases in optical lattices, with the natural extension to the case where the excitons are mobile as well.\cite{BhHoSiSi08} We shall see below that owing to the composite nature of the bosonic particles in the polariton model, it is generally easier to understand the features of the Bose-Hubbard model first, and then explore similarities to the polariton model. Moreover, analytical approximations are more readily available for the Bose-Hubbard model and provide insight into the numerical data. Periodic boundary conditions in real space are applied in all simulations, and the system size is denoted as $L$. \section{Method}\label{sec:method} We use the directed loop method,\cite{SySa02} a generalization of the loop algorithm,\cite{loop_evertz_93,HGE03} which has no systematic errors and is efficient (low autocorrelations), facilitating the simulation of large systems at low temperatures. We make use of the ALPS library~\cite{ALPS_I,ALPS_II} and of the ALPS applications,\cite{ALPS_DIRLOOP} which use the stochastic series expansion (SSE) representation\cite{SandvikSSE} of worldline path integrals. We have verified that we obtain the correct phase boundary in 1D for selected points in parameter space. In contrast to most previous QMC calculations of the Bose-Hubbard model, the focus of the current paper is on dynamical properties. 
The SSE representation has the drawback that dynamical correlation functions in imaginary time, which we need to obtain spectra, are very inefficient to calculate, since they involve a convolution of Green functions at different SSE distances.\cite{DorneichT01} On the other hand, Green functions can be measured easily in an imaginary time representation. For this reason we invert the mapping from continuous time to SSE\cite{interaction_representation__sandvik__PRB97} when measuring Green functions. To each operator in a given SSE operator string we associate a time $\tau \in [0, \beta]$ which is stochastically sampled from a uniform distribution. This maps the SSE configuration into a worldline configuration in continuous imaginary time.\cite{Spin_peierls_franzi_07} Correlation functions of diagonal operators can then be measured directly. For example, in the case of $\langle\hat{\rho}_i(\tau)\hat{\rho}_j(0)\rangle$ we evaluate the density $\hat{\rho}_i(\tau)$ on a fine time grid. This time discretization limits the high energy range of the Green function, but does not introduce any discretization error to the QMC algorithm itself. With the Fourier transformation of the density $\mathcal{F}(\hat{\rho}_i(\tau)) = \hat{\rho}_{k,\omega}$, we measure the correlation function $\mathcal{F}(\braket{\hat{\rho}_i(\tau)\hat{\rho}_j(0)}) = \braket{\hat{\rho}_{k,\om}\hat{\rho}_{-k,-\om}}$ using fast Fourier transforms. The evaluation of off-diagonal single-particle correlation functions of the form $\langle\psi_i^{\phantom{\dag}}(\tau) \psi_0^\dag(0)\rangle$ requires some care. We again make use of the worldline picture, in which two operators $\psi^\dag$ and $\psi$ are inserted whenever a new loop update starts. Let us assume that $\psi$ moves around (loop head) while $\psi^\dagger$ is pinned (loop tail).
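The FFT-based measurement rests on the convolution (Wiener-Khinchin) theorem: the Fourier transform of the two-point density correlator equals the squared modulus of $\hat{\rho}_{k,\om}$. A minimal numpy sketch, with a random array standing in for measured worldline densities $\hat{\rho}_i(\tau_m)$ on the time grid:

```python
import numpy as np

rng = np.random.default_rng(0)
L, M = 8, 16                     # sites x imaginary-time grid points
rho = rng.normal(size=(L, M))    # stand-in for measured densities rho_i(tau_m)

# Fourier side: |rho~(k,w)|^2, transformed back, gives the correlator.
rho_kw = np.fft.fft2(rho)
corr_fft = np.fft.ifft2(np.abs(rho_kw) ** 2).real / (L * M)

# Direct circular correlation C(d,s) = (1/LM) sum_{i,m} rho_{i+d,m+s} rho_{i,m}
corr_direct = np.zeros((L, M))
for d in range(L):
    for s in range(M):
        corr_direct[d, s] = np.mean(np.roll(np.roll(rho, -d, 0), -s, 1) * rho)

assert np.allclose(corr_fft, corr_direct)
```

The FFT route costs $O(LM\log LM)$ instead of $O((LM)^2)$ for the double sum, which is what makes the measurement cheap on fine grids.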
The time and position of the loop tail are set as the new origin of our coordinate system, and we store the values $\langle\alpha | \psi_i^{\phantom{\dag}}(\tau) \psi_0^\dag(0) |\beta\rangle$ whenever the loop head $\psi_i(\tau)$ crosses a point on the time grid with distance $(i,\tau)$ from the new origin. Here $\ket{\alpha}$, $\ket{\beta}$ are the states in the world line configuration prior to the arrival of the loop head. We then again use fast Fourier transformation to evaluate the correlation functions in Fourier space. Let us now define the observables of interest. The quantum phase transition can be detected by calculating the superfluid density $\rho_\text{s}$, measured in the simulations in terms of the spatial winding number $w$ as $\rho_\mathrm{s} = L\las w^2\ras/\beta$,\cite{PhysRevB.36.8343,prokofev_two_2000} $\beta=1/k_\text{B}T$ being the inverse temperature. Another important observable in the context of the MI-SF transition is the total density, $n=\las\hat{N}\ras/L$ in the Bose-Hubbard model, and $n_\mathrm{p}=\las\hat{N}_\mathrm{p}\ras/L$ in the polariton model. Concerning dynamical properties, we compute the dynamic structure factor $S(k,\om)$ and the single-particle spectral function $A(k,\om)$. The dynamic structure factor at momentum $k$ and energy $\om$ is given by \begin{eqnarray} S(k,\om) &=& \frac{1}{2\pi L}\int_{-\infty}^\infty d \tau e^{\text{i} \om \tau} \braket{\hat{\rho}_k(\tau){\hat{\rho}^{\dag}_k}(0)} \\\nonumber &=& \frac{1}{ L} \sum_{n,m} \frac{e^{-\beta E_n}}{Z} \left|\bra{m}{\hat{\rho}^{\dag}}_k\ket{n}\right|^2 \delta[\om-(E_m-E_n)] \,, \end{eqnarray} with the grand-canonical partition function $Z$ and the energy of the $n$th eigenstate $E_n$.
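The winding-number estimator for $\rho_\text{s}$ is a one-liner; the sketch below applies it to hypothetical winding samples (the sample values, $L$ and $\beta$ are made up for illustration, not taken from our runs):

```python
import numpy as np

def superfluid_density(windings, L, beta):
    """1D estimator rho_s = L <w^2> / beta from sampled spatial winding numbers."""
    w = np.asarray(windings, dtype=float)
    return L * np.mean(w ** 2) / beta

# Hypothetical winding-number samples from a QMC run:
samples = [0, 1, -1, 0, 2, 0, -1, 1]
rho_s = superfluid_density(samples, L=64, beta=192.0)
assert abs(rho_s - 64.0 * 1.0 / 192.0) < 1e-12   # <w^2> = 1 for these samples
```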
In our simulations, $S(k,\om)$ is obtained from \begin{equation}\label{eq:skw} \braket{\hat{\rho}_k(\tau) \hat{\rho}_{-k}(0)} = \int d\omega S(k,\omega) \frac{e^{-\tau \omega}}{1 + e^{-\omega \beta}} \end{equation} by means of the maximum entropy method. For the Bose-Hubbard model, the density operator $\hat{\rho}_i=\hat{n}_i$, and $\rho^\dag_k=\sum_q b^\dag_{q+k} b_q$. For the polariton model, we can calculate the dynamic structure factor for photons [$S^\text{ph}(k,\om)$], atoms [$S^\text{at}(k,\om)$] or polaritons [$S(k,\om)$] by using \begin{equation} \hat{\rho}_i = \begin{cases} a^\dag_i a^{\phantom{\dag}}_i & \text{for\;photons}\,,\\ \ket{\UP_i}\bra{\UP_i} & \text{for\;atoms}\,,\\ a^\dag_i a^{\phantom{\dag}}_i + \ket{\UP_i}\bra{\UP_i} & \text{for\;polaritons}\,, \end{cases} \end{equation} respectively. The single-particle spectral function is defined as \begin{eqnarray} A(k,\om) &=& -\frac{1}{\pi} \Im\, \las\las \hat{\psi}^{\phantom{\dag}}_k;\hat{\psi}_k^\dag\ras\ras_\om \\\nonumber &=& \sum_{n,m} \frac{e^{-\beta E_n}}{Z} \left|\bra{m}{\hat{\psi}^{\dag}}_k\ket{n}\right|^2 \delta[\om-(E_m-E_n)] \,, \end{eqnarray} where the real-space operator $\hat{\psi}_i$ entering the Green function is given by $\hat{\psi}_i=b_i$ for the Bose-Hubbard model, and by $\hat{\psi}_i=a_i$ for the polariton model. The maximum entropy method is again used to map to real frequencies. The QMC algorithm samples the partition function in the grand canonical ensemble. However, using only those configurations which have a given number of polaritons enables us to measure observables in the canonical ensemble as well. Here this simple but powerful trick permits us to study the fixed-density phase transition which occurs in the polariton model as a function of $t/g$. The SSE representation requires setting a maximum boson number per site. In the Bose-Hubbard model, we allow a maximum of six bosons per site.
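For illustration, the forward model that the maximum entropy method inverts in Eq.~(\ref{eq:skw}) can be written down directly. The sketch below also checks the kernel identity $K(\tau,-\om)=K(\beta-\tau,\om)$, which ties the negative-frequency part of $S(k,\om)$ to the data near $\tau=\beta$; the trial spectrum and grids are arbitrary choices for the example.

```python
import numpy as np

def kernel(tau, omega, beta):
    """Finite-temperature kernel of Eq. (eq:skw): e^{-tau w} / (1 + e^{-beta w})."""
    return np.exp(-tau * omega) / (1.0 + np.exp(-beta * omega))

beta = 10.0
omegas = np.linspace(-4.0, 4.0, 161)
taus = np.linspace(0.0, beta, 51)

# Forward model: given a trial spectrum S(w), predict the imaginary-time data
# that the maximum entropy method fits (a hypothetical Gaussian peak here).
S_trial = np.exp(-(omegas - 1.0) ** 2)
K = kernel(taus[:, None], omegas[None, :], beta)
G_tau = K @ S_trial * (omegas[1] - omegas[0])   # discretized integral over w

# Kernel identity K(tau, -w) = K(beta - tau, w): negative frequencies are
# encoded in the correlator near tau = beta.
assert np.allclose(kernel(taus[:, None], -omegas[None, :], beta),
                   kernel(beta - taus[:, None], omegas[None, :], beta))
```

The inverse problem (recovering $S$ from noisy $G_\tau$) is ill-posed, which is why the regularized maximum entropy inversion is needed in practice.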
In the polariton model we allow from six (Mott insulator, fixed-density transition) up to 16 (SF phase) photons per site. Convergence has been monitored by plotting histograms of the photon number distribution, and the cut-offs have been chosen generously such that there was no truncation error. \section{Results}\label{sec:results} We begin with a review of the properties of the Bose-Hubbard model and the polariton model as they emerge from previous work. Whereas a substantial literature exists for the Bose-Hubbard model, work on the polariton model began only recently, based on mean-field theory,\cite{GrTaCoHo06,Ko.LH.09} exact diagonalization,\cite{Ma.Co.Ta.Ho.Gr.07} the DMRG,\cite{Ro.Fa.07} the variational cluster approach,\cite{Ai.Ho.Ta.Li.08} QMC,\cite{Zh.Sa.Ue.08} and strong coupling theory.\cite{Sc.Bl.09} Our discussion focuses on 1D, and follows Fisher \etal\cite{PhysRevB.40.546} and K\"uhner \etal\cite{Ku.Wh.Mo.00} The {\em Bose-Hubbard model} describes the competition of kinetic energy and local, repulsive interaction. Depending on the ratio $t/U$ and the density of bosons $n$ (the system is superfluid for any $t>0$ if $n$ is not an integer), the Bose-Hubbard model at temperature $T=0$ is either in a MI state or in a SF state. The MI is characterized by an integer particle density, phase fluctuations and a gap in the single-particle excitation spectrum. In the SF phase, we have significant density fluctuations, phase coherence, and nonzero superfluid density $\rho_\text{s}$, as well as gapless (phonon) excitations with linear dispersion at small $k$. For the case of one dimension considered here, a precise zero-temperature phase diagram in the $\mu/U,t/U$ plane has been determined by K\"uhner \etal,\cite{Ku.Wh.Mo.00} and these data are shown in Fig.~\ref{fig:phasediagrams}(a). There exists a Mott lobe inside which the density $n=1$ (higher lobes with integer $n>1$ are not shown), and which is surrounded by the SF phase.
There are two qualitatively different ways to make a transition from the MI to the SF.\cite{PhysRevB.40.546} The generic MI-SF transition is driven by addition or subtraction of small numbers of particles to the incompressible MI phase, the total energy cost for which is given by the distance in $\mu$-direction from the nearest phase boundary. Since additional particles or holes (which Bose condense at $T=0$) can move freely, the gain in kinetic energy can outweigh the interaction energy, leading to the MI-SF transition. Across the generic transition, which is mean-field like in character, the density varies continuously and the single-particle gap closes linearly as a function of the distance from the phase boundary, $E_\text{g}\propto\delta$, where $\delta=t-t_\text{c}$ or $\mu-\mu_\text{c}$ is the distance from the phase boundary.\cite{PhysRevB.40.546} \begin{figure} \includegraphics[width=0.4\textwidth]{pd_bhm}\\ \includegraphics[width=0.4\textwidth]{pd_jcm} \caption{\label{fig:phasediagrams} Zero-temperature phase diagram for (a) the Bose-Hubbard model and (b) the polariton model in 1D. We only show the Mott lobes with density one. These DMRG results were obtained by (a) K\"uhner \etal\cite{Ku.Wh.Mo.00} and (b) Rossini \etal\cite{Ro.Fa.07}} \end{figure} There also exists a MI-SF transition at fixed density, driven by the onset of boson hopping due to the increase of the ratio $t/U$, \ie by quantum fluctuations. It has been shown that this transition occurs at the tip of the Mott lobe, and that it has a different universality class than the generic transition.\cite{PhysRevB.40.546} In $d$ dimensions, the universality class is that of the $(d+1)$ dimensional $XY$ model, so that in 1D there is a Kosterlitz-Thouless phase transition at the multicritical point. 
For this case, the Mott gap $E_\text{g}\propto\exp(-\mathrm{const}/\sqrt{t_\mathrm{c}-t})$ closes exponentially (\ie, very slowly) as a function of the distance from the lobe tip,\cite{PhysRevB.59.12184} and strong deviations from the parabolic lobes predicted by mean-field theory\cite{PhysRevB.40.546} are observed in both strong-coupling\cite{0295-5075-26-7-012,Fr.Mo.96,PhysRevB.59.12184} and DMRG results.\cite{PhysRevB.58.R14741} Another remarkable aspect of the 1D case is the occurrence of multiple MI-SF transitions along lines of constant chemical potential over an extended range of $\mu/U\lesssim0.2$ [see Fig.~\ref{fig:phasediagrams}(a)].\cite{PhysRevB.58.R14741,Ku.Wh.Mo.00} The {\em polariton model} also shows a series of Mott lobes, in which the polariton density $n_\text{p}$ is pinned to an integer (see Fig.~\ref{fig:phasediagrams}(b) for the phase boundaries of the $n_\text{p}=1$ lobe obtained by DMRG\cite{Ro.Fa.07}). Even for pinned $n_\text{p}$ the photon and exciton densities can fluctuate. Deep in the Mott phase and for $n_\text{p}\geq1$, we can approximate the ground state by a product over single sites, each of which is described by the Jaynes-Cummings eigenstates (see, \eg, [\onlinecite{Sc.Bl.09}]) \begin{eqnarray}\nonumber\label{eq:eigenstates} \ket{n_\text{p},-} &=& \cos\theta(n_\text{p})\ket{n_\text{p},\DO}-\sin\theta(n_\text{p})\ket{n_\text{p}-1,\UP}\,,\\ \ket{n_\text{p},+} &=& \sin\theta(n_\text{p})\ket{n_\text{p},\DO}+\cos\theta(n_\text{p})\ket{n_\text{p}-1,\UP}\,, \end{eqnarray} where $\tan\theta(n_\text{p})=2g\sqrt{n_\text{p}}/[2\chi(n_\text{p})-\Delta]$, $\chi(n_\text{p})=\sqrt{g^2n_\text{p}+\Delta^2/4}$, and with eigenvalues $E^\pm(n_\text{p})=-(\mu-\om_0)n_\text{p}+\Delta/2\pm\chi(n_\text{p})$.
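These single-site expressions are easy to verify numerically: diagonalizing the $2\times2$ Jaynes-Cummings block reproduces $E^\pm(n_\text{p})$, and a second difference of the ground-state energies gives the effective repulsion, which at zero detuning evaluates to $U_\text{eff}(1)=(2-\sqrt{2})g$. The sketch below uses the second-difference definition of $U_\text{eff}$, a common convention; the precise form of Eq.~(\ref{eq:Ueff}) is not reproduced here.

```python
import numpy as np

def jc_pair(n_p, g=1.0, omega0=1.0, delta=0.0, mu=0.0):
    """Eigenvalues [E^-, E^+] of the Jaynes-Cummings block (including -mu n_p)
    in the sector spanned by |n_p, down> and |n_p - 1, up>, for n_p >= 1."""
    eps = omega0 + delta                      # Delta = eps - omega_0
    H = np.array([[omega0 * n_p,      g * np.sqrt(n_p)],
                  [g * np.sqrt(n_p),  eps + omega0 * (n_p - 1)]])
    return np.sort(np.linalg.eigvalsh(H - mu * n_p * np.eye(2)))

def jc_analytic(n_p, g=1.0, omega0=1.0, delta=0.0, mu=0.0):
    """E^{+/-}(n_p) = -(mu - omega_0) n_p + Delta/2 +/- chi(n_p) as quoted above."""
    chi = np.sqrt(g * g * n_p + 0.25 * delta * delta)
    base = -(mu - omega0) * n_p + 0.5 * delta
    return np.array([base - chi, base + chi])

for n_p in (1, 2, 3):
    assert np.allclose(jc_pair(n_p, delta=0.3, mu=0.2),
                       jc_analytic(n_p, delta=0.3, mu=0.2))

# Effective repulsion as a second difference of ground-state energies
# (one common convention); E^-(0) = 0 is the empty cavity. With g = 1,
# omega0 = 1, mu = 0 and zero detuning: U_eff(1) = 2 - sqrt(2).
E1, E2 = jc_pair(1)[0], jc_pair(2)[0]
U_eff = E2 - 2.0 * E1 + 0.0
assert abs(U_eff - (2.0 - np.sqrt(2.0))) < 1e-12
```

The decrease of $\chi(n_\text{p})\propto\sqrt{n_\text{p}}$ differences with $n_\text{p}$ is what makes $U_\text{eff}(n_\mathrm{p})$, and hence the lobe widths, shrink for higher occupations.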
Hence for fixed polariton number $n_\text{p}$, the ground state $\ket{n_\text{p},-}$ is a coherent superposition of two states which differ by the state of the atom (or spin) as well as the number of photons; this hybridization provides the connection to exciton polaritons. The extent of the lobes in both the $\mu$ and $t$ directions diminishes quickly with increasing $n_\text{p}$ due to the reduced polariton-polariton repulsion $U_\mathrm{eff}(n_\mathrm{p})$; the $t=0$ vertical width of the lobes in the Bose-Hubbard model is always $U$. At large values $\zeta t>\om-\mu$ ($\zeta$ being the coordination number), beyond those considered in the present work, the polariton model shows an instability.\cite{Ko.LH.09} In this work we restrict our discussion to the region in the phase diagram in or close to the Mott lobes with density $n_\mathrm{p}=1$ or $n=1$. This lobe is the largest in the polariton model with zero detuning, and quantum effects are most pronounced. A density of one is also the most interesting case for experimental realizations.\cite{GrTaCoHo06,Ai.Ho.Ta.Li.08} All the discussion so far has been for $T=0$. Both experiments and our simulations are carried out at low but finite temperatures, with several important consequences. Strictly speaking, there is no true MI at $T>0$ due to thermal excitations. However, there exist quasi-MI regions which have finite but very small compressibility (see also the discussion of temperature effects later). As long as the density remains close to an integer, these regions may be regarded as Mott insulating. Corresponding ``phase diagrams'' at finite $T$ have been obtained for both the polariton and the Bose-Hubbard model.\cite{gerbier:120405,Ai.Ho.Ta.Li.08,Oh.Pe.08} Except for our analysis of temperature effects in Sec.~\ref{sec:results}, the simulations have been carried out at values of $\beta=3L$, large enough to ensure that we have an (almost) integer density in the Mott phase. 
The Bose-Hubbard model in more than one dimension (and most likely the polariton model as well) exhibits a phase transition from a SF to a normal state (gapless with no phase coherence), related to the well-known $\lambda$ transition in liquid helium, at a temperature $T_\lambda$.\cite{CaSa.GuSo.Pr.Sv.07,capogrosso-sansone:134302} This gives rise to an intervening normal region in the phase diagram, between the MI (at small $t/U$) and the SF (at large $t/U$).\cite{gerbier:120405} In the 1D case considered here we have $T_\lambda=0$, so that for any $T\neq0$ only quasi-MI and normal states exist in the thermodynamic limit. However, when the temperature is so low that the SF correlation length in the thermodynamic limit far exceeds the system size $L$, results will be representative of the SF state. Making use of finite size and finite temperature effects, a scaling analysis in fact yields accurate results for the $T=0$ phase boundaries.\cite{alet:024513,Zh.Sa.Ue.08} Remarkably, interacting 1D bosons can be realized using cold atomic gases (the Tonks-Girardeau gas)\cite{Pa.Wi.Mu.Va.Ma.Fo.Ci.Sh.Ha.Bl.04,Fa.Cl.Fa.Fo.Mo.vdS.In.09} and are described by the Bose-Hubbard model at low but finite temperatures.\cite{PhysRevLett.93.210401} Similar to Bose fluids, the low-energy excitations in the SF phase are phonons. Within Bogoliubov theory,\cite{Bogolyubov_superfluidity_1947} these quasiparticles are described by a creation operator $\psi^\dag_k = \mathsf{u}_k b^\dag_k + \mathsf{v}_k b^{\phantom{\dag}}_{-k}$, and they have been observed experimentally in ultracold atom systems.\cite{PhysRevLett.88.060402} As some of our results can be understood in terms of Bogoliubov theory, let us state some key results for the Bose-Hubbard model. 
The coefficients of the coherent superpositions of particle and hole excitations are given by\cite{rey_bogoliubov_2003} \begin{equation} \label{eq:boson_weights} \begin{split} |\mathsf{u}_k|^2 = & \frac{K(k)+n_0 U + \omega_k}{2 \omega_k}\,, \\ |\mathsf{v}_k|^2 = & \frac{K(k)+n_0 U - \omega_k}{2 \omega_k} = |\mathsf{u}_k|^2-1\,, \end{split} \end{equation} with excitation energy \begin{align} \label{eq:boson_energies} \omega_k & = \sqrt{K(k)(2 n_0 U + K(k))}\,, \\ K(k) & = 4 t \sin^2(k/2)\,. \nonumber \end{align} Here $n_0$ is the condensate fraction, equal to $n_0 = (\mu+t)/U$ in the simple Bogoliubov approach at $T=0$.\cite{Me.Tr.08} For small $k\approx0$, we have a linear dispersion $\omega_k \approx \pm \sqrt{2 n_0 t U} k$, and both $|\mathsf{u}_k|$ and $|\mathsf{v}_k|$ are nonzero. For large $k\approx\pi$, the energy dispersion is $\pm[2 \sqrt{4 t^2+2 n_0 U t} - c (k-\pi)^2]$ and thus free particle like. If we assume $t \gg U$, which is the parameter region where Bogoliubov theory is valid, then $|\mathsf{u}_k|^2 \approx 1$ and $|\mathsf{v}_k|^2 \approx 0$ for $k$ well away from zero, \ie only one excitation branch is populated at large momenta. This also holds true for the parameters studied numerically in this work. We further compare to the higher order approximation proposed in Ref.~\onlinecite{rey_bogoliubov_2003}. For the Bose-Hubbard model, the latter yields the same equations for $|\mathsf{u}_k|^2$, $|\mathsf{v}_k|^2$ and $\om_k$, but $n_0$ is determined self-consistently, allowing for depletion effects. In the case of free bosons at $T=0$, all particles condense in the same $k=0$ state. However, finite temperature and/or interactions cause a certain fraction of these particles to occupy states of higher energy. Indeed, both for $U\rightarrow0$ (noninteracting bosons) and $n_0\rightarrow0$ (high temperature limit) we have $|\mathsf{u}_k|^2=1$, $|\mathsf{v}_k|^2=0$.
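Equations (\ref{eq:boson_weights}) and (\ref{eq:boson_energies}) are straightforward to evaluate. The sketch below checks the constraint $|\mathsf{u}_k|^2-|\mathsf{v}_k|^2=1$, the linear small-$k$ dispersion with slope $\sqrt{2n_0tU}$, and the suppression of the lower weight at large $k$; the parameter values are illustrative only.

```python
import numpy as np

def bogoliubov(k, t, U, n0):
    """Energies and weights of Eqs. (eq:boson_weights)/(eq:boson_energies)."""
    K = 4.0 * t * np.sin(k / 2.0) ** 2
    w = np.sqrt(K * (2.0 * n0 * U + K))
    u2 = (K + n0 * U + w) / (2.0 * w)
    v2 = (K + n0 * U - w) / (2.0 * w)
    return w, u2, v2

t, U, n0 = 0.2, 1.0, 0.7                 # illustrative parameters
k = np.linspace(1e-4, np.pi, 400)        # avoid the k = 0 singularity
w, u2, v2 = bogoliubov(k, t, U, n0)

assert np.allclose(u2 - v2, 1.0)         # |u_k|^2 - |v_k|^2 = 1 for all k
vs = np.sqrt(2.0 * n0 * t * U)           # small-k slope quoted in the text
assert abs(w[0] / k[0] - vs) < 1e-4      # linear phonon dispersion at small k
assert v2[-1] < 0.15 * v2[0]             # lower weight dies out at large k
```

The slope $\sqrt{2n_0tU}$ extracted here is the Bogoliubov sound velocity, which we return to when discussing the SF spectra.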
Moreover, with decreasing $U$ or $n_0$, $|\mathsf{v}_k|^2$ approaches zero most quickly at large $k$ since in this case $K(k)\gg n_0U$ so that $\omega_k\approx K(k)$, canceling the term $-\om_k$ in the expression for $|\mathsf{v}_k|^2$. This will explain the temperature evolution of the single-particle spectrum shown in Sec.~\ref{sec:results}. \subsection{Bose-Hubbard model} \begin{figure} \subfigure{ \includegraphics[width=0.46\linewidth]{bh_L64_greens_0_k_t0_05} } \subfigure{ \includegraphics[width=0.46\linewidth]{bh_L64_greens_0_k_t0_13} } \subfigure{ \includegraphics[width=0.46\linewidth]{bh_L64_greens_0_k_t0_14} } \subfigure{ \includegraphics[width=0.46\linewidth]{bh_L64_greens_0_k_t0_2} } \caption{\label{fig:BH_green} (color online) Single-boson spectral function $A(k,\om)$ of the 1D Bose-Hubbard model, for different hoppings $t$ (and total density $n$), corresponding to (a) the MI phase, (b) just below the MI-SF transition, (c) just above the transition, and (d) the SF phase. Here $\mu/U = 0.5$, $L = 64$ and $\beta U = 3L$. Here and in subsequent spectra, the symbols and errorbars indicate the maxima of the peaks and the associated errors obtained by the maximum entropy method. As discussed in Sec.~\ref{sec:bhm_skw}, features with very small spectral weight are difficult to determine accurately. The solid red lines in (a) are mean field results.\cite{PhysRevA.63.053601} The solid lines in (c) and (d) are the Bogoliubov results, while the dashed lines are a fourth order approximation (see text).\cite{rey_bogoliubov_2003} } \end{figure} Despite the extensive literature on this model, there are few nonperturbative results available for the spectra, as mentioned in Sec.~\ref{sec:introduction}. Therefore, we investigate the single-boson spectral function $A(k,\om)$ and the dynamic structure factor $S(k,\om)$, with results shown in Figs.~\ref{fig:BH_green} and \ref{fig:BH_szsz}.
\begin{figure}[htb] \centering \includegraphics[width=0.95\linewidth]{quasiparticle_weight_bh_t0_2_inset_t0_14} \caption{(color online) Quasiparticle weights $\mathsf{u}_k$ and $\mathsf{v}_k$ of the gapless modes at $t/U=0.2$. The symbols are integrated intensities from QMC and maximum entropy, the lines are the predictions from Bogoliubov theory. The inset shows data at $t/U=0.14$. Again, $\mu/U=0.5$, $L=64$ and $\beta U = 3 L$. } \label{fig:quasiparticle_weight_BH} \end{figure} \subsubsection{Single-particle spectrum} Menotti and Trivedi reviewed previous work on the single-particle spectrum, and presented results from a random phase approximation.\cite{Me.Tr.08} Their main findings are as follows. For large $t/U$, a weakly interacting SF exists, and the spectrum consists of the usual two gapless phonon modes which exhaust the sum rule for $A(k,\om)$. Reducing $t/U$, two additional gapped modes appear at small $k$ whose spectral weight increases upon approaching the quantum phase transition. At the transition, one of the phonon modes evolves into the particle or hole mode (depending on which of the gaps $E_\text{g,p}$, $E_\text{g,h}$ is smaller), whereas one of the gapped modes in the SF becomes a gapped mode in the MI. Menotti and Trivedi\cite{Me.Tr.08} argued that the appearance of gapped modes and the redistribution of spectral weight from coherent phonon modes to incoherent gapped modes indicate the strongly correlated nature of the SF state near the transition. Let us point out that particle and hole dispersions in the MI have been calculated by several authors before, \cite{Me.Tr.08,PhysRevB.59.12184,Oh.Pe.08,PhysRevA.63.053601,lu:043607,Hu.Al.Bu.Bl.07,0295-5075-22-4-004,Se.Du.05,0953-4075-40-1-013,KoDu06} whereas the full spectral function of the MI (which also reveals the spectral weight and the width of the excitations) was only shown in [\onlinecite{KoDu06,Me.Tr.08}].
Our numerical results for the single-particle spectral function $A(k,\om)$ are shown in Fig.~\ref{fig:BH_green}. The four different values of the ratio $t/U$ cover the range in which the generic MI-SF transition takes place. According to Fig.~\ref{fig:phasediagrams}(a), for the chosen value of $\mu/U=0.5$ the transition occurs at $t/U\approx0.14$. In each panel we also report the total density $n$ to three decimal places, although our simulations provide much higher accuracy. The MI [(a) and (b)] exhibits the familiar gapped particle and hole bands.\cite{PhysRevB.59.12184} The additional particles exhibit a free-particle dispersion since the energy penalty for double occupation is the same at every site. In particular, we see in Fig.~\ref{fig:BH_green}(a), (b) that the particle band width is $8t$ (the factor of two arising from the fact that particle hopping involves a doubly occupied site), whereas the hole bandwidth is $4t$. The Mott gap decreases with increasing $t$ and a symmetry of particle and hole bands emerges.\cite{PhysRevB.40.546,Ai.Ho.Ta.Li.08,CaSa.GuSo.Pr.Sv.07} In addition to our QMC results we plot the mean-field dispersion~\cite{PhysRevA.63.053601} in Fig.~\ref{fig:BH_green}(a). For larger $t/U=0.13$, mean-field theory already predicts a superfluid, although the critical hopping in 1D is $t_\text{c}/U\approx 0.14$. In the SF phase [Fig.~\ref{fig:BH_green}(c),(d)], we obtain the expected Goldstone modes with linear dispersion at small $k$. Additionally, we see two gapped signals which we relate to the gapped modes discussed by other authors.\cite{Se.Du.05,Hu.Al.Bu.Bl.07,Me.Tr.08} Whereas the negative-energy gapped mode is clearly visible in Fig.~\ref{fig:BH_green}(c) just above $t_\mathrm{c}$, the gapped modes have almost disappeared in Fig.~\ref{fig:BH_green}(d). 
Since we approach the phase transition above the lobe tip ($\mu/U = 0.5$), the particle band becomes the gapless mode and carries more spectral weight, while the gapped hole band evolves into a gapped mode in the SF. This agrees well with the findings of Menotti and Trivedi.\cite{Me.Tr.08} In accordance with Bogoliubov theory, the excitations in the SF phase are free-particle like for large $k$. The bandwidths of the excitations both in the MI and the SF phase scale roughly linearly with $t$. In Figs.~\ref{fig:BH_green}(c) and (d) we also show results for the phonon dispersion $\pm\om_k$ (without taking into account the weights $|\mathsf{u}_k|$, $|\mathsf{v}_k|$) from Bogoliubov theory as well as the higher-order approximation of Ref.~\onlinecite{rey_bogoliubov_2003}. Whereas the simple Bogoliubov approach (neglecting depletion of the condensate) agrees quite well with our data despite the rather small value of $t/U$, we do not find the higher order approach to be systematically better. In particular, at large $k$, the phonon bandwidth is noticeably underestimated, which may be a result of an overestimate of depletion effects (these are most visible at large $k$). The agreement with Bogoliubov theory at small $k$ is consistent with the findings of Menotti and Trivedi.\cite{Me.Tr.08} Rey \etal\cite{rey_bogoliubov_2003} found the higher order approximation to be consistent with numerical results for other observables but do not show the spectra seen in Fig.~\ref{fig:BH_green}. Note that these authors consider larger particle densities $n\geq5$ where the Bogoliubov-type approximations are more reliable. Finally, we tried to use our QMC results for the superfluid fraction for $n_0$ in the expressions obtained from Bogoliubov theory, but the results are worse than for $n_0=n$. The spectral weight of the excitations decreases with increasing $k$ in all spectra of Fig.~\ref{fig:BH_green}, although this is more pronounced in the SF phase than in the MI.
In Fig.~\ref{fig:quasiparticle_weight_BH} we show the quasiparticle weights of the massless modes in the SF phase, obtained by integrating over the quasiparticle peaks in the spectra, and compare them to Bogoliubov theory [Eqs.~(\ref{eq:boson_weights}) and (\ref{eq:boson_energies})]. We verified that the QMC spectra satisfy the sum rule. The spectral weight of the lower branch decreases more quickly, consistent with the Bogoliubov picture. However, Bogoliubov theory overestimates the quasiparticle weights, especially at small $k$. In addition, there is significant broadening of the peaks on approaching the zone boundary. At strong coupling close to the phase transition (inset of Fig.~\ref{fig:quasiparticle_weight_BH}), the quasiparticle weight of the lower branch decays much more quickly than Bogoliubov theory would predict. \begin{figure} \subfigure{ \includegraphics[width=0.46\linewidth]{bh_L64_szsz_k_0_t0_05} } \subfigure{ \includegraphics[width=0.46\linewidth]{bh_L64_szsz_k_0_t0_13} } \subfigure{ \includegraphics[width=0.46\linewidth]{bh_L64_szsz_k_0_t0_14} } \subfigure{ \includegraphics[width=0.46\linewidth]{bh_L64_szsz_k_0_t0_2} } \caption{\label{fig:BH_szsz} (color online) Dynamic structure factor $S(k,\om)$ of the Bose-Hubbard model for the same parameters as in Fig.~\ref{fig:BH_green}. Panel (d) includes the same analytical approximations as Fig.~\ref{fig:BH_green}(d). } \end{figure} {\em Sound velocity.} The sound velocity $v_\text{s} = \frac{\partial \om_k}{\partial k}|_{k \to 0}$ of the phonon excitations in the SF phase was calculated for the Bose-Hubbard model by Menotti and Trivedi using a random phase approximation.\cite{Me.Tr.08} They concluded that $v_\text{s}$ vanishes at the generic transition, but remains nonzero when crossing the multicritical point.\cite{Me.Tr.08,Hu.Al.Bu.Bl.07} In their results, there is a very sharp downturn of $v_\text{s}$ toward zero close to $t_\text{c}$.
We are not aware of any calculations of $v_\text{s}$ for the polariton model. From our QMC simulations, we can determine $v_\text{s}$ from linear fits to the spectrum. Apart from the limited accuracy of the maximum entropy inversion, this works quite well away from $t_\text{c}$. In agreement with Bogoliubov theory, we find for the Bose-Hubbard model a linear dependence $v_\text{s}\propto |t-t_\text{c}|$ and good agreement of results for $L=32$ and 64. Determining the behavior of $v_\text{s}$ as $t\rightarrow t_\text{c}$ is more difficult for two reasons. First, the phonon spectrum becomes nonlinear due to finite-temperature effects (see discussion below), rendering linear fits ill-defined. Second, the position of the phase transition changes with system size, so that no reliable finite-size scaling of $v_\text{s}$ can be carried out. The situation is similar for the polariton model, and we therefore do not show results for $v_\text{s}$ here, leaving this as an interesting issue for future work. \subsubsection{Dynamic structure factor}\label{sec:bhm_skw} The single-particle spectral function provides information about the energy and lifetime of particles or holes added to the interacting ground state. In contrast, the dynamic structure factor---corresponding to the imaginary part of the dressed particle-hole propagator---yields insight into the density fluctuations in the ground state. In general the two quantities do not exhibit the same features. However, for broken U(1) gauge symmetry in the SF phase, they are both dominated by the same single-particle excitations (phonons).\cite{Griffin93} We find this statement to hold in 1D even though no symmetry breaking occurs. The density operator in the Bose-Hubbard model is $\rho^\dag_k=\sum_l e^{-i k l} \hat{n}_l$. For $k=0$, we have $\rho^\dag_k=\sum_l \hat{n}_l$, and $S(k,\om)$ has a trivial contribution at $\om=0$ which we eliminate by considering $\tilde{\rho}^\dag_k=\sum_l e^{-i k l} (\hat{n}_l-\las \hat{n}_l\ras)$.
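The effect of the subtraction in $\tilde{\rho}_k$ is easy to demonstrate: without it, the static $k=0$ Fourier component carries a large trivial weight proportional to $\las n\ras^2$, which subtracting the mean density removes to machine precision. A toy numpy sketch with stand-in densities (the array is random; it is not QMC data):

```python
import numpy as np

rng = np.random.default_rng(1)
L, M = 8, 32
# Stand-in densities with mean close to one, mimicking n_l(tau) near a MI:
rho = 1.0 + 0.05 * rng.normal(size=(L, M))

S_raw = np.abs(np.fft.fft2(rho)) ** 2 / (L * M)
S_sub = np.abs(np.fft.fft2(rho - rho.mean())) ** 2 / (L * M)

# The static k = 0 component dominates everything else by far ...
assert S_raw[0, 0] > 100 * np.sum(S_sub)
# ... and subtracting the mean density removes it (up to rounding):
assert S_sub[0, 0] < 1e-18 * S_raw[0, 0]
```

The remaining weight in the subtracted signal is carried entirely by the genuine density fluctuations, which is what $S(k,\om)$ is meant to measure.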
The above-mentioned relation to particle-hole excitations becomes evident by rewriting the density operator as $\rho^\dag_k=\sum_q b^\dag_{q+k} b_q$. We show results for $S(k,\om)$ in Fig.~\ref{fig:BH_szsz}. According to Huber \etal,\cite{Hu.Al.Bu.Bl.07} $S(k,\om)$ in the MI phase should exhibit a continuum of particle-hole excitations, starting at $\omega=E_\text{g}$ due to the Mott gap in the single-particle spectrum (see Fig.~\ref{fig:BH_green}). For the parameters in Fig.~\ref{fig:BH_szsz}(a), $E_\text{g}/U\approx0.7$. The dispersion of the particle and hole bands is very weak. Note that we find no agreement with the two single-particle excitations $E^\text{p}_\text{g}+\epsilon_\text{h}(k)$, $E^\text{h}_\text{g}+\epsilon_\text{p}(k)$ discussed by Huber \etal This may be a result of their mean-field treatment of the two-dimensional case. Our results do agree qualitatively with exact numerical results on small clusters.\cite{Ro.Bu.04} For larger $t/U$, the Mott state contains nontrivial density fluctuations, and the upper band in $S(k,\om)$ acquires some $k$ dependence. The energy of the excitations in $S(k,\om)$ [following $\sum_q \{\epsilon_\text{h}(q)+\epsilon_\text{p}(k-q)\}$] \cite{Hu.Al.Bu.Bl.07} generally increases with increasing $k$. This is obvious from the momentum dependence of the particle and hole bands in $A(k,\om)$, and also agrees with the expectation that long-wavelength density fluctuations in a Mott state require less energy than fluctuations with short periods in real space. For $t\lesssim t_\mathrm{c}$ in Fig.~\ref{fig:BH_szsz}(b), we find a low-energy mode with nonlinear dispersion, which we interpret as a precursor of the linear excitations of the SF phase [see panel (d)]. Even for $t\gtrsim t_\mathrm{c}$ [Fig.~\ref{fig:BH_szsz}(c)], the gapless low-energy mode in our numerical results is not linear. A linear spectrum is a result of the condensation of bosons in the SF phase, but is not expected in the normal phase.
Since our simulations are done at finite temperature, and because the phase coherence length is small close to $t_\mathrm{c}$, we can understand the absence of a clear, linear signature in Fig.~\ref{fig:BH_szsz}(c). Going to larger $t$, we indeed see linear excitations near $k=0$ [Fig.~\ref{fig:BH_szsz}(d)]. Similar effects are expected for the single-particle excitations, but are difficult to see on the scale of Fig.~\ref{fig:BH_green}. Coming back to Fig.~\ref{fig:BH_szsz}(c), away from $k=0$, we find a free-particle-like contribution, similar to the case of the MI. This excitation carries negligible spectral weight near $k=0$. Apart from finite-temperature effects, these features are qualitatively similar to the excitations discussed by Huber \etal,\cite{Hu.Al.Bu.Bl.07} namely a gapless sound mode (related to phase and density modulations) dominant at small $k$, and a massive mode (corresponding to exchange between condensate and noncondensate at fixed density) acquiring spectral weight at $k>0$. Additionally, we see in Fig.~\ref{fig:BH_szsz}(c) the (weak) signature of a gapped mode at small $k$, the nature of which we cannot determine from our present simulations. For $t/U=0.2$ [Fig.~\ref{fig:BH_szsz}(d)] the excitation ``band'' in $S(k,\om)$ follows closely the Bogoliubov mode, in accordance with the discussion at the beginning of this section. At this point, a comment concerning the accuracy of the spectra obtained from the maximum entropy inversion is in order. The spectral weight of the features visible in density plots such as Fig.~\ref{fig:BH_szsz}(d) varies over orders of magnitude. Some very weak signals, such as the group of points located at around $k=\pi/2$ below the main excitation band (with a weight that is a factor 10000 smaller than that of the dominant features), are expected to be artifacts.
We shall see below that in the polariton model, there actually exist real excitations with very small spectral weight which are easy to miss in the maximum entropy inversion. To reliably study such excitations, analytical approaches (if available) are clearly superior.\cite{Sc.Bl.09} Our findings for the dynamic structure factor are consistent with previous numerical results on small systems $(L=10,20)$.\cite{Ba.As.Sc.De.05,Ro.Bu.04} We can confirm the broadening of the excitations with increasing $k$ in the SF phase,\cite{Ba.As.Sc.De.05} related to two-particle continua.\cite{Hu.Al.Bu.Bl.07} However, the maximum entropy method is not capable of resolving fine structures as (generically) seen in exact diagonalization results for small clusters.\cite{Ro.Bu.04} \subsection{Polariton model} For the polariton model, the only published results on dynamic properties are for the single-particle spectrum of the MI phase at zero temperature.\cite{Ai.Ho.Ta.Li.08,Sc.Bl.09} As pointed out before, the nature of the conserved particles in the polariton model is determined by the detuning. We start by discussing the case $\Delta=0$ for which the polaritonic character of the excitations is most pronounced. This can readily be seen from Eq.~(\ref{eq:eigenstates}), where $\ket{n_\text{p},\DO}$ and $\ket{n_\text{p}-1,\UP}$ contribute with equal weight. \begin{figure} \subfigure{ \includegraphics[width=0.46\linewidth]{greens_0_k_t0_01} } \subfigure{ \includegraphics[width=0.46\linewidth]{greens_0_k_t0_061} } \subfigure{ \includegraphics[width=0.46\linewidth]{greens_0_k_t0_07} } \subfigure{ \includegraphics[width=0.46\linewidth]{greens_0_k_t0_15} } \caption{\label{fig:PT_greens} Single-photon spectral function $A^\text{ph}(k,\om)$ of the 1D polariton model at $\mu/g = 0.4$ for different hoppings $t$, corresponding to (a) deep in the MI, (b) just below the MI-SF transition, (c) just above the transition, and (d) in the SF phase. Here $\beta g = 3L$ and $L = 64$. 
With increasing $t$, the density plots are increasingly ``overexposed'' in order to reveal less dominant features.} \end{figure} \begin{figure} \subfigure{ \includegraphics[width=0.46\linewidth]{uppper_polariton_greens_0_k_t0_01} } \subfigure{ \includegraphics[width=0.46\linewidth]{uppper_polariton_greens_0_k_t0_061} } \caption{\label{fig:PT_greens2} Single-photon spectral function in the Mott phase for the same parameters as in Fig.~\ref{fig:PT_greens}, showing additional excitations at higher energies.} \end{figure} \subsubsection{Single-particle spectrum} \begin{figure}[htb] \centering \includegraphics[width=0.95\linewidth]{quasiparticle_weight_pm} \caption{(color online) Quasiparticle weights $\mathsf{u}_k$ and $\mathsf{v}_k$ of the gapless modes of the polariton model, similar to Fig.~\ref{fig:quasiparticle_weight_BH}. } \label{fig:quasiparticle_weight_pm} \end{figure} In Fig.~\ref{fig:PT_greens} we show our QMC results for the single-photon spectral function. As for the Bose-Hubbard model, the values of the ratio $t/g$ range from deep in the Mott phase across the generic transition well into the SF phase. According to a finite-size scaling analysis for $\mu/g=0.4$, the phase transition occurs at $t_\mathrm{c}/g =0.0626(1)$ (see Fig.~\ref{fig:fss_mu0_4}), in agreement with Fig.~\ref{fig:phasediagrams}(b). Hence panels (a) and (b) are for the MI regime, whereas (c) and (d) are for the SF phase. The results in the MI shown in Fig.~\ref{fig:PT_greens}(a), (b) agree well with previous numerical work.\cite{Ai.Ho.Ta.Li.08} Similar to the Bose-Hubbard model, there exist particle and hole bands, separated by the Mott gap. It is important to stress that although we add bare photons to the system, the particle and hole excitations reflect the properties of the polaritons in the system.
Whereas the ratio of particle and hole bandwidths is two to one in the Bose-Hubbard model, it depends on the character of the quasiparticles (polaritons) in the polariton model and varies with detuning.\cite{Ai.Ho.Ta.Li.08} With increasing $t/g$, the gap closes and the bandwidths of excitations increase (effective masses decrease). Recent analytical work revealed the existence of so-called upper polariton modes at higher energies, which represent an important difference between the Bose-Hubbard model and the polariton model.\cite{Sc.Bl.09} For the Mott lobe with $n_\text{p}=1$, only one such (particle) band exists, corresponding (for small enough $t/g$) to a transition between the ground state $\ket{n_\text{p}=1,-}$ and the state $\ket{n_\text{p}=2,+}$ (see Eq.~(\ref{eq:eigenstates})). The weight of this high-energy excitation is very small compared to the dominant particle and hole modes discussed above (0.04 as compared to 1.46 for the $k=0$ atomic-limit results in [\onlinecite{Sc.Bl.09}]); with increasing $t/g$ the weight difference becomes even larger.\cite{Sc.Bl.09} The energy splitting between the $-$ and $+$ branches of eigenstates increases further for detuning $\Delta\neq0$ (Fig.~2 in [\onlinecite{GrTaCoHo06}]). The upper polariton mode is not visible in Figs.~\ref{fig:PT_greens}(a) or (b). Excitations with small spectral weight are notoriously difficult to see using QMC in combination with maximum entropy. In the present case, this is aggravated by the fact that the resolution of maximum entropy decreases at high energy. Nevertheless, we see a signature of the upper polariton band in Fig.~\ref{fig:PT_greens2}, and the latter is also present (but not shown) in the high-temperature data of Fig.~\ref{fig:finite_T_greens}(a); high-energy features are easier to resolve in QMC/maximum entropy at higher temperatures. 
From the eigenvalues of the states~(\ref{eq:eigenstates}) we can determine the excitation energy of the upper mode in the atomic limit as $w^+_p/g=-(\mu-\om_0)/g+(\sqrt{2}+1)\approx3$ for the parameters of Fig.~\ref{fig:PT_greens2}, in reasonable agreement with our results in Fig.~\ref{fig:PT_greens2}(a) given the ill-conditioned nature of the problem under consideration. Note that the upper polariton mode can be seen even close to the phase transition in Fig.~\ref{fig:PT_greens2}(b). The weight of the upper mode in Fig.~\ref{fig:PT_greens2} is about a factor of 100 smaller than that of the conventional particle and hole excitations. Although the upper polariton mode exists also in other results for the single-particle spectrum in the Mott phase (Figs.~\ref{KT_A_ph},~\ref{fig:finite_T_greens} and~\ref{fig:detuning_A_ph}), we focus on the low-energy conventional modes with large spectral weight. The latter can be determined accurately from our simulations, and will be the dominant feature in experiments. Figures~\ref{fig:PT_greens}(c) and (d) contain the first spectra of the polariton model in the SF phase. There is a clear signature of the gapless phonon modes starting at $k=\om=0$, with linear dispersion at small $k$. In the SF phase but close to the transition, we see an additional gapped mode at $\om<0$ [Fig.~\ref{fig:PT_greens}(c)]. Our results at these and at further couplings $t/g$ (and $t/U$) suggest that these gapped modes disappear more quickly with increasing $t/g$ than for the Bose-Hubbard model, which can be explained in terms of the photonic SF expected for the present parameters (see below).\cite{Ir.09} Note that a simple Bogoliubov type theory for the polariton model does not exist, due to the composite nature of polariton excitations. Figure~\ref{fig:quasiparticle_weight_pm} shows the quasiparticle weights. 
The general shapes resemble the Bose-Hubbard model case (Fig.~\ref{fig:quasiparticle_weight_BH}), but the lower branch decays very quickly in the polariton model even at $t/g=0.15$, quite far from the phase transition. This may be attributed to the fact that the energy cost for particle and hole excitations is different due to the dependence of $U_\text{eff}$ on $n_\text{p}$. Again, most of the spectral weight in the SF phase is found at small $k$. \begin{figure} \centering \subfigure{ \includegraphics[width=0.46\linewidth]{KT_greens_0_k_t0_12} } \subfigure{ \includegraphics[width=0.46\linewidth]{KT_greens_0_k_t0_21} } \caption{Single-photon spectrum $A^\text{ph}(k,\om)$ of the polariton model along the line $n_\mathrm{p}=1$ crossing the Kosterlitz-Thouless transition. Here $L=64$ and $\beta g = 3 L$.} \label{KT_A_ph} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{effective_masses_PM_KT} \caption{(color online) Effective particle ($m^+$) and hole masses ($m^-$) along the line $n_\textrm{p}=1$, as obtained from fits to the bands in $A^\text{ph}(k,\om)$ near $k=0$.} \label{PM_meff} \end{figure} {\em Fixed density.} In Fig.~\ref{KT_A_ph} we show the single-photon spectrum across the fixed density transition ($n_\mathrm{p} = 1$), obtained by selecting configurations only at that density. In Ref.~\onlinecite{rossini_photon_2008}, the critical hopping was determined as $t_\mathrm{c}/g = 0.198$ [cf.~Fig.~\ref{fig:phasediagrams}(b)]. The spectra in both the MI and the SF look very similar to those across the generic transition shown above. This may be different very close to the multicritical point, but this regime is most demanding numerically if the results are to be used in a maximum entropy inversion. From the spectra obtained at constant density in the MI, we can estimate the effective particle and hole masses by fitting a quadratic dispersion to the bands in the vicinity of $k=0$.
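A fit of this kind can be sketched as follows (Python with NumPy; the band values here are synthetic, generated from an assumed mass $m_\text{true}$, whereas in practice they come from the maximum-entropy spectra):

```python
import numpy as np

# Synthetic particle band near k = 0: gap + quadratic term + weak k^4 term.
# Units with hbar = 1; m_true is an assumed illustration value.
m_true = 2.5
k = 2 * np.pi * np.arange(-4, 5) / 64       # small-k points of an L = 64 chain
w = 0.7 + k**2 / (2 * m_true) + 0.05 * k**4

# Quadratic fit w(k) ~ a k^2 + b k + c; the curvature gives the effective mass.
a, b, c = np.polyfit(k, w, 2)
m_eff = 1.0 / (2.0 * a)                     # close to m_true, up to the k^4 term
```

Near the transition the quadratic region of the bands shrinks, which limits the range of couplings where such fits remain stable.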
In the Bose-Hubbard model, there is an emergent particle-hole symmetry on approaching the lobe tip,\cite{PhysRevB.40.546,CaSa.GuSo.Pr.Sv.07} and similar behavior is suggested by the evolution of the particle and hole bands with increasing $t/g$ also in the polariton model. For fixed polariton density, the two masses approach each other and vanish at the phase transition. This has been demonstrated in 2D based on a strong-coupling approach.\cite{Sc.Bl.09} In the region not too close to the phase transition, where stable fits can be obtained, Fig.~\ref{PM_meff} confirms this observation also in 1D. \begin{figure} \subfigure{ \includegraphics[width=0.46\linewidth]{polariton_k_t0_01} } \subfigure{ \includegraphics[width=0.46\linewidth]{polariton_k_t0_061} } \subfigure{ \includegraphics[width=0.46\linewidth]{polariton_k_t0_07} } \subfigure{ \includegraphics[width=0.46\linewidth]{polariton_k_t0_15} } \caption{\label{fig:PT_szsz} Polariton dynamic structure factor $S(k,\om)$ for the same parameters as in Fig.~\ref{fig:PT_greens}. The insets show an extrapolation of the Mott gap at small $k$ to $L \to \infty$.} \end{figure} \subsubsection{Dynamic structure factor} The evolution of the polariton dynamic structure factor $S(k,\om)$ across the MI-SF transition is shown in Fig.~\ref{fig:PT_szsz}. Remarkably, the results look very similar to those for the Bose-Hubbard model. Close to the atomic limit [$t/g=0.01$ in Fig.~\ref{fig:PT_szsz}(a)] we see a gapped, almost flat feature with energy $\om\approx 0.6g$. A look at the corresponding single-particle spectrum in Fig.~\ref{KT_A_ph}(a) reveals that this value is identical to the Mott gap. The almost flat particle and hole bands cause a very weak dispersion also for the particle-hole excitations visible in $S(k,\om)$. It is useful to remember that it is the effective polariton-polariton repulsion mediated by the atom-photon coupling that determines the Mott gap. 
For a single site and $n_\mathrm{p}=1$ (\ie, for the case of adding a second polariton), \begin{equation}\label{eq:Ueff} U_\mathrm{eff}(1) = 2 \sqrt{g^2 + (\Delta/2)^2} - \sqrt{2g^2 + (\Delta/2)^2} - \Delta/2\,. \end{equation} For zero detuning ($\Delta=0$), $U_\mathrm{eff}(1)/g=2-\sqrt{2}\approx0.59$. As for the Bose-Hubbard model, the excitations in $S(k,\om)$ acquire a noticeable dispersion with increasing $t/g$, and the $k=0$ gap closes. Figures~\ref{fig:PT_szsz}(b) and (c) are both close to the phase transition. An inspection of the $k=0$ region shows a weak linear mode with very small slope $O(0.01)$, corresponding to the small superfluid density existing in both $L=64$ systems. The massive mode extends to $k=0$ with a tiny intensity (thus not visible in the figure). An extrapolation of the gap to $L=\infty$ (insets) shows that it scales to zero in Fig.~\ref{fig:PT_szsz}(c), but stays finite at the smaller hopping in Fig.~\ref{fig:PT_szsz}(b). Indeed, a finite size scaling of the superfluid density (discussed later) implies that Fig.~\ref{fig:PT_szsz}(b) is just below the phase transition. In addition to finite size effects, we again observe finite temperature effects in the form of deviations from the expected linear spectrum close to $t_\text{c}$ (see discussion for the Bose-Hubbard model). For even larger $t/g$, the spectrum exhibits a single linear mode at small $k$. Similar to the spectral function, gapped modes seem to be suppressed quickly in the SF phase. Clearly, the polariton dynamic structure factor $S(k,\om)$ represents a useful probe to distinguish between the MI and the SF phases. We have argued before that the polariton MI has fluctuations in the photon and exciton density, whereas the polariton density is pinned. 
We now demonstrate that the exciton (atom) and photon structure factors, $S^\mathrm{at}(k,\om)$ and $S^\mathrm{ph}(k,\om)$, shown in Fig.~\ref{mott_S_ph_S_at}, do not reflect this fact, and therefore cannot be used to characterize the nature of the Mott state. To this end, it is important to notice that the Jaynes-Cummings Hamiltonian has two branches of eigenstates $\ket{n_\text{p},+}$ and $\ket{n_\text{p},-}$ (the latter containing the ground state, see also Eq.~(\ref{eq:eigenstates})) with the same polariton number but different energy.\cite{Ja.Cu.63} In the atomic limit, the dynamic structure factor for photons and excitons [Eq.~(\ref{eq:skw})] is dominated by the contributions $\bra{n_\text{p},-} \hat{\rho}^\dag_k\ket{n_\text{p},-}$ (with $\om=0$) and $\bra{n_\text{p},+} \hat{\rho}^\dag_k \ket{n_\text{p},-}$ (with energy $\om=2 \sqrt{n_\text{p} g^2 + \Delta^2/4}$, equal to $2g$ in Fig.~\ref{mott_S_ph_S_at}). Any additional peaks at finite $t/g$ have much smaller spectral weight and cannot be accurately resolved by our method. However, since the matrix elements of exciton and photon density operators for the combination $\bra{n_\text{p},+} \hat{\rho}^\dag_k \ket{n_\text{p},-}$ have the same modulus but opposite sign, the dominant contributions to $S^\mathrm{at}(k,\om)$ and $S^\mathrm{ph}(k,\om)$ cancel in the case of the polariton structure factor $S(k,\om)$. The dispersionless excitations near $\om/g=2$ seen in Fig.~\ref{mott_S_ph_S_at} are therefore absent in $S(k,\om)$, as confirmed by our data. Hence, while the upper polariton modes in the single-particle spectrum have small but finite weight, here their contribution is zero. Consequently, the polariton dynamic structure factor closely resembles $S(k,\om)$ of the Bose-Hubbard model.
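This cancellation is easy to verify explicitly in the atomic limit. The following sketch (Python with NumPy) diagonalizes a single Jaynes-Cummings site in the $n_\text{p}=1$ sector at zero detuning and evaluates the relevant matrix elements:

```python
import numpy as np

# Single-site Jaynes-Cummings block with one polariton (n_p = 1) at zero
# detuning, in the basis {|1 photon, atom down>, |0 photons, atom up>}.
g = 1.0
H = np.array([[0.0, g],
              [g, 0.0]])

evals, evecs = np.linalg.eigh(H)              # ascending eigenvalues
minus, plus = evecs[:, 0], evecs[:, 1]        # |1,-> (ground state) and |1,+>

n_ph = np.diag([1.0, 0.0])                    # photon number in this basis
n_at = np.diag([0.0, 1.0])                    # exciton (atom) occupation

m_ph = plus @ n_ph @ minus                    # <+| n_ph |->
m_at = plus @ n_at @ minus                    # <+| n_at |->
m_pol = plus @ (n_ph + n_at) @ minus          # polariton density: cancels

splitting = evals[1] - evals[0]               # 2*sqrt(g^2 + (Delta/2)^2) = 2g
```

The photon and exciton matrix elements come out with equal modulus and opposite sign, so their sum, the polariton density, has a vanishing off-diagonal element, while the level splitting reproduces $2g$.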
\begin{figure} \centering \subfigure{ \includegraphics[width=0.46\linewidth]{szsz_k_0_t0_01} } \subfigure{ \includegraphics[width=0.46\linewidth]{szsz_k_1_t0_01} } \caption{Dynamic structure factor for excitons ($S^\mathrm{at}$) and photons ($S^\mathrm{ph}$) for the same parameters as in Fig.~\ref{fig:PT_szsz}(a).} \label{mott_S_ph_S_at} \end{figure} \begin{figure} \subfigure{ \includegraphics[width=0.46\linewidth]{greens_0_k_t0_01_beta6_4} } \subfigure{ \includegraphics[width=0.46\linewidth]{greens_0_k_t0_15_beta6_4} } \subfigure{ \includegraphics[width=0.46\linewidth]{greens_0_k_t0_01_beta32_0} } \subfigure{ \includegraphics[width=0.46\linewidth]{greens_0_k_t0_15_beta32_0} } \caption{\label{fig:finite_T_greens} Temperature dependence of $A^\text{ph}(k,\om)$ in the polariton model at $\mu/g = 0.4$, in the MI (left) and in the SF phase (right). Here again $L = 64$. Corresponding results at $\beta g =192$ can be found in Figs.~\ref{fig:PT_greens}(a),(d).} \end{figure} \subsubsection{Temperature effects} Experimental realizations of Bose-Hubbard models using cold atomic gases are usually prepared very close to zero temperature (nK range). In contrast, due to the strong matter-light coupling achievable in cavities, realizations of polariton models offer a chance of operation at significantly higher temperatures. The critical temperature at which the MI state starts to lose its characteristic integer density has been estimated for the polariton model as $T^*/g\approx0.03$.\cite{Ai.Ho.Ta.Li.08,Ma.Co.Ta.Ho.Gr.07} For feasible values of the coupling $g$, $T^*$ falls into the mK range. Generally, Mott-like physics is expected as long as the Mott gap is significantly larger than the thermal energy (and the number of particle-hole excitations is small).
The finite-temperature physics of the Bose-Hubbard model has been analyzed by several groups.\cite{gerbier:120405,PhysRevA.68.043623,PhysRevA.70.013611,lu:043607} Here we consider the effect of low but finite temperatures on the excitation spectra of the polariton model. This also provides information about the sensitivity of the results to the (necessarily finite) value of $\beta$ used in our simulations. The present method permits calculation of spectra also outside the MI, \ie, in the SF and the normal phase. We have pointed out above that, strictly speaking, there is no SF phase at $T\neq0$ in 1D. Nevertheless, SF-like properties can be seen for sufficiently small $T$. Results for $A^\text{ph}(k,\om)$ are shown in Fig.~\ref{fig:finite_T_greens}. They underline the discussion of finite-temperature effects on the dispersion in the SF near $k=0$: the particle excitation is clearly not linear in panels (b) and (d), for which the temperature is higher than in Fig.~\ref{fig:PT_greens}. At finite but low temperature [Fig.~\ref{fig:finite_T_greens}(c,d)] the spectra still closely resemble the results at $T\approx 0$ in Figs.~\ref{fig:PT_greens}(a) and (d). At high temperature [Fig.~\ref{fig:finite_T_greens}(a,b)], we observe strong broadening of the particle band at all $k$, and strongly suppressed spectral weight for hole excitations. Existing work for the Bose-Hubbard model finds that at finite temperature additional multi-particle and hole bands arise.\cite{PhysRevA.68.043623} We see an additional excitation for $\beta g = 4.4$ and $t/g = 0.01$ at an energy of $\om/g \sim 3.1$. The weight of that excitation is about $50$ times smaller than the weight of the main peak at $\om/g \sim 0.2$, and is thus not shown in Fig.~\ref{fig:finite_T_greens}(a). This excitation is consistent with the upper polariton mode discussed above.
We note that a broadened ``gapped'' spectrum is compatible with a density that deviates from the integer value characteristic of the MI, and this has to be kept in mind for potential applications relying on integer density. The numerical results for the total density are shown in each of the panels, demonstrating that despite the large particle-hole gap the polariton density deviates significantly from the low-temperature value $n_\text{p}=1$ for the parameters of Fig.~\ref{fig:finite_T_greens}(a). Some of the features observed in the SF phase can be explained by means of Bogoliubov theory for the Bose-Hubbard model. In particular, we have discussed above that with increasing temperature (where the condensate fraction $n_0\to0$) the spectral weight of the negative-energy branch vanishes first at large $k$, in agreement with our numerical results. In addition, the broadened positive-energy branch no longer has a clear linear behavior at small $k$. We would like to point out that not only finite temperature but also disorder is an inevitable feature of experimental realizations of coupled cavity systems. Although not studied here directly, it has been stated\cite{Ma.Co.Ta.Ho.Gr.07} that disorder (in the form of local variations of the parameters $\om_0$, $g$ and $t$) has consequences similar to those of finite temperature. \begin{figure}\hspace*{0.25em} \includegraphics[width=0.46\linewidth]{greens_0_k_d-1_t0_4} \hspace*{0.3em} \includegraphics[width=0.46\linewidth]{greens_0_k_d-1_t0_5}\\\vspace*{1em} \includegraphics[width=0.49\linewidth]{greens_0_k_d3_t0_008} \includegraphics[width=0.49\linewidth]{greens_0_k_d3_t0_016} \caption{\label{fig:detuning_A_ph} Single-photon spectra with detuning $\Delta=\epsilon-\omega_0$ for the excitonic case $\Delta/g= -2$, $\mu/g = -0.5$ (a,b) and photonic case $\Delta/g = 2$, $\mu/g = 0.64$ (c,d), in the MI (a,b) and in the SF (c,d).
Here $L=64$ and $\beta g = 3L$.} \end{figure} \subsubsection{Detuning} The detuning between the cavity photon mode and the atomic level splitting is an important parameter in the polariton model which is absent in the Bose-Hubbard model. Its influence on the physics has been discussed before.\cite{Ai.Ho.Ta.Li.08,irish_polaritonic_2008,Zh.Sa.Ue.08} Detuning can also be easily changed experimentally, motivating a calculation of the excitation spectra for $\Delta\neq0$. Our results are shown in Fig.~\ref{fig:detuning_A_ph}. The extent of the different phases, namely excitonic or polaritonic MI and photonic or polaritonic SF in the phase diagram has been analyzed for a two-site system.\cite{irish_polaritonic_2008} The way to distinguish between the polaritonic SF and the photonic SF is to monitor fluctuations in the exciton occupation number (pinned in the photonic SF but fluctuating in the polaritonic SF). The conclusion has been that for $t\approx|\Delta|$ a polaritonic SF exists only for $\Delta/g<-1$. This would match the conjectured photonic nature of the SF near the lobe tip in two dimensions.\cite{Zh.Sa.Ue.08} However, it is not clear if these strict values also hold for larger systems and the thermodynamic limit. Besides, the work by Irish \etal\cite{irish_polaritonic_2008} is exclusively concerned with the fixed density transition occurring at $t\approx|\Delta|$, whereas a polariton SF may exist also for $t<|\Delta|$ if density fluctuations are allowed (generic transition). The spectrum in Fig.~\ref{fig:detuning_A_ph}(b) is for such a set of parameters. Again pertaining to the fixed density case, the MI state is supposed to be of excitonic nature for $\Delta/g<-1$, $t<|\Delta|$, and of polaritonic nature for $|\Delta|/g<1$ and sufficiently small $t/g$ or $t/|\Delta|$, respectively.\cite{irish_polaritonic_2008} The former case is depicted in Fig.~\ref{fig:detuning_A_ph}(a), whereas the latter corresponds to the $\Delta=0$ results reported in Fig.~\ref{fig:PT_greens}.
For $\Delta\gg g$, photon excitations are always lower in energy, and the effective interaction approaches zero. As a result, MI regions are very small or nonexistent, and the photonic SF state is similar to that of the Bose-Hubbard model in the limit of large $t/U$.\cite{Ai.Ho.Ta.Li.08} Here we consider $\Delta/g=\pm2$ for comparison to previous calculations of the spectra in the Mott phase.\cite{Ai.Ho.Ta.Li.08} (Note that the rotating wave approximation formally requires $|\Delta|\ll\epsilon,\om_0$.\cite{narozhny_coherence_1981}) These correspond to effective repulsions $U_\mathrm{eff}/g=0.096$ (for $\Delta/g=2$) and $U_\mathrm{eff}/g=2.096$ (for $\Delta/g=-2$), in excellent agreement with the width of the $n_\text{p}=1$ Mott lobes for the same parameters.\cite{Ai.Ho.Ta.Li.08} Our results in Fig.~\ref{fig:detuning_A_ph} show that again the spectra are dominated by the generic features of the MI and the SF. However, the detuning in the present case changes the ratio of the bandwidths of particle and hole bands $W_\text{p}/W_\text{h}$ in the Mott state.\cite{Ai.Ho.Ta.Li.08} While for $\Delta=0$, $W_\text{p}/W_\text{h} \approx 3$, we find $W_\text{p}/W_\text{h} \approx 2$ (similar to the result for the Bose-Hubbard model) for $\Delta/g = 2$ and $W_\text{p}/W_\text{h} \approx 7$ for $\Delta/g = -2$. The incoherent features observed for $\Delta/g=-2$ in Ref.~\onlinecite{Ai.Ho.Ta.Li.08} are not seen here. As mentioned before, the energy of the upper polariton modes (not shown) increases for $\Delta\neq0$.\cite{GrTaCoHo06} In the SF, we find the expected gapless excitations, as well as gapped modes indicative of a correlated superfluid. Since for $\Delta/g=2$, $U_\text{eff}$ is very small, the Mott gap of the dispersive bands in Fig.~\ref{fig:detuning_A_ph}(c) is also small (0.057$g$), but it is still larger than the temperature scale in our simulation $T/g = 0.005$.
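The effective repulsions quoted above follow directly from Eq.~(\ref{eq:Ueff}); a quick numerical check (Python):

```python
import math

def u_eff(g, delta):
    """Effective on-site repulsion U_eff(1) for adding a second polariton."""
    return (2 * math.sqrt(g**2 + (delta / 2)**2)
            - math.sqrt(2 * g**2 + (delta / 2)**2)
            - delta / 2)

u0 = u_eff(1.0, 0.0)     # zero detuning: 2 - sqrt(2) ~ 0.59
up = u_eff(1.0, 2.0)     # Delta/g = +2: ~ 0.096
um = u_eff(1.0, -2.0)    # Delta/g = -2: ~ 2.096
```

The three values reproduce, respectively, the zero-detuning result $2-\sqrt{2}$ and the two detuned repulsions used in this section.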
We note that, within our resolution, the positive energy spectrum in Fig.~\ref{fig:detuning_A_ph}(d) looks gapless, but not clearly linear. In this respect, the spectra for finite detuning resemble those obtained at high temperatures. Apart from this issue and a scaling of energies (due to the dependence of $U_\text{eff}$ on $\Delta$), the spectra obtained for $\Delta/g=-2$ are very similar to those for $\Delta=0$, whereas those for $\Delta/g=2$ resemble closely the results for the Bose-Hubbard model. \subsubsection{Phase transition} Finally, we present a scaling analysis for the generic phase transition. As pointed out by Fisher~\etal,\cite{PhysRevB.40.546} the scaling relation \begin{equation}\label{eq:fss_hyp} \rho_\text{s} = L^{2-d-z} \Tilde \rho(\delta L^{1/\nu},\beta/L^z) \end{equation} should hold for the superfluid density across the MI-SF transition. Here $\nu$ is the critical exponent of the correlation length which is expected to diverge like $\xi \sim \delta^{-\nu}$, and $z$ is the dynamical critical exponent. The generic transition in the Bose-Hubbard model has mean-field exponents $z=2$ and $\nu=1/z=1/2$.\cite{PhysRevB.40.546,Ba.Sc.Zi.90,alet:024513,capogrosso-sansone:134302} Recent field theory\cite{Ko.LH.09} and strong-coupling results\cite{Sc.Bl.09} predict the same universality classes for the polariton model, in conflict with numerical results in two dimensions which suggest the absence of multicritical points.\cite{Zh.Sa.Ue.08} \begin{figure} \centering \includegraphics[width=.95\linewidth]{scaling_mu_04_d0} \caption{(color online) Finite size scaling for the generic transition in the polariton model at $\mu/g = 0.4$, testing the scaling hypothesis, Eq.~(\ref{eq:fss_hyp}).} \label{fig:fss_mu0_4} \end{figure} We test in 1D the scaling hypothesis Eq.~(\ref{eq:fss_hyp}) with $z=2$ and the hyperscaling relation $z = 1/\nu$, along the line $\mu/g=0.4$ where the generic transition is expected [see Fig.~\ref{fig:phasediagrams}(b)].
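The rescaling behind such a test can be sketched as follows (Python with NumPy). The curves here are synthetic data constructed to obey Eq.~(\ref{eq:fss_hyp}) exactly, so they collapse perfectly; the spread-based cost function is only a crude stand-in for the one used in the actual analysis:

```python
import numpy as np

# Collapse of superfluid-density data per the scaling hypothesis with
# d = 1, z = 2, nu = 1/2 (synthetic curves standing in for QMC data).
d, z, nu = 1, 2, 0.5
t_c = 0.0626

def rescale(t, rho_s, L):
    """Map raw (t, rho_s) points to the universal scaling coordinates."""
    x = (t - t_c) * L**(1.0 / nu)
    y = rho_s * L**(d + z - 2)
    return x, y

# Synthetic data obeying rho_s = L^{2-d-z} f((t-t_c) L^{1/nu}) with an
# arbitrary smooth f; after rescaling, all system sizes must coincide.
f = lambda x: 1.0 / (1.0 + np.exp(-x))
curves = {}
for L in (16, 32, 64):
    t = t_c + np.linspace(-1.0, 1.0, 21) / L**(1.0 / nu)
    rho_s = f((t - t_c) * L**(1.0 / nu)) / L**(d + z - 2)
    curves[L] = rescale(t, rho_s, L)

# Crude cost function: spread of the collapsed y-values at matching x
ys = np.array([curves[L][1] for L in curves])
collapse_spread = np.max(np.std(ys, axis=0))
```

With real QMC data the spread is minimized with respect to $t_\mathrm{c}$ and $\nu$, which is how the critical point and exponent are extracted.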
To this end we keep the ratio $\beta/L^z$ constant by setting $\beta=L^2/10$ and plot $\rho_\text{s} L^{d+z-2}$ over $(t-t_\mathrm{c}) L^{1/\nu}$ to obtain the universal function $\tilde \rho$ (Fig.~\ref{fig:fss_mu0_4}). Defining a cost function in the spirit of Ref.~\onlinecite{harada_XY_1998} allows us to evaluate the quality of the finite-size-scaling plot quantitatively. We find the minimum of our cost function at $t_\mathrm{c} = 0.0626(1)$ and $\nu = 0.50(2)$ when matching $\tilde \rho$ close to the phase transition [$|(t-t_c)L^{1/\nu}|<1$ in Fig.~\ref{fig:fss_mu0_4}]. We note that with the system sizes available, this result is not very stable. A fit in a larger region of $|(t-t_c)L^{1/\nu}|$ provides a better data collapse overall (but worse close to the phase transition), with $\nu\approx 0.65$ and very slightly smaller $t_\mathrm{c}$. A similar scaling with $z=1$ did not succeed, so that we conclude that the universality class of the generic transition in the polariton model is the same as that in the Bose-Hubbard model, despite the composite nature of the quasiparticles. This is consistent with recent field-theory and strong-coupling results.\cite{Ko.LH.09,Sc.Bl.09} An accurate scaling analysis for the fixed density transition through the lobe tip has been found to require much larger system sizes and is therefore not shown. In the 1D case considered here, the shape of the lowest Mott lobe (see Fig.~\ref{fig:phasediagrams}) suggests that the similarity to the Bose-Hubbard model holds also in this respect, \ie, a Kosterlitz-Thouless type phase transition. \section{Conclusions}\label{sec:conclusions} We calculated the single-boson spectral function and the dynamic structure factor of the Bose-Hubbard model, and for a recently proposed model of itinerant polaritons in coupled-cavity arrays.
These models undergo a quantum phase transition from a Mott insulator to a superfluid state upon increasing the hopping integral of the bosons or photons, respectively, with respect to the interaction. Results in one dimension, within and close to the Mott lobe with density one, have been obtained. Despite the generally different nature of the conserved particles, the models exhibit very similar spectral properties, including gapped particle and hole bands in the Mott insulating phase, and Bogoliubov type excitations in the superfluid phase. Additional excitations related to the second branch of upper polariton states exist in the single-particle spectrum of the polariton model,\cite{Sc.Bl.09} but cancel out in the dynamic structure factor. In general, these features have high energy and very small spectral weight, so that for practical purposes the excitation spectra are qualitatively similar to the Bose-Hubbard model. Correlation effects are particularly strong in the one dimensional case considered. Our results in the superfluid phase represent the first unbiased nonperturbative spectra for $A(k,\om)$ in both models and for $S(k,\om)$ in the polariton model (in both phases). Good qualitative agreement with recent analytical work on the two-dimensional Bose-Hubbard model was found, and we have compared our results in the superfluid phase to Bogoliubov theory. The limiting cases of the Mott insulator close to the atomic limit, as well as the weakly interacting superfluid are described quite well by analytical approximations, whereas in the phase transition region, our nonperturbative results show considerable deviation. Emergent particle-hole symmetry on approaching the multicritical lobe tip has been demonstrated for the polariton model. For the polariton model, we have also explored the influence of detuning and finite temperature on the spectral properties, and have presented a scaling analysis to determine the universality class of the generic phase transition.
Keeping in mind experimental realizations of coupled cavity arrays, interesting open issues for future work include the excitation spectra in the two-dimensional case (and comparison to analytical results\cite{Sc.Bl.09}), the behavior of the sound velocity across the phase transition (also for the Bose-Hubbard model) and disorder. The present work further highlights the fact that the physics of strongly correlated bosons as described by the Bose-Hubbard model may be observed in terms of optical models that, if realized, would have some distinct experimental advantages and further contain new degrees of freedom due to the mixed nature of the quasiparticles. \begin{acknowledgments} MH was supported by the FWF Schr\"odinger Fellowship No.~J2583. PP acknowledges support from the FWF, projects P18551 and P18505. We made use of the ALPS library\cite{ALPS_I,ALPS_II} and the ALPS applications.\cite{ALPS_DIRLOOP} We acknowledge fruitful discussions with F. Assaad, M. J. Bhaseen, J. Keeling, D. Khmelnitskii and P. B. Littlewood. We are grateful to H. Monien and D. Rossini for providing us with data for Figure~\ref{fig:phasediagrams}. \end{acknowledgments}
\section{Introduction} Multi-object tracking (MOT) is the problem of detecting object instances and then temporally associating them to form trajectories. Early works~\cite{zhang2008global,berclaz2011multiple,zamir2012gmcp,kim2015multiple,tang2017multiple,henschel2017improvements,ristani2018features,sheng2018heterogeneous,xu2019spatial,xu2019train, wang2019exploit, andriyenko2011multi, berclaz2006robust, evangelidis2008parametric} formulate instance association as a graph-based optimization problem under the ``tracking-by-detection'' paradigm, in which a node represents a detection and an edge encodes the likelihood of two nodes being linked. In practice, they use a combination of visual and motion cues to represent each node, which often requires expensive computation. Furthermore, they usually construct a large offline graph, which is non-trivial to solve, making them inapplicable to real-time tracking. Recently, online trackers \cite{bewley2016simple, wojke2017simple,bergmann2019tracking, zhou2020tracking} started to emerge, as they are more desirable in real-time tracking scenarios. They focus on improving local linking over consecutive frames rather than building an offline graph to re-identify instances across large temporal gaps. Among these, some recent works \cite{bergmann2019tracking, zhou2019deep} have pushed \textit{online} MOT to state-of-the-art accuracy. In this work, we explore the importance of modelling motion in \textit{online MOT} by building upon ``Simple Online and Realtime Tracking'' (SORT) \cite{bewley2016simple, wojke2017simple}, which underlies recent state-of-the-art models \cite{bergmann2019tracking, zhou2020tracking}. In SORT, a better motion model is the key to improving its local linking accuracy. 
For example, SORT~\cite{bewley2016simple} uses Kalman Filters~\cite{kalman1960new} to model the instance's motion with simple geometric features, while the more recent state-of-the-art trackers~\cite{bergmann2019tracking,zhou2020tracking} learn a deep network to predict the displacement (motion) of instances based on both visual and geometric features, significantly outperforming the simpler SORT. We conduct our motion modelling exploration by leveraging a region-based Siamese Multi-Object Tracking network, which we name {\bf SiamMOT}. We combine a region-based detection network (Faster-RCNN \cite{ren2015faster}) with two motion models inspired by the literature on Siamese-based single-object tracking~\cite{li2018high, li2019siamrpn++, guo2020siamcar, bertinetto2016fully, fan2019siamese}: an implicit motion model (IMM) and an explicit motion model (EMM). Unlike CenterTrack~\cite{zhou2020tracking}, which implicitly infers the motion of instances with point-based features \cite{duan2019centernet, tian2019fcos, qiu2020borderdet}, SiamMOT uses region-based features and performs explicit template matching to estimate instance motion, which is more robust to challenging tracking scenarios, such as fast motion. We present extensive ablation analysis on three different multi-person tracking datasets. Our results suggest that instance-level motion modelling is of great importance for robust online MOT, especially in more challenging tracking scenarios. Furthermore, we show that the motion models of SiamMOT can improve tracking performance substantially, especially when cameras are moving fast and when people's poses are deforming significantly. On the popular MOT17 Challenge~\cite{milan2016mot16} SiamMOT with EMM achieves \textbf{65.9} MOTA / \textbf{63.3} IDF1 with a DLA-34 \cite{yu2018deep} backbone by using \textit{public} detection, outperforming all previous methods. 
Moreover, on the recently introduced large-scale TAO-person dataset \cite{dave2020tao}, SiamMOT substantially improves over the state-of-the-art Tracktor++ \cite{bergmann2019tracking} from \textbf{36.7} to \textbf{41.1} TrackAP~\cite{dave2020tao, yang2019video}. Finally, we benchmark SiamMOT on the Human In Events (HiEve) dataset~\cite{lin2020human}, where it outperforms the winner of the ACM MM'20 HiEve grand challenge~\cite{lin2020hieve}. \section{Related work} \subsection{Siamese trackers in SOT} Single object tracking (SOT) refers to tracking a given object of interest, which is usually specified in the first frame and could belong to any semantic object class. Instead of detecting pre-defined objects in a frame and linking them back to earlier tracked instances, single object trackers usually model the motion of the object of interest directly to predict its trajectory. Siamese-based trackers~\cite{held2016learning, bertinetto2016fully, li2018high, li2019siamrpn++, tao2016siamese, valmadre2017end, guo2017learning, he2018twofold, zhang2019structured, zhu2018distractor, fan2019siamese, zhang2019deeper, guo2020siamcar} are a family of state-of-the-art SOT methods. As the name suggests, Siamese trackers operate on pairs of frames. Their goal is to track (by matching) the target object in the first frame within a search region in the second frame. This matching function is usually learned offline on large-scale video and image datasets. In this paper, we formulate Siamese trackers within an end-to-end trainable multi-object tracking network (SiamMOT). The closest work to ours is DeepMOT, which also trains Siamese trackers together with other components under its proposed MOT training framework. However, DeepMOT focuses on improving the structured loss in MOT rather than formulating the detector and tracker in a unified network, so an off-the-shelf single object tracker is needed in DeepMOT. 
Finally, while we take inspiration from particular Siamese trackers \cite{leal2016learning, li2018high, guo2020siamcar}, our formulation is generic enough that other Siamese trackers can easily be adapted to our MOT framework. \vspace{-1em} \paragraph{Siamese network.} It is worth noting that Siamese trackers are different from general Siamese networks~\cite{leal2016learning, sun2019deep, varior2016siamese}. Siamese networks usually learn an affinity function between two detected instances, whereas Siamese trackers learn a matching function that is used to search for a detected instance within a larger contextual region. \subsection{Tracking-by-Detection in MOT} Many works tackle multi-object tracking (MOT) by adopting the ``tracking-by-detection'' paradigm ~\cite{zhang2008global,berclaz2011multiple,zamir2012gmcp,kim2015multiple,tang2017multiple,henschel2017improvements,ristani2018features,sheng2018heterogeneous,xu2019spatial,xu2019train, sadeghian2017tracking, leal2016learning, wang2019exploit, andriyenko2011multi, berclaz2006robust, choi2010multiple, evangelidis2008parametric, fang2018recurrent}, where object instances are first detected in each frame and then associated across time based on their visual coherence and spatial-temporal consistency. Some of these works focused on learning new functions to evaluate short-term associations more robustly \cite{ristani2018features, sheng2018heterogeneous, tang2017multiple, xu2019spatial, zhang2008global, sadeghian2017tracking, leal2016learning, choi2015near, fang2018recurrent}. 
Others, instead, focused on learning how to output more temporally consistent long-term tracks by optimizing locally connected graphs~\cite{zhang2008global,berclaz2011multiple,zamir2012gmcp,kim2015multiple,tang2017multiple,henschel2017improvements,ristani2018features,sheng2018heterogeneous,xu2019spatial,xu2019train, wang2019exploit, andriyenko2011multi, berclaz2006robust, evangelidis2008parametric}. Many of these approaches are inefficient, as they employ separate computationally expensive cues, like object detection~\cite{girshick2015fast, ren2015faster, dai2016r, he2017mask}, optical flow~\cite{dosovitskiy2015flownet, sun2018pwc, tang2017multiple, choi2015near}, and re-identification~\cite{hermans2017defense, tang2017multiple, zhou2019deep, ristani2018features}. \vspace{-1em} \paragraph{Online MOT.} Online MOT refers to performing instance association on the fly without knowledge of future frames \cite{keuper2018motion, ban2016tracking, xiang2015learning, bewley2016simple, wojke2017simple, bergmann2019tracking, zhou2020tracking}. Therefore, online MOT focuses on accurate local association rather than globally optimal association, in which detections can be linked across long temporal gaps (as in offline graph modelling). It has seen a resurgence of popularity recently as new models are efficient enough to be applicable to real-time tracking. For example, Ban et al.~\cite{ban2016tracking} formulated it in a probabilistic framework by using a variational expectation maximization algorithm to find the tracks. Xiang et al.~\cite{xiang2015learning} used Markov Decision Processes and reinforcement learning for online instance association. Bewley et al. \cite{bewley2016simple, wojke2017simple} developed simple online and realtime tracking (SORT) for quick online instance association. 
SORT has been widely used in recent deep-neural-network-based models \cite{zhou2020tracking, bergmann2019tracking}, which achieve state-of-the-art performance on public MOT datasets. Our SiamMOT is based on SORT, and we explore how to improve its tracking performance. \vspace{-1em} \paragraph{Motion modelling in SORT.} The original SORT~\cite{bewley2016simple} only used geometric features of tracks (location, box shape, etc.) in its motion model to predict instance locations across frames. Later, Wojke et al.~\cite{wojke2017simple} improved SORT by incorporating visual features into the motion model to link the detected instances. Recently, Bergmann et al.~\cite{bergmann2019tracking} and Zhou et al.~\cite{zhou2020tracking} jointly learned the motion model with the detector such that both visual and geometric features are used. In detail, Tracktor~\cite{bergmann2019tracking} leveraged a two-stage detector~\cite{ren2015faster} to regress from the previous person's location to the current frame; CenterTrack~\cite{zhou2020tracking} adopted a track branch to regress the displacement of object centers between frames. In this paper, we explore how to improve the motion model in a SORT-based tracking model -- SiamMOT -- and, more importantly, how it leads to improved MOT accuracy. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{Figs/SiamMOT.png} \caption{\small \it (Best viewed in color) SiamMOT is a region-based multi-object tracking network that detects and associates object instances simultaneously. The Siamese tracker models the motion of instances across frames and it is used to temporally link detections in online multi-object tracking. The backbone feature map for frame $\mathbf{I}^t$ is visualized at 1/2 of its actual size. 
} \label{figure:teaser} \end{figure*} \section{SiamMOT: Siamese Multi-Object Tracking} \label{sec:architecture} SiamMOT builds upon the Faster-RCNN object detector~\cite{girshick2015fast, ren2015faster, he2017mask}, which consists of a Region Proposal Network (RPN) and a region-based detection network. On top of the standard Faster-RCNN, SiamMOT adds a region-based Siamese tracker to model instance-level motion. As shown in Fig.~\ref{figure:teaser}, SiamMOT takes as input two frames $\mathbf{I}^t, \mathbf{I}^{t+\delta}$ together with a set of detected instances $\mathbf{R}^t = \{R_1^t, \ldots R_i^t, \ldots \}$ at time $t$. In SiamMOT, the detection network outputs a set of detected instances $\mathbf{R}^{t+\delta}$, while the tracker propagates $\mathbf{R}^t$ to time $t+\delta$ to generate $\tilde{\mathbf{R}}^{t+\delta}$. As in SORT, SiamMOT contains a motion model that \textit{tracks} each detected instance from time $t$ to $t+\delta$ by propagating the bounding box $R_i^t$ at time $t$ to $\tilde{R}_i^{t+\delta}$ at $t+\delta$; and a spatial matching process that \textit{associates} the output of the tracker $\tilde{R}_i^{t+\delta}$ with the detections $R_i^{t+\delta}$ at time ${t+\delta}$ such that detected instances are linked from $t$ to $t+\delta$. In the following, we introduce how our Siamese tracker models instance motion in SiamMOT (Sec.~\ref{sec:motion_model}) and present two variants of Siamese trackers in Sec.~\ref{sec:imm} and Sec.~\ref{sec:emm}. Finally, we provide the details for training and inference (Sec.~\ref{sec:train_infer}). \subsection{Motion modelling with Siamese tracker} \label{sec:motion_model} In SiamMOT, given a detected instance $i$ at time $t$, the Siamese tracker searches for that particular instance at frame $\mathbf{I}^{t+\delta}$ in a contextual window around its location at frame $\mathbf{I}^t$ (i.e., $R_i^{t}$). 
Formally, \begin{equation} (v_i^{t+\delta}, \tilde{R}_i^{t+\delta}) = \mathcal{T}(\mathbf{f}_{R_i}^t, \mathbf{f}_{S_i}^{t+\delta}; \Theta) \label{equation:sot} \end{equation} where $\mathcal{T}$ is the learnable Siamese tracker with parameters $\Theta$, $\mathbf{f}_{R_i}^t$ is the feature map extracted over region $R_i^{t}$ in frame $\mathbf{I}^{t}$, and $\mathbf{f}_{S_i}^{t+\delta}$ is the feature map extracted over the search region $S_i^{t+\delta}$ in frame $\mathbf{I}^{t+\delta}$. We compute $S_i^{t+\delta}$ by expanding $R_i^t$ by a factor $r$ ($r > 1$) while maintaining the same geometric center (e.g., dashed bounding box in Fig.~\ref{figure:teaser}). We extract features $\mathbf{f}_{R_i}^t$ and $\mathbf{f}_{S_i}^{t+\delta}$ using the region of interest align (ROIAlign) layer of Mask-RCNN~\cite{he2017mask}. Finally, $v_i^{t+\delta}$ is the visibility confidence for detected instance $i$ at time $t+\delta$. As long as the instance is visible in $S_i^{t+\delta}$, $\mathcal{T}$ should produce a high score $v_i^{t+\delta}$, otherwise $\mathcal{T}$ should produce a low score. Note how this formulation is reminiscent of that of Siamese-based single-object trackers~\cite{bertinetto2016fully, held2016learning, li2018high, li2019siamrpn++} and specifically, how they model the instance's motion between frames. In the context of multi-object tracking, we apply Eq.~\ref{equation:sot} multiple times, once for each detected instance $R_i^{t} \in \mathbf{R}^t$. Importantly, our SiamMOT architecture allows these operations to run in parallel and only requires the backbone features to be computed once, making online tracking inference efficient. We conjecture that motion modelling is particularly important for online MOT. Specifically, association between $R^t$ and $R^{t+\delta}$ will fail if 1) $\tilde{R}^{t+\delta}$ does not match to the right instance in $R^{t+\delta}$ or 2) $v_i^{t+\delta}$ is low for a visible person at $t+\delta$. 
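As an illustration, the construction of the search region $S_i^{t+\delta}$ -- expanding $R_i^t$ by a factor $r$ about its geometric center -- can be sketched in a few lines of plain Python; the corner-coordinate box layout and the function name are our assumptions, not part of the released implementation:

```python
def expand_box(box, r=2.0):
    """Expand an (x0, y0, x1, y1) box by a factor r about its center.

    This mirrors how the search region S is built from the target
    region R while keeping the same geometric center. Box layout is
    an illustrative assumption.
    """
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0   # geometric center, unchanged
    w, h = (x1 - x0) * r, (y1 - y0) * r          # scaled width / height
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)
```

With $r=2$ (the value used in the implementation details below), a $10\times10$ box centered at $(5,5)$ becomes a $20\times20$ search window with the same center.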
Previous works \cite{bergmann2019tracking, zhou2020tracking} approach the problem of regressing $\tilde{R}^{t+\delta}$ from the previous location (i.e., $R_i^t$) by feeding the model with features from both frames. By doing so, these works aim to implicitly model the instance's motion in the network. However, as research in single-object tracking~\cite{li2018high, li2019siamrpn++, bertinetto2016fully, guo2020siamcar} reveals, finer-grained spatial-level supervision is of great significance to explicitly learn a robust target matching function in challenging scenarios. Based on this rationale, we present two different parameterizations of $\mathcal{T}$ in SiamMOT -- an implicit motion model in Sec.~\ref{sec:imm} and an explicit motion model in Sec.~\ref{sec:emm}. \subsection{Implicit motion model}~\label{sec:imm} The implicit motion model (IMM) uses an MLP to implicitly estimate the instance-level motion between two frames. In detail, the model concatenates $\mathbf{f}_{S_i}^t$ and $\mathbf{f}_{S_i}^{t+\delta}$ and feeds that to an MLP that predicts the visibility confidence $v_i$ and the relative location and scale changes: \begin{equation} m_i=[\frac{x_i^{t+\delta} - x_i^t }{w_i^t}, \ \frac{y_i^{t+\delta}-y_i^t}{h_i^t}, \ \log\frac{w_i^{t+\delta}}{w_i^t}, \ \log\frac{h_i^{t+\delta}}{h_i^t}] \label{equation:motion} \end{equation} in which $(x_i^t, y_i^t, w_i^t, h_i^t)$ is the parameterization of $R_i^t$. We can trivially derive $\tilde{R}^{t+\delta}$ by applying the inverse transformation of Eq.~\ref{equation:motion} to $R_i^t$ and $m_i$. 
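The parameterization of Eq.~\ref{equation:motion} and its inverse can be sketched in plain Python; treating $(x, y)$ as the box center is our assumption (the paper leaves the convention implicit):

```python
import math

def encode_motion(box_t, box_td):
    """Relative motion m_i between two (x, y, w, h) boxes, following
    Eq. (2): normalized center shift plus log scale change.
    The center-point convention is an illustrative assumption."""
    x, y, w, h = box_t
    X, Y, W, H = box_td
    return [(X - x) / w, (Y - y) / h, math.log(W / w), math.log(H / h)]

def decode_motion(box_t, m):
    """Invert Eq. (2): recover the propagated box from R_i^t and m_i."""
    x, y, w, h = box_t
    dx, dy, dw, dh = m
    return (x + dx * w, y + dy * h, w * math.exp(dw), h * math.exp(dh))
```

Encoding and then decoding round-trips exactly, which is why the propagated box $\tilde{R}^{t+\delta}$ is "trivially" recoverable from the MLP output.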
\vspace{-1em} \paragraph{Loss.} Given a triplet $(R_i^t, S_i^{t+\delta}, R_i^{t+\delta})$, we train IMM with the following training loss: \begin{equation} \mathbf{L} = \ell_{focal}(v_i, v_i^*) + \mathbbm{1}[v_i^*] \, \ell_{reg}(m_i, m_i^*) \label{equation:imm} \end{equation} where $v_i^*$ and $m_i^*$ refer to ground truth values derived from $R_i^{t+\delta}$, $\mathbbm{1}$ is the indicator function, $\ell_{focal}$ the focal loss for classification~\cite{lin2017focal} and $\ell_{reg}$ the commonly used smooth $\ell_1$ loss for regression. Please refer to the supplementary material for the network architecture. \subsection{Explicit motion model}~\label{sec:emm} \begin{figure}[t] \centering \centering \includegraphics[width=0.5\textwidth]{Figs/emm-03.png} \caption{\small \it Network architecture of Explicit Motion Model (EMM), $*$ represents channel-wise cross correlation operator.} \label{fig:emm} \end{figure} Inspired by the literature on single-object tracking~\cite{li2018high, li2019siamrpn++, guo2020siamcar, tao2016siamese, bertinetto2016fully}, we propose an explicit motion model (EMM, Fig.\ref{fig:emm}) in SiamMOT. Specifically, it uses a channel-wise cross-correlation operator ($*$) to generate a pixel-level response map $\mathbf{r}_i$, which has shown to be effective in modelling dense optical flow estimation~\cite{dosovitskiy2015flownet} and in SOT for instance-level motion estimation~\cite{li2018high, li2019siamrpn++, bertinetto2016fully, guo2020siamcar}. In SiamMOT, this operation correlates each location of the search feature map $\mathbf{f}_{S_i}^{t+\delta}$ with the target feature map $\mathbf{f}_{R_i}^t$ to produce $\mathbf{r}_i = \mathbf{f}_{S_i}^{t+\delta} * \mathbf{f}_{R_i}^t$, so each map $\mathbf{r}_i[k, :, :]$ captures a different aspect of similarity. Inspired by FCOS~\cite{tian2019fcos}, EMM uses a fully convolutional network $\psi$ to detect the matched instances in $\mathbf{r}_i$. 
Specifically, $\psi$ predicts a dense visibility confidence map $\mathbf{v}_i$ indicating the likelihood of each pixel to contain the target object, and a dense location map $\mathbf{p}_i$ that encodes the offset from that location to the top-left and bottom-right bounding box corners. Thus, we can derive the instance region at $(x, y)$ by the following transformation $\mathcal{R}(\mathbf{p}(x,y)) = [x-l, y-t, x+r, y+b]$ in which $\mathbf{p}(x,y) = [l, t, r, b]$ (the top-left and bottom-right corner offsets). Finally, we decode the maps as follows: \begin{equation} \begin{split} \tilde{R}_i^{t+\delta} = \mathcal{R}(\mathbf{p}_i(x^*, y^*)); \ \ \ v_i^{t+\delta} = \mathbf{v}_i(x^*, y^*)& \\ \mathbf{s.t.} (x^*, y^*) = \argmax_{x,y}(\mathbf{v}_i \odot \bm{\eta}_i)& \\ \end{split} \end{equation} where $\odot$ is the element-wise multiplication, $\bm{\eta}_i$ is a penalty map that specifies a non-negative penalty score for the corresponding candidate region as follows: \begin{equation} \bm{\eta}_i(x, y) = \lambda \mathcal{C} + (1-\lambda)\mathcal{S}(\mathcal{R}(\mathbf{p}(x,y)), R_i^t) \end{equation} where $\lambda$ is a weighting scalar ($0\leq\lambda\leq1$), $\mathcal{C}$ is the cosine-window function w.r.t.\ the geometric center of the previous target region $R_i^t$ and $\mathcal{S}$ is a Gaussian function w.r.t.\ the relative scale (height/width) change between the candidate region $\mathcal{R}(\mathbf{p}(x,y))$ and $R_i^t$. The penalty map $\bm{\eta}_i$ is introduced to discourage dramatic movements during the course of tracking, similar to that in \cite{li2018high, li2019siamrpn++, guo2020siamcar, fan2019siamese}. 
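A minimal sketch of this decoding step follows. The exact shapes of the cosine window $\mathcal{C}$ and the Gaussian scale penalty $\mathcal{S}$ (and the value of $\lambda$) are our illustrative choices, not the released implementation:

```python
import math

def decode_emm(v, p, prev_box, lam=0.4):
    """Pick (x*, y*) = argmax of v ⊙ η and decode the box from offsets.

    v[y][x] is the visibility map, p[y][x] = (l, t, r, b) the corner
    offset map, prev_box = (x0, y0, x1, y1) the previous target region.
    Window and scale-penalty widths are illustrative simplifications.
    """
    H, W = len(v), len(v[0])
    pcx = (prev_box[0] + prev_box[2]) / 2.0
    pcy = (prev_box[1] + prev_box[3]) / 2.0
    pw, ph = prev_box[2] - prev_box[0], prev_box[3] - prev_box[1]
    best, best_score = None, -1.0
    for y in range(H):
        for x in range(W):
            l, t, r, b = p[y][x]
            # cosine window w.r.t. the previous target center
            d = math.hypot(x - pcx, y - pcy)
            C = 0.5 * (1.0 + math.cos(math.pi * min(d / max(W, H), 1.0)))
            # Gaussian penalty on the relative scale change
            S = math.exp(-(math.log((l + r) / pw) ** 2
                           + math.log((t + b) / ph) ** 2))
            score = v[y][x] * (lam * C + (1.0 - lam) * S)
            if score > best_score:
                best_score = score
                best = (x - l, y - t, x + r, y + b)   # R(p(x, y))
    return best
```

A candidate far from the previous center or with a very different scale is down-weighted, which is exactly the "discourage dramatic movements" behavior described above.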
\vspace{-1em} \paragraph{Loss.} Given a triplet $(R_i^t, S_i^{t+\delta}, R_i^{t+\delta})$, we formulate the training loss of EMM as follows: \begin{equation} \begin{split} \mathbf{L} &= \sum_{x, y}\ell_{focal}(\mathbf{v}_i(x,y), \mathbf{v}_i^*(x,y)) \\ &+ \sum_{x, y}\mathbbm{1}[\mathbf{v}_i^*(x,y)=1](w(x,y) \cdot \ell_{reg}(\mathbf{p}_i(x,y), \mathbf{p}_i^*(x,y))) \end{split} \end{equation} where $(x,y)$ enumerates all the valid locations in $S_i^{t+\delta}$, $\ell_{reg}$ is the IOU Loss for regression~\cite{yu2016unitbox, danelljan2019atom} and $\ell_{focal}$ is the focal loss for classification~\cite{lin2017focal}. Finally, $\mathbf{v}_i^*$ and $\mathbf{p}_i^*$ are the pixel-wise ground truth maps. $\mathbf{v}_i^*(x,y) = 1$ if $(x,y)$ is within the ground truth region $R_i^{t+\delta}$ and $0$ otherwise. $\mathbf{p}_i^*(x,y) = [x-x^*_0, y-y^*_0, x^*_1-x, y^*_1-y]$ in which $(x^*_0, y^*_0)$ and $(x^*_1, y^*_1)$ correspond to the coordinates of the top-left and the bottom-right corner of the ground truth bounding box $R_i^{t+\delta}$. Similar to \cite{zhu2019soft}, we modulate $\ell_{reg}$ with $w(x,y)$, which is the centerness of location $(x, y)$ w.r.t.\ the target instance $R_i^{t+\delta}$ and is defined as $w(x, y) = \sqrt{\frac{\min(x-x_0, x_1-x)}{\max(x-x_0, x_1-x)} \cdot \frac{\min(y-y_0, y_1-y)}{\max(y-y_0, y_1-y)}}$. EMM improves upon the IMM design in two ways. First, it uses the channel-independent correlation operation to allow the network to explicitly learn a matching function between the same instance in sequential frames. Second, it enables a mechanism for finer-grained pixel-level supervision, which is important to reduce false matches to distractors. 
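The centerness weight $w(x,y)$ above is a direct formula; a minimal sketch makes its behavior concrete (the corner-coordinate box layout is our assumption):

```python
import math

def centerness(x, y, box):
    """Centerness weight w(x, y) w.r.t. a target box (x0, y0, x1, y1).

    Equals 1 at the box center and decays towards 0 at the edges,
    so the regression loss is dominated by well-centered locations.
    """
    x0, y0, x1, y1 = box
    lx, rx = x - x0, x1 - x   # horizontal distances to the two edges
    ty, by = y - y0, y1 - y   # vertical distances to the two edges
    return math.sqrt((min(lx, rx) / max(lx, rx)) * (min(ty, by) / max(ty, by)))
```

For a $10\times10$ box, the center scores $1.0$, while a point one pixel from the left edge (and vertically centered) scores $\sqrt{1/9} = 1/3$.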
\subsection{Training and Inference} \label{sec:train_infer} We train SiamMOT in an end-to-end fashion with the following loss $\ell = \ell_{rpn} + \ell_{detect} + \ell_{motion}$, in which $\ell_{rpn}$ and $\ell_{detect}$ are the standard losses for RPN~\cite{ren2015faster} and the detection sub-network~\cite{girshick2015fast} in Faster-RCNN. $\ell_{motion} = \sum_{x_i \in \mathcal{X}} \mathbf{L}(x_i)$ is used to train the Siamese tracker, wherein $\mathcal{X} = \cup_{i=1}^M (R_i^t, S_i^{t+\delta}, R_i^{t+\delta})$ are training triplets. Note that $R_i^{t+\delta} = \varnothing$ if $R_i^t$ does not include a ground truth instance or the instance in $R_i^t$ is not visible in $S_i^{t+\delta}$. Similar to Faster-RCNN training, we sample $R_i^t$ from the outputs of the RPN~\cite{ren2015faster}. At inference, a standard IOU-based NMS operation is first used on the outputs of the detection sub-network ($R^{t+\delta}$ in Fig.~\ref{figure:teaser}) and on those of the Siamese tracker ($\tilde{R}^{t+\delta}$ in Fig.~\ref{figure:teaser}) independently. Next, the following spatial matching process is used to merge $R^{t+\delta}$ and $\tilde{R}^{t+\delta}$: detections that spatially match ($IOU \geq 0.5$) to any tracked instance are suppressed and thus removed. Then, we adopt the same standard \textit{online} solver as in \cite{bergmann2019tracking,zhou2020tracking,bewley2016simple,wojke2017simple}: 1) a trajectory is continued if its visibility confidence ($v_i^t$) is above $\alpha$; 2) a trajectory is born if there is a non-matched detection and its confidence is above $\beta$; and 3) a trajectory is killed if its visibility confidence ($v_i^t$) is below $\alpha$ for $\tau$ consecutive frames. \vspace{-1em} \paragraph{Short occlusion handling.} In the case of short occlusions, the visibility confidence for the target would be low (lower than the threshold $\alpha$). 
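The spatial matching step and the lifecycle rules 1)--3) can be sketched in plain Python. The track/detection data layout, and using the detection score as the initial visibility of a newborn trajectory, are our assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def solver_step(tracks, dets, next_id, alpha=0.4, beta=0.6, tau=30):
    """One step of the online solver (illustrative data layout).

    Each track is {'id', 'box', 'vis', 'lost'}, where 'vis' is the
    visibility confidence from the Siamese tracker and 'lost' counts
    consecutive low-visibility frames; dets is [(box, score)].
    """
    # detections spatially matching (IOU >= 0.5) a tracked instance are suppressed
    dets = [d for d in dets if all(iou(d[0], t['box']) < 0.5 for t in tracks)]
    kept = []
    for t in tracks:
        if t['vis'] >= alpha:
            t['lost'] = 0            # 1) trajectory is continued
            kept.append(t)
        elif t['lost'] + 1 < tau:
            t['lost'] += 1           # kept in memory during a short occlusion
            kept.append(t)
        # 3) otherwise the trajectory is killed
    for box, score in dets:          # 2) birth from a confident unmatched detection
        if score >= beta:
            kept.append({'id': next_id, 'box': box, 'vis': score, 'lost': 0})
            next_id += 1
    return kept, next_id
```

The `lost` counter is what implements the $\tau$-frame memory discussed for short occlusions.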
Instead of killing them, we keep those tracks in memory and continue searching for them in future frames (up to $\tau > 1$ frames) to check whether they can be reinstated. We use the last predicted location and its corresponding feature as the search template. \section{Experimental settings} \subsection{Datasets and Metrics} \label{sec:datasets} \noindent {\bf MOT17}\cite{milan2016mot16} is the most widely used multi-person tracking benchmark. It consists of 7 training and 7 test videos, ranging from $7$ to $90$ seconds in length. The videos feature crowded scenes in indoor shopping malls or outdoor streets. We follow the evaluation protocol of~\cite{milan2016mot16} and report our results using several metrics: MOTA (Multiple Object Tracking Accuracy), IDF1 (ID F1 score), FP (False Positives), FN (False Negatives) and IDsw (ID switches). \\ \vspace{-1em} \noindent {\bf TAO-person}\cite{dave2020tao} is a newly established large-scale multi-person tracking benchmark. It is a subset of the TAO dataset~\cite{dave2020tao} and it consists of 418 training and 826 validation videos. To include a large variability of scenes, the videos are collected by mixing existing datasets like AVA~\cite{gu2018ava} (generic movies), Charades~\cite{sigurdsson2016hollywood} (indoor activities), BDD~\cite{yu2020bdd100k} (streets), Argoverse~\cite{Argoverse} (streets) and other sports videos. This dataset contains rich motion artifacts (e.g., motion and defocus blur), as well as diverse person motion patterns (Fig.~\ref{fig:dataset_motion_stat}{\color{red}c}), which makes tracking persons challenging. We follow the evaluation protocol of~\cite{dave2020tao} and use the provided toolbox to report Federated Track-AP (TAP). Federated evaluation \cite{gupta2019lvis} is used because not all videos are exhaustively annotated. 
Unlike MOTA, Track-AP \cite{yang2019video} highlights the temporal consistency of the underlying trajectories.\\ \vspace{-1em} \noindent {\bf Caltech Roadside Pedestrians (CRP)}\cite{hall2015fine} is a dataset for person analysis in videos. It consists of 7 videos, each roughly 20 minutes long. The videos are captured from a camera mounted to a car while driving, and they mainly feature outdoor roadside scenes. Due to the fast camera motion, the pedestrians appear to move much faster than in other datasets (Fig.~\ref{fig:dataset_motion_stat}{\color{red}b}). We report results on the same metrics used for MOT17.\\ \vspace{-1em} \noindent {\bf Dataset analysis.} Each of these datasets contains different challenges for tracking. For example, tracking people in MOT17 is challenging due to occlusion and crowded scenes, even though people do not move fast and their poses are constant (i.e., standing). In contrast, scenes in CRP are not as crowded, but the camera motion is very large and the pedestrian's position changes quickly. Finally, TAO includes a wide range of scene types and video corruption artifacts. As we focus on modelling short-term motion for tracking, here we examine the characteristics of motion in each of these datasets. Towards this, we calculate the ground truth motion vector $\mathbf{m}$ as in Eq.~\ref{equation:motion} for every person, between two consecutive annotated frames. As videos are not annotated densely (i.e., every frame), we normalize $\mathbf{m}$ by $\delta$ (their time difference). We present dataset-specific histograms in Fig.~\ref{fig:dataset_motion_stat}. People in the MOT17 dataset have relatively small motion compared to those in TAO and CRP. 
\begin{figure} \centering \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=1.0\textwidth]{Figs/MOT17_motion_xy.png} \caption{\small \it MOT17} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=1.0\textwidth]{Figs/caltech_roadside_pedestrians_motion_xy.png} \caption{\small \it CRP} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[width=1.0\textwidth]{Figs/TAO_motion_xy.png} \caption{\small \it TAO-Person} \end{subfigure} \caption{\small \it 3D histogram of normalized motion offset per second across different datasets. } \label{fig:dataset_motion_stat} \end{figure} \subsection{Implementation details} \paragraph{Network.} We use a standard DLA-34~\cite{yu2018deep} with feature pyramid~\cite{lin2017feature} as the Faster-RCNN backbone. We set $r = 2$, so that our search region is $2 \times$ the size of the tracking target. In IMM, $\mathbf{f}_{S_i}^t$ and $\mathbf{f}_{S_i}^{t+\delta}$ have the same shape $\mathbb{R}^{c \times 15 \times 15}$ and the model is parametrized as a 2-layer MLP with $512$ hidden neurons. In EMM, instead, $\mathbf{f}_{R_i}^{t} \in \mathbb{R}^{c \times 15 \times 15}$ and $\mathbf{f}_{S_i}^{t+\delta} \in \mathbb{R}^{c \times 30 \times 30}$, so that they are at the same spatial scale; the model is a 2-layer fully convolutional network, with stacks of $3 \times 3$ convolution kernels and group normalization~\cite{wu2018group}. \vspace{-1em} \paragraph{Training samples.} As previously mentioned, we train SiamMOT on pairs of images. When video annotations are not available, we follow ~\cite{held2016learning, zhou2020tracking} by employing \textit{image training}, in which spatial transformation (crop and re-scale) and video-mimicked transformation (motion blur) are applied to an image such that a corresponding image pair is generated. 
When video annotations are available, we use \textit{video training}, in which we sample pairs of random frames that are at most 1 second apart. \vspace{-1em} \paragraph{Training.} We jointly train the tracker and detection network. We sample 256 image regions from the output of the RPN to train them. We use SGD with momentum as the optimizer, and we train our model for $25K$ and $50K$ iterations for the CrowdHuman \cite{shao2018crowdhuman} and COCO \cite{lin2014microsoft} datasets, respectively. We resize the image pair during training such that its shorter side has 800 pixels. We start training with a learning rate of $0.02$ and decrease it by a factor of $10$ after $60\%$ of the iterations, and again after $80\%$. We use a fixed weight decay of $10^{-4}$ and a batch size of $16$ image pairs. \vspace{-1em} \paragraph{Inference.} We empirically set linking confidence $\alpha = 0.4$ and detection confidence $\beta = 0.6$, and we present the sensitivity analysis of $\alpha$ and $\beta$ in the supplementary material. We keep a trajectory active until it is unseen for $\tau=30$ frames. 
\begin{table*}[t] \resizebox{\linewidth}{!}{ \centering \begin{tabular}{l|c|ccccc|ccccc | c} \toprule \multicolumn{1}{l}{Models} & \multicolumn{1}{c}{} & \multicolumn{5}{c}{MOT17} & \multicolumn{5}{c}{Caltech Roadside Pedestrians (CRP)} & \multicolumn{1}{c}{TAO-person} \\ \toprule & Runtime & MOTA $\uparrow$ & IDF1 $\uparrow$ & FP $\downarrow$ & FN $\downarrow$ & IDsw $\downarrow$ & MOTA $\uparrow$ & IDF1 $\uparrow$ & FP $\downarrow$ & FN $\downarrow$ & IDsw $\downarrow$ & TAP@0.5 $\uparrow$ \\ \midrule Faster-RCNN (Tracktor) & 23.0 fps & 58.6 & 53.0 & 3195 & 42488 & 858 & 15.9 & 25.1 & 632 & 21238 & 1126 & 29.1\%\\ Faster-RCNN + Flow & 12.5 fps & 60.3 & 54.8 & 3518 & 40387 & 716 & 41.8 & 56.4 & 2381 & 11934 & 1594 & 32.8\% \\ Faster-RCNN + IMM & 19.5 fps & 61.5 & 57.5 & 5730 & 36863 & 678 & 76.8 & 81.2 & 2583 & 2391 & 1377 & 34.7\% \\ Faster-RCNN + EMM & 17.6 fps & 63.3 & 58.4 & 5726 & 34833 & 671 & 76.4 & 81.1 & 2548 & 2575 & 1311 & 35.3\% \\ \bottomrule \end{tabular} } \caption{\small \it Results on the MOT17 train, Caltech Roadside Pedestrians and TAO-Person datasets. FPS is measured on MOT17 videos resized to 720p. IMM and EMM are the motion models presented for SiamMOT.} \label{table:ablation_mot_crp_tao} \end{table*} \section{Ablation analysis} ~\label{sec:ablation} We carry out ablation analysis on MOT17, CRP and TAO-person, which are considerably different from each other (Sec.~\ref{sec:datasets}, Fig.~\ref{fig:dataset_motion_stat}) and provide a good basis for an ablation study. We adopt \textit{image training} to train SiamMOT, as we do not have large-scale video annotations to train a generalized model. Specifically, we train models from the full-body annotations of CrowdHuman \cite{shao2018crowdhuman}, and evaluate them on the MOT17-train and CRP datasets, as they have amodal bounding box annotations. We train models from the visible-body annotations of CrowdHuman and COCO \cite{lin2014microsoft} and evaluate them on the TAO-person dataset. 
We do this to keep the models as comparable as possible while still adhering to the annotation paradigm of each dataset (amodal vs. modal person bounding boxes). In order to directly compare frame-to-frame tracking, we adopt the same solver as that in Tracktor~\cite{bergmann2019tracking}, in which a trajectory is killed immediately if it is unseen (i.e. $\tau=1$ frame). \subsection{Instance-level motion modelling} \label{sec:ilmm} We investigate the benefits of motion modelling for MOT (Table~\ref{table:ablation_mot_crp_tao}). We compare SiamMOT with IMM and EMM against two baselines: (1) our implementation of $\operatorname{Tracktor}$~\cite{bergmann2019tracking}, which we obtain by removing the Siamese tracker from SiamMOT and instead using the detection network to regress the location of the target in the current frame, and (2) $\operatorname{Tracktor+Flow}$, which adds a flow-based model to estimate the movement of people across frames. This flow-based model can be considered a simple forward tracker that ``moves'' the previous target region to the current frame and then uses the detection network (as in $\operatorname{Tracktor}$) to regress to its exact location in the current frame. The movement of a person instance is estimated by taking the median flow field of its constituent pixels. In our experiments we use a pre-trained state-of-the-art PWC-net~\cite{sun2018pwc} to estimate the pixel-wise optical flow field. Finally, for a fair comparison, we use the same detections for all four models. Results show that our implementation of $\operatorname{Tracktor}$ achieves competitive results on both MOT17 and TAO-person (higher than those reported by \cite{dave2020tao},\cite{bergmann2019tracking}), but performs poorly on CRP, as its motion model is too weak to track fast-moving people. Adding flow to $\operatorname{Tracktor}$ significantly improves its performance ($\operatorname{Tracktor+Flow}$), especially on the challenging CRP and TAO-person datasets.
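The median-flow step of the $\operatorname{Tracktor+Flow}$ baseline described above can be sketched as follows (a simplified stand-alone version, not the paper's implementation; the flow field would come from PWC-net, and image-boundary handling is ignored):

```python
import numpy as np

def propagate_box_by_flow(box, flow):
    """Move a previous-frame box into the current frame using the median
    optical flow of its constituent pixels, as in the Tracktor+Flow baseline.

    box:  (x1, y1, x2, y2) in pixel coordinates
    flow: H x W x 2 array of per-pixel (dx, dy) displacements
    """
    x1, y1, x2, y2 = (int(round(v)) for v in box)
    region = flow[y1:y2, x1:x2].reshape(-1, 2)
    dx, dy = np.median(region, axis=0)    # median is robust to outlier pixels
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
```

In the actual baseline, the shifted box is then refined by the detection network's regression head, as described above.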
$\operatorname{SiamMOT}$ improves these results even further, with both $\operatorname{IMM}$ and $\operatorname{EMM}$. The performance gap is especially interesting on the CRP dataset, where both MOTA and IDF1 increase substantially (i.e., $+35$ MOTA and $+25$ IDF1 over $\operatorname{Tracktor+Flow}$). Between these, $\operatorname{EMM}$ performs similarly to $\operatorname{IMM}$ on CRP, but significantly better on MOT17 and TAO-person. This shows the importance of explicit template matching, which is consistent with what is observed in the SOT literature~\cite{li2019siamrpn++, leal2016learning}. Finally, note that tracking performance keeps increasing as we employ better motion models (i.e., $\operatorname{Tracktor} < \operatorname{Flow} < \operatorname{IMM} < \operatorname{EMM}$). This further validates the importance of instance-level motion modelling in MOT. In addition, SiamMOT is significantly more efficient than $\operatorname{Tracktor+Flow}$, in which the flow computation is not shared with $\operatorname{Tracktor}$. \subsection{Training of SiamMOT: triplets sampling} \begin{table}[t] \resizebox{\columnwidth}{!}{ \centering \begin{tabular}{l|c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} |c} \toprule \multicolumn{1}{c}{Sampled triplets} & \multicolumn{5}{c}{MOT17} & \multicolumn{1}{c}{TAO-person}\\ \toprule & MOTA $\uparrow$ & IDF1 $\uparrow$ & FP $\downarrow$ & FN $\downarrow$ & IDsw $\downarrow$ & TrackAP@0.5 $\uparrow$\\ \midrule P + H & 59.7 & 58.6 & 9639 & 34976 & 618 & 34.2\% \\ P + N & 62.7 & 58.3 & 6275 & 34955 & 697 & 35.0\% \\ P + H + N & 63.3 & 58.4 & 5726 & 34833 & 671 & 35.3\% \\ \bottomrule \end{tabular} } \caption{\small \it Effects of sampled triplets for training the forward tracker in SiamMOT. P / N / H are positive / negative / hard training triplets. P+H triplets are usually used in single-object tracking.
} \label{table:ablation_sampling} \end{table} We now evaluate how the distribution of triplets used to train SiamMOT (sec.~\ref{sec:train_infer}) affects its tracking performance. Given a set of training triplets $\mathcal{X} = \cup_{i=1}^N (R_i^t, S_i^{t+\delta}, R_i^{t+\delta})$ from an image pair $\{\mathbf{I}^t, \mathbf{I}^{t+\delta}\}$, a triplet can be negative, positive or hard. It is negative (N) when $R_i^t$ does not include a person, positive (P) when $R_i^t$ and $S_i^{t+\delta}$ include the same person, and hard (H) when $R_i^t$ includes a person, but $S_i^{t+\delta}$ does not include the target person. Similar to the training of SOT, we start by training the Siamese tracker with positive and hard negative ($\operatorname{P+H}$) triplets. As the results in Tab.~\ref{table:ablation_sampling} show, the model achieves a reasonable IDF1 on MOT17, which means that the tracker can follow a true person quite robustly, but it achieves a relatively low MOTA, as it occasionally fails to kill false positive tracks. This is because the Siamese tracker in SiamMOT usually starts with noisy detections rather than with human-annotated regions (as in SOT). Instead, $\operatorname{P+N}$ performs better, and combining all of them ($\operatorname{P+H+N}$) achieves the best results overall. \subsection{Training of SiamMOT: joint training} We now investigate the importance of training the region-based detection network jointly with our Siamese tracker. First, we look at the impact that joint training has on the accuracy of our tracker, and then on the accuracy of the person detector. \noindent {\bf Tracking performance.} We train a model with only the Siamese tracker (i.e., the detection branch is discarded) and utilize the same detections used in the experiments presented in sec.~\ref{sec:ilmm} and Tab.~\ref{table:ablation_mot_crp_tao}. The MOTA achieved by EMM on MOT17 is 63.3 with joint training vs. 61.5 without. This gap shows the benefits of joint training.
\noindent {\bf Detection performance.} We compare two Faster-RCNN models trained with and without our Siamese tracker on MOT17. These models achieve 73.3\% and 73.4\% AP@IOU=$0.5$ respectively, which indicates that the joint training in SiamMOT has no negative impact on the detection network. Overall, these results show that joint training is very important for SiamMOT and leads to the best results. \subsection{Inference of SiamMOT} \begin{table}[t] \resizebox{\linewidth}{!}{ \centering \begin{tabular}{l|c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} |c} \toprule \multicolumn{1}{c}{$\tau$ (frames)} & \multicolumn{5}{c}{MOT17} & \multicolumn{1}{c}{TAO-person}\\ \toprule & MOTA $\uparrow$ & IDF1 $\uparrow$ & FP $\downarrow$ & FN $\downarrow$ & IDsw $\downarrow$ & TrackAP@0.5 $\uparrow$\\ \midrule 1 & 63.3 & 58.4 & 5726 & 34833 & 671 & 35.3\% \\ 5 & 63.5 & 60.0 & 5818 & 34497 & 622 & 35.6\% \\ 15 & 63.4 & 60.8 & 5979 & 34454 & 616 & 36.3\% \\ 30 & 63.3 & 60.6 & 6106 & 34465 & 658 & 36.6\% \\ 60 & 63.0 & 60.2 & 6510 & 34385 & 699 & 37.2\% \\ \bottomrule \end{tabular} } \caption{\small \it Results of SiamMOT inference that terminates active trajectories after they are unseen for $\tau$ consecutive frames.} \label{table:ablation_track_inference} \end{table} Finally, we investigate how the inference of SiamMOT affects MOT performance. Similar to Tracktor \cite{bergmann2019tracking} and CenterTrack \cite{zhou2020tracking}, SiamMOT focuses on improving local tracking as long as the person is visible. However, a person can be briefly invisible due to occlusion (e.g., when people cross each other), which is common in crowded scenes such as MOT17. In order to track through these cases, we allow SiamMOT to track forward even when the trajectory is not visible, i.e., the tracker does not terminate a trajectory until it fails to track the corresponding target for $\tau$ consecutive frames.
Results in Tab.~\ref{table:ablation_track_inference} show that tracking performance increases with $\tau$, especially the IDF1 score / TrackAP, which measure the temporal consistency of trajectories. This means that our tracker is capable of tracking beyond a few consecutive frames. Results also show that the improvement saturates around $\tau=30$ (1s for 30 FPS videos). The reason is that people have likely moved outside of our search region by that time. In the future we will explore improving the motion modelling of the tracker in SiamMOT such that it can track through longer occlusions. \begin{table}[t] \resizebox{\columnwidth}{!}{ \centering \begin{tabular}{l| @{\hskip 0.5em} c |@{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c } \toprule Method & MOTA & IDF1 & MT & ML & FP & FN & IDsw \\ \midrule STRN \cite{xu2019spatial} & 50.9 & 56.5 & 20.1\% & 37.0\% & 27532 & 246924 & 2593 \\ Tracktor++ \cite{bergmann2019tracking} & 53.5 & 52.3 & 19.5\% & 36.6\% & 12201 & 248047 & 2072 \\ DeepMOT \cite{xu2019train} & 53.7 & 53.8 & 19.4\% & 36.6\% & 11731 & 247447 & 1947 \\ Tracktor++ v2 \cite{bergmann2019tracking} & 56.5 & 55.1 & 21.1\% & 35.3\% & 8866 & 235449 & 3763 \\ NeuralSolver \cite{braso2020learning} & 58.8 & 61.7 & 28.8\% & 33.5\% & 17413 & 213594 & 1185 \\ CenterTrack\cite{zhou2020tracking} & 61.5 & 59.6 & 26.4\% & 31.9\% & 14076 & 200672 & 2583\\ \midrule SiamMOT & \textbf{65.9} & \textbf{63.3} & 34.6\% & 23.9\% & 18098 & 170955 & 3040 \\ \bottomrule \end{tabular} } \vspace{-2mm} \caption{\small \it Results on the MOT17 test set with public detection.} \label{table:mot17_results} \end{table} \section{Comparison to State-of-the-art} \label{sec:results} Finally, we compare SiamMOT with state-of-the-art models on three challenging multi-person tracking datasets: MOT17 \cite{milan2016mot16}, TAO-person \cite{dave2020tao} and the HiEve Challenge \cite{lin2020human}.
\paragraph{MOT17} (Tab.~\ref{table:mot17_results}). We report results on the test set using the publicly released detections, as done for the official MOT17 Challenge. We use EMM as the tracker of SiamMOT, pre-train it using {\it image training} on CrowdHuman, and train it on MOT17 using {\it video training}. We obtain our results by submitting SiamMOT predictions to the official evaluation server of the challenge\footnote{\url{https://motchallenge.net/}}. The results show that SiamMOT outperforms all previous methods, including the popular Tracktor++ v2 (+9.4 MOTA) and the state-of-the-art CenterTrack \cite{zhou2020tracking} (+4.4 MOTA). Note that SiamMOT models an instance's motion with region-based features, while CenterTrack uses point-based features. As recent research shows \cite{qiu2020borderdet, yang2019reppoints, tian2019directpose}, region-based features are consistently better for instance recognition and localization. We conjecture this is also true for instance tracking. In addition, CenterTrack implicitly learns to infer an instance's motion in a way similar to the proposed IMM, which is not as good as EMM, as shown in Sec.~\ref{sec:ablation} and by a large body of research in single-object tracking~\cite{leal2016learning, li2019siamrpn++, guo2020siamcar, tao2016siamese}. \begin{table}[t] \centering \resizebox{0.8\linewidth}{!}{ \centering \begin{tabular}{l|@{\hskip 0.5em} l@{\hskip 0.5em} c@{\hskip 0.5em} c } \toprule Method & Backbone & TrackAP@0.5 & TrackAP@0.75 \\ \midrule Tracktor \cite{dave2020tao} & ResNet-101 & 26.0\% & n/a \\ Tracktor++ \cite{dave2020tao} & ResNet-101 & 36.7\% & n/a \\ SiamMOT & ResNet-101 & \textbf{41.1}\% & \textbf{23.0}\% \\ \midrule SiamMOT & DLA-169 & 42.1\% & 24.3\% \\ SiamMOT+ & DLA-169 & \textbf{44.3}\% & \textbf{26.2}\% \\ \bottomrule \end{tabular} } \caption{\small \it Results on the TAO-person validation set.} \label{table:tao_results} \end{table} \vspace{-1em} \paragraph{TAO-person} (Tab.~\ref{table:tao_results}).
We report results on the validation set, similar to \cite{dave2020tao}. We train SiamMOT with EMM using {\it image training} on the MSCOCO and CrowdHuman datasets. SiamMOT outperforms the state-of-the-art Tracktor++ by a significant 4.4\% TrackAP@0.5. As pointed out in \cite{dave2020tao}, linking tracklets with person re-identification embeddings is important in the TAO dataset, as there are a number of videos where people move in and out of the camera view, which is beyond the capability of instance-level motion modelling. Thus, we evaluate SiamMOT+, which merges tracklets with an off-the-shelf person re-id model, the one used in Tracktor++ \cite{bergmann2019tracking}. Thanks to this, SiamMOT+ sets a new state of the art on the challenging TAO-person dataset. Although Tracktor++ gains a large 8\% TrackAP@0.5 boost from re-id linking, we observe a less significant improvement for SiamMOT. This is because our motion model is already capable of linking challenging cases in TAO, reducing the cases where re-id linking is necessary. \begin{table}[t] \resizebox{\linewidth}{!}{ \centering \begin{tabular}{l @{\hskip 0.5em} | c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c } \toprule Method & MOTA & IDF1 & MT & ML & FP & FN & IDsw \\ \midrule DeepSORT \cite{wojke2017simple} & 27.1 & 28.6 & 8.5\% & 41.5\% & 5894 & 42668 & 2220 \\ FCS-Track \cite{lin2020hieve} & 47.8 & 49.8 & 25.3\% & 30.2\% & 3847 & 30862 & 1658 \\ Selective JDE \cite{wu2020transductive} & 50.6 & 56.8 & 25.1\% & 30.3\% & 2860 & 29850 & 1719 \\ LinkBox \cite{peng2020dense} & 51.4 & 47.2 & 29.3\% & 29.0\% & 2804 & 29345 & 1725 \\ \midrule SiamMOT (DLA-34) & 51.5 & 47.9 & 25.8\% & 26.1\% & 3509 & 28667 & 1866 \\ SiamMOT (DLA-169) & \textbf{53.2} & 51.7 & 26.7\% & 27.5\% & 2837 & \textbf{28485} & 1730 \\ \bottomrule \end{tabular} } \caption{\small \it HiEve benchmark leaderboard (public detection).
} \label{table:hie_results} \end{table} \paragraph{HiEve challenge} (Tab.~\ref{table:hie_results}). Finally, to further show the strength of SiamMOT, we present results on the recently released Human in Events (HiEve) dataset~\cite{lin2020human}, which was the benchmark of the HiEve Challenge at ACM MM'20~\cite{lin2020hieve}. The dataset consists of 19 training and 13 test videos with durations ranging from $30$ to $150$ seconds, and the videos mainly feature surveillance scenes in subways, restaurants, shopping malls and outdoor streets. We report results on the test set using the publicly released detections. We jointly train SiamMOT with EMM on CrowdHuman and the HiEve training videos. We obtain our results by submitting its predictions to the official evaluation server of the challenge\footnote{\url{http://humaninevents.org/}}. We submit two sets of results, one obtained with a lightweight DLA-34 backbone and one with a heavier DLA-169. While the former already matches the top performance in the ACM MM'20 HiEve Challenge \cite{lin2020hieve}, the latter beats all winning methods that were heavily tuned for the challenge. \section{Conclusion} We presented a region-based MOT network -- SiamMOT, which detects and associates object instances simultaneously. In SiamMOT, detected instances are temporally linked by a Siamese tracker that models instance motion across frames. We found that the capability of the tracker within SiamMOT is particularly important to MOT performance. We applied SiamMOT to three different multi-person tracking datasets, and it achieved top results on all of them, demonstrating that SiamMOT is a state-of-the-art tracking network. Although SiamMOT has proven to work well on person tracking, its framework can be easily adapted to accommodate multi-class multi-object tracking, and we plan to explore this direction in the future. \begin{appendices} \section{Implicit Motion Model} We show a graphic illustration of our Implicit Motion Model (IMM) in Fig.
\ref{fig:imm}. Please refer to the main paper for the definitions of the mathematical notation. In general, IMM learns the relative location / scale changes (encoded in $m_i$) of person instances from the visual features of both frames. We empirically set the shape of $\mathbf{f}_{S_i}^{t+\delta}$ to $c \times 15 \times 15$, and we observe diminishing performance gains when we increase it to $c \times 30 \times 30$. Under the current configuration, IMM already entails significantly more ($400 \times$) learnable parameters than EMM in the parameterization of the Siamese tracker. \begin{figure}[htp] \centering \includegraphics[width=0.45\textwidth]{Figs/imm-02.png} \caption{\small \it Network architecture of the Implicit Motion Model (IMM). } \label{fig:imm} \end{figure} \section{Explicit Motion Model} During inference, we empirically set $\lambda=0.4$ when generating the penalty map ($\bm \eta_i$) by default. Due to the large person motion in CRP videos, we use $\lambda = 0.1$ there, which does not heavily penalize a matched candidate region that is far away from the target's location in the previous frame. \section{Caltech Roadside Pedestrians (CRP)} We use CRP for ablation analysis mainly because videos in CRP are long and people move very fast, which presents a different tracking scenario compared to existing datasets such as MOT17 and TAO. As CRP is not widely used for multi-person tracking, we adopt the following evaluation protocol: we only evaluate on frames where ground truth is available, and we do not penalize detected instances that overlap with background bounding boxes (instance id = 0). As background bounding boxes are not annotated tightly, we enforce a very loose IOU matching, i.e., a detected bounding box is deemed matched to a background one if their IOU overlap is larger than 0.2. \paragraph{Training in SiamMOT.} We present the ablation experiments in Tab. \ref{table:ablation_sampling_crp}.
Overall, we observe a similar trend to that in MOT17, but we do not observe FP (in the MOTA metric) being reduced as significantly as in MOT17 when negative triplets ($+\operatorname{N}$) are added during training. We find this is mainly because 1) detections in CRP are very accurate, and 2) CRP is not exhaustively annotated, so a large percentage of FPs results from tracking un-annotated persons in the background rather than from real false detections. Note that hard examples (+$\operatorname{H}$) are important to reduce id switches (i.e., false matches). \begin{table}[t] \footnotesize \begin{center} \begin{tabular}{l|c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em}} \toprule \multicolumn{1}{c}{Sampled triplets} & \multicolumn{5}{c}{Caltech Roadside Pedestrians} \\ \toprule & MOTA $\uparrow$ & IDF1 $\uparrow$ & FP $\downarrow$ & FN $\downarrow$ & IDsw $\downarrow$ \\ \midrule P + H & 76.1 & 81.3 & 2679 & 2595 & 1266 \\ P + N & 74.6 & 79.0 & 2428 & 2768 & 1758 \\ P + H + N & 76.4 & 81.1 & 2548 & 2575 & 1311 \\ \bottomrule \end{tabular} \end{center} \caption{\small \it Effects of sampled triplets for training the forward tracker in SiamMOT. P / N / H are positive / negative / hard training triplets. P+H triplets are usually used in single-object tracking. } \label{table:ablation_sampling_crp} \end{table} \paragraph{Inference in SiamMOT.} We find that $\tau > 1$ (frame) has a negligible effect on CRP. This is mainly because people move very fast in CRP videos, so the tracker in SiamMOT fails to track them forward beyond 2 frames. \section{MOT17} We use public detections to generate our results on the test set. We follow the recent practice \cite{bergmann2019tracking, zhou2020tracking} of re-scoring the provided public detections using the detector in SiamMOT, which is allowed under the public detection protocol. We report detailed video-level metrics in Tab. \ref{table:supplement_results_mot17}.
\begin{table}[t] \scriptsize \begin{center} \begin{tabular}{l @{\hskip 0.25em} c @{\hskip 0.25em} c @{\hskip 0.25em} c @{\hskip 0.25em} c @{\hskip 0.5em} c @{\hskip 0.25em} r @{\hskip 0.25em} r @{\hskip 0.25em} r} \toprule Sequence & Det & MOTA$\uparrow$ & IDF1$\uparrow$ & MT$\uparrow$ & ML$\downarrow$ & FP$\downarrow$ & FN $\downarrow$ & IDsw$\downarrow$ \\ \midrule MOT17-01 & DPM & 53.3 & 47.1 & 33.3\% & 37.5\% & 150 & 2830 & 34 \\ MOT17-03 & DPM & 76.5 & 71.7 & 57.4\% & 11.5\% & 1359 & 23137 & 131 \\ MOT17-06 & DPM & 54.9 & 52.7 & 31.9\% & 30.2\% & 1089 & 4043 & 178 \\ MOT17-07 & DPM & 59.9 & 52.5 & 23.3\% & 18.3\% & 651 & 6034 & 86 \\ MOT17-08 & DPM & 40.1 & 35.1 & 21.1\% & 31.6\% & 443 & 12094 & 125 \\ MOT17-12 & DPM & 56.1 & 62.8 & 36.3\% & 31.9\% & 436 & 3349 & 21 \\ MOT17-14 & DPM & 43.9 & 49.0 & 15.9\% & 29.3\% & 947 & 9077 & 340 \\ \midrule MOT17-01 & FRCNN & 52.5 & 45.6 & 33.3\% & 37.5\% & 198 & 2836 & 27 \\ MOT17-03 & FRCNN & 76.8 & 74.9 & 56.8\% & 10.1\% & 1428 & 22787 & 123 \\ MOT17-06 & FRCNN & 58.2 & 54.8 & 37.8\% & 18.0\% & 1283 & 3412 & 227 \\ MOT17-07 & FRCNN & 58.2 & 54.0 & 23.3\% & 15.0\% & 740 & 6264 & 65 \\ MOT17-08 & FRCNN & 36.4 & 35.5 & 21.1\% & 39.5\% & 399 & 12933 & 99 \\ MOT17-12 & FRCNN & 50.1 & 59.2 & 27.5\% & 41.8\% & 512 & 3796 & 19 \\ MOT17-14 & FRCNN & 44.2 & 49.7 & 16.5\% & 28.7\% & 1352 & 8542 & 414 \\ \midrule MOT17-01 & SDP & 55.4 & 47.8 & 33.3\% & 33.3\% & 237 & 2601 & 37 \\ MOT17-03 & SDP & 82.5 & 74.5 & 68.2\% & 8.10\% & 1846 & 16283 & 183 \\ MOT17-06 & SDP & 57.6 & 54.7 & 41.0\% & 23.9\% & 1304 & 3469 & 219 \\ MOT17-07 & SDP & 62.7 & 52.6 & 33.3\% & 11.7\% & 984 & 5228 & 89 \\ MOT17-08 & SDP & 42.1 & 36.7 & 25.0\% & 28.9\% & 527 & 11559 & 152 \\ MOT17-12 & SDP & 54.8 & 63.6 & 37.4\% & 35.2\% & 665 & 3233 & 24 \\ MOT17-14 & SDP & 48.9 & 63.5 & 18.3\% & 23.2\% & 1548 & 7448 & 447 \\ \midrule \multicolumn{2}{c}{All} & 65.9 & 63.5 & 34.6\% & 23.9\% & 18098 & 170955 & 3040 \\ \bottomrule \end{tabular} \end{center} 
\caption{\small \it Detailed result summary on MOT17 test videos.} \label{table:supplement_results_mot17} \end{table} \section{HiEve} We use public detections to generate our results on the test videos, following the same practice as for MOT17. Please refer to the following link in the official leaderboard for detailed video-level metrics as well as visualized predictions. \url{http://humaninevents.org/tracker.html?tracker=1&id=200} \section{TAO-person} \paragraph{Performance per dataset.} We report the performance on the different subsets of TAO-person in Tab. \ref{table:dataset_split_tao}. This dataset-wise performance gives an understanding of how SiamMOT performs in different tracking scenarios. Overall, SiamMOT performs very competitively on self-driving street scenes (e.g., BDD and Argoverse) as well as on the movie dataset Charades. \begin{table}[t] \scriptsize \begin{center} \begin{tabular}{l|cc|cc} \toprule \multicolumn{1}{c}{Subset in TAO} & \multicolumn{2}{c}{SiamMOT(ResNet-101)} & \multicolumn{2}{c}{SiamMOT(DLA-169)} \\ \toprule & TrackAP@0.5 & TrackAP@0.75 & TrackAP@0.5 & TrackAP@0.75\\ \midrule YFCC100M & 41.3\% & 18.3\% & 40.8\% & 20.0\% \\ HACS & 33.1\% & 17.3\% & 35.1\% & 18.2\% \\ BDD & 72.3\% & 41.3\% & 73.8\% & 42.8\% \\ Argoverse & 66.3\% & 39.5\% & 71.7\% & 42.7\% \\ AVA & 41.2\% & 25.8\% & 41.8\% & 26.8\% \\ LaSOT & 28.4\% & 14.9\% & 28.7\% & 16.7\% \\ Charades & 74.8\% & 68.2\% & 85.7\% & 68.4\% \\ \midrule All & 41.1\% & 23.0\% & 42.1\% & 24.3\% \\ \bottomrule \end{tabular} \end{center} \caption{\small \it Dataset-wise performance on TAO-person.} \label{table:dataset_split_tao} \end{table} \paragraph{Federated MOTA.} For reference, we also report the MOT Challenge metrics \cite{milan2016mot16} on the TAO-person validation set in Tab. \ref{table:mota_tao}. We find that SiamMOT also significantly outperforms Tracktor++ \cite{dave2020tao} on those metrics.
\begin{table}[htp] \scriptsize \begin{center} \begin{tabular}{l|l@{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c} \toprule Model & Backbone & MOTA $\uparrow$ & IDF1 $\uparrow$ & MT $\uparrow$ & ML$\downarrow$ & FP $\downarrow$ & FN $\downarrow$ & IDsw $\downarrow$ \\ \toprule Tracktor++ \cite{dave2020tao} & ResNet-101 & 66.6 & 64.8 & 1529 & 411 & 12910 & 2821 & 3487 \\ \midrule SiamMOT & ResNet-101 & 74.6 & 68.0 & 1926 & 204 & 7930 & 4195 & 1816 \\ SiamMOT & DLA-169 & 75.5 & 68.3 & 1941 & 190 & 7591 & 4176 & 1857 \\ SiamMOT+ & DLA-169 & 76.7 & 70.9 & 1951 & 190 & 7845 & 3561 & 1834 \\ \bottomrule \end{tabular} \end{center} \caption{\small \it MOT Challenge metrics on TAO-person validation. } \label{table:mota_tao} \end{table} \section{Sensitivity analysis of parameters} We present a sensitivity analysis of the parameters $\alpha$ and $\beta$ that are used in inference, as we observe that the tracking performance is relatively sensitive to their values. To elaborate, $\alpha$ indicates the detection confidence threshold that we use to start a new trajectory, and $\beta$ is the visibility confidence threshold that is used to determine whether a trajectory needs to be continued. We do a grid search over $\alpha$ ($[0.4 : 0.8 : 0.2]$) and $\beta$ ($[0.4 : 0.8 : 0.2]$), and we present the results on MOT17 in Tab.~\ref{table:alpha_beta}. As expected, large values of $\alpha$ and $\beta$ make the solver too cautious, which leads to high FN. A good balance is achieved when $\beta=0.4$, and $\alpha=0.6$ is used in the rest of the paper to avoid overfitting the solver specifically to MOT17.
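The sweep itself is a plain exhaustive search; the sketch below assumes a hypothetical `evaluate` callback standing in for a full tracking run that returns, e.g., MOTA on the validation videos:

```python
from itertools import product

def sweep_alpha_beta(evaluate, grid=(0.4, 0.6, 0.8)):
    """Grid search over (alpha, beta) in [0.4 : 0.8 : 0.2]; returns the pair
    that maximizes the score produced by `evaluate`."""
    return max(product(grid, grid), key=lambda ab: evaluate(*ab))
```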
\begin{table}[t] \scriptsize \begin{center} \begin{tabular}{ll|c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em} c @{\hskip 0.5em}} \toprule $\alpha$& $\beta$ & MOTA $\uparrow$ & IDF1 $\uparrow$ & FP $\downarrow$ & FN $\downarrow$ & IDsw $\downarrow$ \\ \midrule 0.4 & 0.4 & 63.8 & 58.5 & 6105 & 33876 & 707 \\ 0.4 & 0.6 & 63.0 & 54.4 & 4973 & 35707 & 922 \\ 0.4 & 0.8 & 59.7 & 51.1 & 2595 & 41686 & 975 \\ \midrule 0.6 & 0.4 & 63.3 & 58.4 & 5726 & 34833 & 671 \\ 0.6 & 0.6 & 62.4 & 54.5 & 4330 & 37034 & 869 \\ 0.6 & 0.8 & 59.6 & 51.1 & 2322 & 42167 & 918 \\ \midrule 0.8 & 0.4 & 61.8 & 58.3 & 4742 & 37611 & 588 \\ 0.8 & 0.6 & 60.9 & 54.8 & 3169 & 40030 & 729 \\ 0.8 & 0.8 & 58.7 & 51.6 & 1842 & 43730 & 793 \\ \bottomrule \end{tabular} \end{center} \caption{\small \it Sensitivity analysis of $\alpha$ and $\beta$ on the MOT17 dataset. The experiment settings are exactly the same as those in the ablation analysis. } \label{table:alpha_beta} \end{table} \end{appendices} {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Fair division is the study of how to distribute a set of items among a set of agents in a fair manner. Achieving fairness is particularly challenging when items are \emph{in}divisible. Computational and conceptual challenges have motivated researchers and practitioners to develop a variety of fairness concepts that are applicable to a large number of allocation problems.\footnote{See \citet{Bouveret2016Fair,lang2016fair,markakis2017approximation,doi:10.1146/annurev-economics-080218-025559} for detailed surveys and discussions.} One of the most common fairness concepts, proposed by \citet{budish2011combinatorial}, is the Maximin Share (MMS), which aims to give each agent a bundle whose value meets a certain threshold. The MMS threshold, also known as $1$-out-of-$d$ MMS, generalizes the guarantee of the \textit{cut-and-choose} protocol. It is the value that an agent can secure by partitioning the items into $d$ bundles, assuming it will receive the least preferred bundle. The MMS value depends on the number of partitions, $d$. When all items are goods (i.e., have non-negative values), the $1$-out-of-$d$ MMS threshold is (weakly) monotonically decreasing as the number of partitions ($d$) increases. When allocating goods among $n$ agents, a natural desirable threshold is satisfying $1$-out-of-$n$ MMS for all agents. Unfortunately, while this value can be guaranteed for $n=2$ agents through the cut-and-choose protocol, a $1$-out-of-$n$ MMS allocation of goods may not exist in general for $n \geq 3$ \citep{procaccia2014fair,kurokawa2018fair}. These negative results have given rise to \emph{multiplicative approximations}, wherein each agent is guaranteed at least a constant fraction of its 1-out-of-$n$ MMS. While there have been many attempts to develop algorithms that push this fraction close to 1, the best currently known fraction is $\frac{3}{4} + \frac{1}{12n}$ \citep{garg2020improved}.
Despite numerous studies devoted to their existence and computation, there is a conceptual and practical problem with the multiplicative approximations of MMS: they are very sensitive to agents' precise cardinal valuations. To illustrate, suppose $n=3$ and there are four goods $g_1,g_2,g_3,g_4$ that Alice values at $30, 39, 40, 41$ respectively. Her $1$-out-of-$3$ MMS is $40$, and thus a $\frac{3}{4}$ fraction guarantee can be satisfied by giving her the bundle $\{g_1\}$ or a bundle with a higher value. But if her valuation of good $g_3$ changes slightly to $40+\varepsilon$ (for any $\varepsilon>0$), then $\frac{3}{4}$ of her $1$-out-of-$3$ MMS is larger than $30$, and the bundle $\{g_1\}$ is no longer acceptable for her. Thus, the acceptability of a bundle (in this example $\{g_1\}$) might be affected by an arbitrarily small perturbation in the value of an \emph{irrelevant} good (i.e., $g_3$). In the microeconomics literature, it is common to measure agents' preferences as \emph{ordinal rankings} of the bundles; even when utility functions are used, it is understood that they only represent rankings. From this viewpoint, the set of acceptable bundles should only depend on the ranking of the bundles, and should not be affected by changes in valuations that---similar to the $\varepsilon$ change in the value of $g_3$---do not affect this ranking. According to this principle, \citet{budish2011combinatorial} suggested the 1-out-of-$(n+1)$ MMS as a relaxation of the 1-out-of-$n$ MMS. In the above example, $1$-out-of-$4$ MMS fairness can be satisfied by giving Alice $\{g_1\}$ or a better bundle; small inaccuracies or noise in the valuations do not change the set of acceptable bundles. Hence, this notion provides a more robust approach to evaluating the fairness of allocations. To date, it is not known if 1-out-of-$(n+1)$ MMS allocations are guaranteed to exist. We aim to find allocations of goods that guarantee $1$-out-of-$d$ MMS for some integer $d>n$.
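The numbers in Alice's example can be verified directly from the definition. The sketch below (a hypothetical helper, exponential-time and suitable only for tiny instances like this one; the exact computation is NP-hard in general) computes the $1$-out-of-$d$ MMS by enumerating all assignments of goods to $d$ bundles:

```python
from itertools import product

def mms_1_out_of_d(values, d):
    """1-out-of-d maximin share: the best, over all partitions of the goods
    into d bundles, of the total value of the least-valuable bundle."""
    best = 0
    for labels in product(range(d), repeat=len(values)):
        bundles = [0] * d
        for value, bundle in zip(values, labels):
            bundles[bundle] += value
        best = max(best, min(bundles))
    return best
```

On Alice's valuations, `mms_1_out_of_d([30, 39, 40, 41], 3)` is $40$ and `mms_1_out_of_d([30, 39, 40, 41], 4)` is $30$, matching the thresholds discussed above.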
A $1$-out-of-$d$ MMS allocation guarantees to each agent a bundle that is at least as good as the worst bundle in the best $d$-partition. The aforementioned guarantee can be naturally generalized to $\ell$-out-of-$d$ MMS \citep{babaioff2021competitive}, which guarantees to each agent the value obtained by partitioning the goods into $d$ bundles and selecting the $\ell$ least-valuable ones. Therefore, we further investigate the $\ell$-out-of-$d$ MMS generalization that allows us to improve the fairness thresholds. The notion of $\ell$-out-of-$d$ MMS fairness is \emph{robust} in the sense that a fair allocation remains fair even when each agent's utility function goes through an arbitrary monotonically-increasing transformation. Given these notions, we ask the following questions: \begin{quote} \emph{In the allocation of indivisible goods, (a) For what combinations of integers $\ell$ and $d$ can $\ell$-out-of-$d$ MMS allocations be guaranteed? and (b) For what integers $\ell$ and $d$ can $\ell$-out-of-$d$ MMS allocations be computed in polynomial time? } \end{quote} \subsection{Our Contributions} We investigate the existence and computation of ordinal MMS approximations and make several contributions. In \textbf{Section \ref{sec:goods-lone}}, we prove the existence of $\ell$-out-of-$d$ MMS allocations of goods when $d\geq \floor{ (\ell+\frac{1}{2})n}$ (Theorem \ref{thm:l-out-of-d-existence}). In particular, $1$-out-of-$\floor{3n/2}$ MMS, $2$-out-of-$\floor{5n/2}$ MMS, $3$-out-of-$\floor{7n/2}$ MMS, and so on, are all guaranteed to exist. This finding generalizes the previously known existence result of $1$-out-of-$\ceil{3n/2}$ MMS \citep{hosseini2021mms}. The proof uses an algorithm which, given lower bounds on the $\ell$-out-of-$d$ MMS values of the agents, returns an $\ell$-out-of-$d$ MMS allocation. The algorithm runs in polynomial time given the agents' lower bounds. However, computing the exact $\ell$-out-of-$d$ MMS values is NP-hard.
In the following sections, we propose two solutions to this issue. In \textbf{Section \ref{sec:goods-poly}}, we present polynomial-time algorithms that find an $\ell$-out-of-$(d + o(n))$ MMS-fair allocation, where $d=(\ell+\frac{1}{2})n$. Specifically, for $\ell=1$, we present a polynomial-time algorithm for finding a 1-out-of-$\ceil{3n/2}$ MMS allocation (Theorem~\ref{thm:3n-2poly}); this matches the existence result for 1-out-of-$\floor{3n/2}$ MMS up to an additive gap of at most 1. For $\ell>1$, we present a different polynomial-time algorithm for finding an $\ell$-out-of-$\ceil{(\ell+\frac{1}{2})n + O(n^{2/3})}$ MMS allocation (Theorem~\ref{thm:goods-approx}). In \textbf{Appendix \ref{sec:simulation}}, we conduct simulations with valuations generated randomly from various distributions. For several values of $\ell$, we compute a lower bound on the $\ell$-out-of-$\floor{ (\ell+\frac{1}{2})n}$ MMS guarantee using a simple greedy algorithm. We compare this lower bound to an upper bound on the $(\frac{3}{4}+\frac{1}{12n})$-fraction MMS guarantee, which is currently the best known worst-case multiplicative MMS approximation.%
\footnote{ In general, ordinal and multiplicative approximations are incomparable from the theoretical standpoint---each of them may be larger than the other in some instances (see Appendix \ref{sec:simulation}). Therefore, we compare them through simulations using synthetic data. } We find that, for any $\ell\geq 2$, when the number of goods is at least $\approx 20 n$, the lower bound on the ordinal approximation is better than the upper bound on the multiplicative approximation. This implies that, in practice, the algorithm of Section \ref{sec:goods-lone} can be used with these lower bounds to attain an allocation in which each agent receives a value that is significantly better than the theoretical guarantees.
\iffalse A key implication of our result is enabling the interpolation between indivisible and divisible goods by changing the value of $\ell$. To illustrate, consider a land-estate with value $n$. If the land is perfectly divisible, then it can be divided by classic fair cake-cutting algorithms, guaranteeing each agent a land-plot with value $1$. But the more common process in practice is that a central authority first partitions the land-estate into plots and then allocates the plots as indivisible objects. If the land is partitioned into $(\ell+\frac{1}{2})n$ plots of similar value, then an $\ell$-out-of-$(\ell+\frac{1}{2})n$ MMS guarantee corresponds to about $2 \ell/(2 \ell+1)$ value per agent; a finer partition corresponds to a higher guarantee. At the limit $\ell\to \infty$, the value per agent approaches $1$, corresponding to the cake-cutting guarantee. In contrast, with a multiplicative $3/4$ approximation, the guaranteed value does not approach $1$. \fi \iffalse \footnote{ \er{ While we do not have a complete characterization for the cases in which the ordinal approximation is better, we can provide a sufficient condition for a setting with $m$ identical goods (of value 1). For such an agent, the $l$-out-of-$d$ MMS equals $\lfloor\ell m/d\rfloor$ which is between $(\ell m-d)/d$ and $\ell m/d$. So the ordinal approximation is better than the $3/4$ multiplicative approximation whenever $\frac{\ell m-d}{d}\geq\frac{3}{4}\frac{m}{n}\quad\Rightarrow\quad\frac{\ell}{\ell+1/2}m\geq\frac{3}{4}m+n$. When $m\gg n$, a sufficient condition is that $\ell>3/2$. } } \fi \subsection{Techniques} At first glance, it would seem that the techniques used to attain a $2/3$ approximation of MMS should also work for achieving $1$-out-of-$\floor{3n/2}$ MMS allocations, since both guarantees approximate the same value, namely $\frac{2}{3}$ of the ``proportional share'' ($\frac{1}{n}$ of the total value of all goods).
In \textbf{Appendix \ref{sec:negative}}, we present an example showing that this is not the case, and thus, achieving ordinal MMS approximations requires new techniques. In this section, we briefly describe the techniques that we utilize to achieve ordinal approximations of MMS. \paragraph{Lone Divider.} To achieve the existence result for any $\ell\geq 1$, we use a variant of the \emph{Lone Divider} algorithm, which was first presented by \citet{kuhn1967games} for finding a proportional allocation of a divisible good (also known as a ``cake''). Recently, it was shown that the same algorithm can be used for allocating indivisible goods too. When applied directly, the Lone Divider algorithm finds only an $\ell $-out-of-$((\ell +1)n-2)$ MMS allocation \citep{aigner2022envy}, which for small $\ell$ is substantially worse than our target approximation of $\ell $-out-of-$\floor{ (\ell+\frac{1}{2})n}$. We overcome this difficulty by adding constraints on the ways in which the `lone divider' is allowed to partition the goods, as well as arguing about which goods are selected to be included in each partition (see Section \ref{sec:goods-lone}). \paragraph{Bin Covering.} To develop a polynomial-time algorithm when $\ell = 1$, we extend an algorithm of \citet{csirik1999two} for the \emph{bin covering} problem---a dual of the more famous \emph{bin packing} problem \citep{johnson1973near}. In this problem, the goal is to fill as many bins as possible with items of given sizes, where the total size in each bin must be at least a given threshold. This problem is NP-hard, but \citet{csirik1999two} presents a polynomial-time $2/3$ approximation. This algorithm cannot be immediately applied to the fair division problem, since the valuations of goods are \textit{subjective}, meaning that agents may have different valuations of each good. We adapt this technique to handle subjective valuations.
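To give a flavor of the bin-covering objective, here is a naive greedy sketch (our illustration, and deliberately simpler than the $2/3$-approximation of \citet{csirik1999two}, which is more careful about pairing large and small items):

```python
def greedy_bin_cover(sizes, threshold):
    """Naive greedy for bin covering: scan items in decreasing order
    of size and close the current bin as soon as its total reaches
    the threshold. Illustration only, with no approximation guarantee
    claimed here."""
    bins, current = [], []
    for s in sorted(sizes, reverse=True):
        current.append(s)
        if sum(current) >= threshold:
            bins.append(current)
            current = []
    return bins  # leftover items in `current` cover no bin

# six items, threshold 10: this greedy covers two bins
assert greedy_bin_cover([6, 6, 5, 5, 4, 4], 10) == [[6, 6], [5, 5]]
```

In the fair-division adaptation, the "size" of a good is its subjective value to a specific agent, which is the complication addressed in Section \ref{sec:goods-poly}.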
\section{Related Work} \label{sec:related} \subsection{Maximin Share} The idea of using the highest utility an agent could obtain if all other agents had the same preferences as a benchmark for fairness originated in the economics literature \cite{Moulin1990Uniform,Moulin1992Welfare}. It was put into practice in the context of course allocation by \citet{budish2011combinatorial}, who introduced the ordinal approximation to MMS and presented a mechanism that guarantees $1$-out-of-$(n+1)$ MMS to all agents by adding a small number of excess goods. In the more standard fair division setting, in which adding goods is impossible, the first non-trivial ordinal approximation was 1-out-of-$(2n-2)$ MMS \citep{aigner2022envy}. \citet{hosseini2021mms} studied the connection between guaranteeing 1-out-of-$n$ MMS for $2/3$ of the agents and the ordinal approximations for \textit{all} agents. The implication of their results is the existence of $1$-out-of-$\ceil{3n/2}$ MMS allocations and a polynomial-time algorithm for $n<6$. Whether or not $1$-out-of-$(n+1)$ MMS can be guaranteed without adding excess goods remains an open problem to date. The generalization of the maximin share to arbitrary $\ell\geq 1$ was first introduced by \citet{babaioff2019fair,babaioff2021competitive} and further studied by \citet{segal2020competitive}. They presented this generalization as a natural fairness criterion for agents with different entitlements. The implication relations between $\ell$-out-of-$d$ MMS-fairness guarantees for different values of $\ell$ and $d$ were characterized by \citet{segal2019maximin}. Recently, the maximin share and its ordinal approximations have also been applied to some variants of the \emph{cake-cutting} problem \citep{ElkindSeSu21,ElkindSeSu21b,ElkindSeSu21c,bogomolnaia2022guarantees}. \subsection{Multiplicative MMS Approximations} The multiplicative approximation to MMS originated in the computer science literature \citep{procaccia2014fair}.
The non-existence of MMS allocations \citep{kurokawa2018fair} and its intractability \citep{Bouveret2016,woeginger1997polynomial} have given rise to a number of approximation techniques. These algorithms guarantee that each agent receives an approximation of their maximin share threshold. The currently known algorithms guarantee $\beta \geq 2/3$ \citep{kurokawa2018fair,amanatidis2017approximation,garg2018approximating} and $\beta \geq 3/4$ \citep{ghodsi2018fair,garg2020improved} in general, and $\beta \geq 7/8$ \citep{amanatidis2017approximation} as well as $\beta\geq 8/9$ \cite{gourves2019maximin} when there are only three agents. There are also MMS approximation algorithms for settings with constraints, such as when the goods are allocated on a cycle and each agent must get a connected bundle \citep{truszczynski2020maximin}. \citet{mcglaughlin2020improving} showed an algorithm for approximating the maximum Nash welfare (the product of agents' utilities), which also attains a fraction $1/(2n)$ of the MMS. Recently, \citet{nguyen2017approximate} gave a Polynomial Time Approximation Scheme (PTAS) for a notion defined as \textit{optimal-MMS}, that is, the largest value $\beta$ for which each agent receives at least a fraction $\beta$ of its MMS. Since the number of possible partitions is finite, an optimal-MMS allocation always exists, and it is an MMS allocation if $\beta \geq 1$. However, an optimal-MMS allocation may provide an arbitrarily bad ordinal MMS guarantee. \citet{Searns_Hosseini_2020,hosseini2021mms} show that for every $n$, there is an instance with $n$ agents in which, under \textit{any} optimal-MMS allocation, only a constant number of agents ($\leq 4$) receive their MMS value. \subsection{Fairness Based on Ordinal Information} An advantage of the ordinal MMS approximation is that it depends only on the ranking over the bundles.
Other fair allocation algorithms with this robustness property are the Decreasing Demands algorithm of \citet{herreiner2002simple}, the Envy Graph algorithm of \citet{lipton2004approximately}, and the UnderCut algorithm of \citet{Brams2012Undercut}. \citet{amanatidis2016truthful,halpern2021fair} study an even stronger robustness notion, where the agents report only a ranking over the \emph{goods}. Their results imply that, in this setting, the highest attainable multiplicative approximation of MMS is $\Theta(1/\log n)$. \citet{menon2020algorithmic} define a fair allocation algorithm as \emph{stable} if it gives an agent the same value even if the agent slightly changes his cardinal valuations of goods, as long as the ordinal ranking of the goods remains the same. They show that most existing algorithms are not stable, and present an approximately-stable algorithm for the case of two agents. Finally, robustness has also been studied in the context of \emph{fair cake-cutting}. \citet{aziz2014cake} define an allocation as \emph{robust-fair} if it remains fair even when the valuation of an agent changes, as long as its ordinal information remains unchanged. \citet{edmonds2011cake} study cake-cutting settings in which agents can only cut the cake with finite precision. \section{Preliminaries} \label{sec:prel} \subsection{Agents and Goods} \label{sub:agents} Let $N = [n] := \{1,\ldots, n\}$ be a set of agents and let $M$ denote a set of $m$ indivisible goods. We denote the value of agent $i\in N$ for good $g\in M$ by $v_{i}(g)$.
We assume that the valuation functions are \textit{additive}, that is, for each subset $G\subseteq M$, $v_{i}(G) = \sum_{g\in G} v_{i}(g)$, and $v_i(\emptyset)=0$.%
\footnote{In Appendix~\ref{app:responsive} we complement our results with a non-existence result for the more general class of \textit{responsive} preferences.} An \emph{instance} of the problem is denoted by $I = \ins{N, M, V}$, where $V = (v_1, \ldots, v_n)$ is the valuation profile of the agents. We assume all agents have a non-negative valuation for each good $g\in M$, that is, $v_i(g) \geq 0$. An \emph{allocation} $A = (A_1, \ldots, A_n)$ is an $n$-partition of $M$ that allocates the bundle of goods in $A_{i}$ to each agent $i\in N$. It is convenient to assume that the number of goods is sufficiently large. In particular, some algorithms implicitly assume that $m\geq n$, while others implicitly assume that $m\geq \ell \cdot n$. These assumptions are without loss of generality, since if $m$ in the original instance is smaller, we can just add dummy goods with a value of $0$ to all agents. \subsection{The Maximin Share} For every agent $i\in N$ and integers $1\leq \ell < d$, the \emph{$\ell$-out-of-$d$ maximin share of $i$ from $M$}, denoted $\mms{i}{\ell}{d}{M}$, is defined as \begin{align*} \mms{i}{\ell}{d}{M} := ~~ \max_{\mathbf{P}\in \partition{M}{d}} ~~ \min_{Z\in \union{\mathbf{P}}{\ell}} ~~ v_i(Z) \end{align*} where the maximum is over all partitions of $M$ into $d$ subsets, and the minimum is over all unions of $\ell$ subsets from the partition. We say that an allocation $A$ is \emph{an $\ell$-out-of-$d$ MMS allocation} if for all agents $i\in N$, $v_{i}(A_i) \geq \mms{i}{\ell}{d}{M}$. Obviously $\mms{i}{\ell}{d}{M} \leq \frac{\ell}{d}v_i(M)$, and equality holds if and only if $M$ can be partitioned into $d$ subsets with the same value. Note that $\mms{i}{\ell}{d}{M}$ is a weakly-increasing function of $\ell$ and a weakly-decreasing function of $d$.
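For small instances, this definition can be evaluated directly. The sketch below (our illustration; exponential in the number of goods, so for toy examples only) computes $\mms{i}{\ell}{d}{M}$ by enumerating all assignments of goods to $d$ bundles, and checks the monotonicity properties just mentioned:

```python
from itertools import product

def mms(values, ell, d):
    """ell-out-of-d MMS of an additive agent: maximize, over all
    assignments of goods to d (possibly empty) bundles, the total
    value of the ell least-valuable bundles. Brute force."""
    best = 0
    for assignment in product(range(d), repeat=len(values)):
        bundles = [0] * d
        for v, b in zip(values, assignment):
            bundles[b] += v
        best = max(best, sum(sorted(bundles)[:ell]))
    return best

values = [1, 1, 0.5]
assert mms(values, 2, 3) == 1.5   # two least singletons: 1 + 0.5
assert mms(values, 1, 3) == 0.5
# weakly increasing in ell, weakly decreasing in d:
assert mms(values, 2, 3) >= mms(values, 1, 3)
assert mms(values, 1, 2) >= mms(values, 1, 3)
```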
The value $\mms{i}{\ell}{d}{M}$ is at least as large as, and sometimes larger than, $\ell\cdot \mms{i}{1}{d}{M}$. For example, suppose that $\ell=2$ and there are $d-1$ goods with value $1$ and one good with value $\varepsilon < 1$. Then $\mms{i}{2}{d}{M} = 1 + \varepsilon$ but $2\cdot \mms{i}{1}{d}{M} = 2\varepsilon$. The maximin-share notion is scale-invariant in the following sense: if the values of each good for an agent, say $i$, are multiplied by a constant $c$, then agent $i$'s MMS value is also multiplied by the same $c$, so the set of bundles that are worth at least $\mms{i}{\ell}{d}{M}$ to $i$ does not change. \subsection{The Lone Divider Algorithm} \label{sub:lone-divider} A general formulation of the Lone Divider algorithm, based on \citet{aigner2022envy}, is shown in Algorithm \ref{alg:lone-divider-general}. It accepts as input a set $M$ of items and a threshold value $t_i$ for each agent $i$. These values should satisfy the following condition for each agent $i\in N$. \begin{definition}[Reasonable threshold] \label{def:reasonable} Given a set $M$, a value function $v_i$ on $M$, and an integer $n\geq 2$, \emph{a reasonable threshold for $v_i$} is a real number $t_i\in \mathbb{R}$ satisfying the following condition: For every integer $k\in\{0,\ldots,n-1\}$ and any $k$ disjoint subsets $B_1,\ldots,B_k \subseteq M$, if \begin{align*} \forall c\in[k]: v_i(B_c) < t_i, \end{align*} then there exists a partition of $M\setminus \cup_{c\in[k]} B_c$ into $M_1 \cup \cdots \cup M_{n-k}$, such that \begin{align*} \forall j\in[n-k]: v_i(M_j)\geq t_i. \end{align*} Informally, if any $k$ unacceptable subsets are given away, then $i$ can partition the remainder into $n-k$ acceptable subsets. In particular, the case $k=0$ implies that agent $i$ can partition the original set $M$ into $n$ acceptable subsets.
Given an instance $I = \ins{N, M, V}$ with $N=[n]$, a vector $(t_i)_{i=1}^n$ of real numbers is called \emph{a reasonable threshold vector for $I$} if $t_i$ is a reasonable threshold for $v_i$ for all $i\in N$.%
\footnote{ While we use the Lone Divider algorithm for allocating indivisible goods, it is a more general scheme that can also be used to divide chores or mixed items, divisible or indivisible. See \citet{aigner2022envy} for details. } \end{definition} \begin{example} [\textbf{Reasonable threshold}] Suppose $M$ is perfectly divisible (e.g., a cake), and let $t_i := v_i(M)/n$. This threshold is reasonable, since if some $k$ bundles with value less than $t_i$ are given away, the value of the remaining cake is more than $(n-k)t_i$. Since the cake is divisible, it can be partitioned into $n-k$ acceptable subsets. This does not necessarily hold when $M$ is a set of indivisible items; hence, finding reasonable thresholds in the indivisible-items setting is more challenging. $\blacksquare$ \end{example} \begin{algorithm}[t] \caption{ \label{alg:lone-divider-general} The Lone Divider algorithm. Based on \citet{kuhn1967games}. } \begin{algorithmic}[1] \REQUIRE ~ An instance $\ins{N, M, V}$ where $N$ is the set of agents, $M$ is the set of items, and $V$ is the vector of agents' valuations; and a reasonable threshold vector $(t_i)_{i=1}^n$ as defined in Definition \ref{def:reasonable}. \ENSURE A partition $M = A_1\cup \cdots\cup A_n$ such that $v_i(A_i)\geq t_i$ for all $i\in [n]$. \STATE \label{step:ldg-cut} Some arbitrary agent $a\in N$ is asked to partition $M$ into $|N|$ disjoint subsets, $(Y_j)_{j\in N}$, with $\forall j\in N: v_a(Y_j)\geq t_a$. \STATE \label{step:ldg-graph} Define a bipartite graph $G$ with the agents of $N$ on one side and the set $Y := \{Y_1,\ldots,Y_{|N|}\}$ on the other side. Add an edge $(i, Y_j)$ whenever $v_i(Y_j)\geq t_i$. \STATE \label{step:ldg-efm} Find a maximum-cardinality envy-free matching$^{\ref{ftn:efm}}$ in $G$.
Give each matched element in $Y$ to the agent paired to it in $N$. \STATE \label{step:ldg-recurse} Let $N \leftarrow $ the unallocated agents and $M \leftarrow $ the unallocated objects. If $N\neq\emptyset$ go back to Step \ref{step:ldg-cut}. \end{algorithmic} \end{algorithm} \paragraph{Algorithm Description} Algorithm \ref{alg:lone-divider-general} proceeds in the following way: in each step, a single remaining agent is asked to partition the remaining goods into acceptable bundles---bundles whose values are above the divider's threshold. Then, all agents point at those bundles that are acceptable for them, and the algorithm finds an \textit{envy-free matching} in the resulting bipartite graph.% \footnote{ \label{ftn:efm} An \emph{envy-free matching} in a bipartite graph $(N\cup Y, E)$ is a matching in which each unmatched agent in $N$ is not adjacent to any matched element in $Y$. The bipartite graph generated by the Lone Divider algorithm always admits a nonempty envy-free matching, and a maximum-cardinality envy-free matching can be found in polynomial time \citep{aigner2022envy}. } The matched bundles are allocated to the matched agents, and the algorithm repeats with the remaining agents and goods. It is easy to see that, if all threshold values $t_i$ are reasonable, then Lone Divider guarantees agent $i$ a bundle with a value of at least $t_i$. For example, when $M$ is a cake, $t_i = v_i(M)/n$ is a reasonable threshold for every $i$, so Lone Divider can be used to attain a \emph{proportional cake-cutting} \citep{kuhn1967games}. When $M$ is a set of indivisible goods, $t_i = \mms{i}{\ell}{[(\ell+1)n-2]}{M}$ is a reasonable threshold for every $\ell\geq 1$ \citep{aigner2022envy}, so these ordinal approximations can all be computed directly through the Lone Divider algorithm. However, directly applying the Lone Divider algorithm cannot guarantee a better ordinal approximation, as we show next. 
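To make Steps \ref{step:ldg-graph}--\ref{step:ldg-efm} concrete, the following Python sketch (our illustration; the function names are ours) builds a maximum matching by augmenting paths and then prunes any matched bundle that an unmatched agent points at. This pruning mirrors the alternating-reachability idea used by \citet{aigner2022envy}; see that paper for the full maximum-cardinality algorithm and its proof.

```python
def max_matching(adj):
    """Maximum bipartite matching via augmenting paths.
    adj[i] = list of bundle indices acceptable to agent i."""
    match_r = {}  # bundle -> agent

    def augment(i, seen):
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if j not in match_r or augment(match_r[j], seen):
                    match_r[j] = i
                    return True
        return False

    for i in range(len(adj)):
        augment(i, set())
    return {i: j for j, i in match_r.items()}  # agent -> bundle

def envy_free_matching(adj):
    """Prune a maximum matching until no unmatched agent is adjacent
    to a matched bundle (the envy-free condition of the footnote)."""
    match = max_matching(adj)
    while True:
        matched = set(match.values())
        envied = {j for i in range(len(adj)) if i not in match
                    for j in adj[i] if j in matched}
        if not envied:
            return match
        match = {i: j for i, j in match.items() if j not in envied}

# agent 0 is the divider (accepts every bundle); agents 1 and 2 both
# accept only bundle 0, so bundle 0 cannot be matched envy-freely.
adj = [[0, 1, 2], [0], [0]]
m = envy_free_matching(adj)
assert all(j not in set(m.values())
           for i in range(3) if i not in m for j in adj[i])
```

In the example, the divider keeps one of the bundles that agents $1$ and $2$ find unacceptable, and the algorithm would then recurse on the two remaining agents and the unallocated goods.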
\begin{example}[\textbf{Execution of Algorithm \ref{alg:lone-divider-general}}] \label{exm:lone-divider} For simplicity, we present an example for $\ell=1$. We show that, while Algorithm \ref{alg:lone-divider-general} can guarantee $1$-out-of-$(2n-2)$ MMS, it cannot guarantee $1$-out-of-$(2n-3)$ MMS. Suppose that there are $4n-6$ goods, and that all agents except the first divider value some $2n-3$ goods at $1-\varepsilon$ and the other $2n-3$ goods at $\varepsilon$ (see Figure \ref{fig:lone-divider}). Then the 1-out-of-$(2n-3)$ MMS of all these agents is $1$. However, it is possible that the first divider takes an unacceptable bundle containing all $2n-3$ goods of value $\varepsilon$. Then, no remaining agent can partition the remaining goods into $n-1$ bundles of value at least $1$. In this instance, it is clear that while $\mms{i}{1}{(2n-2)}{M}$ is a reasonable threshold, $\mms{i}{1}{(2n-3)}{M}$ is not. $\blacksquare$ \begin{figure} \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale=0.5] \mmsbundle{0}{6}{$1-\varepsilon$}{7}{$\varepsilon$} \mmsbundle{3}{6}{$1-\varepsilon$}{7}{$\varepsilon$} \mmsbundle{6}{6}{$1-\varepsilon$}{7}{$\varepsilon$} \mmsbundle{9}{6}{$1-\varepsilon$}{7}{$\varepsilon$} \mmsbundle{12}{6}{$1-\varepsilon$}{7}{$\varepsilon$} \end{tikzpicture} \caption{} \end{subfigure} ~~ \begin{subfigure}[b]{0.1\textwidth} \begin{tikzpicture}[scale=0.5] \itemrect{0}{0}{2}{1}{$\varepsilon$} \itemrect{0}{1}{2}{1}{$\varepsilon$} \itemrect{0}{2}{2}{1}{$\varepsilon$} \itemrect{0}{3}{2}{1}{$\varepsilon$} \itemrect{0}{4}{2}{1}{$\varepsilon$} \end{tikzpicture} \caption{} \end{subfigure} ~~ \begin{subfigure}[b]{0.3\textwidth} \begin{tikzpicture}[scale=0.5] \itemrect{0}{0}{2}{6}{$1-\varepsilon$} \itemrect{0}{6}{2}{6}{$1-\varepsilon$} \itemrect{3}{0}{2}{6}{$1-\varepsilon$} \itemrect{3}{6}{2}{6}{$1-\varepsilon$} \itemrect{6}{0}{2}{6}{$1-\varepsilon$} \end{tikzpicture} \caption{} \end{subfigure} \caption{ \label{fig:lone-divider} An illustration of the
goods' values in Example \ref{exm:lone-divider}, for $n=4$. \\ (a) The $2n-3=5$ MMS bundles of some agent. \\ (b) The unacceptable bundle taken by the first divider. \\ (c) The remaining goods, which cannot be combined into $n-1=3$ acceptable bundles. } \end{figure} \end{example} \section{Ordinal Approximation of MMS for Goods} \label{sec:goods-lone} In this section we prove the following theorem. \begin{theorem} \label{thm:l-out-of-d-existence} Given an additive goods instance, an $\ell$-out-of-$d$ MMS allocation always exists when $d = \floor{ (\ell+\frac{1}{2})n}$. \end{theorem} The proof is constructive: we present an algorithm (Algorithm \ref{alg:existenceMMS}) for achieving the above MMS bound. Since the algorithm needs to know the exact MMS thresholds of each agent (which are NP-hard to compute), its run-time is not polynomial. In Section \ref{sec:goods-poly} we present a different algorithm that computes an $\ell$-out-of-$d$ MMS allocation for $\ell = 1$ in polynomial time. Algorithm \ref{alg:existenceMMS} starts with two normalization steps; parts of these steps appeared in previous works, while others are specific to our algorithm. For completeness, we describe the normalization steps in Sections \ref{sub:normalize-goods} and \ref{sub:ordering}. The algorithm then applies to the normalized instance an adaptation of the Lone Divider algorithm, in which the divider in each step must construct a \emph{balanced} partition. We explain this notion in Section \ref{sub:restricted-ld}. \subsection{Scaling} \label{sub:normalize-goods} We start by scaling the valuations such that $\mms{i}{\ell}{d}{M} = \ell$ for each agent $i$. The scale-invariance property implies that such rescalings do not modify the set of bundles that are acceptable for $i$. Then, for each $i$ we perform an additional scaling as follows. \begin{itemize} \item Consider a particular $d$-partition attaining the maximum in the definition of $\mms{i}{\ell}{d}{M}$.
Call the $d$ bundles in this partition the \emph{MMS bundles} of agent $i$. \item Denote the total value of the $\ell-1$ least-valuable MMS bundles by $x_i$ (or just $x$, when $i$ is clear from the context). By definition, the value of the $\ell$-th MMS bundle must be exactly $\ell-x$, while the value of each of the other $d-\ell$ MMS bundles is at least $\ell-x$. \item For each MMS bundle with value larger than $\ell-x$, arbitrarily pick one or more goods and decrease their value until the value of the MMS bundle becomes exactly $\ell-x$. Note that this does not change the MMS value. \end{itemize} After the normalization, the sum of values of all goods is \begin{align*} v_i(M) = & \ (d-\ell+1)\cdot (\ell-x_i) + x_i \\ =& \ \ell + (d-\ell)(\ell-x_i) = d + (d-\ell)(\ell-1-x_i). \end{align*} Since $d = \floor{ (\ell+\frac{1}{2})n} \geq (\ell+\frac{1}{2})n -\frac{1}{2} = \ell n + n/2 -1/2 $, \begin{align} \notag v_i(M) \geq & \ (\ell n + n/2 -1/2 ) + (\ell n + n/2 -1/2 -\ell)(\ell-1-x_i) \\ \label{eq:total-value} =& \ n\cdot \ell ~~+~~ (n-1)\cdot \ell(\ell-1-x_i) ~~+~~ (n-1)\cdot (\ell-x_i)/2. \end{align} The goal of the algorithm is to give each agent $i$ a bundle $A_i$ with $v_i(A_i)\geq \ell$. We say that such a bundle is \emph{acceptable} for $i$. \begin{example}[\textbf{Scaling}] \label{exm:x} To illustrate the parameter $x$, consider the following two instances with $n=5, \ell=3$ and $d=\floor{ (\ell+\frac{1}{2})n}=17$. \begin{enumerate} \item There are $17$ goods with the value of $1$. \item There are $16$ goods valued $1.2$ and one good with the value of $0.6$. \end{enumerate} Here, each MMS bundle contains a single good. In both cases, the value of every $3$ goods is at least $3$. In the first case $x=2$ and the total value is $5\cdot 3 + 4\cdot 3\cdot 0 + 4\cdot 1/2 = 17$. In the second case, $x=1.8$ and the total value is $5\cdot 3 + 4\cdot 3\cdot 0.2 + 4\cdot 1.2/2 = 19.8$. 
$\blacksquare$ \end{example} \subsection{Ordering the Instance} \label{sub:ordering} As in previous works \citep{Bouveret2016,barman2017approximation,garg2018approximating,huang2021algorithmic}, we apply a preliminary step in which the instance is \emph{ordered}, i.e., $v_i(g_1)\geq \cdots \geq v_i(g_m)$ for each agent $i\in N$. Ordering is done as follows: \begin{itemize} \item Index the goods in $M$ arbitrarily $g_1,\ldots, g_m$. \item Tell each agent $i$ to adopt, for the duration of the algorithm, a modified value function that assigns, to each good $g_j$, the value of the $j$-th most valuable good according to $i$. For example, the new $v_i(g_1)$ should be the value of $i$'s most-valuable good; the new $v_i(g_m)$ should be the value of $i$'s least-valuable good; etc. Ties are broken arbitrarily. \end{itemize} During the execution of the algorithm, each agent answers all queries according to this new value function. For example, an agent asked whether the bundle $\{g_1,g_4,g_5\}$ is acceptable, should answer whether the bundle containing his best good, 4th-best good and 5th-best good is acceptable. Once the algorithm outputs an allocation, it can be treated as a \emph{picking sequence} in which, for example, an agent who receives the bundle $\{g_1,g_4,g_5\}$ has the first, fourth and fifth turns. It is easy to see that such an agent receives a bundle that is at least as good as the bundle containing her best, 4th-best and 5th-best goods. Hence, if the former is acceptable then the latter is acceptable too. Clearly, given an \textit{un}ordered instance, its corresponding ordered instance can be generated in polynomial time (for each agent $i\in [n]$, we need $O(m\log m)$ steps for ordering the valuations). Given an allocation for the ordered instance, one can compute the allocation for the corresponding unordered instance in time $O(n)$, using the picking-sequence described above. 
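The ordering step and the picking-sequence interpretation can be sketched as follows (our illustration; the function names are ours, and valuations are assumed additive):

```python
def order_instance(valuations):
    """Each agent's modified value for g_j is the value of their
    j-th most-valuable good, as described above."""
    return [sorted(v, reverse=True) for v in valuations]

def to_unordered_allocation(valuations, ordered_alloc):
    """Turn an allocation of the ordered instance into one for the
    original instance via the picking sequence: in turn j, the agent
    holding g_j picks their favorite remaining real good."""
    m = len(valuations[0])
    owner = {j: i for i, bundle in enumerate(ordered_alloc) for j in bundle}
    remaining = set(range(m))
    result = [[] for _ in ordered_alloc]
    for turn in range(m):
        i = owner[turn]
        g = max(remaining, key=lambda g: valuations[i][g])
        result[i].append(g)
        remaining.remove(g)
    return result

# two agents, three goods; agent 0 holds g_1, agent 1 holds g_2, g_3
valuations = [[3, 1, 2], [1, 2, 3]]
assert order_instance(valuations) == [[3, 2, 1], [3, 2, 1]]
assert to_unordered_allocation(valuations, [[0], [1, 2]]) == [[0], [2, 1]]
```

In the example, each agent's final bundle is worth at least as much to them as the corresponding bundle in the ordered instance, which is exactly the guarantee used in the correctness argument above.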
\subsection{Restricted Lone Divider} \label{sub:restricted-ld} \begin{algorithm}[t] \caption{ \label{alg:(L+1/2)n} Finding an $\ell$-out-of-$\floor{ (\ell+\frac{1}{2})n}$ MMS allocation. } \begin{algorithmic}[1] \REQUIRE An instance $\ins{N, M, V}$ and an integer $\ell \geq 1$. \ENSURE An $\ell$-out-of-$\floor{ (\ell+\frac{1}{2})n}$ MMS allocation. \STATE Scale the valuations of all agents as explained in Section \ref{sub:normalize-goods}. \STATE Order the instance as explained in Section \ref{sub:ordering}. \STATE Run the Lone Divider algorithm (Algorithm \ref{alg:lone-divider-general}) with threshold values $t_i = \ell$ for all $i\in N$, with the restriction that, in each partition made by the lone divider, all bundles must be $\ell$-balanced (Definition \ref{def:balanced-bundle}). \end{algorithmic} \label{alg:existenceMMS} \end{algorithm} In Section \ref{sub:lone-divider} we illustrated the limitations of the plain Lone Divider algorithm (Algorithm \ref{alg:lone-divider-general}). We can improve its performance by restricting the partitions that the lone divider is allowed to make in Step 1 of Algorithm \ref{alg:lone-divider-general}. Without loss of generality, we may assume (by adding dummy goods if needed) that $m\geq n\cdot \ell$. For every $l\range{1}{\ell}$, denote $G_l^n := \{g_{(l-1)n+1},\ldots,g_{l n}\}$. In other words, $G_1^n$ contains the $n$ most-valuable goods; $G_2^n$ contains the $n$ next most-valuable goods; and so on. Since the instance is ordered, these sets are the same for all agents. \begin{definition}[$\ell$-balanced bundle] \label{def:balanced-bundle} Given an ordered instance and an integer $\ell\geq 1$, a nonempty bundle $B\subseteq M$ is called \emph{$\ell$-balanced} if \begin{itemize} \item $B$ contains exactly one good from $G^n_1$. \item If $|B|\geq 2$, then $B$ contains exactly one good from $G^n_2$. \item If $|B|\geq 3$, then $B$ contains exactly one good from $G^n_3$. 
\item $\ldots$ \item If $|B|\geq \ell$, then $B$ contains exactly one good from $G^n_{\ell}$. \end{itemize} Note that an $\ell$-balanced bundle contains at least $\ell$ goods. The definition of $\ell$-balanced bundles only constrains the allocation of the first $\ell n$ goods; there may be arbitrarily many additional goods in $M \setminus \bigcup_{i = 1}^{\ell} G_{i}^{n}$, and they may be allocated arbitrarily. \end{definition} Algorithm \ref{alg:(L+1/2)n} requires the lone divider to construct a partition in which all $n$ bundles are $\ell$-balanced. \begin{example}[\textbf{$\ell$-balanced bundles}] Suppose there are five agents ($n=5$) and $m=20$ goods, where the value of each good $j\in[20]$ is precisely $j$ for all agents. Then, a $1$-balanced bundle must contain a good $j \in \{20,19, 18, 17,16\}$; a $2$-balanced bundle must contain a good from $\{20,19, 18, 17,16\}$ and a good from $\{15, 14, 13, 12, 11\}$; a $3$-balanced bundle must contain, in addition to these, a good from $\{10, 9, 8, 7 ,6\}$; and so on. $\blacksquare$ \end{example} \subsection{Construction for a Single Divider} In order to prove the correctness of Algorithm \ref{alg:(L+1/2)n}, it is sufficient to prove that the threshold value $t_i = \ell$ is a \textit{reasonable threshold} (see Definition \ref{def:reasonable}) for each agent $i$, with the additional restriction that all bundles should be $\ell$-balanced. To do this, it is sufficient to consider a single divider, Alice. We denote her normalized ordered value measure by $v$, and the sum of her $\ell-1$ least-valuable MMS bundles by $x$. We consider a particular MMS partition for Alice, and refer to the bundles in this partition as the \emph{MMS bundles}. Assume that $k$ unacceptable bundles $(B_c)_{c=1}^k$ have already been given to other agents and that all these bundles are $\ell$-balanced. 
Therefore, for each $c\in[k]$, it must be that $v(B_c)<\ell$.\footnote{ Recall that the Lone Divider algorithm allocates bundles using an envy-free matching. This means that all bundles allocated before Alice's turn are unacceptable to Alice. } We have to prove that Alice can use the remaining goods to construct $n-k$ acceptable bundles that are also $\ell$-balanced. In particular, we prove below that Alice can construct $n-k$ acceptable bundles, each of which contains exactly $1$ remaining good from each of $G^n_1,\ldots,G^n_{\ell}$. \subsection{Main Idea: Bounding the Waste} Given a bundle $B_a$, denote its \emph{waste} by $w(B_a) := v(B_a) - \ell$. This is the value the bundle contains beyond the acceptability threshold of $\ell$. Note that the waste of acceptable bundles is nonnegative and that of unacceptable bundles is negative. The total initial value for Alice is bounded below by \eqref{eq:total-value}. The total waste she can afford in her partition is therefore \begin{align*} v(M) - n\cdot \ell ~\geq~ (n-1)\cdot (\ell-x)/2 + (n -1)\cdot \ell(\ell-1-x). \end{align*} The first term implies that she can afford an average waste of $(\ell-x)/2$ for $n-1$ bundles; the second term implies that she can afford an average waste of $\ell(\ell-1-x)$ for $n-1$ bundles. \begin{example}[\textbf{Bounding the waste}] Consider Example \ref{exm:x}. In case (1), the total value is $17$ and we need $5$ bundles with a value of $3$, so the affordable waste is $2$. The average over $4$ bundles is $0.5 = (3-2)/2 + 3\cdot 0$. In case (2), the total value is $19.8$, so the affordable waste is $4.8$. The average over $4$ bundles is $1.2 = (3-1.8)/2 + 3\cdot (0.2)$. In both cases, if there are $4$ acceptable bundles with that amount of waste, then the remaining value is exactly $3$, which is sufficient for an additional acceptable bundle. $\blacksquare$ \end{example} The following lemma formalizes this observation.
\begin{lemma} \label{lem:waste} Suppose there exists a partition of $M$ into \begin{itemize} \item Some $t\geq 0$ bundles with an average waste of at most $(\ell-x)/2 + \ell(\ell-1-x)$; \item A subset $S$ of remaining goods, with $v(S)<\ell$. \end{itemize} Then $t \geq n$. \end{lemma} \begin{proof} For brevity, we denote $w := (\ell-x)/2 ~+~ \ell(\ell-1-x)$. The \emph{total} value of the bundles equals their number times their \emph{average} value. So the total value of the $t$ bundles is at most $t\cdot \ell + t\cdot w$. After adding $v(S) < \ell$ for the remaining goods, the sum equals $v(M)$, so \begin{align*} (t+1)\cdot \ell + t\cdot w &> v(M) \\ &\geq n\cdot \ell + (n-1)\cdot w \qquad \text{(by \eqref{eq:total-value}).} \end{align*} Therefore, at least one of the two terms in the top expression must be larger than the corresponding term in the bottom expression. This means that either $(t+1)\ell > n\ell$, or $t w > (n-1)w$. Both options imply $t\geq n$. \end{proof} \begin{remark} \label{rem:waste} The value of each MMS bundle is at most $\ell-x$. Therefore, any bundle that is the union of exactly $\ell$ such MMS bundles has a waste of at most $\ell(\ell-x)-\ell = \ell(\ell-1-x)$ and thus it satisfies the upper bound of Lemma \ref{lem:waste}. In particular, this is satisfied for every bundle with at most $\ell$ goods. \end{remark} Below we show how Alice can find a partition in which the average waste is upper bounded as in Lemma \ref{lem:waste}.
This partition will consist of the following bundles: \begin{itemize} \item The $k$ previously-allocated bundles, with waste $<0$ (since they are unacceptable); \item Some newly-constructed bundles with exactly $\ell$ goods and waste $\leq \ell(\ell-1-x)$ (by Remark \ref{rem:waste}); \item Some newly-constructed bundles with waste at most $(\ell-x)/2$; \item Some pairs of bundles, where the waste in one is larger than $(\ell-x)/2$ but the waste in the other is smaller than $(\ell-x)/2$, such that the average is at most $(\ell-x)/2$. \end{itemize} \subsection{Step 0: Bundles with Exactly $\ell$ Goods} Recall that before Alice's turn, some $k$ bundles have been allocated, with a value of less than $\ell$. Hence, their waste is less than $0$. Since these bundles are $\ell$-balanced, they contain exactly one good from each of $G^n_{1},\ldots,G^n_{\ell}$ (and possibly some additional goods). Therefore, exactly $n-k$ goods are available in each of $G^n_1\ldots,G^n_{\ell}$.
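As a sanity check, the balancedness condition can be verified mechanically. The following sketch assumes, as above, that $m \geq n\ell$, so that an $\ell$-balanced bundle contains exactly one good from each of $G^n_{1},\ldots,G^n_{\ell}$; the function name and the index-based encoding of goods are our own illustrative choices.

```python
def is_l_balanced(bundle, n, ell):
    # Goods are numbered 0..m-1 in descending order of value, so the
    # group G^n_l consists of the goods with indices (l-1)*n .. l*n - 1.
    # An l-balanced bundle must contain exactly one good from each of the
    # first ell groups (plus arbitrarily many goods outside them).
    for l in range(ell):
        group = set(range(l * n, (l + 1) * n))
        if len(bundle & group) != 1:
            return False
    return True

# The five-agent example above: n = 5, m = 20, and good j has value j,
# so the good with 0-based index i has value 20 - i.
assert is_l_balanced({0, 7}, n=5, ell=2)          # values 20 and 13
assert not is_l_balanced({0, 1, 12}, n=5, ell=2)  # two goods in the top group
assert is_l_balanced({4, 5, 19}, n=5, ell=2)      # extra low good is allowed
```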
Next, Alice checks all the $\ell$-tuples containing one good from each of $G^n_1\ldots,G^n_{\ell}$ (starting from the highest-valued goods in each set). If the value of such an $\ell$-tuple is at least $\ell$, then it is acceptable and its waste is at most $\ell(\ell-1-x)$ by Remark \ref{rem:waste}. After Step 0, there are some $k' \geq k$ bundles with a waste of at most $\ell(\ell-1-x)$, each of which contains exactly one good from each of $G^n_1\ldots,G^n_{\ell}$. Of these, $k$ are previously-allocated bundles, and $k'-k$ are newly-constructed acceptable bundles. In each of $G^n_1\ldots,G^n_{\ell}$, there remain exactly $n-k'$ goods. The total value of each $\ell$-tuple of remaining goods from $G^n_1\ldots,G^n_{\ell}$ is less than $\ell$. Alice will now construct from them some $n-k'$ bundles with an average waste of at most $(\ell-x)/2 + \ell(\ell-1-x)$. Lemma \ref{lem:waste} implies the following lemma on the remaining goods (the goods not in these $k'$ bundles): \begin{lemma} \label{lem:waste2} Suppose there exists a partition of the remaining goods into \begin{itemize} \item Some $t\geq 0$ bundles with an average waste of at most $(\ell-x)/2 + \ell(\ell-1-x)$; \item A subset $S$ of remaining goods, with $v(S)<\ell$. \end{itemize} Then $t \geq n-k'$. \end{lemma} Alice's strategy branches based on the number of \emph{high-value goods}. \subsection{High-value Goods} \label{sub:high-value-goods} We define high-value goods as goods $g$ with $v(g)>(\ell-x)/2$. Denote by $h$ the number of high-valued goods in $M$. Since the instance is ordered, goods $g_1,\ldots, g_h$ are high-valued. All MMS bundles are worth at most $\ell-x$, and therefore may contain at most one high-value good each. Since the number of MMS bundles is $\ell n + n/2$, we have $h\leq \ell n + n/2$. For each $j\in[h]$, we denote \begin{itemize} \item $M_j$ := the MMS bundle containing $g_j$. 
Since the value of all MMS bundles is at most $(\ell-x)$, each MMS bundle contains at most one high-value good, so the $M_j$ are all distinct. \item $R_j$ := the \emph{remainder set} of $g_j$, i.e., the set $M_j \setminus \{g_j\}$. \item $r_j := v(R_j)$. \end{itemize} We consider three cases, based on the number of high-value goods. \paragraph{\textbf{Case \#1}: $h\leq \ell n$.} This means that all high-value goods are contained in $G^n_1\cup\cdots \cup G^n_{\ell}$, so after removing $(B_c)_{c=1}^{k'}$, at most $\ell n-\ell k'$ high-value goods remain---at most $n-k'$ in each of $G^n_1\ldots,G^n_{\ell}$. Alice constructs the required bundles by \emph{bag-filling}---a common technique in MMS approximations (e.g. \citet{garg2018approximating}). \begin{itemize} \item Repeat at most $n-k'$ times: \begin{itemize} \item Initialize a bag with a good from each of $G^n_1\ldots,G^n_{\ell}$ (Step 0 guarantees that the total value of these goods is less than $\ell$). \item Fill the bag with goods from outside $G^n_1\ldots,G^n_{\ell}$. Stop when either no such goods remain, or the bag value rises above $\ell$. \end{itemize} \end{itemize} Since all goods used for filling the bag have a value of at most $(\ell-x)/2$, the waste of each constructed bundle is at most $(\ell-x)/2$. By construction, all these bundles are acceptable except the last one. Apply Lemma \ref{lem:waste2} with $S$ being the set of goods remaining in the last bag, and $t$ being the number of acceptable bundles constructed by bag-filling. The lemma implies that $t\geq n-k'$. \paragraph{\textbf{Case \#2}: $k' \geq n/2$.} Alice uses bag-filling as in Case \#1. Here, the waste per constructed bundle might be more than $(\ell-x)/2$. However, since the value of a single good is at most $\ell-x$, the waste of each constructed bundle is at most $\ell-x$. In each of the $k'$ bundles of Step 0, the waste is at most $\ell(\ell-1-x)$.
Since $k'\geq n/2 \geq n-k'$, the \emph{average} waste per bundle is at most $\ell(\ell-1-x)+(\ell-x)/2$. Hence Lemma \ref{lem:waste2} applies, and at least $n-k'$ acceptable bundles are constructed. \paragraph{\textbf{Case \#3}: $h > \ell n$ and $k'<n/2$.} In this case, Alice will have to construct some bundles with waste larger than $(\ell-x)/2$. However, she will compensate for it by constructing a similar number of bundles with waste smaller than $(\ell-x)/2$, such that the average waste per bundle remains at most $(\ell-x)/2$. After removing $(B_c)_{c=1}^{k'}$, exactly $h-\ell k'$ high-value goods remain. They can be partitioned into two subsets: \begin{itemize} \item $H_+ := $ the $(n-k')\ell$ top remaining goods --- those contained in $G^n_1\cup\cdots \cup G^n_{\ell}$; exactly $n-k'$ in each of $G^n_1\ldots,G^n_{\ell}$. By assumption, $n-k' > n/2$. \item $H_- := $ the other $h-\ell n$ high-value goods --- those not contained in $G^n_1\cup\cdots \cup G^n_{\ell}$. Since $h\leq \ell n + n/2$, the set $H_-$ contains at most $n/2$ goods. \end{itemize} This is the hardest case; to handle this case, we proceed to Step 1 below. \subsection{Step 1: Bundling High-value Goods.} Alice constructs at most $|H_-|$ bundles as follows. \begin{itemize} \item Repeat while $H_-$ is not empty: \begin{itemize}\item Initialize a bag with the lowest-valued remaining good from each of $G^n_1\ldots,G^n_{\ell}$ (Step 0 guarantees that their total value is less than $\ell$). \item Fill the bag with goods from $H_-$, until the bag value rises above $\ell$. \end{itemize} \end{itemize} Note that $|H_-| \leq n/2 < n-k' = |G^n_1| = \ldots = |G^n_{\ell}|$, so as long as $H_-$ is nonempty, each of $G^n_1\ldots,G^n_{\ell}$ is nonempty too, and Alice can indeed repeat. By construction, all filled bags except the last one are valued at least $\ell$; it remains to prove that the number of these bags is sufficiently large.
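The Step 1 loop can be sketched as follows. This is a simplified illustration in which goods are identified with their values; the function name and the sample inputs are hypothetical, and the precondition $|H_-| < |G^n_1| = \cdots = |G^n_{\ell}|$ noted above is assumed to hold.

```python
def bundle_high_value_goods(groups, h_minus, ell):
    # `groups`: one list per G^n_l, sorted in descending order of value;
    # `h_minus`: values of the high-value goods outside the top groups.
    # Seed each bag with the lowest-valued remaining good of every group,
    # then fill it from H_- until its value reaches ell.
    bags, incomplete = [], None
    while h_minus:
        bag = [g.pop() for g in groups]      # cheapest remaining seeds
        while h_minus and sum(bag) < ell:
            bag.append(h_minus.pop())
        if sum(bag) >= ell:
            bags.append(bag)                 # acceptable: value >= ell
        else:
            incomplete = bag                 # H_- ran out mid-bag
    return bags, incomplete
```

For instance, with $\ell = 2$, seeds valued $0.8$ and $0.9$ in each group, and two goods of value $0.6$ in $H_-$, the sketch produces two acceptable bags and no incomplete one.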
Let $s$ be the number of acceptable bundles constructed once $H_-$ becomes empty. Note that, in addition to these bundles, there may be an \emph{incomplete bundle} --- the last bundle, whose construction was terminated while its value was still below $\ell$. Let $P_+ \subseteq H_+$ be the set of $s \ell$ goods from $H_+$ in the acceptable bundles. Denote by $P_- \subseteq H_-$ the set of $s$ goods from $H_-$ that were added \emph{last} to these $s$ bundles (bringing their value from less-than-$\ell$ to at-least-$\ell$). Note that the waste in each of these $s$ bundles might be larger than $(\ell-x)/2$, but it is at most the value of a single good from $P_-$, so the total waste is at most $\sum_{j \in P_-} v(g_{j})$. After this step, besides the $s$ acceptable bundles, there are some $(n-k'-s)\ell$ high-value goods remaining in $H_+$ (some $\ell$ of these goods are possibly in the incomplete bundle, if such a bundle exists). Alice now has to construct from them some $n-k'-s$ acceptable bundles. \subsection{Step 2: Using the Remainders.} Alice now constructs acceptable bundles by bag-filling. She initializes each bag with the incomplete bundle from Step 1 (if any), or with an $\ell$-tuple of unused goods from $G^n_1\ldots,G^n_{\ell}$. Then, she fills the bag with low-value goods from the following \emph{remainder sets}: \begin{itemize} \item There are $\ell k$ remainder sets that correspond to the $\ell k$ goods allocated within the $k$ unacceptable bundles $(B_c)_{c=1}^k$. We denote them by $R^U_{c,1},\ldots,R^U_{c,\ell}$ and their values by $r^U_{c,1},\ldots,r^U_{c,\ell}$ for $c\in[k]$. \item There are $\ell s + s$ remainder sets that correspond to the $\ell s$ high-value goods in $P_+$ and the $s$ high-value goods in $P_-$. We denote them by $R^P_{j}$ and their values by $r^P_{j}$ for $j\in P_+ \cup P_-$. 
\end{itemize} By definition, the total value of all these remainders is: \begin{align*} \textsc{Total-Remainder-Value} &= v \left( \bigcup_{j \in P_+ \cup P_-} R^P_{j} ~~\cup~~ \bigcup_{c=1}^{k} \bigcup_{l=1}^{\ell} R^U_{c,l} \right) \\ &= \sum_{j \in P_+ \cup P_-} r^P_{j} ~~+~~ \sum_{c=1}^{k} \sum_{l=1}^{\ell} r^U_{c,l}. \end{align*} For each remainder-set $R_j$, denote by $R'_j$ the subset of $R_j$ that remains after removing the at most $k$ unacceptable bundles $(B_c)_{c=1}^k$ with more than $\ell$ goods.% \footnote{ Bundles $B_c$ with at most $\ell$ goods do not consume anything from the remainder-sets $R_j$, since they contain only high-value goods from $G^n_1,\ldots,G^n_{\ell}$. For the same reason, the $k'-k$ acceptable bundles constructed in Step 0 do not consume anything from the remainder sets. } Each unacceptable bundle $B_c$ contains, in addition to the $\ell$ high-value goods $g_{c,l}$ for $l\range{1}{\ell}$, some low-value goods with a total value of less than $\sum_{l=1}^\ell r^U_{c,l}$ (since the total value of the unacceptable bundle is less than $\ell$). Therefore, the total value of low-value goods included in these unacceptable bundles is at most $\sum_{c=1}^k \sum_{l=1}^\ell r^U_{c,l}$ (equality holding iff $k=0$). Therefore, the total remaining value satisfies \begin{align} \notag \textsc{Total-Remaining-Value} &= v \left( \bigcup_{j \in P_+ \cup P_-} R'^P_{j} ~~\cup~~ \bigcup_{c=1}^{k} \bigcup_{l=1}^{\ell} R'^U_{c,l} \right) \\ \notag &\geq \left( \sum_{j \in P_+ \cup P_-} r^P_{j} ~~+~~ \sum_{c=1}^{k} \sum_{l=1}^{\ell} r^U_{c,l} \right) - \sum_{c=1}^{k} \sum_{l=1}^{\ell} r^U_{c,l} \\ &= \sum_{j \in P_+ \cup P_-} r^P_{j} . \label{eq:total-value-remainders} \end{align} The bag-filling proceeds as follows. \begin{enumerate} \item Initialize $a := 1$. \item Initialize a bag with either the incomplete bundle from Step 1 (if any), or some $\ell$ unused top goods.
We denote the $\ell$ top goods used for initializing bag $a$ by $g_{j[a,1]}\in G^n_1,\ldots,g_{j[a,\ell]}\in G^n_{\ell}$. \item Add to the bag the remainder-sets $R'^P_{j}$ and $R'^U_{c,l}$ in an arbitrary order. Stop when either no such remainder-sets remain, or the bag value rises above $\ell$. \item If there are still some unused remainder-sets and high-value goods, let $a := a+1$ and go back to Step 2. \end{enumerate} The bag-filling stops when either there are no more high-value goods, or no more remainder-sets. In the former case, Alice has all $n-k'$ required bundles ($s$ from Step 1 and $n-k'-s$ from Step 2), and the construction is done. We now analyze the latter case. By construction, we go to the next bag only after the current bag reaches a value of at least $\ell$. Therefore, all bags except the last one are valued at least $\ell$. Our goal now is to prove that the number of these ``all bags except the last one'' is sufficiently large. Let $t$ be the number of bundles constructed with a value of at least $\ell$. For each $a\in[t]$, the $a$-th bag contains the high-value goods $g_{j[a,1]},\ldots,g_{j[a,\ell]}$ and some remainder-sets. How many remainder-sets does it need? Suppose it contains remainder-sets with a total value of $\sum_{l=1}^\ell r_{j[a,l]}$. Then, the total bundle value is $\sum_{l=1}^\ell v(M_{j[a,l]})$. By assumption, the total value of every $\ell$ MMS bundles is at least $\ell$, so the bundle value is at least $\ell$. Therefore, to make bundle $a$ acceptable, it is sufficient to add to it a value of $\sum_{l=1}^\ell r_{j[a,l]}$. Denote by $j[a,*]$ the index of the last remainder-set added to bag $a$ (bringing its value from less-than-$\ell$ to at-least-$\ell$). The total value of remainder-sets in the bag is thus less than $r_{j[a,*]} + \sum_{l=1}^\ell r_{j[a,l]}$.
The total value of remainder-sets in the unfilled $(t+1)$-th bag is less than $\sum_{l=1}^\ell r_{j[t+1,l]}$, where $j[t+1,1],\ldots,j[t+1,\ell]$ are indices of some remaining high-value goods. Therefore, the total value of remainder-sets in all $t+1$ bags together satisfies \begin{align} \notag v \left( \bigcup_{j \in P_+ \cup P_-} R'^P_{j} ~~\cup~~ \bigcup_{c=1}^{k} \bigcup_{l=1}^{\ell} R'^U_{c,l} \right) &~~<~~ \sum_{a=1}^t \left( r_{j[a,*]} + \sum_{l=1}^\ell r_{j[a,l]} \right) ~~+~~ \left( \sum_{l=1}^\ell r_{j[t+1,l]} \right) \\ \label{eq:bagvalue} &= \left(\sum_{a=1}^{t+1} \sum_{l=1}^\ell r_{j[a,l]} \right) ~~+~~ \left(\sum_{a=1}^t r_{j[a,*]} \right). \end{align} Combining \eqref{eq:total-value-remainders} and \eqref{eq:bagvalue} gives \begin{align} \left(\sum_{a=1}^{t+1} \sum_{l=1}^\ell r_{j[a,l]} \right) + \left(\sum_{a=1}^t r_{j[a,*]} \right) > \sum_{j \in P_+ \cup P_-} r^P_{j} . \end{align} On the left-hand side there are $\ell(t+1)+t = (\ell+1)t+\ell$ terms, while on the right-hand side there are $(\ell+1)s$ terms --- $\ell+1$ for each bundle constructed in Step 1. We now show that each term on the left-hand side is equal to or smaller than a unique term on the right-hand side. Since the left-hand side is overall larger than the right-hand side, it must contain more terms, that is, $(\ell+1)t+\ell > (\ell+1)s$. This implies that $t\geq s$, i.e., Alice has successfully constructed from the remainder-sets some $s$ acceptable bundles. \begin{itemize} \item Consider first the $\ell (t+1)$ terms $r_{j[a,l]}$, and compare them to $r^P_{j}$ for $j \in P_+$. Since the bundles in Step 1 were constructed in ascending order of value, starting at the lowest-valued available goods in each of $G^n_1\ldots,G^n_{\ell}$, every index $j[a,l]$ is smaller than any index $j \in P_+ \cap G^n_l$. Therefore, every term $r_{j[a,l]}$ is smaller than some unique term $r^P_{j}$ for $j \in P_+ \cap G^n_l$, for every $l\in[\ell]$.
\item Consider now the $t$ terms $r_{j[a,*]}$, and compare them to $r^P_{j}$ for $j \in P_-$. Each of the indices $j[a,*]$ is an index of some unique remainder-set, so it is either equal to some unique index $j\in P_+\cup P_-$, or to some unique index $_{c,l}$ (the index of some remainder-set $R^U_{c,l}$ of an unacceptable bundle $B_c$). All indices $_{c,l}$ are in $\{1,\ldots, \ell n\}$, so they are smaller than the indices $j \in P_-$. Therefore, every $r_{j[a,*]}$ is either equal to or smaller than some unique term $r^P_{j}$ for $j\in P_-$. \end{itemize} So Alice has $s$ new acceptable bundles. The waste of each of these is at most $r_{j[a,*]}$, which --- as mentioned above --- is equal to or smaller than some unique term $r^P_{j}$ for $j\in P_-$. Therefore, the total waste of all these $s$ bundles is at most the following sum of $s$ terms: $\sum_{j \in P_-} r^P_{j}$. Recall that the waste of each of the $s$ acceptable bundles from Step 1 was at most $v(g_{j})$ for some $j\in P_-$. Therefore, the total waste of the $2 s$ acceptable bundles constructed so far is at most \begin{align*} &\sum_{j \in P_-} r^P_{j} + \sum_{j \in P_-} v(g_{j}) \\ =& \sum_{j \in P_-} (r^P_{j} + v(g_{j})) \\ =& \sum_{j \in P_-} v(M_{j}) \\ \leq & \sum_{j \in P_-} (\ell-x) && \text{by the normalization (Section~\ref{sub:normalize-goods})} \\ =& |P_-|\cdot (\ell-x) \\ =& s\cdot (\ell-x). \end{align*} Therefore, the average waste per bundle is at most $s(\ell-x)/(2s) = (\ell-x)/2$. \subsection{Step 3: Plain Bag-filling.} At this stage, there are no more high-value goods outside $H_+$. Therefore, Alice can construct the remaining bundles by plain bag-filling, initializing each bag with some $\ell$-tuple of unused goods remaining in $H_+$, and filling it with some low-value goods outside $H_+$. Since the waste in each bundle is at most $(\ell-x)/2$, Lemma \ref{lem:waste2} implies that the total number of constructed bundles is at least $n-k'$.
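The plain bag-filling of Step 3 admits an equally short sketch (again identifying goods with their values; the function name, seeds, and filler values below are hypothetical):

```python
def plain_bag_filling(seeds, low_value_goods, ell):
    # `seeds`: one ell-tuple of top-good values per bag, each summing to
    # less than ell; `low_value_goods`: values of goods outside H_+.
    # Fill each seeded bag until its value reaches ell; the overshoot
    # (waste) is then at most the value of the last good added.
    bags, pool = [], sorted(low_value_goods)
    for seed in seeds:
        bag = list(seed)
        while pool and sum(bag) < ell:
            bag.append(pool.pop())
        if sum(bag) >= ell:
            bags.append(bag)
    return bags, pool                        # pool = leftover low-value goods
```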
This completes the proof that $\ell$ is a reasonable threshold for Algorithm \ref{alg:(L+1/2)n}. Therefore, the algorithm finds the allocation promised in Theorem \ref{thm:l-out-of-d-existence}. \subsection{Limits of Algorithm \ref{alg:(L+1/2)n}} To illustrate the limitation of Algorithm \ref{alg:(L+1/2)n}, we show that it cannot guarantee $\ell$-out-of-$((\ell+\frac{1}{2}) n -2)$ MMS. For simplicity we assume that $n$ is even, so that $(\ell+\frac{1}{2})n$ is an integer. \begin{example}[\textbf{Tight bound for our technique}] \label{exm:lone-divider-2} Suppose that in the first iteration all agents except the divider have the following MMS bundles: \begin{itemize} \item $\ell n-1$ bundles are made of two goods with values $1-\varepsilon$ and $\varepsilon$. \item One bundle is made of two goods with values $1-\ell n\varepsilon$ and $\ell n\varepsilon$. \item $n/2-2$ bundles are made of two goods with values $1/2,1/2$. \end{itemize} So their $\ell$-out-of-$(\ell n + n/2-2)$ MMS equals $\ell$.
However, it is possible that the first divider takes an unacceptable bundle containing $\ell-1$ goods of value $1-\varepsilon$, the good of value $1-\ell n\varepsilon$, and the $\ell n-1$ goods of value $\varepsilon$. Note that this bundle is $\ell$-balanced. All remaining goods have a value of less than $1$, so an acceptable bundle requires at least $\ell+1$ goods. However, the number of remaining goods is only $\ell n - \ell + 1 + n - 4 = (\ell+1)(n-1) - 2 $: $\ell n-\ell$ goods of value $1-\varepsilon$, one good of value $\ell n\varepsilon$ and $n-4$ goods of value $1/2$. Hence, at most $n-2$ acceptable bundles can be constructed. $\blacksquare$ \end{example} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.7] \mmsbundle{0}{6}{$1-\varepsilon$ $\bullet$}{6.5}{$\varepsilon$ $\bullet$} \draw [dotted] (3,0) rectangle +(2,6) node[pos=.5] {\shortstack{$...$\\$11$\\bundles\\$...$}}; \draw [dotted] (3,0) rectangle +(2,6) rectangle +(0,6.5) node[pos=.5] {...$\varepsilon$... $\bullet$}; \mmsbundle{6}{6}{$1-\varepsilon$}{6.5}{$\varepsilon$ $\bullet$} \mmsbundle{9}{4.5}{\small$1-12 \varepsilon$ $\bullet$}{6.5}{$12 \varepsilon$} \mmsbundle{12}{3.25}{$1/2$}{6.5}{$1/2$} \end{tikzpicture} \end{center} \caption{ \label{fig:lone-divider-2} An illustration of the goods' values in Example \ref{exm:lone-divider-2}, for $n=6$ and $\ell=2$. Here, $(\ell+1/2)n-2 = 13$. The $2$-out-of-$13$ MMS is $2$. Each rectangle represents an MMS bundle containing two goods. The first divider takes the goods marked by a bullet. Note that this is a 2-balanced bundle. The next divider cannot construct $5$ bundles of value 2 from the remaining goods. } \end{figure} \section{Ordinal Approximation for Goods in Polynomial Time} \label{sec:goods-poly} Algorithm \ref{alg:(L+1/2)n} guarantees that each agent receives an $\ell$-out-of-$d$ MMS allocation for $d \geq \floor{ (\ell+\frac{1}{2})n}$. 
However, the algorithm requires exact MMS values to determine whether a given bundle is acceptable to each agent. Since computing an exact MMS value for each agent is NP-hard, Algorithm \ref{alg:(L+1/2)n} does not run in polynomial time, even for the case of $\ell = 1$. The objective of this section is to develop polynomial-time approximation algorithms for computing $\ell$-out-of-$d$ MMS allocations. We utilize optimization techniques used in the \emph{bin covering} problem. This problem was presented by \citet{assmann1984dual} as a dual of the more famous \emph{bin packing} problem. In the bin covering problem, the goal is to fill bins with items of different sizes, such that the sum of sizes in each bin is at least $1$, and subject to this, the number of bins is maximized. This problem is NP-hard, but several approximation algorithms are known. These approximation algorithms typically accept a bin-covering instance $I$ as an input and fill at least $a\cdot (OPT(I) - b)$ bins, where $a<1$ and $b>0$ are constants, and $OPT(I)$ is the maximum possible number of bins in $I$. Such an algorithm can be used directly to find an ordinal approximation of an MMS allocation when all agents have \emph{identical} valuations. Our challenge is to adapt these algorithms to agents with \emph{different} valuations. \subsection{The case when $\ell=1$} \label{sub:l=1} \begin{algorithm}[t] \caption{ \label{alg:3n/2-polytime} Bidirectional bag-filling } \begin{algorithmic}[1] \REQUIRE An instance $\ins{N, M, V}$ and threshold values $(t_i)_{i=1}^n$. \ENSURE At most $n$ subsets $A_i$ satisfying $v_{i}(A_{i}) \geq t_{i}$. \STATE Order the instance in \textbf{descending order} of value as in Section \ref{sub:ordering}, so that for each agent $i$, $v_i(g_1)\geq \cdots \geq v_i(g_m)$. \FOR{$k = 1,2,\ldots$:} \STATE \label{step:new-bag} Initialize a bag with the good $g_k$.
\STATE Add to the bag zero or more remaining goods in \textbf{ascending order} of value, until at least one agent $i$ values the bag at least $t_i$. \STATE Give the goods in the bag to an arbitrary agent $i$ who values it at least $t_i$. \STATE If every remaining agent $i$ values the remaining goods at less than $t_i$, stop. \ENDFOR \end{algorithmic} \end{algorithm} For the case when $\ell=1$, we adapt the algorithm of \citet{csirik1999two}, which finds a covering with at least $\frac{2}{3}\cdot (OPT(I)-1)$ bins (an approximation with $a=\frac{2}{3}$ and $b=1$). Algorithm \ref{alg:3n/2-polytime} generalizes the aforementioned algorithm to MMS allocation of goods. Thus, the algorithm of \citet{csirik1999two} corresponds to a special case of Algorithm \ref{alg:3n/2-polytime} wherein \begin{itemize} \item All agents have the same $v_i$ (describing the item sizes); and \item All agents have the same $t_i$ (describing the bin size).\footnote{ There is a minor difference: we initialize the first bag with only a single good from the left ($g_1$) before filling it with goods from the right ($g_m,g_{m-1},\ldots$). In contrast, \citet{csirik1999two} fill the first bag with several goods from the left ($g_1,g_2,\ldots$ while its value is less than the bin size), and only then start filling it with goods from the right. However, this difference is not substantial: their proof of the approximation ratio assumes only that each bin has at least one good from the left and one good from the right, so the same proof holds for our variant. } \end{itemize} For this case, we have the following lemma: \begin{lemma}[Lemma 4 of \citet{csirik1999two}] \label{lem:csirik} When all agents have the same valuation $v$ and the same threshold $t$, Algorithm \ref{alg:3n/2-polytime} allocates at least $\frac{2}{3}(OPT(v,t)-1)$ bundles, where $OPT(v,t)$ is the maximum number of bundles of value at least $t$ that can be filled.
\end{lemma} Note that Algorithm \ref{alg:3n/2-polytime} works for any selection of the threshold values $t_i$, but if the thresholds are too high, it might allocate fewer than $n$ bundles. Our challenge now is to compute thresholds for which $n$ bundles are allocated. To compute a threshold for agent $i$, we simulate Algorithm \ref{alg:3n/2-polytime} using $n$ clones of $i$, that is, $n$ agents with valuation $v_i$. We look for the largest threshold for which this simulation allocates at least $n$ bundles. \begin{definition} \label{def:bbfs} The \emph{1-out-of-$n$ bidirectional-bag-filling-share of agent $i$}, denoted $\BBFSA{n}_i$, is the largest value $t_i$ for which Algorithm \ref{alg:3n/2-polytime} allocates at least $n$ bundles when executed with $n$ agents with identical valuation $v_i$ and identical threshold $t_i$. \end{definition} The BBFS of agent $i$ can be computed using binary search, up to a precision $\varepsilon$, where $\varepsilon$ is the smallest difference between values that is allowed by their binary representation. The following lemma relates the BBFS to the MMS. \begin{lemma} \label{lem:bbfs} For any integer $n\geq 1$ and agent $i\in[n]$, \begin{align*} \BBFSA{n}_i \geq \mms{i}{1}{\ceil{\frac{3}{2}n}}{M}. \end{align*} \end{lemma} \begin{proof} Let $t_i := \mms{i}{1}{\ceil{\frac{3}{2}n}}{M}$. By definition of MMS, there is a partition of $M$ into $\ceil{\frac{3}{2}n}$ bundles of value at least $t_i$. By Lemma \ref{lem:csirik}, the Bidirectional-Bag-Filling algorithm with valuation $v_i$ and bin-size $t_i$ fills at least $\frac{2}{3}(\ceil{\frac{3}{2}n}-1)$ bundles, which means at least $n$ bundles since the number of bundles is an integer. By definition of the BBFS, since Algorithm \ref{alg:3n/2-polytime} allocates at least $n$ bundles with threshold $t_i$, we have $t_i\leq \BBFSA{n}_i$.
\end{proof} We define an allocation as \emph{BBFS-fair} if it allocates to each agent $i\in[n]$ a bundle with a value of at least $\BBFSA{n}_i$. Lemma \ref{lem:bbfs} indicates that a BBFS-fair allocation is also 1-out-of-$\ceil{3n/2}$ MMS-fair, though the BBFS may be larger than 1-out-of-$\ceil{3n/2}$ MMS. \begin{lemma} \label{lem:bbsfair} A BBFS-fair allocation always exists, and can be found in time polynomial in the length of the binary representation of the problem. \end{lemma} \begin{proof} We first show that, when Algorithm \ref{alg:3n/2-polytime} is executed with threshold values $t_i = \BBFSA{n}_i$ for all $i\in[n]$, it allocates $n$ bundles. For each $j\geq 1$, denote: \begin{itemize} \item $A_j$ --- the bundle allocated at iteration $j$ of Algorithm \ref{alg:3n/2-polytime} with the true (different) valuations $v_1,\ldots,v_n$.
\item $B^i_j$ --- the bundle allocated at iteration $j$ of agent $i$'s successful simulation with threshold $t_i = \BBFSA{n}_i$. \end{itemize} We claim that, for every $k\geq 1$, the set of goods allocated before step $k$ by the global algorithm is a subset of the goods allocated before step $k$ during agent $i$'s simulation. That is, $\bigcup_{j=1}^{k-1} A_j \subseteq \bigcup_{j=1}^{k-1} B^i_j$ for any remaining agent $i$. The claim is proved by induction on $k$. The base is $k=1$. Before step $1$, both $\bigcup_{j=1}^{k-1} A_j$ and $\bigcup_{j=1}^{k-1} B^i_j$ are empty, so the claim holds vacuously. Let $k\geq 1$. We assume the claim is true before iteration $k$, and prove that it is still true after iteration $k$. The initial goods $g_1,\ldots,g_k$ are obviously allocated in both runs. In agent $i$'s simulation, some additional goods $g_m,\ldots,g_s$ are allocated, for some $s\leq m$; in the global run, goods $g_m,\ldots,g_r$ are allocated, for some $r\leq m$. The induction assumption implies that $r \geq s$ (weakly fewer goods are allocated in the global run). In iteration $k$, both runs initialize the bag with the same good $g_k$. In $i$'s simulation, the bag is then filled with goods $g_{s-1},\ldots,g_{s'}$ for some $s'<s$, such that $v_i(\{g_k,g_{s-1},\ldots,g_{s'}\})\geq t_i$. In the global run, the bag is filled with goods $g_{r-1},\ldots,g_{r'}$ for some $r'<r$. It is sufficient to prove that $r'\geq s'$. Indeed, if no agent takes the bag until it contains the goods $\{g_k,g_{r-1},\ldots,g_{s'}\}$, then because $r\geq s$, the bag value is at least $v_i(\{g_k,g_{s-1},\ldots,g_{s'}\})\geq t_i$. Therefore, it is acceptable to agent $i$, so the algorithm allocates it (either to $i$ or to another agent). This completes the proof of the claim. The claim implies that, as long as $k < n$, the goods in $B^i_n$ are still available. This means that agent $i$ values the remaining goods at least $t_i$.
This is true for every remaining agent; therefore, the global algorithm continues to run until it allocates $n$ bundles. The binary search and the simulation runs for each agent $i$ take time polynomial in the length of the binary representation of the valuations. Once the thresholds are computed, Algorithm \ref{alg:3n/2-polytime} obviously runs in polynomial time. This completes the proof of the lemma. \end{proof} Lemmas \ref{lem:bbfs} and \ref{lem:bbsfair} together imply: \begin{theorem} \label{thm:3n-2poly} There is an algorithm that computes a 1-out-of-$\ceil{3n/2}$ MMS allocation in time polynomial in the length of the binary representation of the problem. \qed \end{theorem} \begin{example}[\textbf{Computing thresholds}] Consider a setting with $m=6$ goods and $n=3$ agents with the following valuations: \begin{center} \begin{tabular}{c|cccccc|c} & $g_{1}$ & $g_{2}$ & $g_{3}$ & $g_{4}$ & $g_{5}$ & $g_{6}$ & $t_{i}$ \\\hline $v_{1}$ & \circled{10} & 8 & 6 & 3 & 2 & 1 & 9\\ $v_{2}$ & 12 & \circled{7} & 6 & 5 & \circled{4} & \circled{2} & 11\\ $v_{3}$ & 9 & 8 & \circled{7} & \circled{4} & 3 & 1 & 10 \end{tabular} \end{center} Each agent computes a threshold via binary search on $[0, v_{i}(M)]$ for the maximum value $t_{i}$ such that the simulation of Algorithm \ref{alg:3n/2-polytime} yields three bundles. For agent $1$, the simulation with $t_{1} = 9$ yields bundles $\{g_{1}\}, \{g_{2}, g_{6}\}, \{g_{3}, g_{4}, g_{5}\}$. The corresponding simulation with $t_{1} = 10$ yields bundles $\{g_{1}\}, \{g_{2}, g_{5}, g_{6}\}$, with $\{g_{3}, g_{4}\}$ insufficient to fill a third bundle. After all thresholds have been determined from simulations, Algorithm \ref{alg:3n/2-polytime} computes the circled allocation. Theorem \ref{thm:3n-2poly} guarantees that this allocation is at least $1$-out-of-$5$ MMS. Here the circled allocation satisfies $1$-out-of-$3$ MMS.
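These simulation runs can be reproduced with a short sketch of the identical-valuations case of the bidirectional bag-filling (a hypothetical helper, not part of the algorithm's pseudocode; goods are indexed from $0$ in descending order of value, so good $g_j$ has index $j-1$):

```python
def bidirectional_bag_fill(values, t, n):
    # Simulate bidirectional bag-filling with n identical agents:
    # `values` lists the goods in descending order; each bag is seeded
    # with the next good from the left and filled from the right until
    # its value reaches the threshold t. Returns the filled bags
    # (as lists of 0-based good indices).
    bags = []
    left, right = 0, len(values) - 1
    while len(bags) < n and left <= right:
        bag, total = [left], values[left]
        left += 1
        while total < t and right >= left:
            bag.append(right)
            total += values[right]
            right -= 1
        if total < t:                        # remaining goods insufficient
            break
        bags.append(bag)
    return bags

v1 = [10, 8, 6, 3, 2, 1]                     # agent 1's valuation
assert bidirectional_bag_fill(v1, 9, 3) == [[0], [1, 5], [2, 4, 3]]
assert len(bidirectional_bag_fill(v1, 10, 3)) == 2   # third bag fails
```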
$\blacksquare$ \end{example} \begin{remark} When $n$ is odd, there is a gap of $1$ between the existence result for 1-out-of-$\floor{3n/2}$ MMS, and the polynomial-time computation result for 1-out-of-$\ceil{3n/2}$ MMS. \end{remark} In experimental simulations on instances generated uniformly at random, Algorithm \ref{alg:3n/2-polytime} significantly outperforms the theoretical guarantee of 1-out-of-$\ceil{3n/2}$ MMS. In Appendix~\ref{sec:bidi-vs-uni}, we provide detailed experiments and compare the bidirectional bag-filling algorithm with other bag-filling methods (e.g. the unidirectional bag-filling algorithm). \subsection{The case when $\ell>1$} \label{sub:l>1} So far, we have not been able to adapt Algorithm \ref{alg:3n/2-polytime} to find an $\ell$-out-of-$\floor{(\ell+1/2)n}$ MMS allocation for $\ell \geq 2$. Below, we present a weaker approximation to MMS, based on the following lemma. \begin{lemma} \label{lem:L-out-of-D} For all integers $d > \ell\geq 1$: \begin{align*} \ell\cdot \MMSA{1}{d}(M) \leq \MMSA{\ell}{d}(M) \leq \ell\cdot \MMSA{1}{(d-\ell+1)}(M). \end{align*} \end{lemma} \begin{proof} For the leftmost inequality, let $A_1,\ldots,A_d$ be the optimal $d$-partition in the definition of $\MMSA{1}{d}(M)$, and suppose w.l.o.g. that the bundles are ordered by ascending value. Then: \begin{align*} \ell\cdot \MMSA{1}{d}(M) &= \ell\cdot v_i(A_1) \\ &\leq v_i(A_1)+\cdots + v_i(A_{\ell}) \\ &\leq \MMSA{\ell}{d}(M), \end{align*} where the last inequality follows from the existence of a $d$-partition in which the $\ell$ least-valuable bundles are $A_1,\ldots,A_{\ell}$. For the rightmost inequality, let $B_1,\ldots,B_d$ be the optimal $d$-partition in the definition of $\MMSA{\ell}{d}(M)$, and suppose w.l.o.g. that the bundles are ordered by ascending value.
Then: \begin{align*} \MMSA{\ell}{d}(M) &= v_i(B_1)+\cdots + v_i(B_{\ell}) \\ &\leq \ell \cdot v_i(B_{\ell}) \\ &\leq \ell\cdot \MMSA{1}{(d-\ell+1)}(M), \end{align*} where the last inequality is proved by the partition with $(d-\ell+1)$ bundles: $B_1\cup\cdots\cup B_{\ell}, B_{\ell+1},\ldots, B_{d}$, in which the value of each bundle is at least $v_i(B_{\ell})$. \end{proof} For any positive integer $d$, we can approximate $\MMSA{1}{d}(M)$ by using an approximation algorithm for bin-covering, which we call \emph{Algorithm JS} \citep{jansen2003asymptotic}. \begin{lemma}[\citet{jansen2003asymptotic}] \label{lem:js1} For any $\varepsilon >0$, Algorithm JS runs in time $\widetilde{O}\left( \frac{1}{\varepsilon^6}m^2 + \frac{1}{\varepsilon^{8.76}} \right)$.% \footnote{ A more exact expression for the run-time is $O\left( \frac{1}{\varepsilon^5} \cdot \ln{\frac{m}{\varepsilon}} \cdot \max{(m^2,\frac{1}{\varepsilon}\ln\ln\frac{1}{\varepsilon^3})} + \frac{1}{\varepsilon^4}\mathcal{T_M}(\frac{1}{\varepsilon^2}) \right)$, where $\mathcal{T_M}(x)$ is the run-time complexity of the best available algorithm for matrix inversion, which is currently $O(x^{2.38})$. We simplified it a bit for clarity, and used $\widetilde{O}$ to hide the logarithmic factors. } If the sum of all valuations is at least $13t/\varepsilon^3$ (where $t$ is the bin size), then Algorithm JS fills at least $(1 - \varepsilon)\cdot \mathrm{OPT}(I) - 1$ bins. \end{lemma} We can choose $\varepsilon$ based on the instance, and get the following simpler guarantee. \begin{lemma} \label{lem:js2} Algorithm JS fills at least \begin{align*} \mathrm{OPT} - 2.35\cdot \mathrm{OPT}^{2/3} - 1 \end{align*} bins, and runs in time $\widetilde{O}(m^{4})$. \end{lemma} \begin{proof} If any input value is at least $t$, then it can be put in a bin of its own, and this is obviously optimal. So we can assume w.l.o.g. that all input values are smaller than $t$. Let $s$ be the sum of values, and set $\varepsilon := (13 t / s)^{1/3}$. 
The number of bins in any legal packing is at most $s/t$, so \begin{align*} \mathrm{OPT} \leq& s/t \\ t/s \leq &1/\mathrm{OPT} \\ \varepsilon \leq &(13/\mathrm{OPT})^{1/3} \\ &\approx 2.35/ \mathrm{OPT}^{1/3}. \end{align*} The $\varepsilon$ is chosen such that $s = 13t/\varepsilon^3$. So by Lemma \ref{lem:js1}, the number of bins filled by Algorithm JS is at least \begin{align*} & \mathrm{OPT} - \varepsilon\cdot \mathrm{OPT} - 1 \\ \geq& \mathrm{OPT} - 2.35\cdot \mathrm{OPT}^{2/3} - 1. \end{align*} Since by assumption each value is smaller than $t$, we have $s<m t$, so $\varepsilon > (13/m)^{1/3}$ and $1/\varepsilon \in O(m^{1/3})$. Therefore, the run-time is in \begin{align*} & \widetilde{O}\left( \frac{1}{\varepsilon^6}m^2 + \frac{1}{\varepsilon^{8.76}} \right) \\ \approx& \widetilde{O}\left( m^2\cdot m^2 + m^{2.92} \right) \\ \approx& \widetilde{O}\left(m^4\right). \qedhere \end{align*} \end{proof} Analogously to the definition of the BBF-Share (BBFS) (Definition \ref{def:bbfs}), we define the 1-out-of-$d$ \emph{JS-Share} (JSS) of each agent $i$ as the largest value $t_i$ for which Algorithm JS fills at least $d$ bins when executed with valuation $v_i$ and bin-size $t_i$. The JSS can be computed up to any desired accuracy using binary search. Clearly, $ \mms{i}{1}{d}{M} \geq \JSSA{d}_i $. Analogously to Lemma \ref{lem:bbfs}, we have \begin{lemma} \label{lem:jss} For any integer $d\geq 1$ and agent $i\in[n]$: \begin{align*} \JSSA{d}_i \geq \mms{i}{1}{\ceil{d+15\cdot d^{2/3}+1}}{M}. \end{align*} \end{lemma} \begin{proof} Let $t_i := \mms{i}{1}{\ceil{d+15\cdot d^{2/3}+1}}{M}$. By definition of MMS, there is a partition of $M$ into $\ceil{d+15\cdot d^{2/3}+1}$ bundles of size at least $t_i$.
By Lemma \ref{lem:js2}, Algorithm JS with bin-size $t_i$ fills at least \begin{align*} & (d+15\cdot d^{2/3}+1) - 2.35\cdot (d+15\cdot d^{2/3}+1)^{2/3} - 1 \\ \geq & (d+15\cdot d^{2/3}+1) - 2.35\cdot (16 d)^{2/3} - 1 \\ \geq & (d+15\cdot d^{2/3}+1) - 14.92 d^{2/3} - 1 \\ \geq & d \end{align*} bins. By definition of the JSS, since Algorithm JS allocates at least $d$ bins with size $t_i$, we have $t_i\leq \JSSA{d}_i$. \end{proof} Now, each agent can participate in the algorithm of Section \ref{sec:goods-lone} without computing the exact MMS value. Given an integer $\ell\geq 2$, let $d := \floor{(\ell+\frac{1}{2})n}$. Each agent $i$ can compute the value of $\JSSA{d}_i$ using binary search. The search also finds a partition of $M$ into $d$ bundles, each of which has a value of at least $\JSSA{d}_i$. The agent can now use this partition for scaling: the valuations are scaled such that the value of each bundle in the partition is exactly $1$. Algorithm \ref{alg:(L+1/2)n} guarantees to each agent a bundle with value at least $\ell$, which is at least $\ell\cdot \JSSA{d}_i$. By Lemma \ref{lem:jss}, this value is at least $\ell\cdot\mms{i}{1}{\ceil{d+15d^{2/3}+1}}{M}$. By the right-hand side of Lemma \ref{lem:L-out-of-D}, it is at least $\mms{i}{\ell}{\ceil{d+15d^{2/3}+\ell}}{M}$. Thus, we have proved the following theorem. \begin{theorem} \label{thm:goods-approx} Let $\ell\geq 2$ be an integer, and $d := \floor{(\ell+\frac{1}{2})n}$. It is possible to compute an allocation in which the value of each agent $i$ is at least \begin{align*} \mms{i}{\ell}{\ceil{d+15d^{2/3}+\ell}}{M}, \end{align*} in time $\widetilde{O}\left( n\cdot m^{4} \right)$. \end{theorem} \section{Future Directions} The existence of tighter ordinal approximations that improve $\ell$-out-of-$\floor{(\ell+1/2)n}$ MMS allocations is a compelling open problem.
Specifically, one can generalize the open problem raised by \citet{budish2011combinatorial} and ask, for any $\ell\geq 1$ and $n\geq 2$: does there exist an $\ell$-out-of-$(\ell n + 1)$ MMS allocation? For the polynomial-time algorithm when $\ell = 1$, we extend the bin covering algorithm of \citet{csirik1999two}. We believe that the interaction between this problem and fair allocation of goods may be of independent interest, as it may open new ways for developing improved algorithms. For example, \citet{csirik1999two} also present a $3/4$ approximation algorithm for bin covering, which may potentially be adapted to yield a $1$-out-of-$\ceil{4n/3}$ MMS allocation. Similarly, \citet{csirik2001better} and \citet{jansen2003asymptotic} present polynomial-time approximation schemes for bin covering, which may yield even better MMS approximations in future work. Finally, it is interesting to study ordinal maximin approximation for items with non-positive valuations (i.e. chores), as well as for mixtures of goods and chores. Techniques for allocation of goods do not immediately translate to achieving approximations of MMS when allocating chores, so new techniques are needed \citep{hosseini2022ordinal}. \section*{Acknowledgments} Hadi Hosseini acknowledges support from NSF IIS grants \#2052488 and \#2107173. Erel Segal-Halevi is supported by the Israel Science Foundation (grant no. 712/20). We are grateful to Thomas Rothvoss, Ariel Procaccia, Joshua Lin, Inuyasha Yagami, Chandra Chekuri, Neal Young, and the anonymous referees of EC 2021 and JAIR for their valuable feedback. \newpage
\section{Introduction and Setup} \label{1s} In recent years, there appeared a number of publications describing the influence of quantum effects on phase transitions in quantum anharmonic crystals, where the results were obtained by means of path integrals, see \cite{[AKK2],[AKKR],[AKKRPRL],[AKKRNN],[KK],Koz,[Koz4],[Minlos],[RZ],[VZ]}. Their common point is a statement that the phase transition (understood in one or another way) is suppressed if the model parameters obey a sufficient condition (more or less explicitly formulated). The existence of phase transitions in quantum crystals of certain types was proven earlier, see \cite{[BaK],[BaK0],[DLP],[Kondr],[Pastur]}, also mostly by means of path integral methods. At the same time, by now only two works, \cite{[RevMF]} and \cite{[KoT]}, have appeared where both these phenomena are studied in one and the same context. In the latter paper, a more complete and extended version of the theory of interacting systems of quantum anharmonic oscillators based on path integral methods has been elaborated, see also \cite{[AKPR],[AKPR1],[AKPR2]} for more recent development, and \cite{[KoT1]} where the results of \cite{[KoT]} were announced. The aim of the present article is to refine and extend the previous results and to develop a unified and more or less complete theory of phase transitions and quantum effects in quantum anharmonic crystals, also in the light of the results of \cite{[KoT1],[KoT]}. Note that, in particular, with the help of these results we prove here phase transitions in quantum crystals with asymmetric anharmonic potentials\footnote{This result was announced in \cite{[KaK]}.}, which could hardly be done by other methods. The quantum crystal studied in this article is a system of interacting quantum anharmonic oscillators indexed by the elements of a crystal lattice $\mathbb{L}$, which for simplicity we assume to be a $d$-dimensional simple cubic lattice $\mathbb{Z}^d$.
The quantum anharmonic oscillator is a mathematical model of a quantum particle moving in a potential field with possibly multiple minima, which has a sufficient growth at infinity and hence localizes the particle. Most of the models of interacting quantum oscillators are related to solids such as ionic crystals containing localized light particles oscillating in the field created by heavy ionic complexes, or quantum crystals consisting entirely of such particles. For instance, a potential field with multiple minima is seen by a helium atom located at the center of the crystal cell in bcc helium, see page 11 in \cite{[Koeler]}. The same situation exists in other quantum crystals, ${\rm He}$, ${\rm H}_2$ and to some extent ${\rm Ne}$. An example of the ionic crystal with localized quantum particles moving in a double-well potential field is a ${\rm KDP}$-type ferroelectric with hydrogen bonds, in which such particles are protons or deuterons performing one-dimensional oscillations along the bonds, see \cite{[Blinc],[S],[Tokunaga],[Vaks]}. It is believed that in such substances phase transitions are triggered by the ordering of protons. Another relevant physical object of this kind is a system of apex oxygen ions in YBaCuO-type high-temperature superconductors, see \cite{[Frick],[KMueller],[StasT],[StasT1]}. Quantum anharmonic oscillators are also used in models describing interaction of vibrating quantum particles with a radiation (photon) field, see \cite{[Hainzl],[HHS],[Osada]}, or strong electron-electron correlations caused by the interaction of electrons with vibrating ions, see \cite{[Freericks],[FreerL]}, responsible for such phenomena as superconductivity, charge density waves etc. Finally, we mention systems of light atoms, like ${\rm Li}$, doped into ionic crystals, like ${\rm KCl}$. The quantum particles in this system are not necessarily regularly distributed. For more information on this subject, we refer to the survey \cite{[Horner]}.
To be more concrete we assume that our model describes an ionic crystal and thus adopt the ferroelectric terminology. In the corresponding physical substances, the quantum particles carry electric charge; hence, the displacement of the particle from its equilibrium point produces dipole moment. Therefore, the main contribution into the two-particle interaction is proportional to the product of the displacements of particles and is of long range. According to these arguments our model is described by the following formal Hamiltonian \begin{equation} \label{U1} H = - \frac{1}{2} \sum_{\ell,\ell'} J_{\ell\ell'} \cdot(q_{\ell} , q_{\ell'}) + \sum_{\ell} H_{\ell}. \end{equation} Here the sums run through the lattice $\mathbb{L}=\mathbb{Z}^d$, $d\in \mathbb{N}$, the displacement, $q_\ell$, of the oscillator attached to a given $\ell\in \mathbb{L}$ is a $\nu$-dimensional vector. In general, we do not assume that the interaction intensities $J_{\ell\ell'}$ have finite range. By $(\cdot, \cdot)$ and $|\cdot|$ we denote the scalar product and norm in $\mathbb{R}^\nu$, $\mathbb{R}^d$. The one-site Hamiltonian \begin{equation} \label{U2} H_\ell = H_{\ell}^{\rm har} + V_\ell (q_\ell ) \ \stackrel{\rm def}{=}\ \frac{1}{2m} |p_\ell|^2 + \frac{a}{2} |q_\ell|^2 + V_\ell (q_\ell ), \quad a>0, \end{equation} describes an isolated quantum anharmonic oscillator. Its part $H^{\rm har}_\ell$ corresponds to a $\nu$-dimensional harmonic oscillator of rigidity $a$. The mass parameter $m$ includes Planck's constant, that is, \begin{equation} \label{In} m = m_{\rm ph}/\hbar^2, \end{equation} where $m_{\rm ph}$ is the physical mass of the particle. Therefore, the commutation relation for the components of the momentum and displacement takes the form \begin{equation} \label{cr} p_\ell^{(j)} q_{\ell'}^{(j')} - q_{\ell'}^{(j')} p_\ell^{(j)} = - \imath \delta_{\ell \ell'} \delta_{jj'}, \quad j, j' = 1, \dots , \nu. 
\end{equation} For a detailed discussion on how to derive a model like (\ref{U1}), (\ref{U2}) from physical models of concrete substances, we refer the reader to the survey \cite{[S]}. The theory of phase transitions is one of the most important and spectacular parts of equilibrium statistical mechanics. For classical lattice models, a complete description of the equilibrium thermodynamic properties is given by constructing their Gibbs states as probability measures on appropriate configuration spaces. Usually, this is done in the Dobrushin-Lanford-Ruelle (DLR) approach which is now well-elaborated, see Georgii's monograph \cite{[Ge]} and the references therein. In general, the quantum case does not permit such a universal description. For some systems with bounded one-site Hamiltonians, e.g., quantum spin models, the Gibbs states are defined as positive normalized functionals on algebras of quasi-local observables obeying the condition of equilibrium between the dynamic and thermodynamic behavior of the model (KMS condition), see \cite{[BrR]}. However, this algebraic way cannot be applied to the model (\ref{U1}), (\ref{U2}) since the construction of its dynamics in the whole crystal $\mathbb{L}$ is beyond the technical possibilities available at this time. In 1975, an approach employing path integral methods to describe thermodynamic properties of models like (\ref{U1}), (\ref{U2}) was initiated in \cite{[AH-K]}. Its main idea was to pass from real to imaginary values of time, as was done in Euclidean quantum field theory, see \cite{[GJ],[Sim1]}, and thereby to describe the dynamics of the model in terms of stochastic processes. Afterwards, this approach, also called Euclidean, has been developed in a number of works. Its latest and most general version is presented in \cite{[KoT1],[KoT]}, where the reader can also find an extensive bibliography on this subject. The methods developed in these works will be extensively used in the present study.
Phase transitions are very important phenomena in the substances modeled by the Hamiltonians (\ref{U1}), (\ref{U2}). According to their commonly adopted physical interpretation, at low temperatures the oscillations of the particles become strongly correlated, which produces macroscopic ordering. The mathematical theory of phase transitions in models like (\ref{U1}), (\ref{U2}) is based on quantum versions of the method of infrared estimates developed in \cite{[FSS]}. The first publication where the infrared estimates were applied to quantum spin models seems to be \cite{[DLS]}. After certain modifications this method, combined with path integral techniques, was applied in \cite{[BaK],[BaK0],[DLP],[Kondr],[Pastur]} to particular versions of our model. The main characteristic feature of these versions was a symmetry, broken by the phase transition. In classical systems, ordering is achieved in competition with thermal fluctuations only. However, in quantum systems quantum effects play a significant disordering role, especially at low temperatures. This role was first discussed in \cite{[9]}. Later on, a number of publications dedicated to the study of quantum effects in such systems appeared, see e.g., \cite{[Minlos],[VZ]} and the references therein. For better understanding, illuminating exactly solvable models of systems of interacting quantum anharmonic oscillators were introduced and studied, see \cite{[Plakida1],[STZ],[VZ1],[VZ2]}. In these works, the quantity $m^{-1} = \hbar^2 / m_{\rm ph}$ was used as a parameter describing the rate of quantum effects. Such effects became strong in the small mass limit, which was in agreement with the experimental data, e.g., on the isotopic effect in the ferroelectrics with hydrogen bonds, see \cite{[Blinc],[Vaks]}, see also \cite{[KMueller]} for the data on the isotopic effect in the YBaCuO-type high-temperature superconductors.
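The disordering role of the mass parameter can be illustrated on a single site: for a one-dimensional particle in a double-well potential, the tunneling splitting between the two lowest levels of a Hamiltonian of the form (\ref{U2}) grows as $m$ decreases. A small numeric sketch (ours, not from the paper; the grid size, the box $[-3,3]$, and the illustrative potential $W(x)=x^4-2x^2$ standing in for $\frac{a}{2}x^2+V(x)$ are assumptions), which diagonalizes a finite-difference discretization by Sturm-sequence bisection for symmetric tridiagonal matrices:

```python
import math

def lowest_eigenvalues(m, k=2, N=400, L=3.0):
    """k lowest eigenvalues of H = -(1/2m) d^2/dx^2 + W(x) on [-L, L]
    with W(x) = x^4 - 2x^2 (a double well), by central finite differences
    with Dirichlet boundary conditions."""
    h = 2 * L / (N + 1)
    x = [-L + (i + 1) * h for i in range(N)]
    kin = 1.0 / (2 * m * h * h)
    d = [2 * kin + xi ** 4 - 2 * xi ** 2 for xi in x]  # diagonal entries
    e = [-kin] * (N - 1)                               # off-diagonal entries

    def count_below(lam):
        # Sturm sequence: number of eigenvalues strictly below lam
        count, q = 0, 1.0
        for i in range(N):
            q = d[i] - lam - (e[i - 1] ** 2 / q if i > 0 else 0.0)
            if q == 0.0:
                q = 1e-300
            if q < 0:
                count += 1
        return count

    # Gershgorin bounds enclose the whole spectrum
    lo = min(d) - 2 * kin - 1.0
    hi = max(d) + 2 * kin + 1.0
    eigs = []
    for j in range(1, k + 1):
        a, b = lo, hi
        for _ in range(80):
            mid = 0.5 * (a + b)
            if count_below(mid) >= j:
                b = mid
            else:
                a = mid
        eigs.append(0.5 * (a + b))
    return eigs

def splitting(m):
    """Tunneling splitting E_1 - E_0 for mass m."""
    e0, e1 = lowest_eigenvalues(m)
    return e1 - e0
```

Comparing `splitting` for a light and a heavy mass (say $m=1$ versus $m=5$) shows the splitting shrinking as the mass grows, in line with quantum effects being strong in the small-mass limit.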
However, in those works no other quantum effects, e.g., those connected with special properties of the anharmonic potentials, were discussed. At the same time, experimental data, see e.g., the table on page 11 in the monograph \cite{[Blinc]} or the article \cite{[12]}, show that high hydrostatic pressure applied to KDP-type ferroelectrics prevents them from ordering. It is believed that the pressure shortens the hydrogen bonds and thereby changes the anharmonic potential. This makes the tunneling motion of the quantum particles more intensive, which is equivalent to diminishing the particle mass. In \cite{[AKKR],[AKKRPRL],[AKKRNN]}, a theory of such quantum effects in the model (\ref{U1}), (\ref{U2}), which explains both mentioned mechanisms, was built up. Its main conclusion is that the quantum dynamical properties, which depend on the mass $m$, the interaction intensities $J_{\ell\ell'}$, and the anharmonic potentials $V_\ell$, can be such that the model is stable with respect to phase transitions at all temperatures. As was mentioned above, the aim of this article is to present a unified description of phase transitions and quantum stabilization in the model (\ref{U1}), (\ref{U2}), mostly by means of methods developed in \cite{[KoT1],[KoT]}. We also give here complete proofs of a number of statements announced in our previous publications. The article is organized as follows. In Section \ref{2s}, we briefly describe those elements of the theory developed in \cite{[KoT1],[KoT]} which we then apply in the subsequent sections. In Section \ref{3s}, we present the theory of phase transitions in the model (\ref{U1}), (\ref{U2}). We begin by introducing three definitions of a phase transition in this model and study the relationships between them. Then we develop a version of the method of infrared estimates adapted to our model, which is more transparent and appropriate than the one employed in \cite{[RevMF]}.
Afterwards, we obtain sufficient conditions for the phase transitions to occur in a number of versions of the model (\ref{U1}), (\ref{U2}). This also includes the case of asymmetric anharmonic potentials $V_\ell$, which was never studied before. At the end of the section we make some comments on the results obtained and compare them with similar results known in the literature. Section \ref{4s} is dedicated to the study of quantum stabilization, which we understand as the suppression of phase transitions by quantum effects. Here we discuss the problem of stability of quantum crystals and the ways of its description. In particular, we introduce a parameter (quantum rigidity), responsible for the stability, and prove a number of statements about its properties. Then we show that under the stability condition which we introduce here the correlations decay `in a proper way', which means the absence of phase transitions. The relationship between quantum stabilization and phase transitions is also analyzed. In the simplest case, where the model is translation invariant, scalar ($\nu =1$), and with the interaction of nearest neighbor type, this relation looks as follows. The key parameter is $ 8 d m J \vartheta_*^2$, where $d$ is the lattice dimension, $J>0$ is the interaction intensity, and $\vartheta_*>0$ is determined by the anharmonic potential $V$ (the steeper $V$, the smaller $\vartheta_*$). Then the quantum stabilization condition (respectively, the phase transition condition) is $ 8 d m J \vartheta_*^2 <1$, see (\ref{De20}), (respectively, $ 8 d m J \vartheta_*^2 > \phi(d)$, see (\ref{rp52}) and (\ref{DeE})). Here $\phi$ is a function, such that $\phi(d) >1$ and $\phi(d)\rightarrow 1$ as $d \rightarrow + \infty$. We conclude the section by commenting on the results obtained therein.
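In this nearest-neighbor scalar case, the two displayed conditions can equivalently be read as bounds on the mass (a plain algebraic reformulation of the inequalities above, not an additional result):

```latex
% Quantum stabilization: 8 d m J \vartheta_*^2 < 1, i.e. m < m_*, with
%   m_* := \frac{1}{8 d J \vartheta_*^2},
% while a phase transition occurs once 8 d m J \vartheta_*^2 > \phi(d),
% i.e. m > \phi(d) m_*.  Since \phi(d) > 1 and \phi(d) \to 1 as
% d \to +\infty, the two mass thresholds squeeze together in high dimensions.
\[
  m < \frac{1}{8\, d\, J\, \vartheta_*^{2}}
  \quad\Longrightarrow\quad \text{no phase transition},
  \qquad
  m > \frac{\phi(d)}{8\, d\, J\, \vartheta_*^{2}}
  \quad\Longrightarrow\quad \text{phase transition}.
\]
```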
\section{Euclidean Gibbs States} \label{2s} The main element of the Euclidean approach is the description of the equilibrium thermodynamic properties of the model (\ref{U1}), (\ref{U2}) by means of Euclidean Gibbs states, which are probability measures on certain configuration spaces. In this section, we briefly describe the main elements of this approach which are then used in the subsequent parts of the article. For more details, we refer to \cite{[KoT]}. \subsection{Local Gibbs states} \label{2.1ss} Let us begin by specifying the properties of the model described by the Hamiltonian (\ref{U1}). The general assumptions regarding the interaction intensities $J_{\ell \ell'}$ are \begin{equation} \label{a1} J_{\ell\ell'}= J_{\ell'\ell} \geq 0, \quad J_{\ell\ell}=0, \quad \hat{J}_0 \ \stackrel{\rm def}{=} \ \sup_{\ell} \sum_{\ell'} J_{\ell \ell'} < \infty. \end{equation} In view of the first of these properties the model is {\it ferroelectric}. Regarding the anharmonic potentials we assume that each $V_\ell:\mathbb{R}^\nu \rightarrow \mathbb{R}$ is a continuous function, which obeys \begin{equation} \label{a2} A_V |x|^{2r} + B_V \leq V_\ell (x) \leq V(x), \end{equation} with a continuous function $V$ and constants $r>1$, $A_V>0$, $B_V\in \mathbb{R}$. In certain cases, we shall include an external field term in the form \begin{equation} \label{4} V_\ell (x) = V_\ell^{0} (x) - (h , x), \ \ \quad h \in \mathbb{R}^\nu, \end{equation} where $V_\ell^{0}$ is an appropriate function. \begin{definition} \label{1df} The model is translation invariant if $V_\ell = V$ for all $\ell$, and the interaction intensities $J_{\ell \ell'}$ are invariant under the translations of $\mathbb{L}$. The model is rotation invariant if for every orthogonal transformation $U\in O(\nu)$ and every $\ell$, $V_\ell (U x) = V_\ell(x)$. The interaction has finite range if there exists $R>0$ such that $J_{\ell\ell'} = 0$ whenever $|\ell-\ell'|>R$. 
\end{definition} \noindent If $V_\ell \equiv 0$ for all $\ell$, one gets a quantum harmonic crystal. It is stable if $\hat{J}_0 < a$, see Remark \ref{apprm} below. By $\Lambda$ we denote subsets of the lattice $\mathbb{L}$; we write $\Lambda \Subset \mathbb{L}$ if $\Lambda$ is non-void and finite. For such $\Lambda$, by $|\Lambda|$ we denote its cardinality. A sequence of subsets $\Lambda \Subset \mathbb{L}$ is called {\it cofinal} if it is ordered by inclusion and exhausts the lattice $\mathbb{L}$. If we say that something holds for all $\ell$, we mean it holds for all $\ell \in \mathbb{L}$; sums like $\sum_{\ell}$ mean $\sum_{\ell \in \mathbb{L}}$. We also use the notations $\mathbb{R}^+ = [0, +\infty)$ and $\mathbb{N}_0 = \mathbb{N}\cup \{0\}$, $\mathbb{N}$ being the set of positive integers. Given $\Lambda \Subset \mathbb{L}$, the local Hamiltonian of the model is \begin{equation} \label{a3} H_\Lambda = - \frac{1}{2} \sum_{\ell ,\ell'\in \Lambda} J_{\ell \ell'} \cdot (q_\ell , q_{\ell'}) + \sum_{\ell \in \Lambda}H_\ell, \end{equation} which by the assumptions made above is a self-adjoint and lower bounded operator in the physical Hilbert space $L^2 (\mathbb{R}^{\nu |\Lambda|})$. For every $\beta= 1/ k_{\rm B} T$, $T$ being absolute temperature, the local Gibbs state in $\Lambda\Subset \mathbb{L}$ is \begin{equation} \label{a4} \varrho_\Lambda (A) = {\rm trace}[A\exp(- \beta H_\Lambda)]/ Z_\Lambda, \quad A\in \mathfrak{C}_\Lambda, \end{equation} where \begin{equation} \label{a5} Z_\Lambda = {\rm trace}[\exp(- \beta H_\Lambda)]< \infty \end{equation} is the partition function, and $\mathfrak{C}_\Lambda$ is the algebra of all bounded linear operators on $L^2 (\mathbb{R}^{\nu |\Lambda|})$. Note that the adjective {\it local} will always stand for a property related to a certain $\Lambda \Subset \mathbb{L}$, whereas {\it global} will characterize the whole infinite system.
The dynamics of the subsystem located in $\Lambda$ is described by the time automorphisms \begin{equation} \label{a6} \mathfrak{C}_\Lambda \ni A \mapsto \mathfrak{a}_t^\Lambda (A) = \exp(\imath t H_\Lambda) A \exp(-\imath t H_\Lambda), \end{equation} where $t\in \mathbb{R}$ is time. Given $n \in \mathbb{N}$ and $A_1 , \dots , A_n\in \mathfrak{C}_\Lambda$, the corresponding {\it Green function} is \begin{equation} \label{a7} G^\Lambda_{A_1 , \dots A_n} (t_1 , \dots , t_n) = \varrho_\Lambda \left[ \mathfrak{a}^\Lambda_{t_1} (A_1) \cdots \mathfrak{a}^\Lambda_{t_n}(A_n) \right], \end{equation} which is a complex valued function on $\mathbb{R}^n$. Each such function can be looked upon, see \cite{[AH-K],[RevMF]}, as the restriction of a function $G^\Lambda_{A_1 , \dots A_n}$ analytic in the domain \begin{equation} \label{a8} \mathcal{D}^n_\beta = \{ (z_1 , \dots , z_n)\in \mathbb{C}^n \ | \ 0 < \Im (z_1) < \cdots < \Im(z_n ) < \beta\}, \end{equation} and continuous on its closure. The corresponding statement is known as {\it the multiple-time analyticity theorem}, see \cite{[AH-K],[RevMF]}, as well as \cite{[KL]} for a more general consideration. For every $n \in \mathbb{N}$, the subset \begin{equation} \label{a88} \{ (z_1 , \dots , z_n)\in \mathcal{D}^n_\beta \ | \ \Re(z_1) = \cdots = \Re(z_n) = 0\} \end{equation} is an inner uniqueness set for functions analytic in $\mathcal{D}^n_\beta$, see pages 101 and 352 in \cite{[Shabat]}. This means that two such functions which coincide on this set should coincide everywhere on ${\mathcal{D}}^n_\beta$. For a bounded continuous function $F:\mathbb{R}^{\nu|\Lambda|}\rightarrow \mathbb{C}$, the corresponding multiplication operator $F\in \mathfrak{C}_\Lambda$ acts as follows \[ (F\psi )(x) = F(x) \psi (x), \qquad \psi \in L^2 (\mathbb{R}^{\nu|\Lambda|}). \] Let $\mathfrak{F}_\Lambda \subset \mathfrak{C}_\Lambda$ be the set of all such operators.
One can prove (the density theorem, see \cite{[Koz5],[Koz6]}) that the linear span of the products \[ \mathfrak{a}^\Lambda_{t_1} (F_1) \cdots \mathfrak{a}^\Lambda_{t_n} (F_n), \] with all possible choices of $n\in \mathbb{N}$, $t_1 , \dots , t_n \in \mathbb{R}$, and $F_1 , \dots , F_n\in \mathfrak{F}_\Lambda$, is dense in $\mathfrak{C}_\Lambda$ in the $\sigma$-weak topology in which the state (\ref{a4}) is continuous as a linear functional. Thus, the latter is determined by the set of Green functions $G^\Lambda_{F_1 , \dots F_n}$ with $n\in \mathbb{N}$ and $F_1 , \dots , F_n\in \mathfrak{F}_\Lambda$. The restrictions of the Green functions $G^\Lambda_{F_1 , \dots F_n}$ to the imaginary-time sets (\ref{a88}) are called {\it Matsubara functions}. For \begin{equation} \label{a10} 0 \leq \tau_1 \leq \tau_2 \leq \cdots \leq \tau_n \leq \beta, \end{equation} they are \begin{equation} \label{a9} \Gamma^\Lambda_{F_1, \dots, F_n} (\tau_1 , \dots , \tau_n) = G^\Lambda _{F_1, \dots, F_n} (\imath \tau_1 , \dots , \imath \tau_n). \end{equation} Since (\ref{a88}) is an inner uniqueness set, the collection of the Matsubara functions (\ref{a9}) with all possible choices of $n\in \mathbb{N}$ and $F_1 , \dots , F_n\in \mathfrak{F}_\Lambda$ determines the state (\ref{a4}). The extensions of the functions (\ref{a9}) to $[0, \beta]^n$ are defined as \[ \Gamma^\Lambda_{F_1, \dots, F_n} (\tau_1 , \dots , \tau_n) = \Gamma^\Lambda_{F_{\sigma(1)}, \dots, F_{\sigma(n)}} (\tau_{\sigma(1)} , \dots , \tau_{\sigma(n)}),\] where $\sigma$ is the permutation such that $\tau_{\sigma(1)}\leq \tau_{\sigma(2)}\leq \cdots \leq \tau_{\sigma(n)}$. One can show that for every $\theta \in [0, \beta]$, \begin{equation} \label{a11} \Gamma^\Lambda_{F_1, \dots, F_n} (\tau_1 +\theta , \dots , \tau_n+\theta) = \Gamma^\Lambda_{F_1, \dots, F_n} (\tau_1 , \dots , \tau_n), \end{equation} where addition is modulo $\beta$.
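The periodicity (\ref{a11}) rests on the cyclicity of the trace in the trace representation of the Matsubara functions, cf. (\ref{a12}). A toy numeric check (ours, not the model itself: a symmetric $2\times 2$ matrix stands in for $H_\Lambda$, diagonal matrices stand in for the multiplication operators $F_j$, and all helper names are assumptions):

```python
import math

def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def expm_sym(H, t):
    """exp(-t*H) for a symmetric 2x2 matrix H, via eigendecomposition."""
    (h11, h12), (_, h22) = H
    mean = 0.5 * (h11 + h22)
    r = math.hypot(0.5 * (h11 - h22), h12)
    lam = (mean - r, mean + r)
    if h12 == 0:  # already diagonal
        V = [[1.0, 0.0], [0.0, 1.0]] if h11 <= h22 else [[0.0, 1.0], [1.0, 0.0]]
    else:
        v = (h12, lam[0] - h11)  # eigenvector for lam[0]
        n = math.hypot(*v)
        V = [[v[0] / n, -v[1] / n], [v[1] / n, v[0] / n]]
    D = [[math.exp(-t * lam[0]), 0.0], [0.0, math.exp(-t * lam[1])]]
    Vt = [[V[0][0], V[1][0]], [V[0][1], V[1][1]]]
    return mat_mul(mat_mul(V, D), Vt)

def matsubara(H, Fs, taus, beta):
    """Gamma_{F_1..F_n}(tau_1..tau_n) following the trace formula (a12);
    unordered arguments are handled by the permutation extension."""
    order = sorted(range(len(taus)), key=lambda i: taus[i])
    ts = [taus[i] for i in order] + [beta + taus[order[0]]]
    prod = [[1.0, 0.0], [0.0, 1.0]]
    for j, i in enumerate(order):
        prod = mat_mul(prod, Fs[i])
        prod = mat_mul(prod, expm_sym(H, ts[j + 1] - ts[j]))
    Z = trace(expm_sym(H, beta))
    return trace(prod) / Z
```

Shifting all three times by $\theta=0.4$ modulo $\beta=1$ wraps one argument around the circle, yet the value of `matsubara` is unchanged, exactly as (\ref{a11}) asserts.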
\subsection{Path spaces} \label{2.2ss} By (\ref{a7}), the Matsubara function (\ref{a9}) can be written as \begin{eqnarray} & & \label{a12} \Gamma^\Lambda_{F_1, \dots, F_n} (\tau_1 , \dots , \tau_n) = \\ & & \quad \ \ = {\rm trace}\left[F_1 e^{-(\tau_2 - \tau_1)H_\Lambda}F_2 e^{-(\tau_3 - \tau_2)H_\Lambda} \cdots F_n e^{-(\tau_{n+1} - \tau_n)H_\Lambda} \right] /Z_\Lambda, \nonumber \end{eqnarray} where $\tau_{n+1} = \beta + \tau_1$ and the arguments obey (\ref{a10}). This expression can be rewritten in an integral form \begin{equation} \label{a13} \Gamma^\Lambda_{F_1, \dots, F_n} (\tau_1 , \dots , \tau_n) = \int_{\Omega_\Lambda} F_1 (\omega_\Lambda (\tau_1)) \cdots F_n (\omega_\Lambda (\tau_n)) \nu_\Lambda ({\rm d}\omega_\Lambda), \end{equation} which is the main point of the Euclidean approach. Here $\nu_\Lambda$ is a probability measure on the path space $\Omega_\Lambda$ which we introduce now. The main single-site path space is the space of continuous periodic paths (temperature loops) \begin{equation} \label{a14} C_\beta = \{ \phi\in C([0, \beta]\rightarrow \mathbb{R}^\nu) \ | \ \phi(0) = \phi(\beta)\}. \end{equation} It is a Banach space with the usual sup-norm $\|\cdot \|_{C_\beta}$. For an appropriate $\phi \in C_\beta$, we set \begin{equation} \label{a15} K_\sigma (\phi) = \beta^\sigma \cdot \sup_{\tau , \tau' \in [0, \beta], \ \tau \neq \tau'} \frac{|\phi(\tau) - \phi(\tau')|}{|\tau - \tau'|^\sigma_\beta}, \quad \sigma >0, \end{equation} where \begin{equation} \label{a16} |\tau - \tau'|_\beta = \min\left\{ |\tau - \tau'|; \beta - |\tau - \tau'|\right\} \end{equation} is the periodic distance on the circle $S_\beta \sim[0, \beta]$.
Then the set of H\"older-continuous periodic functions, \begin{equation} \label{17} C^\sigma_\beta = \{ \phi \in C_\beta \ | \ K_\sigma(\phi) < \infty\}, \end{equation} can be equipped with the norm \begin{equation} \label{a18} \|\phi \|_{C_\beta^\sigma} = |\phi(0) | + K_\sigma(\phi), \end{equation} which turns it into a Banach space. Along with the spaces $C_\beta$, $C_\beta^\sigma$, we shall use the Hilbert space $L^2_\beta = L^2 (S_\beta \rightarrow \mathbb{R}^\nu, {\rm d}\tau)$, equipped with the inner product $(\cdot, \cdot)_{L^2_\beta}$ and norm $\|\cdot\|_{L^2_\beta}$. By $\mathcal{B}(C_\beta)$, $\mathcal{B}(L^2_\beta)$ we denote the corresponding Borel $\sigma$-algebras. In a standard way, see page 21 of \cite{[Part]} and the corresponding discussion in \cite{[KoT]}, it follows that \begin{equation} \label{a19} C_\beta \in \mathcal{B}(L^2_\beta) \quad \ \ {\rm and} \ \ \ \mathcal{B}(C_\beta) = \mathcal{B}(L^2_\beta) \cap C_\beta. \end{equation} Given $\Lambda \subseteq \mathbb{L}$, we set \begin{eqnarray} \label{a20} & & \Omega_\Lambda = \{\omega_\Lambda = (\omega_\ell)_{\ell \in \Lambda} \ | \ \omega_\ell \in C_\beta\}, \\ & & \Omega = \Omega_{\mathbb{L}} = \{\omega = (\omega_\ell)_{\ell \in \mathbb{L}} \ | \ \omega_\ell \in C_\beta\}. \nonumber \end{eqnarray} These path spaces are equipped with the product topology and with the Borel $\sigma$-algebras $\mathcal{B}(\Omega_\Lambda)$. Thereby, each $\Omega_\Lambda$ is a complete separable metric space, called {\it Polish space}, its elements are called {\it configurations in} $\Lambda$. For $\Lambda \subset \Lambda'$, the juxtaposition $\omega_{\Lambda'} = \omega_\Lambda \times \omega_{\Lambda' \setminus \Lambda}$ defines an embedding $\Omega_\Lambda \hookrightarrow \Omega_{\Lambda'}$ by identifying $\omega_\Lambda \in \Omega_\Lambda$ with $\omega_\Lambda \times 0_{\Lambda' \setminus \Lambda} \in \Omega_{\Lambda'}$. 
By $\mathcal{P}(\Omega_\Lambda)$, $\mathcal{P}(\Omega)$ we denote the sets of all probability measures on $(\Omega_\Lambda, \mathcal{B}(\Omega_\Lambda))$, $(\Omega, \mathcal{B}(\Omega))$ respectively. \subsection{Local Euclidean Gibbs measures} \label{2.3ss} Now we construct the measure $\nu_\Lambda$ which appears in (\ref{a13}). A single harmonic oscillator is described by the Hamiltonian, cf.\ (\ref{U2}), \begin{equation} \label{a21} H_\ell^{\rm har} = - \frac{1}{2m} \sum_{j=1}^\nu \left( \frac{\partial}{\partial x_\ell^{(j)}}\right)^2 + \frac{a}{2} |x_\ell|^2. \end{equation} It is a self-adjoint operator in the space $L^2(\mathbb{R}^\nu)$, the properties of which are well known. The operator semigroup $\exp(-\tau H_\ell^{\rm har})$, $\tau \in S_\beta$, defines a $\beta$-periodic Markov process, see \cite{[KLP]}. In quantum statistical mechanics, it first appeared in R. H{\o}egh-Krohn's paper \cite{[HK]}. The canonical realization of this process on $(C_\beta, \mathcal{B}(C_\beta))$ is described by the path measure which can be introduced as follows. In the space $L^2_\beta$, we define the self-adjoint Laplace-Beltrami type operator \begin{equation} \label{a22} A = \left( - m \frac{{\rm d}^2}{{\rm d}\tau^2} + a \right)\otimes \mathbf{I}, \end{equation} where $\mathbf{I}$ is the identity operator in $\mathbb{R}^\nu$. Its spectrum consists of the eigenvalues \begin{equation} \label{a23} \lambda_l = m (2 \pi l/ \beta)^2 + a, \quad \ \ l\in \mathbb{Z}. \end{equation} Therefore, the inverse $A^{-1}$ is a trace-class operator on $L^2_\beta$ and the Fourier transform \begin{equation} \label{a24} \int_{L^2_\beta} \exp\left[ \imath (\psi , \phi)_{L^2_\beta}\right]\chi({\rm d}\phi) = \exp\left\{ - \frac{1}{2} (A^{-1} \psi, \psi)_{L^2_\beta}\right\} \end{equation} defines a zero mean Gaussian measure $\chi$ on $(L^2_\beta, \mathcal{B}(L^2_\beta))$.
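For the reader's orientation, we note that the covariance of $\chi$ admits a closed form, obtained from the eigenvalues (\ref{a23}) by summing the corresponding Fourier series (here $\Omega_* \stackrel{\rm def}{=} \sqrt{a/m}$ is an auxiliary notation used in this remark only). For $\tau , \tau' \in [0, \beta]$ and $j , j' = 1, \dots , \nu$, \begin{equation*} \int_{L^2_\beta} \phi^{(j)}(\tau)\, \phi^{(j')}(\tau')\, \chi({\rm d}\phi) = \frac{\delta_{jj'}}{\beta} \sum_{l\in \mathbb{Z}} \frac{e^{2\pi \imath l (\tau - \tau')/\beta}}{m(2\pi l/\beta)^2 + a} = \delta_{jj'} \cdot \frac{\cosh\left[ \Omega_* \left( \beta/2 - |\tau - \tau'|\right)\right]}{2 m \Omega_* \sinh\left( \beta \Omega_* /2\right)}. \end{equation*} In particular, for $\tau = \tau'$ each component has variance $(2m\Omega_*)^{-1}\coth (\beta \Omega_*/2)$, which is the familiar mean square displacement of the quantum harmonic oscillator.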
Employing the eigenvalues (\ref{a23}) one can show that, for any $p\in \mathbb{N}$, \begin{eqnarray} \label{a24a} \int_{C_\beta} \left\vert\omega (\tau) - \omega(\tau') \right\vert^{2p} \chi({\rm d} \omega )\leq \frac{\Gamma (\nu/2 + p)}{\Gamma (\nu/2)} \left( \frac{2}{m}\right)^p \cdot |\tau - \tau'|_\beta^p. \end{eqnarray} Therefrom, by Kolmogorov's lemma (see page 43 of \cite{[Sim2]}), $\chi$-almost every path is H\"older-continuous of any order $\sigma < (p-1)/(2p)$; letting $p \rightarrow +\infty$, it follows that \begin{equation} \label{a25} \chi (C_\beta^\sigma) = 1, \quad \ \ {\rm for} \ {\rm all} \ \ \sigma \in (0, 1/2). \end{equation} Thereby, $\chi (C_\beta) = 1$; hence, with the help of (\ref{a19}) we redefine $\chi$ as a measure on $(C_\beta, \mathcal{B}(C_\beta))$, possessing the property (\ref{a25}). We shall call it {\it H{\o}egh-Krohn's measure}. An account of the properties of $\chi$ can be found in \cite{[RevMF]}. Here we present the following two of them. The first property is obtained directly from Fernique's theorem (see Theorem 1.3.24 in \cite{[DS]}). \begin{proposition} [Fernique] \label{a1pn} For every $\sigma \in (0, 1/2)$, there exists $\lambda_\sigma >0$, which can be estimated explicitly, such that \begin{equation} \label{a26} \int_{L^2_\beta} \exp \left( \lambda_\sigma \|\phi\|^2_{C^\sigma_\beta} \right) \chi({\rm d}\phi ) < \infty. \end{equation} \end{proposition} The second property follows from the estimate (\ref{a24a}) by the Garsia-Rodemich-Rumsey lemma, see \cite{[Garsia]}. For fixed $\sigma \in (0, 1/2)$, we set \begin{equation} \label{a26a} \Xi_{\vartheta} (\omega ) = \sup_{\tau , \tau': \ 0< |\tau - \tau'|_\beta < \vartheta}\left\{\frac{|\omega (\tau ) - \omega (\tau')|^{2}}{|\tau - \tau'|_\beta^{2 \sigma } }\right\}, \ \ \quad \vartheta \in (0, \beta/2), \quad \omega \in C_\beta^\sigma. \end{equation} One can show that, for each $\sigma$ and $\vartheta$, $\Xi_{\vartheta}$ can be extended to a measurable map $\Xi_{\vartheta} : C_\beta \rightarrow [0, +\infty]$.
\begin{proposition}[Garsia-Rodemich-Rumsey estimate] \label{grrpn} Given $\sigma\in(0, 1/2)$, let $p\in \mathbb{N}$ be such that $(p-1)/2p > \sigma$. Then \begin{equation} \label{a26b} \int_{C_\beta} \Xi^p_{\vartheta} (\omega ) \chi({\rm d}\omega) \leq D(\sigma, p , \nu) m^{-p} \vartheta^{p(1-2\sigma)}, \end{equation} where $m$ is the mass (\ref{In}) and \begin{equation} \label{a26c} D(\sigma, p , \nu) = \frac{2^{3(2p+1)}(1 + 1 / \sigma p)^{2p}}{(p - 1 - 2 \sigma p)( p - 2 \sigma p)}\cdot \frac{2^p \Gamma (\nu/2 + 1)}{\Gamma (\nu/2)}. \end{equation} \end{proposition} The H{\o}egh-Krohn measure is the local Euclidean Gibbs measure for a single harmonic oscillator. The measure $\nu_\Lambda\in \mathcal{P}(\Omega_\Lambda)$, which is the Euclidean Gibbs measure corresponding to the system of interacting anharmonic oscillators located in $\Lambda\Subset \mathbb{L}$, is defined by means of the Feynman-Kac formula as a Gibbs modification \begin{equation} \label{a27} \nu_\Lambda ({\rm d}\omega_\Lambda) = \exp\left[- I_\Lambda (\omega_\Lambda) \right]\chi_\Lambda ({\rm d}\omega_\Lambda)/N_\Lambda \end{equation} of the `free measure' \begin{equation} \label{a28} \chi_\Lambda ({\rm d}\omega_\Lambda) = \prod_{\ell \in \Lambda} \chi({\rm d}\omega_\ell). \end{equation} Here \begin{equation} \label{a29} I_\Lambda (\omega_\Lambda) = - \frac{1}{2} \sum_{\ell , \ell' \in \Lambda} J_{\ell \ell'} (\omega_\ell , \omega_{\ell'})_{L^2_\beta } + \sum_{\ell \in \Lambda}\int_0^\beta V_\ell (\omega_\ell (\tau)){\rm d}\tau \end{equation} is \emph{the energy functional} which describes the interaction of the paths $\omega_\ell$, $\ell \in \Lambda$. 
The normalizing factor \begin{equation} \label{a30} N_\Lambda = \int_{\Omega_\Lambda} \exp\left[- I_\Lambda (\omega_\Lambda) \right]\chi_\Lambda ({\rm d}\omega_\Lambda) \end{equation} is the relative partition function, whereas the Feynman-Kac representation of the partition function (\ref{a5}) is \begin{equation} \label{a300} Z_\Lambda = N_\Lambda Z_\Lambda^{\rm har} , \end{equation} where \begin{eqnarray*} Z_\Lambda^{\rm har} & \stackrel{\rm def}{=} & {\rm trace} \exp\left[ - \beta \sum_{\ell \in \Lambda}H^{\rm har}_\ell \right] \\ & = & \left\{\frac{\exp\left[ - (\beta/2) \sqrt{a/m} \right]}{1 -\exp\left( - \beta \sqrt{a/m} \right)} \right\}^{\nu |\Lambda|}. \end{eqnarray*} Now let us summarize the connections between the description of the subsystem located in $\Lambda \Subset \mathbb{L}$ in terms of the states (\ref{a4}) and of the Euclidean Gibbs measures (\ref{a27}). By the density theorem, the state $\varrho_\Lambda$ is fully determined by the Green functions (\ref{a7}) corresponding to all choices of $n\in \mathbb{N}$ and $F_1 , \dots , F_n\in \mathfrak{F}_\Lambda$. Then the multiple-time analyticity theorem leads us from the Green functions to the Matsubara functions (\ref{a9}), which then are represented as integrals over path spaces with respect to the local Euclidean Gibbs measures, see (\ref{a13}). On the other hand, these integrals taken for all possible choices of bounded continuous functions $F_1 , \dots , F_n$ fully determine the measure $\nu_\Lambda$. Thereby, we have a one-to-one correspondence between the local Gibbs states (\ref{a4}) and the states on the algebras of bounded continuous functions determined by the local Euclidean Gibbs measures (\ref{a27}). Our next aim is to extend this approach to the global states. To this end, we first make precise the definition of the path spaces for infinite $\Lambda$, e.g., for $\Lambda = \mathbb{L}$.
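Before proceeding, let us recall for completeness how the closed-form expression for $Z_\Lambda^{\rm har}$ given above arises. For a single one-dimensional oscillator, the eigenvalues of (\ref{a21}) are $\sqrt{a/m}\, (n+1/2)$, $n \in \mathbb{N}_0$; hence, \begin{equation*} {\rm trace}\, \exp\left( - \beta H^{\rm har}_\ell \right) = \left[ \sum_{n=0}^{+\infty} e^{- \beta \sqrt{a/m}\, (n+ 1/2)}\right]^\nu = \left\{\frac{\exp\left[ - (\beta/2) \sqrt{a/m} \right]}{1 -\exp\left( - \beta \sqrt{a/m} \right)} \right\}^{\nu}, \end{equation*} and $Z_\Lambda^{\rm har}$ is the product of $|\Lambda|$ such equal factors, which yields the power $\nu |\Lambda|$.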
\subsection{Tempered configurations} \label{2.4ss} To describe the global thermodynamic properties we need the conditional distributions $\pi_\Lambda ({\rm d}\omega|\xi)$, $\Lambda \Subset \mathbb{L}$. For models with infinite-range interactions, the construction of such distributions is a nontrivial problem, which can be solved by imposing a priori restrictions on the configurations defining the corresponding conditions. In this and in the subsequent subsections, we present the construction of such distributions performed in \cite{[KoT]}. The distributions $\pi_\Lambda ({\rm d}\omega|\xi)$ are defined by means of the energy functionals $I_\Lambda (\omega|\xi)$ describing the interaction of the configuration $\omega$ with the configuration $\xi$, fixed outside of $\Lambda$. Given $\Lambda \Subset \mathbb{L}$, such a functional is \begin{equation} \label{a31} I_\Lambda (\omega|\xi) = I_\Lambda (\omega_\Lambda) - \sum_{\ell \in \Lambda , \ \ell'\in \Lambda^c}J_{\ell \ell'} (\omega_\ell , \xi_{\ell'})_{L^2_\beta}, \quad \omega \in \Omega, \end{equation} where $I_\Lambda$ is given by (\ref{a29}). Recall that $\omega = \omega_{\Lambda}\times \omega_{\Lambda^c}$; hence, \begin{equation} \label{a32} I_\Lambda (\omega|\xi) = I_\Lambda (\omega_\Lambda \times 0_{\Lambda^c} | 0_\Lambda \times \xi_{\Lambda^c}). \end{equation} The second term in (\ref{a31}) makes sense for all $\xi\in \Omega$ only if the interaction has finite range, see Definition \ref{1df}. Otherwise, one has to impose appropriate restrictions on the configurations $\xi$, such that, for all $\ell$ and $\omega \in \Omega$, \begin{equation} \label{a33} \sum_{\ell'}J_{\ell \ell'} \cdot |(\omega_\ell , \xi_{\ell'})_{L^2_\beta}| < \infty. \end{equation} These restrictions are formulated by means of special mappings (weights), which define the scale of growth of $\{\|\xi_{\ell}\|_{L^2_\beta}\}_{\ell \in \mathbb{L}}$.
Their choice depends on the asymptotic properties of $J_{\ell \ell'}$ as $|\ell - \ell'|\rightarrow +\infty$, see (\ref{a1}). If for a certain $\alpha >0$, \begin{equation} \label{a34} \sup_{\ell } \sum_{\ell'} J_{\ell \ell'} \exp (\alpha |\ell - \ell'|) < \infty, \end{equation} then the weights $\{w_\alpha (\ell, \ell')\}_{\alpha \in \mathcal{I}}$ are chosen as \begin{equation} \label{a35} w_\alpha (\ell , \ell') = \exp (-\alpha |\ell - \ell'|), \quad \ \ \mathcal{I}= (0 , \overline{\alpha}), \end{equation} where $\overline{\alpha}$ is the supremum of $\alpha>0$, for which (\ref{a34}) holds. If the latter condition does not hold for any $\alpha>0$, we assume that \begin{equation} \label{a36} \sup_{\ell } \sum_{\ell'} J_{\ell \ell'} \cdot ( 1+ |\ell - \ell'|)^{\alpha d} < \infty, \end{equation} for a certain $\alpha >1$. Then we set $\overline{\alpha}$ to be the supremum of $\alpha>1$ obeying (\ref{a36}) and \begin{equation} \label{a37} w_\alpha (\ell , \ell') = ( 1+ \varepsilon |\ell - \ell'|)^{-\alpha d}, \end{equation} where $\varepsilon >0$ is a technical parameter. In the sequel, we restrict ourselves to these two kinds of $J_{\ell\ell'}$. For more details on this item, we refer the reader to \cite{[KoT]}. Given $\alpha \in \mathcal{I}$ and $\omega \in \Omega$, we set \begin{equation} \label{a42} \|\omega \|_\alpha = \left[\sum_{\ell} \|\omega_\ell \|^2_{L^2_\beta}w_\alpha (0, \ell) \right]^{1/2}, \end{equation} and \begin{equation} \label{a43} \Omega_\alpha = \{ \omega \in \Omega \ | \ \|\omega\|_\alpha < \infty\}. \end{equation} We endow $\Omega_\alpha$ with the metric \begin{equation} \label{a44} \rho_\alpha (\omega , \omega') = \|\omega - \omega'\|_\alpha + \sum_{\ell} 2^{-|\ell|} \frac{\|\omega_\ell - \omega'_\ell\|_{C_\beta}}{ 1 +\|\omega_\ell - \omega'_\ell\|_{C_\beta}}, \end{equation} which turns it into a Polish space.
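To illustrate the role of these weights, let us sketch why every $\xi \in \Omega_\alpha$ satisfies (\ref{a33}) in the exponential case (\ref{a34}), (\ref{a35}); this is a routine estimate and not a substitute for the complete arguments of \cite{[KoT]}. By the Cauchy-Schwarz inequality and the triangle inequality $|\ell'| \leq |\ell| + |\ell - \ell'|$, \begin{eqnarray*} \sum_{\ell'} J_{\ell \ell'}\cdot |(\omega_\ell , \xi_{\ell'})_{L^2_\beta}| & \leq & \|\omega_\ell \|_{L^2_\beta} \left[ \sum_{\ell'} J^2_{\ell \ell'} e^{\alpha |\ell - \ell'|}\right]^{1/2} \left[ \sum_{\ell'} \|\xi_{\ell'}\|^2_{L^2_\beta} e^{-\alpha |\ell - \ell'|}\right]^{1/2} \\ & \leq & \|\omega_\ell \|_{L^2_\beta} \left[ \sum_{\ell'} J^2_{\ell \ell'} e^{\alpha |\ell - \ell'|}\right]^{1/2} e^{\alpha |\ell|/2} \cdot \|\xi \|_\alpha < \infty, \end{eqnarray*} where the remaining sum is finite by (\ref{a34}), since $J_{\ell \ell'} e^{\alpha |\ell - \ell'|}$ is bounded uniformly in $\ell , \ell'$. The polynomial case (\ref{a36}), (\ref{a37}) is treated similarly.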
The set of tempered configurations is defined to be \begin{equation} \label{a45} \Omega^{\rm t} = \bigcap_{\alpha \in \mathcal{I}}\Omega_\alpha. \end{equation} We endow it with the projective limit topology, which turns it into a Polish space as well. For every $\alpha\in \mathcal{I}$, the embeddings $\Omega^{\rm t} \hookrightarrow \Omega_\alpha \hookrightarrow \Omega$ are continuous; hence, $\Omega_\alpha , \Omega^{\rm t} \in \mathcal{B}(\Omega)$ and the Borel $\sigma$-algebras $\mathcal{B}(\Omega_\alpha)$, $\mathcal{B}(\Omega^{\rm t})$ coincide with the ones induced on them by $\mathcal{B}(\Omega)$. \subsection{Local Gibbs specification} \label{2.5.ss} Let us turn to the functional (\ref{a31}). By standard methods, one proves that, for every $\alpha \in \mathcal{I}$, the map $\Omega_\alpha \times \Omega_\alpha \ni (\omega , \xi) \mapsto I_\Lambda (\omega|\xi)$ is continuous. Furthermore, for any ball $B_\alpha (R) = \{ \omega \in \Omega_\alpha \ | \ \rho_\alpha (0, \omega) < R\}$, $R>0$, one has \begin{eqnarray*} \inf_{\omega \in \Omega , \ \ \xi \in B_\alpha (R)} I_\Lambda (\omega |\xi) > - \infty, \quad \ \sup_{\omega , \xi \in B_\alpha (R)}| I_\Lambda (\omega |\xi)| < + \infty. \end{eqnarray*} Therefore, for $\Lambda \Subset \mathbb{L}$ and $\xi \in \Omega^{\rm t}$, the conditional relative partition function \begin{equation} \label{a46} N_\Lambda (\xi) = \int_{\Omega_\Lambda} \exp\left[- I_\Lambda (\omega_\Lambda \times 0_{\Lambda^c}|\xi) \right] \chi_\Lambda ({\rm d}\omega_\Lambda) \end{equation} is continuous in $\xi$. Furthermore, for any $R>0$ and $\alpha \in \mathcal{I}$, \[ \inf_{\xi \in B_\alpha (R) } N_\Lambda (\xi) >0.
\] For such $\xi$ and $\Lambda$, and for $B \in \mathcal{B}(\Omega)$, we set \begin{equation} \label{34} \pi_\Lambda (B|\xi) = \frac{1}{N_\Lambda(\xi)} \int_{{\Omega}_\Lambda}\exp\left[- I_\Lambda(\omega_\Lambda \times 0_{\Lambda^c} |\xi) \right] \mathbb{I}_B (\omega_\Lambda \times \xi_{\Lambda^c})\chi_\Lambda ({\rm d}\omega_\Lambda ), \end{equation} where $\mathbb{I}_B $ stands for the indicator of $B$. We also set \begin{equation} \label{34a} \pi_\Lambda (\cdot|\xi) \equiv 0, \quad {\rm for} \ \ \xi \in {\Omega} \setminus {\Omega}^{\rm t}. \end{equation} From these definitions one readily derives a consistency property \begin{equation} \label{35} \int_{{\Omega}} \pi_\Lambda (B|\omega) \pi_{\Lambda'} ({\rm d} \omega |\xi) = \pi_{\Lambda'} (B |\xi), \quad \Lambda \subset \Lambda', \end{equation} which holds for all $B\in \mathcal{B}({\Omega})$ and $\xi \in {\Omega}$. The local Gibbs specification is the family $\{\pi_\Lambda \}_{\Lambda \Subset \mathbb{L}}$. Each $\pi_\Lambda$ is a measure kernel, which means that, for a fixed $\xi\in \Omega$, $\pi_\Lambda(\cdot|\xi)$ is a measure on $(\Omega, \mathcal{B}(\Omega))$, which is a probability measure whenever $\xi \in \Omega^{\rm t}$. For any $B \in \mathcal{B}(\Omega)$, $\pi_\Lambda (B|\cdot)$ is $\mathcal{B}(\Omega)$-measurable. By $C_{\rm b}({\Omega}_\alpha)$ (respectively, $C_{\rm b}({\Omega}^{\rm t})$) we denote the Banach spaces of all bounded continuous functions $f:{\Omega}_\alpha \rightarrow \mathbb{R}$ (respectively, $f:{\Omega}^{\rm t} \rightarrow \mathbb{R}$) equipped with the supremum norm. For every $\alpha \in \mathcal{I}$, one has a natural embedding $C_{\rm b}({\Omega}_\alpha ) \hookrightarrow C_{\rm b}({\Omega}^{\rm t})$. Given $\alpha \in \mathcal{I}$, by $\mathcal{W}_\alpha$ we denote the usual weak topology on the set of all probability measures $\mathcal{P}({\Omega}_\alpha)$ defined by means of $C_{\rm b}({\Omega}_\alpha)$.
By $\mathcal{W}^{\rm t}$ we denote the weak topology on $\mathcal{P}({\Omega}^{\rm t})$. With these topologies the sets $\mathcal{P}({\Omega}_\alpha)$ and $\mathcal{P}({\Omega}^{\rm t})$ become Polish spaces (Theorem 6.5, page 46 of \cite{[Part]}). By standard methods one proves the following, see Lemma 2.10 in \cite{[KoT]}, \begin{proposition} [Feller Property] \label{2lm} For every $\alpha \in \mathcal{I}$, $\Lambda \Subset \mathbb{L}$, and any $f \in C_{\rm b}({\Omega}_{\alpha})$, the function \begin{eqnarray} \label{f} & & {\Omega}_\alpha \ni \xi \mapsto \pi_\Lambda (f | \xi) \\ & & \qquad \qquad \stackrel{\rm def}{=} \ \frac{1}{N_\Lambda (\xi)} \int_{{\Omega}_\Lambda}f (\omega_\Lambda \times \xi_{\Lambda^c}) \exp\left[- I_\Lambda (\omega_\Lambda \times 0_{\Lambda^c}|\xi) \right]\chi_\Lambda ({\rm d}\omega_\Lambda), \nonumber \end{eqnarray} belongs to $C_{\rm b}({\Omega}_{\alpha})$. The linear operator $f \mapsto \pi_\Lambda (f|\cdot)$ is a contraction on $C_{\rm b}({\Omega}_\alpha)$. \end{proposition} Note that by (\ref{34}), for $\xi \in {\Omega}^{\rm t}$, $\alpha \in \mathcal{I}$, and $f \in C_{\rm b}({\Omega}_\alpha)$, \begin{equation} \label{fp} \pi_\Lambda (f|\xi) = \int_{{\Omega}}f(\omega) \pi_\Lambda({\rm d}\omega|\xi). \end{equation} Recall that the particular cases of our model were specified by Definition \ref{1df}. For $B\in \mathcal{B}({\Omega})$ and $U\in O(\nu)$, we set \[ U \omega = (U \omega_\ell)_{\ell \in \mathbb{L}} \qquad UB = \{ U \omega \ | \ \omega \in B\}. \] Furthermore, for a given $\ell_0$, we set \[ t_{\ell_0} (\omega) = (\omega_{\ell - \ell_0})_{\ell \in \mathbb{L}}, \qquad t_{\ell_0}( B) = \{t_{\ell_0}(\omega) \ | \ \omega \in B\}. \] Then if the model possesses the corresponding symmetry, one has \begin{equation} \label{MA10} \pi_\Lambda (U B |U\xi) = \pi_\Lambda (B| \xi), \qquad \pi_{\Lambda + \ell} (t_\ell (B)|t_\ell (\xi)) = \pi_\Lambda (B|\xi), \end{equation} which ought to hold for all $U$, $\ell$, $B$, and $\xi$. 
\subsection{Tempered Euclidean Gibbs measures} \label{2.6.ss} \begin{definition} \label{3df} A measure $\mu \in \mathcal{P}({\Omega})$ is called a tempered Euclidean Gibbs measure if it satisfies the Dobrushin-Lanford-Ruelle (equilibrium) equation \begin{equation} \label{40} \int_{{\Omega}}\pi_\Lambda (B |\omega) \mu({\rm d}\omega) = \mu(B), \quad {\rm for} \ {\rm all} \ \ \ \Lambda \Subset \mathbb{L} \ \ {\rm and} \ \ B \in \mathcal{B}({\Omega}). \end{equation} \end{definition} \noindent By $\mathcal{G}^{\rm t}$ we denote the set of all tempered Euclidean Gibbs measures of our model existing at a given $\beta$. The elements of $\mathcal{G}^{\rm t}$ are supported by ${\Omega}^{\rm t}$. Indeed, by (\ref{34}) and (\ref{34a}) $\pi_\Lambda ({\Omega} \setminus {\Omega}^{\rm t} |\xi) = 0$ for every $\Lambda \Subset \mathbb{L}$ and $\xi \in {\Omega}$. Then by (\ref{40}), \begin{equation} \label{40a} \mu ({\Omega} \setminus {\Omega}^{\rm t}) = 0. \end{equation} Furthermore, \begin{equation} \mu \left( \left\{ \omega \in {\Omega }^{ \mathrm{t}} \ | \ \forall \ell \in \mathbb{L}: \ \omega_\ell \in C_{\beta }^{\sigma } \right\} \right) =1, \label{40b} \end{equation}% which follows from (\ref{a25}), (\ref{a26}). If the model is translation and/or rotation invariant, then, for every $U\in O(\nu)$ and $\ell\in \mathbb{L}$, the corresponding transformations preserve $\mathcal{G}^{\rm t}$. That is, for any $\mu \in \mathcal{G}^{\rm t}$, \begin{equation} \label{MA11} \Theta_U (\mu) \ \stackrel{\rm def}{=} \ \mu \circ U^{-1} \in \mathcal{G}^{\rm t}, \qquad \theta_\ell (\mu) \ \stackrel{\rm def}{=} \ \mu \circ t^{-1}_\ell \in \mathcal{G}^{\rm t}. \end{equation} In particular, if $\mathcal{G}^{\rm t}$ is a singleton, its unique element should be invariant in the same sense as the model. From Proposition \ref{2lm} one readily gets the following important fact. 
\begin{proposition} \label{3lm} For each $\alpha \in \mathcal{I}$, every $\mathcal{W}_\alpha$-accumulation point $\mu \in \mathcal{P}({\Omega}^{\rm t})$ of the family $\{\pi_\Lambda (\cdot |\xi) \ | \ \Lambda \Subset \mathbb{L}, \ \xi \in {\Omega}^{\rm t}\}$ is an element of $\mathcal{G}^{\rm t}$. \end{proposition} Now let us pay some attention to the case where the model (\ref{U1}), (\ref{U2}) is translation invariant. Recall that the lattice $\mathbb{L} = \mathbb{Z}^d$ is considered as an additive group. For ${\ell}_0\in \mathbb{L}$, $\Lambda \Subset \mathbb{L}$, and $\omega \in \Omega$, we set \begin{equation} \label{la106a} \Lambda + \ell_0 = \{ \ell + \ell_0\ | \ \ell \in \Lambda \}; \quad t_{\ell_0}(\omega) = (\xi^{\ell_0}_\ell)_{\ell\in \mathbb{L}}, \ \ \xi^{\ell_0}_\ell = \omega_{\ell - \ell_0}. \end{equation} Furthermore, for $B\in \mathcal{B}(\Omega)$, we set \begin{equation} \label{Ula106a} t_\ell(B) = \{ t_\ell(\omega) \ | \ \omega\in B\}. \end{equation} Clearly, $t_\ell (B) \in \mathcal{B}(\Omega)$ and $t_{\ell} (\Omega^{\rm t}) = \Omega^{\rm t}$ for all $\ell$. \begin{definition}\label{tripdf} A probability measure $\mu \in \mathcal{P}(\Omega)$ is said to be translation invariant if for every $\ell$ and $B \in \mathcal{B}(\Omega)$, one has $\mu(t_\ell(B)) = \mu(B)$. \end{definition} As was mentioned above, the Gibbs specification $\{\pi_{ \Lambda}\}_{\Lambda \Subset \mathbb{L}}$ of the translation invariant model is translation invariant, that is, it has the property (\ref{MA10}). \begin{remark} \label{triprm} The translation invariance of the Gibbs specification does not mean that each probability kernel $\pi_{ \Lambda}$ as a measure is translation invariant. Moreover, it does not mean that all the Euclidean Gibbs measures defined by this specification are translation invariant. One can only claim that if the set $\mathcal{G}^{\rm t}$ consists of one element only, this element ought to be translation invariant.
\end{remark} Set \begin{equation} \label{ph1} \mathcal{B}^{\rm inv} = \{ B \in \mathcal{B}(\Omega) \ | \ \forall \ell : \ \ t_\ell(B) = B\}, \end{equation} which is the set of all translation invariant events. By construction, $\Omega^{\rm t} \in\mathcal{B}^{\rm inv}$. We say that $\mu \in \mathcal{P}(\Omega)$ is trivial on $\mathcal{B}^{\rm inv}$ if for every $B\in \mathcal{B}^{\rm inv}$, one has $\mu(B) = 0$ or $\mu(B)=1$. By $\mathcal{P}^{\rm inv}(\Omega)$ we denote the set of translation invariant probability measures on $(\Omega, \mathcal{B}(\Omega))$. \begin{definition} \label{ph1df} A probability measure $\mu \in \mathcal{P}^{\rm inv}(\Omega)$ is said to be ergodic (with respect to the group $\mathbb{L}$) if it is trivial on $\mathcal{B}^{\rm inv}$. \end{definition} Ergodic measures are characterized by a mixing property, which we formulate here according to \cite{[Simon]}, see Theorem III.1.8 on page 244. For $L\in \mathbb{N}$, we set \begin{equation} \label{box} \Lambda_L = (-L, L]^d\cap \mathbb{Z}^d, \end{equation} which is called {\it a box}. For a measure $\mu$ and an appropriate function $f$, we write \begin{equation} \label{B1} \langle f \rangle_{\mu} = \int f \, {\rm d}\mu . \end{equation} \begin{proposition}[Von Neumann Ergodic Theorem] \label{Uergpn} Given $\mu\in \mathcal{P}^{\rm inv}(\Omega)$, the following statements are equivalent: \vskip.1cm \begin{tabular}{ll} (i) \ &$\mu$ is ergodic;\\[.2cm] (ii) \ &for all $f, g \in L^2(\Omega, \mu)$, \end{tabular} \begin{equation} \label{U9} \lim_{L\rightarrow +\infty} \frac{1}{|\Lambda_L|}\left\{\sum_{ \ell \in \Lambda_L}\left(\int_{\Omega} f(\omega) g(t_\ell(\omega)) \mu({\rm d} \omega) - \langle f \rangle_\mu\cdot\langle g \rangle_\mu \right) \right\} = 0. \end{equation} \end{proposition} \begin{proposition} \label{Uergco} If the model is translation invariant and $\mathcal{G}^{\rm t}$ is a singleton, its unique element is ergodic.
\end{proposition} Now we give a number of statements describing the properties of $\mathcal{G}^{\rm t}$. More details can be found in \cite{[KoT]}. \begin{proposition} \label{1tm} For every $\beta>0$, the set of tempered Euclidean Gibbs measures $\mathcal{G}^{\rm t}$ is non-void, convex, and $\mathcal{W}^{\rm t}$-compact. \end{proposition} Recall that the H\"{o}lder norm $\|\cdot \|_{C_{\beta }^{\sigma }}$ was defined by (\ref{a18}). \begin{proposition} \label{2tm} For every $\sigma \in (0, 1/2)$ and $\varkappa >0$, there exists a positive constant $ C$ such that, for any $\ell $ and for all $\mu \in \mathcal{G}^{\rm t}$, \begin{equation} \label{43} \int_{\mathit{\Omega}} \exp\left(\lambda_\sigma \|\omega_\ell \|_{C^\sigma_\beta}^2 + \varkappa \|\omega_\ell \|_{L^2_\beta}^2 \right)\mu({\rm d}\omega) \leq C, \end{equation} where $\lambda_\sigma$ is the same as in (\ref{a26}). \end{proposition} In view of (\ref{43}), the one-site projections of each $\mu\in \mathcal{G}^{\rm t}$ are sub-Gaussian. The constant $C$ does not depend on $\ell$ and is the same for all $\mu \in \mathcal{G}^{\rm t}$, though it may depend on $\sigma $ and $\varkappa$. The estimate (\ref{43}) plays a crucial role in the theory of the set $\mathcal{G}^{\rm t}$. According to \cite{[Ge]}, certain Gibbs states correspond to the thermodynamic phases of the underlying physical system. Thus, in our context multiple phases exist only if $\mathcal{G}^{\rm t}$ has more than one element for appropriate values of $\beta$ and the model parameters. On the other hand, a priori one cannot exclude that this set always has multiple elements, which would make it useless for describing phase transitions. The next statement which we present here\footnote{Cf.\ Theorem 3.4 in \cite{[KoT]}, Theorem 2.1 in \cite{[AKRT1]}, and Theorem 4.1 in \cite{[AKRT]}.} clarifies the situation.
Let us decompose \begin{equation}\label{decom} V_{\ell} = V_{1, \ell} + V_{2, \ell}, \end{equation} where $V_{1, \ell}\in C^2 (\mathbb{R}^\nu)$ is such that \begin{equation} \label{dc1} - a \leq b \ \stackrel{\rm def}{=} \ \inf_{\ell} \inf_{x, y \in \mathbb{R}^\nu, \ y\neq 0}\left( V''_{1,\ell}(x)y, y \right)/|y|^2 < \infty. \end{equation} As for the second term, we set \begin{equation} \label{dc2} 0 \leq \delta \ \stackrel{\rm def}{=} \ \sup_{\ell} \left\{ \sup_{x \in \mathbb{R}^\nu}V_{2, \ell}(x) - \inf_{x \in \mathbb{R}^\nu}V_{2, \ell}(x) \right\} \leq \infty. \end{equation} Its role is to produce multiple minima of the potential energy responsible for possible phase transitions. Clearly, the decomposition (\ref{decom}) is not unique; its optimal realizations for certain types of $V_\ell$ are discussed in Section 6 of \cite{[AKRT]}. Recall that the interaction parameter $\hat{J}_0$ was defined in (\ref{a1}). \begin{proposition} \label{httm} The set $\mathcal{G}^{\rm t}$ is a singleton if \begin{equation} \label{dc3} e^{\beta \delta} <(a + b)/\hat{J}_0. \end{equation} \end{proposition} \begin{remark} \label{apprm} The latter condition surely holds at all $\beta$ if \begin{equation} \label{si} \delta = 0 \quad {\rm and} \quad \hat{J}_0 < a + b. \end{equation} If the oscillators are harmonic, $\delta = b = 0$, which yields the stability condition \begin{equation} \label{si1} \hat{J}_0 < a. \end{equation} The condition (\ref{dc3}) does not contain the particle mass $m$; hence, the property stated holds also in the quasi-classical limit\footnote{More details on this limit can be found in \cite{[AKKR]}.} $m \rightarrow + \infty$. \end{remark} For the remainder of this subsection, we consider the scalar case $\nu=1$. Let us introduce the following order on $\mathcal{G}^{\rm t}$.
As the components of the configurations $\omega\in {\Omega}$ are continuous functions $\omega_\ell :S_\beta \rightarrow \mathbb{R}^\nu$, one can set $\omega \leq \tilde{\omega}$ if $\omega_\ell(\tau) \leq \tilde{\omega}_\ell(\tau)$ for all $\ell$ and $\tau$. Thereby, \begin{equation} \label{MA1} K_+ ({\Omega}^{\rm t}) \ \stackrel{\rm def}{ =} \ \{ f\in C_{\rm b}({\Omega}^{\rm t}) \ | \ f(\omega) \leq f(\tilde{\omega}), \quad {\rm if} \ \ \omega \leq \tilde{\omega}\}, \end{equation} which is a cone of bounded continuous functions. \begin{proposition} \label{MAlm} If for given $\mu, \tilde{\mu} \in \mathcal{G}^{\rm t}$, one has \begin{equation} \label{MA1a} \langle f \rangle_\mu = \langle f \rangle_{\tilde{\mu}}, \qquad {\rm for} \ \ {\rm all} \ \ f \in K_+ ({\Omega}^{\rm t}), \end{equation} then $\mu = \tilde{\mu}$. \end{proposition} This fact makes it possible to introduce the FKG-order. \begin{definition} \label{MAdf} For $\mu, \tilde{\mu}\in\mathcal{G}^{\rm t}$, we say that $\mu\leq \tilde{\mu}$, if \begin{equation} \label{MAW} \langle f \rangle_{{\mu}} \leq \langle f \rangle_{\tilde{\mu}}, \qquad {\rm for} \ \ {\rm all} \ \ f \in K_+({\Omega}^{\rm t}). \end{equation} \end{definition} \begin{proposition} \label{MAtm} The set $\mathcal{G}^{\rm t}$ possesses a maximal element $\mu_{+}$ and a minimal element $\mu_{-}$ in the sense of Definition \ref{MAdf}. These elements are extreme; they also are translation invariant if the model is translation invariant. If $V_\ell (-x ) = V_\ell (x)$ for all $\ell $, then $\mu_{+} (B) = \mu_{-} (- B)$ for all $B \in \mathcal{B}({\Omega})$. \end{proposition} The proof of this statement follows from the fact that, for $f\in K_{+}(\Omega^{\rm t})$ and any $\Lambda \Subset \mathbb{L}$, \begin{equation} \label{MAW1} \langle f \rangle_{\pi_\Lambda (\cdot|\xi)} \leq \langle f \rangle_{\pi_\Lambda (\cdot|\xi')}, \quad \ \ {\rm whenever} \ \ \xi \leq \xi', \end{equation} which one obtains by the FKG inequality, see \cite{[KoT]}.
By means of this inequality, one also proves the following \begin{proposition} \label{MA1tm} The family $\{\pi_\Lambda (\cdot |0)\}_{\Lambda \Subset \mathbb{L}}$ has only one $\mathcal{W}^{\rm t}$-accumulation point, $\mu_0$, which is an element of $\mathcal{G}^{\rm t}$. \end{proposition} \subsection{Periodic Euclidean Gibbs measures} \label{2.7.ss} If the model is translation invariant, there should exist $\phi: \mathbb{N}_0^d \rightarrow \mathbb{R}^+$ such that \begin{equation} \label{A1} J_{\ell \ell'} = \phi (|\ell_1 - \ell'_1|, \dots, |\ell_d - \ell'_d|).\end{equation} For the box (\ref{box}), we set \begin{equation} \label{A2} J^\Lambda_{\ell \ell'} \ \stackrel{\rm def}{=} \ \phi (|\ell_1 - \ell'_1|_L, \dots, |\ell_d - \ell'_d|_L), \end{equation} where \begin{equation} \label{A3} |\ell_j - \ell'_j|_L \ \stackrel{\rm def}{ =} \ \min\{ |\ell_j - \ell'_j| \ ; \ 2L - |\ell_j - \ell'_j|\}, \ \ \ j= 1 , \dots , d. \end{equation} For $\ell , \ell'\in \Lambda$, we introduce the periodic distance \begin{eqnarray} \label{box1} |\ell - \ell'|_\Lambda = \sqrt{|\ell_1 - \ell'_1|_L^2 + \cdots + |\ell_d - \ell'_d|_L^2}. \end{eqnarray} With this distance the box $\Lambda$ turns into a torus, which one can obtain by imposing periodic boundary conditions. Now we set, cf.\ (\ref{a29}), \begin{equation} \label{A4} I^{\rm per}_\Lambda (\omega_\Lambda) = - \frac{1}{2} \sum_{\ell , \ell' \in \Lambda} J^\Lambda_{\ell \ell'} (\omega_\ell , \omega_{\ell'})_{L^2_\beta } + \sum_{\ell \in \Lambda}\int_0^\beta V_\ell (\omega_\ell (\tau)){\rm d}\tau , \end{equation} and thereby, cf.\ (\ref{a27}), \begin{eqnarray} \label{A5} \nu^{\rm per}_\Lambda ({\rm d}\omega_\Lambda) & = & \exp\left[- I^{\rm per}_\Lambda (\omega_\Lambda) \right]\chi_\Lambda ({\rm d}\omega_\Lambda)/N^{\rm per}_\Lambda, \\ N^{\rm per}_\Lambda & = & \int_{\Omega_\Lambda}\exp\left[- I^{\rm per}_\Lambda (\omega_\Lambda) \right]\chi_\Lambda ({\rm d}\omega_\Lambda).
\nonumber \end{eqnarray} By means of (\ref{A2}) we introduce the periodic Hamiltonian \begin{equation} \label{A5a} H_\Lambda^{\rm per} = - \frac{1}{2} \sum_{\ell ,\ell'\in \Lambda} J^\Lambda_{\ell \ell'} \cdot (q_\ell , q_{\ell'}) + \sum_{\ell \in \Lambda}H_\ell, \end{equation} and the corresponding periodic local Gibbs state \begin{equation} \label{A5b} \varrho^{\rm per}_\Lambda (A) = {\rm trace}[A\exp(- \beta H^{\rm per}_\Lambda)]/ {\rm trace}[\exp(- \beta H^{\rm per}_\Lambda)], \quad A\in \mathfrak{C}_\Lambda. \end{equation} The relationship between the measure $\nu_\Lambda^{\rm per}$ and this state is the same as in the case of $\nu_\Lambda$ and $\varrho_\Lambda$. Set, cf.\ (\ref{34}), \begin{equation} \label{d51} \pi^{\rm per}_{ \Lambda} (B) = \frac{1}{N_{ \Lambda}^{\rm per}} \int_{\Omega_{ \Lambda}} \exp\left[ - I_{ \Lambda}^{\rm per} (\omega_{\Lambda}) \right] \mathbb{I}_B (\omega_\Lambda \times 0_{\Lambda^c}) \chi_{ \Lambda} ({\rm d}\omega_\Lambda), \end{equation} which is a probability measure on $\Omega^{\rm t}$. Then \begin{equation} \label{d510} \pi^{\rm per}_{ \Lambda}({\rm d}(\omega_\Lambda \times \omega_{\Lambda^c})) = \nu_{ \Lambda}^{\rm per} ({\rm d}\omega_\Lambda)\prod_{\ell'\in \Lambda^c} \delta_{0_{\ell'}} ({\rm d}\omega_{\ell'}), \end{equation} where $0_{\ell'}$ is the zero element of the Banach space $C_\beta$. Note that the projection of $\pi_{\Lambda}^{\rm per}$ onto $\Omega_{\Lambda}$ is $\nu_{\Lambda}^{\rm per}$. Let $\mathcal{L}_{\rm box}$ be the sequence of all boxes (\ref{box}). Arguments similar to those used in the proof of Lemma 4.4 in \cite{[KoT]} yield the following \begin{lemma} \label{d5lm} For every $\alpha \in \mathcal{I}$ and $\sigma \in (0, 1/2)$, there exists a constant $C>0$ such that, for all boxes $\Lambda$, \begin{equation} \label{d516} \int_{\Omega^{\rm t}} \left( \sum_{\ell} \|\omega_\ell \|^2_{ C_\beta^\sigma} w_\alpha (0,\ell) \right)^2 \pi^{\rm per}_{ \Lambda} ({\rm d}\omega) \leq C.
\end{equation} Thereby, the family $\{\pi_{ \Lambda}^{\rm per}\}_{\Lambda \in \mathcal{L}_{\rm box}}$ is $\mathcal{W}^{\rm t}$-relatively compact. \end{lemma} Let $\mathcal{M}$ be the family of $\mathcal{W}^{\rm t}$-accumulation points of $\{\pi_{ \Lambda}^{\rm per}\}_{\Lambda \in \mathcal{L}_{\rm box}}$. \begin{proposition} \label{periodtm} It follows that $\mathcal{M}\subset \mathcal{G}^{\rm t}$. The elements of $\mathcal{M}$, called periodic Euclidean Gibbs measures, are translation invariant. \end{proposition} The proof of this statement is similar to the proof of Proposition \ref{3lm}. It can be done by demonstrating that each $\mu \in \mathcal{M}$ solves the DLR equation (\ref{40}). To this end, for a chosen $\Lambda \Subset \mathbb{L}$, one picks a box $\Delta$ containing this $\Lambda$ and shows that \[ \int_\Omega \pi_\Lambda (\cdot |\xi) \pi^{\rm per}_\Delta ({\rm d}\xi) \Rightarrow \mu (\cdot) , \quad \ \ {\rm if} \ \ \pi^{\rm per}_\Delta \Rightarrow \mu \quad {\rm in} \ \ \mathcal{W}^{\rm t}. \] Here both limits are taken along a subsequence of $\mathcal{L}_{\rm box}$. \subsection{The pressure} \label{2.8.ss} In the translation invariant case, one can introduce a thermodynamic function, which contains important information about the thermodynamic properties of the model. This is the pressure, which in our case coincides, up to a factor, with the free energy density. As our special attention will be given to the dependence of the pressure on the external field $h$, cf.\ (\ref{4}), we indicate this dependence explicitly. For $\Lambda \Subset \mathbb{L}$, we set, see (\ref{a46}), \begin{equation} \label{c1} p_\Lambda (h, \xi) = \frac{1}{|\Lambda|} \log N_\Lambda (h,\xi), \quad \xi \in {\Omega}^{\rm t}. \end{equation} To simplify notation we write $p_\Lambda (h) = p_\Lambda (h, 0)$. Thereby, for $\mu \in \mathcal{G}^{\rm t}$, we set \begin{equation} \label{c2} p^\mu_\Lambda (h) = \int_{{\Omega}}p_\Lambda (h , \xi) \mu({\rm d}\xi).
\end{equation} Furthermore, we set \begin{equation} \label{ac} p^{\rm per}_\Lambda (h) = \frac{1}{|\Lambda|} \log N^{\rm per}_\Lambda (h). \end{equation} If, for a cofinal sequence $\mathcal{L}$, the limit \begin{equation} \label{c3} p^\mu (h) \ \stackrel{\rm def}{=} \ \lim_{\mathcal{L}}p^\mu_\Lambda (h), \end{equation} exists, we call it the pressure in the state $\mu$. We shall also consider \begin{equation} \label{CC} p(h) \ \stackrel{\rm def}{=} \ \lim_{\mathcal{L}} p_\Lambda (h), \quad \ \ p^{\rm per} (h) \ \stackrel{\rm def}{=} \ \lim_{\mathcal{L}_{\rm box}} p^{\rm per}_\Lambda (h) . \end{equation} Given $l = (l_1 , \dots , l_d)$, $l' = (l'_1 , \dots , l'_d)\in \mathbb{L}= \mathbb{Z}^d$, such that $l_j < l'_j$ for all $j=1, \dots , d$, we set \begin{equation} \label{de1} \Gamma = \{ \ell \in \mathbb{L} \ | \ l_j \leq \ell_j \leq l'_j, \ \ {\rm for} \ {\rm all}\ j = 1 , \dots , d\}. \end{equation} For this parallelepiped, let $\mathfrak{G}(\Gamma)$ be the family of all pair-wise disjoint translates of $\Gamma$ which cover $\mathbb{L}$. Then for $\Lambda \Subset \mathbb{L}$, we let $N_{-}(\Lambda|\Gamma)$ (respectively, $N_{+}(\Lambda|\Gamma)$) be the number of elements of $\mathfrak{G}(\Gamma)$ which are contained in $\Lambda$ (respectively, which have non-void intersections with $\Lambda$). \begin{definition} \label{rdf} A cofinal sequence $\mathcal{L}$ is a van Hove sequence if for every $\Gamma$, \begin{equation} \label{de2} (a) \ \ \lim_{\mathcal{L}} N_{-}(\Lambda |\Gamma) = +\infty; \quad \quad (b) \ \ \lim_{\mathcal{L}}\left( N_{-}(\Lambda |\Gamma)/ N_{+}(\Lambda |\Gamma)\right)= 1. \end{equation} \end{definition} One observes that $\mathcal{L}_{\rm box}$ is a van Hove sequence.
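To illustrate the last observation, here is a short count, under the simplifying assumption (consistent with (\ref{box})) that $\Lambda_L$ is a cube of side $2L$ and $\Gamma$ is a cube of side $a$. Since a segment of length $2L$ contains at least $\lfloor 2L/a \rfloor - 1$ full cells of length $a$ and meets at most $\lceil 2L/a \rceil + 1$ of them, one has
\[
N_{-}(\Lambda_L |\Gamma) \geq \left( \left\lfloor 2L/a \right\rfloor - 1\right)^d, \qquad N_{+}(\Lambda_L |\Gamma) \leq \left( \left\lceil 2L/a \right\rceil + 1\right)^d.
\]
Hence $N_{-}(\Lambda_L|\Gamma) \rightarrow +\infty$ and $N_{-}(\Lambda_L|\Gamma)/N_{+}(\Lambda_L|\Gamma) \rightarrow 1$ as $L \rightarrow +\infty$, which are exactly the conditions (a) and (b) of (\ref{de2}).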
It is known, see Theorem 3.10 in \cite{[KoT]}, that \begin{proposition} \label{pressuretm} For every $h\in \mathbb{R}$ and any van Hove sequence $\mathcal{L}$, it follows that the limits (\ref{c3}) and (\ref{CC}) exist, do not depend on the particular choice of $\mathcal{L}$, and are equal, that is $p(h) = p^{\rm per} (h) = p^\mu (h)$ for each $\mu\in \mathcal{G}^{\rm t}$. \end{proposition} Let the model be rotation invariant, see Definition \ref{1df}. Then the pressure depends on the vector $h\in \mathbb{R}^\nu$ only through its norm. Therefore, without loss of generality one can choose the external field to be $(h, 0, \dots , 0)$, $h \in \mathbb{R}$. For the measure (\ref{a27}), by $\nu_\Lambda^{(0)}$ we denote its version with $h=0$. Then \begin{equation} \label{A10} N_\Lambda (h) = N_\Lambda (0) \int_{\Omega_\Lambda} \exp \left( h \sum_{\ell \in \Lambda} \int_0^\beta \omega_\ell^{(1)} (\tau) {\rm d}\tau \right)\nu_\Lambda^{(0)} ({\rm d}\omega_\Lambda). \end{equation} The same representation can also be written for $N_\Lambda^{\rm per}(h)$. One can show that the pressures $p_\Lambda (h)$ and $p_\Lambda^{\rm per}(h)$, as functions of $h$, are analytic in a subset of $\mathbb{C}$ containing $\mathbb{R}$. Thus, one can compute the derivatives and obtain \begin{equation} \label{ZiF1} \frac{\partial}{\partial h} p_\Lambda (h) = \beta M_\Lambda (h), \qquad \frac{\partial}{\partial h} p^{\rm per}_\Lambda (h) = \beta M^{\rm per}_\Lambda (h), \end{equation} where \begin{equation} \label{ZiF2} M_\Lambda (h) \ \stackrel{\rm def}{=} \ \frac{1}{|\Lambda|} \sum_{\ell \in \Lambda} \varrho_{ \Lambda} [q^{(1)}_\ell], \quad M^{\rm per}_\Lambda (h) \ \stackrel{\rm def}{=} \ \varrho^{\rm per}_{ \Lambda} [q^{(1)}_\ell] \end{equation} are local {\it polarizations}, corresponding to the zero and periodic boundary conditions respectively.
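As a sketch of how the first formula in (\ref{ZiF1}) arises, differentiate (\ref{c1}) with the help of the representation (\ref{A10}):
\begin{eqnarray*}
\frac{\partial}{\partial h} p_\Lambda (h) & = & \frac{1}{|\Lambda|}\cdot \frac{\partial}{\partial h} \log N_\Lambda (h) = \frac{1}{|\Lambda|} \sum_{\ell \in \Lambda} \int_0^\beta \langle \omega^{(1)}_\ell (\tau) \rangle_{\nu_\Lambda} {\rm d}\tau \\
& = & \frac{\beta}{|\Lambda|} \sum_{\ell \in \Lambda} \varrho_{\Lambda} [q^{(1)}_\ell] = \beta M_\Lambda (h),
\end{eqnarray*}
where $\langle \cdot \rangle_{\nu_\Lambda}$ denotes the expectation with respect to the measure $\nu_\Lambda$ at the field $h$, the differentiation under the integral is justified by the analyticity just mentioned, and the $\tau$-integration produces the factor $\beta$ since, by (\ref{a11}), $\langle \omega^{(1)}_\ell (\tau) \rangle_{\nu_\Lambda} = \varrho_{\Lambda}[q^{(1)}_\ell]$ does not depend on $\tau$. The periodic case is completely analogous.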
Furthermore, \begin{eqnarray} \label{ZiF3} & & \frac{\partial^2}{\partial h^2} p_\Lambda (h) \\ & & \qquad = \frac{1}{2|\Lambda|} \int_{\Omega_{ \Lambda}} \int_{\Omega_{ \Lambda}} \left[\sum_{\ell \in \Lambda} \int_0^\beta \left(\omega^{(1)}_\ell (\tau) - \tilde{\omega}^{(1)}_\ell (\tau) \right){\rm d}\tau \right]^2 \nu_{ \Lambda} ({\rm d}\omega_\Lambda)\nu_{ \Lambda} ({\rm d}\tilde{\omega}_\Lambda) \geq 0. \nonumber \end{eqnarray} The same can be said about the second derivative of $p^{\rm per}_\Lambda (h)$. Therefore, both $p_\Lambda (h)$ and $p^{\rm per}_\Lambda (h)$ are convex functions. For the reader's convenience, we present here the corresponding properties of convex functions following \cite{[Simon]}, pages 34 - 37. For a function $\varphi: \mathbb{R} \rightarrow \mathbb{R}$, by $ \varphi'_{\pm}(t)$ we denote its one-sided derivatives at a given $t\in \mathbb{R}$. By {\it at most countable set} we mean a set which is void, finite, or countable. \begin{proposition} \label{convpn} For a convex function $\varphi: \mathbb{R} \rightarrow \mathbb{R}$, it follows that: \vskip.1cm \begin{tabular}{ll} (a) \ &the derivatives $\varphi'_{\pm}(t)$ exist for every $t\in \mathbb{R}$;\\ &the set $\{t\in \mathbb{R} \ | \ \varphi'_{+} (t) \neq \varphi'_{-} (t)\}$ is at most countable;\\[.2cm] (b) \ &for every $t\in \mathbb{R}$ and $\theta> 0$, \end{tabular} \vskip.1cm \begin{equation} \label{S6z} \varphi'_{-} (t) \leq \varphi'_{+} (t) \leq \varphi'_{-} (t+ \theta ) \leq \varphi'_{+} (t+\theta); \end{equation} \vskip.1cm \begin{tabular}{ll} (c) \ &the point-wise limit $\varphi$ of a sequence of convex functions $\{\varphi_n\}_{n\in \mathbb{N}}$\\ &is a convex function; if $\varphi$ and all $\varphi_n$'s are differentiable at a\\ &given $t$, then $\varphi'_{n}(t) \rightarrow \varphi' (t)$ as $n \rightarrow + \infty$. \end{tabular} \end{proposition} \begin{proposition} \label{ZiFtm} The pressure $p(h)$, see Proposition \ref{pressuretm}, is a convex function of $h\in \mathbb{R}$.
Therefore, the set \begin{equation} \label{ZiF4} \mathcal{R} \ \stackrel{\rm def}{=} \ \{ h \in \mathbb{R} \ | \ p'_{-} (h) < p'_{+} (h) \} \end{equation} is at most countable. For any $h\in \mathcal{R}^c$ and any van Hove sequence $\mathcal{L}$, it follows that \begin{equation} \label{ZiF5} \lim_{\mathcal{L}} M_\Lambda (h) = \lim_{\mathcal{L}_{\rm box}} M^{\rm per}_\Lambda (h) = \beta^{-1} p'(h) \ \stackrel{\rm def}{=} \ M(h) . \end{equation} \end{proposition} By this statement, for any $h\in \mathcal{R}^c$, the limiting periodic state is unique. In the scalar case, more can be said. The following result is a consequence of Propositions \ref{pressuretm} and \ref{MAtm}. \begin{proposition} \label{Mco} If $\nu = 1$ and $p(h)$ is differentiable at a given $h\in \mathbb{R}$, then $\mathcal{G}^{\rm t}$ is a singleton at this $h$. \end{proposition} Returning to the general case $\nu \in \mathbb{N}$, we note that by Proposition \ref{ZiFtm} the global polarization $M (h)$ is a nondecreasing function of $h\in \mathcal{R}^c$; it is continuous on each open connected component of $\mathcal{R}^c$. That is, $M(h)$ is continuous on the intervals $(a_{-}, a_{+})\subset \mathcal{R}^c$, where $a_{\pm}$ are two consecutive elements of $\mathcal{R}$. At each such $a_{\pm}$, the global polarization is discontinuous. One observes, however, that the set $\mathcal{R}^c$ may have empty interior; hence, $M(h)$ may be nowhere continuous. In the sequel, to study phase transitions in the model with the anharmonic potentials $V$ of general type, we use the regularity of the temperature loops and Proposition \ref{grrpn}. Let the model be just translation invariant, i.e., the anharmonic potential has the form (\ref{4}), where $V^0$ is independent of $\ell$.
Let us consider the following measure on $C_\beta$: \begin{eqnarray} \label{ZiFF1} \lambda ({\rm d} \omega ) & = & \frac{1}{N_\beta} \exp\left( - \int_0^\beta V^0(\omega(\tau)){\rm d} \tau \right) \chi ({\rm d} \omega), \\ N_\beta & = & \int_{C_\beta} \exp\left( - \int_0^\beta V^0(\omega(\tau)){\rm d} \tau \right) \chi ({\rm d} \omega), \nonumber \end{eqnarray} where $\chi$ is H{\o}egh-Krohn's measure. For a box $\Lambda$, we introduce the following functions on $\Omega_{ \Lambda}$ \begin{eqnarray} \label{ZiFF9} Y_\Lambda (\omega_\Lambda) & = & \frac{1}{2} \sum_{\ell , \ell' \in \Lambda} J^\Lambda_{\ell \ell'} \sum_{j=1}^\nu \int_0^\beta \omega^{(j)}_\ell (\tau) \omega^{(j)}_{\ell'}(\tau){\rm d}\tau ,\\ X^{(j)}_\Lambda (\omega_\Lambda)& = & \sum_{\ell \in \Lambda} \int^{\beta}_0 \omega^{(j)}_\ell (\tau) {\rm d}\tau, \quad \ \ j = 1 , \dots , \nu. \nonumber \end{eqnarray} Then from (\ref{ac}) one gets \begin{eqnarray} \label{ZiFF10} p^{\rm per}_{\Lambda} (h) & = & \log N_\beta \nonumber \\ & + & \frac{1}{|\Lambda|}\log \left\{\int_{\Omega_{ \Lambda}} \exp \left[Y_\Lambda (\omega_\Lambda ) + \sum_{j=1}^\nu h^{(j)} X^{(j)}_\Lambda (\omega_\Lambda) \right] \prod_{\ell \in \Lambda} \lambda ({\rm d}\omega_\ell) \right\}. \end{eqnarray} As the measure (\ref{ZiFF1}) is a perturbation of the H{\o}egh-Krohn measure, we can study the regularity of the associated stochastic process by means of Proposition \ref{grrpn}. Fix some $p\in \mathbb{N}\setminus\{1\}$ and $\sigma \in (0, 1/2 - 1/2p)$. Thereby, for $\vartheta \in (0, \beta)$, one obtains \begin{eqnarray*} \int_{C_\beta} \Xi_\vartheta^p (\omega) \lambda ({\rm d}\omega) \leq e^{-\beta B_V }\cdot \langle \Xi_\vartheta^p \rangle_{\chi}/ N_\beta , \end{eqnarray*} $B_V$ being as in (\ref{a2}). 
By Proposition \ref{grrpn} this yields \begin{equation} \label{ZiFF4} \langle \Xi^p_\vartheta \rangle_{\lambda} \leq D_V(\sigma, \nu, p) m^{-p} \vartheta^{ p(1 - 2 \sigma)}, \end{equation} where, see (\ref{a26c}), \[ D_V(\sigma, \nu, p) \ \stackrel{\rm def}{=} \ \frac{2^{3(2p+1)}( 1 + 1/ \sigma p)^{2p}}{(p-1 - 2 p \sigma) (p - 2 p \sigma)}\cdot \frac{2^p \exp\left( - \beta B_V\right) \Gamma (\nu/2 + p)}{ N_\beta \Gamma (\nu/2)}. \] For $c>0$ and $n\in \mathbb{N}$, $n\geq 2$, we set \begin{equation} \label{ZiFF5} C^{\pm} (n;c) = \{\omega\in C_\beta \ | \ \pm \omega^{(j)}(k\beta /n)\geq c, \ j=1 , \dots , \nu; \ k = 0, 1, \dots , n\}. \end{equation} For every $n\in \mathbb{N}$, $j_1, \dots, j_n \in \{1, \dots, \nu\}$, and $\tau_1 , \dots , \tau_n \in [0,\beta]$, the joint distribution of $\omega^{(j_1)}(\tau_1), \dots , \omega^{(j_n)}(\tau_n)$ induced by H{\o}egh-Krohn's measure $\chi$ is Gaussian. Therefore, $\chi (C^{\pm} (n;c)) >0$. Clearly, the measure (\ref{ZiFF1}) has the same property. Thus, we have \begin{equation} \label{WQ} \Sigma(n;c) \ \stackrel{\rm def}{=} \ \min\left\{\lambda \left(C^{+}(n;c) \right);\lambda \left(C^{-}(n;c) \right) \right\}>0. \end{equation} For $\varepsilon \in (0, c)$, we set \begin{eqnarray} \label{ZiFF6} A(c;\varepsilon) & = & \{\omega\in C_\beta \ | \ \Xi_{\beta/n} (\omega) \leq (c - \varepsilon)^{2}(\beta/n)^{-2 \sigma } \},\\ B^{\pm}(\varepsilon,c)& = & A(c;\varepsilon)\bigcap C^{\pm} (n;c). \nonumber \end{eqnarray} Then for any $\tau \in [0, \beta]$, one finds $k \in \{0, 1, \dots , n\}$ such that $|\tau - k \beta /n| \leq \beta/n$, and hence, for any $j=1, \dots , \nu$, \[ |\omega^{(j)} (\tau) - \omega^{(j)} (k\beta/n)| \leq \left[\Xi_{\beta/n} (\omega)\right]^{1/2}(\beta/ n)^{\sigma}, \] which yields $\pm \omega^{(j)} (\tau) \geq \varepsilon$ if $\omega\in B^{\pm}(\varepsilon,c)$. Let us estimate $\lambda [ B^{\pm}(\varepsilon,c)]$.
By (\ref{ZiFF4}) and Chebyshev's inequality, one gets \begin{eqnarray*} \lambda \left(C_\beta \setminus A(c; \varepsilon) \right) &\leq & \frac{\beta^{ 2 \sigma p}}{n^{ 2 \sigma p}(c-\varepsilon)^{2p}} \langle \Xi^p_{\beta/n} \rangle_{\lambda}\\ & \leq & \frac{\beta^p D_V (\sigma, \nu, p)}{[m n (c-\varepsilon)^2]^p} . \end{eqnarray*} Thereby, \begin{eqnarray} \label{ZiFF7} \lambda \left[B^{\pm}(\varepsilon,c) \right] & = & \lambda\left[ C^{\pm}(n;c) \setminus \left(C_\beta \setminus A(c;\varepsilon) \right)\right] \\ &\geq& \Sigma(n;c) - \lambda \left(C_\beta \setminus A(c;\varepsilon) \right) \nonumber \\ & \geq & \Sigma(n;c) - \frac{\beta^p D_V (\sigma , \nu, p)}{\left[m n (c-\varepsilon)^2\right]^p} \nonumber\\ & \stackrel{\rm def}{=} & \gamma (m), \nonumber \end{eqnarray} which is positive, see (\ref{WQ}), for all \begin{equation} \label{ZiFF8} m \geq m_* \ \stackrel{\rm def}{=} \ \frac{\beta}{n (c-\varepsilon)^2} \cdot\left( \frac{D_V (\sigma , \nu, p)}{\Sigma(n;c)}\right)^{1/p}. \end{equation} This result will be used for estimating the integrals in (\ref{ZiFF10}). \section{Phase Transitions} \label{3s} There exist several approaches to describe phase transitions. Their common point is that the macroscopic equilibrium properties of a statistical mechanical model can be different at the same values of the model parameters. That is, one speaks about the possibility for multiple states to exist, rather than about a transition (as a process) between these states or between their uniqueness and multiplicity. \subsection{Phase transitions and order parameters} \label{3.1.ss} We begin by introducing the main notion of this section. \begin{definition} \label{phdef} The model described by the Hamiltonians (\ref{U1}), (\ref{U2}) has a phase transition if $|\mathcal{G}^{\rm t}|>1$ at certain values of $\beta$ and the model parameters. \end{definition} Note that here we demand the existence of multiple \emph{tempered} Euclidean Gibbs measures.
For models with finite range interactions, there may exist Euclidean Gibbs measures which are not tempered. Such measures should not be taken into account. Another observation is that in Definition \ref{phdef} we do not assume any symmetry of the model, including the translation invariance. If the model is rotation invariant (symmetric for $\nu=1$, see Definition \ref{1df}) and $\mathcal{G}^{\rm t}$ is a singleton, its unique element should have the same symmetry. If $|\mathcal{G}^{\rm t}|>1$, the symmetry can be `distributed' among the elements of $\mathcal{G}^{\rm t}$. In this case, the phase transition is connected with {\it a symmetry breaking}. In the sequel, we consider mostly phase transitions of this type. However, in subsection \ref{6.3.3.ss} we study the case where the anharmonic potentials $V_\ell$ have no symmetry and hence there is no symmetry breaking connected with the phase transition. If the model is translation invariant, the multiplicity of its Euclidean Gibbs states is equivalent to the existence of non-ergodic elements of $\mathcal{G}^{\rm t}$, see Corollary \ref{Uergco}. Thus, to prove that the model has a phase transition, it is enough to show that there exists an element of $\mathcal{G}^{\rm t}$ which fails to obey (\ref{U9}). In the case where the model is not translation invariant, we employ a comparison method, based on correlation inequalities. Its main idea is that the model has a phase transition if the translation invariant model with which we compare it has a phase transition. Let us consider first the translation and rotation invariant case. Given $\ell , \ell' \in \mathbb{L}$, we set \begin{equation} \label{nrp10} D^\Lambda_{\ell \ell'} = \beta \int_0^\beta \big{\langle} \left(\omega_\ell (\tau), \omega_{\ell'} (\tau')\right) \big{\rangle}_{\nu_{\Lambda}^{\rm per}} {\rm d}\tau'. \end{equation} The right-hand side in (\ref{nrp10}) does not depend on $\tau$ due to the property (\ref{a11}).
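The latter independence can be seen directly: by (\ref{a11}), the function under the integral depends on $\tau$ and $\tau'$ only through the difference $\tau' - \tau$ taken modulo $\beta$, so that the substitution $s = \tau' - \tau \ ({\rm mod}\ \beta)$ yields
\[
\int_0^\beta \big{\langle} \left(\omega_\ell (\tau), \omega_{\ell'} (\tau')\right) \big{\rangle}_{\nu_{\Lambda}^{\rm per}} {\rm d}\tau' = \int_0^\beta \big{\langle} \left(\omega_\ell (0), \omega_{\ell'} (s)\right) \big{\rangle}_{\nu_{\Lambda}^{\rm per}} {\rm d}s,
\]
and the right-hand side contains no $\tau$.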
To introduce the Fourier transformation in the box $\Lambda$ we employ the conjugate set $\Lambda_*$ (Brillouin zone), consisting of the vectors $p = (p_1 , \dots , p_d)$, such that \begin{equation} \label{rp39} \ p_j = - \pi + \frac{ \pi}{L} s_j, \ s_j = 1 , \dots , 2L, \ j = 1, \dots , d. \end{equation} Then the Fourier transformation is \begin{eqnarray} \label{rp40} {\omega}^{(j)}_{\ell} (\tau) & = & \frac{1}{|\Lambda|^{1/2}} \sum_{p \in \Lambda_*} \hat{\omega}^{(j)}_p (\tau) e^{\imath (p,\ell)}, \\ \hat{\omega}^{(j)}_p (\tau) & = & \frac{1}{|\Lambda|^{1/2}} \sum_{\ell \in \Lambda} {\omega}^{(j)}_\ell (\tau) e^{-\imath (p,\ell)}. \nonumber \end{eqnarray} In order that ${\omega}^{(j)}_\ell (\tau)$ be real, the Fourier coefficients should satisfy \[ \overline{ \hat{\omega}^{(j)}_p (\tau)} = \hat{\omega}^{(j)}_{ -p} (\tau). \] By the rotation invariance of the state $\langle \cdot \rangle_{\nu_{ \Lambda}^{\rm per}}$, as well as by its invariance with respect to the translations of the torus $\Lambda$, it follows that \begin{equation} \label{rp40k} \langle \hat{\omega}^{(j)}_p (\tau) \hat{\omega}^{(j')}_{p'} (\tau') \rangle_{\nu_{ \Lambda}^{\rm per}} = \delta_{jj'} \delta (p + p') \sum_{\ell'\in \Lambda} \langle {\omega}_\ell^{(j)} (\tau) {\omega}^{(j)}_{\ell'} (\tau') \rangle_{\nu_{ \Lambda}^{\rm per}} e^{\imath (p, \ell' - \ell)}. \end{equation} Thus, we set \begin{eqnarray} \label{rp40z} \widehat{D}^\Lambda_p & = & \sum_{\ell' \in \Lambda} D^\Lambda_{\ell \ell'}e^{\imath (p, \ell' - \ell)} ,\\ D^\Lambda_{\ell \ell'} & = & \frac{1}{|\Lambda|} \sum_{p \in \Lambda_*}\widehat{D}^\Lambda_p e^{\imath (p, \ell - \ell')}. \nonumber \end{eqnarray} One observes that $\widehat{D}^\Lambda_p$ can be extended to all $p \in (-\pi, \pi]^d$. 
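Both (\ref{rp40k}) and the mutual consistency of the two formulas in (\ref{rp40z}) rest on the standard orthogonality relations for the set (\ref{rp39}), namely
\[
\frac{1}{|\Lambda|} \sum_{\ell \in \Lambda} e^{\imath (p - p', \ell)} = \delta (p - p'), \qquad \frac{1}{|\Lambda|} \sum_{p \in \Lambda_*} e^{\imath (p, \ell - \ell')} = \delta_{\ell \ell'},
\]
holding for all $p, p' \in \Lambda_*$ and $\ell , \ell' \in \Lambda$.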
Furthermore, \begin{equation} \label{RP} \widehat{D}^\Lambda_p = \widehat{D}^\Lambda_{-p} = \sum_{\ell'\in \Lambda} D^\Lambda_{\ell \ell'}\cos (p, \ell' - \ell) , \end{equation} and \begin{equation} \label{RP1} D^\Lambda_{\ell \ell'} = \frac{1}{|\Lambda|} \sum_{p \in \Lambda_*} \widehat{D}^\Lambda_p e^{\imath (p, \ell - \ell')} = \frac{1}{|\Lambda|} \sum_{p \in \Lambda_*} \widehat{D}^\Lambda_p \cos (p, \ell - \ell'). \end{equation} For $u_\Lambda = (u_\ell)_{\ell \in \Lambda}$, $u_\ell \in \mathbb{R}$, \begin{eqnarray} \label{MaR1} \left(u_\Lambda, D^\Lambda u_\Lambda \right)_{l^2(\Lambda)} & \stackrel{\rm def}{=} & \sum_{\ell, \ell'\in \Lambda} D^\Lambda_{\ell \ell'} u_{\ell} u_{\ell'} \\ & = & \sum_{j=1}^\nu \bigg{\langle} \left[\sum_{\ell \in \Lambda} u_\ell \int_0^\beta \omega_\ell^{(j)}(\tau) {\rm d}\tau \right]^2\bigg{\rangle}_{\nu_{\Lambda}^{\rm per}}\geq 0. \nonumber \end{eqnarray} Thereby, the operator $D^\Lambda : l^2(\Lambda) \rightarrow l^2(\Lambda)$ is strictly positive; hence, all its eigenvalues $\widehat{D}^\Lambda_p$ are also strictly positive. Suppose now that we are given a continuous function $\widehat{B}: (-\pi, \pi]^d \rightarrow (0, +\infty]$ with the following properties: \begin{eqnarray} \label{rp40x} & & {\rm (i)} \qquad \int_{(-\pi, \pi]^d} \widehat{B} (p) {\rm d}p < \infty,\\ & & {\rm (ii)} \qquad \widehat{D}^\Lambda_p \leq \widehat{B} (p), \quad {\rm for} \ {\rm all} \ p \in \Lambda_*\setminus \{0\}, \nonumber \end{eqnarray} holding for all boxes $\Lambda$. Then we set \begin{equation} \label{RP2} B_{\ell \ell'} = \frac{1}{(2\pi)^d} \int_{(-\pi, \pi]^d} \widehat{B} (p) \cos (p , \ell -\ell'){\rm d}p, \quad \ell , \ell' \in \mathbb{L}, \end{equation} and \begin{equation} \label{RP3} B_{\ell \ell'}^\Lambda = \frac{1}{|\Lambda|} \sum_{p \in \Lambda_* \setminus\{0\}} \widehat{B} (p) \cos (p , \ell -\ell'), \quad \ell, \ell' \in \Lambda. 
\end{equation} We also set $B_{\ell \ell'}^\Lambda = 0$ if either of $\ell , \ell'$ belongs to $\Lambda^c$. \begin{proposition} \label{RPpn} For every $\ell , \ell'$, it follows that $B^{\Lambda_L}_{\ell \ell'} \rightarrow B_{\ell \ell'}$ as $L \rightarrow +\infty$. \end{proposition} {\it Proof:} By (\ref{rp40x}), $\widehat{B} (p) \cos (p , \ell -\ell')$ is an absolutely integrable function in the sense of the improper Riemann integral. The right-hand side of (\ref{RP3}) is a Riemann sum for its integral; thereby, the stated convergence is obtained in a standard way. $\square$ From claim (i) of (\ref{rp40x}) by the Riemann-Lebesgue lemma, see page 116 in \cite{[LiebL]}, one obtains \begin{equation} \label{RL} \lim_{|\ell - \ell'|\rightarrow +\infty} B_{\ell \ell'} = 0. \end{equation} \begin{lemma} \label{RPlm} For every box $\Lambda$ and any $\ell , \ell'\in \Lambda$, it follows that \begin{equation} \label{RP5} D^\Lambda_{\ell \ell'} \geq \left(D^\Lambda_{\ell \ell} - B^\Lambda_{\ell \ell} \right) + B^\Lambda_{\ell \ell'}. \end{equation} \end{lemma} {\it Proof:} By (\ref{RP1}), (\ref{RP3}), and claim (ii) of (\ref{rp40x}), one has \begin{eqnarray*} D^\Lambda_{\ell \ell} - D^\Lambda_{\ell \ell'} & = & \frac{2}{|\Lambda|} \sum_{p\in \Lambda_* \setminus \{0\}} \widehat{D}^\Lambda_p \sin^2 \left( (p , \ell - \ell')/2\right) \\ & \leq & \frac{2}{|\Lambda|} \sum_{p\in \Lambda_* \setminus \{0\}} \widehat{B}(p) \sin^2 \left( (p , \ell - \ell')/2\right) \\ & = & B^\Lambda_{\ell \ell} - B^\Lambda_{\ell \ell'}, \end{eqnarray*} which yields (\ref{RP5}). $\square$ For $\mu \in \mathcal{G}^{\rm t}$, we set, c.f., (\ref{nrp10}), \begin{equation} \label{nrp90} D^\mu_{\ell\ell'} = \beta \int_0^\beta \langle (\omega_\ell (\tau), \omega_{\ell'} (\tau') )\rangle_{\mu} {\rm d}\tau'.
\end{equation} \begin{corollary} \label{RPco} For every periodic $\mu\in \mathcal{G}^{\rm t}$, it follows that \begin{equation} \label{RP6} D^\mu_{\ell \ell'} \geq \left(D^\mu_{\ell \ell} - B_{\ell \ell} \right) + B_{\ell \ell'}, \end{equation} holding for any $\ell , \ell'$. \end{corollary} {\it Proof:} For periodic $\mu\in \mathcal{G}^{\rm t}$, one finds a sequence $\{L_n\}_{n\in \mathbb{N}}\subset \mathbb{N}$, such that $\pi^{\rm per}_{\Lambda_{L_n}} \Rightarrow \mu$ as $n \rightarrow +\infty$, see Proposition \ref{periodtm}. This fact alone does not yet imply the convergence $D^{\Lambda_{L_n}}_{\ell \ell'} \rightarrow D^{\mu}_{\ell \ell'}$, which is what we need. To prove the latter convergence one employs Lemma \ref{d5lm} and proceeds as in the proof of claim (b) of Lemma 5.2 in \cite{[KoT]}. Then (\ref{RP6}) follows from (\ref{RP5}) and Proposition \ref{RPpn}. $\square$ One observes that the first summand in (\ref{RP6}) is independent of $\ell$, whereas the second one may be neither positive nor summable. Suppose now that there exists a positive $\vartheta$ such that, for any box $\Lambda$, \begin{equation}\label{rp40w} D^\Lambda_{\ell \ell} \geq \vartheta. \end{equation} Then, in view of (\ref{RL}), the phase transition occurs if \begin{equation} \label{RP4} \quad \vartheta > B_{\ell \ell}. \end{equation} For certain versions of our model, we find the function $\widehat{B}$ obeying the conditions (\ref{rp40x}) and the bound (\ref{RP4}). Note that under (\ref{rp40w}) and (\ref{RP4}) by (\ref{RP6}) it follows that \begin{equation} \label{nrp1} \lim_{L\rightarrow +\infty} \frac{1}{|\Lambda_L|} \sum_{\ell'\in \Lambda_L} D^\mu_{\ell \ell'} = \lim_{L\rightarrow +\infty} \frac{1}{|\Lambda_L|^2} \sum_{\ell, \ell'\in \Lambda_L} D^\mu_{\ell \ell'} > 0. \end{equation} Let us now consider other possibilities for defining phase transitions in translation invariant versions of our model.
For a box $\Lambda$, see (\ref{box}), we introduce \begin{eqnarray} \label{rpp} P_\Lambda & = & \frac{1}{(\beta |\Lambda|)^2} \sum_{\ell ,\ell'\in \Lambda} D^\Lambda_{\ell \ell'} \\ & = & \int_{\Omega_{ \Lambda}} \left\vert\frac{1}{\beta |\Lambda|}\sum_{\ell \in \Lambda}\int_0^\beta \omega_\ell (\tau) {\rm d}\tau \right\vert^2 \nu^{\rm per}_{ \Lambda}({\rm d }\omega_\Lambda), \nonumber \end{eqnarray} and set \begin{equation} \label{rpp1} P \ \stackrel{\rm def}{=} \ \limsup_{L \rightarrow +\infty } P_{\Lambda_L}. \end{equation} \begin{definition} \label{rppdf} The above $P$ is called the order parameter. If $P>0$ for given values of $\beta$ and the model parameters, then there exists a long range order. \end{definition} By standard arguments one proves the following \begin{proposition} \label{rpppn} If (\ref{rp40w}) and (\ref{RP4}) hold, then $P>0$. \end{proposition} The appearance of the long range order, which in a more `physical' context is identified with a phase transition, does not imply the phase transition in the sense of Definition \ref{phdef}. At the same time, Definition \ref{phdef} applies also to models without translation invariance. On the other hand, Definition \ref{rppdf} is based upon the local states only and hence can be formulated without employing $\mathcal{G}^{\rm t}$. Yet another `physical' approach to phase transitions in translation invariant models like (\ref{U1}), (\ref{U2}) is based on the properties of the pressure $p(h)$, which by Proposition \ref{pressuretm} exists and is the same in every state. It does not employ the set $\mathcal{G}^{\rm t}$ and is based on the continuity of the global polarization (\ref{ZiF5}), that is, on the differentiability of $p(h)$. \begin{definition} [Landau Classification] \label{landau} The model has a first order phase transition if $p'(h)$ is discontinuous at a certain $h_*$.
The model has a second order phase transition if there exists $h_*\in \mathbb{R}$ such that $p'(h)$ is continuous but $p''(h)$ is discontinuous at $h=h_*$. \end{definition} \begin{remark} \label{landaurk} Like in Definition \ref{phdef}, here we do not assume any symmetry of the model (except for the translation invariance). As $p(h)$ is convex, $p'(h)$ is nondecreasing; hence, $p''(h) \geq 0$. The discontinuity of the latter mentioned in Definition \ref{landau} includes the case $p''(h_*) = +\infty$, where the polarization $M(h)$ at $h=h_*$ grows infinitely fast, but is still continuous. \end{remark} The relationship between the first order phase transition and the long range order is established with the help of the following result, the proof of which can be done by a slight modification of the arguments used in \cite{[DLS]}, see Theorem 1.1 and its corollaries. Let $\{\mu_n\}_{n\in \mathbb{N}}$ be a sequence of probability measures on $\mathbb{R}$ and let $\{M_n\}_{n\in \mathbb{N}}$ be a sequence of positive real numbers such that $\lim_{n\rightarrow +\infty} M_n = + \infty$. We also suppose that, for any $y\in\mathbb{R}$, \begin{equation} \label{ggri} f(y) = \lim_{n \rightarrow + \infty} \frac{1}{M_n} \log \int e^{y u}\mu_n({\rm d}u) \end{equation} exists and is finite. As the function $f$ is convex, it has one-sided derivatives $f'_{\pm}(0)$, see Proposition \ref{convpn}. \begin{proposition}[Griffiths] \label{Grpn} Let the sequence of measures $\{\mu_n\}_{n\in \mathbb{N}}$ be as above. If $f'_+ (0) = f'_- (0) = \phi$ (i.e., $f$ is differentiable at $y=0$), then \begin{equation} \label{gri} \lim_{n\rightarrow +\infty} \int g(u/M_n) \mu_n({\rm d}u) = g(\phi), \end{equation} for any continuous $g:\mathbb{R}\rightarrow \mathbb{R}$, such that $|g(u)| \leq \lambda e^{\varkappa |u|}$ with certain $\lambda, \varkappa >0$.
Furthermore, for each such function $g$, \begin{equation} \label{gri1} \limsup_{n\rightarrow +\infty}\int g(u/M_n) \mu_n({\rm d}u) \leq \max_{z \in [f'_- (0), f'_+ (0)]} g (z). \end{equation} In particular, if $f'_- (0) = - f'_+ (0)$, then for any $k \in \mathbb{N}$, \begin{equation} \label{gri2} f'_+ (0) \geq \limsup_{n\rightarrow +\infty}\left(\int (u/M_n)^{2k} \mu_n({\rm d}u)\right)^{1/2k}. \end{equation} \end{proposition} Write, c.f., (\ref{A10}), \begin{equation} \label{gri3} N_{ \Lambda}^{\rm per} (h) = N_{ \Lambda}^{\rm per} (0) \int_{\Omega_{ \Lambda}} \exp\left( h \sum_{\ell \in \Lambda} \int_0^\beta \omega^{(1)}_\ell (\tau){\rm d}\tau \right) \nu^{0,\rm per}_{\Lambda}({\rm d}\omega_\Lambda), \end{equation} where $\nu^{0,\rm per}_{ \Lambda}$ is the local periodic Euclidean Gibbs measure with $h=0$. Now let $\{L_n\}_{n\in \mathbb{N}}\subset \mathbb{N}$ be a sequence such that the sequences of local measures $\{\nu^{0,\rm per}_{ \Lambda_{L_n}}\}$ and $\{\nu^{\rm per}_{ \Lambda_{L_n}}\}$ converge to the corresponding periodic Euclidean Gibbs measures $\mu^{0}$ and $\mu$ respectively. Set \begin{equation} \label{gri4} \mathcal{X}_n = \left\{ \omega_{\Lambda_{L_n}} \in \Omega_{\Lambda_{L_n}} \ \left\vert \ \exists u \in \mathbb{R}: \ \ \sum_{\ell \in \Lambda_{L_n}} \int_0^\beta \omega^{(1)}_\ell (\tau) {\rm d}\tau = u \right. \right\}.\end{equation} Clearly, each such $\mathcal{X}_n$ is measurable and isomorphic to $\mathbb{R}$. Let $\mu_n$, $n \in \mathbb{N}$, be the projection of $\nu^{0,\rm per}_{ \Lambda_{L_n}}$ onto this $\mathcal{X}_n$. Then \begin{equation} \label{gri5} p(h) = p(0) + f(h), \end{equation} where $f$ is given by (\ref{ggri}) with such $\mu_n$ and $M_n = |\Lambda_{L_n}|= (2L_n)^{d}$. Thereby, we apply (\ref{gri2}) with $k=2$ and obtain \[ p'_{+} (0) \geq \beta \limsup_{n\rightarrow +\infty}\sqrt{ P_{\Lambda_{L_n}}}.
\] Thus, in the case where the model is just rotation and translation invariant, the existence of the long range order implies the first order phase transition. Consider now the second order phase transitions in the rotation invariant case. For $\alpha \in [0, 1]$, we set, c.f., (\ref{rpp}), \begin{equation} \label{gri6} P_\Lambda^{(\alpha)} = \frac{\beta^{-2}}{|\Lambda|^{1+\alpha}}\int_{\Omega_{ \Lambda}} \left\vert\sum_{\ell \in \Lambda}\int_0^\beta \omega_\ell (\tau) {\rm d}\tau \right\vert^2 \nu^{\rm per}_{ \Lambda}({\rm d }\omega_\Lambda), \end{equation} where $\Lambda$ is a box. Then $P_\Lambda^{(1)} =P_\Lambda$ and, as we have just shown, the existence of a positive limit (\ref{rpp1}) yields a first order phase transition. \begin{proposition} \label{sophpn} Suppose that there exists $\alpha \in (0,1)$ such that, for a sequence $\{L_n\}$, there exists a finite limit \begin{equation} \label{gri7} \lim_{n\rightarrow +\infty} P_{\Lambda_{L_n}}^{(\alpha)} \ \stackrel{\rm def}{=} \ P^{(\alpha)} >0. \end{equation} Then the model has a second order phase transition at $h=0$. \end{proposition} {\it Proof:} We observe that \[ P_{\Lambda}^{(\alpha)} = \nu p_\Lambda ''(0)/\beta^2 |\Lambda|^\alpha. \] Then there exists $c>0$, such that \[ p''_{\Lambda_{L_n}} (0) \geq c |\Lambda_{L_n}|^\alpha, \quad \ \ {\rm for} \ \ {\rm all} \ \ n \in \mathbb{N}. \] As each $p''_\Lambda$ is continuous, one finds a sequence $\{\delta_n\}_{n\in \mathbb{N}}$ such that $\delta_n \downarrow 0$ and \begin{equation} \label{gri8} p''_{\Lambda_{L_n}} (h) \geq \frac{1}{2} c |\Lambda_{L_n}|^\alpha, \quad \ \ {\rm for} \ \ {\rm all} \ \ h \in [0, \delta_n] \ \ \ {\rm and} \ \ \ n \in \mathbb{N}. \end{equation} If $p''(0)$ were finite, see Remark \ref{landaurk}, one would get \[ p''(0) = \lim_{n\rightarrow +\infty} \left[ p'_{\Lambda_{L_n}}(\delta_n) - p'_{\Lambda_{L_n}} (0) \right]/\delta_n, \] which contradicts (\ref{gri8}).
$\square$ \vskip.1cm Proposition \ref{sophpn} remains true if one replaces in (\ref{gri6}) the periodic local measure $\nu_{ \Lambda}^{\rm per}$ by the one corresponding to the zero boundary condition, i.e., by $\nu_{ \Lambda}$. Then the limit in (\ref{gri7}) can be taken along any van Hove sequence $\mathcal{L}$. We recall that Proposition \ref{sophpn} describes the rotation invariant case. The existence of a positive $P^{(\alpha)}$ with $\alpha>0$ may be interpreted as follows. According to the central limit theorem for independent identically distributed random variables, for our model with $J_{\ell \ell'} =0$ and $V_\ell = V$, the only possibility to have a finite positive limit in (\ref{gri7}) is to set $\alpha =0$. If $P^{(0)}< \infty$ for nonzero interaction, one can say that the dependence between the temperature loops is weak; this holds for small $\hat{J}_0$. Of course, in this case $P^{(\alpha)} = 0$ for any $\alpha >0$. If $P^{(\alpha)}$ becomes positive for a certain $\alpha \in (0,1)$, one says that a strong dependence between the loops appears. In this case, the central limit theorem holds with an abnormal normalization. However, this dependence is not strong enough to make $p'$ discontinuous, which occurs for $\alpha =1$, where a new law of large numbers comes into force. In statistical physics, the point at which $P^{(\alpha)} > 0$ for $\alpha \in(0,1)$ is called {\it a critical point}. The quantity $P^{(0)}$ is called {\it susceptibility}; it becomes discontinuous at the critical point. Its singularity at this point is connected with the value of $\alpha$ for which $P^{(\alpha)} > 0$. The above analysis opens the possibility of extending the notion of the critical point to models which are not translation invariant.
\begin{definition} \label{Weberdf} The rotation invariant model has a critical point if there exist a van Hove sequence $\mathcal{L}$ and $\alpha \in (0,1)$ such that \begin{equation} \label{gri9} \lim_{\mathcal{L}}\frac{1}{|\Lambda|^{1+\alpha}}\int_{\Omega_{ \Lambda}} \left\vert\sum_{\ell \in \Lambda}\int_0^\beta \omega_\ell (\tau) {\rm d}\tau \right\vert^2 \nu_{ \Lambda}({\rm d }\omega_\Lambda) >0 \end{equation} at certain values of the model parameters, including $h$ and $\beta$. \end{definition} Note that by Proposition \ref{sophpn}, it follows that in the translation invariant case the notions of the critical point and of the second order phase transition coincide. \subsection{Infrared bound} \label{3.2.ss} Here, for the translation and rotation invariant version of our model, we find the function $\widehat{B}$ obeying (\ref{rp40x}). For a box $\Lambda$, let $E$ be the set of all unordered pairs $\langle \ell , \ell'\rangle$, $\ell , \ell'\in \Lambda$, such that $|\ell - \ell'|_\Lambda=1$, see (\ref{box1}).
Suppose also that the interaction intensities (\ref{A2}) are such that $J^\Lambda_{\ell \ell'} = J>0$ if and only if $\langle \ell , \ell'\rangle \in E$ and hence the measure (\ref{A5}) can be written \begin{equation} \label{rp16} \nu^{\rm per}_{ \Lambda} ({\rm d}\omega_\Lambda) = \frac{1}{Y_\Lambda (0) } \exp\left( - \frac{J}{2}\sum_{\langle \ell, \ell'\rangle \in E} \|\omega_\ell - \omega_{\ell'}\|^2_{L^2_\beta} \right) \sigma_{ \Lambda} ({\rm d}\omega_\Lambda), \end{equation} where \begin{eqnarray} \label{rp17} & & \sigma_{ \Lambda} ({\rm d}\omega_\Lambda) \\ & & \qquad = \exp\left( Jd \sum_{\ell \in \Lambda} \|\omega_\ell\|^2_{L^2_\beta} - \sum_{\ell \in \Lambda} \int_0^\beta V(\omega_\ell (\tau)) {\rm d}\tau \right)\chi_{ \Lambda}({\rm d}\omega_\Lambda), \nonumber \end{eqnarray} and \begin{equation} \label{rp18} Y_\Lambda (0) = \int_{\Omega_{ \Lambda}} \exp\left( - \frac{J}{2}\sum_{\langle \ell, \ell'\rangle \in E} \|\omega_\ell - \omega_{\ell'}\|^2_{L^2_\beta} \right) \sigma_{ \Lambda} ({\rm d}\omega_\Lambda). \end{equation} With every edge $\langle\ell , \ell' \rangle \in E$ we associate $b_{\ell \ell'} \in L^2_\beta$ and consider \begin{equation}\label{rp19} Y_\Lambda (b) = \int_{\Omega_{ \Lambda}} \exp\left( - \frac{J}{2}\sum_{\langle \ell, \ell'\rangle \in E} \|\omega_\ell - \omega_{\ell'} - b_{\ell \ell'}\|^2_{L^2_\beta} \right) \sigma_{ \Lambda} ({\rm d}\omega_\Lambda). \end{equation} By standard arguments, see \cite{[KKE]} and the references therein, one proves the following \begin{lemma} [Gaussian Domination] \label{irelm} For every $b = (b_{\ell \ell'})_{\langle \ell , \ell'\rangle \in E}$, $b_{\ell \ell'} \in L^2_\beta$, it follows that \begin{equation} \label{rp20} Y_\Lambda (b) \leq Y_\Lambda (0). 
\end{equation} \end{lemma} Let $\mathcal{X}_E$ be the real Hilbert space \begin{equation} \label{rp33} \mathcal{X}_E = \{ b = (b_{\ell\ell'})_{\langle \ell , \ell'\rangle \in E} \ | \ b_{\ell \ell'} \in L^2_\beta\}, \end{equation} with scalar product \begin{equation} \label{rp34} (b,c)_{\mathcal{X}_E} = \sum_{\langle \ell , \ell'\rangle \in E} (b_{\ell \ell'}, c_{\ell \ell'})_{L^2_\beta}. \end{equation} To simplify notation we write $e = \langle \ell , \ell' \rangle$. A bounded linear operator $Q: \mathcal{X}_E \rightarrow \mathcal{X}_E $ may be defined by means of its kernel $Q^{jj'}_{ee'} (\tau, \tau')$, $j,j' = 1, \dots , \nu$, $e,e' \in E$, and $\tau , \tau' \in [0,\beta]$. That is \begin{equation} \label{rp35} \left( Q b \right)^{(j)}_e (\tau) = \sum_{j'=1}^\nu \sum_{e' \in E} \int_0^\beta Q^{jj'}_{ee'} (\tau, \tau') b_{e'}^{(j')} (\tau') {\rm d}\tau'. \end{equation} Let us study the operator with the following kernel \begin{equation} \label{rp36} Q^{jj'}_{\langle \ell_1 , \ell_1' \rangle \langle \ell_2 , \ell_2' \rangle} (\tau, \tau') = \bigg{ \langle} \left[ \omega^{(j)}_{\ell_1} (\tau) - \omega^{(j)}_{\ell_1'} (\tau)\right] \cdot\left[ \omega^{(j')}_{\ell_2} (\tau') - \omega^{(j')}_{\ell_2'} (\tau')\right] \bigg{\rangle}_{\nu_{ \Lambda}^{\rm per}}, \end{equation} where the expectation is taken with respect to the measure (\ref{rp16}). This operator is positive. Indeed, \begin{eqnarray*} (b , Q b)_{\mathcal{X}_E} = \bigg{ \langle} \left[ \sum_{\langle \ell , \ell'\rangle \in E} (\omega_\ell - \omega_{\ell'}, b_{\ell \ell'})_{L^2_\beta}\right]^2 \bigg{\rangle}_{\nu_{ \Lambda}^{\rm per}} \geq 0. \end{eqnarray*} The kernel (\ref{rp36}) can be expressed in terms of the Matsubara functions; thus, as a function of $\tau, \tau'$, it has the property (\ref{a11}). We employ the latter by introducing yet another Fourier transformation. 
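Before introducing that transformation, note that the positivity of $Q$ displayed above is a Gram-matrix property, which the following finite-dimensional sketch makes explicit (the chain of sites, the Gaussian sampling, and all sizes are illustrative surrogates, not the measure (\ref{rp16})):

```python
import random

random.seed(1)

# A chain of sites with nearest-neighbor edges; the loops omega_ell are
# discretized to one scalar per site (an illustrative surrogate only).
n_sites = 6
edges = [(i, i + 1) for i in range(n_sites - 1)]
samples = [[random.gauss(0.0, 1.0) for _ in range(n_sites)]
           for _ in range(300)]

def diff(w, e):
    # the difference w_l - w_l' entering the kernel Q
    return w[e[0]] - w[e[1]]

# empirical kernel Q_{e e'} = < diff(., e) * diff(., e') >
Q = [[sum(diff(w, e) * diff(w, f) for w in samples) / len(samples)
      for f in edges] for e in edges]

b = [random.gauss(0.0, 1.0) for _ in edges]
quad = sum(b[i] * Q[i][j] * b[j]
           for i in range(len(edges)) for j in range(len(edges)))
# quad = < [sum_e diff(., e) b_e]^2 > >= 0, mirroring the display above
```

Whatever sampling measure replaces the expectation, `Q` stays a covariance (Gram) matrix, so the quadratic form is nonnegative.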
Set \begin{equation} \label{FY} \mathcal{K} = \{ k = (2\pi/\beta) \kappa \ | \ \kappa \in \mathbb{Z}\}, \end{equation} \begin{equation} \label{FY1} e_k (\tau) = \left\{ \begin{array}{ll} \sqrt{2/\beta} \cos k \tau , \quad &{\rm if} \ k>0; \\ - \sqrt{2/\beta} \sin k \tau , \quad &{\rm if} \ k<0; \\ \beta^{-1/2}, \quad &{\rm if} \ k=0. \end{array} \right. \end{equation} The transformation we need is \begin{eqnarray} \label{FY2} \hat{\omega}_\ell^{(j)} (k) & = & \int_0^\beta {\omega}_\ell^{(j)} (\tau) e_k (\tau) {\rm d}\tau, \\ {\omega}_\ell^{(j)} (\tau) & = & \sum_{k \in \mathcal{K}} \hat{\omega}_\ell^{(j)} (k) e_k (\tau). \end{eqnarray} Then the property (\ref{a11}) yields, cf. (\ref{rp40k}), \[ \langle \hat{\omega}^{(j)}_\ell (k) \hat{\omega}^{(j')}_{\ell'} (k' ) \rangle_{\nu_{\Lambda}^{\rm per}} = 0 \quad {\rm if} \ \ k \neq k', \ \ {\rm or} \ \ j \neq j'. \] Taking this into account we employ in (\ref{rp36}) the transformation (\ref{FY2}) and obtain \begin{equation} \label{rp37} Q^{jj'}_{\langle \ell_1 , \ell_1' \rangle \langle \ell_2 , \ell_2' \rangle} (\tau, \tau') = \delta_{jj'} \sum_{k \in \mathcal{K}} \widehat{Q}_{\langle \ell_1 , \ell_1' \rangle \langle \ell_2 , \ell_2' \rangle} (k) e_k (\tau) e_k (\tau'), \end{equation} with \begin{equation} \label{rp38} \widehat{Q}_{\langle \ell_1 , \ell_1' \rangle \langle \ell_2 , \ell_2' \rangle} (k) = \bigg{ \langle }\left[\hat{\omega}^{(j)}_{\ell_1} (k) - \hat{\omega}^{(j)}_{\ell'_1} (k) \right] \cdot \left[\hat{\omega}^{(j)}_{\ell_2} (k) - \hat{\omega}^{(j)}_{\ell'_2} (k) \right]\bigg{\rangle}_{\nu_{\Lambda}^{\rm per}}. \end{equation} In view of the periodic conditions imposed on the boundaries of the box $\Lambda$, the latter kernel, as well as the one given by (\ref{rp36}), is invariant with respect to the translations of the corresponding torus. This allows us to `diagonalize' the kernel (\ref{rp38}) by means of a spatial Fourier transformation (\ref{rp39}), (\ref{rp40}). 
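For (\ref{FY2}) to be an expansion in an orthonormal basis, the trigonometric system must carry the standard normalization $e_0 = \beta^{-1/2}$ and amplitude $\sqrt{2/\beta}$ for $k \neq 0$; this can be confirmed by a midpoint-quadrature sketch (the value of $\beta$ and the truncation of $\mathcal{K}$ are arbitrary):

```python
import math

beta = 2.0  # illustrative value of the inverse temperature

def e_k(k, tau):
    # standard orthonormal Matsubara basis on [0, beta]
    if k > 0:
        return math.sqrt(2.0 / beta) * math.cos(k * tau)
    if k < 0:
        return -math.sqrt(2.0 / beta) * math.sin(k * tau)
    return 1.0 / math.sqrt(beta)

ks = [2.0 * math.pi / beta * kappa for kappa in range(-3, 4)]
n = 2000
taus = [(i + 0.5) * beta / n for i in range(n)]

def inner(k1, k2):
    # midpoint rule for the L^2_beta scalar product
    return sum(e_k(k1, t) * e_k(k2, t) for t in taus) * beta / n

# Gram matrix of the truncated system: should be (numerically) the identity
gram = {(k1, k2): inner(k1, k2) for k1 in ks for k2 in ks}
```

Since the integrands are trigonometric polynomials over a full period, the midpoint rule here is exact up to rounding, so the Gram matrix reproduces the identity to machine precision.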
Then the spatial periodicity of the state $\langle \cdot \rangle_{\nu_{ \Lambda}^{\rm per}}$ yields \begin{equation} \label{rp40a} \langle \hat{\omega}^{(j)} (p,k) \hat{\omega}^{(j)} (p',k) \rangle_{\nu_{ \Lambda}^{\rm per}} = 0 \quad {\rm if} \ \ p + p' \neq 0. \end{equation} Taking this into account we obtain \begin{eqnarray} \label{rp40b} \widehat{Q}_{\langle \ell_1 , \ell_1' \rangle \langle \ell_2 , \ell_2' \rangle} (k) & = & \sum_{p \in \Lambda_*} \langle \hat{\omega}^{(j)} (p,k) \hat{\omega}^{(j)} (-p,k) \rangle_{\nu_{ \Lambda}^{\rm per}}\\ & \times &\left(e^{\imath (p,\ell_1)} - e^{\imath (p,\ell'_1)} \right)/|\Lambda|^{1/2} \nonumber \\ & \times & \left(e^{-\imath (p,\ell_2)} - e^{\imath (-p,\ell'_2)} \right)/|\Lambda|^{1/2}. \nonumber \end{eqnarray} Since the summand corresponding to $p=0$ equals zero, the sum can be restricted to $\Lambda_* \setminus \{0\}$. This representation, however, cannot serve as a spectral decomposition similar to (\ref{rp37}) because the eigenfunctions here are not normalized. Indeed, \begin{eqnarray*} \sum_{\langle \ell , \ell' \rangle \in E} \left(e^{\imath (p,\ell)} - e^{\imath (p,\ell')} \right)/|\Lambda|^{1/2} \times \left(e^{-\imath (p,\ell)} - e^{-\imath (p,\ell')} \right)/|\Lambda|^{1/2} = 2 \mathcal{E}(p), \end{eqnarray*} where \begin{equation} \label{rp40c} \mathcal{E}(p) \ \stackrel{\rm def}{=} \ \sum_{j=1}^d [ 1 - \cos p_j ]. \end{equation} Then we set \begin{equation} \label{rp43} \sigma_{\ell \ell'} (p) = \left(e^{\imath (p,\ell)} - e^{\imath (p,\ell')} \right)/\sqrt{2|\Lambda| \mathcal{E}(p)}, \quad p \in \Lambda_* \setminus \{0\}, \end{equation} and \begin{equation} \label{rp42} \widehat{Q} (p,k) = 2 \mathcal{E}(p) \langle \hat{\omega}^{(j)} (p,k) \hat{\omega}^{(j)} (-p,k) \rangle_{\nu_{ \Lambda}^{\rm per}}, \quad p \in \Lambda_* \setminus \{0\}. 
\end{equation} Thereby, \begin{eqnarray} \label{rp41} & & Q_{\langle \ell_1 , \ell_1' \rangle \langle \ell_2 , \ell_2' \rangle} (\tau, \tau') = \\ & & \quad = \sum_{p \in \Lambda_* \setminus\{0\}} \sum_{k \in \mathcal{K}}\widehat{Q} (p,k) \sigma_{\ell_1 \ell_1'} (p) \sigma_{\ell_2 \ell_2'} (-p) e_k(\tau) e_k (\tau'), \nonumber \end{eqnarray} which is the spectral decomposition of the operator (\ref{rp35}). Now we show that the eigenvalues (\ref{rp42}) have a specific upper bound\footnote{Their natural lower bound is zero, as the operator (\ref{rp35}) is positive.}. \begin{lemma} \label{rp2tm} For every $p \in \Lambda_* \setminus\{0\}$ and $k \in \mathcal{K}$, the eigenvalues (\ref{rp42}) obey the estimate \begin{equation} \label{rp44a} \widehat{Q} (p,k) \leq 1 /J, \end{equation} where $J$ is the same as in (\ref{rp16}). From this estimate one gets \begin{equation} \label{rp44} \langle \hat{\omega}^{(j)} (p,k) \hat{\omega}^{(j)} (-p,k) \rangle_{\nu_{\Lambda}^{\rm per}} \leq \frac{1}{ 2 J \mathcal{E}(p)} , \quad p \in \Lambda_* \setminus \{0\}. \end{equation} \end{lemma} {\it Proof:} The estimate in question will be obtained from the Gaussian domination (\ref{rp20}). For $t \in \mathbb{R}$ and a given $b \in \mathcal{X}_E$, we consider the function $\phi(t) = Y_\Lambda (t b)$. By Lemma \ref{irelm}, $\phi'' (0) \leq 0$. Computing the derivative from (\ref{rp19}) we get \[ \phi'' (0) = J\, Y_\Lambda (0)\left[ J (b, Q b)_{\mathcal{X}_E} - \|b\|^2_{\mathcal{X}_E}\right], \] where the operator $Q$ is defined by its kernel (\ref{rp36}). Then the estimate (\ref{rp44a}) is immediate. 
$\square$ \vskip.1cm By (\ref{rp40}), (\ref{rp37}), and (\ref{rp42}), we readily obtain \[ \langle (\hat{\omega}_p (\tau) , \hat{\omega}_{-p} (\tau') )\rangle_{\nu_{ \Lambda}^{\rm per}} = \frac{\nu}{2 \beta \mathcal{E}(p)} \sum_{k \in \mathcal{K}} \widehat{Q}(p,k) \cos[k(\tau - \tau')], \quad p\neq 0, \] which yields, see (\ref{rp40z}) and (\ref{rp44a}), \begin{equation} \label{rp45} \widehat{D}^\Lambda_p = \frac{ \beta \nu}{2 \mathcal{E}(p)} \widehat{Q}(p, 0) \leq \frac{ \beta \nu}{2 J \mathcal{E}(p)}, \quad p \neq 0. \end{equation} Comparing this estimate with (\ref{rp40x}) we have the following \begin{corollary} \label{Irco} If the model is translation and rotation invariant with the nearest neighbor interaction, then the infrared estimate (\ref{rp40x}) holds with \begin{equation} \label{rp45z} \widehat{B}(p) = \frac{ \beta \nu}{2 J \mathcal{E}(p)}, \quad p\in (-\pi , \pi]^d \setminus \{0\}, \qquad \widehat{B}(0) = +\infty. \end{equation} \end{corollary} \subsection{Phase transition in the translation and rotation invariant model} \label{3.3.ss} In this subsection, we consider the model described by Corollary \ref{Irco}. First we obtain the lower bounds for \[ \langle (\omega_{\ell} (\tau), \omega_\ell(\tau))\rangle_{\nu_\Lambda^{\rm per}}, \] from which we then obtain the bounds (\ref{rp40w}). In the case where the anharmonic potential has the form \begin{equation} \label{rp46} V(u) = - b |u|^2 + b_2|u|^4, \quad b > a/2, \ \ b_2>0, \end{equation} $a$ being the same as in (\ref{U1}), the bound (\ref{rp40w}) can be found explicitly. We begin by considering this special case. \begin{lemma} \label{ph1lm} Let $V$ be as in (\ref{rp46}). Then, for every $\Lambda\Subset \mathbb{L}$, \begin{equation} \label{rp47a} \langle (\omega_{\ell} (\tau), \omega_\ell(\tau))\rangle_{\nu_\Lambda^{\rm per}} \geq \frac{(2b - a) \nu}{ 4 b_2 (\nu + 2)}\ \stackrel{\rm def}{=} \vartheta_{*}. 
\end{equation} \end{lemma} {\it Proof:} Let $A$ be a self-adjoint operator, such that the expressions below make sense. Then \begin{eqnarray} \label{rp47b} & & \varrho_{ \Lambda}^{\rm per} \left([A, [H^{\rm per}_\Lambda,A]]\right) \\ & & \quad = \varrho_{ \Lambda}^{\rm per} \left( A H^{\rm per}_\Lambda A + A H^{\rm per}_\Lambda A - A A H^{\rm per}_\Lambda - H^{\rm per}_\Lambda A A \right) \nonumber \\ & & \quad = \frac{1}{Z^{\rm per}_{\beta ,\Lambda}} \sum_{s, s' \in \mathbb{N}} \left\vert A_{ss'}\right\vert^2 \left(E^{\rm per}_{s'} - E^{\rm per}_{s} \right)\left\{ \exp\left[ - \beta E^{\rm per}_{s} \right] - \exp\left[ - \beta E^{\rm per}_{s'} \right]\right\} \nonumber \\ & & \quad \geq 0.\nonumber \end{eqnarray} Here $E^{\rm per}_s$, $s\in \mathbb{N}$, are the eigenvalues of the periodic Hamiltonian (\ref{A5a}), $A_{ss'}$ are the corresponding matrix elements of $A$, and $\varrho_{\Lambda}^{\rm per}$ is the periodic local Gibbs state (\ref{A5b}). By the Euclidean representation, \[ \langle (\omega_{\ell} (\tau), \omega_\ell(\tau))\rangle_{\nu_\Lambda^{\rm per}} = \sum_{j=1}^\nu\bigg{\langle} \left( \omega_\ell^{(j)} (0) \right)^2 \bigg{\rangle}_{\nu_{ \Lambda}^{\rm per}} = \sum_{j=1}^\nu \varrho^{\rm per}_{ \Lambda}\left[ \left( q_\ell^{(j)} \right)^2 \right]. \] Then we take in (\ref{rp47b}) $A = p_\ell^{(j)}$, $j = 1, \dots , \nu$, make use of the commutation relation (\ref{cr}), take into account the rotation invariance, and arrive at \begin{eqnarray} \label{rp47c} \varrho_{ \Lambda}^{\rm per} \left([A, [H^{\rm per}_\Lambda,A]]\right) & = & \varrho_{ \Lambda}^{\rm per} \left( - 2 b + a + 4 b_2 |q_\ell|^2 + 8 b_2 (q_\ell^{(j)})^2 \right) \\ & = & - 2b + a + 4 b_2 (\nu + 2) \bigg{\langle} \left[ \omega_\ell^{(j)} (0) \right]^2\bigg{\rangle}_{\nu_{ \Lambda}^{\rm per}} \nonumber \\ & \geq & 0, \nonumber \end{eqnarray} which yields (\ref{rp47a}). $\square$ Now we consider the case where $V$ is more general than (\ref{rp46}). 
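The inequality (\ref{rp47b}) used in the proof above holds for any finite Hermitian pair $(H, A)$, which can be checked directly on random matrices (a sketch; the dimension and $\beta$ are arbitrary, and $H$, $A$ are generic stand-ins for $H^{\rm per}_\Lambda$ and $p_\ell^{(j)}$):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, beta = 8, 1.3  # illustrative size and inverse temperature

def rand_sym(d):
    m = rng.standard_normal((d, d))
    return (m + m.T) / 2.0

H, A = rand_sym(dim), rand_sym(dim)

# Gibbs state rho = exp(-beta H) / Tr exp(-beta H) via eigendecomposition
evals, U = np.linalg.eigh(H)
rho = U @ np.diag(np.exp(-beta * evals)) @ U.T
rho = rho / np.trace(rho)

# double commutator [A, [H, A]] = 2 A H A - A A H - H A A
C = 2.0 * (A @ H @ A) - A @ A @ H - H @ A @ A
value = float(np.trace(rho @ C))
# value >= 0: in the spectral expansion, each term
# |A_{ss'}|^2 (E_{s'} - E_s)(exp(-beta E_s) - exp(-beta E_{s'}))
# is nonnegative, exactly as in the middle line of the display above
```

The nonnegativity is structural: both factors in each spectral term change sign together, so the product never goes negative.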
\begin{lemma} \label{FUlm} Let the model be translation and rotation invariant, with nearest neighbor interaction. Then, for every $\theta >0$, there exist positive $m_*$ and $J_*$, which may depend on $\beta$, $\theta$, and on the potential $V$, such that, for $m>m_*$ and $J>J_*$, \begin{equation} \label{FU2} \langle (\omega_{\ell} (\tau), \omega_\ell(\tau))\rangle_{\nu_\Lambda^{\rm per}} \geq \theta. \end{equation} \end{lemma} {\it Proof:} Let us rewrite (\ref{ZiFF10}) \begin{eqnarray} \label{KK} p^{\rm per}_{\Lambda} (J) & = & \log N_\beta \nonumber \\ & + & \frac{1}{|\Lambda|}\log \left\{\int_{\Omega_{ \Lambda}} \exp \left[Y_\Lambda (\omega_\Lambda ) \right] \prod_{\ell \in \Lambda} \lambda ({\rm d}\omega_\ell) \right\}, \end{eqnarray} where we indicate the dependence of the pressure on the interaction intensity and have set $h=0$ since the potential $V$ should be rotation invariant. Clearly, $p_\Lambda^{\rm per}(J)$ is convex; its derivative can be computed from (\ref{KK}). Then we get \begin{eqnarray} \label{FU1} \frac{J}{|\Lambda|} \sum_{\langle \ell , \ell' \rangle \in E} \big{\langle} (\omega_{\ell}, \omega_{\ell'})_{L^2_\beta}\big{\rangle}_{\nu^{\rm per}_{ \Lambda}} & = & J \frac{\partial}{\partial J}p_\Lambda^{\rm per}(J) \\ & \geq & p_\Lambda^{\rm per}(J) - p_\Lambda^{\rm per}(0) \nonumber \\ & = & \frac{1}{|\Lambda|}\log \left\{\int_{\Omega_{ \Lambda}} \exp \left[Y_\Lambda (\omega_\Lambda ) \right] \prod_{\ell \in \Lambda} \lambda ({\rm d}\omega_\ell) \right\}, \nonumber \end{eqnarray} where $E$ is the same as in (\ref{rp33}). 
By the translation invariance and (\ref{a11}), one gets \begin{eqnarray*} \big{\langle} (\omega_{\ell}, \omega_{\ell'})_{L^2_\beta}\big{\rangle}_{\nu^{\rm per}_{ \Lambda}} & \leq & \left( \big{\langle} (\omega_{\ell}, \omega_{\ell})_{L^2_\beta}\big{\rangle}_{\nu^{\rm per}_{\Lambda}} + \big{\langle} (\omega_{\ell'}, \omega_{\ell'})_{L^2_\beta}\big{\rangle}_{\nu^{\rm per}_{\Lambda}}\right)/2 \\ & = & \big{\langle} (\omega_{\ell}, \omega_{\ell})_{L^2_\beta}\big{\rangle}_{\nu^{\rm per}_{\Lambda}} = \beta \big{\langle} (\omega_{\ell} (\tau), \omega_\ell(\tau))\big{\rangle}_{\nu_\Lambda^{\rm per}}. \end{eqnarray*} Then we choose $\varepsilon$, $c$, and $n$ as in (\ref{ZiFF7}), apply this estimate in (\ref{FU1}), and obtain \begin{eqnarray} \label{KK1} \beta J d \big{\langle} (\omega_{\ell} (\tau), \omega_\ell(\tau))\big{\rangle}_{\nu_\Lambda^{\rm per}} & \geq & \frac{1}{|\Lambda|}\log \left\{\int_{\left[B^+(\varepsilon;c) \right]^{\nu|\Lambda|}} \exp \left[Y_\Lambda (\omega_\Lambda ) \right] \prod_{\ell \in \Lambda} \lambda ({\rm d}\omega_\ell) \right\} \nonumber \\ & \geq & \beta J \nu d \varepsilon^2 + \nu \log \gamma (m). \end{eqnarray} For $m>m_*$ given by (\ref{ZiFF8}), $\gamma (m)>0$ and the latter estimate makes sense. Given $\theta>0$, one picks $\varepsilon >\sqrt{\theta/\nu}$ and then finds $J_*$ such that the right-hand side of the latter estimate equals $\theta$ for $J=J_*$. $\square$ To convert (\ref{rp47a}) and (\ref{FU2}) into the bound (\ref{rp40w}) we need the function $f: [0, +\infty) \rightarrow [0,1)$ defined implicitly by \begin{equation} \label{rp48} f( u \tanh u) = u^{-1} \tanh u, \quad {\rm for} \ \ u>0; \quad {\rm and} \ \ f(0)=1. \end{equation} It is differentiable, convex, monotone decreasing on $(0, +\infty)$, and such that $t f(t) \rightarrow 1$ as $t \rightarrow +\infty$. For $t\geq 6$, $f(t) \approx 1/t$ to five-place accuracy, see Theorem A.2 in \cite{[DLS]}. 
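Since (\ref{rp48}) defines $f$ only implicitly, note that $u \mapsto u\tanh u$ is strictly increasing on $[0,+\infty)$, so $f$ can be evaluated by bisection; a sketch (the tolerance is arbitrary) that also reproduces the properties just listed:

```python
import math

def f(t, tol=1e-12):
    """Solve u * tanh(u) = t for u by bisection (the map is increasing),
    then return f(t) = tanh(u) / u; by convention f(0) = 1."""
    if t == 0.0:
        return 1.0
    # u * tanh(u) >= u - 1 for all u >= 0, so hi = t + 1.5 brackets the root
    lo, hi = 0.0, t + 1.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * math.tanh(mid) < t:
            lo = mid
        else:
            hi = mid
    u = 0.5 * (lo + hi)
    return math.tanh(u) / u
```

For instance, $f(0.01)\approx 0.9967$, while $10\,f(10)$ differs from $1$ by far less than $10^{-4}$, consistent with the five-place accuracy quoted above.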
By direct calculation, \begin{equation} \label{RRP} \frac{f' (u \tau) }{f(u \tau)} = - \frac{1}{ u \tau} \cdot \frac{ \tau - u (1 - \tau^2)}{\tau + u (1 - \tau^2)}, \qquad \tau = \tanh u. \end{equation} \begin{proposition} \label{rp48pn} For every fixed $\alpha >0$, the function \begin{equation} \label{rp48Z} \phi (t) = t \alpha f (t/\alpha), \quad t >0 \end{equation} is differentiable and monotone increasing to $\alpha^2$ as $t \rightarrow +\infty$. \end{proposition} {\it Proof:} By (\ref{RRP}), \[ \phi ' (t) = \frac{ 2 \alpha \tau (1 -\tau^2)} {\tau + u (1 - \tau^2)} >0, \qquad u \tau = u \tanh u = t /\alpha. \] The limit $\alpha^2$ is obtained from the corresponding asymptotic property of $f$. $\square$ \vskip.1cm Next, we need the following fact, known as the inequality of Bruch and Falk, see Theorem IV.7.5 on page 392 of \cite{[Simon]} or Theorem 3.1 in \cite{[DLS]}. \begin{proposition} \label{BFpn} Let $A$ be as in (\ref{rp47b}). Let also \begin{eqnarray*} & & b(A) = \beta^{-1} \int_0^\beta \varrho^{\rm per}_{ \Lambda} \left\{ A \exp[- \tau H^{\rm per}_\Lambda] A \exp[ \tau H^{\rm per}_\Lambda] \right\}{\rm d}\tau, \\ & & g(A) = \varrho^{\rm per}_{ \Lambda} \left( A^2 \right); \quad \ \ c(A) = \varrho^{\rm per}_{ \Lambda}\left\{[A,[\beta H^{\rm per}_\Lambda, A]] \right\}. \end{eqnarray*} Then \begin{equation} \label{rp49} b(A) \geq g(A) f\left(\frac{c(A)}{4 g(A)} \right), \end{equation} where $f$ is the same as in (\ref{rp48}). \end{proposition} Set \begin{equation} \label{rp50} \mathcal{J}(d) = \frac{1}{(2\pi)^d}\int_{(-\pi, \pi]^d} \frac{{\rm d} p}{\mathcal{E}(p)}, \end{equation} where $\mathcal{E}(p)$ is given by (\ref{rp40c}). The exact value of $\mathcal{J}(3)$ can be expressed in terms of complete elliptic integrals, see \cite{[Watson]} and also \cite{[Joyce]} for more recent developments. For our aims, it is enough to have the following property, see Theorem 5.1 in \cite{[DLP]}. 
\begin{proposition} \label{DLPpn} For $d\geq 4$, one has \begin{equation} \label{rp500} \frac{1}{d - 1/2} < \mathcal{J}(d) < \frac{1}{d - \alpha (d)}< \frac{1}{d - 1}, \end{equation} where $\alpha (d) \rightarrow 1/2$ as $d\rightarrow +\infty$. \end{proposition} Recall that $m$ is the reduced particle mass (\ref{In}). \begin{theorem} \label{phrottm} Let $d\geq 3$, the interaction be of nearest neighbor type, and the anharmonic potential be of the form (\ref{rp46}), which defines the parameter $\vartheta_*$. Let also the following condition be satisfied \begin{equation} \label{rp52} 8 m \vartheta_*^2 J > \mathcal{J}(d) . \end{equation} Then for every $\beta> \beta_*$, where the latter is the unique solution of the equation \begin{equation} \label{rp52a} 2 \beta J \vartheta_* f (\beta/ 4 m \vartheta_*) = \mathcal{J}(d), \end{equation} the model has a phase transition in the sense of Definition \ref{phdef}. \end{theorem} {\it Proof:} One observes that \begin{equation} \label{rp53} [ q^{(j)}_\ell, [ H_\Lambda^{\rm per}, q_\ell^{(j)}]] = 1/m, \quad \ell \in \Lambda. \end{equation} Then we take in (\ref{rp49}) $A= q_{\ell}^{(j)}$ and obtain \[ b(A) \geq \big{\langle} \left( \omega_\ell^{(j)}(0) \right)^2 \big{\rangle}_{\nu_{ \Lambda}^{\rm per}} f\left(\frac{\beta}{ 4 m \big{\langle} \left( \omega_\ell^{(j)}(0) \right)^2 \big{\rangle}_{\nu_{ \Lambda}^{\rm per}}} \right). \] By Proposition \ref{rp48pn}, $\vartheta f(\beta / 4m \vartheta)$ is an increasing function of $\vartheta$. Thus, by (\ref{rp47a}) and (\ref{nrp10}), \begin{equation} \label{rp54} D^\Lambda_{\ell \ell} \geq \beta^{2} \nu\vartheta_* f(\beta/ 4m \vartheta_* ), \end{equation} which yields the bound (\ref{rp40w}). Thereby, the condition (i) in (\ref{RP4}) takes the form \begin{equation} \label{rp51} \vartheta_* f \left( \beta / 4 m \vartheta_* \right) > \mathcal{J}(d) / 2 \beta J. 
\end{equation} By Proposition \ref{rp48pn}, the function \[ \phi (\beta) = 2 \beta J \vartheta_* f (\beta/ 4 m \vartheta_*)\] is monotone increasing and hits the level $\mathcal{J}(d)$ at a certain $\beta_*$. For $\beta > \beta_*$, the estimate (\ref{rp51}) holds, which yields $|\mathcal{G}_\beta^{\rm t}|>1$. $\square$ \vskip.1cm One observes that $f(\beta/4 m \vartheta_*) \rightarrow 1$ as $m \rightarrow +\infty$. In this limit, the condition (\ref{rp52}) turns into the corresponding condition for a classical model of $\phi^4$ anharmonic oscillators. Now let us turn to the general case. \begin{theorem} \label{phrot1tm} Let $d\geq 3$, the interaction be of nearest neighbor type, and the anharmonic potential be rotation invariant. Then, for every $\beta>0$, there exist $m_*$ and $J_*>0$, which may depend on $\beta$ and on the anharmonic potential, such that $|\mathcal{G}^{\rm t} | >1$ for $m>m_*$ and $J>J_*$. \end{theorem} {\it Proof:} Given positive $\beta$ and $\theta$, the estimate (\ref{FU2}) holds for big enough $m$ and $J$. Then one applies Proposition \ref{BFpn}, which yields that the condition (i) in (\ref{RP4}) is satisfied if \[ \theta f(\beta /4m \theta) > \mathcal{J} (d) / 2 \beta J. \] Then one sets $m_*$ to be as in (\ref{ZiFF8}) and $J_*$ to be the smallest value of $J$ for which both (\ref{FU2}) and the latter inequality hold. $\square$ \subsection{Phase transition in the symmetric scalar models} \label{6.3.2.ss} In the case $\nu=1$, we can extend the above results to models without translation invariance and with much more general $J_{\ell \ell'}$ and $V_\ell$. However, certain assumptions beyond (\ref{a1}) and (\ref{a2}) should be made. Suppose also that the interaction between the nearest neighbors is uniformly nonzero, i.e., \begin{equation} \label{Ip1} \inf_{|\ell -\ell'|=1} J_{\ell \ell'} \ \stackrel{\rm def}{=} \ J >0. 
\end{equation} Next we suppose that all $V_\ell$'s are even continuous functions and the upper bound in (\ref{A3}) can be chosen to obey the following conditions: \vskip.1cm \begin{tabular}{ll} (a) \ &for every $\ell$, \end{tabular} \vskip.1cm \begin{equation} \label{Iub1} V(u_\ell ) - V_\ell (u_\ell) \leq V(\tilde{u}_\ell ) - V_\ell (\tilde{u}_\ell) , \quad {\rm whenever} \ \ u_\ell^2 \leq \tilde{u}_\ell^2; \end{equation} \vskip.1cm \begin{tabular}{ll} (b) \ &the function $V$ has the form \end{tabular} \vskip.1cm \begin{equation} \label{Iub} V(u_\ell ) = \sum_{s=1}^r b^{(s)} u_\ell^{2s}; \quad 2 b^{(1)} < - a ; \ \ \ b^{(s)}\geq 0, \ s\geq 2, \end{equation} \vskip.1cm \begin{tabular}{ll} &where $a$ is as in (\ref{U1}) and $r\geq 2$ is either a positive integer or infinite;\\[.1cm] (c) &if $r=+\infty$, the series \end{tabular} \vskip.1cm \begin{equation} \label{Ip3} \mathit{\Phi}(\vartheta) = \sum_{s=2}^{+\infty} \frac{(2s)!}{2^{s-1}(s-1)!}{b}^{(s)} \vartheta^{s-1}, \end{equation} \vskip.1cm \begin{tabular}{ll} &converges at some $\vartheta>0$. \end{tabular} \vskip.1cm \noindent Since $2b^{(1)} + a <0$, the equation \begin{equation} \label{Ip4} a + 2b^{(1)} + \mathit{\Phi}(\vartheta) = 0, \end{equation} has a unique solution $\vartheta_* >0$. By the above assumptions, all $V_\ell$ are `uniformly double-welled'. If $V_\ell (u_\ell)= v_\ell (u_\ell^2)$ and the $v_\ell$ are differentiable, the condition (\ref{Iub1}) can be formulated as an upper bound for $v_\ell'$. Note that the pressure, as a unified characteristic of all Euclidean Gibbs states, makes sense for translation invariant models only. Thus, the notions mentioned in Definition \ref{landau} are not applicable to the versions of the model which do not possess this property. The main result of this subsection is contained in the following statement. \begin{theorem} \label{phsctm} Let the model be as just described. 
Let also the condition (\ref{rp52}) with $\vartheta_*$ defined by the equation (\ref{Ip4}) and $J$ defined by (\ref{Ip1}) be satisfied. Then for every $\beta >\beta_*$, where $\beta_*$ is defined by the equation (\ref{rp52a}), the model has a phase transition in the sense of Definition \ref{phdef}. If the model is translation invariant, the long range order and the first order phase transition take place at such $\beta$. \end{theorem} {\it Proof:} The proof is made by comparing the model under consideration with a reference model, which is the scalar model with the nearest neighbor interaction of intensity (\ref{Ip1}) and with the anharmonic potential (\ref{Iub}). Thanks to the condition (\ref{Iub1}), the reference model is more stable; hence, a phase transition in this model implies the same for the model considered. The comparison is conducted by means of correlation inequalities. The reference model is translation invariant and hence can be defined by its local periodic Hamiltonians \begin{equation} \label{Irf1} H^{\rm low}_{\Lambda} = \sum_{\ell \in \Lambda}\left[ H_\ell^{\rm har} + V(q_\ell)\right] - J \sum_{\langle \ell, \ell' \rangle\in E} q_\ell q_{\ell'}, \end{equation} where for a box $\Lambda$, $E$ is the same as in (\ref{rp16}); $H_\ell^{\rm har}$ is as in (\ref{U1}). For this model, we have the infrared estimate (\ref{rp45}) with $\nu=1$. Let us obtain the lower bound, see (\ref{rp47a}). To this end we use the inequalities (\ref{rp47b}), (\ref{rp47c}) and obtain \begin{eqnarray} \label{rp55} \qquad 0 &\leq & a + 2 b^{(1)} + \sum_{s=2}^r 2 s (2 s-1)b^{(s)} \big{\langle }\left[ \omega_\ell (0) \right]^{2(s-1)} \big{\rangle}_{\nu^{\rm low}_{ \Lambda}} \\ & \leq & a + 2 b^{(1)} + \sum_{s=2}^r 2 s (2 s-1)\frac{(2s-2)!}{2^{s-1} (s-1)!} \cdot b^{(s)} \left[\big{\langle} \left(\omega_\ell (0)\right)^2 \big{\rangle}_{\nu^{\rm low}_{ \Lambda}}\right]^{s-1}. 
\nonumber \end{eqnarray} Here $\nu^{\rm low}_{\Lambda}$ is the periodic Gibbs measure for the model (\ref{Irf1}). To get the second line we used the Gaussian upper bound inequality, see page 1031 in \cite{[KoT]} and page 1372 in \cite{[RevMF]}, which is applicable since all $b^{(s)}$, $s\geq 2$, are nonnegative. The solution of the latter inequality is \begin{equation} \label{rp55A} \big{\langle} \left( \omega_\ell(0) \right)^2 \big{\rangle}_{\nu_{ \Lambda}^{\rm low}} \geq \vartheta_*. \end{equation} Then the proof of the phase transition in the model (\ref{Irf1}) goes along the lines of the arguments used in proving Theorem \ref{phrottm}. Thus, for $\beta>\beta_*$, $\langle \omega_\ell (0) \rangle_{\mu^{\rm low}_+} >0$, where $\mu^{\rm low}_+$ is the corresponding maximal Euclidean Gibbs measure, see Proposition \ref{MAtm}. But, \begin{equation} \label{kot} \langle \omega_\ell (0) \rangle_{\mu_+} > \langle \omega_\ell (0) \rangle_{\mu^{\rm low}_+}, \end{equation} see Lemma 7.7 in \cite{[KoT]}. At the same time, $\langle \omega_\ell (0) \rangle_{\mu} =0$ for any periodic $\mu \in \mathcal{G}^{\rm t}$, which yields the result to be proven. $\square$ \subsection{Phase transition in the scalar model with asymmetric potential} \label{6.3.3.ss} The phase transitions proven so far have a common feature -- the spontaneous symmetry breaking. This means that the symmetry, e.g., rotation invariance, possessed by the model and hence by the unique element of $\mathcal{G}^{\rm t}$, is no longer possessed by the multiple Gibbs measures appearing as its result. In this subsection, we show that the translation invariant scalar version of the model (\ref{U1}), (\ref{U2}) has a phase transition without symmetry breaking. However, we restrict ourselves to the case of first order phase transitions, see Definition \ref{landau}. The reason for this can be explained as follows. 
The fact that $D_{\ell \ell'}^\mu$ does not decay to zero as $|\ell - \ell'| \rightarrow +\infty$, see (\ref{nrp1}), implies that $\mu$ is non-ergodic only if $\mu$ is symmetric. Otherwise, to show that $\mu$ is non-ergodic one should prove that the difference $D_{\ell \ell'}^\mu - \langle f_\ell \rangle_\mu \cdot \langle f_{\ell'} \rangle_\mu$ does not decay to zero, which cannot be done by means of our methods based on the infrared estimate. In what follows, we consider the translation invariant scalar version of the model (\ref{U1}), (\ref{U2}) with the nearest neighbor interaction. The only condition imposed on the anharmonic potential is (\ref{a2}). Obviously, we have to include the external field, that is, the anharmonic potential is now $V(u) - h u$. Since we are not going to impose any conditions on the odd part of $V$, we cannot apply the GKS inequalities, see \cite{[RevMF],[KoT]}, on which the comparison methods are based, see (\ref{kot}). In view of this fact, we suppose that the interaction is of nearest neighbor type. Thus, for a box $\Lambda$, the periodic local Hamiltonian of the model has the form (\ref{Irf1}). In accordance with Definition \ref{landau}, our goal is to show that the model parameters (except for $h$) and the inverse temperature $\beta$ can be chosen in such a way that the set $\mathcal{R}$, defined by (\ref{ZiF4}), is non-void. The main idea of how to do this can be explained as follows. First we find a condition, independent of $h$, under which $D^\mu_{\ell \ell'}$ does not decay to zero for a certain periodic $\mu$. Next we prove the following \begin{lemma} \label{ZiFFtm} There exist $h_{\pm}$, $h_{-} < h_{+}$, which may depend on the model parameters and $\beta$, such that the magnetization (\ref{ZiF5}) has the property: \[ M(h) < 0, \ \ \ {\rm for} \ \ h \in \mathcal{R}^{c} \cap (- \infty, h_{-}); \quad \ M(h) > 0, \ \ \ {\rm for} \ \ h \in \mathcal{R}^{c} \cap (h_{+} , +\infty). 
\] \end{lemma} Thereby, if $\mathcal{R}$ were void, one would find $h_* \in (h_{-}, h_{+})$ such that $M(h_*)= 0$. At such $h_*$, the aforementioned property of $D^\mu$ would yield the non-ergodicity of $\mu$ and hence the first order phase transition, see Theorem \ref{phsctm}. In view of Corollary \ref{RPco}, $D^\mu_{\ell \ell'}$ does not decay to zero if (\ref{rp40w}) holds with big enough $\vartheta$. By Proposition \ref{BFpn}, the lower bound (\ref{rp40w}) can be obtained from the estimate (\ref{FU2}). The only problem with the latter estimate is that it holds for $h=0$. \begin{lemma} \label{Ziplm} For every $\beta>0$ and $\theta>0$, there exist positive $m_*$ and $J_*$, which may depend on $\beta$ and $\theta$ but are independent of $h$, such that, for any box $\Lambda$ and any $h\in \mathbb{R}$, \begin{equation} \label{zifc} \big{\langle} \left[ \omega_\ell(0) \right]^2 \big{\rangle}_{\nu_{\Lambda}^{\rm per}} \geq \theta, \quad \ {\rm if} \ \ {J} > J_* \ \ {\rm and} \ \ m>m_* . \end{equation} \end{lemma} {\it Proof:} For $h\in \mathbb{R}$, we set \begin{eqnarray} \label{zifa} \lambda^h ({\rm d} \omega) & = & \frac{1}{N_\beta^h} \exp\left( h \int_0^\beta \omega(\tau){\rm d}\tau\right)\lambda ({\rm d}\omega), \\ {N_\beta^h} & = & \int_{C_\beta} \exp\left( h \int_0^\beta \omega(\tau){\rm d}\tau\right)\lambda ({\rm d}\omega), \nonumber \end{eqnarray} where $\lambda$ is as in (\ref{ZiFF1}). Then for $\pm h>0$, we get the estimate (\ref{KK1}) in the following form \begin{equation} \label{ziffa} \beta J d \big{\langle} \left[ \omega_\ell(0) \right]^2 \big{\rangle}_{\nu_{\Lambda}^{\rm per}} \geq \beta J d \varepsilon^2 + \log \lambda^h \left[ B^{\pm}(\varepsilon, c)\right] , \end{equation} where $B^{\pm}(\varepsilon, c)$ is as in (\ref{ZiFF6}), (\ref{ZiFF7}). Let us show now that, for $\pm h \geq 0$, \begin{equation} \label{zifb4} \lambda^h \left[B^{\pm}(\varepsilon, c)\right] \geq \lambda \left[B^{\pm}(\varepsilon, c)\right]. 
\end{equation} For $h\geq 0$, let $I (\omega)$ be the indicator function of the set $C^{+}_\beta (n;c)$, see (\ref{ZiFF5}). For $\delta >0$ and $t\in \mathbb{R}$, we set \[ \iota_\delta (t) = \left\{ \begin{array}{ll} 0 &\quad \ \ t\leq c,\\ (t - c)/\delta &\quad \ \ t\in (c, c+\delta], \\ 1 &\quad \ \ t> c+\delta. \end{array} \right. \] Thereby, \[ I_\delta (\omega) \ \stackrel{\rm def}{ =} \ \prod_{k=0}^n \iota_\delta \left[ \omega(k \beta /n) \right]. \] By Lebesgue's dominated convergence theorem, \begin{eqnarray} \label{zifb5} N^h_\beta \lambda^h \left[ C^{+}_\beta (n;c)\right] & = & \int_{C_\beta} I (\omega)\exp\left(h \int_0^\beta \omega(\tau){\rm d}\tau \right)\lambda ({\rm d}\omega) \\ & = & \lim_{\delta \downarrow 0} \int_{C_\beta} I_\delta (\omega)\exp\left(h \int_0^\beta \omega(\tau){\rm d}\tau \right)\lambda ({\rm d}\omega). \nonumber \end{eqnarray} As the function $I_\delta$ is continuous and increasing, by the FKG inequality, see Theorem 6.1 in \cite{[RevMF]}, it follows that \[ \int_{C_\beta} I_\delta (\omega)\exp\left(h \int_0^\beta \omega(\tau){\rm d}\tau \right)\lambda ({\rm d}\omega) \geq N^h_\beta \int_{C_\beta} I_\delta (\omega)\lambda ({\rm d}\omega). \] Passing here to the limit we obtain from (\ref{zifb5}) \[ \lambda^h \left[ C^{+}_\beta (n;c)\right] \geq \lambda \left[ C^{+}_\beta (n;c)\right], \] which obviously yields (\ref{zifb4}). For $h \leq 0$, one just changes the signs of $h$ and $\omega$. Thereby, we can rewrite (\ref{ziffa}) as follows, cf. (\ref{KK1}), \[ \big{\langle} \left[ \omega_\ell(0) \right]^2 \big{\rangle}_{\nu_{\Lambda}^{\rm per}} \geq \varepsilon^2 + [\log \gamma (m) ]/ \beta J d. \] Then one applies the arguments from the very end of the proof of Lemma \ref{FUlm}. $\square$ \noindent {\it Proof of Lemma \ref{ZiFFtm}:} Suppose that $h>0$. 
Then restricting the integration in (\ref{ZiFF10}) to $[B^{+}(\varepsilon,c)]^\Lambda$, we get \begin{eqnarray} \label{ZiFF11} p^{\rm per}_{\Lambda} (h) & \geq & h \beta \varepsilon + \log N_\beta + \frac{1}{2} \beta \varepsilon^2 \sum_{\ell'\in \Lambda} J^{\Lambda}_{\ell\ell'} + \log \lambda [B^+(\varepsilon,c)] \\ & \geq & h \beta \varepsilon + \log N_\beta + \log \gamma (m) . \nonumber \end{eqnarray} As the right-hand side of the latter estimate is independent of $\Lambda$, it can be extended to the limiting pressure $p(h)$. For any positive $h\in \mathcal{R}^c$, by the convexity of $p(h)$ one has \begin{eqnarray*} M(h) & \geq & \left[ p(h) - p(0)\right]/ \beta h \\& \geq & \varepsilon + \frac{1}{\beta h}\left\{ - p(0) + \log N_\beta + \log \gamma (m) \right\}. \end{eqnarray*} Picking $h$ big enough, we obtain the stated positivity. The negativity can be proven in the same way. $\square$ Now we are in a position to prove the main statement of this subsection. \begin{theorem} \label{phasymtm} Let the model be scalar, translation invariant, and with nearest-neighbor interaction. Let also $d\geq 3$. Then for every $\beta$, there exist $m_*>0$ and $J_* >0$ such that, for all $m> m_*$ and $J>J_*$, there exists $h_*\in \mathbb{R}$, possibly dependent on $m$, $\beta$, and $J$, such that $p' (h)$ is discontinuous at $h_*$, i.e., the model has a first order phase transition. \end{theorem} {\it Proof:} Let $m_*$ be as in (\ref{ZiFF8}) and $J_*$, $\theta$ be as in Lemma \ref{Ziplm}. Fix any $\beta>0$ and $m >m_*$. Then, for $J>J_*$, the estimate (\ref{zifc}) holds, which yields the validity of (\ref{rp54}) for all boxes $\Lambda$ with such $\beta$, $m$, and $\nu =1$. Then we increase $J$, if necessary, up to the value at which (\ref{rp51}) holds. Afterwards, all the parameters, except for $h$, are set fixed.
In this case, there exists a periodic state $\mu\in \mathcal{G}^{\rm t}$ such that the first summand in (\ref{RP6}) is positive; hence, $D_{\ell \ell'}^\mu$ does not decay to zero as $|\ell - \ell'|\rightarrow +\infty$, see (\ref{RL}) and (\ref{RP6}). If $p(h)$ is everywhere differentiable, i.e., if $\mathcal{R}= \emptyset$, then by Lemma \ref{ZiFFtm} there exists $h_*$ such that $M(h_*)=0$; hence, the state $\mu$ with such $h_*$ is non-ergodic, which yields $|\mathcal{G}^{\rm t}|>1$ and hence a first order phase transition. Otherwise, $\mathcal{R}\neq \emptyset$, and the first order phase transition occurs at any $h_*\in \mathcal{R}$, where $p'(h)$ is discontinuous. $\square$ \subsection{Comments} \label{ssC3} \begin{itemize} \item \emph{Subsection \ref{3.1.ss}:} According to Definition \ref{phdef}, the phase transition corresponds to the existence of multiple equilibrium phases at the same values of the model parameters and temperature. This is a standard definition for theories which employ Gibbs states, see \cite{[Ge]}. In the translation invariant case, a way of proving phase transitions is to show the existence of non-ergodic elements of $\mathcal{G}^{\rm t}$. For classical lattice systems, this was realized in \cite{[FSS]} by means of infrared estimates. More or less at the same time, an alternative rigorous theory of phase transitions in classical lattice spin models, based on contour estimates, was proposed. This is the Pirogov-Sinai theory elaborated in \cite{[PS]}, see also \cite{[SinaiB]}. Later on, this theory was essentially extended and generalized into an abstract sophisticated method, applicable also to classical (but not quantum) models with unbounded spins, see \cite{[Zah]} and the references therein. For quantum lattice models, the theory of phase transitions has essential peculiarities, which distinguish it from the corresponding theory of classical systems. Most of the results in this domain were obtained by means of quantum versions of the method of infrared estimates.
The first publication in which such estimates were applied to quantum spin models seems to be the article \cite{[DLS]}. After certain modifications this method was applied to a number of models with unbounded Hamiltonians \cite{[AKKR],[BaK],[BaK0],[DLP],[Kondr],[Pastur]}. In our approach, the quantum crystal is described as a system of `classical' infinite dimensional spins. This allows for applying here the original version of the method of infrared estimates elaborated in \cite{[FSS]}, adapted to the infinite dimensional case, which has been realized in the present work. Among other things, the adaptation consists in employing tools such as the Garsia-Rodemich-Rumsey lemma, see \cite{[Garsia]}. Our approach is more effective and transparent than the one used in \cite{[AKKR],[BaK],[BaK0],[Kondr]}. It also allows for comparing the conditions (\ref{rp40w}), (\ref{RP4}) with the stability conditions obtained in the next section. In the physical literature, there exist definitions of phase transitions alternative to Definition \ref{phdef}, based directly on the thermodynamic properties of the system. These are the definition employing the differentiability of the pressure (Definition \ref{landau}, which is applicable to translation invariant models only), and the definition based on the long range order. The relationship between the latter two notions is established by means of the Griffiths theorem, Proposition \ref{Grpn}, the proof of which can be found in \cite{[DLS]}. For translation invariant models with bounded interaction, non-differentiability of the pressure corresponds to the non-uniqueness of the Gibbs states, see \cite{[Israel],[Simon]}. We have not been able to prove this for our model. In the language of limit theorems of probability theory, the appearance of the long range order corresponds to the fact that a new law of large numbers comes into force, see Theorem \ref{Grpn} and the discussion preceding Definition \ref{Weberdf}.
The critical point of the model corresponds to the case where the law of large numbers still holds in its original form (in the translation invariant case this means absence of first order phase transitions), but the central limit theorem holds true with an abnormal normalization. For a hierarchical version of the model (\ref{U1}), (\ref{U2}), the critical point was described in \cite{[Kozak]}. Algebras of abnormal fluctuation operators were studied in \cite{[Broi]}. In application to quantum crystals, such operators were discussed in \cite{[VZ1],[VZ2]}, where the reader can find a more detailed discussion of this subject as well as the corresponding bibliography. \item \emph{Subsection \ref{3.2.ss}:} As was mentioned above, the method of infrared estimates originated in \cite{[FSS]}. The version employed here is close to the one presented in \cite{[KKE]}. We note that, in accordance with the conditions (\ref{rp40x}), (\ref{rp40w}), and (\ref{RP4}), the infrared bound was obtained for the Duhamel function, see (\ref{rp45}), rather than for \[ \sum_{\ell'\in \Lambda}\langle (\omega_\ell (\tau) , \omega_{\ell'}( \tau))\rangle_{\nu_\Lambda^{\rm per}} \cdot \cos (p, \ell - \ell'), \] which was used in \cite{[RevMF],[BaK],[BaK0],[Kondr]}. \item \emph{Subsection \ref{3.3.ss}:} The lower bound (\ref{rp47a}) was obtained in the spirit of \cite{[DLP],[Pastur]}. The estimate stated in Lemma \ref{FUlm} is completely new; the key element of its proof is the estimate (\ref{ZiFF4}), obtained by means of Proposition \ref{grrpn}. The sufficient condition for the phase transition obtained in Theorem \ref{phrottm} is also new. Its significant feature is the appearance of a universal parameter responsible for the phase transition, which includes the particle mass $m$, the anharmonicity parameter $\vartheta_*$, and the interaction strength $J$. This is the parameter on the left-hand side of (\ref{rp52}).
This very parameter will also describe the stability of the model studied in the next section. Theorem \ref{phrot1tm} is also new. \item \emph{Subsection \ref{6.3.2.ss}:} Here we mostly repeat the corresponding results of \cite{[KoT]}, announced in \cite{[KoT1]}. \item \emph{Subsection \ref{6.3.3.ss}:} The main characteristic feature of the scalar model studied in \cite{[AKKR],[BaK],[BaK0],[DLP],[Kondr],[Pastur]}, as well as of the one described by Theorem \ref{phsctm}, was the $Z_2$-symmetry broken by the phase transition. This symmetry allowed for obtaining estimates like (\ref{rp55A}), crucial for the method. However, in classical models, for proving phase transitions by means of the infrared estimates, symmetry was not especially important, see Theorem 3.5 in \cite{[FSS]} and the discussion preceding this theorem. There might be two explanations of such a discrepancy: (a) the symmetry was a key element of the methods employed therein only, and, as in the classical case, its absence does not imply the absence of phase transitions; (b) the symmetry is crucial in view of e.g. quantum effects, which stabilize the system, see the next section. Until now, there has been no way to check which of these explanations is true. Theorem \ref{phasymtm} solves this dilemma in favor of explanation (a). Its main element is again an estimate obtained by means of the Garsia-Rodemich-Rumsey lemma. The corresponding result was announced in \cite{[KaK]}. \end{itemize} \section{Quantum Stabilization} \label{4s} In physical substances containing light quantum particles moving in multi-welled potential fields, phase transitions are experimentally suppressed by the application of strong hydrostatic pressure, which brings the wells closer to each other and increases the tunneling of the particles. The same effect is achieved by replacing the particles with ones of smaller mass.
The aim of this section is to obtain a description of such effects in the framework of the theory developed here and to compare it with the theory of phase transitions presented in the previous section. \subsection{The stability of quantum crystals} \label{stabcr} Let us look at the scalar harmonic version of the model (\ref{U1}) -- a quantum harmonic crystal. For this model, the one-particle Hamiltonian includes only the first two terms of (\ref{U2}). Its spectrum consists of the eigenvalues $E_n^{\rm har} = (n+1/2)\sqrt{a/m} $, $n \in \mathbb{N}_0$. The parameter $a>0$ is the oscillator rigidity. For reasons which will become clear shortly, we consider the following gap parameter \begin{equation} \label{Gap} \mathit{\Delta}^{\rm har} = \min_{n \in \mathbb{N}} (E_n^{\rm har} - E_{n-1}^{\rm har}). \end{equation} Then \begin{equation} \label{De2} \mathit{\Delta}^{\rm har} = \sqrt{a/m}; \qquad a = m \left(\mathit{\Delta}^{\rm har}\right)^2. \end{equation} The set of tempered Euclidean Gibbs measures of the harmonic crystal can be constructed similarly to how it was done in Section \ref{2s}, but with one exception. Such measures exist only under the stability condition (\ref{si}), which may now be rewritten as \begin{equation} \label{De2a} \hat{J}_0 < m \left(\mathit{\Delta}^{\rm har}\right)^2. \end{equation} In this case, $\mathcal{G}^{\rm t}$ is a singleton at all $\beta$, which readily follows from Theorem \ref{httm}. As the right-hand side of (\ref{De2a}) is independent of $m$, this stability condition is applicable also to the classical harmonic crystal obtained in the classical limit $m\rightarrow +\infty$, see \cite{[RevMF]}. According to (\ref{a2}), the anharmonic potentials $V_\ell$ have a super-quadratic growth, due to which the tempered Euclidean Gibbs measures of anharmonic crystals exist for all $\hat{J}_0$. In this case, the instability of the crystal is connected with phase transitions.
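The elementary relation (\ref{De2}) between the rigidity $a$, the mass $m$, and the spectral gap is easy to check numerically. The following sketch (a plain illustration in arbitrary units with $\hbar=1$; the grid size and the particular values of $m$ and $a$ are ad hoc choices, not part of the model) diagonalizes $p^2/2m + (a/2)q^2$ by second-order finite differences and compares the lowest gap with $\sqrt{a/m}$:

```python
import numpy as np

def spectral_gap(m, a, L=10.0, N=1000):
    # H = p^2/(2m) + (a/2) q^2 with hbar = 1, discretized by
    # second-order finite differences on the interval [-L, L].
    x = np.linspace(-L, L, N)
    h = x[1] - x[0]
    diag = 1.0 / (m * h**2) + 0.5 * a * x**2
    off = np.full(N - 1, -1.0 / (2.0 * m * h**2))
    E = np.linalg.eigvalsh(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))
    return E[1] - E[0]

m, a = 2.0, 3.0
gap = spectral_gap(m, a)
print(gap, np.sqrt(a / m))  # the two values should agree closely
```

The same routine, with an anharmonic term added to the diagonal, can be used to explore the gap parameter of anharmonic oscillators considered below.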
A sufficient condition for some of the models described in the previous section to have a phase transition may be derived from the equation (\ref{rp51}). It is \begin{equation} \label{De1} 2 \beta J \vartheta_* f ( \beta / 4 m \vartheta_*) > \mathcal{J} (d), \end{equation} which in the classical limit $m\rightarrow + \infty$ takes the form \[ 2 \beta J \vartheta_* > \mathcal{J} (d). \] The latter condition can be satisfied by picking big enough $\beta$. Therefore, the classical anharmonic crystals always have phase transitions -- no matter how small the interaction intensity is. For finite $m$, the left-hand side of (\ref{De1}) is bounded by $8 m \vartheta_*^2 J$, and the bound is achieved in the limit $\beta \rightarrow + \infty$. If, for given values of the interaction parameter $J$, the mass $m$, and the parameter $\vartheta_*$ characterizing the anharmonic potential, this bound does not exceed $\mathcal{J}(d)$, then the condition (\ref{De1}) is never satisfied. Although this condition is only sufficient, one might expect that the phase transition can be eliminated at all $\beta$ if the compound parameter $8 m \vartheta_*^2 J$ is small enough. Such an effect, if it really exists, could be called \emph{quantum stabilization}, since it is impossible in principle in the classical analog of the model. \subsection{Quantum rigidity} \label{4.2.ss} In the harmonic case, big values of the rigidity $a$ ensure stability. In this subsection, we introduce and study {\it quantum rigidity}, which plays a similar role in the anharmonic case. Above, the sufficient condition (\ref{De1}) for a phase transition to occur was obtained for a simplified version of the model (\ref{U1}), (\ref{U2}) -- nearest-neighbor interactions, polynomial anharmonic potentials of the special kind (\ref{Iub}), etc. Then the results were extended to more general models via correlation inequalities.
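The limiting behavior discussed above -- the left-hand side of (\ref{De1}) increasing to the bound $8 m \vartheta_*^2 J$ as $\beta \rightarrow +\infty$ -- can be illustrated numerically. The sketch below assumes, purely for illustration, that $f(x) = \tanh(x)/x$, a form consistent with the stated bound; the actual function $f$ is the one fixed where (\ref{rp51}) is derived, and the parameter values are arbitrary:

```python
import math

# ASSUMPTION (for illustration only): f(x) = tanh(x)/x, which reproduces
# the bound stated in the text; the actual f is fixed where (De1) is derived.
def lhs_of_De1(beta, J, m, theta_star):
    x = beta / (4.0 * m * theta_star)
    return 2.0 * beta * J * theta_star * (math.tanh(x) / x)

J, m, theta_star = 0.7, 1.3, 0.9      # arbitrary illustrative values
bound = 8.0 * m * theta_star**2 * J   # the claimed beta -> infinity limit
vals = [lhs_of_De1(b, J, m, theta_star) for b in (1.0, 10.0, 100.0)]
print(vals, bound)  # the values increase monotonically toward the bound
```

With this choice of $f$, the left-hand side equals $8 m \vartheta_*^2 J \tanh(\beta/4m\vartheta_*)$, which makes the monotone saturation of the bound explicit.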
Likewise here, we start with a simple scalar version of the one-particle Hamiltonian (\ref{U1}), which we take in the form \begin{equation} \label{De7} H_m = \frac{1}{2m} p^2 + \frac{a}{2} q^2 + V(q), \end{equation} where the anharmonic potential is, cf. (\ref{Iub}), \begin{equation} \label{De8} V(q) = b^{(1)} q^2 + b^{(2)} q^4 + \cdots + b^{(r)} q^{2r}, \qquad b^{(r)} >0, \quad r \in \mathbb{N}\setminus \{1\}. \end{equation} The subscript $m$ in (\ref{De7}) indicates the dependence of the Hamiltonian on the mass. Recall that $H_m$ acts in the physical Hilbert space $L^2(\mathbb{R})$. Its relevant properties are summarized in the following \begin{proposition} \label{spectpn} The Hamiltonian $H_m$ is essentially self-adjoint on the set $C_0^\infty(\mathbb{R})$ of infinitely differentiable functions with compact support. The spectrum of $H_m$ has the following properties: (a) it consists only of eigenvalues $E_n$, $n\in \mathbb{N}_0$; (b) to each $E_n$ there corresponds exactly one eigenfunction $\psi_n\in L^2(\mathbb{R})$; (c) there exists $\gamma >1$ such that \begin{equation} \label{De3} n^{-\gamma} E_n \rightarrow + \infty, \qquad {\rm as} \ \ n\rightarrow +\infty. \end{equation} \end{proposition} {\it Proof:} The essential self-adjointness of $H_m$ follows from the Sears theorem, see Theorem 1.1, page 50 of \cite{[BeS]} or Theorem X.29 of \cite{[RS2]}. The spectral properties follow from Theorem 3.1, page 57 (claim (a)) and Proposition 3.3, page 65 (claim (b)), both taken from the book \cite{[BeS]}. To prove claim (c) we employ a classical formula, see equation (7.7.4), page 151 of the book \cite{[Titch]}, which in our context reads \begin{equation} \label{De3a} \frac{2}{\pi}\sqrt{2m} \int_0^{u_n}\sqrt{E_n - V(u)}\ {\rm d} u = n+\frac{1}{2} + O\left( \frac{1}{n}\right), \end{equation} where $n$, and hence $E_n$, are big enough so that the equation \begin{equation} \label{De3b} V(u) = E_n \end{equation} has a unique positive solution $u_n$.
Then \begin{equation} \label{De3c} u_n^{r+1} \int_0^1 \sqrt{\phi_n (t) - t^{2r}} \ {\rm d}t = \frac{\pi}{2 \sqrt{2m b^{(r)}} }\left( n + \frac{1}{2} \right) + O\left(\frac{1}{n}\right), \end{equation} where \[ \phi_n (t) = \frac{E_n}{b^{(r)} u_n^{2r}} - \frac{u_n^{2 - 2r}}{b^{(r)}} (b^{(1)} + a/2)t^2 - \dots - \frac{u_n^{- 2 }}{b^{(r)}} b^{(r-1)} t^{2(r-1)}. \] Note that $\phi_n (1) =1$ for all $n$, which follows from (\ref{De3b}). Thus, \begin{equation} \label{De3d} \frac{E_n}{b^{(r)} u_n^{2r}} \rightarrow 1, \qquad {\rm as} \ \ n \rightarrow +\infty. \end{equation} Thereby, we have \begin{eqnarray} \label{De3e} c_n & \stackrel{\rm def}{=} & \int_0^1 \sqrt{\phi_n (t) - t^{2r}} \ {\rm d}t \rightarrow \int_0^1 \sqrt{1 - t^{2r}} \ {\rm d} t \\ & = & \frac{\Gamma\left(\frac{3}{2}\right) \Gamma\left(\frac{1}{2r}\right)}{2 r \Gamma\left(\frac{3}{2} + \frac{1}{2r}\right)}. \nonumber \end{eqnarray} Then combining (\ref{De3e}) with (\ref{De3b}) and (\ref{De3d}) we get \begin{equation} \label{De8a} E_n = \left[\frac{b^{(r)}}{(2 m)^r} \right]^{1/(r+1)} \cdot \left[\frac{ \pi r \Gamma\left(\frac{3}{2} + \frac{1}{2r} \right)}{\Gamma\left(\frac{3}{2} \right) \Gamma\left( \frac{1}{2r} \right)}\cdot \left( n + \frac{1}{2} \right)\right]^{\frac{2r}{r+1}} + o\left(1\right), \end{equation} which readily yields (\ref{De3}) with any $\gamma \in (1, 2r/ (r+1))$. $\square$ \vskip.1cm Thus, in view of the property (\ref{De8a}) we introduce the gap parameter \begin{equation} \label{De4} \mathit{\Delta}_m = \min_{n \in \mathbb{N}} (E_n - E_{n-1}), \end{equation} and thereby, c.f., (\ref{De2}), \begin{equation} \label{De5} \mathcal{R}_m = m \mathit{\Delta}_m^2, \end{equation} which can be called \emph{quantum rigidity} of the oscillator. One might expect that the stability condition for quantum anharmonic crystals, at least for their scalar versions with the anharmonic potentials independent of $\ell$, is similar to (\ref{De2a}). 
That is, it has the form \begin{equation} \label{De6} \hat{J}_0 < \mathcal{R}_m. \end{equation} \subsection{Properties of quantum rigidity} \label{gap} Below $f\sim g$ means that $\lim (f / g) =1$. \begin{theorem} \label{gap1tm} For every $r \in \mathbb{N}$, the gap parameter $\mathit{\Delta}_m$, and hence the quantum rigidity $\mathcal{R}_m$ corresponding to the Hamiltonian (\ref{De7}), (\ref{De8}), are continuous functions of $m$. Furthermore, \begin{equation} \label{De9} \mathit{\Delta}_m \sim \mathit{\Delta}_0 m^{-r/(r+1)}, \quad \ \ \mathcal{R}_m \sim \mathit{\Delta}_0^2 m^{-(r-1)/(r+1)}, \quad \ \ m\rightarrow 0, \end{equation} with a certain $\mathit{\Delta}_0>0$. \end{theorem} {\it Proof:} Given $\alpha >0$, let $U_\alpha: L^2(\mathbb{R})\rightarrow L^2(\mathbb{R})$ be the following unitary operator \begin{equation} \label{De10} \left(U_\alpha \psi\right) (x) = \sqrt{\alpha} \psi(\alpha x). \end{equation} Then by (\ref{cr}) \[ U^{-1}_\alpha p U_\alpha = \alpha p, \qquad U^{-1}_\alpha q U_\alpha = \alpha^{-1} q. \] Fix any $m_0>0$ and set $\rho = (m /m_0)^{1/(r+1)}$, $\alpha = \rho^{1/2}$. Then \begin{equation} \label{De11} \widetilde{H}_m \ \stackrel{\rm def}{=} \ U^{-1}_\alpha H_m U_\alpha = \rho^{-r} T(\rho), \end{equation} where \begin{eqnarray} \label{De12} T(\rho) & = & H_{m_0} + Q(\rho)\\ & = & \frac{1}{2m_0} p^2 + \rho^{r-1} ( b^{(1)} +a/2) q^2 + \rho^{r-2} b^{(2)} q^4 + \cdots + b^{(r)} q^{2r}, \nonumber\\ \label{De13} Q(\rho) & = & (\rho-1) \left[ p_{r-1} (\rho)( b^{(1)} +a/2) q^2 \right. \\ & + & \left. p_{r-2} (\rho) b^{(2)} q^4 + \cdots + p_{r-s} (\rho) b^{(s)} q^{2s} + \cdots + b^{(r-1)} q^{2(r-1)}\right], \nonumber \end{eqnarray} and \begin{equation} \label{De14} p_k (\rho) = 1 + \rho + \rho^2 +\cdots + \rho^{k-1}. \end{equation} As the operators $H_m$, $\widetilde{H}_m$, are unitary equivalent, their gap parameters (\ref{De4}) coincide. 
The operators $\widetilde{H}_m$ and $T(\rho)$, $\rho>0$, possess the properties established by Proposition \ref{spectpn}. In particular, they have the property (\ref{De3}) with one and the same $\gamma$. Therefore, there exist $\varepsilon >0$ and $k\in \mathbb{N}$ such that for $|\rho-1|< \varepsilon$, the gap parameters (\ref{De4}) for $\widetilde{H}_m$ and $T(\rho)$ are defined by the first $k$ eigenvalues of these operators. As an essentially self-adjoint operator, $T(\rho)$ possesses a unique self-adjoint extension $\hat{T}(\rho)$, the eigenvalues of which coincide with those of $T(\rho)$. Furthermore, for complex $\rho$, $\hat{T}(\rho)$ is a closed operator, and its domain $Dom[\hat{T}(\rho)]$ does not depend on $\rho$. For every $\psi \in Dom[\hat{T}(\rho)]$, the map $\mathbb{C} \ni \zeta \mapsto \hat{T}(\zeta) \psi \in L^2(\mathbb{R})$ is holomorphic. Therefore, $\{\hat{T} (\rho)\ | \ |\rho - 1|< \varepsilon\}$ is a self-adjoint holomorphic family. Hence, the eigenvalues $\Theta_n (\rho)$, $n \in \mathbb{N}_0$ of $\hat{T}(\rho)$ are continuous functions of $\rho\in (1-\varepsilon, 1+ \varepsilon)$, see Chapter VII, $\S$3 in the book \cite{[Kato]}. At $\rho =1$ they coincide with those of $\hat{H}_{m_0}$. Since there exists $k \in \mathbb{N}$ such that, for all $\rho\in (1-\varepsilon, 1+ \varepsilon)$, \[ \min_{n \in \mathbb{N}} \left[\Theta_n (\rho) - \Theta_{n-1} (\rho)\right]= \min_{n \in \{1, 2, \dots , k\} }\left[\Theta_n (\rho) - \Theta_{n-1} (\rho)\right], \] the function \begin{equation} \label{De14a} \widetilde{\mathit{\Delta}} (\rho) \ \stackrel{\rm def}{=} \ \min_{n \in \mathbb{N}} \rho^{-r}\left[\Theta_n (\rho) - \Theta_{n-1} (\rho)\right] \end{equation} is continuous. But by (\ref{De11}) \begin{equation} \label{De14b} \mathit{\Delta}_m = \widetilde{\mathit{\Delta}} \left(\left({m}/{m_0}\right)^{1/(r+1)}\right), \end{equation} which proves the stated continuity, since $m_0>0$ was chosen arbitrarily.
To prove the second part of the theorem we rewrite (\ref{De12}) as follows \begin{equation} \label{De14c} T(\rho) = H^{(0)}_{m_0} + R(\rho), \end{equation} where \[ H^{(0)}_{m_0} = \frac{1}{2m_0} p^2 + b^{(r)}q^{2r}, \] and \[ R(\rho) = \rho\left( \rho^{r-2} (b^{(1)} + a/2) q^2 + \rho^{r-3} b^{(2)} q^4 + \cdots + b^{(r-1)} q^{2(r-1)} \right). \] Repeating the above perturbation arguments one concludes that the self-adjoint family $\{\hat{T}(\rho) \ | \ |\rho|< \varepsilon\}$ is holomorphic at zero; hence, the gap parameter of (\ref{De14c}) tends, as $\rho \rightarrow 0$, to that of $H^{(0)}_{m_0}$, i.e., to $\mathit{\Delta}_0$. Thereby, the asymptotics (\ref{De9}) for $\mathit{\Delta}_m$ follows from (\ref{De11}) and the unitary equivalence of $H_m$ and $\widetilde{H}_m$. $\square$ \vskip.1cm Our second result in this domain is the quasi-classical analysis of the parameters (\ref{De4}), (\ref{De5}). Here we shall suppose that the anharmonic potential $V$ has the form (\ref{De8}) with $b^{(s)} \geq 0$ for all $s=2, \dots, r-1$, c.f., (\ref{Iub}). We remind that in this case the parameter $\vartheta_*>0$ is the unique solution of the equation (\ref{Ip3}). \begin{theorem} \label{gap2tm} Let $V$ be as in (\ref{Iub}). Then the gap parameter $\mathit{\Delta}_m$ and the quantum rigidity $\mathcal{R}_m$ of the Hamiltonian (\ref{De7}) with such $V$ obey the estimates \begin{equation} \label{De15} \mathit{\Delta}_m \leq \frac{1}{2 m \vartheta_*}, \qquad \mathcal{R}_m \leq \frac{1}{4 m \vartheta_*^2}. \end{equation} \end{theorem} {\it Proof:} Let $\varrho_m$ be the local Gibbs state (\ref{a4}) corresponding to the Hamiltonian (\ref{De7}). Then by means of the inequality (\ref{rp47b}) and the Gaussian upper bound we get, see (\ref{rp55}), \[ a + 2 b^{(1)} + \mathit{\Phi} \left(\varrho_m (q^2) \right) \geq 0, \] by which \begin{equation} \label{De16} \varrho_m (q^2) \geq \vartheta_*. 
\end{equation} Let $\psi_n$, $n \in \mathbb{N}_0$, be the eigenfunctions of the Hamiltonian $H_m$ corresponding to the eigenvalues $E_n$. By Proposition \ref{spectpn}, to each $E_n$ there corresponds exactly one $\psi_n$. Set \[ Q_{nn'} = (\psi_n , q \psi_{n'})_{L^2(\mathbb{R})}, \quad n, n' \in \mathbb{N}_0. \] Obviously, $Q_{nn} = 0$ for any $n\in \mathbb{N}_0$. Consider \[ \Gamma (\tau, \tau') = \varrho_m \left[q \exp\left( - (\tau' - \tau)H_m\right) q \exp\left( - (\tau - \tau')H_m\right)\right], \quad \tau, \tau' \in [0,\beta], \] which is the Matsubara function corresponding to the state $\varrho_m$ and the operators $F_1 = F_2 = q$. Set \begin{equation} \label{De17} \hat{u} (k) = \int_0^\beta \Gamma (0, \tau) \cos k \tau {\rm d}\tau, \qquad k \in \mathcal{K} = \{({2\pi}/{\beta}) \kappa \ | \kappa \in \mathbb{Z}\}. \end{equation} Then \begin{eqnarray} \label{De17a} \hat{u}(k) & = & \frac{1}{Z_m} \sum_{n, n'=0}^{+\infty} \left\vert Q_{nn'}\right\vert^2 \frac{E_n - E_{n'}}{k^2 + (E_n - E_{n'})^2} \\ & \times & \left\{ \exp (- \beta E_{n'}) - \exp (- \beta E_{n}) \right\}, \nonumber \end{eqnarray} where $Z_m = {\rm trace} \exp(- \beta H_m)$. The term $(E_n - E_{n'})^2$ in the denominator can be estimated by means of (\ref{De4}), which yields \begin{eqnarray} \label{De18} \hat{u}(k) & \leq & \frac{1}{k^2 + \mathit{\Delta}_m^2}\cdot \frac{1}{Z_m} \sum_{n, n'=0}^{+\infty} \left\vert Q_{nn'}\right\vert^2 (E_n - E_{n'}) \\ & \times & \left\{ \exp (- \beta E_{n'}) - \exp (- \beta E_{n}) \right\} \nonumber \\ & \leq & \frac{1}{k^2 + \mathit{\Delta}_m^2}\cdot \varrho_m\left(\left[q, \left[H_m, q\right] \right] \right) \nonumber \\ & = & \frac{1}{m(k^2 + \mathit{\Delta}_m^2)}.
\nonumber \end{eqnarray} By this estimate we get \begin{eqnarray} \label{De19} \varrho_m (q^2) & = & \Gamma (0,0) = \frac{1}{\beta} \sum_{k \in \mathcal{K}} \hat{u}(k) \\ & \leq & \frac{1}{\beta} \sum_{k \in \mathcal{K}} \frac{1}{m ( k^2 + \mathit{\Delta}_m^2)} = \frac{1}{2m \mathit{\Delta}_m} \coth \left(\beta \mathit{\Delta}_m/2 \right). \nonumber \end{eqnarray} Combining the latter estimate with (\ref{De16}), we arrive at \[ \mathit{\Delta}_m \tanh \left(\beta \mathit{\Delta}_m/2 \right) < 1 / (2 m \vartheta_*), \] which yields (\ref{De15}) in the limit $\beta \rightarrow + \infty$. $\square$ \vskip.1cm Now let us analyze the quantum stability condition (\ref{De6}) in the light of the latter results. The first conclusion is that, unlike the case of harmonic oscillators, this condition can be satisfied for all $\hat{J}_0$ by letting the mass be small enough. For the nearest-neighbor interaction, one has $\hat{J}_0 = 2 d J$; hence, if (\ref{De6}) holds, then \begin{equation} \label{De20} 8 d m \vartheta_*^2 J < 1. \end{equation} This can be compared with the estimate \begin{equation} \label{DeE} 8 d m \vartheta_*^2 J > d \mathcal{J}(d), \end{equation} guaranteeing a phase transition, which one derives from (\ref{De1}). For finite $d$, $d\mathcal{J}(d) > 1$, see Proposition \ref{DLPpn}; hence, there is a gap between the latter estimate and (\ref{De20}), which however diminishes as $d \rightarrow + \infty$ since \[ \lim_{d \rightarrow + \infty}d \mathcal{J}(d) = 1. \] In the remaining part of this section, we show that for the quantum crystals, both scalar and vector, a stability condition like (\ref{De6}) yields a sufficient decay of the pair correlation function. In the scalar case, this decay guarantees the uniqueness of tempered Euclidean Gibbs measures. However, in the vector case it yields a weaker result -- suppression of the long range order and of the phase transitions of any order in the sense of Definition \ref{landau}.
The discrepancy arises from the fact that the uniqueness criteria based on the FKG inequalities are applicable to scalar models only. \subsection{Decay of correlations in the scalar case} \label{7.2.1} In this subsection, we consider the model (\ref{U1}), (\ref{U2}) which is (a) translation invariant; (b) scalar; (c) such that the anharmonic potential is $V(q)=v(q^2)$ with $v$ being convex on $\mathbb{R}_+$. Let $\Lambda$ be the box (\ref{box}) and $\Lambda_*$ be its conjugate (\ref{rp39}). For this $\Lambda$, let \begin{equation} \label{CF} K_{\ell\ell'}^\Lambda (\tau, \tau') \ \stackrel{\rm def}{=} \ \big{\langle} \omega_{\ell} (\tau) \omega_{\ell'}(\tau')\big{\rangle}_{\nu_\Lambda^{\rm per}} \end{equation} be the periodic correlation function. Recall that the periodic interaction potential $J^\Lambda_{\ell \ell'}$ was defined by (\ref{A2}). For the one-particle Hamiltonian (\ref{U2}), let $\hat{u}(k)$ be as in (\ref{De17}). \begin{theorem} \label{nagumo1} Let the model be as just described. If \begin{equation} \label{De20a} \hat{u}(0) \hat{J}_0 < 1, \end{equation} then \begin{eqnarray} \label{De21} K_{\ell\ell'}^\Lambda (\tau, \tau') \leq \frac{1}{\beta |\Lambda|} \sum_{p\in \Lambda_*} \sum_{k\in \mathcal{K}} \frac{\exp\left[\imath (p, \ell - \ell') + \imath k(\tau - \tau')\right]}{[\hat{u}(k)]^{-1} - \hat{J}^\Lambda_0 + \mathit{\Upsilon}^\Lambda (p)}, \end{eqnarray} where \begin{equation} \label{De22} \hat{J}^\Lambda_0 = \sum_{\ell'\in \Lambda}J^\Lambda_{\ell\ell'}, \quad \ \ \mathit{\Upsilon}^\Lambda (p) = \hat{J}^\Lambda_0 - \sum_{\ell'\in \Lambda} J^\Lambda_{\ell\ell'} \exp[\imath (p , \ell - \ell')].
\end{equation} \end{theorem} {\it Proof:} Along with the periodic local Gibbs measure (\ref{A5}) we introduce \begin{eqnarray} \label{De23} & & \nu_{ \Lambda}^{\rm per}({\rm d} \omega_\Lambda|t)\qquad \\ & & \ \quad = \frac{1}{N_{ \Lambda}^{\rm per}(t)} \exp\left\{\frac{t}{2} \sum_{\ell , \ell'\in \Lambda} J^\Lambda_{\ell\ell'} (\omega_\ell, \omega_{\ell'})_{L^2_\beta} - \int_0^\beta \sum_{\ell \in \Lambda} V(\omega_\ell (\tau)){\rm d}\tau \right\}\chi_{\Lambda}({\rm d}\omega_\Lambda), \nonumber \end{eqnarray} where $t\in [0,1]$ and $N_{ \Lambda}^{\rm per}(t)$ is the corresponding normalization factor. Thereby, we set \begin{equation} \label{De24} X_{\ell \ell'} (\tau, \tau'|t) = \langle \omega_\ell (\tau) \omega_{\ell'}(\tau')\rangle_{\nu_{ \Lambda}^{\rm per}(\cdot |t)}, \quad \ell ,\ell' \in \Lambda. \end{equation} By direct calculation \begin{eqnarray} \label{De25} & & \frac{\partial}{\partial t}X_{\ell \ell'} (\tau, \tau'|t) \\ & & \qquad = \frac{1}{2} \sum_{\ell_1 , \ell_2 \in \Lambda} J^\Lambda_{\ell_1 \ell_2} \int_0^\beta R_{\ell \ell' \ell_1 \ell_2} (\tau, \tau' , \tau'' , \tau''|t) {\rm d} \tau'' \nonumber\\ & & \qquad + \sum_{\ell_1 , \ell_2 \in \Lambda} J^\Lambda_{\ell_1 \ell_2} \int_0^\beta X_{\ell \ell_1} (\tau, \tau''|t) X_{\ell_2 \ell'} (\tau'', \tau'|t) {\rm d}\tau'', \nonumber \end{eqnarray} where \begin{eqnarray*} R_{\ell_1 \ell_2 \ell_3 \ell_4} (\tau_1, \tau_2 , \tau_3 , \tau_4|t) & = & \langle \omega_{\ell_1} (\tau_1) \omega_{\ell_2}(\tau_2)\omega_{\ell_3}(\tau_3)\omega_{\ell_4}(\tau_4)\rangle_{\nu_{ \Lambda}^{\rm per}(\cdot |t)} \\ & - & \langle \omega_{\ell_1} (\tau_1) \omega_{\ell_2}(\tau_2)\rangle_{\nu_{\Lambda}^{\rm per}(\cdot |t)} \cdot \langle \omega_{\ell_3}(\tau_3)\omega_{\ell_4}(\tau_4)\rangle_{\nu_{ \Lambda}^{\rm per}(\cdot |t)} \nonumber \\ & - & \langle \omega_{\ell_1} (\tau_1) \omega_{\ell_3}(\tau_3)\rangle_{\nu_{ \Lambda}^{\rm per}(\cdot |t)} \cdot \langle \omega_{\ell_2}(\tau_2)\omega_{\ell_4}(\tau_4)\rangle_{\nu_{ \Lambda}^{\rm 
per}(\cdot |t)} \nonumber \\ & - & \langle \omega_{\ell_1} (\tau_1) \omega_{\ell_4}(\tau_4)\rangle_{\nu_{ \Lambda}^{\rm per}(\cdot |t)} \cdot \langle \omega_{\ell_2}(\tau_2)\omega_{\ell_3}(\tau_3)\rangle_{\nu_{ \Lambda}^{\rm per}(\cdot |t)}. \nonumber \end{eqnarray*} By the Lebowitz inequality, see \cite{[RevMF]}, we have \begin{equation} \label{De26} R_{\ell_1 \ell_2 \ell_3 \ell_4} (\tau_1, \tau_2 , \tau_3 , \tau_4|t) \leq 0, \end{equation} holding for all values of the arguments. Let us consider (\ref{De25}) as an integro-differential equation subject to the initial condition \begin{equation} \label{De27} X_{\ell \ell'}(\tau , \tau'|0) = \delta_{\ell \ell'} \Gamma (\tau, \tau') = (\delta_{\ell \ell'}/\beta) \sum_{k \in \mathcal{K}} \hat{u}(k) \cos k(\tau - \tau'). \end{equation} We also have \begin{equation} \label{De28} X_{\ell \ell'}(\tau , \tau'|1) = K_{\ell \ell'}^\Lambda(\tau , \tau'). \end{equation} Along with the Cauchy problem (\ref{De25}), (\ref{De27}), let us consider the following equation \begin{equation} \label{De29} \frac{\partial}{\partial t} Y_{\ell \ell'} (\tau , \tau'|t) = \sum_{\ell_1 , \ell_2\in \Lambda} \left[ J^\Lambda_{\ell_1 \ell_2} +\frac{\varepsilon}{|\Lambda|}\right] \int_0^\beta Y_{\ell \ell_1} (\tau , \tau'' |t) Y_{\ell_2 \ell'} (\tau'' , \tau'|t) {\rm d} \tau'', \end{equation} where $\varepsilon >0$ is a parameter, subject to the initial condition \begin{eqnarray} \label{De30} & & Y_{\ell \ell'}(\tau , \tau'|0) = X_{\ell \ell'}(\tau , \tau'|0) \\& & \qquad \ \ = (\delta_{\ell \ell'}/\beta) \sum_{k \in \mathcal{K}} \hat{u}(k) \cos k(\tau - \tau').
\nonumber \end{eqnarray} Let us show that under the condition (\ref{De20a}) there exists $\varepsilon_0 >0$ such that, for all $\varepsilon \in [0, \varepsilon_0)$, the problem (\ref{De29}), (\ref{De30}), $t \in [0,1]$, has the unique solution \begin{equation} \label{De31} Y_{\ell \ell'} (\tau , \tau'|t) = \frac{1}{\beta|\Lambda|} \sum_{p\in \Lambda_*} \sum_{k \in \mathcal{K}} \frac{\exp\left[ \imath (p , \ell - \ell') + \imath k (\tau - \tau')\right]}{[\hat{u}(k)]^{-1} - t [\hat{J}^\Lambda_0 + \varepsilon \delta_{p,0}] + t \mathit{\Upsilon}^\Lambda (p)}, \end{equation} where $\hat{J}_0$, $\mathit{\Upsilon}^\Lambda(p)$ are the same as in (\ref{De22}) and $\delta_{p,0}$ is the Kronecker symbol with respect to each of the components of $p$. By means of the Fourier transformation \begin{eqnarray} \label{De32} \qquad \quad Y_{\ell \ell'} (\tau , \tau'|t) & = & \frac{1}{\beta |\Lambda|}\sum_{p\in \Lambda_*} \sum_{k \in \mathcal{K}}\widehat{Y}(p,k|t) \exp\left[\imath (p, \ell - \ell') + \imath k (\tau - \tau') \right], \qquad \\ \widehat{Y}(p,k|t) & = & \sum_{\ell' \in \Lambda} \int_0^\beta Y_{\ell \ell'} (\tau , \tau'|t) \exp\left[-\imath (p, \ell - \ell') - \imath k (\tau - \tau') \right] {\rm d} \tau', \nonumber \end{eqnarray} we bring (\ref{De29}), (\ref{De30}) into the following form \begin{equation} \label{De33} \frac{\partial}{\partial t} \widehat{Y} (p,k|t) = \left[ \hat{J}^\Lambda (p) + \varepsilon \delta_{p,0} \right]\cdot \left[ \widehat{Y} (p,k|t)\right]^2, \quad \widehat{Y} (p,k|0) = \hat{u}(k), \end{equation} where, see (\ref{De22}), \begin{equation} \label{De34} \hat{J}^\Lambda(p) = \sum_{\ell'\in \Lambda} J^\Lambda_{\ell \ell'} \exp\left[ \imath (p, \ell - \ell')\right] = \hat{J}^\Lambda_0 - \mathit{\Upsilon}^\Lambda(p). \end{equation} Clearly, $\hat{J}^\Lambda_0 \leq \hat{J}_0$, $|\hat{J}^\Lambda(p)| \leq \hat{J}^\Lambda_0$, and $\hat{u}(k) \leq \hat{u}(0)$. 
Then in view of (\ref{De20a}), one finds $\varepsilon_0 >0$ such that, for all $\varepsilon \in (0, \varepsilon_0)$, the following holds \[ \left[\hat{J}^\Lambda (p) + \varepsilon \delta_{p,0} \right] \hat{u}(k) < 1, \] for all $p \in \Lambda_*$ and $k \in \mathcal{K}$. Thus, the problem (\ref{De33}) can be solved explicitly, which via the transformation (\ref{De32}) yields (\ref{De31}). Given $\theta \in (0,1)$, we set \begin{equation} \label{De35} Y^{(\theta)}_{\ell \ell'} (\tau , \tau'|t) = Y_{\ell \ell'} (\tau , \tau'|t+\theta), \quad t \in [0, 1-\theta]. \end{equation} Obviously, the latter function obeys the equation (\ref{De29}) on $t \in [0, 1-\theta]$ with the initial condition \begin{equation} \label{De36} Y^{(\theta)}_{\ell \ell'} (\tau , \tau'|0) = Y_{\ell \ell'} (\tau , \tau'|\theta) > Y_{\ell \ell'} (\tau , \tau'|0) = X_{\ell \ell'} (\tau , \tau'|0) . \end{equation} The latter inequality is due to the positivity of both sides of (\ref{De29}). Therefore, \begin{equation} \label{De37} Y^{(\theta)}_{\ell \ell'} (\tau , \tau'|t) >0, \end{equation} for all $\ell , \ell'\in \Lambda$, $\tau , \tau' \in [0, \beta]$, and $t \in [0, 1-\theta]$. Let us show now that under the condition (\ref{De20a}), for all $\theta \in (0, 1)$ and $\varepsilon \in (0, \varepsilon_0)$, \begin{equation}\label{De38} X_{\ell \ell'} (\tau , \tau'|t)< Y^{(\theta)}_{\ell \ell'} (\tau , \tau'|t), \end{equation} also for all $\ell , \ell'\in \Lambda$, $\tau , \tau' \in [0, \beta]$, and $t \in [0, 1-\theta]$. To this end we introduce \begin{equation} \label{De39} Z_{\ell \ell'}^{\pm} (\tau , \tau'|t) \ \stackrel{\rm def}{=} \ Y^{(\theta)}_{\ell \ell'} (\tau , \tau'|t) \pm X_{\ell \ell'} (\tau , \tau'|t), \quad t \in [0, 1-\theta]. 
\end{equation} Then one has from (\ref{De25}), (\ref{De29}) \begin{eqnarray} \label{De40} & & \frac{\partial}{\partial t} Z_{\ell \ell'}^{-} (\tau , \tau'|t)\\ & & \qquad = \frac{1}{2} \sum_{\ell_1 , \ell_2 \in \Lambda}J^\Lambda_{\ell_1\ell_2} \int_0^\beta \left\{Z_{\ell \ell_1}^{+} (\tau , \tau''|t) Z_{\ell' \ell_2}^{-} (\tau' , \tau''|t) \right. \nonumber \\ & & \qquad \left. + Z_{\ell \ell_1}^{-} (\tau , \tau''|t) Z_{\ell' \ell_2}^{+} (\tau' , \tau''|t) \right\}{\rm d}\tau'' \nonumber\\ & & \qquad + \frac{\varepsilon}{|\Lambda|} \sum_{\ell_1 ,\ell_2\in \Lambda} \int_0^\beta Y^{(\theta)}_{\ell \ell_1} (\tau , \tau''|t) Y^{(\theta)}_{\ell' \ell_2} (\tau' , \tau''|t){\rm d}\tau'' - S_{\ell \ell'}(\tau, \tau'|t), \nonumber \end{eqnarray} where $S_{\ell \ell'}(\tau, \tau'|t)$ stands for the first term on the right-hand side of (\ref{De25}). By (\ref{De39}) and (\ref{De36}) \begin{equation} \label{De41} Z^{-}_{\ell \ell'}(\tau, \tau'|0) = Y_{\ell \ell'}(\tau, \tau'|\theta) - X_{\ell \ell'}(\tau, \tau'|0) >0, \end{equation} which holds for all $\ell , \ell'\in \Lambda$, $\tau , \tau' \in [0, \beta]$. For every $\ell , \ell'\in \Lambda$, both $Y_{\ell \ell'}(\tau, \tau'|t)$, $X_{\ell \ell'}(\tau, \tau'|t)$ and, hence, $Z^{\pm}_{\ell \ell'}(\tau, \tau'|t)$ are continuous functions of their arguments. Set \begin{equation} \label{De42} \zeta (t) = \inf \left\{ Z_{\ell \ell'}^{-} (\tau , \tau'|t)\ | \ \ell , \ell' \in \Lambda , \ \ \tau , \tau' \in [0, \beta] \right\}. \end{equation} By (\ref{De41}), it follows that $\zeta (0) >0$. Suppose now that $\zeta (t_0) =0$ at some $t_0 \in [0, 1-\theta]$ and $\zeta (t) >0$ for all $t \in [0, t_0)$. Then by the continuity of $Z^{-}_{\ell \ell'}$, there exist $\ell , \ell'\in \Lambda$ and $\tau , \tau' \in [0, \beta]$ such that \[ Z^{-}_{\ell \ell'}(\tau , \tau'|t_0) = 0 \quad \ \ {\rm and} \quad Z^{-}_{\ell \ell'}(\tau , \tau'|t) > 0 \quad \ {\rm for} \ \ {\rm all} \ t < t_0. 
\] For these $\ell , \ell'\in \Lambda$ and $\tau , \tau' \in [0, \beta]$, the derivative $(\partial / \partial t) Z^{-}_{\ell \ell'}(\tau , \tau'|t)$ at $t=t_0$ is positive since on the right-hand side of (\ref{De40}) the third term is positive and the remaining terms are non-negative. But a differentiable function, which is positive at $t\in [0, t_0)$ and zero at $t=t_0$, cannot increase at $t=t_0$. Thus, $\zeta (t) >0$ for all $t\in [0, 1-\theta]$, which yields (\ref{De38}). By the latter estimate, we have \begin{eqnarray*} & & X_{\ell \ell'} (\tau , \tau'|1-\theta) < Y_{\ell \ell'} (\tau , \tau'|1) \\ & & \qquad = \frac{1}{\beta|\Lambda|} \sum_{p\in \Lambda_*} \sum_{k \in \mathcal{K}} \frac{\exp\left[ \imath (p , \ell - \ell') + \imath k (\tau - \tau')\right]}{[\hat{u}(k)]^{-1} - [\hat{J}^\Lambda_0 + \varepsilon \delta_{p,0}] + \mathit{\Upsilon}^\Lambda (p)}. \end{eqnarray*} All the functions above depend on $\theta$ and $\varepsilon$ continuously. Hence, passing here to the limit $\theta = \varepsilon \downarrow 0$ and taking into account (\ref{De28}) we obtain (\ref{De21}). $\square$ \vskip.1cm By means of Proposition \ref{periodtm}, the result just proven can be extended to all periodic elements of $\mathcal{G}^{\rm t}$. For $\mu \in \mathcal{G}^{\rm t}$, we set \begin{equation} \label{De43} K^\mu_{\ell \ell'} (\tau, \tau') = \big\langle \omega_{\ell} (\tau) \omega_{\ell'}(\tau')\big\rangle_{\mu}. \end{equation} \begin{theorem} \label{nagumo2} Let the stability condition (\ref{De6}) be satisfied.
Then for every periodic $\mu \in \mathcal{G}^{\rm t}$, the correlation function (\ref{De43}) satisfies the bound \begin{eqnarray} \label{De44} K^\mu_{\ell \ell'} (\tau, \tau')& \leq & Y_{\ell \ell'}(\tau , \tau') \\ & \stackrel{\rm def}{=} & \frac{1}{\beta (2 \pi)^d}\sum_{k \in \mathcal{K}} \int_{(-\pi, \pi]^d} \frac{\exp\left[\imath (p , \ell - \ell') + \imath k(\tau - \tau') \right]}{[\hat{u}(k)]^{-1} - \hat{J}_0 + \mathit{\Upsilon}(p)} {\rm d} p , \nonumber \end{eqnarray} where \begin{equation} \label{De45} \mathit{\Upsilon} (p) = \hat{J}_0 - \sum_{\ell'}J_{\ell \ell'} \exp[\imath (p , \ell - \ell')], \quad p \in (-\pi, \pi]^d. \end{equation} The same bound also holds for the correlation function $K^{\mu_0}_{\ell \ell'}(\tau, \tau')$, where $\mu_0\in \mathcal{G}^{\rm t}$ is the same as in Proposition \ref{MA1tm}. \end{theorem} \begin{remark} \label{nagumor} By (\ref{De18}), $[\hat{u}(k)]^{-1} \geq m(\mathit{\Delta}_m^2 + k^2)$. The upper bound in (\ref{De44}) with $[\hat{u}(k)]^{-1}$ replaced by $m ([\mathit{\Delta}^{\rm har}]^2 + k^2)$ turns into the infinite volume correlation function for the quantum harmonic crystal discussed at the beginning of subsection \ref{stabcr}. Thus, under the condition (\ref{De20a}) the decay of the correlation functions in the periodic states is not slower than it is in the stable quantum harmonic crystal. As we shall see in the next subsection, such a decay also stabilizes anharmonic crystals. \end{remark} For $\mathit{\Upsilon} (p) \sim \mathit{\Upsilon}_0 |p|^2$, $\mathit{\Upsilon}_0 >0$, as $p \rightarrow 0$, the asymptotics of the bound in (\ref{De44}) as $\sqrt{|\ell - \ell'|^2 + |\tau - \tau'|^2} \rightarrow + \infty$ will be the same as for the $(d+1)$-dimensional free field, which is well known, see claim (c) of Proposition 7.2.1, page 162 of \cite{[GJ]}.
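The exponential decay behind this free-field asymptotics can be illustrated by a one-dimensional toy computation (not taken from the text): for $\mathit{\Upsilon}(p)=2(1-\cos p)\sim p^2$ and a hypothetical positive mass term $m_2$ playing the role of $[\hat{u}(k)]^{-1}-\hat{J}_0$, the momentum integral is a lattice propagator whose consecutive ratios settle at $e^{-\kappa}$ with $\cosh\kappa=1+m_2/2$:

```python
import math

def corr(n, m2, steps=20000):
    """Midpoint quadrature of the 1-d lattice propagator over (-pi, pi)."""
    s = 0.0
    h = 2.0 * math.pi / steps
    for i in range(steps):
        p = -math.pi + (i + 0.5) * h
        s += math.cos(p * n) / (m2 + 2.0 * (1.0 - math.cos(p)))
    return s * h / (2.0 * math.pi)

m2 = 0.5                               # toy mass term; here kappa = ln 2
kappa = math.acosh(1.0 + m2 / 2.0)     # expected decay rate
ratios = [corr(n + 1, m2) / corr(n, m2) for n in range(1, 6)]
print(ratios)                          # all ratios close to exp(-kappa) = 0.5
```

For $m_2=1/2$ one has $\kappa=\ln 2$, so every printed ratio is close to $1/2$, confirming the exponential spatial decay of the bound.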
Thus, we have the following \begin{proposition} \label{nagumo3} If the function (\ref{De45}) is such that $\mathit{\Upsilon} (p) \sim \mathit{\Upsilon}_0 |p|^2$, $\mathit{\Upsilon}_0 >0$, as $p \rightarrow 0$, the upper bound in (\ref{De44}) has an exponential spatial decay. \end{proposition} \subsection{Decay of correlations in the vector case} \label{vectc} In the vector case, the eigenvalues of the Hamiltonian (\ref{De7}) are no longer simple; hence, the parameter (\ref{De4}) definitely equals zero. Therefore, one has to pick another parameter, which can describe the quantum rigidity in this case. If the model is rotation invariant, its dimensionality $\nu$ is just a parameter. Thus, one can compare the stability of such a model with the stability of the model with $\nu=1$. This approach was developed in \cite{KozZ}, see also \cite{[RevMF],[Kargol]}. Here we present the most general result in this domain, which is then used to study the quantum stabilization in the vector case. We begin by introducing the corresponding class of functions. A function $f:\mathbb{R}\rightarrow \mathbb{R}$ is called polynomially bounded if $f(x) / (1 + |x|^k)$ is bounded for some $k\in\mathbb{N}$. Let $\mathcal{F}$ be the set of continuous polynomially bounded $f:\mathbb{R}\rightarrow \mathbb{R}$ which are either odd and increasing or even and positive. \begin{proposition} \label{sdpn} Suppose that the model is rotation invariant and for all $\ell \in \Lambda$, $\Lambda \Subset \mathbb{L}$, $V_\ell (x) = v_\ell (|x|^2)$ with $v_\ell$ being convex on $\mathbb{R}_+$.
Then for any $\tau_1 , \dots, \tau_n\in [0, \beta]$, $\ell_1 , \dots , \ell_n \in \Lambda$, $j = 1 , \dots , \nu$, $f_1 , \dots , f_n \in \mathcal{F}$, \begin{equation} \label{SD} \langle f_1 (\omega^{(j)}_{\ell_1} (\tau_1)) \cdots f_n (\omega^{(j)}_{\ell_n} (\tau_n)) \rangle_{\nu_\Lambda} \leq \langle f_1 (\omega_{\ell_1} (\tau_1)) \cdots f_n (\omega_{\ell_n} (\tau_n)) \rangle_{\tilde{\nu}_\Lambda} , \end{equation} where $\tilde{\nu}_\Lambda$ is the Euclidean Gibbs measure (\ref{a27}) of the scalar model with the same $J_{\ell\ell'}$ as the model considered and with the anharmonic potentials $V_\ell (q) = v_\ell (q^2)$. \end{proposition} By this statement one immediately gets the following fact. \begin{theorem} \label{sdtm} Let the model be translation invariant and as in Proposition \ref{sdpn}. Let also $\mathit{\Delta}_m$ be the gap parameter (\ref{De4}) of the scalar model with the same interaction intensities $J_{\ell\ell'}$ and with the anharmonic potentials $V (q) = v(q^2)$. Then if the stability condition (\ref{De6}) is satisfied, the longitudinal correlation function \begin{equation} \label{SD0} K^\mu_{\ell \ell'} (\tau, \tau') = \langle \omega^{(j)}_{\ell'}(\tau)\omega^{(j)}_{\ell}(\tau')\rangle_\mu , \quad \ \ j = 1, 2, \dots , \nu, \end{equation} corresponding to any of the periodic states $\mu\in \mathcal{G}^{\rm t}$, as well as to any of the accumulation points of the family $\{\pi_\Lambda (\cdot |0)\}_{\Lambda \Subset \mathbb{L}}$, obeys the estimate (\ref{De44}) in which $\hat{u}(k)$ is calculated according to (\ref{De17a}) for the one-dimensional anharmonic oscillator of mass $m$ and the anharmonic potential $v(q^2)$. \end{theorem} \subsection{Suppression of phase transitions} \label{sps} From the `physical' point of view, the decay of correlations (\ref{De44}) already corresponds to the lack of any phase transition. However, in the mathematical theory, one should show this as a mathematical fact based on the definition of a phase transition.
The most general one is Definition \ref{phdef}, according to which the suppression of phase transitions corresponds to the uniqueness of tempered Euclidean Gibbs states. Properties like the differentiability of the pressure, cf. Definition \ref{landau}, or the lack of the order parameter, see Definition \ref{rppdf}, may also indicate the suppression of phase transitions, but in a weaker sense. The aim of this section is to demonstrate that the decay of correlations caused by the quantum stabilization yields the twice differentiability of the pressure, which in the scalar case yields the uniqueness. This result is then extended to models which are not necessarily translation invariant. In the scalar case, the most general result is the following statement, see Theorem 3.13 in \cite{[KoT]}. \begin{theorem} \label{7.1tm} Let the anharmonic potentials $V_\ell$ be even and suppose there exists a convex function $v:\mathbb{R}_+ \rightarrow \mathbb{R}$ such that, for any $V_\ell$, \begin{equation} \label{De51} V_\ell (x_\ell) - v(x_\ell^2) \leq V_\ell (\tilde{x}_\ell) - v(\tilde{x}_\ell^2) \quad {\rm whenever} \ \ x_\ell^2 < \tilde{x}_\ell^2. \end{equation} For such $v$, let $\mathit{\Delta}_m$ be the gap parameter of the one-particle Hamiltonian (\ref{U1}) with the anharmonic potential $v(q^2)$. Then the set of tempered Euclidean Gibbs measures of this model is a singleton if the stability condition (\ref{De6}) involving $\mathit{\Delta}_m$ and the interaction parameter $\hat{J}_0$ of this model is satisfied. \end{theorem} The proof of this theorem is conducted by comparing the model with the translation invariant reference model with the anharmonicity potential $V(q) = v(q^2)$. By Proposition \ref{MAtm}, for the model considered and the reference model, there exist maximal elements, $\mu_+$ and $\mu_{+}^{\rm ref}$, respectively.
By means of the symmetry $V_\ell (q) = V_{\ell}(-q)$ and the FKG inequality, one proves that, for both models, the uniqueness occurs if \begin{equation} \label{SD1} \langle \omega_\ell (0) \rangle_{\mu^{\rm ref}_{+}} = 0, \qquad \ \ \langle \omega_\ell (0) \rangle_{\mu_{+}} = 0, \quad \ \ {\rm for} \ \ {\rm all} \ \ \ell. \end{equation} By the GKS inequalities, the condition (\ref{De51}) implies \begin{equation} \label{SD2} 0 \leq \langle \omega_\ell (0) \rangle_{\mu_{+}} \leq \langle \omega_\ell (0) \rangle_{\mu^{\rm ref}_{+}}, \end{equation} which means that the reference model is less stable with respect to the phase transitions than the initial model. The reference model is translation invariant. By means of a technique employing this fact, one proves that the decay of correlations in the reference model which occurs under the stability condition (\ref{De6}) yields, see Theorem \ref{nagumo1}, \[ \langle \omega_\ell (0) \rangle_{\mu^{\rm ref}_{+}} = 0, \] and therefrom (\ref{SD1}) by (\ref{SD2}). The details can be found in \cite{[KoT]}. As was mentioned above, in the vector case we did not manage to prove that the decay of correlations implies the uniqueness. The main reason for this is that the proof of Theorem \ref{7.1tm} was based on the FKG inequality, which can be proven for scalar models only. In the vector case, we get a weaker result, by which the decay of correlations yields the normality of thermal fluctuations. To this end we introduce \emph{the fluctuation operators} \begin{equation} \label{De64} Q^{(j)}_\Lambda = \frac{1}{\sqrt{|\Lambda|}} \sum_{\ell \in \Lambda} q^{(j)}_\ell, \qquad \Lambda \Subset \mathbb{L}, \ \ \ j = 1, \dots, \nu. \end{equation} Such operators correspond to \emph{normal fluctuations}. 
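As a toy illustration of what `normal' means here (a caricature only, with independent Gaussian displacements in place of the interacting Euclidean measures), the second moment of $Q^{(j)}_\Lambda$ stays bounded as $|\Lambda|$ grows; near a critical point one would instead need a normalization $|\Lambda|^{-\alpha}$ with $\alpha>1/2$ to keep it finite:

```python
import random

random.seed(7)

def second_moment(n_sites, n_samples=8000, sigma=1.3):
    """Monte-Carlo estimate of <Q^2> for i.i.d. centered Gaussian displacements."""
    tot = 0.0
    for _ in range(n_samples):
        # Q = |Lambda|^{-1/2} * sum of site displacements
        q = sum(random.gauss(0.0, sigma) for _ in range(n_sites)) / n_sites ** 0.5
        tot += q * q
    return tot / n_samples

moments = {n: second_moment(n) for n in (10, 100, 400)}
print(moments)   # each value stays near sigma**2 = 1.69, independently of n
```

The boundedness of these moments as the number of sites grows is precisely the behavior required in the definition below.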
\begin{definition} \label{normalf} The fluctuations of the displacements of oscillators are called normal if the Matsubara functions (\ref{a9}) for the operators $F_1= Q^{(j_1)}, \dots , F_n = Q^{(j_n)}$, remain bounded as $\Lambda \nearrow \mathbb{L}$. \end{definition} If $\Lambda$ is a box, the parameter (\ref{gri6}) can be written \begin{equation} \label{De65} P^{(\alpha)}_\Lambda = \frac{1}{\beta^2 |\Lambda|^\alpha}\sum_{j=1}^\nu \int_0^\beta\int_0^\beta \Gamma^{\beta , \Lambda}_{Q^{(j)}_\Lambda, Q^{(j)}_\Lambda} (\tau , \tau') {\rm d} \tau {\rm d}\tau'. \end{equation} Thus, if the fluctuations are normal, phase transitions of the second order (and all the more of the first order) do not occur. Like in the proof of Theorem \ref{sdtm}, the model is compared with the scalar ferromagnetic model with the same mass and the anharmonic potential $v(q^2)$. Then the gap parameter $\mathit{\Delta}_m$ is the one calculated for the latter model. \begin{theorem} \label{nagumo5} Let the model be the same as in Theorem \ref{sdtm} and let the stability condition involving the interaction parameter $\hat{J}_0$ of the model and the gap parameter $\mathit{\Delta}_m$ corresponding to its scalar analog be satisfied. Then the fluctuations of the displacements of the oscillators remain normal at all temperatures. \end{theorem} \subsection{Comments} \begin{itemize} \item \emph{Subsection \ref{stabcr}:} In an ionic crystal, the ions usually form massive complexes the dynamics of which determine the physical properties of the crystal, including its instability with respect to structural phase transitions, see \cite{[BC]}. Such massive complexes can be considered as classical particles; hence, the phase transitions are described in the framework of classical statistical mechanics. At the same time, in a number of ionic crystals containing localized light ions certain aspects of the phase transitions are apparently unusual from the point of view of classical physics. 
Their presence can only be explained in a quantum-mechanical context, which points to the essential role of the light ions. This influence of the quantum effects on the phase transition was detected experimentally already in the early 1970's. Here we mention the data presented in \cite{[Blinc],[12]} on the KDP-type ferroelectrics and in \cite{[KMueller]} on the YBaCuO-type superconductors. These data were then used for justifying the corresponding theoretical models and tools of their study. On a theoretical level, the influence of quantum effects on the structural phase transitions in ionic crystals was first discussed in the paper \cite{[9]}, where the particle mass was chosen as the only parameter responsible for these effects. The conclusion obtained there was that the long range order, see Definition \ref{rppdf}, becomes impossible at all temperatures if the mass is sufficiently small. Later on, a number of rigorous studies of quantum effects inspired by this result as well as by the corresponding experimental data have appeared, see \cite{[Minlos],[VZ]} and the references therein. Like in \cite{[9]}, in these works the reduced mass (\ref{In}) was the only parameter responsible for the effects. The result obtained was that the long range order is suppressed at all temperatures in the light mass limit $m\rightarrow 0$. Based on the study of the quantum crystals performed in \cite{[AKK1],[AKK2],[AKKR],[CRAS],[AKKRNN]}, a mechanism of quantum effects leading to the stabilization against phase transitions was proposed, see \cite{[AKKRPRL]}. \item \emph{Subsection \ref{4.2.ss}:} According to \cite{[AKKRPRL]} the key parameter responsible for the quantum stabilization is $\mathcal{R}_m = m\mathit{\Delta}_m^2$, see (\ref{De5}). In the harmonic case, $m \mathit{\Delta}_m^2$ is merely the oscillator rigidity and the stability of the crystal corresponds to large values of this quantity.
That is why the parameter $m \mathit{\Delta}_m^2$ was called quantum rigidity and the effect was called quantum stabilization. If the tunneling between the wells gets more intensive (closer minima), or if the mass diminishes, $m \mathit{\Delta}_m^2$ gets bigger and the particle `forgets' about the details of the potential energy in the vicinity of the origin (including instability) and oscillates as if its equilibrium at zero is stable, like in the harmonic case. \item \emph{Subsection \ref{gap}:} Theorems \ref{gap1tm} and \ref{gap2tm} are new. Preliminary results of this kind were obtained in \cite{[AKK2],[Koz4]}. \item \emph{Subsection \ref{7.2.1}:} Theorems \ref{nagumo1}, \ref{nagumo2}, \ref{nagumo3} were proven in \cite{[KK]}. \item \emph{Subsection \ref{vectc}:} Various scalar domination estimates were obtained in \cite{[KDres],Koz,KozZ}. \item \emph{Subsection \ref{sps}:} Theorem \ref{7.1tm} was proven in \cite{[KoT]}. The proof of Theorem \ref{nagumo5} was done in \cite{KozZ}. The suppression of abnormal fluctuations in the hierarchical version of the model (\ref{U1}), (\ref{U2}) was proven in \cite{[AKK1]}. \end{itemize} \section*{Acknowledgments} The authors are grateful to M. R\"ockner and T. Pasurek for valuable discussions. The financial support by the DFG through the project 436 POL 113/115/0-1 and through SFB 701 ``Spektrale Strukturen und topologische Methoden in der Mathematik" is cordially acknowledged. A. Kargol is grateful for the support by the KBN under the Grant N N201 0761 33.
\section{Introduction} \label{sec:intro} Although it is generally accepted that soft $\gamma$-ray spectra of supernova remnants interacting with molecular clouds (SNRs) result from decay of $\pi^0$ produced via inelastic collisions of high-energy ions with nuclei in the background due to evolution of SNR shocks in a high-density environment \citep{2010Sci...327.1103A, Giuliani_2011, 2019ApJ...874...50Z}, the nature of hard $\gamma$-ray spectra has been a subject of extensive investigations \citep{2012ApJ...761..133Y, 2014MNRAS.445L..70G, 2016ApJ...821...43Z, 2018A&A...612A...6H, 2019MNRAS.487.3199C}. In the leptonic scenario for the $\gamma$-ray emission, the model parameters are well constrained and appear to be consistent with expectations of the diffusive shock particle acceleration mechanism \citep{2019ApJ...876...24Z}. Hadronic models require stronger magnetic fields and less efficient electron acceleration \citep{2008MNRAS.386L..20B}. These results have profound implications for the origin of cosmic rays (CRs), especially those with energies lower than the CR spectral knee energy of $\sim 1$ PeV \citep{2020ChA&A..44....1Z}. Considering the anomalous CR spectra discovered with space measurements \citep{PhysRevLett.114.171103,PhysRevLett.115.211101}, \cite{2017ApJ...844L...3Z} proposes that GeV CRs are mostly accelerated in SNRs interacting with molecular clouds with relatively lower shock speeds, giving rise to softer spectra, while higher energy CRs may be attributed to particle acceleration in relatively younger SNRs with higher shock speeds, leading to slightly harder high-energy spectra. In this scenario, high-energy electron acceleration efficiency in young SNRs can indeed be lower than that for GeV electrons in old SNRs \citep{2019MNRAS.482.5268Z}, reminiscent of the hadronic scenario for hard $\gamma$-ray spectra \citep{2008MNRAS.386L..20B}.
One of the challenges facing hadronic models is that ion spectra need to cut off at tens of TeVs to account for the observed $\gamma$-ray spectral cutoffs, suggesting that SNRs are not PeVatrons. The recent CR proton spectral measurement by DAMPE shows that there appears to be a spectral hump at tens of TeVs \citep{eaax3793}, which has been attributed to a nearby SNR, such as Geminga \citep{2019JCAP...12..007Q}, implying indeed that shocks of SNRs can only accelerate protons up to a few tens of TeV \citep{1983A&A...125..249L, 2013MNRAS.431..415B}. Observations of young SNR Cas A and $\gamma$-Cygni SNR also imply a high-energy cutoff of the ion distribution in the TeV energy range \citep{2019ApJ...874...98Z, 2020ApJ...894...51A, 2020arXiv201015854M}. HESS J1912+101 was first discovered in 2008 by the H.E.S.S. collaboration with a shell structure \citep{2008A&A...484..435A}. However, its radio counterpart was not identified until very recently, through polarization measurements \citep{2019RAA....19...45R}, implying the presence of a strong large-scale magnetic field. Interestingly, observations of molecular clouds in the direction of HESS J1912+101 suggest that it is associated with an SNR with an age of $70-200$ kilo-years (kyrs) \citep{Su_2017}, which is consistent with the characteristic age of 170 kyrs for pulsar J1913+1011 inside it \citep{2002MNRAS.335..275M}. SNR G279.0+1.1 has similar properties with $\gamma$-ray emission up to 0.5 TeV detected recently \citep{2020MNRAS.492.5980A}. A recent population study of SNRs shows that higher energy particles are preferentially accelerated in younger SNRs with higher shock speeds \citep{2019ApJ...874...50Z}. In the absence of continuous TeV particle acceleration in old SNRs, it is very challenging to produce the TeV emission from HESS J1912+101 via leptonic processes. In section \ref{sec:1912}, we show the challenges of reproducing the $\gamma$-ray emission from HESS J1912+101 and SNR G279.0+1.1 via leptonic processes.
Section \ref{sec:296} is dedicated to the study of G296.5+10.0 for its hard $\gamma$-ray spectrum similar to HESS J1912+101. In section \ref{sec:sample} we fit the multi-wavelength spectra of another 10 SNRs with hard $\gamma$-ray spectra in the hadronic and leptonic scenarios and discuss the model implications. Our conclusions are drawn in section \ref{sec:conclusion}. \section{HESS J1912+101} \label{sec:1912} Recent radio observations of HESS J1912+101 revealed strongly polarized emission at 6 cm from the northeast half of the shell \citep{2019RAA....19...45R}. Due to strong radio emission from the surroundings, a shell structure cannot be identified in the total intensity maps. The total polarized flux density at 6 cm is about 0.5$\pm$0.2 Jy, which can be considered as a lower limit to the total flux density. Assuming a polarization fraction of 20\%, \citet{2019RAA....19...45R} obtained a total flux density of 2.5$\pm$1.0 Jy, which will be treated as an upper limit in the following. The strong polarization of radio emission implies the presence of large scale magnetic fields, which should also be stronger than the typical value of $3\mu$G for the interstellar medium. Via CO and HI observations, \citet{Su_2017} showed that HESS J1912+101 is likely associated with an old SNR with an age of $0.7-2.0\times 10^5$ years at a distance of $\sim$ 4.1 kpc. The good correlation between the TeV emission and the disturbed gas revealed by these observations led them to suggest a hadronic origin for the $\gamma$-ray emission. \begin{figure}[ht!] \plotone{coolingtime.pdf} \caption{Electron cooling time due to synchrotron and inverse Compton emission. The magnetic fields and properties of the background soft photons are indicated. The grey band indicates the age range obtained via molecular cloud observations \citep{Su_2017}.
\label{fig:times}} \end{figure} Recently \cite{2019ApJ...874...50Z} studied a sample of $\gamma$-ray SNRs and found that in general high-energy particles in these sources have a broken power-law distribution with the break energy and the low-energy spectral index decreasing with the increase of SNR age. These results imply that higher energy particles are mostly accelerated in younger SNRs with relatively higher shock speeds. The acceleration of TeV particles quenches for SNRs with ages greater than 10 kyr. This challenges a leptonic origin for the TeV emission from HESS J1912+101. Figure \ref{fig:times} shows the energy loss timescale of electrons due to the synchrotron and inverse Compton (IC) processes. It can be seen that for a typical value of the interstellar magnetic field, the maximum energy of electrons in HESS J1912+101 should be about 10 TeV. The $\gamma$-ray emission produced via IC by such electrons should cut off below 10 TeV. The left panel of Figure \ref{fig:lep} shows the evolution of electron distribution under the influence of energy loss due to synchrotron and IC processes for an injected broken power-law spectrum with a maximum energy of 1 PeV at the beginning. Compared to the age of this SNR, TeV and higher energy particles are assumed to be accelerated instantaneously in the early stage of the SNR evolution \citep{2013MNRAS.431..415B}. \begin{figure}[ht!] \plottwo{lep_a.pdf}{lep_b.pdf} \caption{Left: evolution of electron distribution due to synchrotron and IC losses. The initial distribution is a broken power-law with a high-energy cutoff with parameters indicated in the figure. Right: evolution of the corresponding emission spectra compared with the spectral energy distribution (SED) of HESS J1912+101. The radio data are taken from \cite{2019RAA....19...45R}. The lower limit is for the polarized emission. The X-ray upper limit is from \cite{2008ApJ...682.1177C}, and the blue data points are from the H.E.S.S.
collaboration \citep{2018A&A...612A...8H}. \label{fig:lep}} \end{figure} \begin{figure*}[!htb] \centering \includegraphics[width=3.5in]{j1912-1gev-tsmap.pdf} \includegraphics[width=3.5in]{sed-j1912.pdf} \caption{Left: $4^\circ \times 4^\circ$ TS map of photons above 1 GeV around HESS J1912+101 after subtracting $\gamma$-ray emission from 4FGL J1913.3+1019. The green crosses and circle represent the 4FGL sources and the best-fit radius of the uniform disk is marked by the white circle. The cyan contours show the TeV $\gamma$-ray emission of HESS J1912+101 \citep{2018A&A...612A...8H}. Right: The $\gamma$-ray SED of HESS J1912+101 (black dots) and 4FGL J1913.3+1019 (green dots). The blue dots are the HESS data of HESS J1912+101 \citep{2018A&A...612A...8H}. The gray histogram denotes the TS value of HESS J1912+101 for each energy bin and the arrows indicate the upper limits with 95\% significance level.} \label{fig:tsmap-sed-j1912} \end{figure*} Recent analyses of Fermi data by \citet{2020ApJ...889...12Z} uncovered a compact GeV source with a soft spectrum within the shell of HESS J1912+101, and there appears to be extended diffuse emission. Here we re-analyzed the {\em Fermi}-LAT data around HESS J1912+101. In the region of HESS J1912+101, there are four 4FGL sources (4FGL J1913.3+1019, 4FGL J1914.7+1012, 4FGL J1911.7+1014, 4FGL J1912.7+0957). Among them, 4FGL J1913.3+1019 is associated with PSR J1913+1011 \citep{2002MNRAS.335..275M, 2019ApJ...871...78S, 2020ApJ...889...12Z} and the others are unidentified sources with soft spectra. We removed these three unidentified 4FGL sources from the model file and treated them as parts of the diffuse emission around HESS J1912+101. Using the data above 1 GeV with the {\em Fermipy} package version 0.19.0 \citep{2017ICRC...35..824W}, we tested several spatial templates for HESS J1912+101, including a uniform disk, a 2D Gaussian model, and the HESS image.
The uniform disk model (R.A., decl.=$288.262^{\circ} \pm 0.033^{\circ}$, $10.075^{\circ} \pm 0.037^{\circ}$ with $\sigma = 0.496^{\circ} \pm 0.035^{\circ}$) is favored, while the 2D Gaussian template (R.A., decl.=$288.277^{\circ} \pm 0.034^{\circ}$, $10.135^{\circ} \pm 0.041^{\circ}$ with $\sigma = 0.413^{\circ} \pm 0.036^{\circ}$) gives an equally good representation of the data. We produced a TS map above 1 GeV after subtracting the $\gamma$-ray emission from 4FGL J1913.3+1019, which is shown in the left panel of Figure \ref{fig:tsmap-sed-j1912}. In the energy range of 1 GeV - 500 GeV, the TS value of HESS J1912+101 is fitted to be 469.0, and the spectral index of a power-law model is 2.54$\pm$0.07. The integral photon flux from 1 GeV to 500 GeV is $(7.72\pm0.44)\times10^{-9}$ photon cm$^{-2}$ s$^{-1}$. For the point source 4FGL J1913.3+1019, the TS value and the power-law spectral index are fitted to be 24.2 and 1.90$\pm$0.17. The $\gamma$-ray SEDs of HESS J1912+101 and 4FGL J1913.3+1019 shown in the right panel of Figure \ref{fig:tsmap-sed-j1912} were produced by dividing all data between 1 GeV and 500 GeV into 10 bins with identical width in the logarithm of energy, and the 95\% upper limits are for energy bins with the TS value of HESS J1912+101 smaller than 4.0. The $\gamma$-ray SED shows a new spectral component below 10 GeV, which may be attributed to shock interaction with molecular clouds. The nature of this soft extended component will be explored in a separate paper. Here we focus on spectral modeling above 10 GeV. The right panel of Figure \ref{fig:lep} shows that even with a magnetic field of $3\ \mu$G, the age of HESS J1912+101 needs to be less than 10 kyrs to reproduce the $\gamma$-ray spectrum, which contradicts molecular cloud observations \citep{Su_2017} and properties of the associated pulsar \citep{2019ApJ...871...78S}.
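The cooling argument can be reproduced at the order-of-magnitude level. The sketch below is a simplification (Thomson-regime losses only, ignoring Klein-Nishina suppression, which would only lengthen the IC time); it combines synchrotron losses with IC losses on the CMB and the 30 K, 1 eV cm$^{-3}$ infrared field adopted in the text:

```python
import math

SIGMA_T = 6.652e-25    # Thomson cross-section, cm^2
C_LIGHT = 2.998e10     # speed of light, cm/s
MEC2 = 8.187e-7        # electron rest energy, erg
ERG_PER_EV = 1.602e-12
SEC_PER_YR = 3.156e7

def cooling_time_yr(E_TeV, B_uG, U_ph_eV=0.26 + 1.0):
    """E / (dE/dt) for combined synchrotron + Thomson-regime IC losses."""
    E = E_TeV * 1e12 * ERG_PER_EV                    # electron energy, erg
    U_B = (B_uG * 1e-6) ** 2 / (8.0 * math.pi)       # magnetic energy density
    U_tot = U_B + U_ph_eV * ERG_PER_EV               # add CMB + IR target fields
    dEdt = (4.0 / 3.0) * SIGMA_T * C_LIGHT * (E / MEC2) ** 2 * U_tot
    return E / dEdt / SEC_PER_YR

t10 = cooling_time_yr(10.0, 3.0)
print(f"t_cool(10 TeV, B = 3 uG) ~ {t10:.1e} yr")
```

For $B=3\ \mu$G the result is a few $10^4$ yr at 10 TeV, shorter than the $70-200$ kyr age range, which is the essence of the constraint shown in Figure \ref{fig:times}.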
Here the $\gamma$-ray emission is produced via IC processes \citep{1968PhRv..167.1159J}, and besides the cosmic microwave background radiation (CMB), we assume an infrared photon background with $T=30$ K and an energy density of 1 eV cm$^{-3}$ \citep{2006ApJ...648L..29P}. To facilitate model comparison, these values for background photons will be used in this paper unless specified otherwise. Although the $\gamma$-ray spectrum is flat, it implies an electron distribution with an index of $\sim$3 in the leptonic scenario. Considering the low radio flux density, it is evident that energetic electrons need to have a broken power-law distribution with a hard low-energy spectrum. Here the spectral index $\alpha=1.7$ is indicated in the left panel of Figure \ref{fig:lep}. Beyond the break energy of $E_{\rm br}=50$ GeV, the spectral index is 2.7. \begin{figure}[ht!] \plottwo{J1912_b.pdf}{J1912_a.pdf} \caption{Fit to the multi-wavelength SED of HESS J1912+101 in the hadronic scenario for the $\gamma$-ray emission. The total energy of protons above 1 GeV is $10^{50}$ erg (Left) and $10^{49}$ erg (Right). The model parameters are indicated on the Figures. The solid line is for $\gamma$-ray emission via hadronic processes, while the dotted, dotted-dashed, and dashed lines are for electron bremsstrahlung and IC of infrared and CMB photons, respectively. \label{fig:1912}} \end{figure} On the other hand, the $\gamma$-ray spectrum can be readily fitted in the hadronic scenario. For the sake of simplicity, we assume a power-law distribution with an identical index for electrons and ions: $N(R_i) = N_{0,i} R_i^{-\alpha} \exp[-(E_i/E^i_{\rm cut})^{\delta}]$, where $R_i=p_i/q_i$ is the rigidity of the particle, $E$, $p$ and $q$ are the particle energy, momentum and charge, respectively, and the index $i$ represents different particle species.
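For a given total energy content $W_{\rm p}$ of protons above 1 GeV, the normalization $N_{0,\rm p}$ follows from a single quadrature. The sketch below uses placeholder values ($\alpha=2$, $E^p_{\rm cut}=30$ TeV, not the fitted ones) and approximates rigidity by energy for relativistic protons:

```python
import math

def energy_integral(alpha, E_cut_GeV, E_min_GeV=1.0, E_max_GeV=1e7, n=4000):
    """int of E * E^(-alpha) * exp(-E/E_cut) dE over [E_min, E_max], log-grid trapezoid."""
    dl = math.log(E_max_GeV / E_min_GeV) / n
    total = 0.0
    for i in range(n + 1):
        E = E_min_GeV * math.exp(i * dl)
        g = E ** (2.0 - alpha) * math.exp(-E / E_cut_GeV)   # E*N(E)/N0 * dE/dlnE
        total += (0.5 if i in (0, n) else 1.0) * g * dl
    return total

W_p_GeV = 1e50 / 1.602e-3                 # 10^50 erg expressed in GeV
N0 = W_p_GeV / energy_integral(alpha=2.0, E_cut_GeV=3e4)
print(f"N0 ~ {N0:.2e} (GeV-based units, for alpha = 2)")
```

For $\alpha=2$ the integral is close to the exponential integral $E_1(E_{\rm min}/E_{\rm cut})\approx\ln(E_{\rm cut}/E_{\rm min})-\gamma_E\approx 9.7$, so $N_0$ scales simply as $W_{\rm p}$ divided by a logarithm.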
Considering that the flux ratio of TeV CR electrons and protons is less than $0.1\%$ and that high-energy electrons are subject to radiative energy losses as they propagate in the Milky Way galaxy, we fix the density ratio of electrons and protons at 1 GeV to $K_{\rm ep}=N_{0,\rm e}/N_{0,\rm p}=5\times 10^{-3}$. The total energy content of protons above 1 GeV ($W_{\rm p}$) determines the normalization of the particle distributions. We will usually consider two cases: $W_{\rm p}=10^{49}$ erg and $W_{\rm p}=10^{50}$ erg. The mean background density $n_{\rm H}$, the magnetic field $B$, the spectral index $\alpha$ and the cutoff energy of protons $E_{\rm cut}^p$ can be adjusted to fit the multi-wavelength SED. The cutoff energy of electrons $E_{\rm cut}^e$ is obtained by requiring that the corresponding synchrotron energy loss time be equal to the age of the remnant, and we assume a super-exponential high-energy cutoff for the electron distribution with $\delta =2$ unless specified otherwise. The high-energy cutoff of the ion distribution is always exponential with $\delta =1$. When calculating the $\gamma$-ray emission via the hadronic processes, we only consider protons; contributions from other ions are approximated by multiplying the proton-produced $\gamma$-ray flux by a factor of 1.84 \citep{2009APh....31..341M}. Figure \ref{fig:1912} shows the results of the spectral fit, and the model parameters are listed in Table \ref{tab:fitpatameters}. When calculating the total energy of the magnetic field $W_{B}$, we assume a uniform magnetic field with a volume filling factor of 1. This magnetic field energy should therefore be considered as an upper limit. Although both sets of parameters give good fits to the radio and $\gamma$-ray spectra, the model with a weaker magnetic field, and therefore more energy in energetic particles (left panel of Figure \ref{fig:1912}), is favored because it implies a more reasonable value of the magnetic field energy.
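Two of the quantities above reduce to one-line estimates: equating the single-electron synchrotron loss time $\tau = 6\pi (m_ec^2)^2/(\sigma_T c E B^2)$ to the remnant age gives $E_{\rm cut}^e$, and the filling-factor-one assumption gives $W_B = (B^2/8\pi)\,(4\pi R^3/3)$. A minimal sketch in CGS units (our own illustrative implementation, not the fitting code):

```python
import math

SIGMA_T = 6.6524e-25    # Thomson cross section [cm^2]
ME_C2 = 8.187e-7        # electron rest energy [erg]
C = 2.998e10            # speed of light [cm/s]
ERG_PER_TEV = 1.602
SEC_PER_KYR = 3.156e10
CM_PER_PC = 3.086e18

def synchrotron_cutoff_TeV(B_uG, age_kyr):
    """Electron energy whose synchrotron loss time equals the SNR age."""
    B = B_uG * 1e-6                     # Gauss
    t = age_kyr * SEC_PER_KYR           # s
    return 6 * math.pi * ME_C2**2 / (SIGMA_T * C * B**2 * t) / ERG_PER_TEV

def magnetic_energy_erg(B_uG, R_pc):
    """W_B for a uniform field filling a sphere of radius R (filling factor 1)."""
    u_B = (B_uG * 1e-6)**2 / (8 * math.pi)            # erg/cm^3
    V = 4.0 / 3.0 * math.pi * (R_pc * CM_PER_PC)**3   # cm^3
    return u_B * V

# HESS J1912+101 (B = 120 uG, age = 170 kyr, R = 15 pc):
# synchrotron_cutoff_TeV(120, 170) gives ~0.0051 TeV and
# magnetic_energy_erg(120, 15) gives ~2.4e50 erg,
# matching the values listed in the parameter table.
```

The steep $B^{-2}\,t^{-1}$ scaling is why the old, strongly magnetized remnants in the sample end up with cutoff energies in the GeV range.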
Moreover, the case with a strong magnetic field has a synchrotron spectrum cutting off in the radio band (the right panel of Figure \ref{fig:1912}), which appears to be too low. Of course, the magnetic field is not well constrained. For the given radio emission, one can always compensate for an increase in the magnetic field with a decrease in the energetic particle energy. \citet{2020MNRAS.492.5980A} recently carried out a detailed analysis of G279.0$+$1.1, a huge SNR with a radius of $\sim 40$ pc. It has a very hard $\gamma$-ray spectrum with a spectral index of $1.86\pm 0.09$. Although it has not been detected in the TeV range, the $\gamma$-ray spectrum obtained with the Fermi-LAT extends to $0.5$ TeV without any indication of a spectral softening at high energies. The age of this remnant is greater than $100$ kyr. Figure \ref{fig:intermediate4} shows our spectral fits in the hadronic scenario for the $\gamma$-ray emission. It is interesting to note that there appears to be a low-energy spectral component near 1 GeV, reminiscent of the spectral component below 10 GeV for HESS J1912+101. Compared with HESS J1912+101, SNR G279.0$+$1.1 has a similar $\gamma$-ray luminosity but a radio luminosity and radius a few times higher, leading to a higher magnetic field and a larger magnetic energy. However, the small volume filling factor of the radio emission region can reduce the magnetic energy significantly. \begin{figure} \plottwo{G279_b.pdf}{G279_a.pdf} \caption{Same as Figure \ref{fig:1912} but for G279.0$+$1.1. The total energy of protons above 1 GeV is $10^{50}$ erg (Left) and $5\times 10^{49}$ erg (Right).
\label{fig:intermediate4}} \end{figure} \section{G296.5+10.0} \label{sec:296} G296.5$+$10.0 is an SNR with a bilateral morphology in radio and X-rays, with an angular extension of 90 arcmin $\times$ 65 arcmin, and its distance is estimated as 2.1 kpc \citep{2000AJ....119..281G}. It has relatively bright radio emission with a typical spectral index of $-0.5$ \citep{1994MNRAS.270..106M}. Because of its large size, Chandra, XMM-Newton, and Suzaku have not mapped the entire SNR, and only ROSAT PSPC data provided the thermal spectrum for the whole SNR \citep{1987MNRAS.225..199K}. Nevertheless, five {\sl XMM-Newton}~observations taken in 2016 roughly cover the bright limbs of G296.5+10.0 (PI: Brian Williams). \begin{figure}[htbp] \centering \includegraphics[angle=0,width=0.24\textwidth]{101.png} \includegraphics[angle=0,width=0.35\textwidth]{pn-bkg.pdf} \includegraphics[angle=0,width=0.36\textwidth,origin=c]{101.pdf} \caption{The left panel shows X-ray images of the northeast part of SNR G296.5+10.0, where the red, green, and blue colors represent emission in the 0.2-0.5 keV, 0.5-1.0 keV, and 1.0-2.0 keV bands, respectively. The color bars at the bottom are in units of photons cm$^{-2}$ s$^{-1}$. The blue circle with a radius of 5.15$'$ is the region for spectral extraction, while the other circles and the square are for the `local background' regions. The green box is for the PN data, and the white and the yellow circles are for the MOS1 and MOS2 data, respectively. A few point sources are marked in red. The middle panel shows the source-region spectrum and the scaled `local background' spectrum (according to the effective area) of the PN data, with the model of unresolved AGN emission in the cosmic X-ray background over-plotted.
The right panel shows the {\sl XMM-Newton}~PN and MOS spectra with their `local background' subtracted, and the best-fit models are shown in red. The black curve, modeled by an absorbed power law, represents the non-thermal emission, which is likely caused by the cosmic X-ray background.} \label{fig:xmm} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=3.3in]{g296-1gev-radio.pdf} \includegraphics[width=3.3in]{g296-1gev-xray.pdf} \includegraphics[width=3.3in]{g296-5gev-radio.pdf} \includegraphics[width=3.5in]{g296-5gev-xray.pdf} \caption{TS maps of a $3^\circ \times 3^\circ$ region around G296.5+10.0. The top two panels are for photons in the range of 1 GeV - 1 TeV, and the bottom two panels are for photons from 5 GeV to 1 TeV. The magenta dashed circle and the white solid circle represent the size of the spatial template of G296.5+10.0 in 4FGL and in this work (with {\em Fermi}-py), respectively. The white cross shows the position of the radio-quiet X-ray-emitting neutron star (1E 1207.4-5209) within it \citep{1984Natur.307..215H, 2000ApJ...540L..25Z}. The green (left) and cyan (right) contours describe the radio and X-ray emission of G296.5+10.0, respectively.} \label{fig:tsmap-g296} \end{figure} Here we analyze the {\sl XMM-Newton}~data to estimate the flux of the non-thermal emission. This emission could be fairly dim, as it was not reported in the ROSAT data, and even residual contamination from soft proton flares may affect its detection. As a result, only the ID:0781720101 observation is used, which did not suffer from apparent soft proton flares during its 28 ks exposure. The data reduction employs the standard procedure in the {\sl XMM-Newton}~Science Analysis System (SAS; version: 15.0.0) and the Extended Source Analysis Software (ESAS) package. The diagnostic file created by the {\sl `pn-filter'} script shows a steady light curve with a PN count rate of $\sim$3.2 counts/s in this observation, suggesting the absence of soft proton flares.
We use the {\sl `eimageget'} script to generate the PN and MOS images, and the {\sl `eimagecombine'} script to combine them and produce a background-subtracted, vignetting-corrected, and smoothed image, where the background is based on the filter wheel closed images. This image (the left panel of Figure~\ref{fig:xmm}) covers the northeast part of the SNR. The PN and MOS spectra are extracted from a circular region with a radius of 5.15$'$, well confined within the central CCD of the MOS1 and MOS2. A few bright point sources are manually removed when making the mask. The {\sl `mos/pn-spectra'} scripts are used to create spectra, and the {\sl `mos/pn-back'} scripts to generate model quiescent particle background (QPB) spectra. Using the same method, we also produce spectra from `local background' regions at the northeast corner of the field of view, where the SNR emission seems negligible. These regions are slightly different for the MOS1, MOS2, and PN, as shown in the left panel of Figure~\ref{fig:xmm}, due to differences in the effective boundaries of the detectors. We present the PN source spectrum and the `local background' spectrum in the middle panel of Figure~\ref{fig:xmm} with their QPB spectra subtracted, and compare them with the model of the unresolved AGN emission in the cosmic X-ray background \citep{2004A&A...419..837D, 2008A&A...478..575K}. The model is an absorbed power-law component with a spectral index of 1.46 and a normalization of $\sim 11.6\, \rm photons$ $\rm cm^{-2} s^{-1} sr^{-1} keV^{-1}$ at 1 keV, and the absorption is determined by the Galactic HI column density towards this SNR of about $1.1\times10^{21}$ \hbox{cm$^{-2}$}~\citep{2016A&A...594A.116H}. In the hard X-ray band (2-7 keV), the PN source spectrum lies slightly below the cosmic X-ray background curve, suggesting the absence of prominent non-thermal emission.
Nevertheless, to estimate an upper limit on the non-thermal emission, we fit the PN and MOS spectra jointly, but using the `local background' spectra as the background spectra. A single-temperature APEC model represents the SNR thermal emission, and a power-law model is used for the non-thermal emission. The redshift is set to zero, and the Galactic HI column density is assigned to be the maximum column density for the foreground absorption. The best-fit spectra are shown in the right panel of Figure~\ref{fig:xmm}. The best-fit temperature of the thermal emission is 0.15 keV, and the metal abundances are slightly sub-solar (Table~\ref{tab:xmmpar}). The non-thermal flux in the range of 0.5--10 keV is $6.0(\pm1.5)\times10^{-13}$ \hbox{erg cm$^{-2}$ s$^{-1}$ }, with a photon index of $1.4\pm0.4$. Since the `local background' spectra do not have good statistics in the hard X-ray band, this component is likely from the unresolved AGN emission. Note that a photon index of 1.4 would be too hard for the SNR if the hard X-ray tail were synchrotron emission. We will treat this fitted flux as an upper limit on the non-thermal emission from the SNR, which in turn suggests an upper limit on the non-thermal flux of $\sim1.4\times10^{-11}$ \hbox{erg cm$^{-2}$ s$^{-1}$ }\ for the entire SNR region \citep{1987MNRAS.225..199K}. \begin{deluxetable*}{l|ccccccc}[htb!]
\tablecaption{The best-fit results for the {\sl XMM-Newton}~data of G296.5+10.0} \tablecolumns{8} \tablehead{ \multicolumn{8}{c}{Model: TBabs*(APEC+Powerlaw)} } \startdata TBabs & \multicolumn{2}{c}{$n_{\rm H}$} & \multicolumn{5}{c}{$(1.0-1.1)\times10^{21}$ \hbox{cm$^{-2}$}} \\ \hline APEC & $\eta_\mathrm{apec}$\tablenotemark{*} & kT [keV] & C & N & O & Ne, Mg, Si, S & Fe, Ni \\ & $0.04\pm0.01$ & $0.150\pm0.001$ & $<0.6$ & $0.2\pm0.1$ & $0.3\pm0.1$ & $0.7\pm0.2$ & $1.1\pm0.3$ \\ \hline Powerlaw & \multicolumn{2}{c}{$flux_{(0.5-10\,\mathrm{keV})}$} & \multicolumn{5}{c}{$6.0(\pm1.5)\times10^{-13}$ \hbox{erg cm$^{-2}$ s$^{-1}$ }} \\ & \multicolumn{2}{c}{Photon Index} & \multicolumn{5}{c}{$1.4\pm0.4$} \\ \hline $\chi^2$/$d.o.f.$ & \multicolumn{7}{c}{967/480 $\sim$ 2} \enddata \tablecomments{Compared to the MOS spectra, the PN spectrum may have some systematic deficit in flux. The overall normalization of the PN spectrum is allowed to vary during the joint fitting, and its best-fit value is $0.84\pm0.01$.} \tablenotetext{*}{The normalization parameter of the APEC model has the physical meaning of $\frac{10^{-14}}{4\pi[D_{\rm A}(1+z)]^{2}} \int n_{\rm e}n_{\rm H} dV$, where $D_{\rm A}$ is the angular diameter distance to the source (cm), $z$ is the redshift, $n_{\rm e}$ and $n_{\rm H}$ are the electron and hydrogen densities (cm$^{-3}$), and $V$ is the volume. } \label{tab:xmmpar} \end{deluxetable*} Using 52 months of Pass 7 data recorded by {\em Fermi}-LAT, \citet{2013MNRAS.434.2202A} detected extended $\gamma$-ray emission toward the region of SNR G296.5+10.0 at a $\sim$ 5$\sigma$ confidence level. Its $\gamma$-ray spectrum can be fitted by a power law with an index of 1.85$\pm$0.13. In the fourth Fermi-LAT source catalog \citep[4FGL;][]{2020ApJS..247...33A}, G296.5+10.0 corresponds to the extended source 4FGL J1208.5-5243e, whose spatial template is adopted to be a uniform disk centered at (R.A.$=182.13^\circ$, Dec.$=-52.73^\circ$) with a radius of 0.76$^\circ$.
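The APEC normalization in the table note can be inverted for a rough mean density once a distance and an emitting volume are assumed. A hedged sketch (uniform plasma, $n_e \approx 1.2\,n_{\rm H}$ for a fully ionized solar-abundance gas; the function and its arguments are illustrative, not part of our spectral fitting):

```python
import math

def mean_nH_from_apec(norm, D_A_cm, V_cm3, z=0.0, ne_over_nH=1.2):
    """Invert norm = 1e-14 / (4*pi*[D_A*(1+z)]**2) * EM, where
    EM = integral(n_e * n_H dV) ~ ne_over_nH * nH**2 * V for a uniform plasma.
    ne/nH ~ 1.2 assumes a fully ionized solar-abundance gas."""
    EM = norm * 4 * math.pi * (D_A_cm * (1 + z))**2 * 1e14  # cm^-3
    return math.sqrt(EM / (ne_over_nH * V_cm3))             # cm^-3
```

Because the density enters the emission measure quadratically, the estimate scales only as the square root of the normalization, so modest calibration differences between detectors change it little.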
\begin{figure}[!htb] \centering \includegraphics[width=3.5in]{sed-g296.pdf} \caption{The $\gamma$-ray SED of G296.5+10.0. The red solid line shows the best-fitting power-law spectrum in the energy range of 100 MeV - 1 TeV, and the red dashed lines show its 1$\sigma$ statistical error. The gray histogram denotes the TS value for each energy bin, and the arrows indicate the upper limits at the 95\% significance level.} \label{fig:sed-g296} \end{figure} \begin{figure}[hbt!] \plottwo{G296_b.pdf}{G296_a.pdf} \caption{Same as Figure \ref{fig:1912} but for G296.5+10.0. \label{fig:296}} \end{figure} With the latest {\em Fermi}-LAT Pass 8 data collected from 2008 August 4 to 2018 August 4, we updated the $\gamma$-ray morphology and spectrum of G296.5+10.0. Figure \ref{fig:tsmap-g296} shows the TS maps of a $3^\circ \times 3^\circ$ region around G296.5+10.0. In each panel, the magenta dashed circle shows the spatial template of G296.5+10.0 in 4FGL \citep{2020ApJS..247...33A}. As can be seen, the $\gamma$-ray emission of G296.5+10.0 deviates significantly from this spatial template. Therefore, we use the {\em Fermi}-py tool to refit the data and obtain the updated central position and radius of the uniform disk, which is shown as the white solid circle in Figure \ref{fig:tsmap-g296}. The best-fit position and radius of the uniform disk are (R.A.$=182.11^\circ$, Dec.$=-52.38^\circ$) and 0.79$^\circ$, respectively. With this spatial template, the TS value of G296.5+10.0 is fitted to be 203.68, and the spectrum can be well fitted by a power law with an index of 1.90$\pm$0.06. In the energy range from 1 GeV to 1 TeV, the integral photon flux is $(1.19\pm0.11)\times10^{-9}$ photon cm$^{-2}$ s$^{-1}$ with statistical errors only. Moreover, to derive the $\gamma$-ray SED of G296.5+10.0, all of the data from 100 MeV to 1 TeV are divided into 14 equal logarithmic bins.
For any energy bin with a TS value of G296.5+10.0 smaller than 5.0, an upper limit at the 95\% confidence level is calculated. The SED and fit results are shown in Figure \ref{fig:sed-g296}. G296.5$+$10.0 has an age of $\sim 10$ kyr \citep{1997ApJ...476L..43V} and a GeV $\gamma$-ray spectrum similar to that of HESS J1912+101. Adopting the hadronic model for HESS J1912+101, one can fit the radio and $\gamma$-ray spectra of G296.5$+$10.0. Due to the absence of TeV observations, the high-energy cutoff of the proton distribution is not well constrained, and we adopt the value of 70 TeV derived from the spectral fit of HESS J1912+101. Figure \ref{fig:296} shows the spectral fit, and the model parameters are listed in Table \ref{tab:fitpatameters}; they are very similar to those for HESS J1912+101 except for a slightly stronger magnetic field. As we will see below, this is due to the higher radio to $\gamma$-ray flux ratio of G296.5$+$10.0 compared with that of HESS J1912+101. Similarly, we favor the model with a weaker magnetic field (left panel of Figure \ref{fig:296}). \section{Hadronic and/or Leptonic models for hard GeV spectra of the other 10 SNRs} \label{sec:sample} \begin{figure}[ht!] \plotone{SED.pdf} \caption{The multi-wavelength spectral data of 13 SNRs with hard $\gamma$-ray spectra. The $\gamma$-ray spectra are fitted with a hadronic model with the normalization of each individual spectrum as a free parameter. The model assumes that protons have a single power-law energy distribution with an exponential high-energy cutoff. Note that the TeV spectra of G78.2$+$2.1 (HAWC) and N132D (HESS) cut off at relatively low energies, and the soft GeV spectral component of HESS J1912$+$101 may be from other contributors; these are not considered in the SED fitting. The best-fit model parameters are indicated on the Figure. References for the observational data are as follows.
RX J0852.0$-$4622: radio \citep{2000A&A...364..732D}, GeV \citep{2011ApJ...740L..51T}, X-ray \citep{2007ApJ...661..236A}, TeV \citep{2018A&A...612A...7H}; RX J1713.7$-$3946: radio \citep{2004ApJ...602..271L}, X-ray \citep{2008ApJ...685..988T}, GeV and TeV \citep{2018A&A...612A...6H}; HESS J1731$-$347: radio \citep{2008ApJ...679L..85T}, GeV \citep{2017ApJ...851..100C,2018ApJ...853....2G}, X-ray \citep{2017A&A...608A..23D}, TeV \citep{2011A&A...531A..81H}; RCW 86: radio \citep{1975AuJPA..37....1C,2012AA...545A..28L}, X-ray \citep{2012AA...545A..28L}, GeV \citep{2016ApJ...819...98A}, TeV \citep{2018A&A...612A...4H}; SN 1006: radio \citep{2009AJ....137.2956D}, X-ray \citep{2008PASJ...60S.153B}, GeV \citep{2017ApJ...851..100C}, TeV \citep{2010AA...516A..62A}; G$150.3+4.5$: radio \citep{2014A&A...566A..76G}, X-ray and GeV \citep{2020arXiv200908397D}; G$296.5+10.0$: radio \citep{1994MNRAS.270..106M}, GeV (this work); HESS J1534$-$571: radio \citep{2018MNRAS.480..134M}, GeV \citep{2017ApJ...843...12A}, X-ray and TeV \citep{2018A&A...612A...8H}; RCW 103: radio \citep{1996AJ....111..340D}, GeV \citep{2014ApJ...781...64X}; G78.2$+$2.1: radio \citep{1991A&A...241..551W,1997A&A...324..641Z,2006A&A...457.1081K,2011A&A...529A.159G}, X-ray \citep{2013MNRAS.436..968L}, GeV \citep{2018ApJ...861..134A} and TeV \citep{2019ICRC...36..675F}; G279.0$+$1.1: radio \citep{1988MNRAS.234..971W,1995MNRAS.277..319D}, GeV \citep{2020MNRAS.492.5980A}; N132D: radio \citep{1995AJ....109..200D}, X-ray \citep{1998ApJ...505..732H,2018ApJ...854...71B}, GeV (Xin et al. 2020, in preparation) and TeV \citep{2015Sci...347..406H}. \label{fig:sample}} \end{figure} We now extend the above study to all SNRs with hard $\gamma$-ray spectra. Figure \ref{fig:sample} shows the multi-wavelength non-thermal spectra of 13 SNRs with hard GeV spectra. The spectra have been normalized in the $\gamma$-ray band by fitting the $\gamma$-ray spectra with a hadronic model.
The model assumes that protons have a single power law with an exponential high-energy cutoff. The normalization of the SED of each source is adjusted to minimize the $\chi^2$ of the $\gamma$-ray data of all sources. We obtain an index of 1.74 and $E_{\rm cut}^p=47$ TeV. We may then classify these sources based on their radio and/or X-ray spectra. Based on the radio flux, these SNRs may be divided into three categories. SN 1006, RCW 86, G296.5+10.0, G78.2+2.1 and N132D have strong radio emission, while the radio emission from G150.3+4.5 and HESS J1534-571 is very weak. Their normalized radio flux densities can differ by about two orders of magnitude. The famous TeV-bright SNRs RX J1713.7-3946, RX J0852.0-4622, HESS J1731-347, HESS J1912+101, and RCW 103 have normalized radio flux densities between the above two categories. In Table \ref{tab:fitpatameters}, we use double horizontal lines to separate these categories and give the estimated age, distance, and radius of these SNRs together with the related references. Non-thermal X-ray emission is detected from five relatively young SNRs: SN 1006, RCW 86, RX J1713.7-3946, RX J0852.0-4622, and HESS J1731-347. As will be shown below, a broken power-law spectrum is needed to explain their multi-wavelength non-thermal emission spectra, while for the other sources, the above hadronic model for HESS J1912+101 and G296.5+10.0 with a single power-law high-energy particle spectrum is sufficient (see Table \ref{tab:fitpatameters}).
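For a fixed model shape, the $\chi^2$-minimizing normalization of each SED has the standard closed form $A = \sum_i d_i m_i/\sigma_i^2 \big/ \sum_i m_i^2/\sigma_i^2$. A schematic version with synthetic numbers (not the actual fit or data):

```python
def best_norm(model, data, sigma):
    """Closed-form scale factor A minimizing chi^2 for data ~ A * model:
    A = sum(d*m/s^2) / sum(m^2/s^2)."""
    num = sum(d * m / s**2 for m, d, s in zip(model, data, sigma))
    den = sum(m * m / s**2 for m, _, s in zip(model, data, sigma))
    return num / den
```

Since each normalization is linear in the data, only the shared shape parameters (index and cutoff) require a nonlinear search.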
\begin{deluxetable*}{lcccccccccccccc}[htb] \tablecaption{Physical and fitting parameters for our sample.} \tablewidth{0pt} \tablehead{ \colhead{Source} & \colhead{Age} & \colhead{Distance} & \colhead{Radius}&\colhead{$W_{\rm p}$} &\colhead{$n_{\rm H}$}& \colhead{$\alpha$} &\colhead{$E^{\rm p}_{\rm cut}$} & \colhead{$\Delta \alpha$}&\colhead{$\delta$} & \colhead{$E^{\rm e}_{\rm br}$}&\colhead{$E^{\rm e}_{\rm cut}$} &\colhead{$B$}&\colhead{$W_B$}&\colhead{$W_B/W_{\rm e}$}\\ \colhead{} & \colhead{kyr} & \colhead{kpc} & \colhead{pc} & \colhead{$10^{49}$ erg}&\colhead{cm$^{-3}$}& \colhead{}& \colhead{TeV} & \colhead{} & \colhead{}& \colhead{GeV}& \colhead{TeV}& \colhead{$\mu$G}&\colhead{10$^{49}$ erg}&\colhead{10$^{2}$} } \startdata \hline G150.3$+$04.5 & 6.0 & 1.0 &24& 1.0& 4.2& 1.75 & 70& $-$ & 2.0& $-$ & 2.0$^a$ & 32 & $6.8$ & {\bf 34.6}\\ & & & & 10.0& 0.48& 1.75 & 70& $-$ & 2.0& $-$ & 0.2 & 6.0 & $0.24$&0.26\\ \hline HESS J1534$-$571 & 10.0 & 3.5 &22.4& 1.0& 12& 1.45 & 20& $-$ & 2.0& $-$ & 0.14$^a$ &300 & 496 & $7.2 \times 10^4$\\ G323.7$-$01.0 & & & & 10.0& 1.2& 1.45& 20 & $-$ & 2.0& $-$ & 0.59$^a$ & 46 &11.6& ${\bf 16}$\\ \hline \hline HESS J1912$+$101 & 170 & 4.1 & 15 & 1.0& 27& 1.9 & 70& $-$ & 2.0& $-$ & 0.0051$^a$ & 120 & 23.9& $6.0 \times 10^2$\\ G044.5$-$00.2 & & & & 10.0& 2.7& 1.9 & 70& $-$ & 2.0& $-$ & 0.1225$^a$ & 24.5 & 1.0&{\bf 0.63}\\ \hline G279.0$+$1.1 & $100$ & 3.0 &41.5& 5.0& 5.2& 1.9 & 40& $-$ & 2.0& $-$ & 0.006$^a$ &145& $7.3\times10^2$& $3.0 \times 10^3$\\ & & & & 10.0& 2.6& 1.9 & 40& $-$ & 2.0& $-$ & 0.0186$^a$ & 82 &$2.3\times10^2$ & ${\bf 256}$\\ \hline RCW 103 & 4.4 & 3.3 & 4.8 &1.0& 65& 2.05 & 70& $-$ & 2.0& $-$ & 0.56$^a$ & 226 & 2.8 & {12}\\ G332.4$-$00.4 & & & &10.0& 6.5& 2.05 & 70& $-$ & 2.0& $-$ & 1.14$^a$ & 50 & 0.14&{\bf 0.034}\\ \hline RX J1713.7$-$3946 & 1.6 & 1.0 & 8.7 & 0.6& 11.7& 1.6 & 40& 1.0 & 2.0& 25.8$^a$ &13.5 & 550 & 97.4& $3.9 \times 10^3$\\ & & && 5.0& 1.2 & 1.55 & 40& 1.75 & 2.0 & 390$^a$ & 40 & 142 & 6.5 & ${14}$\\ 
G347.3$-$00.5 & & && 10.0& 0.6 & 1.6 & 40& 1.0 & 2.0 & 82 & 38 & 62 & 1.24 & ${\bf 1.73}$\\ (Leptonic) & & && 7.7& 0.1 & 1.89 & 70& 1.0 & 2.0 & 1800 & 70 & 22 & 0.16& {\bf 0.055} \\ \hline RX J0852.0$-$4622 & 2.7 & 0.75 &13& 1.0& 5.5& 1.75 & 70& 1.0 & 2.0& 28 & 20 & 165 & 29.9& $3.9 \times 10^2$\\ & & & & 1.0& 5.5& 1.75 & 70& 1.4 & 2.0& 170$^a$ & 20 & 165 & 29.9& $2.4 \times 10^2$\\ & & && 10.0& 0.45& 1.5 & 35& 2.32 & 2.0& 460$^a$ & 30 & 100 & 11 & ${ 14}$\\ G266.2$-$01.2 & & && 10.0& 0.55& 1.75 & 70& 1.0 & 2.0& 55 & 50 & 31 & 1.06 & ${\bf 1.08}$\\ (Leptonic) & & && 350& 0.001& 1.5 & 70& $1.0$ & 2.0& $20$ & 70 & 8 & 0.7&0.012\\ (Leptonic) & & && 6.0& 0.2& 2.2 & 70& $-$ & 1.0& $-$ & 22 & 11 & 0.13&{\bf 0.037}\\ \hline HESS J1731$-$347 & 2.5 & 3.2 &14& 1.0& 7.5& 1.5 & 30& 1.0 & 2.0& 38 & 6.5 & 550 &403& $1.1 \times 10^4$\\ & & && 1.0& 7.5& 1.5 & 30& 0.85 & 2.0& 16.5$^a$ & 6.4 & 550 &403& $1.5 \times 10^4$\\ & & & & 10.0& 0.6& 1.5 & 30& 1.82 & 2.0& 692$^a$ & 25 & 85 &9.6& ${ 8.3}$\\ G353.6$-$00.7 & & & & 10.0& 0.75& 1.5 & 30& 1.0 & 2.0& 100 & 17 & 85 &9.6& ${\bf 16.2}$\\ (Leptonic) & & & & 55& 0.05& 1.5 & 70& 1.0 & 2.0& 170 & 25 & 32 &1.05& 0.29\\ (Leptonic) & & & & 5.5& 0.05& 1.95 & 70& $-$ & 1.0& $-$ & 13 & 23 &0.71& {\bf 0.30}\\ \hline \hline G296.5$+$10.0 & 10.0 & 2.1 &24.1& 1.0& 4.2& 1.9 & 70& $-$ & 2.0& $-$ & 0.0097$^a$ &360& $8.9\times10^2$& $1.5 \times 10^4$\\ & & & & 10.0& 0.35& 1.9 & 70& $-$ & 2.0& $-$ & 0.26$^a$ & 70 & 33.7& ${\bf 18}$\\ \hline SN 1006 & 1.0 & 2.2 & 9.6 & 1.0& 1.0& 1.90 & 70& 1.0 & 1.0& 386$^a$ & 8.1 & 180 &14& $52$\\ G327.6$+$14.6(Lep) & & &&1.5& 0.1& 2.1 & 70& $-$ & 1.0& $-$ & 7.3 & 65 &1.8& ${\bf 2.4}$ \\ \hline RCW 86 & 1.8 & 2.5 &15.3& 1.0& 6.5& 1.45& 20 & 0.85& 1.0 & 0.048$^a$ & 0.85 & $1.2\times10^4$ &$2.5 \times 10^5$& $9.0 \times 10^{8}$\\ G315.4$-$02.3(Lep) & & && 10& 0.05& 2.25 & 70& 1.0 & 1.0& 7200$^a$ & 30 & 31 &1.68&{\bf 0.27} \\ \hline Gamma Cygni & 8.25 & 2.0 &17& 1.0& 29& 2.00& 10 & $-$& 2.0 & $-$& 0.0036$^a$ & 650 
&$1.0\times 10^3$& $1.6 \times 10^{4}$\\ G078.2$+$02.1 & & && 10& 2.9& 2.00 & 10& $-$ & 2.0&$-$& 0.077$^a$ & 140 &47.2&${\bf 18}$ \\ \hline N132D & 2.5 & 50 &11.4& 10.0& 32& 2.10& 70 &$-$ & 2.0 &$-$ &0.028$^a$ & 423 &$1.3 \times 10^2$& {\bf 56}\\ N132D & 2.5 & 50 &11.4& 50.0& 6.4& 2.10& 70 &$-$ & 2.0 &$-$ &0.23$^a$ & 148 &15.9& {\bf 0.89}\\ \hline \enddata \tablecomments{References of physical parameters $-$ HESS J1912$+$101 \citep{Su_2017,2020ApJ...889...12Z}; RCW 103 \citep{2004PASA...21...82R,2019MNRAS.489.4444B}; RX J1713.7$-$3946 \citep{2003PASJ...55L..61F,2016PASJ...68..108T}; RX J0852.0$-$4622 \citep{2008ApJ...678L..35K}; HESS J1731$-$347 \citep{2008ApJ...679L..85T,2011AA...531A..81H}; G150.3$+$4.5 \citep{2016PhDT.......190C,2020arXiv200908397D}; HESS J1534$-$571 \citep{2018MNRAS.480..134M}; G279.0$+$1.1 \citep{1995MNRAS.277..319D,2020MNRAS.492.5980A}; G296.5$+$10 \citep{1997ApJ...476L..43V,2000AJ....119..281G}; SN 1006 \citep{2003ApJ...585..324W,2009ApJ...692L.105K}; RCW 86 \citep{2000AA...360..671B,2013MNRAS.435..910H}; $\gamma$ Cygni \citep{1977AJ.....82..329H,2013MNRAS.436..968L}; N132D \citep{1995AJ....109..200D,2011ApSS.331..521V}.} $a:$ Determined by requiring the synchrotron energy loss time to be equal to the SNR age. \label{tab:fitpatameters} \end{deluxetable*} \begin{figure}[ht!] \plottwo{G150_new_a.pdf}{G150_new_b.pdf} \plottwo{J1534_b.pdf}{J1534_a.pdf} \caption{Same as Figure \ref{fig:1912} but for G150.3$+$4.5 (upper) and HESS J1534$-$571 (lower) with very weak radio emission. \label{fig:weak}} \end{figure} For the two SNRs with very weak radio emission, G150.3$+$4.5 is similar to G296.5+10.0 in the sense that there are no TeV data. We therefore set the high-energy cutoff of the proton distribution at 70 TeV. The spectral fits are shown in the upper panels of Figure \ref{fig:weak}. We favor the model with a low value of $10^{49}$ erg for the total proton energy $W_{\rm p}$ (left panel) with a magnetic field of $32\ \mu$G.
An even lower value of $W_{\rm p}$ is disfavored since it implies an even stronger magnetic field and therefore an energy ratio of the magnetic field to the high-energy electrons greater than 3500. For higher values of the total proton energy, the electron cutoff energy needs to be lower than that determined by requiring the synchrotron energy loss time to be equal to the age of the SNR. Otherwise the IC emission will dominate the $\gamma$-ray emission with a harder spectrum than the observed one. The right panel shows such a case, with an electron cutoff energy of 200 GeV to suppress contributions to the $\gamma$-rays via the IC processes. Equating the synchrotron energy loss time to the SNR age would lead to a cutoff energy of electrons $E^{\rm e}_{\rm cut}$ greater than 50 TeV for the relatively weaker magnetic field of $6\, \mu$G. We notice that there are some degeneracies among $W_{\rm p}$, $n_{\rm H}$, $B$ and $K_{\rm ep}$. The product of $W_{\rm p}$ and $n_{\rm H}$ is determined by the $\gamma$-ray spectrum, so the total proton energy discussed above should be re-scaled by the actual value of the mean number density of the background $n_{\rm H}$. We fix $K_{\rm ep}$ to $5\times 10^{-3}$ in the spectral fits above. For a given radio flux and $W_{\rm p}$, an increase of $K_{\rm ep}$ will lead to more energetic electrons and a weaker magnetic field. The lower panels of Figure \ref{fig:weak} show the spectral fits for HESS J1534$-$571. The cutoff energy of the proton distribution for HESS J1534$-$571 is well constrained by TeV observations. However, Table \ref{tab:fitpatameters} shows that its cutoff energy of $20$ TeV is much lower than the value of 70 TeV for the three sources studied above. Similar to G296.5+10.0 and HESS J1912+101, we favor the model with a higher value of $W_{\rm p}$ (left panel) because of the weaker magnetic field and higher synchrotron cutoff energy. The ratio of $W_{B}/W_{\rm e}=1600$ is also more reasonable.
However, the magnetic field is not as well constrained as for G150.3$+$4.5. We also notice that, with a spectral index of 1.45, HESS J1534$-$571 has a much harder energetic particle spectrum than the other sources. Such a hard spectrum is needed to fit the $\gamma$-ray spectrum. More radio flux density measurements are needed to test this model. \begin{figure*} \plottwo{RCW103_b.pdf}{RCW103_a.pdf} \caption{Same as Figure \ref{fig:1912} but for RCW 103. \label{fig:intermediate0}} \end{figure*} Among the 5 SNRs with intermediate radio emission, RCW 103 does not have non-thermal X-ray emission, similar to HESS J1912$+$101. Since there are also no TeV data, we fix the proton cutoff energy at $70$ TeV. Although RCW 103 is relatively young with an age of 4.4 kyr, radio and GeV observations lead to a high-energy particle spectrum slightly softer than that for HESS J1912$+$101. The GeV flux of RCW 103 is actually about 3 times higher than that of HESS J1912$+$101. Since the distances to the two sources are comparable, the $\gamma$-ray luminosity of RCW 103 is about 2 times higher. For the same proton energy $W_{\rm p}$ and comparable radio flux densities, RCW 103 therefore has a higher background density $n_{\rm H}$ and a stronger magnetic field. However, the radius of RCW 103 is more than 3 times smaller than that of HESS J1912$+$101, leading to a factor of 30 difference in the volume. We therefore have a much lower value of $W_{B}/W_{\rm e}$. Even though the case with a $W_{\rm p}$ of $10^{50}$ erg (left panel of Figure \ref{fig:intermediate0}) has a very low value of 0.034 for $W_{B}/W_{\rm e}$, we still favor this model for its relatively higher cutoff energy of the synchrotron spectrum. To increase the value of $W_{B}/W_{\rm e}$, one needs to consider lower values of $W_{\rm p}$, as discussed above (see the right panel of Figure \ref{fig:intermediate0}).
\begin{figure*} \centering \plottwo{J1713_b.pdf}{J1713_c.pdf} \plottwo{J1713_a.pdf}{J1713_d.pdf} \caption{Several spectral fits to the SED of RX J1713.7$-$3946. The upper left is our favored model. The upper right corresponds to the leptonic scenario for the $\gamma$-ray emission. Although the model shown in the lower left has fewer parameters, the magnetic field appears to be too strong. The model in the lower right panel has a relatively weaker magnetic field but gives a poorer fit to the SED. \label{fig:intermediate1}} \end{figure*} The other 3 SNRs with intermediate radio emission, RX J1713.7$-$3946, RX J0852$-$4622, and HESS J1731$-$347, have been studied extensively for their prominent non-thermal X-rays and TeV emission. There are still debates on the nature of their $\gamma$-ray emission. Detailed TeV observations of RX J1713.7$-$3946 have shown that a broken power-law distribution is needed to fit the $\gamma$-ray spectrum in both the leptonic and hadronic scenarios for the $\gamma$-ray emission \citep{2018A&A...612A...6H}. In general, for sources with strong non-thermal X-ray emission, a single power-law particle distribution will lead to a poor fit to the multi-wavelength SED. But for the sake of simplicity, and considering the overall quality of the $\gamma$-ray data, we will still adopt a single power-law distribution with an exponential high-energy cutoff in rigidity for the ions. The electron distribution, however, can be a broken power law with an exponential ($\delta =1$) or super-exponential ($\delta=2.0$) cutoff. Since the acceleration of low-energy particles is not affected by radiative energy losses, we assume that the electrons and ions have the same spectral index at low energies. The upper-right panel of Figure \ref{fig:intermediate1} shows a leptonic scenario for the $\gamma$-ray emission. The corresponding model parameters are given in the fourth row for RX J1713.7$-$3946 in Table \ref{tab:fitpatameters}.
Since we adopt a much higher energy density of 1 eV cm$^{-3}$ for the infrared background photons and ion processes also make a significant contribution to the $\gamma$-ray emission (solid line), a high value of 22 $\mu$G is inferred for the magnetic field. However, the total energy of the magnetic field is on the order of $10^{48}$ erg, which is still much lower than that of the ions. Since the ion spectral cutoff is not well constrained by the data, we set it at $70$ TeV as we did above for other sources. An increase of the magnetic field would shift the dominance of the $\gamma$-ray flux to the hadronic processes. The upper left panel of Figure \ref{fig:intermediate1} shows our favored hadronic model for the $\gamma$-ray emission. The corresponding model parameters are given in the third row for RX J1713.7$-$3946 in Table \ref{tab:fitpatameters}. The cutoff energy of protons is 40 TeV, which is lower than that inferred from $\gamma$-ray observations \citep{2018A&A...612A...6H} because of the adoption of a single power-law ion distribution here. The spectral index of 1.6 is comparable to the low-energy spectral index inferred from $\gamma$-ray observations, and the product of $W_{\rm p}$ and $n_{\rm H}$ is also compatible with these observations. For such a hard spectrum, a broken power-law electron distribution is needed to fit the radio to X-ray spectrum via the synchrotron process. For the magnetic field of 62 $\mu$G, we find a break energy of $82$ GeV and a cutoff energy of $38$ TeV. Although the electron cutoff energy is very close to the proton cutoff energy, their distributions are quite different at high energies because of the differences in the spectral index and in the shape of the cutoff $\delta$. The total energy of the magnetic field is comparable to that of the ions and is more than 2 orders of magnitude higher than that of the electrons.
Note that although $K_{\rm ep}=0.005$, the total energy of protons is more than 3 orders of magnitude higher than that of electrons, owing to the hard spectrum and to the break energy of the electron distribution being much lower than the cutoff energy of protons \citep{2008MNRAS.386L..20B}. A further increase of the magnetic field will lead to a higher value of $W_B/W_{\rm e}$ and a slight decrease in $E_{\rm br}^{\rm e}$, giving rise to a slightly higher ratio of $W_{\rm p}/W_{\rm e}$. \begin{figure*}[htb] \plottwo{J0852x_b.pdf}{J0852x_c.pdf} \plottwo{J0852_a.pdf}{J0852_b.pdf} \plottwo{J0852x_a.pdf}{J0852_c.pdf} \caption{The upper two rows are the same as Figure \ref{fig:intermediate1} but for RX J0852$-$4622. The bottom left panel is similar to the top left panel except for a lower $W_{\rm p}$ and a stronger magnetic field. The bottom right panel is for a relatively simple single power-law leptonic model. However, the cutoff of the electron distribution is exponential in this case. \label{fig:intermediate2}} \end{figure*} To reduce the number of model parameters, one may increase the magnetic field and set the electron radiative energy loss timescale at the break energy to be equal to the age of the SNR. This leads to the fit in the lower left panel of Figure \ref{fig:intermediate1}. The corresponding model parameters are given in the first row for RX J1713.7$-$3946 in Table \ref{tab:fitpatameters}. The $\gamma$-ray emission is completely dominated by the hadronic processes. However, the total energy of the magnetic field $W_B$ is more than 5 orders of magnitude higher than that of electrons and more than 2 orders of magnitude higher than the energy of ions $W_{\rm p}$. One may adjust the change of the spectral index $\Delta \alpha$ from low to high energies to reduce the magnetic field. This leads to the fit in the lower right panel of Figure \ref{fig:intermediate1}, which is not as good as the others, especially in the X-ray band.
The corresponding model parameters are given in the second row for RX J1713.7$-$3946 in Table \ref{tab:fitpatameters}. The slight decrease in $\alpha$ is due to the contribution to the GeV $\gamma$-rays from the leptonic processes. \begin{figure*} \plottwo{J1731x_b.pdf}{J1731x_c.pdf} \plottwo{J1731_a.pdf}{J1731_b.pdf} \plottwo{J1731x_a.pdf}{J1731_c.pdf} \caption{Same as Figure \ref{fig:intermediate2} but for HESS J1731$-$347. \label{fig:intermediate3}} \end{figure*} The spectral fits to the SED of RX J0852$-$4622 shown in Figure \ref{fig:intermediate2} are very similar to those for RX J1713.7$-$3946. The upper left panel shows our favored model with the model parameters given in the fourth row for RX J0852$-$4622 in Table \ref{tab:fitpatameters}. The spectral index of 1.75 for the ions is slightly larger than the 1.6 of the favored model of RX J1713.7$-$3946, and the cutoff energy of the proton distribution is also slightly higher. With a magnetic field of $31\ \mu$G, the magnetic field energy is about one tenth of the proton energy and about 100 times higher than the energy of electrons. The upper right panel shows a leptonic model with the model parameters given in the fifth row for RX J0852$-$4622 in Table \ref{tab:fitpatameters}. The model slightly overproduces $\gamma$-rays at tens of TeV. Compared with the leptonic model for RX J1713.7$-$3946, its break energy is about 100 times smaller, leading to a 20 times higher total energy of electrons. The total energy of protons is about $3.5\times 10^{51}$ erg owing to the very hard spectrum and very high cutoff energy, which is a bit too high to be reasonable. The total energy of the magnetic field, however, is about $10^{49}$ erg and comparable to that of the electrons. The model of the middle right panel of Figure \ref{fig:intermediate2} is similar to that of the lower right panel of Figure \ref{fig:intermediate1} for RX J1713.7$-$3946.
The model parameters are given in the third row for RX J0852$-$4622 in Table \ref{tab:fitpatameters}. The model of the middle left panel of Figure \ref{fig:intermediate2} is similar to that of the lower right panel of Figure \ref{fig:intermediate1} for RX J1713.7$-$3946, with the model parameters given in the second row for RX J0852$-$4622 in Table \ref{tab:fitpatameters}. To reduce the ratio of $W_B/W_{\rm e}$, $\Delta \alpha = 1.4$ is adopted here instead of the value of 1 used for RX J1713.7$-$3946. The model of the lower left panel of Figure \ref{fig:intermediate2} is similar to our favored model in the upper left, with the model parameters given in the first row for RX J0852$-$4622 in Table \ref{tab:fitpatameters}. Here the total energy of protons $W_{\rm p}$ is 10 times smaller, leading to a very strong magnetic field of 165 $\mu$G and a very high value of $3.9\times 10^4$ for $W_B/W_{\rm e}$. The model of the lower right panel of Figure \ref{fig:intermediate2} has a single power-law electron distribution, with the model parameters given in the sixth row for RX J0852$-$4622 in Table \ref{tab:fitpatameters}. However, to fit the SED, the shape of the cutoff needs to be exponential instead of super-exponential as for the other models. The energy of the magnetic field is comparable to that of electrons, reminiscent of the leptonic model with a broken power-law electron distribution (upper right panel of Figure \ref{fig:intermediate2}). \begin{figure}[ht!] \plottwo{SN1006_b.pdf}{SN1006_a.pdf} \plottwo{RCW86_b.pdf}{RCW86_a.pdf} \caption{The spectral fits for SN 1006 and RCW 86. The left panels correspond to the favored leptonic scenarios for the $\gamma$-ray emission. \label{fig:strong}} \end{figure} The six spectral fits for HESS J1731$-$347 shown in Figure \ref{fig:intermediate3} are very similar to those in Figure \ref{fig:intermediate2}. The model parameters are given in Table \ref{tab:fitpatameters}.
Due to its relatively higher X-ray to $\gamma$-ray flux ratio, the magnetic fields in the leptonic scenarios are higher than those for RX J0852$-$4622. For the leptonic models with a broken power-law electron distribution, the break energy for HESS J1731$-$347 is between those for the other two sources, as is the total energy of protons. For the hadronic models, we first notice that the cutoff energy of protons is the lowest among these three sources and its particle distributions are also harder than those of the other two sources. The $\gamma$-ray luminosities of these three sources are comparable, as can be seen from the product of $W_{\rm p}$ and $n_{\rm H}$. Compared with G296.5$+$10.0, SN 1006 and RCW 86 have prominent non-thermal X-ray emission and are relatively younger. They all have relatively strong radio emission. Figure \ref{fig:strong} shows the spectral fits to these two SNRs with the model parameters given in Table \ref{tab:fitpatameters}. Although both the leptonic (left panels) and hadronic (right panels) models give reasonable fits to the SEDs, the hadronic models are disfavored for their relatively stronger magnetic fields, especially for RCW 86, whose hadronic model requires a magnetic field of more than 10 mG. This is because the hard GeV spectrum is not compatible with the soft radio spectrum if we assume electrons and ions have the same spectral index. To fit the spectrum, the synchrotron spectrum needs to have a spectral break below the radio band, implying a very strong magnetic field if this break is associated with radiative energy loss processes. However, we notice that multi-wavelength images of RCW 86 reveal complicated structure \citep{2016ApJ...819...98A}. Multi-zone hadronic models may still work. More detailed studies are warranted. For SN 1006, $W_{p}$ is on the order of $10^{49}$ erg.
A much higher value can be ruled out without reducing $K_{\rm ep}$ since electrons already have significant contributions to the $\gamma$-rays via the IC emission process in both scenarios. We also notice that both leptonic models for these two SNRs require an exponential cutoff instead of the super-exponential one, and SN 1006 has a single power-law electron distribution while RCW 86 has a broken one. The leptonic models for RCW 86 explored by \citet{2016ApJ...819...98A} only consider the CMB for the IC process and have a single power-law electron distribution, which is different from our favored model. \begin{figure*} \plottwo{G78_b.pdf}{G78_a.pdf} \plottwo{N132D_new_b.pdf}{N132D_new_a.pdf} \caption{Same as Figure \ref{fig:1912} but for G78.2+01.2 (upper) and N132D (lower). \label{fig:strong2}} \end{figure*} Besides the three SNRs studied above, there are two more radio-bright SNRs: G78.2+01.2 and N132D. Both of them have very soft spectra in the TeV band, indicating a spectral cutoff. The latter is an SNR in the Large Magellanic Cloud and is the most powerful SNR in our sample \citep{2018ApJ...854...71B}. Although thermal X-ray emission has been detected from both sources, there is no evidence for non-thermal X-rays. A simple single power-law model can readily fit their SEDs. The model parameters are given in Table \ref{tab:fitpatameters}. For G78.2+01.2, we favor the model with $W_{\rm p} = 10^{50}$ erg for its more reasonable values of the magnetic field and $W_B/W_{\rm e}$. Stronger magnetic fields are needed for lower values of $W_{\rm p}$. Considering the fact that N132D is the most powerful SNR, a value of $5\times 10^{50}$ erg appears to be reasonable for $W_{\rm p}$. To reduce $W_{\rm p}$, the magnetic field needs to be increased to reproduce the observed radio flux. The strong magnetic field of $423\ \mu$G appears to be reasonable for this SNR. So both models for N132D shown in Figure \ref{fig:strong2} are favored.
N132D has a relatively soft $\gamma$-ray spectrum. The proton distribution cuts off at 70 TeV. The cutoff energy of the proton distribution for G78.2+01.2, however, is only about 10 TeV, the lowest in our sample. It is likely that protons above 10 TeV have already escaped from the SNR and may illuminate surrounding molecular clouds. More observations in the TeV band are warranted. \section{Discussions} \label{dis} In general, all the models presented above give good fits to the SEDs. We picked the favored ones mostly based on the model parameters. Firstly, we favor models with a total energy of protons below $10^{50}$ erg and a magnetic field between 10 and 100 $\mu$G, or as close to this range as possible. Since these SNRs are expected to dominate the flux of TeV cosmic rays, on average each SNR should inject less than $10^{50}$ erg of energy into the cosmic rays \citep{2017ApJ...844L...3Z}. Secondly, the total energy of the magnetic field should be comparable to that of the electrons or protons. In Table \ref{tab:fitpatameters}, we highlighted these models with a bold face for the ratio of $W_B/W_{\rm e}$. It can be seen that most of the favored models satisfy the first criterion, except for SNRs G78.2+01.2 and N132D, where the magnetic field is above 100 $\mu$G. A magnetic field below $100\ \mu$G would require a total energy of protons exceeding $10^{50}$ ergs. Most of the favored models have a $W_{\rm p}$ of $10^{50}$ erg. Only for G150.3+04.5 and SN 1006 is $W_{\rm p}$ one order of magnitude lower. This is reasonable since these SNRs likely result from Type Ia SNe, given the lack of compact neutron stars within the remnants. Since N132D is the most powerful SNR, a value of $W_{\rm p}$ exceeding $10^{50}$ erg is acceptable. \begin{figure}[ht!] \plotone{n_B.pdf} \caption{Gas density $n_{\rm H}$ vs magnetic field $B$ for the favored models for the 13 SNRs studied in this paper. The open symbols below $n_{\rm H}=0.21$ cm$^{-3}$ are for the leptonic models.
The dashed lines satisfy the inserted equation, which assumes the synchrotron process for the radio flux density $f_{\rm R}$ and hadronic processes for the $\gamma$-ray flux density $f_{\rm GeV}$, and gives the model-predicted relationship between $n_{\rm H}$ and $B$ in the hadronic scenario for different SNRs. The dotted line indicates the gas density that can give a $\gamma$-ray flux density at 1 GeV via hadronic processes $F_{\rm pp}$ equal to the 1 GeV flux density produced via IC scattering of the CMB $F_{\rm IC}$ by energetic electrons with $K_{\rm ep}=0.005$ and $\alpha=2.0$. \label{fig:had1}} \end{figure} Figure \ref{fig:had1} shows the scatter plot of $B$ vs $n_{\rm H}$ for the favored models. The mean density of the emission region is always less than $10$ cm$^{-3}$ except for N132D, which is consistent with the X-ray observations. It appears that $n_{\rm H} =0.21$ cm$^{-3}$ gives the dividing line between the leptonic and hadronic models, with the former having a lower density. Assuming a single power-law distribution with $\alpha=2$ and $K_{\rm ep}=5\times10^{-3}$, the dotted line shows the density at which the $\gamma$-ray flux at 1 GeV produced via the hadronic processes is equal to that produced via IC of the CMB. With the decrease of $\alpha$, this line will shift toward higher densities since the GeV emission efficiency via the IC process increases. Along the dotted line, the radio to $\gamma$-ray flux density ratio increases with the increase of $B$, which is consistent with Figure \ref{fig:sample}, with radio-brighter sources having stronger magnetic fields. The radio flux density also depends on the spectral index, with weaker radio emission for sources with harder spectra. This explains the relatively strong magnetic fields for the two weak radio sources HESS J1534-571 and G150.3+4.5. The exact strength of the magnetic field also depends on contributions to the $\gamma$-rays via the leptonic processes in the hadronic scenario.
Below the dotted line, the leptonic process dominates the $\gamma$-ray emission. The dashed lines indicate the correlation between $n_{\rm H}$ and $B$ if the $\gamma$-ray emission is solely produced via hadronic processes for the 13 SNRs studied in this paper, where $f_{\rm R}$ and $f_{\rm GeV}$ represent the flux density at 1 GHz and 1 GeV, respectively. Equally good fits to the SEDs can be obtained along these lines. Note that via CO and/or HI observations, much higher densities are obtained for some sources in our sample, e.g., $\sim 130$ cm$^{-3}$ for RX J1713.7-3946 \citep{2012ApJ...746...82F}, $\sim 60$ cm$^{-3}$ for HESS J1731-347 \citep{2014ApJ...788...94F}, $\sim 100$ cm$^{-3}$ for RX J0852-4622 \citep{2017ApJ...850...71F}, $\sim 75$ cm$^{-3}$ for RCW 86 \citep{2019ApJ...876...37S}, $\sim 30-80$ cm$^{-3}$ for N132D \citep{2018ApJ...854...71B,2020ApJ...902...53S} and $\sim 45$ cm$^{-3}$ for Gamma Cygni \citep{2020arXiv201015854M}. A higher average density in the $\gamma$-ray emission region will result in a larger magnetic field and a lower total energy of relativistic protons. We obtained a lower limit to the total energy of cosmic rays of $\sim 10^{48}-10^{49}$ ergs by adopting these densities. The number density of some individual clouds can be as high as $10^4-10^5$ cm$^{-3}$ in RX J1713.7-3946 \citep{2010ApJ...724...59S, 2012MNRAS.422.2230M}. Most of these high-density regions are associated with local clouds with a small volume filling factor, implying that the shock acceleration mainly operates in a low-density inter-cloud medium as we assumed (see Table \ref{tab:fitpatameters}). Given the strong magnetic field of these clouds, the broad-band SED can be well reproduced without invoking the higher values of $n_{\rm H}$, owing to the energy-dependent penetration of cosmic rays into the dense clouds.
Considering that the distribution of high-energy particles should be more or less uniform due to diffusion, a good spatial correspondence between TeV $\gamma$-rays and interstellar neutral gas is expected in the hadronic scenario for the $\gamma$-ray emission. To explain the origin of hard $\gamma$-ray spectra from supernova remnants in the hadronic scenario, \citet{2014MNRAS.445L..70G} first showed that the energy-dependent penetration of cosmic rays into dense emission regions can lead to a very hard $\gamma$-ray spectrum at low energies. Detailed studies of shock interaction with molecular clouds show that the magnetic field strength will be enhanced not only on the surface of targeted clouds, but also inside these clouds \citep{2010ApJ...708..965Z,2012ApJ...744...71I, 2019ApJ...872...46I}. The model was further developed by \citet{2019MNRAS.487.3199C}. Detailed modeling of RX J1713.7-3946 also favors the hadronic scenario \citep{2016ApJ...821...43Z}. Our results suggest that the hard spectra may be due to very efficient particle acceleration in a low-density environment \citep{2017ApJ...844L...3Z, 2019MNRAS.482.5268Z}. A softer spectrum can be produced when shocks slow down dramatically due to interaction with molecular clouds \citep{2013MNRAS.431..415B, 2014ApJ...784L..35T}, which may explain the low-energy spectral component seen in HESS J1912+101 and G279.0+1.1. \begin{figure}[ht!] \plotone{B_We.pdf} \caption{The total energy content of electrons above 1 GeV $W_{\rm e}$ vs magnetic field $B$ for the 13 SNRs studied in this paper. \label{fig:B_We}} \end{figure} Figure \ref{fig:B_We} shows the scatter plot between $B$ and $W_{\rm e}$. The total energy of electrons $W_{\rm e}$ is about $10^{47}$ erg, with N132D having the highest value of $1.8\times 10^{48}$ erg and G150.3+04.5 having the lowest value of $2.0\times 10^{46}$ erg. These results look reasonable.
Table \ref{tab:fitpatameters} shows that the cutoff energies of these electrons are always around 1 TeV for sources without nonthermal X-ray emission, which may explain the spectral break near 1 TeV in the cosmic-ray electron spectrum \citep{2017Natur.552...63D}. For the three sources RX J1713.7$-$3946, RX J0852$-$4622, and HESS J1731$-$347, both the leptonic and hadronic scenarios can lead to reasonable model parameters. The leptonic models always have weaker magnetic fields and more energy in electrons than the corresponding hadronic models. However, Table \ref{tab:fitpatameters} shows that the leptonic models always have less energetic protons even though we fix $K_{\rm ep}$, owing to the difference in their spectral shapes. The leptonic model always has a softer spectrum than the hadronic one. Better spectral measurements in the radio band may distinguish these models. \begin{figure}[ht!] \plotone{Wb_We.pdf} \caption{The ratio of the magnetic energy $W_B$ to the total energy content of electrons above 1 GeV $W_{\rm e}$ for the 13 SNRs studied in this paper. \label{fig:B_We1}} \end{figure} Figure \ref{fig:B_We1} shows the dependence of $W_{B}/W_{\rm e}$ on the age of the SNR. In the leptonic models, the magnetic field energy is comparable to the total energy of electrons. In the hadronic scenario, the magnetic energy is comparable to that of protons and $W_{B}/W_{\rm e}$ appears to increase with the age of the SNR. \section{Conclusions}\label{sec:conclusion} In this paper, we carried out detailed spectral modeling of 13 SNRs with hard GeV spectra. We re-analyzed the Fermi data of HESS J1912+101, and found that its TeV emission cannot be attributed to leptonic processes, given the old age of the SNR inferred from molecular cloud observations and the spin-down age of the associated pulsar. The same holds for SNR G279.0+1.1.
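The age argument against leptonic TeV emission in old remnants can be made quantitative with the standard synchrotron cooling timescale, $t_{\rm syn} = 3 m_e c / (4 \sigma_{\rm T} \gamma U_B)$. This is a textbook estimate sketched in cgs units, not a formula taken from the paper:

```python
import math

# Synchrotron cooling timescale t_syn = 3 m_e c / (4 sigma_T gamma U_B),
# evaluated in cgs units.  A standard estimate, not a fit from this work.
SIGMA_T = 6.652e-25     # Thomson cross section, cm^2
M_E_C2_ERG = 8.187e-7   # electron rest energy, erg (511 keV)
C_CM_S = 2.998e10       # speed of light, cm/s
SEC_PER_YR = 3.156e7

def t_syn_yr(E_TeV, B_uG):
    """Cooling time (yr) of an electron of energy E_TeV in a field of B_uG."""
    gamma = E_TeV * 1.602 / M_E_C2_ERG           # 1 TeV = 1.602 erg
    u_B = (B_uG * 1e-6) ** 2 / (8.0 * math.pi)   # magnetic energy density, erg/cm^3
    t_sec = 3.0 * (M_E_C2_ERG / C_CM_S) / (4.0 * SIGMA_T * gamma * u_B)
    return t_sec / SEC_PER_YR

# A 1 TeV electron in a 100 uG field cools in roughly a kyr, far less than
# the lifetime of an old remnant; multi-TeV electrons cool even faster.
print(round(t_syn_yr(1.0, 100.0)), "yr")
```

Because $t_{\rm syn} \propto 1/(E B^2)$, electrons energetic enough to produce TeV inverse-Compton photons cannot survive in a remnant as old as HESS J1912+101, which is why the leptonic scenario is rejected there.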
A detailed analysis of XMM-Newton observations of G296.5+10.0 failed to uncover nonthermal emission, which in combination with Fermi data analyses also favors the hadronic scenario for the $\gamma$-ray emission. Of the 13 sources studied here, only SN 1006 and RCW 86 favor the leptonic scenario for their $\gamma$-ray emission. RX J1713.7$-$3946, RX J0852$-$4622, and HESS J1731$-$347 can be explained with both leptonic and hadronic models. In the leptonic models, the total energy of the magnetic field is comparable to that of the electrons. In the hadronic models, the magnetic fields and protons are close to energy equipartition. All these sources have prominent nonthermal X-ray emission. For the other 8 sources without evident nonthermal X-ray emission, the hadronic models with a single power-law particle distribution are favored. In the hadronic scenario, the magnetic field of older remnants tends to contain more energy than the relativistic particles, which may be attributed to the escape of high-energy particles from SNRs \citep{2018PhRvD..97l3008P, 2020arXiv201015854M}. Although our results do not completely address the origin of hard $\gamma$-ray spectra from SNRs, young remnants with prominent nonthermal X-ray emission favor the leptonic scenario, while the absence of nonthermal X-ray emission strongly favors the hadronic scenario. For RX J1713.7$-$3946, RX J0852$-$4622, and HESS J1731$-$347, both scenarios can fit the SEDs with reasonable parameters. The leptonic model always predicts a softer radio spectrum than the corresponding hadronic one, which may be tested with future observations. The proton spectrum always cuts off below 70 TeV, implying that SNRs may not be able to accelerate cosmic rays to PeV energies. Alternative PeV sources are needed to explain CR observations. SNRs of Type Ia SNe produce much fewer TeV ions than core-collapse SNe. TeV cosmic-ray fluxes are therefore likely dominated by the more powerful SNRs with compact neutron stars in the middle.
The total energy of relativistic protons is on the order of $10^{50}$ erg for each core-collapse SNR, which indicates very efficient ion acceleration and is compatible with the relatively hard spectra inferred from observations \citep{2019MNRAS.482.5268Z}. G78.2+01.2 has the lowest cutoff energy, indicating escape of particles beyond 10 TeV from the SNR. Nearby molecular clouds may be illuminated by these escaping particles and produce $\gamma$-rays in the TeV band. HESS J1912+101 has the oldest age in our sample, yet the cutoff energy of protons is relatively high, indicating that the structure of the magnetic field may play an important role in the escape process. The strong linear polarization of the radio emission indeed indicates the presence of a large-scale regular magnetic field, which may trap high-energy particles in this source effectively. For sources with prominent nonthermal X-ray emission, the electron distribution always cuts off above 7 TeV, while for those without nonthermal X-ray emission, the electron distribution always cuts off near 1 TeV, which may explain the cosmic-ray electron spectrum in the TeV range \citep{2017Natur.552...63D}. Further exploration of this issue is warranted. \begin{figure}[ht!] \plotone{Syntau_age_new.pdf} \caption{The correlation between the electron synchrotron cooling time at the cutoff energy and the age of SNRs. \label{fig:Syntau_age}} \end{figure} To study the acceleration of high-energy particles in SNRs, \citet{2019ApJ...874...50Z} plotted the synchrotron energy loss time at the cutoff energy of the electron distribution vs the ages of a sample of SNRs. This figure is updated here as Figure \ref{fig:Syntau_age}, confirming the earlier finding that high-energy electrons are mostly accelerated in young SNRs and that radiative energy loss and escape processes dominate in old SNRs. \acknowledgments We thank the anonymous referee for very helpful suggestions that helped to improve the manuscript significantly.
This work is partially supported by National Key R\&D Program of China: 2018YFA0404203, NSFC grants: U1738122, U1931204, 11761131007, 11573070, the Natural Science Foundation for Young Scholars of Jiangsu Province, China (No. BK20191109), and by the International Partnership Program of Chinese Academy of Sciences, grant No. 114332KYSB20170008.
\section{Neutrino Mass via Tritium Beta Decay} While neutrino oscillation experiments have successfully shown that neutrinos change flavor, and therefore have non-zero mass, the absolute mass scale remains unknown. The simplest way to directly measure the mass of the neutrino is using beta decays. Neutrino mass has an effect on the kinematics of the decay process~\cite{ref:tritiumbetadecay}. While the neutrinos themselves are difficult to measure, the energies of the outgoing electrons can be precisely determined. The neutrino mass can then be inferred from the shape of the electron energy spectrum: \begin{equation} \frac{dN}{dK_e} \propto F(Z,K_{e}) \cdot p_e \cdot (K_{e}+m_{e}) \cdot (E_{0}-K_{e}) \cdot \sum_{i=1}^{3}|U_{ei}|^2 \sqrt{(E_0-K_e)^2-m_i^2} \cdot \Theta(E_0-K_e-m_i). \label{eq:betadecayspectrum} \end{equation} The Fermi function, $F(Z,K_{e})$, takes into account the Coulomb interactions of the electron with the recoiling nucleus; $Z$ is the proton number of the final-state nucleus, $K_e$ is the electron's kinetic energy, $p_e$ is the electron's momentum, $E_0$ is the Q-value of the decay, and $U_{ei}$ are the elements of the PMNS matrix for the neutrino mass states $m_i$ ($i=1,2,3$). The only dependence on the neutrino mass comes from the phase-space factor. The shape of the spectrum is independent of all other properties of the neutrino, including whether neutrinos are Majorana or Dirac particles. One technique used to precisely measure the beta-decay spectrum relies on a spectrometer to select the high-energy electrons from tritium decays. The most recent experiments to use this technique are the Mainz and Troitsk experiments. They placed similar limits on the neutrino mass: $m_{\beta\nu} < 2.3\ \mathrm{eV}$~\cite{ref:mainz,ref:troitsk}. KATRIN, the next generation of spectrometer-type experiments, aims to lower that limit by an order of magnitude, to 200~meV (90\% CL)~\cite{ref:katrin}.
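The role of the phase-space factor in Eq.~(\ref{eq:betadecayspectrum}) can be illustrated with a minimal numerical sketch. For simplicity it collapses the sum over mass states to a single effective mass and treats the Fermi function as constant; both simplifications, and the rounded endpoint value, are ours:

```python
import math

# Sketch of the beta spectrum of Eq. (1) near the endpoint, with a single
# effective neutrino mass m_nu and a constant Fermi function (illustrative
# simplifications).  All energies in eV.
M_E = 510_998.95    # electron rest energy, eV
E0 = 18_575.0       # approximate tritium endpoint energy, eV

def spectrum(K, m_nu):
    """Unnormalized dN/dK for electron kinetic energy K and neutrino mass m_nu."""
    if K >= E0 - m_nu:
        return 0.0                            # theta function: no phase space left
    p = math.sqrt(K * K + 2.0 * M_E * K)      # relativistic momentum (times c), eV
    eps = E0 - K                              # neutrino total energy
    return p * (K + M_E) * eps * math.sqrt(eps * eps - m_nu * m_nu)

# A non-zero mass suppresses the rate just below the endpoint and moves the
# spectrum's end down to E0 - m_nu:
K = E0 - 5.0
print(spectrum(K, 0.0) > spectrum(K, 2.0) > 0.0)  # True
```

The last comparison is the entire experimental handle: the mass only reshapes the last few eV of the spectrum, which is why both the spectrometer approach and the frequency approach below concentrate on the endpoint region.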
KATRIN is currently under construction and commissioning in Karlsruhe, Germany. The lower limits for the neutrino mass from oscillation experiments provide a strong motivation for probing lower neutrino masses. However, with KATRIN, the technologies used in spectrometer-type tritium experiments have been pushed to their current practical limits. A new technique is needed to push the mass sensitivity lower. \section{A New Technique} The Project 8 collaboration proposes an alternate method of measuring the electron energies: measure the cyclotron radiation emitted by the electrons spiraling around magnetic field lines. An enclosed volume of tritium is placed in a uniform magnetic field, and as the tritium nuclei decay, the electrons spiral around the magnetic field lines. The spiraling electrons are being accelerated, and therefore emit cyclotron radiation. The frequency of that radiation is proportional to the magnetic field strength, and decreases with the electron's kinetic energy: \begin{equation} \omega = \frac{eB}{\gamma m_e} = \frac{\omega_c}{\gamma} = \frac{\omega_c}{1+K_e/(m_e c^2)}. \label{eq:frequency} \end{equation} By measuring the frequency of the cyclotron radiation, one can measure the electron's kinetic energy without interfering with the electron itself. Using a 1-T magnetic field, the endpoint of the tritium spectrum (18.6 keV) falls around 26~GHz. The power emitted as cyclotron radiation depends both on the electron's relativistic velocity, $\beta$, and the angle at which it is emitted relative to the direction of the magnetic field, $\theta$: \begin{equation} P(\beta,\theta) = \frac{1}{4\pi\epsilon_0}\frac{2 q^2 \omega_c^2}{3c}\frac{\beta_{\perp}^2}{1-\beta^2}, \qquad \beta_{\perp} \equiv \beta \sin{\theta}. \label{eq:power} \end{equation} The electrons that radiate the most power will be the easiest to detect, because the signal-to-noise ratio will be higher.
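Equation~(\ref{eq:frequency}) is easy to evaluate numerically. The sketch below, using SI constants and the 1-T field from the text, shows the few-percent relativistic downshift between zero-energy electrons and the 18.6-keV endpoint:

```python
import math

# Evaluate Eq. (2) at B = 1 T: f = eB / (2 pi gamma m_e), with
# gamma = 1 + K/(m_e c^2).  SI constants (CODATA values).
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E_KG = 9.1093837015e-31    # electron mass, kg
M_E_EV = 510_998.95          # electron rest energy, eV

def cyclotron_freq_hz(K_eV, B_T=1.0):
    gamma = 1.0 + K_eV / M_E_EV
    return E_CHARGE * B_T / (2.0 * math.pi * M_E_KG * gamma)

f_cold = cyclotron_freq_hz(0.0)      # zero-energy electrons: ~28 GHz at 1 T
f_end = cyclotron_freq_hz(18.6e3)    # tritium endpoint: ~3.5% lower
print(f"{f_cold/1e9:.2f} GHz -> {f_end/1e9:.2f} GHz")
```

The $\approx 3.5\%$ spread between these two frequencies is the entire dynamic range of the tritium spectrum in frequency space, which is why the endpoint electrons of interest sit at the low-frequency edge of the band.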
Equation~\ref{eq:power} shows that the power will be greatest for electrons with $\theta \approx 90^{\circ}$. Conveniently, these electrons also travel the slowest along the magnetic field lines, increasing the amount of time they can be observed. For the hypothetical experiment described in~\cite{ref:formaggio_monreal_2009}, the simulated power spectrum measured from $10^5$ tritium decays in 30~$\mu$s is shown in Fig.~\ref{fig:powerspectrum}~(left). Since the frequency decreases with electron energy, the rare high-energy electrons are at lower frequency, near 26~GHz. Low-energy electrons, making up the vast majority of the spectrum, pile up towards 27~GHz. The pileup from low-energy electrons is a possible complication: unlike spectrometer-type experiments, we do not have an intrinsic method for rejecting low-energy electrons. Instead, we can narrow the bandwidth such that the event rate is low enough that individual events can be identified. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figure2.pdf} \includegraphics[width=0.45\textwidth]{figure3.pdf} \caption{\label{fig:powerspectrum}Simulated power spectrum from the hypothetical experiment described in~\cite{ref:formaggio_monreal_2009}. $10^5$ beta decays were simulated over 30~$\mu$s. The vertical arrow indicates the location of the tritium beta-decay endpoint in frequency space. The full spectrum is on the left, and the endpoint region is on the right. The triplet of peaks from a high-energy electron are easily distinguished from the background.} \end{figure} The primary concern for making a precise electron energy measurement is the ability to measure frequency precisely. The desired energy precision is therefore the place to start in considering the requirements for this type of experiment. To achieve the necessary energy precision, $\Delta E$, we need to achieve a relative frequency precision of $\Delta f / f \approx \Delta E / (m_e c^2)$.
KATRIN is designed to achieve $\Delta E \approx 1\ \mathrm{eV}$; for Project 8, achieving a similar accuracy means that $\Delta f / f \approx 2 \times 10^{-6}$. This accuracy is achievable with current technologies. With a 1-T magnetic field, $\Delta f \approx 52\ \mathrm{kHz}$ at 26~GHz. The desired frequency accuracy determines for how long we must be able to observe single electrons. To have a frequency resolution of $\Delta f$, we must measure each electron for $t_{\mathrm{min}} = 1 / \Delta f$. With the design parameters discussed above, the electrons must be coherently measured for at least 20~$\mu$s. The minimum measurement time places constraints on a number of physical parameters of the experiment. The gas density must be low enough that, on average, 18.6~keV electrons can travel for $t_{\mathrm{min}}$ without scattering. Furthermore, the experiment must be large enough that the electron can be tracked continuously. The signal detected for a single electron may be more complicated than the single frequency at which the cyclotron radiation is emitted. In particular, the detected signal can include a Doppler shift due to the velocity of the electron parallel to the magnetic field, $\beta_{\parallel}$, a dependence on the electron-antenna distance, and effects from the angular dependence of the power distribution of the radiation. The way these effects are represented in the data depends strongly on the antenna configuration. For the hypothetical experiment described in~\cite{ref:formaggio_monreal_2009}, if the signals from the different antennas are summed coherently, there will be sideband peaks from the Doppler shift. Fig.~\ref{fig:powerspectrum}~(right) zooms in on the high-energy (low-frequency) region of the power spectrum shown previously. In this simulation, a triplet of peaks from a single high-energy electron is easily distinguishable.
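The chain from energy resolution to minimum observation time can be worked out directly from Eq.~(\ref{eq:frequency}), which gives $|\Delta f|/f = \Delta E / (m_e c^2 + K_e)$; a sketch of the arithmetic:

```python
# From a 1 eV target energy resolution at the 18.6 keV endpoint to the
# fractional frequency precision, the absolute bandwidth at 26 GHz, and the
# minimum coherent observation time t_min = 1/df.
M_E_EV = 510_998.95   # electron rest energy, eV
K_END = 18.6e3        # tritium endpoint kinetic energy, eV
F_OBS = 26e9          # observation frequency near the endpoint, Hz

dE = 1.0                         # target energy resolution, eV
rel = dE / (M_E_EV + K_END)      # ~1.9e-6, i.e. the ~2e-6 quoted in the text
df = rel * F_OBS                 # ~50 kHz
t_min = 1.0 / df                 # ~20 microseconds
print(f"df/f = {rel:.2e}, df = {df/1e3:.0f} kHz, t_min = {t_min*1e6:.0f} us")
```

These are the same numbers that drive the hardware constraints discussed above: the mean free path of an endpoint electron in the gas and the physical size of the apparatus must both accommodate roughly 20~$\mu$s of uninterrupted tracking.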
Though it is an antenna-design-dependent effect, the triplet of peaks due to the Doppler shift could be a convenient tool for tagging electrons. \subsection{Prototype Experiment} The Project 8 Collaboration has put together a prototype experiment to explore the practical use of cyclotron radiation as a method for measuring the decay-electron energy. The initial goal of the prototype is to verify that we can, in fact, detect the cyclotron radiation from a single electron. We will use a $^{83m}$Kr radioactive source, which emits monoenergetic conversion electrons. This excited nucleus emits 17.8~keV or 30~keV electrons, and has a half-life of 1.83 hours. The source is a good stand-in for tritium: it is gaseous, emitting the electrons isotropically, and the energy of one of the decay branches is close to the tritium-decay endpoint. Figure~\ref{fig:prototype} shows a diagram of the magnet insert for the prototype experiment, which is located at the University of Washington, in Seattle, WA. A superconducting solenoid provides the 1-T magnetic field. The electrons are trapped in a small ($\approx 1\ \mathrm{mm}^3$) magnetic bottle in the bore of the magnet. The magnetic field from the solenoid traps the electrons in the horizontal plane; a trapping coil within the bore of the magnet decreases the field slightly in a small volume, trapping the electrons vertically. Whether or not electrons are trapped depends on the depth of the magnetic bottle potential and the pitch angle of the electrons. Electrons with large $\beta_{\perp}$ will be trapped. Fortunately, these electrons also emit the most power as cyclotron radiation. Only electrons with large pitch angles ($\theta \ge 85^{\circ}$) are trapped. Though this angle selection severely limits the number of electrons we will detect, it maximizes the signal-to-noise ratio.
\begin{figure} \centering \includegraphics[width=5in]{figure4.pdf} \caption{\label{fig:prototype}Diagram of the magnet-bore insert for the prototype experiment located at the University of Washington, in Seattle, WA. The configuration shown was used to take data in January 2013.} \end{figure} The ability to open the magnetic bottle by turning off or reversing the current in the trapping coil will allow us to confirm that we are indeed trapping electrons, and accurately measure the noise levels. In addition to detecting the cyclotron radiation, we will employ more traditional means of detecting electrons to monitor the presence of $^{83m}$Kr in the trap, and verify that electrons are actually trapped. The cyclotron radiation is detected with a waveguide coupled to two low-noise cryogenic amplifiers. The rectangular cavity of the waveguide also serves to contain the $^{83m}$Kr gas. The signals from the amplifier are mixed down to lower frequencies, digitized and written to disk. After the data has been recorded, we analyze it to search for excesses of power as a function of frequency. Fig.~\ref{fig:chirp} (left) shows a simulated ``chirp'' signal in time-frequency space. Each column is created by taking a Fourier transform of a $\approx 40$-ms time slice. The signal rises in frequency as a function of time because the electron loses energy to the cyclotron radiation. The main background in our analysis is the random noise that sometimes fluctuates high enough to mimic a signal. As Fig.~\ref{fig:chirp} (right) shows, the noise can form clusters that look like signal chirps. We can identify electron candidates by finding a peak of candidate chirps as a function of frequency; using the clustering of high-power bins as a function of time allows us to significantly reduce the background candidate rate. Data was taken in January 2013, though we have not yet seen any indication of trapped electrons in that data set. 
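The power-excess search described above amounts to a short-time Fourier analysis of the digitized signal. A minimal sketch follows; all sampling parameters are illustrative placeholders, not the prototype's actual settings:

```python
import numpy as np

np.random.seed(0)
fs = 1.0e6                  # sample rate after mixing down, Hz (illustrative)
t = np.arange(0, 0.02, 1/fs)
f0, slope = 2.0e5, 5.0e6    # start frequency (Hz) and chirp rate (Hz/s), illustrative
chirp = np.sin(2*np.pi*(f0*t + 0.5*slope*t**2))
noisy = chirp + 0.5*np.random.randn(t.size)

# Spectrogram: each column is the power spectrum of one time slice.
n_slice = 2048
n_cols = t.size // n_slice
slices = noisy[:n_cols*n_slice].reshape(n_cols, n_slice)
spec = np.abs(np.fft.rfft(slices, axis=1))**2

# The peak bin drifts upward from column to column, tracing the chirp.
peak_bins = spec.argmax(axis=1)
```
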
We are currently developing more sensitive analysis techniques, and plan on taking data in Fall 2013 after making several improvements to the apparatus. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figure5.pdf} \includegraphics[width=0.45\textwidth]{figure6.pdf} \caption{\label{fig:chirp}The time-frequency-space representation of our data. On the left is a simulated chirp. The candidate on the right resembles what we expect our signal to look like, but is actually a random noise fluctuation from a run in which the trapping-magnet current was reversed, so no electrons were being stored in the magnetic bottle.} \end{figure} \section{Future Work} Once we have shown that we can detect single electrons using their cyclotron radiation, we will investigate the energy resolution achievable with this type of setup. The $^{83m}$Kr source is particularly useful for this purpose, since the electrons are monoenergetic. Finally, we want to demonstrate that the signal from cyclotron radiation can be used to identify electrons and determine their energy without additional detection methods. Though for the initial stages of the prototype the data acquisition is untriggered, we will need to develop the ability to recognize electrons and trigger the recording of data. This research is supported in part by DOE grant DE-FG02-97ER41020 and the National Science Foundation.
\section{Introduction} A complete census of the obscured AGN is crucial to fully understand the cosmological growth of supermassive black holes (SMBH) and to reveal the nature of the SMBH-galaxy co-evolution. Obscured accretion is a key phase both in AGN growth and in the co-evolution of AGN and their host galaxies as most SMBH mass growth occurs in heavily obscured environments (\cite[Fabian \& Iwasawa 1999]{1999MNRAS.303L..34F}). However, even the deepest X-ray surveys conducted to date with XMM-Newton and Chandra at energies $>$2 keV are incomplete for AGN with line-of-sight neutral hydrogen column densities ${\rm N_H > 10^{23}\,cm^{-2}}$ and they miss the Compton-thick AGN (${\rm N_H > 1.5\times10^{24}\,cm^{-2}}$; e.g. \cite[Burlon et al. 2011]{2011ApJ...728...58B}). Surveys at mid-infrared wavelengths ($>$5$\mu$m) are much less affected by extinction since the obscuring circumnuclear dust reradiates the absorbed nuclear optical-to-X-ray radiation in the infrared. As shown by selection techniques mainly developed with data from Spitzer-IRAC, surveys conducted at mid-infrared wavelengths can potentially reveal the elusive obscured accretion missed by hard X-ray surveys (e.g. \cite[Lacy et al. 2004]{lacy04}; \cite[Stern et al. 2005]{stern05}; \cite[Alonso-Herrero et al. 2006]{alonso06}; \cite[Donley et al. 2012]{2012ApJ...748..142D}). AGN population studies with the Wide-field Infrared Survey Explorer (WISE; \cite[Wright et al. 2010]{2010AJ....140.1868W}) are starting to fill the gap between local/deep mid-infrared surveys with IRAS/Spitzer, completing our census of obscured SMBH growth in regions of the luminosity-redshift parameter space poorly sampled. Several works have already demonstrated that using WISE colours alone it is possible to separate stars and star-forming galaxies from luminous AGN (e.g. \cite[Mateos et al. 2012]{mateos12}; \cite[Stern et al. 2012]{stern12}; \cite[Assef et al. 2013]{assef13}; \cite[Mateos et al. 
2013]{2013MNRAS.434..941M}; \cite[Yan et al. 2013]{2013AJ....145...55Y}). In \cite[Mateos et al. (2012)]{mateos12} (hereafter M12) we presented a colour-based selection technique of luminous AGN using the 3.4, 4.6, and 12 $\mu$m bands of WISE (hereafter mid-infrared wedge). We demonstrated that this technique is among the most reliable and efficient in the literature for detecting X-ray luminous AGN. Furthermore, in \cite[Mateos et al. (2013)]{2013MNRAS.434..941M} (hereafter M13) we showed that it is very effective at revealing both Compton-thin and Compton-thick luminous AGN, at least up to z$\lesssim$1. Here we briefly summarise the results of these studies. \section{The data} The samples used are described in detail in M12, M13 and \cite[Reyes et al. (2008)]{2008AJ....136.2373R} (hereafter R08). Briefly, the Bright Ultra-hard XMM-Newton Survey (BUXS) is a complete flux-limited sample of 258 bright (${\rm f_{4.5-10\,keV} > 6 \times 10^{-14}\,erg\,s^{-1}\,cm^{-2}}$) ``ultra-hard'' (4.5-10 keV) X-ray selected sources detected over a sky area of 44.43 deg${\rm ^2}$. Currently 253 objects have been identified through optical spectroscopy, 143 as type 1 AGN (Sy1-1.5) and 110 as type 2 AGN (Sy1.8-2). We have also used the largest catalogue of 887 [${\rm O_{III}}$] $\lambda$5007-selected type 2 quasars (QSO2s) at $z$$\lesssim$0.83 in the literature, drawn from the Sloan Digital Sky Survey (SDSS) by R08. These QSO2s were selected independently of their X-ray properties; hence, the sample should be unaffected by nuclear obscuration. \begin{figure}[t] \vspace*{-0.3 cm} \begin{center} \includegraphics[angle=0,width=3.0in]{mateos_fig1.eps} \vspace*{-0.3 cm} \caption{Mid-infrared colours for the WISE sources detected in the BUXS area down to the full depth of the X-ray observations (i.e. not applying the X-ray flux limit that defines the BUXS AGN; filled circles). Open squares are the SDSS QSO2s in R08. 
The M12 mid-infrared wedge and power-law locus (and the values for different spectral indices) are the thick solid and dashed lines, respectively. The solid and dot-dashed contours indicate the densities (normalized to the peak value) of the SDSS QSO2s and the WISE sources in BUXS detected in X-rays, respectively.} \label{fig1} \end{center} \end{figure} \section{Results} \subsection{Mid-infrared selection of AGN with WISE} Fig.\,\ref{fig1} illustrates our mid-infrared wedge and power-law locus and the WISE colour distributions of the SDSS QSO2s. For comparison we show the colours of all the WISE sources detected in the BUXS fields and of those detected at 2-10 keV energies down to the full depth of the XMM-Newton observations. The great majority of WISE objects detected in X-rays fall in the mid-infrared wedge and cluster near the power-law locus. It seems, however, that a substantial fraction of the SDSS QSO2s have the infrared colours of low $z$ star-forming galaxies (horizontal sequence in the lower-right part of the diagram). \subsection{Dependence on AGN luminosity} It is well known that the effectiveness of any mid-infrared selection technique is a strong function of the AGN luminosity (e.g. M12; \cite[Donley et al. 2012]{2012ApJ...748..142D}; \cite[Messias et al. 2013]{2013arXiv1312.3336M}; M13). Fig.\,\ref{fig2} shows the dependence on luminosity of the fraction of SDSS QSO2s and BUXS AGN in the mid-infrared wedge. As most SDSS QSO2s have not been observed in X-rays to compare with the results for the AGN in BUXS, we derived their intrinsic 2-10 keV luminosities using the empirical relation between hard X-ray emission and [${\rm O_{III}}$] $\lambda$5007 luminosity from \cite[Jin, Ward, \& Done (2012)]{2012MNRAS.422.3268J} (top axis in Fig.\,\ref{fig2}). We see that the effectiveness of our selection technique increases with the AGN luminosity. 
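In practice, wedge membership reduces to three linear cuts in the $[4.6]-[12]$ versus $[3.4]-[4.6]$ colour plane. The following sketch illustrates the idea; the boundary coefficients are approximations to the published M12 wedge and should be treated as illustrative rather than authoritative:

```python
def in_m12_wedge(w1, w2, w3):
    """Illustrative mid-infrared wedge test, WISE Vega magnitudes.

    x = W2 - W3 (i.e. [4.6]-[12]), y = W1 - W2 (i.e. [3.4]-[4.6]).
    Boundary coefficients are approximate, for illustration only.
    """
    x, y = w2 - w3, w1 - w2
    return (y >= 0.315*x - 0.222 and    # bottom boundary
            y <= 0.315*x + 0.796 and    # top boundary
            y >= -3.172*x + 7.624)      # low-x (left) boundary

# A luminous AGN has red colours and falls inside the wedge;
# a star has colours near zero and falls outside.
print(in_m12_wedge(14.0, 13.0, 10.0))   # AGN-like: x = 3.0, y = 1.0
print(in_m12_wedge(10.0, 9.9, 9.4))     # star-like: x = 0.5, y = 0.1
```
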
The fraction of SDSS QSO2s in the mid-infrared wedge is substantially lower than that for the BUXS type 1 AGN, but it is consistent, within the uncertainties, with that for the BUXS type 2 AGN, especially at the highest luminosities. The apparently different fractions of type 1 and type 2 AGN in the mid-infrared wedge could be explained if all type 2 objects suffer larger extinction at rest-frame near-infrared wavelengths so that, for a given luminosity, their observed WISE fluxes, especially at the shortest wavelengths, are more contaminated by their host galaxies than those of type 1 AGN. Still, at luminosities ${\rm L_{2-10\,keV} > 10^{44}\,erg\,s^{-1}}$ the mid-infrared wedge is highly effective at selecting obscured AGN ($75.0_{-19.1}^{+14.1}\%$ for the BUXS type 2 AGN and $66.1_{-4.7}^{+4.5}\%$ for the SDSS QSO2s). \begin{figure}[t] \vspace*{0.2 cm} \begin{center} \includegraphics[angle=90,width=2.7in]{mateos_fig2.ps} \vspace*{0.3 cm} \caption{Fraction of sources in the mid-infrared wedge as a function of the AGN luminosity. Triangles are the SDSS QSO2s and open and filled circles are the type 1 and type 2 AGN in the BUXS survey, respectively. At ${\rm L_{2-10\,keV}>10^{44}\,erg\,s^{-1}}$ $>$96\% and $>$75\% of the BUXS type 1 and type 2 AGN and $>$66\% of the SDSS QSO2s fall in the mid-infrared wedge, respectively. The horizontal arrow at the bottom right shows the amplitude of the median extinction correction to the [${\rm O_{III}}$] line luminosities.} \label{fig2} \end{center} \end{figure} \subsection{Effectiveness of mid-infrared selection to uncover Compton-thick AGN} To investigate the effectiveness of our mid-infrared wedge at identifying absorbed luminous AGN missed by deep X-ray surveys we have evaluated whether the SDSS QSO2s identified as Compton-thick candidates in the literature from the studies of \cite[Vignali et al. (2010)]{vignali10} and \cite[Jia et al. (2013)]{2013ApJ...777...27J} would be selected by our mid-infrared wedge. 
To date, the X-ray follow-up of the SDSS QSO2s in R08 has focused mainly on the most luminous objects; hence, in what follows, we use only the SDSS QSO2s with ${\rm L_{[OIII]}>4.8\times10^{42}\,erg\,s^{-1}}$. The fraction of those objects in the mid-infrared wedge is ${\rm 72.8^{+5.9}_{-6.5}\%}$ (99 out of 136 objects). Out of the 31 SDSS QSO2s in this sample with X-ray follow-up, 18 objects are robust Compton-thick candidates. Of these, 12 are in the mid-infrared wedge (${\rm 66.7^{+15.5}_{-18.5}}$\%). We show in Fig.\,\ref{fig3} the WISE colours of the SDSS QSO2s with X-ray follow-up and indicate those objects identified as Compton-thick candidates. We see that the Compton-thick AGN have a distribution of colours that is consistent with that for the SDSS QSO2s with the same luminosities. All these results fully support the conclusion that, at high AGN luminosities and at least up to z$\lesssim$1, our mid-infrared selection technique is very effective at identifying both Compton-thin and Compton-thick AGN. \begin{figure}[t] \vspace*{0.3 cm} \begin{center} \includegraphics[angle=90,width=2.7in]{mateos_fig3.ps} \vspace*{0.2 cm} \caption{Mid-infrared colours for the SDSS QSO2s with ${\rm L_{[O_{III}]} \gtrsim 4.8 \times 10^{42}\,erg\,s^{-1}}$ and X-ray follow-up with either Chandra or XMM-Newton (open circles). Open squares and triangles are the Compton-thick candidates from the studies of \cite[Vignali et al. (2010)]{vignali10} and \cite[Jia et al. (2013)]{2013ApJ...777...27J}, respectively. The mid-infrared wedge and power-law locus (and the values for different spectral indices) are the thick solid and dashed black lines, respectively. The dashed contours indicate the density (normalized to the peak value) of all SDSS QSO2s with ${\rm L_{[O_{III}]} \gtrsim 4.8 \times 10^{42}\,erg\,s^{-1}}$.} \label{fig3} \end{center} \end{figure} Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile. 
Based on observations made with the William Herschel Telescope, the Telescopio Nazionale Galileo and the Gran Telescopio de Canarias installed in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'isica de Canarias, in La Palma, Spain. SM acknowledges financial support from the Spanish Plan Nacional through grants AYA2010-21490-C02-01 and AYA2012-31447.
\section{Introduction} For a positive integer $n$, let $[n]=\{1,\ldots,n\}$. An order $k$ dimension $n$ tensor $\mathcal{A}=(a_{i_1\cdots i_k})\in\mathbb{C}^{n\times\cdots\times n}$ is a multidimensional array with $n^k$ entries, where $i_j\in[n]$, $j=1,\ldots,k$. $\mathcal{A}$ is called \textit{symmetric} if $a_{i_1i_2\cdots i_k}=a_{i_{\sigma(1)}i_{\sigma(2)}\cdots i_{\sigma(k)}}$ for any permutation $\sigma$ on $[k]$. We sometimes write $a_{i_1\cdots i_k}$ as $a_{i_1\alpha}$, where $\alpha=i_2\cdots i_k$. When $k=1$, $\mathcal{A}$ is a column vector of dimension $n$. When $k=2$, $\mathcal{A}$ is an $n\times n$ matrix. The \textit{unit tensor} of order $k\geqslant2$ and dimension $n$ is the tensor $\mathcal{I}_n=(\delta_{i_1i_2\cdots i_k})$ such that $\delta_{i_1i_2\cdots i_k}=1$ if $i_1=i_2=\cdots=i_k$, and $\delta_{i_1i_2\cdots i_k}=0$ otherwise. When $k=2$, $\mathcal{I}_n$ is the identity matrix $I_n$. Recently, Shao \cite{Shao-product} introduced the following product of tensors, which is a generalization of matrix multiplication. \begin{definition}\label{definition1.1}\textup{\cite{Shao-product}} Let $\mathcal{A}$ and $\mathcal{B}$ be order $m\geqslant2$ and order $k\geqslant1$, dimension $n$ tensors, respectively. The product $\mathcal{A}\mathcal{B}$ is the following tensor $\mathcal{C}$ of order $(m-1)(k-1)+1$ and dimension $n$ with entries: \begin{eqnarray*} c_{i\alpha_1\ldots \alpha_{m-1}}=\sum_{i_2,\ldots,i_m\in[n]}a_{ii_2\ldots i_m}b_{i_2\alpha_1}\cdots b_{i_m\alpha_{m-1}}, \end{eqnarray*} where $i\in[n]$, $\alpha_1,\ldots,\alpha_{m-1}\in[n]^{k-1}$. \end{definition} Let $\mathcal{A}$ be an order $m\geqslant2$ dimension $n$ tensor, and let $x=(x_1,\ldots,x_n)^\top$. From Definition \ref{definition1.1}, the product $\mathcal{A}x$ is a vector in $\mathbb{C}^n$ whose $i$-th component is (see Example 1.1 in \cite{Shao-product}) \begin{eqnarray*} (\mathcal{A}x)_i=\sum_{i_2,\ldots,i_m\in[n]}a_{ii_2\cdots i_m}x_{i_2}\cdots x_{i_m}. 
\end{eqnarray*} In 2005, the concept of tensor eigenvalues was posed by Qi \cite{Qi05} and Lim \cite{Lim}. A number $\lambda\in\mathbb{C}$ is called an \textit{eigenvalue} of $\mathcal{A}$, if there exists a nonzero vector $x\in\mathbb{C}^n$ such that $\mathcal{A}x=\lambda x^{[m-1]}$, where $x^{[m-1]}=(x_1^{m-1},\ldots,x_n^{m-1})^\top$. The \textit{determinant} of $\mathcal{A}$, denoted by $\det(\mathcal{A})$, is the resultant of the system of polynomials $f_i(x_1,\ldots,x_n)=(\mathcal{A}x)_i$ ($i=1,\ldots,n$). The \textit{characteristic polynomial} of $\mathcal{A}$ is defined as $\Phi_{\mathcal{A}}(\lambda)=\det(\lambda\mathcal{I}_n-\mathcal{A})$, where $\mathcal{I}_n$ is the unit tensor of order $m$ and dimension $n$. It is known that eigenvalues of $\mathcal{A}$ are exactly roots of $\Phi_{\mathcal{A}}(\lambda)$ (see \cite{Shao-product}). For an order $m\geqslant2$ dimension $n$ tensor $\mathcal{A}$, a number $\lambda\in\mathbb{C}$ is called an \textit{E-eigenvalue} of $\mathcal{A}$, if there exists a nonzero vector $x\in\mathbb{C}^n$ such that $\mathcal{A}x=\lambda x$ and $x^\top x=1$. In \cite{Qi07}, the \textit{E-characteristic polynomial} of $\mathcal{A}$ is defined as \begin{eqnarray*} \phi_\mathcal{A}(\lambda)=\begin{cases}{\rm Res}_x\left(\mathcal{A}x-\lambda(x^\top x)^{\frac{m-2}{2}}x\right)~~~~~~~m~\mbox{is even},\\{\rm Res}_{x,\beta}\begin{pmatrix}\mathcal{A}x-\lambda\beta^{m-2}x\\x^\top x-\beta^2\end{pmatrix}~~~~~~~~~~m~\mbox{is odd},\end{cases} \end{eqnarray*} where `Res' is the resultant of the system of polynomials. It is known that E-eigenvalues of $\mathcal{A}$ are roots of $\phi_\mathcal{A}(\lambda)$ (see \cite{Qi07}). If $m=2$, then $\phi_\mathcal{A}(\lambda)=\Phi_{\mathcal{A}}(\lambda)$ is just the characteristic polynomial of the square matrix $\mathcal{A}$. A hypergraph $H$ is called $k$-\textit{uniform} if each edge of $H$ contains exactly $k$ distinct vertices. All hypergraphs in this note are uniform and simple. 
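Before turning to hypergraphs, the product $\mathcal{A}x$ and the eigenvalue equation $\mathcal{A}x=\lambda x^{[m-1]}$ can be checked numerically. A minimal sketch for order $m=3$ (function names my own, using \texttt{numpy.einsum}) verifies that the unit tensor satisfies $\mathcal{I}_n x = x^{[2]}$, so $\lambda=1$ solves the eigen-equation for any nonzero $x$:

```python
import numpy as np

def apply_tensor(A, x):
    # (A x)_i = sum_{i2,i3} a_{i,i2,i3} x_{i2} x_{i3} for an order-3 tensor A
    return np.einsum('ijk,j,k->i', A, x, x)

n = 3
I3 = np.zeros((n, n, n))
for i in range(n):
    I3[i, i, i] = 1.0      # unit tensor: delta_{i1 i2 i3}

x = np.array([1.0, 2.0, -1.0])
# The unit tensor satisfies I x = x^[m-1]; here m = 3, so I x = x^[2],
# i.e. the eigen-equation A x = lambda * x^[m-1] holds with lambda = 1.
assert np.allclose(apply_tensor(I3, x), x**2)
```
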
Let $K_n^k$ denote the complete $k$-uniform hypergraph with $n$ vertices, i.e., every $k$ distinct vertices of $K_n^k$ form an edge. For a $k$-uniform hypergraph $H=(V(H),E(H))$, a hypergraph $G=(V(G),E(G))$ is a \textit{sub-hypergraph} of $H$, if $V(G)\subseteq V(H)$ and $E(G)\subseteq E(H)$. For any edge $u_1\cdots u_k\in E(H)$, we say that $u_k$ is a \textit{neighbor} of $\{u_1,\ldots,u_{k-1}\}$. The \textit{complement} of $H$ is a $k$-uniform hypergraph with vertex set $V(H)$ and edge set $E(K_{|V(H)|}^k)\backslash E(H)$. The \textit{adjacency tensor} of $H$, denoted by $\mathcal{A}_H$, is an order $k$ dimension $|V(H)|$ tensor with entries (see \cite{Cooper12}) \begin{eqnarray*} a_{i_1i_2\cdots i_k}=\begin{cases}\frac{1}{(k-1)!}~~~~~~~\mbox{if}~i_1i_2\cdots i_k\in E(H),\\ 0~~~~~~~~~~~~~\mbox{otherwise}.\end{cases} \end{eqnarray*} Clearly $\mathcal{A}_H$ is a symmetric tensor. We say that two $k$-uniform hypergraphs are \textit{cospectral} (\textit{E-cospectral}), if their adjacency tensors have the same characteristic polynomial (E-characteristic polynomial). A $k$-uniform hypergraph $H$ is said to be \textit{determined by its spectrum}, if there is no other non-isomorphic $k$-uniform hypergraph cospectral with $H$. We shall use ``DS'' as an abbreviation for ``determined by its spectrum'' in this note. Cospectral (E-cospectral) hypergraphs and DS hypergraphs are generalizations of cospectral graphs and DS graphs in the classic sense \cite{Dam03}. Recently, the research on spectral theory of hypergraphs has attracted extensive attention [2,6-9,12,15-20]. In this note, we give a method for constructing E-cospectral hypergraphs. Some hypergraphs are shown to be DS. \section{Preliminaries} The following lemma can be obtained from equation (2.1) in \cite{Shao-product}. \begin{lem}\label{lem1} Let $\mathcal{A}=(a_{i_1\cdots i_m})$ be an order $m\geqslant2$ dimension $n$ tensor, and let $P=(p_{ij})$ be an $n\times n$ matrix. 
Then \begin{eqnarray*} (P\mathcal{A}P^\top)_{i_1\cdots i_m}=\sum_{j_1,\ldots,j_m\in[n]}a_{j_1\cdots j_m}p_{i_1j_1}p_{i_2j_2}\cdots p_{i_mj_m}. \end{eqnarray*} \end{lem} We can obtain the following lemma from Lemma \ref{lem1}. \begin{lem}\label{lem2} Let $\mathcal{B}=P\mathcal{A}P^\top$, where $\mathcal{A}$ is a tensor of dimension $n$, $P$ is an $n\times n$ matrix. If $\mathcal{A}$ is symmetric, then $\mathcal{B}$ is symmetric. \end{lem} Let $\mathcal{B}=P\mathcal{A}P^\top$, where $\mathcal{A}$ is a tensor of dimension $n$, $P$ is an $n\times n$ real orthogonal matrix. In \cite{Shao-product}, Shao pointed out that $\mathcal{A},\mathcal{B}$ are orthogonally similar tensors as defined by Qi \cite{Qi05}. Orthogonally similar tensors have the following property. \begin{lem}\label{lem3}\textup{\cite{Li}} Let $\mathcal{B}=P\mathcal{A}P^\top$, where $\mathcal{A}$ is a tensor of dimension $n$, $P$ is an $n\times n$ real orthogonal matrix. Then $\mathcal{A}$ and $\mathcal{B}$ have the same E-characteristic polynomial. \end{lem} A \textit{simplex} in a $k$-uniform hypergraph is a set of $k+1$ vertices where every set of $k$ vertices forms an edge (see [2, Definition 3.4]). \begin{lem}\label{lem4} Let $G$ and $H$ be cospectral $k$-uniform hypergraphs. Then $G$ and $H$ have the same number of vertices, edges and simplices. \end{lem} \begin{proof} The degree of the characteristic polynomial of an order $k$ dimension $n$ tensor is $n(k-1)^{n-1}$ (see \cite{Qi05}). Since $\mathcal{A}_G$ and $\mathcal{A}_H$ are order $k$ tensors, $G$ and $H$ have the same number of vertices. From [2, Theorem 3.15] and [2, Theorem 3.17], we know that $G$ and $H$ have the same number of edges and simplices. \end{proof} \section{Main results} Let $H=(V(H),E(H))$ be a $k$-uniform hypergraph with a partition $V(H)=V_1\cup V_2$ such that $H$ satisfies the following conditions: (a) For each edge $e\in E(H)$, $e$ contains at most one vertex in $V_1$. 
(b) For any $k-1$ distinct vertices $u_1,\ldots,u_{k-1}\in V_2$, $\{u_1,\ldots,u_{k-1}\}$ has either $0,\frac{1}{2}|V_1|$ or $|V_1|$ neighbors in $V_1$. Similarly to GM switching \cite{GM switching,Haemers}, we construct a hypergraph E-cospectral with $H$ as follows. \begin{thm}\label{thm1} Let $H$ be a $k$-uniform hypergraph satisfying the conditions (a) and (b) described above. For any $\{u_1,\ldots,u_{k-1}\}\subseteq V_2$ which has $\frac{1}{2}|V_1|$ neighbors in $V_1$, by replacing these $\frac{1}{2}|V_1|$ neighbors with the other $\frac{1}{2}|V_1|$ vertices in $V_1$, we obtain a $k$-uniform hypergraph $G$ which is E-cospectral with $H$. \end{thm} \begin{proof} Let $P=\begin{pmatrix}\frac{2}{n_1}J-I_{n_1}&0\\0&I_{n_2}\end{pmatrix}$, where $n_1=|V_1|,n_2=|V_2|$, $J$ is the $n_1\times n_1$ all-ones matrix, $\frac{2}{n_1}J-I_{n_1}$ and $I_{n_2}$ correspond to the vertex sets $V_1$ and $V_2$, respectively. Then $P=P^\top=P^{-1}$. Suppose that $\mathcal{A}_H=(a_{i_1i_2\cdots i_k})$, and let $\mathcal{B}=P\mathcal{A}_HP^\top$. By Lemma \ref{lem2}, $\mathcal{B}$ is symmetric. We need to show that $\mathcal{B}=\mathcal{A}_G$. By Lemma \ref{lem1}, we have \begin{eqnarray}\label{Eq.1} (\mathcal{B})_{i_1\cdots i_k}=\sum_{j_1,\ldots,j_k\in V(H)}a_{j_1\cdots j_k}p_{i_1j_1}p_{i_2j_2}\cdots p_{i_kj_k}. \end{eqnarray} Note that $P=\begin{pmatrix}\frac{2}{n_1}J-I_{n_1}&0\\0&I_{n_2}\end{pmatrix}$. From Eq. (\ref{Eq.1}), we have \begin{eqnarray}\label{Eq.2} (\mathcal{B})_{i_1\cdots i_k}=a_{i_1\cdots i_k}~~\mbox{if}~i_1,\ldots,i_k\in V_2. \end{eqnarray} Since $H$ satisfies condition (a), we have $a_{j_1\cdots j_k}=0$ if $|\{j_1,\ldots,j_k\}\cap V_1|\geqslant2$. From Eq. (\ref{Eq.1}), we have \begin{eqnarray}\label{Eq.3} (\mathcal{B})_{i_1\cdots i_k}=a_{i_1\cdots i_k}=0~~\mbox{if}~|\{i_1,\ldots,i_k\}\cap V_1|\geqslant2. \end{eqnarray} Next we consider the case $|\{i_1,\ldots,i_k\}\cap V_1|=1$. Note that $\mathcal{B}$ is symmetric. 
Without loss of generality, suppose that $i_1\in V_1,i_2,\ldots,i_k\in V_2$. From Eq. (\ref{Eq.1}), we have \begin{eqnarray}\label{Eq.4} (\mathcal{B})_{i_1\cdots i_k}=\sum_{j_1\in V_1}a_{j_1i_2\cdots i_k}p_{i_1j_1}~(i_1\in V_1,i_2,\ldots,i_k\in V_2). \end{eqnarray} Since $H$ satisfies the condition (b), $S^{i_2\cdots i_k}=\{a_{j_1i_2\cdots i_k}|j_1\in V_1,a_{j_1i_2\cdots i_k}\neq0\}$ contains either $0,\frac{1}{2}|V_1|$ or $|V_1|$ elements for any given $i_2,\ldots,i_k\in V_2$. By computing the sum in (\ref{Eq.4}), we have \begin{eqnarray}\label{Eq.5} (\mathcal{B})_{i_1\cdots i_k}=a_{i_1\cdots i_k}=0~~\mbox{if}~i_1\in V_1,i_2,\ldots,i_k\in V_2,|S^{i_2\cdots i_k}|=0. \end{eqnarray} \begin{eqnarray}\label{Eq.6} (\mathcal{B})_{i_1\cdots i_k}=a_{i_1\cdots i_k}=\frac{1}{(k-1)!}~~\mbox{if}~i_1\in V_1,i_2,\ldots,i_k\in V_2,|S^{i_2\cdots i_k}|=|V_1|. \end{eqnarray} \begin{eqnarray}\label{Eq.7} (\mathcal{B})_{i_1\cdots i_k}=0~~\mbox{if}~i_1\in V_1,i_2,\ldots,i_k\in V_2,a_{i_1\cdots i_k}=\frac{1}{(k-1)!},|S^{i_2\cdots i_k}|=\frac{1}{2}|V_1|. \end{eqnarray} \begin{eqnarray}\label{Eq.8} (\mathcal{B})_{i_1\cdots i_k}=\frac{1}{(k-1)!}~~\mbox{if}~i_1\in V_1,i_2,\ldots,i_k\in V_2,a_{i_1\cdots i_k}=0,|S^{i_2\cdots i_k}|=\frac{1}{2}|V_1|. \end{eqnarray} From Eqs. (\ref{Eq.2})(\ref{Eq.3}) and (\ref{Eq.5})-(\ref{Eq.8}), we have $\mathcal{B}=P\mathcal{A}_HP^\top=\mathcal{A}_G$. By Lemma \ref{lem3}, $G$ is E-cospectral with $H$. \end{proof} If two $k$-uniform hypergraphs $G$ and $H$ are isomorphic, then there exists a permutation matrix $P$ such that $\mathcal{A}_G=P\mathcal{A}_HP^\top$ (see \cite{Bu,Shao-product}). From Lemma \ref{lem3}, we know that two isomorphic $k$-uniform hypergraphs are E-cospectral. By using the method in Theorem \ref{thm1}, we give a class of non-isomorphic E-cospectral hypergraphs as follows. 
\vspace{3mm} \noindent \textbf{Example.} Let $H$ be a $3$-uniform hypergraph whose vertex set and edge set are \begin{eqnarray*} V(H)&=&\{u_1,u_2,u_3,u_4,v_1,\ldots,v_n\}~(n\geqslant3),\\ E(H)&=&\{v_1v_2u_2,v_1v_2u_3,v_2v_3u_2,v_2v_3u_4,v_1v_3u_3,v_1v_3u_4\}\cup F, \end{eqnarray*} where each edge in $F$ contains three vertices in $\{v_1,\ldots,v_n\}$, and each vertex in $\{v_4,\ldots,v_n\}$ is contained in at least one edge in $F$ if $n\geqslant4$. Let $G$ be a $3$-uniform hypergraph whose vertex set and edge set are \begin{eqnarray*} V(G)=V(H),E(G)&=&\{v_1v_2u_1,v_1v_2u_4,v_2v_3u_1,v_2v_3u_3,v_1v_3u_1,v_1v_3u_2\}\cup F. \end{eqnarray*} The vertex set $V(H)$ has a partition $V(H)=V_1\cup V_2$ such that $H$ and $G$ satisfy the conditions in Theorem \ref{thm1}, where $V_1=\{u_1,u_2,u_3,u_4\}$, $V_2=\{v_1,\ldots,v_n\}$. Then $G$ and $H$ are E-cospectral. Moreover, $G$ and $H$ are non-isomorphic E-cospectral hypergraphs, because $H$ has an isolated vertex $u_1$ and $G$ has no isolated vertices. \vspace{3mm} Let $K_n^k-e$ denote the $k$-uniform hypergraph obtained from $K_n^k$ by deleting one edge. We can obtain the following result from Lemma \ref{lem4}. \begin{thm} The complete $k$-uniform hypergraph $K_n^k$, the hypergraph $K_n^k-e$ and their complements are DS. Any $k$-uniform sub-hypergraph of $K_{k+1}^k$ is DS. The disjoint union of $K_{k+1}^k$ and some isolated vertices is DS. \end{thm} If $G$ is a sub-hypergraph of a hypergraph $H$, then let $H\backslash G$ denote the hypergraph obtained from $H$ by deleting all edges of $G$. \begin{thm} The hypergraph $K_n^k\backslash G$ is DS, where $G$ is a $k$-uniform sub-hypergraph of $K_n^k$ such that all edges of $G$ share $k-1$ common vertices. \end{thm} \begin{proof} Let $H$ be any $k$-uniform hypergraph cospectral with $K_n^k\backslash G$. Suppose that $G$ has $r$ edges. By Lemma \ref{lem4}, $H$ can be obtained from $K_n^k$ by deleting $r$ edges. 
Deleting $r$ edges from $K_n^k$ destroys at least $\sum_{i=0}^{r-1}(n-k-i)$ simplices, with equality if and only if all deleted edges share $k-1$ common vertices. Lemma \ref{lem4} implies that $H=K_n^k\backslash G$. \end{proof} \vspace{3mm} \noindent \textbf{Acknowledgements.} \vspace{3mm} This work is supported by the National Natural Science Foundation of China (No. 11371109 and No. 11271084), and the Fundamental Research Funds for the Central Universities.
\section{Introduction} Convolutional neural networks have been instrumental in solving various problems in the field of computer vision. However, the network designs were mainly done by humans (like AlexNet \cite{krizhevsky2012imagenet}, ResNet \cite{he2016deep}, DenseNet \cite{huang2017densely}, VGGNet \cite{simonyan2014very}) on the basis of their intuition and understanding of the specific problem. This has led to the growing interest in the automated search for neural architectures, called \textit{Neural Architecture Search} (NAS) \cite{elsken2018neural}\cite{zoph2016neural}\cite{pmlr-v80-pham18a}. NAS has shown some promising results in the field of computer vision, but most of these methods demand a considerable amount of computational power. For example, obtaining the state-of-the-art architecture for CIFAR-10 required 3150 GPU days of evolution \cite{real2019regularized} and 1800 GPU days of reinforcement learning (RL) \cite{zoph2018learning}. This can be mainly attributed to how architectures are evaluated during the search process: most NAS methods train each architecture individually for a certain number of epochs in order to evaluate its performance on the validation data. Recent works \cite{pmlr-v80-pham18a}\cite{bender2019understanding} have reduced the search time by weight sharing among the architectures. DARTS \cite{liu2018darts2} further improves upon the scalability by relaxing the search space to a continuous space in order to use gradient descent for optimizing the architecture. However, these gradient-based methods are highly dependent on the search space and tend to overfit to operations in the search space that lead to faster gradient descent \cite{Zela2020Understanding}. 
In this work, we propose a method called EvNAS (\textit{Evolving Neural Architecture using One Shot Model}), which involves evolving a convolutional neural network architecture with weight sharing among the architectures in the population for the image classification task. The work is inspired in part by the representation used for the architectures of the network in DARTS \cite{liu2018darts2} and a random search \cite{li2019random} using the same representation as DARTS, which achieved a competitive result on the CIFAR-10 dataset. By replacing the idea of using a random search with a genetic algorithm on the representation used in DARTS, we introduce a directional component to the otherwise directionless random search \cite{li2019random} through the use of crossover with tournament selection and elitism. The stochastic nature of the algorithm also ensures that the algorithm does not get stuck in local minima. The objective of the paper is to show how to apply a simple genetic algorithm to the neural architecture search problem while reducing the high search time associated with evolution-based search algorithms. Our experiments (Section~\ref{experiments}) involve neural architecture search on a proxy dataset, i.e. CIFAR-10, achieving a test error of 2.47\% with 3.63M parameters while using minimal computational resources (4.4 GPU days on a single GPU). The discovered architecture is then transferred to CIFAR-100 and ImageNet, achieving a top-1 error of 16.37\% and a top-5 error of 7.4\%, respectively. Our contributions can be summarized as follows: \begin{itemize} \item We introduce a novel method of applying a simple genetic algorithm to the NAS problem with reduced computational requirements. 
\item We propose a decoding technique for each architecture in the population, which diverts a majority of the gradient information to the current architecture during the training phase and is used to calculate the fitness of the architecture from the one shot model during the fitness evaluation phase. \item We propose a crossover operation that is guided by the predicted fitness of the partially trained architectures of the previous generation and does not require keeping track of the ancestors of the parent architectures. \item We achieved remarkable efficiency in the architecture search, achieving a test error of 2.47\% with 3.63M parameters on CIFAR-10, and showed that the architecture learned by EvNAS is transferable to CIFAR-100 and ImageNet. \end{itemize} \begin{figure*} \centering \begin{subfigure}{0.34\linewidth} \includegraphics[width=\linewidth,scale=0.5]{oneshot.jpg} \caption{} \label{subfig:darts} \end{subfigure} \quad \begin{subfigure}{0.18\linewidth} \includegraphics[width=\linewidth]{arch.jpg} \caption{} \label{subfig:discrete_arch} \end{subfigure} \quad \begin{subfigure}{0.38\linewidth} \includegraphics[width=\linewidth]{discrete.jpg} \caption{} \label{subfig:decoded_arch} \end{subfigure} \caption{The process of decoding the architecture parameter, $\alpha$. Best viewed in color. Here, we consider three operations in the operation space. (a) One shot model and its representation with arrows between the nodes representing all the operations in the search space, (b) Discrete architecture, $arch_{dis}$, derived from $\alpha$, (c) Decoded architecture, $\bar{\alpha}$, created using $arch_{dis}$. The thickness of the arrow is proportional to the weight given to an operation.} \label{fig:arch_represent} \end{figure*} \section{Related Work} Automated Neural Architecture Search is an alternative to hand-crafted architectures, in which the machine designs the architecture best suited to a specific problem. 
Several search methods have been proposed to explore the space of neural architectures, such as evolutionary algorithms (EA) \cite{real2019regularized}\cite{real2017large}\cite{liu2018hierarchical}\cite{xie2017genetic}, reinforcement learning (RL) \cite{zoph2016neural}\cite{zoph2018learning}\cite{pmlr-v80-pham18a}, random search \cite{li2019random} and gradient-based methods \cite{liu2018darts}\cite{liu2018darts2}\cite{luo2018neural}\cite{chen2019progressive}\cite{Zela2020Understanding}. These can be divided into two broad groups: \textit{gradient-based} methods and \textit{non-gradient-based} methods. \textbf{Gradient Based Methods:} In these methods, the neural architecture is directly optimized using gradient information based on the performance on the validation data. In \cite{liu2018darts}\cite{liu2018darts2}, the discrete architecture search space is relaxed to a continuous search space by using a one shot model, and the performance of the model on the validation data is used for updating the architecture using gradients. This reduces the search time significantly but suffers from an overfitting problem, wherein the searched architecture performs very well on the validation data but exhibits poor performance on the test data. This is mainly attributed to the preference for parameter-less operations during the search process, as they lead to rapid gradient descent \cite{chen2019progressive}. Several regularization techniques have been introduced to tackle this problem, such as early stopping \cite{Zela2020Understanding}, search space regularization \cite{chen2019progressive} and architecture refinement \cite{chen2019progressive}. Contrary to the gradient-based methods, the proposed method does not suffer from this overfitting problem because of the stochastic nature introduced by the mutation operation. \textbf{Non-Gradient Based Methods:} These methods include reinforcement learning (RL) and evolutionary algorithms (EA).
In RL methods, an agent is trained to generate a neural architecture through its actions in order to maximize the expected accuracy on the validation data. In \cite{zoph2016neural}\cite{zoph2018learning}, a recurrent neural network (RNN) is used as an agent that samples neural architectures, which are then trained to convergence in order to obtain their accuracies on the validation data. These accuracies are then used to update the weights of the RNN via policy gradient methods. Both of these methods suffered from huge computational requirements. This was improved upon in \cite{pmlr-v80-pham18a}, where all the sampled architectures were forced to share weights by using a single directed acyclic graph (DAG), resulting in a reduction of computational resources. Early EA-based approaches such as \cite{stanley2002evolving}\cite{stanley2009hypercube} optimized both the neural architectures and the weights of the network, which limited their usage to relatively small networks. Later, methods such as \cite{xie2017genetic}\cite{real2019regularized} used evolution to search for the architecture and gradient descent to optimize the weights of each architecture, which made it possible to search for relatively large networks; however, this resulted in huge computational requirements. To speed up the training of each individual architecture, \textit{weight inheritance} was introduced in \cite{real2017large}, wherein a child network inherits the parent network's weights. In this work, we use both weight inheritance and weight sharing among the architectures to speed up the search process. \section{Methods} \label{methods} This section discusses the different parts of the proposed algorithm and its relationship to prior works. \subsection{Representation of Architecture} The proposed algorithm deals with a population of architectures in each generation of the search process.
Instead of having a separate model for each architecture in a population \cite{xie2017genetic}\cite{real2019regularized}, we use a \textit{one shot model}, which treats all the architectures as subgraphs of a supergraph while sharing the weights among all the architectures. The one shot model is composed of repeatable cells which are stacked together to form the convolutional network. The one shot model has two types of convolutional cells: \textit{normal} cells and \textit{reduction} cells. A normal cell uses operations with stride 1, whereas a reduction cell uses operations with stride 2. A cell in the one shot model is represented by the parameter $\alpha$, called the \textit{architecture parameter}, which represents the weights of the different operations $op(.)$ in the operation space $O$ (i.e., the search space of NAS) between a pair of nodes. The edge between node $i$ and node $j$ can be written as: \begin{equation} f^{(i,j)}(x) = \sum_{op \in O } \frac{\exp(\alpha^{i,j}_{op})} {\sum_{op' \in O } \exp(\alpha^{i,j}_{op'})} op(x), \end{equation} where $\alpha^{i,j}_{op}$ refers to the weight of the operation $op$ in the operation space $O$ between node $i$ and node $j$. The architecture is represented by two matrices, one for the normal cell and one for the reduction cell, where each row represents the edge between two nodes and the columns represent the weights of the different operations from the operation space, as shown in Figure~\ref{fig:arch_represent}(a). Please refer to the original DARTS paper \cite{liu2018darts2} for more technical details. This design choice results in weight sharing among the architectures in a given population. It also results in weight inheritance from one generation of architectures to the next, i.e., the next generation architectures are not trained from scratch but inherit the partially trained weights of the previous generation architectures.
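As an illustration, the softmax-weighted mixed edge above can be sketched in plain Python. The toy lambda operations in the usage example stand in for the real convolution/pooling modules of the one shot model and are purely illustrative:

```python
import math

def mixed_edge(alpha_ij, ops, x):
    """Continuous relaxation of one edge: softmax the operation weights
    alpha_ij, then return the weighted sum of every candidate op's output."""
    m = max(alpha_ij)  # subtract the max for numerical stability
    exps = [math.exp(a - m) for a in alpha_ij]
    total = sum(exps)
    return sum((e / total) * op(x) for e, op in zip(exps, ops))

# Toy scalar "operations" standing in for conv/pool modules:
ops = [lambda x: x, lambda x: 2 * x, lambda x: 0.0]
```

With equal weights every operation contributes equally; raising one entry of $\alpha^{i,j}$ concentrates the softmax mass, and hence the edge output, on that operation.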
All of this ultimately reduces the architecture search time of the evolutionary approach. \begin{figure*}[t] \centering \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{accuracy_vs_decoded_architecture_value.png} \caption{} \label{subfig:AccVsArch} \end{subfigure} \quad \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{accuracy_vs_population_size.png} \caption{} \label{subfig:AccVsPop} \end{subfigure} \quad \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{accuracy_vs_mutation_rate.png} \caption{} \label{subfig:AccVsMrate} \end{subfigure} \caption{(a) Accuracy vs Decoded architecture value (b) Accuracy vs Population size (c) Accuracy vs mutation rate} \label{fig:exp} \end{figure*} \subsection{Decoding Architecture Parameter}\label{subsec:decode} The \textit{architecture parameter}, $\alpha$, gives variable weights to the operations of any particular architecture, which results in a very noisy estimate of the fitness of that architecture. As a result, the algorithm performs only marginally better than a random algorithm, as discussed in Section~\ref{ablation}. We propose a decoding technique, i.e., a process of giving an equal, higher weight to the operations of the actual architecture/subgraph according to the architecture parameter $\alpha$, and an equal, smaller weight to the operations of all other architectures. This can be thought of as decoding/mapping the genotype, i.e. $\alpha$, to the phenotype, i.e. the actual architecture \cite{eiben2003introduction}. The process has the following two steps: \begin{itemize} \item For any $\alpha$, derive the discrete architecture, $arch_{dis}$, from $\alpha$ as shown in Figure~\ref{fig:arch_represent}(b).
\item On the basis of the discrete architecture, $arch_{dis}$, create another architecture parameter called the \textit{decoded architecture} parameter, $\bar{\alpha}$ (as shown in Figure~\ref{fig:arch_represent}(c)), with the following entries: \begin{equation} \bar{\alpha}^{i,j}_{op} = \begin{cases} k, & \text{if $op$ between node $i$ and $j$ is present in $arch_{dis}$} \\ 0, & \text{otherwise} \\ \end{cases} \end{equation} \end{itemize} where \textit{k} is an integer. This design ensures that the current architecture according to $\alpha$ receives the majority of the gradient information to update its parameters, while the rest of the gradient information is distributed equally among all the other architectures. It also ensures that the weights of one architecture do not become co-dependent on the weights of other architectures due to the weight-sharing nature of the one shot model. Furthermore, it helps improve the estimation of the \textit{fitness} of each architecture in the population, as it gives an equal, higher weight to the operations of that particular architecture while giving an equal, lower weight to the operations of the other architectures. This results in a high contribution from the particular architecture, and only a very small contribution from the other architectures, during its fitness evaluation from the one shot model. This is in contrast to the variable architecture contribution used in the original DARTS paper, wherein an architecture is evaluated using $\alpha$, which results in a very noisy estimate of its performance. We empirically find that $k = 1$ gives a good result and that increasing the value of $k$ beyond 1 tends to deteriorate the accuracy, as shown in Figure \ref{fig:exp}(a).
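The two decoding steps above can be sketched in a few lines of Python. This is a simplified illustration: it applies only the per-edge arg-max, whereas the full DARTS-style discretization additionally keeps only the top-2 incoming edges per intermediate node (as described in the training settings):

```python
def decode_alpha(alpha, k=1):
    """Decode the architecture parameter: on every edge (one row of
    operation weights), the strongest operation (arg-max of alpha)
    gets weight k; all other operations get weight 0."""
    decoded = []
    for edge in alpha:
        best = max(range(len(edge)), key=edge.__getitem__)
        decoded.append([k if op == best else 0 for op in range(len(edge))])
    return decoded
```

Copying the resulting $\bar{\alpha}$ into the one shot model concentrates both the gradient flow and the fitness evaluation on the encoded subgraph.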
\subsection{Training and Performance Estimation} The sharing of the network weights among the architectures in the population, due to the one shot model representation \cite{liu2018darts2}, helps pass information on to the next generation population, whose architectures do not start training from scratch. This can be thought of as the child architecture model inheriting the weights of the parent architecture model, also known as \textit{weight inheritance}. Therefore, instead of fully training each architecture in the population from scratch, EvNAS partially trains the inherited architecture model weights using the training data. This is done by first copying the \textit{decoded architecture} parameter, $\bar{\alpha}$, from Section~\ref{subsec:decode}, for the individual architecture in the population to the one shot model and then training the network on a certain number of batches of training examples. To evaluate the performance of each individual architecture, its \textit{decoded architecture} parameter, $\bar{\alpha}$, from Section~\ref{subsec:decode}, is first copied to the one shot model. The model is then evaluated on the basis of its accuracy on the validation data, which becomes the \textit{fitness} of the architecture. Note that the fitness value of each architecture is a noisy estimate of its true accuracy on the validation data, as the architecture has been trained only partially, on a certain number of training batches, while inheriting its weights from the previous generation. \subsection{Evolutionary Algorithm} The evolutionary algorithm (EA) starts with a population of architectures, whose parameters are sampled from a uniform distribution on the interval $\left[0,1\right)$, and runs for \textit{G} generations. In each generation, the one shot model is trained on the training data by using the decoded architecture parameter $\bar{\alpha}$ of each individual architecture in the population in a round-robin fashion.
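The round-robin training and fitness evaluation can be sketched as follows. Here `train_step` and `accuracy` are hypothetical placeholders for the real routines that copy an architecture's $\bar{\alpha}$ into the one shot model and run one training or validation pass:

```python
def train_and_evaluate(population, num_batches, train_step, accuracy):
    """One generation of shared training followed by fitness evaluation.

    train_step(arch, batch_idx): stand-in for copying the architecture's
    decoded parameter into the one shot model and training on one batch.
    accuracy(arch): stand-in for its validation accuracy (the fitness)."""
    n = len(population)
    for i in range(num_batches):
        train_step(population[i % n], i)   # batch i goes to architecture i mod n
    # fitness of each (partially trained) architecture
    return [accuracy(arch) for arch in population]
```

The round-robin assignment means every architecture receives roughly the same share of the training batches before its noisy fitness is read off.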
Then, the fitness of each individual architecture is estimated using its decoded architecture parameter $\bar{\alpha}$. The population is then evolved using crossover and mutation operations to create the next generation population, replacing the previous generation population. The best architecture in each generation does not undergo any modification and is automatically copied to the next generation. This ensures that the algorithm does not forget the best architecture learned thus far and gives the old generation architecture an opportunity to compete against the new generation architectures; this is known as \textit{elitism}. The best architecture is returned after \textit{G} generations. The entire process is summarized as Algorithm 1 in the supplementary. \textbf{Mutation Operation:} This refers to a random change to an individual architecture in the population. The algorithm uses the \textit{mutation rate} \cite{eiben2003introduction}, which decides the probability of changing the architecture parameter $\alpha^{i,j}$ between node $i$ and node $j$. This is done by re-sampling $\alpha^{i,j}$ from a uniform distribution on the interval $\left[0,1\right)$, as illustrated in Figure~\ref{fig:mutation}. \begin{figure}[h] \begin{center} \includegraphics[width=0.9\linewidth]{mutation.jpg} \end{center} \caption{Illustration of mutation operation} \label{fig:mutation} \end{figure} \begin{figure*}[t] \centering \begin{subfigure}{0.45\linewidth} \includegraphics[width=\linewidth]{normal_cell.png} \caption{} \label{fig:normal} \end{subfigure} \quad \quad \begin{subfigure}{0.45\linewidth} \includegraphics[width=\linewidth]{reduce_cell.png} \caption{} \label{fig:reduce} \end{subfigure} \caption{Discovered cell using EvNAS-A (a) Normal Cell (b) Reduction Cell} \label{discovered_cells} \end{figure*} \textbf{Crossover Operation:} This is a process of combining parent architectures to create a new child architecture, which may perform better than the parents.
EvNAS uses \textit{tournament selection} \cite{eiben2003introduction} for the parent selection process to generate the next generation architecture population. In \textit{tournament selection}, a certain number of architectures are randomly selected from the current population. The two fittest architectures from the selected group become \textit{parent1} and \textit{parent2} and are used to create a single child architecture. This is done by copying the architecture parameters $[{\alpha}^{i,j}]_{parent1}$ and $[{\alpha}^{i,j}]_{parent2}$ between node $i$ and node $j$, from \textit{parent1} and \textit{parent2}, respectively, with a certain probability to the child architecture parameter $[{\alpha}^{i,j}]_{child}$ between node $i$ and node $j$, as illustrated in Figure~\ref{fig:crossover}. This can be formulated as follows: \begin{equation} [{\alpha}^{i,j}]_{child} = \begin{cases} [{\alpha}^{i,j}]_{parent1}, & \text{with probability 0.5} \\ [{\alpha}^{i,j}]_{parent2}, & \text{otherwise} \\ \end{cases} \end{equation} Note that as all the architectures are sub-graphs of the super-graph, i.e., the one shot model, we do not have to keep track of the ancestors in order to apply the crossover operation, as was done in \cite{zhu2019eena}\cite{stanley2002evolving}. \begin{figure}[h] \begin{center} \includegraphics[width=\linewidth]{crossover.jpg} \end{center} \caption{Illustration of crossover operation} \label{fig:crossover} \end{figure} \subsection{Relationship to Prior Works} Weight inheritance was used in \cite{real2017large} during architecture search using evolution, but in the proposed method, the architectures in a given generation both share weights and inherit the weights from the previous generation (i.e., weight inheritance) because of the use of the one shot model.
FairNAS \cite{chu2019fairnas} and NSGANetV2 \cite{lu2020nsganetv2} have also proposed evolutionary search with weight sharing; both split the search into two steps in which the supernet is optimized first. In the second step, FairNAS performs architecture search using an evolutionary method with the trained supernet as the evaluator, while NSGANetV2 uses the weights of the trained supernet to warm-start gradient descent for each architecture during the search. In contrast, our method combines the training and search processes in a single stage. FairNAS and NSGANetV2 also solve the search problem as a multi-objective problem, whereas our method solves it as a single-objective problem. \section{Experiments and Results} \label{experiments} In this section, we report the performance of the proposed algorithm EvNAS in terms of a neural architecture search on the CIFAR-10 dataset \cite{krizhevsky2009learning} and the performance of the found architectures on the CIFAR-100 dataset \cite{krizhevsky2009learning} and the ImageNet dataset \cite{imagenet_cvpr09}. We then present an ablation study showing the importance of the proposed \textit{decoded architecture} parameter, $\bar{\alpha}$, and of the crossover and mutation operations during the search process. \textbf{Initialization:} Each architecture in a population is represented by the architecture parameter $\alpha$, which is sampled from a uniform distribution on the interval $\left[0,1\right)$. \textbf{Search Process:} The search process on CIFAR-10 is divided into three stages, as was done in \cite{liu2018darts2}\cite{li2019random}. In \textit{stage 1}, we perform the search process for a cell block on CIFAR-10 using four different seeds; this can be thought of as the search stage of the algorithm.
In \textit{stage 2}, the best architecture found in each trial of stage 1 is evaluated by retraining a larger network, created using the cell blocks discovered in stage 1, from scratch on CIFAR-10 for 600 epochs. Next, we choose the best performing architecture among the four trials, making this the selection stage of the algorithm. In \textit{stage 3}, we evaluate the best architecture from stage 2 by training the network from scratch with ten different seeds for 600 epochs. This stage can be considered the evaluation stage of the algorithm. \begin{table*}[t] \caption{Comparison of EvNAS with other NAS methods on CIFAR-10 and CIFAR-100 datasets. The first block presents the performance of the hand-crafted architecture. The second block presents the performance of other NAS methods, the third block presents the performance of our method and the last block presents the performance of our ablation study. All architecture searches were performed with cutout. $\dagger$ indicates that the result was reported in \cite{chen2019progressive}.} \label{table:CIFAR10} \centering \begin{tabular}{lcccccc} \hline & \multicolumn{2}{c}{\bf{Test Error (\%)}} & \bf{Params} & \bf{Search Time} &\bf{Search} \\ \bf{Architecture} & \bf{C10} & \bf{C100} & (M) & (GPU Days) & \bf{Method} \\ \hline DenseNet-BC \cite{huang2017densely} & 3.46 & 17.18 & 25.6 & - & manual\\ \hline PNAS \cite{liu2018progressive} &3.41 & - & 3.2 & 225 & SMBO\\ NASNet-A \cite{zoph2018learning} & 2.65 & - & 3.3 &1800& RL\\ ENAS \cite{pmlr-v80-pham18a} & 2.86 & - & 4.6 &0.45& RL\\ DARTS \cite{liu2018darts2} & $2.76\pm0.09$ & $17.54^{\dagger}$ & 3.3 &4& gradient-based\\ SNAS \cite{xie2018snas} & $2.85\pm0.02$ & - & 2.8&1.5& gradient-based\\ PDARTS \cite{chen2019progressive} & $2.50$ & 16.55 & 3.4 & 0.3 & gradient-based\\ AmoebaNet-A\cite{real2019regularized} & $3.34\pm0.06$ & - & 3.2 &3150& evolution\\ AmoebaNet-B\cite{real2019regularized} & $2.55\pm0.05$ & - & 2.8 &3150& evolution\\ EENA \cite{zhu2019eena}
& $2.56$ & 17.71 & 8.47 & 0.65 & evolution\\ Random Search WS\cite{li2019random} & $2.86\pm0.08$ & - & 4.3 &2.7& random\\ \hline EvNAS-A (Ours) & $2.47\pm0.06$ & 16.37 & 3.6 & 4.4 & evolution\\ EvNAS-B (Ours) & $2.62\pm0.06$ & 16.51 & 3.8 & 4.4 & evolution\\ EvNAS-C (Ours) & $2.63\pm0.05$ & 16.86 & 3.4 & 4.4 & evolution\\ \hline EvNAS-Rand (Ours) & $2.84\pm0.08$ & - & 2.7 &0.62& random\\ EvNAS-ND (Ours) & $2.78\pm0.1$ & - & 3.8 & 4.4 & evolution\\ EvNAS-NDF (Ours) & $2.75\pm0.09$ & - & 3.1 & 4.4 & evolution\\ EvNAS-NDT (Ours) & $2.67\pm0.06$ & - & 3.5 & 4.4 & evolution\\ EvNAS-Mut (Ours) & $2.79\pm0.06$ & - & 3.4 & 4.4 & evolution\\ EvNAS-Cross (Ours) & $2.81\pm0.08$ & - & 3.2 & 4.4 & evolution\\ \end{tabular} \end{table*} \subsection{Search on CIFAR-10:} \label{res:cifar10} \textbf{Dataset:} CIFAR-10 \cite{krizhevsky2009learning} has 50,000 training images and 10,000 testing images with a fixed resolution of 32x32. During the architecture search, the training images are divided into two subsets of 25,000 images each; the first subset is used for training the one shot model and the second is the validation data, used for calculating the \textit{fitness} of each architecture in the population. In the selection and evaluation stages, the normal training/testing split is used. \textbf{Search Space:} We follow the setup given in DARTS \cite{liu2018darts2}. The one shot model is created by stacking \textit{normal} cells, with a \textit{reduction} cell inserted at 1/3 and 2/3 of the total depth of the model. Each cell has two input nodes, four intermediate nodes and one output node, resulting in 14 edges among them. The operations considered for the cells are as follows: 3x3 and 5x5 dilated separable convolutions, 3x3 and 5x5 separable convolutions, 3x3 max pooling, 3x3 average pooling, skip connect and zero. Thus, each architecture is represented by two 14x8 matrices, one for the normal cell and one for the reduction cell.
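As a sanity check on these dimensions, the short sketch below (using plain Python lists rather than the actual tensor implementation) derives the 14 edges and initializes one cell's $14 \times 8$ architecture parameter from $U[0,1)$, matching the initialization described above:

```python
import random

NUM_INPUTS, NUM_INTERMEDIATE, NUM_OPS = 2, 4, 8

def num_edges():
    # Intermediate node k can connect to the 2 input nodes plus all
    # earlier intermediate nodes: 2 + 3 + 4 + 5 = 14 edges in total.
    return sum(NUM_INPUTS + k for k in range(NUM_INTERMEDIATE))

def init_alpha():
    """One cell's architecture parameter: a 14x8 matrix of operation
    weights sampled from a uniform distribution on [0, 1)."""
    return [[random.random() for _ in range(NUM_OPS)]
            for _ in range(num_edges())]
```
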
\begin{table*}[t] \caption{Comparison of our method with other image classifiers on ImageNet in mobile setting. The first block presents the performance of the hand-crafted architecture. The second block presents the performance of other NAS methods and the last block presents the performance of our method.} \label{table:imagenet} \centering \begin{tabular}{lcccccc} \hline & \multicolumn{2}{c}{\bf{Test Error (\%)}} & \bf{Params} &+$\times$& \bf{Search Time} &\bf{Search} \\ \bf{Architecture} & \bf{top 1} & \bf{top 5} & (M) & (M) & (GPU Days) & \bf{Method} \\ \hline MobileNet \cite{howard2017mobilenets}&29.4& 10.5 & 4.2 & 569 & - & manual\\ \hline PNAS \cite{liu2018progressive} &25.8& 8.1 & 5.1 & 588 & 225 & SMBO\\ NASNet-A \cite{zoph2018learning} & 26.0 & 8.4 & 5.3 & 564 &1800& RL\\ NASNet-B \cite{zoph2018learning} & 27.2 & 8.7 & 5.3 & 488 &1800& RL\\ NASNet-C \cite{zoph2018learning} & 27.5 & 9.0 & 4.9 & 558 &1800& RL\\ DARTS \cite{liu2018darts2} & 26.7 & 8.7 & 4.7 & 574 &4& gradient-based\\ SNAS \cite{xie2018snas} & 27.3 & 9.2 & 4.3 & 522 &1.5& gradient-based\\ PDARTS \cite{chen2019progressive} & 24.4 & 7.4 & 4.9 & 557 &0.3& gradient-based\\ AmoebaNet-A \cite{real2019regularized} & 25.5 & 8.0 & 5.1 & 555 &3150& evolution\\ AmoebaNet-B \cite{real2019regularized} & 26.0 & 8.5 & 5.3 & 555 &3150& evolution\\ AmoebaNet-C \cite{real2019regularized} & 24.3 & 7.6 & 6.4 & 570 &3150& evolution\\ FairNAS-A \cite{chu2019fairnas} & 24.7 & 7.6 & 4.6 & 388 &12& evolution\\ FairNAS-B \cite{chu2019fairnas} & 24.9 & 7.7 & 4.5 & 345 &12& evolution\\ FairNAS-C \cite{chu2019fairnas} & 25.3 & 7.9 & 4.4 & 321 &12& evolution\\ \hline EvNAS-A (Ours) & 24.4 & 7.4 & 5.1 & 570 & 4.4 & evolution\\ EvNAS-B (Ours) & 24.4 & 7.4 & 5.3 & 599 & 4.4 & evolution\\ EvNAS-C (Ours) & 25.1 & 7.8 & 4.9 & 547 & 4.4 & evolution\\ \end{tabular} \end{table*} \textbf{Training Settings:} The training setting mainly follows the setup proposed by DARTS \cite{liu2018darts2}. 
Because of the high memory requirements of the one shot model, a smaller network, called the \textit{proxy network} \cite{li2019random}, with 8 stacked cells and 16 initial channels, is used during the architecture search process, i.e., \textit{stage 1}. For deriving the discrete architecture, $arch_{dis}$, each node in the discrete architecture is connected to two of the previous nodes, selected via the top-2 operations according to the architecture parameter $\alpha$. During the search process, we use SGD for training the one shot model with a batch size of 64, an initial learning rate of 0.025, momentum of 0.9, and weight decay of $3\times10^{-4}$. The learning rate is annealed down to 0.001 using the cosine annealing schedule without any restart. For our evolutionary algorithm, we use a population size of 50 in each generation, a \textit{mutation rate} of 0.1, and 10 architectures chosen randomly during tournament selection. The search process runs for 50 generations on a single GPU, an NVIDIA 2080 Ti, and takes 4.4 days to complete stage 1. The number of generations was chosen to match the number of epochs in DARTS \cite{liu2018darts2}. The population size was chosen based on experiments in which we ran our method with population sizes of 20, 30 and 50, with the tournament size set to one-fifth of the population size, as shown in Figure \ref{fig:exp}(b). We did not go beyond a population size of 50 as we wanted the search time to remain similar to that of DARTS. The mutation rate was chosen based on experiments in which we ran our method with \textit{mutation rates} of 0.05, 0.1 and 0.15, as shown in Figure \ref{fig:exp}(c). All our architecture searches in Table \ref{table:CIFAR10} are done with cutout \cite{devries2017improved}. \textbf{Architecture Evaluation:} A larger network, called the \textit{proxyless network} \cite{li2019random}, with 20 stacked cells and 36 initial channels, is used during the selection and evaluation stages.
Following DARTS \cite{liu2018darts2}, the proxyless network is trained with a batch size of 96, weight decay of 0.0003, cutout \cite{devries2017improved}, an auxiliary tower with a weight of 0.4, and a path dropout probability of 0.2 for 600 epochs. The same setting is used to train and evaluate the proxyless network on the CIFAR-100 dataset \cite{krizhevsky2009learning}. \textbf{Search Results and Transferability to CIFAR-100:} We perform the architecture search on CIFAR-10 three times with different random number seeds; the results are provided in Table~\ref{table:CIFAR10} as EvNAS-A, EvNAS-B and EvNAS-C, which are then transferred to CIFAR-100. The cells discovered by EvNAS-A are shown in Figure~\ref{discovered_cells}, and those discovered by EvNAS-B and EvNAS-C are given in the supplementary. EvNAS-A evaluates 10K architectures during the search and achieves average test errors of $2.47\pm0.06$ and $16.37$ on CIFAR-10 and CIFAR-100, respectively, with a search time significantly shorter than those of previous evolution-based methods. EENA \cite{zhu2019eena} found a competitive architecture in less search time than EvNAS using evolution, but EvNAS was able to achieve better results on both CIFAR-10 and CIFAR-100 with fewer parameters. \subsection{Architecture Transferability to ImageNet:} \textbf{Architecture Evaluation:} The architecture discovered in the search process on CIFAR-10 is then used to train a network on the ImageNet dataset \cite{imagenet_cvpr09} with 14 cells and 48 initial channels in the mobile setting, where the size of the input images is 224 x 224 and the number of multiply-add operations in the model is restricted to less than 600M. We follow the training settings used by PDARTS \cite{chen2019progressive}. The network is trained from scratch with a batch size of 1024 on 8 NVIDIA V100 GPUs. \textbf{ImageNet Results:} The results of the evaluation on the ImageNet dataset are provided in Table~\ref{table:imagenet}.
The results show that the cell discovered by EvNAS on CIFAR-10 can be successfully transferred to ImageNet, achieving a top-5 error of 7.4\%. Notably, EvNAS is able to achieve better results than the previous state-of-the-art evolution-based methods AmoebaNet \cite{real2019regularized} and FairNAS \cite{chu2019fairnas} while using significantly fewer computational resources. \subsection{Ablation Studies} \label{ablation} To assess the effect of the \textit{decoded architecture} parameter, $\bar{\alpha}$, and of the crossover and mutation operations during the search process, we conduct additional architecture searches: without the \textit{decoded architecture} parameter, $\bar{\alpha}$, with \textit{crossover only}, with \textit{mutation only}, and without \textit{crossover and mutation}. The search results are provided in Table~\ref{table:CIFAR10}. \textbf{Without \textit{Crossover and Mutation}:} Here, a population of 50 architectures is randomly changed after every generation, and in the last generation, the architecture with the best performance on the validation set is chosen as the best found architecture. Thus, the search process evaluates only 200 architectures to come up with the best architecture. The architecture found (listed as \textit{EvNAS-Rand} in Table~\ref{table:CIFAR10}) achieves an average error of $2.84\pm0.08$, as the search behaves as a \textit{random search}, and shows results similar to those reported in \cite{li2019random}. \textbf{Without the \textit{Decoded Architecture} Parameter, $\bar{\alpha}$:} Here, we conduct three architecture searches in which a population of 50 architectures is modified through both crossover and mutation operations without using the decoded architecture parameter, $\bar{\alpha}$, (i) during training (\textit{EvNAS-NDT}), (ii) during the fitness evaluation of each individual architecture in the population (\textit{EvNAS-NDF}) and (iii) during both training and fitness evaluation (\textit{EvNAS-ND}).
The architecture found by \textit{EvNAS-NDT} (listed in Table~\ref{table:CIFAR10}) performs better than that of \textit{EvNAS-NDF}, which shows that the decoded architecture parameter, $\bar{\alpha}$, is more important during the fitness estimation step than during the training step. Also, the architecture found by \textit{EvNAS-ND} performs slightly better than that of the \textit{random search} because of the directional component introduced by the crossover operation. The improvement is due to the fact that the architecture parameter, $\alpha$, allows a varying amount of contribution from other architectures during the fitness estimation of a particular architecture from the one shot model, resulting in a very noisy fitness estimate, whereas the decoded architecture parameter, $\bar{\alpha}$, assigns a higher weight to the current architecture while giving equally small weights to the other architectures. \textbf{With \textit{Mutation Only}:} Here, a population of 50 architectures is modified only through the mutation operation, with a \textit{mutation rate} of 0.1, while using the decoded architecture parameter. The architecture found (listed as \textit{EvNAS-Mut} in Table~\ref{table:CIFAR10}) performs slightly better than that of the \textit{random search} even though mutation is a random process. This improvement can be attributed to \textit{elitism}, which does not let the algorithm forget the best architecture learned thus far. \textbf{With \textit{Crossover Only}:} Here, a population of 50 architectures is modified only through the crossover operation while using the decoded architecture parameter. The architecture found (listed as \textit{EvNAS-Cross} in Table~\ref{table:CIFAR10}) performs slightly better than that of the \textit{random search}.
This improvement can be attributed to the selection pressure \cite{eiben2003introduction} introduced by the tournament selection, which guides the search towards better architecture solutions. The improvements in both \textit{EvNAS-Mut} and \textit{EvNAS-Cross} are modest compared to \textit{EvNAS-Rand} because we use a partially trained network for evaluating the architectures on the validation set, which provides a noisy estimate of their fitness/performance. The ablation study shows that the decoded architecture parameter $\bar{\alpha}$ and the mutation and crossover operations play equally important roles in the search process, with the decoded architecture parameter, $\bar{\alpha}$, playing a more important role during fitness estimation. All the cells discovered in the ablation study are provided in the supplementary. \subsection{Discussion on Evolutionary Search vs Gradient Based Search} The gradient-based methods are highly dependent on the search space, and they tend to overfit to the operation that leads to the fastest gradient descent, namely the \textit{skip-connect} operation, due to its parameter-less nature, leading to a higher number of \textit{skip-connect} operations in the final discovered cell \cite{Zela2020Understanding}. PDARTS \cite{chen2019progressive} uses a regularization method to restrict the number of \textit{skip-connect} operations to a specific number in the final normal cell for the search space used in the original DARTS paper. PDARTS empirically found that the optimal number of \textit{skip-connect} operations in the normal cell is 2; this value is search space dependent, and the optimal number may not be 2 if the search space is changed. This reduces the search space, resulting in a faster search time compared to the original DARTS, which is a gradient-based method without any regularization applied to the search space. Notice that without such regularization to restrict the number of skip-connects to 2, the gradient-based methods, e.g.
DARTS, achieve a similar search time but much worse performance than ours due to the overfitting problem. By contrast, EvNAS does not suffer from this overfitting problem thanks to its stochastic nature, and so it is not dependent on the search space. EvNAS arrives at this optimal solution without any regularization being applied to the search space, as can be seen in the discovered normal cells in Figure \ref{discovered_cells}(a) and in all the figures in the supplementary material. \section{Conclusions and Future Directions} We propose an efficient method of applying a simple genetic algorithm to the neural architecture search problem, with both parameter sharing among the individual architectures and weight inheritance via a one shot model, resulting in decreased computational requirements compared to other evolutionary search methods. A decoding method for the architecture parameter is used to improve the fitness estimation of a partially trained individual architecture from the one shot model. The proposed crossover, together with the tournament selection, provides a direction to an otherwise directionless random search. The proposed algorithm significantly reduces the search time of evolution based architecture search while achieving better results on the CIFAR-10, CIFAR-100 and ImageNet datasets than previous evolutionary algorithms. A possible future direction for improving the performance of the algorithm is to make an age factor part of the architecture, which ensures that old-generation architectures do not die after one generation and can compete against newer-generation architectures. 
{\small \bibliographystyle{ieee_fullname}} \section{Algorithm} \begin{algorithm}[h] \begin{algorithmic} \STATE \textbf{Input:} population \textit{P}, population size \textit{N}, number of generations \textit{G}, number of selected individuals in tournament selection \textit{T}, mutation rate \textit{r}, total number of training batches \textit{B}, one shot model \textit{M} \FOR{\textit{g} = 1, 2, ..., \textit{G} generations} \FOR{\textit{i} = 1,..., \textit{B}} \STATE Copy $\bar{\alpha}[i \mod N]$ generated from $\alpha[i\mod N]$ to \textit{M} and train \textit{M} on training batch[\textit{i}]; \ENDFOR \STATE Calculate the fitness of each individual architecture, $\alpha$, in the population by copying the respective $\bar{\alpha}$ to \textit{M}; \STATE Set Elite, \textit{E} $\gets$ best architecture in \textit{P}; \STATE Copy \textit{E} to the next generation population, \textit{$P_{next}$}; \FOR{\textit{i} = 2,..., \textit{N}} \STATE Select \textit{T} individuals randomly from \textit{P} \{Tournament Selection\} and use the top-2 individuals to create a new architecture using the \textit{crossover} operation for \textit{$P_{next}$}; \FOR{\textit{j} = 1,..., length($\alpha[i]$)} \IF{$uniformRandom(0,1) \leq r$} \STATE Apply the \textit{mutation} operation to the $j^{th}$ component of $\alpha[i]$ in \textit{$P_{next}$}; \ENDIF \ENDFOR \ENDFOR \STATE \textit{$P$} $\gets$ \textit{$P_{next}$}; \ENDFOR \RETURN Elite, \textit{E} \end{algorithmic} \caption{EvNAS} \label{algo:ENAS_WS} \end{algorithm} Here $\alpha[i]$ denotes the $i^{th}$ architecture in the population. 
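The generation loop of the pseudocode above can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the representation of an architecture as a list of per-edge weight rows, the `decode_row` helper (which mimics the decoding of $\alpha$ into $\bar{\alpha}$ by giving the argmax operation a dominant weight and all other operations equally small weights), the `eps` smoothing value, and the helper names are all assumptions made here for illustration.

```python
import random

def decode_row(row, eps=0.1):
    # Assumed decoding of one row of alpha into a row of alpha-bar: the
    # strongest operation keeps a dominant weight, while all remaining
    # operations share equally small weights (eps is illustrative).
    k = len(row)
    out = [eps / (k - 1)] * k
    out[max(range(k), key=row.__getitem__)] = 1.0 - eps
    return out

def tournament(pop, fitness, t):
    # Pick t individuals at random and return the two fittest of them.
    pool = sorted(random.sample(range(len(pop)), t),
                  key=fitness.__getitem__, reverse=True)
    return pop[pool[0]], pop[pool[1]]

def crossover(a, b):
    # Uniform crossover: each alpha row is taken from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(arch, rate, num_ops):
    # With probability `rate`, redraw a row with fresh random weights.
    return [[random.random() for _ in range(num_ops)]
            if random.random() <= rate else row
            for row in arch]

def next_generation(pop, fitness, t=5, rate=0.1, num_ops=8):
    # Elitism: the best architecture survives unchanged; the rest of the
    # next population is produced by tournament selection, crossover and
    # mutation, mirroring the inner loops of the pseudocode.
    elite = pop[max(range(len(pop)), key=fitness.__getitem__)]
    nxt = [elite]
    while len(nxt) < len(pop):
        p1, p2 = tournament(pop, fitness, t)
        nxt.append(mutate(crossover(p1, p2), rate, num_ops))
    return nxt
```

In EvNAS the fitness of each individual comes from copying its decoded parameter $\bar{\alpha}$ into the one shot model and evaluating on the validation set; that step is left abstract here.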
\section{Discovered Cells from EvNAS-B, EvNAS-C and Ablation Study} \begin{figure}[H] \centering \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{normal_B.png} \caption{} \end{subfigure} \quad \quad \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{reduction_B.png} \caption{} \end{subfigure} \caption{Discovered cell in EvNAS-B (a) Normal Cell (b) Reduction Cell.} \label{fig:EvNASB} \end{figure} \begin{figure}[t] \centering \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{normal_C.png} \caption{} \end{subfigure} \quad \quad \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{reduction_C.png} \caption{} \end{subfigure} \caption{Discovered cell in EvNAS-C (a) Normal Cell (b) Reduction Cell.} \label{fig:EvNASC} \end{figure} \begin{figure}[t] \centering \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{random_normal.png} \caption{} \end{subfigure} \quad \quad \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{random_reduce.png} \caption{} \end{subfigure} \caption{Discovered cell using random search (EvNAS-Rand) (a) Normal Cell (b) Reduction Cell.} \label{fig:random_cells} \end{figure} \begin{figure}[h] \centering \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{no_dis_normal.png} \caption{} \end{subfigure} \quad \quad \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{no_dis_reduce.png} \caption{} \end{subfigure} \caption{Discovered cell using EvNAS without decoding architecture $\bar{\alpha}$ during both training and fitness evaluation (EvNAS-ND) (a) Normal Cell (b) Reduction Cell.} \label{fig:no_dis_cells} \end{figure} \begin{figure}[t] \centering \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{normal_EvNAS_NDF.png} \caption{} \end{subfigure} \quad \quad \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{reduction_EvNAS_NDF.png} \caption{} \end{subfigure} \caption{Discovered cell using EvNAS 
without decoding architecture $\bar{\alpha}$ during fitness evaluation (EvNAS-NDF) (a) Normal Cell (b) Reduction Cell.} \label{fig:EvNAS-NDF} \end{figure} \begin{figure}[h] \centering \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{normal_EvNAS_NDT.png} \caption{} \end{subfigure} \quad \quad \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{reduction_EvNAS_NDT.png} \caption{} \end{subfigure} \caption{Discovered cell using EvNAS without decoding architecture $\bar{\alpha}$ during training (EvNAS-NDT) (a) Normal Cell (b) Reduction Cell.} \label{fig:EvNAS-NDT} \end{figure} \begin{figure}[h] \centering \begin{subfigure}{1.0\linewidth} \includegraphics[width=\textwidth]{mutate_normal.png} \caption{} \end{subfigure} \quad \quad \begin{subfigure}{1.0\linewidth} \includegraphics[width=\textwidth]{mutate_reduce.png} \caption{} \end{subfigure} \caption{Discovered cell using EvNAS with mutation only (EvNAS-Mut) (a) Normal Cell (b) Reduction Cell.} \label{fig:mut_cells} \end{figure} \begin{figure}[t] \centering \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{cross_normal.png} \caption{} \end{subfigure} \quad \quad \begin{subfigure}{1.0\linewidth} \includegraphics[width=\linewidth]{cross_reduce.png} \caption{} \end{subfigure} \caption{Discovered cell using EvNAS with crossover only (EvNAS-Cross) (a) Normal Cell (b) Reduction Cell.} \label{fig:cross_cells} \end{figure} \bigskip \end{document}
\section{Introduction} The study of weak solutions of the incompressible Euler equations is motivated by (at least) two aspects of fluid flow: the presence of instabilities, most notably the Kelvin-Helmholtz instability, and fully developed 3-dimensional turbulence. Concerning the latter, an important problem arises in connection with the famous $5/3$ law of Obukhov-Kolmogorov and the conjecture of Onsager regarding energy conservation. We refer to \cite{EyinkSreenivasan,bardostiti2} and \cite{bdlsz,daneri,isett} for more information and recent progress regarding this problem. Concerning the former, it has been the subject of intensive research to define a physically meaningful notion of weak solution that can capture the basic features of such instabilities and be analytically well behaved at the same time. Due to the lack of an analogue of the existence theorem for Leray-Hopf weak solutions of the Navier-Stokes equations, several weaker notions have been considered. Dissipative solutions of the incompressible Euler equations were introduced by P.-L. Lions \cite{lions} as a concept of solution with two desirable properties: (i) existence for arbitrary initial data, and (ii) weak-strong uniqueness, meaning that a dissipative weak solution agrees with the strong solution as long as the latter exists. Dissipative solutions have been shown to arise, among others, as viscosity \cite{lions} or hydrodynamic \cite{saintraymond} limits of the incompressible Euler equations. The major drawback of dissipative solutions is that, in general, the velocity field does not solve the Euler equations in the sense of distributions. Weak solutions (i.e. distributional solutions with some additional properties), on the other hand, have been constructed by various techniques, see \cite{scheffer,shnirel1,shnirel2,euler1,euler2,euleryoung,eulerexistence,szlecturenotes}. 
Many of these results come with a high level of non-uniqueness, even violating the weak-strong uniqueness property - we refer to the survey \cite{hprinciple}. In particular, in \cite{eulerexistence} the existence of global in time weak solutions was shown for arbitrary initial data. Due to the high level of non-uniqueness, a natural question is whether there are any selection criteria among weak solutions. In this regard, it has been noted in \cite{DuchonRobert,euler2} that, in the absence of boundaries, a weak solution is dissipative in the sense of Lions, provided the weak energy inequality \begin{equation}\label{ei} \int |v(x,t)|^2\,dx\leq \int |v(x,0)|^2\,dx\qquad\textrm{ for almost every $t>0$} \end{equation} holds. In \cite{euler2} this condition is referred to as an admissibility condition, in analogy with the entropy condition used in hyperbolic conservation laws \cite{DafermosBook}. Admissibility turned out to be a useful selection criterion among weak solutions, since already in the weak form in \eqref{ei} it implies the weak-strong uniqueness property of dissipative solutions (stronger versions of the energy inequality are discussed in \cite{euler2}). This is even the case not just for distributional solutions but also for measure-valued solutions, see \cite{brenierdelellissz}. Despite the weak-strong uniqueness property, there exists a large, in fact $L^2$ dense, set of initial data on the whole space or with periodic boundary conditions \cite{euleryoung} (see also \cite{szlecturenotes}), for which the initial value problem admits infinitely many {\it admissible} weak solutions. Such initial data, called ``wild initial data'', necessarily has to be irregular. The non-uniqueness of admissible weak solutions is intimately related to the presence of instabilities. 
For instance, in \cite{vortexpaper} the non-uniqueness of admissible weak solutions was shown for the flat vortex sheet initial data \begin{equation}\label{e:flat} v_0(x)=\begin{cases} e_1 & \text{if $x_d\in(0,\frac{1}{2})$}\\ -e_1 & \text{if $x_d\in(-\frac{1}{2},0)$,} \end{cases} \end{equation} extended periodically to the torus $\mathbb{T}^d$. Note that the stationary vector field is an obvious solution in this case, but the statement in \cite{vortexpaper} is that there exist infinitely many non-stationary solutions. A common feature in these solutions is that for time $t>0$ they exhibit an expanding ``turbulent'' region around the initial vortex sheet, much akin to the propagation of the singularity in the classical Kelvin-Helmholtz problem. Further examples of this nature appeared in \cite{shearflow} and recently in \cite{isentropic} for the compressible Euler system. Motivated by the idea that it is the underlying Kelvin-Helmholtz instability that is responsible for the non-uniqueness of admissible weak solutions, we study in this note the case of domains with boundary. We show that the presence of a (smooth) boundary can lead to the same effect of an expanding turbulent region as in \cite{vortexpaper}. As a corollary, we observe that admissibility does not imply the weak-strong uniqueness property in domains with boundary. \section{Statement of the main results} \subsection{Formulation of the equations} We study weak solutions of the initial and boundary value problem for the incompressible Euler equations \begin{equation}\label{euler} \begin{aligned} \partial_tv+v\cdot\nabla v+\nabla p&=0\\ \operatorname{div}v&=0\\ v|_{t=0}&=v_0 \end{aligned} \end{equation} complemented with the usual kinematic boundary condition $$ v|_{\partial\Omega}\cdot\nu=0. 
$$ Here, $\Omega\subset\mathbb R^d$, $d\geq 2$, is a domain with sufficiently smooth boundary, $T>0$ a finite time, $v:\Omega\times[0,T)\rightarrow\mathbb R^d$ the velocity field, $p:\Omega\times(0,T)\rightarrow\mathbb R$ the scalar pressure, $v_0$ the initial velocity and $\nu$ the inner unit normal to the boundary of $\Omega$. In order to give the precise definition of weak solutions, consider the space of solenoidal vectorfields on $\Omega$ (cf. Chapter III of \cite{galdibook}), \begin{equation*} \begin{aligned} H(\Omega)=\big\{v\in L^2&(\Omega;\mathbb R^d):\int_{\Omega}v\cdot\nabla p dx=0\\ &\text{for every $p\in W^{1,2}_{loc}(\Omega)$ such that $\nabla p\in L^2(\Omega)$}\big\}. \end{aligned} \end{equation*} Let $v_0\in H(\Omega)$. An {\it admissible weak solution} of (\ref{euler}) with initial data $v_0$ is defined to be a vectorfield $v\in L^{\infty}(0,T;H(\Omega))$ such that for every test function $\phi\in C_c^{\infty}(\Omega\times[0,T);\mathbb R^2)$ with $\operatorname{div}\phi=0$, we have \begin{equation*} \int_0^T\int_{\Omega}\left(\partial_t\phi\cdot v+\nabla\phi:v\otimes v\right)dxdt+\int_{\Omega}v_0(x)\cdot\phi(x,0)dx=0, \end{equation*} and the energy inequality \eqref{ei} holds. We remark in passing that in fact one may assume that admissible weak solutions are in the space $C([0,T);H_w(\Omega))$, where $H_w(\Omega)$ is the space $H(\Omega)$ equipped with the weak $L^2$-topology. Indeed, dissipative solutions of Lions are also defined in this space. Nevertheless, for simplicity we will just treat the velocity fields as elements in the larger space $L^{\infty}(0,T;H(\Omega))$. \subsection{Rotationally symmetric data} In the present paper, we consider rotationally symmetric initial data in two dimensions. It should be noted that the restriction to 2 dimensions is purely for simplicity of presentation - the constructions and the methods can be easily extended to higher dimensions. 
Similarly, we will consider as domain an annulus purely for simplicity of presentation - the nontrivial topology of the domain does not play a role in our results. By ``rotational'' we mean initial data of the form \begin{equation}\label{rotational} v_0(x)=\alpha_0(r)(\sin\theta,-\cos\theta) \end{equation} on an annulus \begin{equation}\label{annulus} \Omega=\{x\in\mathbb R^2: \rho<|x|<R\}, \end{equation} where $0<\rho<R<\infty$. Vector fields as in (\ref{rotational}) are known to define stationary solutions to the Euler equations regardless of the choice of $\alpha_0$, and are frequently used as explicit examples in the study of incompressible flows \cite{amick, shnirel3, bertozzimajda}. Fix a radius $r_0$ with $\rho<r_0<R$ and consider the initial data on the annulus given by \eqref{rotational} with \begin{equation}\label{initial} \alpha_0(r)=\begin{cases}-\frac{1}{r^2} &\text{if $\rho<r<r_0$}\\ \frac{1}{r^2} &\text{if $r_0<r<R$}, \end{cases} \end{equation} which corresponds to a rotational flow with a jump discontinuity on the circle $\{r=r_0\}$. \begin{thm}\label{wildrotation} Let $\Omega$ be an annulus as in (\ref{annulus}), $T>0$ a finite time, and $v_0$ be rotational as in (\ref{rotational}) and (\ref{initial}). Apart from the stationary solution $v(\cdot,t)=v_0$, there exist infinitely many non-stationary admissible weak solutions of the Euler equations on $\Omega\times(0,T)$ with initial data $v_0$. Among these, infinitely many have strictly decreasing energy, and infinitely many conserve the energy. \end{thm} Our proof, given in Section \ref{nonuniqueness} below, relies on the techniques from \cite{euler2} and is similar to the construction in \cite{vortexpaper}. Regarding the quest for suitable selection principles, a much-discussed criterion is the viscosity solution, defined to be a solution obtained as a weak limit of Leray-Hopf solutions as viscosity converges to zero. 
In the case of the initial data in \eqref{e:flat} it is an easy exercise (see for instance \cite{shearflow}) to show that the viscosity solution agrees with the stationary solution. In the rotational case \eqref{initial} the same is true, as we show in Section \ref{viscosity} below: \begin{prop}\label{introuniqueness} Let $\Omega\subset\mathbb R^2$ be an annulus and let initial data be given by (\ref{rotational}). Then every sequence of Leray-Hopf solutions of the Navier-Stokes equations with viscosities tending to zero which correspond to this initial data will converge strongly to the stationary solution $v(\cdot,t)=v_0$ of the Euler equations. \end{prop} Finally, we discuss the relation between admissible weak solutions and dissipative solutions of Lions in bounded domains. For the convenience of the reader we recall in Section \ref{dissipative} the precise definition of dissipative solutions. As a corollary to Theorem \ref{wildrotation} we show in Section \ref{dissipative} that, contrary to the case without boundaries, admissible weak solutions need not be dissipative: \begin{cor}\label{cor} On $\Omega$ there exist admissible weak solutions which are not dissipative solutions. \end{cor} Corollary \ref{cor} says that in the presence of boundary the weak-strong uniqueness might fail for admissible weak solutions. On the technical level the explanation for this lies in the observation that the notion of strong solution in a bounded domain does not allow any control of the boundary behaviour. Therefore in Section \ref{holder} we study what happens when additional boundary control is available: \begin{thm}\label{criterion} Let $\Omega\subset\mathbb R^2$ be a bounded domain with $C^2$ boundary. 
Suppose $v$ is an admissible weak solution of (\ref{euler}) on $\Omega$ for which there exist $\delta>0$ and $\alpha>0$ such that $v$ is H\"{o}lder continuous with exponent $\alpha$ on the set \begin{equation*} \Gamma_{\delta}=\left\{x\in\overline{\Omega}:\operatorname{dist}(x,\partial\Omega)<\delta\right\}, \end{equation*} uniformly in $t$. Then $v$ is a dissipative solution. \end{thm} \section{Subsolutions and convex integration}\label{s:subsolution} In order to prove Theorem \ref{wildrotation} we recall the basic framework developed in \cite{euler1,euler2}, with slight modifications to accommodate domains with boundary. For further details we refer to the survey \cite{hprinciple} and the recent lecture notes \cite{szlecturenotes}. To start with, recall the definition of subsolution. To this end let us fix a non-negative function $$ \overline{e}\in L^\infty(0,T;L^1(\Omega)), $$ which will play the role of the (kinetic) energy density. We will work in the space-time domain $$ \Omega_T:=\Omega\times (0,T), $$ where $\Omega\subset\mathbb R^d$ is either an open domain with Lipschitz boundary or $\Omega=\mathbb{T}^d$. \begin{defn}[Subsolution]\label{d:subsolution} A subsolution to the incompressible Euler equations with respect to the kinetic energy density $\overline{e}$ is a triple $$ (\bar{v},\bar{u},\bar{q}):\Omega_T\to \mathbb R^d\times\mathcal{S}^{d\times d}_0\times\mathbb R $$ with $\bar{v}\in L^\infty(0,T;H(\Omega)),\,\bar{u}\in L^1_{loc}(\Omega_T),\, \bar{q}\in \mathcal{D}'(\Omega_T)$, such that \begin{equation}\label{e:LR} \left\{\begin{array}{l} \partial_t \bar{v}+\mathrm{div }\bar{u}+\nabla \bar{q} =0\\ \mathrm{div }\bar{v} =0, \end{array}\right. \qquad \mbox{in the sense of distributions;} \end{equation} and moreover \begin{equation}\label{e:CR} \bar{v}\otimes \bar{v}-\bar{u}\leq \tfrac{2}{d}\overline{e}\,I\quad\textrm{ a.e. 
$(x,t)$.} \end{equation} \end{defn} Here $\mathcal{S}^{d\times d}_0$ denotes the set of symmetric traceless $d\times d$ matrices and $I$ is the identity matrix. Observe that subsolutions automatically satisfy $\tfrac{1}{2}|\bar{v}|^2\leq \overline{e}$ a.e. If in addition \eqref{e:CR} is an equality a.e. then $\bar{v}$ is a weak solution of the Euler equations. A convenient way to express the inequality \eqref{e:CR} is obtained by introducing the {\it generalized energy density} \begin{equation*} e(\bar{v},\bar{u})=\frac{d}{2}|\bar{v}\otimes \bar{v}-\bar{u}|_{\infty}, \end{equation*} where $|\cdot|_{\infty}$ is the operator norm of the matrix ($=$ the largest eigenvalue for symmetric matrices). The inequality \eqref{e:CR} can then be equivalently written as \begin{equation}\label{e:CREuler1} e(\bar{v},\bar{u})\leq \bar{e}\textrm{ a.e. } \end{equation} The key point of convex integration is that a {\em strict inequality} instead of \eqref{e:CR} gives enough room so that high-frequency oscillations can be ``added'' on top of the subsolution -- of course in a highly non-unique way -- so that one obtains weak solutions. It is important also to note that, since in the process of convex integration only compactly supported (in space-time) perturbations are added to the subsolution, the boundary and initial conditions of the weak solutions so obtained agree with the corresponding data of the subsolution. This is the content of the following theorem, which is essentially Proposition 2 from \cite{euler2}. \begin{thm}[Subsolution criterion]\label{t:criterion} Let $\overline{e}\in L^{\infty}(\Omega_T)$ and $(\overline{v},\overline{u},\overline{q})$ be a subsolution. 
Furthermore, let $\mathcal{U}\subset\Omega_T$ a subdomain such that $(\overline{v},\overline{u},\overline{q})$ and $\overline{e}$ are continuous on $\mathcal{U}$ and \begin{equation}\label{e:strict} \begin{split} e(\overline{v},\overline{u})&<\overline{e}\qquad \textrm{ on }\mathcal{U}\\ e(\overline{v},\overline{u})&=\overline{e}\qquad \textrm{ a.e. }\Omega_T\setminus \mathcal{U} \end{split} \end{equation} Then there exist infinitely many weak solutions $v\in L^{\infty}(0,T;H(\Omega))$ of the Euler equations such that \begin{align*} v&=\overline{v} \qquad \textrm{ a.e. $\Omega_T\setminus \mathcal{U}$,}\\ \tfrac{1}{2}|v|^2&=\overline{e} \qquad\textrm{ a.e. $\Omega_T$,}\\ p&=\overline{q}-\tfrac{2}{d}\overline{e} \qquad\textrm{ a.e. $\Omega_T$}. \end{align*} If in addition \begin{equation}\label{e:initialdatum} \overline{v}(\cdot,t)\rightharpoonup v_0(\cdot)\textrm{ in }L^2(\Omega)\textrm{ as }t\to 0 , \end{equation} then $v$ solves the Cauchy problem \eqref{euler}. \end{thm} We also refer to \cite{szlecturenotes}, where a detailed discussion of the convex integration technique can be found - in particular the above theorem is Theorem 7 of \cite{szlecturenotes}. \section{Non-Uniqueness for Rotational Initial Data}\label{nonuniqueness} In this section we wish to apply the framework of Section \ref{s:subsolution} to prove Theorem \ref{wildrotation}. Thus, we set $$ \Omega:=\{x\in\mathbb R^2:\,\rho<|x|<R\} $$ to be an annulus, fix $r_0\in (\rho,R)$ and set \begin{equation}\label{e:v0} v_0(x)=\begin{cases}-\frac{1}{|x|^3}x^\perp&|x|<r_0,\\ \frac{1}{|x|^3}x^{\perp}&|x|>r_0,\end{cases} \end{equation} where $x^\perp=\begin{pmatrix}x_2\\-x_1\end{pmatrix}$. We will construct subsolutions by a similar method as in \cite{vortexpaper}. Owing to Theorem \ref{t:criterion} of the previous section, it suffices to show the existence of certain subsolutions. 
We fix two small constants $\lambda>0$ ("turbulent propagation speed") and $\epsilon\geq 0$ ("energy dissipation rate"), to be determined later. We look for subsolutions $(\bar{v},\bar{u},\bar{q})$ (c.f. Definition \ref{d:subsolution} - the energy density function $\bar{e}$ is still to be fixed) of the form \begin{equation*} \bar{v}(x,t)=\alpha(r,t)\begin{pmatrix}\sin\theta\\-\cos\theta\end{pmatrix}, \end{equation*} where $\alpha(r,0)=\alpha_0(r)$ and $(r,\theta)$ denotes polar coordinates on $\mathbb R^2$, \begin{equation}\label{defu} \begin{aligned} \bar{u}(x,t)&=\left(\begin{array}{cc}\cos\theta & \sin\theta\\ \sin\theta & -\cos\theta\end{array}\right)\left(\begin{array}{cc}\beta(r,t) & \gamma(r,t)\\ \gamma(r,t) & -\beta(r,t)\end{array}\right)\left(\begin{array}{cc}\cos\theta & \sin\theta\\ \sin\theta & -\cos\theta\end{array}\right)\\ &=\left(\begin{array}{cc}\beta\cos(2\theta)+\gamma\sin(2\theta) & \beta\sin(2\theta)-\gamma\cos(2\theta)\\ \beta\sin(2\theta)-\gamma\cos(2\theta) & -\beta\cos(2\theta)-\gamma\sin(2\theta)\end{array}\right), \end{aligned} \end{equation} and \begin{equation*} \bar{q}=\bar{q}(r). \end{equation*} As a side remark, note that the choice $\alpha(r,t)=\alpha_0(r)$ for all $t\geq0$, $\beta=-\frac{1}{2}\alpha^2$, $\gamma=0$, and \begin{equation}\label{pressure} \bar{q}(r)=\frac{1}{2}\alpha^2+\int_{\rho}^r\frac{\alpha(s)^2}{s}ds \end{equation} yields the well-known stationary solution (the integral in the formula for $\bar{q}$ represents the physical pressure). We insert this ansatz into \eqref{e:LR} to arrive at two equations. 
More precisely, using the formulas $\nabla_xr=\begin{pmatrix}\cos\theta\\ \sin\theta\end{pmatrix}$ and $\nabla_x\theta=\frac{1}{r}\begin{pmatrix}-\sin\theta\\ \cos\theta\end{pmatrix}$, we obtain \begin{equation*} \begin{aligned} \partial_t\alpha\sin\theta &+\partial_r\beta\left[\cos\theta\cos(2\theta)+\sin\theta\sin(2\theta)\right]+\partial_r\gamma\left[\cos\theta\sin(2\theta)-\sin\theta\cos(2\theta)\right]\\ &+\frac{2}{r}\beta\left[\sin\theta\sin(2\theta)+\cos\theta\cos(2\theta)\right]+\frac{2}{r}\gamma\left[-\sin\theta\cos(2\theta)+\cos\theta\sin(2\theta)\right]\\ &+\partial_r\bar{q}\cos\theta=0 \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} -\partial_t\alpha\cos\theta &+\partial_r\beta\left[\cos\theta\sin(2\theta)-\sin\theta\cos(2\theta)\right]+\partial_r\gamma\left[-\cos\theta\cos(2\theta)-\sin\theta\sin(2\theta)\right]\\ &+\frac{2}{r}\beta\left[-\sin\theta\cos(2\theta)+\cos\theta\sin(2\theta)\right]+\frac{2}{r}\gamma\left[-\sin\theta\sin(2\theta)-\cos\theta\cos(2\theta)\right]\\ &+\partial_r\bar{q}\sin\theta=0. \end{aligned} \end{equation*} If we multiply the first equation by $\sin\theta$ and add it to the second one multiplied by $\cos\theta$, use the identities $\cos^2\theta-\sin^2\theta=\cos(2\theta)$ and $2\sin\theta\cos\theta=\sin(2\theta)$, and then separate by terms involving $\sin(2\theta)$ and $\cos(2\theta)$, respectively, we will eventually get the two equations \begin{equation}\label{preburgers} \begin{aligned} \partial_r\beta+\frac{2}{r}\beta+\partial_r\bar{q}&=0\\ \partial_t\alpha+\partial_r\gamma+\frac{2}{r}\gamma&=0. \end{aligned} \end{equation} It can be easily verified that these equations are equivalent to the original system \eqref{e:LR} for our ansatz. If we set $\bar{q}(r)$ as in (\ref{pressure}) and $\beta=-\frac{1}{2}\alpha^2$, the first equation will be satisfied, in nice analogy with \cite{vortexpaper} (up to a sign). 
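For the reader's convenience (this check is implicit in the text): with $\beta=-\tfrac{1}{2}\alpha^2$ and $\bar{q}$ as in (\ref{pressure}), so that $\partial_r\bar{q}=\alpha\,\partial_r\alpha+\frac{\alpha^2}{r}$, the first equation of (\ref{preburgers}) is indeed satisfied pointwise in $t$:

```latex
\partial_r\beta+\frac{2}{r}\beta+\partial_r\bar{q}
  =-\alpha\,\partial_r\alpha-\frac{\alpha^2}{r}
   +\alpha\,\partial_r\alpha+\frac{\alpha^2}{r}=0.
```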
Also, the second equation is similar to \cite{vortexpaper}, but it involves the additional ``centrifugal'' term $\frac{2}{r}\gamma$. Therefore, we cannot simply set $\gamma=\frac{1}{2}\alpha^2$ as in \cite{vortexpaper} to obtain Burgers' equation. However, observing that $\partial_r(r^2\gamma)=2r\gamma+r^2\partial_r\gamma$, we set $$ \alpha(r,t)=\frac{1}{r^2}f(r,t) $$ and \begin{equation}\label{gamma} \gamma=-\frac{\lambda}{2r^2}(1-f^2)=-\frac{\lambda}{2}\left(\frac{1}{r^2}-r^2\alpha^2\right), \end{equation} so that the second equation in (\ref{preburgers}), after multiplication by $r^2$, turns into Burgers' equation \begin{equation}\label{burgers} \partial_tf+\frac{\lambda}{2}\partial_r(f^2)=0. \end{equation} The initial data (\ref{initial}) for $\alpha$ then corresponds to \begin{equation*} f(r,0)=\begin{cases}-1 &\text{if $\rho<r<r_0$}\\ 1 &\text{if $r_0<r<R$}. \end{cases} \end{equation*} Then, for this data, Burgers' equation (\ref{burgers}) has a rarefaction wave solution for $t\in[0,T]$, provided $\lambda>0$ is sufficiently small (depending on $T$ and $\rho<r_0<R$), which can be explicitly written as \begin{equation}\label{entropy} f(r,t)=\begin{cases}-1 &\text{if $\rho<r<r_0-\lambda t$}\\ \frac{r-r_0}{\lambda t} &\text{if $r_0-\lambda t<r<r_0+\lambda t$}\\ 1 &\text{if $r_0+\lambda t<r<R$.} \end{cases} \end{equation} Therefore, by setting $\alpha(r,t)=\frac{1}{r^2}f(r,t)$ for $f$ as in (\ref{entropy}), $\beta=-\frac{1}{2}\alpha^2$, $\gamma$ as in (\ref{gamma}), and $\bar{q}$ as in (\ref{pressure}), we obtain a solution of the equations \eqref{e:LR} with initial data corresponding to \eqref{e:v0}. It remains to study the generalized energy. 
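Before doing so, let us record the elementary check that the middle branch of (\ref{entropy}) indeed solves (\ref{burgers}) in the fan region $r_0-\lambda t<r<r_0+\lambda t$:

```latex
\partial_t f=-\frac{r-r_0}{\lambda t^2},
\qquad
\frac{\lambda}{2}\,\partial_r\!\left(f^2\right)
  =\lambda f\,\partial_r f
  =\lambda\cdot\frac{r-r_0}{\lambda t}\cdot\frac{1}{\lambda t}
  =\frac{r-r_0}{\lambda t^2},
```

so the two terms cancel; the outer branches are constant, and $f$ is continuous across the edges of the fan.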
Since $\bar{u}$ is given by (\ref{defu}) and moreover \begin{equation*} \begin{aligned} \bar{v}\otimes \bar{v}&=\alpha(r,t)^2\left(\begin{array}{cc}\sin^2\theta & -\sin\theta\cos\theta\\ -\sin\theta\cos\theta & \cos^2\theta\end{array}\right)\\ &=\left(\begin{array}{cc}\cos\theta & \sin\theta\\ \sin\theta & -\cos\theta\end{array}\right)\left(\begin{array}{cc}0 & 0\\ 0 & \alpha(r,t)^2\end{array}\right)\left(\begin{array}{cc}\cos\theta & \sin\theta\\ \sin\theta & -\cos\theta\end{array}\right), \end{aligned} \end{equation*} and since the eigenvalues of a matrix are invariant under conjugation by an orthogonal transformation, in order to determine $e(\bar{v},\bar{u})=|\bar{v}\otimes \bar{v}-\bar{u}|_{\infty}$ it suffices to find the largest eigenvalue of \begin{equation*} \left(\begin{array}{cc}-\beta & -\gamma\\ -\gamma & \alpha^2+\beta \end{array}\right)=\left(\begin{array}{cc}\frac{1}{2}\alpha^2 & \frac{\lambda}{2}\left(\frac{1}{r^2}-r^2\alpha^2\right)\\ \frac{\lambda}{2}\left(\frac{1}{r^2}-r^2\alpha^2\right) & \frac{1}{2}\alpha^2 \end{array}\right). \end{equation*} It is easily calculated, taking into account $|\alpha|\leq\frac{1}{r^2}$ and $\lambda\geq0$, that \begin{equation}\label{genenergy} \begin{aligned} e(\bar{v},\bar{u})&=\frac{1}{2}\alpha^2+\frac{\lambda}{2}\left(\frac{1}{r^2}-r^2\alpha^2\right)\\ &=\frac{1}{2r^4}\left[1-(1-r^2\lambda)\left(1-f(r,t)^2\right)\right]. \end{aligned} \end{equation} Finally, we set \begin{equation*} \bar{e}(r,t)=\frac{1}{2r^4}\left[1-\epsilon(1-r^2\lambda)\left(1-f(r,t)^2\right)\right]\,, \end{equation*} where $\epsilon$ is sufficiently small so that $\bar{e}>0$. Observe that $$ e(\bar{v},\bar{u})\leq\bar{e}\leq\frac{1}{2}|v_0|^2\qquad\textrm{ in }\Omega_T. 
$$ More precisely, we have the following result, summarizing the calculations in this section: \begin{prop}\label{p:subsol} For any choice of constants $\epsilon,\lambda$ satisfying \begin{align*} 0&<\lambda<\min\left\{\frac{1}{R^2},\frac{r_0-\rho}{T},\frac{R-r_0}{T}\right\},\\ 0&\leq \epsilon< \frac{1}{1-\rho^2\lambda}\, \end{align*} there exists a subsolution $(\bar{v},\bar{u},\bar{q})$ in $\Omega_T$ with respect to the kinetic energy density \begin{equation*} \bar{e}(r,t)=\frac{1}{2r^4}\left[1-\epsilon(1-r^2\lambda)\left(1-f(r,t)^2\right)\right] \end{equation*} and with initial data $\bar{v}(x,0)=v_0(x)$ from \eqref{e:v0}, such that, with \begin{equation*} \mathcal{U}:=\left\{x\in\mathbb R^2:\,r_0-\lambda t<|x|<r_0+\lambda t\right\} \end{equation*} we have \begin{align*} &e(\bar{v},\bar{u})<\bar{e}\qquad\textrm{ in }\mathcal{U},\\ &e(\bar{v},\bar{u})=\bar{e}\qquad\textrm{ in }\Omega_T\setminus\mathcal{U}. \end{align*} \end{prop} We can now conclude with the proof of Theorem \ref{wildrotation}. \begin{proof}[Proof of Theorem \ref{wildrotation}] We apply Proposition \ref{p:subsol} above with $\epsilon\geq 0$ to obtain a subsolution $(\bar{v},\bar{u},\bar{q})$. According to Theorem \ref{t:criterion} with this subsolution, there exist infinitely many weak solutions $v\in L^{\infty}(0,T;H(\Omega))$ such that $|v|^2=2\bar{e}$ almost everywhere in $\Omega_T$ and with initial data $v_0$. To check that these are admissible, observe that $$ \int_{\Omega}|v(x,t)|^2\,dx=\int_{\Omega}2\bar{e}(x,t)\,dx\leq \int_{\Omega}\frac{1}{|x|^4}\,dx=\int_{\Omega}|v_0(x)|^2\,dx. $$ Finally, observe that we obtain strictly energy-decreasing solutions by choosing $\epsilon>0$ and energy-conserving solutions for $\epsilon=0$. 
\end{proof} \section{Uniqueness of the Viscosity Limit}\label{viscosity} \begin{proof}[Proof of Proposition \ref{introuniqueness}] Consider the Navier-Stokes equations with viscosity $\epsilon>0$: \begin{equation}\label{navierstokes} \begin{aligned} \partial_tv_{\epsilon}+v_{\epsilon}\cdot\nabla v_{\epsilon}+\nabla p_{\epsilon}&=\epsilon\Delta v_{\epsilon}\\ \operatorname{div}v_{\epsilon}&=0\\ v_{\epsilon}(\cdot,0)&=v_0\\ v_{\epsilon}|_{\partial\Omega}&=0. \end{aligned} \end{equation} It is known that the Navier-Stokes equations in two space dimensions admit a unique weak solution (the Leray-Hopf solution) which satisfies the energy equality \begin{equation*} \frac{1}{2}\int_{\Omega}|v_{\epsilon}(x,t)|^2dx+\epsilon\int_0^t\int_{\Omega}|\nabla v_{\epsilon}(x,s)|^2dxds=\frac{1}{2}\int_{\Omega}|v_0(x)|^2dx \end{equation*} for every $t\in[0,T]$, see e.g. \cite{galdi} for details. It turns out that if the initial data $v_0$ has the rotational symmetry in \eqref{rotational}, then the (unique) Leray-Hopf solution will have the same symmetry. To show this, we take the ansatz \begin{equation}\label{ansatz} v_{\epsilon}(x,t)=\alpha_{\epsilon}(r,t)\begin{pmatrix}\sin\theta\\-\cos\theta\end{pmatrix} \end{equation} and $p_{\epsilon}=p_{\epsilon}(r)$, again using polar coordinates. Insertion of this ansatz into the first equation of (\ref{navierstokes}) yields \begin{equation*} \begin{aligned} \partial_t\alpha_{\epsilon}\sin\theta&-\frac{\alpha_{\epsilon}^2}{r}\cos\theta+\partial_rp_{\epsilon}\cos\theta\\ &=\epsilon\left(\frac{\partial_r\alpha_{\epsilon}}{r}+\partial_r^2\alpha_{\epsilon}-\frac{\alpha_{\epsilon}}{r^2}\right)\sin\theta\,. 
\end{aligned} \end{equation*} If we choose \begin{equation*} p_{\epsilon}(r)=\int_{\rho}^r\frac{\alpha_{\epsilon}(s)^2}{s}ds \end{equation*} and divide by $\sin\theta$, we end up with the parabolic equation \begin{equation}\label{parabolic} \partial_t\alpha_{\epsilon}=\epsilon\left(\frac{\partial_r\alpha_{\epsilon}}{r}+\partial_r^2\alpha_{\epsilon}-\frac{\alpha_{\epsilon}}{r^2}\right). \end{equation} Insertion of our ansatz into the second component of the momentum equation in (\ref{navierstokes}) also gives (\ref{parabolic}), as one can easily check by a similar computation. Moreover, the divergence-free condition is automatically satisfied, the initial condition becomes \begin{equation}\label{initialparabolic} \alpha_{\epsilon}(\cdot,0)=\alpha_0 \end{equation} with $\alpha_0$ defined by (\ref{initial}), and the boundary condition translates into \begin{equation}\label{boundaryparabolic} \alpha_{\epsilon}(\rho)=\alpha_{\epsilon}(R)=0. \end{equation} Thus we obtain the well-posed parabolic initial and boundary value problem (\ref{parabolic}), (\ref{initialparabolic}), (\ref{boundaryparabolic}). By well-known results (cf. e.g. \cite{evans}, Section 7.1), this parabolic problem admits, for each $\epsilon>0$, a unique weak solution. But our calculations so far show that, if $\alpha_{\epsilon}$ is a solution to the parabolic problem, then the corresponding $v_{\epsilon}$ defined by (\ref{ansatz}) is the (unique) Leray-Hopf solution of the Navier-Stokes problem (\ref{navierstokes}), and at the same time it satisfies the initial and boundary value problem for the heat equation: \begin{equation*} \begin{aligned} \partial_tv_{\epsilon}&=\epsilon\Delta v_{\epsilon}\\ \operatorname{div}v_{\epsilon}&=0\\ v_{\epsilon}(\cdot,0)&=v_0\\ v_{\epsilon}|_{\partial\Omega}&=0.
\end{aligned} \end{equation*} Since, as $\epsilon\to0$, the solutions of the heat equation converge strongly to the stationary solution $v_0$, and since we have shown that for our particular initial data the Navier-Stokes evolution coincides with the heat flow, the proposition is proved. \end{proof} \begin{rem} The previous discussion can be extended to initial data on a cylinder of the form $Z=\Omega\times\mathbb{T}\subset\mathbb R^2\times\mathbb{T}$, where $\Omega\subset\mathbb R^2$ is still the annulus. Indeed, for so-called $2\frac{1}{2}$-dimensional initial data $V_0(x_1,x_2)=(v_0(x_1,x_2),w(x_1,x_2))$ on $Z$, where $v_0$ is as in (\ref{rotational}), there may exist infinitely many admissible weak solutions, but only the solution given by \begin{equation*} V(x_1,x_2,t)=(v_0(x_1,x_2),w(x_1-(v_0)_1t,x_2-(v_0)_2t)) \end{equation*} arises as a viscosity limit. We omit details, but remark that this can be shown along the lines of \cite{shearflow}, where a similar analysis was carried out for the case of shear flows. \end{rem} \section{Dissipative Solutions}\label{dissipative} Let $S(w)=\frac{1}{2}(\nabla w+\nabla w^t)$ denote the symmetric gradient of a vectorfield $w$, and set \begin{equation*} E(w)=-\partial_tw-P(w\cdot\nabla w), \end{equation*} with $P$ denoting the Leray-Helmholtz projection onto $H(\Omega)$. The following definition is from \cite{lions}, given here in the version of \cite{bardostiti} for bounded domains. The reader may consult these references also for a motivation of the definition. \begin{defn} Let $\Omega$ be a bounded domain with $C^1$ boundary.
A vectorfield $v\in C([0,T];H_w(\Omega))$ is said to be a \emph{dissipative solution} of the Euler equations (\ref{euler}) if for every divergence-free test vectorfield $w\in C^1(\overline{\Omega}\times[0,T])$ with $w\cdot\nu\restriction_{\partial\Omega}=0$ one has \begin{equation}\label{defdissipative} \begin{aligned} \int_{\Omega}|v-w|^2dx&\leq\operatorname{exp}\left(2\int_0^t\norm{S(w)}_{\infty}ds\right)\int_{\Omega}|v(x,0)-w(x,0)|^2dx\\ +&2\int_0^t\int_{\Omega}\operatorname{exp}\left(2\int_s^t\norm{S(w)}_{\infty}d\tau\right)E(w)\cdot(v-w)dxds \end{aligned} \end{equation} for all $t\in[0,T]$. \end{defn} An immediate consequence of this definition is the weak-strong uniqueness (Proposition 4.1 in \cite{lions}): \begin{prop}\label{weak-strong} Suppose there exists a solution $v\in C^1(\overline{\Omega}\times[0,T])$ of the Euler equations (\ref{euler}). Then $v$ is unique in the class of dissipative solutions with the same initial data. \end{prop} This follows simply by choosing $w=v$ as a test function in the definition of dissipative solutions. Next, we prove Corollary \ref{cor}, showing that admissible solutions may fail to be unique in bounded domains even for smooth initial data. \begin{proof} Recall the construction from Section \ref{nonuniqueness} and define \begin{equation*} \tilde{\Omega}=\{x\in\mathbb R^2: \rho<|x|<r_0\}\subset \Omega. \end{equation*} It follows immediately from the definition that the restriction of a subsolution to a subdomain is itself a subsolution. Therefore we may consider the subsolution $(\bar{v},\bar{u},\bar{q})$ constructed in Section \ref{nonuniqueness} as a subsolution on $\tilde\Omega$ with energy density $\bar{e}$ as in Proposition \ref{p:subsol}, with initial data given by $$ \bar{v}(x,0)=-\frac{x^\perp}{|x|^3}\quad\textrm{ for }x\in\tilde\Omega $$ (c.f. \eqref{e:v0}). 
Applying this time Theorem \ref{t:criterion} in $\tilde\Omega$ with this subsolution yields infinitely many admissible weak solutions as in the proof of Theorem \ref{wildrotation}. Since the initial data $\bar{v}(x,0)$ is smooth on $\tilde\Omega$, there exists a unique strong solution (indeed, this is the stationary solution). Thus weak-strong uniqueness fails, a fortiori implying that the non-stationary weak admissible solutions are not dissipative in the sense of Lions. \end{proof} \section{A Criterion for Admissible Solutions to be Dissipative}\label{holder} We have seen that, on bounded domains, an admissible weak solution may fail to be dissipative. However, this will not happen provided such a solution is H\"{o}lder continuous near the boundary of the domain, as claimed in Theorem \ref{criterion} above. The aim of this last section is to prove this theorem. We follow Appendix B of \cite{euler2}, but have to take into account that we need to deal with test functions which are not necessarily compactly supported in $\Omega$ in the definition of dissipative solutions. So let $\Omega$ be a bounded domain in $\mathbb R^2$ with $C^2$ boundary and $v$ an admissible weak solution of the Euler equations (\ref{euler}) as in the statement of Theorem \ref{criterion}. Assume for the moment that for every divergence-free $w\in C^1(\overline{\Omega}\times[0,T])$ satisfying the boundary condition we have \begin{equation}\label{identity} \frac{d}{dt}\int_{\Omega}v\cdot wdx=\int_{\Omega}\left(S(w)(v-w)\cdot(v-w)-E(w)\cdot v\right)dx \end{equation} in the sense of distributions, where $E(w)$ is the quantity defined at the beginning of Section \ref{dissipative}. We claim that (\ref{identity}) already implies that $v$ is a dissipative solution. Indeed, this can be shown exactly as in \cite{euler2}: On the one hand, since $v$ is admissible, \begin{equation}\label{identity2} \frac{d}{dt}\int_{\Omega}|v|^2dx\leq0 \end{equation} in the sense of distributions.
On the other hand, using the definition of $E(w)$ and the identity $\int_{\Omega}(w\cdot\nabla w)\cdot wdx=0$ (which follows from $\operatorname{div}w=0$ and $w\cdot\nu\restriction_{\partial\Omega}=0$), we have \begin{equation}\label{identity3} \frac{d}{dt}\int_{\Omega}|w|^2dx=-2\int_{\Omega}E(w)\cdot wdx. \end{equation} Since \begin{equation*} \int_{\Omega}|v-w|^2dx=\int_{\Omega}|v|^2dx+\int_{\Omega}|w|^2dx-2\int_{\Omega}v\cdot wdx, \end{equation*} we infer from this together with (\ref{identity}), (\ref{identity2}), and (\ref{identity3}) that \begin{equation*} \begin{aligned} \frac{d}{dt}\int_{\Omega}|v-w|^2dx&\leq2\int_{\Omega}\left(E(w)\cdot(v-w)-S(w)(v-w)\cdot(v-w)\right)dx\\ &\leq2\int_{\Omega}E(w)\cdot(v-w)dx+2\norm{S(w)}_{\infty}\int_{\Omega}|v-w|^2dx \end{aligned} \end{equation*} in the sense of distributions. We can then apply Gr\"{o}nwall's inequality as in \cite{euler2} to obtain (\ref{defdissipative}) for every $t\in[0,T]$. Therefore, it remains to prove (\ref{identity}) for every test function $w$. In \cite{euler2}, identity (\ref{identity}) is proved for the case that $w$ is compactly supported in $\Omega$ at almost every time (see the considerations after equality (96) in \cite{euler2}). Let now $w\in C^1(\overline{\Omega}\times[0,T])$ be a divergence-free vectorfield with $w\cdot\nu|_{\partial\Omega}=0$, which does not necessarily have compact support in space. We will suitably approximate $w$ by vectorfields that do have compact support, much in the spirit of T. Kato \cite{kato} (in particular Section 4 therein). Assume for the moment that $\Omega$ is simply connected, so that $\partial\Omega$ has only one connected component. Since $w$ is divergence-free, there exists a function $\psi\in C([0,T];C^2(\overline{\Omega}))\cap C^1(\overline{\Omega}\times[0,T])$ such that \begin{equation*} w(x,t)=\nabla^{\perp}\psi(x,t) \end{equation*} and $\psi\restriction_{\partial\Omega}=0$.
Let now $\chi:[0,\infty)\rightarrow\mathbb R$ be a nonnegative smooth function such that \begin{equation*} \chi(s)=\begin{cases}0 & \text{if $s<1$}\\ 1 & \text{if $s>2$} \end{cases} \end{equation*} and set \begin{equation*} w_{\epsilon}(x,t)=\nabla^{\perp}\left(\chi\left(\frac{\operatorname{dist}(x,\partial\Omega)}{\epsilon}\right)\psi(x,t)\right). \end{equation*} Then, by Lemma 14.16 in \cite{gilbargtrudinger}, there exists $\eta>0$ depending on $\Omega$ such that $x\mapsto\operatorname{dist}(x,\partial\Omega)$ is $C^2$ on \begin{equation}\label{boundarystrip} \Gamma_{\eta}=\{x\in\overline{\Omega}:\operatorname{dist}(x,\partial\Omega)<\eta\}, \end{equation} and hence $w_{\epsilon}\in C^1_c(\Omega\times[0,T])$ for sufficiently small $\epsilon>0$. Therefore, (\ref{identity}) is true for $w_{\epsilon}$: \begin{equation}\label{identityeps} \frac{d}{dt}\int_{\Omega}v\cdot w_{\epsilon}dx=\int_{\Omega}\left(S(w_{\epsilon})(v-w_{\epsilon})\cdot(v-w_{\epsilon})-E(w_{\epsilon})\cdot v\right)dx. \end{equation} We will now let $\epsilon$ tend to zero in order to recover (\ref{identity}). Writing $d(x)=\operatorname{dist}(x,\partial\Omega)$, we have from the definition of $w_{\epsilon}$: \begin{equation}\label{productrule} w_{\epsilon}=\chi\left(\frac{d}{\epsilon}\right)\nabla^{\perp}\psi+\frac{1}{\epsilon}\chi'\left(\frac{d}{\epsilon}\right)\psi\nabla^{\perp}d, \end{equation} and since $\psi\in C([0,T];C^2(\overline{\Omega}))$ and $\psi\restriction_{\partial\Omega}=0$, there is a constant $C$ independent of $t$ and $\epsilon$ such that \begin{equation*} |\psi(x,t)|\leq Cd(x) \end{equation*} for all $x\in\overline{\Omega}$. Moreover, as the support of $\chi'\left(\frac{\cdot}{\epsilon}\right)$ is contained in $(\epsilon,2\epsilon)$, and as $|\nabla d|\leq1$, it follows from (\ref{productrule}) that \begin{equation}\label{strong} w_{\epsilon}\to w\hspace{0.3cm}\text{strongly in $L^{\infty}([0,T];L^2(\Omega))$} \end{equation} as $\epsilon\to0$. 
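The convergence \eqref{strong} can also be quantified. As a sketch (with $C$ denoting time-independent constants): on the support of $\chi'\left(\frac{\cdot}{\epsilon}\right)$ one has $|\psi|\leq Cd\leq 2C\epsilon$, so both terms on the right hand side of \eqref{productrule} are bounded uniformly in $\epsilon$, and $w_{\epsilon}$ differs from $w$ only on $\Gamma_{2\epsilon}$. Hence

```latex
\|w_{\epsilon}(\cdot,t)-w(\cdot,t)\|_{L^2(\Omega)}^2\leq C\,|\Gamma_{2\epsilon}|\leq C\epsilon\longrightarrow 0\qquad(\epsilon\to0),
```

uniformly in $t\in[0,T]$, using $|\Gamma_{2\epsilon}|\leq C\epsilon$ for small $\epsilon$.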
For the left hand side of (\ref{identityeps}) this immediately implies \begin{equation*} \frac{d}{dt}\int_{\Omega}v\cdot w_{\epsilon}dx\to\frac{d}{dt}\int_{\Omega}v\cdot wdx \end{equation*} in the sense of distributions. Moreover, the right hand side of (\ref{identityeps}) can be written, recalling the definition of $E(w_{\epsilon})$, as \begin{equation*} \begin{aligned} \int_{\Omega}&\left(S(w_{\epsilon})(v-w_{\epsilon})\cdot(v-w_{\epsilon})-E(w_{\epsilon})\cdot v\right)dx\\ &=\int_{\Omega}\left[\partial_tw_{\epsilon}\cdot v+(v\cdot\nabla w_{\epsilon})\cdot v-((v-w_{\epsilon})\cdot\nabla w_{\epsilon})\cdot w_{\epsilon}\right]dx, \end{aligned} \end{equation*} and the right hand side of (\ref{identity}) is given by a similar expression. Next, observe that, again by (\ref{strong}), \begin{equation*} \int_{\Omega}\partial_tw_{\epsilon}\cdot vdx\to\int_{\Omega}\partial_tw\cdot vdx \end{equation*} in the sense of distributions and also that \begin{equation*} \int_{\Omega}((v-w_{\epsilon})\cdot\nabla w_{\epsilon})\cdot w_{\epsilon}dx=\int_{\Omega}((v-w)\cdot\nabla w)\cdot wdx=0 \end{equation*} thanks to the formula $((v-w)\cdot\nabla w)\cdot w=(v-w)\cdot\frac{1}{2}\nabla|w|^2$ and the fact that $v-w\in H(\Omega)$ (and similarly for $((v-w_{\epsilon})\cdot\nabla w_{\epsilon})\cdot w_{\epsilon}$). To complete the proof of (\ref{identity}) and therefore of Theorem \ref{criterion}, it remains to show that \begin{equation}\label{problemterm} \int_{\Omega}(v\cdot\nabla w_{\epsilon})\cdot vdx\to\int_{\Omega}(v\cdot\nabla w)\cdot vdx \end{equation} in the sense of distributions as $\epsilon\to0$. To this end, note that for every $x\in\Omega$ sufficiently close to $\partial\Omega$ there exists a unique closest point $\hat{x}\in\partial\Omega$, and then \begin{equation*} x=\hat{x}+d(x)\nu(\hat{x}).
\end{equation*} We denote by $\tau(\hat{x})=\left(-\nu_2(\hat{x}),\nu_1(\hat{x})\right)$ the unit vector at $\hat{x}$ tangent to $\partial\Omega$ and use the notation $v_{\tau}(x)=v(x)\cdot\tau(\hat{x})$, $\partial_{\tau}w_{\nu}(x)=\nabla w_{\nu}(x)\cdot\tau(\hat{x})$, etc. (recall that $\hat{x}$ is uniquely determined by $x$). If $\epsilon$ is sufficiently small, we can then write (recall (\ref{boundarystrip})) \begin{equation*} \begin{aligned} \int_{\Omega}(v\cdot\nabla(w_{\epsilon}-w))\cdot vdx&=\int_{\Gamma_{2\epsilon}}v_{\nu}\partial_{\nu}(w_{\epsilon}-w)_{\nu}v_{\nu}dx+\int_{\Gamma_{2\epsilon}}v_{\nu}\partial_{\nu}(w_{\epsilon}-w)_{\tau}v_{\tau}dx\\ +\int_{\Gamma_{2\epsilon}}v_{\tau}&\partial_{\tau}(w_{\epsilon}-w)_{\nu}v_{\nu}dx+\int_{\Gamma_{2\epsilon}}v_{\tau}\partial_{\tau}(w_{\epsilon}-w)_{\tau}v_{\tau}dx\\ &=:I_1+I_2+I_3+I_4. \end{aligned} \end{equation*} Recalling (\ref{productrule}) as well as $\nabla^{\perp}\psi=w$ and observing $\nabla d=\nu$, we compute \begin{equation*} \begin{aligned} (w_{\epsilon}-w)_{\nu}&=\left(\chi\left(\frac{d}{\epsilon}\right)-1\right)\partial_{\tau}\psi\\ &=\left(\chi\left(\frac{d}{\epsilon}\right)-1\right)w_{\nu}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} (w_{\epsilon}-w)_{\tau}&=-\left(\chi\left(\frac{d}{\epsilon}\right)-1\right)\partial_{\nu}\psi+\frac{1}{\epsilon}\chi'\left(\frac{d}{\epsilon}\right)\psi\\ &=\left(\chi\left(\frac{d}{\epsilon}\right)-1\right)w_{\tau}+\frac{1}{\epsilon}\chi'\left(\frac{d}{\epsilon}\right)\psi, \end{aligned} \end{equation*} \begin{equation}\label{nunu} \partial_{\nu}(w_{\epsilon}-w)_{\nu}=\frac{1}{\epsilon}\chi'\left(\frac{d}{\epsilon}\right)w_{\nu}+\left(\chi\left(\frac{d}{\epsilon}\right)-1\right)\partial_{\nu}w_{\nu}, \end{equation} \begin{equation}\label{nutau} \partial_{\nu}(w_{\epsilon}-w)_{\tau}=\left(\chi\left(\frac{d}{\epsilon}\right)-1\right)\partial_{\nu}w_{\tau}+\frac{1}{\epsilon^2}\chi''\left(\frac{d}{\epsilon}\right)\psi, \end{equation} 
\begin{equation}\label{taunu} \partial_{\tau}(w_{\epsilon}-w)_{\nu}=\left(\chi\left(\frac{d}{\epsilon}\right)-1\right)\partial_{\tau}w_{\nu}, \end{equation} \begin{equation}\label{tautau} \partial_{\tau}(w_{\epsilon}-w)_{\tau}=\left(\chi\left(\frac{d}{\epsilon}\right)-1\right)\partial_{\tau}w_{\tau}+\frac{1}{\epsilon}\chi'\left(\frac{d}{\epsilon}\right)w_{\nu}. \end{equation} Before we estimate $I_1$-$I_4$ using (\ref{nunu})-(\ref{tautau}), let us collect some more information: As mentioned above, there is a constant $C$ independent of $t$ such that $|\psi(x)|\leq Cd(x)$. Moreover, since $w\in C^1(\overline{\Omega}\times[0,T])$ and $w_{\nu}=0$ on $\partial\Omega$, we find similarly a constant independent of $t$ such that $|w_{\nu}(x)|\leq Cd(x)$. By assumption, if $\epsilon$ is small enough, then $v$ is H\"{o}lder continuous with exponent $\alpha$ on $\Gamma_{2\epsilon}$, uniformly in $t$, and since $v\in H(\Omega)$ implies $v_{\nu}=0$ on $\partial\Omega$ (cf. \cite{galdibook} Chapter III), we obtain another time-independent constant such that $|v_{\nu}(x)|\leq Cd(x)^{\alpha}$ on $\Gamma_{2\epsilon}$. Finally note that $\psi$, $w$, and $v$ are uniformly bounded on $\Gamma_{2\epsilon}$ provided $\epsilon$ is small, and that there is a constant independent of $\epsilon$ such that $|\Gamma_{2\epsilon}|\leq C\epsilon$.
In the light of these considerations we can use (\ref{nunu})-(\ref{tautau}) to estimate \begin{equation*} \begin{aligned} |I_1|&\leq\frac{1}{\epsilon}\int_{\Gamma_{2\epsilon}}v_{\nu}^2\norm{\chi'}_{\infty}|w_{\nu}|dx+\int_{\Gamma_{2\epsilon}}v_{\nu}^2\norm{\chi-1}_{\infty}\norm{\partial_{\nu}w_{\nu}}_{\infty}dx\\ &\leq C\epsilon^{2\alpha+1}+C\epsilon^{2\alpha+1}\to0, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} |I_2|&\leq\int_{\Gamma_{2\epsilon}}|v_{\nu}||v_{\tau}|\norm{\chi-1}_{\infty}\norm{\partial_{\nu}w_{\tau}}_{\infty}dx+\frac{1}{\epsilon^2}\int_{\Gamma_{2\epsilon}}|v_{\nu}||v_{\tau}|\norm{\chi''}_{\infty}|\psi|dx\\ &\leq C\epsilon^{\alpha+1}+C\epsilon^{\alpha}\to0, \end{aligned} \end{equation*} \begin{equation*} |I_3|\leq\int_{\Gamma_{2\epsilon}}|v_{\nu}||v_{\tau}|\norm{\chi-1}_{\infty}\norm{\partial_{\tau}w_{\nu}}_{\infty}dx\leq C\epsilon^{\alpha+1}\to0, \end{equation*} \begin{equation*} \begin{aligned} |I_4|&\leq\int_{\Gamma_{2\epsilon}}v_{\tau}^2\norm{\chi-1}_{\infty}\norm{\partial_{\tau}w_{\tau}}_{\infty}dx+\frac{1}{\epsilon}\int_{\Gamma_{2\epsilon}}v_{\tau}^2\norm{\chi'}_{\infty}|w_{\nu}|\\ &\leq C\epsilon+C\epsilon\to0, \end{aligned} \end{equation*} all estimates being uniform in time. This proves Theorem \ref{criterion} if $\Omega$ is simply connected. As a final step, we convince ourselves that the proof can easily be modified to the general case when $\partial\Omega$ has $N$ connected components $\Gamma^1,\ldots,\Gamma^N$ in the spirit of Section 1.4 of \cite{kato2}. There still exists $\psi\in C([0,T];C^2(\overline{\Omega}))\cap C^1(\overline{\Omega}\times[0,T])$ with $\nabla^{\perp}\psi=w$, but we can no longer require $\psi\restriction_{\partial\Omega}=0$. Instead, $\psi$ will take the constant value $\psi^i$ on $\Gamma^i$, but the numbers $\psi^i$ may be different.
Now, if $\epsilon>0$ is small enough, then the sets \begin{equation*} \Gamma^i_{2\epsilon}=\{x\in\overline\Omega:\operatorname{dist}(x,\Gamma^i)<2\epsilon\},\hspace{0.3cm}i=1,\ldots,N, \end{equation*} will be mutually disjoint, so that $w_{\epsilon}$ is well-defined by setting \begin{equation*} w_{\epsilon}(x)=\begin{cases}\nabla^{\perp}\left(\chi\left(\frac{\operatorname{dist}(x,\partial\Omega)}{\epsilon}\right)(\psi(x)-\psi^i)\right) & \text{if $x\in\Gamma^i_{2\epsilon}$}\\ w(x) & \text{if $x\in\overline\Omega\setminus\bigcup_i\Gamma^i_{2\epsilon}$} \end{cases} \end{equation*} with $\chi$ as in the simply connected case. With this choice of $w_{\epsilon}$ we can then employ the very same arguments as above.\qed \begin{rem} Theorem \ref{criterion} implies that there cannot be H\"{o}lder continuous wild solutions on an annulus with smooth rotational initial data. Indeed, any admissible H\"{o}lder continuous solution must be dissipative by our theorem, and the weak-strong uniqueness then yields that this solution must coincide with the stationary one. This observation is particularly interesting in the light of recent results (e.g. \cite{isett, bdlsz, daneri}) where examples of H\"{o}lder continuous wild solutions are constructed. \end{rem} \vskip0.1cm {\it One of the first papers of Professor Mark Vishik ``On general boundary problems for elliptic differential equations'' \cite{Visik} was essential, in particular in France, for the training of mathematicians in the generation of the first author of this contribution. Then when he turned to Navier-Stokes and turbulence he played an important role in progress over the last 60 years toward the mathematical understanding of turbulence in fluid mechanics. Hence we hope that this essay will contribute to his memory and to the recognition of his influence on our community.} \vskip0.1cm \textbf{Acknowledgements.} The authors would like to thank Professor Edriss Titi for interesting and valuable discussions.
The research of L. Sz. is supported by ERC Grant Agreement No. 277993. Part of this work was done while E. W. was a visitor to the project ``Instabilities in Hydrodynamics'' of the Fondation Sciences Math\'{e}matiques de Paris. He gratefully acknowledges the Fondation's support.
\section{Introduction} {\it A short description} There is a book. It is on the desk. It is titled `Meditations and Other Metaphysical Writings'. It, or rather the document from which the English translation derives, was written by Ren{\'e} Descartes. {\it Period} \\ \indent I\footnote{`We' is preferred throughout this document save where the use of the term is most unnatural.} have just described a book, not some freely arbitrary book but one with a few pieces of information: that it is on the desk, that it has the said title, and that it is authored by Descartes. \hide{ It happens very frequently in our communication that a general term is made more specific and less ambiguous through addition of adjectives. } Let us suppose that I am with a friend of mine. If I simply said {\it There is a book} irrespective of being fully conscious that the book that I have spotted is the described book and no other, the friend of mine, who is here supposed oblivious of any articles on the desk, would have no reason to go against imagining whatever is considered a book, say `Les Mis{\'e}rables'. The short statement by itself does not forestall such a possibility. By contrast, if, as in the description provided at the beginning, I ask him to think of a laid-on-desk Ren{\'e} Descartes book titled `Meditations and Other Metaphysical Writings', then there would be a certain logical dissonance if he should still think of `Les Mis{\'e}rables' as a possible option that conforms to the given description. On innumerable occasions like this example, adjectives (or adverbs or whatever terms that fulfil the same purpose) are utilised to disambiguate terms that may denote more than what we intend to communicate.\\ \indent This feature of natural languages, allowing formulation of a precise enough concept through coordination of (1) broad concepts and (2) attributes that narrow down their possibilities, is a very economical and suitable one for us.
For imagine otherwise that every word exactly identifies a unique and indivisible object around us, then we would have no abstract concepts such as generalisation or composition since generalisation must assume specificity and composition decomposability of what result from the process, neither of which accords with the proposed code of the alternative language. While it is certain that concepts expressible in the alternative language sustain no degree of ambiguity in what they refer to, and in this sense it may be said to have an advantage to our languages, the absence of abstract concepts that we so often rely upon for reasoning is rather grave a backlash that would stem its prospect for wide circulation, because - after all - who is capable of showing knowledge of an object that (s)he has never seen before; then who could confidently assert that his/her listener could understand any part of his/her speech on a matter that only he/she knows of if all of us were to adopt the alternative language? By contrast, concepts in our languages, being an identifier of a group rather than an individual, allow generation of a vast domain of discourse with a relatively small number of them in aggregation, \emph{e.g.} `book' and `title' cover anything that can be understood as a book and/or a title, and they at the same time enable refinement, \emph{e.g.} `title'd `book' denotes only those books that are titled. The availability of mutually influencing generic concepts adds to so much flexibility in our languages. 
\\ \hide{ \indent Many logics currently available to us, however, do not primitively capture the essential feature, the generality/specificity relation, of our languages; or they could but only at the expense of explicit predicates' addition to their propositional fragments.\\ } \indent In this document, we will be interested in primitively representing the particular relation between objects/concepts (no special distinction between the two hereafter) and what may form their attributes, which will lead to the development of a new logic. Our domain of discourse will range over a certain set of (attributed) objects (which may themselves be an attribute to other (attributed) objects) and pure attributes that presuppose existence of some (attributed) object as their host. Needless to say, when we talk about or even just imagine an object with some explicated attribute, the attribute must be found among all that can become an attribute to it. To this extent it is confined within the presumed existence of the object. The new logic intends to address certain phenomena around attributed objects which I think are reasonably common to us but which may not be reasonably expressible in classical logic. Let us turn to an example for illustration of the peculiar behaviour that attributed objects often present to us. \subsection{On peculiarity of attributed objects as observed in negation, and on the truth `of' classical logic} {\it Episode} Imagine that there is a tiny hat shop in our town, having the following in stock: \begin{enumerate} \item 3 types of hats: orange hats, green hats ornamented with some brooch, and blue hats decorated with some white hat-shaped accessory, of which only the green and the blue hats are displayed in the shop. \item 2 types of shirts: yellow and blue, of which only the blue shirts are displayed in the shop. \end{enumerate} Imagine also that a young man has come to the hat shop.
After a while he asks the shop owner, a lady of many a year of experience in hat-making; ``Have you got a yellow hat?'' Well, obviously there are no yellow hats to be found in her shop. She answers; ``No, I do not have it in stock,'' negating the possibility that there is one in stock at her shop at the present point of time. {\it Period} \\ \indent But ``what is she actually denying?'' is the inquiry that I consider pertinent to this writing. We ponder; in delivering the answer, the question posed may have allowed her to infer that the young man was looking for a hat, a yellow hat in particular. Then the answer may be followed by her saying; ``\dots but I do have hats with different colours including ones not currently displayed.'' That is, while she denies the presence of a yellow hat, she still presumes the availability of hats of which she reckons he would like to learn. It does not appear so unrealistic, in fact, to suppose such a thought of hers that he may be ready to compromise his preference for a yellow hat with some non-yellow one, possibly an orange one in stock, given its comparative closeness in hue to yellow. \\ \indent Now, what if the young man turned out to be a town-famous collector of yellow articles? Then it may be that from his question she had divined instead that he was looking for something yellow, a yellow hat in particular, in which case her answer could have been a contracted form of ``No, I do not have it in stock, but I do have a yellow shirt nonetheless (as you are looking for, I suppose?)'' \\ \indent Either way, these seemingly partial negations contrast with classical negation with which her answer can be interpreted only as that she does not have a yellow hat, nothing less, nothing more, with no restriction in the range of possibilities outside it. \\ \hide{ \indent From this short episode, we could observe both the ambiguity and the peculiarity that come to surface when we reason about attributed objects.
First for ambiguity around negation of them, it is certainly the case that which negation may be reasonable is determined by a given conversational context in which they appear. However, some circumstance has a stronger conditioning in favour of one of them than some others do. Of the three in the above episode: the attributed-object negation (the first), the attribute negation (the second), and the object negation (the third), the last choice is not very natural as regards common sense. It is of course not impossible to conceive that the young man be in fact one of the very few who would go to hat shop for something yellow, a yellow hat in particular, but the chance for any hat shop owner to encounter such an individual looks to me, especially as we take into account the given circumstance under which the conversation was holding there, very slim. The second inference that we supposed she may have made seems to be more in keeping with intuition about what is likely and what is not in this particular context. \\ \indent But it is not just the innate ambiguity in attributed objects that the episode wishes to illuminate. In our conversation (by assuming which we must inevitably assume the presence of a conversational context) such situations as in the second inference where the subject of the discourse is restricted to certain concept/object occur very often, so frequent in fact that the shift in domain of discourses from some object A (as opposed to other objects) to some attribute of A (as opposed to other attributes of A) is tacitly accepted, without any participant in conversation exclaiming, aghast with terror, that the set of assumptions with which their conversation initiated has altered in the middle. 
\\ } \indent An analysis that I attempt regarding this sort of usual every-day phenomenon around concepts and their attributes, which leads for example to a case where negation of some concept with attributes does not perforce entail negation of the concept itself but only that of the attributes, is that presupposition of a concept often becomes too strong in our mind to be invalidated. Let us proceed in allusion to logical/computer science terminologies. In classical reasoning that we are familiar with, 1 - truth - is what we should consider is our truth and 0 - falsehood - is what we again should consider is our non-truth. When we suppose a set of true atomic propositions {\small $p, q, r, \cdots$} under some possible interpretation of them, the truth embodied in them does - by definition - neither transcend the truth that the 1 signifies nor go below it. The innumerable true propositions miraculously sit on the given definition of what is true, 1. By applying alternative interpretations, we may have a different set of innumerable true propositions possibly differing from the {\small $p, q, r, \cdots$}. However, no interpretations are meant to modify the perceived significance of the truth which remains immune to them. Here what renders the truth so immutable is the assumption of classical logic that no propositions that cannot be given a truth value by means of the laws of classical logic may appear as a proposition: there is nothing that is 30 \% true, and also nothing that is true by the probability of 30 \% unless, of course, the probability of 30 \% should mean to ascribe to our own confidence level, which I here assume is not part of the logic, of the proposition being true. 
\\ \indent However, one curious fact is that the observation made so far can by no means preclude a deduction that, {\it therefore} and no matter how controversial it may appear, the meaning of the truth, so long as it can be observed only through the interpretations that force the value of propositions to go coincident with it and only through examination on the nature\footnote{Philosophical, that is, real, reading of the symbols {\small $p, q, r,\dots$}.} of those propositions that were made true by them, must be invariably dependent on the delimiter of our domain of discourse, the set of propositions; on the presupposition of which are sensibly meaningful the interpretations; on the presupposition of which, in turn, is possible classical logic. Hence, quite despite the actuality that for any set of propositions as can form a domain of discourse for classical logic it is sufficient that there be only one truth, it is not {\it a priori} possible that we find with certainty any relation to hold between such individual truths and the universal truth, if any, which we cannot hope to successfully invalidate. Nor is it {\it a priori} possible to sensibly impose a restriction on any domain of discourse for classical reasoning to one that is consistent with the universal truth, provided again that such should exist. But, then, it is not by the force of necessity that, having a pair of domains of discourse, we find one individual truth and the other wholly interchangeable. In essence, suppose that truths are akin to existences; then, just as there are many existences, so there are many truths, every one of which can be subjected to classical reasoning, but no distinct pairs of which {\it a priori} exhibit a trans-territorial compatibility. But the lack of compatibility also gives rise to a possibility of dependency among them within a meta-classical-reasoning that recognises the many individual truths at once.
In situations where some concepts in a domain of discourse over which reigns a sense of truth become too strong an assumption to be feasibly falsified, the existence of the concepts becomes non-falsifiable during the discourse on existences of their attributes (which form another domain of discourse); it becomes a delimiter of classical reasoning, that is, it becomes a `truth' for them. \subsection{Gradual classical logic: a logic for attributed objects} It hopefully goes without saying that what I wished to impart through the above fictitious episode was not so much about which negation should take a precedence over the others as about the distinction of objects and what may form their attributes, \emph{i.e.} about the inclusion relation to hold between the two and about how it could restrict domains of discourse. If we are to assume attributed objects as primitive entities in a logic, we for example do not just have the negation that negates the presence of an attributed object (attributed-object negation); the logic should also be able to express the negation that applies to an attribute only (attribute negation) and, complementarily, we may also consider the negation that applies to an object only (object negation). We should also consider what it may mean to conjunctively/disjunctively have several attributed objects and should attempt a construction of the logic according to the analysis. I call the logic derived from all these analyses {\it gradual classical logic}, in which the `truth', a very fundamental property of classical logic, gradually shifts as domains of discourse move deeper into attributes of (attributed) objects. For emphasis, the gradation in truth here occurs only in the sense that is spelled out in the previous sub-section.
One in particular should not confuse this logic with multi-valued logics \cite{Gottwald09,Hajek10} that have multiple truth values in the same domain of discourse, for any (attributed) object in gradual classical logic assumes only one out of the two usual possibilities: either it is true (that is, because we shall essentially consider conceptual existences, it is synonymous to saying that it exists) or it is false (it does not exist). In this sense it is indeed classical logic. But in some sense - because we can observe transitions in the sense of the `truth' within the logic itself - it has a bearing of meta-classical logic. As for inconsistency, if an argument within a discourse on attributed objects is inconsistent, wherever it may be occurring, the reasoning containing the inconsistent part cannot be said to be consistent. For this reason inconsistency remains in gradual classical logic just as strong as it is in standard classical logic. \subsection{Structure of this work} Shown below is the organisation of this work. \begin{itemize} \item Development of gradual classical logic (Sections {\uppercase\expandafter{\romannumeral 1}} and {\uppercase\expandafter{\romannumeral 2}}). \item A formal semantics of gradual classical logic and a proof that it is not para-consistent/inconsistent (Section {\uppercase\expandafter{\romannumeral 3}}). \item Decidability of gradual classical logic (Section {\uppercase\expandafter{\romannumeral 4}}). \item Conclusion and discussion on related thoughts: para-consistent logics, epistemic/conditional logics, intensional/description logics, and combined logic (Section {\uppercase\expandafter{\romannumeral 5}}). \end{itemize} \section{Gradual Classical Logic: Logical Particulars} In this section we shall look into logical particulars of gradual classical logic. Some familiarity with propositional classical logic, in particular with how the logical connectives behave, is presumed.
Mathematical transcriptions of gradual classical logic are found in the next section. \subsection{Logical connective for object/attribute and interactions with negation ({\small $\gtrdot$} and {\small $\neg$})} It was already mentioned that the inclusion relation that is implicit when we talk about an attributed object shall be primitive in the proposed gradual classical logic. We shall dedicate the symbol {\small $\gtrdot$} to represent it. The usage of the new connective is fixed to take either of the forms {\small $\code{Object}_1 \gtrdot \code{Object}_2$} or {\small $\code{Object}_1 \gtrdot \code{Attribute}_2$}. Both denote an attributed object. In the first case, {\small $\code{Object}_1$} is a more generic object than {\small $\code{Object}_1 \gtrdot \code{Object}_2$} ({\small $\code{Object}_2$} acting as an attribute to {\small $\code{Object}_1$} makes {\small $\code{Object}_1$} more specific). In the second case, we have a pure attribute which is not itself an object. Either way a schematic reading is as follows: ``It is true that {\small $\code{Object}_1$} is, and it is true that {\small $\code{Object}_1$} has an attribute of {\small $\code{Object}_2$} (, or of {\small $\code{Attribute}_2$}).'' Given an attributed object {\small $\code{Object}_1 \gtrdot \code{Object}_2$} (or {\small $\code{Object}_1 \gtrdot \code{Attribute}_2$}), {\small $\neg (\code{Object}_1 \gtrdot \code{Object}_2)$} expresses its attributed-object negation, {\small $\neg \code{Object}_1 \gtrdot \code{Object}_2$} its object negation and {\small $\code{Object}_1 \gtrdot \neg \code{Object}_2$} its attribute negation. Again schematic readings for them are, respectively; \begin{itemize} \item It is false that the attributed object {\small $\code{Object}_1 \gtrdot \code{Object}_2$} is (\emph{Cf}. above for the reading of `an attributed object is').
\item It is false that {\small $\code{Object}_1$} is, but it is true that some non-{\small $\code{Object}_1$} is which has an attribute of {\small $\code{Object}_2$}. \item It is true that {\small $\code{Object}_1$} is, but it is false that it has an attribute of {\small $\code{Object}_2$}. \end{itemize} The presence of negation flips ``It is true that \ldots'' into ``It is false that \ldots'' and vice versa. But it should also be noted how negation acts in object negations, attribute negations and attributed-object negations. Several specific examples\footnote{I do not pass judgement on what is reasonable and what is not here, as my purpose is to illustrate the reading of {\small $\gtrdot$}. So there are ones that ordinarily appear to be not very reasonable.} constructed parodically from the items in the hat shop episode are; \begin{enumerate} \item {\small $\text{Hat} \gtrdot \text{Yellow}$}: It is true that hat is, and it is true that it is yellow(ed). \item {\small $\text{Yellow} \gtrdot \text{Hat}$}: It is true that yellow is, and it is true that it is hatted. \item {\small $\text{Hat} \gtrdot \neg \text{Yellow}$}: It is true that hat is, but it is false that it is yellow(ed).\footnote{In the rest, this -ed indicating an adjective is assumed clear and is omitted without further emphasis.} \item {\small $\neg \text{Hat} \gtrdot \text{Yellow}$}: It is false that hat is, but it is true that yellow object (which is not hat) is. \item {\small $\neg (\text{Hat} \gtrdot \text{Yellow})$}: Either it is false that hat is, or if it is true that hat is, then it is false that it is yellow. \end{enumerate} \subsection{Object/attribute relation and conjunction ({\small $\gtrdot$} and {\small $\wedge$})} We examine specific examples first involving {\small $\gtrdot$} and {\small $\wedge$} (conjunction), and then observe what the readings imply. \begin{enumerate} \item {\small $\text{Hat} \gtrdot \text{Green} \wedge \text{Brooch}$}: It is true that hat is, and it is true that it is green and brooched.
\item {\small $(\text{Hat} \gtrdot \text{Green}) \wedge (\text{Hat} \gtrdot \text{Brooch})$}: for one, it is true that hat is, and it is true that it is green; for one, it is true that hat is, and it is true that it is brooched. \item {\small $(\text{Hat} \wedge \text{Shirt}) \gtrdot \text{Yellow}$}: It is true that hat and shirt are, and it is true that they are yellow. \item {\small $(\text{Hat} \gtrdot \text{Yellow}) \wedge (\text{Shirt} \gtrdot \text{Yellow})$}: for one, it is true that hat is, and it is true that it is yellow; for one, it is true that shirt is, and it is true that it is yellow. \end{enumerate} By now it has hopefully become clear that by {\it existential facts as truths} I do not mean how many of a given (attributed) object exist: in gradual classical logic, cardinality of objects, which is an important pillar in the philosophy of linear logic \cite{DBLP:journals/tcs/Girard87} and that of its kinds of so-called resource logics, is not what it must be responsible for, but only the facts themselves of whether any of them exist in a given domain of discourse, which is in line with classical logic.\footnote{ That proposition A is true and that proposition A is true mean that proposition A is true; the subject of this sentence is equivalent to its object.} Hence they univocally assume a singular rather than a plural form, as in the examples inscribed so far. That the first and the second, and the third and the fourth, equate is then a trite observation. Nevertheless, it is still important that we analyse them with a sufficient precision. In the third and the fourth, where the same attribute is shared among several objects, the attribute of being yellow ascribes to all of them. Therefore those expressions are true statements only if (1) there is an existential fact that both hat and shirt are and (2) being yellow is true for the existential fact (formed by existence of hat and that of shirt). Another example is found in Figure \ref{first_figure}.
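The two conjunction equivalences read off above can be checked mechanically. The following Python fragment is a minimal sketch under an assumed tuple encoding of formulas; the encoding, the tag names, and the function name \verb|distribute| are mine for illustration, not part of the paper's formal apparatus.

```python
# Assumed encoding (not from the paper): formulas as nested tuples
# ("atom", name), ("and", f, g), ("gt", obj, attr),
# where "gt" stands for the object/attribute connective.

def distribute(f):
    """Push 'and' out of both sides of the object/attribute connective."""
    if f[0] == "gt":
        obj, attr = distribute(f[1]), distribute(f[2])
        if attr[0] == "and":   # Obj > (A1 and A2)  ~  (Obj > A1) and (Obj > A2)
            return ("and", distribute(("gt", obj, attr[1])),
                           distribute(("gt", obj, attr[2])))
        if obj[0] == "and":    # (O1 and O2) > A  ~  (O1 > A) and (O2 > A)
            return ("and", distribute(("gt", obj[1], attr)),
                           distribute(("gt", obj[2], attr)))
        return ("gt", obj, attr)
    if f[0] == "and":
        return ("and", distribute(f[1]), distribute(f[2]))
    return f

hat, shirt, yellow = ("atom", "Hat"), ("atom", "Shirt"), ("atom", "Yellow")
# (Hat and Shirt) > Yellow coincides with (Hat > Yellow) and (Shirt > Yellow).
lhs = distribute(("gt", ("and", hat, shirt), yellow))
rhs = ("and", ("gt", hat, yellow), ("gt", shirt, yellow))
assert lhs == rhs
```

The sketch makes the "trite observation" above concrete: the first and second, and the third and fourth, readings normalise to the same tree.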
\input{first_figure.tex} \subsection{Object/attribute relation and disjunction ({\small $\gtrdot$} and {\small $\vee$})} We look at examples first. \begin{enumerate} \item {\small $\text{Hat} \gtrdot (\text{Hat} \vee \text{Brooch})$}: It is true that hat is, and it is true that it is either hatted or brooched. \item {\small $(\text{Hat} \gtrdot \text{Hat}) \vee (\text{Hat} \gtrdot \text{Brooch})$}: At least either that it is true that hat is and it is true that it is hatted, or that it is true that hat is and it is true that it is brooched. \item {\small $(\text{Hat} \vee \text{Shirt}) \gtrdot \text{Yellow}$}: It is true that at least either hat or shirt is, and it is true that whichever is existing (or both) is (or are) yellow. \item {\small $(\text{Hat} \gtrdot \text{Yellow}) \vee (\text{Shirt} \gtrdot \text{Yellow})$}: At least either it is true that hat is and it is true that it is yellow, or it is true that shirt is and it is true that it is yellow. \end{enumerate} Just as in the previous sub-section, here again 1) and 2), and 3) and 4) are equivalent. However, in the cases of 3) and 4) here, we have that the existential fact of the attribute yellow depends on that of hat or shirt, whichever is existing, or that of both if they both exist.\footnote{In classical logic, that proposition A or proposition B is true means that at least one of the proposition A or the proposition B is true, though both can be true. Same goes here.} \subsection{Nestings of object/attribute relations} An expression of the kind {\small $(\code{Object}_1 \gtrdot \code{Object}_2) \gtrdot \code{Object}_3$} is ambiguous. We begin by listing examples and then move onto analysis of the readings of the nesting of the relations. \begin{enumerate} \item {\small $(\text{Hat} \gtrdot \text{Brooch}) \gtrdot \text{Green}$}: It is true that hat is, and it is true that it is brooched. It is true that the object thus described is green.
\item {\small $\text{Hat} \gtrdot (\text{Hat} \gtrdot \text{White})$}: It is true that hat is, and it is true that it has the attribute of which it is true that hat is and that it is white. (More simply, it is true that hat is, and it is true that it is white-hatted.) \item {\small $\neg (\text{Hat} \gtrdot \text{Yellow}) \gtrdot \text{Brooch}$}: Either it is false that hat is, or else it is true that hat is but it is false that it is yellow.\footnote{ This is the reading of {\small $\neg (\text{Hat} \gtrdot \text{Yellow})$}.} If it is false that hat is, then it is true that brooched object (which obviously cannot be hat) is. If it is true that hat is but it is false that it is yellow, then it is true that the object thus described is brooched. \end{enumerate} Note that to say that Hat {\small $\gtrdot$} Brooch (brooched hat) is green, we must mean that the object to which being green is an attribute, \emph{i.e.} hat, is green. It is on the other hand unclear whether green brooched hat should or should not mean that the brooch, an accessory to hat, is also green. Common sense about adjectives dictates that such be simply indeterminate. This is reasonable for (Hat {\small $\gtrdot$} Brooch) {\small $\gtrdot$} Green, while if we have (Hat {\small $\gtrdot$} Large) {\small $\gtrdot$} Green, ordinarily speaking it cannot be the case that the attribute of being large is green. Therefore we enforce that {\small $(\code{Object}_1 \gtrdot \code{Object}_2) \gtrdot \code{Object}_3$} amounts to {\small $(\code{Object}_1 \gtrdot \code{Object}_3) \wedge ((\code{Object}_1 \gtrdot \code{Object}_2) \vee (\code{Object}_1 \gtrdot (\code{Object}_2 \gtrdot \code{Object}_3)))$}, in which disjunction as usual captures the indeterminacy. 2) poses no ambiguity. 3) is understood in the same way as 1).
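The enforced expansion of a nested object/attribute relation can be written out as a one-step rewrite. The Python fragment below is a minimal sketch under an assumed tuple encoding of formulas; the encoding and the name \verb|expand_nesting| are mine for illustration only.

```python
# Assumed encoding (not from the paper): formulas as nested tuples
# ("atom", name), ("and", f, g), ("or", f, g), ("gt", obj, attr),
# where "gt" stands for the object/attribute connective.

def expand_nesting(o1, o2, o3):
    """(o1 > o2) > o3  ~  (o1 > o3) and ((o1 > o2) or (o1 > (o2 > o3)))."""
    return ("and",
            ("gt", o1, o3),
            ("or", ("gt", o1, o2), ("gt", o1, ("gt", o2, o3))))

hat, brooch, green = ("atom", "Hat"), ("atom", "Brooch"), ("atom", "Green")
expanded = expand_nesting(hat, brooch, green)

# The first conjunct says unconditionally that hat is green; the disjunction
# leaves open whether the brooch is also green - the intended indeterminacy.
assert expanded[1] == ("gt", hat, green)
```

Reading the result back: "green brooched hat" commits to a green hat in every case, while the greenness of the brooch remains undecided between the two disjuncts.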
\subsection{Two nullary logical connectives {\small $\top$} and {\small $\bot$}} Now we examine the nullary logical connectives {\small $\top$} and {\small $\bot$} which denote, in classical logic, the concept of the truth and that of the inconsistency. In gradual classical logic {\small $\top$} denotes the concept of the presence and {\small $\bot$} denotes that of the absence. Several examples for the readings are; \begin{enumerate} \item {\small $\top \gtrdot \text{Yellow}$}: It is true that yellow object is. \item {\small $\text{Hat} \gtrdot (\top \gtrdot \text{Yellow})$}: It is true that hat is, and it is true that it has the following attribute of which it is true that it is yellow object. \item {\small $\bot \gtrdot \text{Yellow}$}: It is true that nothingness is, and it is true that it is yellow. \item {\small $\text{Hat} \gtrdot \top$}: It is true that hat is. \item {\small $\text{Hat} \gtrdot \bot$}: It is true that hat is, and it is true that it has no attributes. \item {\small $\bot \gtrdot \bot$}: It is true that nothingness is, and it is true that it has no attributes. \end{enumerate} 1) and 2) illustrate how the sense of the `truth' is constrained by the object to which it acts as an attribute. For the rest, however, there is a point around the absence which is not so vacuous as not to merit consideration, and to which I in fact append the following postulate. \begin{postulate} That which cannot have any attribute is not. Conversely, anything that remains once all the attributes have been removed from a given object is nothingness, for which any scenario where it comes with an attribute is inconceivable. \label{axiomZero} \end{postulate} With it, 3), which asserts the existence of nothingness, is contradictory. 4) then behaves as expected in that Hat which is asserted with the presence of attribute(s) is just as generic a term as Hat itself is. 5), which asserts the existence of an object with no attributes, again contradicts Postulate \ref{axiomZero}.
6) illustrates that any attributed object some part of which has turned out to be contradictory remains contradictory no matter how it is to be extended: a {\small $\bot$} cannot negate another {\small $\bot$}. \\ \indent But how plausible is the postulate itself? Let us imagine hat. If the word evoked in our mind any specific hat with specific colour and shape, we first remove the colour out of it. If the process should make it transparent, we then remove the transparentness away from it. And if there should be still some things that are by some means perceivable as having originated from it, then because they are an attribute of the hat, we again remove any one of them. If {\it the humanly no longer detectable something is not nothingness} is not itself contradictory, then there must be still some quality originating in the hat that makes the something differ from nothingness. But the quality must again be an attribute of the hat, which we decisively remove away. Therefore, intuition at least supports the validity of Postulate \ref{axiomZero}. A further pursuit on this topic may be useful. For now, however, we shall draw direct support from - among others - Transcendental Aesthetic in Critique of Pure Reason (English translation \cite{Kant08}), and close the scene. \subsection{Sub-Conclusion} Gradual classical logic was developed in Section {\uppercase\expandafter{\romannumeral 1}} and Section {\uppercase\expandafter{\romannumeral 2}}. The next two sections, Section {\uppercase\expandafter{\romannumeral 3}} and Section {\uppercase\expandafter{\romannumeral 4}}, study its mathematical aspects. \section{Mathematical mappings: syntax and semantics} In this section a semantics of gradual classical logic is formalised. We assume in the rest of this document; \begin{itemize} \item {\small $\mathbb{N}$} denotes the set of natural numbers including 0. \item {\small $\wedge^{\dagger}$} and {\small $\vee^{\dagger}$} are two binary operators on Boolean arithmetic.
The following laws hold; {\small $1 \vee^{\dagger} 1 = 1 \vee^{\dagger} 0 = 0 \vee^{\dagger} 1 = 1$}, {\small $0 \vee^{\dagger} 0 = 0$}, {\small $0 \wedge^{\dagger} 0 = 0 \wedge^{\dagger} 1 = 1 \wedge^{\dagger} 0 = 0$}, and {\small $1 \wedge^{\dagger} 1 = 1$}. \item {\small $\wedge^{\dagger}$}, {\small $\vee^{\dagger}$}, {\small $\rightarrow^{\dagger}$}, {\small $\neg^{\dagger}$}, {\small $\exists$} and {\small $\forall$} are meta-logical connectives: conjunction, disjunction,\footnote{ These two symbols are overloaded. Save whether truth values or the ternary values are supplied as arguments, however, the distinction is clear from the context in which they are used. } material implication, negation, existential quantification and universal quantification, whose semantics follow those of standard classical logic. We abbreviate {\small $(A \rightarrow^{\dagger} B) \wedge^{\dagger} (B \rightarrow^{\dagger} A)$} by {\small $A \leftrightarrow^{\dagger} B$}. \item Binding strength of logical or meta-logical connectives is, in the order of decreasing precedence;\\ {\small $[\neg]\! \gg \! [\wedge \ \ \vee]\! \gg \! [\gtrdot] \gg [\forall \ \ \exists]\! \gg \! [\neg^{\dagger}]\! \gg\! [\wedge^{\dagger} \ \ \vee^{\dagger}] \! \gg\! [\rightarrow^{\dagger}] \! \gg\! [\leftrightarrow^{\dagger}]$}. \item For any binary connective {\small $?$}, for any {\small $i, j \in \mathbb{N}$} and for {\small $!_0, !_1, \cdots, !_j$} that are some recognisable entities, {\small $?_{i = 0}^j !_i$} is an abbreviation of {\small $(!_0) ? (!_1) ? \cdots ? (!_j)$}. \item For the unary connective {\small $\neg$}, {\small $\neg \neg !$} for some recognisable entity {\small $!$} is an abbreviation of {\small $\neg (\neg !)$}. Further, {\small $\neg^k !$} for some {\small $k \in \mathbb{N}$} and some recognisable entity {\small $!$} is an abbreviation of {\small $\underbrace{\neg \cdots \neg}_k !$}.
\item For the binary connective {\small $\gtrdot$}, {\small $!_0 \gtrdot !_1 \gtrdot !_2$} for some three recognisable entities is an abbreviation of {\small $!_0 \gtrdot (!_1 \gtrdot !_2)$}. \end{itemize} On this preamble we shall begin. \subsection{Development of semantics} The set of literals in gradual classical logic is denoted by {\small $\mathcal{A}$} whose elements are referred to by {\small $a$} with or without a sub-script. This set has countably many literals. Given a literal {\small $a \in \mathcal{A}$}, its complement is denoted by {\small $a^c$} which is in {\small $\mathcal{A}$}. As usual, we have {\small $\forall a \in \mathcal{A}.(a^c)^c = a$}. The set {\small $\mathcal{A} \cup \{\top\} \cup \{\bot\}$} where {\small $\top$} and {\small $\bot$} are the two nullary logical connectives is denoted by {\small $\mathcal{S}$}. Its elements are referred to by {\small $s$} with or without a sub-script. Given {\small $s \in \mathcal{S}$}, its complement is denoted by {\small $s^c$} which is in {\small $\mathcal{S}$}. Here we have {\small $\top^c = \bot$} and {\small $\bot^c = \top$}. The set of formulas is denoted by {\small $\mathfrak{F}$} whose elements, {\small $F$} with or without a sub-/super-script, are finitely constructed from the following grammar; \\ \indent {\small $F := s \ | \ F \wedge F \ | \ F \vee F \ | \ \neg F \ | \ F \gtrdot F$}\\ We now develop semantics. This is done in two parts: we do not outright jump to the definition of valuation (which we could, but which we simply do not choose, in anticipation of later proofs).
Instead, just as we only need consider negation normal form in classical logic because every definable classical logic formula has a reduction into a normal form, so shall we first define rules for formula reductions (for any {\small $F_1, F_2, F_3 \in \mathfrak{F}$}): \begin{itemize} \item {\small $\forall s \in \mathcal{S}.\neg s \mapsto s^c$} ({\small $\neg$} reduction 1). \item {\small $\neg (F_1 \wedge F_2) \mapsto \neg F_1 \vee \neg F_2$} ({\small $\neg$} reduction 2). \item {\small $\neg (F_1 \vee F_2) \mapsto \neg F_1 \wedge \neg F_2$} ({\small $\neg$} reduction 3). \item {\small $\neg (s \gtrdot F_2) \mapsto s^c \vee (s \gtrdot \neg F_2)$} ({\small $\neg$} reduction 4). \item {\small $(F_1 \gtrdot F_2) \gtrdot F_3 \mapsto (F_1 \gtrdot F_3) \wedge ((F_1 \gtrdot F_2) \vee (F_1 \gtrdot F_2 \gtrdot F_3))$} ({\small $\gtrdot$} reduction 1). \item {\small $ (F_1 \wedge F_2) \gtrdot F_3 \mapsto (F_1 \gtrdot F_3) \wedge (F_2 \gtrdot F_3)$} ({\small $\gtrdot$} reduction 2). \item {\small $ (F_1 \vee F_2) \gtrdot F_3 \mapsto (F_1 \gtrdot F_3) \vee (F_2 \gtrdot F_3)$} ({\small $\gtrdot$} reduction 3). \item {\small $F_1 \gtrdot (F_2 \wedge F_3) \mapsto (F_1 \gtrdot F_2) \wedge (F_1 \gtrdot F_3)$} ({\small $\gtrdot$} reduction 4). \item {\small $F_1 \gtrdot (F_2 \vee F_3) \mapsto (F_1 \gtrdot F_2) \vee (F_1 \gtrdot F_3)$} ({\small $\gtrdot$} reduction 5). \end{itemize} \begin{definition}[Valuation frame] Let {\small $\mathcal{S}^*$} denote the set union of (A) the set of finite sequences of elements of {\small $\mathcal{S}$}\footnote{Simply for a presentation purpose, we use a period, as in {\small $s_1^*.s_2^*$} for {\small $s_1^*, s_2^* \in \mathcal{S}^*$}, to show that {\small $s_1^*.s_2^*$} is an element of {\small $\mathcal{S}^*$} in which {\small $s_1^*$} is the preceding constituent and {\small $s_2^*$} the following constituent of {\small $s_1^*.s_2^*$}.} and (B) a singleton set {\small $\{\epsilon\}$} denoting an empty sequence. We define a valuation frame as a 2-tuple: {\small $(\mathsf{I}, \mathsf{J})$}, where {\small $\mathsf{I}: \mathcal{S}^* \times \mathcal{S} \rightarrow \{0,1\}$} is what we call local interpretation and {\small $\mathsf{J}: \mathcal{S}^* \backslash \{\epsilon\} \rightarrow \{0,1\} $} is what we call global interpretation. The following are defined to hold.
\begin{description} \item[Regarding local interpretation]{\ } \begin{itemize} \item {\small $[\mathsf{I}(s_0.\dots.s_{k-1}, \top) = 1]$}\footnote{ When {\small $k = 0$}, we assume that {\small $[\mathsf{I}(s_0.\dots.s_{k-1}, s_k) = \mathsf{I}(\epsilon, s_0)]$}. Same applies in the rest.} ({\small $\mathsf{I}$} valuation of $\top$). \item {\small $[\mathsf{I}(s_0.\dots.s_{k-1}, \bot) = 0]$} (That of $\bot$). \item {\small $[\mathsf{I}(s_0.\dots.s_{k-1}, a_{k}) = 0] \vee^{\dagger} [\mathsf{I}(s_0.\dots.s_{k-1}, a_{k}) = 1]$} (That of a literal). \item {\small $[\mathsf{I}(s_0.\dots.s_{k-1},a_{k}) = 0] \leftrightarrow^{\dagger} [\mathsf{I}(s_0.\dots.s_{k-1},a^c_{k}) = 1]$} (That of a complement). \item {\small $[{\mathsf{I}(s_0.\dots.s_{k-1}, s_{k}) = \mathsf{I}(s'_0.\dots.s'_{k-1}, s_{k})}]$} (Synchronisation condition on {\small $\mathsf{I}$} interpretation; this reflects the dependency of the existential fact of an attribute on the existential fact of objects to which it is an attribute). \end{itemize} \item[Regarding global interpretation]{\ } \begin{itemize} \item {\small $[\mathsf{J}(s_0.\dots.s_k) = 1] \leftrightarrow^{\dagger}$} {\small $\bigwedge^{\dagger\ k}_{i=0} [\mathsf{I}(s_0.\dots.s_{i-1},s_i) = 1]$} (Non-contradictory {\small $\mathsf{J}$} valuation). \item {\small $[\mathsf{J}(s_0.\dots.s_k) = 0] \leftrightarrow^{\dagger}$} {\small $ \exists i \in \mathbb{N}.[i \le k] \wedge^{\dagger} [\mathsf{I}(s_0.\dots.s_{i-1}, s_i) = 0]$} (Contradictory {\small $\mathsf{J}$} valuation). \end{itemize} \end{description} \label{interpretations} \end{definition} Note that global interpretation is completely characterised by local interpretations, as is clear from the definition. \begin{definition}[Valuation] Suppose a valuation frame {\small $\mathfrak{M} = (\mathsf{I}, \mathsf{J})$}.
The following are defined to hold for all {\small $F_1, F_2\in \mathfrak{F}$} and for all {\small $k \in \mathbb{N}$}: \begin{itemize} \item {\small $[\ensuremath{\mathfrak{M}} \models s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_k] = \mathsf{J}(s_0.s_1.\dots.s_k)$}. \item {\small $[\ensuremath{\mathfrak{M}} \models F_1 \wedge F_2] = [\ensuremath{\mathfrak{M}} \models F_1] \wedge^{\dagger} [\ensuremath{\mathfrak{M}} \models F_2]$}. \item {\small $[\ensuremath{\mathfrak{M}} \models F_1 \vee F_2] = [\ensuremath{\mathfrak{M}} \models F_1] \vee^{\dagger} [\ensuremath{\mathfrak{M}} \models F_2]$}. \end{itemize} \label{model} \end{definition} The notions of validity and satisfiability are as usual. \begin{definition}[Validity/Satisfiability] A formula {\small $F \in \mathfrak{F}$} is said to be satisfiable in a valuation frame {\small $\mathfrak{M}$} iff {\small $1 = [\ensuremath{\mathfrak{M}} \models F]$}; it is said to be valid iff it is satisfiable in all the valuation frames; it is said to be invalid iff {\small $0 = [\ensuremath{\mathfrak{M}} \models F]$} for some valuation frame {\small $\ensuremath{\mathfrak{M}}$}; it is said to be unsatisfiable iff it is invalid in all the valuation frames. \label{universal_validity} \end{definition} \subsection{Study on the semantics} We have not yet formally verified some important points. Firstly, are there any formulas {\small $F \in \mathfrak{F}$} that do not reduce into some value-assignable formula? Secondly, what if both {\small $1 = [\ensuremath{\mathfrak{M}} \models F]$} and {\small $1 = [\ensuremath{\mathfrak{M}} \models \neg F]$}, or both {\small $0 = [\ensuremath{\mathfrak{M}} \models F]$} and {\small $0 = [\ensuremath{\mathfrak{M}} \models \neg F]$}, for some {\small $F \in \mathfrak{F}$} under some {\small $\ensuremath{\mathfrak{M}}$}? Thirdly, could it happen that {\small $[\mathfrak{M} \models F] = 0 = 1$} for some formula {\small $F$}, given a valuation frame? \\ \indent If the first should hold, the semantics - the reductions and valuations as were presented in the previous sub-section - would not assign a value to every member of {\small $\mathfrak{F}$} even with the reduction rules made available.
If the second should hold, we would obtain {\small $1 = [\mathfrak{M} \models F \wedge \neg F]$}, which would relegate this gradual logic to the family of para-consistent logics \cite{Marcos05}, quite out of keeping with my intention. And the third, clearly, should never hold. \\ \indent Hence it must be shown that these unfavoured situations do not arise. An outline of the proofs is as follows: \begin{enumerate} \item to establish that every formula has a reduction through {\small $\neg$} reductions and {\small $\gtrdot$} reductions into some formula {\small $F$} for which it holds that {\small $\forall \mathfrak{M}. [\mathfrak{M} \models F] \in \{0, 1\}$}, settling the first inquiry. \item to prove that any formula {\small $F$} to which a value 0/1 is assignable {\it without the use of the reduction rules} satisfies for every valuation frame (a) that {\small $[\mathfrak{M} \models F] \vee^{\dagger} [\mathfrak{M}\models \neg F] = 1$} and {\small $[\mathfrak{M} \models F] \wedge^{\dagger} [\mathfrak{M} \models \neg F] = 0$}; and (b) either that {\small $0 \not= 1 = [\ensuremath{\mathfrak{M}} \models F]$} or that {\small $1 \not= 0 = [\ensuremath{\mathfrak{M}} \models F]$}, partially settling the other two inquiries. \item to prove that the reduction through {\small $\neg$} reductions and {\small $\gtrdot$} reductions on any formula {\small $F \in \mathfrak{F}$} is normal in the sense that, in whatever order those reduction rules are applied to {\small $F$}, every formula {\small $F_{\code{reduced}}$} that {\small $F$} may reduce into satisfies for every valuation frame either that {\small $[\mathfrak{M} \models F_{\code{reduced}}] = 1$} or that {\small $[\mathfrak{M} \models F_{\code{reduced}}] = 0$}, which concludes the proof. \end{enumerate} \subsubsection{Every formula is 0/1-assignable} \vspace{-0.1mm} We state several definitions for the first objective.
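Before the formal development, the valuation clauses of Definition \ref{model} can be made concrete with a small executable sketch. The following Python fragment is illustrative only and not part of the formal development: all names (\code{Chain}, \code{And}, \code{Or}, \code{evaluate}, \code{J}) are my own, chains are modelled as tuples of attribute symbols valued directly by a function playing the role of {\small $\mathsf{J}$}, and the meta-connectives {\small $\wedge^{\dagger}$} and {\small $\vee^{\dagger}$} are ordinary Boolean operations on {\small $\{0, 1\}$}.

```python
# Illustrative sketch of the valuation clauses: a chain s_0 > s_1 > ... > s_k
# is valued directly by J, and 'and'/'or' are interpreted by the
# meta-connectives, here ordinary Boolean operations on {0, 1}.
# All names are hypothetical, not taken from the paper.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Chain:            # s_0 > s_1 > ... > s_k, each s_i an attribute symbol
    symbols: tuple

@dataclass(frozen=True)
class And:
    left: 'Formula'
    right: 'Formula'

@dataclass(frozen=True)
class Or:
    left: 'Formula'
    right: 'Formula'

Formula = Union[Chain, And, Or]

def evaluate(formula: Formula, J) -> int:
    """Return [M |= formula] in {0, 1}, given a chain valuation J."""
    if isinstance(formula, Chain):
        return J(formula.symbols)       # [M |= s_0 > ... > s_k] = J(s_0...s_k)
    if isinstance(formula, And):
        return evaluate(formula.left, J) & evaluate(formula.right, J)
    if isinstance(formula, Or):
        return evaluate(formula.left, J) | evaluate(formula.right, J)
    raise TypeError(formula)

# Toy valuation: a chain holds iff all of its symbols lie in a fixed set.
J = lambda symbols: int(all(s in {'s0', 's1'} for s in symbols))

f = Or(And(Chain(('s0',)), Chain(('s0', 's1'))), Chain(('s2',)))
print(evaluate(f, J))   # 1: the left conjunct holds under this J
```

Negation is deliberately absent from the sketch, since the semantics values only formulas already reduced to unit chain expansion; negation is handled by the reduction rules.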
\begin{definition}[Chains/Unit chains]{\ }\\ A chain is defined to be any formula {\small $F \in \mathfrak{F}$} such that {\small $F = F_0 \gtrdot F_1 \gtrdot \dots \gtrdot F_{k+1}$} for {\small $k \in \mathbb{N}$}. A unit chain is defined to be a chain for which {\small $F_i \in \mathcal{S}$} for all {\small $0 \le i \le k+1$}. We denote the set of unit chains by {\small $\mathfrak{U}$}. By the head of a chain {\small $F \in \mathfrak{F}$}, we mean some formula {\small $F_a \in \mathfrak{F}$} satisfying (1) that {\small $F_a$} is not in the form {\small $F_b \gtrdot F_c$} for some {\small $F_b, F_c \in \mathfrak{F}$} and (2) that {\small $F = F_a \gtrdot F_d$} for some {\small $F_d \in \mathfrak{F}$}. By the tail of a chain {\small $F \in \mathfrak{F}$}, we then mean some formula {\small $F_d \in \mathfrak{F}$} such that {\small $F = F_a \gtrdot F_d$} for some {\small $F_a$} as the head of {\small $F$}. \end{definition} \begin{definition}[Unit chain expansion]{\ }\\ Given any {\small $F \in \mathfrak{F}$}, we say that {\small $F$} is expanded in unit chains only if any chain that occurs in {\small $F$} is a unit chain. \hide{ Given any formula {\small $F \in \mathfrak{F}$}, we denote by {\small $[F]_p$} of {\small $\mathfrak{F}$} such that it contains only those formulas (A) that result from applying rules of transformations \indent {\small $\{F' (\in \mathfrak{F})\ | \ F' \text{ is expanded in primary chains via \textbf{Transformations} in Definition 3.}\}$}. Likewise, we denote by {\small $[F]_u$} the set;\\ \indent {\small $\{F' (\in \mathfrak{F})\ | \ F' \text{ is expanded in unit chains via Definition 3.}\}$}.\footnote{ Given a formula {\small $F$}, it may be that {\small $[F]_u$} is always a singleton set, {\small $F$} always leading to a unique unit expansion via Definition 3. 
But I do not verify this in this paper, as no later results strictly require the stronger statement.} } \end{definition} \begin{definition}[Formula size] The size of a formula is defined inductively. Let {\small $F$} be some arbitrary formula, and let {\small $\ensuremath{\code{f}\_\code{size}}(F)$} be the formula size of {\small $F$}. Then it holds that: \begin{itemize} \item {\small $\ensuremath{\code{f}\_\code{size}}(F) = 1$} if {\small $F \in \mathcal{S}$}. \item {\small $\ensuremath{\code{f}\_\code{size}}(F) = \ensuremath{\code{f}\_\code{size}}(F_1) + \ensuremath{\code{f}\_\code{size}}(F_2) + 1$} if {\small $F = F_1 \wedge F_2$}, {\small $F = F_1 \vee F_2$}, or {\small $F = F_1 \gtrdot F_2$}. \item {\small $\ensuremath{\code{f}\_\code{size}}(F) = \ensuremath{\code{f}\_\code{size}}(F_1) + 1$} if {\small $F = \neg F_1$}. \end{itemize} \end{definition} \begin{definition}[Maximal number of {\small $\neg$} nestings]{\ }\\ Given a formula {\small $F \in \mathfrak{F}$}, we denote by {\small $\ensuremath{\code{neg}\_\code{max}}(F)$} the maximal number of {\small $\neg$} nestings in {\small $F$}, defined as follows: \begin{itemize} \item If {\small $F_0 = s$}, then {\small $\ensuremath{\code{neg}\_\code{max}}(F_0) = 0$}. \item If {\small $F_0 = F_1 \wedge F_2$} or {\small $F_0 = F_1 \vee F_2$} or {\small $F_0 = F_1 \gtrdot F_2$}, then {\small $\ensuremath{\code{neg}\_\code{max}}(F_0) = \max(\ensuremath{\code{neg}\_\code{max}}(F_1), \ensuremath{\code{neg}\_\code{max}}(F_2))$}. \item If {\small $F_0 = \neg F_1$}, then {\small $\ensuremath{\code{neg}\_\code{max}}(F_0) = 1 + \ensuremath{\code{neg}\_\code{max}}(F_1)$}. \\ \end{itemize} \end{definition} We now work on the main results. \begin{lemma}[Linking principle] Let {\small $F_1$} and {\small $F_2$} be two formulas in unit chain expansion. Then it holds that {\small $F_1 \gtrdot F_2$} has a reduction into a formula in unit chain expansion. \label{linking_principle} \end{lemma} \begin{IEEEproof} In Appendix A.
\end{IEEEproof} \hide{ \begin{IEEEproof} First apply {\small $\gtrdot$} reductions 2 and 3 on {\small $F_1 \gtrdot F_2$} into a formula in which the only occurrences of the chains are {\small $f_0 \gtrdot F_2$}, {\small $f_1 \gtrdot F_2$}, \dots, {\small $f_{k} \gtrdot F_2$} for some {\small $k \in \mathbb{N}$} and some {\small $f_0, f_1, \dots, f_k \in \mathfrak{U} \cup \mathcal{S}$}. Then apply {\small $\gtrdot$} reductions 4 and 5 to each of those chains into a formula in which the only occurrences of the chains are: {\small $f_0 \gtrdot g_{0}, f_0 \gtrdot g_{1}, \dots, f_0 \gtrdot g_{j}$}, {\small $f_1 \gtrdot g_{0}$}, \dots, {\small $f_1 \gtrdot g_{j}$}, \dots, {\small $f_k \gtrdot g_{0}$}, \dots, {\small $f_k \gtrdot g_j$} for some {\small $j \in \mathbb{N}$} and some {\small $g_0, g_1, \dots, g_j \in \mathfrak{U}$}. To each such chain, apply {\small $\gtrdot$} reduction 1 as long as it is applicable. This process cannot continue infinitely since any formula is finitely constructed and since, under the premise, we can apply induction on the number of elements of {\small $\mathcal{S}$} occurring in {\small $g_x$}, {\small $0 \le x \le j$}. The straightforward inductive proof is left to readers. The result is a formula in unit chain expansion. \\ \end{IEEEproof} } \begin{lemma}[Reduction without negation] Any formula {\small $F_0 \in \mathfrak{F}$} in which no {\small $\neg$} occurs reduces into some formula in unit chain expansion. \label{normalisation_without_negation} \end{lemma} \begin{IEEEproof} By induction on formula size. For inductive cases, consider what {\small $F_0$} actually is: \begin{enumerate} \item {\small $F_0 = F_1 \wedge F_2$} or {\small $F_0 = F_1 \vee F_2$}: Apply the induction hypothesis on {\small $F_1$} and {\small $F_2$}. \item {\small $F_0 = F_1 \gtrdot F_2$}: Apply the induction hypothesis on {\small $F_1$} and {\small $F_2$} to get {\small $F'_1 \gtrdot F'_2$} where {\small $F'_1$} and {\small $F'_2$} are formulas in unit chain expansion.
Then apply Lemma \ref{linking_principle}. \end{enumerate} \end{IEEEproof} \begin{lemma}[Reduction] Any formula {\small $F_0 \in \mathfrak{F}$} reduces into some formula in unit chain expansion. \label{reduction_result} \end{lemma} \begin{IEEEproof} By induction on maximal number of {\small $\neg$} nestings and a sub-induction on formula size. Lemma \ref{normalisation_without_negation} for base cases. Details are in Appendix B. \hide{ For inductive cases, assume that the current lemma holds true for all the formulas with {\small $\ensuremath{\code{neg}\_\code{max}}(F_0)$} of up to {\small $k$}. Then we conclude by showing that it still holds true for all the formulas with {\small $\ensuremath{\code{neg}\_\code{max}}(F_0)$} of {\small $k+1$}. Now, because any formula is finitely constructed, there exist sub-formulas in which occur no {\small $\neg$}. By Lemma \ref{normalisation_without_negation}, those sub-formulas have a reduction into a formula in unit chain expansion. Hence it suffices to show that those formulas {\small $\neg F'$} with {\small $F'$} already in unit chain expansion reduce into a formula in unit chain expansion, upon which inductive hypothesis applies for a conclusion. Consider what {\small $F'$} is: \begin{enumerate} \item {\small $s$}: then apply {\small $\neg$} reduction 1 on {\small $\neg F'$} to remove the {\small $\neg$} occurrence. \item {\small $F_a \wedge F_b$}: apply {\small $\neg$} reduction 2. Then apply (sub-)induction hypothesis on {\small $\neg F_a$} and {\small $\neg F_b$}. \item {\small $F_a \vee F_b$}: apply {\small $\neg$} reduction 3. Then apply (sub-)induction hypothesis on {\small $\neg F_a$} and {\small $\neg F_b$}. \item {\small $s \gtrdot F \in \mathfrak{U}$}: apply {\small $\neg$} reduction 4. Then apply (sub-)induction hypothesis on {\small $\neg F$}. 
\end{enumerate} } \end{IEEEproof} \hide{ \begin{lemma}[Reduction into unit chains] Given any unit tree {\small $F \in \mathfrak{F}$}, there exists a formula {\small $F_1 \in \mathfrak{F}$} in unit chain expansion. Moreover, {\small $\code{recursiveReduce}(F)$} is the unique reduction of {\small $F$}. \label{unit_tree_reduction} \end{lemma} \begin{IEEEproof} By rules in \textbf{Transformations}. \\ \end{IEEEproof} } \begin{lemma} For any {\small $F \in \mathfrak{F}$} expanded in unit chains and any valuation frame, there exists {\small $v \in \{0,1\}$} such that {\small $[\ensuremath{\mathfrak{M}} \models F] = v$}. \label{simple_lemma2} \end{lemma} \begin{IEEEproof} Since a value 0/1 is assignable to any element of {\small $\mathcal{S} \cup \mathfrak{U}$} by Definition \ref{model}, it is (or they are if more than one in \{0, 1\}) assignable to {\small $[\mathfrak{M} \models F]$}. \\ \end{IEEEproof} Hence we obtain the desired result for the first objective. \begin{proposition} To any {\small $F \in \mathfrak{F}$} there corresponds at least one formula {\small $F_a$} in unit chain expansion into which {\small $F$} reduces. It holds for any such {\small $F_a$} that {\small $[\mathfrak{M} \models F_a] \in \{0, 1\}$} for any valuation frame. \\ \label{simple_proposition} \end{proposition} For the next sub-section, the following observation about negation on a unit chain comes in handy. Let us state a procedure. \begin{definition}[Procedure \code{recursiveReduce}]{\ }\\ The procedure given below takes as input a formula {\small $F$} in unit chain expansion. \\ \textbf{Description of {\small $\code{recursiveReduce}(F)$}} \begin{enumerate} \item Replace {\small $\wedge$} in {\small $F$} with {\small $\vee$}, and {\small $\vee$} with {\small $\wedge$}. These two operations are simultaneous. \item Replace all the non-chains {\small $s \in \mathcal{S}$} in {\small $F$} simultaneously with {\small $s^c\ (\in \mathcal{S})$}.
\item For every chain {\small $F_a$} in {\small $F$} with its head {\small $s \in \mathcal{S}$} for some {\small $s$} and its tail {\small $F_{\code{tail}}$}, replace {\small $F_a$} with {\small $(s^c \vee (s \gtrdot (\code{recursiveReduce}(F_{\code{tail}}))))$}. \item Reduce {\small $F$} via the {\small $\gtrdot$} reductions into unit chain expansion. \end{enumerate} \end{definition} Then we have the following result. \begin{proposition}[Reduction of negated unit chain expansion] Let {\small $F$} be a formula in unit chain expansion. Then {\small $\neg F$} reduces via the {\small $\neg$} and {\small $\gtrdot$} reductions into {\small $\code{recursiveReduce}(F)$}. Moreover, {\small $\code{recursiveReduce}(F)$} is the unique reduction of {\small $\neg F$}. \vspace{-4mm} \label{special_reduction} \end{proposition} \begin{IEEEproof} For the uniqueness, observe that only the {\small $\neg$} reductions and {\small $\gtrdot$} reduction 5 are used in the reduction of {\small $\neg F$}, and that at any point during the reduction, if there occurs a sub-formula of the form {\small $\neg F_x$}, the sub-formula {\small $F_x$} cannot be reduced by any reduction rule. Then the proof of the uniqueness is straightforward. \\ \end{IEEEproof} \hide{ \begin{lemma}[Simple observation]{\ }\\ Let {\small $\psi$} denote {\small $s_1.s_2.\dots.s_{k}$} for some {\small $k \in \mathbb{N}$} ({\small $k = 0$} means that {\small $s_1.s_2.\dots.s_{k} = \epsilon$}). Then it holds that {\small $[\models_{\psi} F] = [\models_{\epsilon} s_1 \gtrdot s_2 \gtrdot \dots \gtrdot s_{k} \gtrdot F]$}.
{\ }\\ \label{simple_observation} \end{lemma} \begin{lemma}[Formula reconstruction]{\ }\\ Given any {\small $x \in \mathfrak{X}$}, there exists a formula {\small $F \in \mathfrak{F}$} such that {\small $x = [\models_{\epsilon} F]$} and that all the chains occurring in {\small $x$}\footnote{ In the following sense: for any formulas which occur in the uninterpreted expression {\small $x$}, any chain that occurs in any one of them is said to occur in {\small $x$}.} preserve in {\small $F$}. \label{formula_reconstruction} \end{lemma} \begin{proof} Use the following recursions to derive a formula {\small $F_a$} from {\small $\underline{x}$}; \begin{itemize} \item {\small $\underline{x_1 \oplus x_2} \leadsto \underline{x_1} \wedge \underline{x_2}$}. \item {\small $\underline{x_1 \odot x_2} \leadsto \underline{x_1} \vee \underline{x_2}$}. \item {\small $\underline{[\models_{s_0.s_1\dots.s_{k}} F_b]} \leadsto s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_{k} \gtrdot F_b$} for {\small $k \in \mathbb{N}$}. \end{itemize} We choose {\small $F_a$} for {\small $F$}, as required. \end{proof} {\ }\\ In the rest, given any {\small $x \in \mathfrak{X}$} such that {\small $\underline{x} \leadsto F_a$} (\emph{Cf.} Lemma \ref{formula_reconstruction}), we let {\small $F_{\widehat{x}}$} denote the {\small $F_a$}. \begin{lemma} Given any {\small $F \in \mathfrak{F}$}, it holds that {\small $[\models_{\psi} F] = [\models_{\psi} F']$} for some {\small $F' \in \mathfrak{F}$} which is expanded in primary chains. \label{primary_chain_expansion} \end{lemma} \begin{proof} By induction on the formula depth of {\small $F$} and by a sub-induction on the formula depth of the head of {\small $F$}.\footnote{Make sure to define the formula depth first.} Assume {\small $n \in \mathbb{N}$}, {\small $s_x \in \mathcal{S}$} for all {\small $x$} and {\small $F_x \in \mathfrak{F}$} for all {\small $x$}. Consider what {\small $F$} looks like. \begin{enumerate} \item {\small $F = \neg^n s_1$}: This is a base case. 
Apply the complement axiom repeatedly. Then vacuous by {\small $(s^c)^c = s$} (defined at the beginning of the previous sub-section). \item {\small $F = \neg^n s_1 \gtrdot F_2$}: Another base case. Apply the complement axiom repeatedly on the head of the chain. Then again vacuous by {\small $(s^c)^c = s$} and induction hypothesis (of the sub-induction). \item {\small $F = \neg^n (F_1 \wedge F_2)$}: Apply the De Morgan axioms repeatedly and then apply induction hypothesis on {\small $\neg^n F_1$} and {\small $\neg^n F_2$}. \item {\small $F = \neg^n (F_1 \wedge F_2) \gtrdot F_3$}: Apply the De Morgan axioms repeatedly on {\small $\neg^n (F_1 \wedge F_2)$} and then the distribution axioms. Then apply induction hypothesis on {\small $\neg^n F_1 \gtrdot F_3$} and on {\small $\neg^n F_2 \gtrdot F_3$}. \item {\small $F = \neg^n (F_1 \vee F_2)$} or {\small $F = \neg^n (F_1 \vee F_2) \gtrdot F_3$}: Similar. \item {\small $F = \neg^n (F_1 \gtrdot F_2)$}: Apply the De Morgan axioms repeatedly to push all the {\small $n$} {\small $\neg$}'s in the outermost bracket. Then apply induction hypothesis on any chain and on any {\small $\neg^k F_1$} for {\small $k \in \{0,\cdots,n\}$}. \item {\small $F = \neg^n (F_1 \gtrdot F_2) \gtrdot F_3$}: Apply the De Morgan axioms repeatedly to expand the head of the chain in the same way as the previous sub-case. Denote the formula by {\small $F_a$}. Next, apply the distribution axioms on {\small $F_a$} such that {\small $F_a$} expands into a formula in which the head of all the chains is {\small $\neg^k F_1$} for some {\small $k \in \{0,\cdots,n\}$}. Apply induction hypothesis on all the chains. \end{enumerate} \end{proof} \begin{proposition} Given any {\small $F \in \mathfrak{F}$}, it holds that {\small $[\models_{\epsilon} F] = [\models_{\epsilon} F']$} for some {\small $F' \in \mathfrak{F}$} which is expanded in unit chains. 
\label{unit_chain_expansion} \end{proposition} \begin{proof} By Lemma \ref{primary_chain_expansion}, for any {\small $\psi' \in \mathcal{S}^*$}, we succeed in deriving {\small $[\models_{\psi'} F_a] = [\models_{\psi'} F_b]$} such that {\small $F_b$} is a primary chain expansion of {\small $F_a$}. Then the current proposition follows because any {\small $F \in \mathfrak{F}$}, and therefore any {\small $\psi \in \mathcal{S}^*$} that may appear in the process of uninterpreted transformations are finite. \end{proof} {\ }\\ \begin{proposition}[Reduction into normal form]{\ } Given any {\small $F \in \mathfrak{F}$}, it holds that {\small $[\models_{\epsilon} F] = [\models_{\epsilon} F']$} for some formula {\small $F' \in \mathfrak{F}$} in disjunctive/conjunctive normal form. \label{reduction_into_normal_form} \end{proposition} \begin{proof} By Proposition \ref{unit_chain_expansion}, there exists a formula {\small $F_a$} in unit expansion such that {\small $[\models_{\epsilon} F] = [\models_{\epsilon} F_a]$}, which can be transformed into {\small $x \in \mathfrak{X}$} via the two rules of \textbf{Transformations} {\small $[\models_{\psi} F_1 \wedge F_2] = [\models_{\psi} F_1] \oplus [\models_{\psi} F_2]$} and {\small $[\models_{\psi} F_1 \vee F_2] = [\models_{\psi} F_1] \odot [\models_{\psi} F_2]$} (obviously {\small $\psi = \epsilon$} here) such that no {\small $\wedge$} or {\small $\vee$} occur within it, while preserving all the unit chains. By the full distributivity of {\small $\oplus$} over {\small $\odot$} and vice versa that hold by definition, then, it can be transformed into an uninterpreted expression in the form: {\small $\odot_{i = 0}^k \oplus_{j=0}^{h_i} x_{ij}$} for some {\small $i, j, k \in \mathbb{N}$} and some {\small $x_{00}, \dots, x_{kh_i} \in \mathfrak{X}$} (such that {\small $F_{\overrightarrow{x_{ij}}}$} are all in {\small $\mathfrak{U}$}), as required for disjunctive normal form. Dually for the conjunctive normal form. 
\end{proof} {\ }\\ This concludes the first proof that for any formula {\small $F \in \mathfrak{F}$} {\small $[\models_{\epsilon} F]$} is assigned a non-* value. } \subsubsection{Unit chain expansions form a Boolean algebra}{\ }\\ We make use of disjunctive normal form in this sub-section to simplify the proofs. \begin{definition}[Disjunctive/Conjunctive normal form] A formula {\small $F \in \mathfrak{F}$} is defined to be in disjunctive normal form only if {\small $\exists i,j,k \in \mathbb{N}\ \exists h_{0}, \cdots, h_k \in \mathbb{N}\ \exists f_{00}, \dots, f_{kh_k} \in \mathfrak{U} \cup \mathcal{S}.F = \vee_{i =0}^k \wedge_{j = 0}^{h_i} f_{ij}$}. Dually, a formula {\small $F \in \mathfrak{F}$} is defined to be in conjunctive normal form only if {\small $\exists i, j, k \in \mathbb{N}\ \exists h_0, \cdots, h_k \in \mathbb{N}\ \exists f_{00}, \dots, f_{kh_k} \in \mathfrak{U} \cup \mathcal{S}.F = \wedge_{i=0}^k \vee_{j =0}^{h_i} f_{ij}$}. \\ \end{definition} Now, for the second objective of ours, we prove that {\small $\mathfrak{U} \cup \mathcal{S}$}, {\small $\code{recursiveReduce}$}, {\small $\vee^{\dagger}$} and {\small $\wedge^{\dagger}$} form a Boolean algebra (\emph{Cf.} \cite{wikiBooleanAlgebra} for the laws of Boolean algebra), from which the required outcome follows. \begin{proposition}[Annihilation/Identity] For any formula {\small $F$} in unit chain expansion and for any valuation frame, it holds (1) that {\small $[\ensuremath{\mathfrak{M}} \models \top \wedge F] = [\ensuremath{\mathfrak{M}} \models F]$}; (2) that {\small $[\ensuremath{\mathfrak{M}} \models \top \vee F] = [\ensuremath{\mathfrak{M}} \models \top]$}; (3) that {\small $[\ensuremath{\mathfrak{M}} \models \bot \wedge F] = [\ensuremath{\mathfrak{M}} \models \bot]$}; and (4) that {\small $[\ensuremath{\mathfrak{M}} \models \bot \vee F] = [\ensuremath{\mathfrak{M}} \models F]$}.
\end{proposition} \begin{lemma}[Elementary complementation] For any {\small $s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_k \in \mathfrak{U} \cup \mathcal{S}$} for some {\small $k \in \mathbb{N}$}, if for a given valuation frame it holds that {\small $[\mathfrak{M} \models s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_{k}] = 1$}, then it also holds that {\small $[\ensuremath{\mathfrak{M}} \models \code{recursiveReduce}(s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_{k})] = 0$}; or if it holds that {\small $[\ensuremath{\mathfrak{M}} \models s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_{k}] = 0$}, then it holds that {\small $[\ensuremath{\mathfrak{M}} \models \code{recursiveReduce}(s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_{k})] = 1$}. These two events are mutually exclusive. \label{unit_chain_excluded_middle} \end{lemma} \begin{IEEEproof} In Appendix C. \hide{ For the first one, {\small $[(\mathsf{I}, \mathsf{J})\models s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_{k}] = 1$} implies that {\small $\mathsf{I}(|\epsilon |, s_0)\! =\! \mathsf{I}(|s_0|, s_1) \!=\! \dots \!=\! \mathsf{I}(| s_0.s_1.\dots.s_{k - 1}|, s_{k})\! =\! 1$}. So we have; {\small $\mathsf{I}(| \epsilon |, s_0^c) = \mathsf{I}(| s_0 |, s_1^c) = \dots = \mathsf{I}(| s_0.s_1\dots.s_{k - 1} |, s_{k}^c) = 0$} by the definition of {\small $\mathsf{I}$}. Meanwhile, {\small $\code{recursiveReduce}(s_0 \gtrdot s_1 \gtrdot \cdots \gtrdot s_k) = s_0^c \vee (s_0 \gtrdot ((s_1^c \vee (s_1 \gtrdot \cdots)))) = s_0^c \vee (s_0 \gtrdot s_1^c) \vee (s \gtrdot s_1 \gtrdot s_2^c) \vee \cdots \vee (s \gtrdot s_1 \gtrdot \cdots \gtrdot s_{k-1} \gtrdot s_k^c)$}. Therefore {\small $[(\mathsf{I}, \mathsf{J})\models \code{recursiveReduce}(s_0 \gtrdot s_1 \gtrdot \cdots \gtrdot s_k)] = 0 \not= 1$} for the given interpretation frame. 
\\ \indent For the second obligation, {\small $[(\mathsf{I}, \mathsf{J})\models s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_{k}] = 0$} implies that {\small $[\mathsf{I}(|\epsilon|, s_0) = 0] \vee^{\dagger} [\mathsf{I}(|s_0|, s_1) = 0] \vee^{\dagger} \dots \vee^{\dagger} [\mathsf{I}(|s_0.s_1.\dots. s_{k -1}|, s_{k}) = 0]$}. Again by the definition of {\small $\mathsf{I}$}, we have the required result. That these two events are mutually exclusive is trivial. \\ } \end{IEEEproof} \begin{proposition}[Associativity/Commutativity/Distributivity] Given any formulas {\small $F_1, F_2, F_3 \in \mathfrak{F}$} in unit chain expansion and any valuation frame {\small $\mathfrak{M}$}, the following hold: \begin{enumerate} \item {\small $[\ensuremath{\mathfrak{M}} \models F_1] \wedge^{\dagger} ([\ensuremath{\mathfrak{M}} \models F_2] \wedge^{\dagger} [\ensuremath{\mathfrak{M}} \models F_3]) = ([\ensuremath{\mathfrak{M}} \models F_1] \wedge^{\dagger} [\ensuremath{\mathfrak{M}} \models F_2]) \wedge^{\dagger} [\ensuremath{\mathfrak{M}} \models F_3]$} (associativity 1). \item {\small $[\ensuremath{\mathfrak{M}} \models F_1] \vee^{\dagger} ([\ensuremath{\mathfrak{M}} \models F_2] \vee^{\dagger} [\ensuremath{\mathfrak{M}} \models F_3]) = ([\ensuremath{\mathfrak{M}} \models F_1] \vee^{\dagger} [\ensuremath{\mathfrak{M}} \models F_2]) \vee^{\dagger} [\ensuremath{\mathfrak{M}} \models F_3]$} (associativity 2). \item {\small $[\ensuremath{\mathfrak{M}} \models F_1] \wedge^{\dagger} [\ensuremath{\mathfrak{M}} \models F_2] = [\ensuremath{\mathfrak{M}} \models F_2] \wedge^{\dagger} [\ensuremath{\mathfrak{M}} \models F_1]$} (commutativity 1). \item {\small $[\ensuremath{\mathfrak{M}} \models F_1] \vee^{\dagger} [\ensuremath{\mathfrak{M}} \models F_2] = [\ensuremath{\mathfrak{M}} \models F_2] \vee^{\dagger} [\ensuremath{\mathfrak{M}} \models F_1]$} (commutativity 2).
\item {\small $[\ensuremath{\mathfrak{M}} \models F_1] \wedge^{\dagger} ([\ensuremath{\mathfrak{M}} \models F_2] \vee^{\dagger} [\ensuremath{\mathfrak{M}} \models F_3]) = ([\ensuremath{\mathfrak{M}} \models F_1] \wedge^{\dagger} [\ensuremath{\mathfrak{M}} \models F_2]) \vee^{\dagger} ({[\ensuremath{\mathfrak{M}} \models F_1]} \wedge^{\dagger} [\ensuremath{\mathfrak{M}} \models F_3])$} (distributivity 1). \item {\small $[\ensuremath{\mathfrak{M}} \models F_1] \vee^{\dagger} ([\ensuremath{\mathfrak{M}} \models F_2] \wedge^{\dagger} [\ensuremath{\mathfrak{M}} \models F_3]) = ([\ensuremath{\mathfrak{M}} \models F_1] \vee^{\dagger} [\ensuremath{\mathfrak{M}} \models F_2]) \wedge^{\dagger} ({[\ensuremath{\mathfrak{M}} \models F_1]} \vee^{\dagger} [\ensuremath{\mathfrak{M}} \models F_3])$} (distributivity 2). \end{enumerate} \label{associativity_commutativity_distributivity} \end{proposition} \begin{IEEEproof} Make use of Lemma \ref{unit_chain_excluded_middle}. Details are in Appendix D. \hide{ Let us generate a set of expressions finitely constructed from the following grammar;\\ \indent {\small $X := [(\mathsf{I}, \mathsf{J})\models f] \ | \ X \wedge^{\dagger} X \ | \ X \vee^{\dagger} X$} where {\small $f \in \mathfrak{U} \cup \mathcal{S}$}. \\ Then first of all it is straightforward to show that {\small $[(\mathsf{I}, \mathsf{J})\models F_i] = X_i$} for each {\small $i \in \{1,2,3\}$} for some {\small $X_1, X_2, X_3$} that the above grammar recognises. By Lemma \ref{unit_chain_excluded_middle} each expression ({\small $[(\mathsf{I}, \mathsf{J})\models f_x]$} for some {\small $f_x \in \mathfrak{U} \cup \mathcal{S}$}) is assigned one and only one value {\small $v \in \{0,1\}$}. 
Then since {\small $1 \vee^{\dagger} 1 = 1 \vee^{\dagger} 0 = 0 \vee^{\dagger} 1 = 1$}, {\small $0 \wedge^{\dagger} 0 = 0 \wedge^{\dagger} 1 = 1 \wedge^{\dagger} 0 = 0$}, and {\small $1 \wedge^{\dagger} 1 = 1$} by definition (given at the beginning of this section), it is also the case that {\small $[(\mathsf{I}, \mathsf{J})\models F_i]$} is assigned one and only one value {\small $v_i \in \{0,1\}$} for each {\small $i \in \{1,2,3\}$}. Then the proof for the current proposition is straightforward. \\ } \end{IEEEproof} \hide{ \begin{corollary} Let {\small $F$} denote {\small $s_0 \gtrdot s_1 \gtrdot \cdots \gtrdot s_k \in \mathfrak{U} \cup \mathcal{S}$} for some {\small $k$}. Then it holds, for any interpretation frame {\small $(\mathsf{I}, \mathsf{J})$}, that {\small $[(\mathsf{I}, \mathsf{J})\models F \vee \code{recursiveReduce}(F)] = 1 \not= 0$} and also that {\small $[(\mathsf{I}, \mathsf{J})\models F \wedge \code{recursiveReduce}(F)] = 0 \not= 1$}. \label{corollary_1} \end{corollary} \begin{proof} {\small $[(\mathsf{I}, \mathsf{J})\models F \vee \code{recursiveReduce}(F)] = [(\mathsf{I}, \mathsf{J})\models F] \vee^{\dagger} [(\mathsf{I}, \mathsf{J})\models \code{recursiveReduce}(F)] = [(\mathsf{I}, \mathsf{J})\models F] \vee^{\dagger} [(\mathsf{I}, \mathsf{J})\models s^c_0] \vee^{\dagger} [(\mathsf{I}, \mathsf{J})\models s_0 \gtrdot s^c_1] \vee^{\dagger} \cdots \vee^{\dagger} [(\mathsf{I}, \mathsf{J})\models s_0 \gtrdot s_1 \gtrdot \cdots \gtrdot s_{k-1} \gtrdot s^c_k] = 1 \not= 0$} by the definition of {\small $\mathsf{I}$} and {\small $\mathsf{J}$} valuations. 
{\small $[(\mathsf{I}, \mathsf{J})\models F \wedge \code{recursiveReduce}(F)] = [(\mathsf{I}, \mathsf{J})\models F] \wedge^{\dagger} [(\mathsf{I}, \mathsf{J})\models \code{recursiveReduce}(F)] = ([(\mathsf{I}, \mathsf{J})\models F] \wedge^{\dagger} [(\mathsf{I}, \mathsf{J})\models s^c_0]) \vee^{\dagger} ([(\mathsf{I}, \mathsf{J})\models F] \wedge^{\dagger} [(\mathsf{I}, \mathsf{J})\models s_0 \gtrdot s^c_1]) \vee^{\dagger} \dots \vee^{\dagger} ([(\mathsf{I}, \mathsf{J})\models F] \wedge^{\dagger} [(\mathsf{I}, \mathsf{J})\models s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s^c_{k}]) = 0 \not= 1$}. The last equality holds due to Proposition \ref{associativity_commutativity_distributivity}.\\ \end{proof} } \begin{proposition}[Idempotence and Absorption] Given any formulas {\small $F_1, F_2 \in \mathfrak{F}$} in unit chain expansion, for any valuation frame it holds that {\small $[\ensuremath{\mathfrak{M}} \models F_1] \wedge^{\dagger} [\ensuremath{\mathfrak{M}} \models F_1] = {[\ensuremath{\mathfrak{M}} \models F_1]} \vee^{\dagger} [\ensuremath{\mathfrak{M}} \models F_1] = [\ensuremath{\mathfrak{M}} \models F_1]$} (idempotence); and that {\small $[\ensuremath{\mathfrak{M}} \models F_1] \wedge^{\dagger} ([\ensuremath{\mathfrak{M}} \models F_1] \vee^{\dagger} {[\ensuremath{\mathfrak{M}} \models F_2]}) = [\ensuremath{\mathfrak{M}} \models F_1] \vee^{\dagger} ([\ensuremath{\mathfrak{M}} \models F_1] \wedge^{\dagger} [\ensuremath{\mathfrak{M}} \models F_2]) = [\ensuremath{\mathfrak{M}} \models F_1]$} (absorption). \label{idempotence_absorption} \end{proposition} \begin{IEEEproof} Each of {\small $F_1, F_2$} is assigned exactly one value {\small $v \in \{0,1\}$} (\emph{Cf}. Appendix D). Trivial to verify. \\ \end{IEEEproof} We now prove laws involving \code{recursiveReduce}. \begin{lemma}[Elementary double negation] Let {\small $F$} denote\linebreak {\small $s_0 \gtrdot s_1 \gtrdot \cdots \gtrdot s_k \in \mathfrak{U} \cup \mathcal{S}$} for some {\small $k \in \mathbb{N}$}.
Then for any valuation frame it holds that {\small $[\ensuremath{\mathfrak{M}} \models F] = [\ensuremath{\mathfrak{M}} \models \code{recursiveReduce}(\code{recursiveReduce}(F))]$}. \label{unit_double_negation} \end{lemma} \begin{IEEEproof} {\small $\code{recursiveReduce}(\code{recursiveReduce}(F))$} is in conjunctive normal form. Transform this to disjunctive normal form, and observe that almost all the clauses are assigned 0. Details are in Appendix E. \hide{ {\small $\code{recursiveReduce}(\code{recursiveReduce}(F)) = \code{recursiveReduce}(s^c_0 \vee (s_0 \gtrdot s^c_1) \vee \cdots \vee (s_0 \gtrdot s_1 \gtrdot \cdots \gtrdot s_{k-1} \gtrdot s^c_{k})) = s_0 \wedge (s^c_0 \vee (s_0 \gtrdot s_1)) \wedge (s_0^c \vee (s_0 \gtrdot s_1^c) \vee (s_0 \gtrdot s_1 \gtrdot s_2)) \wedge \cdots \wedge (s^c_0 \vee (s_0 \gtrdot s_1^c) \vee \cdots \vee (s_0 \gtrdot s_1 \gtrdot \cdots \gtrdot s_{k-2} \gtrdot s^c_{k-1}) \vee (s_0 \gtrdot s_1 \gtrdot \cdots \gtrdot s_{k}))$}. Here, assume that the right hand side of the equation which is in conjunctive normal form is ordered, the number of terms, from left to right, strictly increasing from 1 to {\small $k + 1$}. Then as the result of a transformation of the conjunctive normal form into disjunctive normal form we will have 1 (the choice from the first conjunctive clause which contains only one term {\small $s_0$}) {\small $\times$} 2 (a choice from the second conjunctive clause with 2 terms {\small $s_0^c$} and {\small $s_0 \gtrdot s_1$}) {\small $\times$} \ldots {\small $\times$} (k $+$ 1) clauses. 
But almost all the clauses in {\small $[(\mathsf{I}, \mathsf{J})\models (\text{the disjunctive normal form})]$} will be assigned 0 (trivial; the proof left to readers) so that we gain {\small $[(\mathsf{I}, \mathsf{J})\models (\text{the disjunctive normal form})] = [(\mathsf{I}, \mathsf{J})\models s_0] \wedge^{\dagger} [(\mathsf{I}, \mathsf{J})\models s_0 \gtrdot s_1] \wedge^{\dagger} \cdots \wedge^{\dagger} [(\mathsf{I}, \mathsf{J})\models s_0 \gtrdot s_1 \gtrdot \cdots \gtrdot s_k] = [(\mathsf{I}, \mathsf{J})\models s_0 \gtrdot s_1 \gtrdot \cdots \gtrdot s_k]$}. \\ } \end{IEEEproof} \begin{proposition}[Complementation/Double negation]{\ }\\ For any {\small $F$} in unit chain expansion and for any valuation frame, it holds that {\small $1 = [\ensuremath{\mathfrak{M}} \models F \vee \code{recursiveReduce}(F)]$} and that {\small $0 = [\ensuremath{\mathfrak{M}} \models F \wedge \code{recursiveReduce}(F)]$} (complementation). Also, for any {\small $F \in \mathfrak{F}$} in unit chain expansion and for any valuation frame it holds that {\small $[\ensuremath{\mathfrak{M}} \models F] = [\ensuremath{\mathfrak{M}} \models \code{recursiveReduce}(\code{recursiveReduce}(F))]$} (double negation). \label{excluded_middle} \end{proposition} \begin{IEEEproof} Make use of disjunctive normal form, Lemma \ref{unit_chain_excluded_middle} and Lemma \ref{unit_double_negation}. Details are in Appendix F. \hide{ Firstly for {\small $1 = [(\mathsf{I}, \mathsf{J})\models F \vee \code{recursiveReduce}(F)]$}. By Proposition \ref{associativity_commutativity_distributivity}, {\small $F$} has a disjunctive normal form: {\small $F = \bigvee_{i = 0}^{k} \bigwedge_{j=0}^{h_i} f_{ij}$} for some {\small $i, j, k \in \mathbb{N}$}, some {\small $h_0, \cdots, h_k \in \mathbb{N}$} and some {\small $f_{00}, \cdots, f_{kh_k} \in \mathfrak{U} \cup \mathcal{S}$}. 
Then {\small $\code{recursiveReduce}(F) = \bigwedge_{i=0}^k \bigvee_{j=0}^{h_i} \code{recursiveReduce}(f_{ij})$}, which, if transformed into a disjunctive normal form, will have {\small $(h_0 + 1)$} [a choice from {\small $\code{recursiveReduce}(f_{00}), \code{recursiveReduce}(f_{01}), \dots,\\ \code{recursiveReduce}(f_{0h_0})$}] {\small $\times$} {\small $(h_1 + 1)$} [a choice from {\small $\code{recursiveReduce}(f_{10}), \code{recursiveReduce}(f_{11}), \dots,\\ \code{recursiveReduce}(f_{1h_1})$}] {\small $\times \dots \times$} {\small $(h_k + 1)$} clauses. Now if {\small $[(\mathsf{I}, \mathsf{J})\models F] = 1$}, then we already have the required result. Therefore suppose that {\small $[(\mathsf{I}, \mathsf{J})\models F] = 0$}. Then it holds that {\small $\forall i \in \{0, \dots, k\}. \exists j \in \{0, \dots, h_i\}.([\models f_{ij}] = 0)$}. But by Lemma \ref{unit_chain_excluded_middle}, this is equivalent to saying that {\small $\forall i \in \{0, \dots, k\}. \exists j \in \{0, \dots, h_i\}.([(\mathsf{I}, \mathsf{J})\models \code{recursiveReduce}(f_{ij})] = 1)$}. But then there exists a clause in disjunctive normal form of {\small $[(\mathsf{I}, \mathsf{J})\models \code{recursiveReduce}(F)]$} which is assigned 1. Dually for {\small $0 = [(\mathsf{I}, \mathsf{J})\models F \wedge \code{recursiveReduce}(F)]$}. \\ \indent For {\small $[(\mathsf{I}, \mathsf{J})\models F] = [(\mathsf{I}, \mathsf{J})\models \code{recursiveReduce}(\code{recursiveReduce}(F))]$}, by Proposition \ref{associativity_commutativity_distributivity}, {\small $F$} has a disjunctive normal form: {\small $F = \bigvee_{i = 0}^k \bigwedge_{j=0}^{h_i} f_{ij}$} for some {\small $i, j, k \in \mathbb{N}$}, some {\small $h_0, \dots, h_k \in \mathbb{N}$} and some {\small $f_{00}, \dots, f_{kh_k} \in \mathfrak{U} \cup \mathcal{S}$}. Then {\small $\code{recursiveReduce}(\code{recursiveReduce}(F)) = \bigvee_{i = 0}^{k} \bigwedge_{j=0}^{h_i} \code{recursiveReduce}( \code{recursiveReduce}(f_{ij}))$}. 
But by Lemma \ref{unit_double_negation} {\small $[(\mathsf{I}, \mathsf{J})\models \code{recursiveReduce}(\code{recursiveReduce}(f_{ij}))] = [(\mathsf{I}, \mathsf{J})\models f_{ij}]$} for each appropriate {\small $i$} and {\small $j$}. Straightforward. \\ } \end{IEEEproof} \begin{theorem} Denote by {\small $X$} the set of all expressions {\small $[\ensuremath{\mathfrak{M}} \models f_x]$} for {\small $f_x \in \mathfrak{U} \cup \mathcal{S}$}. Then for every valuation frame, {\small $(X, \code{recursiveReduce}, \wedge^{\dagger}, \vee^{\dagger})$} defines a Boolean algebra. \label{theorem_1} \end{theorem} \begin{IEEEproof} Follows from the earlier propositions and lemmas. \\ \end{IEEEproof} \subsubsection{Gradual classical logic is neither para-consistent nor inconsistent}{\ }\\ To achieve the last objective, we introduce two notations. \begin{definition}[Sub-formula notation] Given a formula {\small $F \in \mathfrak{F}$}, we denote by {\small $F[F_a]$} the fact that {\small $F_a$} occurs as a sub-formula in {\small $F$}. Here the definition of a sub-formula of a formula follows the one found in standard textbooks on logic \cite{Kleene52}. {\small $F$} itself is a sub-formula of {\small $F$}. \end{definition} \begin{definition}[Small step reductions] By {\small $F_1 \leadsto F_2$} for some formulas {\small $F_1$} and {\small $F_2$} we denote that {\small $F_1$} reduces in one reduction step into {\small $F_2$}. By {\small $F_1 \leadsto_{r} F_2$} we denote that the reduction holds explicitly by a reduction rule {\small $r$} (one of the seven reduction rules). By {\small $F_1 \leadsto^* F_2$} we denote that {\small $F_1$} reduces into {\small $F_2$} in a finite number of steps, including zero steps, in which case {\small $F_1$} is said to be irreducible. By {\small $F_1 \leadsto^k F_2$} we denote that the reduction is in exactly {\small $k$} steps.
By {\small $F_1 \leadsto^*_{\{r_1, r_2, \cdots\}} F_2$} or {\small $F_1 \leadsto^k_{\{r_1, r_2, \cdots\}} F_2$} we denote that the reduction is via the specified rules {\small $r_1, r_2, \cdots$} only. \\ \end{definition} In addition, we let {\small $\mathcal{F}(F)$} denote the set of formulas in unit chain expansion that {\small $F \in \mathfrak{F}$} can reduce into. A stronger result than Lemma \ref{reduction_without_negation} follows. \hide{ \begin{lemma}[Linking principle 2] Let {\small $F_1, F_2$} be two formulas in unit chain expansion. Denote the set of formulas in unit chain expansion that {\small $F_1 \gtrdot F_2$} can reduce into by {\small $\mathcal{F}$}. Then it holds either that {\small $[\models F_a] = 1$} for all {\small $F_a \in \mathcal{F}$} or else that {\small $[\models F_a] = 0$} for all {\small $F_a \in \mathcal{F}$}. \label{linking_principle_2} \end{lemma} \begin{proof} By induction on the number of reduction steps on {\small $F_1 \gtrdot F_2$}. If it is 0, then it is a formula in unit chain expansion. By the results of the previous sub-section, {\small $[\models F_1 \gtrdot F_2] = 1$} or else {\small $[\models F_1 \gtrdot F_2] = 0$}. Trivial. For inductive cases, assume that the current lemma holds true for all the numbers of steps up to {\small $k$}. We show that it still holds true for all the reductions with {\small $k+1$} steps. Consider what reduction first applies on {\small $F_1 \gtrdot F_2$}: \begin{enumerate} \item {\small $\gtrdot$} reduction 2: there are three sub-cases: \begin{enumerate} \item If we have {\small $F_1[(F_a \wedge F_b) \gtrdot F_c] \gtrdot F_2 \leadsto F_3[(F_a \gtrdot F_c) \wedge (F_b \gtrdot F_c)] \gtrdot F_2$} such that {\small $F_3$} differs from {\small $F_1$} only by the shown sub-formula, then: \begin{enumerate} \item If the given reduction is the only possible reduction, then we apply the induction hypothesis on {\small $F_3 \gtrdot F_2$} to conclude.
\item Otherwise, suppose that there exists an alternative reduction step {\small $r$} (but necessarily one of the {\small $\gtrdot$} reductions), then; \begin{enumerate} \item If {\small $F_1 \gtrdot F_2 \leadsto_r F_1 \gtrdot $}. \end{enumerate} we have {\small $F_1 \gtrdot F_2 \leadsto_r F_3 \gtrdot F_2 $} \end{enumerate} \end{enumerate} {\small $F_1 \gtrdot F_2[(F_a \wedge F_b) \gtrdot F_c] \leadsto F_1 \gtrdot F_3[(F_a \gtrdot F_c) \wedge (F_b \gtrdot F_c)]$}. determined from {\small $F_1 \gtrdot F_2$} in either of the cases. Consider the first case. \begin{enumerate} \item If the given reduction is the only possible reduction, then we apply the induction hypothesis on {\small $F_3 \gtrdot F_2$} to conclude. \item Otherwise, suppose that there exists an alternative reduction step {\small $r$} (but necessarily one of the {\small $\gtrdot$} reductions), then we have {\small $F_1 \gtrdot F_2 \leadsto_r F_3 \gtrdot F_2 $} \end{enumerate} \end{enumerate} First we spell out the intuition. The result follows if no possible reductions at any given point during a reduction affect the others in an essential way. That is, if the effect of a reduction {\small $r$} acting upon some sub-formula {\small $F'$} of a given formula {\small $F$} is contained within it, that is, if {\small $F' \leadsto_r F''$} and also if {\small $F[F'] \leadsto_r F_{new}[F'']$} where {\small $r$} is assumed to be acting upon {\small $F'$}, then in case there are other alternative reductions {\small $r' (\not= r)$} that can apply on {\small $F'$}: {\small $F' \leadsto_{r'} F'''$} such as to satisfy {\small $F[F'] \leadsto_{r'} F_{\alpha}[F''']$}, then reduction on {\small $F_{\alpha}[F''']$} could potentially lead to some formula in unit chain expansion which does not have the same value assignment as for some formula in unit chain expansion that {\small $F_{new}[F'']$} can reduce into.
any alternative reductions possible to apply for {\small $F'$} might lead to some formula in unit chain expansion which has a different valuation. \end{proof} } \begin{lemma}[Bisimulation without negation] Below are pairs of formulas in which {\small $\neg$} does not occur. {\small $F'$} differs from {\small $F$} only by the shown sub-formulas, \emph{i.e.} {\small $F'$} derives from {\small $F$} by replacing the sub-formula shown for {\small $F$} with the sub-formula shown for {\small $F'$}, and vice versa. Then for each pair {\small $(F, F')$} below, it holds for every valuation frame that {\small $[\ensuremath{\mathfrak{M}} \models F_1] = [\ensuremath{\mathfrak{M}} \models F_2]$} for all {\small $F_1 \in \mathcal{F}(F)$} and for all {\small $F_2 \in \mathcal{F}(F')$}. {\small \begin{eqnarray}\nonumber F[(F_a \wedge F_b) \gtrdot F_c] &,& F'[(F_a \gtrdot F_c) \wedge (F_b \gtrdot F_c)]\\\nonumber F[(F_a \vee F_b) \gtrdot F_c] &,& F'[(F_a \gtrdot F_c) \vee (F_b \gtrdot F_c)]\\\nonumber F[F_a \gtrdot (F_b \wedge F_c)] &,& F'[(F_a \gtrdot F_b) \wedge (F_a \gtrdot F_c)]\\\nonumber F[F_a \gtrdot (F_b \vee F_c)] &,& F'[(F_a \gtrdot F_b) \vee (F_a \gtrdot F_c)]\\\nonumber F[(F_a \gtrdot F_b) \gtrdot F_c] &\!\!\!\!\!\!\!\!\!,& \!\!\!\!\!\!\!\!\!\! F'[(F_a \gtrdot F_c) \wedge ((F_a \gtrdot F_b) \vee (F_a \gtrdot F_b \gtrdot F_c))] \end{eqnarray} } \label{bisimulation} \end{lemma} \begin{IEEEproof} By induction on the number of reduction steps and a sub-induction on formula size in each direction of bisimulation. Details are in Appendix G. \hide{ By induction on the number of reduction steps and a sub-induction on formula size, we first establish that {\small $ \mathcal{F}(F_1) = \mathcal{F}(F_2)$} (by bisimulation). In one direction, showing that to each reduction on {\small $F'$} corresponds reduction(s) on {\small $F$} is straightforward, for we can choose to reduce {\small $F$} into {\small $F'$}, and thereafter we synchronize both of the reductions.
In the other direction, to show that to each reduction on {\small $F$} corresponds reduction(s) on {\small $F'$}, we consider each case: \begin{enumerate} \item The first pair. \begin{enumerate} \item If a reduction takes place on a sub-formula which neither is a sub-formula of the shown sub-formula nor has as its sub-formula the shown sub-formula, then we reduce the same sub-formula in {\small $F'$}. Induction hypothesis (note that the number of reduction steps is that of {\small $F$} in this direction). \item If it takes place on a sub-formula of {\small $F_a$} or {\small $F_b$} then we reduce the same sub-formula of {\small $F_a$} or {\small $F_b$} in {\small $F'$}. Induction hypothesis. \item If it takes place on a sub-formula of {\small $F_c$} then we reduce the same sub-formula of both occurrences of {\small $F_c$} in {\small $F'$}. Induction hypothesis. \item If {\small $\gtrdot$} reduction 2 takes place on {\small $F$} such that we have; {\small $F[(F_a \wedge F_b) \gtrdot F_c] \leadsto F_x[(F_a \gtrdot F_c) \wedge (F_b \gtrdot F_c)]$} where {\small $F$} and {\small $F_x$} differ only by the shown sub-formulas,\footnote{This note `where \dots' is assumed in the remainder.} then do nothing on {\small $F'$}. And {\small $F_x = F'$}. Vacuous thereafter. \item If {\small $\gtrdot$} reduction 2 takes place on {\small $F$} such that we have; {\small $F[(F_d \wedge F_e) \gtrdot F_c] \leadsto F_x[(F_d \gtrdot F_c) \wedge (F_e \gtrdot F_c)]$} where {\small $F_d \not= F_a$} and {\small $F_d \not = F_b$}, then without loss of generality assume that {\small $F_d \wedge F_{\beta} = F_a$} and that {\small $F_{\beta} \wedge F_b = F_e$}. Then we apply {\small $\gtrdot$} reduction 2 on the {\small $(F_d \wedge F_{\beta}) \gtrdot F_c$} in {\small $F'$} so that we have; {\small $F'[((F_d \wedge F_{\beta}) \gtrdot F_c) \wedge (F_b \gtrdot F_c)] \leadsto F''[(F_d \gtrdot F_c) \wedge (F_{\beta} \gtrdot F_c) \wedge (F_b \gtrdot F_c)]$}.
Since {\small $(F_x[(F_d \gtrdot F_c) \wedge (F_e \gtrdot F_c)] =) F_x[(F_d \gtrdot F_c) \wedge ((F_{\beta} \wedge F_b) \gtrdot F_c)] = F_x'[(F_{\beta} \wedge F_b) \gtrdot F_c]$} and {\small $F''[(F_d \gtrdot F_c) \wedge (F_{\beta} \gtrdot F_c) \wedge (F_b \gtrdot F_c)] = F'''[(F_{\beta} \gtrdot F_c) \wedge (F_b \gtrdot F_c)]$} such that {\small $F'''$} and {\small $F_x'$} differ only by the shown sub-formulas, we repeat the rest of simulation on {\small $F'_x$} and {\small $F'''$}. Induction hypothesis. \item If a reduction takes place on a sub-formula {\small $F_p$} of {\small $F$} in which the shown sub-formula of {\small $F$} occurs as a strict sub-formula ({\small $F[(F_a \wedge F_b) \gtrdot F_c] = F[F_p[(F_a \wedge F_b) \gtrdot F_c]]$}), then we have {\small $F[F_p[(F_a \wedge F_b) \gtrdot F_c]] \leadsto F_x[F_q[(F_a \wedge F_b) \gtrdot F_c]]$}. But we have {\small $F' = F'[F_p'[(F_a \gtrdot F_c) \wedge (F_b \gtrdot F_c)]]$}. Therefore we apply the same reduction on {\small $F_p'$} to gain; {\small $F'[F_p'[(F_a \gtrdot F_c) \wedge (F_b \gtrdot F_c)]] \leadsto F'_x[F_{p'}'[(F_a \gtrdot F_c) \wedge (F_b \gtrdot F_c)]]$}. Induction hypothesis. \end{enumerate} \item The second, the third and the fourth pairs: Similar. \item The fifth pair: \begin{enumerate} \item If a reduction takes place on a sub-formula which neither is a sub-formula of the shown sub-formula nor has as its sub-formula the shown sub-formula, then we reduce the same sub-formula in {\small $F'$}. Induction hypothesis. \item If it takes place on a sub-formula of {\small $F_a$}, {\small $F_b$} or {\small $F_c$}, then we reduce the same sub-formula of all the occurrences of the shown {\small $F_a$}, {\small $F_b$} or {\small $F_c$} in {\small $F'$}. Induction hypothesis. 
\item If {\small $\gtrdot$} reduction 4 takes place on {\small $F$} such that we have; {\small $F[(F_a \gtrdot F_b) \gtrdot F_c] \leadsto F_x[(F_a \gtrdot F_c) \wedge ((F_a \gtrdot F_b) \vee (F_a \gtrdot F_b \gtrdot F_c))]$}, then do nothing on {\small $F'$}. And {\small $F_x = F'$}. Vacuous thereafter. \item If a reduction takes place on a sub-formula {\small $F_p$} of {\small $F$} in which the shown sub-formula of {\small $F$} occurs as a strict sub-formula, then similar to the case 1) f). \end{enumerate} \end{enumerate} By the result of the above bisimulation, we now have {\small $\mathcal{F}(F) = \mathcal{F}(F')$}. However, without {\small $\neg$} occurrences in {\small $F$} it takes only those 5 {\small $\gtrdot$} reductions to derive a formula in unit chain expansion; hence we in fact have {\small $\mathcal{F}(F) = \mathcal{F}(F_x)$} for some formula {\small $F_x$} in unit chain expansion. But then by Theorem \ref{theorem_1}, there could be only one of {\small $\{0, 1\}$} assigned to {\small $[(\mathsf{I}, \mathsf{J})\models F_x]$} \\ } \end{IEEEproof} \begin{lemma}[Other bisimulations] For each pair {\small $(F \in \mathfrak{F}, F' \in \mathfrak{F})$} below, it holds for every valuation frame (1) that {\small $\forall F_1 \in \mathcal{F}(F).\exists F_2 \in \mathcal{F}(F').[\ensuremath{\mathfrak{M}} \models F_1] = [\ensuremath{\mathfrak{M}} \models F_2]$} and (2) that {\small $\forall F_2 \in \mathcal{F}(F').\exists F_1 \in \mathcal{F}(F).[\ensuremath{\mathfrak{M}} \models F_1] = [\ensuremath{\mathfrak{M}} \models F_2]$}. Once again, {\small $F$} and {\small $F'$} differ only by the shown sub-formulas. 
{\small \begin{eqnarray}\nonumber F[\neg (F_a \wedge F_b)] &,& F'[\neg F_a \vee \neg F_b]\\\nonumber F[\neg (F_a \vee F_b)] &,& F'[\neg F_a \wedge \neg F_b]\\\nonumber F[s \vee s] &,& F'[s]\\\nonumber F[s \vee F_a \vee s] &,& F'[s \vee F_a]\\\nonumber F[s \wedge s] &,& F'[s]\\\nonumber F[s \wedge F_a \wedge s] &,& F'[s \wedge F_a]\\\nonumber F[s^c] &,& F'[\neg s] \end{eqnarray} } \label{other_bisimulation} \end{lemma} \begin{IEEEproof} By simultaneous induction on the number of reduction steps and a sub-induction on formula size. Details are in Appendix H. \hide{ By simultaneous induction on reduction steps and by a sub-induction on formula size. One way is trivial. Into the direction to showing that to every reduction on {\small $F$} corresponds reduction(s) on {\small $F'$}, we consider each case. For the first case; \begin{enumerate} \item If a reduction takes place on a sub-formula which neither is a sub-formula of the shown sub-formula nor has as its sub-formula the shown sub-formula, then we reduce the same sub-formula in {\small $F'$}. Induction hypothesis. \item If it takes place on a sub-formula of {\small $F_a$} or {\small $F_b$} then we reduce the same sub-formula of {\small $F_a$} or {\small $F_b$} in {\small $F'$}. Induction hypothesis. \item If {\small $\neg$} reduction 2 takes place on {\small $F$} such that we have; {\small $F[\neg (F_a \wedge F_b)] \leadsto F_x[\neg F_a \vee \neg F_b]$}, then do nothing on {\small $F'$}. And {\small $F_x = F'$}. Vacuous thereafter. \item If {\small $\neg$} reduction 2 takes place on {\small $F$} such that we have; {\small $F[\neg (F_d \wedge F_e)] \leadsto F_x[\neg F_d \vee \neg F_e]$} where {\small $F_d \not= F_a$} and {\small $F_d \not= F_b$}, then without loss of generality assume that {\small $F_d \wedge F_{\beta} = F_a$} and that {\small $F_{\beta} \wedge F_b = F_e$}. 
Then we apply {\small $\neg$} reduction 2 on the {\small $\neg (F_d \wedge F_{\beta})$} in {\small $F'$} so that we have; {\small $F'[\neg (F_d \wedge F_{\beta}) \vee \neg F_b] \leadsto F''[\neg F_d \vee \neg F_{\beta} \vee \neg F_b]$}. Since {\small $(F_x[\neg F_d \vee \neg F_e] = ) F_x[\neg F_d \vee \neg (F_{\beta} \wedge F_b)] = F'_x[\neg (F_{\beta} \wedge F_b)]$} and {\small $F''[\neg F_d \vee \neg F_{\beta} \vee \neg F_b] = F'''[\neg F_{\beta} \vee \neg F_b]$} such that {\small $F'''$} and {\small $F'_x$} differ only by the shown sub-formulas, we repeat the rest of the simulation on {\small $F'_x$} and {\small $F'''$}. Induction hypothesis. \item If a reduction takes place on a sub-formula {\small $F_p$} of {\small $F$} in which the shown sub-formula of {\small $F$} occurs as a strict sub-formula, then similar to the 1) f) sub-case in Lemma \ref{bisimulation}. \end{enumerate} The second case is similar. For the third case; \begin{enumerate} \item If a reduction takes place on a sub-formula which neither is a sub-formula of the shown sub-formula nor has as its sub-formula the shown sub-formula, then we reduce the same sub-formula in {\small $F'$}. Induction hypothesis. \item If a reduction takes place on a sub-formula {\small $F_p$} of {\small $F$} in which the shown sub-formula of {\small $F$} occurs as a strict sub-formula, then; \begin{enumerate} \item If the applied reduction is $\neg$ reduction 2 or 4, then straightforward. \item If the applied reduction is $\neg$ reduction 3 such that {\small $(F = F_a[\neg (F_x \vee s \vee s \vee F_y)]) \leadsto (F_b[\neg F_x \wedge \neg s \wedge \neg s \wedge \neg F_y] = F_c[\neg s \wedge \neg s]) \leadsto F_d[s^c \wedge s^c]$} for some {\small $F_x$} and {\small $F_y$} (the last transformation due to simultaneous induction), then we reduce {\small $F'$} as follows: {\small $(F' = F_a'[\neg (F_x \vee s \vee F_y)]) \leadsto (F_b'[\neg F_x \wedge \neg s \wedge \neg F_y] = F_c'[\neg s]) \leadsto F_d'[s^c]$}.
Induction hypothesis. Any other cases are straightforward. \item If the applied reduction is $\gtrdot$ reduction 1--4, then straightforward. \end{enumerate} \end{enumerate} Similarly for the remaining ones. } \end{IEEEproof} \begin{lemma}[Normalisation without negation] Given a formula {\small $F \in \mathfrak{F}$}, if {\small $\neg$} does not occur in {\small $F$}, then it holds for every valuation frame either that {\small $[\ensuremath{\mathfrak{M}} \models F_a] = 1$} for all {\small $F_a \in \mathcal{F}(F)$} or else that {\small $[\ensuremath{\mathfrak{M}} \models F_a] = 0$} for all {\small $F_a \in \mathcal{F}(F)$}. \label{reduction_without_negation} \end{lemma} \begin{IEEEproof} Consequence of Lemma \ref{bisimulation}. \end{IEEEproof} \hide{ \begin{lemma} For any {\small $F$} in unit chain expansion it holds that {\small $[\models (s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_k) \gtrdot F] = [\models (s_0 \gtrdot F) \wedge ((s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_k) \vee ((s_0 \gtrdot s_1 \gtrdot F) \wedge (s_0 \gtrdot s_1 \gtrdot s_2 \gtrdot F) \wedge \dots \wedge (s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_{k} \gtrdot F)))$}. \label{prop_unit_chain} \end{lemma} \begin{proof} By induction on {\small $k$}. \\ \end{proof} \begin{lemma} Given two formulas {\small $F_x = \neg (s_0 \gtrdot F_1 \gtrdot F_2 \dots \gtrdot F_k)$} and {\small $F_y = s_0^c \vee (s_0 \gtrdot \neg (F_1 \gtrdot F_2 \dots \gtrdot F_k))$} for {\small $k \in \mathbb{N}$}, denote the set of formulas in unit chain expansion which {\small $F_x$} can reduce into by {\small $Y$} and that which {\small $F_y$} can by {\small $Z$}. Then {\small $Y = Z$}. \label{identity_lemma} \end{lemma} \begin{proof} {\small $s_0$} cannot be further reduced in {\small $F_x$}. So the initial reduction of {\small $F_x$} is either via {\small $\neg$} reduction 4 for the outermost {\small $\neg$} or else via some reduction for {\small $F_1 \gtrdot F_2 \dots \gtrdot F_k$}, if {\small $k \not= 0$}.
If the former, then the reduction produces {\small $F_y$}. Hence clearly {\small $Y = Z$}. On the other hand, if the latter, we have {\small $F_x \leadsto_r \neg (s_0 \gtrdot F_p)$} for some {\small $F_p \in \mathfrak{F}$} whatever the reduction rule {\small $r$} is. But then we have {\small $F_x \leadsto_r \neg (s_0 \gtrdot F_p) \leadsto_{\neg\; 4} s_0^c \vee (s_0 \gtrdot \neg F_p)$}, which is none other than the formula that can be reached via {\small $\neg\; 4$} and then {\small $r$}. {\small $Y = Z$}, as required. This result is general in that the number of reductions taking place before the outermost {\small $\neg$} is reduced does not change the validity of the given proof (straightforward by induction; left to readers).\\ \end{proof} \hide{ \begin{lemma}[Docking principle] Given any formula {\small $F \in \mathfrak{F}$}, denote by {\small $\code{normal}(F)$} one of its reduced formulas in unit chain expansion. For any formulas {\small $F_1, F_2, F_3$} in unit chain expansion and for any interpretation frame, if {\small $[\models F_1] = [\models F_2] = 1$}, then {\small $[\models \code{normal}(F_1 \gtrdot F_3) \vee \code{normal}(F_2 \gtrdot F_3)] = [\models \code{normal}(F_1 \gtrdot F_3) \wedge \code{normal}(F_2 \gtrdot F_3)]$}. \label{docking_lemma} \end{lemma} \begin{proof} Note that by Lemma \ref{reduction_without_negation}, both {\small $\code{normal}(F_1 \gtrdot F_3)$} and {\small $\code{normal}(F_2 \gtrdot F_3)$} are uniquely determined. \\ \indent Now the main part. Assume without loss of generality that there occur {\small $m$} unit chains in {\small $F_1$} and {\small $n$} unit chains in {\small $F_2$} for some {\small $m, n \in \mathbb{N}$}.
Observe here that any chain {\small $f_i$} in {\small $F_1$} ({\small $0 \le i \le m$}) and also any chain {\small $f_j$} in {\small $F_2$} ({\small $0 \le j \le n$}) for which {\small $[\models f_k] = 0$}, {\small $k \in \{i, j\}$}, remains 0 no matter how long it is extended with other formula(s), \emph{i.e.} {\small $[\models \code{normal}(f_k \gtrdot F_x)] = 0$} for any {\small $F_x \in \mathfrak{F}$}. Since generally {\small $\code{normal}((F_p \wedge F_q) \gtrdot F_r) = \code{normal}(F_p \gtrdot F_r) \wedge \code{normal}(F_q \gtrdot F_r)$} and {\small $\code{normal}((F_p \vee F_q) \gtrdot F_r) = \code{normal}(F_p \gtrdot F_r) \vee \code{normal}(F_q \gtrdot F_r)$} (trivial; proof left to readers), we may simplify {\small $F_1$} and {\small $F_2$} into {\small $F'_1$} and {\small $F'_2$} by the following reductions: \begin{enumerate} \item if there occurs in {\small $F_1$} or in {\small $F_2$} a sub-formula {\small $f_1 \wedge F_y$} or {\small $F_y \wedge f_1$} for some {\small $f_1 \in \mathfrak{U} \cup \mathcal{S}$} and some {\small $F_y \in \mathfrak{F}$} (but necessarily in unit formula expansion), then replace the sub-formula with {\small $f_1$}. \item if there occurs in {\small $F_1$} or in {\small $F_2$} a sub-formula {\small $f_1 \vee F_y$} or {\small $F_y \vee f_1$} for some {\small $f_1 \in \mathfrak{U} \cup \mathcal{S}$} and some {\small $F_y \in \mathfrak{F}$} (but of course necessarily in unit formula expansion), then replace the sub-formula with {\small $F_y$}. \end{enumerate} {\small $F'_1$} and {\small $F'_2$} then comprise elements of {\small $\mathfrak{U} \cup \mathcal{S}$} such that for each {\small $f_r$} of them, {\small $[\models f_r] = 1$}. Now also prepare a procedure to generate unit chains and also simplify them. And show that at each depth is found {\small $F_3$}, and that the evaluation clearly only depends on what {\small $F_3$} is, and not what {\small $F_1$} or {\small $F_2$} is. 
\end{proof} } } \begin{theorem}[Normalisation] Given a formula {\small $F \in \mathfrak{F}$}, denote the set of formulas in unit chain expansion that it can reduce into by {\small $\mathcal{F}_1$}. Then it holds for every valuation frame either that {\small $[\ensuremath{\mathfrak{M}} \models F_a] = 1$} for all {\small $F_a \in \mathcal{F}_1$} or else that {\small $[\ensuremath{\mathfrak{M}} \models F_a] = 0$} for all {\small $F_a \in \mathcal{F}_1$}. \label{theorem_normalisation} \end{theorem} \begin{IEEEproof} By induction on the maximal number of {\small $\neg$} nestings and a sub-induction on formula size. We quote Lemma \ref{reduction_without_negation} for base cases. Details are in Appendix I. \\ \hide{ For inductive cases, assume that the current theorem holds true for all the formulas with {\small $\ensuremath{\code{neg}\_\code{max}}(F_0)$} of up to {\small $k$}. Then we conclude by showing that it still holds true for all the formulas with {\small $\ensuremath{\code{neg}\_\code{max}}(F_0)$} of {\small $k+1$}. First we note that no {\small $\neg$} reduction applies to {\small $\neg F_x$} if {\small $F_x$} is a chain whose head is not an element of {\small $\mathcal{S}$}. But this is straightforward from the descriptions of the reduction rules. \\ \indent On this observation we show that if we have a sub-formula {\small $\neg F_x$} such that no {\small $\neg$} occurs in {\small $F_x$}, then {\small $F_x$} can be reduced into a formula in unit chain expansion with no loss of generality, prior to the reduction of the outermost {\small $\neg$}. Then we have the desired result by the induction hypothesis and the results in the previous sub-section. But suppose otherwise. Let us denote by {\small $\mathcal{F}$} the set of formulas in unit chain expansion that {\small $\neg F_x'$} reduces into, where {\small $F_x'$} is a unit chain expansion of {\small $F_x$}.
Now suppose there exists {\small $F_y$} in unit chain expansion that {\small $\neg F_x$} can reduce into if the outermost {\small $\neg$} reduction applies before {\small $F_x$} has reduced into a formula in unit chain expansion, such as to satisfy that {\small $[(\mathsf{I}, \mathsf{J})\models F_y] \not= [(\mathsf{I}, \mathsf{J})\models F_{\beta}]$} for some {\small $F_{\beta} \in \mathcal{F}$}. Here we have; \\ {\small $\neg F_x \leadsto^*_{\{\gtrdot\!\! \text{ reductions only}\}} \neg F_z \leadsto^*_{\{\gtrdot\!\! \text{ reductions only}\}} \neg F_x' \leadsto^+_{\{\neg \text{ reductions only}\}}\!\! F_{\beta}$} and {\small $\neg F_x \leadsto^*_{\{\gtrdot \text{ reductions only}\}} \neg F_z \leadsto_{\neg \text{ reduction}} F_z' \leadsto^* F_y$} where {\small $\neg^{\dagger} \exists F_{zz}.F_z' = \neg F_{zz}$}. \\ Hence for our supposition to hold, it must be that there exists no bisimulation between {\small $F'_z$} and {\small $\neg F_z$}. But because it is trivially provable that to each reduction on {\small $F'_z$} corresponds reduction(s) on {\small $\neg F_z$} (for we can choose to apply the {\small $\neg$} reduction on {\small $\neg F_z$} to gain {\small $F'_z$}), it must in fact be that not to each reduction on {\small $\neg F_z$} corresponds reduction(s) on {\small $F'_z$}. Consider what reduction applies on a sub-formula of {\small $\neg F_z$}: \begin{enumerate} \item any {\small $\neg$} reduction: Then the reduction generates {\small $F'_z$}. A contradiction to the supposition has been drawn. \item {\small $\gtrdot$} reduction 1: Consider what {\small $F_z$} looks like: \begin{enumerate} \item {\small $F_z = F_1[(F_u \gtrdot F_v) \gtrdot F_w] \wedge F_2$}: But then the same reduction can take place on {\small $F_z' = \neg F_1[(F_u \gtrdot F_v) \gtrdot F_w] \vee \neg F_2$}. Contradiction. \item {\small $F_z = F_1 \wedge F_2[(F_u \gtrdot F_v) \gtrdot F_w]$}: Similar. \item {\small $F_z = F_1[(F_u \gtrdot F_v) \gtrdot F_w] \vee F_2$}: Similar.
\item {\small $F_z = F_1 \vee F_2[(F_u \gtrdot F_v) \gtrdot F_w]$}: Similar. \item {\small $F_z = (F_u \gtrdot F_v) \gtrdot F_w$}: This case is impossible due to the observation given earlier in the current proof. \item {\small $F_z = (F_1[(F_u \gtrdot F_v) \gtrdot F_w] \gtrdot F_2) \gtrdot F_3$}: Similar. \item The rest: all similar. \end{enumerate} \item {\small $\gtrdot$} reduction 2: Similar. \item {\small $\gtrdot$} reduction 3: Similar. \item {\small $\gtrdot$} reduction 4: Consider how {\small $F_z$} looks like: \begin{enumerate} \item {\small $F_z = s \gtrdot (F_1 \wedge F_2)$}: Then {\small $\neg F_z \leadsto \neg ((s \gtrdot F_1) \wedge (s \gtrdot F_2))$}. But by Lemma \ref{other_bisimulation}, it does not cost generality if we reduce the {\small $\neg$} to have; {\small $\neg ((s \gtrdot F_1) \wedge (s \gtrdot F_2)) \leadsto \neg (s \gtrdot F_1) \vee \neg (s \gtrdot F_2)$}. Meanwhile {\small $F'_z = s^c \vee (s \gtrdot \neg (F_1 \wedge F_2))$}. By Lemma \ref{other_bisimulation}, it does not cost generality if we have {\small $F''_z = s^c \vee (s \gtrdot (\neg F_1 \vee \neg F_2))$} instead of {\small $F'_z$}. But it also does not cost generality (by Lemma \ref{bisimulation}) if we have {\small $F'''_z = s^c \vee (s \gtrdot \neg F_1) \vee (s \gtrdot \neg F_2)$} instead of {\small $F''_z$}. But by Lemma \ref{other_bisimulation}, it again does not cost generality if we have {\small $F''''_z = s^c \vee (s \gtrdot \neg F_1) \vee s^c \vee (s \gtrdot \neg F_2)$} instead. Therefore we can conduct bisimulation between {\small $\neg (s \gtrdot F_1)$} and {\small $s^c \vee (s \gtrdot \neg F_1)$} and between {\small $\neg (s \gtrdot F_2)$} and {\small $s^c \vee (s \gtrdot \neg F_2)$}. Since each of {\small $\neg (s \gtrdot F_1)$} and {\small $\neg (s \gtrdot F_2)$} has a strictly smaller formula size than {\small $\neg (s \gtrdot (F_1 \wedge F_2))$}, (sub-)induction hypothesis. Contradiction. \item The rest: Trivial. 
\end{enumerate} \item {\small $\gtrdot$} reduction 5: Similar. \end{enumerate} } \end{IEEEproof} \hide{ \begin{lemma}[Commutativity of reductions] Given any {\small $F_0 \leadsto F_1 \leadsto \cdots \leadsto F_k$} for {\small $k \in \mathbb{N}$}, if there exists no {\small $i$} ranging over {\small $\{0, \dots, k\}$} such that {\small $F_i = F_i[\neg F_a[(F_b \wedge F_c) \gtrdot F_d]]$}, that {\small $F_i = F_i[\neg F_a[(F_b \vee F_c) \gtrdot F_d]]$} or that {\small $F_i = F_i[\neg F_a[(F_b \gtrdot F_c) \gtrdot F_d]]$} for some {\small $F_a, F_b, F_c, F_d \in \mathfrak{F}$}, then it holds that {\small $\forall F_{\alpha}, F_{\beta} \in \mathfrak{F}. (F_0 \leadsto^* F_{\alpha}) \wedge^{\dagger} (F_0 \leadsto^* F_{\beta}) \rightarrow^{\dagger} (F_{\alpha} = F_{\beta})$}. \label{commutativity_reductions} \end{lemma} \begin{proof} By induction on the length of the reduction steps (the smallest number possible is 0 in which {\small $F_0$} is irreducible). Vacuous when it is 0. Also vacuous when it is 1 since, given a reduction rule, the reduction is determinate. Now assume that the current lemma holds true for any steps {\small $2i$} for {\small $i \in \mathbb{N}$}. We need to show that it still holds true for {\small $2i + 1$} and {\small $2i + 2$} steps. For the former, consider which reduction rule applied first. Then it is trivial by induction hypothesis on the rest of the reduction steps that we have the desired result. For the latter, we have some reduction {\small $F_0 \leadsto_{r_0} F_1 \leadsto_{r_1} F_2 \leadsto^{2i} F_{2i + 2}$}. Consider what these two rules {\small $(r_0, r_1)$} are. \begin{enumerate} \item ({\small $\neg$} reduction 1, {\small $\neg$} reduction 1): Vacuously {\small $F_0 \leadsto_{r_1} F_1 \leadsto_{r_0} F_2$}. \item ({\small $\neg$} reduction 1, {\small $\neg$} reduction 2): {\small $F_0$} contains a sub-formula {\small $\neg (F_a \wedge F_b)$} and another sub-formula {\small $\neg s_c$}. 
If the latter does not occur within the former, vacuously {\small $F_0 \leadsto_{r_1} F_1 \leadsto_{r_0} F_2$}. The former cannot occur within the latter, on the other hand. Otherwise, if the latter occurs within the former, then {\small $\neg (F_a \wedge F_b) = \neg (F_a[\neg s] \wedge F_b)$}. Then {\small $F_0[\neg (F_a[\neg s] \wedge F_b)] \leadsto_{r_0} F_1[\neg (F_c[s] \wedge F_b)] \leadsto_{r_1} F_2[\neg F_c[s] \vee \neg F_b]$}. Reversing the two rules, we instead have {\small $F_0[\neg (F_a[\neg s] \wedge F_b)] \leadsto_{r_1} F_3[\neg F_a[\neg s] \vee \neg F_b] \leadsto_{r_2} F_2[\neg F_c[s] \vee \neg F_b]$} to derive the same formula.\footnote{Note that {\small $F_1$}, {\small $F_2$}, {\small $F_3$} and {\small $F_c$} are precisely determined in either of the reduction steps because the reduction rules for a given formula determine the result of a reduction.} \item({\small $\neg$} reduction 1, \{{\small $\neg$} reduction 3, {\small $\neg$} reduction 4, {\small $\gtrdot$} reduction 1, {\small $\gtrdot$} reduction 2, {\small $\gtrdot$} reduction 3\}): similar. \item({\small $\neg$} reduction 2, {\small $\neg$} reduction 2): {\small $F_0$} has sub-formulas {\small $\neg (F_a \wedge F_b)$} and {\small $\neg (F_c \wedge F_d)$}. If neither occurs within the other, then vacuously {\small $F_0 \leadsto_{r_0} F_1 \leadsto_{r_1} F_2$} and {\small $F_0 \leadsto_{r_1} F_3 \leadsto_{r_0} F_2$} for some formula {\small $F_3$} (which is again precisely determined from {\small $F_0$} and {\small $r_1$}). Otherwise, assume that {\small $F_0 = F_0[\neg (F_a[\neg (F_c \wedge F_d)] \wedge F_b)]$}. Assume without a loss of generality that {\small $r_0$} acts upon {\small $\neg (F_a \wedge F_b)$} and {\small $r_1$} upon the other sub-formula. 
Then we have; {\small $F_0 \leadsto_{r_0} F_1[\neg F_a[\neg (F_c \wedge F_d)] \vee \neg F_b] \leadsto_{r_1} F_2[\neg F_e[\neg F_c \vee \neg F_d] \vee \neg F_b]$}, and {\small $F_0 \leadsto_{r_1} F_3[\neg (F_f[\neg F_c \vee \neg F_d] \wedge F_b)] \leadsto_{r_0} F_2[\neg F_e[\neg F_c \vee \neg F_d] \vee \neg F_b]$}. The remaining possibilities are all similar. \item ({\small $\neg$} reduction 2, \{{\small $\neg$} reduction 3, {\small $\neg$} reduction 4\}): Apart from which sub-formula is inside which changes appearance of {\small $F_0$}, similar. \item ({\small $\neg$} reduction 2, {\small $\gtrdot$} reduction 1): {\small $F_0$} has two sub-formulas {\small $\neg (F_a \wedge F_b)$} and {\small $(F_c \gtrdot F_d) \gtrdot F_e$}. Straightforward if neither occurs within the other. Otherwise, first consider cases where the former occurs within the latter. One of them is {\small $F_0 = F_0[(F_c[\neg (F_a \wedge F_b)] \gtrdot F_d) \gtrdot F_e]$}. Assume without a loss of generality that {\small $r_0$} acts upon {\small $\neg (F_a \wedge F_b)$} and that {\small $r_1$} acts upon {\small $(F_c \gtrdot F_d) \gtrdot F_e$}. Then we have; \\ {\small $F_0 \leadsto_{r_0} F_1[(F_{c'}[\neg F_a \vee \neg F_b] \gtrdot F_d) \gtrdot F_e] \leadsto_{r_1} F_2[F_{c'} \gtrdot ((F_d \wedge F_e) \vee ((F_d \gtrdot F_e) \wedge F_e))]$}, and {\small $F_0 \leadsto_{r_1} F_3[F_c \gtrdot ((F_d \wedge F_e) \vee ((F_d \gtrdot F_e) \wedge F_e))] \leadsto_{r_0} F_2$}. By the given assumption, it does not happen that the latter occurs within the former. Otherwise, similar for the remaining cases. \item ({\small $\neg$} reduction 2, \{{\small $\gtrdot$} reduction 2, {\small $\gtrdot$} reduction 3\}): similar. \item ({\small $\neg$} reduction 3, \ldots): similar. \item ({\small $\neg$} reduction 4, {\small $\gtrdot$} reduction 1): {\small $F_0$} has two sub-formulas {\small $\neg (F_a \gtrdot F_b)$} and {\small $(F_c \gtrdot F_d) \gtrdot F_e$}. 
By the given assumption it does not happen that the latter occurs in the former. The remaining cases are trivial. \item ({\small $\neg$} reduction 4, {\small $\gtrdot$} reduction 2): {\small $F_0$} has two sub-formulas {\small $\neg (F_a \gtrdot F_b)$} and {\small $(F_c \wedge F_d) \gtrdot F_e$}. Most of the cases are straightforward; and {\small $F_0 = F_0[\neg F_a[(F_c \wedge F_d) \gtrdot F_e] \gtrdot F_b]$} or {\small $F_0 = F_0[\neg F_a[(F_c \wedge F_d) \gtrdot F_e] ]$} is, due to the assumption of the current lemma, not possible. The remaining cases are trivial. \item ({\small $\neg$} reduction 4, {\small $\gtrdot$} reduction 3): similar. \end{enumerate} \end{proof} \begin{theorem} Given any formula {\small $F \in \mathfrak{F}$}, for all the unit trees that {\small $F$} possibly reduces into, if one of them is assigned 1, then so are all the others; and if one of them is assigned 0, then so are all the others. \label{normalisation} \end{theorem} \begin{proof} We need to show that {\small $\forall F_a, F_b, F_c \in \mathfrak{F}. (F_a \leadsto^* F_b) \wedge^{\dagger} (F_a \leadsto^* F_c) \rightarrow^{\dagger} ([\models_{\epsilon} F_b] \doteq [\models_{\epsilon} F_c])$}. The proof is by induction on the maximal formula size of {\small $F_a \in \mathfrak{F}$} in the form {\small $\neg F_a$} among occurrences of formulas in the form {\small $\neg ((F_a \gtrdot F_b) \gtrdot F_c)$} for some {\small $F_a, F_b, F_c \in \mathfrak{F}$}, a sub-induction on the sum of the formula size of each such occurrence and a sub-sub-induction on the length of the reduction. Let us consider base cases where, throughout reductions, there occur no such formulas. We show that {\small $F$} reduces uniquely into some unit tree {\small $F_{\alpha}$} (upon which the result in the previous sub-section applies).
Observe that for any reduction rule {\small $r$} and for any {\small $F_1 \in \mathfrak{F}$}, there is a unique {\small $F_2 \in \mathfrak{F}$} such that {\small $F_1 \leadsto_r F_2$} (obvious). Hence we simply proceed by induction on the number of steps into a unit tree. If {\small $F$} is a unit tree, then vacuous by the results of the previous sub-section. Otherwise, we assume that the current theorem holds true for all the numbers of steps up to {\small $k \in \mathbb{N}$} and on the assumption show that it still holds true with the reduction with {\small $k+1$} steps. Consider which reduction rule applied initially. \begin{enumerate} \item {\small $ $} \end{enumerate} \end{proof} \begin{lemma}[Mutual exclusion] Given any {\small $F \in \mathfrak{F}$}, if\linebreak {\small $[\models_{\epsilon} F] = 1$}, then {\small $[\models_{\epsilon} F] \not= 0$}; and if {\small $[\models_{\epsilon} F] = 0$}, then {\small $[\models_{\epsilon} F] \not= 1$}. \label{mutually_exclusive} \end{lemma} \begin{proof} It suffices to show that the disjunctive normal form in gradual classical logic is indeed a normal form. But then it suffices to show that {\small $(\mathfrak{Y}, \oplus, \odot)$} obeys the law of Boolean algebra. Associativity, commutativity and distributivity hold by definition. We show the idempotence: \end{proof} } By the result of Theorem \ref{theorem_1} and Theorem \ref{theorem_normalisation}, we may define implication: {\small $F_1 \supset F_2$} to be an abbreviation of {\small $\neg F_1 \vee F_2$}, {\it exactly the same} as in classical logic. \section{Decidability} We show a decision procedure {\small $\oint$} for universal validity of some input formula {\small $F$}. Here, {\small $z: Z$} for some {\small $z$} and {\small $Z$} denotes a variable {\small $z$} of type {\small $Z$}. We also assume the terminology of `object level', which is defined inductively.
Given {\small $F$} in unit chain expansion, (A) If {\small $s \in \mathcal{S}$} in {\small $F$} occurs as a non-chain or as a head of a unit chain, then it is said to be at the 0-th object level. (B) If it occurs in a unit chain as {\small $s_0 \gtrdot \dots \gtrdot s_k \gtrdot s$} or as {\small $s_0 \gtrdot \dots \gtrdot s_k \gtrdot s \gtrdot ...$} for some {\small $k \in \mathbb{N}$} and some {\small $s_0, \dots, s_k \in \mathcal{S}$}, then it is said to be at the {\small $(k+1)$}-th object level. Further, assume a function {\small $\code{toSeq}: \mathbb{N} \rightarrow \mathcal{S}^*$} satisfying {\small $\code{toSeq}(0) = \epsilon$} and {\small $\code{toSeq}(k+1) = \underbrace{\top. \dots.\top}_{k+1}$}. \begin{description} \item[{\small $\oint(F: \mathfrak{F}, \code{object}\_\code{level} : \mathbb{N} )$}]{\ }\\ \textbf{returning either 0 or 1}\\ $\backslash\backslash$ This pseudo-code uses {\small $n, o:\mathbb{N}$}, {\small $F_a, F_b:\mathfrak{F}$}. \\ \textbf{L0: } Duplicate {\small $F$} and assign the copy to {\small $F_a$}. If {\small $F_a$} is not already in unit chain expansion, then reduce it into a formula in unit chain expansion. \\ \textbf{L1: } {\small $F_b := \squash(F_a, \code{object}\_\code{level})$}. \\ \textbf{L2: } {\small $n := \code{COUNT}\_\code{DISTINCT}(F_b)$}.\\ \textbf{L3$_0$: } For each {\small $\mathsf{I}: \code{toSeq}(\code{object}\_\code{level}) \times \mathcal{S}$} distinct for the {\small $n$} elements of {\small $\mathcal{S}$} at the given object level, Do: \\ \textbf{L3$_1$: } If {\small $\sat(F_b, \mathsf{I})$}, then go to \textbf{L5}.\\ \textbf{L3$_2$: } Else if no unit chains occur in {\small $F_a$}, go to \textbf{L3}$_5$.\\ \textbf{L3$_3$: } {\small $o := \oint(\code{REWRITE}(F_a, \mathsf{I}, \code{object}\_\code{level}),$}\\ {\small $\code{object}\_\code{level} + 1)$}. \\ \textbf{L3$_4$: } If {\small $o = 0$}, go to \textbf{L5}. \\ \textbf{L3$_5$: } End of For Loop. \\ \textbf{L4: } return 1.
$\backslash\backslash$ Yes.\\ \textbf{L5: } return 0. $\backslash\backslash$ No.\\ \end{description} \begin{description} \item[\squash({\small $F: \mathfrak{F}, \code{object}\_\code{level}: \mathbb{N}$}) returning {\small $F': \mathfrak{F}$}] {\ }\\ \textbf{L0}: {\small $F' := F$}. \\ \textbf{L1}: For every {\small $s_0 \gtrdot s_1 \gtrdot \dots \gtrdot s_{k}$} for some {\small $k \in \mathbb{N}$} greater than or equal to \code{object}\_\code{level} and some {\small $s_0, s_1, \dots, s_{k} \in \mathcal{S}$} occurring in {\small $F'$}, replace it with {\small $s_0 \gtrdot \dots \gtrdot s_{\code{object}\_\code{level}}$}. \\ \textbf{L2}: return {\small $F'$}. \end{description} \begin{description} \item[$\code{COUNT}\_\code{DISTINCT}(F : \mathfrak{F})$ returning {\small $n : \mathbb{N}$}] {\ } \\ \textbf{L0}: return {\small $n:=$} (number of distinct members of {\small $\mathcal{A}$} in {\small $F$}). \end{description} \begin{description} \item[\sat({\small $F: \mathfrak{F}, \mathsf{I}: \mathsf{I}$}) returning \code{true} or \code{false}]{\ }\\ \textbf{L0}: return \code{true} if, for the given interpretation {\small $\mathsf{I}$},\linebreak {\small $[(\mathsf{I}, \mathsf{J}) \models F] = 0$}. Otherwise, return \code{false}. \end{description} \begin{description} \item[\rewrite({\small $F: \mathfrak{F}, \mathsf{I}: \mathsf{I}, \code{object}\_\code{level}: \mathbb{N}$}) returning {\small $F': \mathfrak{F}$}]{\ }\\ \textbf{L0}: {\small $F' := F$}. \\ \textbf{L1}: remove all the non-unit-chains and unit chains shorter than or equal to \code{object}\_\code{level} from {\small $F'$}. The removal is in the following sense: if {\small $f_x \wedge F_x$}, {\small $F_x \wedge f_x$}, {\small $f_x \vee F_x$} or {\small $F_x \vee f_x$} occurs as a sub-formula in {\small $F'$} for {\small $f_x$} one of those just specified, then replace them, not simultaneously but one at a time, with {\small $F_x$} until no more reductions are possible.
\\ \textbf{L2$_0$}: For each unit chain {\small $f$} in {\small $F'$}, Do:\\ \textbf{L2$_1$}: if the head of {\small $f$} is 0 under {\small $\mathsf{I}$}, then remove the unit chain from {\small $F'$}; else replace the head of {\small $f$} with {\small $\top$}. \\ \textbf{L2$_2$}: End of For Loop. \\ \textbf{L3}: return {\small $F'$}. \end{description} The intuition of the procedure is found within the proof below. \begin{proposition}[Decidability of gradual classical logic] {\small $\oint(F, 0)$} decides universal validity of {\small $F$}, with complexity at most \code{EXPTIME}. \end{proposition} \begin{IEEEproof} We show that it is a decision procedure. That the complexity bound cannot be worse than \code{EXPTIME} is clear from the semantics (for \textbf{L0}) and from the procedure itself. Consider \textbf{L0} of the main procedure. This reduces a given formula into a formula in unit chain expansion. In \textbf{L1} of the main procedure, we get a snapshot of the input formula. We extract from it the components of the 0-th object level, and check whether it is (un)satisfiable. The motivation for this operation is as follows: if the input formula is contradictory at the 0-th object level, the input formula is contradictory by the definition of {\small $\mathsf{J}$}. Since we are considering validity of a formula, we need to check all the possible valuation frames. Their number is determined by the distinct {\small $\mathcal{A}$} elements. \textbf{L2} gets this number ({\small $n$}). The For loop starting at \textbf{L3}$_0$ iterates through the {\small $2^n$} distinct interpretations. If the snapshot is unsatisfiable for any such valuation frame, it cannot be valid, which in turn implies that the input formula cannot be valid (\textbf{L3}$_1$). If the snapshot is satisfiable and if the maximum object-level in the input formula is the 0-th, \emph{i.e.} the snapshot is the input formula, then the input formula is satisfiable for this particular valuation frame, and so we check the remaining valuation frames (\textbf{L3}$_2$).
Otherwise, if it is satisfiable and if the maximum object-level in the input formula is not the 0-th, then we need to check that the snapshots at all the other object-levels of the input formula are satisfiable under all the valuation frames. We do this check by recursion (\textbf{L3}$_3$). Notice the first parameter {\small $\code{REWRITE}(F_a, \mathsf{I}, \code{object}\_\code{level})$} here. This returns some formula {\small $F'$}. At the beginning of the sub-procedure, {\small $F'$} is a duplicated copy of {\small $F_a$} (not {\small $F_b$}). Now, under the particular 0-th object level interpretation {\small $\mathsf{I}$}, some unit chains in {\small $F_a$} may already have been evaluated to 0. We do not need to consider them at any deeper object-level, so we remove them from {\small $F'$}. Otherwise, in all the remaining unit chains, the element at the 0-th object level gets a local interpretation of 1. So we replace the {\small $\mathcal{S}$} element at the 0-th object level with {\small $\top$}, which always gets 1. Finally, all the non-chain {\small $\mathcal{S}$} constituents and all the chains shorter than or equal to \code{object}\_\code{level} in {\small $F_a$} are irrelevant at a higher object-level, so we also remove them (from {\small $F'$}). We pass this {\small $F'$} and an incremented \code{object}\_\code{level} to the main procedure for the recursion. \\ \indent The recursive process continues until a sub-formula passed to the main procedure turns out to be invalid, in which case the recursive call returns 0 (\textbf{L3}$_1$ and \textbf{L5} in the main procedure) to the caller, who assigns 0 to $o$ (\textbf{L3}$_3$), and again returns 0 (\textbf{L3}$_4$), and so on until the first recursive caller. The caller receives 0 once again and concludes that {\small $F$} is invalid, as expected. Otherwise, we have that {\small $F$} is valid, for we considered all the valuation frames.
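As a small worked illustration of the enumeration and the recursion just described (the example is ours; {\small $s_1, s_2, s_3 \in \mathcal{S}$} are hypothetical atoms):

```latex
% Hypothetical worked example (ours) of a run of the procedure.
Take {\small $F = (s_1 \gtrdot s_2) \vee s_3$}, which is already in
unit chain expansion. At object level 0 we have
{\small $F_b = \squash(F, 0) = s_1 \vee s_3$} and {\small $n = 2$},
so the {\small $2^2$} interpretations of {\small $(s_1, s_3)$} are
enumerated. Under the interpretation with
{\small $\mathsf{I}(s_1) = \mathsf{I}(s_3) = 0$}, the snapshot
evaluates to 0, so the procedure returns 0: {\small $F$} is not
valid. Under {\small $\mathsf{I}(s_1) = 1$} and
{\small $\mathsf{I}(s_3) = 0$}, by contrast,
{\small $\code{REWRITE}(F, \mathsf{I}, 0)$} removes the non-chain
{\small $s_3$}, replaces the head {\small $s_1$} with {\small $\top$}
to give {\small $\top \gtrdot s_2$}, and the procedure recurses on it
at object level 1.
```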
The number of recursive calls is finite, since each recursive call strictly increments \code{object}\_\code{level}, which is bounded by the maximal length of a unit chain in {\small $F$}.\\ \end{IEEEproof} \hide{ \begin{proof} \textbf{L1} of {\small $\oint$} executes in \code{EXPTIME}; \textbf{L2}, \textbf{L3} and \textbf{L5} in \code{PCOMPLETE}; and \textbf{L4} in constant time. \end{proof} } \hide{ \section{Proof System} We start by defining meta-formula notations. By {\small $\mathfrak{S}$} we denote the set of structures whose elements {\small $\Gamma$} with or without a sub-/super- script are constructed from the grammar;\\ \indent {\small $\Gamma := F \ | \ \Gamma \hookleftarrow \Gamma \ | \ \Gamma; \Gamma$}. \\ Only the following full associativity and commutativity are defined to be holding among elements of {\small $\mathfrak{S}$}: For all {\small $\Gamma_1, \Gamma_2, \Gamma_3 \in \mathfrak{S}$}; \begin{itemize} \item {\small $\Gamma_1; (\Gamma_2; \Gamma_3) = (\Gamma_1; \Gamma_2); \Gamma_3$}. \item {\small $\Gamma_1; \Gamma_2 = \Gamma_2; \Gamma_1$}. \end{itemize} {\small $\hookleftarrow$} is right associative: {\small $\Gamma_1 \hookleftarrow \Gamma_2 \hookleftarrow \Gamma_3$} is interpreted as {\small $\Gamma_1 \hookleftarrow (\Gamma_2 \hookleftarrow \Gamma_3)$}. The set of sequents is denoted by {\small $\mathfrak{D}$} and is defined by: {\small $\mathfrak{D} := \{\ \Gamma \vdash \ | \ \Gamma \in \mathfrak{S}\}$}. Its elements are referred to by {\small $D$} with or without a sub-/super-script. As is customary in a proof system, some structures in a sequent may be empty. They are indicated by a tilde {\small $\widetilde{ }$} over them, \emph{e.g.} {\small $\widetilde{\Gamma}$}. Contexts, representations of a given structure, are defined as below. Due to the length of the definition, we first state a preparatory definition of specialised structures.
\begin{definition}[Specialised structures]{\ } \begin{description} \item[\textbf{Unit structures}]{\ } \begin{description} \item[\textbf{Horizontally unitary structures}]{\ }\\The set of those is denoted by {\small $\mathfrak{S}^{uH}$}, and forms a strict subset of {\small $\mathfrak{S}$}. It holds that;\\ \indent {\small $\forall \gamma \in \mathfrak{S}^{uH}. (\neg^{\dagger} \exists \Gamma_1, \Gamma_2 \in \mathfrak{S}.\gamma = \Gamma_1; \Gamma_2)$}. \item[\textbf{Vertically unitary structures}]{\ }\\ The set of those is denoted by {\small $\mathfrak{S}^{uV}$}, and forms a strict subset of {\small $\mathfrak{S}$}. It holds that; \indent {\small $\forall \kappa \in \mathfrak{S}^{uV}. (\neg^{\dagger} \exists \Gamma_1, \Gamma_2 \in \mathfrak{S}.\kappa = \Gamma_1 \hookleftarrow \Gamma_2)$}. \end{description} \item[\textbf{Chains}]{\ } \begin{description} \item[\textbf{Unit chains}]{\ }\\ The set of those is denoted by {\small $\mathfrak{C}$}, and is formed by taking a set union of (A) the set of all the structures in the form: {\small $s_1 \hookleftarrow s_2 \hookleftarrow \cdots \hookleftarrow s_{k + 1}$} for {\small $k \in \mathbb{N}$} such that {\small $s_i \in \mathcal{S}$} for all {\small $1 \le i \le k + 1$} and (B) a singleton set {\small $\{\epsilon\}$} denoting an empty structure. \item[\textbf{Sub-chains}]{\ } \\ Given a horizontally unitary structure {\small $\gamma = \kappa_1 \hookleftarrow \kappa_2 \hookleftarrow \cdots \hookleftarrow \kappa_{k+1}$} for some {\small $\kappa_1, \kappa_2, \cdots, \kappa_{k+1} \in \mathfrak{S}^{uV}$} for {\small $k \in \mathbb{N}$}, its sub-chain is any of {\small $\kappa_1 \hookleftarrow \kappa_2 \hookleftarrow \kappa_i$} for {\small $1 \le i \le {k+1}$}. 
\end{description} \item[\textbf{Upper structures}]{\ }\\ Given a structure {\small $\Gamma \in \mathfrak{S}$} such that {\small $\Gamma = \gamma_1; \gamma_2; \cdots; \gamma_{k+1}$} for some {\small $\gamma_1, \gamma_2, \cdots, \gamma_{k+1} \in \mathfrak{S}^{uH}$} for {\small $k \in \mathbb{N}$}, the set of its upper structures is defined to contain all the structures {\small $\gamma'_1; \gamma_2'; \cdots; \gamma_{k+1}'$} such that, for all {\small $1 \le i \le k+1$}, {\small $\gamma_i'$} (if not empty) is a sub-chain of {\small $\gamma_i$}. \\ \end{description} \end{definition} \begin{definition}[Contexts of a given structure]{\ }\\ Let {\small $\Omega(\alpha, \beta)$} for {\small $\alpha \in \mathfrak{C}$} and {\small $\beta \in \mathfrak{S}$} denote what we call a representation. Let {\small $\mathfrak{R}$} denote the set of representations. Let {\small $P$} be a predicate over {\small $\mathfrak{S} \times \mathfrak{R}$} defined by; \\ \indent {\small $P(\Gamma_1, \Omega(\Psi, \Gamma_2))$} for some {\small $\Gamma_1, \Gamma_2 \in \mathfrak{S}$} and some {\small $\Psi \in \mathfrak{C}$} iff; \begin{itemize} \item if {\small $\Psi = \epsilon$}, then {\small $\Gamma_1 = \Gamma_2$}. 
\item if {\small $\Psi = s_1 \hookleftarrow s_2 \hookleftarrow \cdots \hookleftarrow s_{k +1}$} for some {\small $k \in \mathbb{N}$}, then supposing {\small $\Gamma_1 = \gamma_1; \gamma_2; \cdots; \gamma_{j+1}$} for some {\small $j \in \mathbb{N}$}; there exists at least one {\small $\gamma_i$} for {\small $1 \le i \le j +1$} such that {\small $\gamma_i = (s_1; \widetilde{\kappa_{x1}}) \hookleftarrow (s_2; \widetilde{\kappa_{x2}}) \hookleftarrow \cdots \hookleftarrow (s_{k+1}; \widetilde{\kappa_{xk+1}}) \hookleftarrow \Gamma_{yi}$} for some {\small $\kappa_{x1}, \kappa_{x2}, \cdots, \kappa_{xk+1} \in \mathfrak{S}^{uV}$} such that, for all such {\small $i$}, \emph{i.e.} {\small $i \in \{i1, i2, \cdots, im\}$} for {\small $1 \le |\{i1, i2, \cdots, im\}| \le j +1$}, {\small $\Gamma_2$} is an upper structure of {\small $\Gamma_{yi1}; \Gamma_{yi2}; \cdots; \Gamma_{yim}$}. \end{itemize} {\ }\\ \end{definition} The proof system for gradual classical logic is found in Figure \ref{relevant_system}. \subsection{Main properties} \begin{definition}[Interpretation] Interpretation of a sequent is a function {\small $\overline{\cdot}: \mathfrak{D} \rightarrow \mathfrak{F}$}, defined recursively as follows, in conjunction with {\small $\overline{\cdot}^{\mathfrak{S}}: \mathfrak{S} \rightarrow \mathfrak{F}$}; \begin{itemize} \item {\small $\overline{\Gamma \vdash } = \neg \overline{\Gamma}^{\mathfrak{S}}$}. \item {\small $\overline{\Gamma_1; \Gamma_2}^{\mathfrak{S}} = \overline{\Gamma_1}^{\mathfrak{S}} \wedge \overline{\Gamma_2}^{\mathfrak{S}}$}. \item {\small $\overline{\Gamma_1 \hookleftarrow \Gamma_2}^{\mathfrak{S}} = \overline{\Gamma_1}^{\mathfrak{S}} \gtrdot \overline{\Gamma_2}^{\mathfrak{S}}$}. \item {\small $\overline{F}^{\mathfrak{S}} = F$}. \end{itemize} \label{interpretation_sequent} \end{definition} \begin{theorem}[Soundness] If there exists a closed derivation tree for {\small $F \vdash $}, then {\small $F$} is unsatisfiable. 
\label{soundness} \end{theorem} \begin{proof} By induction on derivation depth of the derivation tree. Base cases are when it has only one conclusion. In case the axiom inference rule is {\small $id$}, we need to show that any chain which looks like {\small $\Psi_1 \gtrdot F_1 \wedge a \wedge a^c \gtrdot \widetilde{F_2}$} \end{proof} \begin{theorem}[Completeness] If {\small $[\models_{\epsilon} F] = 0$}, then there exists a closed derivation tree for {\small $F \vdash $}. \label{completeness} \end{theorem} \begin{proof} We show that \code{GradC} can simulate all the processes of the decision procedure. \begin{description} \item[Transformation into disjunctive normal form]{\ }\\ This is achieved if all the axioms in \textbf{Transformations} can be simulated. For each axiom corresponds an inference rule. \item[Truncate]{\ }\\ Though this step is not necessary due to the definition of {\small $id$} and {\small $\bot$}, in case a sequent is unsatisfiable at the first chain-lets, then we can truncate all the unit chains into 1-long unit chains via {\small $Wk_{1,2}$}. \item[Satisfiability check]{\ }\\ A sequent with 1-long unit chains makes use of connectives found in classical logic only. All the inference rules that are needed for a check on theorem/non-theorem of a given formula in standard classical logic are available in the given proof system. \item[Rewrite and maximum chain length check]{\ }\\ Via $\code{Advance}\curvearrowright$. \end{description} \end{proof} Leave some comments here that these inference rules seem to suggest a more efficient proof search strategy; but that I leave it open. } \section{Conclusion and Related Thoughts} There are many existing logics to which gradual classical logic can relate, including ones below. ``G(g)radual classical logic'' is abbreviated by \code{Grad}. \subsection{Para-consistent Logic} In classical logic a contradictory statement implies just anything expressible in the given domain of discourse. 
Not so in the family of para-consistent logics, where it is distinguished from other forms of inconsistency \cite{Marcos05}; what is trivially the case in classical logic, say {\small $a_1 \wedge a_1^c \supset a_2$} for any propositions {\small $a_1$} and {\small $a_2$}, is not an axiom. More precisely, it may be that to each contradiction expressible in a para-consistent logic there is an associated sub-domain of discourse within which it entails anything; however, just as \code{Grad} internalises classical logic, so do para-consistent logics, revealing the extent of the explosiveness of contradiction within them. In some sense para-consistent logics model parallel activities as seen in concurrency. What \code{Grad} on the other hand aims to model is conceptual scoping. As they do not pose an active conflict to each other, it should be possible to derive an extended logic which benefits from both features. \hide{The following may be one interesting discussion point. With the understanding of \code{Grad}, it is reasonable to think that, given two sub-domains {\small $D_1$} and {\small $D_2$} such that {\small $a_1, a_1^c \in D_1$}, that {\small $a_2 \not\in D_1$}, that {\small $a_1, a_1^c \not\in D_2$} and that {\small $a_2 \in D_2$}, if {\small $D_1$} and {\small $D_2$} exist sufficiently independently such that to talk about {\small $a_1, a_1^c$} and {\small $a_2$} is not possible in classical logic, then we can prevent ourselves from deriving {\small $a_1 \wedge a_1^c \supset a_2$} by appropriately extending the semantics of classical {\small $\supset$}.
Likewise, if for example {\small $a_1, a_1^c, a_2 \in D_1$}, and {\small $a_1, a_1^c \in D_2$}, then if the semantics of {\small $\wedge$} and possibly also {\small $\supset$} are extended such that {\small $a_1 (\in D_1) \wedge a_1^c (\in D_2) \supset a_2 (\in D_1)$} comes to make sense, we can surely prevent ourselves from deriving {\small $a_1 \wedge a_1^c \supset a_2$}. But in this case again, classical logic is in fact extended than restricted. And for now as before, we continue reasoning classically both in {\small $D_1$} and {\small $D_2$}. Since the combination of {\small $D_1$} and {\small $D_2$} does not give us a classically reasonable domain of discourse, classically valid propositions are those found in {\small $D_1$} and {\small $D_2$} only, which remain valid in the briefly sketched para-consistent logic. The combination of {\small $D_1$} and {\small $D_2$} can be made classically reasonable, but in \code{Grad} at least an inner contradiction propagates outwards. Hence an inquiry: ``Can we derive, assuming synonyms of Postulate 1, a para-consistent logic whose domain of discourse is classically reasonable and whose sub-domains within which the influence of contradictions are intended to be restricted can be verified obeying the laws of classical logic from the perspective of the domain of discourse?" } \hide{ Relevant logic \cite{Anderson75} is a para-consistent logic that analyses the sense of implication, by which an implication {\small $F_1 \supset F_2$} (suppose that these {\small $F_1$} and {\small $F_2$} are formulas in classical logic) can be a true statement only if there exists certain relation, {\it relevance} as relevantists call, between {\small $F_1$} and {\small $F_2$}. They have a standpoint that material implication allows one to deduce a conclusion from an irrelevant premise to the conclusion, which they reflect should not occur if material implication were any sort of implication at all \cite{Anderson75}. 
Implication in \code{Grad} (\emph{Cf.} a remark at the end of Section {\uppercase\expandafter{\romannumeral 3}}) is, on the other hand, consistent with that in standard classical logic because, as was expounded in Section {\uppercase\expandafter{\romannumeral 1}}, it considers that anything that is expressible in classical logic is completely relevant to the given domain of discourse, and, consequently, that the effect to the contrary is physically impossible. Nevertheless, the existential fact of any attribute to an object cannot be irrelevant to the existential fact of the object by the formulation of \code{Grad}, to which extent it captures relevance as seen in object-attribute relationship. On this point, this work may provide an interesting topic for discussion. } \subsection{Epistemic Logic/Conditional Logic} Epistemic logic concerns knowledge and belief, augmenting propositional logic with epistemic operators {\small $K_c$} for knowledge and {\small $B_c$} for belief such that {\small $K_c a$}/{\small $B_c a$} means that a proposition {\small $a$} is known/believed to be true by an agent {\small $c$} \cite{Hendricks06}. \code{Grad} has a strong link to knowledge and belief, being inspired by tacit agreement on assumptions about attributed objects. To seek a correspondence, we may tentatively assign to {\small $a_0 \gtrdot a_1$} a mapping of {\small $a_0 \wedge K_c/B_c a_1$}. However, this mapping is inadequate because {\small $K_c/B_c$} enforces a global sense of knowledge/belief that does not update in the course of discourse. The relation that {\small $\gtrdot$} expresses between {\small $a_0$} and {\small $a_1$} is not captured this way. A closer mapping is achieved with the conditional operator {\small $>$} in conditional logics \cite{Horacio13}, with which we may map {\small $a_0 \gtrdot a_1$} into {\small $a_0 \wedge (a_0 > a_1)$}.
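To contrast the two candidate mappings concretely, consider the following illustration; it is ours, with hypothetical atoms {\small $\code{Sheep}$} and {\small $\code{White}$}, and is not drawn from \cite{Hendricks06} or \cite{Horacio13}.

```latex
% Hypothetical illustration (ours): two renderings of the
% attributed object Sheep \gtrdot White (``a white sheep'').
\begin{itemize}
\item Epistemic rendering: {\small $\code{Sheep} \gtrdot \code{White}
      \mapsto \code{Sheep} \wedge K_c\,\code{White}$}. Here
      {\small $K_c\,\code{White}$} asserts knowledge of whiteness
      globally, detached from the object it is an attribute of.
\item Conditional rendering: {\small $\code{Sheep} \gtrdot \code{White}
      \mapsto \code{Sheep} \wedge (\code{Sheep} > \code{White})$}.
      Here the attribute is asserted only relative to the antecedent
      {\small $\code{Sheep}$}, mirroring how {\small $\gtrdot$}
      localises {\small $\code{White}$} to the sub-domain of
      discourse that {\small $\code{Sheep}$} introduces.
\end{itemize}
```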
But under this mapping the laws of {\small $>$} no longer follow those of any of the normal, classical, monotonic or regular conditional logics (\emph{Cf.} \cite{Chellas80} or Section 3 in \cite{Horacio13}; note that the small letters {\small $a, b, c, ...$} in the latter reference are not literals but propositional formulas). RCEA holds safely, but all the rest (RCEC, RCM, RCR and RCK) fail, since two formulas {\small $b$} and {\small $c$} being equivalent in one sub-domain of discourse of \code{Grad} does not imply their equivalence in another sub-domain. Likewise, the axioms listed in Section 3 of \cite{Horacio13} fail, save CC (understood as {\small $a \wedge (a > b) \wedge a \wedge (a > c) \supset a \wedge (a > b \wedge c)$}), CMon and CM. Further study should help unravel a logical perspective on how facts that act as prerequisites for others can affect knowledge and belief. \subsection{Intensional Logic/Description Logic} Conditional logics were motivated by counterfactuals \cite{Plumpton31,Lewis01}, \emph{e.g.} ``If X were the case, then Y would be the case.'' According to the author's present understanding of reasoning about such statements, set out as an informal essay in Appendix J, the reasoning process involves a transformation of one's consciousness of an antecedent that he/she believes is impossible. However, even if we require the said transformation to be minimal in rendering the impossible X possible, we still cannot ensure a unique representation of X, so long as X is not possible. Hence X is understood to be not what it is unconditionally, but only what it is relative to the minimal transformation that was applied. The collection of the possible representations is sometimes described as the {\it extension} of X. Of course, one may have a certain intention, under which X refers to some particular representations of X. These are termed the {\it intension} of X for contrast.
\\ \indent The two terms are actively differentiated in Intensional Logic \cite{Church51,Montague74,Carnap47}. For example, suppose that we have two concepts denoting collections U and V such that their union is neither U nor V. Then, although U is certainly not equal to V, if, for instance, we regard every concept as a designator of an element of the collection, then U is V if U {\small $\mapsto$} u and V {\small $\mapsto$} v such that u = v. For a comparison, \code{Grad} does not treat intension explicitly, for if some entity equals another in \code{Grad}, then they are always extensionally equal: if the morning star is the evening star, it cannot be because the two terms designate the planet Venus that \code{Grad} says they are equal, but because they are the same. But it expresses the distinction passively in the sense that we can meta-logically observe it. To wit, consider an expression {\small $(\top \gtrdot \code{Space} \gtrdot \code{Wide}) \wedge (\code{Space} \gtrdot \code{Wide})$}. Then, depending on what the given domain of discourse is, the sense of {\small $\code{Space}$} in {\small $\top \gtrdot \code{Space} \gtrdot \code{Wide}$} may not be the same as that of {\small $\code{Space}$} in {\small $\code{Space} \gtrdot \code{Wide}$}. Similarly for {\small $\code{Wide}$}. (Incidentally, note that {\small $\gtrdot$} is not the type/sub-type relation.) The intensionality in the earlier mentioned conditional logics is, provided counterfactual statements are reasoned in line with the prescription in Appendix J, slightly more explicit: the judgement of Y depends on intension of X. But in many of the ontic conditional logics in \cite{Horacio13}, it does not appear to be explicitly distinguished from extension. 
\\ \indent It could be that \code{Grad}, once extended with predicates, may be able to express intensionality in a natural way, \emph{e.g.} we may say {\small $\exists \code{Intension}(\code{Adjective} \gtrdot \code{Sheep}) = \code{Ovine}$} (in some, and not necessarily all, sub-domains of discourse). At any rate, how much we should care for the distinction between intensionality and extensionality probably comes down to personal taste. We may study intensionality as an independent component to be added to extensional logics. We may alternatively study a logic in which extensionality is deeply intertwined with intensionality. It is the sort of applications we have in mind that favours one over the other. \\ \indent Of the logics that touch upon concepts, also worth mentioning is the family of description logics \cite{Baader10} that is influential in knowledge representation. They are a fragment of first-order logic specialised in setting up knowledge bases, in reasoning about their contents and in manipulating them \cite{Baader10-2}. The domain of discourse, a knowledge base, is formed of two components. One, called TBox, stores knowledge that does not usually change over time: specifically, (1) concepts (corresponding to unary predicates in first-order logic) and (2) roles (corresponding to binary predicates). The other, ABox, stores contingent knowledge of assertions about individuals, \emph{e.g.} that Mary, an individual, is a mother, a general concept. Given the domain of discourse, there are then reasoning facilities in description logics responsible for checking satisfiability of an expression as well as for judging whether one description is a sub-/super-concept of another (here a super-concept of a concept is not ``a concept of a concept'' in the sense of \cite{Church51}). \\ \indent Description logics were developed from specific applications, and capture a rigid sense of the concept.
It should be of interest to see how \code{Grad} may be specialised for applications in computer science. Another promising direction is to see whether the use of {\small $\gtrdot$} as a meta-relation on description logic instances can lead to results that have conventionally been difficult to obtain. \subsection{Combined Logic} \code{Grad} is a particular kind of combined logic \cite{stanford11,Gabbay96a,Caleiro05}, combining the same logic over and over finitely many times. The presence of the extra logical connective {\small $\gtrdot$} scarcely diverts it from the philosophy of combined logics. Instead of regarding base logics\footnote{A base logic of a combined logic is one that is used to derive the combined logic.} as standing on an equal footing, however, this work recognised a certain sub-ordination between base logics, characterised by the new logical connective. Object-attribute negation also bridges across the base logics. Given these, a combination of finitely many base logics at once made more sense than combinations of two base logics finitely many times, for the latter approach may not be able to adequately represent the meta-base-logic logical connectives with the intended semantics of gradual classical logic. Investigation into this sub-set of combined logics could have merits of its own. \hide{which was contemplated by Gabbay in his fifth agenda \cite{Gabbay96a} and was reminded again in \cite{Arisaka13}. } \subsection{Conclusion} This work presented \code{Grad} as a logic for attributed objects. Its mechanism should be easily integrated into many non-intuitionistic logics. Directions for future research were also suggested at length through comparisons. Considering its variations should also be interesting. As for applications of gradual logics, program analysis/verification, databases, and artificial intelligence come to mind.
\hide{ \section*{Acknowledgement} Thanks are due to Danko Ilik for his support, Yusuke Kawamoto for constructive discussions, Dale Miller for an introduction to intensionality/extensionality in logic, and Ana{\"i}s Lion for the reference by Church. } \hide{ Anything that appears as a proposition in classical logic are completely relevant to the truth that they and interpretations on them define. As far as I could gather from \cite{stanford2012} and the references that it suggests such as \cite{Mares07}, \subsection{Combined Logic} \subsection{Left-over} Therefore our conversation, to which the presences of objects but also of attributes are substantial, spins around manipulation of conceptual hierarchies inherent in attributed objects. When we reason about relations that hold between given existences, an existential contradiction may not be found simply in spotting, say `Book A exists' and `Book A does not exist', for it is plausible that at one point we may discuss about the collection of books in Public Library and that at another point about books held in University Library, in which case the conversational context allows a statement `Book A exists (in Public Library).' and another statement `Book A does not exist (in University Library).' to peacefully co-exist. \\ \indent Let us try to describe the two statements in classical logic. Since, if we were to make both `Book A exists' and `Book A does not exist' an atomic proposition in our domain of discourse, a contradiction would be immediate in conjunctive concatenation of the two statements, it is necessary that we probe other options available to us. But if we were, for the fear of the imminent contradiction, to treat `Book A exists in Public Library.' and `Book A does not exist in University Library.' as atomic propositions, we would lose all the relations that they share, such as that both talk about existence of Book A in some library, at propositional level. 
On the other hand, if, then, we were, for the lack of sufficient expressiveness power in propositional logic, to seek assistance in some predicate of the kind $\code{ExistsIn}(\cdot, \cdot)$, to for instance have a sentence $\code{ExistsIn}$(Book A, Public Library) $\wedge$ $\neg \code{ExistsIn}$(Book A, University Library), our domain of discourse would place all the three entities `Book A', `Public Library', and `University Library' collaterally, and this time it is the conversational context that would be forever lost. In short, it is not so easy to capture conceptual hierarchies in standard classical logic. There are also certain peculiarity and, be they intentional or unwitting consequences, ambiguities that seep in when we talk about attributed objects. For example, as we are to illustrate in Section {\uppercase\expandafter{\romannumeral 2}}, negation on an attributed object is innately ambiguous if no conversational clues that may aid us to winnow down its range of potency are given. Whether it is acting upon an attribute, upon the object or upon the attributed object may not be determinable. But how do we express the three types of negations given an attributed object if (1) there is only one negation and (2) all the atomic propositions in the domain of discourse are on equal significance in any discourse on them that we may make? It is again a labouring task to simulate those ambiguities of conceptual hierarchies in an intuitive fashion. \\ \indent Hence, not very surprisingly, we will set about developing a logic in which attributed objects can be reasonably represented. However, the aphorism by Girard: {\it Witness the fate of non-monotonic ``logics'' who tried to tamper with logical rules without changing the basic operations\ldots}\cite{DBLP:journals/tcs/Girard87} should not be taken so lightly, either. 
We also do not take the stance that material implication is paradoxical \cite{Mares07}, and hold onto the viewpoint that our change to classical logic should be small enough not to break the fundamental logical principles that it nurtures. \\ \indent Hence it is reasonable that we define a new connective: we annex {\small $\gtrdot$} that is read as ``is descriptive of'' or as ``belongs to'', and habituate the other logical connectives to the extended logic in accordance with initial philosophical investigation about the interactions between the `old' connectives and {\small $\gtrdot$}. With the novel connective in place, we achieve the effect that the sense of the logical truth gradually shifts in the new logic; hence the appellation of gradual classical logic. All the mathematical rigorousness to follow will be a symbolic paraphrasing of the philosophical development. \subsection{Key undertakings} Roughly in the order they appear in this document; \begin{itemize} \item Philosophical motivation and characterisations of gradual classical logic (Section {\uppercase\expandafter{\romannumeral 1}} and Section {\uppercase\expandafter{\romannumeral 2}}). \item Mathematical development of semantics for gradual classical logic by means of dependent interpretations (valuations) and of reductions of logical entailments (Section {\uppercase\expandafter{\romannumeral 3}}). \item Identification that gradual classical logic is not para-consistent and that it is decidable (Section {\uppercase\expandafter{\romannumeral 4}}). \item A sequent calculus of gradual classical logic which is sound and complete with respect to the semantics (Section {\uppercase\expandafter{\romannumeral 5}}). \end{itemize} The relation of ``is descriptive of/belongs to'' seen in attributed objects undertakes an important role in our conversation, allowing hierarchical constructions of concepts. Their influence does not confine within natural language. 
As exemplified in fields such as object-oriented programming paradigms, relational databases and computer vision, to name a few, it is a ubiquitous concept within computer science, and much more so in many fields of mathematics. \\ \indent Despite the appeal that attributed objects have in applications, however, the hierarchical reasoning required to accommodate the ambiguities and peculiarities that emerge in their handling is not something provided as a fundamental reasoning tool within classical logic or, in fact, within the other non-classical logics considered in this paper. \\ \indent {\it A short dialogue:} ``(Looking at desk) There is a book.'' ``(Abstractedly, not turning back to the speaker) Which book?'' ``Meditations and Other Metaphysical Writings''. ``And the author?'' ``It is Ren{\'e} Descartes.'' ``In which country was he born?'' ``In France.'' {\it Period}\\ \indent In the above, `a book' caught the attention of the first speaker, defining a main subject of the dialogue. Through the inquiries and replies, the general term acquired more descriptive pieces of information: of the title and of the author. The last inquiry by the second speaker is slightly discursive, signaling a change in subject from the book to its author. \\ \indent For the purpose of this paper, those that act as the main subjects are objects/concepts. Those that provide details (adjectives) to them are called attributes, by contrast. In the above dialogue the birthplace of France is an attribute to `Ren{\'e} Descartes', forming one attributed object; and `Meditations and Other Metaphysical Writings', the title, and that it is written by French-born Ren{\'e} Descartes are attributes to the `book', forming another attributed object: French-born Ren{\'e} Descartes' book `Meditations and Other Metaphysical Writings'.
\\ \indent Now, because we talk about logic we would like to put attributed objects into some logical framework that allows us to capture their existential relations such as; if A is, then it follows that B is. How shall we construct a domain of discourse for reasoning about them? Of course it could be that classical logic is already sufficient. For what appeared in the short dialogue above, we may have a set comprising several elements: `Book' which we denote by B, `Ren{\'e} Descartes' which we denote by R, `Meditations and Other Metaphysical Writings' by M and `France' by F. Then, we first express that Ren{\'e} Descartes was born in France by defining a predicate \code{WasBornIn} so that \code{WasBornIn}(R, F) is a true statement. Likewise with the predicate \code{IsWrittenBy}, we make \code{IsWrittenBy}(M, R) true; and with the predicate \code{IsTitledBy}, we make \code{IsTitledBy}(B, M) true. Then that we have the two attributed objects may be expressed in the following sentence: \code{IsTitledBy}(B, M) {\small $\wedge$} \code{WasBornIn}(R, F) {\small $\wedge$} \code{IsWrittenBy}(M, R). However, that we do not discriminate main subjects and attributes incurs certain inconvenience in many conversational contexts that are influenced by conceptual dependencies, as we purport to illustrate in the next sub-section. 
But then if we are to adopt more drastic policy that only the main subjects shall be in the domain of discourse and that anything else that may embellish them shall take a form of predicate, having in the domain of discourse two elements `B'ook and `R'en{\'e} Descartes, and expressing the said two attributed objects by; \code{IsWrittenByFrenchBornReneDescartes}(B) {\small $\wedge$} \code{IsMeditationsAndOtherMetaphysicalWritings}(B) {\small $\wedge$} \code{IsBornInFrance}(R), it should not take so long till it dawns on us that the presence of `R'en{\'e} Descartes in the domain of discourse may be amiss because it is an attribute to `B'ook; and that, secondly, the predicates, to take into account the restriction, must themselves be very specific, to the point that it would have had produced almost the same effect had we simply had three atomic propositions in place of the predicates. \\ \indent Of course it is not only in natural language that such conceptual dependencies play an important role. Many fields in computer science, relational database and object-oriented programming languages to name a few, are reliant on hierarchical structures, for which the relation that some concept depends on another itself forms an essential component. \\ \indent In this document we set forth analysing some peculiar logical characteristics that attributed objects exhibit from which we are to develop a new logic that accommodate them. It is hierarchical, and the strength of truth changes gradually. Hence in the name is `gradual'. Starting philosophical investigation, we begin more mathematical a formulation of logic, and prescribe both semantics and proof system. An important result is that the gradual logic is not paraconsistent. It is also decidable. Comparisons to other logics follow. } \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} The W3 giant molecular cloud (GMC) together with W4 and W5 form part of a massive star forming complex in the Perseus Arm. With an estimated mass of $\sim4\times10^5$\,M$_\odot$ (\citealp{moore2007}; \citealp{polychroni2010}) and a size of $\sim1.5$\,deg$^2$ this cloud is one of the most massive in the outer Galaxy. Located at a distance of $\sim2$\,kpc (e.g., \citealp{xu2006}; \citealp{navarete2011}), W3 comprises a wealth of \ion{H}{2} regions, clusters, and active star forming sites with clear signatures of both low-mass and high-mass star formation. \begin{figure*}[ht] \centering \includegraphics[scale=0.75,angle=270]{intro.pdf} \caption{Greyscale \textit{Spitzer}\ channel 1 mosaic, with labels marking the regions and key features in the W3 GMC.} \label{fig:intro} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[scale=0.75,angle=0]{intro_ch4.pdf} \caption{Greyscale \textit{Spitzer}\ channel 4 mosaic of W3. Intensity scale has been chosen to highlight the weaker features surrounding the main star forming regions labeled in Fig.\ref{fig:intro} (at the expense of the latter). Details in this image include several Infrared Dark Clouds (IRDCs) and filaments (e.g., ($\alpha$, $\delta$)=(02h 26m 57s, 61$^{\circ}$\ 29\arcmin\ 45\arcsec)).} \label{fig:intro_ch4} \end{figure*} Its relatively close location and its massive star content have made this cloud a prime target for the study of cluster and high-mass star formation (see \citealp{megeath2008} for a detailed review of the literature related to the study of W3), which, contrary to the low-mass case, is, overall, poorly understood. Massive OB (M$>8$\,M$_\odot$) stars are sources of UV photons, responsible for dissociating/ionizing molecules and atoms (HII regions), and of metals, which enrich the surrounding gas with heavier species.
They also affect the local and global dynamical state of the interstellar medium (ISM) through ionization and radiation pressure, stellar winds, outflows and supernovae. These stars are therefore fundamental in determining the state and evolution of the ISM, as well as the formation, maintenance, and dissipation of structures ranging from the largest, Galactic scales, to giant molecular clouds, disks, and planetary systems. Theories such as the turbulent core model \citep{mckee2003} and competitive accretion (e.g., \citealp{bonnell2005}) have been proposed to explain the formation of the rare and massive OB stars. However, the question as to whether massive star formation is simply a scaled-up version of low-mass star formation, or if it is the result of a completely different process, remains one of the main outstanding issues in star formation theory. Indeed, authors such as \citet{yorke1993} claim that a bimodality should arise from the shorter Kelvin-Helmholtz timescale for massive stars, their high $\mathrm{L}/\mathrm{M}$ ratio, and the dramatic effects their strong radiation fields have on their evolution. The immediate effects of high-mass star activity can be observed, for instance, with masers, shocks, `hot spots', and the formation of hypercompact, ultracompact and classical HII regions (e.g., \citealp{zinnecker2007} and references therein). Furthermore, while low-mass stars are known to be able to form in isolation, most star formation is thought to occur in clusters embedded in their parent GMCs \citep{lada2003}. This is particularly true for massive stars, which have been theorized to form exclusively in clusters (e.g., \citealp{dewit2005}), making cluster studies crucial in order to understand and investigate the formation of massive stars. W3 is believed to contain massive stars in various evolutionary stages (e.g., \citealp{tieftrunk1997}).
The eastern high density layer (HDL) neighboring W4 contains the most active star forming sites: W3 North, W3 Main, W3 (OH) and AFGL 333, with signatures of massive star formation in a triggered environment (e.g., \citealp{oey2005}). A rare case of spontaneous massive star formation is also believed to have occurred in KR 140, west of the HDL (e.g., \citealp{kerton2008}). \subsection{Proposed Analysis: The W3 GMC} This work is the first of a series of papers focusing on W3, which aim to shed light on the still poorly understood massive star formation process. With the advent of high-sensitivity instruments such as \textit{Spitzer}\ and the Herschel Space Observatory, this GMC presents an ideal opportunity to investigate the high-mass star/cluster formation process and its relation to triggered/spontaneous star formation events, clustered, and distributed star formation. In this analysis we follow up the study initiated by \citet{polychroni2010} to characterize the pre-stellar population in W3 by means of \textit{Spitzer}\ data. This analysis will complement the Herschel data obtained under the Guaranteed Time Key Program HOBYS\footnote{http://starformation-herschel.iap.fr/hobys/} (Herschel imaging survey of OB young Stellar objects), as well as an extensive molecular analysis (Polychroni et al., in preparation), which focuses on the formation, evolution, and dynamics of the largely unknown progenitors of massive stars and clusters in this cloud. In Section \ref{sec:data} we first describe the infrared data used to select and classify the young stellar objects (YSOs). Classification techniques and schemes, including a description of the final catalog of candidate YSOs, are included in Section \ref{sec:class}. The physical and observational properties of the YSO sample are presented in Section \ref{sec:discussion}.
Their clustering and spatial distribution, as well as the techniques employed in our analysis, are described in Section \ref{sec:groupclass}. Section \ref{sec:history} investigates the cluster properties and ages across W3, and results are used to compare the star formation history and activity in the different subregions. We conclude with a summary of the star formation in W3. \section{Photometry, Data Processing \& Datasets} \label{sec:data} \subsection{\textit{Spitzer} IRAC and MIPS Infrared Observations}\label{irac} The main active regions of W3 were observed by \textit{Spitzer} under two main Program Identification Number (PID) programs. The northern parts of the HDL comprising W3 Main and W3 (OH) (Figs.\ref{fig:intro} and \ref{fig:intro_ch4}) were first observed in 2004 under program P00127; AFGL 333 and central/western region of W3 (including KR 140 and the active region to its north: `KR 140-N') were subsequently observed in 2007 (program P30955). A preliminary analysis of the data obtained during the first observation has been presented in \citet{ruch2007}, while \citet{polychroni2010} combined data from both programs for their analysis of the stellar population associated with SCUBA cores \citep{moore2007}. For this work we also used data from both programs. For the reduction process we downloaded the IRAC and MIPS 24\,\micron\ corrected Basic Calibrated Data (cBCD) from the \textit{Spitzer}\ Archive. The IRAC\ cBCDs were produced with the S18.7.0 pipeline. P00127 and P30955 MIPS data were processed with the S18.13.0 and S18.12.0 pipelines, respectively. These files already include all the pre-processing artifact mitigation corrections, including muxbleed, column pulldown/pullup and electronic banding. 
Tiles were reduced, background-matched and mosaicked with the MOPEX\footnote{http://ssc.spitzer.caltech.edu/dataanalysistools/tools/mopex/} tool, which with the exception of the MIPS observations from program P30955 (KR 140) produced maps of higher quality than the mosaicked pBCD data already provided by the Archive. The best available mosaics were chosen for our analysis. Photometry tests performed on both the pBCDs and our own reduced mosaics show no systematic differences in channels 3 and 4, and negligible (0.04 mag) differences in channel 1 and channel 2, which is much smaller than any of our expected photometric uncertainties. \subsection{\textit{Spitzer} Source Extraction and Photometry}\label{irac2} A preliminary list of sources was obtained using SExtractor \citep{bertin1996} and a Mexican hat filter, which was observed to perform the best (with the highest detection rate of visually confirmed sources in the region) in the extremely crowded regions found in W3, especially W3 Main and W3 (OH). We note that this also introduced a significant number of artifacts and false detections, which we eliminated through a series of cleaning steps described below. SExtractor also produced noise and background maps, which we also checked visually to ensure optimal source extraction. This preliminary list was subsequently fed into the point source extraction package in MOPEX (APEX) for point response function (PRF) fitting, which provides a more accurate centroid calculation as well as additional statistics for each source resulting from the fitting (e.g., $\chi^2$, signal-to-noise ratio (SNR), etc).
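Both the mosaic-to-mosaic photometry checks above and the band-merging described next reduce to cross-matching source lists within a small radius and, for the photometry tests, taking the rms of the matched magnitude differences. The sketch below is a minimal flat-sky illustration of that bookkeeping, not the GATOR/IRSA or MOPEX machinery actually used; for the arcsecond-scale separations involved the flat-sky approximation is adequate.

```python
import math

def match_and_rms(cat_a, cat_b, radius_arcsec=2.0):
    """Match two source lists by position (naive O(N*M) sketch) and return
    the rms of the magnitude differences of the matched pairs.

    cat_a, cat_b : lists of (ra_deg, dec_deg, mag) tuples.  Angular distance
    is approximated with a flat-sky formula; a real analysis would use a
    proper spherical match on the full catalogs.
    """
    diffs = []
    for ra_a, dec_a, m_a in cat_a:
        best = None  # (separation, magnitude) of the closest counterpart
        for ra_b, dec_b, m_b in cat_b:
            cosd = math.cos(math.radians(dec_a))
            sep = 3600.0 * math.hypot((ra_a - ra_b) * cosd, dec_a - dec_b)
            if sep <= radius_arcsec and (best is None or sep < best[0]):
                best = (sep, m_b)
        if best is not None:
            diffs.append(m_a - best[1])
    if not diffs:
        return None  # no counterparts within the matching radius
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Run on two versions of the same field, the returned rms plays the role of the $0.04$ and $<0.2$ mag figures quoted in the text.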
When bandmerging the IRAC list with the 2MASS Point Source Catalog (through the GATOR interface in the NASA/IPAC Infrared Science Archive; IRSA \footnote{http://irsa.ipac.caltech.edu/applications/Gator/}) we find the accuracy of our final PRF-fitted coordinates to be generally better than $0$\arcsec.5, although in this work we allow for a more conservative matching radius of $2$\arcsec\ when bandmerging catalogs at different wavelengths and/or instruments. The internal background image used in APEX for photometry was produced choosing the option `SExtractor background method' for consistency with the previous part of this analysis. As recommended by the IRAC handbook we chose aperture photometry for our main photometric analysis, using in this case the most accurate centroids returned by APEX. The use of this technique also ensured consistency with the most recent \textit{Spitzer}\ studies of this cloud \citep{polychroni2010}. Aperture corrections and zero points were obtained from the IRAC/MIPS Instrument Handbooks\footnote{http://ssc.spitzer.caltech.edu/irac/iracinstrumenthandbook/},\footnote{http://ssc.spitzer.caltech.edu/mips/mipsinstrumenthandbook/}. Additional corrections (e.g., pixel phase and array correction) were applied when required. An aperture of 2 pixels ($2$\arcsec.4) with a sky annulus between 2 and 6 pixels ($7$\arcsec.3) for IRAC, and an aperture of $7$\arcsec.0 with a sky annulus between $7-13$\arcsec\ for MIPS, were found to yield results most closely agreeing with the magnitudes provided by \citet{ruch2007}, who used the version of {\sc daophot} \citep{stetson1987} modified by the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire ({\sc glimpse}). 
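The per-source bookkeeping in the aperture photometry just described (sky subtraction from the annulus, aperture correction, zero point) reduces to a few lines. The constants in the demo below are made-up placeholders, not the calibration values of the IRAC/MIPS Instrument Handbooks:

```python
import math

def aperture_magnitude(aper_sum, sky_per_pixel, n_aper_pixels,
                       zero_point, aper_corr):
    """Convert a raw aperture sum into a calibrated magnitude.

    aper_sum      : summed counts inside the source aperture
    sky_per_pixel : sky level per pixel estimated in the surrounding annulus
    n_aper_pixels : number of pixels inside the source aperture
    zero_point    : magnitude zero point for this band (placeholder here)
    aper_corr     : multiplicative correction for flux falling outside
                    the finite aperture (placeholder here)
    """
    flux = (aper_sum - sky_per_pixel * n_aper_pixels) * aper_corr
    if flux <= 0.0:
        raise ValueError("non-positive background-subtracted flux")
    return zero_point - 2.5 * math.log10(flux)

# Demo with made-up numbers: a 2-pixel-radius aperture (~12.6 pixels),
# 1000 counts total, sky of 2 counts/pixel, 20% aperture correction, ZP 18.0.
m = aperture_magnitude(1000.0, 2.0, 12.6, 18.0, 1.20)
```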
Contrary to standard aperture photometry, the {\sc glimpse} technique is particularly useful for analysis in crowded fields and regions with variable background \footnote{see http://www.astro.wisc.edu/glimpse/photometry\textunderscore v1.0.pdf; http://www.astro.wisc.edu/glimpse/}, and we therefore checked the accuracy of our results by comparing the photometry for those YSO candidates in our list with counterparts in the source list provided by these authors. The root mean square (rms) difference between the MIPS\,$24$\micron\, photometry obtained in this work for the YSO list and that from \citet{ruch2007} was found to be $\sim0.6\,$mag. Comparison of IRAC photometry yields an rms difference of $<0.2$\,mag in all four channels, which is consistent with the estimated $3\sigma$\,errors and equivalent to the minimum signal-to-noise (S/N) required for our final catalog (S/N$=5$; see below). The same procedures applied to the IRAC long exposure mosaics were also performed on the short exposure images. While the final catalog is based on the long exposure maps, we used the short exposure mosaics to obtain replacement photometry for those sources with bad pixels within their apertures or observed/suspected to be affected by saturation in the long exposure mosaics. \section{Stellar Classification} \label{sec:class} \subsection{General YSO Classification: Methodology and Techniques} \subsubsection{Protostars \& Optically Thick Disks} We aimed to provide the most reliable sample of young stellar objects (YSOs) in this GMC. For the main classification in our analysis we chose the `revised', updated criteria in Appendix A of \citet{gutermuth2009}. 
The color and magnitude scheme developed by these authors includes a series of sequential steps (phases) to identify, clean, and classify the candidates: Phase 1) Removal of contaminants such as star-forming galaxies with strong polycyclic aromatic hydrocarbon (PAH) emission, AGN, unresolved knots of shock emission, and PAH-emission-contaminated apertures. This step also includes the first separation of YSOs by means of the four IRAC bands. Phase 2) A search for additional YSOs based on 2MASS photometry\footnote{One of the equations of Phase 2 (\citealp{gutermuth2008}; \citealp{gutermuth2009}) should read: E$_{[3.6]-[4.5]}$/E$_{H-K}$=(E$_{H-K}$/E$_{K-[4.5]}$)$^{-1}$-(E$_{H-K}$/E$_{K-[3.6]}$)$^{-1}$}. Phase 3) Identification and re-classification of previously identified sources with suitable MIPS $24$\,\micron\ photometry. When re-classifying photospheric sources into `transition disk' objects (those with significant $24$\,\micron\ emission; included within the Class II category) we required sources to have been classified as photospheric in both previous phases. To deredden the magnitudes we used the extinction maps and methodology described in \citet{RF2009}. The visual extinction map was transformed to a median A$_{\mathrm{H}}$ map using the extinction law from \citet{mathis1990}. We changed extinction in the 2MASS bands to extinction in the IRAC channels using the numbers from \citet{flaherty2007}. The above classification was complemented and cross-checked with: i) the `red source' classification scheme from \citet{rob2008}, which should include all Class 0/I and several Class II sources; and ii) the `stage' phase from \citet{rob2006} (Stage 0/I: $\dot{M}_{\mathrm{env}}/M_{\star}>10^{-6}$\,yr$^{-1}$; Stage II: $\dot{M}_{\mathrm{env}}/M_{\star}<10^{-6}$\,yr$^{-1}$ and M$_{\mathrm{disk}}/$M$_{\star}>10^{-6}$; Stage III: $\dot{M}_{\mathrm{env}}/M_{\star}<10^{-6}$\,yr$^{-1}$ and M$_{\mathrm{disk}}/$M$_{\star}<10^{-6}$). 
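The dereddening step described above, going from a map value of A$_{\mathrm{V}}$ to per-band extinctions that are subtracted from the observed magnitudes, can be sketched as a table of band coefficients applied per source. The ratios below are round illustrative placeholders only, not the Mathis (1990) or Flaherty et al. (2007) values used in this analysis:

```python
# Hypothetical A_band / A_V ratios -- illustrative placeholders only,
# NOT the Mathis (1990) / Flaherty et al. (2007) coefficients.
AV_TO_BAND = {
    "J": 0.28, "H": 0.18, "K": 0.11,
    "[3.6]": 0.07, "[4.5]": 0.06, "[5.8]": 0.05, "[8.0]": 0.05,
}

def deredden(mag_obs, band, a_v):
    """Extinction-corrected magnitude: subtract the band extinction implied
    by the line-of-sight A_V taken from the extinction map."""
    return mag_obs - AV_TO_BAND[band] * a_v

def deredden_color(m1, band1, m2, band2, a_v):
    """Dereddened color (m1 - m2); only the differential extinction between
    the two bands matters."""
    return deredden(m1, band1, a_v) - deredden(m2, band2, a_v)
```

With A$_{\mathrm{V}}=10$, an observed H-K of 1.4 dereddens to $1.4-(0.18-0.11)\times10=0.7$ under these placeholder ratios.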
We used this last scheme to compare the above observational classification with an alternative method based on intrinsic physical properties (e.g., mass accretion rate and disk mass). This last analysis was carried out by studying the position of each YSO in the color-color diagrams (CCDs) with respect to the limits marking the areas where most of the sources of one particular `stage' are predicted to fall \citep{rob2006}. \subsubsection{Pre-Main Sequence Population With Optically Thin Disks} Separation and classification of pre-main sequence stars (PMS) with optically-thin disks is a particularly complicated process due to their similarity (in infrared color) with more evolved reddened main sequence and giant stars (photospheric-dominated). These transition objects are however essential to fully understand the different stages in star formation. In an attempt to estimate the population of sources with weak infrared excess and other PMS stars that may have been missed with the above color classification, we first excluded those objects in our source list already classified as Class 0/I and II. We then used the 2MASS catalog and a process similar to that used in \citet{kerton2008}, who attempted to separate the YSO population guided by a sample of known low-mass T-Tauri and intermediate-mass Herbig Ae/Be (HAeBe) stars. In this work (see Section \ref{sec:discussion}) we show that this method can only be applied successfully once the `younger' YSOs have been identified using additional data (e.g., IRAC). To investigate the possibility of missed candidates from the PMS population we first chose a sample of T-Tauri (\citealp{kenyon1995}) and HAeBe stars with known distances (\citealp{finkenzeller1984}; \citealp{the1994}; \citealp{mendigutia2011} and references therein). 
All sources were checked with {\sc simbad}\footnote{http://simbad.u-strasbg.fr/simbad/}, keeping variable, emission-line and pre-main sequence stars and rejecting those classified as double/multiple systems, low mass stars and brown dwarfs. Infrared photometry was obtained by matching the sample with the 2MASS Point Source Catalog through the {\sc gator} interface. Infrared photometric systems were converted to the 2MASS system using the transformations from \citet{carpenter2001}. All magnitudes were shifted to a distance of $2$\,kpc for W3 with the inclusion of interstellar extinction by means of the A$_{\mathrm{V}}-$distance relation from \citet{indebetouw2005}. Figures \ref{fig:ccd1} and \ref{fig:cmd1} show the color-color diagram (CCD) and color-magnitude diagram (CMD) for the T-Tauri and HAeBe samples shifted to the distance of W3. The CCD shows the T-Tauri locus from \citet{meyer1997} and the main sequence and giant branch from \citet{koornneef1983} including interstellar reddening. Reddening vectors for an A$_{\mathrm{V}}=10$ have been included for an O$6$\,V, M$8$\,V, M$2$\,V, and an M$6$\,III star. The CMD shows solar metallicity isochrones (\citealp{marigo2008}; \citealp{girardi2010}) at log(t\,yr$^{-1})=$7, 8, and 9 for the same distance. The dashed-dotted line is the reddening vector for an $\sim$A$0$ star with A$_{\mathrm{V}}=10$ at d$=2$\,kpc, applied using the reddening law and the A$_{\mathrm{V}}/$A$_{\mathrm{J}}$ conversion from \citet{mathis1990}. The values used for extinction conversions between 2MASS bands are consistent with those from \citet{indebetouw2005} to one decimal place, taking into account uncertainties, which is a negligible difference compared to the expected uncertainty in the transformation from optical to infrared extinction. \begin{figure}[ht] \centering \includegraphics[scale=0.6,angle=0]{fig4_ccd.pdf} \caption{Color-Color Diagram showing the T-Tauri and HAeBe samples shifted to d$=2$\,kpc.
Solid lines mark the main sequence and giant branch from \citet{koornneef1983}, also shifted to a distance of $2$\,kpc. Dash-dotted lines are reddening vectors for an additional A$_{\mathrm{V}}=10$ for an O$6$\,V, M$8$\,V, M$2$\,V, and M$6$\,III star using the extinction law from \citet{mathis1990}. Dashed line marks the locus of T-Tauri stars from \citet{meyer1997} at the same distance. Vertical solid line marks the bluest [H-K] color accepted for PMS classification.} \label{fig:ccd1} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.6,angle=0]{fig5_cmd.pdf} \caption{Color-Magnitude Diagram for the T-Tauri and HAeBe samples at d$=2$\,kpc. Solid lines are solar metallicity isochrones (\citealp{marigo2008}; \citealp{girardi2010}) log(t\,yr$^{-1}$)=7, 8 and 9. Dashed-dotted line is the reddening vector for an $\sim$A0 star at this distance with A$_{\mathrm{V}}=10$. Horizontal solid line marks the magnitude limit separating T-Tauri and HAeBe candidates. Vertical solid line as in Fig. \ref{fig:ccd1}.} \label{fig:cmd1} \end{figure} Figure \ref{fig:ccd1} shows T-Tauri stars lying mainly above the T-Tauri locus. Many are within the reddening band formed by the reddening vectors \citep{mathis1990} of O$6$ and $\sim$M$2$ main sequence stars (with a tail extending into the HAeBe region, to the right of the reddening band, and following the direction of the T-Tauri locus). The wide distribution implies variable amounts of extinction toward these sources and variable disk emission. A large proportion of T-Tauri stars are indistinguishable from main sequence stars with just interstellar reddening or are consistent with weak-emission T-Tauri stars \citep{meyer1997}. The maximum A$_{\mathrm{V}}$ in the maps from \citet{RF2009} is $\sim9.5-10$ in the region containing W3 Main/(OH), and $\sim7$ for those comprising KR 140 and AFGL 333. 
Thus there will be considerable extinction of objects in the W3 field, and so color identification of this type of T-Tauri star without the aid of spectroscopic data will be severely contaminated. In consequence, we chose a more conservative approach and selected our T-Tauri sample by requiring these PMS stars to be reddened enough to lie above the T-Tauri locus, with colors satisfying H-K$>0.7$ (similar to that used in \citet{kerton2008} for KR 140). The above limits minimize the contamination from foreground or mildly reddened weak-emission T-Tauri stars (indistinguishable from main sequence stars), and early type stars. While the color cuts could still allow for non-negligible contamination from late main sequence and giant stars with moderate reddening, the magnitude selection criterion below reduces this contamination to mainly that caused by highly extinguished (A$_\mathrm{V}\ge7)$ stars of spectral type $\sim$A or later (Figure \ref{fig:cmd1}). HAeBe stars lie preferentially to the right of the reddening band. All suitable candidates should therefore be located in this region and satisfy the condition H-K$>0.7$. This limit aims to minimize contamination from reddened early type stars and luminous T-Tauri stars. To separate candidates in regions of the CCD populated by both types of PMS (i.e., outside the reddening band) we used the information from the CMD (Figure \ref{fig:cmd1}), in which T-Tauri and HAeBe stars can be easily separated. All T-Tauri stars have K magnitudes $\gtrsim12$. HAeBe stars tend to be brighter, approaching this limit only for `late' stages reaching the main sequence, which we already discard with our imposed limit in the CCD to minimize contamination from reddened early type stars. The combined color plus magnitude condition minimizes contamination of the T-Tauri sample from reddened giant stars. 
In addition, the color constraint imposed on HAeBe stars, which are mainly localized and `isolated' outside the reddening band of typical stars, already minimizes the contamination from reddened main sequence and giant stars. We note that this relatively simple scheme for PMS classification may only be applied \textit{after} the Class 0/I and Class II populations have been identified using IRAC data, as there is considerable overlap in the CCD and CMD of T-Tauri candidates with \textit{Spitzer}\, Class II/I/0 sources (see Section \ref{sec:discussion}). \begin{table}[t!] \caption{Selection Criteria for T-Tauri and HAeBe stars for d$=2$\,kpc} \label{table:class3} \centering \begin{tabular}{l c l} \hline \hline &T-Tauri&\\ \hline &[H$-$K]$\geq 0.7$&\\ &[J$-$H]$\geq 0.59$([H$-$K]$-0.187)+0.72$&\\ &[J$-$H]$\leq 1.55$([H$-$K]$-0.39)+0.85$&\\ &K$>12$&\\ \hline &HAeBe&\\ \hline &[H$-$K]$\geq 0.7$&\\ &[J$-$H]$\leq 1.55$([H$-$K]$-0.187)-0.008$&\\ &K$<12$&\\ \hline \end{tabular} \end{table} Our final PMS selection scheme is summarized in Table \ref{table:class3}. The color and magnitude selection criteria were applied to all sources in our initial IRAC list that i) satisfied the cleaning/reliability conditions in the \textit{Spitzer}\ channels; ii) were matched to a 2MASS source with quality flag better than `D' in all 2MASS bands; and iii) were classified as (mainly) photospheric or without a successful classification using the YSO scheme from \citet{gutermuth2009} (i.e., no Class 0/I, Class II, or contaminant). A search for additional PMS stars was also carried out by extending our analysis to 2MASS sources \textit{in} the area covered by the \textit{Spitzer}\ survey but without a suitable IRAC counterpart (i.e., satisfying our initial cleaning and reliability conditions) in the short wavelength \textit{Spitzer}\ channels. 
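For reference, the cuts in Table \ref{table:class3} amount to a few inequalities per class; the sketch below encodes them directly (the function name and interface are ours for illustration, not part of the actual pipeline):

```python
def classify_pms(j, h, k):
    """Classify a 2MASS source as 'TTauri', 'HAeBe', or None using
    the color/magnitude cuts of Table 4 (sample shifted to d = 2 kpc)."""
    jh, hk = j - h, h - k
    if hk < 0.7:
        return None  # bluer than the [H-K] limit: likely a reddened MS/giant star
    # T-Tauri: within the reddened T-Tauri locus band, and faint in K
    if (jh >= 0.59 * (hk - 0.187) + 0.72
            and jh <= 1.55 * (hk - 0.39) + 0.85
            and k > 12):
        return "TTauri"
    # HAeBe: to the right of the reddening band, and bright in K
    if jh <= 1.55 * (hk - 0.187) - 0.008 and k < 12:
        return "HAeBe"
    return None
```

A source with J$=15.5$, H$=14.0$, K$=13.0$ (so [J-H]$=1.5$, [H-K]$=1.0$) satisfies the T-Tauri cuts, while a brighter source with J$=12.2$, H$=11.2$, K$=10.0$ falls in the HAeBe region.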
The lack of a detection in \textit{Spitzer}\, minimizes the possibility of confusion with actual embedded protostars, which should have been previously identified with the color/magnitude criteria from \citet{gutermuth2009}. \subsection{The \textit{Spitzer}\ Catalog} \label{sec:catalog} Here we present the final products and source lists derived from the analysis carried out in the previous section. Results and statistics from our YSO detection and classification procedures are shown in Tables \ref{table:catyso} and \ref{table:class3a}. For the purpose of this paper, we define as `YSO' those Class 0/I and Class II candidates selected using the color/magnitude scheme from \citet{gutermuth2009}. The sample of PMS stars, that is those additional candidate young stellar objects selected using 2MASS photometry, will include stars with optically thick disks, e.g., classical T Tauri stars and HAeBe stars (Class II) missed by the IRAC color/magnitude classification, and optically thin disks, e.g., weak-lined T Tauri stars (Class III sources). An analysis of this sample will be relevant to investigating the `oldest' young stellar population in W3. \subsubsection{Statistics, Completeness \& Reliability} The YSO search using the color and magnitude classification from \citet{gutermuth2009} yielded a total of $616$, $706$ (two of which were also observed in neighboring AORs), and $246$ YSOs in the regions surveyed in all four IRAC channels in each individual AOR: W3 Main/(OH), KR 140/KR 140-N, and AFGL 333 \textit{Spitzer}\ regions, respectively (Fig. \ref{fig:cat1_mosaic}; Table \ref{table:class3a}). The full list of YSOs (Table \ref{table:catyso}) and 2MASS-based PMS candidates (Table \ref{table:catyso}a) appear in the electronic version of this article. 
\begin{table*}[ht] \caption{YSOs in each subregion of W3: Sample list$^a$} \label{table:catyso} \centering \begin{tabular}{l l l l l l} \hline \hline RA&Dec&Class&Class&Flag$^b$&Flag\\ h m s (J2000)&$^{\circ}$\ \arcmin\ \arcsec\ (J2000)&Catalog 1&Catalog 2$^c$&Catalog 1&Catalog 2\\ \hline ...&&&&&\\ 34 29 06.92&61 33 44.75&classII*&nomatch&$0$&$-1$\\ 34 29 56.38&61 22 31.69&classII&classII&$1$&$1$\\ 34 32 05.86&61 25 22.47&classII*&nomatch&$0$&$-1$\\ ...&&&&&\\ \hline \multicolumn{6}{l}{{$^a$Catalog is published in its entirety in the electronic version of this article.}}\\ \multicolumn{6}{l}{{$^b$ Reliability flag (see text):}}\\ \multicolumn{6}{l}{{1: Candidates satisfying cleaning/bandmerging requirements and with individual}}\\ \multicolumn{6}{l}{{detections in each band.}}\\ \multicolumn{6}{l}{{0: Candidates satisfying cleaning/bandmerging requirements but with IRAC}}\\ \multicolumn{6}{l}{{Channel 3 and 4, and MIPS 24\,\micron\ detection (centroid) and photometry}}\\ \multicolumn{6}{l}{{based on a successful detection in Channels 1/2.}}\\ \multicolumn{6}{l}{{$^c$ Same as Catalog 1, but without MIPS 24\,\micron-based re-classification}}\\ \multicolumn{6}{l}{{unless a successful detection was found in our original MIPS source list; i.e., MIPS}}\\ \multicolumn{6}{l}{{centroid (and photometry) not based exclusively on an IRAC detection.}}\\ \end{tabular} \end{table*} \begin{table*}[ht] \caption{Photometry for YSOs in each subregion of W3$^a$} \label{table:photometry} \centering \begin{tabular}{l l l l l l l l l l} \hline \hline 3.6\,\micron&Error&4.5\,\micron&Error&5.8\,\micron&Error&8.0\,\micron&Error&24\,\micron&Error\\ \hline ...&&&&&&&&&\\ 12.08&0.01&12.05&0.01&11.93&0.03&12.03&0.09&9.34 &0.19\\ 12.46&0.01&12.06&0.01&11.81&0.03&11.29&0.08&7.90 &0.36\\ 12.98&0.01&12.98&0.01&12.86&0.05&12.95&0.12 &9.03 &0.14\\ ...&&&&&&&&&\\ \hline \multicolumn{10}{l}{{$^a$Photometry (magnitudes) for sample in Table \ref{table:catyso}.}}\\ \multicolumn{10}{l}{{Catalog is 
published in its entirety in the electronic version of this article.}}\\ \end{tabular} \end{table*} \begin{figure*}[ht] \centering \includegraphics[scale=0.85,angle=0]{cat1_new.pdf} \caption{Greyscale \textit{Spitzer}\ channel 1 mosaic with Class 0/I (red), Class II (green) and PMS (blue) candidates. Sample includes all sources (and all flags) from Catalog 1, as well as PMS candidates with no IRAC counterparts (see text).} \label{fig:cat1_mosaic} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[scale=0.85,angle=0]{cat2_new.pdf} \caption{Same as Fig.\ref{fig:cat1_mosaic} but for Catalog 2, without PMS candidates with no IRAC counterpart.} \label{fig:cat2_mosaic} \end{figure*} For the purpose of the present analysis we chose reliability over completeness, and so this sample requires S/N$\geq5$ for any IRAC/MIPS photometry used in any particular method. When using 2MASS photometry in Phase 2 of the color/magnitude classification scheme, those sources with the closest distance to our IRAC sources (within $2$\arcsec) were chosen as suitable counterparts as long as they had a quality flag `A', `B', `C' or `D' in H and K-bands, and at least valid photometry (`D') in the J-band (if available). In order to remove as many artifacts and false detections as possible, we first bandmerged the IRAC channels using channel pairs. A candidate source could still be included in the initial list without suitable photometry in all four channels if it either appeared in channels 1 and 2, or channels 3 and 4. This was done as the first step to minimize the need for visual inspection of samples containing tens of thousands of sources, while at the same time attempting to minimize the loss of relatively `cold' sources without a suitable detection at shorter wavelengths. 
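The pair-wise bandmerging step described above can be sketched as follows. This is a minimal flat-sky illustration only (function names and the list-of-tuples interface are hypothetical; the actual pipeline operated on APEX/SExtractor source lists), using the same $2$\arcsec\ matching tolerance adopted for the 2MASS counterparts:

```python
import math

MATCH_RADIUS_ARCSEC = 2.0  # matching tolerance, as used for 2MASS counterparts

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation in arcsec (flat-sky approximation, coords in deg)."""
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    ddec = dec1 - dec2
    return math.hypot(dra, ddec) * 3600.0

def bandmerge_pairs(ch1, ch2, ch3, ch4):
    """Keep a candidate if it appears in both channels 1 and 2,
    OR in both channels 3 and 4 (positions as (ra, dec) tuples, deg)."""
    def matched(src, others):
        return any(angular_sep_arcsec(*src, *o) <= MATCH_RADIUS_ARCSEC
                   for o in others)
    merged = [s for s in ch1 if matched(s, ch2)]
    # add ch3/ch4 pairs not already recovered at the shorter wavelengths
    merged += [s for s in ch3 if matched(s, ch4) and not matched(s, merged)]
    return merged
```

A channel 1 source with a channel 2 counterpart within $2$\arcsec\ survives the merge even without longer-wavelength detections, and vice versa for a channel 3/4 pair, mirroring the either-pair criterion in the text.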
In addition, we imposed our own `internal' cleaning conditions based on the properties provided by APEX and SExtractor for those sources already published as reliable detections in the catalog of \citet{ruch2007} (e.g., successful PRF fitting, $\chi^2$ of fitting, ellipticity and successful deblending). We note, however, that these internal cleaning parameters are physically meaningless and were only used for `relative' classification of sources within a particular sample, with the only purpose being to reject as many artifacts (e.g., PSF residuals and spikes around bright sources) and false detections as possible. Even so, these conditions remained conservative, and visual inspection and manual rejection were still required and performed in the last stages of the catalog production process. All sources successfully classified as YSO candidates and satisfying all the cleaning conditions and bandmerging requirements have an entry$=1$ in the `flag' column in the source catalog (Table \ref{table:catyso}). This defines the `reliable' subset of the final list (Fig. \ref{fig:cat1_mosaic}). In an attempt to improve the `completeness' of the sample we also used the detections in IRAC channels 1 and 2 (with the best sensitivity) as a base for a new source list. Photometry was performed on fixed centroids at longer wavelengths (including MIPS), and all the cleaning/selection conditions and the Gutermuth classification scheme were again applied to each source. Those additional detections satisfying all the catalog requirements were included in the final catalog with a flag entry$=0$. We note that in the scheme of \citet{gutermuth2009}, MIPS photometry was used mainly to reclassify as YSOs those sources initially rejected as galaxies, AGN, or fake excess sources in the first steps of the scheme. Photometry derived from the MIPS $24$\,\micron\ maps was generally less reliable. We therefore produced a Catalog 2 (Fig. 
\ref{fig:cat2_mosaic}) based on the same procedure as Catalog 1, but allowing for re-classification of IRAC/2MASS sources only if there was a successful MIPS counterpart in our original (independently obtained) MIPS source list (i.e., the standard fixed-centroid photometry is not used in MIPS phase 3). Catalog 2 is therefore more conservative: although the SExtractor extraction was visually observed to detect all significant sources, a large fraction of these detections did not satisfy the cleaning conditions after performing APEX PRF fitting on the MIPS mosaics, on account of the variable and complicated background at longer wavelengths. Both catalogs (Catalog 1 and Catalog 2) yield very similar source lists for Class 0/I and Class II sources, differing mainly in the number of Class 0/I$^*$ (highly embedded YSOs) and Class II$^*$ transition objects. Defined in this work as the ($^*$) population, both the highly embedded and transition objects rely on MIPS photometry for identification and classification. A summary of the number of candidates found of each class in each field is given in Table \ref{table:class3a}. As expected, highly embedded and transition objects are particularly abundant in Catalog 1 due to the use of MIPS photometry based on IRAC centroids, with Class 0/I$^*$ sources forming up to $\sim65\%$ of the population in W3 Main/(OH) ($\sim5.5\%$ Class II$^*$). In Catalog 2 these types of objects constitute less than $\sim3.5\%$ of the YSOs in each field. We created a sample resulting from bandmerging just channels 3 and 4 and ran it through the cleaning and classification procedures described above. This experiment produced no new sources. This shows that no significant sample of `cold' sources (without detections in IRAC channels 1 and 2) should have been missed by the initial bandmerging-by-pairs procedure (within the limitations of this technique). 
We therefore conclude that the majority of sources potentially missed in our (flag $=1$) analysis would have come from the samples in IRAC channel 1/channel 2 that do not have counterparts at longer wavelengths due to the diffuse emission and sensitivity loss (although we note that Catalogs 1 and 2 for flag $=1$ are very similar; Table \ref{table:class3a}). Unless mentioned otherwise, in the following sections we will use Catalog 1 (all flags) as the primary sample in our analysis. As we explain in Section \ref{sec:groupclass}, this source list is expected to be a more reliable indicator of the YSO properties. We find that the use (or omission) of the ($^*$) population is mostly relevant in those highly populated regions with high extinction, mainly IC 1795 and KR 140. We cross-correlated our final YSO list with the list of infrared sources associated with the cluster IC 1795 presented in \citet{rocca2011}, and we find 76 YSOs in our sample consistent with being cluster members. Of these, 30 YSOs belong to the ($^*$) population in Catalog 1 (not classified as YSOs in Catalog 2), and yet all are also classified as YSOs (Class II) in \citet{rocca2011}. When applying the color classification from \citet{megeath2004} we find that only 4 out of the 76 sources would not have been classified as YSOs, which supports our decision to keep the ($^*$) population in our analysis. Results based on Catalog 2 are only mentioned briefly when required. We finally note that while the flag $=0$ and IRAC short wavelength-based catalogs are intended to improve the completeness of the final sample (and each source was visually inspected in channel 1), these detections are still tentative and should be treated with caution. 
\begin{table}[ht] \caption{YSOs in the \textit{Spitzer}\ Survey} \label{table:class3a} \centering \begin{tabular}{l l l l l} \hline \hline \multicolumn{5}{c}{{W3 Main/(OH)}}\\ \hline &Catalog 1&Catalog 1&Catalog 2&Catalog 2\\ Flag&all$^a$&1&all&1\\ \hline Class 0/I&$39$&$32$&$39$&$32$\\ Class 0/I*&$405$&$7$&$7$&$7$\\ Class II&$138$&$85$&$132$&$85$\\ Class II*&$34$&$3$&$3$&$3$\\ HAeBe$^b$&$1$&$0$&$2$&$0$\\ T-Tauri$^b$&$110$&$50$&$126$&$58$\\ \hline \multicolumn{5}{c}{{KR 140}}\\ \hline Class 0/I&$88$&$80$&$84$&$78$\\ Class 0/I*&$130$&$3$&$3$&$3$\\ Class II&$271$&$252$&$271$&$252$\\ Class II*&$215$&$3$&$3$&$3$\\ HAeBe&$0$&$0$&$0$&$0$\\ T-Tauri&$94$&$56$&$96$&$57$\\ \hline \multicolumn{5}{c}{{AFGL 333}}\\ \hline Class 0/I&$57$&$51$&$57$&$51$\\ Class 0/I*&$25$&$0$&$0$&$0$\\ Class II&$140$&$129$&$140$&$129$\\ Class II*&$24$&$2$&$2$&$2$\\ HAeBe&$0$&$0$&$0$&$0$\\ T-Tauri&$86$&$43$&$86$&$43$\\ \hline \multicolumn{5}{l}{{$^a$ Flag $=0$ \& Flag $=1$ combined.}}\\ \multicolumn{5}{l}{{$^b$ With \textit{Spitzer}\ counterparts.}}\\ \end{tabular} \end{table} We compared Catalog 1 and Catalog 2 with the detections from \citet{ruch2007} for W3 Main/(OH), the region analyzed in their study. These authors detected a total of $295$ sources in the four IRAC channels, $21$ ($\sim7\%$) of which were classified as Class I sources, and $94$ ($\sim32\%$) as Class II. All $295$ sources were in our initial catalog resulting from the SExtractor source detection process. Our analysis identifies a similar number of Class I and Class II candidates in this sample, with a total of 117 sources classified as YSOs in Catalog 1 (93 in Catalog 2). Of the remaining sources in their list not classified as YSOs in Catalog 1, $130$ are stars, $0$ galaxies, $3$ shock/knots of emission or sources with PAH contaminated apertures, and $45$ do not satisfy the cleaning and reliability conditions. 
Sources in their sample not classified as YSOs in Catalog 2 consist of $149$ stars, $0$ galaxies, $3$ shock/contaminated aperture sources, and $50$ sources not satisfying cleaning/reliability conditions. In all cases, the percentage of Class II sources is $\sim3$ times that of Class 0/I sources. Table \ref{table:ruch} shows the results after applying our cleaning conditions and the Gutermuth scheme. \begin{table}[ht] \caption{Breakdown of candidate YSOs with counterparts in the catalog of \citet{ruch2007}} \label{table:ruch} \centering \begin{tabular}{l l l l l} \hline \hline Class&Catalog 1&Catalog 1&Catalog 2&Catalog 2\\ Flag&all&1&all&1\\ \hline Total YSO&$117$&$77$&$93$&$77$\\ Class0/I&$21$&$20$&$21$&$20$\\ Class0/I*&$8$&$0$&$0$&$0$\\ ClassII&$72$&$57$&$72$&$57$\\ ClassII*&$16$&$0$&$0$&$0$\\ \hline \end{tabular} \end{table} Table \ref{table:class3a} includes the number of PMS stars found based on 2MASS color and magnitude information, with counterparts in the \textit{Spitzer}\ images. We found 51 more T-Tauri stars and 2 HAeBe stars when searching for 2MASS sources (from the 2MASS PSC) without suitable IRAC counterparts (i.e., IRAC sources satisfying our initial cleaning conditions), but located in the common (4-channel) areas surveyed by \textit{Spitzer}. Many of these sources are located in the bright, diffuse region surrounding W3 Main, (OH) and AFGL 333 in the IRAC images, which did not allow for proper identification and extraction of candidate sources due to confusion. The faint limit imposed on the K magnitude of our HAeBe candidates (K$=12$) is within the 2MASS $10\sigma$ completeness limit for this band ($14.3$\,mag). This sample is also complete in the H band ($15.0$\,mag) and J band ($15.9$\,mag). The faintest 2MASS magnitudes for the T-Tauri sample lie beyond the 2MASS completeness limits by $\sim1$\,mag in K and H, and $\sim2$\,mag in J band, and therefore our list will be incomplete at the faint end of the population. 
\begin{table*}[ht] \caption{IRAC completeness limits for Catalog 1} \label{table:complete} \centering \begin{tabular}{l c c c c} \hline \hline AOR&Channel 1&Channel 2&Channel 3&Channel 4\\ &All/Flag$=1$&All/Flag$=1$&All/Flag$=1$&All/Flag$=1$\\ \hline W3 Main/W3(OH)&$14.1/12.4$&$12.9/11.8$&$12.7/10.6$&$11.3/10.3$\\ KR 140&$13.1/13.7$&$13.1/13.0$&$13.2/12.8$&$12.2/11.9$\\ AFGL 333&$12.9/12.8$&$12.1/11.8$&$11.9/11.6$&$10.4/10.9$\\ \hline \end{tabular} \end{table*} The use of several steps of cleaning and reliability thresholds for our final sample, combined with different sensitivity limits in different regions for a given field, complicates the determination of a reliable completeness limit for our catalog. In order to estimate the magnitude at which a catalog is 100\% complete, which due to our cleaning and selection steps differs from the fainter limit at which some sources can still be detected, we examined the number of detections as a function of magnitude for each field at each IRAC wavelength. Figure \ref{fig:completeness} shows the histograms for W3 Main and W3 (OH) for all YSOs in Catalog 1, the most complete sample derived in this work. Our estimates for 100\% completeness, given by the `turnover' points in the YSO distributions for each subregion of W3, are included in Table \ref{table:complete}. The effects of strong, large scale emission in dense regions with high stellar activity are evident from the limits derived for the HDL. Our choice of reliability over completeness excludes a large population of faint sources from our final list, and therefore the completeness limits for the different regions are particularly conservative for IRAC channels 1 and 2. 
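The `turnover' estimate can be sketched as follows, assuming the completeness limit is read off as the center of the peak bin of the magnitude histogram (the bin width here is an arbitrary choice, not a value quoted in the text):

```python
import numpy as np

def completeness_turnover(mags, bin_width=0.5):
    """Estimate the completeness limit as the magnitude of the bin where
    the source counts peak ('turnover'); beyond it, counts decline because
    of incompleteness rather than a real drop in the population."""
    mags = np.asarray(mags, dtype=float)
    edges = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=edges)
    peak = np.argmax(counts)           # index of the most populated bin
    return 0.5 * (edges[peak] + edges[peak + 1])  # bin center at the peak
```

For a synthetic distribution that rises steadily to magnitude $\sim14$ and then drops, the estimator returns the center of the peak bin, mimicking the turnover points read off Figure \ref{fig:completeness}.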
The flag $=0$ sample (using the short wavelength channels as a template for photometry at longer wavelengths) improves completeness in these channels significantly, although we note that our estimated completeness limits are still conservative compared to samples derived from similar \textit{Spitzer}\ studies of W3 \citep{polychroni2010}. Clearly, the completeness magnitudes quoted for our work are upper (faint) limits, especially for the long wavelength channels. The use of flag $=1$ data or the sample from Catalog 2 would be more conservative (and therefore less complete). The latter candidate source list, while more reliable than Catalog 1, is expected to suffer a more severe loss of highly embedded sources. This is due particularly to the rejection of poorly fitted sources (failed PRF fitting) with APEX, and our strict requirement of suitable fitting and photometry in the MIPS band (with severe sensitivity loss and confusion due to strong diffuse emission). We expect to compensate for this issue with the \textit{Herschel}\ data currently being analyzed, which will be able to constrain the dust emission much more accurately for those embedded (younger) sources. With regard to Class types, Class II sources are generally weaker at the longer IRAC wavelengths than more embedded Class 0/I objects, and are therefore more likely to be missed because of lower sensitivity and bright PAH emission (e.g., \citealp{chavarria2008}). Despite our attempt to compensate for saturation in the IRAC sample by using the short exposure mosaics, some sources will still be missed in the brightest and most active regions such as W3 Main and W3 (OH) due to confusion and location within the bright infrared nebulosity. The catalogs also exclude clearly extended sources and stellar groups. 
Given our emphasis on providing a reliable sample of PMS sources, we again note that weak-lined and low extinction T-Tauri and HAeBe stars have been excluded from our catalog in order to avoid major contamination from reddened main sequence stars. We cross-checked our final list of candidate T-Tauri and HAeBe stars with the {\sc simbad} database, and found no matches with the exception of a couple of sources, classified in the database as `infrared sources' or `star in cluster'. Ultimately, spectroscopic surveys of stellar candidates are the most reliable method to find and confirm PMS stars. \begin{figure*}[ht] \centering \includegraphics[scale=0.7,angle=0]{completeness_flag0.pdf} \caption{Number of YSO candidates as a function of magnitude and channel for the region comprising W3 Main and W3 (OH). Channels 1-4 are shown in order from left to right, top to bottom. Completeness limits for each region (Table \ref{table:complete}) have been derived from the turnover points of the distributions.} \label{fig:completeness} \end{figure*} \subsubsection{Contamination} The \textit{Spitzer}\ based sample was subjected to strict cleaning conditions and cross-checked with other classification schemes to ensure the highest consistency and reliability of our sample of YSO candidates. However, it is likely that some contamination from extraneous objects such as galaxies/AGN, planetary nebulae (PNe) and AGB stars is present in the final sample. A measure of the contamination by PNe and AGB stars was obtained by applying the conditions from \citet{rob2008} to the reliable photometry derived from the previous YSO classification. For successful classification as a PN, detections needed to satisfy at least two of the four color conditions to ensure reliability in their position in the CCDs. While the selection scheme still allows for mutual contamination of YSO/AGB stars in both samples, this technique can still provide a useful measure of the general contamination of our YSO sample. 
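The `at least two of four conditions' requirement is a simple voting rule. The sketch below illustrates only that logic; the actual color cuts of \citet{rob2008} are not reproduced here and are represented by abstract boolean predicates:

```python
def is_pn_candidate(colors, conditions, min_satisfied=2):
    """Flag a source as a PN candidate if it satisfies at least
    `min_satisfied` of the supplied color conditions (2 of 4, following
    the reliability requirement described in the text). `conditions` is
    a list of boolean predicates over a dict of colors; the real cuts
    come from Robitaille et al. (2008)."""
    votes = sum(1 for cond in conditions if cond(colors))
    return votes >= min_satisfied
```

With four placeholder cuts, a source passing two of them is flagged, while a source passing only one is not, which is the behavior the text requires.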
The PN contamination using this method is predicted to be $\leq1.5\%$ for W3 Main/(OH), and $\leq1\%$ and $\leq2\%$ for KR 140 and AFGL 333, respectively. The contamination from AGB stars is expected to be $<3.0\%$ and $<0.5\%$ for W3 Main/(OH) and AFGL 333. No candidate AGB stars are found in KR 140 using this scheme. In their analysis, \citet{gutermuth2009} adapted their classification to work with the larger distances present in their survey of up to $\sim1$\,kpc. The use of this classification at the distance of W3 ($\sim2$\,kpc) is expected to shift the proportion of classified YSOs toward the brightest sources, because of the loss of fainter sources rejected through the magnitude limits imposed on the sample and a possible contamination of the galaxy sample with YSOs. To check this effect, we ran the codes on the YSO sample in W5 from the work of \citet{koenig2008}, applying the new classification conditions to the photometry provided by these authors. W5 belongs to a massive complex (neighboring W3 and W4) and is considered to be at the same distance as these GMCs. These authors found a significant proportion of YSOs being misclassified as non-YSOs, which was evident, for instance, in the `clustered' properties of the `galaxies'. Although the code used in the present work is a `revised' version of that used by \citet{koenig2008}, we successfully classified $\sim95.5\%$ of their sources as YSOs of the same type (excluding the very few sources not satisfying the requirement of magnitude error $<0.2$\,mag). About two thirds of the $4.5\%$ of sources for which our classification differed from that of \citet{koenig2008} were classified as AGN, and none as PAH galaxies. Increasing the magnitude limits for AGN classification by $0.5$\,mag to account for the larger distance of W3 (e.g., \citealp{megeath2009}), we find no differences in the percentage of sources classified as AGN. 
Although we cannot perform this test on their rejected (non-YSO) sample (as those data were not included in their publication), we expect this code to detect and recover successfully the main YSO population in W3, without the need for major additional modifications. This is in agreement with the following analysis of the distribution of those sources classified as galaxies or AGN. Using the `Distance to Nearest Neighbor' technique \citep{clark1954} we measured the mean observed distance between galaxy candidates relative to the mean distance that would be expected for a randomly distributed population with the same characteristics. The ratio between these two quantities (R) would be equal to one for a perfectly random distribution. We obtain R$=0.94$, $0.98$, and $0.88$, with significance levels of $\sim25\%$, $\sim37\%$ and $\sim5\%$ for W3 Main/(OH), KR 140 and AFGL 333, respectively. This supports the random spatial distribution of our galaxy candidates and low YSO contamination based on the lack of significant clustering. Coordinates for the list of galaxy candidates are also available online (Table \ref{table:catyso}b). \begin{figure*}[ht] \centering \includegraphics[scale=0.74,angle=0]{fig9_stage1.pdf} \caption{IRAC CCD of Class0/I+(*) (squares) and ClassII+(*) (diamonds) YSOs from Catalog 1 in W3 Main/(OH) (left), KR 140 (middle) and AFGL 333 (right). Black solid lines mark the areas where the majority of Stage I, II and III sources from \citet{rob2006} are found.} \label{fig:stage1} \end{figure*} \begin{table*}[ht] \caption{General 2MASS Properties for `Class 0/I' and `Class II' Populations} \label{table:stageprops2} \centering \begin{tabular}{l l l l l l l l} \hline \hline Class&Mean [H-K]&$\sigma$&Mean [J-H]&$\sigma$&Mean K&$\sigma$&Objects\\ \hline 0/I&1.5&0.6&2.0&0.8&13.6&1.1&68\\ II&0.9&0.4&1.4&0.5&13.6&1.1&412\\ \hline &&T-Test Stat.&Significance&F-Test Stat.&Significance&&\\ \hline 0/I vs. 
T-Tauri&[H-K]&11.5&8E-22&6.1&6E-13&&\\ &[J-H]&10.0&3E-18&9.3&6E-18&&\\ &[K]&-5.0&2E-6&1.3&0.3&&\\ 0/I vs. II&[H-K]&10.8&2E-24&2.3&5E-07&&\\ &[J-H]&7.9&3E-14&2.6&1E-08&&\\ &[K]&-0.8&0.4&1.1&0.6&&\\ 0/I vs. HAeBe&[H-K]&3.7&4E-04&2.8&4E-03&&\\ &[J-H]&6.2&1E-08&3.1&2E-03&&\\ &[K]&13.2&3E-23&2.2&0.01&&\\ II vs. T-Tauri&[H-K]&5.5&6E-08&2.6&2E-06&&\\ &[J-H]&7.1&5E-12&3.6&1E-09&&\\ &[K]&-5.8&1E-8&1.2&0.5&&\\ II vs. HAeBe&[H-K]&-2.2&0.03&1.2&0.6&&\\ &[J-H]&4.5&1.0E-05&1.2&0.6&&\\ &[K]&18.4&0.00&2.4&8E-04&&\\ \hline \end{tabular} \end{table*} \section{YSO Analysis: Observed and Intrinsic Properties}\label{sec:discussion} In this work we aim to produce a reliable list of YSO candidates, which will be used in our upcoming papers to investigate the early stages of massive star formation in W3. An estimate of the mass of each of our YSO candidates in the different classes is a crucial component of our analysis. Spectral energy distributions (SEDs) have been extensively used to estimate this parameter (e.g., \citealp{rob2006}). However, the highly embedded state of some of these sources demands proper modelling of the dust envelope and disk, which will be the focus of our future Herschel-based analysis. Based on the classification scheme proposed by \citet{gutermuth2009} we have produced a list of YSO candidates in the regions of W3 Main/(OH), KR 140, and AFGL 333, without any preliminary bias on selection according to clump/core association. This classification has been compared to and supplemented by alternative classification schemes, as described below. We classified sources as `red' according to the nomenclature from \citet{rob2008}, which should include all Class I, Flat, and a large number of Class II sources as defined by \citet{lada1987} and \citet{greene1994}. Only three sources in KR 140 classified as Class 0/I in Catalog 1 (and two in Catalog 2) were not initially classified as red sources. 
These sources showed a slight flattening of the SED at longer wavelengths which is responsible for this misclassification, but all have been visually checked and their rising SEDs are consistent with embedded sources. In this section we characterize the behavior of the \textit{Spitzer}\ YSOs in the 2MASS and IRAC color-magnitude space. With this analysis we investigate the presence of possible identifying characteristics for the different classes, as well as near/mid infrared properties that may help in the identification of low mass and intermediate/high mass YSOs in our sample (e.g., T-Tauri and HAeBe PMS stars) based on the information used in this work. \subsection{YSO Stages and the IRAC CCD} The `stage' classification from \citet{rob2006} is particularly useful when combined with the `Class' scheme in order to avoid contradictions between observed (color, magnitude) and inferred (e.g., $\dot{M}_{\mathrm{env}}$; M$_{\mathrm{disk}}$) properties. Figure \ref{fig:stage1} shows our YSO sample from Catalog 1 (flag $=1$) and the regions in the CCD that most sources of different stages are predicted to occupy (these and all the other figures in this analysis do not include error bars for clarity). We obtain a similar figure when using the candidate list from Catalog 2. While this scheme is useful to separate sources with and without circumstellar material, and a significant proportion of Stage I sources can be separated from the remaining population, some Stage I objects may however still exist in the regions occupied by Stage II (disk domain) and Stage III sources. For all catalogs, we observed that more than $\sim95\%$ of Class 0/I YSOs (flag $=1$) and $\sim90\%$ (flag $=0$) are classified as Stage I sources. 
Flag $=1$ Class II (including Class II$^*$) YSOs are more evenly distributed ($\sim50\%$) between Stage I and Stage II, although \textit{none} of these sources is as red in [3.6]-[4.5] as Class 0/I (including Class 0/I$^*$) candidates (Figure \ref{fig:stage1}). We observe $<1\%$ of Class II sources with [3.6]-[4.5]$>1$, and none above $1.2$. We consider the latter value to be the limit separating the area exclusive to Class 0/I Stage I sources from the area (bluer colors) where Class 0/I and Class II Stage I are mixed, perhaps indicative of a transition from envelope to optically thick disks. Very few sources lie in the Stage III area, which confirms the robustness of our sample and our ability to separate embedded sources and optically thick disks from those consistent with photospheric colors and optically thin disks. \subsection{The 2MASS CCD and CMD} We next analyzed the behavior of the `stage' vs `class' classification in the 2MASS CCD and CMD for those IRAC sources (Catalog 1, quality flag $=1$) with counterparts in the 2MASS PSC. As mentioned above, both Catalog 1 and 2 yield almost identical samples for Class 0/I and Class II sources, and therefore the conclusions drawn below from this analysis are independent of the catalog used. \subsubsection{Analysis of observed (Class) properties} In Section \ref{sec:class} we established a scheme to separate the populations of T-Tauri and HAeBe stars based on magnitude and color information. HAeBe stars occupy a distinct region in the CCD, but K magnitude data are still essential to separate the two samples in the region near the T-Tauri locus populated by both types. \begin{figure}[htp] \centering \includegraphics[scale=0.6,angle=0]{fig19_ccd_class.pdf} \caption{2MASS CCD showing Class 0/I IRAC sources with 2MASS counterparts. Lines and T-Tauri data as in Fig.
\ref{fig:ccd1}.} \label{fig:class_ccd} \end{figure} \begin{figure}[htp] \centering \includegraphics[scale=0.6,angle=0]{fig21_cmd_class.pdf} \caption{2MASS CMD showing Class 0/I and Class II IRAC sources with 2MASS counterparts. Lines and T-Tauri data as in Fig. \ref{fig:cmd1}.} \label{fig:class_cmd} \end{figure} \begin{figure}[htp] \centering \includegraphics[scale=0.6,angle=0]{fig11_ccd_stage1.pdf} \caption{2MASS CCD showing Stage I IRAC sources with 2MASS counterparts. Lines and T-Tauri data as in Fig. \ref{fig:ccd1}.} \label{fig:stage1_ccd_overall} \end{figure} \begin{figure}[htp!] \centering \includegraphics[scale=0.6,angle=0]{fig20_ccd_class2.pdf} \caption{Same as Fig. \ref{fig:class_ccd}, but for Class II sources.} \label{fig:class_ccd2} \end{figure} \begin{figure}[htp] \centering \includegraphics[scale=0.6,angle=0]{fig22_cmd_class_herbig.pdf} \caption{Same as Fig. \ref{fig:class_cmd}, but with HAeBe stars.} \label{fig:class_cmd_herbig} \end{figure} \begin{figure}[htp] \centering \includegraphics[scale=0.6,angle=0]{fig12_ccd_stage2.pdf} \caption{Same as Fig. \ref{fig:stage1_ccd_overall}, but for Stage II sources.} \label{fig:stage2_ccd_overall} \end{figure} We carried out T-Test and F-Test analyses comparing the color and magnitude distributions of various classes. Results are reported in Table \ref{table:stageprops2}. Statistically, and as in previous cases, the Class 0/I, Class II and T-Tauri samples are indistinguishable in the 2MASS CMD, and therefore selection of low mass PMS stars is not possible in the infrared without prior knowledge of the protostar population. In color, the Class 0/I sample is intrinsically redder, with the main population of Class II sources lying between the Class 0/I sample and (closer to) the T-Tauri sample (Figures \ref{fig:class_ccd}, \ref{fig:class_ccd2}, and \ref{fig:class_cmd}). $\sim90\%$ of Class 0/I sources lie between $0.7\leq$[H-K]$<2.5$, compared to Class II sources, located within $0.4\leq$[H-K]$<1.7$.
While both have a similar maximum [H-K] value of $\sim3.4$, $\sim80\%$ and $\sim50\%$ of Class 0/I sources lie above [H-K]$=1.0$ and $1.5$, respectively, compared to $\sim30\%$ and $\sim9\%$ for the Class II population. We find a completely different scenario when focusing on the intermediate-mass HAeBe population. As shown in Figure \ref{fig:cmd1}, Herbig stars are not only redder but also brighter than T-Tauri stars, with both populations clearly separated in the CMD. Class 0/I candidates \textit{are not} consistent with the colors \textit{or} magnitudes of HAeBe stars (Class 0/I being redder and dimmer). Class II sources are consistent in color with Herbig stars (i.e., intermediate between T-Tauri and Class 0/I), but are again dimmer than typical HAeBe stars (Figure \ref{fig:class_cmd_herbig}). While a spectroscopic analysis is still required to confirm our HAeBe candidates as such, we find that, contrary to T-Tauri stars, Herbig stars may still be selected without prior knowledge of the embedded population. \subsubsection{Comparative Analysis of Intrinsically Different Populations (Stages)} Using the 2MASS data we find that the populations in Stage I and Stage II (as defined in \citealp{rob2006}), when treated as a whole, are indistinguishable with respect to observed K magnitude (e.g., Figs. \ref{fig:stage1_cmd_overall}, \ref{fig:stage2_cmd_overall}, and \ref{fig:stages_all_cmd}). However, while Stage I sources are found in the color region occupied by Stage II, members of the former population consistently reach redder colors: $\sim90\%$ of Stage I sources have [H-K]$>0.6$, $\sim85\%$ have [H-K]$>0.7$, $\sim55\%$ have [H-K]$>1.0$, and $\sim25\%$ have [H-K]$>1.5$ (maximum [H-K]$=\sim3.5$). The corresponding statistics for Stage II are $\sim65\%$, $\sim40\%$, $10\%$, and $2\%$ (maximum [H-K]$=2.2$).
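The T-Test and F-Test comparisons reported in Tables \ref{table:stageprops} and \ref{table:stageprops2} can be reproduced with standard statistical tools. The following minimal sketch (Python with \texttt{scipy}; the synthetic [H-K] colors, dispersions, and variable names are illustrative assumptions, not our measured data) applies Student's t-test to the means and a two-sided variance-ratio F-test to the dispersions of two samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic [H-K] colors: a redder, broader "Class 0/I-like" sample
# versus a bluer, narrower "Class II-like" sample (sizes as in the tables)
hk_class0i = rng.normal(1.4, 0.6, 254)
hk_classii = rng.normal(0.7, 0.3, 203)

# Student's t-test on the means (two-sided)
t_stat, t_sig = stats.ttest_ind(hk_class0i, hk_classii)

# Variance-ratio F-test (two-sided), larger variance in the numerator
f_stat = np.var(hk_class0i, ddof=1) / np.var(hk_classii, ddof=1)
d1, d2 = hk_class0i.size - 1, hk_classii.size - 1
f_sig = 2.0 * min(stats.f.sf(f_stat, d1, d2), stats.f.cdf(f_stat, d1, d2))

print(f"t = {t_stat:.1f} (sig. {t_sig:.1e}), F = {f_stat:.1f} (sig. {f_sig:.1e})")
```

Small significance values reject the hypothesis of a common parent mean (t) or a common parent variance (F), as in the color rows of the tables above.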
F-test significance levels show that both the Stage I and Stage II groups are statistically consistent with having been drawn from the same parent K-magnitude population as the T-Tauri sample (sig. levels: $0.41$ and $0.58$, respectively), with Stage II \textit{also} consistent in color (sig. level: $0.39$) with the PMS population (dominating in the reddening band of typical main-sequence and giant stars). Stage I sources show a larger scatter in the CCD (e.g., Figures \ref{fig:stage1_ccd_overall}, \ref{fig:stage2_ccd_overall}), and significance levels obtained from Student's t-test and F-test indicate that the two populations have different parent color distributions (Table \ref{table:stageprops}). We find no clear boundary in the CCD or CMD separating sources with intrinsic, physically different characteristics; this mixing could easily be explained by different inclination angles. About $90\%$ of Stage I sources lie in the range $0.5\leq$[H-K]$<2.0$ and $\sim90\%$ of Stage II between $0.4\leq$[H-K]$<1.2$, leaving the region [H-K]$\geq1.2$ as the area dominated by (younger) sources with $\dot{M}_{\mathrm{env}}/M_{\star}>10^{-6}$\,yr$^{-1}$ (e.g., Figure \ref{fig:stages_all_cmd}). \begin{table*}[ht] \caption{General 2MASS Properties: `Stage I' vs `Stage II' Population} \label{table:stageprops} \centering \begin{tabular}{l l l l l l l l} \hline \hline Stage&Mean [H-K]&$\sigma$&Mean [J-H]&$\sigma$&Mean K&$\sigma$&Objects\\ \hline 1&1.2&0.6&1.7&0.6&13.6&1.1&254\\ 2&0.7&0.3&1.3&0.4&13.6&1.1&203\\ \hline &&T-Test Stat.&Significance&F-Test Stat.&Significance&&\\ \hline &[H-K]&10.6&1E-23&4.1&4E-23&&\\ &[J-H]&8.0&1E-14&2.9&5E-14&&\\ &[K]&0.4&0.7&1.1&0.7&&\\ \hline \end{tabular} \end{table*} \begin{figure}[htp!] \centering \includegraphics[scale=0.6,angle=0]{fig13_cmd_stage1.pdf} \caption{2MASS CMD showing Stage I IRAC sources with 2MASS counterparts. Lines and T-Tauri data as in Fig.
\ref{fig:cmd1}.} \label{fig:stage1_cmd_overall} \end{figure} \begin{figure}[htp!] \centering \includegraphics[scale=0.6,angle=0]{fig14_cmd_stage2.pdf} \caption{Same as Fig. \ref{fig:stage1_cmd_overall}, but for Stage II sources.} \label{fig:stage2_cmd_overall} \end{figure} \begin{figure}[htp!] \centering \includegraphics[scale=0.6,angle=0]{fig15_cmd_stages.pdf} \caption{Same as Fig. \ref{fig:stage1_cmd_overall}, but for both Stage I and Stage II sources.} \label{fig:stages_all_cmd} \end{figure} \section{Spatial Distribution and Clustering: Group Classification and Characterization}\label{sec:groupclass} There is strong evidence that star formation in GMCs occurs primarily in clusters. The link between clustered star formation and massive star formation implies that, in order to investigate the nature of the processes involved in the formation of the most massive stars, it is crucial to characterize young stellar clusters and their pre-stellar progenitors. This analysis, described below, was based on our YSO surface density maps (e.g., \citealp{chavarria2008}) and on the so-called Minimum Spanning Tree (MST) algorithm. \subsection{Minimum Spanning Tree Analysis} In order to identify and characterize regions of YSO clustering we first implemented the nearest-neighbor and MST techniques \citep{gower1969}. In the MST algorithm there is an optimized break length, D$_{\mathrm{break}}$: the branch length whose removal maximizes the number of groups, N$_{\mathrm{g}}$, whose members are separated by distances shorter than this length, for a specified N$_{\mathrm{YSO}}$, the minimum number of YSOs for a group to be considered a `cluster'. To facilitate comparison with previous analyses of adjacent regions at the same distance (e.g., W5; \citealp{koenig2008}) we chose N$_{\mathrm{YSO}}=10$. We also obtained a new set of results for N$_{\mathrm{YSO}}=5$ to investigate the clustering properties at smaller scales.
D$_{\mathrm{break}}$ was estimated from the peak of the distribution of the number of groups satisfying the YSO requirement (N$_{\mathrm{g}}$) as a function of d$_{\mathrm{break}}$ \citep{battinelli1991}; the minimum d$_{\mathrm{break}}$, as well as the incremental step, were chosen to be $0.05$\,pc, similar to the resolution of MIPS $24$\,\micron\ at $2$\,kpc. D$_{\mathrm{break}}$ and N$_{\mathrm{g}}$ are expected to be affected by incompleteness and resolution (e.g., \citealp{bastian2007}). Nevertheless, this technique is particularly useful when comparing the relative degree of clustering for different types of YSOs in different regions with similar data, as is the case for the different subregions in W3. Since we cannot confirm physical associations in these groups, cluster membership cannot be determined without additional data. Therefore, for the following analysis we refer to the `sub-branches' resulting from D$_{\mathrm{break}}$ as `stellar groups', to avoid confusion with true clusters and associations. Global results for the \textit{Spitzer}\ survey are presented in Table \ref{table:clusters}. When a range of break lengths is found to have the same maximum N$_{\mathrm{g}}$ we quote the range. MST parameters for the entire YSO population (analyzed as a whole) and for each subregion (each AOR analyzed individually) are shown in Tables \ref{table:clusters} and \ref{table:clusters_regions}, respectively. Both tables include results for the YSO population as a whole and divided into classes. Each class is also analyzed separately for each N$_{\mathrm{YSO}}$ in order to investigate possible hierarchical structure within larger groups.
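The grouping procedure above can be sketched as follows. This is a minimal illustration (assuming projected Euclidean distances and synthetic positions; \texttt{scipy}'s MST routine stands in for the implementation actually used), including the scan over break lengths used to locate the peak of N$_{\mathrm{g}}$:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_groups(xy, d_break, n_yso):
    """Cut MST branches longer than d_break and keep the connected
    components with at least n_yso members (the `stellar groups')."""
    mst = minimum_spanning_tree(squareform(pdist(xy))).toarray()
    mst[mst > d_break] = 0.0                      # remove long branches
    _, labels = connected_components(mst != 0, directed=False)
    sizes = np.bincount(labels)
    return [np.where(labels == k)[0] for k in np.where(sizes >= n_yso)[0]]

# Scan d_break in 0.05 pc steps and keep the value maximizing N_g
# (toy field: three well-separated synthetic concentrations of 30 YSOs each)
rng = np.random.default_rng(1)
xy = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in (0.0, 5.0, 10.0)])
scan = {round(d, 2): len(mst_groups(xy, d, n_yso=10))
        for d in np.arange(0.05, 3.0, 0.05)}
d_best = max(scan, key=scan.get)
print(d_best, scan[d_best])
```

In this toy configuration the scan recovers the three input concentrations once the break length exceeds the typical intra-group branch length while remaining below the inter-group separation.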
\begin{table*}[ht] \caption{Identified Stellar Groups in W3} \label{table:clusters} \centering
\begin{tabular}{l l l | c c c | c c c} \hline \hline
\multicolumn{9}{c}{{All Survey}}\\
&&&\multicolumn{3}{c}{{Flag$=0$}}&\multicolumn{3}{c}{{Flag$=1$}}\\ \hline
Catalog&Class&N$_{\mathrm{YSO}}$&D$_{\mathrm{break}}$&N$_{\mathrm{g}}$&$\%$ Assoc.&D$_{\mathrm{break}}$&N$_{\mathrm{g}}$&$\%$ Assoc.\\ \hline
1&All$^a$&$10$&$0.60$&$27$&$56\%$&$1.2$&$13$&$76\%$\\
2&All&$10$&$0.60-1.2$&$15$&$40-78\%$&$1.2$&$13$&$76\%$\\
1&Class0/I&$10$&$3.1-3.4$&$6$&$82-83\%$&$3.1-3.4$&$6$&$80-82\%$\\
1&ClassII&$10$&$0.85$&$14$&$55\%$&$0.85$&$14$&$56\%$\\
1&PMS&$10$&$0.70-1.2$&$6$&$30-61\%$&&&\\
1&Class0/I/II&$10$&$0.60-1.2$&$15$&$40-78\%$&$0.80$&$13$&$54\%$\\
1&All&$5$&$0.45$&$60$&$53\%$&$0.55$&$33$&$54\%$\\
2&All&$5$&$0.55-0.60$&$37$&$55-60\%$&$0.95$&$33$&$54\%$\\
1&Class0/I&$5$&$2.3-2.8$&$10$&$80-88\%$&$2.3-2.8$&$10$&$80-87\%$\\
1&ClassII&$5$&$0.60$&$29$&$53\%$&$0.60$&$24$&$51\%$\\
1&PMS&$5$&$0.65$&$14$&$37\%$&&&\\
1&Class0/I/II&$5$&$0.55-0.60$&$39$&$57-62\%$&$0.55-0.65$&$32$&$54-63\%$\\ \hline
\multicolumn{9}{l}{{$^a$ Includes Class0/I, Class0/I$^*$, ClassII, and ClassII$^*$. Other classes exclude}}\\
\multicolumn{9}{l}{{ ($^*$) candidates unless specifically mentioned. }}\\
\end{tabular} \end{table*}
Using the group information provided by the MST algorithm and the technique from \citet{battinelli1991} we first explored whether there might be a characteristic scale in the entire \textit{Spitzer}\ survey (Table \ref{table:clusters}). Our list of YSO candidates (Class 0/I, Class II, Class 0/I$^*$ and Class II$^*$ candidates: `All' class in Table \ref{table:clusters}) from Catalog 1 yields D$_{\mathrm{break}}=0.6$\,pc for a minimum group membership of N$_{\mathrm{YSO}}=10$, resulting in $56\%$ of the YSO population associated with a group.
Flag $=1$-only sources and Catalog 2 (both containing less than half the sources in Catalog 1) are more consistent with a larger break length of $\sim1.1$\,pc and a grouped fraction of $73.5\%$. It is clear that incompleteness will affect the optimal break length, as missing sources will affect the YSO grouping and result in spatially larger groups in order to satisfy the minimum group membership requirement. As shown in Table \ref{table:clusters_regions}, this appears to be an issue mainly when considering the ($^*$) population, which constitutes the most uncertain sample and the main difference between Catalog 1 and Catalog 2. Exclusion of this sample affects the Class 0/I candidate sample in particular, because when the highly embedded population is omitted the few remaining typical Class 0/I sources are more widely distributed, resulting in spatially larger groups for a given N$_{\mathrm{YSO}}$. The information from Catalog 1 (all flags) is therefore expected to be a more (statistically) significant indicator of the properties of the overall population, and so this is the primary catalog used in the following analysis. We also find that all the results derived for N$_{\mathrm{YSO}}=5$ are comparable with the results derived from Catalog 1 and N$_{\mathrm{YSO}}=10$, with a typical D$_{\mathrm{break}}\sim0.54$\,pc containing $55\%$ of the YSO population. This is in good agreement with the results from \citet{koenig2008}, who found an optimal break length of 0.54\,pc (YSO fraction: $44\%$) in the neighboring region of W5 for a sample with the same characteristics.
\begin{table*}[ht] \caption{Identified Stellar Groups in Individual Subregions of W3} \label{table:clusters_regions} \centering \begin{tabular}{l c c c c c c} \hline \hline \multicolumn{7}{c}{{W3 Main/(OH)}}\\ \\[0.1pt] \hline &\multicolumn{3}{c}{{N$_{\mathrm{YSO}}=10$}}&\multicolumn{3}{c}{{N$_{\mathrm{YSO}}=5$}}\\ Class&D$_{\mathrm{break}}$&N$_{\mathrm{g}}$&$\%$ Assoc.&D$_{\mathrm{break}}$&N$_{\mathrm{g}}$&$\%$ Assoc.\\ \hline All$^a$&$0.45$&$11$&$48\%$&$0.45$&$23$&$62\%$\\ Class0/I/II&$0.75$&$5$&$63\%$&$0.55$&$11$&$51\%$\\ Class0/I&$2.1-3.6$&$2$&$74-97\%$&$2.0$&$4$&$77\%$\\ Class0/I+Class0/I$^*$&$0.45-0.55$&$6$&$35-49\%$&$0.55$&$22$&$72\%$\\ ClassII&$0.65-0.85$&$2$&$35-49\%$&$0.55$&$8$&$44\%$\\ ClassII+ClassII$^*$&$0.65-0.85$&$2$&$30-43\%$&$0.55$&$9$&$40\%$\\ PMS&$0.75-1.2$&$2$&$38-85\%$&$0.65$&$8$&$45\%$\\ \hline \\[0.1pt] \multicolumn{7}{c}{{KR 140}} \\[0.1pt] \hline All&$1.1$&$14$&$67\%$&$0.85-0.95$&$30$&$72-77\%$\\ Class0/I/II&$0.65-1.6$&$7$&$42-76\%$&$0.65$&$18$&$61\%$\\ Class0/I&$3.1-3.4$&$3$&$66\%$&$2.3-3.0$&$6$&$68-83\%$\\ Class0/I+Class0/I$^*$&$2.2-2.9$&$5$&$78-90\%$&$0.75$&$13$&$62\%$\\ ClassII&$0.75-0.85$&$8$&$54-55\%$&$1.1$&$12$&$69\%$\\ ClassII+ClassII$^*$&$1.2$&$10$&$61\%$&$0.95$&$24$&$65\%$\\ PMS&$3.9-4.7$&$3$&$75-93\%$&$2.3-2.5$&$7$&$73-77\%$\\ \hline \\[0.1pt] \multicolumn{7}{c}{{AFGL 333}} \\[0.1pt] \hline All&$0.55$&$6$&$44\%$&$0.55$&$12$&$59\%$\\ Class0/I/II&$0.55-1.3$&$4$&$41-93\%$&$0.55$&$11$&$66\%$\\ Class0/I&$0.25-0.35$&$2$&$40-49\%$&$0.15-0.35$&$2$&$19-49\%$\\ Class0/I+Class0/I$^*$&$0.25-2.7$&$2$&$28-94\%$&$1.3-1.5$&$4$&$78\%$\\ ClassII&$0.85-1.3$&$4$&$60-90\%$&$0.55$&$9$&$49\%$\\ ClassII+ClassII$^*$&$0.85-1.3$&$4$&$52-85\%$&$0.55$&$10$&$46\%$\\ PMS&$0.65-1.1$&$3$&$43-60\%$&$0.65-1.1$&$6$&$60-82\%$\\ \hline \multicolumn{7}{l}{{$^a$ Class 0/I, Class 0/I$^*$, Class II, and Class II$^*$}}\\ \end{tabular} \end{table*} \subsection{Determination of Group Intrinsic Properties} Results above indicate that the global YSO population of W3 as a 
whole shows a tendency to group with a scale D$_{\mathrm{break}}$ comparable to or less than half a parsec. While larger than the typical core sizes associated with the formation of individual stars ($\sim0.1$\,pc; e.g., \citealp{mckee2007}; \citealp{motte2007}), these scales are consistent with clump-like objects ($\sim0.5$\,pc; e.g., \citealp{zinnecker2007}) considered to be the likely birthplace of stellar clusters. This led \citet{koenig2008} to conclude that these scales might well be typical of high-mass star-forming regions. However, the relevance of this result and its underlying relation to the actual physical processes in star formation remains ambiguous. It is important to quantify how representative this value is of the inter-YSO separations and how relevant this grouping is to the original birth configuration and conditions of the eventual stellar members. We examined the distribution of D$_{\mathrm{near}}$, the distance of a YSO to its nearest neighbor, both for the entire sample (Table \ref{table:parameters1}) and for members within a specific group (Table \ref{table:parameters2}). Typically, D$_{\mathrm{near}}$ is considerably smaller than half a parsec, and therefore the optimal D$_{\mathrm{break}}$ is more indicative of `inter-cluster' separation than of YSO separation, and thus more relevant to the formation of clusters than to that of individual stars. We note that embedded clusters of massive stars like the one forming IRS5 (W3 Main) or the Trapezium in the Orion Nebula have \textit{maximum} projected stellar separations of the order of $0.02-0.05$\,pc \citep{megeath2005}. These approach the resolution limits of IRAC and MIPS $24$\,\micron, respectively. Therefore, a given \textit{Spitzer}\ `YSO' may in fact contain more than one protostar.
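The D$_{\mathrm{near}}$ statistics can be computed efficiently with a k-d tree; a minimal sketch with synthetic positions (the uniform toy field and the source count, matching the survey total, are illustrative assumptions only):

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_distances(xy):
    """D_near: distance from each YSO to its nearest neighbor."""
    tree = cKDTree(xy)
    d, _ = tree.query(xy, k=2)   # k=2: the first match is the point itself
    return d[:, 1]

rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 36.0, (1566, 2))   # toy field [pc] with the survey's YSO count
d_near = nearest_neighbor_distances(xy)
print(f"{d_near.min():.2f}-{d_near.max():.2f} pc, mean {d_near.mean():.2f} pc")
```

For a random field of this surface density the mean D$_{\mathrm{near}}$ is already well below half a parsec, illustrating why the optimal D$_{\mathrm{break}}$ traces inter-group rather than inter-YSO separations.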
The clear link between massive stars and clusters, however, makes the present study a necessary step for understanding the physics behind high-mass star formation and the differences with respect to low-mass star formation. With this goal in mind, we analyzed each subregion of W3 in more detail to investigate the underlying properties of the stellar groups found by the MST algorithm, as well as possible local differences in the intrinsic characteristics of the stellar population. Tables \ref{table:parameters1} and \ref{table:parameters2} include the parameters derived from the YSO candidate list in Catalog 1. The former includes, for each subregion, the total area surveyed, the extinction range, the total mass in the region derived from the extinction maps (e.g., \citealp{heiderman2010}), the number of YSOs (Table \ref{table:class3a}), the total surface density, the YSO surface density (e.g., \citealp{chavarria2008}), and the star formation efficiency (SFE) of the region assuming that the YSOs are solar-mass stars. The latter assumption is not expected to be accurate, especially considering the possibility of some \textit{Spitzer}\ YSOs actually being more than one object. However, we used this parameter as a measure of the \textit{relative} properties of the different stellar groups, as when considering the `ages' of the regions in W3 (Section \ref{sec:history}). In both tables uncertainties for the mass and surface density have been derived from the statistical uncertainties in the extinction calculations. These uncertainties do not include effects such as variations in the extinction law or background fitting uncertainties during the creation of the maps. The final errors in these maps do not account for the larger differences observed with respect to other extinction estimates in the literature, which depend on knowledge of the dust emissivity and temperature \citep{RF2009}.
Changes in these last parameters can, by themselves, affect the estimated extinction values by a factor of $\sim2$, and therefore the uncertainties derived from this work will be underestimated. We note that SFE is the amount of mass in YSOs \textit{at present} compared to the total mass, and therefore not necessarily the ultimate conversion efficiency. Table \ref{table:parameters2} summarizes the average properties of the identified groups for N$_{\mathrm{YSO}}=10$ using the break length derived from the global analysis of the entire \textit{Spitzer}\, survey (D$_{\mathrm{break}}=0.6$\,pc; Table \ref{table:clusters}). Several methods have been used to characterize the size and shape of identified groups, from circular to convex-hull techniques (e.g., \citealp{bastian2007}; \citealp{gutermuth2009}). To characterize the size and elongation of the group we chose to use an elliptical area. The center was defined as the average position of the YSOs within the group. The semimajor axis is the vector from the chosen center to the farthest YSO. The semiminor axis is the minimum size required to keep all the YSOs within the ellipse (Table \ref{table:parameters2}). Parameters for individual groups have been included in the electronic version of this article (Table \ref{table:parameters2}a). \clearpage \begin{sidewaystable}[p!] 
\caption{Average Global Parameters in Subregions of W3} \label{table:parameters1} \centering \begin{tabular}{l | c c c c c c c c c c c c} \hline \multicolumn{12}{c}{{Global AOR Data$^a$}}\\ \hline Region&D$_{\mathrm{Near}}$&D$_{\mathrm{Near}}$&Area&A$_{\mathrm{V}}$&A$_{\mathrm{V}}$&Mass$_{\mathrm{gas}}$&$\Sigma_{\mathrm{gas}}$&n$_{\mathrm{YSO}}$&$\Sigma_{\mathrm{YSO}}$&$\Sigma_{\mathrm{YSO}}$&SFE\\ &Min-Max&Mean&&Min-Max&Mean&&&&Min-Max&Mean&\\ &[pc]&[pc]&[pc$^2$]&[mag]&[mag]&[$10^4$\,M$_{\odot}$]&[M$_{\odot}$\,pc$^{-2}$]&&[pc$^{-2}$]&[pc$^{-2}$]&\\ \hline \hline All-Survey&$0.01-2.9$&$0.33\pm0.01$&$1316$&$1.2-9.9$&$3.5$&$6.2\pm0.005$&$59.44\pm0.04$&1566$^b$&$0.05-569.13$&$1.73\pm0.001$&$0.02$\\ W3Main/(OH)&$0.01-1.8$&$0.26\pm0.01$&$231.5$&$1.2-9.9$&$4.0$&$1.4\pm0.002$&$61.31\pm0.08$&616&$0.12-558.25$&$3.41\pm0.005$&$0.04$\\ KR140&$0.02-2.9$&$0.38\pm0.02$&$853$&$1.4-6.6$&$3.0$&$3.8\pm0.004$&$45.38\pm0.04$&706&$0.05-569.13$&$1.25\pm0.002$&$0.02$\\ AFGL333&$0.03-2.2$&$0.34\pm0.02$&$231.5$&$1.7-7.7$&$3.4$&$1.0\pm0.002$&$53.25\pm0.10$&246&$0.10-336.68$&$1.77\pm0.004$&$0.02$\\ \hline \multicolumn{12}{l}{{$^a$ Average parameters for the entire AOR.}}\\ \multicolumn{12}{l}{{$^b$ Excludes repeats: two YSOs appearing in two fields due to AOR overlap.}}\\ \end{tabular} \end{sidewaystable} \begin{sidewaystable}[p!] 
\caption{Average Parameters of Groups in Subregions of W3$^a$} \label{table:parameters2} \centering \begin{tabular}{l c c c c c c c c c c c c c} \hline Data&D$_{\mathrm{Near}}$&D$_{\mathrm{Near}}$&Area&A$_{\mathrm{V}}$&A$_{\mathrm{V}}$&Mass$_{\mathrm{gas}}$&$\Sigma_{{\mathrm{gas}}}$&n$_{\mathrm{YSO}}$&$\Sigma_{\mathrm{YSO}}$&$\Sigma_{\mathrm{YSO}}$&SFE&a$^b$&a$/$b\\ &Min-Max&Mean&&Min-Max&Mean&&&&Min-Max&Mean&&&\\ &[pc]&[pc]&[pc$^2$]&[mag]&[mag]&[$10^2$\,M$_{\odot}$]&[M$_{\odot}$\,pc$^{-2}$]&&[pc$^{-2}$]&[pc$^{-2}$]&&[pc]&\\ \hline \hline \\[0.1pt] \multicolumn{14}{c}{{Class0/I($^*$)+ClassII($^*$); D$_{\mathrm{break}}=0.60$\,pc; N$_{\mathrm{YSO}}=10$}}\\ \\[0.1pt] \hline W3Main/(OH)&$0.05-0.51$&$0.21\pm0.05$&$14.7$&$2.9-7.4$&$4.8$&$11.0\pm0.02$&$71.9\pm0.2$&449&$1.61-178.21$&$7.75\pm0.06$&$0.07$&1.97&1.1\\ KR140&$0.04-0.48$&$0.16\pm0.04$&$5.05$&$2.7-4.2$&$3.4$&$2.9\pm0.009$&$51.4\pm0.3$&294&$1.62-141.95$&$8.91\pm0.03$&$0.14$&1.39&1.65\\ AFGL333&$0.07-0.44$&$0.21\pm0.05$&$2.43$&$3.0-5.2$&$4.1$&$1.6\pm0.007$&$61.7\pm0.4$&134&$3.02-73.90$&$12.59\pm0.12$&$0.15$&1.02&1.64\\ \hline \\[0.1pt] \multicolumn{14}{c}{{Class0/I/II; D$_{\mathrm{break}}=0.60$\,pc; N$_{\mathrm{YSO}}=10$}}\\ \\[0.1pt] \hline W3Main/(OH)&$0.04-0.54$&$0.19\pm0.09$&$2.84$&$3.6-6.0$&$4.6$&$2.1\pm0.01$&$69.5\pm0.5$&45&$2.44-64.50$&$8.64\pm0.06$&$0.10$&1.06&1.49\\ KR140&$0.04-0.54$&$0.19\pm0.06$&$4.51$&$3.1-4.8$&$4.1$&$2.8\pm0.01$&$61.0\pm0.3$&141&$1.44-155.20$&$7.88\pm0.03$&$0.09$&1.37&1.44\\ AFGL333&$0.08-0.46$&$0.23\pm0.05$&$2.93$&$2.9-5.8$&$4.3$&$2.0\pm0.009$&$64.7\pm0.3$&106&$1.70-32.71$&$6.43\pm0.02$&$0.10$&1.12&1.51\\ \hline \multicolumn{14}{l}{{$^a$ Parameters for individual groups (Table \ref{table:parameters2}a) available in the electronic version of this article.}}\\ \multicolumn{14}{l}{{$^b$ Ellipse semi-major axis.}}\\ \end{tabular} \end{sidewaystable} \clearpage \begin{figure*}[ht] \centering \includegraphics[scale=0.8,angle=270]{age_w3.pdf} \caption{Greyscale \textit{Spitzer}\ channel 1 
image of W3 with a superposition of `age' contours from the ratio of Class II/Class 0/I, including ($^*$) population. Only specific contours are shown for clarity: relatively old, $3\%$ of map peak value (red); intermediate, $0.5\%$ (magenta); and relatively young, $0.05\%$ (green).} \label{fig:age_all} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[scale=0.65,angle=270]{fig23_main1.pdf} \caption{Greyscale \textit{Spitzer}\ channel 1 image of W3 Main/(OH) with identified groups for N$_{\mathrm{YSO}}=10$ and D$_{\mathrm{break}}=0.6$\,pc (red ellipses). Yellow contours are Class 0/I surface density contours between $1-5\%$ of peak value $\sim560$\,YSO\,pc$^{-2}$ in $1\%$\,steps. Blue contours are Class II contours between $5-25\%$ of peak value $\sim100$\,YSO\,pc$^{-2}$ in $5\%$\,steps. YSO contours have been chosen to span a common YSO range for both classes of $\sim5-25$\,YSO\,pc$^{-2}$. Green contours are of the ratio of Class II/Class 0/I YSO surface density maps; these include transition and highly embedded candidates: $^*$ classification. These `age' contours are for 0.5-2.5$\%$ of the peak value of $\sim300$ in $0.5\%$ steps, a range chosen to highlight the youngest regions.} \label{fig:main1} \end{figure*} \begin{figure*}[ht!] \centering \includegraphics[scale=0.65,angle=270]{fig24_main2.pdf} \caption{Like Fig. \ref{fig:main1}, but excluding the less reliable ($^*$) population. Groups are marked as magenta ellipses. Class 0/I surface density contours (yellow) between $20-90\%$ of peak value $\sim8$\,YSO\,pc$^{-2}$ in $10\%$\,steps. Class II contours (blue) between $1.5-7\%$ of peak value $\sim100$\,YSO\,pc$^{-2}$ in $\sim1.4\%$\,steps. YSO contours have been chosen to span a common YSO range of $\sim1.5-7$\,YSO\,pc$^{-2}$. `Age' contours (green) are between 1-31$\%$ of peak value of $\sim500$ in $5\%$ steps. } \label{fig:main2} \end{figure*} \begin{figure*}[ht!] 
\centering \includegraphics[scale=0.65,angle=270]{fig25_main1.pdf} \caption{Like Fig. \ref{fig:main1} but for N$_{\mathrm{YSO}}=5$ and D$_{\mathrm{break}}=0.55$\,pc, which is optimal in this region for both Class 0/I + Class 0/I$^*$ groups (brown ellipses), and Class II + Class II$^*$ groups (blue ellipses). Parameters are from Table \ref{table:clusters_regions}. Figure shows the location of the highest concentrations of YSOs according to Class.} \label{fig:main1_5} \end{figure*} \begin{figure*}[ht!] \centering \includegraphics[scale=0.65,angle=270]{fig26_main2.pdf} \caption{Like Fig. \ref{fig:main1_5}, but excluding the ($^*$) population. Now D$_{\mathrm{break}}=1.95$\,pc, which is optimal for Class 0/I (brown ellipses), and D$_{\mathrm{break}}=0.55$, which is optimal for Class II (blue ellipses). Parameters are from Table \ref{table:clusters_regions}.} \label{fig:main2_5} \end{figure*} \section{W3 in Perspective: Stellar Content, Cluster Properties, and Star Formation Activity} \label{sec:history} W3 is believed to have signatures of both triggered (HDL; e.g., \citealp{oey2005}) and isolated (KR140; e.g., \citealp{kerton2008}) massive star formation. The existence of different star formation processes in the same cloud makes W3 a prime location for further investigation. The identification and characterization of the embedded/young ($1-2$\,Myr) cluster population (including their relative age) can shed some light on cluster and massive star formation. With this goal in mind, in this section we use the spatial distribution of YSO classes, the properties of the identified groups, and the surface density/age maps to present a description of the star formation history and activity in W3. Characteristics of the groups (including the ($^*$) population) discussed below can be found in Table \ref{table:parameters2}a (online). 
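Two of the tabulated group parameters can be sketched compactly: the elliptical characterization of group size and elongation described in Section \ref{sec:groupclass} (center at the mean YSO position, semimajor axis toward the farthest member, semiminor axis the minimum needed to enclose all members), and the SFE under the one-solar-mass-per-YSO assumption. The implementation below is an illustrative sketch, not our reduction pipeline; the four-point test group is synthetic:

```python
import numpy as np

def group_ellipse(xy):
    """Ellipse for one group: center = mean position; the semimajor axis a
    points to the farthest member; b is the smallest semiminor axis that
    keeps every member inside the ellipse."""
    center = xy.mean(axis=0)
    d = xy - center
    r = np.hypot(d[:, 0], d[:, 1])
    far = int(np.argmax(r))
    a, theta = r[far], np.arctan2(d[far, 1], d[far, 0])
    x = d[:, 0] * np.cos(theta) + d[:, 1] * np.sin(theta)   # ellipse frame
    y = -d[:, 0] * np.sin(theta) + d[:, 1] * np.cos(theta)
    # Smallest b per member so that (x/a)^2 + (y/b)^2 <= 1
    b = np.abs(y) / np.sqrt(np.clip(1.0 - (x / a) ** 2, 1e-12, None))
    return center, float(a), float(b.max()), theta

def sfe(m_gas, n_yso, m_yso=1.0):
    """SFE assuming m_yso solar masses per YSO (a relative measure only)."""
    return n_yso * m_yso / (m_gas + n_yso * m_yso)

# Synthetic 2:1 group; a/b gives the tabulated elongation
center, a, b, theta = group_ellipse(
    np.array([[2.0, 0.0], [-2.0, 0.0], [0.0, 1.0], [0.0, -1.0]]))
print(a / b, sfe(m_gas=6.2e4, n_yso=1566))   # survey-level SFE ~ 0.02
```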
We carried out the main analysis on W3 by working on each subregion individually, while using common group properties (e.g., N$_{\mathrm{YSO}}$ and D$_{\mathrm{break}}$) for the entire survey. Clearly, the definition and properties of the populations that are `clustered' or `distributed' (objects outside the `cluster' boundaries, stars formed in isolation and members displaced from their original birthplaces) depend on the fundamental definition of `cluster' and the chosen boundary (see e.g., \citealp{bressert2010} for a compilation of recent cluster identification techniques). The application of a common definition and technique throughout an entire cloud like W3 (same distance/resolution) can, however, be used to determine the \textit{relative} properties of each subregion, avoiding major systematic effects arising from distance dependent factors or an assumed definition. For W3, this approach is used to investigate the possibility of intrinsic differences between stellar groups in different environments and with different stellar activity (e.g., HDL vs. KR 140), and so illuminate possible intrinsic differences between the clustered and distributed populations (e.g., \citealp{allen2007}). Unless mentioned otherwise, the following analysis will be based on the YSO sample from Catalog 1 and groups with N$_{\mathrm{YSO}}=10$ and D$_{\mathrm{break}}=0.6$\,pc (`red' groups). We made use of the YSO density distribution maps and carried out an individual analysis of the identified groups in each subfield in W3 (W3 Main/(OH), AFGL 333, and KR 140). `Age' maps were also created from the ratio of the YSO surface density images, for example the ratio of Class II to Class 0/I candidates. Peaks in this map represent the `oldest' of those regions \textit{containing YSOs}, whereas a low value in a region populated by YSOs indicates a relatively young region. 
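A minimal sketch of the `age' map construction (Gaussian-kernel surface densities on a pixel grid and their ratio; the grid size, kernel width, and synthetic `old' and `young' groups are illustrative assumptions, not our adopted parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def surface_density(xy, shape, sigma_pix):
    """Gaussian-kernel-smoothed YSO surface density on a pixel grid."""
    grid = np.zeros(shape)
    j, i = np.floor(xy[:, 1]).astype(int), np.floor(xy[:, 0]).astype(int)
    np.add.at(grid, (j, i), 1.0)
    return gaussian_filter(grid, sigma_pix)

def age_map(xy_ii, xy_0i, shape, sigma_pix, floor=1e-3):
    """Ratio of Class II to Class 0/I surface density; peaks mark the
    apparently oldest YSO-bearing regions."""
    s_ii = surface_density(xy_ii, shape, sigma_pix)
    s_0i = surface_density(xy_0i, shape, sigma_pix)
    return s_ii / np.maximum(s_0i, floor)   # floor avoids division by zero

rng = np.random.default_rng(3)
mk = lambda c, n: np.clip(rng.normal(c, 2.0, (n, 2)), 0, 99)
# An `old' group at (20, 20) (Class II rich) and a `young' one at (80, 80)
amap = age_map(np.vstack([mk(20, 50), mk(80, 5)]),
               np.vstack([mk(20, 5), mk(80, 50)]), (100, 100), sigma_pix=3.0)
```

As described above, the ratio peaks only where Class II candidates dominate a YSO-bearing region; pixels far from any YSO carry low, kernel-dominated values and should not be interpreted.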
The maps used in this work are best suited for the analysis of regions known to contain YSO groups, because the low map values in inactive or unpopulated regions depend on YSOs rather distant from the pixel in question. Figure \ref{fig:age_all} shows the age map for the entire W3 cloud. The `oldest' star-forming regions (peaks) are located in the KR 140 field, although many of these old regions are not associated with identified groups (see below). Finally, it is important to distinguish between the actual `age' of the system (from the start of star formation activity) and `apparent age'. In an undisturbed system initiated by a short-lived burst of star formation, our ratio method yields reliable ages. High-density central regions would contain the oldest population and have shorter free-fall timescales (t$_{\mathrm{ff}} \propto \rho^{-1/2}$). Such an idealized system would show a relatively ordered distribution of its population, and the age estimate would be an acceptable upper limit. On the other hand, our age estimates for triggered regions are expected to be biased toward younger ages, because there is strong evidence (e.g., IC 1795; see below) for star formation being an on-going process lasting for at least a few Myr, rather than being the result of an isolated burst of star formation. Thus, on average, the YSO groups in W3 Main/(OH) might only appear to be the youngest using the simple age map contours (Fig. \ref{fig:age_all}). \subsection{\textit{Star Formation in the HDL: W3 Main/(OH) \& AFGL 333}} AFGL 333 and W3 Main/(OH) are located in the eastern HDL neighboring W4 (Fig. \ref{fig:intro}). The HDL is the densest structure in the cloud, and essentially all the major activity of the GMC is found within its boundaries. In both fields we have identified signatures indicative of several modes of star formation contributing to the overall structure and young stellar population.
\subsubsection{The W3 Main/(OH) Field} The cluster IC 1795 is located at the center of an eroded shell-like structure containing the W3 Main and W3 (OH) complexes. Both show the most active massive star formation, including deeply embedded clusters, \ion{H}{2} regions, and a trapezium-like system rivaling that in the Orion nebula (e.g., \citealp{megeath2005}). \begin{figure}[h!] \centering \includegraphics[scale=0.6,angle=0]{sfe_meandis.pdf} \caption{Inter-YSO separations as a function of SFE for W3 Main/(OH), KR 140, and AFGL 333. YSOs (including highly embedded and transition candidates) are associated in groups with N$_{\mathrm{YSO}}=10$ and D$_{\mathrm{break}}=0.6$\,pc. Filled symbols mark those groups with the youngest ages (Class II/Class 0/I $< 1$).} \label{fig:sfe_meandis} \end{figure} \begin{figure*}[h!] \centering \includegraphics[scale=0.65,angle=270]{age1.pdf} \caption{Same as Fig. \ref{fig:main1} with various `age' contours superimposed. Blue contours are Class II/Class 0/I surface density contours between 5-30$\%$ of peak value of $\sim500$ in $5\%$ steps, excluding the ($^*$) population. Red and magenta contours are for PMS/[Class 0/I+Class II] between 3-10$\%$ of peak value of $\sim80$ in $1\%$ steps (red), and 10-60$\%$ in $10\%$ steps (magenta), with and without the ($^*$) population, respectively. Green contours like Fig. \ref{fig:main1}.} \label{fig:age1} \end{figure*} Figure \ref{fig:main1} shows the W3 Main/(OH) field with the identified groups, the contours representing the surface density distributions for YSOs calculated on a grid identical to that of the \textit{Spitzer}\ images, and including highly embedded and transition stage candidates (Class 0/I$^*$ and Class II$^*$: ($^*$) sample). This figure also includes `age' contours, obtained from the ratio of the surface density maps: [Class II+Class II$^*$]/[Class 0/I+Class 0/I$^*$]. 
Figure \ref{fig:main2} shows the groups found for the same N$_{\mathrm{YSO}}$ and break length as Figure \ref{fig:main1}, but excluding the less reliable ($^*$) population. For convenience these will be specifically referred to as `magenta' groups, to distinguish them from the `red' groups in Figure \ref{fig:main1}. Reduction of the number of candidate YSOs yielded smaller and fewer groups, as expected. In addition, the omission of highly embedded sources, combined with the confusion in the regions of strong IR emission, strongly affected the numbers of YSOs detected in the innermost and most active regions, especially around W3 Main. We find two magenta groups within red Group 1, coincident with IC 1795. The oldest (magenta Group 2; Fig. \ref{fig:main2}) coincides with the oldest part of IC 1795. Magenta Group 0 coincides with a high extinction region and contains numerous \ion{H}{2} regions and known clusters, including the well-known maser sources W3 (OH) and W3 (H$_2$O). Although both have N$_{\mathrm{m}}<25$, the characteristics of these two groups are consistent with those of mid-rich triggered groups in Figure \ref{fig:main1} (see below) linked to massive star formation, with surface densities greater than $70$\,M$_\odot$\,pc$^{-2}$, A$_{\mathrm{V-mean}}>5.0$, and mean inter-YSO separations $D<0.25$\,pc. In Figures \ref{fig:main1_5} and \ref{fig:main2_5} we show the groups identified for each particular Class (with and without the ($^*$) population) with the relaxed requirement of N$_{\mathrm{YSO}}=5$. The identified groups more closely trace particular structures observed in extinction (e.g., filaments), as well as smaller separate concentrations of Class 0/I and Class II candidates. We suggest that, overall, clustered formation in compact clumps/cores can be distinguished from the star formation process associated with young groups that have low surface density and extinction and that are found in triggered regions (e.g., Groups 6 and 7; Fig. \ref{fig:main1}).
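The group identification used throughout (a minimum spanning tree whose branches longer than D$_{\mathrm{break}}$ are cut, keeping components with at least N$_{\mathrm{YSO}}$ members) can be sketched as below. This is a generic MST implementation under our reading of the method, not the actual survey code; function and parameter names are ours.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def mst_groups(xy, d_break=0.6, n_min=10):
    """Identify YSO groups: build the Euclidean minimum spanning tree of the
    positions, sever branches longer than d_break (pc), and keep connected
    components with at least n_min members.  Returns one label per star
    (-1 for stars not assigned to any group)."""
    dist = squareform(pdist(xy))            # full pairwise distance matrix
    mst = minimum_spanning_tree(csr_matrix(dist)).toarray()
    mst[mst > d_break] = 0.0                # cut branches above the break length
    n_comp, comp = connected_components(csr_matrix(mst), directed=False)
    labels = np.full(len(xy), -1)
    gid = 0
    for c in range(n_comp):
        members = np.where(comp == c)[0]
        if len(members) >= n_min:           # enforce the membership threshold
            labels[members] = gid
            gid += 1
    return labels
```

With d_break = 0.6\,pc and n_min = 10 this reproduces the `red'-group selection rule; the relaxed N$_{\mathrm{YSO}}=5$ runs correspond simply to lowering n_min.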
Figure \ref{fig:sfe_meandis} shows the inter-YSO separations as a function of SFE. Groups in `induced' regions with the largest separations have been less efficient in forming young stars, which could be due to formation in a turbulent environment. This could be indicative of the `distributed' star formation mode taking over in those regions that lack the conditions, like high column density and low turbulence, required to form massive stars and their (richer) parent clusters. Indeed, a {\sc simbad} search reveals no \ion{H}{2} regions or clusters associated with any of these triggered groups, which also lack localized and significant radio continuum emission in the Canadian Galactic Plane Survey (CGPS; \citealp{taylor2003}) maps. Rich groups ($25 \le $ N$_{\mathrm{m}} < 50$) such as Group 0 and Group 5 have average inter-YSO separations $<0.25$\,pc and the highest surface densities, and both contain within their perimeters a large proportion of the major massive star activity of the W3 GMC. This supports the classical view that triggering can produce clustered star formation when, at \textit{early stages}, high surface densities are present without disruptive turbulence, ensuring that a gravitationally bound (unstable) system is formed. The privileged location of IC 1795 (Group 1; Fig. \ref{fig:main1}), in the most central and dense parts of the HDL, was likely key in the formation of its rich population. Although its formation depends on the pre-existence of the HDL, its location, morphology, and population characteristics are compatible with isolated/quiescent formation, and not necessarily the product of a triggering event as suggested in previous studies.
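The inter-YSO separations plotted above can be quantified with a simple group statistic. As an illustrative sketch we use the mean nearest-neighbour distance; the paper's exact estimator of $D$ may differ (e.g., a mean MST branch length), so this is an assumption, not the survey definition.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_separation(xy):
    """Mean nearest-neighbour separation of a set of YSO positions,
    in the same units as the input coordinates (here, pc)."""
    tree = cKDTree(xy)
    d, _ = tree.query(xy, k=2)  # k=2: nearest neighbour of each point,
    return d[:, 1].mean()       # excluding the point itself (distance 0)
```

Applying this per group allows direct comparison against thresholds such as $D<0.25$\,pc quoted for the rich groups.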
A similar (quiescent) origin is also suggested for some individual isolated groups in the outer boundaries of the field, characterized by a relatively poor yet closely spaced (contrary to the poor groups associated with triggered regions discussed above) old population in a relatively compact configuration (e.g., Group 2 in Fig. \ref{fig:main1}). Group 1 is the next `oldest' system after Group 2, and it is the most massive and richest group, with the smallest separation distances, down to $0.02$\,pc. W3 Main and (OH) (Fig. \ref{fig:intro}) show the highest extinction and both are likely to have (at least part of) their stellar population induced by IC 1795, in agreement with the conclusions from \citet{oey2005} and \citet{polychroni2010}. A triggered scenario is supported by i) an elongated `older' Class II-dominated region within Group 1 extending in the directions of W3 Main and W3 (OH) (see Fig. \ref{fig:main1}); ii) the cavity and shell-like structure around IC 1795 hosting both systems; and iii) a tendency of Class II candidates to be located toward the most central regions, with younger groups of YSOs following the shell around Group 1 (in increasing age: Groups 4, 7, 0, 5, 3, 6; Fig. \ref{fig:main1}). Using the ratio of Class II/Class 0/I as a relative measurement of age, Group 6, the oldest after Group 2 and Group 1, also happens to be the nearest to IC 1795, while younger groups lie at larger distances. Secondary bursts of star formation, as well as a non-negligible star formation duration, are needed to reconcile the characteristics of the YSO population in IC 1795 with the age estimated by \citet{oey2005} ($3-5$\,Myr) using optical photometry and spectral analysis. We find a highly distributed population of Class II and Class 0/I sources in the vicinity of IC 1795. Star formation is estimated to occur in 1-2 dynamical crossing times \citep{elmegreen2000}, but larger age spreads are possible if multiple events occur within a certain system.
Figures \ref{fig:main1_5} and \ref{fig:main2_5} show the groups obtained when separating the YSOs according to Class with a minimum group membership requirement of N$_{\mathrm{YSO}}=5$. The `class subgroups' observed within Group 1 and the triggered population could be indicative of such additional star formation events. This combined activity likely reinforced the effects of the central cluster when forming the cavity and dense surrounding shell hosting the most massive stars in W3. Contrary to the suggestion of \citet{oey2005}, some of the stellar activity in W3 Main might also have originated in quiescent mode. Figure \ref{fig:age1} shows an `older' (low and intermediate mass) PMS population toward the outer edge of this region, shown by the magenta/red contours. On the other hand, there are young groups at the inner edge of W3 Main, closer to IC 1795. By calibrating the Class II/Class 0/I ratio of Group 1 to the age of IC 1795 we obtain an average age for these young groups of $\sim1.5-2.5$\,Myr. In this calculation we assumed the onset of star formation occurred in a single event, and so a larger proportion of Class II sources implies a more evolved population. We ignored effects such as secondary bursts of star formation in the region, and assumed that for groups with similar initial stellar characteristics there is a direct relation between the class ratio and the actual age of the system. A non-negligible period of star formation and internal triggering would make it more appropriate to consider our age estimates as lower limits. This age estimate for the young groups is, however, comparable to the estimated age for the oldest (diffuse) \ion{H}{2} regions and the PMS population known to dominate in this region ($\sim10^6$\,yrs; \citealp{feigelson2008}).
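The calibration described above amounts to a simple proportional scaling. Both the linearity of the class-ratio-to-age relation and the numerical values below are illustrative assumptions of ours (the true mapping need not be linear, and the catalog ratios are not reproduced here); the anchor age is the midpoint of the $3-5$\,Myr estimate for IC 1795.

```python
def age_from_class_ratio(ratio_group, ratio_ref, age_ref_myr):
    """Scale a group's Class II / Class 0/I number ratio to an age (Myr),
    anchored to a reference system of known age (here IC 1795).
    Assumes, purely for illustration, that age is proportional to ratio."""
    return age_ref_myr * ratio_group / ratio_ref

# Hypothetical ratios, anchoring to IC 1795 at ~4 Myr:
young_group_age = age_from_class_ratio(ratio_group=2.0, ratio_ref=4.0,
                                       age_ref_myr=4.0)  # -> 2.0 Myr
```

Under this toy scaling, a group with half the reference ratio lands in the quoted $\sim1.5-2.5$\,Myr range; secondary bursts or extended star formation would bias such estimates, as noted in the text.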
Therefore, triggering (either indirectly by IC 1795, and/or by this pre-existing PMS population) could have been responsible for the observed young proto-OB population in the central regions of W3 Main, a secondary burst of massive star formation in a region with an already highly enhanced surface density. Such a scenario would nicely link IC 1795 to one of the possible models suggested by \citet{feigelson2008} to explain the origin of W3 Main (option 4 in their list), and could explain the anomalous age distribution (a central young cluster surrounded by an older population) described by these authors. \subsubsection{The AFGL 333 Field} W3 Main and W3 (OH) dominate both in massive stars and in star formation activity. The other HDL field, containing AFGL 333, is characterized by a young stellar population, for the most part associated with high extinction regions and filamentary structures that remain undetected in the 2MASS-based extinction maps. There are also clear similarities and some differences relative to W3 Main/(OH). Figures \ref{fig:afgl1} and \ref{fig:afgl2} show the identified YSO groups (with and without the ($^*$) population) for the AFGL 333 field, including YSO surface density contours and age contours. Figures \ref{fig:afgl1_5} and \ref{fig:afgl2_5} show the groups identified for N$_{\mathrm{YSO}}=5$ and different classes (each with their corresponding D$_{\mathrm{break}}$). The oldest and youngest groups are clearly separated, with the youngest (brown ellipses) localized in the central regions, and the oldest (blue ellipses) toward the outer parts of AFGL 333 (northern parts and triggered regions neighboring W4). \begin{figure*}[ht!] \centering \includegraphics[scale=0.65,angle=270]{afgl1.pdf} \caption{Greyscale \textit{Spitzer}\ channel 1 image of AFGL 333 with identified groups for N$_{\mathrm{YSO}}=10$ and D$_{\mathrm{break}}=0.6$\,pc (red ellipses).
Yellow contours are Class 0/I surface density contours between $5-65\%$ of peak value $\sim120$\,YSO\,pc$^{-2}$ in $10\%$\,steps. Blue contours are Class II contours between $3.5-43.5\%$ of peak value $\sim170$\,YSO\,pc$^{-2}$ in $10\%$\,steps. Crosses are IRAS 02245+6115 (bottom) and IRAS 02244+6117 (top). YSO contours have been chosen to span a common YSO range for both classes of $\sim6-75$\,YSO\,pc$^{-2}$. Green contours are of the ratio of Class II/Class 0/I YSO surface density maps; these include transition and highly embedded candidates: $^*$ classification. These `age' contours are for 10-90$\%$ of peak value of $\sim300$ in $10\%$ steps. Groups like Group 1 are relatively young, while those like Group 4 are older.} \label{fig:afgl1} \end{figure*} \begin{figure*}[ht!] \centering \includegraphics[scale=0.65,angle=270]{afgl2.pdf} \caption{Like Fig. \ref{fig:afgl1}, but excluding the less reliable ($^*$) population. Groups are marked as magenta ellipses. `Age' contours between 5-80$\%$ of peak value of $\sim1000$ in $5\%$ steps.} \label{fig:afgl2} \end{figure*} \begin{figure*}[ht!] \centering \includegraphics[scale=0.65,angle=270]{afgl1_5.pdf} \caption{Like Fig. \ref{fig:afgl1} but for N$_{\mathrm{YSO}}=5$, D$_{\mathrm{break}}=1.25$\,pc, and Class 0/I + Class 0/I$^*$ (brown ellipses), and N$_{\mathrm{YSO}}=5$, D$_{\mathrm{break}}=0.55$\,pc, and Class II + Class II$^*$ (blue ellipses). Parameters are from Table \ref{table:clusters_regions}.} \label{fig:afgl1_5} \end{figure*} \begin{figure*}[ht!] \centering \includegraphics[scale=0.65,angle=270]{afgl2_5.pdf} \caption{Like Fig. \ref{fig:afgl2} but for N$_{\mathrm{YSO}}=5$, D$_{\mathrm{break}}=0.15$\,pc, and Class 0/I (brown ellipses), and N$_{\mathrm{YSO}}=5$, D$_{\mathrm{break}}=0.55$\,pc, and Class II (blue ellipses). Parameters are from Table \ref{table:clusters_regions}.} \label{fig:afgl2_5} \end{figure*} Group 4 (Fig.
\ref{fig:afgl1}) is the oldest and most massive group in the field, with the largest area and surface density, and the highest extinction (A$_{\mathrm{V-peak}}\sim7.5$). From our analysis we conclude that its population formed first, clearing a region and eroding a cavity-like structure much like IC 1795 (sizes $\sim6-7$\,pc), but less populated in YSOs. Within this cavity, the YSO population shows a more compact configuration in the eastern side of the group, and a more widely separated population toward the western side, where some high extinction filamentary structures are identifiable. There is also a population of PMS candidates toward the edge of this cavity, on the western side of Group 4 (opposite W4). Just as with IC 1795, the properties of this group (its location in the middle of the HDL and the geometrically `ordered' population, already eroding the surrounding material) are compatible with it being produced by quiescent formation, rather than having been triggered by the Perseus superbubble. A significant difference relative to W3 Main/(OH) is the abundance of filaments with and without stellar activity. Groups in this field are all relatively poor, with only two groups (red Groups 1 and 4) having N$_{\mathrm{m}}>25$ members (Fig. \ref{fig:afgl1}) and with the YSO population mainly localized to these filamentary structures. The morphology of such filaments, seen well in \textit{Spitzer}\ channel 4 (Fig. \ref{fig:intro_ch4}), suggests a turbulent origin, a suggestion supported by their remarkable similarity and continuity with nearby (non-extinction) structures. A good example is Group 6 (Fig. \ref{fig:afgl1}), associated with a filamentary structure within the cavity caused by the stellar population in Group 4, and containing a relatively old YSO population (but still younger than the latter).
Its associated population, while slightly offset from that filament either due to displacement since formation or clearing of the immediate surroundings, clearly traces the overall shape of the high extinction structure, indicating a birth association. While the filament in Group 6 is easily traced, only dispersed `remnants' of high extinction structures are visible at the western side of Group 4. This supports the `older' evolutionary stage for the latter, where the apparently dissociated population may be due to the dispersal of their common parental filaments. A triggered origin is clearly identified for Groups 7 and 3 (N$_{\mathrm{m}} < 25$), both in the outer boundary of the HDL and associated with structures carved by the activity in W4. The pillar associated with Group 7 coincides with a known cluster (IRAS 02252+6120; \citealp{bica2003}), containing both a Class II population, located toward the outer edges of the pillar, and Class 0/I sources, mainly in the innermost regions. Such a configuration has previously been observed in other pillar-like structures (e.g., \citealp{choud2010}) and suggested to be the result of radiation-driven implosion (RDI) triggered star formation, in which an ionization/shock front driven by an expanding \ion{H}{2} region causes a neighboring overdensity to collapse, triggering the formation of stars. Group 1 is the youngest and the richest group in Figure \ref{fig:afgl1}. It is located in the innermost regions of AFGL 333 and is associated with the densest and most prominent filament in this field. If filaments are formed (or induced to collapse) by turbulence and compression by nearby star activity, then Group 1 might have been induced by the activity at opposite sides associated with IRAS 02245+6115 and IRAS 02244+6117, bottom and top crosses in Figure \ref{fig:afgl1}, respectively.
The latter contains bright infrared sources (BIRS; \citealp{elmegreen1980}), and the former hosts a known cluster and massive star activity \citep{bica2003}. These form the brightest regions in the mid-infrared in AFGL 333 and both contain a population of PMS candidates in their outermost parts. Thus two active older regions sandwich the central young stellar population of Group 1. Group 5 (Fig. \ref{fig:afgl1}), while associated with the same high extinction structure as Group 1, is relatively member-poor. Its location away from the influence of the main (infrared-bright) star forming activity in AFGL 333 may explain the low membership, in contrast with the richness of Group 1. High extinction structures border Group 0 as well (especially noticeable in IRAC channel 4; Fig. \ref{fig:intro_ch4}). The associated filaments are located within an evacuated region, much like those in Group 6. However, as mentioned above, triggering in such regions of low surface density will likely result in isolated star formation (instead of clustered). The results from the above analysis remain unchanged when the $(^*)$ population is excluded, except for the omission of red Groups 2 and 5 (Fig. \ref{fig:afgl2}). We note that the features seen in extinction in the mid-infrared are better tracers of high column density material than the 2MASS-based extinction map, the latter missing some major high extinction areas (e.g., Group 1, Group 7; Fig. \ref{fig:afgl1}). Thus the mass (and surface density) estimates for associated groups are lower limits. Indeed, Group 1 has the largest number of associated YSOs, and it is expected to have a surface density comparable to or greater than that of groups of similar membership, $>80$\,M$_\odot$\,pc$^{-2}$. The extinction structure associated with this group is the brightest in the AFGL 333 field in the $850$\,\micron\ SCUBA map, which is an excellent tracer of column density. We find a similar situation in the KR 140 field (below).
For small structures and filaments prominent in the submillimeter maps, including KR 140-N, north of the KR 140 \ion{H}{2} region near the center of the field, information is lost in the 2MASS-based extinction maps. Our Herschel analysis (in preparation) will be able to provide accurate masses and properties for each of these structures based on temperature-calibrated dust emission. \subsubsection{Modes and Sequential Star Formation} It has been argued that the presence of massive star forming sites along the interface between W3 and W4 is evidence that the main activity in the HDL is triggered by the expansion of the W4 \ion{H}{2} region (e.g., \citealp{carpenter2000} and references therein). This is supported by the estimated ages of the structures (e.g., \citealp{oey2005}). We agree that the HDL structure itself was created by W4, with conditions (e.g., turbulence, surface density) favorable for star/cluster formation. However, whether the formation and ultimate collapse of clumps and cores in W3 were directly triggered is less clear. Based on the above evidence, it also seems plausible to us that the formation of the HDL was followed by a more quiescent evolution governed by local conditions. Local triggering does play a major role in enhancing (massive) star/cluster formation: i) internal triggering within evacuated regions (inner shells) generates secondary bursts of star formation and a distributed population, either by forming small overdensities or by collapsing pre-existing ones (clumps and filaments; IC 1795, AFGL-Group 4, AFGL-Group 0); ii) compression of high density regions generates major bursts of star formation, including massive stars and highly embedded clusters (e.g., inner part of W3 Main, W3 (OH), AFGL-Group 1, AFGL-Group 7). Overall, the star formation activity and processes in the AFGL 333 and W3 Main/(OH) fields operate similarly, albeit less vigorously in the former.
Environmental physical differences between the two regions, such as column density distribution and kinematics, will be investigated in upcoming papers. The preponderance of filaments and the characteristics of these and other star forming and starless structures in the HDL will be the subject of our Herschel paper currently in preparation. \begin{figure*}[ht!] \centering \includegraphics[scale=0.65,angle=270]{kr1.pdf} \caption{Greyscale \textit{Spitzer}\ channel 1 image of the KR 140 field with identified groups for N$_{\mathrm{YSO}}=10$ and D$_{\mathrm{break}}=0.6$\,pc (red ellipses). Yellow contours are Class 0/I surface density contours between $10-90\%$ of peak value $\sim100$\,YSO\,pc$^{-2}$ in $10\%$\,steps. Blue contours are Class II contours between $4.0-34.0\%$ of peak value $\sim250$\,YSO\,pc$^{-2}$ in $10\%$\,steps. YSO contours have been chosen to span a common YSO range for both classes of $\sim10-90$\,YSO\,pc$^{-2}$. Green contours are of the ratio of Class II/Class 0/I YSO surface density maps; these include transition and highly embedded candidates: $^*$ classification. These `age' contours are for 4-20$\%$ of peak value of $\sim680$ in $2\%$ steps.} \label{fig:kr1} \end{figure*} \begin{figure*}[ht!] \centering \includegraphics[scale=0.65,angle=270]{kr2.pdf} \caption{Like Fig. \ref{fig:kr1}, but excluding the less reliable ($^*$) population. Groups are marked as magenta ellipses. Class 0/I surface density contours between $5-95\%$ of peak value $\sim50$\,YSO\,pc$^{-2}$ in $10\%$\,steps. Class II contours between $1.5-30.5\%$ of peak value $\sim175$\,YSO\,pc$^{-2}$ in $10\%$\,steps. YSO contours have been chosen to span a common YSO range for both classes of $\sim2.5-50$\,YSO\,pc$^{-2}$. Age contours are for 2-52$\%$ of peak value of $\sim1000$ in $5\%$ steps.} \label{fig:kr2} \end{figure*} \begin{figure*}[ht!] \centering \includegraphics[scale=0.65,angle=270]{kr1_5.pdf} \caption{Like Fig. 
\ref{fig:kr1} but for N$_{\mathrm{YSO}}=5$, D$_{\mathrm{break}}=0.75$\,pc, and Class 0/I + Class 0/I$^*$ (brown ellipses), and N$_{\mathrm{YSO}}=5$, D$_{\mathrm{break}}=0.95$\,pc, and Class II + Class II$^*$ (blue ellipses). Parameters are from Table \ref{table:clusters_regions}.} \label{fig:kr1_5} \end{figure*} \begin{figure*}[ht!] \centering \includegraphics[scale=0.65,angle=270]{kr2_5.pdf} \caption{Like Fig. \ref{fig:kr2} but for N$_{\mathrm{YSO}}=5$, D$_{\mathrm{break}}=2.25$\,pc, and Class 0/I (brown ellipses), and N$_{\mathrm{YSO}}=5$, D$_{\mathrm{break}}=1.05$\,pc, and Class II (blue ellipses). Parameters are from Table \ref{table:clusters_regions}.} \label{fig:kr2_5} \end{figure*} \subsection{Star Formation in the Central and Western Region: KR 140-N and KR 140 \ion{H}{2} Region} \begin{figure*}[ht!] \centering \includegraphics[scale=0.65,angle=270]{age_kr.pdf} \caption{Greyscale \textit{Spitzer}\ channel 1 image of the KR 140 field. Contours like Fig. \ref{fig:age_all}, but with red contours between $3-33\%$ in $10\%$ steps. Small blue circles are 2MASS-based PMS candidates. Brown and blue ellipses as in Figs. \ref{fig:kr1} and \ref{fig:kr2}, respectively.} \label{fig:age_kr} \end{figure*} The W3 molecular cloud is, overall, in an advanced state of evolution. Channel 4 \textit{Spitzer}\ images reveal a highly turbulent and dynamic environment with a variety of structures such as bright rims, pillars with varying orientation, cavities, and filaments (Fig. \ref{fig:intro_ch4}). In this third field, groups have a wide range of ages and properties and can be separated mainly into i) groups associated with filaments; ii) groups associated with the active star forming regions: KR 140-N and KR 140 (Fig. \ref{fig:intro}). Figures \ref{fig:kr1} and \ref{fig:kr2} show the YSO groups (with and without the ($^*$) population) identified in the central and western regions of the W3 GMC. The figures also show YSO surface density contours and age contours.
Figures \ref{fig:kr1_5} and \ref{fig:kr2_5} are like Figures \ref{fig:kr1} and \ref{fig:kr2}, but for N$_{\mathrm{YSO}}=5$ and separating YSOs according to class. These figures show a tendency for Class II sources to be more widely distributed. While this effect could be enhanced by motion away from their birth sites, when Class II sources are systematically located on one particular side of the parent structure, as in the case of Group 2 (Fig. \ref{fig:kr1}), a propagating triggering disturbance seems more likely to be responsible for the observed distribution. The Class 0/I population is well localized, almost exclusively in the high extinction filaments and bright rims. The Class II population is found distributed throughout the entire cloud, following the infrared (PAH) emission structures (channel 4) and within the central `cavity' that separates the HDL and the western region comprising KR 140-N and KR 140. Independent star forming events likely produced the observed distributed population. Star formation events throughout the cloud also triggered secondary bursts of star formation, resulting in the eroded and turbulent environment observed at longer wavelengths. An almost square lower column density region is noticeable in the extinction maps, but we find no evidence of any major event that could have caused an evacuation. An age sequence could be inferred for the star forming regions associated with the HDL. However, the structures and groups found in the central and western regions of the W3 GMC (Fig. \ref{fig:kr1}) appear more characteristic of sequential or even isolated events, perhaps more typical of the normal evolution and aging of a cloud without a neighbor like W4, and/or perhaps related to the original formation of the GMC. Groups in the central region of the W3 cloud are member-poor (N$_{\mathrm{m}} < 25$; Groups 6, 5, 1, 10, 3, and 0, in order of increasing membership; Fig. \ref{fig:kr1}).
Analysis of the environment of the filaments (Groups 6, 5, 1, 0, the latter also identified as magenta Group 0; Fig. \ref{fig:kr2}) suggests their YSO population could have been triggered by external events. The fact that most run parallel to the HDL might indicate some interaction (pillars and elephant trunks in the region point toward the upper (and older) part of AFGL 333), but might, on the other hand, just reflect the high density there, based on their range of ages. Group 10, located in a bright rim of emission in the PAH band in the northern parts, does suggest a triggered formation. The orientation of other nearby pillar structures points to a triggering source located in the most northern parts of the cloud and outside the \textit{Spitzer}\, surveyed area. The ages of the systems in the central regions of W3 range from the youngest (Group 5 or 10) to the oldest (Groups 0 and 3) in the GMC. Group 0 is also the filament with the lowest extinction. Group 3 exists within a cavity-like structure in the neighborhood of weak emission pillar-remnants that point toward the location of Group 10. This, combined with the lack of an associated extinction structure and its low density, is suggestive of an old triggered population that formed from material associated with the now low density pillar structures. The richest groups (N$_{\mathrm{m}} \ge 25$; Groups 4, 9, 8, and 2, in order of increasing membership; Fig. \ref{fig:kr1}) are associated with KR 140-N, KR 140 (e.g., \citealp{kerton2001}; \citealp{kerton2008}), and the major filament east of this \ion{H}{2} region. The structure and YSO population of KR 140-N (Group 9) support a triggered origin from RDI. The innermost region of this structure contains a large population of Class 0/I sources, with Class II candidates extending in front of and behind the shocked region. A similar YSO distribution was observed in pillars associated with embedded clusters triggered by W4 (e.g., Group 7 in the AFGL 333 field; Fig.
\ref{fig:afgl1}) that were also suggested to have been produced by RDI. The cometary morphology also resembles that obtained by theoretical models of RDI of a cloud exposed to a high ionizing flux ($\Phi_{\mathrm{LyC}}=3\times10^{11}$\,cm$^{-2}$\,s$^{-1}$; \citealp{bisbas2011}). A significant tail of `blown' material extends toward the west, containing a largely distributed population of Class II and some Class 0/I sources, the latter generally associated with knots of bright PAH emission. Just as Group 7 in Figure \ref{fig:afgl1} is triggered by W4, the presence of Class II sources `ahead' of the front for KR 140-N suggests triggering by a source `external' to this structure and to the east. The location of the rim, at the edge of an evacuated region, also suggests an external (albeit so far unidentified) influence. The (low density) YSO population of Group 7, likely associated with the shell of KR 140, is the youngest, indicating active star formation triggered by the ionizing star VES 735. We find a population of Class II and PMS sources extending toward the north of the \ion{H}{2} region (Fig. \ref{fig:age_kr}) that are likely to have originated because of the activity in the latter. Group 8, a rich (N$_{\mathrm{m}} > 50$) group projected on the \ion{H}{2} region, contains a mixed population of Class 0/I and II sources (ratio $\sim1$) indicative of an extended period of star formation. This group and Group 2 are not only the richest in the field, but are also associated with the highest surface densities ($>60-70$\,M$_\odot$\,pc$^{-2}$) and extinctions (A$_{\mathrm{V-peak}}\sim5.5$). This extinction is in agreement with that estimated for the exciting O star of KR 140, VES 735, by \citet{kerton1999} from spectroscopy (A$_{\mathrm{V}}\sim5.4$) and optical photometry (A$_{\mathrm{V}}\sim5.7\pm0.2$), confirming the reliability of the extinction map at least within the resolution-related limitations.
The filaments associated with Groups 2 and 4 are the most prominent of the region. Our analysis indicates that Group 2 is in a relatively highly evolved stage and was not triggered by KR 140, in agreement with previous analysis \citep{kerton2008}. In contrast to the population of Group 4, Class II sources in Group 2 are displaced from the filament toward the north, with a string of Class 0/I sources still deeply embedded in the innermost regions of the filament. The east-west orientation and arc-like shape of this structure, as well as the distribution of the Class II population and the material traced in the mid-infrared, suggest a possible trigger located in the direction of KR 140-N. Group 4 is younger, with characteristics and orientation that suggest a link to the activity in KR 140, at least in the northernmost parts. All member-rich groups are also identified even after excluding the ($^*$) sample (Fig. \ref{fig:kr2}). When relaxing the membership requirement to N$_{\mathrm{YSO}}=5$ we again reach similar conclusions (Figs. \ref{fig:kr1_5} and \ref{fig:kr2_5}). Overall, Class 0/I candidates are confined to the innermost regions of the filaments (high ellipticity groups), and therefore trace these structures with high accuracy. We also detect a distribution of mainly Class II groups around the `cavity-like' region bounded by Group 10 (Fig. \ref{fig:kr1}), KR 140-N, and Group 3. While such a configuration is reminiscent of a triggered origin, the distributed Class II population might actually be the result of percolating low-level spontaneous star formation. A string of such Class II groups is observed crossing the entire field, extending from the easternmost regions to KR 140, and incorporating the filaments identified with Groups 2 and 4 (Fig. \ref{fig:kr1_5}), suggesting that the filaments in the southern central and western parts of W3 are the peak overdensities in a region of overall enhanced extinction.
VES 735 might have formed as part of a growing loose association of massive stars. \citet{kerton2001} indicate that the nearby sources IRAS 02171+6058 and IRAS 02174+6052 have luminosities consistent with lower mass embedded B stars. We find a Class 0/I YSO matching IRAS 02171+6058 and no significant radio continuum emission, which supports their hypothesis of an embedded B-type star. We do not detect a \textit{Spitzer}\ counterpart for the other source. A more detailed analysis is required to investigate this possible association for VES 735, including the numerous bright infrared stars (BIRS) in this region\footnote{The YSO classification and association with clumps and cores will be investigated in our upcoming Herschel paper}. In Figure \ref{fig:age_kr} we plot the `age' contours deduced from the ratio of Class II/Class 0/I with ($^*$) sources. Peaks in the red contours indicate the oldest age, while green represents the youngest. The distribution of ages throughout the field is evident. Group 7, associated with the shell, is the youngest. The age of VES 735 is estimated to be $\sim2$\,Myr, consistent with an \ion{H}{2} region of $\sim1-2$\,Myr (\citealp{kerton1999}; \citealp{ballantyne2000}), and so this age can be considered an upper limit for the YSO population associated with this group. Group 7 is followed in age by Group 5, and those associated with the other active (infrared-bright) star forming sites: Group 10, Group 9 (KR 140-N), and Group 8 (KR 140). Excluding the ($^*$) sources we find the youngest group to be that associated with the filament closest to KR 140 (Group 4 in Fig. \ref{fig:kr1}), followed by the population in KR 140-N (Group 9), the filament associated with Group 2, and Group 3 (central W3), with the \textit{oldest} groups being Group 8, projected on KR 140, and Group 0. The regions undergoing star formation (Class 0/I) in the central and western part of W3 are at very localized spots in the cloud. 
Neighboring groups have an age spread suggestive of individual star formation events and evolution. Filaments host some of the oldest populations, while those with triggered morphology (Group 10, KR 140-N: Group 9) are younger, as expected if actually triggered by (and therefore dependent on) a \textit{previous} event. Star formation events must also continue over at least a few Myr, as suggested by the Class II population distributed throughout the cloud surrounding the active star forming sites. Figure \ref{fig:age_kr} also shows the candidate PMS sources, which dominate near the KR 140 \ion{H}{2} region. Finding the oldest population to be associated with KR 140 itself (and therefore independent of any previous star formation) supports the `spontaneous' origin of its massive exciting star. \section{Conclusion} By means of \textit{Spitzer}\ and 2MASS data we have identified and analyzed the young stellar population in the W3 GMC. These sources were classified according to the standard `CLASS' nomenclature for YSOs, and compared to other classification schemes based on intrinsic stellar properties, such as envelope accretion rate and disk mass. We find distinct regions in the CCD separating the intrinsically (and observationally) young population from the disk-dominated sample, with an intermediate region likely containing edge-on optically thick disks and weaker envelope candidates. Observationally, the low-mass PMS population cannot be identified unambiguously without first identifying the protostellar Class 0/I and Class II sources. Intermediate-mass stars could in principle be identified \textit{without} the need for any previous protostar classification, although it is likely that some HAeBe sources would then be missed through misclassification as Class II and T-Tauri objects. The YSO population was divided into spatial groups according to the minimum spanning tree algorithm. We also created YSO surface density maps and `age' maps.
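The minimum-spanning-tree grouping used here is equivalent to single-linkage clustering: build the Euclidean MST of the YSO positions, cut edges longer than a critical length, and keep components above a minimum membership. The following is a minimal pure-Python sketch of that idea, not the pipeline actually used; the parameters \texttt{d\_cut} and \texttt{n\_min} are hypothetical stand-ins for the paper's adopted cut length and membership threshold.

```python
import math
from itertools import combinations

def mst_groups(points, d_cut, n_min):
    """Group 2-D points via Kruskal's MST algorithm with a union-find:
    edges longer than d_cut are cut, and only connected components with
    at least n_min members are returned (single-linkage clustering)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # find the component root, with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # all pairwise edges, sorted by Euclidean length
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    for d, i, j in edges:
        if d > d_cut:          # edges beyond the cut never join groups
            break
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj    # union: this edge merges two MST fragments
    comps = {}
    for i in range(n):
        comps.setdefault(find(i), []).append(i)
    return [sorted(c) for c in comps.values() if len(c) >= n_min]
```

Cutting MST edges at a fixed length reproduces exactly the critical-branch-length criterion commonly used for MST group extraction.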
These data were used to investigate the characteristics and history of the star formation in W3. The distinctive HDL is no doubt influenced by the expansion of W4. The high density conditions there favored the formation of particularly rich bursts of star formation. The HDL contains the main star formation activity of W3 and has signatures of both spontaneous and triggered star formation from both external (e.g., W4) and internal events. Whether W4 was just key to creating the initial favorable \textit{conditions} for (massive) star and cluster formation, the very high surface density regions, clumps and cores which then collapse at a later stage, or whether it also \textit{triggered} the star formation in such clumps and cores is a more subtle question. Our finding of a relatively older population in the western side of W3 Main and AFGL 333, opposite to shells formed by compression by IC 1795 and W4, respectively, suggests that star formation could have started first in quiescent mode throughout the GMC, including these structures as well as the cluster IC 1795 itself. Subsequently, triggering mechanisms driven by the intense activity (e.g., IC 1795) were responsible for compressing an already dense environment, greatly enhancing and/or inducing major bursts of massive star and cluster formation (e.g., W3 Main, W3 (OH)). The evolved state of W3 is also evident in the central and western parts of this cloud, an overall highly turbulent and eroded environment. This is believed to have been produced by numerous individual star formation events that were responsible for triggering secondary episodes of star forming activity. The central regions between AFGL 333 and KR 140 are particularly rich in filaments, seen in extinction in the mid-infrared, whose formation and star forming activity appear to be associated with this highly turbulent environment. Recent star formation is confined mainly to these very localized regions.
The overlapping spatial distributions of the YSO class populations in the most active areas of W3 indicate on-going periods of star formation in regions of massive star and cluster formation, as opposed to a single, major short-lived event. Based on the numbers of YSO types (in groups) and ratios of class surface densities (in regions) as relative measurements of age, we find that the region comprising W3 Main and W3 (OH) is the youngest, followed by the central/west region of W3 and AFGL 333. However, in AFGL 333 confusion does not allow us to detect groups in the currently most active areas, and therefore the age is representative of the YSO population associated with filaments. We cannot determine if the activity in AFGL 333 started first or if the activity all across the HDL started at similar times triggered by W4, but of all the ages derived for the groups associated with isolated filaments we find those in this region to be the oldest. Indeed, the characteristics of the YSO population and individual group ages suggest that the activity immediately west of AFGL 333 could have induced some secondary triggered events in filamentary structures in the central regions of the cloud. On-going stellar activity in W3 Main, IC 1795, and KR 140 results in younger apparent ages for their associated groups. Nevertheless, the KR 140 region appears to contain the oldest population in the western part of W3. This age is less than that of IC 1795, and the absence of a suitable nearby triggering mechanism supports the spontaneous origin for VES 735 and stars in this possible association. We find that the KR 140-N population must be intrinsically young. This structure shows not only a morphology consistent with triggering by RDI, but also a well-defined distribution of Class II sources surrounding the Class 0/I population, similar to that associated with pillar structures. The age of IC 1795 has been estimated to be $\sim3-5$\,Myr.
Even when considering a supersonic velocity (typical of the ionized medium) of c$\sim10$\,km\,s$^{-1}$, IRS5 would require $\sim2.5$\,Myr, and $\sim3$\,Myr for interaction with KR 140-N. If IRS5 itself were actually triggered by IC 1795, then the former is an even more unlikely candidate for triggering activity in the western regions. If a source of triggering is indeed located in the HDL, then at least the western side of these structures must have been undergoing star formation activity well before the onset of the present major activity in the HDL. This is supported by findings of a relatively older population in the western side of these structures. Results presented above reveal that despite having considered a range of relatively evolved stages of pre-stellar evolution, the W3 GMC is still a prime target for investigating different modes of star formation. We classified as a \textit{grouped} population those YSOs not part of a previously cataloged cluster, but belonging to a group identified through the MST analysis. \textit{Primordial/quiescent \textit{grouped} formation} is identified by old, relatively isolated systems with the closest inter-YSO separations and compact configurations (not necessarily circular if parental material is filamentary), whose richness depends on the primordial surface density of the clump/core. An example is Group 2 in Figure \ref{fig:main1}. \textit{Triggering} may be a more efficient mechanism for creating high surface density structures favorable to high multiplicity and/or for inducing the collapse of such structures (e.g., by overcoming internal pressure and turbulent support). All current major clustered massive star activity in W3 is believed to have been `triggered' to some extent.
\textit{Quiescent clustered formation} might nevertheless have occurred initially to produce IC 1795, whose preferential location in the HDL resulted in a rich population that subsequently triggered secondary bursts of star formation within an already overdense region. The outcome for the GMC would also depend ultimately on the physics, effectiveness, and limitations of the processes creating the precursors of groups and clusters in the parental cloud (e.g., filaments, clumps, and cores). \textit{Triggered formation} has occurred throughout W3. In some cases, unstable structures with enough surface density and low turbulence are created (collect/collapse model). Elsewhere collapse is induced (RDI) in the neighborhood of the triggering source. Examples of triggered \textit{grouped} formation are Group 7 in Figure \ref{fig:kr1}, a YSO population associated with a shell of material compressed by the expanding KR 140 \ion{H}{2} region, and Group 9 associated with a cometary-like structure in KR 140-N. \textit{Clustered} triggered formation can be observed in pillar structures such as Group 7 facing W4 (Fig. \ref{fig:afgl1}). The present data cannot confirm whether pre-existing cold seeds had formed in the cloud prior to the triggering mechanisms, and this issue will be revisited with the available Herschel data (Rivera-Ingraham et al., in preparation). Regardless of the detailed sequence of processes, our work indicates that triggering is the main mechanism associated with those structures dense enough to host the current main cluster and massive star formation activity (i.e., W3 Main). Clusters formed within such structures are the richest. IRS5 in particular, unresolved in our data, has been suggested to be a Trapezium-like system of proto OB stars within an envelope $<0.02$\,pc in size; its protostellar density, of $\sim0.5-5\times10^{6}$\,pc$^{-3}$, makes it one of the densest clusters known (\citealp{megeath2005}; \citealp{rodon2008}).
This system is itself embedded in a massive cluster of low-mass and PMS stars (\citealp{megeath1996}; \citealp{feigelson2008}) within the region of highest extinction in the entire W3 GMC (A$_{\mathrm{V}}=9.6\pm0.3$). We classified as \textit{distributed} those YSO candidates not in groups and not presently associated with extinction structures in the infrared. Some of the distributed population has an origin associated with high extinction structures (e.g., filaments, pillars) cleared by the stellar activity (e.g., Fig. \ref{fig:afgl1}, western part of Group 4). A filament can follow a highly irregular and curved morphology, likely linked to large-scale turbulence in the environment, and the YSOs born in the filament will look highly distributed after the disappearance of their parent structure (e.g., Group 6, Fig. \ref{fig:afgl1}). The fact that the Class II population of Group 4 is surrounded by possible `remnants' of parent structures suggests lifetimes for the filaments and clumps of the order of a few Myr. Whether the star formation processes within filaments differ from those in clustered formation needs to be determined. Note that a population of filamentary origin can be confused with one that truly formed in isolation. The latter can be observed in the `tail' of KR 140-N and in highly turbulent, low surface density regions in the vicinity of active star formation. An example can be found in the western side of W3, where `knots' of infrared emission trace isolated overdensities (clumps, cores), more relevant for the formation of individual objects. Herschel submillimeter imaging data (Rivera-Ingraham et al., in preparation) will be used to identify and characterize the population of cores and clumps in the W3 GMC.
By examining how these differ in regions of quiescent and triggered activity or with and without an associated YSO population, this analysis will probe the processes involved in the earliest stages of massive star/cluster formation and the different modes of star formation. In the present analysis we observe a link between the star formation modes and their environment. Filamentary structures appear to trace the turbulence in the cloud. Since the environment is a key parameter in understanding the processes and properties of the stellar progenitors, molecular data (Polychroni et al., in preparation) will also be used to characterize the dynamics of the material associated with the Herschel sources, and link their differing properties back to their environment. \acknowledgements We thank the anonymous referee for very useful suggestions and improvements to this paper. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada and the funds received by A. Rivera-Ingraham as Connaught Fellow at the University of Toronto.
\section{Introduction} \label{sec:intro} Spherical models are made of $N$ real variables $\sigma_i\in\mathbb{R}$ satisfying the global constraint $\sum_i \sigma_i^2 = N$. They play a key role among solvable models in statistical physics, because they usually allow for closed and compact algebraic solutions~\cite{Berlin52,Baxter82}. Moreover, since the variables are real, the space of configurations is continuous and differentiable, thus allowing one to study several kinds of dynamics in these models (e.g.\ Langevin dynamics or gradient-descent-like relaxations). By contrast, models whose variables satisfy local constraints pose more problems. For example, in Ising and Potts models the variables take discrete values and so the space of configurations is not continuous; while in $O(n)$ models (e.g.\ with XY or Heisenberg spins) each variable is continuous, but needs to satisfy a local constraint of unit norm in an $n$-dimensional space, and this in turn makes the analytic solution much more complicated; see for instance~\cite{LRT17,LRT18,LPRT19}.\\ The success of spherical models is well witnessed by the fully-connected spherical $p$-spin model. For $p\ge3$ this model is the most used mean-field model for glassy dynamics. We learned a lot from it precisely because both the thermodynamics and the dynamics can be easily solved~\cite{Crisanti92,Crisanti93,Crisanti95,Bouchaud96}. The thermodynamic solution has been obtained via the replica method, and it has a compact analytic form thanks to the spherical constraint: the solution predicts a random first-order transition from a high-temperature paramagnetic phase to a low-temperature spin glass phase. The equilibrium and out-of-equilibrium dynamics have been solved via the generating functional formalism, and the solution is exact thanks to the mean-field nature of the model and the spherical constraint~\cite{Cugliandolo93,Bouchaud96}.
Notwithstanding the success of fully-connected spherical models, we are well aware that they have several unrealistic features: full connectivity is unlikely to occur in any realistic phenomenon, and the spherical constraint is just a global surrogate for the actual constraint each variable should satisfy locally. In other words, in realistic models each variable is somehow bounded, and one uses the single global spherical constraint to make computations easier. Although this approximation is extremely useful, it has some drawbacks. For example, when the interactions are diluted, a condensation phenomenon may take place~\cite{Majumdar05,Szavits-Nossan14,GILM19}. The diluted and sparse versions of a model are particularly interesting, because moving away from the fully-connected limit is needed in order to study more realistic phenomena~\cite{Bouchaud04,Biroli06,Biroli08,Biroli2013FragilityMF, Altieri2017Mlayer}. We reserve the word \emph{sparse} for graphs with a mean degree $O(1)$, i.e.\ not growing with $N$, while we use the term \emph{diluted} for a graph which is not fully-connected, but whose mean degree still grows with $N$. In sparse models the couplings do not vanish in the large $N$ limit, which implies that the solution is deeply non-perturbative. The cavity method has been developed precisely to solve sparse models \cite{Mezard01}. Only in a few cases has this method been exploited for fully connected lattices, e.g., in the study of models with discrete variables~\cite{KT95} or in the case of linear interactions, as for the planted SK problem in the context of inference~\cite{AKUZ19}. Diluted models are much less studied in the literature than fully-connected and sparse models. Nonetheless, they are very interesting in several respects. They can be used in numerical simulations as a proxy for fully-connected models, which are very demanding in terms of computing resources.
They appear in models of random lasers, where dilution is induced by the selection rules for the coupling of light modes in random media~\cite{Antenucci15a,Antenucci15c,Gradenigo20}. Depending on the level of dilution, they allow for heterogeneities and local fluctuations in models that can still be solved similarly to the fully-connected version, that is, by exploiting the fact that couplings are weak and the graph mean degree diverges. We believe it is worth dedicating more effort to studying the realm of diluted models. In the present contribution we would like to set up the framework that would allow us to study diluted models via the cavity method. We are particularly interested in spherical models, because their solution turns out to be particularly simple and compact. However, spherical models may undergo a condensation transition when the interaction graph is diluted. How the condensation transition can be avoided in a $p$-spin model by just modifying the spherical constraint is another open problem which we are currently investigating and which will be discussed elsewhere~\cite{AGLR21}. The study of whether condensation takes place is a delicate matter: it depends on a competition between the functional form of the global constraint, which can even be non-spherical, and the strength of the interactions, the latter depending on both the order of the non-linearity and the amount of dilution in the graph. Working with Hamiltonian models where variables interact via $p$-body terms and calling $M=O(N^\delta)$ the number of interaction terms, one would like to single out the threshold exponent $\delta_c$ such that for $\delta > \delta_c$ there is no condensation at finite temperature, while for $\delta < \delta_c$ the system is in the condensed phase at any temperature. So far the situation is clear only for the two boundaries of the interval of possible values for $\delta$.
For $\delta=p$, which represents the complete graph, condensation is never found at finite temperature, while the sparse graph, i.e., $\delta=0$, is always in the condensed phase provided that interactions are non-linear, i.e., $p>2$. The situation for intermediate values of $\delta$ is under current investigation, and we expect the present work to be an important step in this direction. For the moment we focus on the dilution regime where such a condensation phenomenon does not take place. \\ In the following we present the zeroth-order step of the above program by showing how to use the cavity method to solve the fully-connected version of spherical spin glass models. Although the cavity method is well known \cite{Mezard09}, its use in spherical models has not appeared in the literature before (to the best of our knowledge). The application of the cavity method to spherical models is not straightforward, because one has to decide how to convert a \emph{global} constraint into a set of \emph{local} ones. We will discuss this aspect explicitly and propose a standardized solution. Once the cavity equations are written, their solution requires some Ansatz for the distribution of local fields. This is one of the advantages of the cavity method over the replica method: all assumptions made in the derivation have a clear and direct physical meaning. By using a Gaussian Ansatz for the distribution of local fields (possibly correlated Gaussian fields in the spin glass phase, where the replica symmetry spontaneously breaks down) we are able to obtain the exact solution to the spherical $p$-spin spin glass model, which was previously derived via the replica method. We dedicate the main text to the derivation of the saddle point equations, the illustration of the Ansatz for the local field distributions, the discussion of how to implement the spherical constraint, and the resulting free energies.
More technical and lengthy derivations, such as the explicit calculation of the free energy, are postponed to the Appendices.\\ In more detail: in Sec.~\ref{sec-1} we explain why a Gaussian ansatz for the cavity marginals is correct in the large degree limit and how to use it to obtain a closure of the Belief Propagation equations. In particular, in Sec.~\ref{sec-1:sub-E} we discuss the two possible choices to implement the spherical constraint in the Belief Propagation equations, which are equivalent only in the large degree limit. Sec.~\ref{sec:3} is dedicated to the study of Survey Propagation equations, i.e., the generalization of Belief Propagation equations to the case of a one-step-replica-symmetry-breaking scenario. In Sec.~\ref{Sec:1rsbAnsatz} we present the multivariate Gaussian ansatz needed for the Survey Propagation equations, recently introduced in Ref.~\cite{AKUZ19}, and in Sec.~\ref{sec:3B} we show how the explicit closure of the equations is obtained by means of this ansatz. While the 1RSB expression of the free energy is reported in Sec.~\ref{sec:1RSB-fe}, its explicit derivation in full detail can be found in the Appendices. \section[sec1]{Cavity equations with spherical constraint} \label{sec-1} \subsection{Spherical models} We consider models with $N$ real variables $\sigma_i\in\mathbb{R}$ constrained to satisfy the condition \begin{equation} \mathcal{A}[\bm \sigma]\equiv\sum_{i=1}^N\sigma_i^2=N \label{eq:A0} \end{equation} and interacting via $p$-body interactions \begin{equation} \mathcal{H} = - \sum_{a=1}^M J_a \prod_{i\in \partial a} \sigma_i, \label{eq:H} \end{equation} where $\partial a$ is the set of variables entering the $a$-th interaction and we fix $|\partial a|=p$. If the interaction graph is fully-connected then $M=\binom{N}{p}$ and the sum runs over all possible $p$-uples; otherwise, in diluted models, the $M$ interactions are randomly chosen among the $\binom{N}{p}$ possible $p$-uples.
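As a concrete illustration of Eqs.~(\ref{eq:A0}) and (\ref{eq:H}), the sketch below builds a small fully-connected instance and evaluates the Hamiltonian on a configuration satisfying the spherical constraint. The helper names are our own, and the coupling variance anticipates the normalization given later in the text (with $J_2=1$); this is an illustrative toy, not the authors' code.

```python
import itertools
import math
import random

def random_pspin_couplings(n, p, seed=0):
    """One Gaussian coupling J_a per p-uple of spins; the variance
    p! J2 / (2 n^(p-1)) (here J2 = 1) follows the normalization
    used in the text to make the energy extensive."""
    rng = random.Random(seed)
    std = math.sqrt(math.factorial(p) / (2.0 * n ** (p - 1)))
    return {s: rng.gauss(0.0, std)
            for s in itertools.combinations(range(n), p)}

def hamiltonian(couplings, sigma):
    """H[sigma] = - sum_a J_a prod_{i in a} sigma_i."""
    return -sum(j * math.prod(sigma[i] for i in s)
                for s, j in couplings.items())

def on_sphere(sigma, tol=1e-9):
    """Check the global spherical constraint sum_i sigma_i^2 = N."""
    return abs(sum(x * x for x in sigma) - len(sigma)) < tol
```

For instance, the all-ones configuration lies on the sphere and has energy $-\sum_a J_a$, since every product of spins equals one.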
The fully-connected versions have been solved via the replica method. For $p=2$ the model is particularly simple because the energy function has only two minima and the free energy can be computed from the spectrum of the interaction matrix ${\bm J}$. The model possesses a spin glass phase at low temperatures, but the replica symmetry never breaks down and a replica symmetric (RS) ansatz provides the exact solution \cite{kosterlitz1976spherical}. In this case the spherical constraint, although efficient in keeping variables bounded, drastically changes the low-energy physics with respect to models with, e.g., Ising variables: indeed the Sherrington-Kirkpatrick model \cite{Sherrington75} has a spin glass phase with spontaneous breaking of the replica symmetry \cite{Parisi80,Parisi83}. For $p\ge3$ the spherical model is much more interesting since it undergoes a phase transition to a spin glass phase where the replica symmetry is broken just once (1RSB phase) \cite{Crisanti92}, as in the analogous model with Ising variables \cite{gardner1985spin}. More importantly, the thermodynamic phase transition is preceded by a dynamical phase transition \cite{Crisanti93}, which has been connected to the structural glass transition \cite{Kirkpatrick87b,Kirkpatrick87c} and to mode coupling theory \cite{Goetze09}. The spherical $p$-spin model with $p\ge3$ is now the most used mean-field model for the random first order transition \cite{castellani2005spin}. \subsection[sec1-subA]{Self-consistent cavity equations for the local marginals} \label{sec-1:sub-A} The replica method allows one to fully characterize the static properties of the spherical $p$-spin model on complete graphs, as was first done in Ref.~\cite{Crisanti92}. Our purpose is to study spherical $p$-spin models, showing that the cavity method is equivalent to replicas on complete graphs.
A complete hypergraph can be seen as a bipartite graph made of function nodes, representing the interaction $p$-uples, and variable nodes, representing the $N$ spins $\sigma_i$'s. We will indicate the set of links between function and variable nodes as edges $E$. A complete graph has $M=\binom{N}{p}=O(N^p)$ function nodes, each of which is linked to $p$ variable nodes. On the other hand, each variable node is linked to $K=\binom{N-1}{p-1}=O(N^{p-1})$ function nodes. In order to ensure the extensivity of the energy, not only must the $N$ real variables satisfy the spherical constraint in Eq.~(\ref{eq:A0}), but the couplings $\{J_a\}$, which are independent and identically distributed quenched random variables, must be properly normalized: in the case of symmetric couplings we have \begin{equation} \langle J\rangle = 0 \quad , \quad \langle J^2 \rangle=\frac{p! J_2}{2N^{p-1}} \label{def:pJ} \end{equation} with $J_2=O(1)$ to ensure an extensive energy. Since we intend to extend the results of the present study to the case of increasing dilution of the hypergraph, let us start from the statistical ensemble where the partition function of the model, and hence the corresponding thermodynamic potentials, is always well defined, i.e., the \emph{microcanonical ensemble}. In the presence of the spherical constraint written in Eq.~(\ref{eq:A0}) the partition function of the model thus reads \begin{equation} \Omega_A(E,N) = \int\prod_{i=1}^N d\sigma_i~\delta\left(E -\mathcal{H}[\boldsymbol{\sigma}]\right) ~\delta\left( A-\mathcal{A}[\boldsymbol{\sigma}]\right). \label{eq:Z-hard} \end{equation} The first, very important, assumption of the present derivation is the equivalence between the ensemble with hard constraints on both $A$ and $E$, i.e. the partition function written in Eq.~(\ref{eq:Z-hard}), and the one where the same spherical constraints are realized via a Lagrange multiplier.
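To see why the scaling in Eq.~(\ref{def:pJ}) is the right one, note that for a fixed configuration on the sphere the energy is a sum of $M=\binom{N}{p}$ independent zero-mean terms, so $\mathrm{Var}[\mathcal{H}]=\binom{N}{p}\,p!J_2/(2N^{p-1})\to NJ_2/2$ for large $N$. The following minimal numerical check (our own, with $J_2=1$) verifies this limit:

```python
from math import comb, factorial

def energy_variance(n, p, j2=1.0):
    """Var[H] for the configuration sigma_i = 1 (which satisfies the
    spherical constraint): C(n, p) independent couplings, each with the
    variance p! J2 / (2 n^(p-1)) of Eq. (def:pJ)."""
    return comb(n, p) * factorial(p) * j2 / (2.0 * n ** (p - 1))

# energy_variance(n, p) / (n J2 / 2) -> 1 as n grows: typical energies of a
# *fixed* configuration are O(sqrt(n)), and extensive energies arise only
# after optimizing over the exponentially many configurations on the sphere.
```

For $p=2$ the expression is exactly $(n-1)J_2/2$, already showing the $n/2$ behavior at leading order.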
This means that the study of the partition function in Eq.~(\ref{eq:Z-hard}) is fully equivalent to that of its Laplace transform: \begin{equation} \mathcal{Z}_\lambda(\beta,N) = \int_{0}^\infty~dA~e^{-\lambda A}~\int_{-\infty}^{+\infty}~dE~e^{-\beta E}~\Omega_A(E,N) = \int\prod_{i=1}^N d\sigma_i~\exp\left\lbrace -\lambda \sum_{i=1}^N\sigma^2_i + \beta\sum_{a=1}^MJ_a \prod_{i\in \partial a} \sigma_i\right\rbrace, \label{eq:Z-soft} \end{equation} For a given choice of values $A$ and $E$ the ensembles are equivalent if and only if it is possible to find real values of the Lagrange multipliers $\lambda$ and $\beta$ such that \begin{eqnarray} A &=& -\frac{\partial}{\partial\lambda}\log\left[ \mathcal{Z}_\lambda(\beta,N) \right] \nonumber \\ E &=& -\frac{\partial}{\partial\beta}\log\left[ \mathcal{Z}_\lambda(\beta,N) \right] \label{eq:average-soft} \end{eqnarray} In this paper we will consider only choices of $E \propto A \propto N$ such that it is possible to find real positive values of $\lambda$ and $\beta$ which solve the equations in Eq.~(\ref{eq:average-soft}). It is nevertheless important to keep in mind that there are situations where Eq.~(\ref{eq:average-soft}) does not have a solution in terms of either a real $\lambda$ or a real $\beta$: this is the situation where the equivalence of ensembles breaks down and we expect it to happen in sparse hypergraphs, where condensation takes place. See for instance the recent discussion in~\cite{GILM19}. Let us now introduce the cavity approach to solve the analyzed problem. We will introduce two kinds of cavity messages: with $\eta_{i\rightarrow a}(\sigma_i)$ we will indicate the variable-to-function cavity message, that indicates the probability that the spin on the $i$ node assumes the value $\sigma_i$ in the absence of the link between the variable node $i$ and the function node $a$. Analogously with $\hat{\eta}_{a\rightarrow i}(\sigma_i)$ we indicate the function-to-variable cavity message. 
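As a minimal consistency check of Eq.~(\ref{eq:average-soft}), consider the non-interacting case $\beta=0$, where the Gaussian integral in Eq.~(\ref{eq:Z-soft}) factorizes over the sites:
\begin{equation*}
\mathcal{Z}_\lambda(0,N)=\left(\frac{\pi}{\lambda}\right)^{N/2}
\qquad\Longrightarrow\qquad
A=-\frac{\partial}{\partial\lambda}\log\mathcal{Z}_\lambda(0,N)=\frac{N}{2\lambda}\,,
\end{equation*}
so the spherical value $A=N$ is reproduced by the real positive multiplier $\lambda=1/2$, and the two ensembles are indeed equivalent at infinite temperature.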
In the general case $\eta_{i\rightarrow a}(\sigma_i)$ will depend on all the messages $\hat{\eta}_{b\rightarrow i}(\sigma_i)$, with $b\in\partial i\setminus a$, which are correlated random variables. However, for tree-like graphs they are independent, due to the absence of loops. Loops are negligible at the leading order also on the Bethe lattice, which is locally tree-like (there are loops of size $\log(N)$). A complete graph is not at all locally tree-like, since each spin participates in $\mathcal{O}(N^{p-1})$ interactions, and there are always short loops. Nevertheless, due to the vanishing intensity of the coupling constants $J_a$, i.e.\ $\langle J^2 \rangle\sim 1/N^{p-1}$, the $\hat{\eta}_{b\rightarrow i}(\sigma_i)$, with $b\in\partial i\setminus a$, behave as independent random variables even on complete graphs. This allows us to introduce the following \emph{cavity equations}: \begin{eqnarray} \eta_{i\rightarrow a}(\sigma_i) &=&\frac{1}{Z_{i\to a}} \prod_{b \in \partial i \setminus a} \hat{\eta}_{b\rightarrow i}(\sigma_i) \label{eq:cavity1}\\ \hat{\eta}_{a\rightarrow i}(\sigma_i) & =&\frac{1}{\hat{Z}_{a\to i}} e^{-\frac{\lambda \sigma_i^2}{2 K}} \int \prod_{j\in \partial a \setminus i} ~ d\sigma_j ~ \eta_{j\rightarrow a}(\sigma_j) ~ \exp\left\{\beta J_a \sigma_i \prod_{j\in \partial a \setminus i}\sigma_j\right\}, \label{eq:cavity2} \end{eqnarray} where $Z_{i\to a}$ and $\hat{Z}_{a\to i}$ are normalization constants ensuring that the messages are normalized: \begin{equation} \nonumber \int_{-\infty}^{\infty}d\sigma_i\eta_{i\to a}(\sigma_i) = \int_{-\infty}^{\infty}d\sigma_i\hat{\eta}_{a\to i}(\sigma_i)=1 \ . \end{equation} Let us spend a few words on the way we have transformed the global spherical constraint into the local terms $\exp(-\frac{\lambda \sigma_i^2}{2K})$ appearing in the equations for the cavity marginals $\hat\eta_{a\to i}(\sigma_i)$.
The factor $1/2$ is just convenient for the definition of the Gaussian distributions (the Lagrange multiplier can be rescaled by a multiplicative factor without changing the physics). Although the most natural place to insert the spherical constraint would be as an external field in the equation for the cavity marginal $\eta_{i\to a}(\sigma_i)$, our choice turns out to simplify the computations and, as we prove in Sec.~\ref{sec-1:sub-E}, it is equivalent to the more natural one. We notice that the idea of moving the external field from the variables to the interactions is not new; it is used, for example, in the real space renormalization group. Once Eqs.~(\ref{eq:cavity1},\ref{eq:cavity2}) are solved (e.g.\ in an iterative way as in the Belief Propagation algorithm), the local marginals for each spin are given by \begin{equation} \eta_{i}(\sigma_i) =\frac{1}{Z_{i}} \prod_{b \in \partial i} \hat{\eta}_{b\rightarrow i}(\sigma_i) \label{eq:marginals} \end{equation} with $Z_i$ a new normalization constant. \subsection[sec1-subA]{The Gaussian Ansatz in the large degree limit} In the fully-connected model, but also in diluted models, the mean degree grows and diverges in the large $N$ limit. At the same time the coupling intensities decrease as $N^{-(p-1)/2}$ to ensure well defined local fields. In this limit we can close the cavity equations with the following Gaussian Ansatz for the cavity marginal distribution \begin{equation} \eta_{i\rightarrow a}(\sigma_i) = \frac{1}{\sqrt{2\pi v_{i\rightarrow a}}} \exp\left[-\frac{(\sigma_i-m_{i\rightarrow a})^2}{2v_{i\rightarrow a}}\right] \propto \exp\left[ \frac{m_{i\to a}}{v_{i \to a}} \sigma_i - \frac{1}{2 v_{i \to a}}\sigma_i^2\right] \label{eq:Gaussian-ansatz} \end{equation} Since $\langle J^2 \rangle \sim 1/N^{p-1}$, the large $N$ limit is equivalent to a small-$J$ or high-temperature expansion, known as the Plefka/Georges-Yedidia expansion \cite{Plefka,GY}. Expanding to second order in $J$, and inserting the Ansatz Eq.
\ref{eq:Gaussian-ansatz}, we get \begin{align} \nonumber \hat\eta_{a \to i}(\sigma_i) &=\frac{1}{\hat{Z}_{a\rightarrow i}} e^{-\frac{\lambda \sigma_i^2}{2 K}} \int \prod_{j\in \partial a \setminus i} ~ d\sigma_j ~ \eta_{j\rightarrow a}(\sigma_j) ~ \exp\left\{\beta J_a \sigma_i \prod_{j\in \partial a \setminus i}\sigma_j\right\}\\ \nonumber &\simeq \frac{1}{\hat{Z}_{a\rightarrow i}} e^{-\frac{\lambda \sigma_i^2}{2K}}\left[1+\beta J_a \sigma_i \prod_{j \in \partial a \setminus i} m_{j \to a} + \frac{\beta^2 J_a^2}{2} \sigma_i^2 \prod_{j \in \partial a \setminus i}\Big(m_{j\to a}^2 + v_{j \to a}\Big)\right]\\ &\simeq \frac{1}{\hat{Z}_{a\rightarrow i}} e^{-\frac{\lambda \sigma_i^2}{2K}}\exp\left\{\beta J_a \sigma_i \prod_{j \in \partial a \setminus i} m_{j \to a} + \frac{\beta^2 J_a^2}{2} \sigma_i^2 \left(\prod_{j \in \partial a \setminus i}\left(m_{j\to a}^2 + v_{j \to a}\right)-\prod_{j \in \partial a \setminus i}m_{j\to a}^2\right)\right\} \label{eq:node-to-spin-A} \end{align} and \begin{align} \nonumber \eta_{i \to a}(\sigma_i) &= \frac{1}{Z_{i\rightarrow a}} \prod_{b \in \partial i \setminus a} \hat\eta_{b \to i}(\sigma_i)\\ &= \frac{1}{Z_{i\rightarrow a}} e^{-\frac{\lambda}{2} \sigma_i^2}\exp\left\{ \beta \sigma_i \sum_{b \in \partial i \setminus a} J_b \prod_{j \in \partial b \setminus i} m_{j \to b} + \frac{\beta^2}{2} \sigma_i^2 \sum_{b \in \partial i \setminus a} J_b^2 \left(\prod_{j \in \partial b \setminus i} \left(m_{j \to b}^2 + v_{j \to b}\right) - \prod_{j \in \partial b \setminus i} m_{j \to b}^2\right) \right\} \label{eq:spin-to-node-A} \end{align} Comparing Eq.~(\ref{eq:Gaussian-ansatz}) and Eq.~(\ref{eq:spin-to-node-A}), one obtains the following self-consistency equations for the means and the variances of the Gaussian marginals: \begin{eqnarray} \nonumber \frac{m_{i\to a}}{v_{i \to a}} &=& \beta \sum_{b \in \partial i \setminus a} J_b \prod_{j \in \partial b \setminus i} m_{j \to b}\\ \frac{1}{v_{i \to a}} &=& \lambda - \beta^2 \sum_{b \in
\partial i \setminus a} J_b^2 \left(\prod_{j \in \partial b \setminus i} \left(m_{j \to b}^2 + v_{j \to b}\right) - \prod_{j \in \partial b \setminus i} m_{j \to b}^2\right) \label{eq:closure-cavity} \end{eqnarray} The $\lambda$ parameter has to be fixed in order to satisfy the spherical constraint $\sum_i \langle \sigma_i^2 \rangle = N$, where the average is taken over the marginals defined in Eq.~(\ref{eq:marginals}). However, given that we are in a dense system, cavity and full marginals differ only by terms of order $O(1/N)$, so we can impose the spherical constraint using cavity marginals. These are the replica symmetric cavity equations for dense (fully-connected or diluted) spherical $p$-spin models. \gblue{In the limit of large degree (fully-connected or diluted models) the two summations appearing in Eq.~(\ref{eq:closure-cavity}) run over a large number $K$ of terms. So we can use the law of large numbers and the central limit theorem to simplify the self-consistency equations in (\ref{eq:closure-cavity}).
Recalling that in the large $K$ limit the couplings scale according to $\langle J \rangle \sim 1 / K$ and $\langle J^2 \rangle \sim 1 / K$, the second equation in (\ref{eq:closure-cavity}) concentrates $v_{i \to a}$ close to its mean value $v=\mathbb{E}(v_{i \to a})$, while the first equation in (\ref{eq:closure-cavity}) implies that the cavity magnetizations $m_{i \to a}$ are Gaussian random variables with first two moments $m=\mathbb{E}(m_{i \to a})$ and $q=\mathbb{E}(m_{i \to a}^2)$, satisfying the following equations \begin{eqnarray} \frac{m}{v} &=& \beta \langle J \rangle K \, m^{p-1} \label{eq:cavity-av-moments-1} \\ \frac{q}{v^2} &=& \beta^2 \langle J^2 \rangle K \, q^{p-1} + \beta^2 \langle J \rangle^2 K^2 m^{2(p-1)} \label{eq:cavity-av-moments-2} \\ \frac{1}{v} &=& \lambda - \beta^2 \langle J^2 \rangle K \left((q + v)^{p-1} - q^{p-1}\right) \label{eq:cavity-av-moments-3} \end{eqnarray} By imposing the spherical constraint, $\sum_i \langle \sigma_i^2 \rangle = N$, one gets the identity $q+v=1$, which fixes the Lagrange multiplier and further simplifies the equations \begin{eqnarray} \lambda &=& \frac{1}{1-q} + \beta^2 \langle J^2 \rangle K (1-q^{p-1}) \label{eq:Lagrange-multiplier}\\ m &=& \beta \langle J \rangle K\, m^{p-1}(1-q) \label{eq:closure-cavity-A} \\ q &=& \left[ \beta^2 \langle J^2 \rangle K\, q^{p-1} + \beta^2 \langle J \rangle^2 K^2 m^{2(p-1)}\right](1-q)^2 \label{eq:closure-cavity-B} \end{eqnarray} It can be checked by using this expression for $\lambda$ that the normalization of the messages $\hat{\eta}_{a\rightarrow i}(\sigma_i)$ is always well defined in the limit of large $N$.
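To make the structure of these equations concrete, the fixed point of Eqs.~(\ref{eq:closure-cavity-A},\ref{eq:closure-cavity-B}) can be found by simple damped iteration. The sketch below is our illustration, not part of the derivation: it assumes $\langle J \rangle = 0$ (so that $m=0$ and only Eq.~(\ref{eq:closure-cavity-B}) survives) and the normalization $\langle J^2 \rangle K = 1$, i.e.\ $J_2=1$:

```python
def rs_overlap(beta, p, q_init=0.5, damping=0.5, tol=1e-12, max_iter=100_000):
    """Damped fixed-point iteration of q = beta^2 * q^(p-1) * (1-q)^2,
    i.e. Eq. (closure-cavity-B) with m = 0 and J_2 = 1 (our assumptions)."""
    q = q_init
    for _ in range(max_iter):
        q_new = beta**2 * q**(p - 1) * (1.0 - q)**2
        q_new = damping * q + (1.0 - damping) * q_new   # damping for stability
        if abs(q_new - q) < tol:
            return q_new
        q = q_new
    return q

# In the high-temperature phase the iteration collapses onto the
# paramagnetic solution q = 0
q_para = rs_overlap(beta=0.5, p=3)
```

Note that for $p=3$ a non-trivial root $q>0$ of $1=\beta^2 q(1-q)^2$ can exist only when $\beta^2 \ge 27/4$, since $\max_q q(1-q)^2 = 4/27$; below that value the paramagnetic solution is the unique fixed point.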
} \subsection{The replica symmetric free energy} \label{sec-1:sub-C} We now have all the pieces we need to compute the replica symmetric free energy of the model, which is defined as~\cite{Mezard09}: \begin{equation} -\beta F \equiv \beta\left(\sum_{a=1}^{M} \mathbb{F}_a + \sum_{i=1}^N \mathbb{F}_i - \sum_{(ai) \in E} \mathbb{F}_{ai}\right)\equiv\sum_{a=1}^{M} \log(Z_a) + \sum_{i=1}^N \log(Z_i) - \sum_{(ai) \in E}\log(Z_{(ai)}), \label{eq:free-energy-def} \end{equation} where we have respectively \begin{eqnarray} Z_a &=& \int_{-\infty}^{\infty} \prod_{i\in \partial a} d\sigma_i ~\eta_{i\rightarrow a}(\sigma_i) ~ e^{\beta J_a \prod_{i\in \partial a}\sigma_i} \label{eq:free-energy-Za} \\ Z_i &=& \int_{-\infty}^{\infty} d\sigma_i ~ \prod_{a\in \partial i} \hat{\eta}_{a\rightarrow i}(\sigma_i) \label{eq:free-energy-Zi} \\ Z_{(ai)} &=& \int_{-\infty}^{\infty} d\sigma_i ~ \hat{\eta}_{a\rightarrow i}(\sigma_i) ~\eta_{i\rightarrow a}(\sigma_i) \label{eq:free-energy-Zai}. \end{eqnarray} The computation of these three terms is reported in Appendix~\ref{app:RSFreeEnergy}. Here we just report the final result: \begin{eqnarray} -\beta F_{\textrm{RS}} &=& \frac{N}{2}\left[\frac{\beta^2}{2} (1-q^p) J_2 + \log(1-q)+ \frac{q}{(1-q)}\right]. \label{eq:frs-final} \end{eqnarray} The free energy written in Eq.~(\ref{eq:frs-final}) is identical to that of the spherical $p$-spin computed with replicas in the replica symmetric case, see Eq.~(4.4) of~\cite{Crisanti92}. From now on we will set $J_2=1$.\\ \subsection[sec1-subE]{Alternatives for the spherical constraint: equivalence in the large-$N$ limit.} \label{sec-1:sub-E} The experienced reader will have probably noticed that the way we have introduced the spherical constraint in the cavity equations is not, perhaps, the most natural one, which would correspond to an \emph{external field} of intensity $\lambda$ acting on every spin.
In that case, we would have put \begin{equation} \eta_{i\rightarrow a}(\sigma_i) ~\propto~e^{-\frac{\lambda}{2}\sigma_i^2}, \label{eq:spherical-eta} \end{equation} rather than \begin{equation} \hat{\eta}_{a\rightarrow i}(\sigma_i) ~\propto~e^{-\frac{\lambda}{2K}\sigma_i^2}, \label{eq:spherical-eta-hat} \end{equation} as we have done in the equations for the cavity marginals, Eq.~(\ref{eq:cavity1}) and Eq.~(\ref{eq:cavity2}). In what follows we show that the choice of where to put the spherical constraint is arbitrary in the large-$N$ limit. In practice we are going to show that, whether we let the constraint act as an external field in the \emph{variable-to-function} message $\eta_{i\rightarrow a}(\sigma_i)$, as in Eq.~(\ref{eq:spherical-eta}), or place it inside the \emph{function-to-variable} marginal $\hat\eta_{a\rightarrow i}(\sigma_i)$, as in Eq.~(\ref{eq:spherical-eta-hat}), in both cases we obtain the same expression for the free energy to the leading order in $N$. The reader must therefore bear in mind that the two ways to put the constraint in the cavity equations \emph{might not be equivalent} in the case of a graph with finite connectivity. After a trial and error procedure we realized that the choice in Eq.~(\ref{eq:spherical-eta-hat}) makes all calculations simpler, so we opted for this one. We have already shown that by doing so we obtain, at high temperature, a free energy which is identical to the one obtained from mean-field replica calculations,~Eq.~(\ref{eq:frs-final}). We now want to show explicitly that, term by term and without any further assumption such as homogeneity, the free energy in the high-temperature ergodic phase is identical for the two choices [Eq.~(\ref{eq:spherical-eta}) and Eq.~(\ref{eq:spherical-eta-hat})] to introduce the constraint.
Let us term $\eta_{i\rightarrow a}^{(\lambda)}(\sigma_i)$ and $\hat{\eta}_{a\rightarrow i}^{(\lambda)}(\sigma_i)$ the local cavity marginals corresponding to the case where the \emph{field} $\lambda$ acts directly on the spin: \begin{eqnarray} \eta_{i\rightarrow a}^{(\lambda)}(\sigma_i) &=&\frac{1}{Z_{i\to a}^{(\lambda)}} e^{-\frac{\lambda \sigma_i^2}{2}} \prod_{b \in \partial i \setminus a} \hat{\eta}_{b\rightarrow i}^{(\lambda)}(\sigma_i) \label{eq:cavity1-lambda}\\ \hat{\eta}_{a\rightarrow i}^{(\lambda)}(\sigma_i) & =&\frac{1}{\hat{Z}_{a\to i}^{(\lambda)}} \int_{-\infty}^{\infty} \prod_{j\in \partial a \setminus i} ~ d\sigma_j ~ \eta_{j\rightarrow a}^{(\lambda)}(\sigma_j) ~ \exp\left\{\beta J_a \sigma_i \prod_{j\in \partial a \setminus i}\sigma_j\right\}. \label{eq:cavity2-lambda} \end{eqnarray} Accordingly, since in the function-to-variable messages there is now no trace of the external field, one has to consider the following modified definition of the entropic term in the local partition functions: \begin{eqnarray} Z_a^{(\lambda)} &=& \int_{-\infty}^{\infty} \prod_{i\in \partial a} d\sigma_i ~\eta_{i\rightarrow a}^{(\lambda)}(\sigma_i) ~ e^{\beta J_a \prod_{i\in \partial a}\sigma_i} \label{eq:free-energy-Za-lambda} \\ Z_i^{(\lambda)} &=& \int_{-\infty}^{\infty} d\sigma_i ~e^{-\lambda \sigma_i^2 /2}~ \prod_{a\in \partial i} \hat{\eta}_{a\rightarrow i}^{(\lambda)}(\sigma_i) \label{eq:free-energy-Zi-lambda} \\ Z_{(ai)}^{(\lambda)} &=& \int_{-\infty}^{\infty} d\sigma_i ~ \hat{\eta}_{a\rightarrow i}^{(\lambda)}(\sigma_i) ~\eta_{i\rightarrow a}^{(\lambda)}(\sigma_i) \label{eq:free-energy-Zai-lambda}. \end{eqnarray} Our task is now to show that: \begin{equation} \sum_{a=1}^{M} \log(Z_a) + \sum_{i=1}^N \log(Z_i) - \sum_{(ai) \in E}\log(Z_{(ai)}) = \sum_{a=1}^{M} \log(Z_a^{(\lambda)}) + \sum_{i=1}^N \log(Z_i^{(\lambda)}) - \sum_{(ai) \in E}\log(Z_{(ai)}^{(\lambda)}). 
\label{eq:identity-free-energies} \end{equation} The key observation is that, in order to have overall consistency, the Gaussian ansatz for the variable-to-function message \emph{must} be the same in both cases, that is: \begin{equation} \eta_{i\rightarrow a}(\sigma_i) = \frac{1}{\sqrt{2\pi v_{i\rightarrow a}}} \exp\left[-\frac{(\sigma_i-m_{i\rightarrow a})^2}{2v_{i\rightarrow a}}\right] = \eta_{i\rightarrow a}^{(\lambda)}(\sigma_i). \label{eq:equivalence-eta} \end{equation} The assumption of Eq.~(\ref{eq:equivalence-eta}) allows us to conclude immediately that $Z_a^{(\lambda)} = Z_a$, so that the identity we need to prove reduces to: \begin{equation} \sum_{i=1}^N \log(Z_i) - \sum_{(ai) \in E}\log(Z_{(ai)}) = \sum_{i=1}^N \log(Z_i^{(\lambda)}) - \sum_{(ai) \in E}\log(Z_{(ai)}^{(\lambda)}) \label{eq:identity-free-energies-2nd} \end{equation} By exploiting Eq. (\ref{eq:equivalence-eta}) once again we obtain \begin{equation} \hat{\eta}_{a\rightarrow i}^{(\lambda)}(\sigma_i) =\frac{1}{\hat{Z}_{a\to i}^{(\lambda)}} \int_{-\infty}^{\infty} \prod_{j\in \partial a \setminus i} ~ d\sigma_j ~ \eta_{j\rightarrow a}(\sigma_j) ~ \exp\left\{\beta J_a \sigma_i \prod_{j\in \partial a \setminus i}\sigma_j\right\}, \end{equation} which, by comparison with the definition in Eq.~(\ref{eq:cavity2}), leads to \begin{equation} \hat{\eta}_{a\rightarrow i}^{(\lambda)}(\sigma_i)~\hat{Z}_{a\to i}^{(\lambda)} = \hat{\eta}_{a\rightarrow i}(\sigma_i)~\hat{Z}_{a\to i}~e^{\frac{\lambda \sigma_i^2}{2K}} \label{eq:nontrivial-identity}, \end{equation} so that \begin{equation} \hat{\eta}_{a\rightarrow i}^{(\lambda)}(\sigma_i) = \hat{\eta}_{a\rightarrow i}(\sigma_i)~\frac{\hat{Z}_{a\to i}}{\hat{Z}_{a\to i}^{(\lambda)}}~e^{\frac{\lambda \sigma_i^2}{2K}} \label{eq:nontrivial-identity2}.
\end{equation} By inserting Eq.~(\ref{eq:nontrivial-identity2}) in the definition of $Z_i^{(\lambda)}$ in Eq.~(\ref{eq:free-energy-Zi-lambda}) one finds: \begin{eqnarray} Z_i^{(\lambda)} &=& \int_{-\infty}^{\infty} d\sigma_i~e^{-\lambda \sigma_i^2 /2}~\prod_{a\in \partial i} \hat{\eta}_{a\rightarrow i}^{(\lambda)}(\sigma_i) \nonumber \\ &=&\prod_{a\in \partial i} \left(\frac{\hat{Z}_{a\to i}}{\hat{Z}_{a\to i}^{(\lambda)}}\right) \int_{-\infty}^{\infty} d\sigma_i~\prod_{a\in \partial i} \hat{\eta}_{a\rightarrow i}(\sigma_i) \nonumber \\ &=& \prod_{a\in \partial i}\left(\frac{\hat{Z}_{a\to i}}{\hat{Z}_{a\to i}^{(\lambda)}}\right) ~Z_i, \end{eqnarray} so that the identity we want to prove further simplifies to \begin{equation} \sum_{(ai) \in E}\log(Z_{(ai)}) = \sum_{(ai) \in E}\log(Z_{(ai)}^{(\lambda)}) - \sum_{i=1}^N \sum_{a\in\partial i } \log\left( \frac{\hat{Z}_{a\to i}}{\hat{Z}_{a\to i}^{(\lambda)}} \right). \label{eq:identity-free-energies-3rd} \end{equation} Using, once again, Eq.~(\ref{eq:equivalence-eta}) we can write \begin{eqnarray} Z_{(ai)}^{(\lambda)} &=& \int_{-\infty}^{\infty} d\sigma_i ~ \hat{\eta}_{a\rightarrow i}^{(\lambda)}(\sigma_i) ~\eta_{i\rightarrow a}^{(\lambda)}(\sigma_i) \nonumber \\ &=& \int_{-\infty}^{\infty} d\sigma_i ~ \hat{\eta}_{a\rightarrow i}^{(\lambda)}(\sigma_i) ~\eta_{i\rightarrow a}(\sigma_i) \nonumber \\ & = & \frac{\hat{Z}_{a\to i}}{\hat{Z}_{a\to i}^{(\lambda)}} ~\int_{-\infty}^{\infty} d\sigma_i ~ e^{\frac{\lambda \sigma_i^2}{2K}}~\hat{\eta}_{a\rightarrow i}(\sigma_i) ~\eta_{i\rightarrow a}(\sigma_i) \nonumber \\ &\simeq& \frac{\hat{Z}_{a\to i}}{\hat{Z}_{a\to i}^{(\lambda)}} ~Z_{(ai)}, \label{eq:normalization-identity} \end{eqnarray} where the last equality holds for large $N$ (see Eqs.~(\ref{eq:Zai-compute-first}), (\ref{eq:Zai-compute-2nd}) and (\ref{eq:Zai-compute-3rd}) in Appendix A). The $N\rightarrow\infty$ limit is equivalent to the $K\rightarrow\infty$ limit, since $K \sim N^{p-1}$.
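The suppression of the reweighting factor $e^{\lambda\sigma_i^2/2K}$ at large $K$, which justifies the last step, is also easy to check numerically. The sketch below uses arbitrary, hypothetical values for $\lambda$ and for the Gaussian parameters of the two messages; none of them comes from an actual model:

```python
import numpy as np

lam, m, v = 1.3, 0.2, 0.7      # hypothetical Lagrange multiplier and eta moments
a, b = 0.1, 1.5                # hypothetical eta_hat proportional to exp(a*s - b*s^2/2)

s = np.linspace(-12.0, 12.0, 200_001)
ds = s[1] - s[0]
eta = np.exp(-(s - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
eta_hat = np.exp(a * s - b * s ** 2 / 2)

def Z_ai(K=np.inf):
    """Integral of exp(lam*s^2/(2K)) * eta_hat * eta; K = inf drops the factor."""
    w = np.exp(lam * s ** 2 / (2 * K)) if np.isfinite(K) else 1.0
    return np.sum(w * eta_hat * eta) * ds

# The ratio approaches 1 as K grows, with 1 - ratio = O(1/K)
ratio = Z_ai(K=1e6) / Z_ai()
```

For moderate $K$ the two normalizations differ at order $\lambda\langle\sigma^2\rangle/2K$, while for $K\to\infty$ the ratio tends to one, as used above.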
Finally, by plugging the result of Eq.~(\ref{eq:normalization-identity}) into Eq.~(\ref{eq:identity-free-energies-3rd}) we can conclude that the latter identity is true in the limit $N\rightarrow\infty$. We have thus demonstrated that in the large-$N$ limit it is equivalent, and thus just a matter of convenience, to write down explicitly the spherical constraint inside the definition of the function-to-variable message $\hat{\eta}_{a\rightarrow i}(\sigma)$, as we have done, or inside the definition of the variable-to-function one, $\eta_{i\rightarrow a}(\sigma)$. \section{One step Replica Symmetry Breaking solution} \label{sec:3} In the previous sections we have reviewed the replica symmetric solution, which is the stable one at high temperature. In this phase we have written closed cavity equations for the marginal distributions of the variables, relying on the assumption that the joint distribution of the cavity variables is factorized as in a single pure state. However, lowering the temperature, it is known from the replica solution \cite{Crisanti92} that several metastable glassy states arise on top of the paramagnetic state. Their number is exponential in $N$, with a rate $\Sigma$ called the \emph{complexity}. The function $\Sigma(f)$ is in general an increasing function of the state free-energy $f$, with downward curvature (for stability reasons, as for the entropy). Comparing the total free-energy of the glassy states computed using $\Sigma(f)$ with the paramagnetic free-energy \cite{Zamponi}, one can derive the dynamical critical temperature $T_d$, where ergodicity breaks down, and the thermodynamic critical temperature, also called the Kauzmann temperature $T_K$, where a phase transition to a replica symmetry breaking phase takes place.
Below $T_d$ the dynamics of the model is dominated by the states of larger free-energy, the so-called threshold states, which are the most abundant and always exponentially many in $N$ (although a more refined picture has been recently presented in \cite{folena2019memories}). For $T<T_d$ the Gibbs measure is split over many different states, such that two different equilibrium configurations can be in the same (metastable) state or in different states. Defining the overlap between two different configurations as a measure of how close they are to each other, the 1RSB phase is characterized by an overlap $q_1$ between configurations inside the same pure state (independently of the pure state) and an overlap $q_0<q_1$ between configurations in two different states. In formulas, the presence of many metastable pure states yields an additional contribution to the free-energy. The complexity $\Sigma(f)$, which counts the number of ``states'' (disjoint ergodic components of the phase-space) with the same free-energy $f$, can be written as \begin{equation} \Sigma(f) = \frac{1}{N}\log\left[\sum_{\eta=1}^{\mathcal N}~\delta(f-f_\eta)\right] \ , \label{eq:Sc} \end{equation} where $\mathcal N$ is the total number of metastable glassy states (formally they can be defined as the non-paramagnetic stationary points of the TAP free-energy \cite{Thouless77}) and $f_\eta$ is the free energy of the glassy state $\eta$. Please notice that the expression in Eq.~(\ref{eq:Sc}) is identical to the standard microcanonical definition of entropy, with the only difference that now we measure the number of phase-space regions with the same free-energy rather than the volume of phase space with the same energy.
The total free-energy is thus given by: \begin{equation} \mathcal{F}=-\frac{1}{\beta N}\log Z=-\frac{1}{\beta N}\log\left( \sum_{\eta}e^{-\beta N f_{\eta}}\right)= -\frac{1}{\beta N}\log\int df \sum_{\eta}~\delta(f-f_\eta) e^{-\beta N f}=-\frac{1}{\beta N}\log\int df e^{-N(\beta f-\Sigma)} \end{equation} The problem is that we do not know how to characterize the different states and how to count them to obtain $\Sigma$: we are still not able to compute $\mathcal{F}$. In the following we will solve this problem by applying the method of real coupled replicas introduced by Monasson in Ref.~\cite{Monasson95} (see also \cite{Mezard09} for a rigorous derivation and \cite{Zamponi} for a pedagogical review). This method was applied to the spherical $p$-spin in Ref.~\cite{Mezard99b} to compute the 1RSB free-energy with a replica computation. The idea of Ref.~\cite{Monasson95} is to introduce $x$ real clones, which we will call replicas, on a single realization of a graph. These replicas will be infinitesimally coupled together in such a way that, even when the coupling between them goes to zero, they will all fall in the same pure state below $T_d$: this cloning method is a way to select a state, analogous to what is usually done in ferromagnetic systems, where a state is selected by adding an infinitesimal magnetic field. The free energy $\Phi(x)$ of $x$ replicas in the same state is: \begin{equation} \Phi(x)=-\frac{1}{\beta N}\log\left( \sum_{\eta}e^{-\beta N xf_{\eta}}\right)=-\frac{1}{\beta N}\log\int df e^{-N(\beta x f-\Sigma)}=\frac{1}{\beta}\min_f \left(\beta x f-\Sigma(f)\right). \end{equation} The complexity is thus simply obtained as the Legendre transform of the free energy of the replicated system. The total free-energy in the 1RSB phase is derived by analytically continuing $x$ to real values and turns out to be: $\mathcal{F}=\min_x \frac{\Phi(x)}{x}$.
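The Legendre structure can be illustrated numerically with a toy, entirely hypothetical concave complexity curve: computing $\beta\Phi(x)$ from the saddle point of the $f$-integral on a grid, and then inverting the transform via $f(x)=\Phi'(x)$ and $\Sigma=\beta x f(x)-\beta\Phi(x)$, recovers $\Sigma(f)$. All parameter values below are placeholders chosen for illustration:

```python
import numpy as np

beta = 1.0
# Toy concave complexity: Sigma(f) = s_max * (1 - ((f - f0)/w)^2) on [f0-w, f0+w]
f0, w, s_max = -1.0, 0.2, 0.05          # hypothetical parameters, not from the model
f = np.linspace(f0 - w, f0 + w, 20_001)
Sigma = s_max * (1.0 - ((f - f0) / w) ** 2)

def Phi(x):
    # Saddle point of the f-integral: beta*Phi(x) = min over f of [beta*x*f - Sigma(f)]
    return np.min(beta * x * f - Sigma) / beta

xs = np.linspace(0.01, 0.49, 400)       # range where the minimum stays internal
phis = np.array([Phi(x) for x in xs])

# Inverse Legendre transform: f(x) = Phi'(x), Sigma = beta*x*f(x) - beta*Phi(x)
f_of_x = np.gradient(phis, xs, edge_order=2)
Sigma_rec = beta * xs * f_of_x - beta * phis
```

Here `Sigma_rec` coincides with the toy curve evaluated at the selected free-energy `f_of_x`, which is the content of the Legendre duality between $\Phi(x)$ and $\Sigma(f)$.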
Besides the Monasson-Mezard cloning method, which is mostly useful to study the complexity of systems without quenched disorder, it is worth recalling the physical meaning of the analytic continuation to positive real values of $x$ in a more general setting: it allows one to compute the large deviations of the free-energy, e.g., its sample-to-sample fluctuations~\cite{CPSV92,PDGR19}. In the following we will use this cloning method to write closed 1RSB cavity equations for the spherical $p$-spin, in a way analogous to what has been done in Ref.~\cite{AKUZ19} for the planted SK model. In a situation with many pure states, the factorization of the distribution of the cavity variables is valid only inside a single pure state: we can thus still write closed cavity equations by considering the coupled replicas in the same pure state. Then, we will compute the 1RSB free energy in a cavity approach below $T_d$, obtaining exactly the same expression found with replica computations in Refs.~\cite{Crisanti92,Mezard99b}. \subsection{The ansatz for the distribution of $x$ coupled replicas} \label{Sec:1rsbAnsatz} For the RS phase, in the dense case, we have written a Gaussian ansatz for the marginal probability of the spin on a given site in Eq.~(\ref{eq:Gaussian-ansatz}). In the 1RSB phase, we will consider the \emph{joint probability distribution} of $x$ coupled replicas that are all in the same pure state. We will comment in the next sections on the choice and the physical meaning of $x$. In order to lighten the notation, let us indicate as $\boldsymbol{\sigma}_i=\lbrace \sigma_i^\alpha \rbrace$, $\alpha =1,\ldots,x$, the vector of all $x$ replicas on site $i$.
The 1RSB form of the ansatz for the marginal probability $\eta_{i \rightarrow a}(\boldsymbol{\sigma_i})$ amounts to \begin{equation} \eta_{i\rightarrow a}(\boldsymbol{\sigma}_i) = \int_{-\infty}^\infty dm_{i\rightarrow a}~\frac{1}{\sqrt{2\pi\Delta_{i\rightarrow a}^{(0)}}}\exp\left( -\frac{(m_{i\rightarrow a}-h_{i\rightarrow a})^2}{2\Delta_{i\rightarrow a}^{(0)}}\right) \frac{1}{\left[ \sqrt{2\pi\Delta_{i\rightarrow a}^{(1)}}\right]^x} \exp\left(-\sum_{\alpha=1}^x\frac{(\sigma_i^\alpha-m_{i\rightarrow a})^2}{2\Delta_{i\rightarrow a}^{(1)}} \right). \label{eq:1RSB-ansatz} \end{equation} This 1RSB Ansatz was first introduced in Ref.~\cite{AKUZ19}. By shortening the integration measure for the joint probability distribution $\boldsymbol{\sigma}_i$ with the symbol \begin{equation} \int \mathcal{D} \boldsymbol{\sigma}_i = \int_{-\infty}^\infty\prod_{\alpha =1}^x d\sigma_i^\alpha, \end{equation} and defining the distribution \begin{eqnarray} Q_{i\to a}\left(m_{i\rightarrow a} \right) \equiv \frac{1}{\sqrt{2\pi\Delta_{i\rightarrow a}^{(0)}}} \exp\left( -\frac{(m_{i\rightarrow a}-h_{i\rightarrow a})^2}{2\Delta_{i\rightarrow a}^{(0)}} \right), \label{eq:Q-1rsb-ansatz} \end{eqnarray} the first moment and the diagonal and off-diagonal second moments of the cavity marginal are simply computed as: \begin{eqnarray} \langle \sigma_i^\alpha \rangle &=& \int \mathcal{D} \boldsymbol{\sigma}_i ~\sigma_i^\alpha~\eta_{i\rightarrow a}(\boldsymbol{\sigma}_i)= \int_{-\infty}^\infty dm_{i\rightarrow a}~m_{i\rightarrow a}~ Q_{i\to a}\left(m_{i\rightarrow a} \right) = h_{i\rightarrow a} \nonumber \\ \langle (\sigma_i^\alpha)^2 \rangle &=& \int \mathcal{D} \boldsymbol{\sigma}_i ~(\sigma_i^\alpha)^2~\eta_{i\rightarrow a}(\boldsymbol{\sigma}_i) = \int_{-\infty}^\infty dm_{i\rightarrow a}~(\Delta_{i\rightarrow a}^{(1)}+m_{i\rightarrow a}^2)~Q_{i\to a}\left(m_{i\rightarrow a} \right) = \Delta_{i\rightarrow a}^{(1)} + \Delta_{i\rightarrow a}^{(0)} + h_{i\rightarrow a}^2 \nonumber \\ \langle \sigma_i^\alpha \sigma_i^\beta
\rangle &=& \int \mathcal{D} \boldsymbol{\sigma}_i ~\sigma_i^\alpha~\sigma_i^\beta~\eta_{i\rightarrow a}(\boldsymbol{\sigma}_i)= \int_{-\infty}^\infty dm_{i\rightarrow a}~m_{i\rightarrow a}^2~ Q_{i\to a}\left(m_{i\rightarrow a} \right) = \Delta_{i\rightarrow a}^{(0)} + h_{i\rightarrow a}^2 \label{eq:1RSB-distribution-moments} \end{eqnarray} Let us comment briefly on the form of the Ansatz. The marginal probability of a single replica in a given state is still a Gaussian, since we are on a dense graph. If the real replicas are coupled, they will fall in the same state. The only effect of the infinitesimal coupling between the replicas is that their configurations will be independent variables extracted from the same distribution in each state, once the average $m$ and the variance $\Delta^{(1)}$ are given: \begin{equation} \eta^{\rm s}_{i\rightarrow a}\left(\sigma_i^\alpha \right) \equiv \frac{1}{\sqrt{2\pi\Delta_{i\rightarrow a}^{(1)}}} \exp\left( -\frac{(\sigma_i^\alpha-m_{i\rightarrow a})^2}{2\Delta_{i\rightarrow a}^{(1)}}\right). \label{eq:replicas-1rsb-ansatz} \end{equation} In the same way, the average magnetizations in different states will be independent variables extracted from the same distribution $Q_{i\to a}(m_{i\rightarrow a})$, which depends on $\Delta^{(0)}$ and $h$, see, e.g., Ref.~\cite{Mezard85}.
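The hierarchical structure of the Ansatz is easy to verify by sampling: draw one state-dependent mean $m$ per state from $Q$, then $x$ i.i.d. replicas around it, and the moments of Eq.~(\ref{eq:1RSB-distribution-moments}) follow. The parameter values below are arbitrary placeholders, not fitted to any model:

```python
import numpy as np

rng = np.random.default_rng(0)
h, D0, D1, x = 0.3, 0.2, 0.5, 4        # hypothetical h, Delta^(0), Delta^(1), and x
n_states = 500_000

# One mean per "state", drawn from Q of Eq. (Q-1rsb-ansatz), then x i.i.d.
# replicas per state, drawn from the single-state Gaussian of variance Delta^(1)
m = rng.normal(h, np.sqrt(D0), size=n_states)
sigma = rng.normal(m[:, None], np.sqrt(D1), size=(n_states, x))

m1 = sigma[:, 0].mean()                      # <sigma^alpha>             ~ h
m2_diag = (sigma[:, 0] ** 2).mean()          # <(sigma^alpha)^2>         ~ D1 + D0 + h^2
m2_off = (sigma[:, 0] * sigma[:, 1]).mean()  # <sigma^alpha sigma^beta>  ~ D0 + h^2
```

Within statistical accuracy the three empirical moments reproduce $h$, $\Delta^{(1)}+\Delta^{(0)}+h^2$ and $\Delta^{(0)}+h^2$ respectively.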
With this simple scenario in mind, we can give a simple physical interpretation to the parameters of the distribution in Eq.~(\ref{eq:1RSB-ansatz}) rewriting them as: \begin{eqnarray} h_{i\rightarrow a} &=& \langle \sigma_i^\alpha \rangle \nonumber \\ \Delta^{(1)}_{i\rightarrow a} &=& \langle (\sigma_i^\alpha)^2 \rangle - \langle \sigma_i^\alpha \sigma_i^\beta \rangle = 1- q_1^{i\rightarrow a}\nonumber \\ \Delta^{(0)}_{i\rightarrow a} &=& \langle \sigma_i^\alpha \sigma_i^\beta \rangle - \langle \sigma_i^\alpha \rangle^2 = q_1^{i\rightarrow a} - q_0^{i\rightarrow a}, \label{eq:1RSB-parameters} \end{eqnarray} where the average is taken with respect to the probability distribution in Eq.~(\ref{eq:1RSB-ansatz}). $q_1^{i\rightarrow a}$ and $q_0^{i\rightarrow a}$ are the local overlaps (in the absence of the link from $i$ to $a$) inside a state and between two different states, which we mentioned at the beginning of this Section. Obviously, on a complete graph they will be independent of $i$ and $a$, as is the only parameter of the RS case (the magnetization) in the homogeneous setting. However, here we prefer to write explicitly the dependence on $i$ and $a$, because in this way the equations we will obtain can be easily applied to non-complete graphs. \subsection{1RSB cavity equations} \label{sec:3B} We now write the replicated cavity equations for the 1RSB Ansatz introduced in the previous section: \begin{eqnarray} \eta_{i\rightarrow a}(\boldsymbol{\sigma}_i) &\propto& \prod_{b \in \partial i \setminus a} \hat{\eta}_{b\rightarrow i}(\boldsymbol{\sigma}_i) \label{eq:cavity1-1RSB} \\ \hat{\eta}_{a\rightarrow i}(\boldsymbol{\sigma}_i) &\propto& e^{-\frac{\lambda}{2 K} \sum_{\alpha=1}^x (\sigma^{\alpha}_i)^2} \int_{-\infty}^{\infty} \prod_{k\in \partial a \setminus i} ~ \mathcal{D}\boldsymbol{\sigma}_k ~ \eta_{k\rightarrow a}(\boldsymbol{\sigma}_k) ~ \exp\left\{\beta J_a \sum_{\alpha=1}^x \sigma_i^\alpha \prod_{k\in \partial a \setminus i}\sigma_k^\alpha \right\}.
\label{eq:cavity2-1RSB} \end{eqnarray} We have omitted the normalization factors that are irrelevant in the subsequent computations. As we did for the RS case, in the dense limit we take the leading term in a small $J_a$ expansion (valid in the large $N$ limit for dense graphs) and in this setting we will close the equations on the parameters of the multivariate Gaussian. That is, we write: \begin{eqnarray} & & \int_{-\infty}^{\infty} \prod_{k\in \partial a \setminus i} ~ \mathcal{D}\boldsymbol{\sigma}_k ~ \eta_{k\rightarrow a}(\boldsymbol{\sigma}_k) ~ ~\exp\left\{\beta J_a \sum_{\alpha=1}^x \sigma_i^\alpha \prod_{k\in \partial a \setminus i}\sigma_k^\alpha \right\} \simeq \nonumber \\ ~&\simeq & \int_{-\infty}^{\infty} \prod_{k\in \partial a \setminus i} ~ \mathcal{D}\boldsymbol{\sigma}_k ~ \eta_{k\rightarrow a}(\boldsymbol{\sigma}_k) ~ \left[ 1 + \beta J_a \sum_{\alpha=1}^x \sigma_i^\alpha \prod_{k\in \partial a \setminus i} \sigma_k^\alpha + \frac{1}{2} \beta^2 J^2_a~\left( \sum_{\alpha=1}^x \sigma_i^\alpha \prod_{k\in \partial a \setminus i} \sigma_k^\alpha\right)^2 \right] = \nonumber \\ ~&=& 1 + \beta J_a \left(\prod_{k\in \partial a \setminus i} h_{k\rightarrow a}\right)~~\sum_{\alpha=1}^x \sigma_i^\alpha + \frac{1}{2} \beta^2 J^2_a~\left[\prod_{k\in \partial a \setminus i} \left( \Delta_{k\rightarrow a}^{(1)} + \Delta_{k\rightarrow a}^{(0)} + h_{k\rightarrow a}^2 \right)\right]~~\sum_{\alpha=1}^x \left(\sigma_i^\alpha\right)^2 + \nonumber \\ && + \frac{1}{2} \beta^2 J^2_a~\left[\prod_{k\in \partial a \setminus i} \left(\Delta_{k\rightarrow a}^{(0)} + h_{k\rightarrow a}^2\right)\right]~~\sum_{\alpha\neq\beta}^x \sigma_i^\alpha\sigma_i^\beta \simeq \nonumber \\ ~&\simeq&~\exp\left\lbrace \hat{A}_{a\to i}~\sum_{\alpha=1}^x \sigma_i^\alpha -\frac{1}{2}~\hat{B}_{a\to i}^{(\text{d})}~\sum_{\alpha=1}^x (\sigma_i^\alpha)^2 + \frac{1}{2}~\hat{B}_{a\to i}^{(\text{nd})}~ \sum_{\alpha\neq\beta}^x \sigma_i^\alpha\sigma_i^\beta \right\rbrace, \end{eqnarray} where the 
three coefficients are respectively \begin{eqnarray} \hat{A}_{a\rightarrow i} &=& \beta J_a \prod_{k\in \partial a \setminus i} h_{k\rightarrow a} \nonumber \\ \hat{B}_{a\rightarrow i}^{(\text{d})} &=& \beta^2 J^2_a ~\left[ \prod_{k\in \partial a \setminus i} h^2_{k\rightarrow a} - \prod_{k\in \partial a \setminus i} \left(\Delta_{k\rightarrow a}^{(1)} + \Delta_{k\rightarrow a}^{(0)} + h_{k\rightarrow a}^2 \right)\right] \nonumber \\ \hat{B}_{a\rightarrow i}^{(\text{nd})} &=& \beta^2 J^2_a~\left[ \prod_{k\in \partial a \setminus i} \left(\Delta_{k\rightarrow a}^{(0)} + h_{k\rightarrow a}^2\right) -\prod_{k\in \partial a \setminus i} h^2_{k\rightarrow a}\right]. \nonumber \\ \label{eq:multiv-gauss-coeffs} \end{eqnarray} The \emph{function-to-variable} message, expressed by Eq.~(\ref{eq:cavity2-1RSB}), reads therefore as \begin{equation} \hat{\eta}_{a\rightarrow i}(\boldsymbol{\sigma}_i) \propto \exp\left\lbrace \hat{A}_{a\rightarrow i}~\sum_{\alpha=1}^x \sigma_i^\alpha -\frac{1}{2}~\left( \hat{B}_{a\rightarrow i}^{(\text{d})} + \frac{\lambda}{K} \right)~\sum_{\alpha=1}^x (\sigma_i^\alpha)^2 + \frac{1}{2}~\hat{B}_{a\rightarrow i}^{(\text{nd})}~ \sum_{\alpha\neq\beta}^x \sigma_i^\alpha\sigma_i^\beta\right\rbrace, \end{equation} while from Eq.~(\ref{eq:cavity1-1RSB}) we have that the \emph{variable-to-function} message reads as \begin{equation} \eta_{i\rightarrow a}(\boldsymbol{\sigma}_i) \propto \exp\left\lbrace \left(\sum_{b\in \partial i \setminus a} \hat{A}_{b\rightarrow i}\right)~\sum_{\alpha=1}^x \sigma_i^\alpha -\frac{1}{2}~ \left( \sum_{b\in \partial i \setminus a} \hat{B}_{b\rightarrow i}^{(\text{d})} + \lambda \right)~\sum_{\alpha=1}^x (\sigma_i^\alpha)^2 + \frac{1}{2}~\left( \sum_{b\in \partial i \setminus a} \hat{B}_{b\rightarrow i}^{(\text{nd})}\right)~ \sum_{\alpha\neq\beta}^x \sigma_i^\alpha\sigma_i^\beta\right\rbrace. 
\label{eq:marginal-new} \end{equation} In order to keep the notation simple let us define: \begin{eqnarray} A_{i\rightarrow a} &\equiv& \sum_{b\in \partial i \setminus a}\hat{A}_{b\rightarrow i} = \sum_{b\in \partial i \setminus a} \beta J_b \prod_{k\in \partial b \setminus i} h_{k\rightarrow b} \nonumber \\ B^{(\text{d})}_{i\rightarrow a} &\equiv& \lambda + \sum_{b\in \partial i \setminus a}\hat{B}_{b\rightarrow i}^{(\text{d})} =\lambda - \sum_{b\in \partial i \setminus a} \beta^2 J^2_b ~\left[ \prod_{k\in \partial b \setminus i} \left(\Delta_{k\rightarrow b}^{(1)} + \Delta_{k\rightarrow b}^{(0)} + h_{k\rightarrow b}^2 \right) - \prod_{k\in \partial b \setminus i} h^2_{k\rightarrow b}\right] \nonumber \\ B^{(\text{nd})}_{i\rightarrow a} &\equiv& \sum_{b\in \partial i \setminus a} \hat{B}_{b\rightarrow i}^{(\text{nd})}= \sum_{b\in \partial i \setminus a} \beta^2 J^2_b~\left[ \prod_{k\in \partial b \setminus i} \left(\Delta_{k\rightarrow b}^{(0)} + h_{k\rightarrow b}^2\right) - \prod_{k\in \partial b \setminus i} h_{k\rightarrow b}^2 \right]\nonumber, \\ \end{eqnarray} so that Eq. (\ref{eq:cavity1-1RSB}) can be rewritten in the more compact form: \begin{equation} \eta_{i\rightarrow a}(\boldsymbol{\sigma}_i) \propto \exp\left\lbrace A_{i\rightarrow a}\sum_{\alpha=1}^x \sigma_i^\alpha -\frac{1}{2}B^{(\text{d})}_{i\rightarrow a}\sum_{\alpha=1}^x (\sigma_i^\alpha)^2 + \frac{1}{2}B^{(\text{nd})}_{i\rightarrow a} \sum_{\alpha\neq\beta}^x \sigma_i^\alpha\sigma_i^\beta\right\rbrace.
\label{eq:marginal-new-compact} \end{equation} The expression above can be further simplified by introducing the matrix $\mathcal{M}_{\alpha\beta}$ and the vector $u_\alpha$ such that \begin{eqnarray} \nonumber u_\alpha &=& \frac{A_{i\rightarrow a}}{B^{(\text{d})}_{i\rightarrow a}-(x-1)B^{(\text{nd})}_{i\rightarrow a}} ~~~~ \forall\alpha,\\ \mathcal{M}_{\alpha\beta} &=& \delta_{\alpha\beta}~B^{(\text{d})}_{i\rightarrow a} + (1-\delta_{\alpha\beta})~(-B^{(\text{nd})}_{i\rightarrow a}) \label{eq:matrixM} \end{eqnarray} and the normalized distribution, written in the standard form for a multivariate Gaussian, reads \begin{equation} \eta_{i\rightarrow a}(\boldsymbol{\sigma}_i) = \sqrt{\frac{\det \mathcal{M}}{(2\pi)^x}} \exp\left\lbrace -\frac{1}{2} (\boldsymbol{\sigma}_i-\mathbf{u})^T \mathcal{M} (\boldsymbol{\sigma}_i-\mathbf{u}) \right\rbrace. \end{equation} The closed cavity equations, which in the 1RSB case are three rather than two, are simply obtained by taking the averages in Eq.~(\ref{eq:1RSB-parameters}) with respect to the marginal distribution $\eta_{i\rightarrow a}(\boldsymbol{\sigma}_i)$: \begin{eqnarray} h_{i\rightarrow a} &=& \langle \sigma_i^\alpha \rangle = u_\alpha \nonumber \\ \Delta^{(0)}_{i\rightarrow a} &=& \langle \sigma_i^\alpha \sigma_i^\beta \rangle - [\langle \sigma_i^\alpha \rangle]^2 = \mathcal{M}^{-1}_{\alpha\beta} \nonumber \\ \Delta^{(1)}_{i\rightarrow a} &=& \langle [\sigma_i^\alpha]^2 \rangle - \langle \sigma_i^\alpha \sigma_i^\beta \rangle = \mathcal{M}^{-1}_{\alpha\alpha}-\mathcal{M}^{-1}_{\alpha\beta}, \nonumber \end{eqnarray} where the general expression of the inverse matrix element (for $\alpha\neq\beta$ in the off-diagonal case) is: \begin{equation} \mathcal{M}^{-1}_{\alpha\beta} = \frac{1}{\left(B^{(\text{d})}_{i\rightarrow a}+B^{(\text{nd})}_{i\rightarrow a}\right)}~\delta_{\alpha\beta} + \frac{B^{(\text{nd})}_{i\rightarrow a}}{\left(B^{(\text{d})}_{i\rightarrow a}+B^{(\text{nd})}_{i\rightarrow a}\right)\left(B^{(\text{d})}_{i\rightarrow a}+(1-x)B^{(\text{nd})}_{i\rightarrow
a}\right)}. \label{eq:inverse-M} \end{equation} For the convenience of readers wishing to implement them in code, let us write the closed cavity equations explicitly: \begin{eqnarray} h_{i\rightarrow a} &=& \frac{\beta \sum_{b\in \partial i\setminus a}J_b \prod_{k\in\partial b\setminus i} h_{k\rightarrow b} } {\mathcal{D}^{(1)}_{i\to a} - x \mathcal{D}^{(0)}_{i\to a}} \nonumber \\ \Delta^{(0)}_{i\rightarrow a} &=& \frac{ \mathcal{D}^{(0)}_{i\to a} }{ \mathcal{D}^{(1)}_{i\to a} \left(\mathcal{D}^{(1)}_{i\to a} - x \mathcal{D}^{(0)}_{i\to a} \right)} \label{eq:belief-prop} \\ \Delta^{(1)}_{i\rightarrow a} &=& \frac{1}{\mathcal{D}^{(1)}_{i\to a}} \nonumber \end{eqnarray} with \begin{eqnarray} \mathcal{D}^{(1)}_{i\to a} &\equiv& \lambda - \beta^2\sum_{b\in\partial i\setminus a}J_b^2\left[ \prod_{k\in\partial b\setminus i}\left(\Delta^{(0)}_{k\rightarrow b}+\Delta^{(1)}_{k\rightarrow b}+h^2_{k\rightarrow b}\right) - \prod_{k\in\partial b\setminus i}\left(\Delta^{(0)}_{k\rightarrow b}+h^2_{k\rightarrow b}\right) \right] \nonumber \\ \mathcal{D}^{(0)}_{i\to a} &\equiv& \beta^2\sum_{b\in\partial i\setminus a}J_b^2\left[ \prod_{k\in\partial b\setminus i}\left(\Delta^{(0)}_{k\rightarrow b}+h^2_{k\rightarrow b}\right) - \prod_{k\in\partial b\setminus i} h^2_{k\rightarrow b}\right]. \nonumber \end{eqnarray} While the parameter $\lambda$ is fixed by the normalization condition, the parameter $x$ is a variational one and has to be chosen so as to extremize the free energy, a quantity computed explicitly in the next subsection for the case of a complete graph. 
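To make the iteration concrete, here is a minimal Python sketch of a single update of Eq.~(\ref{eq:belief-prop}) for one directed edge $i\to a$; the data layout (a list of pairs $(J_b, \mathrm{msgs}_b)$, one per factor $b\in\partial i\setminus a$, with $\mathrm{msgs}_b$ holding the incoming triplets $(h,\Delta^{(0)},\Delta^{(1)})$ of the variables $k\in\partial b\setminus i$) is a hypothetical choice made here only for illustration:

```python
import math

def cavity_update(beta, lam, x, incoming):
    """One 1RSB cavity update for a single directed edge i -> a.

    incoming: list of (J_b, msgs_b) pairs, one per factor b in di\\a;
    msgs_b is a list of (h, d0, d1) triplets, one per variable k in db\\i.
    (This data layout is a hypothetical choice for illustration.)
    """
    num = 0.0  # numerator of h_{i->a}
    D1 = lam   # accumulates D^(1)_{i->a}
    D0 = 0.0   # accumulates D^(0)_{i->a}
    for J, msgs in incoming:
        prod_h = math.prod(h for h, d0, d1 in msgs)
        prod_h2 = math.prod(h * h for h, d0, d1 in msgs)
        prod_q1 = math.prod(d0 + h * h for h, d0, d1 in msgs)
        prod_one = math.prod(d0 + d1 + h * h for h, d0, d1 in msgs)
        num += beta * J * prod_h
        D1 -= beta**2 * J**2 * (prod_one - prod_q1)
        D0 += beta**2 * J**2 * (prod_q1 - prod_h2)
    h_new = num / (D1 - x * D0)
    d0_new = D0 / (D1 * (D1 - x * D0))
    d1_new = 1.0 / D1
    return h_new, d0_new, d1_new
```

With no incoming factors the update returns $(0,\,0,\,1/\lambda)$, as expected from the bare spherical regularization alone.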
\gblue{In the limit of large mean degree the above saddle-point equations simplify further: by the law of large numbers, both $\mathcal{D}^{(0)}_{i\to a}$ and $\mathcal{D}^{(1)}_{i\to a}$ concentrate around their mean values, which we denote by $\mathcal{D}^{(0)} = \langle \mathcal{D}^{(0)}_{i\to a} \rangle$ and $\mathcal{D}^{(1)} = \langle \mathcal{D}^{(1)}_{i\to a} \rangle$, while $h_{i\to a}$ becomes a Gaussian variable, so that it suffices to track its first two moments: \begin{eqnarray} m \equiv \langle h_{i\rightarrow a} \rangle &=& \frac{\beta \langle J \rangle K ~m^{p-1}}{\mathcal{D}^{(1)} - x \mathcal{D}^{(0)}} \\ q_0 \equiv \langle h_{i\rightarrow a}^2 \rangle &=& \frac{\beta^2 \left[ \langle J^2 \rangle K~q_0^{p-1}+\langle J\rangle^2 K^2 m^{2(p-1)}\right]}{\left(\mathcal{D}^{(1)} - x \mathcal{D}^{(0)}\right)^2} \\ \Delta^{(0)} &=& \frac{\mathcal{D}^{(0)}} {\mathcal{D}^{(1)}\left(\mathcal{D}^{(1)} - x \mathcal{D}^{(0)}\right)} \\ \Delta^{(1)} &=& \frac{1}{\mathcal{D}^{(1)}} \nonumber \end{eqnarray} with \begin{eqnarray} \mathcal{D}^{(1)} &\equiv& \lambda - \beta^2 \langle J^2 \rangle K \left[ \left(q_0 + \Delta^{(0)} + \Delta^{(1)}\right)^{p-1} - \left(q_0 + \Delta^{(0)}\right)^{p-1} \right] \nonumber \\ \mathcal{D}^{(0)} &\equiv& \beta^2 \langle J^2 \rangle K \left[ \left(q_0 + \Delta^{(0)}\right)^{p-1} - q_0^{p-1} \right] \nonumber \end{eqnarray} where, analogously, $\Delta^{(1)} = \langle \Delta^{(1)}_{i\rightarrow a}\rangle$ and $\Delta^{(0)} = \langle \Delta^{(0)}_{i\rightarrow a}\rangle$. 
By considering the most common model with Gaussian couplings of zero mean ($\langle J \rangle = 0$ and $\langle J^2 \rangle = p!/(2N^{p-1})$) and recalling that $\Delta^{(0)} = q_1-q_0 $, $\Delta^{(1)} = 1-q_1$ and \begin{equation} K = \binom{N}{p-1} \sim \frac{N^{p-1}}{(p-1)!}, ~~~\Longrightarrow~~~ \langle J^2 \rangle K \sim \frac{p}{2} \end{equation} we are left with the following three closed equations: \begin{eqnarray} \beta^2 \frac{p}{2} ~q_0^{p-2} &=& \left[ \lambda -p\frac{\beta^2}{2} \left(1-x q_0^{p-1}+(x-1) q_1^{p-1}\right)\right]^2 \\ q_1-q_0 &=& \frac{p\frac{\beta^2}{2}\left(q_1^{p-1}-q_0^{p-1}\right)}{\left[\lambda-p\frac{\beta^2}{2}\left(1-q_1^{p-1}\right)\right] \left[ \lambda -p\frac{\beta^2}{2} \left(1-x q_0^{p-1}+(x-1) q_1^{p-1}\right)\right]} \\ 1-q_1 &=& \frac{1}{\lambda - p\frac{\beta^2}{2}\left( 1 - q_1^{p-1} \right)} \label{eq:saddle-point-overlaps} \end{eqnarray} Let us notice that Eq.~(\ref{eq:saddle-point-overlaps}) allows us to re-express the spherical constraint parameter $\lambda$ as a function of $q_1$ and $\beta$, i.e. \begin{equation} \lambda = \frac{1}{1-q_1} + p\frac{\beta^2}{2} \left( 1 - q_1^{p-1}\right), \end{equation} which will be useful later on. Comparing this expression for the Lagrange multiplier with the one obtained in the RS case, Eq.~(\ref{eq:Lagrange-multiplier}), we see that they coincide under the substitution $q\to q_1$: in the 1RSB phase the Lagrange multiplier enforces the spherical constraint inside each pure state. 
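As a quick numerical sanity check of the system above, the Python snippet below (written only for illustration) verifies that the paramagnetic point $q_0=q_1=0$ with $\lambda = 1 + p\beta^2/2$ solves the equations; clearing the first two equations of their denominators, so that the $q_0=q_1=0$ root becomes admissible, is our own rewriting.

```python
def saddle_residuals(q0, q1, lam, beta, p, x):
    """Residuals of the three closed 1RSB equations, cleared of denominators."""
    # the two denominators appearing in the saddle-point equations
    A = lam - p * beta**2 / 2 * (1 - x * q0**(p - 1) + (x - 1) * q1**(p - 1))
    B = lam - p * beta**2 / 2 * (1 - q1**(p - 1))
    r1 = q0 * A**2 - beta**2 * (p / 2) * q0**(p - 1)          # first equation * q0
    r2 = (q1 - q0) * B * A - p * beta**2 / 2 * (q1**(p - 1) - q0**(p - 1))
    r3 = (1 - q1) * B - 1                                     # third equation * B
    return r1, r2, r3

# paramagnetic solution: q0 = q1 = 0 with lam = 1 + p*beta^2/2
beta, p, x = 0.5, 3, 0.7
lam = 1 + p * beta**2 / 2
assert all(abs(r) < 1e-12 for r in saddle_residuals(0.0, 0.0, lam, beta, p, x))
```

The same residual function can be fed to any root finder to track the nontrivial 1RSB solution at lower temperatures.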
} \subsection{1RSB free energy} \label{sec:1RSB-fe} We now want to compute the free energy $\mathcal{F}(x)$ within the one-step replica-symmetry-breaking Ansatz. The free energy for the replicated system is \cite{Mezard09,Zamponi}: \begin{equation} \Phi(x) = -\left (\sum_{ a=1}^M \mathbb{F}_a^{ ^{\text{RSB}}}(x)+ \sum_{i=1}^N \mathbb{F}_i^{ ^{\text{RSB}}}(x) - \sum_{(ai) \in E} \mathbb{F}_{ai}^{ ^{\text{RSB}}}(x)\right). \end{equation} The total free energy of the system is the free energy of the $x$ coupled replicas divided by $x$ (and extremized over $x$). 
The three contributions, representing respectively the {\it energetic} term, the {\it entropic} term and a normalization, read: \begin{equation} \beta\mathbb{F}_a^{ ^{\text{RSB}}}(x) = \log\left\lbrace \int \prod_{i\in \partial a} \left[ dm_{i\rightarrow a} Q_{i\to a}(m_{i\rightarrow a})\right]~e^{x \beta\mathbb{F}_a(\lbrace m_{i\rightarrow a} \rbrace)}\right\rbrace \label{eq:energy-1rsb} \end{equation} \begin{equation} \beta\mathbb{F}_i^{ ^{\text{RSB}}}(x) = \log\left\lbrace \int \prod_{a \in \partial i} \left[ d\hat{m}_{a \rightarrow i} \hat{Q}_{a\to i}(\hat{m}_{a \rightarrow i})\right]~e^{x \beta\mathbb{F}_i(\lbrace \hat{m}_{a \rightarrow i} \rbrace)}\right\rbrace \label{eq:entropy-1rsb} \end{equation} \begin{equation} \beta\mathbb{F}_{ai}^{ ^{\text{RSB}}}(x) = \log\left\lbrace \int d\hat{m}_{a\rightarrow i}~d{m}_{i\rightarrow a}~\hat{Q}_{a\to i}(\hat{m}_{a \rightarrow i})~Q_{i\to a}(m_{i\rightarrow a})~e^{x \beta\mathbb{F}_{ai}(m_{i \rightarrow a},\hat{m}_{a\rightarrow i} )}\right\rbrace \label{eq:normalization-1rsb} \end{equation} where $\mathbb{F}_a$, $\mathbb{F}_i$ and $\mathbb{F}_{ai}$ are the RS free-energy parts in Eq.~(\ref{eq:free-energy-def}), $Q_{i\to a}(m_{i\rightarrow a})$ is the distribution of the cavity marginals, which for the dense Gaussian 1RSB ansatz was defined in Eq.~(\ref{eq:Q-1rsb-ansatz}) of Sec.~\ref{Sec:1rsbAnsatz}, and $\hat{Q}_{a\to i}(\hat{m}_{a \rightarrow i})$ is the analogous distribution of the function-to-variable fields. 
The detailed computation is reported in Appendix \ref{app:1RSBfree_energy}; here we just quote the result, which coincides with the one obtained by the replica approach \cite{Crisanti92}: \begin{eqnarray} -x\beta\mathcal{F}(x) &=& \sum_{a=1}^M \beta\mathbb{F}_a^{\text{RSB}} + \sum_{i=1}^N \beta\mathbb{F}_i^{\text{RSB}} -\sum_{(ai)\in E} \beta\mathbb{F}_{ai}^{\text{RSB}} \nonumber \\ &=& \frac{xN}{2} \Big\lbrace ~\frac{\beta^2}{2} \left[ 1 - (1-x)q_1^p - x q_0^p\right] + \frac{q_0}{\left[ 1 - xq_0 - (1-x) q_1 \right]} + \frac{x-1}{x} \log\left( 1-q_1\right)+\frac{1}{x}\log\left[ 1-xq_0-(1-x)q_1\right] \Big\rbrace \nonumber \\ \end{eqnarray} \section{Conclusions} \label{sec:4} In this paper we have derived the cavity equations for solving diluted spherical $p$-spin models. Such a cavity-based derivation makes evident the underlying assumptions, which are reflected in the distribution of the local fields: in the RS ansatz replicas are uncorrelated and have Gaussian local fields, while in the 1RSB ansatz replicas have correlated Gaussian local fields whose covariance matrix depends on whether the replicas are in the same state or not, as pointed out in Ref.~\cite{AKUZ19}. We have derived the cavity equations exploiting the same high-temperature expansion that leads to mean-field approximations \cite{MFLKMZ19}. We have then solved the cavity equations in the fully connected case, where the solution is homogeneous, depends on very few parameters and can be written explicitly, recovering the same expression obtained via the replica method. 
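The collapse of the 1RSB free energy onto the RS one when $q_0=q_1$ (all $x$-dependence must drop out) offers a quick numerical check of the expression quoted above. A minimal Python sketch, where the per-spin quantity $-\beta\mathcal{F}/N$ and the function names are ours:

```python
import math

def minus_beta_f(q0, q1, x, beta, p):
    # per-spin -beta*F(x) from the 1RSB expression above
    inner = 1 - x * q0 - (1 - x) * q1
    return 0.5 * (beta**2 / 2 * (1 - (1 - x) * q1**p - x * q0**p)
                  + q0 / inner
                  + (x - 1) / x * math.log(1 - q1)
                  + (1 / x) * math.log(inner))

def minus_beta_f_rs(q, beta, p):
    # the q0 == q1 limit of the 1RSB expression (the RS free energy)
    return 0.5 * (beta**2 / 2 * (1 - q**p) + q / (1 - q) + math.log(1 - q))

# when q0 == q1, the x-dependence drops out and the RS value is recovered
q, beta, p = 0.3, 1.2, 3
for x in (0.3, 0.6, 0.9):
    assert math.isclose(minus_beta_f(q, q, x, beta, p),
                        minus_beta_f_rs(q, beta, p), rel_tol=1e-9)
```

The same function, extremized over $x$ at fixed $\beta$, reproduces the variational prescription discussed at the end of Sec.~\ref{sec:1RSB-fe}.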
The approach based on the cavity method has several advantages: \begin{compactitem} \item it makes the underlying assumptions explicit; \item it holds also for the diluted version of the model (provided the solution does not condense); \item it can be converted into message-passing algorithms, namely RS Belief Propagation and 1RSB Survey Propagation; \item it allows the study of heterogeneous solutions in diluted models, up to the condensation transition. \end{compactitem} Our work, besides providing the first complete reference on the equivalence between the replica and cavity methods for spherical disordered models, paves the way to a more systematic study of \emph{inhomogeneous} glassy phases in diluted mean-field models and represents a reference point for the systematic development of algorithms for combinatorial optimization and inference problems characterized by continuous variables~\cite{Marruzzo18,H18,AKUZ19,MLKZ20}. \begin{acknowledgments} The research has been supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant No.~694925, G.~Parisi). G.~G.~acknowledges the financial support of the Simons Foundation (Grant No.~454949, G.~Parisi) and thanks the Physics Department of Sapienza, University of Rome for its kind hospitality in the first stages of this work. \end{acknowledgments}